author    David S. Miller <davem@davemloft.net>  2010-05-19 08:01:55 +0200
committer David S. Miller <davem@davemloft.net>  2010-05-19 08:01:55 +0200
commit    2ec8c6bb5d8f3a62a79f463525054bae1e3d4487
tree      fa7f8400ac685fb52e96f64997c7c682fc2aa021
parent    qlcnic: adding co maintainer
parent    Merge branch 'x86-uv-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/...
Merge branch 'master' of /home/davem/src/GIT/linux-2.6/
Conflicts:
include/linux/mod_devicetable.h
scripts/mod/file2alias.c
577 files changed, 24542 insertions(+), 14740 deletions(-)
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt index 1423d2570d78..44c6dcc93d6d 100644 --- a/Documentation/RCU/stallwarn.txt +++ b/Documentation/RCU/stallwarn.txt @@ -3,35 +3,79 @@ Using RCU's CPU Stall Detector The CONFIG_RCU_CPU_STALL_DETECTOR kernel config parameter enables RCU's CPU stall detector, which detects conditions that unduly delay RCU grace periods. The stall detector's idea of what constitutes -"unduly delayed" is controlled by a pair of C preprocessor macros: +"unduly delayed" is controlled by a set of C preprocessor macros: RCU_SECONDS_TILL_STALL_CHECK This macro defines the period of time that RCU will wait from the beginning of a grace period until it issues an RCU CPU - stall warning. It is normally ten seconds. + stall warning. This time period is normally ten seconds. RCU_SECONDS_TILL_STALL_RECHECK This macro defines the period of time that RCU will wait after - issuing a stall warning until it issues another stall warning. - It is normally set to thirty seconds. + issuing a stall warning until it issues another stall warning + for the same stall. This time period is normally set to thirty + seconds. RCU_STALL_RAT_DELAY - The CPU stall detector tries to make the offending CPU rat on itself, - as this often gives better-quality stack traces. However, if - the offending CPU does not detect its own stall in the number - of jiffies specified by RCU_STALL_RAT_DELAY, then other CPUs will - complain. This is normally set to two jiffies. + The CPU stall detector tries to make the offending CPU print its + own warnings, as this often gives better-quality stack traces. + However, if the offending CPU does not detect its own stall in + the number of jiffies specified by RCU_STALL_RAT_DELAY, then + some other CPU will complain. This delay is normally set to + two jiffies. -The following problems can result in an RCU CPU stall warning: +When a CPU detects that it is stalling, it will print a message similar +to the following: + +INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies) + +This message indicates that CPU 5 detected that it was causing a stall, +and that the stall was affecting RCU-sched. This message will normally be +followed by a stack dump of the offending CPU. On TREE_RCU kernel builds, +RCU and RCU-sched are implemented by the same underlying mechanism, +while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented +by rcu_preempt_state. + +On the other hand, if the offending CPU fails to print out a stall-warning +message quickly enough, some other CPU will print a message similar to +the following: + +INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies) + +This message indicates that CPU 2 detected that CPUs 3 and 5 were both +causing stalls, and that the stall was affecting RCU-bh. This message +will normally be followed by stack dumps for each CPU. Please note that +TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, +and that the tasks will be indicated by PID, for example, "P3421". +It is even possible for a rcu_preempt_state stall to be caused by both +CPUs -and- tasks, in which case the offending CPUs and tasks will all +be called out in the list. + +Finally, if the grace period ends just as the stall warning starts +printing, there will be a spurious stall-warning message: + +INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies) + +This is rare, but does happen from time to time in real life. 
+ +So your kernel printed an RCU CPU stall warning. The next question is +"What caused it?" The following problems can result in RCU CPU stall +warnings: o A CPU looping in an RCU read-side critical section. -o A CPU looping with interrupts disabled. +o A CPU looping with interrupts disabled. This condition can + result in RCU-sched and RCU-bh stalls. -o A CPU looping with preemption disabled. +o A CPU looping with preemption disabled. This condition can + result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh + stalls. + +o A CPU looping with bottom halves disabled. This condition can + result in RCU-sched and RCU-bh stalls. o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel without invoking schedule(). @@ -39,20 +83,24 @@ o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel o A bug in the RCU implementation. o A hardware failure. This is quite unlikely, but has occurred - at least once in a former life. A CPU failed in a running system, + at least once in real life. A CPU failed in a running system, becoming unresponsive, but not causing an immediate crash. This resulted in a series of RCU CPU stall warnings, eventually leading the realization that the CPU had failed. -The RCU, RCU-sched, and RCU-bh implementations have CPU stall warning. -SRCU does not do so directly, but its calls to synchronize_sched() will -result in RCU-sched detecting any CPU stalls that might be occurring. - -To diagnose the cause of the stall, inspect the stack traces. The offending -function will usually be near the top of the stack. If you have a series -of stall warnings from a single extended stall, comparing the stack traces -can often help determine where the stall is occurring, which will usually -be in the function nearest the top of the stack that stays the same from -trace to trace. +The RCU, RCU-sched, and RCU-bh implementations have CPU stall +warning. SRCU does not have its own CPU stall warnings, but its +calls to synchronize_sched() will result in RCU-sched detecting +RCU-sched-related CPU stalls. Please note that RCU only detects +CPU stalls when there is a grace period in progress. No grace period, +no CPU stall warnings. + +To diagnose the cause of the stall, inspect the stack traces. +The offending function will usually be near the top of the stack. +If you have a series of stall warnings from a single extended stall, +comparing the stack traces can often help determine where the stall +is occurring, which will usually be in the function nearest the top of +that portion of the stack which remains the same from trace to trace. +If you can reliably trigger the stall, ftrace can be quite helpful. RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE. diff --git a/Documentation/RCU/torture.txt b/Documentation/RCU/torture.txt index 0e50bc2aa1e2..5d9016795fd8 100644 --- a/Documentation/RCU/torture.txt +++ b/Documentation/RCU/torture.txt @@ -182,16 +182,6 @@ Similarly, sched_expedited RCU provides the following: sched_expedited-torture: Reader Pipe: 12660320201 95875 0 0 0 0 0 0 0 0 0 sched_expedited-torture: Reader Batch: 12660424885 0 0 0 0 0 0 0 0 0 0 sched_expedited-torture: Free-Block Circulation: 1090795 1090795 1090794 1090793 1090792 1090791 1090790 1090789 1090788 1090787 0 - state: -1 / 0:0 3:0 4:0 - -As before, the first four lines are similar to those for RCU. -The last line shows the task-migration state. 
The first number is --1 if synchronize_sched_expedited() is idle, -2 if in the process of -posting wakeups to the migration kthreads, and N when waiting on CPU N. -Each of the colon-separated fields following the "/" is a CPU:state pair. -Valid states are "0" for idle, "1" for waiting for quiescent state, -"2" for passed through quiescent state, and "3" when a race with a -CPU-hotplug event forces use of the synchronize_sched() primitive. USAGE diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt index 8608fd85e921..efd8cc95c06b 100644 --- a/Documentation/RCU/trace.txt +++ b/Documentation/RCU/trace.txt @@ -256,23 +256,23 @@ o Each element of the form "1/1 0:127 ^0" represents one struct The output of "cat rcu/rcu_pending" looks as follows: rcu_sched: - 0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741 - 1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792 - 2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629 - 3 np=236249 qsp=48766 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723 - 4 np=221310 qsp=46850 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110 - 5 np=237332 qsp=48449 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456 - 6 np=219995 qsp=46718 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834 - 7 np=249893 qsp=49390 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888 + 0 np=255892 qsp=53936 rpq=85 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741 + 1 np=261224 qsp=54638 rpq=33 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792 + 2 np=237496 qsp=49664 rpq=23 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629 + 3 np=236249 qsp=48766 rpq=98 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723 + 4 np=221310 qsp=46850 rpq=7 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110 + 5 np=237332 qsp=48449 rpq=9 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456 + 6 np=219995 qsp=46718 rpq=12 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834 + 7 np=249893 qsp=49390 rpq=42 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888 rcu_bh: - 0 np=146741 qsp=1419 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314 - 1 np=155792 qsp=12597 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180 - 2 np=136629 qsp=18680 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936 - 3 np=137723 qsp=2843 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863 - 4 np=123110 qsp=12433 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671 - 5 np=137456 qsp=4210 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235 - 6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921 - 7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542 + 0 np=146741 qsp=1419 rpq=6 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314 + 1 np=155792 qsp=12597 rpq=3 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180 + 2 np=136629 qsp=18680 rpq=1 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936 + 3 np=137723 qsp=2843 rpq=0 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863 + 4 np=123110 qsp=12433 rpq=0 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671 + 5 np=137456 qsp=4210 rpq=1 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235 + 6 np=120834 qsp=9902 rpq=2 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921 + 7 np=144888 qsp=26336 rpq=0 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542 As always, this is once again split into "rcu_sched" and "rcu_bh" portions, with CONFIG_TREE_PREEMPT_RCU kernels having an additional @@ -284,6 +284,9 @@ o "np" is the number of times that __rcu_pending() has been invoked o "qsp" is the number of times that the RCU was waiting for a quiescent state from this CPU. +o "rpq" is the number of times that the CPU had passed through + a quiescent state, but not yet reported it to RCU. 
+ o "cbr" is the number of times that this CPU had RCU callbacks that had passed through a grace period, and were thus ready to be invoked. diff --git a/Documentation/intel_txt.txt b/Documentation/intel_txt.txt index f40a1f030019..87c8990dbbd9 100644 --- a/Documentation/intel_txt.txt +++ b/Documentation/intel_txt.txt @@ -161,13 +161,15 @@ o In order to put a system into any of the sleep states after a TXT has been restored, it will restore the TPM PCRs and then transfer control back to the kernel's S3 resume vector. In order to preserve system integrity across S3, the kernel - provides tboot with a set of memory ranges (kernel - code/data/bss, S3 resume code, and AP trampoline) that tboot - will calculate a MAC (message authentication code) over and then - seal with the TPM. On resume and once the measured environment - has been re-established, tboot will re-calculate the MAC and - verify it against the sealed value. Tboot's policy determines - what happens if the verification fails. + provides tboot with a set of memory ranges (RAM and RESERVED_KERN + in the e820 table, but not any memory that BIOS might alter over + the S3 transition) that tboot will calculate a MAC (message + authentication code) over and then seal with the TPM. On resume + and once the measured environment has been re-established, tboot + will re-calculate the MAC and verify it against the sealed value. + Tboot's policy determines what happens if the verification fails. + Note that the c/s 194 of tboot which has the new MAC code supports + this. That's pretty much it for TXT support. diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 839b21b0699a..567b7a8eb878 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -324,6 +324,8 @@ and is between 256 and 4096 characters. It is defined in the file they are unmapped. Otherwise they are flushed before they will be reused, which is a lot of faster + off - do not initialize any AMD IOMMU found in + the system amijoy.map= [HW,JOY] Amiga joystick support Map of devices attached to JOY0DAT and JOY1DAT @@ -784,8 +786,12 @@ and is between 256 and 4096 characters. It is defined in the file as early as possible in order to facilitate early boot debugging. - ftrace_dump_on_oops + ftrace_dump_on_oops[=orig_cpu] [FTRACE] will dump the trace buffers on oops. + If no parameter is passed, ftrace will dump + buffers of all CPUs, but if you pass orig_cpu, it will + dump only the buffer of the CPU that triggered the + oops. ftrace_filter=[function-list] [FTRACE] Limit the functions traced by the function diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 2f9115c0ae62..61c291cddf18 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -165,8 +165,8 @@ the user entry_handler invocation is also skipped. 1.4 How Does Jump Optimization Work? -If you configured your kernel with CONFIG_OPTPROBES=y (currently -this option is supported on x86/x86-64, non-preemptive kernel) and +If your kernel is built with CONFIG_OPTPROBES=y (currently this flag +is automatically set 'y' on x86/x86-64, non-preemptive kernel) and the "debug.kprobes_optimization" kernel parameter is set to 1 (see sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump instruction instead of a breakpoint instruction at each probepoint. 
@@ -271,8 +271,6 @@ tweak the kernel's execution path, you need to suppress optimization, using one of the following techniques: - Specify an empty function for the kprobe's post_handler or break_handler. or -- Config CONFIG_OPTPROBES=n. - or - Execute 'sysctl -w debug.kprobes_optimization=n' 2. Architectures Supported @@ -307,10 +305,6 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO), so you can use "objdump -d -l vmlinux" to see the source-to-object code mapping. -If you want to reduce probing overhead, set "Kprobes jump optimization -support" (CONFIG_OPTPROBES) to "y". You can find this option under the -"Kprobes" line. - 4. API Reference The Kprobes API includes a "register" function and an "unregister" diff --git a/Documentation/rbtree.txt b/Documentation/rbtree.txt index aae8355d3166..221f38be98f4 100644 --- a/Documentation/rbtree.txt +++ b/Documentation/rbtree.txt @@ -190,3 +190,61 @@ Example: for (node = rb_first(&mytree); node; node = rb_next(node)) printk("key=%s\n", rb_entry(node, struct mytype, node)->keystring); +Support for Augmented rbtrees +----------------------------- + +Augmented rbtree is an rbtree with "some" additional data stored in each node. +This data can be used to augment some new functionality to rbtree. +Augmented rbtree is an optional feature built on top of basic rbtree +infrastructure. rbtree user who wants this feature will have an augment +callback function in rb_root initialized. + +This callback function will be called from rbtree core routines whenever +a node has a change in one or both of its children. It is the responsibility +of the callback function to recalculate the additional data that is in the +rb node using new children information. Note that if this new additional +data affects the parent node's additional data, then callback function has +to handle it and do the recursive updates. + + +Interval tree is an example of augmented rb tree. Reference - +"Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein. +More details about interval trees: + +Classical rbtree has a single key and it cannot be directly used to store +interval ranges like [lo:hi] and do a quick lookup for any overlap with a new +lo:hi or to find whether there is an exact match for a new lo:hi. + +However, rbtree can be augmented to store such interval ranges in a structured +way making it possible to do efficient lookup and exact match. + +This "extra information" stored in each node is the maximum hi +(max_hi) value among all the nodes that are its descendents. This +information can be maintained at each node just be looking at the node +and its immediate children. And this will be used in O(log n) lookup +for lowest match (lowest start address among all possible matches) +with something like: + +find_lowest_match(lo, hi, node) +{ + lowest_match = NULL; + while (node) { + if (max_hi(node->left) > lo) { + // Lowest overlap if any must be on left side + node = node->left; + } else if (overlap(lo, hi, node)) { + lowest_match = node; + break; + } else if (lo > node->lo) { + // Lowest overlap if any must be on right side + node = node->right; + } else { + break; + } + } + return lowest_match; +} + +Finding exact match will be to first find lowest match and then to follow +successor nodes looking for exact match, until the start of a node is beyond +the hi value we are looking for. 
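To make the augment callback's responsibility concrete, here is a minimal C sketch of the max_hi maintenance described above. It is an illustration only, not part of this patch: the interval_node layout and the helper name are assumptions made for the example.

#include <linux/rbtree.h>

/* Hypothetical node layout for an interval tree built on rbtree. */
struct interval_node {
	struct rb_node	rb;		/* embedded rbtree node */
	unsigned long	lo;		/* start of interval (the rbtree key) */
	unsigned long	hi;		/* end of interval */
	unsigned long	max_hi;		/* max hi over this node and its descendants */
};

/*
 * Recompute max_hi for one node from its own hi and its children's
 * cached max_hi values.  This is the work an augment callback performs
 * whenever the rbtree core reports that a node's children have changed.
 */
static void interval_update_max_hi(struct interval_node *node)
{
	unsigned long max = node->hi;
	struct interval_node *c;

	if (node->rb.rb_left) {
		c = rb_entry(node->rb.rb_left, struct interval_node, rb);
		if (c->max_hi > max)
			max = c->max_hi;
	}
	if (node->rb.rb_right) {
		c = rb_entry(node->rb.rb_right, struct interval_node, rb);
		if (c->max_hi > max)
			max = c->max_hi;
	}
	node->max_hi = max;
}

If this changes the node's max_hi, the same recomputation has to be repeated up the parent chain, exactly as the recursive-update note above says.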
diff --git a/Documentation/scheduler/sched-design-CFS.txt b/Documentation/scheduler/sched-design-CFS.txt index 6f33593e59e2..8239ebbcddce 100644 --- a/Documentation/scheduler/sched-design-CFS.txt +++ b/Documentation/scheduler/sched-design-CFS.txt @@ -211,7 +211,7 @@ provide fair CPU time to each such task group. For example, it may be desirable to first provide fair CPU time to each user on the system and then to each task belonging to a user. -CONFIG_GROUP_SCHED strives to achieve exactly that. It lets tasks to be +CONFIG_CGROUP_SCHED strives to achieve exactly that. It lets tasks to be grouped and divides CPU time fairly among such groups. CONFIG_RT_GROUP_SCHED permits to group real-time (i.e., SCHED_FIFO and @@ -220,38 +220,11 @@ SCHED_RR) tasks. CONFIG_FAIR_GROUP_SCHED permits to group CFS (i.e., SCHED_NORMAL and SCHED_BATCH) tasks. -At present, there are two (mutually exclusive) mechanisms to group tasks for -CPU bandwidth control purposes: - - - Based on user id (CONFIG_USER_SCHED) - - With this option, tasks are grouped according to their user id. - - - Based on "cgroup" pseudo filesystem (CONFIG_CGROUP_SCHED) - - This options needs CONFIG_CGROUPS to be defined, and lets the administrator + These options need CONFIG_CGROUPS to be defined, and let the administrator create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See Documentation/cgroups/cgroups.txt for more information about this filesystem. -Only one of these options to group tasks can be chosen and not both. - -When CONFIG_USER_SCHED is defined, a directory is created in sysfs for each new -user and a "cpu_share" file is added in that directory. - - # cd /sys/kernel/uids - # cat 512/cpu_share # Display user 512's CPU share - 1024 - # echo 2048 > 512/cpu_share # Modify user 512's CPU share - # cat 512/cpu_share # Display user 512's CPU share - 2048 - # - -CPU bandwidth between two users is divided in the ratio of their CPU shares. -For example: if you would like user "root" to get twice the bandwidth of user -"guest," then set the cpu_share for both the users such that "root"'s cpu_share -is twice "guest"'s cpu_share. - -When CONFIG_CGROUP_SCHED is defined, a "cpu.shares" file is created for each +When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each group created using the pseudo filesystem. See example steps below to create task groups and modify their CPU share using the "cgroups" pseudo filesystem. @@ -273,24 +246,3 @@ task groups and modify their CPU share using the "cgroups" pseudo filesystem. # #Launch gmplayer (or your favourite movie player) # echo <movie_player_pid> > multimedia/tasks - -8. Implementation note: user namespaces - -User namespaces are intended to be hierarchical. But they are currently -only partially implemented. Each of those has ramifications for CFS. - -First, since user namespaces are hierarchical, the /sys/kernel/uids -presentation is inadequate. Eventually we will likely want to use sysfs -tagging to provide private views of /sys/kernel/uids within each user -namespace. - -Second, the hierarchical nature is intended to support completely -unprivileged use of user namespaces. So if using user groups, then -we want the users in a user namespace to be children of the user -who created it. - -That is currently unimplemented. So instead, every user in a new -user namespace will receive 1024 shares just like any user in the -initial user namespace. 
Note that at the moment creation of a new -user namespace requires each of CAP_SYS_ADMIN, CAP_SETUID, and -CAP_SETGID. diff --git a/Documentation/scheduler/sched-rt-group.txt b/Documentation/scheduler/sched-rt-group.txt index 86eabe6c3419..605b0d40329d 100644 --- a/Documentation/scheduler/sched-rt-group.txt +++ b/Documentation/scheduler/sched-rt-group.txt @@ -126,23 +126,12 @@ priority! 2.3 Basis for grouping tasks ---------------------------- -There are two compile-time settings for allocating CPU bandwidth. These are -configured using the "Basis for grouping tasks" multiple choice menu under -General setup > Group CPU Scheduler: - -a. CONFIG_USER_SCHED (aka "Basis for grouping tasks" = "user id") - -This lets you use the virtual files under -"/sys/kernel/uids/<uid>/cpu_rt_runtime_us" to control he CPU time reserved for -each user . - -The other option is: - -.o CONFIG_CGROUP_SCHED (aka "Basis for grouping tasks" = "Control groups") +Enabling CONFIG_RT_GROUP_SCHED lets you explicitly allocate real +CPU bandwidth to task groups. This uses the /cgroup virtual file system and "/cgroup/<cgroup>/cpu.rt_runtime_us" to control the CPU time reserved for each -control group instead. +control group. For more information on working with control groups, you should read Documentation/cgroups/cgroups.txt as well. @@ -161,8 +150,7 @@ For now, this can be simplified to just the following (but see Future plans): =============== There is work in progress to make the scheduling period for each group -("/sys/kernel/uids/<uid>/cpu_rt_period_us" or -"/cgroup/<cgroup>/cpu.rt_period_us" respectively) configurable as well. +("/cgroup/<cgroup>/cpu.rt_period_us") configurable as well. The constraint on the period is that a subgroup must have a smaller or equal period to its parent. But realistically its not very useful _yet_ diff --git a/Documentation/trace/events.txt b/Documentation/trace/events.txt index 02ac6ed38b2d..778ddf38b82c 100644 --- a/Documentation/trace/events.txt +++ b/Documentation/trace/events.txt @@ -90,7 +90,8 @@ In order to facilitate early boot debugging, use boot option: trace_event=[event-list] -The format of this boot option is the same as described in section 2.1. +event-list is a comma separated list of events. See section 2.1 for event +format. 3. Defining an event-enabled tracepoint ======================================= diff --git a/Documentation/trace/ftrace.txt b/Documentation/trace/ftrace.txt index 03485bfbd797..557c1edeccaf 100644 --- a/Documentation/trace/ftrace.txt +++ b/Documentation/trace/ftrace.txt @@ -155,6 +155,9 @@ of ftrace. Here is a list of some of the key files: to be traced. Echoing names of functions into this file will limit the trace to only those functions. + This interface also allows for commands to be used. See the + "Filter commands" section for more details. + set_ftrace_notrace: This has an effect opposite to that of @@ -1337,12 +1340,14 @@ ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one can either use the sysctl function or set it via the proc system interface. - sysctl kernel.ftrace_dump_on_oops=1 + sysctl kernel.ftrace_dump_on_oops=n or - echo 1 > /proc/sys/kernel/ftrace_dump_on_oops + echo n > /proc/sys/kernel/ftrace_dump_on_oops +If n = 1, ftrace will dump buffers of all CPUs, if n = 2 ftrace will +only dump the buffer of the CPU that triggered the oops. 
Here's an example of such a dump after a null pointer dereference in a kernel module: @@ -1822,6 +1827,47 @@ this special filter via: echo > set_graph_function +Filter commands +--------------- + +A few commands are supported by the set_ftrace_filter interface. +Trace commands have the following format: + +<function>:<command>:<parameter> + +The following commands are supported: + +- mod + This command enables function filtering per module. The + parameter defines the module. For example, if only the write* + functions in the ext3 module are desired, run: + + echo 'write*:mod:ext3' > set_ftrace_filter + + This command interacts with the filter in the same way as + filtering based on function names. Thus, adding more functions + in a different module is accomplished by appending (>>) to the + filter file. Remove specific module functions by prepending + '!': + + echo '!writeback*:mod:ext3' >> set_ftrace_filter + +- traceon/traceoff + These commands turn tracing on and off when the specified + functions are hit. The parameter determines how many times the + tracing system is turned on and off. If unspecified, there is + no limit. For example, to disable tracing when a schedule bug + is hit the first 5 times, run: + + echo '__schedule_bug:traceoff:5' > set_ftrace_filter + + These commands are cumulative whether or not they are appended + to set_ftrace_filter. To remove a command, prepend it by '!' + and drop the parameter: + + echo '!__schedule_bug:traceoff' > set_ftrace_filter + + trace_pipe ---------- diff --git a/Documentation/trace/kprobetrace.txt b/Documentation/trace/kprobetrace.txt index a9100b28eb84..ec94748ae65b 100644 --- a/Documentation/trace/kprobetrace.txt +++ b/Documentation/trace/kprobetrace.txt @@ -40,7 +40,9 @@ Synopsis of kprobe_events $stack : Fetch stack address. $retval : Fetch return value.(*) +|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**) - NAME=FETCHARG: Set NAME as the argument name of FETCHARG. + NAME=FETCHARG : Set NAME as the argument name of FETCHARG. + FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types + (u8/u16/u32/u64/s8/s16/s32/s64) are supported. (*) only for return probe. (**) this is useful for fetching a field of data structures. 
diff --git a/MAINTAINERS b/MAINTAINERS index 9372c742c3bc..b375bf9f4e2c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2954,6 +2954,17 @@ S: Odd Fixes F: Documentation/networking/README.ipw2200 F: drivers/net/wireless/ipw2x00/ipw2200.* +INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT) +M: Joseph Cihula <joseph.cihula@intel.com> +M: Shane Wang <shane.wang@intel.com> +L: tboot-devel@lists.sourceforge.net +W: http://tboot.sourceforge.net +T: Mercurial http://www.bughost.org/repos.hg/tboot.hg +S: Supported +F: Documentation/intel_txt.txt +F: include/linux/tboot.h +F: arch/x86/kernel/tboot.c + INTEL WIRELESS WIMAX CONNECTION 2400 M: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> M: linux-wimax@intel.com @@ -4165,6 +4176,7 @@ OPROFILE M: Robert Richter <robert.richter@amd.com> L: oprofile-list@lists.sf.net S: Maintained +F: arch/*/include/asm/oprofile*.h F: arch/*/oprofile/ F: drivers/oprofile/ F: include/linux/oprofile.h @@ -4353,13 +4365,13 @@ M: Paul Mackerras <paulus@samba.org> M: Ingo Molnar <mingo@elte.hu> M: Arnaldo Carvalho de Melo <acme@redhat.com> S: Supported -F: kernel/perf_event.c +F: kernel/perf_event*.c F: include/linux/perf_event.h -F: arch/*/kernel/perf_event.c -F: arch/*/kernel/*/perf_event.c -F: arch/*/kernel/*/*/perf_event.c +F: arch/*/kernel/perf_event*.c +F: arch/*/kernel/*/perf_event*.c +F: arch/*/kernel/*/*/perf_event*.c F: arch/*/include/asm/perf_event.h -F: arch/*/lib/perf_event.c +F: arch/*/lib/perf_event*.c F: arch/*/kernel/perf_callchain.c F: tools/perf/ @@ -5493,7 +5505,7 @@ S: Maintained F: drivers/mmc/host/tmio_mmc.* TMPFS (SHMEM FILESYSTEM) -M: Hugh Dickins <hugh.dickins@tiscali.co.uk> +M: Hugh Dickins <hughd@google.com> L: linux-mm@kvack.org S: Maintained F: include/linux/shmem_fs.h @@ -1,7 +1,7 @@ VERSION = 2 PATCHLEVEL = 6 SUBLEVEL = 34 -EXTRAVERSION = -rc7 +EXTRAVERSION = NAME = Sheep on Meth # *DOCUMENTATION* diff --git a/arch/Kconfig b/arch/Kconfig index e5eb1337a537..acda512da2e2 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -42,15 +42,10 @@ config KPROBES If in doubt, say "N". config OPTPROBES - bool "Kprobes jump optimization support (EXPERIMENTAL)" - default y - depends on KPROBES + def_bool y + depends on KPROBES && HAVE_OPTPROBES depends on !PREEMPT - depends on HAVE_OPTPROBES select KALLSYMS_ALL - help - This option will allow kprobes to optimize breakpoint to - a jump for reducing its overhead. config HAVE_EFFICIENT_UNALIGNED_ACCESS bool @@ -142,6 +137,17 @@ config HAVE_HW_BREAKPOINT bool depends on PERF_EVENTS +config HAVE_MIXED_BREAKPOINTS_REGS + bool + depends on HAVE_HW_BREAKPOINT + help + Depending on the arch implementation of hardware breakpoints, + some of them have separate registers for data and instruction + breakpoints addresses, others have mixed registers to store + them but define the access type in a control register. + Select this option if your arch implements breakpoints under the + latter fashion. 
+ config HAVE_USER_RETURN_NOTIFIER bool diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 610dff44d94b..e756d04b6cd5 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -17,8 +17,8 @@ #define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) #define ATOMIC64_INIT(i) ( (atomic64_t) { (i) } ) -#define atomic_read(v) ((v)->counter + 0) -#define atomic64_read(v) ((v)->counter + 0) +#define atomic_read(v) (*(volatile int *)&(v)->counter) +#define atomic64_read(v) (*(volatile long *)&(v)->counter) #define atomic_set(v,i) ((v)->counter = (i)) #define atomic64_set(v,i) ((v)->counter = (i)) diff --git a/arch/alpha/include/asm/bitops.h b/arch/alpha/include/asm/bitops.h index 15f3ae25c511..296da1d5ed57 100644 --- a/arch/alpha/include/asm/bitops.h +++ b/arch/alpha/include/asm/bitops.h @@ -405,29 +405,31 @@ static inline int fls(int x) #if defined(CONFIG_ALPHA_EV6) && defined(CONFIG_ALPHA_EV67) /* Whee. EV67 can calculate it directly. */ -static inline unsigned long hweight64(unsigned long w) +static inline unsigned long __arch_hweight64(unsigned long w) { return __kernel_ctpop(w); } -static inline unsigned int hweight32(unsigned int w) +static inline unsigned int __arch_weight32(unsigned int w) { - return hweight64(w); + return __arch_hweight64(w); } -static inline unsigned int hweight16(unsigned int w) +static inline unsigned int __arch_hweight16(unsigned int w) { - return hweight64(w & 0xffff); + return __arch_hweight64(w & 0xffff); } -static inline unsigned int hweight8(unsigned int w) +static inline unsigned int __arch_hweight8(unsigned int w) { - return hweight64(w & 0xff); + return __arch_hweight64(w & 0xff); } #else -#include <asm-generic/bitops/hweight.h> +#include <asm-generic/bitops/arch_hweight.h> #endif +#include <asm-generic/bitops/const_hweight.h> + #endif /* __KERNEL__ */ #include <asm-generic/bitops/find.h> diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index e8ddec2cb158..a0162fa94564 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -24,7 +24,7 @@ * strex/ldrex monitor on some implementations. The reason we can use it for * atomic_set() is the clrex or dummy strex done on every exception return. 
*/ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v,i) (((v)->counter) = (i)) #if __LINUX_ARM_ARCH__ >= 6 diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index 0d08d4170b64..4656a24058d2 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -371,6 +371,10 @@ static inline void __flush_icache_all(void) #ifdef CONFIG_ARM_ERRATA_411920 extern void v6_icache_inval_all(void); v6_icache_inval_all(); +#elif defined(CONFIG_SMP) && __LINUX_ARM_ARCH__ >= 7 + asm("mcr p15, 0, %0, c7, c1, 0 @ invalidate I-cache inner shareable\n" + : + : "r" (0)); #else asm("mcr p15, 0, %0, c7, c5, 0 @ invalidate I-cache\n" : diff --git a/arch/arm/include/asm/smp_twd.h b/arch/arm/include/asm/smp_twd.h index 7be0978b2625..634f357be6bb 100644 --- a/arch/arm/include/asm/smp_twd.h +++ b/arch/arm/include/asm/smp_twd.h @@ -1,6 +1,23 @@ #ifndef __ASMARM_SMP_TWD_H #define __ASMARM_SMP_TWD_H +#define TWD_TIMER_LOAD 0x00 +#define TWD_TIMER_COUNTER 0x04 +#define TWD_TIMER_CONTROL 0x08 +#define TWD_TIMER_INTSTAT 0x0C + +#define TWD_WDOG_LOAD 0x20 +#define TWD_WDOG_COUNTER 0x24 +#define TWD_WDOG_CONTROL 0x28 +#define TWD_WDOG_INTSTAT 0x2C +#define TWD_WDOG_RESETSTAT 0x30 +#define TWD_WDOG_DISABLE 0x34 + +#define TWD_TIMER_CONTROL_ENABLE (1 << 0) +#define TWD_TIMER_CONTROL_ONESHOT (0 << 1) +#define TWD_TIMER_CONTROL_PERIODIC (1 << 1) +#define TWD_TIMER_CONTROL_IT_ENABLE (1 << 2) + struct clock_event_device; extern void __iomem *twd_base; diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index e085e2c545eb..bd863d8608cd 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -46,6 +46,9 @@ #define TLB_V7_UIS_FULL (1 << 20) #define TLB_V7_UIS_ASID (1 << 21) +/* Inner Shareable BTB operation (ARMv7 MP extensions) */ +#define TLB_V7_IS_BTB (1 << 22) + #define TLB_L2CLEAN_FR (1 << 29) /* Feroceon */ #define TLB_DCLEAN (1 << 30) #define TLB_WB (1 << 31) @@ -183,7 +186,7 @@ #endif #ifdef CONFIG_SMP -#define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_BTB | \ +#define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_V7_IS_BTB | \ TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | TLB_V7_UIS_ASID) #else #define v7wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_BTB | \ @@ -339,6 +342,12 @@ static inline void local_flush_tlb_all(void) dsb(); isb(); } + if (tlb_flag(TLB_V7_IS_BTB)) { + /* flush the branch target cache */ + asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc"); + dsb(); + isb(); + } } static inline void local_flush_tlb_mm(struct mm_struct *mm) @@ -376,6 +385,12 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm) asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero) : "cc"); dsb(); } + if (tlb_flag(TLB_V7_IS_BTB)) { + /* flush the branch target cache */ + asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc"); + dsb(); + isb(); + } } static inline void @@ -416,6 +431,12 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero) : "cc"); dsb(); } + if (tlb_flag(TLB_V7_IS_BTB)) { + /* flush the branch target cache */ + asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc"); + dsb(); + isb(); + } } static inline void local_flush_tlb_kernel_page(unsigned long kaddr) @@ -454,6 +475,12 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr) dsb(); isb(); } + if (tlb_flag(TLB_V7_IS_BTB)) { + /* flush the branch target cache */ + asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero) : "cc"); + 
dsb(); + isb(); + } } /* diff --git a/arch/arm/kernel/smp_twd.c b/arch/arm/kernel/smp_twd.c index ea02a7b1c244..7c5f0c024db7 100644 --- a/arch/arm/kernel/smp_twd.c +++ b/arch/arm/kernel/smp_twd.c @@ -21,23 +21,6 @@ #include <asm/smp_twd.h> #include <asm/hardware/gic.h> -#define TWD_TIMER_LOAD 0x00 -#define TWD_TIMER_COUNTER 0x04 -#define TWD_TIMER_CONTROL 0x08 -#define TWD_TIMER_INTSTAT 0x0C - -#define TWD_WDOG_LOAD 0x20 -#define TWD_WDOG_COUNTER 0x24 -#define TWD_WDOG_CONTROL 0x28 -#define TWD_WDOG_INTSTAT 0x2C -#define TWD_WDOG_RESETSTAT 0x30 -#define TWD_WDOG_DISABLE 0x34 - -#define TWD_TIMER_CONTROL_ENABLE (1 << 0) -#define TWD_TIMER_CONTROL_ONESHOT (0 << 1) -#define TWD_TIMER_CONTROL_PERIODIC (1 << 1) -#define TWD_TIMER_CONTROL_IT_ENABLE (1 << 2) - /* set up by the platform code */ void __iomem *twd_base; diff --git a/arch/arm/lib/clear_user.S b/arch/arm/lib/clear_user.S index 5e3f99620c04..14a0d988c82c 100644 --- a/arch/arm/lib/clear_user.S +++ b/arch/arm/lib/clear_user.S @@ -45,6 +45,7 @@ USER( strnebt r2, [r0]) mov r0, #0 ldmfd sp!, {r1, pc} ENDPROC(__clear_user) +ENDPROC(__clear_user_std) .pushsection .fixup,"ax" .align 0 diff --git a/arch/arm/lib/copy_to_user.S b/arch/arm/lib/copy_to_user.S index 027b69bdbad1..d066df686e17 100644 --- a/arch/arm/lib/copy_to_user.S +++ b/arch/arm/lib/copy_to_user.S @@ -93,6 +93,7 @@ WEAK(__copy_to_user) #include "copy_template.S" ENDPROC(__copy_to_user) +ENDPROC(__copy_to_user_std) .pushsection .fixup,"ax" .align 0 diff --git a/arch/arm/mach-davinci/da830.c b/arch/arm/mach-davinci/da830.c index 122e61a9f505..e8cb982f5e8e 100644 --- a/arch/arm/mach-davinci/da830.c +++ b/arch/arm/mach-davinci/da830.c @@ -410,7 +410,7 @@ static struct clk_lookup da830_clks[] = { CLK("davinci-mcasp.0", NULL, &mcasp0_clk), CLK("davinci-mcasp.1", NULL, &mcasp1_clk), CLK("davinci-mcasp.2", NULL, &mcasp2_clk), - CLK("musb_hdrc", NULL, &usb20_clk), + CLK(NULL, "usb20", &usb20_clk), CLK(NULL, "aemif", &aemif_clk), CLK(NULL, "aintc", &aintc_clk), CLK(NULL, "secu_mgr", &secu_mgr_clk), diff --git a/arch/arm/mm/cache-v6.S b/arch/arm/mm/cache-v6.S index 9d89c67a1cc3..e46ecd847138 100644 --- a/arch/arm/mm/cache-v6.S +++ b/arch/arm/mm/cache-v6.S @@ -211,6 +211,9 @@ v6_dma_inv_range: mcrne p15, 0, r1, c7, c15, 1 @ clean & invalidate unified line #endif 1: +#ifdef CONFIG_SMP + str r0, [r0] @ write for ownership +#endif #ifdef HARVARD_CACHE mcr p15, 0, r0, c7, c6, 1 @ invalidate D line #else @@ -231,6 +234,9 @@ v6_dma_inv_range: v6_dma_clean_range: bic r0, r0, #D_CACHE_LINE_SIZE - 1 1: +#ifdef CONFIG_SMP + ldr r2, [r0] @ read for ownership +#endif #ifdef HARVARD_CACHE mcr p15, 0, r0, c7, c10, 1 @ clean D line #else @@ -251,6 +257,10 @@ v6_dma_clean_range: ENTRY(v6_dma_flush_range) bic r0, r0, #D_CACHE_LINE_SIZE - 1 1: +#ifdef CONFIG_SMP + ldr r2, [r0] @ read for ownership + str r2, [r0] @ write for ownership +#endif #ifdef HARVARD_CACHE mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D line #else @@ -273,7 +283,9 @@ ENTRY(v6_dma_map_area) add r1, r1, r0 teq r2, #DMA_FROM_DEVICE beq v6_dma_inv_range - b v6_dma_clean_range + teq r2, #DMA_TO_DEVICE + beq v6_dma_clean_range + b v6_dma_flush_range ENDPROC(v6_dma_map_area) /* @@ -283,9 +295,6 @@ ENDPROC(v6_dma_map_area) * - dir - DMA direction */ ENTRY(v6_dma_unmap_area) - add r1, r1, r0 - teq r2, #DMA_TO_DEVICE - bne v6_dma_inv_range mov pc, lr ENDPROC(v6_dma_unmap_area) diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S index bcd64f265870..06a90dcfc60a 100644 --- a/arch/arm/mm/cache-v7.S +++ b/arch/arm/mm/cache-v7.S @@ -167,7 
+167,11 @@ ENTRY(v7_coherent_user_range) cmp r0, r1 blo 1b mov r0, #0 +#ifdef CONFIG_SMP + mcr p15, 0, r0, c7, c1, 6 @ invalidate BTB Inner Shareable +#else mcr p15, 0, r0, c7, c5, 6 @ invalidate BTB +#endif dsb isb mov pc, lr diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 9bfeb6b9509a..33b327379f07 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -65,6 +65,15 @@ void flush_dcache_page(struct page *page) } EXPORT_SYMBOL(flush_dcache_page); +void copy_to_user_page(struct vm_area_struct *vma, struct page *page, + unsigned long uaddr, void *dst, const void *src, + unsigned long len) +{ + memcpy(dst, src, len); + if (vma->vm_flags & VM_EXEC) + __cpuc_coherent_user_range(uaddr, uaddr + len); +} + void __iomem *__arm_ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size, unsigned int mtype) { @@ -87,8 +96,8 @@ void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size, } EXPORT_SYMBOL(__arm_ioremap); -void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size, - unsigned int mtype, void *caller) +void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size, + unsigned int mtype, void *caller) { return __arm_ioremap(phys_addr, size, mtype); } diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S index 0cb1848bd876..f3f288a9546d 100644 --- a/arch/arm/mm/tlb-v7.S +++ b/arch/arm/mm/tlb-v7.S @@ -50,7 +50,11 @@ ENTRY(v7wbi_flush_user_tlb_range) cmp r0, r1 blo 1b mov ip, #0 +#ifdef CONFIG_SMP + mcr p15, 0, ip, c7, c1, 6 @ flush BTAC/BTB Inner Shareable +#else mcr p15, 0, ip, c7, c5, 6 @ flush BTAC/BTB +#endif dsb mov pc, lr ENDPROC(v7wbi_flush_user_tlb_range) @@ -79,7 +83,11 @@ ENTRY(v7wbi_flush_kern_tlb_range) cmp r0, r1 blo 1b mov r2, #0 +#ifdef CONFIG_SMP + mcr p15, 0, r2, c7, c1, 6 @ flush BTAC/BTB Inner Shareable +#else mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB +#endif dsb isb mov pc, lr diff --git a/arch/avr32/include/asm/atomic.h b/arch/avr32/include/asm/atomic.h index b131c27ddf57..bbce6a1c6bb6 100644 --- a/arch/avr32/include/asm/atomic.h +++ b/arch/avr32/include/asm/atomic.h @@ -19,7 +19,7 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = i) /* diff --git a/arch/cris/include/asm/atomic.h b/arch/cris/include/asm/atomic.h index a6aca819e9f3..88dc9b9c4ba0 100644 --- a/arch/cris/include/asm/atomic.h +++ b/arch/cris/include/asm/atomic.h @@ -15,7 +15,7 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v,i) (((v)->counter) = (i)) /* These should be written in asm but we do it in C for now. 
*/ diff --git a/arch/frv/include/asm/atomic.h b/arch/frv/include/asm/atomic.h index 00a57af79afc..fae32c7fdcb6 100644 --- a/arch/frv/include/asm/atomic.h +++ b/arch/frv/include/asm/atomic.h @@ -36,7 +36,7 @@ #define smp_mb__after_atomic_inc() barrier() #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = (i)) #ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index 33c8c0fa9583..e936804b7508 100644 --- a/arch/h8300/include/asm/atomic.h +++ b/arch/h8300/include/asm/atomic.h @@ -10,7 +10,7 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = i) #include <asm/system.h> diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 88405cb0832a..4e1948447a00 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -21,8 +21,8 @@ #define ATOMIC_INIT(i) ((atomic_t) { (i) }) #define ATOMIC64_INIT(i) ((atomic64_t) { (i) }) -#define atomic_read(v) ((v)->counter) -#define atomic64_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) +#define atomic64_read(v) (*(volatile long *)&(v)->counter) #define atomic_set(v,i) (((v)->counter) = (i)) #define atomic64_set(v,i) (((v)->counter) = (i)) diff --git a/arch/ia64/include/asm/bitops.h b/arch/ia64/include/asm/bitops.h index 6ebc229a1c51..9da3df6f1a52 100644 --- a/arch/ia64/include/asm/bitops.h +++ b/arch/ia64/include/asm/bitops.h @@ -437,17 +437,18 @@ __fls (unsigned long x) * hweightN: returns the hamming weight (i.e. the number * of bits set) of a N-bit word */ -static __inline__ unsigned long -hweight64 (unsigned long x) +static __inline__ unsigned long __arch_hweight64(unsigned long x) { unsigned long result; result = ia64_popcnt(x); return result; } -#define hweight32(x) (unsigned int) hweight64((x) & 0xfffffffful) -#define hweight16(x) (unsigned int) hweight64((x) & 0xfffful) -#define hweight8(x) (unsigned int) hweight64((x) & 0xfful) +#define __arch_hweight32(x) ((unsigned int) __arch_hweight64((x) & 0xfffffffful)) +#define __arch_hweight16(x) ((unsigned int) __arch_hweight64((x) & 0xfffful)) +#define __arch_hweight8(x) ((unsigned int) __arch_hweight64((x) & 0xfful)) + +#include <asm-generic/bitops/const_hweight.h> #endif /* __KERNEL__ */ diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c index 4d1a7e9314cf..c6c90f39f4d9 100644 --- a/arch/ia64/kernel/acpi.c +++ b/arch/ia64/kernel/acpi.c @@ -785,6 +785,14 @@ int acpi_gsi_to_irq(u32 gsi, unsigned int *irq) return 0; } +int acpi_isa_irq_to_gsi(unsigned isa_irq, u32 *gsi) +{ + if (isa_irq >= 16) + return -1; + *gsi = isa_irq; + return 0; +} + /* * ACPI based hotplug CPU support */ diff --git a/arch/m32r/include/asm/atomic.h b/arch/m32r/include/asm/atomic.h index 63f0cf0f50dd..d44a51e5271b 100644 --- a/arch/m32r/include/asm/atomic.h +++ b/arch/m32r/include/asm/atomic.h @@ -26,7 +26,7 @@ * * Atomically reads the value of @v. 
*/ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) /** * atomic_set - set atomic variable diff --git a/arch/m68k/amiga/Makefile b/arch/m68k/amiga/Makefile index 6a0d7650f980..11dd30b16b3b 100644 --- a/arch/m68k/amiga/Makefile +++ b/arch/m68k/amiga/Makefile @@ -2,6 +2,6 @@ # Makefile for Linux arch/m68k/amiga source directory # -obj-y := config.o amiints.o cia.o chipram.o amisound.o +obj-y := config.o amiints.o cia.o chipram.o amisound.o platform.o obj-$(CONFIG_AMIGA_PCMCIA) += pcmcia.o diff --git a/arch/m68k/amiga/platform.c b/arch/m68k/amiga/platform.c new file mode 100644 index 000000000000..38f18bf14737 --- /dev/null +++ b/arch/m68k/amiga/platform.c @@ -0,0 +1,83 @@ +/* + * Copyright (C) 2007-2009 Geert Uytterhoeven + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file COPYING in the main directory of this archive + * for more details. + */ + +#include <linux/init.h> +#include <linux/platform_device.h> +#include <linux/zorro.h> + +#include <asm/amigahw.h> + + +#ifdef CONFIG_ZORRO + +static const struct resource zorro_resources[] __initconst = { + /* Zorro II regions (on Zorro II/III) */ + { + .name = "Zorro II exp", + .start = 0x00e80000, + .end = 0x00efffff, + .flags = IORESOURCE_MEM, + }, { + .name = "Zorro II mem", + .start = 0x00200000, + .end = 0x009fffff, + .flags = IORESOURCE_MEM, + }, + /* Zorro III regions (on Zorro III only) */ + { + .name = "Zorro III exp", + .start = 0xff000000, + .end = 0xffffffff, + .flags = IORESOURCE_MEM, + }, { + .name = "Zorro III cfg", + .start = 0x40000000, + .end = 0x7fffffff, + .flags = IORESOURCE_MEM, + } +}; + + +static int __init amiga_init_bus(void) +{ + if (!MACH_IS_AMIGA || !AMIGAHW_PRESENT(ZORRO)) + return -ENODEV; + + platform_device_register_simple("amiga-zorro", -1, zorro_resources, + AMIGAHW_PRESENT(ZORRO3) ? 4 : 2); + return 0; +} + +subsys_initcall(amiga_init_bus); + +#endif /* CONFIG_ZORRO */ + + +static int __init amiga_init_devices(void) +{ + if (!MACH_IS_AMIGA) + return -ENODEV; + + /* video hardware */ + if (AMIGAHW_PRESENT(AMI_VIDEO)) + platform_device_register_simple("amiga-video", -1, NULL, 0); + + + /* sound hardware */ + if (AMIGAHW_PRESENT(AMI_AUDIO)) + platform_device_register_simple("amiga-audio", -1, NULL, 0); + + + /* storage interfaces */ + if (AMIGAHW_PRESENT(AMI_FLOPPY)) + platform_device_register_simple("amiga-floppy", -1, NULL, 0); + + return 0; +} + +device_initcall(amiga_init_devices); diff --git a/arch/m68k/bvme6000/rtc.c b/arch/m68k/bvme6000/rtc.c index b46ea1714a89..cb8617bb194b 100644 --- a/arch/m68k/bvme6000/rtc.c +++ b/arch/m68k/bvme6000/rtc.c @@ -9,7 +9,6 @@ #include <linux/types.h> #include <linux/errno.h> #include <linux/miscdevice.h> -#include <linux/smp_lock.h> #include <linux/ioport.h> #include <linux/capability.h> #include <linux/fcntl.h> @@ -35,10 +34,9 @@ static unsigned char days_in_mo[] = {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; -static char rtc_status; +static atomic_t rtc_status = ATOMIC_INIT(1); -static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd, - unsigned long arg) +static long rtc_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { volatile RtcPtr_t rtc = (RtcPtr_t)BVME_RTC_BASE; unsigned char msr; @@ -132,29 +130,20 @@ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd, } /* - * We enforce only one user at a time here with the open/close. 
- * Also clear the previous interrupt data on an open, and clean - * up things on a close. + * We enforce only one user at a time here with the open/close. */ - static int rtc_open(struct inode *inode, struct file *file) { - lock_kernel(); - if(rtc_status) { - unlock_kernel(); + if (!atomic_dec_and_test(&rtc_status)) { + atomic_inc(&rtc_status); return -EBUSY; } - - rtc_status = 1; - unlock_kernel(); return 0; } static int rtc_release(struct inode *inode, struct file *file) { - lock_kernel(); - rtc_status = 0; - unlock_kernel(); + atomic_inc(&rtc_status); return 0; } @@ -163,9 +152,9 @@ static int rtc_release(struct inode *inode, struct file *file) */ static const struct file_operations rtc_fops = { - .ioctl = rtc_ioctl, - .open = rtc_open, - .release = rtc_release, + .unlocked_ioctl = rtc_ioctl, + .open = rtc_open, + .release = rtc_release, }; static struct miscdevice rtc_dev = { diff --git a/arch/m68k/hp300/time.h b/arch/m68k/hp300/time.h index f5b3d098b0f5..7b98242960de 100644 --- a/arch/m68k/hp300/time.h +++ b/arch/m68k/hp300/time.h @@ -1,4 +1,2 @@ extern void hp300_sched_init(irq_handler_t vector); -extern unsigned long hp300_gettimeoffset (void); - - +extern unsigned long hp300_gettimeoffset(void); diff --git a/arch/m68k/include/asm/atomic_mm.h b/arch/m68k/include/asm/atomic_mm.h index d9d2ed647435..6a223b3f7e74 100644 --- a/arch/m68k/include/asm/atomic_mm.h +++ b/arch/m68k/include/asm/atomic_mm.h @@ -15,7 +15,7 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = i) static inline void atomic_add(int i, atomic_t *v) diff --git a/arch/m68k/include/asm/atomic_no.h b/arch/m68k/include/asm/atomic_no.h index 5674cb9449bd..289310c63a8a 100644 --- a/arch/m68k/include/asm/atomic_no.h +++ b/arch/m68k/include/asm/atomic_no.h @@ -15,7 +15,7 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = i) static __inline__ void atomic_add(int i, atomic_t *v) diff --git a/arch/m68k/include/asm/bitops_mm.h b/arch/m68k/include/asm/bitops_mm.h index 9bde784e7bad..b4ecdaada520 100644 --- a/arch/m68k/include/asm/bitops_mm.h +++ b/arch/m68k/include/asm/bitops_mm.h @@ -365,6 +365,10 @@ static inline int minix_test_bit(int nr, const void *vaddr) #define ext2_set_bit_atomic(lock, nr, addr) test_and_set_bit((nr) ^ 24, (unsigned long *)(addr)) #define ext2_clear_bit(nr, addr) __test_and_clear_bit((nr) ^ 24, (unsigned long *)(addr)) #define ext2_clear_bit_atomic(lock, nr, addr) test_and_clear_bit((nr) ^ 24, (unsigned long *)(addr)) +#define ext2_find_next_zero_bit(addr, size, offset) \ + generic_find_next_zero_le_bit((unsigned long *)addr, size, offset) +#define ext2_find_next_bit(addr, size, offset) \ + generic_find_next_le_bit((unsigned long *)addr, size, offset) static inline int ext2_test_bit(int nr, const void *vaddr) { @@ -394,10 +398,9 @@ static inline int ext2_find_first_zero_bit(const void *vaddr, unsigned size) return (p - addr) * 32 + res; } -static inline int ext2_find_next_zero_bit(const void *vaddr, unsigned size, - unsigned offset) +static inline unsigned long generic_find_next_zero_le_bit(const unsigned long *addr, + unsigned long size, unsigned long offset) { - const unsigned long *addr = vaddr; const unsigned long *p = addr + (offset >> 5); int bit = offset & 31UL, res; @@ -437,10 +440,9 @@ static inline int ext2_find_first_bit(const void *vaddr, unsigned size) 
return (p - addr) * 32 + res; } -static inline int ext2_find_next_bit(const void *vaddr, unsigned size, - unsigned offset) +static inline unsigned long generic_find_next_le_bit(const unsigned long *addr, + unsigned long size, unsigned long offset) { - const unsigned long *addr = vaddr; const unsigned long *p = addr + (offset >> 5); int bit = offset & 31UL, res; diff --git a/arch/m68k/include/asm/param.h b/arch/m68k/include/asm/param.h index 85c41b75aa78..36265ccf5c7b 100644 --- a/arch/m68k/include/asm/param.h +++ b/arch/m68k/include/asm/param.h @@ -1,26 +1,12 @@ #ifndef _M68K_PARAM_H #define _M68K_PARAM_H -#ifdef __KERNEL__ -# define HZ CONFIG_HZ /* Internal kernel timer frequency */ -# define USER_HZ 100 /* .. some user interfaces are in "ticks" */ -# define CLOCKS_PER_SEC (USER_HZ) /* like times() */ -#endif - -#ifndef HZ -#define HZ 100 -#endif - #ifdef __uClinux__ #define EXEC_PAGESIZE 4096 #else #define EXEC_PAGESIZE 8192 #endif -#ifndef NOGROUP -#define NOGROUP (-1) -#endif - -#define MAXHOSTNAMELEN 64 /* max length of hostname */ +#include <asm-generic/param.h> #endif /* _M68K_PARAM_H */ diff --git a/arch/m68k/kernel/traps.c b/arch/m68k/kernel/traps.c index aacd6d17b833..ada4f4cca811 100644 --- a/arch/m68k/kernel/traps.c +++ b/arch/m68k/kernel/traps.c @@ -455,7 +455,7 @@ static inline void access_error040(struct frame *fp) if (do_page_fault(&fp->ptregs, addr, errorcode)) { #ifdef DEBUG - printk("do_page_fault() !=0 \n"); + printk("do_page_fault() !=0\n"); #endif if (user_mode(&fp->ptregs)){ /* delay writebacks after signal delivery */ diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c index 0356da9bf763..1c16b1baf8db 100644 --- a/arch/m68k/mac/config.c +++ b/arch/m68k/mac/config.c @@ -148,7 +148,7 @@ static void mac_cache_card_flush(int writeback) void __init config_mac(void) { if (!MACH_IS_MAC) - printk(KERN_ERR "ERROR: no Mac, but config_mac() called!! \n"); + printk(KERN_ERR "ERROR: no Mac, but config_mac() called!!\n"); mach_sched_init = mac_sched_init; mach_init_IRQ = mac_init_IRQ; @@ -867,7 +867,7 @@ static void __init mac_identify(void) */ iop_preinit(); - printk(KERN_INFO "Detected Macintosh model: %d \n", model); + printk(KERN_INFO "Detected Macintosh model: %d\n", model); /* * Report booter data: @@ -878,12 +878,12 @@ static void __init mac_identify(void) mac_bi_data.videoaddr, mac_bi_data.videorow, mac_bi_data.videodepth, mac_bi_data.dimensions & 0xFFFF, mac_bi_data.dimensions >> 16); - printk(KERN_DEBUG " Videological 0x%lx phys. 0x%lx, SCC at 0x%lx \n", + printk(KERN_DEBUG " Videological 0x%lx phys. 0x%lx, SCC at 0x%lx\n", mac_bi_data.videological, mac_orig_videoaddr, mac_bi_data.sccbase); - printk(KERN_DEBUG " Boottime: 0x%lx GMTBias: 0x%lx \n", + printk(KERN_DEBUG " Boottime: 0x%lx GMTBias: 0x%lx\n", mac_bi_data.boottime, mac_bi_data.gmtbias); - printk(KERN_DEBUG " Machine ID: %ld CPUid: 0x%lx memory size: 0x%lx \n", + printk(KERN_DEBUG " Machine ID: %ld CPUid: 0x%lx memory size: 0x%lx\n", mac_bi_data.id, mac_bi_data.cpuid, mac_bi_data.memsize); iop_init(); diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c index d0e35cf99fc6..a96394a0333d 100644 --- a/arch/m68k/mm/fault.c +++ b/arch/m68k/mm/fault.c @@ -154,7 +154,6 @@ good_area: * the fault. */ - survive: fault = handle_mm_fault(mm, vma, address, write ? 
FAULT_FLAG_WRITE : 0); #ifdef DEBUG printk("handle_mm_fault returns %d\n",fault); @@ -180,15 +179,10 @@ good_area: */ out_of_memory: up_read(&mm->mmap_sem); - if (is_global_init(current)) { - yield(); - down_read(&mm->mmap_sem); - goto survive; - } - - printk("VM: killing process %s\n", current->comm); - if (user_mode(regs)) - do_group_exit(SIGKILL); + if (!user_mode(regs)) + goto no_context; + pagefault_out_of_memory(); + return 0; no_context: current->thread.signo = SIGBUS; diff --git a/arch/m68k/mvme16x/rtc.c b/arch/m68k/mvme16x/rtc.c index 8da9c250d3e1..11ac6f63967a 100644 --- a/arch/m68k/mvme16x/rtc.c +++ b/arch/m68k/mvme16x/rtc.c @@ -9,7 +9,6 @@ #include <linux/types.h> #include <linux/errno.h> #include <linux/miscdevice.h> -#include <linux/smp_lock.h> #include <linux/ioport.h> #include <linux/capability.h> #include <linux/fcntl.h> @@ -36,8 +35,7 @@ static const unsigned char days_in_mo[] = static atomic_t rtc_ready = ATOMIC_INIT(1); -static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd, - unsigned long arg) +static long rtc_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { volatile MK48T08ptr_t rtc = (MK48T08ptr_t)MVME_RTC_BASE; unsigned long flags; @@ -120,22 +118,15 @@ static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd, } /* - * We enforce only one user at a time here with the open/close. - * Also clear the previous interrupt data on an open, and clean - * up things on a close. + * We enforce only one user at a time here with the open/close. */ - static int rtc_open(struct inode *inode, struct file *file) { - lock_kernel(); if( !atomic_dec_and_test(&rtc_ready) ) { atomic_inc( &rtc_ready ); - unlock_kernel(); return -EBUSY; } - unlock_kernel(); - return 0; } @@ -150,9 +141,9 @@ static int rtc_release(struct inode *inode, struct file *file) */ static const struct file_operations rtc_fops = { - .ioctl = rtc_ioctl, - .open = rtc_open, - .release = rtc_release, + .unlocked_ioctl = rtc_ioctl, + .open = rtc_open, + .release = rtc_release, }; static struct miscdevice rtc_dev= diff --git a/arch/m68k/q40/config.c b/arch/m68k/q40/config.c index 31ab3f08bbda..ad10fecec2fe 100644 --- a/arch/m68k/q40/config.c +++ b/arch/m68k/q40/config.c @@ -126,7 +126,7 @@ static void q40_reset(void) { halted = 1; printk("\n\n*******************************************\n" - "Called q40_reset : press the RESET button!! \n" + "Called q40_reset : press the RESET button!!\n" "*******************************************\n"); Q40_LED_ON(); while (1) diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h index 446bec29b142..26460d15b338 100644 --- a/arch/microblaze/include/asm/uaccess.h +++ b/arch/microblaze/include/asm/uaccess.h @@ -182,6 +182,39 @@ extern long __user_bad(void); * Returns zero on success, or -EFAULT on error. * On error, the variable @x is set to zero. 
*/ +#define get_user(x, ptr) \ + __get_user_check((x), (ptr), sizeof(*(ptr))) + +#define __get_user_check(x, ptr, size) \ +({ \ + unsigned long __gu_val = 0; \ + const typeof(*(ptr)) __user *__gu_addr = (ptr); \ + int __gu_err = 0; \ + \ + if (access_ok(VERIFY_READ, __gu_addr, size)) { \ + switch (size) { \ + case 1: \ + __get_user_asm("lbu", __gu_addr, __gu_val, \ + __gu_err); \ + break; \ + case 2: \ + __get_user_asm("lhu", __gu_addr, __gu_val, \ + __gu_err); \ + break; \ + case 4: \ + __get_user_asm("lw", __gu_addr, __gu_val, \ + __gu_err); \ + break; \ + default: \ + __gu_err = __user_bad(); \ + break; \ + } \ + } else { \ + __gu_err = -EFAULT; \ + } \ + x = (typeof(*(ptr)))__gu_val; \ + __gu_err; \ +}) #define __get_user(x, ptr) \ ({ \ @@ -206,12 +239,6 @@ extern long __user_bad(void); }) -#define get_user(x, ptr) \ -({ \ - access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) \ - ? __get_user((x), (ptr)) : -EFAULT; \ -}) - #define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ ({ \ __asm__ __volatile__ ( \ @@ -266,6 +293,42 @@ extern long __user_bad(void); * * Returns zero on success, or -EFAULT on error. */ +#define put_user(x, ptr) \ + __put_user_check((x), (ptr), sizeof(*(ptr))) + +#define __put_user_check(x, ptr, size) \ +({ \ + typeof(*(ptr)) __pu_val; \ + typeof(*(ptr)) __user *__pu_addr = (ptr); \ + int __pu_err = 0; \ + \ + __pu_val = (x); \ + if (access_ok(VERIFY_WRITE, __pu_addr, size)) { \ + switch (size) { \ + case 1: \ + __put_user_asm("sb", __pu_addr, __pu_val, \ + __pu_err); \ + break; \ + case 2: \ + __put_user_asm("sh", __pu_addr, __pu_val, \ + __pu_err); \ + break; \ + case 4: \ + __put_user_asm("sw", __pu_addr, __pu_val, \ + __pu_err); \ + break; \ + case 8: \ + __put_user_asm_8(__pu_addr, __pu_val, __pu_err);\ + break; \ + default: \ + __pu_err = __user_bad(); \ + break; \ + } \ + } else { \ + __pu_err = -EFAULT; \ + } \ + __pu_err; \ +}) #define __put_user(x, ptr) \ ({ \ @@ -290,18 +353,6 @@ extern long __user_bad(void); __gu_err; \ }) -#ifndef CONFIG_MMU - -#define put_user(x, ptr) __put_user((x), (ptr)) - -#else /* CONFIG_MMU */ - -#define put_user(x, ptr) \ -({ \ - access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) \ - ? __put_user((x), (ptr)) : -EFAULT; \ -}) -#endif /* CONFIG_MMU */ /* copy_to_from_user */ #define __copy_from_user(to, from, n) \ diff --git a/arch/microblaze/kernel/cpu/cache.c b/arch/microblaze/kernel/cpu/cache.c index 21c3a92394de..109876e8d643 100644 --- a/arch/microblaze/kernel/cpu/cache.c +++ b/arch/microblaze/kernel/cpu/cache.c @@ -137,8 +137,9 @@ do { \ do { \ int step = -line_length; \ int align = ~(line_length - 1); \ + int count; \ end = ((end & align) == end) ? 
end - line_length : end & align; \ - int count = end - start; \ + count = end - start; \ WARN_ON(count < 0); \ \ __asm__ __volatile__ (" 1: " #op " %0, %1; \ diff --git a/arch/microblaze/kernel/entry-nommu.S b/arch/microblaze/kernel/entry-nommu.S index 391d6197fc3b..8cc18cd2cce6 100644 --- a/arch/microblaze/kernel/entry-nommu.S +++ b/arch/microblaze/kernel/entry-nommu.S @@ -476,6 +476,8 @@ ENTRY(ret_from_fork) nop work_pending: + enable_irq + andi r11, r19, _TIF_NEED_RESCHED beqi r11, 1f bralid r15, schedule diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c index bc4dcb7d3861..ff85f7718035 100644 --- a/arch/microblaze/kernel/microblaze_ksyms.c +++ b/arch/microblaze/kernel/microblaze_ksyms.c @@ -52,3 +52,14 @@ EXPORT_SYMBOL_GPL(_ebss); extern void _mcount(void); EXPORT_SYMBOL(_mcount); #endif + +/* + * Assembly functions that may be used (directly or indirectly) by modules + */ +EXPORT_SYMBOL(__copy_tofrom_user); +EXPORT_SYMBOL(__strncpy_user); + +#ifdef CONFIG_OPT_LIB_ASM +EXPORT_SYMBOL(memcpy); +EXPORT_SYMBOL(memmove); +#endif diff --git a/arch/microblaze/kernel/module.c b/arch/microblaze/kernel/module.c index cbecf110dc30..0e73f6606547 100644 --- a/arch/microblaze/kernel/module.c +++ b/arch/microblaze/kernel/module.c @@ -16,6 +16,7 @@ #include <linux/string.h> #include <asm/pgtable.h> +#include <asm/cacheflush.h> void *module_alloc(unsigned long size) { @@ -151,6 +152,7 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *module) { + flush_dcache(); return 0; } diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c index f42c2dde8b1c..cca3579d4268 100644 --- a/arch/microblaze/mm/init.c +++ b/arch/microblaze/mm/init.c @@ -47,6 +47,7 @@ unsigned long memory_start; EXPORT_SYMBOL(memory_start); unsigned long memory_end; /* due to mm/nommu.c */ unsigned long memory_size; +EXPORT_SYMBOL(memory_size); /* * paging_init() sets up the page tables - in fact we've already done this. diff --git a/arch/microblaze/mm/pgtable.c b/arch/microblaze/mm/pgtable.c index 784557fb28cf..59bf2335a4ce 100644 --- a/arch/microblaze/mm/pgtable.c +++ b/arch/microblaze/mm/pgtable.c @@ -42,6 +42,7 @@ unsigned long ioremap_base; unsigned long ioremap_bot; +EXPORT_SYMBOL(ioremap_bot); /* The maximum lowmem defaults to 768Mb, but this can be configured to * another value. diff --git a/arch/microblaze/pci/pci-common.c b/arch/microblaze/pci/pci-common.c index 01c8c97c15b7..9cb782b8e036 100644 --- a/arch/microblaze/pci/pci-common.c +++ b/arch/microblaze/pci/pci-common.c @@ -1507,7 +1507,7 @@ void pcibios_finish_adding_to_bus(struct pci_bus *bus) pci_bus_add_devices(bus); /* Fixup EEH */ - eeh_add_device_tree_late(bus); + /* eeh_add_device_tree_late(bus); */ } EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus); diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 519197ede089..59dc0c7ef733 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -29,7 +29,7 @@ * * Atomically reads the value of @v. 
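The new exports in microblaze_ksyms.c exist because copy_from_user()/copy_to_user() and the optimized string routines expand to assembly helpers; without EXPORT_SYMBOL() a loadable module that uses them fails at link time. A hedged sketch of such a consumer (the module below is hypothetical, and memmove is an assembly routine only when CONFIG_OPT_LIB_ASM is set):

#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/string.h>

static char scratch[64];

/* Only links as a module if __copy_tofrom_user and memmove are exported. */
static int __maybe_unused consume_user_buffer(const void __user *ubuf,
					      size_t len)
{
	if (len > sizeof(scratch))
		len = sizeof(scratch);
	if (copy_from_user(scratch, ubuf, len))
		return -EFAULT;
	memmove(scratch, scratch + 1, len - 1);
	return 0;
}

MODULE_LICENSE("GPL");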
*/ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) /* * atomic_set - set atomic variable @@ -410,7 +410,7 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u) * @v: pointer of type atomic64_t * */ -#define atomic64_read(v) ((v)->counter) +#define atomic64_read(v) (*(volatile long *)&(v)->counter) /* * atomic64_set - set atomic variable diff --git a/arch/mips/include/asm/i8253.h b/arch/mips/include/asm/i8253.h index 032ca73f181b..48bb82372994 100644 --- a/arch/mips/include/asm/i8253.h +++ b/arch/mips/include/asm/i8253.h @@ -12,7 +12,7 @@ #define PIT_CH0 0x40 #define PIT_CH2 0x42 -extern spinlock_t i8253_lock; +extern raw_spinlock_t i8253_lock; extern void setup_pit_timer(void); diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h index 49382d5e891a..c6e3c93ce7c7 100644 --- a/arch/mips/include/asm/mipsregs.h +++ b/arch/mips/include/asm/mipsregs.h @@ -135,6 +135,12 @@ #define FPU_CSR_COND7 0x80000000 /* $fcc7 */ /* + * Bits 18 - 20 of the FPU Status Register will be read as 0, + * and should be written as zero. + */ +#define FPU_CSR_RSVD 0x001c0000 + +/* * X the exception cause indicator * E the exception enable * S the sticky/flag bit @@ -161,7 +167,8 @@ #define FPU_CSR_UDF_S 0x00000008 #define FPU_CSR_INE_S 0x00000004 -/* rounding mode */ +/* Bits 0 and 1 of FPU Status Register specify the rounding mode */ +#define FPU_CSR_RM 0x00000003 #define FPU_CSR_RN 0x0 /* nearest */ #define FPU_CSR_RZ 0x1 /* towards zero */ #define FPU_CSR_RU 0x2 /* towards +Infinity */ diff --git a/arch/mips/kernel/i8253.c b/arch/mips/kernel/i8253.c index ed5c441615e4..94794062a177 100644 --- a/arch/mips/kernel/i8253.c +++ b/arch/mips/kernel/i8253.c @@ -15,7 +15,7 @@ #include <asm/io.h> #include <asm/time.h> -DEFINE_SPINLOCK(i8253_lock); +DEFINE_RAW_SPINLOCK(i8253_lock); EXPORT_SYMBOL(i8253_lock); /* @@ -26,7 +26,7 @@ EXPORT_SYMBOL(i8253_lock); static void init_pit_timer(enum clock_event_mode mode, struct clock_event_device *evt) { - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); switch(mode) { case CLOCK_EVT_MODE_PERIODIC: @@ -55,7 +55,7 @@ static void init_pit_timer(enum clock_event_mode mode, /* Nothing to do here */ break; } - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); } /* @@ -65,10 +65,10 @@ static void init_pit_timer(enum clock_event_mode mode, */ static int pit_next_event(unsigned long delta, struct clock_event_device *evt) { - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); outb_p(delta & 0xff , PIT_CH0); /* LSB */ outb(delta >> 8 , PIT_CH0); /* MSB */ - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); return 0; } @@ -137,7 +137,7 @@ static cycle_t pit_read(struct clocksource *cs) static int old_count; static u32 old_jifs; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); /* * Although our caller may have the read side of xtime_lock, * this is now a seqlock, and we are cheating in this routine @@ -183,7 +183,7 @@ static cycle_t pit_read(struct clocksource *cs) old_count = count; old_jifs = jifs; - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); count = (LATCH - 1) - count; diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S index 44337ba03717..a5297e2a353a 100644 --- a/arch/mips/kernel/scall64-n32.S +++ b/arch/mips/kernel/scall64-n32.S @@ -385,7 +385,7 @@ EXPORT(sysn32_call_table) PTR sys_fchmodat PTR sys_faccessat PTR compat_sys_pselect6 - PTR sys_ppoll /* 6265 */ + PTR 
compat_sys_ppoll /* 6265 */ PTR sys_unshare PTR sys_splice PTR sys_sync_file_range diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c index 8f2f8e9d8b21..f2338d1c0b48 100644 --- a/arch/mips/math-emu/cp1emu.c +++ b/arch/mips/math-emu/cp1emu.c @@ -78,6 +78,9 @@ DEFINE_PER_CPU(struct mips_fpu_emulator_stats, fpuemustats); #define FPCREG_RID 0 /* $0 = revision id */ #define FPCREG_CSR 31 /* $31 = csr */ +/* Determine rounding mode from the RM bits of the FCSR */ +#define modeindex(v) ((v) & FPU_CSR_RM) + /* Convert Mips rounding mode (0..3) to IEEE library modes. */ static const unsigned char ieee_rm[4] = { [FPU_CSR_RN] = IEEE754_RN, @@ -384,10 +387,14 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx) (void *) (xcp->cp0_epc), MIPSInst_RT(ir), value); #endif - value &= (FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03); - ctx->fcr31 &= ~(FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03); - /* convert to ieee library modes */ - ctx->fcr31 |= (value & ~0x3) | ieee_rm[value & 0x3]; + + /* + * Don't write reserved bits, + * and convert to ieee library modes + */ + ctx->fcr31 = (value & + ~(FPU_CSR_RSVD | FPU_CSR_RM)) | + ieee_rm[modeindex(value)]; } if ((ctx->fcr31 >> 5) & ctx->fcr31 & FPU_CSR_ALL_E) { return SIGFPE; diff --git a/arch/mips/oprofile/op_model_loongson2.c b/arch/mips/oprofile/op_model_loongson2.c index 29e2326b6257..fa3bf661ae29 100644 --- a/arch/mips/oprofile/op_model_loongson2.c +++ b/arch/mips/oprofile/op_model_loongson2.c @@ -122,7 +122,7 @@ static irqreturn_t loongson2_perfcount_handler(int irq, void *dev_id) */ /* Check whether the irq belongs to me */ - enabled = read_c0_perfcnt() & LOONGSON2_PERFCNT_INT_EN; + enabled = read_c0_perfctrl() & LOONGSON2_PERFCNT_INT_EN; if (!enabled) return IRQ_NONE; enabled = reg.cnt1_enabled | reg.cnt2_enabled; diff --git a/arch/mn10300/include/asm/atomic.h b/arch/mn10300/include/asm/atomic.h index 5bf5be9566de..e41222d6c2fd 100644 --- a/arch/mn10300/include/asm/atomic.h +++ b/arch/mn10300/include/asm/atomic.h @@ -31,7 +31,7 @@ * Atomically reads the value of @v. Note that the guaranteed * useful range of an atomic_t is only 24 bits. 
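The atomic_read() change repeated across several architectures in this merge, "(*(volatile int *)&(v)->counter)", forces the compiler to re-load the counter on every call instead of reusing a cached value. A hedged illustration of why that matters (the polling function below is hypothetical):

/*
 * Without the volatile cast, the load could be hoisted out of the loop
 * and the wait would spin forever on a stale register value; with it,
 * each iteration re-reads memory, much like ACCESS_ONCE().
 */
static int wait_for_flag(atomic_t *flag)
{
	while (atomic_read(flag) == 0)
		cpu_relax();
	return atomic_read(flag);
}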
*/ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) /** * atomic_set - set atomic variable diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 716634d1f546..f81955934aeb 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -189,7 +189,7 @@ static __inline__ void atomic_set(atomic_t *v, int i) static __inline__ int atomic_read(const atomic_t *v) { - return v->counter; + return (*(volatile int *)&(v)->counter); } /* exported interface */ @@ -286,7 +286,7 @@ atomic64_set(atomic64_t *v, s64 i) static __inline__ s64 atomic64_read(const atomic64_t *v) { - return v->counter; + return (*(volatile long *)&(v)->counter); } #define atomic64_add(i,v) ((void)(__atomic64_add_return( ((s64)(i)),(v)))) diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h index 9f4c9d4f5803..bd100fcf40d0 100644 --- a/arch/powerpc/include/asm/hw_irq.h +++ b/arch/powerpc/include/asm/hw_irq.h @@ -130,43 +130,5 @@ static inline int irqs_disabled_flags(unsigned long flags) */ struct irq_chip; -#ifdef CONFIG_PERF_EVENTS - -#ifdef CONFIG_PPC64 -static inline unsigned long test_perf_event_pending(void) -{ - unsigned long x; - - asm volatile("lbz %0,%1(13)" - : "=r" (x) - : "i" (offsetof(struct paca_struct, perf_event_pending))); - return x; -} - -static inline void set_perf_event_pending(void) -{ - asm volatile("stb %0,%1(13)" : : - "r" (1), - "i" (offsetof(struct paca_struct, perf_event_pending))); -} - -static inline void clear_perf_event_pending(void) -{ - asm volatile("stb %0,%1(13)" : : - "r" (0), - "i" (offsetof(struct paca_struct, perf_event_pending))); -} -#endif /* CONFIG_PPC64 */ - -#else /* CONFIG_PERF_EVENTS */ - -static inline unsigned long test_perf_event_pending(void) -{ - return 0; -} - -static inline void clear_perf_event_pending(void) {} -#endif /* CONFIG_PERF_EVENTS */ - #endif /* __KERNEL__ */ #endif /* _ASM_POWERPC_HW_IRQ_H */ diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 957ceb7059c5..c09138d150d4 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -133,7 +133,6 @@ int main(void) DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr)); DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled)); DEFINE(PACAHARDIRQEN, offsetof(struct paca_struct, hard_enabled)); - DEFINE(PACAPERFPEND, offsetof(struct paca_struct, perf_event_pending)); DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id)); #ifdef CONFIG_PPC_MM_SLICES DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct, diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c index 59c928564a03..4ff4da2c238b 100644 --- a/arch/powerpc/kernel/dma-swiotlb.c +++ b/arch/powerpc/kernel/dma-swiotlb.c @@ -1,7 +1,8 @@ /* * Contains routines needed to support swiotlb for ppc. * - * Copyright (C) 2009 Becky Bruce, Freescale Semiconductor + * Copyright (C) 2009-2010 Freescale Semiconductor, Inc. 
+ * Author: Becky Bruce * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the @@ -70,7 +71,7 @@ static int ppc_swiotlb_bus_notify(struct notifier_block *nb, sd->max_direct_dma_addr = 0; /* May need to bounce if the device can't address all of DRAM */ - if (dma_get_mask(dev) < lmb_end_of_DRAM()) + if ((dma_get_mask(dev) + 1) < lmb_end_of_DRAM()) set_dma_ops(dev, &swiotlb_dma_ops); return NOTIFY_DONE; diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 07109d843787..42e9d908914a 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -556,15 +556,6 @@ ALT_FW_FTR_SECTION_END_IFCLR(FW_FEATURE_ISERIES) 2: TRACE_AND_RESTORE_IRQ(r5); -#ifdef CONFIG_PERF_EVENTS - /* check paca->perf_event_pending if we're enabling ints */ - lbz r3,PACAPERFPEND(r13) - and. r3,r3,r5 - beq 27f - bl .perf_event_do_pending -27: -#endif /* CONFIG_PERF_EVENTS */ - /* extract EE bit and use it to restore paca->hard_enabled */ ld r3,_MSR(r1) rldicl r4,r3,49,63 /* r0 = (r3 >> 15) & 1 */ diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c index 64f6f2031c22..066bd31551d5 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -53,7 +53,6 @@ #include <linux/bootmem.h> #include <linux/pci.h> #include <linux/debugfs.h> -#include <linux/perf_event.h> #include <asm/uaccess.h> #include <asm/system.h> @@ -145,11 +144,6 @@ notrace void raw_local_irq_restore(unsigned long en) } #endif /* CONFIG_PPC_STD_MMU_64 */ - if (test_perf_event_pending()) { - clear_perf_event_pending(); - perf_event_do_pending(); - } - /* * if (get_paca()->hard_enabled) return; * But again we need to take care that gcc gets hard_enabled directly diff --git a/arch/powerpc/kernel/perf_event.c b/arch/powerpc/kernel/perf_event.c index 08460a2e9f41..43b83c35cf54 100644 --- a/arch/powerpc/kernel/perf_event.c +++ b/arch/powerpc/kernel/perf_event.c @@ -35,6 +35,9 @@ struct cpu_hw_events { u64 alternatives[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; unsigned long amasks[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; unsigned long avalues[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; + + unsigned int group_flag; + int n_txn_start; }; DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events); @@ -718,66 +721,6 @@ static int collect_events(struct perf_event *group, int max_count, return n; } -static void event_sched_in(struct perf_event *event) -{ - event->state = PERF_EVENT_STATE_ACTIVE; - event->oncpu = smp_processor_id(); - event->tstamp_running += event->ctx->time - event->tstamp_stopped; - if (is_software_event(event)) - event->pmu->enable(event); -} - -/* - * Called to enable a whole group of events. - * Returns 1 if the group was enabled, or -EAGAIN if it could not be. - * Assumes the caller has disabled interrupts and has - * frozen the PMU with hw_perf_save_disable. 
- */ -int hw_perf_group_sched_in(struct perf_event *group_leader, - struct perf_cpu_context *cpuctx, - struct perf_event_context *ctx) -{ - struct cpu_hw_events *cpuhw; - long i, n, n0; - struct perf_event *sub; - - if (!ppmu) - return 0; - cpuhw = &__get_cpu_var(cpu_hw_events); - n0 = cpuhw->n_events; - n = collect_events(group_leader, ppmu->n_counter - n0, - &cpuhw->event[n0], &cpuhw->events[n0], - &cpuhw->flags[n0]); - if (n < 0) - return -EAGAIN; - if (check_excludes(cpuhw->event, cpuhw->flags, n0, n)) - return -EAGAIN; - i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n + n0); - if (i < 0) - return -EAGAIN; - cpuhw->n_events = n0 + n; - cpuhw->n_added += n; - - /* - * OK, this group can go on; update event states etc., - * and enable any software events - */ - for (i = n0; i < n0 + n; ++i) - cpuhw->event[i]->hw.config = cpuhw->events[i]; - cpuctx->active_oncpu += n; - n = 1; - event_sched_in(group_leader); - list_for_each_entry(sub, &group_leader->sibling_list, group_entry) { - if (sub->state != PERF_EVENT_STATE_OFF) { - event_sched_in(sub); - ++n; - } - } - ctx->nr_active += n; - - return 1; -} - /* * Add a event to the PMU. * If all events are not already frozen, then we disable and @@ -805,12 +748,22 @@ static int power_pmu_enable(struct perf_event *event) cpuhw->event[n0] = event; cpuhw->events[n0] = event->hw.config; cpuhw->flags[n0] = event->hw.event_base; + + /* + * If group events scheduling transaction was started, + * skip the schedulability test here, it will be peformed + * at commit time(->commit_txn) as a whole + */ + if (cpuhw->group_flag & PERF_EVENT_TXN_STARTED) + goto nocheck; + if (check_excludes(cpuhw->event, cpuhw->flags, n0, 1)) goto out; if (power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n0 + 1)) goto out; - event->hw.config = cpuhw->events[n0]; + +nocheck: ++cpuhw->n_events; ++cpuhw->n_added; @@ -896,11 +849,65 @@ static void power_pmu_unthrottle(struct perf_event *event) local_irq_restore(flags); } +/* + * Start group events scheduling transaction + * Set the flag to make pmu::enable() not perform the + * schedulability test, it will be performed at commit time + */ +void power_pmu_start_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); + + cpuhw->group_flag |= PERF_EVENT_TXN_STARTED; + cpuhw->n_txn_start = cpuhw->n_events; +} + +/* + * Stop group events scheduling transaction + * Clear the flag and pmu::enable() will perform the + * schedulability test. 
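The start_txn/commit_txn/cancel_txn callbacks added here let the core add every event of a group first and run the constraint check once. A hedged sketch of the expected caller protocol (illustrative only; the real group-scheduling logic lives in the generic perf core, and unwinding of already-enabled events is omitted):

static int example_group_sched_in(struct pmu *pmu, struct perf_event *leader)
{
	struct perf_event *sub;

	pmu->start_txn(pmu);		/* defer the schedulability test */

	if (pmu->enable(leader))
		goto fail;
	list_for_each_entry(sub, &leader->sibling_list, group_entry)
		if (pmu->enable(sub))
			goto fail;

	if (!pmu->commit_txn(pmu))	/* one constraint check for the whole group */
		return 0;
fail:
	pmu->cancel_txn(pmu);		/* also reached when commit_txn fails */
	return -EAGAIN;
}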
+ */ +void power_pmu_cancel_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); + + cpuhw->group_flag &= ~PERF_EVENT_TXN_STARTED; +} + +/* + * Commit group events scheduling transaction + * Perform the group schedulability test as a whole + * Return 0 if success + */ +int power_pmu_commit_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuhw; + long i, n; + + if (!ppmu) + return -EAGAIN; + cpuhw = &__get_cpu_var(cpu_hw_events); + n = cpuhw->n_events; + if (check_excludes(cpuhw->event, cpuhw->flags, 0, n)) + return -EAGAIN; + i = power_check_constraints(cpuhw, cpuhw->events, cpuhw->flags, n); + if (i < 0) + return -EAGAIN; + + for (i = cpuhw->n_txn_start; i < n; ++i) + cpuhw->event[i]->hw.config = cpuhw->events[i]; + + return 0; +} + struct pmu power_pmu = { .enable = power_pmu_enable, .disable = power_pmu_disable, .read = power_pmu_read, .unthrottle = power_pmu_unthrottle, + .start_txn = power_pmu_start_txn, + .cancel_txn = power_pmu_cancel_txn, + .commit_txn = power_pmu_commit_txn, }; /* diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 1b16b9a3e49a..0441bbdadbd1 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -532,25 +532,60 @@ void __init iSeries_time_init_early(void) } #endif /* CONFIG_PPC_ISERIES */ -#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_PPC32) -DEFINE_PER_CPU(u8, perf_event_pending); +#ifdef CONFIG_PERF_EVENTS -void set_perf_event_pending(void) +/* + * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable... + */ +#ifdef CONFIG_PPC64 +static inline unsigned long test_perf_event_pending(void) { - get_cpu_var(perf_event_pending) = 1; - set_dec(1); - put_cpu_var(perf_event_pending); + unsigned long x; + + asm volatile("lbz %0,%1(13)" + : "=r" (x) + : "i" (offsetof(struct paca_struct, perf_event_pending))); + return x; } +static inline void set_perf_event_pending_flag(void) +{ + asm volatile("stb %0,%1(13)" : : + "r" (1), + "i" (offsetof(struct paca_struct, perf_event_pending))); +} + +static inline void clear_perf_event_pending(void) +{ + asm volatile("stb %0,%1(13)" : : + "r" (0), + "i" (offsetof(struct paca_struct, perf_event_pending))); +} + +#else /* 32-bit */ + +DEFINE_PER_CPU(u8, perf_event_pending); + +#define set_perf_event_pending_flag() __get_cpu_var(perf_event_pending) = 1 #define test_perf_event_pending() __get_cpu_var(perf_event_pending) #define clear_perf_event_pending() __get_cpu_var(perf_event_pending) = 0 -#else /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */ +#endif /* 32 vs 64 bit */ + +void set_perf_event_pending(void) +{ + preempt_disable(); + set_perf_event_pending_flag(); + set_dec(1); + preempt_enable(); +} + +#else /* CONFIG_PERF_EVENTS */ #define test_perf_event_pending() 0 #define clear_perf_event_pending() -#endif /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */ +#endif /* CONFIG_PERF_EVENTS */ /* * For iSeries shared processors, we have to let the hypervisor @@ -582,10 +617,6 @@ void timer_interrupt(struct pt_regs * regs) set_dec(DECREMENTER_MAX); #ifdef CONFIG_PPC32 - if (test_perf_event_pending()) { - clear_perf_event_pending(); - perf_event_do_pending(); - } if (atomic_read(&ppc_n_lost_interrupts) != 0) do_IRQ(regs); #endif @@ -604,6 +635,11 @@ void timer_interrupt(struct pt_regs * regs) calculate_steal_time(); + if (test_perf_event_pending()) { + clear_perf_event_pending(); + perf_event_do_pending(); + } + #ifdef CONFIG_PPC_ISERIES if (firmware_has_feature(FW_FEATURE_ISERIES)) get_lppaca()->int_dword.fields.decr_int = 0; diff --git 
a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c index 2570fcc7665d..812312542e50 100644 --- a/arch/powerpc/kvm/44x_tlb.c +++ b/arch/powerpc/kvm/44x_tlb.c @@ -440,7 +440,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws) unsigned int gtlb_index; gtlb_index = kvmppc_get_gpr(vcpu, ra); - if (gtlb_index > KVM44x_GUEST_TLB_SIZE) { + if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) { printk("%s: index %d\n", __func__, gtlb_index); kvmppc_dump_vcpu(vcpu); return EMULATE_FAIL; diff --git a/arch/s390/kernel/head31.S b/arch/s390/kernel/head31.S index 1bbcc499d455..b8f8dc126102 100644 --- a/arch/s390/kernel/head31.S +++ b/arch/s390/kernel/head31.S @@ -82,7 +82,7 @@ startup_continue: _ehead: #ifdef CONFIG_SHARED_KERNEL - .org 0x100000 + .org 0x100000 - 0x11000 # head.o ends at 0x11000 #endif # diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S index 1f70970de0aa..cdef68717416 100644 --- a/arch/s390/kernel/head64.S +++ b/arch/s390/kernel/head64.S @@ -80,7 +80,7 @@ startup_continue: _ehead: #ifdef CONFIG_SHARED_KERNEL - .org 0x100000 + .org 0x100000 - 0x11000 # head.o ends at 0x11000 #endif # diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c index 33fdc5a79764..9f654da4cecc 100644 --- a/arch/s390/kernel/ptrace.c +++ b/arch/s390/kernel/ptrace.c @@ -640,7 +640,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, asmlinkage long do_syscall_trace_enter(struct pt_regs *regs) { - long ret; + long ret = 0; /* Do the secure computing check first. */ secure_computing(regs->gprs[2]); @@ -649,7 +649,6 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs) * The sysc_tracesys code in entry.S stored the system * call number to gprs[2]. */ - ret = regs->gprs[2]; if (test_thread_flag(TIF_SYSCALL_TRACE) && (tracehook_report_syscall_entry(regs) || regs->gprs[2] >= NR_syscalls)) { @@ -671,7 +670,7 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs) regs->gprs[2], regs->orig_gpr2, regs->gprs[3], regs->gprs[4], regs->gprs[5]); - return ret; + return ret ?: regs->gprs[2]; } asmlinkage void do_syscall_trace_exit(struct pt_regs *regs) diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c index d906bf19c14a..a2163c95eb98 100644 --- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -391,7 +391,6 @@ static void __init time_init_wq(void) if (time_sync_wq) return; time_sync_wq = create_singlethread_workqueue("timesync"); - stop_machine_create(); } /* diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig index 8d90564c2bcf..e6d8ab5cfa9d 100644 --- a/arch/sh/Kconfig +++ b/arch/sh/Kconfig @@ -44,6 +44,7 @@ config SUPERH32 select HAVE_FUNCTION_GRAPH_TRACER select HAVE_ARCH_KGDB select HAVE_HW_BREAKPOINT + select HAVE_MIXED_BREAKPOINTS_REGS select PERF_EVENTS if HAVE_HW_BREAKPOINT select ARCH_HIBERNATION_POSSIBLE if MMU diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index 275a448ae8c2..c7983124d99d 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -13,7 +13,7 @@ #define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_set(v,i) ((v)->counter = (i)) #if defined(CONFIG_GUSA_RB) diff --git a/arch/sh/include/asm/hw_breakpoint.h b/arch/sh/include/asm/hw_breakpoint.h index 965dd780d51b..e14cad96798f 100644 --- a/arch/sh/include/asm/hw_breakpoint.h +++ b/arch/sh/include/asm/hw_breakpoint.h @@ -46,10 +46,14 @@ struct pmu; /* Maximum number of UBC channels */ #define HBP_NUM 2 
+static inline int hw_breakpoint_slots(int type) +{ + return HBP_NUM; +} + /* arch/sh/kernel/hw_breakpoint.c */ -extern int arch_check_va_in_userspace(unsigned long va, u16 hbp_len); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp, - struct task_struct *tsk); +extern int arch_check_bp_in_kernelspace(struct perf_event *bp); +extern int arch_validate_hwbkpt_settings(struct perf_event *bp); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/sh/kernel/hw_breakpoint.c b/arch/sh/kernel/hw_breakpoint.c index 675eea7785d9..1f2cf6229862 100644 --- a/arch/sh/kernel/hw_breakpoint.c +++ b/arch/sh/kernel/hw_breakpoint.c @@ -120,25 +120,16 @@ static int get_hbp_len(u16 hbp_len) } /* - * Check for virtual address in user space. - */ -int arch_check_va_in_userspace(unsigned long va, u16 hbp_len) -{ - unsigned int len; - - len = get_hbp_len(hbp_len); - - return (va <= TASK_SIZE - len); -} - -/* * Check for virtual address in kernel space. */ -static int arch_check_va_in_kernelspace(unsigned long va, u8 hbp_len) +int arch_check_bp_in_kernelspace(struct perf_event *bp) { unsigned int len; + unsigned long va; + struct arch_hw_breakpoint *info = counter_arch_bp(bp); - len = get_hbp_len(hbp_len); + va = info->address; + len = get_hbp_len(info->len); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -226,8 +217,7 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings */ -int arch_validate_hwbkpt_settings(struct perf_event *bp, - struct task_struct *tsk) +int arch_validate_hwbkpt_settings(struct perf_event *bp) { struct arch_hw_breakpoint *info = counter_arch_bp(bp); unsigned int align; @@ -270,15 +260,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp, if (info->address & align) return -EINVAL; - /* Check that the virtual address is in the proper range */ - if (tsk) { - if (!arch_check_va_in_userspace(info->address, info->len)) - return -EFAULT; - } else { - if (!arch_check_va_in_kernelspace(info->address, info->len)) - return -EFAULT; - } - return 0; } @@ -363,8 +344,7 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args) perf_bp_event(bp, args->regs); /* Deliver the signal to userspace */ - if (arch_check_va_in_userspace(bp->attr.bp_addr, - bp->attr.bp_len)) { + if (!arch_check_bp_in_kernelspace(bp)) { siginfo_t info; info.si_signo = args->signr; diff --git a/arch/sh/kernel/ptrace_32.c b/arch/sh/kernel/ptrace_32.c index 7759a9a93211..d4104ce9fe53 100644 --- a/arch/sh/kernel/ptrace_32.c +++ b/arch/sh/kernel/ptrace_32.c @@ -85,7 +85,7 @@ static int set_single_step(struct task_struct *tsk, unsigned long addr) bp = thread->ptrace_bps[0]; if (!bp) { - hw_breakpoint_init(&attr); + ptrace_breakpoint_init(&attr); attr.bp_addr = addr; attr.bp_len = HW_BREAKPOINT_LEN_2; diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index f0d343c3b956..7ae128b19d3f 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -25,7 +25,7 @@ extern int atomic_cmpxchg(atomic_t *, int, int); extern int atomic_add_unless(atomic_t *, int, int); extern void atomic_set(atomic_t *, int); -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) #define atomic_add(i, v) ((void)__atomic_add_return( (int)(i), (v))) #define atomic_sub(i, v) ((void)__atomic_add_return(-(int)(i), (v))) diff --git a/arch/sparc/include/asm/atomic_64.h 
b/arch/sparc/include/asm/atomic_64.h index f2e48009989e..2050ca02c423 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -13,8 +13,8 @@ #define ATOMIC_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) } -#define atomic_read(v) ((v)->counter) -#define atomic64_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) +#define atomic64_read(v) (*(volatile long *)&(v)->counter) #define atomic_set(v, i) (((v)->counter) = i) #define atomic64_set(v, i) (((v)->counter) = i) diff --git a/arch/sparc/include/asm/bitops_64.h b/arch/sparc/include/asm/bitops_64.h index e72ac9cdfb98..766121a67a24 100644 --- a/arch/sparc/include/asm/bitops_64.h +++ b/arch/sparc/include/asm/bitops_64.h @@ -44,7 +44,7 @@ extern void change_bit(unsigned long nr, volatile unsigned long *addr); #ifdef ULTRA_HAS_POPULATION_COUNT -static inline unsigned int hweight64(unsigned long w) +static inline unsigned int __arch_hweight64(unsigned long w) { unsigned int res; @@ -52,7 +52,7 @@ static inline unsigned int hweight64(unsigned long w) return res; } -static inline unsigned int hweight32(unsigned int w) +static inline unsigned int __arch_hweight32(unsigned int w) { unsigned int res; @@ -60,7 +60,7 @@ static inline unsigned int hweight32(unsigned int w) return res; } -static inline unsigned int hweight16(unsigned int w) +static inline unsigned int __arch_hweight16(unsigned int w) { unsigned int res; @@ -68,7 +68,7 @@ static inline unsigned int hweight16(unsigned int w) return res; } -static inline unsigned int hweight8(unsigned int w) +static inline unsigned int __arch_hweight8(unsigned int w) { unsigned int res; @@ -78,9 +78,10 @@ static inline unsigned int hweight8(unsigned int w) #else -#include <asm-generic/bitops/hweight.h> +#include <asm-generic/bitops/arch_hweight.h> #endif +#include <asm-generic/bitops/const_hweight.h> #include <asm-generic/bitops/lock.h> #endif /* __KERNEL__ */ diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9458685902bd..a2d3a5fbeeda 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -53,11 +53,15 @@ config X86 select HAVE_KERNEL_LZMA select HAVE_KERNEL_LZO select HAVE_HW_BREAKPOINT + select HAVE_MIXED_BREAKPOINTS_REGS select PERF_EVENTS select ANON_INODES select HAVE_ARCH_KMEMCHECK select HAVE_USER_RETURN_NOTIFIER +config INSTRUCTION_DECODER + def_bool (KPROBES || PERF_EVENTS) + config OUTPUT_FORMAT string default "elf32-i386" if X86_32 @@ -197,20 +201,17 @@ config HAVE_INTEL_TXT # Use the generic interrupt handling code in kernel/irq/: config GENERIC_HARDIRQS - bool - default y + def_bool y config GENERIC_HARDIRQS_NO__DO_IRQ def_bool y config GENERIC_IRQ_PROBE - bool - default y + def_bool y config GENERIC_PENDING_IRQ - bool + def_bool y depends on GENERIC_HARDIRQS && SMP - default y config USE_GENERIC_SMP_HELPERS def_bool y @@ -225,19 +226,22 @@ config X86_64_SMP depends on X86_64 && SMP config X86_HT - bool + def_bool y depends on SMP - default y config X86_TRAMPOLINE - bool + def_bool y depends on SMP || (64BIT && ACPI_SLEEP) - default y config X86_32_LAZY_GS def_bool y depends on X86_32 && !CC_STACKPROTECTOR +config ARCH_HWEIGHT_CFLAGS + string + default "-fcall-saved-ecx -fcall-saved-edx" if X86_32 + default "-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11" if X86_64 + config KTIME_SCALAR def_bool X86_32 source "init/Kconfig" @@ -447,7 +451,7 @@ config X86_NUMAQ firmware with - send email to <Martin.Bligh@us.ibm.com>. 
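The sparc __arch_hweight*() renames and the new x86 ARCH_HWEIGHT_CFLAGS feed the same split: architectures supply a fast population-count path, while asm-generic/bitops/const_hweight.h folds constant arguments at compile time. For reference, a hedged portable version of what every hweight32() variant computes (not code from this patch):

static inline unsigned int sw_hweight32(unsigned int w)
{
	unsigned int res = w - ((w >> 1) & 0x55555555);		/* per 2 bits */

	res = (res & 0x33333333) + ((res >> 2) & 0x33333333);	/* per 4 bits */
	res = (res + (res >> 4)) & 0x0f0f0f0f;			/* per byte */
	return (res * 0x01010101) >> 24;			/* sum the bytes */
}

/* e.g. sw_hweight32(0xf0f0f0f0) == 16, sw_hweight32(0) == 0 */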
config X86_SUPPORTS_MEMORY_FAILURE - bool + def_bool y # MCE code calls memory_failure(): depends on X86_MCE # On 32-bit this adds too big of NODES_SHIFT and we run out of page flags: @@ -455,7 +459,6 @@ config X86_SUPPORTS_MEMORY_FAILURE # On 32-bit SPARSEMEM adds too big of SECTIONS_WIDTH: depends on X86_64 || !SPARSEMEM select ARCH_SUPPORTS_MEMORY_FAILURE - default y config X86_VISWS bool "SGI 320/540 (Visual Workstation)" @@ -570,7 +573,6 @@ config PARAVIRT_SPINLOCKS config PARAVIRT_CLOCK bool - default n endif @@ -749,7 +751,6 @@ config MAXSMP bool "Configure Maximum number of SMP Processors and NUMA Nodes" depends on X86_64 && SMP && DEBUG_KERNEL && EXPERIMENTAL select CPUMASK_OFFSTACK - default n ---help--- Configure maximum number of CPUS and NUMA Nodes for this architecture. If unsure, say N. @@ -829,7 +830,6 @@ config X86_VISWS_APIC config X86_REROUTE_FOR_BROKEN_BOOT_IRQS bool "Reroute for broken boot IRQs" - default n depends on X86_IO_APIC ---help--- This option enables a workaround that fixes a source of @@ -876,9 +876,8 @@ config X86_MCE_AMD the DRAM Error Threshold. config X86_ANCIENT_MCE - def_bool n + bool "Support for old Pentium 5 / WinChip machine checks" depends on X86_32 && X86_MCE - prompt "Support for old Pentium 5 / WinChip machine checks" ---help--- Include support for machine check handling on old Pentium 5 or WinChip systems. These typically need to be enabled explicitely on the command @@ -886,8 +885,7 @@ config X86_ANCIENT_MCE config X86_MCE_THRESHOLD depends on X86_MCE_AMD || X86_MCE_INTEL - bool - default y + def_bool y config X86_MCE_INJECT depends on X86_MCE @@ -1026,8 +1024,8 @@ config X86_CPUID choice prompt "High Memory Support" - default HIGHMEM4G if !X86_NUMAQ default HIGHMEM64G if X86_NUMAQ + default HIGHMEM4G depends on X86_32 config NOHIGHMEM @@ -1285,7 +1283,7 @@ source "mm/Kconfig" config HIGHPTE bool "Allocate 3rd-level pagetables from highmem" - depends on X86_32 && (HIGHMEM4G || HIGHMEM64G) + depends on HIGHMEM ---help--- The VM uses one page table entry for each page of physical memory. For systems with a lot of RAM, this can be wasteful of precious @@ -1369,8 +1367,7 @@ config MATH_EMULATION kernel, it won't hurt. config MTRR - bool - default y + def_bool y prompt "MTRR (Memory Type Range Register) support" if EMBEDDED ---help--- On Intel P6 family processors (Pentium Pro, Pentium II and later) @@ -1436,8 +1433,7 @@ config MTRR_SANITIZER_SPARE_REG_NR_DEFAULT mtrr_spare_reg_nr=N on the kernel command line. config X86_PAT - bool - default y + def_bool y prompt "x86 PAT support" if EMBEDDED depends on MTRR ---help--- @@ -1605,8 +1601,7 @@ config X86_NEED_RELOCS depends on X86_32 && RELOCATABLE config PHYSICAL_ALIGN - hex - prompt "Alignment value to which kernel should be aligned" if X86_32 + hex "Alignment value to which kernel should be aligned" if X86_32 default "0x1000000" range 0x2000 0x1000000 ---help--- @@ -1653,7 +1648,6 @@ config COMPAT_VDSO config CMDLINE_BOOL bool "Built-in kernel command line" - default n ---help--- Allow for specifying boot arguments to the kernel at build time. On some systems (e.g. 
embedded ones), it is @@ -1687,7 +1681,6 @@ config CMDLINE config CMDLINE_OVERRIDE bool "Built-in command line overrides boot loader arguments" - default n depends on CMDLINE_BOOL ---help--- Set this option to 'Y' to have the kernel ignore the boot loader @@ -1723,8 +1716,7 @@ source "drivers/acpi/Kconfig" source "drivers/sfi/Kconfig" config X86_APM_BOOT - bool - default y + def_bool y depends on APM || APM_MODULE menuconfig APM @@ -1953,8 +1945,7 @@ config DMAR_DEFAULT_ON experimental. config DMAR_BROKEN_GFX_WA - def_bool n - prompt "Workaround broken graphics drivers (going away soon)" + bool "Workaround broken graphics drivers (going away soon)" depends on DMAR && BROKEN ---help--- Current Graphics drivers tend to use physical address @@ -2052,7 +2043,6 @@ config SCx200HR_TIMER config OLPC bool "One Laptop Per Child support" select GPIOLIB - default n ---help--- Add support for detecting the unique features of the OLPC XO hardware. diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index a19829374e6a..2ac9069890cd 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -338,6 +338,10 @@ config X86_F00F_BUG def_bool y depends on M586MMX || M586TSC || M586 || M486 || M386 +config X86_INVD_BUG + def_bool y + depends on M486 || M386 + config X86_WP_WORKS_OK def_bool y depends on !M386 @@ -502,23 +506,3 @@ config CPU_SUP_UMC_32 CPU might render the kernel unbootable. If unsure, say N. - -config X86_DS - def_bool X86_PTRACE_BTS - depends on X86_DEBUGCTLMSR - select HAVE_HW_BRANCH_TRACER - -config X86_PTRACE_BTS - bool "Branch Trace Store" - default y - depends on X86_DEBUGCTLMSR - depends on BROKEN - ---help--- - This adds a ptrace interface to the hardware's branch trace store. - - Debuggers may use it to collect an execution trace of the debugged - application in order to answer the question 'how did I get here?'. - Debuggers may trace user mode as well as kernel mode. - - Say Y unless there is no application development on this machine - and you want to save a small amount of code size. diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug index bc01e3ebfeb2..75085080b63e 100644 --- a/arch/x86/Kconfig.debug +++ b/arch/x86/Kconfig.debug @@ -45,7 +45,6 @@ config EARLY_PRINTK config EARLY_PRINTK_DBGP bool "Early printk via EHCI debug port" - default n depends on EARLY_PRINTK && PCI ---help--- Write kernel log output directly into the EHCI debug port. @@ -76,7 +75,6 @@ config DEBUG_PER_CPU_MAPS bool "Debug access to per_cpu maps" depends on DEBUG_KERNEL depends on SMP - default n ---help--- Say Y to verify that the per_cpu map being accessed has been setup. Adds a fair amount of code to kernel memory @@ -174,15 +172,6 @@ config IOMMU_LEAK Add a simple leak tracer to the IOMMU code. This is useful when you are debugging a buggy device driver that leaks IOMMU mappings. -config X86_DS_SELFTEST - bool "DS selftest" - default y - depends on DEBUG_KERNEL - depends on X86_DS - ---help--- - Perform Debug Store selftests at boot time. - If in doubt, say "N". - config HAVE_MMIOTRACE_SUPPORT def_bool y diff --git a/arch/x86/Makefile b/arch/x86/Makefile index 0a43dc515e4c..8aa1b59b9074 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -95,8 +95,9 @@ sp-$(CONFIG_X86_64) := rsp cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1) # is .cfi_signal_frame supported too? 
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1) -KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) -KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) +cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1) +KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) +KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) LDFLAGS := -m elf_$(UTS_MACHINE) diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h index b97f786a48d5..a63a68be1cce 100644 --- a/arch/x86/include/asm/alternative-asm.h +++ b/arch/x86/include/asm/alternative-asm.h @@ -6,8 +6,8 @@ .macro LOCK_PREFIX 1: lock .section .smp_locks,"a" - _ASM_ALIGN - _ASM_PTR 1b + .balign 4 + .long 1b - . .previous .endm #else diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index b09ec55650b3..03b6bb5394a0 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -28,20 +28,20 @@ */ #ifdef CONFIG_SMP -#define LOCK_PREFIX \ +#define LOCK_PREFIX_HERE \ ".section .smp_locks,\"a\"\n" \ - _ASM_ALIGN "\n" \ - _ASM_PTR "661f\n" /* address */ \ + ".balign 4\n" \ + ".long 671f - .\n" /* offset */ \ ".previous\n" \ - "661:\n\tlock; " + "671:" + +#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; " #else /* ! CONFIG_SMP */ +#define LOCK_PREFIX_HERE "" #define LOCK_PREFIX "" #endif -/* This must be included *after* the definition of LOCK_PREFIX */ -#include <asm/cpufeature.h> - struct alt_instr { u8 *instr; /* original instruction */ u8 *replacement; @@ -96,6 +96,12 @@ static inline int alternatives_text_reserved(void *start, void *end) ".previous" /* + * This must be included *after* the definition of ALTERNATIVE due to + * <asm/arch_hweight.h> + */ +#include <asm/cpufeature.h> + +/* * Alternative instructions for different CPU types or capabilities. 
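The .smp_locks change above stores each entry as a 32-bit self-relative offset (".long 1b - .") rather than an absolute pointer, which halves the table on 64-bit and keeps it valid regardless of where the kernel is loaded. A hedged sketch of how such an entry is resolved back to an address when the lock prefixes are patched (assumed helper, not the exact alternatives.c code):

static u8 *smp_lock_target(const s32 *entry)
{
	/* entry holds "target - &entry", so adding &entry back yields target */
	return (u8 *)entry + *entry;
}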
* * This allows to use optimized instructions even on generic binary diff --git a/arch/x86/include/asm/amd_iommu_types.h b/arch/x86/include/asm/amd_iommu_types.h index 86a0ff0aeac7..7014e88bc779 100644 --- a/arch/x86/include/asm/amd_iommu_types.h +++ b/arch/x86/include/asm/amd_iommu_types.h @@ -174,6 +174,40 @@ (~((1ULL << (12 + ((lvl) * 9))) - 1))) #define PM_ALIGNED(lvl, addr) ((PM_MAP_MASK(lvl) & (addr)) == (addr)) +/* + * Returns the page table level to use for a given page size + * Pagesize is expected to be a power-of-two + */ +#define PAGE_SIZE_LEVEL(pagesize) \ + ((__ffs(pagesize) - 12) / 9) +/* + * Returns the number of ptes to use for a given page size + * Pagesize is expected to be a power-of-two + */ +#define PAGE_SIZE_PTE_COUNT(pagesize) \ + (1ULL << ((__ffs(pagesize) - 12) % 9)) + +/* + * Aligns a given io-virtual address to a given page size + * Pagesize is expected to be a power-of-two + */ +#define PAGE_SIZE_ALIGN(address, pagesize) \ + ((address) & ~((pagesize) - 1)) +/* + * Creates an IOMMU PTE for an address an a given pagesize + * The PTE has no permission bits set + * Pagesize is expected to be a power-of-two larger than 4096 + */ +#define PAGE_SIZE_PTE(address, pagesize) \ + (((address) | ((pagesize) - 1)) & \ + (~(pagesize >> 1)) & PM_ADDR_MASK) + +/* + * Takes a PTE value with mode=0x07 and returns the page size it maps + */ +#define PTE_PAGE_SIZE(pte) \ + (1ULL << (1 + ffz(((pte) | 0xfffULL)))) + #define IOMMU_PTE_P (1ULL << 0) #define IOMMU_PTE_TV (1ULL << 1) #define IOMMU_PTE_U (1ULL << 59) diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h index b4ac2cdcb64f..1fa03e04ae44 100644 --- a/arch/x86/include/asm/apic.h +++ b/arch/x86/include/asm/apic.h @@ -373,6 +373,7 @@ extern atomic_t init_deasserted; extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip); #endif +#ifdef CONFIG_X86_LOCAL_APIC static inline u32 apic_read(u32 reg) { return apic->read(reg); @@ -403,10 +404,19 @@ static inline u32 safe_apic_wait_icr_idle(void) return apic->safe_wait_icr_idle(); } +#else /* CONFIG_X86_LOCAL_APIC */ + +static inline u32 apic_read(u32 reg) { return 0; } +static inline void apic_write(u32 reg, u32 val) { } +static inline u64 apic_icr_read(void) { return 0; } +static inline void apic_icr_write(u32 low, u32 high) { } +static inline void apic_wait_icr_idle(void) { } +static inline u32 safe_apic_wait_icr_idle(void) { return 0; } + +#endif /* CONFIG_X86_LOCAL_APIC */ static inline void ack_APIC_irq(void) { -#ifdef CONFIG_X86_LOCAL_APIC /* * ack_APIC_irq() actually gets compiled as a single instruction * ... yummie. 
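The new AMD IOMMU macros encode an arbitrary power-of-two mapping size as a page-table level plus a count of identical PTEs at that level. A hedged worked example of the arithmetic (assuming the stated power-of-two sizes):

/*
 *   4 KiB  (1 << 12): PAGE_SIZE_LEVEL = (12-12)/9 = 0, PAGE_SIZE_PTE_COUNT = 1 << 0 = 1
 *   32 KiB (1 << 15): PAGE_SIZE_LEVEL = (15-12)/9 = 0, PAGE_SIZE_PTE_COUNT = 1 << 3 = 8
 *   2 MiB  (1 << 21): PAGE_SIZE_LEVEL = (21-12)/9 = 1, PAGE_SIZE_PTE_COUNT = 1 << 0 = 1
 *   1 GiB  (1 << 30): PAGE_SIZE_LEVEL = (30-12)/9 = 2, PAGE_SIZE_PTE_COUNT = 1 << 0 = 1
 *
 * "Odd" sizes such as 32 KiB are expressed by replicating eight identical
 * level-0 PTEs; the natural sizes map to a single PTE one level up.
 */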
@@ -414,7 +424,6 @@ static inline void ack_APIC_irq(void) /* Docs say use 0 for future compatibility */ apic_write(APIC_EOI, 0); -#endif } static inline unsigned default_get_apic_id(unsigned long x) diff --git a/arch/x86/include/asm/arch_hweight.h b/arch/x86/include/asm/arch_hweight.h new file mode 100644 index 000000000000..9686c3d9ff73 --- /dev/null +++ b/arch/x86/include/asm/arch_hweight.h @@ -0,0 +1,61 @@ +#ifndef _ASM_X86_HWEIGHT_H +#define _ASM_X86_HWEIGHT_H + +#ifdef CONFIG_64BIT +/* popcnt %edi, %eax -- redundant REX prefix for alignment */ +#define POPCNT32 ".byte 0xf3,0x40,0x0f,0xb8,0xc7" +/* popcnt %rdi, %rax */ +#define POPCNT64 ".byte 0xf3,0x48,0x0f,0xb8,0xc7" +#define REG_IN "D" +#define REG_OUT "a" +#else +/* popcnt %eax, %eax */ +#define POPCNT32 ".byte 0xf3,0x0f,0xb8,0xc0" +#define REG_IN "a" +#define REG_OUT "a" +#endif + +/* + * __sw_hweightXX are called from within the alternatives below + * and callee-clobbered registers need to be taken care of. See + * ARCH_HWEIGHT_CFLAGS in <arch/x86/Kconfig> for the respective + * compiler switches. + */ +static inline unsigned int __arch_hweight32(unsigned int w) +{ + unsigned int res = 0; + + asm (ALTERNATIVE("call __sw_hweight32", POPCNT32, X86_FEATURE_POPCNT) + : "="REG_OUT (res) + : REG_IN (w)); + + return res; +} + +static inline unsigned int __arch_hweight16(unsigned int w) +{ + return __arch_hweight32(w & 0xffff); +} + +static inline unsigned int __arch_hweight8(unsigned int w) +{ + return __arch_hweight32(w & 0xff); +} + +static inline unsigned long __arch_hweight64(__u64 w) +{ + unsigned long res = 0; + +#ifdef CONFIG_X86_32 + return __arch_hweight32((u32)w) + + __arch_hweight32((u32)(w >> 32)); +#else + asm (ALTERNATIVE("call __sw_hweight64", POPCNT64, X86_FEATURE_POPCNT) + : "="REG_OUT (res) + : REG_IN (w)); +#endif /* CONFIG_X86_32 */ + + return res; +} + +#endif diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 8f8217b9bdac..952a826ac4e5 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -22,7 +22,7 @@ */ static inline int atomic_read(const atomic_t *v) { - return v->counter; + return (*(volatile int *)&(v)->counter); } /** @@ -246,6 +246,29 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u) #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) +/* + * atomic_dec_if_positive - decrement by 1 if old value positive + * @v: pointer of type atomic_t + * + * The function returns the old value of *v minus 1, even if + * the atomic variable, v, was not decremented. 
+ */ +static inline int atomic_dec_if_positive(atomic_t *v) +{ + int c, old, dec; + c = atomic_read(v); + for (;;) { + dec = c - 1; + if (unlikely(dec < 0)) + break; + old = atomic_cmpxchg((v), c, dec); + if (likely(old == c)) + break; + c = old; + } + return dec; +} + /** * atomic_inc_short - increment of a short integer * @v: pointer to type int diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 03027bf28de5..2a934aa19a43 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -14,109 +14,193 @@ typedef struct { #define ATOMIC64_INIT(val) { (val) } -extern u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old_val, u64 new_val); +#ifdef CONFIG_X86_CMPXCHG64 +#define ATOMIC64_ALTERNATIVE_(f, g) "call atomic64_" #g "_cx8" +#else +#define ATOMIC64_ALTERNATIVE_(f, g) ALTERNATIVE("call atomic64_" #f "_386", "call atomic64_" #g "_cx8", X86_FEATURE_CX8) +#endif + +#define ATOMIC64_ALTERNATIVE(f) ATOMIC64_ALTERNATIVE_(f, f) + +/** + * atomic64_cmpxchg - cmpxchg atomic64 variable + * @p: pointer to type atomic64_t + * @o: expected value + * @n: new value + * + * Atomically sets @v to @n if it was equal to @o and returns + * the old value. + */ + +static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n) +{ + return cmpxchg64(&v->counter, o, n); +} /** * atomic64_xchg - xchg atomic64 variable - * @ptr: pointer to type atomic64_t - * @new_val: value to assign + * @v: pointer to type atomic64_t + * @n: value to assign * - * Atomically xchgs the value of @ptr to @new_val and returns + * Atomically xchgs the value of @v to @n and returns * the old value. */ -extern u64 atomic64_xchg(atomic64_t *ptr, u64 new_val); +static inline long long atomic64_xchg(atomic64_t *v, long long n) +{ + long long o; + unsigned high = (unsigned)(n >> 32); + unsigned low = (unsigned)n; + asm volatile(ATOMIC64_ALTERNATIVE(xchg) + : "=A" (o), "+b" (low), "+c" (high) + : "S" (v) + : "memory" + ); + return o; +} /** * atomic64_set - set atomic64 variable - * @ptr: pointer to type atomic64_t - * @new_val: value to assign + * @v: pointer to type atomic64_t + * @n: value to assign * - * Atomically sets the value of @ptr to @new_val. + * Atomically sets the value of @v to @n. */ -extern void atomic64_set(atomic64_t *ptr, u64 new_val); +static inline void atomic64_set(atomic64_t *v, long long i) +{ + unsigned high = (unsigned)(i >> 32); + unsigned low = (unsigned)i; + asm volatile(ATOMIC64_ALTERNATIVE(set) + : "+b" (low), "+c" (high) + : "S" (v) + : "eax", "edx", "memory" + ); +} /** * atomic64_read - read atomic64 variable - * @ptr: pointer to type atomic64_t + * @v: pointer to type atomic64_t * - * Atomically reads the value of @ptr and returns it. + * Atomically reads the value of @v and returns it. */ -static inline u64 atomic64_read(atomic64_t *ptr) +static inline long long atomic64_read(atomic64_t *v) { - u64 res; - - /* - * Note, we inline this atomic64_t primitive because - * it only clobbers EAX/EDX and leaves the others - * untouched. 
We also (somewhat subtly) rely on the - * fact that cmpxchg8b returns the current 64-bit value - * of the memory location we are touching: - */ - asm volatile( - "mov %%ebx, %%eax\n\t" - "mov %%ecx, %%edx\n\t" - LOCK_PREFIX "cmpxchg8b %1\n" - : "=&A" (res) - : "m" (*ptr) - ); - - return res; -} - -extern u64 atomic64_read(atomic64_t *ptr); + long long r; + asm volatile(ATOMIC64_ALTERNATIVE(read) + : "=A" (r), "+c" (v) + : : "memory" + ); + return r; + } /** * atomic64_add_return - add and return - * @delta: integer value to add - * @ptr: pointer to type atomic64_t + * @i: integer value to add + * @v: pointer to type atomic64_t * - * Atomically adds @delta to @ptr and returns @delta + *@ptr + * Atomically adds @i to @v and returns @i + *@v */ -extern u64 atomic64_add_return(u64 delta, atomic64_t *ptr); +static inline long long atomic64_add_return(long long i, atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE(add_return) + : "+A" (i), "+c" (v) + : : "memory" + ); + return i; +} /* * Other variants with different arithmetic operators: */ -extern u64 atomic64_sub_return(u64 delta, atomic64_t *ptr); -extern u64 atomic64_inc_return(atomic64_t *ptr); -extern u64 atomic64_dec_return(atomic64_t *ptr); +static inline long long atomic64_sub_return(long long i, atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE(sub_return) + : "+A" (i), "+c" (v) + : : "memory" + ); + return i; +} + +static inline long long atomic64_inc_return(atomic64_t *v) +{ + long long a; + asm volatile(ATOMIC64_ALTERNATIVE(inc_return) + : "=A" (a) + : "S" (v) + : "memory", "ecx" + ); + return a; +} + +static inline long long atomic64_dec_return(atomic64_t *v) +{ + long long a; + asm volatile(ATOMIC64_ALTERNATIVE(dec_return) + : "=A" (a) + : "S" (v) + : "memory", "ecx" + ); + return a; +} /** * atomic64_add - add integer to atomic64 variable - * @delta: integer value to add - * @ptr: pointer to type atomic64_t + * @i: integer value to add + * @v: pointer to type atomic64_t * - * Atomically adds @delta to @ptr. + * Atomically adds @i to @v. */ -extern void atomic64_add(u64 delta, atomic64_t *ptr); +static inline long long atomic64_add(long long i, atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE_(add, add_return) + : "+A" (i), "+c" (v) + : : "memory" + ); + return i; +} /** * atomic64_sub - subtract the atomic64 variable - * @delta: integer value to subtract - * @ptr: pointer to type atomic64_t + * @i: integer value to subtract + * @v: pointer to type atomic64_t * - * Atomically subtracts @delta from @ptr. + * Atomically subtracts @i from @v. */ -extern void atomic64_sub(u64 delta, atomic64_t *ptr); +static inline long long atomic64_sub(long long i, atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE_(sub, sub_return) + : "+A" (i), "+c" (v) + : : "memory" + ); + return i; +} /** * atomic64_sub_and_test - subtract value from variable and test result - * @delta: integer value to subtract - * @ptr: pointer to type atomic64_t - * - * Atomically subtracts @delta from @ptr and returns + * @i: integer value to subtract + * @v: pointer to type atomic64_t + * + * Atomically subtracts @i from @v and returns * true if the result is zero, or false for all * other cases. */ -extern int atomic64_sub_and_test(u64 delta, atomic64_t *ptr); +static inline int atomic64_sub_and_test(long long i, atomic64_t *v) +{ + return atomic64_sub_return(i, v) == 0; +} /** * atomic64_inc - increment atomic64 variable - * @ptr: pointer to type atomic64_t + * @v: pointer to type atomic64_t * - * Atomically increments @ptr by 1. 
+ * Atomically increments @v by 1. */ -extern void atomic64_inc(atomic64_t *ptr); +static inline void atomic64_inc(atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE_(inc, inc_return) + : : "S" (v) + : "memory", "eax", "ecx", "edx" + ); +} /** * atomic64_dec - decrement atomic64 variable @@ -124,37 +208,97 @@ extern void atomic64_inc(atomic64_t *ptr); * * Atomically decrements @ptr by 1. */ -extern void atomic64_dec(atomic64_t *ptr); +static inline void atomic64_dec(atomic64_t *v) +{ + asm volatile(ATOMIC64_ALTERNATIVE_(dec, dec_return) + : : "S" (v) + : "memory", "eax", "ecx", "edx" + ); +} /** * atomic64_dec_and_test - decrement and test - * @ptr: pointer to type atomic64_t + * @v: pointer to type atomic64_t * - * Atomically decrements @ptr by 1 and + * Atomically decrements @v by 1 and * returns true if the result is 0, or false for all other * cases. */ -extern int atomic64_dec_and_test(atomic64_t *ptr); +static inline int atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_return(v) == 0; +} /** * atomic64_inc_and_test - increment and test - * @ptr: pointer to type atomic64_t + * @v: pointer to type atomic64_t * - * Atomically increments @ptr by 1 + * Atomically increments @v by 1 * and returns true if the result is zero, or false for all * other cases. */ -extern int atomic64_inc_and_test(atomic64_t *ptr); +static inline int atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_return(v) == 0; +} /** * atomic64_add_negative - add and test if negative - * @delta: integer value to add - * @ptr: pointer to type atomic64_t + * @i: integer value to add + * @v: pointer to type atomic64_t * - * Atomically adds @delta to @ptr and returns true + * Atomically adds @i to @v and returns true * if the result is negative, or false when * result is greater than or equal to zero. */ -extern int atomic64_add_negative(u64 delta, atomic64_t *ptr); +static inline int atomic64_add_negative(long long i, atomic64_t *v) +{ + return atomic64_add_return(i, v) < 0; +} + +/** + * atomic64_add_unless - add unless the number is a given value + * @v: pointer of type atomic64_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as it was not @u. + * Returns non-zero if @v was not @u, and zero otherwise. 
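For contrast with the register-pinned asm fast path that follows, the same add-unless semantics can be expressed portably as an atomic64_cmpxchg() retry loop; a hedged sketch (not the implementation used here):

static inline int generic_atomic64_add_unless(atomic64_t *v, long long a,
					      long long u)
{
	long long old = atomic64_read(v);

	for (;;) {
		long long seen;

		if (old == u)			/* hit the excluded value: no add */
			return 0;
		seen = atomic64_cmpxchg(v, old, old + a);
		if (seen == old)		/* our update won the race */
			return 1;
		old = seen;			/* someone else changed *v; retry */
	}
}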
+ */ +static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +{ + unsigned low = (unsigned)u; + unsigned high = (unsigned)(u >> 32); + asm volatile(ATOMIC64_ALTERNATIVE(add_unless) "\n\t" + : "+A" (a), "+c" (v), "+S" (low), "+D" (high) + : : "memory"); + return (int)a; +} + + +static inline int atomic64_inc_not_zero(atomic64_t *v) +{ + int r; + asm volatile(ATOMIC64_ALTERNATIVE(inc_not_zero) + : "=a" (r) + : "S" (v) + : "ecx", "edx", "memory" + ); + return r; +} + +static inline long long atomic64_dec_if_positive(atomic64_t *v) +{ + long long r; + asm volatile(ATOMIC64_ALTERNATIVE(dec_if_positive) + : "=A" (r) + : "S" (v) + : "ecx", "memory" + ); + return r; +} + +#undef ATOMIC64_ALTERNATIVE +#undef ATOMIC64_ALTERNATIVE_ #endif /* _ASM_X86_ATOMIC64_32_H */ diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 51c5b4056929..49fd1ea22951 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -18,7 +18,7 @@ */ static inline long atomic64_read(const atomic64_t *v) { - return v->counter; + return (*(volatile long *)&(v)->counter); } /** @@ -221,4 +221,27 @@ static inline int atomic64_add_unless(atomic64_t *v, long a, long u) #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) +/* + * atomic64_dec_if_positive - decrement by 1 if old value positive + * @v: pointer of type atomic_t + * + * The function returns the old value of *v minus 1, even if + * the atomic variable, v, was not decremented. + */ +static inline long atomic64_dec_if_positive(atomic64_t *v) +{ + long c, old, dec; + c = atomic64_read(v); + for (;;) { + dec = c - 1; + if (unlikely(dec < 0)) + break; + old = atomic64_cmpxchg((v), c, dec); + if (likely(old == c)) + break; + c = old; + } + return dec; +} + #endif /* _ASM_X86_ATOMIC64_64_H */ diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h index 02b47a603fc8..545776efeb16 100644 --- a/arch/x86/include/asm/bitops.h +++ b/arch/x86/include/asm/bitops.h @@ -444,7 +444,9 @@ static inline int fls(int x) #define ARCH_HAS_FAST_MULTIPLIER 1 -#include <asm-generic/bitops/hweight.h> +#include <asm/arch_hweight.h> + +#include <asm-generic/bitops/const_hweight.h> #endif /* __KERNEL__ */ diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h index 7a1065958ba9..3b62ab56c7a0 100644 --- a/arch/x86/include/asm/boot.h +++ b/arch/x86/include/asm/boot.h @@ -24,7 +24,7 @@ #define MIN_KERNEL_ALIGN (_AC(1, UL) << MIN_KERNEL_ALIGN_LG2) #if (CONFIG_PHYSICAL_ALIGN & (CONFIG_PHYSICAL_ALIGN-1)) || \ - (CONFIG_PHYSICAL_ALIGN < (_AC(1, UL) << MIN_KERNEL_ALIGN_LG2)) + (CONFIG_PHYSICAL_ALIGN < MIN_KERNEL_ALIGN) #error "Invalid value for CONFIG_PHYSICAL_ALIGN" #endif diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h index 634c40a739a6..c70068d05f70 100644 --- a/arch/x86/include/asm/cacheflush.h +++ b/arch/x86/include/asm/cacheflush.h @@ -44,9 +44,6 @@ static inline void copy_from_user_page(struct vm_area_struct *vma, memcpy(dst, src, len); } -#define PG_WC PG_arch_1 -PAGEFLAG(WC, WC) - #ifdef CONFIG_X86_PAT /* * X86 PAT uses page flags WC and Uncached together to keep track of @@ -55,16 +52,24 @@ PAGEFLAG(WC, WC) * _PAGE_CACHE_UC_MINUS and fourth state where page's memory type has not * been changed from its default (value of -1 used to denote this). * Note we do not support _PAGE_CACHE_UC here. - * - * Caller must hold memtype_lock for atomicity. 
*/ + +#define _PGMT_DEFAULT 0 +#define _PGMT_WC (1UL << PG_arch_1) +#define _PGMT_UC_MINUS (1UL << PG_uncached) +#define _PGMT_WB (1UL << PG_uncached | 1UL << PG_arch_1) +#define _PGMT_MASK (1UL << PG_uncached | 1UL << PG_arch_1) +#define _PGMT_CLEAR_MASK (~_PGMT_MASK) + static inline unsigned long get_page_memtype(struct page *pg) { - if (!PageUncached(pg) && !PageWC(pg)) + unsigned long pg_flags = pg->flags & _PGMT_MASK; + + if (pg_flags == _PGMT_DEFAULT) return -1; - else if (!PageUncached(pg) && PageWC(pg)) + else if (pg_flags == _PGMT_WC) return _PAGE_CACHE_WC; - else if (PageUncached(pg) && !PageWC(pg)) + else if (pg_flags == _PGMT_UC_MINUS) return _PAGE_CACHE_UC_MINUS; else return _PAGE_CACHE_WB; @@ -72,25 +77,26 @@ static inline unsigned long get_page_memtype(struct page *pg) static inline void set_page_memtype(struct page *pg, unsigned long memtype) { + unsigned long memtype_flags = _PGMT_DEFAULT; + unsigned long old_flags; + unsigned long new_flags; + switch (memtype) { case _PAGE_CACHE_WC: - ClearPageUncached(pg); - SetPageWC(pg); + memtype_flags = _PGMT_WC; break; case _PAGE_CACHE_UC_MINUS: - SetPageUncached(pg); - ClearPageWC(pg); + memtype_flags = _PGMT_UC_MINUS; break; case _PAGE_CACHE_WB: - SetPageUncached(pg); - SetPageWC(pg); - break; - default: - case -1: - ClearPageUncached(pg); - ClearPageWC(pg); + memtype_flags = _PGMT_WB; break; } + + do { + old_flags = pg->flags; + new_flags = (old_flags & _PGMT_CLEAR_MASK) | memtype_flags; + } while (cmpxchg(&pg->flags, old_flags, new_flags) != old_flags); } #else static inline unsigned long get_page_memtype(struct page *pg) { return -1; } diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h index ffb9bb6b6c37..8859e12dd3cf 100644 --- a/arch/x86/include/asm/cmpxchg_32.h +++ b/arch/x86/include/asm/cmpxchg_32.h @@ -271,7 +271,8 @@ extern unsigned long long cmpxchg_486_u64(volatile void *, u64, u64); __typeof__(*(ptr)) __ret; \ __typeof__(*(ptr)) __old = (o); \ __typeof__(*(ptr)) __new = (n); \ - alternative_io("call cmpxchg8b_emu", \ + alternative_io(LOCK_PREFIX_HERE \ + "call cmpxchg8b_emu", \ "lock; cmpxchg8b (%%esi)" , \ X86_FEATURE_CX8, \ "=A" (__ret), \ diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index 0cd82d068613..dca9c545f44e 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -161,6 +161,7 @@ */ #define X86_FEATURE_IDA (7*32+ 0) /* Intel Dynamic Acceleration */ #define X86_FEATURE_ARAT (7*32+ 1) /* Always Running APIC Timer */ +#define X86_FEATURE_CPB (7*32+ 2) /* AMD Core Performance Boost */ /* Virtualization flags: Linux defined */ #define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */ @@ -175,6 +176,7 @@ #if defined(__KERNEL__) && !defined(__ASSEMBLY__) +#include <asm/asm.h> #include <linux/bitops.h> extern const char * const x86_cap_flags[NCAPINTS*32]; @@ -283,6 +285,62 @@ extern const char * const x86_power_flags[32]; #endif /* CONFIG_X86_64 */ +/* + * Static testing of CPU features. Used the same as boot_cpu_has(). + * These are only valid after alternatives have run, but will statically + * patch the target code for additional performance. 
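For context on the calling convention: static_cpu_has() is meant to be used exactly like boot_cpu_has(), and the i387.h hunk later in this patch adds use_xsave() as a real example. A hypothetical wrapper in the same style (use_fast_fxsr is not a kernel symbol):

/* Valid only after alternatives have run, as the comment above notes. */
static __always_inline __pure bool use_fast_fxsr(void)
{
	return static_cpu_has(X86_FEATURE_FXSR);
}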
+ * + */ +static __always_inline __pure bool __static_cpu_has(u8 bit) +{ +#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5) + asm goto("1: jmp %l[t_no]\n" + "2:\n" + ".section .altinstructions,\"a\"\n" + _ASM_ALIGN "\n" + _ASM_PTR "1b\n" + _ASM_PTR "0\n" /* no replacement */ + " .byte %P0\n" /* feature bit */ + " .byte 2b - 1b\n" /* source len */ + " .byte 0\n" /* replacement len */ + " .byte 0xff + 0 - (2b-1b)\n" /* padding */ + ".previous\n" + : : "i" (bit) : : t_no); + return true; + t_no: + return false; +#else + u8 flag; + /* Open-coded due to __stringify() in ALTERNATIVE() */ + asm volatile("1: movb $0,%0\n" + "2:\n" + ".section .altinstructions,\"a\"\n" + _ASM_ALIGN "\n" + _ASM_PTR "1b\n" + _ASM_PTR "3f\n" + " .byte %P1\n" /* feature bit */ + " .byte 2b - 1b\n" /* source len */ + " .byte 4f - 3f\n" /* replacement len */ + " .byte 0xff + (4f-3f) - (2b-1b)\n" /* padding */ + ".previous\n" + ".section .altinstr_replacement,\"ax\"\n" + "3: movb $1,%0\n" + "4:\n" + ".previous\n" + : "=qm" (flag) : "i" (bit)); + return flag; +#endif +} + +#define static_cpu_has(bit) \ +( \ + __builtin_constant_p(boot_cpu_has(bit)) ? \ + boot_cpu_has(bit) : \ + (__builtin_constant_p(bit) && !((bit) & ~0xff)) ? \ + __static_cpu_has(bit) : \ + boot_cpu_has(bit) \ +) + #endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */ #endif /* _ASM_X86_CPUFEATURE_H */ diff --git a/arch/x86/include/asm/ds.h b/arch/x86/include/asm/ds.h deleted file mode 100644 index 70dac199b093..000000000000 --- a/arch/x86/include/asm/ds.h +++ /dev/null @@ -1,302 +0,0 @@ -/* - * Debug Store (DS) support - * - * This provides a low-level interface to the hardware's Debug Store - * feature that is used for branch trace store (BTS) and - * precise-event based sampling (PEBS). - * - * It manages: - * - DS and BTS hardware configuration - * - buffer overflow handling (to be done) - * - buffer access - * - * It does not do: - * - security checking (is the caller allowed to trace the task) - * - buffer allocation (memory accounting) - * - * - * Copyright (C) 2007-2009 Intel Corporation. - * Markus Metzger <markus.t.metzger@intel.com>, 2007-2009 - */ - -#ifndef _ASM_X86_DS_H -#define _ASM_X86_DS_H - - -#include <linux/types.h> -#include <linux/init.h> -#include <linux/err.h> - - -#ifdef CONFIG_X86_DS - -struct task_struct; -struct ds_context; -struct ds_tracer; -struct bts_tracer; -struct pebs_tracer; - -typedef void (*bts_ovfl_callback_t)(struct bts_tracer *); -typedef void (*pebs_ovfl_callback_t)(struct pebs_tracer *); - - -/* - * A list of features plus corresponding macros to talk about them in - * the ds_request function's flags parameter. - * - * We use the enum to index an array of corresponding control bits; - * we use the macro to index a flags bit-vector. - */ -enum ds_feature { - dsf_bts = 0, - dsf_bts_kernel, -#define BTS_KERNEL (1 << dsf_bts_kernel) - /* trace kernel-mode branches */ - - dsf_bts_user, -#define BTS_USER (1 << dsf_bts_user) - /* trace user-mode branches */ - - dsf_bts_overflow, - dsf_bts_max, - dsf_pebs = dsf_bts_max, - - dsf_pebs_max, - dsf_ctl_max = dsf_pebs_max, - dsf_bts_timestamps = dsf_ctl_max, -#define BTS_TIMESTAMPS (1 << dsf_bts_timestamps) - /* add timestamps into BTS trace */ - -#define BTS_USER_FLAGS (BTS_KERNEL | BTS_USER | BTS_TIMESTAMPS) -}; - - -/* - * Request BTS or PEBS - * - * Due to alignement constraints, the actual buffer may be slightly - * smaller than the requested or provided buffer. - * - * Returns a pointer to a tracer structure on success, or - * ERR_PTR(errcode) on failure. 
- * - * The interrupt threshold is independent from the overflow callback - * to allow users to use their own overflow interrupt handling mechanism. - * - * The function might sleep. - * - * task: the task to request recording for - * cpu: the cpu to request recording for - * base: the base pointer for the (non-pageable) buffer; - * size: the size of the provided buffer in bytes - * ovfl: pointer to a function to be called on buffer overflow; - * NULL if cyclic buffer requested - * th: the interrupt threshold in records from the end of the buffer; - * -1 if no interrupt threshold is requested. - * flags: a bit-mask of the above flags - */ -extern struct bts_tracer *ds_request_bts_task(struct task_struct *task, - void *base, size_t size, - bts_ovfl_callback_t ovfl, - size_t th, unsigned int flags); -extern struct bts_tracer *ds_request_bts_cpu(int cpu, void *base, size_t size, - bts_ovfl_callback_t ovfl, - size_t th, unsigned int flags); -extern struct pebs_tracer *ds_request_pebs_task(struct task_struct *task, - void *base, size_t size, - pebs_ovfl_callback_t ovfl, - size_t th, unsigned int flags); -extern struct pebs_tracer *ds_request_pebs_cpu(int cpu, - void *base, size_t size, - pebs_ovfl_callback_t ovfl, - size_t th, unsigned int flags); - -/* - * Release BTS or PEBS resources - * Suspend and resume BTS or PEBS tracing - * - * Must be called with irq's enabled. - * - * tracer: the tracer handle returned from ds_request_~() - */ -extern void ds_release_bts(struct bts_tracer *tracer); -extern void ds_suspend_bts(struct bts_tracer *tracer); -extern void ds_resume_bts(struct bts_tracer *tracer); -extern void ds_release_pebs(struct pebs_tracer *tracer); -extern void ds_suspend_pebs(struct pebs_tracer *tracer); -extern void ds_resume_pebs(struct pebs_tracer *tracer); - -/* - * Release BTS or PEBS resources - * Suspend and resume BTS or PEBS tracing - * - * Cpu tracers must call this on the traced cpu. - * Task tracers must call ds_release_~_noirq() for themselves. - * - * May be called with irq's disabled. - * - * Returns 0 if successful; - * -EPERM if the cpu tracer does not trace the current cpu. - * -EPERM if the task tracer does not trace itself. - * - * tracer: the tracer handle returned from ds_request_~() - */ -extern int ds_release_bts_noirq(struct bts_tracer *tracer); -extern int ds_suspend_bts_noirq(struct bts_tracer *tracer); -extern int ds_resume_bts_noirq(struct bts_tracer *tracer); -extern int ds_release_pebs_noirq(struct pebs_tracer *tracer); -extern int ds_suspend_pebs_noirq(struct pebs_tracer *tracer); -extern int ds_resume_pebs_noirq(struct pebs_tracer *tracer); - - -/* - * The raw DS buffer state as it is used for BTS and PEBS recording. - * - * This is the low-level, arch-dependent interface for working - * directly on the raw trace data. - */ -struct ds_trace { - /* the number of bts/pebs records */ - size_t n; - /* the size of a bts/pebs record in bytes */ - size_t size; - /* pointers into the raw buffer: - - to the first entry */ - void *begin; - /* - one beyond the last entry */ - void *end; - /* - one beyond the newest entry */ - void *top; - /* - the interrupt threshold */ - void *ith; - /* flags given on ds_request() */ - unsigned int flags; -}; - -/* - * An arch-independent view on branch trace data. 
- */ -enum bts_qualifier { - bts_invalid, -#define BTS_INVALID bts_invalid - - bts_branch, -#define BTS_BRANCH bts_branch - - bts_task_arrives, -#define BTS_TASK_ARRIVES bts_task_arrives - - bts_task_departs, -#define BTS_TASK_DEPARTS bts_task_departs - - bts_qual_bit_size = 4, - bts_qual_max = (1 << bts_qual_bit_size), -}; - -struct bts_struct { - __u64 qualifier; - union { - /* BTS_BRANCH */ - struct { - __u64 from; - __u64 to; - } lbr; - /* BTS_TASK_ARRIVES or BTS_TASK_DEPARTS */ - struct { - __u64 clock; - pid_t pid; - } event; - } variant; -}; - - -/* - * The BTS state. - * - * This gives access to the raw DS state and adds functions to provide - * an arch-independent view of the BTS data. - */ -struct bts_trace { - struct ds_trace ds; - - int (*read)(struct bts_tracer *tracer, const void *at, - struct bts_struct *out); - int (*write)(struct bts_tracer *tracer, const struct bts_struct *in); -}; - - -/* - * The PEBS state. - * - * This gives access to the raw DS state and the PEBS-specific counter - * reset value. - */ -struct pebs_trace { - struct ds_trace ds; - - /* the number of valid counters in the below array */ - unsigned int counters; - -#define MAX_PEBS_COUNTERS 4 - /* the counter reset value */ - unsigned long long counter_reset[MAX_PEBS_COUNTERS]; -}; - - -/* - * Read the BTS or PEBS trace. - * - * Returns a view on the trace collected for the parameter tracer. - * - * The view remains valid as long as the traced task is not running or - * the tracer is suspended. - * Writes into the trace buffer are not reflected. - * - * tracer: the tracer handle returned from ds_request_~() - */ -extern const struct bts_trace *ds_read_bts(struct bts_tracer *tracer); -extern const struct pebs_trace *ds_read_pebs(struct pebs_tracer *tracer); - - -/* - * Reset the write pointer of the BTS/PEBS buffer. - * - * Returns 0 on success; -Eerrno on error - * - * tracer: the tracer handle returned from ds_request_~() - */ -extern int ds_reset_bts(struct bts_tracer *tracer); -extern int ds_reset_pebs(struct pebs_tracer *tracer); - -/* - * Set the PEBS counter reset value. - * - * Returns 0 on success; -Eerrno on error - * - * tracer: the tracer handle returned from ds_request_pebs() - * counter: the index of the counter - * value: the new counter reset value - */ -extern int ds_set_pebs_reset(struct pebs_tracer *tracer, - unsigned int counter, u64 value); - -/* - * Initialization - */ -struct cpuinfo_x86; -extern void __cpuinit ds_init_intel(struct cpuinfo_x86 *); - -/* - * Context switch work - */ -extern void ds_switch_to(struct task_struct *prev, struct task_struct *next); - -#else /* CONFIG_X86_DS */ - -struct cpuinfo_x86; -static inline void __cpuinit ds_init_intel(struct cpuinfo_x86 *ignored) {} -static inline void ds_switch_to(struct task_struct *prev, - struct task_struct *next) {} - -#endif /* CONFIG_X86_DS */ -#endif /* _ASM_X86_DS_H */ diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h index ae6253ab9029..733f7e91e7a9 100644 --- a/arch/x86/include/asm/dwarf2.h +++ b/arch/x86/include/asm/dwarf2.h @@ -34,6 +34,18 @@ #define CFI_SIGNAL_FRAME #endif +#if defined(CONFIG_AS_CFI_SECTIONS) && defined(__ASSEMBLY__) + /* + * Emit CFI data in .debug_frame sections, not .eh_frame sections. + * The latter we currently just discard since we don't do DWARF + * unwinding at runtime. So only the offline DWARF information is + * useful to anyone. 
Note we should not use this directive if this + * file is used in the vDSO assembly, or if vmlinux.lds.S gets + * changed so it doesn't discard .eh_frame. + */ + .cfi_sections .debug_frame +#endif + #else /* diff --git a/arch/x86/include/asm/e820.h b/arch/x86/include/asm/e820.h index 0e22296790d3..ec8a52d14ab1 100644 --- a/arch/x86/include/asm/e820.h +++ b/arch/x86/include/asm/e820.h @@ -45,7 +45,12 @@ #define E820_NVS 4 #define E820_UNUSABLE 5 -/* reserved RAM used by kernel itself */ +/* + * reserved RAM used by kernel itself + * if CONFIG_INTEL_TXT is enabled, memory of this type will be + * included in the S3 integrity calculation and so should not include + * any memory that BIOS might alter over the S3 transition + */ #define E820_RESERVED_KERN 128 #ifndef __ASSEMBLY__ diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h index 0f8576427cfe..aeab29aee617 100644 --- a/arch/x86/include/asm/hardirq.h +++ b/arch/x86/include/asm/hardirq.h @@ -35,7 +35,7 @@ DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat); #define __ARCH_IRQ_STAT -#define inc_irq_stat(member) percpu_add(irq_stat.member, 1) +#define inc_irq_stat(member) percpu_inc(irq_stat.member) #define local_softirq_pending() percpu_read(irq_stat.__softirq_pending) diff --git a/arch/x86/include/asm/hw_breakpoint.h b/arch/x86/include/asm/hw_breakpoint.h index 2a1bd8f4f23a..942255310e6a 100644 --- a/arch/x86/include/asm/hw_breakpoint.h +++ b/arch/x86/include/asm/hw_breakpoint.h @@ -41,12 +41,16 @@ struct arch_hw_breakpoint { /* Total number of available HW breakpoint registers */ #define HBP_NUM 4 +static inline int hw_breakpoint_slots(int type) +{ + return HBP_NUM; +} + struct perf_event; struct pmu; -extern int arch_check_va_in_userspace(unsigned long va, u8 hbp_len); -extern int arch_validate_hwbkpt_settings(struct perf_event *bp, - struct task_struct *tsk); +extern int arch_check_bp_in_kernelspace(struct perf_event *bp); +extern int arch_validate_hwbkpt_settings(struct perf_event *bp); extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused, unsigned long val, void *data); diff --git a/arch/x86/include/asm/hyperv.h b/arch/x86/include/asm/hyperv.h index e153a2b3889a..5df477ac3af7 100644 --- a/arch/x86/include/asm/hyperv.h +++ b/arch/x86/include/asm/hyperv.h @@ -1,5 +1,5 @@ -#ifndef _ASM_X86_KVM_HYPERV_H -#define _ASM_X86_KVM_HYPERV_H +#ifndef _ASM_X86_HYPERV_H +#define _ASM_X86_HYPERV_H #include <linux/types.h> @@ -14,6 +14,10 @@ #define HYPERV_CPUID_ENLIGHTMENT_INFO 0x40000004 #define HYPERV_CPUID_IMPLEMENT_LIMITS 0x40000005 +#define HYPERV_HYPERVISOR_PRESENT_BIT 0x80000000 +#define HYPERV_CPUID_MIN 0x40000005 +#define HYPERV_CPUID_MAX 0x4000ffff + /* * Feature identification. EAX indicates which features are available * to the partition based upon the current partition privileges. @@ -129,6 +133,9 @@ /* MSR used to provide vcpu index */ #define HV_X64_MSR_VP_INDEX 0x40000002 +/* MSR used to read the per-partition time reference counter */ +#define HV_X64_MSR_TIME_REF_COUNT 0x40000020 + /* Define the virtual APIC registers */ #define HV_X64_MSR_EOI 0x40000070 #define HV_X64_MSR_ICR 0x40000071 diff --git a/arch/x86/include/asm/hypervisor.h b/arch/x86/include/asm/hypervisor.h index b78c0941e422..70abda7058c8 100644 --- a/arch/x86/include/asm/hypervisor.h +++ b/arch/x86/include/asm/hypervisor.h @@ -17,10 +17,33 @@ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 
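The hypervisor.h hunk just below replaces the old X86_HYPER_VENDOR_* constants with a table of struct hypervisor_x86 callbacks (the processor.h hunk further down drops x86_hyper_vendor accordingly). A hedged sketch of how a hypervisor back end would populate the structure; the detect routine is a placeholder, not the real VMware or Hyper-V code:

static bool example_hv_detect(void)
{
	/* A real detect routine would probe CPUID leaf 0x40000000 here. */
	return false;
}

static void example_hv_init_platform(void)
{
}

static const struct hypervisor_x86 x86_hyper_example = {
	.name		= "Example hypervisor",
	.detect		= example_hv_detect,
	.init_platform	= example_hv_init_platform,
};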
* */ -#ifndef ASM_X86__HYPERVISOR_H -#define ASM_X86__HYPERVISOR_H +#ifndef _ASM_X86_HYPERVISOR_H +#define _ASM_X86_HYPERVISOR_H extern void init_hypervisor(struct cpuinfo_x86 *c); extern void init_hypervisor_platform(void); +/* + * x86 hypervisor information + */ +struct hypervisor_x86 { + /* Hypervisor name */ + const char *name; + + /* Detection routine */ + bool (*detect)(void); + + /* Adjust CPU feature bits (run once per CPU) */ + void (*set_cpu_features)(struct cpuinfo_x86 *); + + /* Platform setup (run once per boot) */ + void (*init_platform)(void); +}; + +extern const struct hypervisor_x86 *x86_hyper; + +/* Recognized hypervisors */ +extern const struct hypervisor_x86 x86_hyper_vmware; +extern const struct hypervisor_x86 x86_hyper_ms_hyperv; + #endif diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h index da2930924501..c991b3a7b904 100644 --- a/arch/x86/include/asm/i387.h +++ b/arch/x86/include/asm/i387.h @@ -16,7 +16,9 @@ #include <linux/kernel_stat.h> #include <linux/regset.h> #include <linux/hardirq.h> +#include <linux/slab.h> #include <asm/asm.h> +#include <asm/cpufeature.h> #include <asm/processor.h> #include <asm/sigcontext.h> #include <asm/user.h> @@ -56,6 +58,11 @@ extern int restore_i387_xstate_ia32(void __user *buf); #define X87_FSW_ES (1 << 7) /* Exception Summary */ +static __always_inline __pure bool use_xsave(void) +{ + return static_cpu_has(X86_FEATURE_XSAVE); +} + #ifdef CONFIG_X86_64 /* Ignore delayed exceptions from user space */ @@ -91,15 +98,15 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx) values. The kernel data segment can be sometimes 0 and sometimes new user value. Both should be ok. Use the PDA as safe address because it should be already in L1. */ -static inline void clear_fpu_state(struct task_struct *tsk) +static inline void fpu_clear(struct fpu *fpu) { - struct xsave_struct *xstate = &tsk->thread.xstate->xsave; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct xsave_struct *xstate = &fpu->state->xsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; /* * xsave header may indicate the init state of the FP. */ - if ((task_thread_info(tsk)->status & TS_XSAVE) && + if (use_xsave() && !(xstate->xsave_hdr.xstate_bv & XSTATE_FP)) return; @@ -111,6 +118,11 @@ static inline void clear_fpu_state(struct task_struct *tsk) X86_FEATURE_FXSAVE_LEAK); } +static inline void clear_fpu_state(struct task_struct *tsk) +{ + fpu_clear(&tsk->thread.fpu); +} + static inline int fxsave_user(struct i387_fxsave_struct __user *fx) { int err; @@ -135,7 +147,7 @@ static inline int fxsave_user(struct i387_fxsave_struct __user *fx) return err; } -static inline void fxsave(struct task_struct *tsk) +static inline void fpu_fxsave(struct fpu *fpu) { /* Using "rex64; fxsave %0" is broken because, if the memory operand uses any extended registers for addressing, a second REX prefix @@ -145,42 +157,45 @@ static inline void fxsave(struct task_struct *tsk) /* Using "fxsaveq %0" would be the ideal choice, but is only supported starting with gas 2.16. */ __asm__ __volatile__("fxsaveq %0" - : "=m" (tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave)); #elif 0 /* Using, as a workaround, the properly prefixed form below isn't accepted by any binutils version so far released, complaining that the same type of prefix is used twice if an extended register is needed for addressing (fix submitted to mainline 2005-11-21). 
*/ __asm__ __volatile__("rex64/fxsave %0" - : "=m" (tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave)); #else /* This, however, we can work around by forcing the compiler to select an addressing mode that doesn't require extended registers. */ __asm__ __volatile__("rex64/fxsave (%1)" - : "=m" (tsk->thread.xstate->fxsave) - : "cdaSDb" (&tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave) + : "cdaSDb" (&fpu->state->fxsave)); #endif } -static inline void __save_init_fpu(struct task_struct *tsk) +static inline void fpu_save_init(struct fpu *fpu) { - if (task_thread_info(tsk)->status & TS_XSAVE) - xsave(tsk); + if (use_xsave()) + fpu_xsave(fpu); else - fxsave(tsk); + fpu_fxsave(fpu); + + fpu_clear(fpu); +} - clear_fpu_state(tsk); +static inline void __save_init_fpu(struct task_struct *tsk) +{ + fpu_save_init(&tsk->thread.fpu); task_thread_info(tsk)->status &= ~TS_USEDFPU; } #else /* CONFIG_X86_32 */ #ifdef CONFIG_MATH_EMULATION -extern void finit_task(struct task_struct *tsk); +extern void finit_soft_fpu(struct i387_soft_struct *soft); #else -static inline void finit_task(struct task_struct *tsk) -{ -} +static inline void finit_soft_fpu(struct i387_soft_struct *soft) {} #endif static inline void tolerant_fwait(void) @@ -216,13 +231,13 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx) /* * These must be called with preempt disabled */ -static inline void __save_init_fpu(struct task_struct *tsk) +static inline void fpu_save_init(struct fpu *fpu) { - if (task_thread_info(tsk)->status & TS_XSAVE) { - struct xsave_struct *xstate = &tsk->thread.xstate->xsave; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + if (use_xsave()) { + struct xsave_struct *xstate = &fpu->state->xsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; - xsave(tsk); + fpu_xsave(fpu); /* * xsave header may indicate the init state of the FP. @@ -246,8 +261,8 @@ static inline void __save_init_fpu(struct task_struct *tsk) "fxsave %[fx]\n" "bt $7,%[fsw] ; jnc 1f ; fnclex\n1:", X86_FEATURE_FXSR, - [fx] "m" (tsk->thread.xstate->fxsave), - [fsw] "m" (tsk->thread.xstate->fxsave.swd) : "memory"); + [fx] "m" (fpu->state->fxsave), + [fsw] "m" (fpu->state->fxsave.swd) : "memory"); clear_state: /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is pending. 
Clear the x87 state here by setting it to fixed @@ -259,17 +274,34 @@ clear_state: X86_FEATURE_FXSAVE_LEAK, [addr] "m" (safe_address)); end: + ; +} + +static inline void __save_init_fpu(struct task_struct *tsk) +{ + fpu_save_init(&tsk->thread.fpu); task_thread_info(tsk)->status &= ~TS_USEDFPU; } + #endif /* CONFIG_X86_64 */ -static inline int restore_fpu_checking(struct task_struct *tsk) +static inline int fpu_fxrstor_checking(struct fpu *fpu) { - if (task_thread_info(tsk)->status & TS_XSAVE) - return xrstor_checking(&tsk->thread.xstate->xsave); + return fxrstor_checking(&fpu->state->fxsave); +} + +static inline int fpu_restore_checking(struct fpu *fpu) +{ + if (use_xsave()) + return fpu_xrstor_checking(fpu); else - return fxrstor_checking(&tsk->thread.xstate->fxsave); + return fpu_fxrstor_checking(fpu); +} + +static inline int restore_fpu_checking(struct task_struct *tsk) +{ + return fpu_restore_checking(&tsk->thread.fpu); } /* @@ -397,30 +429,59 @@ static inline void clear_fpu(struct task_struct *tsk) static inline unsigned short get_fpu_cwd(struct task_struct *tsk) { if (cpu_has_fxsr) { - return tsk->thread.xstate->fxsave.cwd; + return tsk->thread.fpu.state->fxsave.cwd; } else { - return (unsigned short)tsk->thread.xstate->fsave.cwd; + return (unsigned short)tsk->thread.fpu.state->fsave.cwd; } } static inline unsigned short get_fpu_swd(struct task_struct *tsk) { if (cpu_has_fxsr) { - return tsk->thread.xstate->fxsave.swd; + return tsk->thread.fpu.state->fxsave.swd; } else { - return (unsigned short)tsk->thread.xstate->fsave.swd; + return (unsigned short)tsk->thread.fpu.state->fsave.swd; } } static inline unsigned short get_fpu_mxcsr(struct task_struct *tsk) { if (cpu_has_xmm) { - return tsk->thread.xstate->fxsave.mxcsr; + return tsk->thread.fpu.state->fxsave.mxcsr; } else { return MXCSR_DEFAULT; } } +static bool fpu_allocated(struct fpu *fpu) +{ + return fpu->state != NULL; +} + +static inline int fpu_alloc(struct fpu *fpu) +{ + if (fpu_allocated(fpu)) + return 0; + fpu->state = kmem_cache_alloc(task_xstate_cachep, GFP_KERNEL); + if (!fpu->state) + return -ENOMEM; + WARN_ON((unsigned long)fpu->state & 15); + return 0; +} + +static inline void fpu_free(struct fpu *fpu) +{ + if (fpu->state) { + kmem_cache_free(task_xstate_cachep, fpu->state); + fpu->state = NULL; + } +} + +static inline void fpu_copy(struct fpu *dst, struct fpu *src) +{ + memcpy(dst->state, src->state, xstate_size); +} + #endif /* __ASSEMBLY__ */ #define PSHUFB_XMM5_XMM0 .byte 0x66, 0x0f, 0x38, 0x00, 0xc5 diff --git a/arch/x86/include/asm/i8253.h b/arch/x86/include/asm/i8253.h index 1edbf89680fd..fc1f579fb965 100644 --- a/arch/x86/include/asm/i8253.h +++ b/arch/x86/include/asm/i8253.h @@ -6,7 +6,7 @@ #define PIT_CH0 0x40 #define PIT_CH2 0x42 -extern spinlock_t i8253_lock; +extern raw_spinlock_t i8253_lock; extern struct clock_event_device *global_clock_event; diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h index 96c2e0ad04ca..88c765e16410 100644 --- a/arch/x86/include/asm/insn.h +++ b/arch/x86/include/asm/insn.h @@ -68,6 +68,8 @@ struct insn { const insn_byte_t *next_byte; }; +#define MAX_INSN_SIZE 16 + #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6) #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3) #define X86_MODRM_RM(modrm) ((modrm) & 0x07) diff --git a/arch/x86/include/asm/io_apic.h b/arch/x86/include/asm/io_apic.h index 35832a03a515..63cb4096c3dc 100644 --- a/arch/x86/include/asm/io_apic.h +++ b/arch/x86/include/asm/io_apic.h @@ -159,7 +159,6 @@ struct io_apic_irq_attr; extern int 
io_apic_set_pci_routing(struct device *dev, int irq, struct io_apic_irq_attr *irq_attr); void setup_IO_APIC_irq_extra(u32 gsi); -extern int (*ioapic_renumber_irq)(int ioapic, int irq); extern void ioapic_init_mappings(void); extern void ioapic_insert_resources(void); @@ -180,12 +179,13 @@ extern void ioapic_write_entry(int apic, int pin, extern void setup_ioapic_ids_from_mpc(void); struct mp_ioapic_gsi{ - int gsi_base; - int gsi_end; + u32 gsi_base; + u32 gsi_end; }; extern struct mp_ioapic_gsi mp_gsi_routing[]; -int mp_find_ioapic(int gsi); -int mp_find_ioapic_pin(int ioapic, int gsi); +extern u32 gsi_end; +int mp_find_ioapic(u32 gsi); +int mp_find_ioapic_pin(int ioapic, u32 gsi); void __init mp_register_ioapic(int id, u32 address, u32 gsi_base); extern void __init pre_init_apic_IRQ0(void); @@ -197,7 +197,8 @@ static const int timer_through_8259 = 0; static inline void ioapic_init_mappings(void) { } static inline void ioapic_insert_resources(void) { } static inline void probe_nr_irqs_gsi(void) { } -static inline int mp_find_ioapic(int gsi) { return 0; } +#define gsi_end (NR_IRQS_LEGACY - 1) +static inline int mp_find_ioapic(u32 gsi) { return 0; } struct io_apic_irq_attr; static inline int io_apic_set_pci_routing(struct device *dev, int irq, diff --git a/arch/x86/include/asm/k8.h b/arch/x86/include/asm/k8.h index f70e60071fe8..af00bd1d2089 100644 --- a/arch/x86/include/asm/k8.h +++ b/arch/x86/include/asm/k8.h @@ -16,11 +16,16 @@ extern int k8_numa_init(unsigned long start_pfn, unsigned long end_pfn); extern int k8_scan_nodes(void); #ifdef CONFIG_K8_NB +extern int num_k8_northbridges; + static inline struct pci_dev *node_to_k8_nb_misc(int node) { return (node < num_k8_northbridges) ? k8_northbridges[node] : NULL; } + #else +#define num_k8_northbridges 0 + static inline struct pci_dev *node_to_k8_nb_misc(int node) { return NULL; diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h index 4ffa345a8ccb..547882539157 100644 --- a/arch/x86/include/asm/kprobes.h +++ b/arch/x86/include/asm/kprobes.h @@ -24,6 +24,7 @@ #include <linux/types.h> #include <linux/ptrace.h> #include <linux/percpu.h> +#include <asm/insn.h> #define __ARCH_WANT_KPROBES_INSN_SLOT @@ -36,7 +37,6 @@ typedef u8 kprobe_opcode_t; #define RELATIVEJUMP_SIZE 5 #define RELATIVECALL_OPCODE 0xe8 #define RELATIVE_ADDR_SIZE 4 -#define MAX_INSN_SIZE 16 #define MAX_STACK_SIZE 64 #define MIN_STACK_SIZE(ADDR) \ (((MAX_STACK_SIZE) < (((unsigned long)current_thread_info()) + \ diff --git a/arch/x86/include/asm/mpspec.h b/arch/x86/include/asm/mpspec.h index d8bf23a88d05..c82868e9f905 100644 --- a/arch/x86/include/asm/mpspec.h +++ b/arch/x86/include/asm/mpspec.h @@ -105,16 +105,6 @@ extern void mp_config_acpi_legacy_irqs(void); struct device; extern int mp_register_gsi(struct device *dev, u32 gsi, int edge_level, int active_high_low); -extern int acpi_probe_gsi(void); -#ifdef CONFIG_X86_IO_APIC -extern int mp_find_ioapic(int gsi); -extern int mp_find_ioapic_pin(int ioapic, int gsi); -#endif -#else /* !CONFIG_ACPI: */ -static inline int acpi_probe_gsi(void) -{ - return 0; -} #endif /* CONFIG_ACPI */ #define PHYSID_ARRAY_SIZE BITS_TO_LONGS(MAX_APICS) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h new file mode 100644 index 000000000000..79ce5685ab64 --- /dev/null +++ b/arch/x86/include/asm/mshyperv.h @@ -0,0 +1,14 @@ +#ifndef _ASM_X86_MSHYPER_H +#define _ASM_X86_MSHYPER_H + +#include <linux/types.h> +#include <asm/hyperv.h> + +struct ms_hyperv_info { + u32 features; + u32 hints; +}; + +extern 
struct ms_hyperv_info ms_hyperv; + +#endif diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 4604e6a54d36..bc473acfa7f9 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -71,11 +71,14 @@ #define MSR_IA32_LASTINTTOIP 0x000001de /* DEBUGCTLMSR bits (others vary by model): */ -#define _DEBUGCTLMSR_LBR 0 /* last branch recording */ -#define _DEBUGCTLMSR_BTF 1 /* single-step on branches */ - -#define DEBUGCTLMSR_LBR (1UL << _DEBUGCTLMSR_LBR) -#define DEBUGCTLMSR_BTF (1UL << _DEBUGCTLMSR_BTF) +#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */ +#define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */ +#define DEBUGCTLMSR_TR (1UL << 6) +#define DEBUGCTLMSR_BTS (1UL << 7) +#define DEBUGCTLMSR_BTINT (1UL << 8) +#define DEBUGCTLMSR_BTS_OFF_OS (1UL << 9) +#define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10) +#define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11) #define MSR_IA32_MC0_CTL 0x00000400 #define MSR_IA32_MC0_STATUS 0x00000401 @@ -359,6 +362,8 @@ #define MSR_P4_U2L_ESCR0 0x000003b0 #define MSR_P4_U2L_ESCR1 0x000003b1 +#define MSR_P4_PEBS_MATRIX_VERT 0x000003f2 + /* Intel Core-based CPU performance counters */ #define MSR_CORE_PERF_FIXED_CTR0 0x00000309 #define MSR_CORE_PERF_FIXED_CTR1 0x0000030a diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h index 66a272dfd8b8..0ec6d12d84e6 100644 --- a/arch/x86/include/asm/percpu.h +++ b/arch/x86/include/asm/percpu.h @@ -190,6 +190,29 @@ do { \ pfo_ret__; \ }) +#define percpu_unary_op(op, var) \ +({ \ + switch (sizeof(var)) { \ + case 1: \ + asm(op "b "__percpu_arg(0) \ + : "+m" (var)); \ + break; \ + case 2: \ + asm(op "w "__percpu_arg(0) \ + : "+m" (var)); \ + break; \ + case 4: \ + asm(op "l "__percpu_arg(0) \ + : "+m" (var)); \ + break; \ + case 8: \ + asm(op "q "__percpu_arg(0) \ + : "+m" (var)); \ + break; \ + default: __bad_percpu_size(); \ + } \ +}) + /* * percpu_read() makes gcc load the percpu variable every time it is * accessed while percpu_read_stable() allows the value to be cached. 
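The new percpu_unary_op() above — wrapped as percpu_inc() just below and now used by hardirq.h's inc_irq_stat() — picks the instruction by dispatching on sizeof(). A stand-alone user-space sketch of that dispatch idea, assuming an x86-64 GCC toolchain and plain GNU C inline asm instead of the kernel's segment-prefixed per-cpu accessors (unary_inc is a made-up macro):

#include <stdio.h>

/* Compile-time size dispatch: emit incl or incq depending on operand width. */
#define unary_inc(var)						\
do {								\
	switch (sizeof(var)) {					\
	case 4: __asm__("incl %0" : "+m" (var)); break;	\
	case 8: __asm__("incq %0" : "+m" (var)); break;	\
	default: (var)++; break;				\
	}							\
} while (0)

int main(void)
{
	unsigned int hits = 0;
	unsigned long long bytes = 0;

	unary_inc(hits);
	unary_inc(bytes);
	printf("%u %llu\n", hits, bytes);
	return 0;
}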
@@ -207,6 +230,7 @@ do { \ #define percpu_and(var, val) percpu_to_op("and", var, val) #define percpu_or(var, val) percpu_to_op("or", var, val) #define percpu_xor(var, val) percpu_to_op("xor", var, val) +#define percpu_inc(var) percpu_unary_op("inc", var) #define __this_cpu_read_1(pcp) percpu_from_op("mov", (pcp), "m"(pcp)) #define __this_cpu_read_2(pcp) percpu_from_op("mov", (pcp), "m"(pcp)) diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index db6109a885a7..254883d0c7e0 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -5,7 +5,7 @@ * Performance event hw details: */ -#define X86_PMC_MAX_GENERIC 8 +#define X86_PMC_MAX_GENERIC 32 #define X86_PMC_MAX_FIXED 3 #define X86_PMC_IDX_GENERIC 0 @@ -18,39 +18,31 @@ #define MSR_ARCH_PERFMON_EVENTSEL0 0x186 #define MSR_ARCH_PERFMON_EVENTSEL1 0x187 -#define ARCH_PERFMON_EVENTSEL_ENABLE (1 << 22) -#define ARCH_PERFMON_EVENTSEL_ANY (1 << 21) -#define ARCH_PERFMON_EVENTSEL_INT (1 << 20) -#define ARCH_PERFMON_EVENTSEL_OS (1 << 17) -#define ARCH_PERFMON_EVENTSEL_USR (1 << 16) - -/* - * Includes eventsel and unit mask as well: - */ - - -#define INTEL_ARCH_EVTSEL_MASK 0x000000FFULL -#define INTEL_ARCH_UNIT_MASK 0x0000FF00ULL -#define INTEL_ARCH_EDGE_MASK 0x00040000ULL -#define INTEL_ARCH_INV_MASK 0x00800000ULL -#define INTEL_ARCH_CNT_MASK 0xFF000000ULL -#define INTEL_ARCH_EVENT_MASK (INTEL_ARCH_UNIT_MASK|INTEL_ARCH_EVTSEL_MASK) - -/* - * filter mask to validate fixed counter events. - * the following filters disqualify for fixed counters: - * - inv - * - edge - * - cnt-mask - * The other filters are supported by fixed counters. - * The any-thread option is supported starting with v3. - */ -#define INTEL_ARCH_FIXED_MASK \ - (INTEL_ARCH_CNT_MASK| \ - INTEL_ARCH_INV_MASK| \ - INTEL_ARCH_EDGE_MASK|\ - INTEL_ARCH_UNIT_MASK|\ - INTEL_ARCH_EVENT_MASK) +#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL +#define ARCH_PERFMON_EVENTSEL_UMASK 0x0000FF00ULL +#define ARCH_PERFMON_EVENTSEL_USR (1ULL << 16) +#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17) +#define ARCH_PERFMON_EVENTSEL_EDGE (1ULL << 18) +#define ARCH_PERFMON_EVENTSEL_INT (1ULL << 20) +#define ARCH_PERFMON_EVENTSEL_ANY (1ULL << 21) +#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22) +#define ARCH_PERFMON_EVENTSEL_INV (1ULL << 23) +#define ARCH_PERFMON_EVENTSEL_CMASK 0xFF000000ULL + +#define AMD64_EVENTSEL_EVENT \ + (ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32)) +#define INTEL_ARCH_EVENT_MASK \ + (ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT) + +#define X86_RAW_EVENT_MASK \ + (ARCH_PERFMON_EVENTSEL_EVENT | \ + ARCH_PERFMON_EVENTSEL_UMASK | \ + ARCH_PERFMON_EVENTSEL_EDGE | \ + ARCH_PERFMON_EVENTSEL_INV | \ + ARCH_PERFMON_EVENTSEL_CMASK) +#define AMD64_RAW_EVENT_MASK \ + (X86_RAW_EVENT_MASK | \ + AMD64_EVENTSEL_EVENT) #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL 0x3c #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK (0x00 << 8) @@ -67,7 +59,7 @@ union cpuid10_eax { struct { unsigned int version_id:8; - unsigned int num_events:8; + unsigned int num_counters:8; unsigned int bit_width:8; unsigned int mask_length:8; } split; @@ -76,7 +68,7 @@ union cpuid10_eax { union cpuid10_edx { struct { - unsigned int num_events_fixed:4; + unsigned int num_counters_fixed:4; unsigned int reserved:28; } split; unsigned int full; @@ -136,6 +128,18 @@ extern void perf_events_lapic_init(void); #define PERF_EVENT_INDEX_OFFSET 0 +/* + * Abuse bit 3 of the cpu eflags register to indicate proper PEBS IP fixups. 
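The ARCH_PERFMON_EVENTSEL_* fields above (now 1ULL-based, with EDGE/INV/CMASK added) are the pieces raw PMU configurations are assembled from. A small illustration composing the architectural unhalted-core-cycles event (event code 0x3c, unit mask 0x00, as defined a few lines up), using local copies of the constants rather than the header itself:

#include <stdint.h>

#define EVENTSEL_EVENT	0x000000FFULL
#define EVENTSEL_UMASK	0x0000FF00ULL
#define EVENTSEL_USR	(1ULL << 16)
#define EVENTSEL_OS	(1ULL << 17)
#define EVENTSEL_ENABLE	(1ULL << 22)

/* Count unhalted core cycles in user and kernel mode, counter enabled. */
static uint64_t cycles_eventsel(void)
{
	uint64_t config = 0x3cULL & EVENTSEL_EVENT;	/* event code */

	config |= (0x00ULL << 8) & EVENTSEL_UMASK;	/* unit mask */
	return config | EVENTSEL_USR | EVENTSEL_OS | EVENTSEL_ENABLE;
}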
+ * This flag is otherwise unused and ABI specified to be 0, so nobody should + * care what we do with it. + */ +#define PERF_EFLAGS_EXACT (1UL << 3) + +struct pt_regs; +extern unsigned long perf_instruction_pointer(struct pt_regs *regs); +extern unsigned long perf_misc_flags(struct pt_regs *regs); +#define perf_misc_flags(regs) perf_misc_flags(regs) + #else static inline void init_hw_perf_events(void) { } static inline void perf_events_lapic_init(void) { } diff --git a/arch/x86/include/asm/perf_event_p4.h b/arch/x86/include/asm/perf_event_p4.h new file mode 100644 index 000000000000..b05400a542ff --- /dev/null +++ b/arch/x86/include/asm/perf_event_p4.h @@ -0,0 +1,794 @@ +/* + * Netburst Perfomance Events (P4, old Xeon) + */ + +#ifndef PERF_EVENT_P4_H +#define PERF_EVENT_P4_H + +#include <linux/cpu.h> +#include <linux/bitops.h> + +/* + * NetBurst has perfomance MSRs shared between + * threads if HT is turned on, ie for both logical + * processors (mem: in turn in Atom with HT support + * perf-MSRs are not shared and every thread has its + * own perf-MSRs set) + */ +#define ARCH_P4_TOTAL_ESCR (46) +#define ARCH_P4_RESERVED_ESCR (2) /* IQ_ESCR(0,1) not always present */ +#define ARCH_P4_MAX_ESCR (ARCH_P4_TOTAL_ESCR - ARCH_P4_RESERVED_ESCR) +#define ARCH_P4_MAX_CCCR (18) +#define ARCH_P4_MAX_COUNTER (ARCH_P4_MAX_CCCR / 2) + +#define P4_ESCR_EVENT_MASK 0x7e000000U +#define P4_ESCR_EVENT_SHIFT 25 +#define P4_ESCR_EVENTMASK_MASK 0x01fffe00U +#define P4_ESCR_EVENTMASK_SHIFT 9 +#define P4_ESCR_TAG_MASK 0x000001e0U +#define P4_ESCR_TAG_SHIFT 5 +#define P4_ESCR_TAG_ENABLE 0x00000010U +#define P4_ESCR_T0_OS 0x00000008U +#define P4_ESCR_T0_USR 0x00000004U +#define P4_ESCR_T1_OS 0x00000002U +#define P4_ESCR_T1_USR 0x00000001U + +#define P4_ESCR_EVENT(v) ((v) << P4_ESCR_EVENT_SHIFT) +#define P4_ESCR_EMASK(v) ((v) << P4_ESCR_EVENTMASK_SHIFT) +#define P4_ESCR_TAG(v) ((v) << P4_ESCR_TAG_SHIFT) + +/* Non HT mask */ +#define P4_ESCR_MASK \ + (P4_ESCR_EVENT_MASK | \ + P4_ESCR_EVENTMASK_MASK | \ + P4_ESCR_TAG_MASK | \ + P4_ESCR_TAG_ENABLE | \ + P4_ESCR_T0_OS | \ + P4_ESCR_T0_USR) + +/* HT mask */ +#define P4_ESCR_MASK_HT \ + (P4_ESCR_MASK | P4_ESCR_T1_OS | P4_ESCR_T1_USR) + +#define P4_CCCR_OVF 0x80000000U +#define P4_CCCR_CASCADE 0x40000000U +#define P4_CCCR_OVF_PMI_T0 0x04000000U +#define P4_CCCR_OVF_PMI_T1 0x08000000U +#define P4_CCCR_FORCE_OVF 0x02000000U +#define P4_CCCR_EDGE 0x01000000U +#define P4_CCCR_THRESHOLD_MASK 0x00f00000U +#define P4_CCCR_THRESHOLD_SHIFT 20 +#define P4_CCCR_COMPLEMENT 0x00080000U +#define P4_CCCR_COMPARE 0x00040000U +#define P4_CCCR_ESCR_SELECT_MASK 0x0000e000U +#define P4_CCCR_ESCR_SELECT_SHIFT 13 +#define P4_CCCR_ENABLE 0x00001000U +#define P4_CCCR_THREAD_SINGLE 0x00010000U +#define P4_CCCR_THREAD_BOTH 0x00020000U +#define P4_CCCR_THREAD_ANY 0x00030000U +#define P4_CCCR_RESERVED 0x00000fffU + +#define P4_CCCR_THRESHOLD(v) ((v) << P4_CCCR_THRESHOLD_SHIFT) +#define P4_CCCR_ESEL(v) ((v) << P4_CCCR_ESCR_SELECT_SHIFT) + +/* Custom bits in reerved CCCR area */ +#define P4_CCCR_CACHE_OPS_MASK 0x0000003fU + + +/* Non HT mask */ +#define P4_CCCR_MASK \ + (P4_CCCR_OVF | \ + P4_CCCR_CASCADE | \ + P4_CCCR_OVF_PMI_T0 | \ + P4_CCCR_FORCE_OVF | \ + P4_CCCR_EDGE | \ + P4_CCCR_THRESHOLD_MASK | \ + P4_CCCR_COMPLEMENT | \ + P4_CCCR_COMPARE | \ + P4_CCCR_ESCR_SELECT_MASK | \ + P4_CCCR_ENABLE) + +/* HT mask */ +#define P4_CCCR_MASK_HT (P4_CCCR_MASK | P4_CCCR_THREAD_ANY) + +#define P4_GEN_ESCR_EMASK(class, name, bit) \ + class##__##name = ((1 << bit) << P4_ESCR_EVENTMASK_SHIFT) +#define 
P4_ESCR_EMASK_BIT(class, name) class##__##name + +/* + * config field is 64bit width and consists of + * HT << 63 | ESCR << 32 | CCCR + * where HT is HyperThreading bit (since ESCR + * has it reserved we may use it for own purpose) + * + * note that this is NOT the addresses of respective + * ESCR and CCCR but rather an only packed value should + * be unpacked and written to a proper addresses + * + * the base idea is to pack as much info as + * possible + */ +#define p4_config_pack_escr(v) (((u64)(v)) << 32) +#define p4_config_pack_cccr(v) (((u64)(v)) & 0xffffffffULL) +#define p4_config_unpack_escr(v) (((u64)(v)) >> 32) +#define p4_config_unpack_cccr(v) (((u64)(v)) & 0xffffffffULL) + +#define p4_config_unpack_emask(v) \ + ({ \ + u32 t = p4_config_unpack_escr((v)); \ + t = t & P4_ESCR_EVENTMASK_MASK; \ + t = t >> P4_ESCR_EVENTMASK_SHIFT; \ + t; \ + }) + +#define p4_config_unpack_event(v) \ + ({ \ + u32 t = p4_config_unpack_escr((v)); \ + t = t & P4_ESCR_EVENT_MASK; \ + t = t >> P4_ESCR_EVENT_SHIFT; \ + t; \ + }) + +#define p4_config_unpack_cache_event(v) (((u64)(v)) & P4_CCCR_CACHE_OPS_MASK) + +#define P4_CONFIG_HT_SHIFT 63 +#define P4_CONFIG_HT (1ULL << P4_CONFIG_HT_SHIFT) + +static inline bool p4_is_event_cascaded(u64 config) +{ + u32 cccr = p4_config_unpack_cccr(config); + return !!(cccr & P4_CCCR_CASCADE); +} + +static inline int p4_ht_config_thread(u64 config) +{ + return !!(config & P4_CONFIG_HT); +} + +static inline u64 p4_set_ht_bit(u64 config) +{ + return config | P4_CONFIG_HT; +} + +static inline u64 p4_clear_ht_bit(u64 config) +{ + return config & ~P4_CONFIG_HT; +} + +static inline int p4_ht_active(void) +{ +#ifdef CONFIG_SMP + return smp_num_siblings > 1; +#endif + return 0; +} + +static inline int p4_ht_thread(int cpu) +{ +#ifdef CONFIG_SMP + if (smp_num_siblings == 2) + return cpu != cpumask_first(__get_cpu_var(cpu_sibling_map)); +#endif + return 0; +} + +static inline int p4_should_swap_ts(u64 config, int cpu) +{ + return p4_ht_config_thread(config) ^ p4_ht_thread(cpu); +} + +static inline u32 p4_default_cccr_conf(int cpu) +{ + /* + * Note that P4_CCCR_THREAD_ANY is "required" on + * non-HT machines (on HT machines we count TS events + * regardless the state of second logical processor + */ + u32 cccr = P4_CCCR_THREAD_ANY; + + if (!p4_ht_thread(cpu)) + cccr |= P4_CCCR_OVF_PMI_T0; + else + cccr |= P4_CCCR_OVF_PMI_T1; + + return cccr; +} + +static inline u32 p4_default_escr_conf(int cpu, int exclude_os, int exclude_usr) +{ + u32 escr = 0; + + if (!p4_ht_thread(cpu)) { + if (!exclude_os) + escr |= P4_ESCR_T0_OS; + if (!exclude_usr) + escr |= P4_ESCR_T0_USR; + } else { + if (!exclude_os) + escr |= P4_ESCR_T1_OS; + if (!exclude_usr) + escr |= P4_ESCR_T1_USR; + } + + return escr; +} + +enum P4_EVENTS { + P4_EVENT_TC_DELIVER_MODE, + P4_EVENT_BPU_FETCH_REQUEST, + P4_EVENT_ITLB_REFERENCE, + P4_EVENT_MEMORY_CANCEL, + P4_EVENT_MEMORY_COMPLETE, + P4_EVENT_LOAD_PORT_REPLAY, + P4_EVENT_STORE_PORT_REPLAY, + P4_EVENT_MOB_LOAD_REPLAY, + P4_EVENT_PAGE_WALK_TYPE, + P4_EVENT_BSQ_CACHE_REFERENCE, + P4_EVENT_IOQ_ALLOCATION, + P4_EVENT_IOQ_ACTIVE_ENTRIES, + P4_EVENT_FSB_DATA_ACTIVITY, + P4_EVENT_BSQ_ALLOCATION, + P4_EVENT_BSQ_ACTIVE_ENTRIES, + P4_EVENT_SSE_INPUT_ASSIST, + P4_EVENT_PACKED_SP_UOP, + P4_EVENT_PACKED_DP_UOP, + P4_EVENT_SCALAR_SP_UOP, + P4_EVENT_SCALAR_DP_UOP, + P4_EVENT_64BIT_MMX_UOP, + P4_EVENT_128BIT_MMX_UOP, + P4_EVENT_X87_FP_UOP, + P4_EVENT_TC_MISC, + P4_EVENT_GLOBAL_POWER_EVENTS, + P4_EVENT_TC_MS_XFER, + P4_EVENT_UOP_QUEUE_WRITES, + P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE, + 
P4_EVENT_RETIRED_BRANCH_TYPE, + P4_EVENT_RESOURCE_STALL, + P4_EVENT_WC_BUFFER, + P4_EVENT_B2B_CYCLES, + P4_EVENT_BNR, + P4_EVENT_SNOOP, + P4_EVENT_RESPONSE, + P4_EVENT_FRONT_END_EVENT, + P4_EVENT_EXECUTION_EVENT, + P4_EVENT_REPLAY_EVENT, + P4_EVENT_INSTR_RETIRED, + P4_EVENT_UOPS_RETIRED, + P4_EVENT_UOP_TYPE, + P4_EVENT_BRANCH_RETIRED, + P4_EVENT_MISPRED_BRANCH_RETIRED, + P4_EVENT_X87_ASSIST, + P4_EVENT_MACHINE_CLEAR, + P4_EVENT_INSTR_COMPLETED, +}; + +#define P4_OPCODE(event) event##_OPCODE +#define P4_OPCODE_ESEL(opcode) ((opcode & 0x00ff) >> 0) +#define P4_OPCODE_EVNT(opcode) ((opcode & 0xff00) >> 8) +#define P4_OPCODE_PACK(event, sel) (((event) << 8) | sel) + +/* + * Comments below the event represent ESCR restriction + * for this event and counter index per ESCR + * + * MSR_P4_IQ_ESCR0 and MSR_P4_IQ_ESCR1 are available only on early + * processor builds (family 0FH, models 01H-02H). These MSRs + * are not available on later versions, so that we don't use + * them completely + * + * Also note that CCCR1 do not have P4_CCCR_ENABLE bit properly + * working so that we should not use this CCCR and respective + * counter as result + */ +enum P4_EVENT_OPCODES { + P4_OPCODE(P4_EVENT_TC_DELIVER_MODE) = P4_OPCODE_PACK(0x01, 0x01), + /* + * MSR_P4_TC_ESCR0: 4, 5 + * MSR_P4_TC_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_BPU_FETCH_REQUEST) = P4_OPCODE_PACK(0x03, 0x00), + /* + * MSR_P4_BPU_ESCR0: 0, 1 + * MSR_P4_BPU_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_ITLB_REFERENCE) = P4_OPCODE_PACK(0x18, 0x03), + /* + * MSR_P4_ITLB_ESCR0: 0, 1 + * MSR_P4_ITLB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_MEMORY_CANCEL) = P4_OPCODE_PACK(0x02, 0x05), + /* + * MSR_P4_DAC_ESCR0: 8, 9 + * MSR_P4_DAC_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_MEMORY_COMPLETE) = P4_OPCODE_PACK(0x08, 0x02), + /* + * MSR_P4_SAAT_ESCR0: 8, 9 + * MSR_P4_SAAT_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_LOAD_PORT_REPLAY) = P4_OPCODE_PACK(0x04, 0x02), + /* + * MSR_P4_SAAT_ESCR0: 8, 9 + * MSR_P4_SAAT_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_STORE_PORT_REPLAY) = P4_OPCODE_PACK(0x05, 0x02), + /* + * MSR_P4_SAAT_ESCR0: 8, 9 + * MSR_P4_SAAT_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_MOB_LOAD_REPLAY) = P4_OPCODE_PACK(0x03, 0x02), + /* + * MSR_P4_MOB_ESCR0: 0, 1 + * MSR_P4_MOB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_PAGE_WALK_TYPE) = P4_OPCODE_PACK(0x01, 0x04), + /* + * MSR_P4_PMH_ESCR0: 0, 1 + * MSR_P4_PMH_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_BSQ_CACHE_REFERENCE) = P4_OPCODE_PACK(0x0c, 0x07), + /* + * MSR_P4_BSU_ESCR0: 0, 1 + * MSR_P4_BSU_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_IOQ_ALLOCATION) = P4_OPCODE_PACK(0x03, 0x06), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_IOQ_ACTIVE_ENTRIES) = P4_OPCODE_PACK(0x1a, 0x06), + /* + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_FSB_DATA_ACTIVITY) = P4_OPCODE_PACK(0x17, 0x06), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_BSQ_ALLOCATION) = P4_OPCODE_PACK(0x05, 0x07), + /* + * MSR_P4_BSU_ESCR0: 0, 1 + */ + + P4_OPCODE(P4_EVENT_BSQ_ACTIVE_ENTRIES) = P4_OPCODE_PACK(0x06, 0x07), + /* + * NOTE: no ESCR name in docs, it's guessed + * MSR_P4_BSU_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_SSE_INPUT_ASSIST) = P4_OPCODE_PACK(0x34, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_PACKED_SP_UOP) = P4_OPCODE_PACK(0x08, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_PACKED_DP_UOP) = P4_OPCODE_PACK(0x0c, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * 
MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_SCALAR_SP_UOP) = P4_OPCODE_PACK(0x0a, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_SCALAR_DP_UOP) = P4_OPCODE_PACK(0x0e, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_64BIT_MMX_UOP) = P4_OPCODE_PACK(0x02, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_128BIT_MMX_UOP) = P4_OPCODE_PACK(0x1a, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_X87_FP_UOP) = P4_OPCODE_PACK(0x04, 0x01), + /* + * MSR_P4_FIRM_ESCR0: 8, 9 + * MSR_P4_FIRM_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_TC_MISC) = P4_OPCODE_PACK(0x06, 0x01), + /* + * MSR_P4_TC_ESCR0: 4, 5 + * MSR_P4_TC_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_GLOBAL_POWER_EVENTS) = P4_OPCODE_PACK(0x13, 0x06), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_TC_MS_XFER) = P4_OPCODE_PACK(0x05, 0x00), + /* + * MSR_P4_MS_ESCR0: 4, 5 + * MSR_P4_MS_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_UOP_QUEUE_WRITES) = P4_OPCODE_PACK(0x09, 0x00), + /* + * MSR_P4_MS_ESCR0: 4, 5 + * MSR_P4_MS_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE) = P4_OPCODE_PACK(0x05, 0x02), + /* + * MSR_P4_TBPU_ESCR0: 4, 5 + * MSR_P4_TBPU_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_RETIRED_BRANCH_TYPE) = P4_OPCODE_PACK(0x04, 0x02), + /* + * MSR_P4_TBPU_ESCR0: 4, 5 + * MSR_P4_TBPU_ESCR1: 6, 7 + */ + + P4_OPCODE(P4_EVENT_RESOURCE_STALL) = P4_OPCODE_PACK(0x01, 0x01), + /* + * MSR_P4_ALF_ESCR0: 12, 13, 16 + * MSR_P4_ALF_ESCR1: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_WC_BUFFER) = P4_OPCODE_PACK(0x05, 0x05), + /* + * MSR_P4_DAC_ESCR0: 8, 9 + * MSR_P4_DAC_ESCR1: 10, 11 + */ + + P4_OPCODE(P4_EVENT_B2B_CYCLES) = P4_OPCODE_PACK(0x16, 0x03), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_BNR) = P4_OPCODE_PACK(0x08, 0x03), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_SNOOP) = P4_OPCODE_PACK(0x06, 0x03), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_RESPONSE) = P4_OPCODE_PACK(0x04, 0x03), + /* + * MSR_P4_FSB_ESCR0: 0, 1 + * MSR_P4_FSB_ESCR1: 2, 3 + */ + + P4_OPCODE(P4_EVENT_FRONT_END_EVENT) = P4_OPCODE_PACK(0x08, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_EXECUTION_EVENT) = P4_OPCODE_PACK(0x0c, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_REPLAY_EVENT) = P4_OPCODE_PACK(0x09, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_INSTR_RETIRED) = P4_OPCODE_PACK(0x02, 0x04), + /* + * MSR_P4_CRU_ESCR0: 12, 13, 16 + * MSR_P4_CRU_ESCR1: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_UOPS_RETIRED) = P4_OPCODE_PACK(0x01, 0x04), + /* + * MSR_P4_CRU_ESCR0: 12, 13, 16 + * MSR_P4_CRU_ESCR1: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_UOP_TYPE) = P4_OPCODE_PACK(0x02, 0x02), + /* + * MSR_P4_RAT_ESCR0: 12, 13, 16 + * MSR_P4_RAT_ESCR1: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_BRANCH_RETIRED) = P4_OPCODE_PACK(0x06, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_MISPRED_BRANCH_RETIRED) = P4_OPCODE_PACK(0x03, 0x04), + /* + * MSR_P4_CRU_ESCR0: 12, 13, 16 + * MSR_P4_CRU_ESCR1: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_X87_ASSIST) = P4_OPCODE_PACK(0x03, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 
16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_MACHINE_CLEAR) = P4_OPCODE_PACK(0x02, 0x05), + /* + * MSR_P4_CRU_ESCR2: 12, 13, 16 + * MSR_P4_CRU_ESCR3: 14, 15, 17 + */ + + P4_OPCODE(P4_EVENT_INSTR_COMPLETED) = P4_OPCODE_PACK(0x07, 0x04), + /* + * MSR_P4_CRU_ESCR0: 12, 13, 16 + * MSR_P4_CRU_ESCR1: 14, 15, 17 + */ +}; + +/* + * a caller should use P4_ESCR_EMASK_NAME helper to + * pick the EventMask needed, for example + * + * P4_ESCR_EMASK_NAME(P4_EVENT_TC_DELIVER_MODE, DD) + */ +enum P4_ESCR_EMASKS { + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, DD, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, DB, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, DI, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, BD, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, BB, 4), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, BI, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_TC_DELIVER_MODE, ID, 6), + + P4_GEN_ESCR_EMASK(P4_EVENT_BPU_FETCH_REQUEST, TCMISS, 0), + + P4_GEN_ESCR_EMASK(P4_EVENT_ITLB_REFERENCE, HIT, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_ITLB_REFERENCE, MISS, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_ITLB_REFERENCE, HIT_UK, 2), + + P4_GEN_ESCR_EMASK(P4_EVENT_MEMORY_CANCEL, ST_RB_FULL, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_MEMORY_CANCEL, 64K_CONF, 3), + + P4_GEN_ESCR_EMASK(P4_EVENT_MEMORY_COMPLETE, LSC, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_MEMORY_COMPLETE, SSC, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_LOAD_PORT_REPLAY, SPLIT_LD, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_STORE_PORT_REPLAY, SPLIT_ST, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_MOB_LOAD_REPLAY, NO_STA, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_MOB_LOAD_REPLAY, NO_STD, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_MOB_LOAD_REPLAY, PARTIAL_DATA, 4), + P4_GEN_ESCR_EMASK(P4_EVENT_MOB_LOAD_REPLAY, UNALGN_ADDR, 5), + + P4_GEN_ESCR_EMASK(P4_EVENT_PAGE_WALK_TYPE, DTMISS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_PAGE_WALK_TYPE, ITMISS, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITE, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITM, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITS, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITE, 4), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITM, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_MISS, 8), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_MISS, 9), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_CACHE_REFERENCE, WR_2ndL_MISS, 10), + + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, DEFAULT, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, ALL_READ, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, ALL_WRITE, 6), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, MEM_UC, 7), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, MEM_WC, 8), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, MEM_WT, 9), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, MEM_WP, 10), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, MEM_WB, 11), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, OWN, 13), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, OTHER, 14), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ALLOCATION, PREFETCH, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, DEFAULT, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, ALL_READ, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, ALL_WRITE, 6), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, MEM_UC, 7), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, MEM_WC, 8), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, MEM_WT, 9), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, MEM_WP, 10), + 
P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, MEM_WB, 11), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, OWN, 13), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, OTHER, 14), + P4_GEN_ESCR_EMASK(P4_EVENT_IOQ_ACTIVE_ENTRIES, PREFETCH, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DRDY_DRV, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DRDY_OWN, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DRDY_OTHER, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DBSY_DRV, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DBSY_OWN, 4), + P4_GEN_ESCR_EMASK(P4_EVENT_FSB_DATA_ACTIVITY, DBSY_OTHER, 5), + + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_TYPE0, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_TYPE1, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_LEN0, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_LEN1, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_IO_TYPE, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_LOCK_TYPE, 6), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_CACHE_TYPE, 7), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_SPLIT_TYPE, 8), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_DEM_TYPE, 9), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, REQ_ORD_TYPE, 10), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, MEM_TYPE0, 11), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, MEM_TYPE1, 12), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ALLOCATION, MEM_TYPE2, 13), + + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_TYPE0, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_TYPE1, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_LEN0, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_LEN1, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_IO_TYPE, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_LOCK_TYPE, 6), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_CACHE_TYPE, 7), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_SPLIT_TYPE, 8), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_DEM_TYPE, 9), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, REQ_ORD_TYPE, 10), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, MEM_TYPE0, 11), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, MEM_TYPE1, 12), + P4_GEN_ESCR_EMASK(P4_EVENT_BSQ_ACTIVE_ENTRIES, MEM_TYPE2, 13), + + P4_GEN_ESCR_EMASK(P4_EVENT_SSE_INPUT_ASSIST, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_PACKED_SP_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_PACKED_DP_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_SCALAR_SP_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_SCALAR_DP_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_64BIT_MMX_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_128BIT_MMX_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_X87_FP_UOP, ALL, 15), + + P4_GEN_ESCR_EMASK(P4_EVENT_TC_MISC, FLUSH, 4), + + P4_GEN_ESCR_EMASK(P4_EVENT_GLOBAL_POWER_EVENTS, RUNNING, 0), + + P4_GEN_ESCR_EMASK(P4_EVENT_TC_MS_XFER, CISC, 0), + + P4_GEN_ESCR_EMASK(P4_EVENT_UOP_QUEUE_WRITES, FROM_TC_BUILD, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_UOP_QUEUE_WRITES, FROM_TC_DELIVER, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_UOP_QUEUE_WRITES, FROM_ROM, 2), + + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE, CONDITIONAL, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE, CALL, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE, RETURN, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE, INDIRECT, 4), + + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_BRANCH_TYPE, CONDITIONAL, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_BRANCH_TYPE, CALL, 2), + 
P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_BRANCH_TYPE, RETURN, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_RETIRED_BRANCH_TYPE, INDIRECT, 4), + + P4_GEN_ESCR_EMASK(P4_EVENT_RESOURCE_STALL, SBFULL, 5), + + P4_GEN_ESCR_EMASK(P4_EVENT_WC_BUFFER, WCB_EVICTS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_WC_BUFFER, WCB_FULL_EVICTS, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_FRONT_END_EVENT, NBOGUS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_FRONT_END_EVENT, BOGUS, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, NBOGUS0, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, NBOGUS1, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, NBOGUS2, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, NBOGUS3, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, BOGUS0, 4), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, BOGUS1, 5), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, BOGUS2, 6), + P4_GEN_ESCR_EMASK(P4_EVENT_EXECUTION_EVENT, BOGUS3, 7), + + P4_GEN_ESCR_EMASK(P4_EVENT_REPLAY_EVENT, NBOGUS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_REPLAY_EVENT, BOGUS, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_RETIRED, NBOGUSNTAG, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_RETIRED, NBOGUSTAG, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_RETIRED, BOGUSNTAG, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_RETIRED, BOGUSTAG, 3), + + P4_GEN_ESCR_EMASK(P4_EVENT_UOPS_RETIRED, NBOGUS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_UOPS_RETIRED, BOGUS, 1), + + P4_GEN_ESCR_EMASK(P4_EVENT_UOP_TYPE, TAGLOADS, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_UOP_TYPE, TAGSTORES, 2), + + P4_GEN_ESCR_EMASK(P4_EVENT_BRANCH_RETIRED, MMNP, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_BRANCH_RETIRED, MMNM, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_BRANCH_RETIRED, MMTP, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_BRANCH_RETIRED, MMTM, 3), + + P4_GEN_ESCR_EMASK(P4_EVENT_MISPRED_BRANCH_RETIRED, NBOGUS, 0), + + P4_GEN_ESCR_EMASK(P4_EVENT_X87_ASSIST, FPSU, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_X87_ASSIST, FPSO, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_X87_ASSIST, POAO, 2), + P4_GEN_ESCR_EMASK(P4_EVENT_X87_ASSIST, POAU, 3), + P4_GEN_ESCR_EMASK(P4_EVENT_X87_ASSIST, PREA, 4), + + P4_GEN_ESCR_EMASK(P4_EVENT_MACHINE_CLEAR, CLEAR, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_MACHINE_CLEAR, MOCLEAR, 1), + P4_GEN_ESCR_EMASK(P4_EVENT_MACHINE_CLEAR, SMCLEAR, 2), + + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_COMPLETED, NBOGUS, 0), + P4_GEN_ESCR_EMASK(P4_EVENT_INSTR_COMPLETED, BOGUS, 1), +}; + +/* P4 PEBS: stale for a while */ +#define P4_PEBS_METRIC_MASK 0x00001fffU +#define P4_PEBS_UOB_TAG 0x01000000U +#define P4_PEBS_ENABLE 0x02000000U + +/* Replay metrics for MSR_IA32_PEBS_ENABLE and MSR_P4_PEBS_MATRIX_VERT */ +#define P4_PEBS__1stl_cache_load_miss_retired 0x3000001 +#define P4_PEBS__2ndl_cache_load_miss_retired 0x3000002 +#define P4_PEBS__dtlb_load_miss_retired 0x3000004 +#define P4_PEBS__dtlb_store_miss_retired 0x3000004 +#define P4_PEBS__dtlb_all_miss_retired 0x3000004 +#define P4_PEBS__tagged_mispred_branch 0x3018000 +#define P4_PEBS__mob_load_replay_retired 0x3000200 +#define P4_PEBS__split_load_retired 0x3000400 +#define P4_PEBS__split_store_retired 0x3000400 + +#define P4_VERT__1stl_cache_load_miss_retired 0x0000001 +#define P4_VERT__2ndl_cache_load_miss_retired 0x0000001 +#define P4_VERT__dtlb_load_miss_retired 0x0000001 +#define P4_VERT__dtlb_store_miss_retired 0x0000002 +#define P4_VERT__dtlb_all_miss_retired 0x0000003 +#define P4_VERT__tagged_mispred_branch 0x0000010 +#define P4_VERT__mob_load_replay_retired 0x0000001 +#define P4_VERT__split_load_retired 0x0000001 +#define P4_VERT__split_store_retired 0x0000002 + +enum P4_CACHE_EVENTS { + P4_CACHE__NONE, + + P4_CACHE__1stl_cache_load_miss_retired, + 
P4_CACHE__2ndl_cache_load_miss_retired, + P4_CACHE__dtlb_load_miss_retired, + P4_CACHE__dtlb_store_miss_retired, + P4_CACHE__itlb_reference_hit, + P4_CACHE__itlb_reference_miss, + + P4_CACHE__MAX +}; + +#endif /* PERF_EVENT_P4_H */ diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index b753ea59703a..5a51379dcbe4 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -21,7 +21,6 @@ struct mm_struct; #include <asm/msr.h> #include <asm/desc_defs.h> #include <asm/nops.h> -#include <asm/ds.h> #include <linux/personality.h> #include <linux/cpumask.h> @@ -29,6 +28,7 @@ struct mm_struct; #include <linux/threads.h> #include <linux/math64.h> #include <linux/init.h> +#include <linux/err.h> #define HBP_NUM 4 /* @@ -113,7 +113,6 @@ struct cpuinfo_x86 { /* Index into per_cpu list: */ u16 cpu_index; #endif - unsigned int x86_hyper_vendor; } __attribute__((__aligned__(SMP_CACHE_BYTES))); #define X86_VENDOR_INTEL 0 @@ -127,9 +126,6 @@ struct cpuinfo_x86 { #define X86_VENDOR_UNKNOWN 0xff -#define X86_HYPER_VENDOR_NONE 0 -#define X86_HYPER_VENDOR_VMWARE 1 - /* * capabilities of CPUs */ @@ -380,6 +376,10 @@ union thread_xstate { struct xsave_struct xsave; }; +struct fpu { + union thread_xstate *state; +}; + #ifdef CONFIG_X86_64 DECLARE_PER_CPU(struct orig_ist, orig_ist); @@ -457,7 +457,7 @@ struct thread_struct { unsigned long trap_no; unsigned long error_code; /* floating point and extended processor state */ - union thread_xstate *xstate; + struct fpu fpu; #ifdef CONFIG_X86_32 /* Virtual 86 mode info */ struct vm86_struct __user *vm86_info; @@ -473,10 +473,6 @@ struct thread_struct { unsigned long iopl; /* Max allowed port in the bitmap, in bytes: */ unsigned io_bitmap_max; -/* MSR_IA32_DEBUGCTLMSR value to switch in if TIF_DEBUGCTLMSR is set. */ - unsigned long debugctlmsr; - /* Debug Store context; see asm/ds.h */ - struct ds_context *ds_ctx; }; static inline unsigned long native_get_debugreg(int regno) @@ -803,7 +799,7 @@ extern void cpu_init(void); static inline unsigned long get_debugctlmsr(void) { - unsigned long debugctlmsr = 0; + unsigned long debugctlmsr = 0; #ifndef CONFIG_X86_DEBUGCTLMSR if (boot_cpu_data.x86 < 6) @@ -811,21 +807,6 @@ static inline unsigned long get_debugctlmsr(void) #endif rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr); - return debugctlmsr; -} - -static inline unsigned long get_debugctlmsr_on_cpu(int cpu) -{ - u64 debugctlmsr = 0; - u32 val1, val2; - -#ifndef CONFIG_X86_DEBUGCTLMSR - if (boot_cpu_data.x86 < 6) - return 0; -#endif - rdmsr_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, &val1, &val2); - debugctlmsr = val1 | ((u64)val2 << 32); - return debugctlmsr; } @@ -838,18 +819,6 @@ static inline void update_debugctlmsr(unsigned long debugctlmsr) wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr); } -static inline void update_debugctlmsr_on_cpu(int cpu, - unsigned long debugctlmsr) -{ -#ifndef CONFIG_X86_DEBUGCTLMSR - if (boot_cpu_data.x86 < 6) - return; -#endif - wrmsr_on_cpu(cpu, MSR_IA32_DEBUGCTLMSR, - (u32)((u64)debugctlmsr), - (u32)((u64)debugctlmsr >> 32)); -} - /* * from system description table in BIOS. Mostly for MCA use, but * others may find it useful: diff --git a/arch/x86/include/asm/ptrace-abi.h b/arch/x86/include/asm/ptrace-abi.h index 86723035a515..52b098a6eebb 100644 --- a/arch/x86/include/asm/ptrace-abi.h +++ b/arch/x86/include/asm/ptrace-abi.h @@ -82,61 +82,6 @@ #ifndef __ASSEMBLY__ #include <linux/types.h> - -/* configuration/status structure used in PTRACE_BTS_CONFIG and - PTRACE_BTS_STATUS commands. 
-*/ -struct ptrace_bts_config { - /* requested or actual size of BTS buffer in bytes */ - __u32 size; - /* bitmask of below flags */ - __u32 flags; - /* buffer overflow signal */ - __u32 signal; - /* actual size of bts_struct in bytes */ - __u32 bts_size; -}; -#endif /* __ASSEMBLY__ */ - -#define PTRACE_BTS_O_TRACE 0x1 /* branch trace */ -#define PTRACE_BTS_O_SCHED 0x2 /* scheduling events w/ jiffies */ -#define PTRACE_BTS_O_SIGNAL 0x4 /* send SIG<signal> on buffer overflow - instead of wrapping around */ -#define PTRACE_BTS_O_ALLOC 0x8 /* (re)allocate buffer */ - -#define PTRACE_BTS_CONFIG 40 -/* Configure branch trace recording. - ADDR points to a struct ptrace_bts_config. - DATA gives the size of that buffer. - A new buffer is allocated, if requested in the flags. - An overflow signal may only be requested for new buffers. - Returns the number of bytes read. -*/ -#define PTRACE_BTS_STATUS 41 -/* Return the current configuration in a struct ptrace_bts_config - pointed to by ADDR; DATA gives the size of that buffer. - Returns the number of bytes written. -*/ -#define PTRACE_BTS_SIZE 42 -/* Return the number of available BTS records for draining. - DATA and ADDR are ignored. -*/ -#define PTRACE_BTS_GET 43 -/* Get a single BTS record. - DATA defines the index into the BTS array, where 0 is the newest - entry, and higher indices refer to older entries. - ADDR is pointing to struct bts_struct (see asm/ds.h). -*/ -#define PTRACE_BTS_CLEAR 44 -/* Clear the BTS buffer. - DATA and ADDR are ignored. -*/ -#define PTRACE_BTS_DRAIN 45 -/* Read all available BTS records and clear the buffer. - ADDR points to an array of struct bts_struct. - DATA gives the size of that buffer. - BTS records are read from oldest to newest. - Returns number of BTS records drained. 
-*/ +#endif #endif /* _ASM_X86_PTRACE_ABI_H */ diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h index 69a686a7dff0..78cd1ea94500 100644 --- a/arch/x86/include/asm/ptrace.h +++ b/arch/x86/include/asm/ptrace.h @@ -289,12 +289,6 @@ extern int do_get_thread_area(struct task_struct *p, int idx, extern int do_set_thread_area(struct task_struct *p, int idx, struct user_desc __user *info, int can_allocate); -#ifdef CONFIG_X86_PTRACE_BTS -extern void ptrace_bts_untrace(struct task_struct *tsk); - -#define arch_ptrace_untrace(tsk) ptrace_bts_untrace(tsk) -#endif /* CONFIG_X86_PTRACE_BTS */ - #endif /* __KERNEL__ */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h index e0d28901e969..d4092fac226b 100644 --- a/arch/x86/include/asm/thread_info.h +++ b/arch/x86/include/asm/thread_info.h @@ -92,8 +92,7 @@ struct thread_info { #define TIF_IO_BITMAP 22 /* uses I/O bitmap */ #define TIF_FREEZE 23 /* is freezing for suspend */ #define TIF_FORCED_TF 24 /* true if TF in eflags artificially */ -#define TIF_DEBUGCTLMSR 25 /* uses thread_struct.debugctlmsr */ -#define TIF_DS_AREA_MSR 26 /* uses thread_struct.ds_area_msr */ +#define TIF_BLOCKSTEP 25 /* set when we want DEBUGCTLMSR_BTF */ #define TIF_LAZY_MMU_UPDATES 27 /* task is updating the mmu lazily */ #define TIF_SYSCALL_TRACEPOINT 28 /* syscall tracepoint instrumentation */ @@ -115,8 +114,7 @@ struct thread_info { #define _TIF_IO_BITMAP (1 << TIF_IO_BITMAP) #define _TIF_FREEZE (1 << TIF_FREEZE) #define _TIF_FORCED_TF (1 << TIF_FORCED_TF) -#define _TIF_DEBUGCTLMSR (1 << TIF_DEBUGCTLMSR) -#define _TIF_DS_AREA_MSR (1 << TIF_DS_AREA_MSR) +#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP) #define _TIF_LAZY_MMU_UPDATES (1 << TIF_LAZY_MMU_UPDATES) #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT) @@ -147,7 +145,7 @@ struct thread_info { /* flags to check in __switch_to() */ #define _TIF_WORK_CTXSW \ - (_TIF_IO_BITMAP|_TIF_DEBUGCTLMSR|_TIF_DS_AREA_MSR|_TIF_NOTSC) + (_TIF_IO_BITMAP|_TIF_NOTSC|_TIF_BLOCKSTEP) #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY) #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW|_TIF_DEBUG) @@ -244,7 +242,6 @@ static inline struct thread_info *current_thread_info(void) #define TS_POLLING 0x0004 /* true if in idle loop and not sleeping */ #define TS_RESTORE_SIGMASK 0x0008 /* restore signal mask in do_signal() */ -#define TS_XSAVE 0x0010 /* Use xsave/xrstor */ #define tsk_is_polling(t) (task_thread_info(t)->status & TS_POLLING) diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 4da91ad69e0d..f66cda56781d 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -79,7 +79,7 @@ static inline int get_si_code(unsigned long condition) extern int panic_on_unrecovered_nmi; -void math_error(void __user *); +void math_error(struct pt_regs *, int, int); void math_emulate(struct math_emu_info *); #ifndef CONFIG_X86_32 asmlinkage void smp_thermal_interrupt(void); diff --git a/arch/x86/include/asm/uv/uv_bau.h b/arch/x86/include/asm/uv/uv_bau.h index b414d2b401f6..aa558ac0306e 100644 --- a/arch/x86/include/asm/uv/uv_bau.h +++ b/arch/x86/include/asm/uv/uv_bau.h @@ -27,13 +27,14 @@ * set 2 is at BASE + 2*512, set 3 at BASE + 3*512, and so on. * * We will use 31 sets, one for sending BAU messages from each of the 32 - * cpu's on the node. + * cpu's on the uvhub. * * TLB shootdown will use the first of the 8 descriptors of each set. 
* Each of the descriptors is 64 bytes in size (8*64 = 512 bytes in a set). */ #define UV_ITEMS_PER_DESCRIPTOR 8 +#define MAX_BAU_CONCURRENT 3 #define UV_CPUS_PER_ACT_STATUS 32 #define UV_ACT_STATUS_MASK 0x3 #define UV_ACT_STATUS_SIZE 2 @@ -45,6 +46,9 @@ #define UV_PAYLOADQ_PNODE_SHIFT 49 #define UV_PTC_BASENAME "sgi_uv/ptc_statistics" #define uv_physnodeaddr(x) ((__pa((unsigned long)(x)) & uv_mmask)) +#define UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT 15 +#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT 16 +#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD 0x000000000bUL /* * bits in UVH_LB_BAU_SB_ACTIVATION_STATUS_0/1 @@ -55,15 +59,29 @@ #define DESC_STATUS_SOURCE_TIMEOUT 3 /* - * source side thresholds at which message retries print a warning + * source side threshholds at which message retries print a warning */ #define SOURCE_TIMEOUT_LIMIT 20 #define DESTINATION_TIMEOUT_LIMIT 20 /* + * misc. delays, in microseconds + */ +#define THROTTLE_DELAY 10 +#define TIMEOUT_DELAY 10 +#define BIOS_TO 1000 +/* BIOS is assumed to set the destination timeout to 1003520 nanoseconds */ + +/* + * threshholds at which to use IPI to free resources + */ +#define PLUGSB4RESET 100 +#define TIMEOUTSB4RESET 100 + +/* * number of entries in the destination side payload queue */ -#define DEST_Q_SIZE 17 +#define DEST_Q_SIZE 20 /* * number of destination side software ack resources */ @@ -72,9 +90,10 @@ /* * completion statuses for sending a TLB flush message */ -#define FLUSH_RETRY 1 -#define FLUSH_GIVEUP 2 -#define FLUSH_COMPLETE 3 +#define FLUSH_RETRY_PLUGGED 1 +#define FLUSH_RETRY_TIMEOUT 2 +#define FLUSH_GIVEUP 3 +#define FLUSH_COMPLETE 4 /* * Distribution: 32 bytes (256 bits) (bytes 0-0x1f of descriptor) @@ -86,14 +105,14 @@ * 'base_dest_nodeid' field of the header corresponds to the * destination nodeID associated with that specified bit. */ -struct bau_target_nodemask { - unsigned long bits[BITS_TO_LONGS(256)]; +struct bau_target_uvhubmask { + unsigned long bits[BITS_TO_LONGS(UV_DISTRIBUTION_SIZE)]; }; /* - * mask of cpu's on a node + * mask of cpu's on a uvhub * (during initialization we need to check that unsigned long has - * enough bits for max. cpu's per node) + * enough bits for max. 
cpu's per uvhub) */ struct bau_local_cpumask { unsigned long bits; @@ -135,8 +154,8 @@ struct bau_msg_payload { struct bau_msg_header { unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */ /* bits 5:0 */ - unsigned int base_dest_nodeid:15; /* nasid>>1 (pnode) of */ - /* bits 20:6 */ /* first bit in node_map */ + unsigned int base_dest_nodeid:15; /* nasid (pnode<<1) of */ + /* bits 20:6 */ /* first bit in uvhub map */ unsigned int command:8; /* message type */ /* bits 28:21 */ /* 0x38: SN3net EndPoint Message */ @@ -146,26 +165,38 @@ struct bau_msg_header { unsigned int rsvd_2:9; /* must be zero */ /* bits 40:32 */ /* Suppl_A is 56-41 */ - unsigned int payload_2a:8;/* becomes byte 16 of msg */ - /* bits 48:41 */ /* not currently using */ - unsigned int payload_2b:8;/* becomes byte 17 of msg */ - /* bits 56:49 */ /* not currently using */ + unsigned int sequence:16;/* message sequence number */ + /* bits 56:41 */ /* becomes bytes 16-17 of msg */ /* Address field (96:57) is never used as an address (these are address bits 42:3) */ + unsigned int rsvd_3:1; /* must be zero */ /* bit 57 */ /* address bits 27:4 are payload */ - /* these 24 bits become bytes 12-14 of msg */ + /* these next 24 (58-81) bits become bytes 12-14 of msg */ + + /* bits 65:58 land in byte 12 */ unsigned int replied_to:1;/* sent as 0 by the source to byte 12 */ /* bit 58 */ - - unsigned int payload_1a:5;/* not currently used */ - /* bits 63:59 */ - unsigned int payload_1b:8;/* not currently used */ - /* bits 71:64 */ - unsigned int payload_1c:8;/* not currently used */ - /* bits 79:72 */ - unsigned int payload_1d:2;/* not currently used */ + unsigned int msg_type:3; /* software type of the message*/ + /* bits 61:59 */ + unsigned int canceled:1; /* message canceled, resource to be freed*/ + /* bit 62 */ + unsigned int payload_1a:1;/* not currently used */ + /* bit 63 */ + unsigned int payload_1b:2;/* not currently used */ + /* bits 65:64 */ + + /* bits 73:66 land in byte 13 */ + unsigned int payload_1ca:6;/* not currently used */ + /* bits 71:66 */ + unsigned int payload_1c:2;/* not currently used */ + /* bits 73:72 */ + + /* bits 81:74 land in byte 14 */ + unsigned int payload_1d:6;/* not currently used */ + /* bits 79:74 */ + unsigned int payload_1e:2;/* not currently used */ /* bits 81:80 */ unsigned int rsvd_4:7; /* must be zero */ @@ -178,7 +209,7 @@ struct bau_msg_header { /* bits 95:90 */ unsigned int rsvd_6:5; /* must be zero */ /* bits 100:96 */ - unsigned int int_both:1;/* if 1, interrupt both sockets on the blade */ + unsigned int int_both:1;/* if 1, interrupt both sockets on the uvhub */ /* bit 101*/ unsigned int fairness:3;/* usually zero */ /* bits 104:102 */ @@ -191,13 +222,18 @@ struct bau_msg_header { /* bits 127:107 */ }; +/* see msg_type: */ +#define MSG_NOOP 0 +#define MSG_REGULAR 1 +#define MSG_RETRY 2 + /* * The activation descriptor: * The format of the message to send, plus all accompanying control * Should be 64 bytes */ struct bau_desc { - struct bau_target_nodemask distribution; + struct bau_target_uvhubmask distribution; /* * message template, consisting of header and payload: */ @@ -237,19 +273,25 @@ struct bau_payload_queue_entry { unsigned short acknowledge_count; /* filled in by destination */ /* 16 bits, bytes 10-11 */ - unsigned short replied_to:1; /* sent as 0 by the source */ - /* 1 bit */ - unsigned short unused1:7; /* not currently using */ - /* 7 bits: byte 12) */ + /* these next 3 bytes come from bits 58-81 of the message header */ + unsigned short replied_to:1; /* sent as 0 
by the source */ + unsigned short msg_type:3; /* software message type */ + unsigned short canceled:1; /* sent as 0 by the source */ + unsigned short unused1:3; /* not currently using */ + /* byte 12 */ - unsigned char unused2[2]; /* not currently using */ - /* bytes 13-14 */ + unsigned char unused2a; /* not currently using */ + /* byte 13 */ + unsigned char unused2; /* not currently using */ + /* byte 14 */ unsigned char sw_ack_vector; /* filled in by the hardware */ /* byte 15 (bits 127:120) */ - unsigned char unused4[3]; /* not currently using bytes 17-19 */ - /* bytes 17-19 */ + unsigned short sequence; /* message sequence number */ + /* bytes 16-17 */ + unsigned char unused4[2]; /* not currently using bytes 18-19 */ + /* bytes 18-19 */ int number_of_cpus; /* filled in at destination */ /* 32 bits, bytes 20-23 (aligned) */ @@ -259,63 +301,93 @@ struct bau_payload_queue_entry { }; /* - * one for every slot in the destination payload queue - */ -struct bau_msg_status { - struct bau_local_cpumask seen_by; /* map of cpu's */ -}; - -/* - * one for every slot in the destination software ack resources - */ -struct bau_sw_ack_status { - struct bau_payload_queue_entry *msg; /* associated message */ - int watcher; /* cpu monitoring, or -1 */ -}; - -/* - * one on every node and per-cpu; to locate the software tables + * one per-cpu; to locate the software tables */ struct bau_control { struct bau_desc *descriptor_base; - struct bau_payload_queue_entry *bau_msg_head; struct bau_payload_queue_entry *va_queue_first; struct bau_payload_queue_entry *va_queue_last; - struct bau_msg_status *msg_statuses; - int *watching; /* pointer to array */ + struct bau_payload_queue_entry *bau_msg_head; + struct bau_control *uvhub_master; + struct bau_control *socket_master; + unsigned long timeout_interval; + atomic_t active_descriptor_count; + int max_concurrent; + int max_concurrent_constant; + int retry_message_scans; + int plugged_tries; + int timeout_tries; + int ipi_attempts; + int conseccompletes; + short cpu; + short uvhub_cpu; + short uvhub; + short cpus_in_socket; + short cpus_in_uvhub; + unsigned short message_number; + unsigned short uvhub_quiesce; + short socket_acknowledge_count[DEST_Q_SIZE]; + cycles_t send_message; + spinlock_t masks_lock; + spinlock_t uvhub_lock; + spinlock_t queue_lock; }; /* * This structure is allocated per_cpu for UV TLB shootdown statistics. 
*/ struct ptc_stats { - unsigned long ptc_i; /* number of IPI-style flushes */ - unsigned long requestor; /* number of nodes this cpu sent to */ - unsigned long requestee; /* times cpu was remotely requested */ - unsigned long alltlb; /* times all tlb's on this cpu were flushed */ - unsigned long onetlb; /* times just one tlb on this cpu was flushed */ - unsigned long s_retry; /* retries on source side timeouts */ - unsigned long d_retry; /* retries on destination side timeouts */ - unsigned long sflush; /* cycles spent in uv_flush_tlb_others */ - unsigned long dflush; /* cycles spent on destination side */ - unsigned long retriesok; /* successes on retries */ - unsigned long nomsg; /* interrupts with no message */ - unsigned long multmsg; /* interrupts with multiple messages */ - unsigned long ntargeted;/* nodes targeted */ + /* sender statistics */ + unsigned long s_giveup; /* number of fall backs to IPI-style flushes */ + unsigned long s_requestor; /* number of shootdown requests */ + unsigned long s_stimeout; /* source side timeouts */ + unsigned long s_dtimeout; /* destination side timeouts */ + unsigned long s_time; /* time spent in sending side */ + unsigned long s_retriesok; /* successful retries */ + unsigned long s_ntargcpu; /* number of cpus targeted */ + unsigned long s_ntarguvhub; /* number of uvhubs targeted */ + unsigned long s_ntarguvhub16; /* number of times >= 16 target hubs */ + unsigned long s_ntarguvhub8; /* number of times >= 8 target hubs */ + unsigned long s_ntarguvhub4; /* number of times >= 4 target hubs */ + unsigned long s_ntarguvhub2; /* number of times >= 2 target hubs */ + unsigned long s_ntarguvhub1; /* number of times == 1 target hub */ + unsigned long s_resets_plug; /* ipi-style resets from plug state */ + unsigned long s_resets_timeout; /* ipi-style resets from timeouts */ + unsigned long s_busy; /* status stayed busy past s/w timer */ + unsigned long s_throttles; /* waits in throttle */ + unsigned long s_retry_messages; /* retry broadcasts */ + /* destination statistics */ + unsigned long d_alltlb; /* times all tlb's on this cpu were flushed */ + unsigned long d_onetlb; /* times just one tlb on this cpu was flushed */ + unsigned long d_multmsg; /* interrupts with multiple messages */ + unsigned long d_nomsg; /* interrupts with no message */ + unsigned long d_time; /* time spent on destination side */ + unsigned long d_requestee; /* number of messages processed */ + unsigned long d_retries; /* number of retry messages processed */ + unsigned long d_canceled; /* number of messages canceled by retries */ + unsigned long d_nocanceled; /* retries that found nothing to cancel */ + unsigned long d_resets; /* number of ipi-style requests processed */ + unsigned long d_rcanceled; /* number of messages canceled by resets */ }; -static inline int bau_node_isset(int node, struct bau_target_nodemask *dstp) +static inline int bau_uvhub_isset(int uvhub, struct bau_target_uvhubmask *dstp) { - return constant_test_bit(node, &dstp->bits[0]); + return constant_test_bit(uvhub, &dstp->bits[0]); } -static inline void bau_node_set(int node, struct bau_target_nodemask *dstp) +static inline void bau_uvhub_set(int uvhub, struct bau_target_uvhubmask *dstp) { - __set_bit(node, &dstp->bits[0]); + __set_bit(uvhub, &dstp->bits[0]); } -static inline void bau_nodes_clear(struct bau_target_nodemask *dstp, int nbits) +static inline void bau_uvhubs_clear(struct bau_target_uvhubmask *dstp, + int nbits) { bitmap_zero(&dstp->bits[0], nbits); } +static inline int bau_uvhub_weight(struct 
bau_target_uvhubmask *dstp) +{ + return bitmap_weight((unsigned long *)&dstp->bits[0], + UV_DISTRIBUTION_SIZE); +} static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits) { @@ -328,4 +400,35 @@ static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits) extern void uv_bau_message_intr1(void); extern void uv_bau_timeout_intr1(void); +struct atomic_short { + short counter; +}; + +/** + * atomic_read_short - read a short atomic variable + * @v: pointer of type atomic_short + * + * Atomically reads the value of @v. + */ +static inline int atomic_read_short(const struct atomic_short *v) +{ + return v->counter; +} + +/** + * atomic_add_short_return - add and return a short int + * @i: short value to add + * @v: pointer of type atomic_short + * + * Atomically adds @i to @v and returns @i + @v + */ +static inline int atomic_add_short_return(short i, struct atomic_short *v) +{ + short __i = i; + asm volatile(LOCK_PREFIX "xaddw %0, %1" + : "+r" (i), "+m" (v->counter) + : : "memory"); + return i + __i; +} + #endif /* _ASM_X86_UV_UV_BAU_H */ diff --git a/arch/x86/include/asm/uv/uv_hub.h b/arch/x86/include/asm/uv/uv_hub.h index 14cc74ba5d23..bf6b88ef8eeb 100644 --- a/arch/x86/include/asm/uv/uv_hub.h +++ b/arch/x86/include/asm/uv/uv_hub.h @@ -307,7 +307,7 @@ static inline unsigned long uv_read_global_mmr32(int pnode, unsigned long offset * Access Global MMR space using the MMR space located at the top of physical * memory. */ -static inline unsigned long *uv_global_mmr64_address(int pnode, unsigned long offset) +static inline volatile void __iomem *uv_global_mmr64_address(int pnode, unsigned long offset) { return __va(UV_GLOBAL_MMR64_BASE | UV_GLOBAL_MMR64_PNODE_BITS(pnode) | offset); diff --git a/arch/x86/include/asm/uv/uv_mmrs.h b/arch/x86/include/asm/uv/uv_mmrs.h index 2cae46c7c8a2..b2f2d2e05cec 100644 --- a/arch/x86/include/asm/uv/uv_mmrs.h +++ b/arch/x86/include/asm/uv/uv_mmrs.h @@ -1,4 +1,3 @@ - /* * This file is subject to the terms and conditions of the GNU General Public * License. 
See the file "COPYING" in the main directory of this archive @@ -15,13 +14,25 @@ #define UV_MMR_ENABLE (1UL << 63) /* ========================================================================= */ +/* UVH_BAU_DATA_BROADCAST */ +/* ========================================================================= */ +#define UVH_BAU_DATA_BROADCAST 0x61688UL +#define UVH_BAU_DATA_BROADCAST_32 0x0440 + +#define UVH_BAU_DATA_BROADCAST_ENABLE_SHFT 0 +#define UVH_BAU_DATA_BROADCAST_ENABLE_MASK 0x0000000000000001UL + +union uvh_bau_data_broadcast_u { + unsigned long v; + struct uvh_bau_data_broadcast_s { + unsigned long enable : 1; /* RW */ + unsigned long rsvd_1_63: 63; /* */ + } s; +}; + +/* ========================================================================= */ /* UVH_BAU_DATA_CONFIG */ /* ========================================================================= */ -#define UVH_LB_BAU_MISC_CONTROL 0x320170UL -#define UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT 15 -#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT 16 -#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD 0x000000000bUL -/* 1011 timebase 7 (168millisec) * 3 ticks -> 500ms */ #define UVH_BAU_DATA_CONFIG 0x61680UL #define UVH_BAU_DATA_CONFIG_32 0x0438 @@ -604,6 +615,68 @@ union uvh_lb_bau_intd_software_acknowledge_u { #define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS_32 0x0a70 /* ========================================================================= */ +/* UVH_LB_BAU_MISC_CONTROL */ +/* ========================================================================= */ +#define UVH_LB_BAU_MISC_CONTROL 0x320170UL +#define UVH_LB_BAU_MISC_CONTROL_32 0x00a10 + +#define UVH_LB_BAU_MISC_CONTROL_REJECTION_DELAY_SHFT 0 +#define UVH_LB_BAU_MISC_CONTROL_REJECTION_DELAY_MASK 0x00000000000000ffUL +#define UVH_LB_BAU_MISC_CONTROL_APIC_MODE_SHFT 8 +#define UVH_LB_BAU_MISC_CONTROL_APIC_MODE_MASK 0x0000000000000100UL +#define UVH_LB_BAU_MISC_CONTROL_FORCE_BROADCAST_SHFT 9 +#define UVH_LB_BAU_MISC_CONTROL_FORCE_BROADCAST_MASK 0x0000000000000200UL +#define UVH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_SHFT 10 +#define UVH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_MASK 0x0000000000000400UL +#define UVH_LB_BAU_MISC_CONTROL_CSI_AGENT_PRESENCE_VECTOR_SHFT 11 +#define UVH_LB_BAU_MISC_CONTROL_CSI_AGENT_PRESENCE_VECTOR_MASK 0x0000000000003800UL +#define UVH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_SHFT 14 +#define UVH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_MASK 0x0000000000004000UL +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT 15 +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK 0x0000000000008000UL +#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT 16 +#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK 0x00000000000f0000UL +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_SHFT 20 +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_MASK 0x0000000000100000UL +#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_SHFT 21 +#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_MASK 0x0000000000200000UL +#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_SHFT 22 +#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_MASK 0x0000000000400000UL +#define UVH_LB_BAU_MISC_CONTROL_SUPPRESS_DEST_REGISTRATION_SHFT 23 +#define UVH_LB_BAU_MISC_CONTROL_SUPPRESS_DEST_REGISTRATION_MASK 0x0000000000800000UL +#define UVH_LB_BAU_MISC_CONTROL_PROGRAMMED_INITIAL_PRIORITY_SHFT 24 +#define UVH_LB_BAU_MISC_CONTROL_PROGRAMMED_INITIAL_PRIORITY_MASK 0x0000000007000000UL +#define UVH_LB_BAU_MISC_CONTROL_USE_INCOMING_PRIORITY_SHFT 27 
+#define UVH_LB_BAU_MISC_CONTROL_USE_INCOMING_PRIORITY_MASK 0x0000000008000000UL +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_PROGRAMMED_INITIAL_PRIORITY_SHFT 28 +#define UVH_LB_BAU_MISC_CONTROL_ENABLE_PROGRAMMED_INITIAL_PRIORITY_MASK 0x0000000010000000UL +#define UVH_LB_BAU_MISC_CONTROL_FUN_SHFT 48 +#define UVH_LB_BAU_MISC_CONTROL_FUN_MASK 0xffff000000000000UL + +union uvh_lb_bau_misc_control_u { + unsigned long v; + struct uvh_lb_bau_misc_control_s { + unsigned long rejection_delay : 8; /* RW */ + unsigned long apic_mode : 1; /* RW */ + unsigned long force_broadcast : 1; /* RW */ + unsigned long force_lock_nop : 1; /* RW */ + unsigned long csi_agent_presence_vector : 3; /* RW */ + unsigned long descriptor_fetch_mode : 1; /* RW */ + unsigned long enable_intd_soft_ack_mode : 1; /* RW */ + unsigned long intd_soft_ack_timeout_period : 4; /* RW */ + unsigned long enable_dual_mapping_mode : 1; /* RW */ + unsigned long vga_io_port_decode_enable : 1; /* RW */ + unsigned long vga_io_port_16_bit_decode : 1; /* RW */ + unsigned long suppress_dest_registration : 1; /* RW */ + unsigned long programmed_initial_priority : 3; /* RW */ + unsigned long use_incoming_priority : 1; /* RW */ + unsigned long enable_programmed_initial_priority : 1; /* RW */ + unsigned long rsvd_29_47 : 19; /* */ + unsigned long fun : 16; /* RW */ + } s; +}; + +/* ========================================================================= */ /* UVH_LB_BAU_SB_ACTIVATION_CONTROL */ /* ========================================================================= */ #define UVH_LB_BAU_SB_ACTIVATION_CONTROL 0x320020UL @@ -681,334 +754,6 @@ union uvh_lb_bau_sb_descriptor_base_u { }; /* ========================================================================= */ -/* UVH_LB_MCAST_AOERR0_RPT_ENABLE */ -/* ========================================================================= */ -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE 0x50b20UL - -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_OBESE_MSG_SHFT 0 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_OBESE_MSG_MASK 0x0000000000000001UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_DATA_SB_ERR_SHFT 1 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_DATA_SB_ERR_MASK 0x0000000000000002UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_NACK_BUFF_PARITY_SHFT 2 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_NACK_BUFF_PARITY_MASK 0x0000000000000004UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_TIMEOUT_SHFT 3 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_TIMEOUT_MASK 0x0000000000000008UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_INACTIVE_REPLY_SHFT 4 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_INACTIVE_REPLY_MASK 0x0000000000000010UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_UPGRADE_ERROR_SHFT 5 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_UPGRADE_ERROR_MASK 0x0000000000000020UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_REG_COUNT_UNDERFLOW_SHFT 6 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_REG_COUNT_UNDERFLOW_MASK 0x0000000000000040UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_REP_OBESE_MSG_SHFT 7 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MCAST_REP_OBESE_MSG_MASK 0x0000000000000080UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_RUNT_MSG_SHFT 8 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_RUNT_MSG_MASK 0x0000000000000100UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_OBESE_MSG_SHFT 9 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_OBESE_MSG_MASK 0x0000000000000200UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_DATA_SB_ERR_SHFT 10 -#define 
UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REQ_DATA_SB_ERR_MASK 0x0000000000000400UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_RUNT_MSG_SHFT 11 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_RUNT_MSG_MASK 0x0000000000000800UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_OBESE_MSG_SHFT 12 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_OBESE_MSG_MASK 0x0000000000001000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_DATA_SB_ERR_SHFT 13 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_DATA_SB_ERR_MASK 0x0000000000002000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_COMMAND_ERR_SHFT 14 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_REP_COMMAND_ERR_MASK 0x0000000000004000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_PEND_TIMEOUT_SHFT 15 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_UCACHE_PEND_TIMEOUT_MASK 0x0000000000008000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_RUNT_MSG_SHFT 16 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_RUNT_MSG_MASK 0x0000000000010000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_OBESE_MSG_SHFT 17 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_OBESE_MSG_MASK 0x0000000000020000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_DATA_SB_ERR_SHFT 18 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REQ_DATA_SB_ERR_MASK 0x0000000000040000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_RUNT_MSG_SHFT 19 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_RUNT_MSG_MASK 0x0000000000080000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_OBESE_MSG_SHFT 20 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_OBESE_MSG_MASK 0x0000000000100000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_DATA_SB_ERR_SHFT 21 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_REP_DATA_SB_ERR_MASK 0x0000000000200000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_AMO_TIMEOUT_SHFT 22 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_AMO_TIMEOUT_MASK 0x0000000000400000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_PUT_TIMEOUT_SHFT 23 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_PUT_TIMEOUT_MASK 0x0000000000800000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_SPURIOUS_EVENT_SHFT 24 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_MACC_SPURIOUS_EVENT_MASK 0x0000000001000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_IOH_DESTINATION_TABLE_PARITY_SHFT 25 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_IOH_DESTINATION_TABLE_PARITY_MASK 0x0000000002000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_GET_HAD_ERROR_REPLY_SHFT 26 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_GET_HAD_ERROR_REPLY_MASK 0x0000000004000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_GET_TIMEOUT_SHFT 27 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_GET_TIMEOUT_MASK 0x0000000008000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_LOCK_MANAGER_HAD_ERROR_REPLY_SHFT 28 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_LOCK_MANAGER_HAD_ERROR_REPLY_MASK 0x0000000010000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_PUT_HAD_ERROR_REPLY_SHFT 29 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_PUT_HAD_ERROR_REPLY_MASK 0x0000000020000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_PUT_TIMEOUT_SHFT 30 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_PUT_TIMEOUT_MASK 0x0000000040000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_SB_ACTIVATION_OVERRUN_SHFT 31 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_SB_ACTIVATION_OVERRUN_MASK 0x0000000080000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_COMPLETED_GB_ACTIVATION_HAD_ERROR_REPLY_SHFT 32 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_COMPLETED_GB_ACTIVATION_HAD_ERROR_REPLY_MASK 0x0000000100000000UL -#define 
UVH_LB_MCAST_AOERR0_RPT_ENABLE_COMPLETED_GB_ACTIVATION_TIMEOUT_SHFT 33 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_COMPLETED_GB_ACTIVATION_TIMEOUT_MASK 0x0000000200000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_DESCRIPTOR_BUFFER_0_PARITY_SHFT 34 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_DESCRIPTOR_BUFFER_0_PARITY_MASK 0x0000000400000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_DESCRIPTOR_BUFFER_1_PARITY_SHFT 35 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_DESCRIPTOR_BUFFER_1_PARITY_MASK 0x0000000800000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_SOCKET_DESTINATION_TABLE_PARITY_SHFT 36 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_SOCKET_DESTINATION_TABLE_PARITY_MASK 0x0000001000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_BAU_REPLY_PAYLOAD_CORRUPTION_SHFT 37 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_BAU_REPLY_PAYLOAD_CORRUPTION_MASK 0x0000002000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_IO_PORT_DESTINATION_TABLE_PARITY_SHFT 38 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_IO_PORT_DESTINATION_TABLE_PARITY_MASK 0x0000004000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INTD_SOFT_ACK_TIMEOUT_SHFT 39 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INTD_SOFT_ACK_TIMEOUT_MASK 0x0000008000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_REP_OBESE_MSG_SHFT 40 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_REP_OBESE_MSG_MASK 0x0000010000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_REP_COMMAND_ERR_SHFT 41 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_REP_COMMAND_ERR_MASK 0x0000020000000000UL -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_TIMEOUT_SHFT 42 -#define UVH_LB_MCAST_AOERR0_RPT_ENABLE_INT_TIMEOUT_MASK 0x0000040000000000UL - -union uvh_lb_mcast_aoerr0_rpt_enable_u { - unsigned long v; - struct uvh_lb_mcast_aoerr0_rpt_enable_s { - unsigned long mcast_obese_msg : 1; /* RW */ - unsigned long mcast_data_sb_err : 1; /* RW */ - unsigned long mcast_nack_buff_parity : 1; /* RW */ - unsigned long mcast_timeout : 1; /* RW */ - unsigned long mcast_inactive_reply : 1; /* RW */ - unsigned long mcast_upgrade_error : 1; /* RW */ - unsigned long mcast_reg_count_underflow : 1; /* RW */ - unsigned long mcast_rep_obese_msg : 1; /* RW */ - unsigned long ucache_req_runt_msg : 1; /* RW */ - unsigned long ucache_req_obese_msg : 1; /* RW */ - unsigned long ucache_req_data_sb_err : 1; /* RW */ - unsigned long ucache_rep_runt_msg : 1; /* RW */ - unsigned long ucache_rep_obese_msg : 1; /* RW */ - unsigned long ucache_rep_data_sb_err : 1; /* RW */ - unsigned long ucache_rep_command_err : 1; /* RW */ - unsigned long ucache_pend_timeout : 1; /* RW */ - unsigned long macc_req_runt_msg : 1; /* RW */ - unsigned long macc_req_obese_msg : 1; /* RW */ - unsigned long macc_req_data_sb_err : 1; /* RW */ - unsigned long macc_rep_runt_msg : 1; /* RW */ - unsigned long macc_rep_obese_msg : 1; /* RW */ - unsigned long macc_rep_data_sb_err : 1; /* RW */ - unsigned long macc_amo_timeout : 1; /* RW */ - unsigned long macc_put_timeout : 1; /* RW */ - unsigned long macc_spurious_event : 1; /* RW */ - unsigned long ioh_destination_table_parity : 1; /* RW */ - unsigned long get_had_error_reply : 1; /* RW */ - unsigned long get_timeout : 1; /* RW */ - unsigned long lock_manager_had_error_reply : 1; /* RW */ - unsigned long put_had_error_reply : 1; /* RW */ - unsigned long put_timeout : 1; /* RW */ - unsigned long sb_activation_overrun : 1; /* RW */ - unsigned long completed_gb_activation_had_error_reply : 1; /* RW */ - unsigned long completed_gb_activation_timeout : 1; /* RW */ - unsigned long descriptor_buffer_0_parity : 1; /* RW */ - unsigned long 
descriptor_buffer_1_parity : 1; /* RW */ - unsigned long socket_destination_table_parity : 1; /* RW */ - unsigned long bau_reply_payload_corruption : 1; /* RW */ - unsigned long io_port_destination_table_parity : 1; /* RW */ - unsigned long intd_soft_ack_timeout : 1; /* RW */ - unsigned long int_rep_obese_msg : 1; /* RW */ - unsigned long int_rep_command_err : 1; /* RW */ - unsigned long int_timeout : 1; /* RW */ - unsigned long rsvd_43_63 : 21; /* */ - } s; -}; - -/* ========================================================================= */ -/* UVH_LOCAL_INT0_CONFIG */ -/* ========================================================================= */ -#define UVH_LOCAL_INT0_CONFIG 0x61000UL - -#define UVH_LOCAL_INT0_CONFIG_VECTOR_SHFT 0 -#define UVH_LOCAL_INT0_CONFIG_VECTOR_MASK 0x00000000000000ffUL -#define UVH_LOCAL_INT0_CONFIG_DM_SHFT 8 -#define UVH_LOCAL_INT0_CONFIG_DM_MASK 0x0000000000000700UL -#define UVH_LOCAL_INT0_CONFIG_DESTMODE_SHFT 11 -#define UVH_LOCAL_INT0_CONFIG_DESTMODE_MASK 0x0000000000000800UL -#define UVH_LOCAL_INT0_CONFIG_STATUS_SHFT 12 -#define UVH_LOCAL_INT0_CONFIG_STATUS_MASK 0x0000000000001000UL -#define UVH_LOCAL_INT0_CONFIG_P_SHFT 13 -#define UVH_LOCAL_INT0_CONFIG_P_MASK 0x0000000000002000UL -#define UVH_LOCAL_INT0_CONFIG_T_SHFT 15 -#define UVH_LOCAL_INT0_CONFIG_T_MASK 0x0000000000008000UL -#define UVH_LOCAL_INT0_CONFIG_M_SHFT 16 -#define UVH_LOCAL_INT0_CONFIG_M_MASK 0x0000000000010000UL -#define UVH_LOCAL_INT0_CONFIG_APIC_ID_SHFT 32 -#define UVH_LOCAL_INT0_CONFIG_APIC_ID_MASK 0xffffffff00000000UL - -union uvh_local_int0_config_u { - unsigned long v; - struct uvh_local_int0_config_s { - unsigned long vector_ : 8; /* RW */ - unsigned long dm : 3; /* RW */ - unsigned long destmode : 1; /* RW */ - unsigned long status : 1; /* RO */ - unsigned long p : 1; /* RO */ - unsigned long rsvd_14 : 1; /* */ - unsigned long t : 1; /* RO */ - unsigned long m : 1; /* RW */ - unsigned long rsvd_17_31: 15; /* */ - unsigned long apic_id : 32; /* RW */ - } s; -}; - -/* ========================================================================= */ -/* UVH_LOCAL_INT0_ENABLE */ -/* ========================================================================= */ -#define UVH_LOCAL_INT0_ENABLE 0x65000UL - -#define UVH_LOCAL_INT0_ENABLE_LB_HCERR_SHFT 0 -#define UVH_LOCAL_INT0_ENABLE_LB_HCERR_MASK 0x0000000000000001UL -#define UVH_LOCAL_INT0_ENABLE_GR0_HCERR_SHFT 1 -#define UVH_LOCAL_INT0_ENABLE_GR0_HCERR_MASK 0x0000000000000002UL -#define UVH_LOCAL_INT0_ENABLE_GR1_HCERR_SHFT 2 -#define UVH_LOCAL_INT0_ENABLE_GR1_HCERR_MASK 0x0000000000000004UL -#define UVH_LOCAL_INT0_ENABLE_LH_HCERR_SHFT 3 -#define UVH_LOCAL_INT0_ENABLE_LH_HCERR_MASK 0x0000000000000008UL -#define UVH_LOCAL_INT0_ENABLE_RH_HCERR_SHFT 4 -#define UVH_LOCAL_INT0_ENABLE_RH_HCERR_MASK 0x0000000000000010UL -#define UVH_LOCAL_INT0_ENABLE_XN_HCERR_SHFT 5 -#define UVH_LOCAL_INT0_ENABLE_XN_HCERR_MASK 0x0000000000000020UL -#define UVH_LOCAL_INT0_ENABLE_SI_HCERR_SHFT 6 -#define UVH_LOCAL_INT0_ENABLE_SI_HCERR_MASK 0x0000000000000040UL -#define UVH_LOCAL_INT0_ENABLE_LB_AOERR0_SHFT 7 -#define UVH_LOCAL_INT0_ENABLE_LB_AOERR0_MASK 0x0000000000000080UL -#define UVH_LOCAL_INT0_ENABLE_GR0_AOERR0_SHFT 8 -#define UVH_LOCAL_INT0_ENABLE_GR0_AOERR0_MASK 0x0000000000000100UL -#define UVH_LOCAL_INT0_ENABLE_GR1_AOERR0_SHFT 9 -#define UVH_LOCAL_INT0_ENABLE_GR1_AOERR0_MASK 0x0000000000000200UL -#define UVH_LOCAL_INT0_ENABLE_LH_AOERR0_SHFT 10 -#define UVH_LOCAL_INT0_ENABLE_LH_AOERR0_MASK 0x0000000000000400UL -#define UVH_LOCAL_INT0_ENABLE_RH_AOERR0_SHFT 11 
-#define UVH_LOCAL_INT0_ENABLE_RH_AOERR0_MASK 0x0000000000000800UL -#define UVH_LOCAL_INT0_ENABLE_XN_AOERR0_SHFT 12 -#define UVH_LOCAL_INT0_ENABLE_XN_AOERR0_MASK 0x0000000000001000UL -#define UVH_LOCAL_INT0_ENABLE_SI_AOERR0_SHFT 13 -#define UVH_LOCAL_INT0_ENABLE_SI_AOERR0_MASK 0x0000000000002000UL -#define UVH_LOCAL_INT0_ENABLE_LB_AOERR1_SHFT 14 -#define UVH_LOCAL_INT0_ENABLE_LB_AOERR1_MASK 0x0000000000004000UL -#define UVH_LOCAL_INT0_ENABLE_GR0_AOERR1_SHFT 15 -#define UVH_LOCAL_INT0_ENABLE_GR0_AOERR1_MASK 0x0000000000008000UL -#define UVH_LOCAL_INT0_ENABLE_GR1_AOERR1_SHFT 16 -#define UVH_LOCAL_INT0_ENABLE_GR1_AOERR1_MASK 0x0000000000010000UL -#define UVH_LOCAL_INT0_ENABLE_LH_AOERR1_SHFT 17 -#define UVH_LOCAL_INT0_ENABLE_LH_AOERR1_MASK 0x0000000000020000UL -#define UVH_LOCAL_INT0_ENABLE_RH_AOERR1_SHFT 18 -#define UVH_LOCAL_INT0_ENABLE_RH_AOERR1_MASK 0x0000000000040000UL -#define UVH_LOCAL_INT0_ENABLE_XN_AOERR1_SHFT 19 -#define UVH_LOCAL_INT0_ENABLE_XN_AOERR1_MASK 0x0000000000080000UL -#define UVH_LOCAL_INT0_ENABLE_SI_AOERR1_SHFT 20 -#define UVH_LOCAL_INT0_ENABLE_SI_AOERR1_MASK 0x0000000000100000UL -#define UVH_LOCAL_INT0_ENABLE_RH_VPI_INT_SHFT 21 -#define UVH_LOCAL_INT0_ENABLE_RH_VPI_INT_MASK 0x0000000000200000UL -#define UVH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 22 -#define UVH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000400000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_0_SHFT 23 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_0_MASK 0x0000000000800000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_1_SHFT 24 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_1_MASK 0x0000000001000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_2_SHFT 25 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_2_MASK 0x0000000002000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_3_SHFT 26 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_3_MASK 0x0000000004000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_4_SHFT 27 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_4_MASK 0x0000000008000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_5_SHFT 28 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_5_MASK 0x0000000010000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_6_SHFT 29 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_6_MASK 0x0000000020000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_7_SHFT 30 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_7_MASK 0x0000000040000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_8_SHFT 31 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_8_MASK 0x0000000080000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_9_SHFT 32 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_9_MASK 0x0000000100000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_10_SHFT 33 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_10_MASK 0x0000000200000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_11_SHFT 34 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_11_MASK 0x0000000400000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_12_SHFT 35 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_12_MASK 0x0000000800000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_13_SHFT 36 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_13_MASK 0x0000001000000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_14_SHFT 37 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_14_MASK 0x0000002000000000UL -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_15_SHFT 38 -#define UVH_LOCAL_INT0_ENABLE_LB_IRQ_INT_15_MASK 0x0000004000000000UL -#define UVH_LOCAL_INT0_ENABLE_L1_NMI_INT_SHFT 39 -#define UVH_LOCAL_INT0_ENABLE_L1_NMI_INT_MASK 0x0000008000000000UL -#define UVH_LOCAL_INT0_ENABLE_STOP_CLOCK_SHFT 40 -#define UVH_LOCAL_INT0_ENABLE_STOP_CLOCK_MASK 
0x0000010000000000UL -#define UVH_LOCAL_INT0_ENABLE_ASIC_TO_L1_SHFT 41 -#define UVH_LOCAL_INT0_ENABLE_ASIC_TO_L1_MASK 0x0000020000000000UL -#define UVH_LOCAL_INT0_ENABLE_L1_TO_ASIC_SHFT 42 -#define UVH_LOCAL_INT0_ENABLE_L1_TO_ASIC_MASK 0x0000040000000000UL -#define UVH_LOCAL_INT0_ENABLE_LTC_INT_SHFT 43 -#define UVH_LOCAL_INT0_ENABLE_LTC_INT_MASK 0x0000080000000000UL -#define UVH_LOCAL_INT0_ENABLE_LA_SEQ_TRIGGER_SHFT 44 -#define UVH_LOCAL_INT0_ENABLE_LA_SEQ_TRIGGER_MASK 0x0000100000000000UL - -union uvh_local_int0_enable_u { - unsigned long v; - struct uvh_local_int0_enable_s { - unsigned long lb_hcerr : 1; /* RW */ - unsigned long gr0_hcerr : 1; /* RW */ - unsigned long gr1_hcerr : 1; /* RW */ - unsigned long lh_hcerr : 1; /* RW */ - unsigned long rh_hcerr : 1; /* RW */ - unsigned long xn_hcerr : 1; /* RW */ - unsigned long si_hcerr : 1; /* RW */ - unsigned long lb_aoerr0 : 1; /* RW */ - unsigned long gr0_aoerr0 : 1; /* RW */ - unsigned long gr1_aoerr0 : 1; /* RW */ - unsigned long lh_aoerr0 : 1; /* RW */ - unsigned long rh_aoerr0 : 1; /* RW */ - unsigned long xn_aoerr0 : 1; /* RW */ - unsigned long si_aoerr0 : 1; /* RW */ - unsigned long lb_aoerr1 : 1; /* RW */ - unsigned long gr0_aoerr1 : 1; /* RW */ - unsigned long gr1_aoerr1 : 1; /* RW */ - unsigned long lh_aoerr1 : 1; /* RW */ - unsigned long rh_aoerr1 : 1; /* RW */ - unsigned long xn_aoerr1 : 1; /* RW */ - unsigned long si_aoerr1 : 1; /* RW */ - unsigned long rh_vpi_int : 1; /* RW */ - unsigned long system_shutdown_int : 1; /* RW */ - unsigned long lb_irq_int_0 : 1; /* RW */ - unsigned long lb_irq_int_1 : 1; /* RW */ - unsigned long lb_irq_int_2 : 1; /* RW */ - unsigned long lb_irq_int_3 : 1; /* RW */ - unsigned long lb_irq_int_4 : 1; /* RW */ - unsigned long lb_irq_int_5 : 1; /* RW */ - unsigned long lb_irq_int_6 : 1; /* RW */ - unsigned long lb_irq_int_7 : 1; /* RW */ - unsigned long lb_irq_int_8 : 1; /* RW */ - unsigned long lb_irq_int_9 : 1; /* RW */ - unsigned long lb_irq_int_10 : 1; /* RW */ - unsigned long lb_irq_int_11 : 1; /* RW */ - unsigned long lb_irq_int_12 : 1; /* RW */ - unsigned long lb_irq_int_13 : 1; /* RW */ - unsigned long lb_irq_int_14 : 1; /* RW */ - unsigned long lb_irq_int_15 : 1; /* RW */ - unsigned long l1_nmi_int : 1; /* RW */ - unsigned long stop_clock : 1; /* RW */ - unsigned long asic_to_l1 : 1; /* RW */ - unsigned long l1_to_asic : 1; /* RW */ - unsigned long ltc_int : 1; /* RW */ - unsigned long la_seq_trigger : 1; /* RW */ - unsigned long rsvd_45_63 : 19; /* */ - } s; -}; - -/* ========================================================================= */ /* UVH_NODE_ID */ /* ========================================================================= */ #define UVH_NODE_ID 0x0UL @@ -1112,26 +857,6 @@ union uvh_rh_gam_alias210_redirect_config_2_mmr_u { }; /* ========================================================================= */ -/* UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR */ -/* ========================================================================= */ -#define UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR 0x1600020UL - -#define UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR_BASE_SHFT 26 -#define UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffffc000000UL -#define UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63 -#define UVH_RH_GAM_CFG_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL - -union uvh_rh_gam_cfg_overlay_config_mmr_u { - unsigned long v; - struct uvh_rh_gam_cfg_overlay_config_mmr_s { - unsigned long rsvd_0_25: 26; /* */ - unsigned long base : 20; /* RW */ - unsigned long rsvd_46_62: 17; /* */ - unsigned long 
enable : 1; /* RW */ - } s; -}; - -/* ========================================================================= */ /* UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR */ /* ========================================================================= */ #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x1600010UL @@ -1263,101 +988,6 @@ union uvh_rtc1_int_config_u { }; /* ========================================================================= */ -/* UVH_RTC2_INT_CONFIG */ -/* ========================================================================= */ -#define UVH_RTC2_INT_CONFIG 0x61600UL - -#define UVH_RTC2_INT_CONFIG_VECTOR_SHFT 0 -#define UVH_RTC2_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL -#define UVH_RTC2_INT_CONFIG_DM_SHFT 8 -#define UVH_RTC2_INT_CONFIG_DM_MASK 0x0000000000000700UL -#define UVH_RTC2_INT_CONFIG_DESTMODE_SHFT 11 -#define UVH_RTC2_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL -#define UVH_RTC2_INT_CONFIG_STATUS_SHFT 12 -#define UVH_RTC2_INT_CONFIG_STATUS_MASK 0x0000000000001000UL -#define UVH_RTC2_INT_CONFIG_P_SHFT 13 -#define UVH_RTC2_INT_CONFIG_P_MASK 0x0000000000002000UL -#define UVH_RTC2_INT_CONFIG_T_SHFT 15 -#define UVH_RTC2_INT_CONFIG_T_MASK 0x0000000000008000UL -#define UVH_RTC2_INT_CONFIG_M_SHFT 16 -#define UVH_RTC2_INT_CONFIG_M_MASK 0x0000000000010000UL -#define UVH_RTC2_INT_CONFIG_APIC_ID_SHFT 32 -#define UVH_RTC2_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL - -union uvh_rtc2_int_config_u { - unsigned long v; - struct uvh_rtc2_int_config_s { - unsigned long vector_ : 8; /* RW */ - unsigned long dm : 3; /* RW */ - unsigned long destmode : 1; /* RW */ - unsigned long status : 1; /* RO */ - unsigned long p : 1; /* RO */ - unsigned long rsvd_14 : 1; /* */ - unsigned long t : 1; /* RO */ - unsigned long m : 1; /* RW */ - unsigned long rsvd_17_31: 15; /* */ - unsigned long apic_id : 32; /* RW */ - } s; -}; - -/* ========================================================================= */ -/* UVH_RTC3_INT_CONFIG */ -/* ========================================================================= */ -#define UVH_RTC3_INT_CONFIG 0x61640UL - -#define UVH_RTC3_INT_CONFIG_VECTOR_SHFT 0 -#define UVH_RTC3_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL -#define UVH_RTC3_INT_CONFIG_DM_SHFT 8 -#define UVH_RTC3_INT_CONFIG_DM_MASK 0x0000000000000700UL -#define UVH_RTC3_INT_CONFIG_DESTMODE_SHFT 11 -#define UVH_RTC3_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL -#define UVH_RTC3_INT_CONFIG_STATUS_SHFT 12 -#define UVH_RTC3_INT_CONFIG_STATUS_MASK 0x0000000000001000UL -#define UVH_RTC3_INT_CONFIG_P_SHFT 13 -#define UVH_RTC3_INT_CONFIG_P_MASK 0x0000000000002000UL -#define UVH_RTC3_INT_CONFIG_T_SHFT 15 -#define UVH_RTC3_INT_CONFIG_T_MASK 0x0000000000008000UL -#define UVH_RTC3_INT_CONFIG_M_SHFT 16 -#define UVH_RTC3_INT_CONFIG_M_MASK 0x0000000000010000UL -#define UVH_RTC3_INT_CONFIG_APIC_ID_SHFT 32 -#define UVH_RTC3_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL - -union uvh_rtc3_int_config_u { - unsigned long v; - struct uvh_rtc3_int_config_s { - unsigned long vector_ : 8; /* RW */ - unsigned long dm : 3; /* RW */ - unsigned long destmode : 1; /* RW */ - unsigned long status : 1; /* RO */ - unsigned long p : 1; /* RO */ - unsigned long rsvd_14 : 1; /* */ - unsigned long t : 1; /* RO */ - unsigned long m : 1; /* RW */ - unsigned long rsvd_17_31: 15; /* */ - unsigned long apic_id : 32; /* RW */ - } s; -}; - -/* ========================================================================= */ -/* UVH_RTC_INC_RATIO */ -/* ========================================================================= */ -#define UVH_RTC_INC_RATIO 
0x350000UL - -#define UVH_RTC_INC_RATIO_FRACTION_SHFT 0 -#define UVH_RTC_INC_RATIO_FRACTION_MASK 0x00000000000fffffUL -#define UVH_RTC_INC_RATIO_RATIO_SHFT 20 -#define UVH_RTC_INC_RATIO_RATIO_MASK 0x0000000000700000UL - -union uvh_rtc_inc_ratio_u { - unsigned long v; - struct uvh_rtc_inc_ratio_s { - unsigned long fraction : 20; /* RW */ - unsigned long ratio : 3; /* RW */ - unsigned long rsvd_23_63: 41; /* */ - } s; -}; - -/* ========================================================================= */ /* UVH_SI_ADDR_MAP_CONFIG */ /* ========================================================================= */ #define UVH_SI_ADDR_MAP_CONFIG 0xc80000UL diff --git a/arch/x86/include/asm/vmware.h b/arch/x86/include/asm/vmware.h deleted file mode 100644 index e49ed6d2fd4e..000000000000 --- a/arch/x86/include/asm/vmware.h +++ /dev/null @@ -1,27 +0,0 @@ -/* - * Copyright (C) 2008, VMware, Inc. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or - * NON INFRINGEMENT. See the GNU General Public License for more - * details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - */ -#ifndef ASM_X86__VMWARE_H -#define ASM_X86__VMWARE_H - -extern void vmware_platform_setup(void); -extern int vmware_platform(void); -extern void vmware_set_feature_bits(struct cpuinfo_x86 *c); - -#endif diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h index ddc04ccad03b..2c4390cae228 100644 --- a/arch/x86/include/asm/xsave.h +++ b/arch/x86/include/asm/xsave.h @@ -37,8 +37,9 @@ extern int check_for_xstate(struct i387_fxsave_struct __user *buf, void __user *fpstate, struct _fpx_sw_bytes *sw); -static inline int xrstor_checking(struct xsave_struct *fx) +static inline int fpu_xrstor_checking(struct fpu *fpu) { + struct xsave_struct *fx = &fpu->state->xsave; int err; asm volatile("1: .byte " REX_PREFIX "0x0f,0xae,0x2f\n\t" @@ -110,12 +111,12 @@ static inline void xrstor_state(struct xsave_struct *fx, u64 mask) : "memory"); } -static inline void xsave(struct task_struct *tsk) +static inline void fpu_xsave(struct fpu *fpu) { /* This, however, we can work around by forcing the compiler to select an addressing mode that doesn't require extended registers. 
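   (The "D" input constraint just below keeps that operand in %rdi/%edi, which
   is the base register the hand-assembled 0x0f,0xae,0x27 xsave encoding
   addresses, so no REX-extended register is ever required.)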
*/ __asm__ __volatile__(".byte " REX_PREFIX "0x0f,0xae,0x27" - : : "D" (&(tsk->thread.xstate->xsave)), + : : "D" (&(fpu->state->xsave)), "a" (-1), "d"(-1) : "memory"); } #endif diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 4c58352209e0..e77b22083721 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -47,8 +47,6 @@ obj-$(CONFIG_X86_TRAMPOLINE) += trampoline.o obj-y += process.o obj-y += i387.o xsave.o obj-y += ptrace.o -obj-$(CONFIG_X86_DS) += ds.o -obj-$(CONFIG_X86_DS_SELFTEST) += ds_selftest.o obj-$(CONFIG_X86_32) += tls.o obj-$(CONFIG_IA32_EMULATION) += tls.o obj-y += step.o diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c index cd40aba6aa95..9a5ed58f09dc 100644 --- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -94,6 +94,53 @@ enum acpi_irq_model_id acpi_irq_model = ACPI_IRQ_MODEL_PIC; /* + * ISA irqs by default are the first 16 gsis but can be + * any gsi as specified by an interrupt source override. + */ +static u32 isa_irq_to_gsi[NR_IRQS_LEGACY] __read_mostly = { + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 +}; + +static unsigned int gsi_to_irq(unsigned int gsi) +{ + unsigned int irq = gsi + NR_IRQS_LEGACY; + unsigned int i; + + for (i = 0; i < NR_IRQS_LEGACY; i++) { + if (isa_irq_to_gsi[i] == gsi) { + return i; + } + } + + /* Provide an identity mapping of gsi == irq + * except on truly weird platforms that have + * non isa irqs in the first 16 gsis. + */ + if (gsi >= NR_IRQS_LEGACY) + irq = gsi; + else + irq = gsi_end + 1 + gsi; + + return irq; +} + +static u32 irq_to_gsi(int irq) +{ + unsigned int gsi; + + if (irq < NR_IRQS_LEGACY) + gsi = isa_irq_to_gsi[irq]; + else if (irq <= gsi_end) + gsi = irq; + else if (irq <= (gsi_end + NR_IRQS_LEGACY)) + gsi = irq - gsi_end; + else + gsi = 0xffffffff; + + return gsi; +} + +/* * Temporarily use the virtual area starting from FIX_IO_APIC_BASE_END, * to map the target physical address. The problem is that set_fixmap() * provides a single page, and it is possible that the page is not @@ -313,7 +360,7 @@ acpi_parse_ioapic(struct acpi_subtable_header * header, const unsigned long end) /* * Parse Interrupt Source Override for the ACPI SCI */ -static void __init acpi_sci_ioapic_setup(u32 gsi, u16 polarity, u16 trigger) +static void __init acpi_sci_ioapic_setup(u8 bus_irq, u16 polarity, u16 trigger, u32 gsi) { if (trigger == 0) /* compatible SCI trigger is level */ trigger = 3; @@ -333,7 +380,7 @@ static void __init acpi_sci_ioapic_setup(u32 gsi, u16 polarity, u16 trigger) * If GSI is < 16, this will update its flags, * else it will create a new mp_irqs[] entry. 
*/ - mp_override_legacy_irq(gsi, polarity, trigger, gsi); + mp_override_legacy_irq(bus_irq, polarity, trigger, gsi); /* * stash over-ride to indicate we've been here @@ -357,9 +404,10 @@ acpi_parse_int_src_ovr(struct acpi_subtable_header * header, acpi_table_print_madt_entry(header); if (intsrc->source_irq == acpi_gbl_FADT.sci_interrupt) { - acpi_sci_ioapic_setup(intsrc->global_irq, + acpi_sci_ioapic_setup(intsrc->source_irq, intsrc->inti_flags & ACPI_MADT_POLARITY_MASK, - (intsrc->inti_flags & ACPI_MADT_TRIGGER_MASK) >> 2); + (intsrc->inti_flags & ACPI_MADT_TRIGGER_MASK) >> 2, + intsrc->global_irq); return 0; } @@ -448,7 +496,7 @@ void __init acpi_pic_sci_set_trigger(unsigned int irq, u16 trigger) int acpi_gsi_to_irq(u32 gsi, unsigned int *irq) { - *irq = gsi; + *irq = gsi_to_irq(gsi); #ifdef CONFIG_X86_IO_APIC if (acpi_irq_model == ACPI_IRQ_MODEL_IOAPIC) @@ -458,6 +506,14 @@ int acpi_gsi_to_irq(u32 gsi, unsigned int *irq) return 0; } +int acpi_isa_irq_to_gsi(unsigned isa_irq, u32 *gsi) +{ + if (isa_irq >= 16) + return -1; + *gsi = irq_to_gsi(isa_irq); + return 0; +} + /* * success: return IRQ number (>=0) * failure: return < 0 @@ -482,7 +538,7 @@ int acpi_register_gsi(struct device *dev, u32 gsi, int trigger, int polarity) plat_gsi = mp_register_gsi(dev, gsi, trigger, polarity); } #endif - irq = plat_gsi; + irq = gsi_to_irq(plat_gsi); return irq; } @@ -867,29 +923,6 @@ static int __init acpi_parse_madt_lapic_entries(void) extern int es7000_plat; #endif -int __init acpi_probe_gsi(void) -{ - int idx; - int gsi; - int max_gsi = 0; - - if (acpi_disabled) - return 0; - - if (!acpi_ioapic) - return 0; - - max_gsi = 0; - for (idx = 0; idx < nr_ioapics; idx++) { - gsi = mp_gsi_routing[idx].gsi_end; - - if (gsi > max_gsi) - max_gsi = gsi; - } - - return max_gsi + 1; -} - static void assign_to_mp_irq(struct mpc_intsrc *m, struct mpc_intsrc *mp_irq) { @@ -947,13 +980,13 @@ void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger, u32 gsi) mp_irq.dstirq = pin; /* INTIN# */ save_mp_irq(&mp_irq); + + isa_irq_to_gsi[bus_irq] = gsi; } void __init mp_config_acpi_legacy_irqs(void) { int i; - int ioapic; - unsigned int dstapic; struct mpc_intsrc mp_irq; #if defined (CONFIG_MCA) || defined (CONFIG_EISA) @@ -974,19 +1007,27 @@ void __init mp_config_acpi_legacy_irqs(void) #endif /* - * Locate the IOAPIC that manages the ISA IRQs (0-15). - */ - ioapic = mp_find_ioapic(0); - if (ioapic < 0) - return; - dstapic = mp_ioapics[ioapic].apicid; - - /* * Use the default configuration for the IRQs 0-15. Unless * overridden by (MADT) interrupt source override entries. */ for (i = 0; i < 16; i++) { + int ioapic, pin; + unsigned int dstapic; int idx; + u32 gsi; + + /* Locate the gsi that irq i maps to. */ + if (acpi_isa_irq_to_gsi(i, &gsi)) + continue; + + /* + * Locate the IOAPIC that manages the ISA IRQ. 
+ */ + ioapic = mp_find_ioapic(gsi); + if (ioapic < 0) + continue; + pin = mp_find_ioapic_pin(ioapic, gsi); + dstapic = mp_ioapics[ioapic].apicid; for (idx = 0; idx < mp_irq_entries; idx++) { struct mpc_intsrc *irq = mp_irqs + idx; @@ -996,7 +1037,7 @@ void __init mp_config_acpi_legacy_irqs(void) break; /* Do we already have a mapping for this IOAPIC pin */ - if (irq->dstapic == dstapic && irq->dstirq == i) + if (irq->dstapic == dstapic && irq->dstirq == pin) break; } @@ -1011,7 +1052,7 @@ void __init mp_config_acpi_legacy_irqs(void) mp_irq.dstapic = dstapic; mp_irq.irqtype = mp_INT; mp_irq.srcbusirq = i; /* Identity mapped */ - mp_irq.dstirq = i; + mp_irq.dstirq = pin; save_mp_irq(&mp_irq); } @@ -1076,11 +1117,6 @@ int mp_register_gsi(struct device *dev, u32 gsi, int trigger, int polarity) ioapic_pin = mp_find_ioapic_pin(ioapic, gsi); -#ifdef CONFIG_X86_32 - if (ioapic_renumber_irq) - gsi = ioapic_renumber_irq(ioapic, gsi); -#endif - if (ioapic_pin > MP_MAX_IOAPIC_PIN) { printk(KERN_ERR "Invalid reference to IOAPIC pin " "%d-%d\n", mp_ioapics[ioapic].apicid, @@ -1094,7 +1130,7 @@ int mp_register_gsi(struct device *dev, u32 gsi, int trigger, int polarity) set_io_apic_irq_attr(&irq_attr, ioapic, ioapic_pin, trigger == ACPI_EDGE_SENSITIVE ? 0 : 1, polarity == ACPI_ACTIVE_HIGH ? 0 : 1); - io_apic_set_pci_routing(dev, gsi, &irq_attr); + io_apic_set_pci_routing(dev, gsi_to_irq(gsi), &irq_attr); return gsi; } @@ -1154,7 +1190,8 @@ static int __init acpi_parse_madt_ioapic_entries(void) * pretend we got one so we can set the SCI flags. */ if (!acpi_sci_override_gsi) - acpi_sci_ioapic_setup(acpi_gbl_FADT.sci_interrupt, 0, 0); + acpi_sci_ioapic_setup(acpi_gbl_FADT.sci_interrupt, 0, 0, + acpi_gbl_FADT.sci_interrupt); /* Fill in identity legacy mappings where no override */ mp_config_acpi_legacy_irqs(); diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 1a160d5d44d0..70237732a6c7 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -194,7 +194,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len) } extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; -extern u8 *__smp_locks[], *__smp_locks_end[]; +extern s32 __smp_locks[], __smp_locks_end[]; static void *text_poke_early(void *addr, const void *opcode, size_t len); /* Replace instructions with better alternatives for this CPU type. 
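The __smp_locks declaration just above changes from an array of absolute pointers to an array of 32-bit self-relative offsets, and the hunks that follow rewrite the lock/unlock walkers accordingly. A hedged sketch of that idiom (not part of the patch; the demo struct and names are invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Each .smp_locks entry is now a 32-bit offset relative to the entry's
 * own address instead of an absolute pointer (half the size on 64-bit,
 * no relocations needed).  Resolving one is "entry address + offset";
 * a zero offset marks an entry that points at nothing. */
static struct {
	int32_t smp_lock;	/* stands in for one .smp_locks slot */
	uint8_t text[16];	/* stands in for .text */
} demo;

static uint8_t *resolve_smp_lock(int32_t *entry)
{
	if (!*entry)
		return NULL;
	return (uint8_t *)entry + *entry;
}

int main(void)
{
	demo.text[3] = 0x3e;	/* DS override that would be patched to LOCK (0xf0) */
	demo.smp_lock = (int32_t)(&demo.text[3] - (uint8_t *)&demo.smp_lock);

	uint8_t *p = resolve_smp_lock(&demo.smp_lock);
	printf("entry resolves to byte 0x%02x\n", p ? *p : 0);	/* 0x3e */
	return 0;
}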
@@ -235,37 +235,41 @@ void __init_or_module apply_alternatives(struct alt_instr *start, #ifdef CONFIG_SMP -static void alternatives_smp_lock(u8 **start, u8 **end, u8 *text, u8 *text_end) +static void alternatives_smp_lock(const s32 *start, const s32 *end, + u8 *text, u8 *text_end) { - u8 **ptr; + const s32 *poff; mutex_lock(&text_mutex); - for (ptr = start; ptr < end; ptr++) { - if (*ptr < text) - continue; - if (*ptr > text_end) + for (poff = start; poff < end; poff++) { + u8 *ptr = (u8 *)poff + *poff; + + if (!*poff || ptr < text || ptr >= text_end) continue; /* turn DS segment override prefix into lock prefix */ - text_poke(*ptr, ((unsigned char []){0xf0}), 1); + if (*ptr == 0x3e) + text_poke(ptr, ((unsigned char []){0xf0}), 1); }; mutex_unlock(&text_mutex); } -static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end) +static void alternatives_smp_unlock(const s32 *start, const s32 *end, + u8 *text, u8 *text_end) { - u8 **ptr; + const s32 *poff; if (noreplace_smp) return; mutex_lock(&text_mutex); - for (ptr = start; ptr < end; ptr++) { - if (*ptr < text) - continue; - if (*ptr > text_end) + for (poff = start; poff < end; poff++) { + u8 *ptr = (u8 *)poff + *poff; + + if (!*poff || ptr < text || ptr >= text_end) continue; /* turn lock prefix into DS segment override prefix */ - text_poke(*ptr, ((unsigned char []){0x3E}), 1); + if (*ptr == 0xf0) + text_poke(ptr, ((unsigned char []){0x3E}), 1); }; mutex_unlock(&text_mutex); } @@ -276,8 +280,8 @@ struct smp_alt_module { char *name; /* ptrs to lock prefixes */ - u8 **locks; - u8 **locks_end; + const s32 *locks; + const s32 *locks_end; /* .text segment, needed to avoid patching init code ;) */ u8 *text; @@ -398,16 +402,19 @@ void alternatives_smp_switch(int smp) int alternatives_text_reserved(void *start, void *end) { struct smp_alt_module *mod; - u8 **ptr; + const s32 *poff; u8 *text_start = start; u8 *text_end = end; list_for_each_entry(mod, &smp_alt_modules, next) { if (mod->text > text_end || mod->text_end < text_start) continue; - for (ptr = mod->locks; ptr < mod->locks_end; ptr++) - if (text_start <= *ptr && text_end >= *ptr) + for (poff = mod->locks; poff < mod->locks_end; poff++) { + const u8 *ptr = (const u8 *)poff + *poff; + + if (text_start <= ptr && text_end > ptr) return 1; + } } return 0; diff --git a/arch/x86/kernel/amd_iommu.c b/arch/x86/kernel/amd_iommu.c index f854d89b7edf..fa5a1474cd18 100644 --- a/arch/x86/kernel/amd_iommu.c +++ b/arch/x86/kernel/amd_iommu.c @@ -731,18 +731,22 @@ static bool increase_address_space(struct protection_domain *domain, static u64 *alloc_pte(struct protection_domain *domain, unsigned long address, - int end_lvl, + unsigned long page_size, u64 **pte_page, gfp_t gfp) { + int level, end_lvl; u64 *pte, *page; - int level; + + BUG_ON(!is_power_of_2(page_size)); while (address > PM_LEVEL_SIZE(domain->mode)) increase_address_space(domain, gfp); - level = domain->mode - 1; - pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; + level = domain->mode - 1; + pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; + address = PAGE_SIZE_ALIGN(address, page_size); + end_lvl = PAGE_SIZE_LEVEL(page_size); while (level > end_lvl) { if (!IOMMU_PTE_PRESENT(*pte)) { @@ -752,6 +756,10 @@ static u64 *alloc_pte(struct protection_domain *domain, *pte = PM_LEVEL_PDE(level, virt_to_phys(page)); } + /* No level skipping support yet */ + if (PM_PTE_LEVEL(*pte) != level) + return NULL; + level -= 1; pte = IOMMU_PTE_PAGE(*pte); @@ -769,28 +777,47 @@ static u64 *alloc_pte(struct protection_domain 
*domain, * This function checks if there is a PTE for a given dma address. If * there is one, it returns the pointer to it. */ -static u64 *fetch_pte(struct protection_domain *domain, - unsigned long address, int map_size) +static u64 *fetch_pte(struct protection_domain *domain, unsigned long address) { int level; u64 *pte; - level = domain->mode - 1; - pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; + if (address > PM_LEVEL_SIZE(domain->mode)) + return NULL; + + level = domain->mode - 1; + pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; - while (level > map_size) { + while (level > 0) { + + /* Not Present */ if (!IOMMU_PTE_PRESENT(*pte)) return NULL; + /* Large PTE */ + if (PM_PTE_LEVEL(*pte) == 0x07) { + unsigned long pte_mask, __pte; + + /* + * If we have a series of large PTEs, make + * sure to return a pointer to the first one. + */ + pte_mask = PTE_PAGE_SIZE(*pte); + pte_mask = ~((PAGE_SIZE_PTE_COUNT(pte_mask) << 3) - 1); + __pte = ((unsigned long)pte) & pte_mask; + + return (u64 *)__pte; + } + + /* No level skipping support yet */ + if (PM_PTE_LEVEL(*pte) != level) + return NULL; + level -= 1; + /* Walk to the next level */ pte = IOMMU_PTE_PAGE(*pte); pte = &pte[PM_LEVEL_INDEX(level, address)]; - - if ((PM_PTE_LEVEL(*pte) == 0) && level != map_size) { - pte = NULL; - break; - } } return pte; @@ -807,44 +834,84 @@ static int iommu_map_page(struct protection_domain *dom, unsigned long bus_addr, unsigned long phys_addr, int prot, - int map_size) + unsigned long page_size) { u64 __pte, *pte; - - bus_addr = PAGE_ALIGN(bus_addr); - phys_addr = PAGE_ALIGN(phys_addr); - - BUG_ON(!PM_ALIGNED(map_size, bus_addr)); - BUG_ON(!PM_ALIGNED(map_size, phys_addr)); + int i, count; if (!(prot & IOMMU_PROT_MASK)) return -EINVAL; - pte = alloc_pte(dom, bus_addr, map_size, NULL, GFP_KERNEL); + bus_addr = PAGE_ALIGN(bus_addr); + phys_addr = PAGE_ALIGN(phys_addr); + count = PAGE_SIZE_PTE_COUNT(page_size); + pte = alloc_pte(dom, bus_addr, page_size, NULL, GFP_KERNEL); + + for (i = 0; i < count; ++i) + if (IOMMU_PTE_PRESENT(pte[i])) + return -EBUSY; - if (IOMMU_PTE_PRESENT(*pte)) - return -EBUSY; + if (page_size > PAGE_SIZE) { + __pte = PAGE_SIZE_PTE(phys_addr, page_size); + __pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_P | IOMMU_PTE_FC; + } else + __pte = phys_addr | IOMMU_PTE_P | IOMMU_PTE_FC; - __pte = phys_addr | IOMMU_PTE_P; if (prot & IOMMU_PROT_IR) __pte |= IOMMU_PTE_IR; if (prot & IOMMU_PROT_IW) __pte |= IOMMU_PTE_IW; - *pte = __pte; + for (i = 0; i < count; ++i) + pte[i] = __pte; update_domain(dom); return 0; } -static void iommu_unmap_page(struct protection_domain *dom, - unsigned long bus_addr, int map_size) +static unsigned long iommu_unmap_page(struct protection_domain *dom, + unsigned long bus_addr, + unsigned long page_size) { - u64 *pte = fetch_pte(dom, bus_addr, map_size); + unsigned long long unmap_size, unmapped; + u64 *pte; + + BUG_ON(!is_power_of_2(page_size)); + + unmapped = 0; - if (pte) - *pte = 0; + while (unmapped < page_size) { + + pte = fetch_pte(dom, bus_addr); + + if (!pte) { + /* + * No PTE for this address + * move forward in 4kb steps + */ + unmap_size = PAGE_SIZE; + } else if (PM_PTE_LEVEL(*pte) == 0) { + /* 4kb PTE found for this address */ + unmap_size = PAGE_SIZE; + *pte = 0ULL; + } else { + int count, i; + + /* Large PTE found which maps this address */ + unmap_size = PTE_PAGE_SIZE(*pte); + count = PAGE_SIZE_PTE_COUNT(unmap_size); + for (i = 0; i < count; i++) + pte[i] = 0ULL; + } + + bus_addr = (bus_addr & ~(unmap_size - 1)) + unmap_size; + unmapped += 
unmap_size; + } + + BUG_ON(!is_power_of_2(unmapped)); + + return unmapped; } /* @@ -878,7 +945,7 @@ static int dma_ops_unity_map(struct dma_ops_domain *dma_dom, for (addr = e->address_start; addr < e->address_end; addr += PAGE_SIZE) { ret = iommu_map_page(&dma_dom->domain, addr, addr, e->prot, - PM_MAP_4k); + PAGE_SIZE); if (ret) return ret; /* @@ -1006,7 +1073,7 @@ static int alloc_new_range(struct dma_ops_domain *dma_dom, u64 *pte, *pte_page; for (i = 0; i < num_ptes; ++i) { - pte = alloc_pte(&dma_dom->domain, address, PM_MAP_4k, + pte = alloc_pte(&dma_dom->domain, address, PAGE_SIZE, &pte_page, gfp); if (!pte) goto out_free; @@ -1042,7 +1109,7 @@ static int alloc_new_range(struct dma_ops_domain *dma_dom, for (i = dma_dom->aperture[index]->offset; i < dma_dom->aperture_size; i += PAGE_SIZE) { - u64 *pte = fetch_pte(&dma_dom->domain, i, PM_MAP_4k); + u64 *pte = fetch_pte(&dma_dom->domain, i); if (!pte || !IOMMU_PTE_PRESENT(*pte)) continue; @@ -1712,7 +1779,7 @@ static u64* dma_ops_get_pte(struct dma_ops_domain *dom, pte = aperture->pte_pages[APERTURE_PAGE_INDEX(address)]; if (!pte) { - pte = alloc_pte(&dom->domain, address, PM_MAP_4k, &pte_page, + pte = alloc_pte(&dom->domain, address, PAGE_SIZE, &pte_page, GFP_ATOMIC); aperture->pte_pages[APERTURE_PAGE_INDEX(address)] = pte_page; } else @@ -2439,12 +2506,11 @@ static int amd_iommu_attach_device(struct iommu_domain *dom, return ret; } -static int amd_iommu_map_range(struct iommu_domain *dom, - unsigned long iova, phys_addr_t paddr, - size_t size, int iommu_prot) +static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova, + phys_addr_t paddr, int gfp_order, int iommu_prot) { + unsigned long page_size = 0x1000UL << gfp_order; struct protection_domain *domain = dom->priv; - unsigned long i, npages = iommu_num_pages(paddr, size, PAGE_SIZE); int prot = 0; int ret; @@ -2453,61 +2519,50 @@ static int amd_iommu_map_range(struct iommu_domain *dom, if (iommu_prot & IOMMU_WRITE) prot |= IOMMU_PROT_IW; - iova &= PAGE_MASK; - paddr &= PAGE_MASK; - mutex_lock(&domain->api_lock); - - for (i = 0; i < npages; ++i) { - ret = iommu_map_page(domain, iova, paddr, prot, PM_MAP_4k); - if (ret) - return ret; - - iova += PAGE_SIZE; - paddr += PAGE_SIZE; - } - + ret = iommu_map_page(domain, iova, paddr, prot, page_size); mutex_unlock(&domain->api_lock); - return 0; + return ret; } -static void amd_iommu_unmap_range(struct iommu_domain *dom, - unsigned long iova, size_t size) +static int amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova, + int gfp_order) { - struct protection_domain *domain = dom->priv; - unsigned long i, npages = iommu_num_pages(iova, size, PAGE_SIZE); + unsigned long page_size, unmap_size; - iova &= PAGE_MASK; + page_size = 0x1000UL << gfp_order; mutex_lock(&domain->api_lock); - - for (i = 0; i < npages; ++i) { - iommu_unmap_page(domain, iova, PM_MAP_4k); - iova += PAGE_SIZE; - } + unmap_size = iommu_unmap_page(domain, iova, page_size); + mutex_unlock(&domain->api_lock); iommu_flush_tlb_pde(domain); - mutex_unlock(&domain->api_lock); + return get_order(unmap_size); } static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom, unsigned long iova) { struct protection_domain *domain = dom->priv; - unsigned long offset = iova & ~PAGE_MASK; + unsigned long offset_mask; phys_addr_t paddr; - u64 *pte; + u64 *pte, __pte; - pte = fetch_pte(domain, iova, PM_MAP_4k); + pte = fetch_pte(domain, iova); if (!pte || !IOMMU_PTE_PRESENT(*pte)) return 0; - paddr = *pte & IOMMU_PAGE_MASK; - paddr |= offset; + if (PM_PTE_LEVEL(*pte) == 
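A small worked sketch (not part of the patch) of the size arithmetic behind the new amd_iommu_map()/amd_iommu_unmap() callbacks above: the generic layer passes an order, the driver turns it into a byte size, and anything larger than 4 KiB becomes a level-7 encoded large PTE replicated over several slots. This assumes the 4 KiB base page and 9 index bits per level of the AMD IOMMU page tables, i.e. that PAGE_SIZE_PTE_COUNT() and PAGE_SIZE_LEVEL() implement exactly this arithmetic.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int gfp_order = 3;				/* 8 pages */
	unsigned long page_size = 0x1000UL << gfp_order;	/* 32 KiB, as in amd_iommu_map() */

	unsigned int shift = __builtin_ctzl(page_size);		/* 15 */
	unsigned int level = (shift - 12) / 9;			/* PTE level: 0 */
	unsigned long count = 1UL << ((shift - 12) % 9);	/* 8 replicated PTEs */

	printf("page_size=%lu level=%u pte_count=%lu\n",
	       page_size, level, count);
	return 0;
}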
0) + offset_mask = PAGE_SIZE - 1; + else + offset_mask = PTE_PAGE_SIZE(*pte) - 1; + + __pte = *pte & PM_ADDR_MASK; + paddr = (__pte & ~offset_mask) | (iova & offset_mask); return paddr; } @@ -2523,8 +2578,8 @@ static struct iommu_ops amd_iommu_ops = { .domain_destroy = amd_iommu_domain_destroy, .attach_dev = amd_iommu_attach_device, .detach_dev = amd_iommu_detach_device, - .map = amd_iommu_map_range, - .unmap = amd_iommu_unmap_range, + .map = amd_iommu_map, + .unmap = amd_iommu_unmap, .iova_to_phys = amd_iommu_iova_to_phys, .domain_has_cap = amd_iommu_domain_has_cap, }; diff --git a/arch/x86/kernel/amd_iommu_init.c b/arch/x86/kernel/amd_iommu_init.c index 6360abf993d4..3bacb4d0844c 100644 --- a/arch/x86/kernel/amd_iommu_init.c +++ b/arch/x86/kernel/amd_iommu_init.c @@ -120,6 +120,7 @@ struct ivmd_header { bool amd_iommu_dump; static int __initdata amd_iommu_detected; +static bool __initdata amd_iommu_disabled; u16 amd_iommu_last_bdf; /* largest PCI device id we have to handle */ @@ -1372,6 +1373,9 @@ void __init amd_iommu_detect(void) if (no_iommu || (iommu_detected && !gart_iommu_aperture)) return; + if (amd_iommu_disabled) + return; + if (acpi_table_parse("IVRS", early_amd_iommu_detect) == 0) { iommu_detected = 1; amd_iommu_detected = 1; @@ -1401,6 +1405,8 @@ static int __init parse_amd_iommu_options(char *str) for (; *str; ++str) { if (strncmp(str, "fullflush", 9) == 0) amd_iommu_unmap_flush = true; + if (strncmp(str, "off", 3) == 0) + amd_iommu_disabled = true; } return 1; diff --git a/arch/x86/kernel/apic/es7000_32.c b/arch/x86/kernel/apic/es7000_32.c index 03ba1b895f5e..425e53a87feb 100644 --- a/arch/x86/kernel/apic/es7000_32.c +++ b/arch/x86/kernel/apic/es7000_32.c @@ -131,24 +131,6 @@ int es7000_plat; static unsigned int base; -static int -es7000_rename_gsi(int ioapic, int gsi) -{ - if (es7000_plat == ES7000_ZORRO) - return gsi; - - if (!base) { - int i; - for (i = 0; i < nr_ioapics; i++) - base += nr_ioapic_registers[i]; - } - - if (!ioapic && (gsi < 16)) - gsi += base; - - return gsi; -} - static int __cpuinit wakeup_secondary_cpu_via_mip(int cpu, unsigned long eip) { unsigned long vect = 0, psaival = 0; @@ -190,7 +172,6 @@ static void setup_unisys(void) es7000_plat = ES7000_ZORRO; else es7000_plat = ES7000_CLASSIC; - ioapic_renumber_irq = es7000_rename_gsi; } /* diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c index eb2789c3f721..33f3563a2a52 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -89,6 +89,9 @@ int nr_ioapics; /* IO APIC gsi routing info */ struct mp_ioapic_gsi mp_gsi_routing[MAX_IO_APICS]; +/* The last gsi number used */ +u32 gsi_end; + /* MP IRQ source entries */ struct mpc_intsrc mp_irqs[MAX_IRQ_SOURCES]; @@ -1013,10 +1016,9 @@ static inline int irq_trigger(int idx) return MPBIOS_trigger(idx); } -int (*ioapic_renumber_irq)(int ioapic, int irq); static int pin_2_irq(int idx, int apic, int pin) { - int irq, i; + int irq; int bus = mp_irqs[idx].srcbus; /* @@ -1028,18 +1030,12 @@ static int pin_2_irq(int idx, int apic, int pin) if (test_bit(bus, mp_bus_not_pci)) { irq = mp_irqs[idx].srcbusirq; } else { - /* - * PCI IRQs are mapped in order - */ - i = irq = 0; - while (i < apic) - irq += nr_ioapic_registers[i++]; - irq += pin; - /* - * For MPS mode, so far only needed by ES7000 platform - */ - if (ioapic_renumber_irq) - irq = ioapic_renumber_irq(apic, irq); + u32 gsi = mp_gsi_routing[apic].gsi_base + pin; + + if (gsi >= NR_IRQS_LEGACY) + irq = gsi; + else + irq = gsi_end + 1 + gsi; } #ifdef CONFIG_X86_32 @@ 
-1950,20 +1946,8 @@ static struct { int pin, apic; } ioapic_i8259 = { -1, -1 }; void __init enable_IO_APIC(void) { - union IO_APIC_reg_01 reg_01; int i8259_apic, i8259_pin; int apic; - unsigned long flags; - - /* - * The number of IO-APIC IRQ registers (== #pins): - */ - for (apic = 0; apic < nr_ioapics; apic++) { - raw_spin_lock_irqsave(&ioapic_lock, flags); - reg_01.raw = io_apic_read(apic, 1); - raw_spin_unlock_irqrestore(&ioapic_lock, flags); - nr_ioapic_registers[apic] = reg_01.bits.entries+1; - } if (!legacy_pic->nr_legacy_irqs) return; @@ -3858,27 +3842,20 @@ int __init io_apic_get_redir_entries (int ioapic) reg_01.raw = io_apic_read(ioapic, 1); raw_spin_unlock_irqrestore(&ioapic_lock, flags); - return reg_01.bits.entries; + /* The register returns the maximum index redir index + * supported, which is one less than the total number of redir + * entries. + */ + return reg_01.bits.entries + 1; } void __init probe_nr_irqs_gsi(void) { - int nr = 0; + int nr; - nr = acpi_probe_gsi(); - if (nr > nr_irqs_gsi) { + nr = gsi_end + 1 + NR_IRQS_LEGACY; + if (nr > nr_irqs_gsi) nr_irqs_gsi = nr; - } else { - /* for acpi=off or acpi is not compiled in */ - int idx; - - nr = 0; - for (idx = 0; idx < nr_ioapics; idx++) - nr += io_apic_get_redir_entries(idx) + 1; - - if (nr > nr_irqs_gsi) - nr_irqs_gsi = nr; - } printk(KERN_DEBUG "nr_irqs_gsi: %d\n", nr_irqs_gsi); } @@ -4085,22 +4062,27 @@ int __init io_apic_get_version(int ioapic) return reg_01.bits.version; } -int acpi_get_override_irq(int bus_irq, int *trigger, int *polarity) +int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity) { - int i; + int ioapic, pin, idx; if (skip_ioapic_setup) return -1; - for (i = 0; i < mp_irq_entries; i++) - if (mp_irqs[i].irqtype == mp_INT && - mp_irqs[i].srcbusirq == bus_irq) - break; - if (i >= mp_irq_entries) + ioapic = mp_find_ioapic(gsi); + if (ioapic < 0) return -1; - *trigger = irq_trigger(i); - *polarity = irq_polarity(i); + pin = mp_find_ioapic_pin(ioapic, gsi); + if (pin < 0) + return -1; + + idx = find_irq_entry(ioapic, pin, mp_INT); + if (idx < 0) + return -1; + + *trigger = irq_trigger(idx); + *polarity = irq_polarity(idx); return 0; } @@ -4241,7 +4223,7 @@ void __init ioapic_insert_resources(void) } } -int mp_find_ioapic(int gsi) +int mp_find_ioapic(u32 gsi) { int i = 0; @@ -4256,7 +4238,7 @@ int mp_find_ioapic(int gsi) return -1; } -int mp_find_ioapic_pin(int ioapic, int gsi) +int mp_find_ioapic_pin(int ioapic, u32 gsi) { if (WARN_ON(ioapic == -1)) return -1; @@ -4284,6 +4266,7 @@ static int bad_ioapic(unsigned long address) void __init mp_register_ioapic(int id, u32 address, u32 gsi_base) { int idx = 0; + int entries; if (bad_ioapic(address)) return; @@ -4302,9 +4285,17 @@ void __init mp_register_ioapic(int id, u32 address, u32 gsi_base) * Build basic GSI lookup table to facilitate gsi->io_apic lookups * and to prevent reprogramming of IOAPIC pins (PCI GSIs). 
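A worked example (not part of the patch) of the redirection-entry counting above and the gsi_end bookkeeping set up just below: io_apic_get_redir_entries() now returns a count rather than a maximum index, so each IO-APIC contributes gsi_base + entries - 1 to gsi_end, and probe_nr_irqs_gsi() reserves room for every GSI plus the legacy IRQs. The two sample IO-APICs are invented for illustration.

#include <stdio.h>

#define NR_IRQS_LEGACY 16

int main(void)
{
	unsigned int gsi_end = 0;
	unsigned int bases[2]   = { 0, 24 };
	unsigned int entries[2] = { 24, 32 };	/* redirection entry counts */
	int i;

	for (i = 0; i < 2; i++) {
		unsigned int end = bases[i] + entries[i] - 1;
		if (end > gsi_end)
			gsi_end = end;
	}

	printf("gsi_end=%u nr_irqs_gsi=%u\n",
	       gsi_end, gsi_end + 1 + NR_IRQS_LEGACY);	/* 55 and 72 */
	return 0;
}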
*/ + entries = io_apic_get_redir_entries(idx); mp_gsi_routing[idx].gsi_base = gsi_base; - mp_gsi_routing[idx].gsi_end = gsi_base + - io_apic_get_redir_entries(idx); + mp_gsi_routing[idx].gsi_end = gsi_base + entries - 1; + + /* + * The number of IO-APIC IRQ registers (== #pins): + */ + nr_ioapic_registers[idx] = entries; + + if (mp_gsi_routing[idx].gsi_end > gsi_end) + gsi_end = mp_gsi_routing[idx].gsi_end; printk(KERN_INFO "IOAPIC[%d]: apic_id %d, version %d, address 0x%x, " "GSI %d-%d\n", idx, mp_ioapics[idx].apicid, diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c index c085d52dbaf2..e46f98f36e31 100644 --- a/arch/x86/kernel/apic/x2apic_uv_x.c +++ b/arch/x86/kernel/apic/x2apic_uv_x.c @@ -735,9 +735,6 @@ void __init uv_system_init(void) uv_node_to_blade[nid] = blade; uv_cpu_to_blade[cpu] = blade; max_pnode = max(pnode, max_pnode); - - printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, lcpu %d, blade %d\n", - cpu, apicid, pnode, nid, lcpu, blade); } /* Add blade/pnode info for nodes without cpus */ diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c index 031aa887b0eb..c4f9182ca3ac 100644 --- a/arch/x86/kernel/apm_32.c +++ b/arch/x86/kernel/apm_32.c @@ -1224,7 +1224,7 @@ static void reinit_timer(void) #ifdef INIT_TIMER_AFTER_SUSPEND unsigned long flags; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); /* set the clock to HZ */ outb_pit(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */ udelay(10); @@ -1232,7 +1232,7 @@ static void reinit_timer(void) udelay(10); outb_pit(LATCH >> 8, PIT_CH0); /* MSB */ udelay(10); - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); #endif } diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index c202b62f3671..3a785da34b6f 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -14,7 +14,7 @@ CFLAGS_common.o := $(nostackp) obj-y := intel_cacheinfo.o addon_cpuid_features.o obj-y += proc.o capflags.o powerflags.o common.o -obj-y += vmware.o hypervisor.o sched.o +obj-y += vmware.o hypervisor.o sched.o mshyperv.o obj-$(CONFIG_X86_32) += bugs.o cmpxchg.o obj-$(CONFIG_X86_64) += bugs_64.o diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c index 97ad79cdf688..10fa5684a662 100644 --- a/arch/x86/kernel/cpu/addon_cpuid_features.c +++ b/arch/x86/kernel/cpu/addon_cpuid_features.c @@ -30,12 +30,14 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c) const struct cpuid_bit *cb; static const struct cpuid_bit __cpuinitconst cpuid_bits[] = { - { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 }, - { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 }, - { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a }, - { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a }, - { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a }, - { X86_FEATURE_NRIPS, CR_EDX, 3, 0x8000000a }, + { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 }, + { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 }, + { X86_FEATURE_APERFMPERF, CR_ECX, 0, 0x00000006 }, + { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007 }, + { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a }, + { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a }, + { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a }, + { X86_FEATURE_NRIPS, CR_EDX, 3, 0x8000000a }, { 0, 0, 0, 0 } }; diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 01a265212395..c39576cb3018 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -86,7 +86,7 @@ static void __init 
check_fpu(void) static void __init check_hlt(void) { - if (paravirt_enabled()) + if (boot_cpu_data.x86 >= 5 || paravirt_enabled()) return; printk(KERN_INFO "Checking 'hlt' instruction... "); diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 4868e4a951ee..c1c00d0b1692 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1243,10 +1243,7 @@ void __cpuinit cpu_init(void) /* * Force FPU initialization: */ - if (cpu_has_xsave) - current_thread_info()->status = TS_XSAVE; - else - current_thread_info()->status = 0; + current_thread_info()->status = 0; clear_used_math(); mxcsr_feature_mask_init(); diff --git a/arch/x86/kernel/cpu/cpufreq/Makefile b/arch/x86/kernel/cpu/cpufreq/Makefile index 1840c0a5170b..bd54bf67e6fb 100644 --- a/arch/x86/kernel/cpu/cpufreq/Makefile +++ b/arch/x86/kernel/cpu/cpufreq/Makefile @@ -2,8 +2,8 @@ # K8 systems. ACPI is preferred to all other hardware-specific drivers. # speedstep-* is preferred over p4-clockmod. -obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o -obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o +obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o +obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o obj-$(CONFIG_X86_POWERNOW_K7) += powernow-k7.o diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c index 459168083b77..1d3cddaa40ee 100644 --- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c +++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c @@ -46,6 +46,7 @@ #include <asm/msr.h> #include <asm/processor.h> #include <asm/cpufeature.h> +#include "mperf.h" #define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, \ "acpi-cpufreq", msg) @@ -71,8 +72,6 @@ struct acpi_cpufreq_data { static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data); -static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf); - /* acpi_perf_data is a pointer to percpu data. */ static struct acpi_processor_performance *acpi_perf_data; @@ -240,45 +239,6 @@ static u32 get_cur_val(const struct cpumask *mask) return cmd.val; } -/* Called via smp_call_function_single(), on the target CPU */ -static void read_measured_perf_ctrs(void *_cur) -{ - struct aperfmperf *am = _cur; - - get_aperfmperf(am); -} - -/* - * Return the measured active (C0) frequency on this CPU since last call - * to this function. - * Input: cpu number - * Return: Average CPU frequency in terms of max frequency (zero on error) - * - * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance - * over a period of time, while CPU is in C0 state. - * IA32_MPERF counts at the rate of max advertised frequency - * IA32_APERF counts at the rate of actual CPU frequency - * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and - * no meaning should be associated with absolute values of these MSRs. 
- */ -static unsigned int get_measured_perf(struct cpufreq_policy *policy, - unsigned int cpu) -{ - struct aperfmperf perf; - unsigned long ratio; - unsigned int retval; - - if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1)) - return 0; - - ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf); - per_cpu(acfreq_old_perf, cpu) = perf; - - retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT; - - return retval; -} - static unsigned int get_cur_freq_on_cpu(unsigned int cpu) { struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu); @@ -702,7 +662,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) /* Check for APERF/MPERF support in hardware */ if (cpu_has(c, X86_FEATURE_APERFMPERF)) - acpi_cpufreq_driver.getavg = get_measured_perf; + acpi_cpufreq_driver.getavg = cpufreq_get_measured_perf; dprintk("CPU%u - ACPI performance management activated.\n", cpu); for (i = 0; i < perf->state_count; i++) diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.c b/arch/x86/kernel/cpu/cpufreq/mperf.c new file mode 100644 index 000000000000..911e193018ae --- /dev/null +++ b/arch/x86/kernel/cpu/cpufreq/mperf.c @@ -0,0 +1,51 @@ +#include <linux/kernel.h> +#include <linux/smp.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/cpufreq.h> +#include <linux/slab.h> + +#include "mperf.h" + +static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf); + +/* Called via smp_call_function_single(), on the target CPU */ +static void read_measured_perf_ctrs(void *_cur) +{ + struct aperfmperf *am = _cur; + + get_aperfmperf(am); +} + +/* + * Return the measured active (C0) frequency on this CPU since last call + * to this function. + * Input: cpu number + * Return: Average CPU frequency in terms of max frequency (zero on error) + * + * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance + * over a period of time, while CPU is in C0 state. + * IA32_MPERF counts at the rate of max advertised frequency + * IA32_APERF counts at the rate of actual CPU frequency + * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and + * no meaning should be associated with absolute values of these MSRs. + */ +unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy, + unsigned int cpu) +{ + struct aperfmperf perf; + unsigned long ratio; + unsigned int retval; + + if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1)) + return 0; + + ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf); + per_cpu(acfreq_old_perf, cpu) = perf; + + retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT; + + return retval; +} +EXPORT_SYMBOL_GPL(cpufreq_get_measured_perf); +MODULE_LICENSE("GPL"); diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.h b/arch/x86/kernel/cpu/cpufreq/mperf.h new file mode 100644 index 000000000000..5dbf2950dc22 --- /dev/null +++ b/arch/x86/kernel/cpu/cpufreq/mperf.h @@ -0,0 +1,9 @@ +/* + * (c) 2010 Advanced Micro Devices, Inc. + * Your use of this code is subject to the terms and conditions of the + * GNU general public license version 2. 
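A worked model (not part of the patch) of the averaging now done by the shared cpufreq_get_measured_perf() above: only the ratio of the APERF and MPERF deltas matters, and it is scaled against the advertised maximum frequency. APERFMPERF_SHIFT is assumed here to be the usual 10-bit fixed-point shift; the sample counter values are invented.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const unsigned int shift = 10;		/* assumed APERFMPERF_SHIFT */
	uint64_t delta_aperf = 1500000000ULL;	/* counts at actual frequency */
	uint64_t delta_mperf = 2000000000ULL;	/* counts at max frequency */
	unsigned int max_freq = 2000000;	/* kHz, policy->cpuinfo.max_freq */

	uint64_t ratio = (delta_aperf << shift) / delta_mperf;		/* ~0.75 fixed point */
	unsigned int measured =
		(unsigned int)(((uint64_t)max_freq * ratio) >> shift);

	printf("measured average frequency: %u kHz\n", measured);	/* 1500000 */
	return 0;
}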
See "COPYING" or + * http://www.gnu.org/licenses/gpl.html + */ + +unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy, + unsigned int cpu); diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c index b6215b9798e2..6f3dc8fbbfdc 100644 --- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c +++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c @@ -1,6 +1,5 @@ - /* - * (c) 2003-2006 Advanced Micro Devices, Inc. + * (c) 2003-2010 Advanced Micro Devices, Inc. * Your use of this code is subject to the terms and conditions of the * GNU general public license version 2. See "COPYING" or * http://www.gnu.org/licenses/gpl.html @@ -46,6 +45,7 @@ #define PFX "powernow-k8: " #define VERSION "version 2.20.00" #include "powernow-k8.h" +#include "mperf.h" /* serialize freq changes */ static DEFINE_MUTEX(fidvid_mutex); @@ -54,6 +54,12 @@ static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data); static int cpu_family = CPU_OPTERON; +/* core performance boost */ +static bool cpb_capable, cpb_enabled; +static struct msr __percpu *msrs; + +static struct cpufreq_driver cpufreq_amd64_driver; + #ifndef CONFIG_SMP static inline const struct cpumask *cpu_core_mask(int cpu) { @@ -1249,6 +1255,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) struct powernow_k8_data *data; struct init_on_cpu init_on_cpu; int rc; + struct cpuinfo_x86 *c = &cpu_data(pol->cpu); if (!cpu_online(pol->cpu)) return -ENODEV; @@ -1323,6 +1330,10 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) return -EINVAL; } + /* Check for APERF/MPERF support in hardware */ + if (cpu_has(c, X86_FEATURE_APERFMPERF)) + cpufreq_amd64_driver.getavg = cpufreq_get_measured_perf; + cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu); if (cpu_family == CPU_HW_PSTATE) @@ -1394,8 +1405,77 @@ out: return khz; } +static void _cpb_toggle_msrs(bool t) +{ + int cpu; + + get_online_cpus(); + + rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + + for_each_cpu(cpu, cpu_online_mask) { + struct msr *reg = per_cpu_ptr(msrs, cpu); + if (t) + reg->l &= ~BIT(25); + else + reg->l |= BIT(25); + } + wrmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + + put_online_cpus(); +} + +/* + * Switch on/off core performance boosting. + * + * 0=disable + * 1=enable. 
+ */ +static void cpb_toggle(bool t) +{ + if (!cpb_capable) + return; + + if (t && !cpb_enabled) { + cpb_enabled = true; + _cpb_toggle_msrs(t); + printk(KERN_INFO PFX "Core Boosting enabled.\n"); + } else if (!t && cpb_enabled) { + cpb_enabled = false; + _cpb_toggle_msrs(t); + printk(KERN_INFO PFX "Core Boosting disabled.\n"); + } +} + +static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf, + size_t count) +{ + int ret = -EINVAL; + unsigned long val = 0; + + ret = strict_strtoul(buf, 10, &val); + if (!ret && (val == 0 || val == 1) && cpb_capable) + cpb_toggle(val); + else + return -EINVAL; + + return count; +} + +static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf) +{ + return sprintf(buf, "%u\n", cpb_enabled); +} + +#define define_one_rw(_name) \ +static struct freq_attr _name = \ +__ATTR(_name, 0644, show_##_name, store_##_name) + +define_one_rw(cpb); + static struct freq_attr *powernow_k8_attr[] = { &cpufreq_freq_attr_scaling_available_freqs, + &cpb, NULL, }; @@ -1411,10 +1491,51 @@ static struct cpufreq_driver cpufreq_amd64_driver = { .attr = powernow_k8_attr, }; +/* + * Clear the boost-disable flag on the CPU_DOWN path so that this cpu + * cannot block the remaining ones from boosting. On the CPU_UP path we + * simply keep the boost-disable flag in sync with the current global + * state. + */ +static int __cpuinit cpb_notify(struct notifier_block *nb, unsigned long action, + void *hcpu) +{ + unsigned cpu = (long)hcpu; + u32 lo, hi; + + switch (action) { + case CPU_UP_PREPARE: + case CPU_UP_PREPARE_FROZEN: + + if (!cpb_enabled) { + rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi); + lo |= BIT(25); + wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi); + } + break; + + case CPU_DOWN_PREPARE: + case CPU_DOWN_PREPARE_FROZEN: + rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi); + lo &= ~BIT(25); + wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi); + break; + + default: + break; + } + + return NOTIFY_OK; +} + +static struct notifier_block __cpuinitdata cpb_nb = { + .notifier_call = cpb_notify, +}; + /* driver entry point for init */ static int __cpuinit powernowk8_init(void) { - unsigned int i, supported_cpus = 0; + unsigned int i, supported_cpus = 0, cpu; for_each_online_cpu(i) { int rc; @@ -1423,15 +1544,36 @@ static int __cpuinit powernowk8_init(void) supported_cpus++; } - if (supported_cpus == num_online_cpus()) { - printk(KERN_INFO PFX "Found %d %s " - "processors (%d cpu cores) (" VERSION ")\n", - num_online_nodes(), - boot_cpu_data.x86_model_id, supported_cpus); - return cpufreq_register_driver(&cpufreq_amd64_driver); + if (supported_cpus != num_online_cpus()) + return -ENODEV; + + printk(KERN_INFO PFX "Found %d %s (%d cpu cores) (" VERSION ")\n", + num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus); + + if (boot_cpu_has(X86_FEATURE_CPB)) { + + cpb_capable = true; + + register_cpu_notifier(&cpb_nb); + + msrs = msrs_alloc(); + if (!msrs) { + printk(KERN_ERR "%s: Error allocating msrs!\n", __func__); + return -ENOMEM; + } + + rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + + for_each_cpu(cpu, cpu_online_mask) { + struct msr *reg = per_cpu_ptr(msrs, cpu); + cpb_enabled |= !(!!(reg->l & BIT(25))); + } + + printk(KERN_INFO PFX "Core Performance Boosting: %s.\n", + (cpb_enabled ? 
"on" : "off")); } - return -ENODEV; + return cpufreq_register_driver(&cpufreq_amd64_driver); } /* driver entry point for term */ @@ -1439,6 +1581,13 @@ static void __exit powernowk8_exit(void) { dprintk("exit\n"); + if (boot_cpu_has(X86_FEATURE_CPB)) { + msrs_free(msrs); + msrs = NULL; + + unregister_cpu_notifier(&cpb_nb); + } + cpufreq_unregister_driver(&cpufreq_amd64_driver); } diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h index 02ce824073cb..df3529b1c02d 100644 --- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h +++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h @@ -5,7 +5,6 @@ * http://www.gnu.org/licenses/gpl.html */ - enum pstate { HW_PSTATE_INVALID = 0xff, HW_PSTATE_0 = 0, @@ -55,7 +54,6 @@ struct powernow_k8_data { struct cpumask *available_cores; }; - /* processor's cpuid instruction support */ #define CPUID_PROCESSOR_SIGNATURE 1 /* function 1 */ #define CPUID_XFAM 0x0ff00000 /* extended family */ diff --git a/arch/x86/kernel/cpu/hypervisor.c b/arch/x86/kernel/cpu/hypervisor.c index 08be922de33a..dd531cc56a8f 100644 --- a/arch/x86/kernel/cpu/hypervisor.c +++ b/arch/x86/kernel/cpu/hypervisor.c @@ -21,37 +21,55 @@ * */ +#include <linux/module.h> #include <asm/processor.h> -#include <asm/vmware.h> #include <asm/hypervisor.h> -static inline void __cpuinit -detect_hypervisor_vendor(struct cpuinfo_x86 *c) +/* + * Hypervisor detect order. This is specified explicitly here because + * some hypervisors might implement compatibility modes for other + * hypervisors and therefore need to be detected in specific sequence. + */ +static const __initconst struct hypervisor_x86 * const hypervisors[] = { - if (vmware_platform()) - c->x86_hyper_vendor = X86_HYPER_VENDOR_VMWARE; - else - c->x86_hyper_vendor = X86_HYPER_VENDOR_NONE; -} + &x86_hyper_vmware, + &x86_hyper_ms_hyperv, +}; -static inline void __cpuinit -hypervisor_set_feature_bits(struct cpuinfo_x86 *c) +const struct hypervisor_x86 *x86_hyper; +EXPORT_SYMBOL(x86_hyper); + +static inline void __init +detect_hypervisor_vendor(void) { - if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE) { - vmware_set_feature_bits(c); - return; + const struct hypervisor_x86 *h, * const *p; + + for (p = hypervisors; p < hypervisors + ARRAY_SIZE(hypervisors); p++) { + h = *p; + if (h->detect()) { + x86_hyper = h; + printk(KERN_INFO "Hypervisor detected: %s\n", h->name); + break; + } } } void __cpuinit init_hypervisor(struct cpuinfo_x86 *c) { - detect_hypervisor_vendor(c); - hypervisor_set_feature_bits(c); + if (x86_hyper && x86_hyper->set_cpu_features) + x86_hyper->set_cpu_features(c); } void __init init_hypervisor_platform(void) { + + detect_hypervisor_vendor(); + + if (!x86_hyper) + return; + init_hypervisor(&boot_cpu_data); - if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE) - vmware_platform_setup(); + + if (x86_hyper->init_platform) + x86_hyper->init_platform(); } diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index 1366c7cfd483..85f69cdeae10 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -12,7 +12,6 @@ #include <asm/processor.h> #include <asm/pgtable.h> #include <asm/msr.h> -#include <asm/ds.h> #include <asm/bugs.h> #include <asm/cpu.h> @@ -373,12 +372,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c) set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON); } - if (c->cpuid_level > 6) { - unsigned ecx = cpuid_ecx(6); - if (ecx & 0x01) - set_cpu_cap(c, X86_FEATURE_APERFMPERF); - } - if (cpu_has_xmm2) set_cpu_cap(c, 
X86_FEATURE_LFENCE_RDTSC); if (cpu_has_ds) { @@ -388,7 +381,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c) set_cpu_cap(c, X86_FEATURE_BTS); if (!(l1 & (1<<12))) set_cpu_cap(c, X86_FEATURE_PEBS); - ds_init_intel(c); } if (c->x86 == 6 && c->x86_model == 29 && cpu_has_clflush) diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index b3eeb66c0a51..33eae2062cf5 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -148,13 +148,19 @@ union _cpuid4_leaf_ecx { u32 full; }; +struct amd_l3_cache { + struct pci_dev *dev; + bool can_disable; + unsigned indices; + u8 subcaches[4]; +}; + struct _cpuid4_info { union _cpuid4_leaf_eax eax; union _cpuid4_leaf_ebx ebx; union _cpuid4_leaf_ecx ecx; unsigned long size; - bool can_disable; - unsigned int l3_indices; + struct amd_l3_cache *l3; DECLARE_BITMAP(shared_cpu_map, NR_CPUS); }; @@ -164,8 +170,7 @@ struct _cpuid4_info_regs { union _cpuid4_leaf_ebx ebx; union _cpuid4_leaf_ecx ecx; unsigned long size; - bool can_disable; - unsigned int l3_indices; + struct amd_l3_cache *l3; }; unsigned short num_cache_leaves; @@ -302,87 +307,163 @@ struct _cache_attr { }; #ifdef CONFIG_CPU_SUP_AMD -static unsigned int __cpuinit amd_calc_l3_indices(void) + +/* + * L3 cache descriptors + */ +static struct amd_l3_cache **__cpuinitdata l3_caches; + +static void __cpuinit amd_calc_l3_indices(struct amd_l3_cache *l3) { - /* - * We're called over smp_call_function_single() and therefore - * are on the correct cpu. - */ - int cpu = smp_processor_id(); - int node = cpu_to_node(cpu); - struct pci_dev *dev = node_to_k8_nb_misc(node); unsigned int sc0, sc1, sc2, sc3; u32 val = 0; - pci_read_config_dword(dev, 0x1C4, &val); + pci_read_config_dword(l3->dev, 0x1C4, &val); /* calculate subcache sizes */ - sc0 = !(val & BIT(0)); - sc1 = !(val & BIT(4)); - sc2 = !(val & BIT(8)) + !(val & BIT(9)); - sc3 = !(val & BIT(12)) + !(val & BIT(13)); + l3->subcaches[0] = sc0 = !(val & BIT(0)); + l3->subcaches[1] = sc1 = !(val & BIT(4)); + l3->subcaches[2] = sc2 = !(val & BIT(8)) + !(val & BIT(9)); + l3->subcaches[3] = sc3 = !(val & BIT(12)) + !(val & BIT(13)); - return (max(max(max(sc0, sc1), sc2), sc3) << 10) - 1; + l3->indices = (max(max(max(sc0, sc1), sc2), sc3) << 10) - 1; +} + +static struct amd_l3_cache * __cpuinit amd_init_l3_cache(int node) +{ + struct amd_l3_cache *l3; + struct pci_dev *dev = node_to_k8_nb_misc(node); + + l3 = kzalloc(sizeof(struct amd_l3_cache), GFP_ATOMIC); + if (!l3) { + printk(KERN_WARNING "Error allocating L3 struct\n"); + return NULL; + } + + l3->dev = dev; + + amd_calc_l3_indices(l3); + + return l3; } static void __cpuinit amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) { - if (index < 3) + int node; + + if (boot_cpu_data.x86 != 0x10) return; - if (boot_cpu_data.x86 == 0x11) + if (index < 3) return; /* see errata #382 and #388 */ - if ((boot_cpu_data.x86 == 0x10) && - ((boot_cpu_data.x86_model < 0x8) || - (boot_cpu_data.x86_mask < 0x1))) + if (boot_cpu_data.x86_model < 0x8) + return; + + if ((boot_cpu_data.x86_model == 0x8 || + boot_cpu_data.x86_model == 0x9) + && + boot_cpu_data.x86_mask < 0x1) + return; + + /* not in virtualized environments */ + if (num_k8_northbridges == 0) return; - this_leaf->can_disable = true; - this_leaf->l3_indices = amd_calc_l3_indices(); + /* + * Strictly speaking, the amount in @size below is leaked since it is + * never freed but this is done only on shutdown so it doesn't matter. 
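A worked example (not part of the patch) of the subcache arithmetic in amd_calc_l3_indices() above: with nothing fused off, the register read from PCI config offset 0x1C4 is zero, all four subcaches are fully present, and the usable index count comes out as (2 << 10) - 1. The register value is invented for illustration.

#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1U << (n))

int main(void)
{
	uint32_t val = 0;	/* sample contents of config reg 0x1C4 */
	unsigned int sc0, sc1, sc2, sc3, max;

	sc0 = !(val & BIT(0));
	sc1 = !(val & BIT(4));
	sc2 = !(val & BIT(8))  + !(val & BIT(9));
	sc3 = !(val & BIT(12)) + !(val & BIT(13));

	max = sc0;
	if (sc1 > max) max = sc1;
	if (sc2 > max) max = sc2;
	if (sc3 > max) max = sc3;

	printf("l3 indices = %u\n", (max << 10) - 1);	/* 2047 */
	return 0;
}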
+ */ + if (!l3_caches) { + int size = num_k8_northbridges * sizeof(struct amd_l3_cache *); + + l3_caches = kzalloc(size, GFP_ATOMIC); + if (!l3_caches) + return; + } + + node = amd_get_nb_id(smp_processor_id()); + + if (!l3_caches[node]) { + l3_caches[node] = amd_init_l3_cache(node); + l3_caches[node]->can_disable = true; + } + + WARN_ON(!l3_caches[node]); + + this_leaf->l3 = l3_caches[node]; } static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, - unsigned int index) + unsigned int slot) { - int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - int node = amd_get_nb_id(cpu); - struct pci_dev *dev = node_to_k8_nb_misc(node); + struct pci_dev *dev = this_leaf->l3->dev; unsigned int reg = 0; - if (!this_leaf->can_disable) + if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; if (!dev) return -EINVAL; - pci_read_config_dword(dev, 0x1BC + index * 4, ®); + pci_read_config_dword(dev, 0x1BC + slot * 4, ®); return sprintf(buf, "0x%08x\n", reg); } -#define SHOW_CACHE_DISABLE(index) \ +#define SHOW_CACHE_DISABLE(slot) \ static ssize_t \ -show_cache_disable_##index(struct _cpuid4_info *this_leaf, char *buf) \ +show_cache_disable_##slot(struct _cpuid4_info *this_leaf, char *buf) \ { \ - return show_cache_disable(this_leaf, buf, index); \ + return show_cache_disable(this_leaf, buf, slot); \ } SHOW_CACHE_DISABLE(0) SHOW_CACHE_DISABLE(1) +static void amd_l3_disable_index(struct amd_l3_cache *l3, int cpu, + unsigned slot, unsigned long idx) +{ + int i; + + idx |= BIT(30); + + /* + * disable index in all 4 subcaches + */ + for (i = 0; i < 4; i++) { + u32 reg = idx | (i << 20); + + if (!l3->subcaches[i]) + continue; + + pci_write_config_dword(l3->dev, 0x1BC + slot * 4, reg); + + /* + * We need to WBINVD on a core on the node containing the L3 + * cache which indices we disable therefore a simple wbinvd() + * is not sufficient. + */ + wbinvd_on_cpu(cpu); + + reg |= BIT(31); + pci_write_config_dword(l3->dev, 0x1BC + slot * 4, reg); + } +} + + static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, - const char *buf, size_t count, unsigned int index) + const char *buf, size_t count, + unsigned int slot) { + struct pci_dev *dev = this_leaf->l3->dev; int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - int node = amd_get_nb_id(cpu); - struct pci_dev *dev = node_to_k8_nb_misc(node); unsigned long val = 0; #define SUBCACHE_MASK (3UL << 20) #define SUBCACHE_INDEX 0xfff - if (!this_leaf->can_disable) + if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; if (!capable(CAP_SYS_ADMIN)) @@ -396,26 +477,20 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, /* do not allow writes outside of allowed bits */ if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || - ((val & SUBCACHE_INDEX) > this_leaf->l3_indices)) + ((val & SUBCACHE_INDEX) > this_leaf->l3->indices)) return -EINVAL; - val |= BIT(30); - pci_write_config_dword(dev, 0x1BC + index * 4, val); - /* - * We need to WBINVD on a core on the node containing the L3 cache which - * indices we disable therefore a simple wbinvd() is not sufficient. 
- */ - wbinvd_on_cpu(cpu); - pci_write_config_dword(dev, 0x1BC + index * 4, val | BIT(31)); + amd_l3_disable_index(this_leaf->l3, cpu, slot, val); + return count; } -#define STORE_CACHE_DISABLE(index) \ +#define STORE_CACHE_DISABLE(slot) \ static ssize_t \ -store_cache_disable_##index(struct _cpuid4_info *this_leaf, \ +store_cache_disable_##slot(struct _cpuid4_info *this_leaf, \ const char *buf, size_t count) \ { \ - return store_cache_disable(this_leaf, buf, count, index); \ + return store_cache_disable(this_leaf, buf, count, slot); \ } STORE_CACHE_DISABLE(0) STORE_CACHE_DISABLE(1) @@ -443,8 +518,7 @@ __cpuinit cpuid4_cache_lookup_regs(int index, if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { amd_cpuid4(index, &eax, &ebx, &ecx); - if (boot_cpu_data.x86 >= 0x10) - amd_check_l3_disable(index, this_leaf); + amd_check_l3_disable(index, this_leaf); } else { cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx); } @@ -701,6 +775,7 @@ static void __cpuinit free_cache_attributes(unsigned int cpu) for (i = 0; i < num_cache_leaves; i++) cache_remove_shared_cpu_map(cpu, i); + kfree(per_cpu(ici_cpuid4_info, cpu)->l3); kfree(per_cpu(ici_cpuid4_info, cpu)); per_cpu(ici_cpuid4_info, cpu) = NULL; } @@ -985,7 +1060,7 @@ static int __cpuinit cache_add_dev(struct sys_device * sys_dev) this_leaf = CPUID4_INFO_IDX(cpu, i); - if (this_leaf->can_disable) + if (this_leaf->l3 && this_leaf->l3->can_disable) ktype_cache.default_attrs = default_l3_attrs; else ktype_cache.default_attrs = default_attrs; diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c index 8a6f0afa767e..7a355ddcc64b 100644 --- a/arch/x86/kernel/cpu/mcheck/mce.c +++ b/arch/x86/kernel/cpu/mcheck/mce.c @@ -539,7 +539,7 @@ void machine_check_poll(enum mcp_flags flags, mce_banks_t *b) struct mce m; int i; - __get_cpu_var(mce_poll_count)++; + percpu_inc(mce_poll_count); mce_setup(&m); @@ -934,7 +934,7 @@ void do_machine_check(struct pt_regs *regs, long error_code) atomic_inc(&mce_entry); - __get_cpu_var(mce_exception_count)++; + percpu_inc(mce_exception_count); if (notify_die(DIE_NMI, "machine check", regs, error_code, 18, SIGKILL) == NOTIFY_STOP) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c new file mode 100644 index 000000000000..16f41bbe46b6 --- /dev/null +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -0,0 +1,55 @@ +/* + * HyperV Detection code. + * + * Copyright (C) 2010, Novell, Inc. + * Author : K. Y. Srinivasan <ksrinivasan@novell.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. 
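A small model (not part of the patch) of the signature test in ms_hyperv_platform() below: the hypervisor vendor CPUID leaf returns a 12-byte vendor string packed little-endian into EBX/ECX/EDX, and the detection simply compares it against "Microsoft Hv". The register values below are made up for illustration; the real code additionally checks that the reported maximum leaf lies within the Hyper-V range.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	/* EBX, ECX, EDX as a guest might see them: "Micr" "osof" "t Hv" */
	uint32_t sig[3] = { 0x7263694d, 0x666f736f, 0x76482074 };

	printf("Hyper-V signature match: %s\n",
	       memcmp("Microsoft Hv", sig, 12) ? "no" : "yes");	/* yes */
	return 0;
}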
+ * + */ + +#include <linux/types.h> +#include <linux/module.h> +#include <asm/processor.h> +#include <asm/hypervisor.h> +#include <asm/hyperv.h> +#include <asm/mshyperv.h> + +struct ms_hyperv_info ms_hyperv; + +static bool __init ms_hyperv_platform(void) +{ + u32 eax; + u32 hyp_signature[3]; + + if (!boot_cpu_has(X86_FEATURE_HYPERVISOR)) + return false; + + cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS, + &eax, &hyp_signature[0], &hyp_signature[1], &hyp_signature[2]); + + return eax >= HYPERV_CPUID_MIN && + eax <= HYPERV_CPUID_MAX && + !memcmp("Microsoft Hv", hyp_signature, 12); +} + +static void __init ms_hyperv_init_platform(void) +{ + /* + * Extract the features and hints + */ + ms_hyperv.features = cpuid_eax(HYPERV_CPUID_FEATURES); + ms_hyperv.hints = cpuid_eax(HYPERV_CPUID_ENLIGHTMENT_INFO); + + printk(KERN_INFO "HyperV: features 0x%x, hints 0x%x\n", + ms_hyperv.features, ms_hyperv.hints); +} + +const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = { + .name = "Microsoft HyperV", + .detect = ms_hyperv_platform, + .init_platform = ms_hyperv_init_platform, +}; +EXPORT_SYMBOL(x86_hyper_ms_hyperv); diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c index db5bdc8addf8..fd4db0db3708 100644 --- a/arch/x86/kernel/cpu/perf_event.c +++ b/arch/x86/kernel/cpu/perf_event.c @@ -31,46 +31,51 @@ #include <asm/nmi.h> #include <asm/compat.h> -static u64 perf_event_mask __read_mostly; +#if 0 +#undef wrmsrl +#define wrmsrl(msr, val) \ +do { \ + trace_printk("wrmsrl(%lx, %lx)\n", (unsigned long)(msr),\ + (unsigned long)(val)); \ + native_write_msr((msr), (u32)((u64)(val)), \ + (u32)((u64)(val) >> 32)); \ +} while (0) +#endif -/* The maximal number of PEBS events: */ -#define MAX_PEBS_EVENTS 4 +/* + * best effort, GUP based copy_from_user() that assumes IRQ or NMI context + */ +static unsigned long +copy_from_user_nmi(void *to, const void __user *from, unsigned long n) +{ + unsigned long offset, addr = (unsigned long)from; + int type = in_nmi() ? KM_NMI : KM_IRQ0; + unsigned long size, len = 0; + struct page *page; + void *map; + int ret; -/* The size of a BTS record in bytes: */ -#define BTS_RECORD_SIZE 24 + do { + ret = __get_user_pages_fast(addr, 1, 0, &page); + if (!ret) + break; -/* The size of a per-cpu BTS buffer in bytes: */ -#define BTS_BUFFER_SIZE (BTS_RECORD_SIZE * 2048) + offset = addr & (PAGE_SIZE - 1); + size = min(PAGE_SIZE - offset, n - len); -/* The BTS overflow threshold in bytes from the end of the buffer: */ -#define BTS_OVFL_TH (BTS_RECORD_SIZE * 128) + map = kmap_atomic(page, type); + memcpy(to, map+offset, size); + kunmap_atomic(map, type); + put_page(page); + len += size; + to += size; + addr += size; -/* - * Bits in the debugctlmsr controlling branch tracing. - */ -#define X86_DEBUGCTL_TR (1 << 6) -#define X86_DEBUGCTL_BTS (1 << 7) -#define X86_DEBUGCTL_BTINT (1 << 8) -#define X86_DEBUGCTL_BTS_OFF_OS (1 << 9) -#define X86_DEBUGCTL_BTS_OFF_USR (1 << 10) + } while (len < n); -/* - * A debug store configuration. - * - * We only support architectures that use 64bit fields. 
- */ -struct debug_store { - u64 bts_buffer_base; - u64 bts_index; - u64 bts_absolute_maximum; - u64 bts_interrupt_threshold; - u64 pebs_buffer_base; - u64 pebs_index; - u64 pebs_absolute_maximum; - u64 pebs_interrupt_threshold; - u64 pebs_event_reset[MAX_PEBS_EVENTS]; -}; + return len; +} struct event_constraint { union { @@ -89,18 +94,41 @@ struct amd_nb { struct event_constraint event_constraints[X86_PMC_IDX_MAX]; }; +#define MAX_LBR_ENTRIES 16 + struct cpu_hw_events { + /* + * Generic x86 PMC bits + */ struct perf_event *events[X86_PMC_IDX_MAX]; /* in counter order */ unsigned long active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; - unsigned long interrupts; int enabled; - struct debug_store *ds; int n_events; int n_added; int assign[X86_PMC_IDX_MAX]; /* event to counter assignment */ u64 tags[X86_PMC_IDX_MAX]; struct perf_event *event_list[X86_PMC_IDX_MAX]; /* in enabled order */ + + unsigned int group_flag; + + /* + * Intel DebugStore bits + */ + struct debug_store *ds; + u64 pebs_enabled; + + /* + * Intel LBR bits + */ + int lbr_users; + void *lbr_context; + struct perf_branch_stack lbr_stack; + struct perf_branch_entry lbr_entries[MAX_LBR_ENTRIES]; + + /* + * AMD specific bits + */ struct amd_nb *amd_nb; }; @@ -114,44 +142,75 @@ struct cpu_hw_events { #define EVENT_CONSTRAINT(c, n, m) \ __EVENT_CONSTRAINT(c, n, m, HWEIGHT(n)) +/* + * Constraint on the Event code. + */ #define INTEL_EVENT_CONSTRAINT(c, n) \ - EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVTSEL_MASK) + EVENT_CONSTRAINT(c, n, ARCH_PERFMON_EVENTSEL_EVENT) +/* + * Constraint on the Event code + UMask + fixed-mask + * + * filter mask to validate fixed counter events. + * the following filters disqualify for fixed counters: + * - inv + * - edge + * - cnt-mask + * The other filters are supported by fixed counters. + * The any-thread option is supported starting with v3. 
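As an illustration (not part of the patch) of the constraint macros above: a constraint's idxmsk is a bitmask over counter indices, generic counters sit in the low bits and fixed counters start at bit 32, so FIXED_EVENT_CONSTRAINT(event, n) pins an event to index 32 + n, and the weight (popcount) drives the "most constrained first" scheduling shown later in x86_schedule_events(). The names and the 4-counter assumption below are local to the example.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t generic_any = (1ULL << 4) - 1;		/* any of 4 generic counters */
	uint64_t fixed_1     = 1ULL << (32 + 1);	/* fixed counter 1 only */

	printf("generic idxmsk=0x%llx weight=%d\n",
	       (unsigned long long)generic_any, __builtin_popcountll(generic_any));
	printf("fixed-1 idxmsk=0x%llx weight=%d\n",
	       (unsigned long long)fixed_1, __builtin_popcountll(fixed_1));
	return 0;
}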
+ */ #define FIXED_EVENT_CONSTRAINT(c, n) \ - EVENT_CONSTRAINT(c, (1ULL << (32+n)), INTEL_ARCH_FIXED_MASK) + EVENT_CONSTRAINT(c, (1ULL << (32+n)), X86_RAW_EVENT_MASK) + +/* + * Constraint on the Event code + UMask + */ +#define PEBS_EVENT_CONSTRAINT(c, n) \ + EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK) #define EVENT_CONSTRAINT_END \ EVENT_CONSTRAINT(0, 0, 0) #define for_each_event_constraint(e, c) \ - for ((e) = (c); (e)->cmask; (e)++) + for ((e) = (c); (e)->weight; (e)++) + +union perf_capabilities { + struct { + u64 lbr_format : 6; + u64 pebs_trap : 1; + u64 pebs_arch_reg : 1; + u64 pebs_format : 4; + u64 smm_freeze : 1; + }; + u64 capabilities; +}; /* * struct x86_pmu - generic x86 pmu */ struct x86_pmu { + /* + * Generic x86 PMC bits + */ const char *name; int version; int (*handle_irq)(struct pt_regs *); void (*disable_all)(void); - void (*enable_all)(void); + void (*enable_all)(int added); void (*enable)(struct perf_event *); void (*disable)(struct perf_event *); + int (*hw_config)(struct perf_event *event); + int (*schedule_events)(struct cpu_hw_events *cpuc, int n, int *assign); unsigned eventsel; unsigned perfctr; u64 (*event_map)(int); - u64 (*raw_event)(u64); int max_events; - int num_events; - int num_events_fixed; - int event_bits; - u64 event_mask; + int num_counters; + int num_counters_fixed; + int cntval_bits; + u64 cntval_mask; int apic; u64 max_period; - u64 intel_ctrl; - void (*enable_bts)(u64 config); - void (*disable_bts)(void); - struct event_constraint * (*get_event_constraints)(struct cpu_hw_events *cpuc, struct perf_event *event); @@ -159,11 +218,32 @@ struct x86_pmu { void (*put_event_constraints)(struct cpu_hw_events *cpuc, struct perf_event *event); struct event_constraint *event_constraints; + void (*quirks)(void); int (*cpu_prepare)(int cpu); void (*cpu_starting)(int cpu); void (*cpu_dying)(int cpu); void (*cpu_dead)(int cpu); + + /* + * Intel Arch Perfmon v2+ + */ + u64 intel_ctrl; + union perf_capabilities intel_cap; + + /* + * Intel DebugStore bits + */ + int bts, pebs; + int pebs_record_size; + void (*drain_pebs)(struct pt_regs *regs); + struct event_constraint *pebs_constraints; + + /* + * Intel LBR + */ + unsigned long lbr_tos, lbr_from, lbr_to; /* MSR base regs */ + int lbr_nr; /* hardware stack size */ }; static struct x86_pmu x86_pmu __read_mostly; @@ -198,7 +278,7 @@ static u64 x86_perf_event_update(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; - int shift = 64 - x86_pmu.event_bits; + int shift = 64 - x86_pmu.cntval_bits; u64 prev_raw_count, new_raw_count; int idx = hwc->idx; s64 delta; @@ -241,33 +321,32 @@ again: static atomic_t active_events; static DEFINE_MUTEX(pmc_reserve_mutex); +#ifdef CONFIG_X86_LOCAL_APIC + static bool reserve_pmc_hardware(void) { -#ifdef CONFIG_X86_LOCAL_APIC int i; if (nmi_watchdog == NMI_LOCAL_APIC) disable_lapic_nmi_watchdog(); - for (i = 0; i < x86_pmu.num_events; i++) { + for (i = 0; i < x86_pmu.num_counters; i++) { if (!reserve_perfctr_nmi(x86_pmu.perfctr + i)) goto perfctr_fail; } - for (i = 0; i < x86_pmu.num_events; i++) { + for (i = 0; i < x86_pmu.num_counters; i++) { if (!reserve_evntsel_nmi(x86_pmu.eventsel + i)) goto eventsel_fail; } -#endif return true; -#ifdef CONFIG_X86_LOCAL_APIC eventsel_fail: for (i--; i >= 0; i--) release_evntsel_nmi(x86_pmu.eventsel + i); - i = x86_pmu.num_events; + i = x86_pmu.num_counters; perfctr_fail: for (i--; i >= 0; i--) @@ -277,128 +356,36 @@ perfctr_fail: enable_lapic_nmi_watchdog(); return false; -#endif } static void release_pmc_hardware(void) { 
-#ifdef CONFIG_X86_LOCAL_APIC int i; - for (i = 0; i < x86_pmu.num_events; i++) { + for (i = 0; i < x86_pmu.num_counters; i++) { release_perfctr_nmi(x86_pmu.perfctr + i); release_evntsel_nmi(x86_pmu.eventsel + i); } if (nmi_watchdog == NMI_LOCAL_APIC) enable_lapic_nmi_watchdog(); -#endif -} - -static inline bool bts_available(void) -{ - return x86_pmu.enable_bts != NULL; } -static void init_debug_store_on_cpu(int cpu) -{ - struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds; - - if (!ds) - return; - - wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, - (u32)((u64)(unsigned long)ds), - (u32)((u64)(unsigned long)ds >> 32)); -} - -static void fini_debug_store_on_cpu(int cpu) -{ - if (!per_cpu(cpu_hw_events, cpu).ds) - return; - - wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0); -} - -static void release_bts_hardware(void) -{ - int cpu; - - if (!bts_available()) - return; - - get_online_cpus(); - - for_each_online_cpu(cpu) - fini_debug_store_on_cpu(cpu); - - for_each_possible_cpu(cpu) { - struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds; - - if (!ds) - continue; - - per_cpu(cpu_hw_events, cpu).ds = NULL; - - kfree((void *)(unsigned long)ds->bts_buffer_base); - kfree(ds); - } - - put_online_cpus(); -} - -static int reserve_bts_hardware(void) -{ - int cpu, err = 0; - - if (!bts_available()) - return 0; - - get_online_cpus(); - - for_each_possible_cpu(cpu) { - struct debug_store *ds; - void *buffer; - - err = -ENOMEM; - buffer = kzalloc(BTS_BUFFER_SIZE, GFP_KERNEL); - if (unlikely(!buffer)) - break; - - ds = kzalloc(sizeof(*ds), GFP_KERNEL); - if (unlikely(!ds)) { - kfree(buffer); - break; - } - - ds->bts_buffer_base = (u64)(unsigned long)buffer; - ds->bts_index = ds->bts_buffer_base; - ds->bts_absolute_maximum = - ds->bts_buffer_base + BTS_BUFFER_SIZE; - ds->bts_interrupt_threshold = - ds->bts_absolute_maximum - BTS_OVFL_TH; - - per_cpu(cpu_hw_events, cpu).ds = ds; - err = 0; - } +#else - if (err) - release_bts_hardware(); - else { - for_each_online_cpu(cpu) - init_debug_store_on_cpu(cpu); - } +static bool reserve_pmc_hardware(void) { return true; } +static void release_pmc_hardware(void) {} - put_online_cpus(); +#endif - return err; -} +static int reserve_ds_buffers(void); +static void release_ds_buffers(void); static void hw_perf_event_destroy(struct perf_event *event) { if (atomic_dec_and_mutex_lock(&active_events, &pmc_reserve_mutex)) { release_pmc_hardware(); - release_bts_hardware(); + release_ds_buffers(); mutex_unlock(&pmc_reserve_mutex); } } @@ -441,54 +428,11 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event_attr *attr) return 0; } -/* - * Setup the hardware configuration for a given attr_type - */ -static int __hw_perf_event_init(struct perf_event *event) +static int x86_setup_perfctr(struct perf_event *event) { struct perf_event_attr *attr = &event->attr; struct hw_perf_event *hwc = &event->hw; u64 config; - int err; - - if (!x86_pmu_initialized()) - return -ENODEV; - - err = 0; - if (!atomic_inc_not_zero(&active_events)) { - mutex_lock(&pmc_reserve_mutex); - if (atomic_read(&active_events) == 0) { - if (!reserve_pmc_hardware()) - err = -EBUSY; - else - err = reserve_bts_hardware(); - } - if (!err) - atomic_inc(&active_events); - mutex_unlock(&pmc_reserve_mutex); - } - if (err) - return err; - - event->destroy = hw_perf_event_destroy; - - /* - * Generate PMC IRQs: - * (keep 'enabled' bit clear for now) - */ - hwc->config = ARCH_PERFMON_EVENTSEL_INT; - - hwc->idx = -1; - hwc->last_cpu = -1; - hwc->last_tag = ~0ULL; - - /* - * Count user and OS events unless requested not to. 
- */ - if (!attr->exclude_user) - hwc->config |= ARCH_PERFMON_EVENTSEL_USR; - if (!attr->exclude_kernel) - hwc->config |= ARCH_PERFMON_EVENTSEL_OS; if (!hwc->sample_period) { hwc->sample_period = x86_pmu.max_period; @@ -505,16 +449,8 @@ static int __hw_perf_event_init(struct perf_event *event) return -EOPNOTSUPP; } - /* - * Raw hw_event type provide the config in the hw_event structure - */ - if (attr->type == PERF_TYPE_RAW) { - hwc->config |= x86_pmu.raw_event(attr->config); - if ((hwc->config & ARCH_PERFMON_EVENTSEL_ANY) && - perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN)) - return -EACCES; + if (attr->type == PERF_TYPE_RAW) return 0; - } if (attr->type == PERF_TYPE_HW_CACHE) return set_ext_hw_attr(hwc, attr); @@ -539,11 +475,11 @@ static int __hw_perf_event_init(struct perf_event *event) if ((attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS) && (hwc->sample_period == 1)) { /* BTS is not supported by this architecture. */ - if (!bts_available()) + if (!x86_pmu.bts) return -EOPNOTSUPP; /* BTS is currently only allowed for user-mode. */ - if (hwc->config & ARCH_PERFMON_EVENTSEL_OS) + if (!attr->exclude_kernel) return -EOPNOTSUPP; } @@ -552,12 +488,87 @@ static int __hw_perf_event_init(struct perf_event *event) return 0; } +static int x86_pmu_hw_config(struct perf_event *event) +{ + if (event->attr.precise_ip) { + int precise = 0; + + /* Support for constant skid */ + if (x86_pmu.pebs) + precise++; + + /* Support for IP fixup */ + if (x86_pmu.lbr_nr) + precise++; + + if (event->attr.precise_ip > precise) + return -EOPNOTSUPP; + } + + /* + * Generate PMC IRQs: + * (keep 'enabled' bit clear for now) + */ + event->hw.config = ARCH_PERFMON_EVENTSEL_INT; + + /* + * Count user and OS events unless requested not to + */ + if (!event->attr.exclude_user) + event->hw.config |= ARCH_PERFMON_EVENTSEL_USR; + if (!event->attr.exclude_kernel) + event->hw.config |= ARCH_PERFMON_EVENTSEL_OS; + + if (event->attr.type == PERF_TYPE_RAW) + event->hw.config |= event->attr.config & X86_RAW_EVENT_MASK; + + return x86_setup_perfctr(event); +} + +/* + * Setup the hardware configuration for a given attr_type + */ +static int __hw_perf_event_init(struct perf_event *event) +{ + int err; + + if (!x86_pmu_initialized()) + return -ENODEV; + + err = 0; + if (!atomic_inc_not_zero(&active_events)) { + mutex_lock(&pmc_reserve_mutex); + if (atomic_read(&active_events) == 0) { + if (!reserve_pmc_hardware()) + err = -EBUSY; + else { + err = reserve_ds_buffers(); + if (err) + release_pmc_hardware(); + } + } + if (!err) + atomic_inc(&active_events); + mutex_unlock(&pmc_reserve_mutex); + } + if (err) + return err; + + event->destroy = hw_perf_event_destroy; + + event->hw.idx = -1; + event->hw.last_cpu = -1; + event->hw.last_tag = ~0ULL; + + return x86_pmu.hw_config(event); +} + static void x86_pmu_disable_all(void) { struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); int idx; - for (idx = 0; idx < x86_pmu.num_events; idx++) { + for (idx = 0; idx < x86_pmu.num_counters; idx++) { u64 val; if (!test_bit(idx, cpuc->active_mask)) @@ -587,12 +598,12 @@ void hw_perf_disable(void) x86_pmu.disable_all(); } -static void x86_pmu_enable_all(void) +static void x86_pmu_enable_all(int added) { struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); int idx; - for (idx = 0; idx < x86_pmu.num_events; idx++) { + for (idx = 0; idx < x86_pmu.num_counters; idx++) { struct perf_event *event = cpuc->events[idx]; u64 val; @@ -667,14 +678,14 @@ static int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) * assign events to 
counters starting with most * constrained events. */ - wmax = x86_pmu.num_events; + wmax = x86_pmu.num_counters; /* * when fixed event counters are present, * wmax is incremented by 1 to account * for one more choice */ - if (x86_pmu.num_events_fixed) + if (x86_pmu.num_counters_fixed) wmax++; for (w = 1, num = n; num && w <= wmax; w++) { @@ -724,7 +735,7 @@ static int collect_events(struct cpu_hw_events *cpuc, struct perf_event *leader, struct perf_event *event; int n, max_count; - max_count = x86_pmu.num_events + x86_pmu.num_events_fixed; + max_count = x86_pmu.num_counters + x86_pmu.num_counters_fixed; /* current number of events already accepted */ n = cpuc->n_events; @@ -795,7 +806,7 @@ void hw_perf_enable(void) struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); struct perf_event *event; struct hw_perf_event *hwc; - int i; + int i, added = cpuc->n_added; if (!x86_pmu_initialized()) return; @@ -847,19 +858,20 @@ void hw_perf_enable(void) cpuc->enabled = 1; barrier(); - x86_pmu.enable_all(); + x86_pmu.enable_all(added); } -static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc) +static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc, + u64 enable_mask) { - (void)checking_wrmsrl(hwc->config_base + hwc->idx, - hwc->config | ARCH_PERFMON_EVENTSEL_ENABLE); + wrmsrl(hwc->config_base + hwc->idx, hwc->config | enable_mask); } static inline void x86_pmu_disable_event(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; - (void)checking_wrmsrl(hwc->config_base + hwc->idx, hwc->config); + + wrmsrl(hwc->config_base + hwc->idx, hwc->config); } static DEFINE_PER_CPU(u64 [X86_PMC_IDX_MAX], pmc_prev_left); @@ -874,7 +886,7 @@ x86_perf_event_set_period(struct perf_event *event) struct hw_perf_event *hwc = &event->hw; s64 left = atomic64_read(&hwc->period_left); s64 period = hwc->sample_period; - int err, ret = 0, idx = hwc->idx; + int ret = 0, idx = hwc->idx; if (idx == X86_PMC_IDX_FIXED_BTS) return 0; @@ -912,8 +924,8 @@ x86_perf_event_set_period(struct perf_event *event) */ atomic64_set(&hwc->prev_count, (u64)-left); - err = checking_wrmsrl(hwc->event_base + idx, - (u64)(-left) & x86_pmu.event_mask); + wrmsrl(hwc->event_base + idx, + (u64)(-left) & x86_pmu.cntval_mask); perf_event_update_userpage(event); @@ -924,7 +936,8 @@ static void x86_pmu_enable_event(struct perf_event *event) { struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); if (cpuc->enabled) - __x86_pmu_enable_event(&event->hw); + __x86_pmu_enable_event(&event->hw, + ARCH_PERFMON_EVENTSEL_ENABLE); } /* @@ -950,7 +963,15 @@ static int x86_pmu_enable(struct perf_event *event) if (n < 0) return n; - ret = x86_schedule_events(cpuc, n, assign); + /* + * If group events scheduling transaction was started, + * skip the schedulability test here, it will be peformed + * at commit time(->commit_txn) as a whole + */ + if (cpuc->group_flag & PERF_EVENT_TXN_STARTED) + goto out; + + ret = x86_pmu.schedule_events(cpuc, n, assign); if (ret) return ret; /* @@ -959,6 +980,7 @@ static int x86_pmu_enable(struct perf_event *event) */ memcpy(cpuc->assign, assign, n*sizeof(int)); +out: cpuc->n_events = n; cpuc->n_added += n - n0; @@ -991,11 +1013,12 @@ static void x86_pmu_unthrottle(struct perf_event *event) void perf_event_print_debug(void) { u64 ctrl, status, overflow, pmc_ctrl, pmc_count, prev_left, fixed; + u64 pebs; struct cpu_hw_events *cpuc; unsigned long flags; int cpu, idx; - if (!x86_pmu.num_events) + if (!x86_pmu.num_counters) return; local_irq_save(flags); @@ -1008,16 +1031,18 @@ void 
perf_event_print_debug(void) rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status); rdmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, overflow); rdmsrl(MSR_ARCH_PERFMON_FIXED_CTR_CTRL, fixed); + rdmsrl(MSR_IA32_PEBS_ENABLE, pebs); pr_info("\n"); pr_info("CPU#%d: ctrl: %016llx\n", cpu, ctrl); pr_info("CPU#%d: status: %016llx\n", cpu, status); pr_info("CPU#%d: overflow: %016llx\n", cpu, overflow); pr_info("CPU#%d: fixed: %016llx\n", cpu, fixed); + pr_info("CPU#%d: pebs: %016llx\n", cpu, pebs); } - pr_info("CPU#%d: active: %016llx\n", cpu, *(u64 *)cpuc->active_mask); + pr_info("CPU#%d: active: %016llx\n", cpu, *(u64 *)cpuc->active_mask); - for (idx = 0; idx < x86_pmu.num_events; idx++) { + for (idx = 0; idx < x86_pmu.num_counters; idx++) { rdmsrl(x86_pmu.eventsel + idx, pmc_ctrl); rdmsrl(x86_pmu.perfctr + idx, pmc_count); @@ -1030,7 +1055,7 @@ void perf_event_print_debug(void) pr_info("CPU#%d: gen-PMC%d left: %016llx\n", cpu, idx, prev_left); } - for (idx = 0; idx < x86_pmu.num_events_fixed; idx++) { + for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++) { rdmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, pmc_count); pr_info("CPU#%d: fixed-PMC%d count: %016llx\n", @@ -1095,7 +1120,7 @@ static int x86_pmu_handle_irq(struct pt_regs *regs) cpuc = &__get_cpu_var(cpu_hw_events); - for (idx = 0; idx < x86_pmu.num_events; idx++) { + for (idx = 0; idx < x86_pmu.num_counters; idx++) { if (!test_bit(idx, cpuc->active_mask)) continue; @@ -1103,7 +1128,7 @@ static int x86_pmu_handle_irq(struct pt_regs *regs) hwc = &event->hw; val = x86_perf_event_update(event); - if (val & (1ULL << (x86_pmu.event_bits - 1))) + if (val & (1ULL << (x86_pmu.cntval_bits - 1))) continue; /* @@ -1146,7 +1171,6 @@ void set_perf_event_pending(void) void perf_events_lapic_init(void) { -#ifdef CONFIG_X86_LOCAL_APIC if (!x86_pmu.apic || !x86_pmu_initialized()) return; @@ -1154,7 +1178,6 @@ void perf_events_lapic_init(void) * Always use NMI for PMU */ apic_write(APIC_LVTPC, APIC_DM_NMI); -#endif } static int __kprobes @@ -1178,9 +1201,7 @@ perf_event_nmi_handler(struct notifier_block *self, regs = args->regs; -#ifdef CONFIG_X86_LOCAL_APIC apic_write(APIC_LVTPC, APIC_DM_NMI); -#endif /* * Can't rely on the handled return value to say it was our NMI, two * events could trigger 'simultaneously' raising two back-to-back NMIs. @@ -1217,118 +1238,11 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event) return &unconstrained; } -static int x86_event_sched_in(struct perf_event *event, - struct perf_cpu_context *cpuctx) -{ - int ret = 0; - - event->state = PERF_EVENT_STATE_ACTIVE; - event->oncpu = smp_processor_id(); - event->tstamp_running += event->ctx->time - event->tstamp_stopped; - - if (!is_x86_event(event)) - ret = event->pmu->enable(event); - - if (!ret && !is_software_event(event)) - cpuctx->active_oncpu++; - - if (!ret && event->attr.exclusive) - cpuctx->exclusive = 1; - - return ret; -} - -static void x86_event_sched_out(struct perf_event *event, - struct perf_cpu_context *cpuctx) -{ - event->state = PERF_EVENT_STATE_INACTIVE; - event->oncpu = -1; - - if (!is_x86_event(event)) - event->pmu->disable(event); - - event->tstamp_running -= event->ctx->time - event->tstamp_stopped; - - if (!is_software_event(event)) - cpuctx->active_oncpu--; - - if (event->attr.exclusive || !cpuctx->active_oncpu) - cpuctx->exclusive = 0; -} - -/* - * Called to enable a whole group of events. - * Returns 1 if the group was enabled, or -EAGAIN if it could not be. 
- * Assumes the caller has disabled interrupts and has - * frozen the PMU with hw_perf_save_disable. - * - * called with PMU disabled. If successful and return value 1, - * then guaranteed to call perf_enable() and hw_perf_enable() - */ -int hw_perf_group_sched_in(struct perf_event *leader, - struct perf_cpu_context *cpuctx, - struct perf_event_context *ctx) -{ - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); - struct perf_event *sub; - int assign[X86_PMC_IDX_MAX]; - int n0, n1, ret; - - /* n0 = total number of events */ - n0 = collect_events(cpuc, leader, true); - if (n0 < 0) - return n0; - - ret = x86_schedule_events(cpuc, n0, assign); - if (ret) - return ret; - - ret = x86_event_sched_in(leader, cpuctx); - if (ret) - return ret; - - n1 = 1; - list_for_each_entry(sub, &leader->sibling_list, group_entry) { - if (sub->state > PERF_EVENT_STATE_OFF) { - ret = x86_event_sched_in(sub, cpuctx); - if (ret) - goto undo; - ++n1; - } - } - /* - * copy new assignment, now we know it is possible - * will be used by hw_perf_enable() - */ - memcpy(cpuc->assign, assign, n0*sizeof(int)); - - cpuc->n_events = n0; - cpuc->n_added += n1; - ctx->nr_active += n1; - - /* - * 1 means successful and events are active - * This is not quite true because we defer - * actual activation until hw_perf_enable() but - * this way we* ensure caller won't try to enable - * individual events - */ - return 1; -undo: - x86_event_sched_out(leader, cpuctx); - n0 = 1; - list_for_each_entry(sub, &leader->sibling_list, group_entry) { - if (sub->state == PERF_EVENT_STATE_ACTIVE) { - x86_event_sched_out(sub, cpuctx); - if (++n0 == n1) - break; - } - } - return ret; -} - #include "perf_event_amd.c" #include "perf_event_p6.c" +#include "perf_event_p4.c" +#include "perf_event_intel_lbr.c" +#include "perf_event_intel_ds.c" #include "perf_event_intel.c" static int __cpuinit @@ -1402,48 +1316,50 @@ void __init init_hw_perf_events(void) pr_cont("%s PMU driver.\n", x86_pmu.name); - if (x86_pmu.num_events > X86_PMC_MAX_GENERIC) { + if (x86_pmu.quirks) + x86_pmu.quirks(); + + if (x86_pmu.num_counters > X86_PMC_MAX_GENERIC) { WARN(1, KERN_ERR "hw perf events %d > max(%d), clipping!", - x86_pmu.num_events, X86_PMC_MAX_GENERIC); - x86_pmu.num_events = X86_PMC_MAX_GENERIC; + x86_pmu.num_counters, X86_PMC_MAX_GENERIC); + x86_pmu.num_counters = X86_PMC_MAX_GENERIC; } - perf_event_mask = (1 << x86_pmu.num_events) - 1; - perf_max_events = x86_pmu.num_events; + x86_pmu.intel_ctrl = (1 << x86_pmu.num_counters) - 1; + perf_max_events = x86_pmu.num_counters; - if (x86_pmu.num_events_fixed > X86_PMC_MAX_FIXED) { + if (x86_pmu.num_counters_fixed > X86_PMC_MAX_FIXED) { WARN(1, KERN_ERR "hw perf events fixed %d > max(%d), clipping!", - x86_pmu.num_events_fixed, X86_PMC_MAX_FIXED); - x86_pmu.num_events_fixed = X86_PMC_MAX_FIXED; + x86_pmu.num_counters_fixed, X86_PMC_MAX_FIXED); + x86_pmu.num_counters_fixed = X86_PMC_MAX_FIXED; } - perf_event_mask |= - ((1LL << x86_pmu.num_events_fixed)-1) << X86_PMC_IDX_FIXED; - x86_pmu.intel_ctrl = perf_event_mask; + x86_pmu.intel_ctrl |= + ((1LL << x86_pmu.num_counters_fixed)-1) << X86_PMC_IDX_FIXED; perf_events_lapic_init(); register_die_notifier(&perf_event_nmi_notifier); unconstrained = (struct event_constraint) - __EVENT_CONSTRAINT(0, (1ULL << x86_pmu.num_events) - 1, - 0, x86_pmu.num_events); + __EVENT_CONSTRAINT(0, (1ULL << x86_pmu.num_counters) - 1, + 0, x86_pmu.num_counters); if (x86_pmu.event_constraints) { for_each_event_constraint(c, x86_pmu.event_constraints) { - if (c->cmask != 
INTEL_ARCH_FIXED_MASK) + if (c->cmask != X86_RAW_EVENT_MASK) continue; - c->idxmsk64 |= (1ULL << x86_pmu.num_events) - 1; - c->weight += x86_pmu.num_events; + c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1; + c->weight += x86_pmu.num_counters; } } pr_info("... version: %d\n", x86_pmu.version); - pr_info("... bit width: %d\n", x86_pmu.event_bits); - pr_info("... generic registers: %d\n", x86_pmu.num_events); - pr_info("... value mask: %016Lx\n", x86_pmu.event_mask); + pr_info("... bit width: %d\n", x86_pmu.cntval_bits); + pr_info("... generic registers: %d\n", x86_pmu.num_counters); + pr_info("... value mask: %016Lx\n", x86_pmu.cntval_mask); pr_info("... max period: %016Lx\n", x86_pmu.max_period); - pr_info("... fixed-purpose events: %d\n", x86_pmu.num_events_fixed); - pr_info("... event mask: %016Lx\n", perf_event_mask); + pr_info("... fixed-purpose events: %d\n", x86_pmu.num_counters_fixed); + pr_info("... event mask: %016Lx\n", x86_pmu.intel_ctrl); perf_cpu_notifier(x86_pmu_notifier); } @@ -1453,6 +1369,59 @@ static inline void x86_pmu_read(struct perf_event *event) x86_perf_event_update(event); } +/* + * Start group events scheduling transaction + * Set the flag to make pmu::enable() not perform the + * schedulability test, it will be performed at commit time + */ +static void x86_pmu_start_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + cpuc->group_flag |= PERF_EVENT_TXN_STARTED; +} + +/* + * Stop group events scheduling transaction + * Clear the flag and pmu::enable() will perform the + * schedulability test. + */ +static void x86_pmu_cancel_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + cpuc->group_flag &= ~PERF_EVENT_TXN_STARTED; +} + +/* + * Commit group events scheduling transaction + * Perform the group schedulability test as a whole + * Return 0 if success + */ +static int x86_pmu_commit_txn(const struct pmu *pmu) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + int assign[X86_PMC_IDX_MAX]; + int n, ret; + + n = cpuc->n_events; + + if (!x86_pmu_initialized()) + return -EAGAIN; + + ret = x86_pmu.schedule_events(cpuc, n, assign); + if (ret) + return ret; + + /* + * copy new assignment, now we know it is possible + * will be used by hw_perf_enable() + */ + memcpy(cpuc->assign, assign, n*sizeof(int)); + + return 0; +} + static const struct pmu pmu = { .enable = x86_pmu_enable, .disable = x86_pmu_disable, @@ -1460,9 +1429,38 @@ static const struct pmu pmu = { .stop = x86_pmu_stop, .read = x86_pmu_read, .unthrottle = x86_pmu_unthrottle, + .start_txn = x86_pmu_start_txn, + .cancel_txn = x86_pmu_cancel_txn, + .commit_txn = x86_pmu_commit_txn, }; /* + * validate that we can schedule this event + */ +static int validate_event(struct perf_event *event) +{ + struct cpu_hw_events *fake_cpuc; + struct event_constraint *c; + int ret = 0; + + fake_cpuc = kmalloc(sizeof(*fake_cpuc), GFP_KERNEL | __GFP_ZERO); + if (!fake_cpuc) + return -ENOMEM; + + c = x86_pmu.get_event_constraints(fake_cpuc, event); + + if (!c || !c->weight) + ret = -ENOSPC; + + if (x86_pmu.put_event_constraints) + x86_pmu.put_event_constraints(fake_cpuc, event); + + kfree(fake_cpuc); + + return ret; +} + +/* * validate a single event group * * validation include: @@ -1502,7 +1500,7 @@ static int validate_group(struct perf_event *event) fake_cpuc->n_events = n; - ret = x86_schedule_events(fake_cpuc, n, NULL); + ret = x86_pmu.schedule_events(fake_cpuc, n, NULL); out_free: kfree(fake_cpuc); @@ -1527,6 +1525,8 @@ 
const struct pmu *hw_perf_event_init(struct perf_event *event) if (event->group_leader != event) err = validate_group(event); + else + err = validate_event(event); event->pmu = tmp; } @@ -1574,8 +1574,7 @@ static void backtrace_address(void *data, unsigned long addr, int reliable) { struct perf_callchain_entry *entry = data; - if (reliable) - callchain_store(entry, addr); + callchain_store(entry, addr); } static const struct stacktrace_ops backtrace_ops = { @@ -1597,41 +1596,6 @@ perf_callchain_kernel(struct pt_regs *regs, struct perf_callchain_entry *entry) dump_trace(NULL, regs, NULL, regs->bp, &backtrace_ops, entry); } -/* - * best effort, GUP based copy_from_user() that assumes IRQ or NMI context - */ -static unsigned long -copy_from_user_nmi(void *to, const void __user *from, unsigned long n) -{ - unsigned long offset, addr = (unsigned long)from; - int type = in_nmi() ? KM_NMI : KM_IRQ0; - unsigned long size, len = 0; - struct page *page; - void *map; - int ret; - - do { - ret = __get_user_pages_fast(addr, 1, 0, &page); - if (!ret) - break; - - offset = addr & (PAGE_SIZE - 1); - size = min(PAGE_SIZE - offset, n - len); - - map = kmap_atomic(page, type); - memcpy(to, map+offset, size); - kunmap_atomic(map, type); - put_page(page); - - len += size; - to += size; - addr += size; - - } while (len < n); - - return len; -} - #ifdef CONFIG_COMPAT static inline int perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry) @@ -1727,6 +1691,11 @@ struct perf_callchain_entry *perf_callchain(struct pt_regs *regs) { struct perf_callchain_entry *entry; + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { + /* TODO: We don't support guest os callchain now */ + return NULL; + } + if (in_nmi()) entry = &__get_cpu_var(pmc_nmi_entry); else @@ -1750,3 +1719,37 @@ void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int ski regs->cs = __KERNEL_CS; local_save_flags(regs->flags); } + +unsigned long perf_instruction_pointer(struct pt_regs *regs) +{ + unsigned long ip; + + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) + ip = perf_guest_cbs->get_guest_ip(); + else + ip = instruction_pointer(regs); + + return ip; +} + +unsigned long perf_misc_flags(struct pt_regs *regs) +{ + int misc = 0; + + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { + if (perf_guest_cbs->is_user_mode()) + misc |= PERF_RECORD_MISC_GUEST_USER; + else + misc |= PERF_RECORD_MISC_GUEST_KERNEL; + } else { + if (user_mode(regs)) + misc |= PERF_RECORD_MISC_USER; + else + misc |= PERF_RECORD_MISC_KERNEL; + } + + if (regs->flags & PERF_EFLAGS_EXACT) + misc |= PERF_RECORD_MISC_EXACT_IP; + + return misc; +} diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c index db6f7d4056e1..611df11ba15e 100644 --- a/arch/x86/kernel/cpu/perf_event_amd.c +++ b/arch/x86/kernel/cpu/perf_event_amd.c @@ -2,7 +2,7 @@ static DEFINE_RAW_SPINLOCK(amd_nb_lock); -static __initconst u64 amd_hw_cache_event_ids +static __initconst const u64 amd_hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = @@ -111,22 +111,19 @@ static u64 amd_pmu_event_map(int hw_event) return amd_perfmon_event_map[hw_event]; } -static u64 amd_pmu_raw_event(u64 hw_event) +static int amd_pmu_hw_config(struct perf_event *event) { -#define K7_EVNTSEL_EVENT_MASK 0xF000000FFULL -#define K7_EVNTSEL_UNIT_MASK 0x00000FF00ULL -#define K7_EVNTSEL_EDGE_MASK 0x000040000ULL -#define K7_EVNTSEL_INV_MASK 0x000800000ULL -#define K7_EVNTSEL_REG_MASK 0x0FF000000ULL - 
-#define K7_EVNTSEL_MASK \ - (K7_EVNTSEL_EVENT_MASK | \ - K7_EVNTSEL_UNIT_MASK | \ - K7_EVNTSEL_EDGE_MASK | \ - K7_EVNTSEL_INV_MASK | \ - K7_EVNTSEL_REG_MASK) - - return hw_event & K7_EVNTSEL_MASK; + int ret = x86_pmu_hw_config(event); + + if (ret) + return ret; + + if (event->attr.type != PERF_TYPE_RAW) + return 0; + + event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK; + + return 0; } /* @@ -165,7 +162,7 @@ static void amd_put_event_constraints(struct cpu_hw_events *cpuc, * be removed on one CPU at a time AND PMU is disabled * when we come here */ - for (i = 0; i < x86_pmu.num_events; i++) { + for (i = 0; i < x86_pmu.num_counters; i++) { if (nb->owners[i] == event) { cmpxchg(nb->owners+i, event, NULL); break; @@ -215,7 +212,7 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event) struct hw_perf_event *hwc = &event->hw; struct amd_nb *nb = cpuc->amd_nb; struct perf_event *old = NULL; - int max = x86_pmu.num_events; + int max = x86_pmu.num_counters; int i, j, k = -1; /* @@ -293,7 +290,7 @@ static struct amd_nb *amd_alloc_nb(int cpu, int nb_id) /* * initialize all possible NB constraints */ - for (i = 0; i < x86_pmu.num_events; i++) { + for (i = 0; i < x86_pmu.num_counters; i++) { __set_bit(i, nb->event_constraints[i].idxmsk); nb->event_constraints[i].weight = 1; } @@ -371,21 +368,22 @@ static void amd_pmu_cpu_dead(int cpu) raw_spin_unlock(&amd_nb_lock); } -static __initconst struct x86_pmu amd_pmu = { +static __initconst const struct x86_pmu amd_pmu = { .name = "AMD", .handle_irq = x86_pmu_handle_irq, .disable_all = x86_pmu_disable_all, .enable_all = x86_pmu_enable_all, .enable = x86_pmu_enable_event, .disable = x86_pmu_disable_event, + .hw_config = amd_pmu_hw_config, + .schedule_events = x86_schedule_events, .eventsel = MSR_K7_EVNTSEL0, .perfctr = MSR_K7_PERFCTR0, .event_map = amd_pmu_event_map, - .raw_event = amd_pmu_raw_event, .max_events = ARRAY_SIZE(amd_perfmon_event_map), - .num_events = 4, - .event_bits = 48, - .event_mask = (1ULL << 48) - 1, + .num_counters = 4, + .cntval_bits = 48, + .cntval_mask = (1ULL << 48) - 1, .apic = 1, /* use highest bit to detect overflow */ .max_period = (1ULL << 47) - 1, diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c index 9c794ac87837..fdbc652d3feb 100644 --- a/arch/x86/kernel/cpu/perf_event_intel.c +++ b/arch/x86/kernel/cpu/perf_event_intel.c @@ -88,7 +88,7 @@ static u64 intel_pmu_event_map(int hw_event) return intel_perfmon_event_map[hw_event]; } -static __initconst u64 westmere_hw_cache_event_ids +static __initconst const u64 westmere_hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = @@ -179,7 +179,7 @@ static __initconst u64 westmere_hw_cache_event_ids }, }; -static __initconst u64 nehalem_hw_cache_event_ids +static __initconst const u64 nehalem_hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = @@ -270,7 +270,7 @@ static __initconst u64 nehalem_hw_cache_event_ids }, }; -static __initconst u64 core2_hw_cache_event_ids +static __initconst const u64 core2_hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = @@ -361,7 +361,7 @@ static __initconst u64 core2_hw_cache_event_ids }, }; -static __initconst u64 atom_hw_cache_event_ids +static __initconst const u64 atom_hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_OP_MAX] [PERF_COUNT_HW_CACHE_RESULT_MAX] = @@ -452,60 +452,6 @@ static 
__initconst u64 atom_hw_cache_event_ids }, }; -static u64 intel_pmu_raw_event(u64 hw_event) -{ -#define CORE_EVNTSEL_EVENT_MASK 0x000000FFULL -#define CORE_EVNTSEL_UNIT_MASK 0x0000FF00ULL -#define CORE_EVNTSEL_EDGE_MASK 0x00040000ULL -#define CORE_EVNTSEL_INV_MASK 0x00800000ULL -#define CORE_EVNTSEL_REG_MASK 0xFF000000ULL - -#define CORE_EVNTSEL_MASK \ - (INTEL_ARCH_EVTSEL_MASK | \ - INTEL_ARCH_UNIT_MASK | \ - INTEL_ARCH_EDGE_MASK | \ - INTEL_ARCH_INV_MASK | \ - INTEL_ARCH_CNT_MASK) - - return hw_event & CORE_EVNTSEL_MASK; -} - -static void intel_pmu_enable_bts(u64 config) -{ - unsigned long debugctlmsr; - - debugctlmsr = get_debugctlmsr(); - - debugctlmsr |= X86_DEBUGCTL_TR; - debugctlmsr |= X86_DEBUGCTL_BTS; - debugctlmsr |= X86_DEBUGCTL_BTINT; - - if (!(config & ARCH_PERFMON_EVENTSEL_OS)) - debugctlmsr |= X86_DEBUGCTL_BTS_OFF_OS; - - if (!(config & ARCH_PERFMON_EVENTSEL_USR)) - debugctlmsr |= X86_DEBUGCTL_BTS_OFF_USR; - - update_debugctlmsr(debugctlmsr); -} - -static void intel_pmu_disable_bts(void) -{ - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); - unsigned long debugctlmsr; - - if (!cpuc->ds) - return; - - debugctlmsr = get_debugctlmsr(); - - debugctlmsr &= - ~(X86_DEBUGCTL_TR | X86_DEBUGCTL_BTS | X86_DEBUGCTL_BTINT | - X86_DEBUGCTL_BTS_OFF_OS | X86_DEBUGCTL_BTS_OFF_USR); - - update_debugctlmsr(debugctlmsr); -} - static void intel_pmu_disable_all(void) { struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); @@ -514,12 +460,17 @@ static void intel_pmu_disable_all(void) if (test_bit(X86_PMC_IDX_FIXED_BTS, cpuc->active_mask)) intel_pmu_disable_bts(); + + intel_pmu_pebs_disable_all(); + intel_pmu_lbr_disable_all(); } -static void intel_pmu_enable_all(void) +static void intel_pmu_enable_all(int added) { struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + intel_pmu_pebs_enable_all(); + intel_pmu_lbr_enable_all(); wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl); if (test_bit(X86_PMC_IDX_FIXED_BTS, cpuc->active_mask)) { @@ -533,6 +484,42 @@ static void intel_pmu_enable_all(void) } } +/* + * Workaround for: + * Intel Errata AAK100 (model 26) + * Intel Errata AAP53 (model 30) + * Intel Errata BD53 (model 44) + * + * These chips need to be 'reset' when adding counters by programming + * the magic three (non counting) events 0x4300D2, 0x4300B1 and 0x4300B5 + * either in sequence on the same PMC or on different PMCs. 
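A short aside on the magic constants used by this workaround (an illustrative decode, not something the patch itself adds): the 0x43 in bits 16-23 of each value is simply the standard eventsel control bits ENABLE|OS|USR, so every constant is an ordinary raw event programmed to count in both rings, with the low bytes carrying the unit mask and event code. A minimal standalone sketch, assuming the usual ARCH_PERFMON_EVENTSEL_* bit positions (USR=16, OS=17, ENABLE=22):

	/* Illustrative only: decode of the 0x4300xx constants programmed below. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long usr = 1ULL << 16;  /* ARCH_PERFMON_EVENTSEL_USR    */
		unsigned long long os  = 1ULL << 17;  /* ARCH_PERFMON_EVENTSEL_OS     */
		unsigned long long en  = 1ULL << 22;  /* ARCH_PERFMON_EVENTSEL_ENABLE */

		/* 0x4300B1 == ENABLE | OS | USR | umask 0x00 | event code 0xB1 */
		printf("%#llx\n", en | os | usr | 0xB1);  /* prints 0x4300b1 */
		return 0;
	}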
+ */ +static void intel_pmu_nhm_enable_all(int added) +{ + if (added) { + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + int i; + + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 0, 0x4300D2); + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 1, 0x4300B1); + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + 2, 0x4300B5); + + wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0x3); + wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0x0); + + for (i = 0; i < 3; i++) { + struct perf_event *event = cpuc->events[i]; + + if (!event) + continue; + + __x86_pmu_enable_event(&event->hw, + ARCH_PERFMON_EVENTSEL_ENABLE); + } + } + intel_pmu_enable_all(added); +} + static inline u64 intel_pmu_get_status(void) { u64 status; @@ -547,8 +534,7 @@ static inline void intel_pmu_ack_status(u64 ack) wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, ack); } -static inline void -intel_pmu_disable_fixed(struct hw_perf_event *hwc) +static void intel_pmu_disable_fixed(struct hw_perf_event *hwc) { int idx = hwc->idx - X86_PMC_IDX_FIXED; u64 ctrl_val, mask; @@ -557,71 +543,10 @@ intel_pmu_disable_fixed(struct hw_perf_event *hwc) rdmsrl(hwc->config_base, ctrl_val); ctrl_val &= ~mask; - (void)checking_wrmsrl(hwc->config_base, ctrl_val); -} - -static void intel_pmu_drain_bts_buffer(void) -{ - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); - struct debug_store *ds = cpuc->ds; - struct bts_record { - u64 from; - u64 to; - u64 flags; - }; - struct perf_event *event = cpuc->events[X86_PMC_IDX_FIXED_BTS]; - struct bts_record *at, *top; - struct perf_output_handle handle; - struct perf_event_header header; - struct perf_sample_data data; - struct pt_regs regs; - - if (!event) - return; - - if (!ds) - return; - - at = (struct bts_record *)(unsigned long)ds->bts_buffer_base; - top = (struct bts_record *)(unsigned long)ds->bts_index; - - if (top <= at) - return; - - ds->bts_index = ds->bts_buffer_base; - - perf_sample_data_init(&data, 0); - - data.period = event->hw.last_period; - regs.ip = 0; - - /* - * Prepare a generic sample, i.e. fill in the invariant fields. - * We will overwrite the from and to address before we output - * the sample. - */ - perf_prepare_sample(&header, &data, event, ®s); - - if (perf_output_begin(&handle, event, - header.size * (top - at), 1, 1)) - return; - - for (; at < top; at++) { - data.ip = at->from; - data.addr = at->to; - - perf_output_sample(&handle, &header, &data, event); - } - - perf_output_end(&handle); - - /* There's new data available. 
*/ - event->hw.interrupts++; - event->pending_kill = POLL_IN; + wrmsrl(hwc->config_base, ctrl_val); } -static inline void -intel_pmu_disable_event(struct perf_event *event) +static void intel_pmu_disable_event(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; @@ -637,14 +562,15 @@ intel_pmu_disable_event(struct perf_event *event) } x86_pmu_disable_event(event); + + if (unlikely(event->attr.precise_ip)) + intel_pmu_pebs_disable(event); } -static inline void -intel_pmu_enable_fixed(struct hw_perf_event *hwc) +static void intel_pmu_enable_fixed(struct hw_perf_event *hwc) { int idx = hwc->idx - X86_PMC_IDX_FIXED; u64 ctrl_val, bits, mask; - int err; /* * Enable IRQ generation (0x8), @@ -669,7 +595,7 @@ intel_pmu_enable_fixed(struct hw_perf_event *hwc) rdmsrl(hwc->config_base, ctrl_val); ctrl_val &= ~mask; ctrl_val |= bits; - err = checking_wrmsrl(hwc->config_base, ctrl_val); + wrmsrl(hwc->config_base, ctrl_val); } static void intel_pmu_enable_event(struct perf_event *event) @@ -689,7 +615,10 @@ static void intel_pmu_enable_event(struct perf_event *event) return; } - __x86_pmu_enable_event(hwc); + if (unlikely(event->attr.precise_ip)) + intel_pmu_pebs_enable(event); + + __x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE); } /* @@ -708,20 +637,20 @@ static void intel_pmu_reset(void) unsigned long flags; int idx; - if (!x86_pmu.num_events) + if (!x86_pmu.num_counters) return; local_irq_save(flags); printk("clearing PMU state on CPU#%d\n", smp_processor_id()); - for (idx = 0; idx < x86_pmu.num_events; idx++) { + for (idx = 0; idx < x86_pmu.num_counters; idx++) { checking_wrmsrl(x86_pmu.eventsel + idx, 0ull); checking_wrmsrl(x86_pmu.perfctr + idx, 0ull); } - for (idx = 0; idx < x86_pmu.num_events_fixed; idx++) { + for (idx = 0; idx < x86_pmu.num_counters_fixed; idx++) checking_wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, 0ull); - } + if (ds) ds->bts_index = ds->bts_buffer_base; @@ -747,7 +676,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs) intel_pmu_drain_bts_buffer(); status = intel_pmu_get_status(); if (!status) { - intel_pmu_enable_all(); + intel_pmu_enable_all(0); return 0; } @@ -762,6 +691,15 @@ again: inc_irq_stat(apic_perf_irqs); ack = status; + + intel_pmu_lbr_read(); + + /* + * PEBS overflow sets bit 62 in the global status register + */ + if (__test_and_clear_bit(62, (unsigned long *)&status)) + x86_pmu.drain_pebs(regs); + for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) { struct perf_event *event = cpuc->events[bit]; @@ -787,26 +725,22 @@ again: goto again; done: - intel_pmu_enable_all(); + intel_pmu_enable_all(0); return 1; } -static struct event_constraint bts_constraint = - EVENT_CONSTRAINT(0, 1ULL << X86_PMC_IDX_FIXED_BTS, 0); - static struct event_constraint * -intel_special_constraints(struct perf_event *event) +intel_bts_constraints(struct perf_event *event) { - unsigned int hw_event; - - hw_event = event->hw.config & INTEL_ARCH_EVENT_MASK; + struct hw_perf_event *hwc = &event->hw; + unsigned int hw_event, bts_event; - if (unlikely((hw_event == - x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS)) && - (event->hw.sample_period == 1))) { + hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; + bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); + if (unlikely(hw_event == bts_event && hwc->sample_period == 1)) return &bts_constraint; - } + return NULL; } @@ -815,24 +749,53 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event { struct event_constraint *c; - c = intel_special_constraints(event); 
+ c = intel_bts_constraints(event); + if (c) + return c; + + c = intel_pebs_constraints(event); if (c) return c; return x86_get_event_constraints(cpuc, event); } -static __initconst struct x86_pmu core_pmu = { +static int intel_pmu_hw_config(struct perf_event *event) +{ + int ret = x86_pmu_hw_config(event); + + if (ret) + return ret; + + if (event->attr.type != PERF_TYPE_RAW) + return 0; + + if (!(event->attr.config & ARCH_PERFMON_EVENTSEL_ANY)) + return 0; + + if (x86_pmu.version < 3) + return -EINVAL; + + if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN)) + return -EACCES; + + event->hw.config |= ARCH_PERFMON_EVENTSEL_ANY; + + return 0; +} + +static __initconst const struct x86_pmu core_pmu = { .name = "core", .handle_irq = x86_pmu_handle_irq, .disable_all = x86_pmu_disable_all, .enable_all = x86_pmu_enable_all, .enable = x86_pmu_enable_event, .disable = x86_pmu_disable_event, + .hw_config = x86_pmu_hw_config, + .schedule_events = x86_schedule_events, .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, .perfctr = MSR_ARCH_PERFMON_PERFCTR0, .event_map = intel_pmu_event_map, - .raw_event = intel_pmu_raw_event, .max_events = ARRAY_SIZE(intel_perfmon_event_map), .apic = 1, /* @@ -845,17 +808,32 @@ static __initconst struct x86_pmu core_pmu = { .event_constraints = intel_core_event_constraints, }; -static __initconst struct x86_pmu intel_pmu = { +static void intel_pmu_cpu_starting(int cpu) +{ + init_debug_store_on_cpu(cpu); + /* + * Deal with CPUs that don't clear their LBRs on power-up. + */ + intel_pmu_lbr_reset(); +} + +static void intel_pmu_cpu_dying(int cpu) +{ + fini_debug_store_on_cpu(cpu); +} + +static __initconst const struct x86_pmu intel_pmu = { .name = "Intel", .handle_irq = intel_pmu_handle_irq, .disable_all = intel_pmu_disable_all, .enable_all = intel_pmu_enable_all, .enable = intel_pmu_enable_event, .disable = intel_pmu_disable_event, + .hw_config = intel_pmu_hw_config, + .schedule_events = x86_schedule_events, .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, .perfctr = MSR_ARCH_PERFMON_PERFCTR0, .event_map = intel_pmu_event_map, - .raw_event = intel_pmu_raw_event, .max_events = ARRAY_SIZE(intel_perfmon_event_map), .apic = 1, /* @@ -864,14 +842,38 @@ static __initconst struct x86_pmu intel_pmu = { * the generic event period: */ .max_period = (1ULL << 31) - 1, - .enable_bts = intel_pmu_enable_bts, - .disable_bts = intel_pmu_disable_bts, .get_event_constraints = intel_get_event_constraints, - .cpu_starting = init_debug_store_on_cpu, - .cpu_dying = fini_debug_store_on_cpu, + .cpu_starting = intel_pmu_cpu_starting, + .cpu_dying = intel_pmu_cpu_dying, }; +static void intel_clovertown_quirks(void) +{ + /* + * PEBS is unreliable due to: + * + * AJ67 - PEBS may experience CPL leaks + * AJ68 - PEBS PMI may be delayed by one event + * AJ69 - GLOBAL_STATUS[62] will only be set when DEBUGCTL[12] + * AJ106 - FREEZE_LBRS_ON_PMI doesn't work in combination with PEBS + * + * AJ67 could be worked around by restricting the OS/USR flags. + * AJ69 could be worked around by setting PMU_FREEZE_ON_PMI. + * + * AJ106 could possibly be worked around by not allowing LBR + * usage from PEBS, including the fixup. + * AJ68 could possibly be worked around by always programming + * a pebs_event_reset[0] value and coping with the lost events. + * + * But taken together it might just make sense to not enable PEBS on + * these chips. 
+ */ + printk(KERN_WARNING "PEBS disabled due to CPU errata.\n"); + x86_pmu.pebs = 0; + x86_pmu.pebs_constraints = NULL; +} + static __init int intel_pmu_init(void) { union cpuid10_edx edx; @@ -881,12 +883,13 @@ static __init int intel_pmu_init(void) int version; if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) { - /* check for P6 processor family */ - if (boot_cpu_data.x86 == 6) { - return p6_pmu_init(); - } else { + switch (boot_cpu_data.x86) { + case 0x6: + return p6_pmu_init(); + case 0xf: + return p4_pmu_init(); + } return -ENODEV; - } } /* @@ -904,16 +907,28 @@ static __init int intel_pmu_init(void) x86_pmu = intel_pmu; x86_pmu.version = version; - x86_pmu.num_events = eax.split.num_events; - x86_pmu.event_bits = eax.split.bit_width; - x86_pmu.event_mask = (1ULL << eax.split.bit_width) - 1; + x86_pmu.num_counters = eax.split.num_counters; + x86_pmu.cntval_bits = eax.split.bit_width; + x86_pmu.cntval_mask = (1ULL << eax.split.bit_width) - 1; /* * Quirk: v2 perfmon does not report fixed-purpose events, so * assume at least 3 events: */ if (version > 1) - x86_pmu.num_events_fixed = max((int)edx.split.num_events_fixed, 3); + x86_pmu.num_counters_fixed = max((int)edx.split.num_counters_fixed, 3); + + /* + * v2 and above have a perf capabilities MSR + */ + if (version > 1) { + u64 capabilities; + + rdmsrl(MSR_IA32_PERF_CAPABILITIES, capabilities); + x86_pmu.intel_cap.capabilities = capabilities; + } + + intel_ds_init(); /* * Install the hw-cache-events table: @@ -924,12 +939,15 @@ static __init int intel_pmu_init(void) break; case 15: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */ + x86_pmu.quirks = intel_clovertown_quirks; case 22: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */ case 23: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */ case 29: /* six-core 45 nm xeon "Dunnington" */ memcpy(hw_cache_event_ids, core2_hw_cache_event_ids, sizeof(hw_cache_event_ids)); + intel_pmu_lbr_init_core(); + x86_pmu.event_constraints = intel_core2_event_constraints; pr_cont("Core2 events, "); break; @@ -940,13 +958,19 @@ static __init int intel_pmu_init(void) memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids, sizeof(hw_cache_event_ids)); + intel_pmu_lbr_init_nhm(); + x86_pmu.event_constraints = intel_nehalem_event_constraints; - pr_cont("Nehalem/Corei7 events, "); + x86_pmu.enable_all = intel_pmu_nhm_enable_all; + pr_cont("Nehalem events, "); break; + case 28: /* Atom */ memcpy(hw_cache_event_ids, atom_hw_cache_event_ids, sizeof(hw_cache_event_ids)); + intel_pmu_lbr_init_atom(); + x86_pmu.event_constraints = intel_gen_event_constraints; pr_cont("Atom events, "); break; @@ -956,7 +980,10 @@ static __init int intel_pmu_init(void) memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids, sizeof(hw_cache_event_ids)); + intel_pmu_lbr_init_nhm(); + x86_pmu.event_constraints = intel_westmere_event_constraints; + x86_pmu.enable_all = intel_pmu_nhm_enable_all; pr_cont("Westmere events, "); break; diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c new file mode 100644 index 000000000000..18018d1311cd --- /dev/null +++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c @@ -0,0 +1,641 @@ +#ifdef CONFIG_CPU_SUP_INTEL + +/* The maximal number of PEBS events: */ +#define MAX_PEBS_EVENTS 4 + +/* The size of a BTS record in bytes: */ +#define BTS_RECORD_SIZE 24 + +#define BTS_BUFFER_SIZE (PAGE_SIZE << 4) +#define PEBS_BUFFER_SIZE PAGE_SIZE + +/* + * pebs_record_32 for p4 and core not supported + +struct pebs_record_32 { + u32 
flags, ip; + u32 ax, bc, cx, dx; + u32 si, di, bp, sp; +}; + + */ + +struct pebs_record_core { + u64 flags, ip; + u64 ax, bx, cx, dx; + u64 si, di, bp, sp; + u64 r8, r9, r10, r11; + u64 r12, r13, r14, r15; +}; + +struct pebs_record_nhm { + u64 flags, ip; + u64 ax, bx, cx, dx; + u64 si, di, bp, sp; + u64 r8, r9, r10, r11; + u64 r12, r13, r14, r15; + u64 status, dla, dse, lat; +}; + +/* + * A debug store configuration. + * + * We only support architectures that use 64bit fields. + */ +struct debug_store { + u64 bts_buffer_base; + u64 bts_index; + u64 bts_absolute_maximum; + u64 bts_interrupt_threshold; + u64 pebs_buffer_base; + u64 pebs_index; + u64 pebs_absolute_maximum; + u64 pebs_interrupt_threshold; + u64 pebs_event_reset[MAX_PEBS_EVENTS]; +}; + +static void init_debug_store_on_cpu(int cpu) +{ + struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds; + + if (!ds) + return; + + wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, + (u32)((u64)(unsigned long)ds), + (u32)((u64)(unsigned long)ds >> 32)); +} + +static void fini_debug_store_on_cpu(int cpu) +{ + if (!per_cpu(cpu_hw_events, cpu).ds) + return; + + wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0); +} + +static void release_ds_buffers(void) +{ + int cpu; + + if (!x86_pmu.bts && !x86_pmu.pebs) + return; + + get_online_cpus(); + + for_each_online_cpu(cpu) + fini_debug_store_on_cpu(cpu); + + for_each_possible_cpu(cpu) { + struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds; + + if (!ds) + continue; + + per_cpu(cpu_hw_events, cpu).ds = NULL; + + kfree((void *)(unsigned long)ds->pebs_buffer_base); + kfree((void *)(unsigned long)ds->bts_buffer_base); + kfree(ds); + } + + put_online_cpus(); +} + +static int reserve_ds_buffers(void) +{ + int cpu, err = 0; + + if (!x86_pmu.bts && !x86_pmu.pebs) + return 0; + + get_online_cpus(); + + for_each_possible_cpu(cpu) { + struct debug_store *ds; + void *buffer; + int max, thresh; + + err = -ENOMEM; + ds = kzalloc(sizeof(*ds), GFP_KERNEL); + if (unlikely(!ds)) + break; + per_cpu(cpu_hw_events, cpu).ds = ds; + + if (x86_pmu.bts) { + buffer = kzalloc(BTS_BUFFER_SIZE, GFP_KERNEL); + if (unlikely(!buffer)) + break; + + max = BTS_BUFFER_SIZE / BTS_RECORD_SIZE; + thresh = max / 16; + + ds->bts_buffer_base = (u64)(unsigned long)buffer; + ds->bts_index = ds->bts_buffer_base; + ds->bts_absolute_maximum = ds->bts_buffer_base + + max * BTS_RECORD_SIZE; + ds->bts_interrupt_threshold = ds->bts_absolute_maximum - + thresh * BTS_RECORD_SIZE; + } + + if (x86_pmu.pebs) { + buffer = kzalloc(PEBS_BUFFER_SIZE, GFP_KERNEL); + if (unlikely(!buffer)) + break; + + max = PEBS_BUFFER_SIZE / x86_pmu.pebs_record_size; + + ds->pebs_buffer_base = (u64)(unsigned long)buffer; + ds->pebs_index = ds->pebs_buffer_base; + ds->pebs_absolute_maximum = ds->pebs_buffer_base + + max * x86_pmu.pebs_record_size; + /* + * Always use single record PEBS + */ + ds->pebs_interrupt_threshold = ds->pebs_buffer_base + + x86_pmu.pebs_record_size; + } + + err = 0; + } + + if (err) + release_ds_buffers(); + else { + for_each_online_cpu(cpu) + init_debug_store_on_cpu(cpu); + } + + put_online_cpus(); + + return err; +} + +/* + * BTS + */ + +static struct event_constraint bts_constraint = + EVENT_CONSTRAINT(0, 1ULL << X86_PMC_IDX_FIXED_BTS, 0); + +static void intel_pmu_enable_bts(u64 config) +{ + unsigned long debugctlmsr; + + debugctlmsr = get_debugctlmsr(); + + debugctlmsr |= DEBUGCTLMSR_TR; + debugctlmsr |= DEBUGCTLMSR_BTS; + debugctlmsr |= DEBUGCTLMSR_BTINT; + + if (!(config & ARCH_PERFMON_EVENTSEL_OS)) + debugctlmsr |= DEBUGCTLMSR_BTS_OFF_OS; + + if (!(config & 
ARCH_PERFMON_EVENTSEL_USR)) + debugctlmsr |= DEBUGCTLMSR_BTS_OFF_USR; + + update_debugctlmsr(debugctlmsr); +} + +static void intel_pmu_disable_bts(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + unsigned long debugctlmsr; + + if (!cpuc->ds) + return; + + debugctlmsr = get_debugctlmsr(); + + debugctlmsr &= + ~(DEBUGCTLMSR_TR | DEBUGCTLMSR_BTS | DEBUGCTLMSR_BTINT | + DEBUGCTLMSR_BTS_OFF_OS | DEBUGCTLMSR_BTS_OFF_USR); + + update_debugctlmsr(debugctlmsr); +} + +static void intel_pmu_drain_bts_buffer(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + struct debug_store *ds = cpuc->ds; + struct bts_record { + u64 from; + u64 to; + u64 flags; + }; + struct perf_event *event = cpuc->events[X86_PMC_IDX_FIXED_BTS]; + struct bts_record *at, *top; + struct perf_output_handle handle; + struct perf_event_header header; + struct perf_sample_data data; + struct pt_regs regs; + + if (!event) + return; + + if (!ds) + return; + + at = (struct bts_record *)(unsigned long)ds->bts_buffer_base; + top = (struct bts_record *)(unsigned long)ds->bts_index; + + if (top <= at) + return; + + ds->bts_index = ds->bts_buffer_base; + + perf_sample_data_init(&data, 0); + data.period = event->hw.last_period; + regs.ip = 0; + + /* + * Prepare a generic sample, i.e. fill in the invariant fields. + * We will overwrite the from and to address before we output + * the sample. + */ + perf_prepare_sample(&header, &data, event, ®s); + + if (perf_output_begin(&handle, event, header.size * (top - at), 1, 1)) + return; + + for (; at < top; at++) { + data.ip = at->from; + data.addr = at->to; + + perf_output_sample(&handle, &header, &data, event); + } + + perf_output_end(&handle); + + /* There's new data available. */ + event->hw.interrupts++; + event->pending_kill = POLL_IN; +} + +/* + * PEBS + */ + +static struct event_constraint intel_core_pebs_events[] = { + PEBS_EVENT_CONSTRAINT(0x00c0, 0x1), /* INSTR_RETIRED.ANY */ + PEBS_EVENT_CONSTRAINT(0xfec1, 0x1), /* X87_OPS_RETIRED.ANY */ + PEBS_EVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */ + PEBS_EVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */ + PEBS_EVENT_CONSTRAINT(0x01cb, 0x1), /* MEM_LOAD_RETIRED.L1D_MISS */ + PEBS_EVENT_CONSTRAINT(0x02cb, 0x1), /* MEM_LOAD_RETIRED.L1D_LINE_MISS */ + PEBS_EVENT_CONSTRAINT(0x04cb, 0x1), /* MEM_LOAD_RETIRED.L2_MISS */ + PEBS_EVENT_CONSTRAINT(0x08cb, 0x1), /* MEM_LOAD_RETIRED.L2_LINE_MISS */ + PEBS_EVENT_CONSTRAINT(0x10cb, 0x1), /* MEM_LOAD_RETIRED.DTLB_MISS */ + EVENT_CONSTRAINT_END +}; + +static struct event_constraint intel_nehalem_pebs_events[] = { + PEBS_EVENT_CONSTRAINT(0x00c0, 0xf), /* INSTR_RETIRED.ANY */ + PEBS_EVENT_CONSTRAINT(0xfec1, 0xf), /* X87_OPS_RETIRED.ANY */ + PEBS_EVENT_CONSTRAINT(0x00c5, 0xf), /* BR_INST_RETIRED.MISPRED */ + PEBS_EVENT_CONSTRAINT(0x1fc7, 0xf), /* SIMD_INST_RETURED.ANY */ + PEBS_EVENT_CONSTRAINT(0x01cb, 0xf), /* MEM_LOAD_RETIRED.L1D_MISS */ + PEBS_EVENT_CONSTRAINT(0x02cb, 0xf), /* MEM_LOAD_RETIRED.L1D_LINE_MISS */ + PEBS_EVENT_CONSTRAINT(0x04cb, 0xf), /* MEM_LOAD_RETIRED.L2_MISS */ + PEBS_EVENT_CONSTRAINT(0x08cb, 0xf), /* MEM_LOAD_RETIRED.L2_LINE_MISS */ + PEBS_EVENT_CONSTRAINT(0x10cb, 0xf), /* MEM_LOAD_RETIRED.DTLB_MISS */ + EVENT_CONSTRAINT_END +}; + +static struct event_constraint * +intel_pebs_constraints(struct perf_event *event) +{ + struct event_constraint *c; + + if (!event->attr.precise_ip) + return NULL; + + if (x86_pmu.pebs_constraints) { + for_each_event_constraint(c, x86_pmu.pebs_constraints) { + if ((event->hw.config & c->cmask) == 
c->code) + return c; + } + } + + return &emptyconstraint; +} + +static void intel_pmu_pebs_enable(struct perf_event *event) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + struct hw_perf_event *hwc = &event->hw; + + hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT; + + cpuc->pebs_enabled |= 1ULL << hwc->idx; + WARN_ON_ONCE(cpuc->enabled); + + if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1) + intel_pmu_lbr_enable(event); +} + +static void intel_pmu_pebs_disable(struct perf_event *event) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + struct hw_perf_event *hwc = &event->hw; + + cpuc->pebs_enabled &= ~(1ULL << hwc->idx); + if (cpuc->enabled) + wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled); + + hwc->config |= ARCH_PERFMON_EVENTSEL_INT; + + if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1) + intel_pmu_lbr_disable(event); +} + +static void intel_pmu_pebs_enable_all(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (cpuc->pebs_enabled) + wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled); +} + +static void intel_pmu_pebs_disable_all(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (cpuc->pebs_enabled) + wrmsrl(MSR_IA32_PEBS_ENABLE, 0); +} + +#include <asm/insn.h> + +static inline bool kernel_ip(unsigned long ip) +{ +#ifdef CONFIG_X86_32 + return ip > PAGE_OFFSET; +#else + return (long)ip < 0; +#endif +} + +static int intel_pmu_pebs_fixup_ip(struct pt_regs *regs) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + unsigned long from = cpuc->lbr_entries[0].from; + unsigned long old_to, to = cpuc->lbr_entries[0].to; + unsigned long ip = regs->ip; + + /* + * We don't need to fixup if the PEBS assist is fault like + */ + if (!x86_pmu.intel_cap.pebs_trap) + return 1; + + /* + * No LBR entry, no basic block, no rewinding + */ + if (!cpuc->lbr_stack.nr || !from || !to) + return 0; + + /* + * Basic blocks should never cross user/kernel boundaries + */ + if (kernel_ip(ip) != kernel_ip(to)) + return 0; + + /* + * unsigned math, either ip is before the start (impossible) or + * the basic block is larger than 1 page (sanity) + */ + if ((ip - to) > PAGE_SIZE) + return 0; + + /* + * We sampled a branch insn, rewind using the LBR stack + */ + if (ip == to) { + regs->ip = from; + return 1; + } + + do { + struct insn insn; + u8 buf[MAX_INSN_SIZE]; + void *kaddr; + + old_to = to; + if (!kernel_ip(ip)) { + int bytes, size = MAX_INSN_SIZE; + + bytes = copy_from_user_nmi(buf, (void __user *)to, size); + if (bytes != size) + return 0; + + kaddr = buf; + } else + kaddr = (void *)to; + + kernel_insn_init(&insn, kaddr); + insn_get_length(&insn); + to += insn.length; + } while (to < ip); + + if (to == ip) { + regs->ip = old_to; + return 1; + } + + /* + * Even though we decoded the basic block, the instruction stream + * never matched the given IP, either the TO or the IP got corrupted. + */ + return 0; +} + +static int intel_pmu_save_and_restart(struct perf_event *event); + +static void __intel_pmu_pebs_event(struct perf_event *event, + struct pt_regs *iregs, void *__pebs) +{ + /* + * We cast to pebs_record_core since that is a subset of + * both formats and we don't use the other fields in this + * routine. 
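(The cast works because struct pebs_record_core is a strict prefix of struct pebs_record_nhm; the Nehalem format only appends fields after the register image. If one wanted to make that layout assumption explicit, a compile-time check along the following lines would do it; this is only a sketch under that assumption, not part of the patch:

	/* hypothetical sanity check: the nhm record must begin with the core record */
	BUILD_BUG_ON(offsetof(struct pebs_record_nhm, status) !=
		     sizeof(struct pebs_record_core));
)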
+ */ + struct pebs_record_core *pebs = __pebs; + struct perf_sample_data data; + struct pt_regs regs; + + if (!intel_pmu_save_and_restart(event)) + return; + + perf_sample_data_init(&data, 0); + data.period = event->hw.last_period; + + /* + * We use the interrupt regs as a base because the PEBS record + * does not contain a full regs set, specifically it seems to + * lack segment descriptors, which get used by things like + * user_mode(). + * + * In the simple case fix up only the IP and BP,SP regs, for + * PERF_SAMPLE_IP and PERF_SAMPLE_CALLCHAIN to function properly. + * A possible PERF_SAMPLE_REGS will have to transfer all regs. + */ + regs = *iregs; + regs.ip = pebs->ip; + regs.bp = pebs->bp; + regs.sp = pebs->sp; + + if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(®s)) + regs.flags |= PERF_EFLAGS_EXACT; + else + regs.flags &= ~PERF_EFLAGS_EXACT; + + if (perf_event_overflow(event, 1, &data, ®s)) + x86_pmu_stop(event); +} + +static void intel_pmu_drain_pebs_core(struct pt_regs *iregs) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + struct debug_store *ds = cpuc->ds; + struct perf_event *event = cpuc->events[0]; /* PMC0 only */ + struct pebs_record_core *at, *top; + int n; + + if (!ds || !x86_pmu.pebs) + return; + + at = (struct pebs_record_core *)(unsigned long)ds->pebs_buffer_base; + top = (struct pebs_record_core *)(unsigned long)ds->pebs_index; + + /* + * Whatever else happens, drain the thing + */ + ds->pebs_index = ds->pebs_buffer_base; + + if (!test_bit(0, cpuc->active_mask)) + return; + + WARN_ON_ONCE(!event); + + if (!event->attr.precise_ip) + return; + + n = top - at; + if (n <= 0) + return; + + /* + * Should not happen, we program the threshold at 1 and do not + * set a reset value. + */ + WARN_ON_ONCE(n > 1); + at += n - 1; + + __intel_pmu_pebs_event(event, iregs, at); +} + +static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + struct debug_store *ds = cpuc->ds; + struct pebs_record_nhm *at, *top; + struct perf_event *event = NULL; + u64 status = 0; + int bit, n; + + if (!ds || !x86_pmu.pebs) + return; + + at = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base; + top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index; + + ds->pebs_index = ds->pebs_buffer_base; + + n = top - at; + if (n <= 0) + return; + + /* + * Should not happen, we program the threshold at 1 and do not + * set a reset value. + */ + WARN_ON_ONCE(n > MAX_PEBS_EVENTS); + + for ( ; at < top; at++) { + for_each_set_bit(bit, (unsigned long *)&at->status, MAX_PEBS_EVENTS) { + event = cpuc->events[bit]; + if (!test_bit(bit, cpuc->active_mask)) + continue; + + WARN_ON_ONCE(!event); + + if (!event->attr.precise_ip) + continue; + + if (__test_and_set_bit(bit, (unsigned long *)&status)) + continue; + + break; + } + + if (!event || bit >= MAX_PEBS_EVENTS) + continue; + + __intel_pmu_pebs_event(event, iregs, at); + } +} + +/* + * BTS, PEBS probe and setup + */ + +static void intel_ds_init(void) +{ + /* + * No support for 32bit formats + */ + if (!boot_cpu_has(X86_FEATURE_DTES64)) + return; + + x86_pmu.bts = boot_cpu_has(X86_FEATURE_BTS); + x86_pmu.pebs = boot_cpu_has(X86_FEATURE_PEBS); + if (x86_pmu.pebs) { + char pebs_type = x86_pmu.intel_cap.pebs_trap ? 
'+' : '-'; + int format = x86_pmu.intel_cap.pebs_format; + + switch (format) { + case 0: + printk(KERN_CONT "PEBS fmt0%c, ", pebs_type); + x86_pmu.pebs_record_size = sizeof(struct pebs_record_core); + x86_pmu.drain_pebs = intel_pmu_drain_pebs_core; + x86_pmu.pebs_constraints = intel_core_pebs_events; + break; + + case 1: + printk(KERN_CONT "PEBS fmt1%c, ", pebs_type); + x86_pmu.pebs_record_size = sizeof(struct pebs_record_nhm); + x86_pmu.drain_pebs = intel_pmu_drain_pebs_nhm; + x86_pmu.pebs_constraints = intel_nehalem_pebs_events; + break; + + default: + printk(KERN_CONT "no PEBS fmt%d%c, ", format, pebs_type); + x86_pmu.pebs = 0; + break; + } + } +} + +#else /* CONFIG_CPU_SUP_INTEL */ + +static int reserve_ds_buffers(void) +{ + return 0; +} + +static void release_ds_buffers(void) +{ +} + +#endif /* CONFIG_CPU_SUP_INTEL */ diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c new file mode 100644 index 000000000000..d202c1bece1a --- /dev/null +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c @@ -0,0 +1,218 @@ +#ifdef CONFIG_CPU_SUP_INTEL + +enum { + LBR_FORMAT_32 = 0x00, + LBR_FORMAT_LIP = 0x01, + LBR_FORMAT_EIP = 0x02, + LBR_FORMAT_EIP_FLAGS = 0x03, +}; + +/* + * We only support LBR implementations that have FREEZE_LBRS_ON_PMI + * otherwise it becomes near impossible to get a reliable stack. + */ + +static void __intel_pmu_lbr_enable(void) +{ + u64 debugctl; + + rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl); + debugctl |= (DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI); + wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl); +} + +static void __intel_pmu_lbr_disable(void) +{ + u64 debugctl; + + rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl); + debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI); + wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl); +} + +static void intel_pmu_lbr_reset_32(void) +{ + int i; + + for (i = 0; i < x86_pmu.lbr_nr; i++) + wrmsrl(x86_pmu.lbr_from + i, 0); +} + +static void intel_pmu_lbr_reset_64(void) +{ + int i; + + for (i = 0; i < x86_pmu.lbr_nr; i++) { + wrmsrl(x86_pmu.lbr_from + i, 0); + wrmsrl(x86_pmu.lbr_to + i, 0); + } +} + +static void intel_pmu_lbr_reset(void) +{ + if (!x86_pmu.lbr_nr) + return; + + if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_32) + intel_pmu_lbr_reset_32(); + else + intel_pmu_lbr_reset_64(); +} + +static void intel_pmu_lbr_enable(struct perf_event *event) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (!x86_pmu.lbr_nr) + return; + + WARN_ON_ONCE(cpuc->enabled); + + /* + * Reset the LBR stack if we changed task context to + * avoid data leaks. 
+ */ + + if (event->ctx->task && cpuc->lbr_context != event->ctx) { + intel_pmu_lbr_reset(); + cpuc->lbr_context = event->ctx; + } + + cpuc->lbr_users++; +} + +static void intel_pmu_lbr_disable(struct perf_event *event) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (!x86_pmu.lbr_nr) + return; + + cpuc->lbr_users--; + WARN_ON_ONCE(cpuc->lbr_users < 0); + + if (cpuc->enabled && !cpuc->lbr_users) + __intel_pmu_lbr_disable(); +} + +static void intel_pmu_lbr_enable_all(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (cpuc->lbr_users) + __intel_pmu_lbr_enable(); +} + +static void intel_pmu_lbr_disable_all(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (cpuc->lbr_users) + __intel_pmu_lbr_disable(); +} + +static inline u64 intel_pmu_lbr_tos(void) +{ + u64 tos; + + rdmsrl(x86_pmu.lbr_tos, tos); + + return tos; +} + +static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc) +{ + unsigned long mask = x86_pmu.lbr_nr - 1; + u64 tos = intel_pmu_lbr_tos(); + int i; + + for (i = 0; i < x86_pmu.lbr_nr; i++) { + unsigned long lbr_idx = (tos - i) & mask; + union { + struct { + u32 from; + u32 to; + }; + u64 lbr; + } msr_lastbranch; + + rdmsrl(x86_pmu.lbr_from + lbr_idx, msr_lastbranch.lbr); + + cpuc->lbr_entries[i].from = msr_lastbranch.from; + cpuc->lbr_entries[i].to = msr_lastbranch.to; + cpuc->lbr_entries[i].flags = 0; + } + cpuc->lbr_stack.nr = i; +} + +#define LBR_FROM_FLAG_MISPRED (1ULL << 63) + +/* + * Due to lack of segmentation in Linux the effective address (offset) + * is the same as the linear address, allowing us to merge the LIP and EIP + * LBR formats. + */ +static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc) +{ + unsigned long mask = x86_pmu.lbr_nr - 1; + int lbr_format = x86_pmu.intel_cap.lbr_format; + u64 tos = intel_pmu_lbr_tos(); + int i; + + for (i = 0; i < x86_pmu.lbr_nr; i++) { + unsigned long lbr_idx = (tos - i) & mask; + u64 from, to, flags = 0; + + rdmsrl(x86_pmu.lbr_from + lbr_idx, from); + rdmsrl(x86_pmu.lbr_to + lbr_idx, to); + + if (lbr_format == LBR_FORMAT_EIP_FLAGS) { + flags = !!(from & LBR_FROM_FLAG_MISPRED); + from = (u64)((((s64)from) << 1) >> 1); + } + + cpuc->lbr_entries[i].from = from; + cpuc->lbr_entries[i].to = to; + cpuc->lbr_entries[i].flags = flags; + } + cpuc->lbr_stack.nr = i; +} + +static void intel_pmu_lbr_read(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + + if (!cpuc->lbr_users) + return; + + if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_32) + intel_pmu_lbr_read_32(cpuc); + else + intel_pmu_lbr_read_64(cpuc); +} + +static void intel_pmu_lbr_init_core(void) +{ + x86_pmu.lbr_nr = 4; + x86_pmu.lbr_tos = 0x01c9; + x86_pmu.lbr_from = 0x40; + x86_pmu.lbr_to = 0x60; +} + +static void intel_pmu_lbr_init_nhm(void) +{ + x86_pmu.lbr_nr = 16; + x86_pmu.lbr_tos = 0x01c9; + x86_pmu.lbr_from = 0x680; + x86_pmu.lbr_to = 0x6c0; +} + +static void intel_pmu_lbr_init_atom(void) +{ + x86_pmu.lbr_nr = 8; + x86_pmu.lbr_tos = 0x01c9; + x86_pmu.lbr_from = 0x40; + x86_pmu.lbr_to = 0x60; +} + +#endif /* CONFIG_CPU_SUP_INTEL */ diff --git a/arch/x86/kernel/cpu/perf_event_p4.c b/arch/x86/kernel/cpu/perf_event_p4.c new file mode 100644 index 000000000000..424fc8de68e4 --- /dev/null +++ b/arch/x86/kernel/cpu/perf_event_p4.c @@ -0,0 +1,857 @@ +/* + * Netburst Perfomance Events (P4, old Xeon) + * + * Copyright (C) 2010 Parallels, Inc., Cyrill Gorcunov <gorcunov@openvz.org> + * Copyright (C) 2010 Intel Corporation, Lin Ming <ming.m.lin@intel.com> + * + * For 
licencing details see kernel-base/COPYING + */ + +#ifdef CONFIG_CPU_SUP_INTEL + +#include <asm/perf_event_p4.h> + +#define P4_CNTR_LIMIT 3 +/* + * array indices: 0,1 - HT threads, used with HT enabled cpu + */ +struct p4_event_bind { + unsigned int opcode; /* Event code and ESCR selector */ + unsigned int escr_msr[2]; /* ESCR MSR for this event */ + char cntr[2][P4_CNTR_LIMIT]; /* counter index (offset), -1 on abscence */ +}; + +struct p4_cache_event_bind { + unsigned int metric_pebs; + unsigned int metric_vert; +}; + +#define P4_GEN_CACHE_EVENT_BIND(name) \ + [P4_CACHE__##name] = { \ + .metric_pebs = P4_PEBS__##name, \ + .metric_vert = P4_VERT__##name, \ + } + +static struct p4_cache_event_bind p4_cache_event_bind_map[] = { + P4_GEN_CACHE_EVENT_BIND(1stl_cache_load_miss_retired), + P4_GEN_CACHE_EVENT_BIND(2ndl_cache_load_miss_retired), + P4_GEN_CACHE_EVENT_BIND(dtlb_load_miss_retired), + P4_GEN_CACHE_EVENT_BIND(dtlb_store_miss_retired), +}; + +/* + * Note that we don't use CCCR1 here, there is an + * exception for P4_BSQ_ALLOCATION but we just have + * no workaround + * + * consider this binding as resources which particular + * event may borrow, it doesn't contain EventMask, + * Tags and friends -- they are left to a caller + */ +static struct p4_event_bind p4_event_bind_map[] = { + [P4_EVENT_TC_DELIVER_MODE] = { + .opcode = P4_OPCODE(P4_EVENT_TC_DELIVER_MODE), + .escr_msr = { MSR_P4_TC_ESCR0, MSR_P4_TC_ESCR1 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_BPU_FETCH_REQUEST] = { + .opcode = P4_OPCODE(P4_EVENT_BPU_FETCH_REQUEST), + .escr_msr = { MSR_P4_BPU_ESCR0, MSR_P4_BPU_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_ITLB_REFERENCE] = { + .opcode = P4_OPCODE(P4_EVENT_ITLB_REFERENCE), + .escr_msr = { MSR_P4_ITLB_ESCR0, MSR_P4_ITLB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_MEMORY_CANCEL] = { + .opcode = P4_OPCODE(P4_EVENT_MEMORY_CANCEL), + .escr_msr = { MSR_P4_DAC_ESCR0, MSR_P4_DAC_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_MEMORY_COMPLETE] = { + .opcode = P4_OPCODE(P4_EVENT_MEMORY_COMPLETE), + .escr_msr = { MSR_P4_SAAT_ESCR0 , MSR_P4_SAAT_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_LOAD_PORT_REPLAY] = { + .opcode = P4_OPCODE(P4_EVENT_LOAD_PORT_REPLAY), + .escr_msr = { MSR_P4_SAAT_ESCR0, MSR_P4_SAAT_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_STORE_PORT_REPLAY] = { + .opcode = P4_OPCODE(P4_EVENT_STORE_PORT_REPLAY), + .escr_msr = { MSR_P4_SAAT_ESCR0 , MSR_P4_SAAT_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_MOB_LOAD_REPLAY] = { + .opcode = P4_OPCODE(P4_EVENT_MOB_LOAD_REPLAY), + .escr_msr = { MSR_P4_MOB_ESCR0, MSR_P4_MOB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_PAGE_WALK_TYPE] = { + .opcode = P4_OPCODE(P4_EVENT_PAGE_WALK_TYPE), + .escr_msr = { MSR_P4_PMH_ESCR0, MSR_P4_PMH_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_BSQ_CACHE_REFERENCE] = { + .opcode = P4_OPCODE(P4_EVENT_BSQ_CACHE_REFERENCE), + .escr_msr = { MSR_P4_BSU_ESCR0, MSR_P4_BSU_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_IOQ_ALLOCATION] = { + .opcode = P4_OPCODE(P4_EVENT_IOQ_ALLOCATION), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_IOQ_ACTIVE_ENTRIES] = { /* shared ESCR */ + .opcode = P4_OPCODE(P4_EVENT_IOQ_ACTIVE_ENTRIES), + .escr_msr = { MSR_P4_FSB_ESCR1, MSR_P4_FSB_ESCR1 }, + .cntr = { {2, -1, -1}, {3, -1, -1} }, + }, + [P4_EVENT_FSB_DATA_ACTIVITY] = { + 
.opcode = P4_OPCODE(P4_EVENT_FSB_DATA_ACTIVITY), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_BSQ_ALLOCATION] = { /* shared ESCR, broken CCCR1 */ + .opcode = P4_OPCODE(P4_EVENT_BSQ_ALLOCATION), + .escr_msr = { MSR_P4_BSU_ESCR0, MSR_P4_BSU_ESCR0 }, + .cntr = { {0, -1, -1}, {1, -1, -1} }, + }, + [P4_EVENT_BSQ_ACTIVE_ENTRIES] = { /* shared ESCR */ + .opcode = P4_OPCODE(P4_EVENT_BSQ_ACTIVE_ENTRIES), + .escr_msr = { MSR_P4_BSU_ESCR1 , MSR_P4_BSU_ESCR1 }, + .cntr = { {2, -1, -1}, {3, -1, -1} }, + }, + [P4_EVENT_SSE_INPUT_ASSIST] = { + .opcode = P4_OPCODE(P4_EVENT_SSE_INPUT_ASSIST), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_PACKED_SP_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_PACKED_SP_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_PACKED_DP_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_PACKED_DP_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_SCALAR_SP_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_SCALAR_SP_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_SCALAR_DP_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_SCALAR_DP_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_64BIT_MMX_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_64BIT_MMX_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_128BIT_MMX_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_128BIT_MMX_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_X87_FP_UOP] = { + .opcode = P4_OPCODE(P4_EVENT_X87_FP_UOP), + .escr_msr = { MSR_P4_FIRM_ESCR0, MSR_P4_FIRM_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_TC_MISC] = { + .opcode = P4_OPCODE(P4_EVENT_TC_MISC), + .escr_msr = { MSR_P4_TC_ESCR0, MSR_P4_TC_ESCR1 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_GLOBAL_POWER_EVENTS] = { + .opcode = P4_OPCODE(P4_EVENT_GLOBAL_POWER_EVENTS), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_TC_MS_XFER] = { + .opcode = P4_OPCODE(P4_EVENT_TC_MS_XFER), + .escr_msr = { MSR_P4_MS_ESCR0, MSR_P4_MS_ESCR1 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_UOP_QUEUE_WRITES] = { + .opcode = P4_OPCODE(P4_EVENT_UOP_QUEUE_WRITES), + .escr_msr = { MSR_P4_MS_ESCR0, MSR_P4_MS_ESCR1 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE] = { + .opcode = P4_OPCODE(P4_EVENT_RETIRED_MISPRED_BRANCH_TYPE), + .escr_msr = { MSR_P4_TBPU_ESCR0 , MSR_P4_TBPU_ESCR0 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_RETIRED_BRANCH_TYPE] = { + .opcode = P4_OPCODE(P4_EVENT_RETIRED_BRANCH_TYPE), + .escr_msr = { MSR_P4_TBPU_ESCR0 , MSR_P4_TBPU_ESCR1 }, + .cntr = { {4, 5, -1}, {6, 7, -1} }, + }, + [P4_EVENT_RESOURCE_STALL] = { + .opcode = P4_OPCODE(P4_EVENT_RESOURCE_STALL), + .escr_msr = { MSR_P4_ALF_ESCR0, MSR_P4_ALF_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_WC_BUFFER] = { + .opcode = P4_OPCODE(P4_EVENT_WC_BUFFER), + .escr_msr = { MSR_P4_DAC_ESCR0, MSR_P4_DAC_ESCR1 }, + .cntr = { {8, 9, -1}, {10, 11, -1} }, + }, + [P4_EVENT_B2B_CYCLES] = { + .opcode = P4_OPCODE(P4_EVENT_B2B_CYCLES), + .escr_msr = { MSR_P4_FSB_ESCR0, 
MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_BNR] = { + .opcode = P4_OPCODE(P4_EVENT_BNR), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_SNOOP] = { + .opcode = P4_OPCODE(P4_EVENT_SNOOP), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_RESPONSE] = { + .opcode = P4_OPCODE(P4_EVENT_RESPONSE), + .escr_msr = { MSR_P4_FSB_ESCR0, MSR_P4_FSB_ESCR1 }, + .cntr = { {0, -1, -1}, {2, -1, -1} }, + }, + [P4_EVENT_FRONT_END_EVENT] = { + .opcode = P4_OPCODE(P4_EVENT_FRONT_END_EVENT), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_EXECUTION_EVENT] = { + .opcode = P4_OPCODE(P4_EVENT_EXECUTION_EVENT), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_REPLAY_EVENT] = { + .opcode = P4_OPCODE(P4_EVENT_REPLAY_EVENT), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_INSTR_RETIRED] = { + .opcode = P4_OPCODE(P4_EVENT_INSTR_RETIRED), + .escr_msr = { MSR_P4_CRU_ESCR0, MSR_P4_CRU_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_UOPS_RETIRED] = { + .opcode = P4_OPCODE(P4_EVENT_UOPS_RETIRED), + .escr_msr = { MSR_P4_CRU_ESCR0, MSR_P4_CRU_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_UOP_TYPE] = { + .opcode = P4_OPCODE(P4_EVENT_UOP_TYPE), + .escr_msr = { MSR_P4_RAT_ESCR0, MSR_P4_RAT_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_BRANCH_RETIRED] = { + .opcode = P4_OPCODE(P4_EVENT_BRANCH_RETIRED), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_MISPRED_BRANCH_RETIRED] = { + .opcode = P4_OPCODE(P4_EVENT_MISPRED_BRANCH_RETIRED), + .escr_msr = { MSR_P4_CRU_ESCR0, MSR_P4_CRU_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_X87_ASSIST] = { + .opcode = P4_OPCODE(P4_EVENT_X87_ASSIST), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_MACHINE_CLEAR] = { + .opcode = P4_OPCODE(P4_EVENT_MACHINE_CLEAR), + .escr_msr = { MSR_P4_CRU_ESCR2, MSR_P4_CRU_ESCR3 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, + [P4_EVENT_INSTR_COMPLETED] = { + .opcode = P4_OPCODE(P4_EVENT_INSTR_COMPLETED), + .escr_msr = { MSR_P4_CRU_ESCR0, MSR_P4_CRU_ESCR1 }, + .cntr = { {12, 13, 16}, {14, 15, 17} }, + }, +}; + +#define P4_GEN_CACHE_EVENT(event, bit, cache_event) \ + p4_config_pack_escr(P4_ESCR_EVENT(event) | \ + P4_ESCR_EMASK_BIT(event, bit)) | \ + p4_config_pack_cccr(cache_event | \ + P4_CCCR_ESEL(P4_OPCODE_ESEL(P4_OPCODE(event)))) + +static __initconst const u64 p4_hw_cache_event_ids + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX] = +{ + [ C(L1D ) ] = { + [ C(OP_READ) ] = { + [ C(RESULT_ACCESS) ] = 0x0, + [ C(RESULT_MISS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_REPLAY_EVENT, NBOGUS, + P4_CACHE__1stl_cache_load_miss_retired), + }, + }, + [ C(LL ) ] = { + [ C(OP_READ) ] = { + [ C(RESULT_ACCESS) ] = 0x0, + [ C(RESULT_MISS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_REPLAY_EVENT, NBOGUS, + P4_CACHE__2ndl_cache_load_miss_retired), + }, +}, + [ C(DTLB) ] = { + [ C(OP_READ) ] = { + [ C(RESULT_ACCESS) ] = 0x0, + [ C(RESULT_MISS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_REPLAY_EVENT, NBOGUS, + P4_CACHE__dtlb_load_miss_retired), + }, + [ C(OP_WRITE) ] = { + [ C(RESULT_ACCESS) ] = 0x0, + [ 
C(RESULT_MISS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_REPLAY_EVENT, NBOGUS, + P4_CACHE__dtlb_store_miss_retired), + }, + }, + [ C(ITLB) ] = { + [ C(OP_READ) ] = { + [ C(RESULT_ACCESS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_ITLB_REFERENCE, HIT, + P4_CACHE__itlb_reference_hit), + [ C(RESULT_MISS) ] = P4_GEN_CACHE_EVENT(P4_EVENT_ITLB_REFERENCE, MISS, + P4_CACHE__itlb_reference_miss), + }, + [ C(OP_WRITE) ] = { + [ C(RESULT_ACCESS) ] = -1, + [ C(RESULT_MISS) ] = -1, + }, + [ C(OP_PREFETCH) ] = { + [ C(RESULT_ACCESS) ] = -1, + [ C(RESULT_MISS) ] = -1, + }, + }, +}; + +static u64 p4_general_events[PERF_COUNT_HW_MAX] = { + /* non-halted CPU clocks */ + [PERF_COUNT_HW_CPU_CYCLES] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_GLOBAL_POWER_EVENTS) | + P4_ESCR_EMASK_BIT(P4_EVENT_GLOBAL_POWER_EVENTS, RUNNING)), + + /* + * retired instructions + * in a sake of simplicity we don't use the FSB tagging + */ + [PERF_COUNT_HW_INSTRUCTIONS] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_INSTR_RETIRED) | + P4_ESCR_EMASK_BIT(P4_EVENT_INSTR_RETIRED, NBOGUSNTAG) | + P4_ESCR_EMASK_BIT(P4_EVENT_INSTR_RETIRED, BOGUSNTAG)), + + /* cache hits */ + [PERF_COUNT_HW_CACHE_REFERENCES] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_BSQ_CACHE_REFERENCE) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITS) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITE) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_HITM) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITS) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITE) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_HITM)), + + /* cache misses */ + [PERF_COUNT_HW_CACHE_MISSES] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_BSQ_CACHE_REFERENCE) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_2ndL_MISS) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, RD_3rdL_MISS) | + P4_ESCR_EMASK_BIT(P4_EVENT_BSQ_CACHE_REFERENCE, WR_2ndL_MISS)), + + /* branch instructions retired */ + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_RETIRED_BRANCH_TYPE) | + P4_ESCR_EMASK_BIT(P4_EVENT_RETIRED_BRANCH_TYPE, CONDITIONAL) | + P4_ESCR_EMASK_BIT(P4_EVENT_RETIRED_BRANCH_TYPE, CALL) | + P4_ESCR_EMASK_BIT(P4_EVENT_RETIRED_BRANCH_TYPE, RETURN) | + P4_ESCR_EMASK_BIT(P4_EVENT_RETIRED_BRANCH_TYPE, INDIRECT)), + + /* mispredicted branches retired */ + [PERF_COUNT_HW_BRANCH_MISSES] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_MISPRED_BRANCH_RETIRED) | + P4_ESCR_EMASK_BIT(P4_EVENT_MISPRED_BRANCH_RETIRED, NBOGUS)), + + /* bus ready clocks (cpu is driving #DRDY_DRV\#DRDY_OWN): */ + [PERF_COUNT_HW_BUS_CYCLES] = + p4_config_pack_escr(P4_ESCR_EVENT(P4_EVENT_FSB_DATA_ACTIVITY) | + P4_ESCR_EMASK_BIT(P4_EVENT_FSB_DATA_ACTIVITY, DRDY_DRV) | + P4_ESCR_EMASK_BIT(P4_EVENT_FSB_DATA_ACTIVITY, DRDY_OWN)) | + p4_config_pack_cccr(P4_CCCR_EDGE | P4_CCCR_COMPARE), +}; + +static struct p4_event_bind *p4_config_get_bind(u64 config) +{ + unsigned int evnt = p4_config_unpack_event(config); + struct p4_event_bind *bind = NULL; + + if (evnt < ARRAY_SIZE(p4_event_bind_map)) + bind = &p4_event_bind_map[evnt]; + + return bind; +} + +static u64 p4_pmu_event_map(int hw_event) +{ + struct p4_event_bind *bind; + unsigned int esel; + u64 config; + + config = p4_general_events[hw_event]; + bind = p4_config_get_bind(config); + esel = P4_OPCODE_ESEL(bind->opcode); + config |= p4_config_pack_cccr(P4_CCCR_ESEL(esel)); + + return config; +} + +static int p4_hw_config(struct perf_event *event) +{ + int cpu = get_cpu(); + int rc = 0; + unsigned 
int evnt; + u32 escr, cccr; + + /* + * the reason we grab the cpu this early is that if we get scheduled + * on the same cpu the first time, we will not need to swap the + * thread specific flags in the config (and will save some cpu cycles) + */ + + cccr = p4_default_cccr_conf(cpu); + escr = p4_default_escr_conf(cpu, event->attr.exclude_kernel, + event->attr.exclude_user); + event->hw.config = p4_config_pack_escr(escr) | + p4_config_pack_cccr(cccr); + + if (p4_ht_active() && p4_ht_thread(cpu)) + event->hw.config = p4_set_ht_bit(event->hw.config); + + if (event->attr.type == PERF_TYPE_RAW) { + + /* user data may have an out-of-bounds event index */ + evnt = p4_config_unpack_event(event->attr.config); + if (evnt >= ARRAY_SIZE(p4_event_bind_map)) { + rc = -EINVAL; + goto out; + } + + /* + * We don't control raw events so it's up to the caller + * to pass sane values (and we don't count the thread number + * on an HT machine but allow HT-compatible specifics to be + * passed on) + * + * XXX: HT wide things should check perf_paranoid_cpu() && + * CAP_SYS_ADMIN + */ + event->hw.config |= event->attr.config & + (p4_config_pack_escr(P4_ESCR_MASK_HT) | + p4_config_pack_cccr(P4_CCCR_MASK_HT)); + } + + rc = x86_setup_perfctr(event); +out: + put_cpu(); + return rc; +} + +static inline void p4_pmu_clear_cccr_ovf(struct hw_perf_event *hwc) +{ + unsigned long dummy; + + rdmsrl(hwc->config_base + hwc->idx, dummy); + if (dummy & P4_CCCR_OVF) { + (void)checking_wrmsrl(hwc->config_base + hwc->idx, + ((u64)dummy) & ~P4_CCCR_OVF); + } +} + +static inline void p4_pmu_disable_event(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + /* + * If the event gets disabled while the counter is in the overflowed + * state we need to clear P4_CCCR_OVF, otherwise the interrupt gets + * asserted again and again + */ + (void)checking_wrmsrl(hwc->config_base + hwc->idx, + (u64)(p4_config_unpack_cccr(hwc->config)) & + ~P4_CCCR_ENABLE & ~P4_CCCR_OVF & ~P4_CCCR_RESERVED); +} + +static void p4_pmu_disable_all(void) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + int idx; + + for (idx = 0; idx < x86_pmu.num_counters; idx++) { + struct perf_event *event = cpuc->events[idx]; + if (!test_bit(idx, cpuc->active_mask)) + continue; + p4_pmu_disable_event(event); + } +} + +static void p4_pmu_enable_event(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + int thread = p4_ht_config_thread(hwc->config); + u64 escr_conf = p4_config_unpack_escr(p4_clear_ht_bit(hwc->config)); + unsigned int idx = p4_config_unpack_event(hwc->config); + unsigned int idx_cache = p4_config_unpack_cache_event(hwc->config); + struct p4_event_bind *bind; + struct p4_cache_event_bind *bind_cache; + u64 escr_addr, cccr; + + bind = &p4_event_bind_map[idx]; + escr_addr = (u64)bind->escr_msr[thread]; + + /* + * - we don't support cascaded counters yet + * - and counter 1 is broken (erratum) + */ + WARN_ON_ONCE(p4_is_event_cascaded(hwc->config)); + WARN_ON_ONCE(hwc->idx == 1); + + /* we need a real Event value */ + escr_conf &= ~P4_ESCR_EVENT_MASK; + escr_conf |= P4_ESCR_EVENT(P4_OPCODE_EVNT(bind->opcode)); + + cccr = p4_config_unpack_cccr(hwc->config); + + /* + * it could be a cache event, in which case we need to + * set the metrics in additional MSRs + */ + BUILD_BUG_ON(P4_CACHE__MAX > P4_CCCR_CACHE_OPS_MASK); + if (idx_cache > P4_CACHE__NONE && + idx_cache < ARRAY_SIZE(p4_cache_event_bind_map)) { + bind_cache = &p4_cache_event_bind_map[idx_cache]; + (void)checking_wrmsrl(MSR_IA32_PEBS_ENABLE, (u64)bind_cache->metric_pebs); + 
(void)checking_wrmsrl(MSR_P4_PEBS_MATRIX_VERT, (u64)bind_cache->metric_vert); + } + + (void)checking_wrmsrl(escr_addr, escr_conf); + (void)checking_wrmsrl(hwc->config_base + hwc->idx, + (cccr & ~P4_CCCR_RESERVED) | P4_CCCR_ENABLE); +} + +static void p4_pmu_enable_all(int added) +{ + struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); + int idx; + + for (idx = 0; idx < x86_pmu.num_counters; idx++) { + struct perf_event *event = cpuc->events[idx]; + if (!test_bit(idx, cpuc->active_mask)) + continue; + p4_pmu_enable_event(event); + } +} + +static int p4_pmu_handle_irq(struct pt_regs *regs) +{ + struct perf_sample_data data; + struct cpu_hw_events *cpuc; + struct perf_event *event; + struct hw_perf_event *hwc; + int idx, handled = 0; + u64 val; + + data.addr = 0; + data.raw = NULL; + + cpuc = &__get_cpu_var(cpu_hw_events); + + for (idx = 0; idx < x86_pmu.num_counters; idx++) { + + if (!test_bit(idx, cpuc->active_mask)) + continue; + + event = cpuc->events[idx]; + hwc = &event->hw; + + WARN_ON_ONCE(hwc->idx != idx); + + /* + * FIXME: Redundant call, actually not needed + * but just to check if we're screwed + */ + p4_pmu_clear_cccr_ovf(hwc); + + val = x86_perf_event_update(event); + if (val & (1ULL << (x86_pmu.cntval_bits - 1))) + continue; + + /* + * event overflow + */ + handled = 1; + data.period = event->hw.last_period; + + if (!x86_perf_event_set_period(event)) + continue; + if (perf_event_overflow(event, 1, &data, regs)) + p4_pmu_disable_event(event); + } + + if (handled) { + /* p4 quirk: unmask it again */ + apic_write(APIC_LVTPC, apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED); + inc_irq_stat(apic_perf_irqs); + } + + return handled; +} + +/* + * swap thread specific fields according to a thread + * we are going to run on + */ +static void p4_pmu_swap_config_ts(struct hw_perf_event *hwc, int cpu) +{ + u32 escr, cccr; + + /* + * we either lucky and continue on same cpu or no HT support + */ + if (!p4_should_swap_ts(hwc->config, cpu)) + return; + + /* + * the event is migrated from an another logical + * cpu, so we need to swap thread specific flags + */ + + escr = p4_config_unpack_escr(hwc->config); + cccr = p4_config_unpack_cccr(hwc->config); + + if (p4_ht_thread(cpu)) { + cccr &= ~P4_CCCR_OVF_PMI_T0; + cccr |= P4_CCCR_OVF_PMI_T1; + if (escr & P4_ESCR_T0_OS) { + escr &= ~P4_ESCR_T0_OS; + escr |= P4_ESCR_T1_OS; + } + if (escr & P4_ESCR_T0_USR) { + escr &= ~P4_ESCR_T0_USR; + escr |= P4_ESCR_T1_USR; + } + hwc->config = p4_config_pack_escr(escr); + hwc->config |= p4_config_pack_cccr(cccr); + hwc->config |= P4_CONFIG_HT; + } else { + cccr &= ~P4_CCCR_OVF_PMI_T1; + cccr |= P4_CCCR_OVF_PMI_T0; + if (escr & P4_ESCR_T1_OS) { + escr &= ~P4_ESCR_T1_OS; + escr |= P4_ESCR_T0_OS; + } + if (escr & P4_ESCR_T1_USR) { + escr &= ~P4_ESCR_T1_USR; + escr |= P4_ESCR_T0_USR; + } + hwc->config = p4_config_pack_escr(escr); + hwc->config |= p4_config_pack_cccr(cccr); + hwc->config &= ~P4_CONFIG_HT; + } +} + +/* + * ESCR address hashing is tricky, ESCRs are not sequential + * in memory but all starts from MSR_P4_BSU_ESCR0 (0x03e0) and + * the metric between any ESCRs is laid in range [0xa0,0xe1] + * + * so we make ~70% filled hashtable + */ + +#define P4_ESCR_MSR_BASE 0x000003a0 +#define P4_ESCR_MSR_MAX 0x000003e1 +#define P4_ESCR_MSR_TABLE_SIZE (P4_ESCR_MSR_MAX - P4_ESCR_MSR_BASE + 1) +#define P4_ESCR_MSR_IDX(msr) (msr - P4_ESCR_MSR_BASE) +#define P4_ESCR_MSR_TABLE_ENTRY(msr) [P4_ESCR_MSR_IDX(msr)] = msr + +static const unsigned int p4_escr_table[P4_ESCR_MSR_TABLE_SIZE] = { + 
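/* sparse table, directly indexed by (msr - P4_ESCR_MSR_BASE); unlisted slots stay zero and are rejected by p4_get_escr_idx() */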
P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_ALF_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_ALF_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_BPU_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_BPU_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_BSU_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_BSU_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR2), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR3), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR4), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_CRU_ESCR5), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_DAC_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_DAC_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FIRM_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FIRM_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FLAME_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FLAME_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FSB_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_FSB_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IQ_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IQ_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IS_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IS_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_ITLB_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_ITLB_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IX_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_IX_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_MOB_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_MOB_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_MS_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_MS_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_PMH_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_PMH_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_RAT_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_RAT_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_SAAT_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_SAAT_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_SSU_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_SSU_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_TBPU_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_TBPU_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_TC_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_TC_ESCR1), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_U2L_ESCR0), + P4_ESCR_MSR_TABLE_ENTRY(MSR_P4_U2L_ESCR1), +}; + +static int p4_get_escr_idx(unsigned int addr) +{ + unsigned int idx = P4_ESCR_MSR_IDX(addr); + + if (unlikely(idx >= P4_ESCR_MSR_TABLE_SIZE || + !p4_escr_table[idx])) { + WARN_ONCE(1, "P4 PMU: Wrong address passed: %x\n", addr); + return -1; + } + + return idx; +} + +static int p4_next_cntr(int thread, unsigned long *used_mask, + struct p4_event_bind *bind) +{ + int i, j; + + for (i = 0; i < P4_CNTR_LIMIT; i++) { + j = bind->cntr[thread][i]; + if (j != -1 && !test_bit(j, used_mask)) + return j; + } + + return -1; +} + +static int p4_pmu_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) +{ + unsigned long used_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; + unsigned long escr_mask[BITS_TO_LONGS(P4_ESCR_MSR_TABLE_SIZE)]; + int cpu = raw_smp_processor_id(); + struct hw_perf_event *hwc; + struct p4_event_bind *bind; + unsigned int i, thread, num; + int cntr_idx, escr_idx; + + bitmap_zero(used_mask, X86_PMC_IDX_MAX); + bitmap_zero(escr_mask, P4_ESCR_MSR_TABLE_SIZE); + + for (i = 0, num = n; i < n; i++, num--) { + + hwc = &cpuc->event_list[i]->hw; + thread = p4_ht_thread(cpu); + bind = p4_config_get_bind(hwc->config); + escr_idx = p4_get_escr_idx(bind->escr_msr[thread]); + if (unlikely(escr_idx == -1)) + goto done; + + if (hwc->idx != -1 && !p4_should_swap_ts(hwc->config, cpu)) { + cntr_idx = hwc->idx; + if (assign) + assign[i] = hwc->idx; + goto reserve; + } + + cntr_idx = p4_next_cntr(thread, used_mask, bind); + if (cntr_idx == -1 || 
test_bit(escr_idx, escr_mask)) + goto done; + + p4_pmu_swap_config_ts(hwc, cpu); + if (assign) + assign[i] = cntr_idx; +reserve: + set_bit(cntr_idx, used_mask); + set_bit(escr_idx, escr_mask); + } + +done: + return num ? -ENOSPC : 0; +} + +static __initconst const struct x86_pmu p4_pmu = { + .name = "Netburst P4/Xeon", + .handle_irq = p4_pmu_handle_irq, + .disable_all = p4_pmu_disable_all, + .enable_all = p4_pmu_enable_all, + .enable = p4_pmu_enable_event, + .disable = p4_pmu_disable_event, + .eventsel = MSR_P4_BPU_CCCR0, + .perfctr = MSR_P4_BPU_PERFCTR0, + .event_map = p4_pmu_event_map, + .max_events = ARRAY_SIZE(p4_general_events), + .get_event_constraints = x86_get_event_constraints, + /* + * IF HT disabled we may need to use all + * ARCH_P4_MAX_CCCR counters simulaneously + * though leave it restricted at moment assuming + * HT is on + */ + .num_counters = ARCH_P4_MAX_CCCR, + .apic = 1, + .cntval_bits = 40, + .cntval_mask = (1ULL << 40) - 1, + .max_period = (1ULL << 39) - 1, + .hw_config = p4_hw_config, + .schedule_events = p4_pmu_schedule_events, +}; + +static __init int p4_pmu_init(void) +{ + unsigned int low, high; + + /* If we get stripped -- indexig fails */ + BUILD_BUG_ON(ARCH_P4_MAX_CCCR > X86_PMC_MAX_GENERIC); + + rdmsr(MSR_IA32_MISC_ENABLE, low, high); + if (!(low & (1 << 7))) { + pr_cont("unsupported Netburst CPU model %d ", + boot_cpu_data.x86_model); + return -ENODEV; + } + + memcpy(hw_cache_event_ids, p4_hw_cache_event_ids, + sizeof(hw_cache_event_ids)); + + pr_cont("Netburst events, "); + + x86_pmu = p4_pmu; + + return 0; +} + +#endif /* CONFIG_CPU_SUP_INTEL */ diff --git a/arch/x86/kernel/cpu/perf_event_p6.c b/arch/x86/kernel/cpu/perf_event_p6.c index a330485d14da..34ba07be2cda 100644 --- a/arch/x86/kernel/cpu/perf_event_p6.c +++ b/arch/x86/kernel/cpu/perf_event_p6.c @@ -27,24 +27,6 @@ static u64 p6_pmu_event_map(int hw_event) */ #define P6_NOP_EVENT 0x0000002EULL -static u64 p6_pmu_raw_event(u64 hw_event) -{ -#define P6_EVNTSEL_EVENT_MASK 0x000000FFULL -#define P6_EVNTSEL_UNIT_MASK 0x0000FF00ULL -#define P6_EVNTSEL_EDGE_MASK 0x00040000ULL -#define P6_EVNTSEL_INV_MASK 0x00800000ULL -#define P6_EVNTSEL_REG_MASK 0xFF000000ULL - -#define P6_EVNTSEL_MASK \ - (P6_EVNTSEL_EVENT_MASK | \ - P6_EVNTSEL_UNIT_MASK | \ - P6_EVNTSEL_EDGE_MASK | \ - P6_EVNTSEL_INV_MASK | \ - P6_EVNTSEL_REG_MASK) - - return hw_event & P6_EVNTSEL_MASK; -} - static struct event_constraint p6_event_constraints[] = { INTEL_EVENT_CONSTRAINT(0xc1, 0x1), /* FLOPS */ @@ -66,7 +48,7 @@ static void p6_pmu_disable_all(void) wrmsrl(MSR_P6_EVNTSEL0, val); } -static void p6_pmu_enable_all(void) +static void p6_pmu_enable_all(int added) { unsigned long val; @@ -102,22 +84,23 @@ static void p6_pmu_enable_event(struct perf_event *event) (void)checking_wrmsrl(hwc->config_base + hwc->idx, val); } -static __initconst struct x86_pmu p6_pmu = { +static __initconst const struct x86_pmu p6_pmu = { .name = "p6", .handle_irq = x86_pmu_handle_irq, .disable_all = p6_pmu_disable_all, .enable_all = p6_pmu_enable_all, .enable = p6_pmu_enable_event, .disable = p6_pmu_disable_event, + .hw_config = x86_pmu_hw_config, + .schedule_events = x86_schedule_events, .eventsel = MSR_P6_EVNTSEL0, .perfctr = MSR_P6_PERFCTR0, .event_map = p6_pmu_event_map, - .raw_event = p6_pmu_raw_event, .max_events = ARRAY_SIZE(p6_perfmon_event_map), .apic = 1, .max_period = (1ULL << 31) - 1, .version = 0, - .num_events = 2, + .num_counters = 2, /* * Events have 40 bits implemented. 
However they are designed such * that bits [32-39] are sign extensions of bit 31. As such the @@ -125,8 +108,8 @@ static __initconst struct x86_pmu p6_pmu = { * * See IA-32 Intel Architecture Software developer manual Vol 3B */ - .event_bits = 32, - .event_mask = (1ULL << 32) - 1, + .cntval_bits = 32, + .cntval_mask = (1ULL << 32) - 1, .get_event_constraints = x86_get_event_constraints, .event_constraints = p6_event_constraints, }; diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c index dfdb4dba2320..b9d1ff588445 100644 --- a/arch/x86/kernel/cpu/vmware.c +++ b/arch/x86/kernel/cpu/vmware.c @@ -24,8 +24,8 @@ #include <linux/dmi.h> #include <linux/module.h> #include <asm/div64.h> -#include <asm/vmware.h> #include <asm/x86_init.h> +#include <asm/hypervisor.h> #define CPUID_VMWARE_INFO_LEAF 0x40000000 #define VMWARE_HYPERVISOR_MAGIC 0x564D5868 @@ -65,7 +65,7 @@ static unsigned long vmware_get_tsc_khz(void) return tsc_hz; } -void __init vmware_platform_setup(void) +static void __init vmware_platform_setup(void) { uint32_t eax, ebx, ecx, edx; @@ -83,26 +83,22 @@ void __init vmware_platform_setup(void) * serial key should be enough, as this will always have a VMware * specific string when running under VMware hypervisor. */ -int vmware_platform(void) +static bool __init vmware_platform(void) { if (cpu_has_hypervisor) { - unsigned int eax, ebx, ecx, edx; - char hyper_vendor_id[13]; - - cpuid(CPUID_VMWARE_INFO_LEAF, &eax, &ebx, &ecx, &edx); - memcpy(hyper_vendor_id + 0, &ebx, 4); - memcpy(hyper_vendor_id + 4, &ecx, 4); - memcpy(hyper_vendor_id + 8, &edx, 4); - hyper_vendor_id[12] = '\0'; - if (!strcmp(hyper_vendor_id, "VMwareVMware")) - return 1; + unsigned int eax; + unsigned int hyper_vendor_id[3]; + + cpuid(CPUID_VMWARE_INFO_LEAF, &eax, &hyper_vendor_id[0], + &hyper_vendor_id[1], &hyper_vendor_id[2]); + if (!memcmp(hyper_vendor_id, "VMwareVMware", 12)) + return true; } else if (dmi_available && dmi_name_in_serial("VMware") && __vmware_platform()) - return 1; + return true; - return 0; + return false; } -EXPORT_SYMBOL(vmware_platform); /* * VMware hypervisor takes care of exporting a reliable TSC to the guest. @@ -116,8 +112,16 @@ EXPORT_SYMBOL(vmware_platform); * so that the kernel could just trust the hypervisor with providing a * reliable virtual TSC that is suitable for timekeeping. */ -void __cpuinit vmware_set_feature_bits(struct cpuinfo_x86 *c) +static void __cpuinit vmware_set_cpu_features(struct cpuinfo_x86 *c) { set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC); set_cpu_cap(c, X86_FEATURE_TSC_RELIABLE); } + +const __refconst struct hypervisor_x86 x86_hyper_vmware = { + .name = "VMware", + .detect = vmware_platform, + .set_cpu_features = vmware_set_cpu_features, + .init_platform = vmware_platform_setup, +}; +EXPORT_SYMBOL(x86_hyper_vmware); diff --git a/arch/x86/kernel/ds.c b/arch/x86/kernel/ds.c deleted file mode 100644 index 1c47390dd0e5..000000000000 --- a/arch/x86/kernel/ds.c +++ /dev/null @@ -1,1437 +0,0 @@ -/* - * Debug Store support - * - * This provides a low-level interface to the hardware's Debug Store - * feature that is used for branch trace store (BTS) and - * precise-event based sampling (PEBS). - * - * It manages: - * - DS and BTS hardware configuration - * - buffer overflow handling (to be done) - * - buffer access - * - * It does not do: - * - security checking (is the caller allowed to trace the task) - * - buffer allocation (memory accounting) - * - * - * Copyright (C) 2007-2009 Intel Corporation. 
- * Markus Metzger <markus.t.metzger@intel.com>, 2007-2009 - */ - -#include <linux/kernel.h> -#include <linux/string.h> -#include <linux/errno.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/mm.h> -#include <linux/trace_clock.h> - -#include <asm/ds.h> - -#include "ds_selftest.h" - -/* - * The configuration for a particular DS hardware implementation: - */ -struct ds_configuration { - /* The name of the configuration: */ - const char *name; - - /* The size of pointer-typed fields in DS, BTS, and PEBS: */ - unsigned char sizeof_ptr_field; - - /* The size of a BTS/PEBS record in bytes: */ - unsigned char sizeof_rec[2]; - - /* The number of pebs counter reset values in the DS structure. */ - unsigned char nr_counter_reset; - - /* Control bit-masks indexed by enum ds_feature: */ - unsigned long ctl[dsf_ctl_max]; -}; -static struct ds_configuration ds_cfg __read_mostly; - - -/* Maximal size of a DS configuration: */ -#define MAX_SIZEOF_DS 0x80 - -/* Maximal size of a BTS record: */ -#define MAX_SIZEOF_BTS (3 * 8) - -/* BTS and PEBS buffer alignment: */ -#define DS_ALIGNMENT (1 << 3) - -/* Number of buffer pointers in DS: */ -#define NUM_DS_PTR_FIELDS 8 - -/* Size of a pebs reset value in DS: */ -#define PEBS_RESET_FIELD_SIZE 8 - -/* Mask of control bits in the DS MSR register: */ -#define BTS_CONTROL \ - ( ds_cfg.ctl[dsf_bts] | \ - ds_cfg.ctl[dsf_bts_kernel] | \ - ds_cfg.ctl[dsf_bts_user] | \ - ds_cfg.ctl[dsf_bts_overflow] ) - -/* - * A BTS or PEBS tracer. - * - * This holds the configuration of the tracer and serves as a handle - * to identify tracers. - */ -struct ds_tracer { - /* The DS context (partially) owned by this tracer. */ - struct ds_context *context; - /* The buffer provided on ds_request() and its size in bytes. */ - void *buffer; - size_t size; -}; - -struct bts_tracer { - /* The common DS part: */ - struct ds_tracer ds; - - /* The trace including the DS configuration: */ - struct bts_trace trace; - - /* Buffer overflow notification function: */ - bts_ovfl_callback_t ovfl; - - /* Active flags affecting trace collection. */ - unsigned int flags; -}; - -struct pebs_tracer { - /* The common DS part: */ - struct ds_tracer ds; - - /* The trace including the DS configuration: */ - struct pebs_trace trace; - - /* Buffer overflow notification function: */ - pebs_ovfl_callback_t ovfl; -}; - -/* - * Debug Store (DS) save area configuration (see Intel64 and IA32 - * Architectures Software Developer's Manual, section 18.5) - * - * The DS configuration consists of the following fields; different - * architetures vary in the size of those fields. - * - * - double-word aligned base linear address of the BTS buffer - * - write pointer into the BTS buffer - * - end linear address of the BTS buffer (one byte beyond the end of - * the buffer) - * - interrupt pointer into BTS buffer - * (interrupt occurs when write pointer passes interrupt pointer) - * - double-word aligned base linear address of the PEBS buffer - * - write pointer into the PEBS buffer - * - end linear address of the PEBS buffer (one byte beyond the end of - * the buffer) - * - interrupt pointer into PEBS buffer - * (interrupt occurs when write pointer passes interrupt pointer) - * - value to which counter is reset following counter overflow - * - * Later architectures use 64bit pointers throughout, whereas earlier - * architectures use 32bit pointers in 32bit mode. 
- * - * - * We compute the base address for the first 8 fields based on: - * - the field size stored in the DS configuration - * - the relative field position - * - an offset giving the start of the respective region - * - * This offset is further used to index various arrays holding - * information for BTS and PEBS at the respective index. - * - * On later 32bit processors, we only access the lower 32bit of the - * 64bit pointer fields. The upper halves will be zeroed out. - */ - -enum ds_field { - ds_buffer_base = 0, - ds_index, - ds_absolute_maximum, - ds_interrupt_threshold, -}; - -enum ds_qualifier { - ds_bts = 0, - ds_pebs -}; - -static inline unsigned long -ds_get(const unsigned char *base, enum ds_qualifier qual, enum ds_field field) -{ - base += (ds_cfg.sizeof_ptr_field * (field + (4 * qual))); - return *(unsigned long *)base; -} - -static inline void -ds_set(unsigned char *base, enum ds_qualifier qual, enum ds_field field, - unsigned long value) -{ - base += (ds_cfg.sizeof_ptr_field * (field + (4 * qual))); - (*(unsigned long *)base) = value; -} - - -/* - * Locking is done only for allocating BTS or PEBS resources. - */ -static DEFINE_SPINLOCK(ds_lock); - -/* - * We either support (system-wide) per-cpu or per-thread allocation. - * We distinguish the two based on the task_struct pointer, where a - * NULL pointer indicates per-cpu allocation for the current cpu. - * - * Allocations are use-counted. As soon as resources are allocated, - * further allocations must be of the same type (per-cpu or - * per-thread). We model this by counting allocations (i.e. the number - * of tracers of a certain type) for one type negatively: - * =0 no tracers - * >0 number of per-thread tracers - * <0 number of per-cpu tracers - * - * Tracers essentially gives the number of ds contexts for a certain - * type of allocation. - */ -static atomic_t tracers = ATOMIC_INIT(0); - -static inline int get_tracer(struct task_struct *task) -{ - int error; - - spin_lock_irq(&ds_lock); - - if (task) { - error = -EPERM; - if (atomic_read(&tracers) < 0) - goto out; - atomic_inc(&tracers); - } else { - error = -EPERM; - if (atomic_read(&tracers) > 0) - goto out; - atomic_dec(&tracers); - } - - error = 0; -out: - spin_unlock_irq(&ds_lock); - return error; -} - -static inline void put_tracer(struct task_struct *task) -{ - if (task) - atomic_dec(&tracers); - else - atomic_inc(&tracers); -} - -/* - * The DS context is either attached to a thread or to a cpu: - * - in the former case, the thread_struct contains a pointer to the - * attached context. - * - in the latter case, we use a static array of per-cpu context - * pointers. - * - * Contexts are use-counted. They are allocated on first access and - * deallocated when the last user puts the context. - */ -struct ds_context { - /* The DS configuration; goes into MSR_IA32_DS_AREA: */ - unsigned char ds[MAX_SIZEOF_DS]; - - /* The owner of the BTS and PEBS configuration, respectively: */ - struct bts_tracer *bts_master; - struct pebs_tracer *pebs_master; - - /* Use count: */ - unsigned long count; - - /* Pointer to the context pointer field: */ - struct ds_context **this; - - /* The traced task; NULL for cpu tracing: */ - struct task_struct *task; - - /* The traced cpu; only valid if task is NULL: */ - int cpu; -}; - -static DEFINE_PER_CPU(struct ds_context *, cpu_ds_context); - - -static struct ds_context *ds_get_context(struct task_struct *task, int cpu) -{ - struct ds_context **p_context = - (task ? 
&task->thread.ds_ctx : &per_cpu(cpu_ds_context, cpu)); - struct ds_context *context = NULL; - struct ds_context *new_context = NULL; - - /* Chances are small that we already have a context. */ - new_context = kzalloc(sizeof(*new_context), GFP_KERNEL); - if (!new_context) - return NULL; - - spin_lock_irq(&ds_lock); - - context = *p_context; - if (likely(!context)) { - context = new_context; - - context->this = p_context; - context->task = task; - context->cpu = cpu; - context->count = 0; - - *p_context = context; - } - - context->count++; - - spin_unlock_irq(&ds_lock); - - if (context != new_context) - kfree(new_context); - - return context; -} - -static void ds_put_context(struct ds_context *context) -{ - struct task_struct *task; - unsigned long irq; - - if (!context) - return; - - spin_lock_irqsave(&ds_lock, irq); - - if (--context->count) { - spin_unlock_irqrestore(&ds_lock, irq); - return; - } - - *(context->this) = NULL; - - task = context->task; - - if (task) - clear_tsk_thread_flag(task, TIF_DS_AREA_MSR); - - /* - * We leave the (now dangling) pointer to the DS configuration in - * the DS_AREA msr. This is as good or as bad as replacing it with - * NULL - the hardware would crash if we enabled tracing. - * - * This saves us some problems with having to write an msr on a - * different cpu while preventing others from doing the same for the - * next context for that same cpu. - */ - - spin_unlock_irqrestore(&ds_lock, irq); - - /* The context might still be in use for context switching. */ - if (task && (task != current)) - wait_task_context_switch(task); - - kfree(context); -} - -static void ds_install_ds_area(struct ds_context *context) -{ - unsigned long ds; - - ds = (unsigned long)context->ds; - - /* - * There is a race between the bts master and the pebs master. - * - * The thread/cpu access is synchronized via get/put_cpu() for - * task tracing and via wrmsr_on_cpu for cpu tracing. - * - * If bts and pebs are collected for the same task or same cpu, - * the same confiuration is written twice. - */ - if (context->task) { - get_cpu(); - if (context->task == current) - wrmsrl(MSR_IA32_DS_AREA, ds); - set_tsk_thread_flag(context->task, TIF_DS_AREA_MSR); - put_cpu(); - } else - wrmsr_on_cpu(context->cpu, MSR_IA32_DS_AREA, - (u32)((u64)ds), (u32)((u64)ds >> 32)); -} - -/* - * Call the tracer's callback on a buffer overflow. - * - * context: the ds context - * qual: the buffer type - */ -static void ds_overflow(struct ds_context *context, enum ds_qualifier qual) -{ - switch (qual) { - case ds_bts: - if (context->bts_master && - context->bts_master->ovfl) - context->bts_master->ovfl(context->bts_master); - break; - case ds_pebs: - if (context->pebs_master && - context->pebs_master->ovfl) - context->pebs_master->ovfl(context->pebs_master); - break; - } -} - - -/* - * Write raw data into the BTS or PEBS buffer. - * - * The remainder of any partially written record is zeroed out. - * - * context: the DS context - * qual: the buffer type - * record: the data to write - * size: the size of the data - */ -static int ds_write(struct ds_context *context, enum ds_qualifier qual, - const void *record, size_t size) -{ - int bytes_written = 0; - - if (!record) - return -EINVAL; - - while (size) { - unsigned long base, index, end, write_end, int_th; - unsigned long write_size, adj_write_size; - - /* - * Write as much as possible without producing an - * overflow interrupt. 
- * - * Interrupt_threshold must either be - * - bigger than absolute_maximum or - * - point to a record between buffer_base and absolute_maximum - * - * Index points to a valid record. - */ - base = ds_get(context->ds, qual, ds_buffer_base); - index = ds_get(context->ds, qual, ds_index); - end = ds_get(context->ds, qual, ds_absolute_maximum); - int_th = ds_get(context->ds, qual, ds_interrupt_threshold); - - write_end = min(end, int_th); - - /* - * If we are already beyond the interrupt threshold, - * we fill the entire buffer. - */ - if (write_end <= index) - write_end = end; - - if (write_end <= index) - break; - - write_size = min((unsigned long) size, write_end - index); - memcpy((void *)index, record, write_size); - - record = (const char *)record + write_size; - size -= write_size; - bytes_written += write_size; - - adj_write_size = write_size / ds_cfg.sizeof_rec[qual]; - adj_write_size *= ds_cfg.sizeof_rec[qual]; - - /* Zero out trailing bytes. */ - memset((char *)index + write_size, 0, - adj_write_size - write_size); - index += adj_write_size; - - if (index >= end) - index = base; - ds_set(context->ds, qual, ds_index, index); - - if (index >= int_th) - ds_overflow(context, qual); - } - - return bytes_written; -} - - -/* - * Branch Trace Store (BTS) uses the following format. Different - * architectures vary in the size of those fields. - * - source linear address - * - destination linear address - * - flags - * - * Later architectures use 64bit pointers throughout, whereas earlier - * architectures use 32bit pointers in 32bit mode. - * - * We compute the base address for the fields based on: - * - the field size stored in the DS configuration - * - the relative field position - * - * In order to store additional information in the BTS buffer, we use - * a special source address to indicate that the record requires - * special interpretation. - * - * Netburst indicated via a bit in the flags field whether the branch - * was predicted; this is ignored. - * - * We use two levels of abstraction: - * - the raw data level defined here - * - an arch-independent level defined in ds.h - */ - -enum bts_field { - bts_from, - bts_to, - bts_flags, - - bts_qual = bts_from, - bts_clock = bts_to, - bts_pid = bts_flags, - - bts_qual_mask = (bts_qual_max - 1), - bts_escape = ((unsigned long)-1 & ~bts_qual_mask) -}; - -static inline unsigned long bts_get(const char *base, unsigned long field) -{ - base += (ds_cfg.sizeof_ptr_field * field); - return *(unsigned long *)base; -} - -static inline void bts_set(char *base, unsigned long field, unsigned long val) -{ - base += (ds_cfg.sizeof_ptr_field * field); - (*(unsigned long *)base) = val; -} - - -/* - * The raw BTS data is architecture dependent. - * - * For higher-level users, we give an arch-independent view. - * - ds.h defines struct bts_struct - * - bts_read translates one raw bts record into a bts_struct - * - bts_write translates one bts_struct into the raw format and - * writes it into the top of the parameter tracer's buffer. 
- * - * return: bytes read/written on success; -Eerrno, otherwise - */ -static int -bts_read(struct bts_tracer *tracer, const void *at, struct bts_struct *out) -{ - if (!tracer) - return -EINVAL; - - if (at < tracer->trace.ds.begin) - return -EINVAL; - - if (tracer->trace.ds.end < (at + tracer->trace.ds.size)) - return -EINVAL; - - memset(out, 0, sizeof(*out)); - if ((bts_get(at, bts_qual) & ~bts_qual_mask) == bts_escape) { - out->qualifier = (bts_get(at, bts_qual) & bts_qual_mask); - out->variant.event.clock = bts_get(at, bts_clock); - out->variant.event.pid = bts_get(at, bts_pid); - } else { - out->qualifier = bts_branch; - out->variant.lbr.from = bts_get(at, bts_from); - out->variant.lbr.to = bts_get(at, bts_to); - - if (!out->variant.lbr.from && !out->variant.lbr.to) - out->qualifier = bts_invalid; - } - - return ds_cfg.sizeof_rec[ds_bts]; -} - -static int bts_write(struct bts_tracer *tracer, const struct bts_struct *in) -{ - unsigned char raw[MAX_SIZEOF_BTS]; - - if (!tracer) - return -EINVAL; - - if (MAX_SIZEOF_BTS < ds_cfg.sizeof_rec[ds_bts]) - return -EOVERFLOW; - - switch (in->qualifier) { - case bts_invalid: - bts_set(raw, bts_from, 0); - bts_set(raw, bts_to, 0); - bts_set(raw, bts_flags, 0); - break; - case bts_branch: - bts_set(raw, bts_from, in->variant.lbr.from); - bts_set(raw, bts_to, in->variant.lbr.to); - bts_set(raw, bts_flags, 0); - break; - case bts_task_arrives: - case bts_task_departs: - bts_set(raw, bts_qual, (bts_escape | in->qualifier)); - bts_set(raw, bts_clock, in->variant.event.clock); - bts_set(raw, bts_pid, in->variant.event.pid); - break; - default: - return -EINVAL; - } - - return ds_write(tracer->ds.context, ds_bts, raw, - ds_cfg.sizeof_rec[ds_bts]); -} - - -static void ds_write_config(struct ds_context *context, - struct ds_trace *cfg, enum ds_qualifier qual) -{ - unsigned char *ds = context->ds; - - ds_set(ds, qual, ds_buffer_base, (unsigned long)cfg->begin); - ds_set(ds, qual, ds_index, (unsigned long)cfg->top); - ds_set(ds, qual, ds_absolute_maximum, (unsigned long)cfg->end); - ds_set(ds, qual, ds_interrupt_threshold, (unsigned long)cfg->ith); -} - -static void ds_read_config(struct ds_context *context, - struct ds_trace *cfg, enum ds_qualifier qual) -{ - unsigned char *ds = context->ds; - - cfg->begin = (void *)ds_get(ds, qual, ds_buffer_base); - cfg->top = (void *)ds_get(ds, qual, ds_index); - cfg->end = (void *)ds_get(ds, qual, ds_absolute_maximum); - cfg->ith = (void *)ds_get(ds, qual, ds_interrupt_threshold); -} - -static void ds_init_ds_trace(struct ds_trace *trace, enum ds_qualifier qual, - void *base, size_t size, size_t ith, - unsigned int flags) { - unsigned long buffer, adj; - - /* - * Adjust the buffer address and size to meet alignment - * constraints: - * - buffer is double-word aligned - * - size is multiple of record size - * - * We checked the size at the very beginning; we have enough - * space to do the adjustment. - */ - buffer = (unsigned long)base; - - adj = ALIGN(buffer, DS_ALIGNMENT) - buffer; - buffer += adj; - size -= adj; - - trace->n = size / ds_cfg.sizeof_rec[qual]; - trace->size = ds_cfg.sizeof_rec[qual]; - - size = (trace->n * trace->size); - - trace->begin = (void *)buffer; - trace->top = trace->begin; - trace->end = (void *)(buffer + size); - /* - * The value for 'no threshold' is -1, which will set the - * threshold outside of the buffer, just like we want it. 
- */ - ith *= ds_cfg.sizeof_rec[qual]; - trace->ith = (void *)(buffer + size - ith); - - trace->flags = flags; -} - - -static int ds_request(struct ds_tracer *tracer, struct ds_trace *trace, - enum ds_qualifier qual, struct task_struct *task, - int cpu, void *base, size_t size, size_t th) -{ - struct ds_context *context; - int error; - size_t req_size; - - error = -EOPNOTSUPP; - if (!ds_cfg.sizeof_rec[qual]) - goto out; - - error = -EINVAL; - if (!base) - goto out; - - req_size = ds_cfg.sizeof_rec[qual]; - /* We might need space for alignment adjustments. */ - if (!IS_ALIGNED((unsigned long)base, DS_ALIGNMENT)) - req_size += DS_ALIGNMENT; - - error = -EINVAL; - if (size < req_size) - goto out; - - if (th != (size_t)-1) { - th *= ds_cfg.sizeof_rec[qual]; - - error = -EINVAL; - if (size <= th) - goto out; - } - - tracer->buffer = base; - tracer->size = size; - - error = -ENOMEM; - context = ds_get_context(task, cpu); - if (!context) - goto out; - tracer->context = context; - - /* - * Defer any tracer-specific initialization work for the context until - * context ownership has been clarified. - */ - - error = 0; - out: - return error; -} - -static struct bts_tracer *ds_request_bts(struct task_struct *task, int cpu, - void *base, size_t size, - bts_ovfl_callback_t ovfl, size_t th, - unsigned int flags) -{ - struct bts_tracer *tracer; - int error; - - /* Buffer overflow notification is not yet implemented. */ - error = -EOPNOTSUPP; - if (ovfl) - goto out; - - error = get_tracer(task); - if (error < 0) - goto out; - - error = -ENOMEM; - tracer = kzalloc(sizeof(*tracer), GFP_KERNEL); - if (!tracer) - goto out_put_tracer; - tracer->ovfl = ovfl; - - /* Do some more error checking and acquire a tracing context. */ - error = ds_request(&tracer->ds, &tracer->trace.ds, - ds_bts, task, cpu, base, size, th); - if (error < 0) - goto out_tracer; - - /* Claim the bts part of the tracing context we acquired above. */ - spin_lock_irq(&ds_lock); - - error = -EPERM; - if (tracer->ds.context->bts_master) - goto out_unlock; - tracer->ds.context->bts_master = tracer; - - spin_unlock_irq(&ds_lock); - - /* - * Now that we own the bts part of the context, let's complete the - * initialization for that part. - */ - ds_init_ds_trace(&tracer->trace.ds, ds_bts, base, size, th, flags); - ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_bts); - ds_install_ds_area(tracer->ds.context); - - tracer->trace.read = bts_read; - tracer->trace.write = bts_write; - - /* Start tracing. */ - ds_resume_bts(tracer); - - return tracer; - - out_unlock: - spin_unlock_irq(&ds_lock); - ds_put_context(tracer->ds.context); - out_tracer: - kfree(tracer); - out_put_tracer: - put_tracer(task); - out: - return ERR_PTR(error); -} - -struct bts_tracer *ds_request_bts_task(struct task_struct *task, - void *base, size_t size, - bts_ovfl_callback_t ovfl, - size_t th, unsigned int flags) -{ - return ds_request_bts(task, 0, base, size, ovfl, th, flags); -} - -struct bts_tracer *ds_request_bts_cpu(int cpu, void *base, size_t size, - bts_ovfl_callback_t ovfl, - size_t th, unsigned int flags) -{ - return ds_request_bts(NULL, cpu, base, size, ovfl, th, flags); -} - -static struct pebs_tracer *ds_request_pebs(struct task_struct *task, int cpu, - void *base, size_t size, - pebs_ovfl_callback_t ovfl, size_t th, - unsigned int flags) -{ - struct pebs_tracer *tracer; - int error; - - /* Buffer overflow notification is not yet implemented. 
*/ - error = -EOPNOTSUPP; - if (ovfl) - goto out; - - error = get_tracer(task); - if (error < 0) - goto out; - - error = -ENOMEM; - tracer = kzalloc(sizeof(*tracer), GFP_KERNEL); - if (!tracer) - goto out_put_tracer; - tracer->ovfl = ovfl; - - /* Do some more error checking and acquire a tracing context. */ - error = ds_request(&tracer->ds, &tracer->trace.ds, - ds_pebs, task, cpu, base, size, th); - if (error < 0) - goto out_tracer; - - /* Claim the pebs part of the tracing context we acquired above. */ - spin_lock_irq(&ds_lock); - - error = -EPERM; - if (tracer->ds.context->pebs_master) - goto out_unlock; - tracer->ds.context->pebs_master = tracer; - - spin_unlock_irq(&ds_lock); - - /* - * Now that we own the pebs part of the context, let's complete the - * initialization for that part. - */ - ds_init_ds_trace(&tracer->trace.ds, ds_pebs, base, size, th, flags); - ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_pebs); - ds_install_ds_area(tracer->ds.context); - - /* Start tracing. */ - ds_resume_pebs(tracer); - - return tracer; - - out_unlock: - spin_unlock_irq(&ds_lock); - ds_put_context(tracer->ds.context); - out_tracer: - kfree(tracer); - out_put_tracer: - put_tracer(task); - out: - return ERR_PTR(error); -} - -struct pebs_tracer *ds_request_pebs_task(struct task_struct *task, - void *base, size_t size, - pebs_ovfl_callback_t ovfl, - size_t th, unsigned int flags) -{ - return ds_request_pebs(task, 0, base, size, ovfl, th, flags); -} - -struct pebs_tracer *ds_request_pebs_cpu(int cpu, void *base, size_t size, - pebs_ovfl_callback_t ovfl, - size_t th, unsigned int flags) -{ - return ds_request_pebs(NULL, cpu, base, size, ovfl, th, flags); -} - -static void ds_free_bts(struct bts_tracer *tracer) -{ - struct task_struct *task; - - task = tracer->ds.context->task; - - WARN_ON_ONCE(tracer->ds.context->bts_master != tracer); - tracer->ds.context->bts_master = NULL; - - /* Make sure tracing stopped and the tracer is not in use. */ - if (task && (task != current)) - wait_task_context_switch(task); - - ds_put_context(tracer->ds.context); - put_tracer(task); - - kfree(tracer); -} - -void ds_release_bts(struct bts_tracer *tracer) -{ - might_sleep(); - - if (!tracer) - return; - - ds_suspend_bts(tracer); - ds_free_bts(tracer); -} - -int ds_release_bts_noirq(struct bts_tracer *tracer) -{ - struct task_struct *task; - unsigned long irq; - int error; - - if (!tracer) - return 0; - - task = tracer->ds.context->task; - - local_irq_save(irq); - - error = -EPERM; - if (!task && - (tracer->ds.context->cpu != smp_processor_id())) - goto out; - - error = -EPERM; - if (task && (task != current)) - goto out; - - ds_suspend_bts_noirq(tracer); - ds_free_bts(tracer); - - error = 0; - out: - local_irq_restore(irq); - return error; -} - -static void update_task_debugctlmsr(struct task_struct *task, - unsigned long debugctlmsr) -{ - task->thread.debugctlmsr = debugctlmsr; - - get_cpu(); - if (task == current) - update_debugctlmsr(debugctlmsr); - put_cpu(); -} - -void ds_suspend_bts(struct bts_tracer *tracer) -{ - struct task_struct *task; - unsigned long debugctlmsr; - int cpu; - - if (!tracer) - return; - - tracer->flags = 0; - - task = tracer->ds.context->task; - cpu = tracer->ds.context->cpu; - - WARN_ON(!task && irqs_disabled()); - - debugctlmsr = (task ? 
- task->thread.debugctlmsr : - get_debugctlmsr_on_cpu(cpu)); - debugctlmsr &= ~BTS_CONTROL; - - if (task) - update_task_debugctlmsr(task, debugctlmsr); - else - update_debugctlmsr_on_cpu(cpu, debugctlmsr); -} - -int ds_suspend_bts_noirq(struct bts_tracer *tracer) -{ - struct task_struct *task; - unsigned long debugctlmsr, irq; - int cpu, error = 0; - - if (!tracer) - return 0; - - tracer->flags = 0; - - task = tracer->ds.context->task; - cpu = tracer->ds.context->cpu; - - local_irq_save(irq); - - error = -EPERM; - if (!task && (cpu != smp_processor_id())) - goto out; - - debugctlmsr = (task ? - task->thread.debugctlmsr : - get_debugctlmsr()); - debugctlmsr &= ~BTS_CONTROL; - - if (task) - update_task_debugctlmsr(task, debugctlmsr); - else - update_debugctlmsr(debugctlmsr); - - error = 0; - out: - local_irq_restore(irq); - return error; -} - -static unsigned long ds_bts_control(struct bts_tracer *tracer) -{ - unsigned long control; - - control = ds_cfg.ctl[dsf_bts]; - if (!(tracer->trace.ds.flags & BTS_KERNEL)) - control |= ds_cfg.ctl[dsf_bts_kernel]; - if (!(tracer->trace.ds.flags & BTS_USER)) - control |= ds_cfg.ctl[dsf_bts_user]; - - return control; -} - -void ds_resume_bts(struct bts_tracer *tracer) -{ - struct task_struct *task; - unsigned long debugctlmsr; - int cpu; - - if (!tracer) - return; - - tracer->flags = tracer->trace.ds.flags; - - task = tracer->ds.context->task; - cpu = tracer->ds.context->cpu; - - WARN_ON(!task && irqs_disabled()); - - debugctlmsr = (task ? - task->thread.debugctlmsr : - get_debugctlmsr_on_cpu(cpu)); - debugctlmsr |= ds_bts_control(tracer); - - if (task) - update_task_debugctlmsr(task, debugctlmsr); - else - update_debugctlmsr_on_cpu(cpu, debugctlmsr); -} - -int ds_resume_bts_noirq(struct bts_tracer *tracer) -{ - struct task_struct *task; - unsigned long debugctlmsr, irq; - int cpu, error = 0; - - if (!tracer) - return 0; - - tracer->flags = tracer->trace.ds.flags; - - task = tracer->ds.context->task; - cpu = tracer->ds.context->cpu; - - local_irq_save(irq); - - error = -EPERM; - if (!task && (cpu != smp_processor_id())) - goto out; - - debugctlmsr = (task ? 
- task->thread.debugctlmsr : - get_debugctlmsr()); - debugctlmsr |= ds_bts_control(tracer); - - if (task) - update_task_debugctlmsr(task, debugctlmsr); - else - update_debugctlmsr(debugctlmsr); - - error = 0; - out: - local_irq_restore(irq); - return error; -} - -static void ds_free_pebs(struct pebs_tracer *tracer) -{ - struct task_struct *task; - - task = tracer->ds.context->task; - - WARN_ON_ONCE(tracer->ds.context->pebs_master != tracer); - tracer->ds.context->pebs_master = NULL; - - ds_put_context(tracer->ds.context); - put_tracer(task); - - kfree(tracer); -} - -void ds_release_pebs(struct pebs_tracer *tracer) -{ - might_sleep(); - - if (!tracer) - return; - - ds_suspend_pebs(tracer); - ds_free_pebs(tracer); -} - -int ds_release_pebs_noirq(struct pebs_tracer *tracer) -{ - struct task_struct *task; - unsigned long irq; - int error; - - if (!tracer) - return 0; - - task = tracer->ds.context->task; - - local_irq_save(irq); - - error = -EPERM; - if (!task && - (tracer->ds.context->cpu != smp_processor_id())) - goto out; - - error = -EPERM; - if (task && (task != current)) - goto out; - - ds_suspend_pebs_noirq(tracer); - ds_free_pebs(tracer); - - error = 0; - out: - local_irq_restore(irq); - return error; -} - -void ds_suspend_pebs(struct pebs_tracer *tracer) -{ - -} - -int ds_suspend_pebs_noirq(struct pebs_tracer *tracer) -{ - return 0; -} - -void ds_resume_pebs(struct pebs_tracer *tracer) -{ - -} - -int ds_resume_pebs_noirq(struct pebs_tracer *tracer) -{ - return 0; -} - -const struct bts_trace *ds_read_bts(struct bts_tracer *tracer) -{ - if (!tracer) - return NULL; - - ds_read_config(tracer->ds.context, &tracer->trace.ds, ds_bts); - return &tracer->trace; -} - -const struct pebs_trace *ds_read_pebs(struct pebs_tracer *tracer) -{ - if (!tracer) - return NULL; - - ds_read_config(tracer->ds.context, &tracer->trace.ds, ds_pebs); - - tracer->trace.counters = ds_cfg.nr_counter_reset; - memcpy(tracer->trace.counter_reset, - tracer->ds.context->ds + - (NUM_DS_PTR_FIELDS * ds_cfg.sizeof_ptr_field), - ds_cfg.nr_counter_reset * PEBS_RESET_FIELD_SIZE); - - return &tracer->trace; -} - -int ds_reset_bts(struct bts_tracer *tracer) -{ - if (!tracer) - return -EINVAL; - - tracer->trace.ds.top = tracer->trace.ds.begin; - - ds_set(tracer->ds.context->ds, ds_bts, ds_index, - (unsigned long)tracer->trace.ds.top); - - return 0; -} - -int ds_reset_pebs(struct pebs_tracer *tracer) -{ - if (!tracer) - return -EINVAL; - - tracer->trace.ds.top = tracer->trace.ds.begin; - - ds_set(tracer->ds.context->ds, ds_pebs, ds_index, - (unsigned long)tracer->trace.ds.top); - - return 0; -} - -int ds_set_pebs_reset(struct pebs_tracer *tracer, - unsigned int counter, u64 value) -{ - if (!tracer) - return -EINVAL; - - if (ds_cfg.nr_counter_reset < counter) - return -EINVAL; - - *(u64 *)(tracer->ds.context->ds + - (NUM_DS_PTR_FIELDS * ds_cfg.sizeof_ptr_field) + - (counter * PEBS_RESET_FIELD_SIZE)) = value; - - return 0; -} - -static const struct ds_configuration ds_cfg_netburst = { - .name = "Netburst", - .ctl[dsf_bts] = (1 << 2) | (1 << 3), - .ctl[dsf_bts_kernel] = (1 << 5), - .ctl[dsf_bts_user] = (1 << 6), - .nr_counter_reset = 1, -}; -static const struct ds_configuration ds_cfg_pentium_m = { - .name = "Pentium M", - .ctl[dsf_bts] = (1 << 6) | (1 << 7), - .nr_counter_reset = 1, -}; -static const struct ds_configuration ds_cfg_core2_atom = { - .name = "Core 2/Atom", - .ctl[dsf_bts] = (1 << 6) | (1 << 7), - .ctl[dsf_bts_kernel] = (1 << 9), - .ctl[dsf_bts_user] = (1 << 10), - .nr_counter_reset = 1, -}; -static const struct 
ds_configuration ds_cfg_core_i7 = { - .name = "Core i7", - .ctl[dsf_bts] = (1 << 6) | (1 << 7), - .ctl[dsf_bts_kernel] = (1 << 9), - .ctl[dsf_bts_user] = (1 << 10), - .nr_counter_reset = 4, -}; - -static void -ds_configure(const struct ds_configuration *cfg, - struct cpuinfo_x86 *cpu) -{ - unsigned long nr_pebs_fields = 0; - - printk(KERN_INFO "[ds] using %s configuration\n", cfg->name); - -#ifdef __i386__ - nr_pebs_fields = 10; -#else - nr_pebs_fields = 18; -#endif - - /* - * Starting with version 2, architectural performance - * monitoring supports a format specifier. - */ - if ((cpuid_eax(0xa) & 0xff) > 1) { - unsigned long perf_capabilities, format; - - rdmsrl(MSR_IA32_PERF_CAPABILITIES, perf_capabilities); - - format = (perf_capabilities >> 8) & 0xf; - - switch (format) { - case 0: - nr_pebs_fields = 18; - break; - case 1: - nr_pebs_fields = 22; - break; - default: - printk(KERN_INFO - "[ds] unknown PEBS format: %lu\n", format); - nr_pebs_fields = 0; - break; - } - } - - memset(&ds_cfg, 0, sizeof(ds_cfg)); - ds_cfg = *cfg; - - ds_cfg.sizeof_ptr_field = - (cpu_has(cpu, X86_FEATURE_DTES64) ? 8 : 4); - - ds_cfg.sizeof_rec[ds_bts] = ds_cfg.sizeof_ptr_field * 3; - ds_cfg.sizeof_rec[ds_pebs] = ds_cfg.sizeof_ptr_field * nr_pebs_fields; - - if (!cpu_has(cpu, X86_FEATURE_BTS)) { - ds_cfg.sizeof_rec[ds_bts] = 0; - printk(KERN_INFO "[ds] bts not available\n"); - } - if (!cpu_has(cpu, X86_FEATURE_PEBS)) { - ds_cfg.sizeof_rec[ds_pebs] = 0; - printk(KERN_INFO "[ds] pebs not available\n"); - } - - printk(KERN_INFO "[ds] sizes: address: %u bit, ", - 8 * ds_cfg.sizeof_ptr_field); - printk("bts/pebs record: %u/%u bytes\n", - ds_cfg.sizeof_rec[ds_bts], ds_cfg.sizeof_rec[ds_pebs]); - - WARN_ON_ONCE(MAX_PEBS_COUNTERS < ds_cfg.nr_counter_reset); -} - -void __cpuinit ds_init_intel(struct cpuinfo_x86 *c) -{ - /* Only configure the first cpu. Others are identical. */ - if (ds_cfg.name) - return; - - switch (c->x86) { - case 0x6: - switch (c->x86_model) { - case 0x9: - case 0xd: /* Pentium M */ - ds_configure(&ds_cfg_pentium_m, c); - break; - case 0xf: - case 0x17: /* Core2 */ - case 0x1c: /* Atom */ - ds_configure(&ds_cfg_core2_atom, c); - break; - case 0x1a: /* Core i7 */ - ds_configure(&ds_cfg_core_i7, c); - break; - default: - /* Sorry, don't know about them. */ - break; - } - break; - case 0xf: - switch (c->x86_model) { - case 0x0: - case 0x1: - case 0x2: /* Netburst */ - ds_configure(&ds_cfg_netburst, c); - break; - default: - /* Sorry, don't know about them. */ - break; - } - break; - default: - /* Sorry, don't know about them. */ - break; - } -} - -static inline void ds_take_timestamp(struct ds_context *context, - enum bts_qualifier qualifier, - struct task_struct *task) -{ - struct bts_tracer *tracer = context->bts_master; - struct bts_struct ts; - - /* Prevent compilers from reading the tracer pointer twice. */ - barrier(); - - if (!tracer || !(tracer->flags & BTS_TIMESTAMPS)) - return; - - memset(&ts, 0, sizeof(ts)); - ts.qualifier = qualifier; - ts.variant.event.clock = trace_clock_global(); - ts.variant.event.pid = task->pid; - - bts_write(tracer, &ts); -} - -/* - * Change the DS configuration from tracing prev to tracing next. - */ -void ds_switch_to(struct task_struct *prev, struct task_struct *next) -{ - struct ds_context *prev_ctx = prev->thread.ds_ctx; - struct ds_context *next_ctx = next->thread.ds_ctx; - unsigned long debugctlmsr = next->thread.debugctlmsr; - - /* Make sure all data is read before we start. 
*/ - barrier(); - - if (prev_ctx) { - update_debugctlmsr(0); - - ds_take_timestamp(prev_ctx, bts_task_departs, prev); - } - - if (next_ctx) { - ds_take_timestamp(next_ctx, bts_task_arrives, next); - - wrmsrl(MSR_IA32_DS_AREA, (unsigned long)next_ctx->ds); - } - - update_debugctlmsr(debugctlmsr); -} - -static __init int ds_selftest(void) -{ - if (ds_cfg.sizeof_rec[ds_bts]) { - int error; - - error = ds_selftest_bts(); - if (error) { - WARN(1, "[ds] selftest failed. disabling bts.\n"); - ds_cfg.sizeof_rec[ds_bts] = 0; - } - } - - if (ds_cfg.sizeof_rec[ds_pebs]) { - int error; - - error = ds_selftest_pebs(); - if (error) { - WARN(1, "[ds] selftest failed. disabling pebs.\n"); - ds_cfg.sizeof_rec[ds_pebs] = 0; - } - } - - return 0; -} -device_initcall(ds_selftest); diff --git a/arch/x86/kernel/ds_selftest.c b/arch/x86/kernel/ds_selftest.c deleted file mode 100644 index 6bc7c199ab99..000000000000 --- a/arch/x86/kernel/ds_selftest.c +++ /dev/null @@ -1,408 +0,0 @@ -/* - * Debug Store support - selftest - * - * - * Copyright (C) 2009 Intel Corporation. - * Markus Metzger <markus.t.metzger@intel.com>, 2009 - */ - -#include "ds_selftest.h" - -#include <linux/kernel.h> -#include <linux/string.h> -#include <linux/smp.h> -#include <linux/cpu.h> - -#include <asm/ds.h> - - -#define BUFFER_SIZE 521 /* Intentionally chose an odd size. */ -#define SMALL_BUFFER_SIZE 24 /* A single bts entry. */ - -struct ds_selftest_bts_conf { - struct bts_tracer *tracer; - int error; - int (*suspend)(struct bts_tracer *); - int (*resume)(struct bts_tracer *); -}; - -static int ds_selftest_bts_consistency(const struct bts_trace *trace) -{ - int error = 0; - - if (!trace) { - printk(KERN_CONT "failed to access trace..."); - /* Bail out. Other tests are pointless. */ - return -1; - } - - if (!trace->read) { - printk(KERN_CONT "bts read not available..."); - error = -1; - } - - /* Do some sanity checks on the trace configuration. */ - if (!trace->ds.n) { - printk(KERN_CONT "empty bts buffer..."); - error = -1; - } - if (!trace->ds.size) { - printk(KERN_CONT "bad bts trace setup..."); - error = -1; - } - if (trace->ds.end != - (char *)trace->ds.begin + (trace->ds.n * trace->ds.size)) { - printk(KERN_CONT "bad bts buffer setup..."); - error = -1; - } - /* - * We allow top in [begin; end], since its not clear when the - * overflow adjustment happens: after the increment or before the - * write. - */ - if ((trace->ds.top < trace->ds.begin) || - (trace->ds.end < trace->ds.top)) { - printk(KERN_CONT "bts top out of bounds..."); - error = -1; - } - - return error; -} - -static int ds_selftest_bts_read(struct bts_tracer *tracer, - const struct bts_trace *trace, - const void *from, const void *to) -{ - const unsigned char *at; - - /* - * Check a few things which do not belong to this test. - * They should be covered by other tests. - */ - if (!trace) - return -1; - - if (!trace->read) - return -1; - - if (to < from) - return -1; - - if (from < trace->ds.begin) - return -1; - - if (trace->ds.end < to) - return -1; - - if (!trace->ds.size) - return -1; - - /* Now to the test itself. 
*/ - for (at = from; (void *)at < to; at += trace->ds.size) { - struct bts_struct bts; - unsigned long index; - int error; - - if (((void *)at - trace->ds.begin) % trace->ds.size) { - printk(KERN_CONT - "read from non-integer index..."); - return -1; - } - index = ((void *)at - trace->ds.begin) / trace->ds.size; - - memset(&bts, 0, sizeof(bts)); - error = trace->read(tracer, at, &bts); - if (error < 0) { - printk(KERN_CONT - "error reading bts trace at [%lu] (0x%p)...", - index, at); - return error; - } - - switch (bts.qualifier) { - case BTS_BRANCH: - break; - default: - printk(KERN_CONT - "unexpected bts entry %llu at [%lu] (0x%p)...", - bts.qualifier, index, at); - return -1; - } - } - - return 0; -} - -static void ds_selftest_bts_cpu(void *arg) -{ - struct ds_selftest_bts_conf *conf = arg; - const struct bts_trace *trace; - void *top; - - if (IS_ERR(conf->tracer)) { - conf->error = PTR_ERR(conf->tracer); - conf->tracer = NULL; - - printk(KERN_CONT - "initialization failed (err: %d)...", conf->error); - return; - } - - /* We should meanwhile have enough trace. */ - conf->error = conf->suspend(conf->tracer); - if (conf->error < 0) - return; - - /* Let's see if we can access the trace. */ - trace = ds_read_bts(conf->tracer); - - conf->error = ds_selftest_bts_consistency(trace); - if (conf->error < 0) - return; - - /* If everything went well, we should have a few trace entries. */ - if (trace->ds.top == trace->ds.begin) { - /* - * It is possible but highly unlikely that we got a - * buffer overflow and end up at exactly the same - * position we started from. - * Let's issue a warning, but continue. - */ - printk(KERN_CONT "no trace/overflow..."); - } - - /* Let's try to read the trace we collected. */ - conf->error = - ds_selftest_bts_read(conf->tracer, trace, - trace->ds.begin, trace->ds.top); - if (conf->error < 0) - return; - - /* - * Let's read the trace again. - * Since we suspended tracing, we should get the same result. - */ - top = trace->ds.top; - - trace = ds_read_bts(conf->tracer); - conf->error = ds_selftest_bts_consistency(trace); - if (conf->error < 0) - return; - - if (top != trace->ds.top) { - printk(KERN_CONT "suspend not working..."); - conf->error = -1; - return; - } - - /* Let's collect some more trace - see if resume is working. */ - conf->error = conf->resume(conf->tracer); - if (conf->error < 0) - return; - - conf->error = conf->suspend(conf->tracer); - if (conf->error < 0) - return; - - trace = ds_read_bts(conf->tracer); - - conf->error = ds_selftest_bts_consistency(trace); - if (conf->error < 0) - return; - - if (trace->ds.top == top) { - /* - * It is possible but highly unlikely that we got a - * buffer overflow and end up at exactly the same - * position we started from. - * Let's issue a warning and check the full trace. - */ - printk(KERN_CONT - "no resume progress/overflow..."); - - conf->error = - ds_selftest_bts_read(conf->tracer, trace, - trace->ds.begin, trace->ds.end); - } else if (trace->ds.top < top) { - /* - * We had a buffer overflow - the entire buffer should - * contain trace records. - */ - conf->error = - ds_selftest_bts_read(conf->tracer, trace, - trace->ds.begin, trace->ds.end); - } else { - /* - * It is quite likely that the buffer did not overflow. - * Let's just check the delta trace. 
- */ - conf->error = - ds_selftest_bts_read(conf->tracer, trace, top, - trace->ds.top); - } - if (conf->error < 0) - return; - - conf->error = 0; -} - -static int ds_suspend_bts_wrap(struct bts_tracer *tracer) -{ - ds_suspend_bts(tracer); - return 0; -} - -static int ds_resume_bts_wrap(struct bts_tracer *tracer) -{ - ds_resume_bts(tracer); - return 0; -} - -static void ds_release_bts_noirq_wrap(void *tracer) -{ - (void)ds_release_bts_noirq(tracer); -} - -static int ds_selftest_bts_bad_release_noirq(int cpu, - struct bts_tracer *tracer) -{ - int error = -EPERM; - - /* Try to release the tracer on the wrong cpu. */ - get_cpu(); - if (cpu != smp_processor_id()) { - error = ds_release_bts_noirq(tracer); - if (error != -EPERM) - printk(KERN_CONT "release on wrong cpu..."); - } - put_cpu(); - - return error ? 0 : -1; -} - -static int ds_selftest_bts_bad_request_cpu(int cpu, void *buffer) -{ - struct bts_tracer *tracer; - int error; - - /* Try to request cpu tracing while task tracing is active. */ - tracer = ds_request_bts_cpu(cpu, buffer, BUFFER_SIZE, NULL, - (size_t)-1, BTS_KERNEL); - error = PTR_ERR(tracer); - if (!IS_ERR(tracer)) { - ds_release_bts(tracer); - error = 0; - } - - if (error != -EPERM) - printk(KERN_CONT "cpu/task tracing overlap..."); - - return error ? 0 : -1; -} - -static int ds_selftest_bts_bad_request_task(void *buffer) -{ - struct bts_tracer *tracer; - int error; - - /* Try to request cpu tracing while task tracing is active. */ - tracer = ds_request_bts_task(current, buffer, BUFFER_SIZE, NULL, - (size_t)-1, BTS_KERNEL); - error = PTR_ERR(tracer); - if (!IS_ERR(tracer)) { - error = 0; - ds_release_bts(tracer); - } - - if (error != -EPERM) - printk(KERN_CONT "task/cpu tracing overlap..."); - - return error ? 0 : -1; -} - -int ds_selftest_bts(void) -{ - struct ds_selftest_bts_conf conf; - unsigned char buffer[BUFFER_SIZE], *small_buffer; - unsigned long irq; - int cpu; - - printk(KERN_INFO "[ds] bts selftest..."); - conf.error = 0; - - small_buffer = (unsigned char *)ALIGN((unsigned long)buffer, 8) + 8; - - get_online_cpus(); - for_each_online_cpu(cpu) { - conf.suspend = ds_suspend_bts_wrap; - conf.resume = ds_resume_bts_wrap; - conf.tracer = - ds_request_bts_cpu(cpu, buffer, BUFFER_SIZE, - NULL, (size_t)-1, BTS_KERNEL); - ds_selftest_bts_cpu(&conf); - if (conf.error >= 0) - conf.error = ds_selftest_bts_bad_request_task(buffer); - ds_release_bts(conf.tracer); - if (conf.error < 0) - goto out; - - conf.suspend = ds_suspend_bts_noirq; - conf.resume = ds_resume_bts_noirq; - conf.tracer = - ds_request_bts_cpu(cpu, buffer, BUFFER_SIZE, - NULL, (size_t)-1, BTS_KERNEL); - smp_call_function_single(cpu, ds_selftest_bts_cpu, &conf, 1); - if (conf.error >= 0) { - conf.error = - ds_selftest_bts_bad_release_noirq(cpu, - conf.tracer); - /* We must not release the tracer twice. 
*/ - if (conf.error < 0) - conf.tracer = NULL; - } - if (conf.error >= 0) - conf.error = ds_selftest_bts_bad_request_task(buffer); - smp_call_function_single(cpu, ds_release_bts_noirq_wrap, - conf.tracer, 1); - if (conf.error < 0) - goto out; - } - - conf.suspend = ds_suspend_bts_wrap; - conf.resume = ds_resume_bts_wrap; - conf.tracer = - ds_request_bts_task(current, buffer, BUFFER_SIZE, - NULL, (size_t)-1, BTS_KERNEL); - ds_selftest_bts_cpu(&conf); - if (conf.error >= 0) - conf.error = ds_selftest_bts_bad_request_cpu(0, buffer); - ds_release_bts(conf.tracer); - if (conf.error < 0) - goto out; - - conf.suspend = ds_suspend_bts_noirq; - conf.resume = ds_resume_bts_noirq; - conf.tracer = - ds_request_bts_task(current, small_buffer, SMALL_BUFFER_SIZE, - NULL, (size_t)-1, BTS_KERNEL); - local_irq_save(irq); - ds_selftest_bts_cpu(&conf); - if (conf.error >= 0) - conf.error = ds_selftest_bts_bad_request_cpu(0, buffer); - ds_release_bts_noirq(conf.tracer); - local_irq_restore(irq); - if (conf.error < 0) - goto out; - - conf.error = 0; - out: - put_online_cpus(); - printk(KERN_CONT "%s.\n", (conf.error ? "failed" : "passed")); - - return conf.error; -} - -int ds_selftest_pebs(void) -{ - return 0; -} diff --git a/arch/x86/kernel/ds_selftest.h b/arch/x86/kernel/ds_selftest.h deleted file mode 100644 index 2ba8745c6663..000000000000 --- a/arch/x86/kernel/ds_selftest.h +++ /dev/null @@ -1,15 +0,0 @@ -/* - * Debug Store support - selftest - * - * - * Copyright (C) 2009 Intel Corporation. - * Markus Metzger <markus.t.metzger@intel.com>, 2009 - */ - -#ifdef CONFIG_X86_DS_SELFTEST -extern int ds_selftest_bts(void); -extern int ds_selftest_pebs(void); -#else -static inline int ds_selftest_bts(void) { return 0; } -static inline int ds_selftest_pebs(void) { return 0; } -#endif diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c index 6d817554780a..c89a386930b7 100644 --- a/arch/x86/kernel/dumpstack.c +++ b/arch/x86/kernel/dumpstack.c @@ -224,11 +224,6 @@ unsigned __kprobes long oops_begin(void) int cpu; unsigned long flags; - /* notify the hw-branch tracer so it may disable tracing and - add the last trace to the trace buffer - - the earlier this happens, the more useful the trace. */ - trace_hw_branch_oops(); - oops_enter(); /* racy, but better than risking deadlock. */ diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S index 44a8e0dc6737..cd49141cf153 100644 --- a/arch/x86/kernel/entry_32.S +++ b/arch/x86/kernel/entry_32.S @@ -53,6 +53,7 @@ #include <asm/processor-flags.h> #include <asm/ftrace.h> #include <asm/irq_vectors.h> +#include <asm/cpufeature.h> /* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. 
*/ #include <linux/elf-em.h> @@ -905,7 +906,25 @@ ENTRY(simd_coprocessor_error) RING0_INT_FRAME pushl $0 CFI_ADJUST_CFA_OFFSET 4 +#ifdef CONFIG_X86_INVD_BUG + /* AMD 486 bug: invd from userspace calls exception 19 instead of #GP */ +661: pushl $do_general_protection +662: +.section .altinstructions,"a" + .balign 4 + .long 661b + .long 663f + .byte X86_FEATURE_XMM + .byte 662b-661b + .byte 664f-663f +.previous +.section .altinstr_replacement,"ax" +663: pushl $do_simd_coprocessor_error +664: +.previous +#else pushl $do_simd_coprocessor_error +#endif CFI_ADJUST_CFA_OFFSET 4 jmp error_code CFI_ENDPROC diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c index d6cc065f519f..a8f1b803d2fd 100644 --- a/arch/x86/kernel/hw_breakpoint.c +++ b/arch/x86/kernel/hw_breakpoint.c @@ -189,25 +189,16 @@ static int get_hbp_len(u8 hbp_len) } /* - * Check for virtual address in user space. - */ -int arch_check_va_in_userspace(unsigned long va, u8 hbp_len) -{ - unsigned int len; - - len = get_hbp_len(hbp_len); - - return (va <= TASK_SIZE - len); -} - -/* * Check for virtual address in kernel space. */ -static int arch_check_va_in_kernelspace(unsigned long va, u8 hbp_len) +int arch_check_bp_in_kernelspace(struct perf_event *bp) { unsigned int len; + unsigned long va; + struct arch_hw_breakpoint *info = counter_arch_bp(bp); - len = get_hbp_len(hbp_len); + va = info->address; + len = get_hbp_len(info->len); return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE); } @@ -300,8 +291,7 @@ static int arch_build_bp_info(struct perf_event *bp) /* * Validate the arch-specific HW Breakpoint register settings */ -int arch_validate_hwbkpt_settings(struct perf_event *bp, - struct task_struct *tsk) +int arch_validate_hwbkpt_settings(struct perf_event *bp) { struct arch_hw_breakpoint *info = counter_arch_bp(bp); unsigned int align; @@ -314,16 +304,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp, ret = -EINVAL; - if (info->type == X86_BREAKPOINT_EXECUTE) - /* - * Ptrace-refactoring code - * For now, we'll allow instruction breakpoint only for user-space - * addresses - */ - if ((!arch_check_va_in_userspace(info->address, info->len)) && - info->len != X86_BREAKPOINT_EXECUTE) - return ret; - switch (info->len) { case X86_BREAKPOINT_LEN_1: align = 0; @@ -350,15 +330,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp, if (info->address & align) return -EINVAL; - /* Check that the virtual address is in the proper range */ - if (tsk) { - if (!arch_check_va_in_userspace(info->address, info->len)) - return -EFAULT; - } else { - if (!arch_check_va_in_kernelspace(info->address, info->len)) - return -EFAULT; - } - return 0; } diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c index 54c31c285488..86cef6b32253 100644 --- a/arch/x86/kernel/i387.c +++ b/arch/x86/kernel/i387.c @@ -102,65 +102,62 @@ void __cpuinit fpu_init(void) mxcsr_feature_mask_init(); /* clean state in init */ - if (cpu_has_xsave) - current_thread_info()->status = TS_XSAVE; - else - current_thread_info()->status = 0; + current_thread_info()->status = 0; clear_used_math(); } #endif /* CONFIG_X86_64 */ -/* - * The _current_ task is using the FPU for the first time - * so initialize it and set the mxcsr to its default - * value at reset if we support XMM instructions and then - * remeber the current task has used the FPU. 
- */ -int init_fpu(struct task_struct *tsk) +static void fpu_finit(struct fpu *fpu) { - if (tsk_used_math(tsk)) { - if (HAVE_HWFP && tsk == current) - unlazy_fpu(tsk); - return 0; - } - - /* - * Memory allocation at the first usage of the FPU and other state. - */ - if (!tsk->thread.xstate) { - tsk->thread.xstate = kmem_cache_alloc(task_xstate_cachep, - GFP_KERNEL); - if (!tsk->thread.xstate) - return -ENOMEM; - } - #ifdef CONFIG_X86_32 if (!HAVE_HWFP) { - memset(tsk->thread.xstate, 0, xstate_size); - finit_task(tsk); - set_stopped_child_used_math(tsk); - return 0; + finit_soft_fpu(&fpu->state->soft); + return; } #endif if (cpu_has_fxsr) { - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; memset(fx, 0, xstate_size); fx->cwd = 0x37f; if (cpu_has_xmm) fx->mxcsr = MXCSR_DEFAULT; } else { - struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave; + struct i387_fsave_struct *fp = &fpu->state->fsave; memset(fp, 0, xstate_size); fp->cwd = 0xffff037fu; fp->swd = 0xffff0000u; fp->twd = 0xffffffffu; fp->fos = 0xffff0000u; } +} + +/* + * The _current_ task is using the FPU for the first time + * so initialize it and set the mxcsr to its default + * value at reset if we support XMM instructions and then + * remeber the current task has used the FPU. + */ +int init_fpu(struct task_struct *tsk) +{ + int ret; + + if (tsk_used_math(tsk)) { + if (HAVE_HWFP && tsk == current) + unlazy_fpu(tsk); + return 0; + } + /* - * Only the device not available exception or ptrace can call init_fpu. + * Memory allocation at the first usage of the FPU and other state. */ + ret = fpu_alloc(&tsk->thread.fpu); + if (ret) + return ret; + + fpu_finit(&tsk->thread.fpu); + set_stopped_child_used_math(tsk); return 0; } @@ -194,7 +191,7 @@ int xfpregs_get(struct task_struct *target, const struct user_regset *regset, return ret; return user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fxsave, 0, -1); + &target->thread.fpu.state->fxsave, 0, -1); } int xfpregs_set(struct task_struct *target, const struct user_regset *regset, @@ -211,19 +208,19 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset, return ret; ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fxsave, 0, -1); + &target->thread.fpu.state->fxsave, 0, -1); /* * mxcsr reserved bits must be masked to zero for security reasons. */ - target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; /* * update the header bits in the xsave header, indicating the * presence of FP and SSE state. */ if (cpu_has_xsave) - target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; + target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; return ret; } @@ -246,14 +243,14 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset, * memory layout in the thread struct, so that we can copy the entire * xstateregs to the user using one user_regset_copyout(). */ - memcpy(&target->thread.xstate->fxsave.sw_reserved, + memcpy(&target->thread.fpu.state->fxsave.sw_reserved, xstate_fx_sw_bytes, sizeof(xstate_fx_sw_bytes)); /* * Copy the xstate memory layout. 
*/ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->xsave, 0, -1); + &target->thread.fpu.state->xsave, 0, -1); return ret; } @@ -272,14 +269,14 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset, return ret; ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->xsave, 0, -1); + &target->thread.fpu.state->xsave, 0, -1); /* * mxcsr reserved bits must be masked to zero for security reasons. */ - target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; - xsave_hdr = &target->thread.xstate->xsave.xsave_hdr; + xsave_hdr = &target->thread.fpu.state->xsave.xsave_hdr; xsave_hdr->xstate_bv &= pcntxt_mask; /* @@ -365,7 +362,7 @@ static inline u32 twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave) static void convert_from_fxsr(struct user_i387_ia32_struct *env, struct task_struct *tsk) { - struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave; struct _fpreg *to = (struct _fpreg *) &env->st_space[0]; struct _fpxreg *from = (struct _fpxreg *) &fxsave->st_space[0]; int i; @@ -405,7 +402,7 @@ static void convert_to_fxsr(struct task_struct *tsk, const struct user_i387_ia32_struct *env) { - struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave; struct _fpreg *from = (struct _fpreg *) &env->st_space[0]; struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0]; int i; @@ -445,7 +442,7 @@ int fpregs_get(struct task_struct *target, const struct user_regset *regset, if (!cpu_has_fxsr) { return user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fsave, 0, + &target->thread.fpu.state->fsave, 0, -1); } @@ -475,7 +472,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, if (!cpu_has_fxsr) { return user_regset_copyin(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fsave, 0, -1); + &target->thread.fpu.state->fsave, 0, -1); } if (pos > 0 || count < sizeof(env)) @@ -490,7 +487,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, * presence of FP. */ if (cpu_has_xsave) - target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FP; + target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP; return ret; } @@ -501,7 +498,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave; + struct i387_fsave_struct *fp = &tsk->thread.fpu.state->fsave; fp->status = fp->swd; if (__copy_to_user(buf, fp, sizeof(struct i387_fsave_struct))) @@ -512,7 +509,7 @@ static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf) static int save_i387_fxsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fx = &tsk->thread.fpu.state->fxsave; struct user_i387_ia32_struct env; int err = 0; @@ -547,7 +544,7 @@ static int save_i387_xsave(void __user *buf) * header as well as change any contents in the memory layout. * xrestore as part of sigreturn will capture all the changes. 
*/ - tsk->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; + tsk->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; if (save_i387_fxsave(fx) < 0) return -1; @@ -599,7 +596,7 @@ static inline int restore_i387_fsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - return __copy_from_user(&tsk->thread.xstate->fsave, buf, + return __copy_from_user(&tsk->thread.fpu.state->fsave, buf, sizeof(struct i387_fsave_struct)); } @@ -610,10 +607,10 @@ static int restore_i387_fxsave(struct _fpstate_ia32 __user *buf, struct user_i387_ia32_struct env; int err; - err = __copy_from_user(&tsk->thread.xstate->fxsave, &buf->_fxsr_env[0], + err = __copy_from_user(&tsk->thread.fpu.state->fxsave, &buf->_fxsr_env[0], size); /* mxcsr reserved bits must be masked to zero for security reasons */ - tsk->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + tsk->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; if (err || __copy_from_user(&env, buf, sizeof(env))) return 1; convert_to_fxsr(tsk, &env); @@ -629,7 +626,7 @@ static int restore_i387_xsave(void __user *buf) struct i387_fxsave_struct __user *fx = (struct i387_fxsave_struct __user *) &fx_user->_fxsr_env[0]; struct xsave_hdr_struct *xsave_hdr = - &current->thread.xstate->xsave.xsave_hdr; + &current->thread.fpu.state->xsave.xsave_hdr; u64 mask; int err; diff --git a/arch/x86/kernel/i8253.c b/arch/x86/kernel/i8253.c index 23c167925a5c..2dfd31597443 100644 --- a/arch/x86/kernel/i8253.c +++ b/arch/x86/kernel/i8253.c @@ -16,7 +16,7 @@ #include <asm/hpet.h> #include <asm/smp.h> -DEFINE_SPINLOCK(i8253_lock); +DEFINE_RAW_SPINLOCK(i8253_lock); EXPORT_SYMBOL(i8253_lock); /* @@ -33,7 +33,7 @@ struct clock_event_device *global_clock_event; static void init_pit_timer(enum clock_event_mode mode, struct clock_event_device *evt) { - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); switch (mode) { case CLOCK_EVT_MODE_PERIODIC: @@ -62,7 +62,7 @@ static void init_pit_timer(enum clock_event_mode mode, /* Nothing to do here */ break; } - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); } /* @@ -72,10 +72,10 @@ static void init_pit_timer(enum clock_event_mode mode, */ static int pit_next_event(unsigned long delta, struct clock_event_device *evt) { - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); outb_pit(delta & 0xff , PIT_CH0); /* LSB */ outb_pit(delta >> 8 , PIT_CH0); /* MSB */ - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); return 0; } @@ -130,7 +130,7 @@ static cycle_t pit_read(struct clocksource *cs) int count; u32 jifs; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); /* * Although our caller may have the read side of xtime_lock, * this is now a seqlock, and we are cheating in this routine @@ -176,7 +176,7 @@ static cycle_t pit_read(struct clocksource *cs) old_count = count; old_jifs = jifs; - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); count = (LATCH - 1) - count; diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c index 0ed2d300cd46..990ae7cfc578 100644 --- a/arch/x86/kernel/irqinit.c +++ b/arch/x86/kernel/irqinit.c @@ -60,7 +60,7 @@ static irqreturn_t math_error_irq(int cpl, void *dev_id) outb(0, 0xF0); if (ignore_fpu_irq || !boot_cpu_data.hard_math) return IRQ_NONE; - math_error((void __user *)get_irq_regs()->ip); + math_error(get_irq_regs(), 0, 16); return IRQ_HANDLED; } diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c index 1658efdfb4e5..345a4b1fe144 100644 --- a/arch/x86/kernel/kprobes.c 
+++ b/arch/x86/kernel/kprobes.c @@ -422,14 +422,22 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs, static void __kprobes clear_btf(void) { - if (test_thread_flag(TIF_DEBUGCTLMSR)) - update_debugctlmsr(0); + if (test_thread_flag(TIF_BLOCKSTEP)) { + unsigned long debugctl = get_debugctlmsr(); + + debugctl &= ~DEBUGCTLMSR_BTF; + update_debugctlmsr(debugctl); + } } static void __kprobes restore_btf(void) { - if (test_thread_flag(TIF_DEBUGCTLMSR)) - update_debugctlmsr(current->thread.debugctlmsr); + if (test_thread_flag(TIF_BLOCKSTEP)) { + unsigned long debugctl = get_debugctlmsr(); + + debugctl |= DEBUGCTLMSR_BTF; + update_debugctlmsr(debugctl); + } } void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, diff --git a/arch/x86/kernel/microcode_core.c b/arch/x86/kernel/microcode_core.c index cceb5bc3c3c2..2cd8c544e41a 100644 --- a/arch/x86/kernel/microcode_core.c +++ b/arch/x86/kernel/microcode_core.c @@ -201,9 +201,9 @@ static int do_microcode_update(const void __user *buf, size_t size) return error; } -static int microcode_open(struct inode *unused1, struct file *unused2) +static int microcode_open(struct inode *inode, struct file *file) { - return capable(CAP_SYS_RAWIO) ? 0 : -EPERM; + return capable(CAP_SYS_RAWIO) ? nonseekable_open(inode, file) : -EPERM; } static ssize_t microcode_write(struct file *file, const char __user *buf, diff --git a/arch/x86/kernel/microcode_intel.c b/arch/x86/kernel/microcode_intel.c index 85a343e28937..356170262a93 100644 --- a/arch/x86/kernel/microcode_intel.c +++ b/arch/x86/kernel/microcode_intel.c @@ -343,10 +343,11 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size, int (*get_ucode_data)(void *, const void *, size_t)) { struct ucode_cpu_info *uci = ucode_cpu_info + cpu; - u8 *ucode_ptr = data, *new_mc = NULL, *mc; + u8 *ucode_ptr = data, *new_mc = NULL, *mc = NULL; int new_rev = uci->cpu_sig.rev; unsigned int leftover = size; enum ucode_state state = UCODE_OK; + unsigned int curr_mc_size = 0; while (leftover) { struct microcode_header_intel mc_header; @@ -361,9 +362,15 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size, break; } - mc = vmalloc(mc_size); - if (!mc) - break; + /* For performance reasons, reuse mc area when possible */ + if (!mc || mc_size > curr_mc_size) { + if (mc) + vfree(mc); + mc = vmalloc(mc_size); + if (!mc) + break; + curr_mc_size = mc_size; + } if (get_ucode_data(mc, ucode_ptr, mc_size) || microcode_sanity_check(mc) < 0) { @@ -376,13 +383,16 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size, vfree(new_mc); new_rev = mc_header.rev; new_mc = mc; - } else - vfree(mc); + mc = NULL; /* trigger new vmalloc */ + } ucode_ptr += mc_size; leftover -= mc_size; } + if (mc) + vfree(mc); + if (leftover) { if (new_mc) vfree(new_mc); diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c index e81030f71a8f..5ae5d2426edf 100644 --- a/arch/x86/kernel/mpparse.c +++ b/arch/x86/kernel/mpparse.c @@ -115,21 +115,6 @@ static void __init MP_bus_info(struct mpc_bus *m) printk(KERN_WARNING "Unknown bustype %s - ignoring\n", str); } -static int bad_ioapic(unsigned long address) -{ - if (nr_ioapics >= MAX_IO_APICS) { - printk(KERN_ERR "ERROR: Max # of I/O APICs (%d) exceeded " - "(found %d)\n", MAX_IO_APICS, nr_ioapics); - panic("Recompile kernel with bigger MAX_IO_APICS!\n"); - } - if (!address) { - printk(KERN_ERR "WARNING: Bogus (zero) I/O APIC address" - " found in table, skipping!\n"); - return 1; - } - 
return 0; -} - static void __init MP_ioapic_info(struct mpc_ioapic *m) { if (!(m->flags & MPC_APIC_USABLE)) @@ -138,15 +123,7 @@ static void __init MP_ioapic_info(struct mpc_ioapic *m) printk(KERN_INFO "I/O APIC #%d Version %d at 0x%X.\n", m->apicid, m->apicver, m->apicaddr); - if (bad_ioapic(m->apicaddr)) - return; - - mp_ioapics[nr_ioapics].apicaddr = m->apicaddr; - mp_ioapics[nr_ioapics].apicid = m->apicid; - mp_ioapics[nr_ioapics].type = m->type; - mp_ioapics[nr_ioapics].apicver = m->apicver; - mp_ioapics[nr_ioapics].flags = m->flags; - nr_ioapics++; + mp_register_ioapic(m->apicid, m->apicaddr, gsi_end + 1); } static void print_MP_intsrc_info(struct mpc_intsrc *m) diff --git a/arch/x86/kernel/mrst.c b/arch/x86/kernel/mrst.c index 0aad8670858e..e796448f0eb5 100644 --- a/arch/x86/kernel/mrst.c +++ b/arch/x86/kernel/mrst.c @@ -237,4 +237,9 @@ void __init x86_mrst_early_setup(void) x86_init.pci.fixup_irqs = x86_init_noop; legacy_pic = &null_legacy_pic; + + /* Avoid searching for BIOS MP tables */ + x86_init.mpparse.find_smp_config = x86_init_noop; + x86_init.mpparse.get_smp_config = x86_init_uint_noop; + } diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 28ad9f4d8b94..e7e35219b32f 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -20,7 +20,6 @@ #include <asm/idle.h> #include <asm/uaccess.h> #include <asm/i387.h> -#include <asm/ds.h> #include <asm/debugreg.h> unsigned long idle_halt; @@ -32,26 +31,22 @@ struct kmem_cache *task_xstate_cachep; int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) { + int ret; + *dst = *src; - if (src->thread.xstate) { - dst->thread.xstate = kmem_cache_alloc(task_xstate_cachep, - GFP_KERNEL); - if (!dst->thread.xstate) - return -ENOMEM; - WARN_ON((unsigned long)dst->thread.xstate & 15); - memcpy(dst->thread.xstate, src->thread.xstate, xstate_size); + if (fpu_allocated(&src->thread.fpu)) { + memset(&dst->thread.fpu, 0, sizeof(dst->thread.fpu)); + ret = fpu_alloc(&dst->thread.fpu); + if (ret) + return ret; + fpu_copy(&dst->thread.fpu, &src->thread.fpu); } return 0; } void free_thread_xstate(struct task_struct *tsk) { - if (tsk->thread.xstate) { - kmem_cache_free(task_xstate_cachep, tsk->thread.xstate); - tsk->thread.xstate = NULL; - } - - WARN(tsk->thread.ds_ctx, "leaking DS context\n"); + fpu_free(&tsk->thread.fpu); } void free_thread_info(struct thread_info *ti) @@ -198,11 +193,16 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, prev = &prev_p->thread; next = &next_p->thread; - if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) || - test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR)) - ds_switch_to(prev_p, next_p); - else if (next->debugctlmsr != prev->debugctlmsr) - update_debugctlmsr(next->debugctlmsr); + if (test_tsk_thread_flag(prev_p, TIF_BLOCKSTEP) ^ + test_tsk_thread_flag(next_p, TIF_BLOCKSTEP)) { + unsigned long debugctl = get_debugctlmsr(); + + debugctl &= ~DEBUGCTLMSR_BTF; + if (test_tsk_thread_flag(next_p, TIF_BLOCKSTEP)) + debugctl |= DEBUGCTLMSR_BTF; + + update_debugctlmsr(debugctl); + } if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^ test_tsk_thread_flag(next_p, TIF_NOTSC)) { @@ -546,11 +546,13 @@ static int __cpuinit check_c1e_idle(const struct cpuinfo_x86 *c) * check OSVW bit for CPUs that are not affected * by erratum #400 */ - rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val); - if (val >= 2) { - rdmsrl(MSR_AMD64_OSVW_STATUS, val); - if (!(val & BIT(1))) - goto no_c1e_idle; + if (cpu_has(c, X86_FEATURE_OSVW)) { + rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val); + if (val 
>= 2) { + rdmsrl(MSR_AMD64_OSVW_STATUS, val); + if (!(val & BIT(1))) + goto no_c1e_idle; + } } return 1; } diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index f6c62667e30c..8d128783af47 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -55,7 +55,6 @@ #include <asm/cpu.h> #include <asm/idle.h> #include <asm/syscalls.h> -#include <asm/ds.h> #include <asm/debugreg.h> asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); @@ -238,13 +237,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, kfree(p->thread.io_bitmap_ptr); p->thread.io_bitmap_max = 0; } - - clear_tsk_thread_flag(p, TIF_DS_AREA_MSR); - p->thread.ds_ctx = NULL; - - clear_tsk_thread_flag(p, TIF_DEBUGCTLMSR); - p->thread.debugctlmsr = 0; - return err; } @@ -317,7 +309,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) /* we're going to use this soon, after a few expensive things */ if (preload_fpu) - prefetch(next->xstate); + prefetch(next->fpu.state); /* * Reload esp0. diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 17cb3295cbf7..3c2422a99f1f 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -49,7 +49,6 @@ #include <asm/ia32.h> #include <asm/idle.h> #include <asm/syscalls.h> -#include <asm/ds.h> #include <asm/debugreg.h> asmlinkage extern void ret_from_fork(void); @@ -313,13 +312,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, if (err) goto out; } - - clear_tsk_thread_flag(p, TIF_DS_AREA_MSR); - p->thread.ds_ctx = NULL; - - clear_tsk_thread_flag(p, TIF_DEBUGCTLMSR); - p->thread.debugctlmsr = 0; - err = 0; out: if (err && p->thread.io_bitmap_ptr) { @@ -396,7 +388,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) /* we're going to use this soon, after a few expensive things */ if (preload_fpu) - prefetch(next->xstate); + prefetch(next->fpu.state); /* * Reload esp0, LDT and the page table pointer: diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c index 2e9b55027b7e..70c4872cd8aa 100644 --- a/arch/x86/kernel/ptrace.c +++ b/arch/x86/kernel/ptrace.c @@ -2,9 +2,6 @@ /* * Pentium III FXSR, SSE support * Gareth Hughes <gareth@valinux.com>, May 2000 - * - * BTS tracing - * Markus Metzger <markus.t.metzger@intel.com>, Dec 2007 */ #include <linux/kernel.h> @@ -22,7 +19,6 @@ #include <linux/audit.h> #include <linux/seccomp.h> #include <linux/signal.h> -#include <linux/workqueue.h> #include <linux/perf_event.h> #include <linux/hw_breakpoint.h> @@ -36,7 +32,6 @@ #include <asm/desc.h> #include <asm/prctl.h> #include <asm/proto.h> -#include <asm/ds.h> #include <asm/hw_breakpoint.h> #include "tls.h" @@ -693,7 +688,7 @@ static int ptrace_set_breakpoint_addr(struct task_struct *tsk, int nr, struct perf_event_attr attr; if (!t->ptrace_bps[nr]) { - hw_breakpoint_init(&attr); + ptrace_breakpoint_init(&attr); /* * Put stub len and type to register (reserve) an inactive but * correct bp @@ -789,342 +784,6 @@ static int ioperm_get(struct task_struct *target, 0, IO_BITMAP_BYTES); } -#ifdef CONFIG_X86_PTRACE_BTS -/* - * A branch trace store context. - * - * Contexts may only be installed by ptrace_bts_config() and only for - * ptraced tasks. - * - * Contexts are destroyed when the tracee is detached from the tracer. - * The actual destruction work requires interrupts enabled, so the - * work is deferred and will be scheduled during __ptrace_unlink(). 
- * - * Contexts hold an additional task_struct reference on the traced - * task, as well as a reference on the tracer's mm. - * - * Ptrace already holds a task_struct for the duration of ptrace operations, - * but since destruction is deferred, it may be executed after both - * tracer and tracee exited. - */ -struct bts_context { - /* The branch trace handle. */ - struct bts_tracer *tracer; - - /* The buffer used to store the branch trace and its size. */ - void *buffer; - unsigned int size; - - /* The mm that paid for the above buffer. */ - struct mm_struct *mm; - - /* The task this context belongs to. */ - struct task_struct *task; - - /* The signal to send on a bts buffer overflow. */ - unsigned int bts_ovfl_signal; - - /* The work struct to destroy a context. */ - struct work_struct work; -}; - -static int alloc_bts_buffer(struct bts_context *context, unsigned int size) -{ - void *buffer = NULL; - int err = -ENOMEM; - - err = account_locked_memory(current->mm, current->signal->rlim, size); - if (err < 0) - return err; - - buffer = kzalloc(size, GFP_KERNEL); - if (!buffer) - goto out_refund; - - context->buffer = buffer; - context->size = size; - context->mm = get_task_mm(current); - - return 0; - - out_refund: - refund_locked_memory(current->mm, size); - return err; -} - -static inline void free_bts_buffer(struct bts_context *context) -{ - if (!context->buffer) - return; - - kfree(context->buffer); - context->buffer = NULL; - - refund_locked_memory(context->mm, context->size); - context->size = 0; - - mmput(context->mm); - context->mm = NULL; -} - -static void free_bts_context_work(struct work_struct *w) -{ - struct bts_context *context; - - context = container_of(w, struct bts_context, work); - - ds_release_bts(context->tracer); - put_task_struct(context->task); - free_bts_buffer(context); - kfree(context); -} - -static inline void free_bts_context(struct bts_context *context) -{ - INIT_WORK(&context->work, free_bts_context_work); - schedule_work(&context->work); -} - -static inline struct bts_context *alloc_bts_context(struct task_struct *task) -{ - struct bts_context *context = kzalloc(sizeof(*context), GFP_KERNEL); - if (context) { - context->task = task; - task->bts = context; - - get_task_struct(task); - } - - return context; -} - -static int ptrace_bts_read_record(struct task_struct *child, size_t index, - struct bts_struct __user *out) -{ - struct bts_context *context; - const struct bts_trace *trace; - struct bts_struct bts; - const unsigned char *at; - int error; - - context = child->bts; - if (!context) - return -ESRCH; - - trace = ds_read_bts(context->tracer); - if (!trace) - return -ESRCH; - - at = trace->ds.top - ((index + 1) * trace->ds.size); - if ((void *)at < trace->ds.begin) - at += (trace->ds.n * trace->ds.size); - - if (!trace->read) - return -EOPNOTSUPP; - - error = trace->read(context->tracer, at, &bts); - if (error < 0) - return error; - - if (copy_to_user(out, &bts, sizeof(bts))) - return -EFAULT; - - return sizeof(bts); -} - -static int ptrace_bts_drain(struct task_struct *child, - long size, - struct bts_struct __user *out) -{ - struct bts_context *context; - const struct bts_trace *trace; - const unsigned char *at; - int error, drained = 0; - - context = child->bts; - if (!context) - return -ESRCH; - - trace = ds_read_bts(context->tracer); - if (!trace) - return -ESRCH; - - if (!trace->read) - return -EOPNOTSUPP; - - if (size < (trace->ds.top - trace->ds.begin)) - return -EIO; - - for (at = trace->ds.begin; (void *)at < trace->ds.top; - out++, drained++, at 
+= trace->ds.size) { - struct bts_struct bts; - - error = trace->read(context->tracer, at, &bts); - if (error < 0) - return error; - - if (copy_to_user(out, &bts, sizeof(bts))) - return -EFAULT; - } - - memset(trace->ds.begin, 0, trace->ds.n * trace->ds.size); - - error = ds_reset_bts(context->tracer); - if (error < 0) - return error; - - return drained; -} - -static int ptrace_bts_config(struct task_struct *child, - long cfg_size, - const struct ptrace_bts_config __user *ucfg) -{ - struct bts_context *context; - struct ptrace_bts_config cfg; - unsigned int flags = 0; - - if (cfg_size < sizeof(cfg)) - return -EIO; - - if (copy_from_user(&cfg, ucfg, sizeof(cfg))) - return -EFAULT; - - context = child->bts; - if (!context) - context = alloc_bts_context(child); - if (!context) - return -ENOMEM; - - if (cfg.flags & PTRACE_BTS_O_SIGNAL) { - if (!cfg.signal) - return -EINVAL; - - return -EOPNOTSUPP; - context->bts_ovfl_signal = cfg.signal; - } - - ds_release_bts(context->tracer); - context->tracer = NULL; - - if ((cfg.flags & PTRACE_BTS_O_ALLOC) && (cfg.size != context->size)) { - int err; - - free_bts_buffer(context); - if (!cfg.size) - return 0; - - err = alloc_bts_buffer(context, cfg.size); - if (err < 0) - return err; - } - - if (cfg.flags & PTRACE_BTS_O_TRACE) - flags |= BTS_USER; - - if (cfg.flags & PTRACE_BTS_O_SCHED) - flags |= BTS_TIMESTAMPS; - - context->tracer = - ds_request_bts_task(child, context->buffer, context->size, - NULL, (size_t)-1, flags); - if (unlikely(IS_ERR(context->tracer))) { - int error = PTR_ERR(context->tracer); - - free_bts_buffer(context); - context->tracer = NULL; - return error; - } - - return sizeof(cfg); -} - -static int ptrace_bts_status(struct task_struct *child, - long cfg_size, - struct ptrace_bts_config __user *ucfg) -{ - struct bts_context *context; - const struct bts_trace *trace; - struct ptrace_bts_config cfg; - - context = child->bts; - if (!context) - return -ESRCH; - - if (cfg_size < sizeof(cfg)) - return -EIO; - - trace = ds_read_bts(context->tracer); - if (!trace) - return -ESRCH; - - memset(&cfg, 0, sizeof(cfg)); - cfg.size = trace->ds.end - trace->ds.begin; - cfg.signal = context->bts_ovfl_signal; - cfg.bts_size = sizeof(struct bts_struct); - - if (cfg.signal) - cfg.flags |= PTRACE_BTS_O_SIGNAL; - - if (trace->ds.flags & BTS_USER) - cfg.flags |= PTRACE_BTS_O_TRACE; - - if (trace->ds.flags & BTS_TIMESTAMPS) - cfg.flags |= PTRACE_BTS_O_SCHED; - - if (copy_to_user(ucfg, &cfg, sizeof(cfg))) - return -EFAULT; - - return sizeof(cfg); -} - -static int ptrace_bts_clear(struct task_struct *child) -{ - struct bts_context *context; - const struct bts_trace *trace; - - context = child->bts; - if (!context) - return -ESRCH; - - trace = ds_read_bts(context->tracer); - if (!trace) - return -ESRCH; - - memset(trace->ds.begin, 0, trace->ds.n * trace->ds.size); - - return ds_reset_bts(context->tracer); -} - -static int ptrace_bts_size(struct task_struct *child) -{ - struct bts_context *context; - const struct bts_trace *trace; - - context = child->bts; - if (!context) - return -ESRCH; - - trace = ds_read_bts(context->tracer); - if (!trace) - return -ESRCH; - - return (trace->ds.top - trace->ds.begin) / trace->ds.size; -} - -/* - * Called from __ptrace_unlink() after the child has been moved back - * to its original parent. - */ -void ptrace_bts_untrace(struct task_struct *child) -{ - if (unlikely(child->bts)) { - free_bts_context(child->bts); - child->bts = NULL; - } -} -#endif /* CONFIG_X86_PTRACE_BTS */ - /* * Called by kernel/ptrace.c when detaching.. 
* @@ -1252,39 +911,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) break; #endif - /* - * These bits need more cooking - not enabled yet: - */ -#ifdef CONFIG_X86_PTRACE_BTS - case PTRACE_BTS_CONFIG: - ret = ptrace_bts_config - (child, data, (struct ptrace_bts_config __user *)addr); - break; - - case PTRACE_BTS_STATUS: - ret = ptrace_bts_status - (child, data, (struct ptrace_bts_config __user *)addr); - break; - - case PTRACE_BTS_SIZE: - ret = ptrace_bts_size(child); - break; - - case PTRACE_BTS_GET: - ret = ptrace_bts_read_record - (child, data, (struct bts_struct __user *) addr); - break; - - case PTRACE_BTS_CLEAR: - ret = ptrace_bts_clear(child); - break; - - case PTRACE_BTS_DRAIN: - ret = ptrace_bts_drain - (child, data, (struct bts_struct __user *) addr); - break; -#endif /* CONFIG_X86_PTRACE_BTS */ - default: ret = ptrace_request(child, request, addr, data); break; @@ -1544,14 +1170,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, case PTRACE_GET_THREAD_AREA: case PTRACE_SET_THREAD_AREA: -#ifdef CONFIG_X86_PTRACE_BTS - case PTRACE_BTS_CONFIG: - case PTRACE_BTS_STATUS: - case PTRACE_BTS_SIZE: - case PTRACE_BTS_GET: - case PTRACE_BTS_CLEAR: - case PTRACE_BTS_DRAIN: -#endif /* CONFIG_X86_PTRACE_BTS */ return arch_ptrace(child, request, addr, data); default: diff --git a/arch/x86/kernel/sfi.c b/arch/x86/kernel/sfi.c index 34e099382651..7ded57896c0a 100644 --- a/arch/x86/kernel/sfi.c +++ b/arch/x86/kernel/sfi.c @@ -81,7 +81,6 @@ static int __init sfi_parse_cpus(struct sfi_table_header *table) #endif /* CONFIG_X86_LOCAL_APIC */ #ifdef CONFIG_X86_IO_APIC -static u32 gsi_base; static int __init sfi_parse_ioapic(struct sfi_table_header *table) { @@ -94,8 +93,7 @@ static int __init sfi_parse_ioapic(struct sfi_table_header *table) pentry = (struct sfi_apic_table_entry *)sb->pentry; for (i = 0; i < num; i++) { - mp_register_ioapic(i, pentry->phys_addr, gsi_base); - gsi_base += io_apic_get_redir_entries(i); + mp_register_ioapic(i, pentry->phys_addr, gsi_end + 1); pentry++; } diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c index 3149032ff107..58de45ee08b6 100644 --- a/arch/x86/kernel/step.c +++ b/arch/x86/kernel/step.c @@ -158,22 +158,6 @@ static int enable_single_step(struct task_struct *child) } /* - * Install this value in MSR_IA32_DEBUGCTLMSR whenever child is running. - */ -static void write_debugctlmsr(struct task_struct *child, unsigned long val) -{ - if (child->thread.debugctlmsr == val) - return; - - child->thread.debugctlmsr = val; - - if (child != current) - return; - - update_debugctlmsr(val); -} - -/* * Enable single or block step. */ static void enable_step(struct task_struct *child, bool block) @@ -186,15 +170,17 @@ static void enable_step(struct task_struct *child, bool block) * that uses user-mode single stepping itself. 
*/ if (enable_single_step(child) && block) { - set_tsk_thread_flag(child, TIF_DEBUGCTLMSR); - write_debugctlmsr(child, - child->thread.debugctlmsr | DEBUGCTLMSR_BTF); - } else { - write_debugctlmsr(child, - child->thread.debugctlmsr & ~DEBUGCTLMSR_BTF); - - if (!child->thread.debugctlmsr) - clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR); + unsigned long debugctl = get_debugctlmsr(); + + debugctl |= DEBUGCTLMSR_BTF; + update_debugctlmsr(debugctl); + set_tsk_thread_flag(child, TIF_BLOCKSTEP); + } else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) { + unsigned long debugctl = get_debugctlmsr(); + + debugctl &= ~DEBUGCTLMSR_BTF; + update_debugctlmsr(debugctl); + clear_tsk_thread_flag(child, TIF_BLOCKSTEP); } } @@ -213,11 +199,13 @@ void user_disable_single_step(struct task_struct *child) /* * Make sure block stepping (BTF) is disabled. */ - write_debugctlmsr(child, - child->thread.debugctlmsr & ~DEBUGCTLMSR_BTF); + if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) { + unsigned long debugctl = get_debugctlmsr(); - if (!child->thread.debugctlmsr) - clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR); + debugctl &= ~DEBUGCTLMSR_BTF; + update_debugctlmsr(debugctl); + clear_tsk_thread_flag(child, TIF_BLOCKSTEP); + } /* Always clear TIF_SINGLESTEP... */ clear_tsk_thread_flag(child, TIF_SINGLESTEP); diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c index 86c9f91b48ae..cc2c60474fd0 100644 --- a/arch/x86/kernel/tboot.c +++ b/arch/x86/kernel/tboot.c @@ -175,6 +175,9 @@ static void add_mac_region(phys_addr_t start, unsigned long size) struct tboot_mac_region *mr; phys_addr_t end = start + size; + if (tboot->num_mac_regions >= MAX_TB_MAC_REGIONS) + panic("tboot: Too many MAC regions\n"); + if (start && size) { mr = &tboot->mac_regions[tboot->num_mac_regions++]; mr->start = round_down(start, PAGE_SIZE); @@ -184,18 +187,17 @@ static void add_mac_region(phys_addr_t start, unsigned long size) static int tboot_setup_sleep(void) { + int i; + tboot->num_mac_regions = 0; - /* S3 resume code */ - add_mac_region(acpi_wakeup_address, WAKEUP_SIZE); + for (i = 0; i < e820.nr_map; i++) { + if ((e820.map[i].type != E820_RAM) + && (e820.map[i].type != E820_RESERVED_KERN)) + continue; -#ifdef CONFIG_X86_TRAMPOLINE - /* AP trampoline code */ - add_mac_region(virt_to_phys(trampoline_base), TRAMPOLINE_SIZE); -#endif - - /* kernel code + data + bss */ - add_mac_region(virt_to_phys(_text), _end - _text); + add_mac_region(e820.map[i].addr, e820.map[i].size); + } tboot->acpi_sinfo.kernel_s3_resume_vector = acpi_wakeup_address; diff --git a/arch/x86/kernel/tlb_uv.c b/arch/x86/kernel/tlb_uv.c index 17b03dd3a6b5..7fea555929e2 100644 --- a/arch/x86/kernel/tlb_uv.c +++ b/arch/x86/kernel/tlb_uv.c @@ -1,7 +1,7 @@ /* * SGI UltraViolet TLB flush routines. * - * (c) 2008 Cliff Wickman <cpw@sgi.com>, SGI. + * (c) 2008-2010 Cliff Wickman <cpw@sgi.com>, SGI. * * This code is released under the GNU General Public License version 2 or * later. 
@@ -20,42 +20,67 @@ #include <asm/idle.h> #include <asm/tsc.h> #include <asm/irq_vectors.h> +#include <asm/timer.h> -static struct bau_control **uv_bau_table_bases __read_mostly; -static int uv_bau_retry_limit __read_mostly; +struct msg_desc { + struct bau_payload_queue_entry *msg; + int msg_slot; + int sw_ack_slot; + struct bau_payload_queue_entry *va_queue_first; + struct bau_payload_queue_entry *va_queue_last; +}; -/* base pnode in this partition */ -static int uv_partition_base_pnode __read_mostly; +#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD 0x000000000bUL + +static int uv_bau_max_concurrent __read_mostly; + +static int nobau; +static int __init setup_nobau(char *arg) +{ + nobau = 1; + return 0; +} +early_param("nobau", setup_nobau); -static unsigned long uv_mmask __read_mostly; +/* base pnode in this partition */ +static int uv_partition_base_pnode __read_mostly; +/* position of pnode (which is nasid>>1): */ +static int uv_nshift __read_mostly; +static unsigned long uv_mmask __read_mostly; static DEFINE_PER_CPU(struct ptc_stats, ptcstats); static DEFINE_PER_CPU(struct bau_control, bau_control); +static DEFINE_PER_CPU(cpumask_var_t, uv_flush_tlb_mask); + +struct reset_args { + int sender; +}; /* - * Determine the first node on a blade. + * Determine the first node on a uvhub. 'Nodes' are used for kernel + * memory allocation. */ -static int __init blade_to_first_node(int blade) +static int __init uvhub_to_first_node(int uvhub) { int node, b; for_each_online_node(node) { b = uv_node_to_blade_id(node); - if (blade == b) + if (uvhub == b) return node; } - return -1; /* shouldn't happen */ + return -1; } /* - * Determine the apicid of the first cpu on a blade. + * Determine the apicid of the first cpu on a uvhub. */ -static int __init blade_to_first_apicid(int blade) +static int __init uvhub_to_first_apicid(int uvhub) { int cpu; for_each_present_cpu(cpu) - if (blade == uv_cpu_to_blade_id(cpu)) + if (uvhub == uv_cpu_to_blade_id(cpu)) return per_cpu(x86_cpu_to_apicid, cpu); return -1; } @@ -68,195 +93,459 @@ static int __init blade_to_first_apicid(int blade) * clear of the Timeout bit (as well) will free the resource. No reply will * be sent (the hardware will only do one reply per message). */ -static void uv_reply_to_message(int resource, - struct bau_payload_queue_entry *msg, - struct bau_msg_status *msp) +static inline void uv_reply_to_message(struct msg_desc *mdp, + struct bau_control *bcp) { unsigned long dw; + struct bau_payload_queue_entry *msg; - dw = (1 << (resource + UV_SW_ACK_NPENDING)) | (1 << resource); + msg = mdp->msg; + if (!msg->canceled) { + dw = (msg->sw_ack_vector << UV_SW_ACK_NPENDING) | + msg->sw_ack_vector; + uv_write_local_mmr( + UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS, dw); + } msg->replied_to = 1; msg->sw_ack_vector = 0; - if (msp) - msp->seen_by.bits = 0; - uv_write_local_mmr(UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS, dw); } /* - * Do all the things a cpu should do for a TLB shootdown message. - * Other cpu's may come here at the same time for this message. 
+ * Process the receipt of a RETRY message */ -static void uv_bau_process_message(struct bau_payload_queue_entry *msg, - int msg_slot, int sw_ack_slot) +static inline void uv_bau_process_retry_msg(struct msg_desc *mdp, + struct bau_control *bcp) { - unsigned long this_cpu_mask; - struct bau_msg_status *msp; - int cpu; + int i; + int cancel_count = 0; + int slot2; + unsigned long msg_res; + unsigned long mmr = 0; + struct bau_payload_queue_entry *msg; + struct bau_payload_queue_entry *msg2; + struct ptc_stats *stat; - msp = __get_cpu_var(bau_control).msg_statuses + msg_slot; - cpu = uv_blade_processor_id(); - msg->number_of_cpus = - uv_blade_nr_online_cpus(uv_node_to_blade_id(numa_node_id())); - this_cpu_mask = 1UL << cpu; - if (msp->seen_by.bits & this_cpu_mask) - return; - atomic_or_long(&msp->seen_by.bits, this_cpu_mask); + msg = mdp->msg; + stat = &per_cpu(ptcstats, bcp->cpu); + stat->d_retries++; + /* + * cancel any message from msg+1 to the retry itself + */ + for (msg2 = msg+1, i = 0; i < DEST_Q_SIZE; msg2++, i++) { + if (msg2 > mdp->va_queue_last) + msg2 = mdp->va_queue_first; + if (msg2 == msg) + break; + + /* same conditions for cancellation as uv_do_reset */ + if ((msg2->replied_to == 0) && (msg2->canceled == 0) && + (msg2->sw_ack_vector) && ((msg2->sw_ack_vector & + msg->sw_ack_vector) == 0) && + (msg2->sending_cpu == msg->sending_cpu) && + (msg2->msg_type != MSG_NOOP)) { + slot2 = msg2 - mdp->va_queue_first; + mmr = uv_read_local_mmr + (UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE); + msg_res = ((msg2->sw_ack_vector << 8) | + msg2->sw_ack_vector); + /* + * This is a message retry; clear the resources held + * by the previous message only if they timed out. + * If it has not timed out we have an unexpected + * situation to report. + */ + if (mmr & (msg_res << 8)) { + /* + * is the resource timed out? + * make everyone ignore the cancelled message. + */ + msg2->canceled = 1; + stat->d_canceled++; + cancel_count++; + uv_write_local_mmr( + UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS, + (msg_res << 8) | msg_res); + } else + printk(KERN_INFO "note bau retry: no effect\n"); + } + } + if (!cancel_count) + stat->d_nocanceled++; +} - if (msg->replied_to == 1) - return; +/* + * Do all the things a cpu should do for a TLB shootdown message. + * Other cpu's may come here at the same time for this message. + */ +static void uv_bau_process_message(struct msg_desc *mdp, + struct bau_control *bcp) +{ + int msg_ack_count; + short socket_ack_count = 0; + struct ptc_stats *stat; + struct bau_payload_queue_entry *msg; + struct bau_control *smaster = bcp->socket_master; + /* + * This must be a normal message, or retry of a normal message + */ + msg = mdp->msg; + stat = &per_cpu(ptcstats, bcp->cpu); if (msg->address == TLB_FLUSH_ALL) { local_flush_tlb(); - __get_cpu_var(ptcstats).alltlb++; + stat->d_alltlb++; } else { __flush_tlb_one(msg->address); - __get_cpu_var(ptcstats).onetlb++; + stat->d_onetlb++; } + stat->d_requestee++; + + /* + * One cpu on each uvhub has the additional job on a RETRY + * of releasing the resource held by the message that is + * being retried. That message is identified by sending + * cpu number. + */ + if (msg->msg_type == MSG_RETRY && bcp == bcp->uvhub_master) + uv_bau_process_retry_msg(mdp, bcp); - __get_cpu_var(ptcstats).requestee++; + /* + * This is a sw_ack message, so we have to reply to it. + * Count each responding cpu on the socket. This avoids + * pinging the count's cache line back and forth between + * the sockets. 
+ */ + socket_ack_count = atomic_add_short_return(1, (struct atomic_short *) + &smaster->socket_acknowledge_count[mdp->msg_slot]); + if (socket_ack_count == bcp->cpus_in_socket) { + /* + * Both sockets dump their completed count total into + * the message's count. + */ + smaster->socket_acknowledge_count[mdp->msg_slot] = 0; + msg_ack_count = atomic_add_short_return(socket_ack_count, + (struct atomic_short *)&msg->acknowledge_count); + + if (msg_ack_count == bcp->cpus_in_uvhub) { + /* + * All cpus in uvhub saw it; reply + */ + uv_reply_to_message(mdp, bcp); + } + } - atomic_inc_short(&msg->acknowledge_count); - if (msg->number_of_cpus == msg->acknowledge_count) - uv_reply_to_message(sw_ack_slot, msg, msp); + return; } /* - * Examine the payload queue on one distribution node to see - * which messages have not been seen, and which cpu(s) have not seen them. + * Determine the first cpu on a uvhub. + */ +static int uvhub_to_first_cpu(int uvhub) +{ + int cpu; + for_each_present_cpu(cpu) + if (uvhub == uv_cpu_to_blade_id(cpu)) + return cpu; + return -1; +} + +/* + * Last resort when we get a large number of destination timeouts is + * to clear resources held by a given cpu. + * Do this with IPI so that all messages in the BAU message queue + * can be identified by their nonzero sw_ack_vector field. * - * Returns the number of cpu's that have not responded. + * This is entered for a single cpu on the uvhub. + * The sender want's this uvhub to free a specific message's + * sw_ack resources. */ -static int uv_examine_destination(struct bau_control *bau_tablesp, int sender) +static void +uv_do_reset(void *ptr) { - struct bau_payload_queue_entry *msg; - struct bau_msg_status *msp; - int count = 0; int i; - int j; + int slot; + int count = 0; + unsigned long mmr; + unsigned long msg_res; + struct bau_control *bcp; + struct reset_args *rap; + struct bau_payload_queue_entry *msg; + struct ptc_stats *stat; - for (msg = bau_tablesp->va_queue_first, i = 0; i < DEST_Q_SIZE; - msg++, i++) { - if ((msg->sending_cpu == sender) && (!msg->replied_to)) { - msp = bau_tablesp->msg_statuses + i; - printk(KERN_DEBUG - "blade %d: address:%#lx %d of %d, not cpu(s): ", - i, msg->address, msg->acknowledge_count, - msg->number_of_cpus); - for (j = 0; j < msg->number_of_cpus; j++) { - if (!((1L << j) & msp->seen_by.bits)) { - count++; - printk("%d ", j); - } + bcp = &per_cpu(bau_control, smp_processor_id()); + rap = (struct reset_args *)ptr; + stat = &per_cpu(ptcstats, bcp->cpu); + stat->d_resets++; + + /* + * We're looking for the given sender, and + * will free its sw_ack resource. + * If all cpu's finally responded after the timeout, its + * message 'replied_to' was set. 
+ */ + for (msg = bcp->va_queue_first, i = 0; i < DEST_Q_SIZE; msg++, i++) { + /* uv_do_reset: same conditions for cancellation as + uv_bau_process_retry_msg() */ + if ((msg->replied_to == 0) && + (msg->canceled == 0) && + (msg->sending_cpu == rap->sender) && + (msg->sw_ack_vector) && + (msg->msg_type != MSG_NOOP)) { + /* + * make everyone else ignore this message + */ + msg->canceled = 1; + slot = msg - bcp->va_queue_first; + count++; + /* + * only reset the resource if it is still pending + */ + mmr = uv_read_local_mmr + (UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE); + msg_res = ((msg->sw_ack_vector << 8) | + msg->sw_ack_vector); + if (mmr & msg_res) { + stat->d_rcanceled++; + uv_write_local_mmr( + UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS, + msg_res); } - printk("\n"); } } - return count; + return; } /* - * Examine the payload queue on all the distribution nodes to see - * which messages have not been seen, and which cpu(s) have not seen them. - * - * Returns the number of cpu's that have not responded. + * Use IPI to get all target uvhubs to release resources held by + * a given sending cpu number. */ -static int uv_examine_destinations(struct bau_target_nodemask *distribution) +static void uv_reset_with_ipi(struct bau_target_uvhubmask *distribution, + int sender) { - int sender; - int i; - int count = 0; + int uvhub; + int cpu; + cpumask_t mask; + struct reset_args reset_args; + + reset_args.sender = sender; - sender = smp_processor_id(); - for (i = 0; i < sizeof(struct bau_target_nodemask) * BITSPERBYTE; i++) { - if (!bau_node_isset(i, distribution)) + cpus_clear(mask); + /* find a single cpu for each uvhub in this distribution mask */ + for (uvhub = 0; + uvhub < sizeof(struct bau_target_uvhubmask) * BITSPERBYTE; + uvhub++) { + if (!bau_uvhub_isset(uvhub, distribution)) continue; - count += uv_examine_destination(uv_bau_table_bases[i], sender); + /* find a cpu for this uvhub */ + cpu = uvhub_to_first_cpu(uvhub); + cpu_set(cpu, mask); } - return count; + /* IPI all cpus; Preemption is already disabled */ + smp_call_function_many(&mask, uv_do_reset, (void *)&reset_args, 1); + return; +} + +static inline unsigned long +cycles_2_us(unsigned long long cyc) +{ + unsigned long long ns; + unsigned long us; + ns = (cyc * per_cpu(cyc2ns, smp_processor_id())) + >> CYC2NS_SCALE_FACTOR; + us = ns / 1000; + return us; } /* - * wait for completion of a broadcast message - * - * return COMPLETE, RETRY or GIVEUP + * wait for all cpus on this hub to finish their sends and go quiet + * leaves uvhub_quiesce set so that no new broadcasts are started by + * bau_flush_send_and_wait() + */ +static inline void +quiesce_local_uvhub(struct bau_control *hmaster) +{ + atomic_add_short_return(1, (struct atomic_short *) + &hmaster->uvhub_quiesce); +} + +/* + * mark this quiet-requestor as done + */ +static inline void +end_uvhub_quiesce(struct bau_control *hmaster) +{ + atomic_add_short_return(-1, (struct atomic_short *) + &hmaster->uvhub_quiesce); +} + +/* + * Wait for completion of a broadcast software ack message + * return COMPLETE, RETRY(PLUGGED or TIMEOUT) or GIVEUP */ static int uv_wait_completion(struct bau_desc *bau_desc, - unsigned long mmr_offset, int right_shift) + unsigned long mmr_offset, int right_shift, int this_cpu, + struct bau_control *bcp, struct bau_control *smaster, long try) { - int exams = 0; - long destination_timeouts = 0; - long source_timeouts = 0; + int relaxes = 0; unsigned long descriptor_status; + unsigned long mmr; + unsigned long mask; + cycles_t ttime; + cycles_t timeout_time; + struct 
ptc_stats *stat = &per_cpu(ptcstats, this_cpu); + struct bau_control *hmaster; + + hmaster = bcp->uvhub_master; + timeout_time = get_cycles() + bcp->timeout_interval; + /* spin on the status MMR, waiting for it to go idle */ while ((descriptor_status = (((unsigned long) uv_read_local_mmr(mmr_offset) >> right_shift) & UV_ACT_STATUS_MASK)) != DESC_STATUS_IDLE) { - if (descriptor_status == DESC_STATUS_SOURCE_TIMEOUT) { - source_timeouts++; - if (source_timeouts > SOURCE_TIMEOUT_LIMIT) - source_timeouts = 0; - __get_cpu_var(ptcstats).s_retry++; - return FLUSH_RETRY; - } /* - * spin here looking for progress at the destinations + * Our software ack messages may be blocked because there are + * no swack resources available. As long as none of them + * has timed out hardware will NACK our message and its + * state will stay IDLE. */ - if (descriptor_status == DESC_STATUS_DESTINATION_TIMEOUT) { - destination_timeouts++; - if (destination_timeouts > DESTINATION_TIMEOUT_LIMIT) { - /* - * returns number of cpus not responding - */ - if (uv_examine_destinations - (&bau_desc->distribution) == 0) { - __get_cpu_var(ptcstats).d_retry++; - return FLUSH_RETRY; - } - exams++; - if (exams >= uv_bau_retry_limit) { - printk(KERN_DEBUG - "uv_flush_tlb_others"); - printk("giving up on cpu %d\n", - smp_processor_id()); + if (descriptor_status == DESC_STATUS_SOURCE_TIMEOUT) { + stat->s_stimeout++; + return FLUSH_GIVEUP; + } else if (descriptor_status == + DESC_STATUS_DESTINATION_TIMEOUT) { + stat->s_dtimeout++; + ttime = get_cycles(); + + /* + * Our retries may be blocked by all destination + * swack resources being consumed, and a timeout + * pending. In that case hardware returns the + * ERROR that looks like a destination timeout. + */ + if (cycles_2_us(ttime - bcp->send_message) < BIOS_TO) { + bcp->conseccompletes = 0; + return FLUSH_RETRY_PLUGGED; + } + + bcp->conseccompletes = 0; + return FLUSH_RETRY_TIMEOUT; + } else { + /* + * descriptor_status is still BUSY + */ + cpu_relax(); + relaxes++; + if (relaxes >= 10000) { + relaxes = 0; + if (get_cycles() > timeout_time) { + quiesce_local_uvhub(hmaster); + + /* single-thread the register change */ + spin_lock(&hmaster->masks_lock); + mmr = uv_read_local_mmr(mmr_offset); + mask = 0UL; + mask |= (3UL < right_shift); + mask = ~mask; + mmr &= mask; + uv_write_local_mmr(mmr_offset, mmr); + spin_unlock(&hmaster->masks_lock); + end_uvhub_quiesce(hmaster); + stat->s_busy++; return FLUSH_GIVEUP; } - /* - * delays can hang the simulator - udelay(1000); - */ - destination_timeouts = 0; } } - cpu_relax(); } + bcp->conseccompletes++; return FLUSH_COMPLETE; } +static inline cycles_t +sec_2_cycles(unsigned long sec) +{ + unsigned long ns; + cycles_t cyc; + + ns = sec * 1000000000; + cyc = (ns << CYC2NS_SCALE_FACTOR)/(per_cpu(cyc2ns, smp_processor_id())); + return cyc; +} + +/* + * conditionally add 1 to *v, unless *v is >= u + * return 0 if we cannot add 1 to *v because it is >= u + * return 1 if we can add 1 to *v because it is < u + * the add is atomic + * + * This is close to atomic_add_unless(), but this allows the 'u' value + * to be lowered below the current 'v'. atomic_add_unless can only stop + * on equal. + */ +static inline int atomic_inc_unless_ge(spinlock_t *lock, atomic_t *v, int u) +{ + spin_lock(lock); + if (atomic_read(v) >= u) { + spin_unlock(lock); + return 0; + } + atomic_inc(v); + spin_unlock(lock); + return 1; +} + /** * uv_flush_send_and_wait * - * Send a broadcast and wait for a broadcast message to complete. 
+ * Send a broadcast and wait for it to complete. * - * The flush_mask contains the cpus the broadcast was sent to. + * The flush_mask contains the cpus the broadcast is to be sent to, plus + * cpus that are on the local uvhub. * - * Returns NULL if all remote flushing was done. The mask is zeroed. + * Returns NULL if all flushing represented in the mask was done. The mask + * is zeroed. * Returns @flush_mask if some remote flushing remains to be done. The - * mask will have some bits still set. + * mask will have some bits still set, representing any cpus on the local + * uvhub (not current cpu) and any on remote uvhubs if the broadcast failed. */ -const struct cpumask *uv_flush_send_and_wait(int cpu, int this_pnode, - struct bau_desc *bau_desc, - struct cpumask *flush_mask) +const struct cpumask *uv_flush_send_and_wait(struct bau_desc *bau_desc, + struct cpumask *flush_mask, + struct bau_control *bcp) { - int completion_status = 0; int right_shift; - int tries = 0; - int pnode; + int uvhub; int bit; + int completion_status = 0; + int seq_number = 0; + long try = 0; + int cpu = bcp->uvhub_cpu; + int this_cpu = bcp->cpu; + int this_uvhub = bcp->uvhub; unsigned long mmr_offset; unsigned long index; cycles_t time1; cycles_t time2; + struct ptc_stats *stat = &per_cpu(ptcstats, bcp->cpu); + struct bau_control *smaster = bcp->socket_master; + struct bau_control *hmaster = bcp->uvhub_master; + + /* + * Spin here while there are hmaster->max_concurrent or more active + * descriptors. This is the per-uvhub 'throttle'. + */ + if (!atomic_inc_unless_ge(&hmaster->uvhub_lock, + &hmaster->active_descriptor_count, + hmaster->max_concurrent)) { + stat->s_throttles++; + do { + cpu_relax(); + } while (!atomic_inc_unless_ge(&hmaster->uvhub_lock, + &hmaster->active_descriptor_count, + hmaster->max_concurrent)); + } + + while (hmaster->uvhub_quiesce) + cpu_relax(); if (cpu < UV_CPUS_PER_ACT_STATUS) { mmr_offset = UVH_LB_BAU_SB_ACTIVATION_STATUS_0; @@ -268,24 +557,108 @@ const struct cpumask *uv_flush_send_and_wait(int cpu, int this_pnode, } time1 = get_cycles(); do { - tries++; + /* + * Every message from any given cpu gets a unique message + * sequence number. But retries use that same number. + * Our message may have timed out at the destination because + * all sw-ack resources are in use and there is a timeout + * pending there. In that case, our last send never got + * placed into the queue and we need to persist until it + * does. + * + * Make any retry a type MSG_RETRY so that the destination will + * free any resource held by a previous message from this cpu. + */ + if (try == 0) { + /* use message type set by the caller the first time */ + seq_number = bcp->message_number++; + } else { + /* use RETRY type on all the rest; same sequence */ + bau_desc->header.msg_type = MSG_RETRY; + stat->s_retry_messages++; + } + bau_desc->header.sequence = seq_number; index = (1UL << UVH_LB_BAU_SB_ACTIVATION_CONTROL_PUSH_SHFT) | - cpu; + bcp->uvhub_cpu; + bcp->send_message = get_cycles(); + uv_write_local_mmr(UVH_LB_BAU_SB_ACTIVATION_CONTROL, index); + + try++; completion_status = uv_wait_completion(bau_desc, mmr_offset, - right_shift); - } while (completion_status == FLUSH_RETRY); + right_shift, this_cpu, bcp, smaster, try); + + if (completion_status == FLUSH_RETRY_PLUGGED) { + /* + * Our retries may be blocked by all destination swack + * resources being consumed, and a timeout pending. In + * that case hardware immediately returns the ERROR + * that looks like a destination timeout. 
+ */ + udelay(TIMEOUT_DELAY); + bcp->plugged_tries++; + if (bcp->plugged_tries >= PLUGSB4RESET) { + bcp->plugged_tries = 0; + quiesce_local_uvhub(hmaster); + spin_lock(&hmaster->queue_lock); + uv_reset_with_ipi(&bau_desc->distribution, + this_cpu); + spin_unlock(&hmaster->queue_lock); + end_uvhub_quiesce(hmaster); + bcp->ipi_attempts++; + stat->s_resets_plug++; + } + } else if (completion_status == FLUSH_RETRY_TIMEOUT) { + hmaster->max_concurrent = 1; + bcp->timeout_tries++; + udelay(TIMEOUT_DELAY); + if (bcp->timeout_tries >= TIMEOUTSB4RESET) { + bcp->timeout_tries = 0; + quiesce_local_uvhub(hmaster); + spin_lock(&hmaster->queue_lock); + uv_reset_with_ipi(&bau_desc->distribution, + this_cpu); + spin_unlock(&hmaster->queue_lock); + end_uvhub_quiesce(hmaster); + bcp->ipi_attempts++; + stat->s_resets_timeout++; + } + } + if (bcp->ipi_attempts >= 3) { + bcp->ipi_attempts = 0; + completion_status = FLUSH_GIVEUP; + break; + } + cpu_relax(); + } while ((completion_status == FLUSH_RETRY_PLUGGED) || + (completion_status == FLUSH_RETRY_TIMEOUT)); time2 = get_cycles(); - __get_cpu_var(ptcstats).sflush += (time2 - time1); - if (tries > 1) - __get_cpu_var(ptcstats).retriesok++; - if (completion_status == FLUSH_GIVEUP) { + if ((completion_status == FLUSH_COMPLETE) && (bcp->conseccompletes > 5) + && (hmaster->max_concurrent < hmaster->max_concurrent_constant)) + hmaster->max_concurrent++; + + /* + * hold any cpu not timing out here; no other cpu currently held by + * the 'throttle' should enter the activation code + */ + while (hmaster->uvhub_quiesce) + cpu_relax(); + atomic_dec(&hmaster->active_descriptor_count); + + /* guard against cycles wrap */ + if (time2 > time1) + stat->s_time += (time2 - time1); + else + stat->s_requestor--; /* don't count this one */ + if (completion_status == FLUSH_COMPLETE && try > 1) + stat->s_retriesok++; + else if (completion_status == FLUSH_GIVEUP) { /* * Cause the caller to do an IPI-style TLB shootdown on - * the cpu's, all of which are still in the mask. + * the target cpu's, all of which are still in the mask. */ - __get_cpu_var(ptcstats).ptc_i++; + stat->s_giveup++; return flush_mask; } @@ -294,18 +667,17 @@ const struct cpumask *uv_flush_send_and_wait(int cpu, int this_pnode, * use the IPI method of shootdown on them. */ for_each_cpu(bit, flush_mask) { - pnode = uv_cpu_to_pnode(bit); - if (pnode == this_pnode) + uvhub = uv_cpu_to_blade_id(bit); + if (uvhub == this_uvhub) continue; cpumask_clear_cpu(bit, flush_mask); } if (!cpumask_empty(flush_mask)) return flush_mask; + return NULL; } -static DEFINE_PER_CPU(cpumask_var_t, uv_flush_tlb_mask); - /** * uv_flush_tlb_others - globally purge translation cache of a virtual * address or all TLB's @@ -322,8 +694,8 @@ static DEFINE_PER_CPU(cpumask_var_t, uv_flush_tlb_mask); * The caller has derived the cpumask from the mm_struct. This function * is called only if there are bits set in the mask. (e.g. flush_tlb_page()) * - * The cpumask is converted into a nodemask of the nodes containing - * the cpus. + * The cpumask is converted into a uvhubmask of the uvhubs containing + * those cpus. * * Note that this function should be called with preemption disabled. 
* @@ -335,52 +707,82 @@ const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask, struct mm_struct *mm, unsigned long va, unsigned int cpu) { - struct cpumask *flush_mask = __get_cpu_var(uv_flush_tlb_mask); - int i; - int bit; - int pnode; - int uv_cpu; - int this_pnode; + int remotes; + int tcpu; + int uvhub; int locals = 0; struct bau_desc *bau_desc; + struct cpumask *flush_mask; + struct ptc_stats *stat; + struct bau_control *bcp; - cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu)); + if (nobau) + return cpumask; - uv_cpu = uv_blade_processor_id(); - this_pnode = uv_hub_info->pnode; - bau_desc = __get_cpu_var(bau_control).descriptor_base; - bau_desc += UV_ITEMS_PER_DESCRIPTOR * uv_cpu; + bcp = &per_cpu(bau_control, cpu); + /* + * Each sending cpu has a per-cpu mask which it fills from the caller's + * cpu mask. Only remote cpus are converted to uvhubs and copied. + */ + flush_mask = (struct cpumask *)per_cpu(uv_flush_tlb_mask, cpu); + /* + * copy cpumask to flush_mask, removing current cpu + * (current cpu should already have been flushed by the caller and + * should never be returned if we return flush_mask) + */ + cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu)); + if (cpu_isset(cpu, *cpumask)) + locals++; /* current cpu was targeted */ - bau_nodes_clear(&bau_desc->distribution, UV_DISTRIBUTION_SIZE); + bau_desc = bcp->descriptor_base; + bau_desc += UV_ITEMS_PER_DESCRIPTOR * bcp->uvhub_cpu; - i = 0; - for_each_cpu(bit, flush_mask) { - pnode = uv_cpu_to_pnode(bit); - BUG_ON(pnode > (UV_DISTRIBUTION_SIZE - 1)); - if (pnode == this_pnode) { + bau_uvhubs_clear(&bau_desc->distribution, UV_DISTRIBUTION_SIZE); + remotes = 0; + for_each_cpu(tcpu, flush_mask) { + uvhub = uv_cpu_to_blade_id(tcpu); + if (uvhub == bcp->uvhub) { locals++; continue; } - bau_node_set(pnode - uv_partition_base_pnode, - &bau_desc->distribution); - i++; + bau_uvhub_set(uvhub, &bau_desc->distribution); + remotes++; } - if (i == 0) { + if (remotes == 0) { /* - * no off_node flushing; return status for local node + * No off_hub flushing; return status for local hub. + * Return the caller's mask if all were local (the current + * cpu may be in that mask). */ if (locals) - return flush_mask; + return cpumask; else return NULL; } - __get_cpu_var(ptcstats).requestor++; - __get_cpu_var(ptcstats).ntargeted += i; + stat = &per_cpu(ptcstats, cpu); + stat->s_requestor++; + stat->s_ntargcpu += remotes; + remotes = bau_uvhub_weight(&bau_desc->distribution); + stat->s_ntarguvhub += remotes; + if (remotes >= 16) + stat->s_ntarguvhub16++; + else if (remotes >= 8) + stat->s_ntarguvhub8++; + else if (remotes >= 4) + stat->s_ntarguvhub4++; + else if (remotes >= 2) + stat->s_ntarguvhub2++; + else + stat->s_ntarguvhub1++; bau_desc->payload.address = va; bau_desc->payload.sending_cpu = cpu; - return uv_flush_send_and_wait(uv_cpu, this_pnode, bau_desc, flush_mask); + /* + * uv_flush_send_and_wait returns null if all cpu's were messaged, or + * the adjusted flush_mask if any cpu's were not messaged. + */ + return uv_flush_send_and_wait(bau_desc, flush_mask, bcp); } /* @@ -389,87 +791,70 @@ const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask, * * We received a broadcast assist message. * - * Interrupts may have been disabled; this interrupt could represent + * Interrupts are disabled; this interrupt could represent * the receipt of several messages. * - * All cores/threads on this node get this interrupt. - * The last one to see it does the s/w ack. + * All cores/threads on this hub get this interrupt. 
+ * The last one to see it does the software ack. * (the resource will not be freed until noninterruptable cpus see this - * interrupt; hardware will timeout the s/w ack and reply ERROR) + * interrupt; hardware may timeout the s/w ack and reply ERROR) */ void uv_bau_message_interrupt(struct pt_regs *regs) { - struct bau_payload_queue_entry *va_queue_first; - struct bau_payload_queue_entry *va_queue_last; - struct bau_payload_queue_entry *msg; - struct pt_regs *old_regs = set_irq_regs(regs); - cycles_t time1; - cycles_t time2; - int msg_slot; - int sw_ack_slot; - int fw; int count = 0; - unsigned long local_pnode; - - ack_APIC_irq(); - exit_idle(); - irq_enter(); - - time1 = get_cycles(); - - local_pnode = uv_blade_to_pnode(uv_numa_blade_id()); - - va_queue_first = __get_cpu_var(bau_control).va_queue_first; - va_queue_last = __get_cpu_var(bau_control).va_queue_last; - - msg = __get_cpu_var(bau_control).bau_msg_head; + cycles_t time_start; + struct bau_payload_queue_entry *msg; + struct bau_control *bcp; + struct ptc_stats *stat; + struct msg_desc msgdesc; + + time_start = get_cycles(); + bcp = &per_cpu(bau_control, smp_processor_id()); + stat = &per_cpu(ptcstats, smp_processor_id()); + msgdesc.va_queue_first = bcp->va_queue_first; + msgdesc.va_queue_last = bcp->va_queue_last; + msg = bcp->bau_msg_head; while (msg->sw_ack_vector) { count++; - fw = msg->sw_ack_vector; - msg_slot = msg - va_queue_first; - sw_ack_slot = ffs(fw) - 1; - - uv_bau_process_message(msg, msg_slot, sw_ack_slot); - + msgdesc.msg_slot = msg - msgdesc.va_queue_first; + msgdesc.sw_ack_slot = ffs(msg->sw_ack_vector) - 1; + msgdesc.msg = msg; + uv_bau_process_message(&msgdesc, bcp); msg++; - if (msg > va_queue_last) - msg = va_queue_first; - __get_cpu_var(bau_control).bau_msg_head = msg; + if (msg > msgdesc.va_queue_last) + msg = msgdesc.va_queue_first; + bcp->bau_msg_head = msg; } + stat->d_time += (get_cycles() - time_start); if (!count) - __get_cpu_var(ptcstats).nomsg++; + stat->d_nomsg++; else if (count > 1) - __get_cpu_var(ptcstats).multmsg++; - - time2 = get_cycles(); - __get_cpu_var(ptcstats).dflush += (time2 - time1); - - irq_exit(); - set_irq_regs(old_regs); + stat->d_multmsg++; + ack_APIC_irq(); } /* * uv_enable_timeouts * - * Each target blade (i.e. blades that have cpu's) needs to have + * Each target uvhub (i.e. a uvhub that has no cpu's) needs to have * shootdown message timeouts enabled. The timeout does not cause * an interrupt, but causes an error message to be returned to * the sender. */ static void uv_enable_timeouts(void) { - int blade; - int nblades; + int uvhub; + int nuvhubs; int pnode; unsigned long mmr_image; - nblades = uv_num_possible_blades(); + nuvhubs = uv_num_possible_blades(); - for (blade = 0; blade < nblades; blade++) { - if (!uv_blade_nr_possible_cpus(blade)) + for (uvhub = 0; uvhub < nuvhubs; uvhub++) { + if (!uv_blade_nr_possible_cpus(uvhub)) continue; - pnode = uv_blade_to_pnode(blade); + pnode = uv_blade_to_pnode(uvhub); mmr_image = uv_read_global_mmr64(pnode, UVH_LB_BAU_MISC_CONTROL); /* @@ -479,16 +864,16 @@ static void uv_enable_timeouts(void) * To program the period, the SOFT_ACK_MODE must be off. */ mmr_image &= ~((unsigned long)1 << - UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT); + UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT); uv_write_global_mmr64 (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image); /* * Set the 4-bit period. 
*/ mmr_image &= ~((unsigned long)0xf << - UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT); + UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT); mmr_image |= (UV_INTD_SOFT_ACK_TIMEOUT_PERIOD << - UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT); + UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT); uv_write_global_mmr64 (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image); /* @@ -497,7 +882,7 @@ static void uv_enable_timeouts(void) * indicated in bits 2:0 (7 causes all of them to timeout). */ mmr_image |= ((unsigned long)1 << - UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT); + UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT); uv_write_global_mmr64 (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image); } @@ -522,9 +907,20 @@ static void uv_ptc_seq_stop(struct seq_file *file, void *data) { } +static inline unsigned long long +millisec_2_cycles(unsigned long millisec) +{ + unsigned long ns; + unsigned long long cyc; + + ns = millisec * 1000; + cyc = (ns << CYC2NS_SCALE_FACTOR)/(per_cpu(cyc2ns, smp_processor_id())); + return cyc; +} + /* - * Display the statistics thru /proc - * data points to the cpu number + * Display the statistics thru /proc. + * 'data' points to the cpu number */ static int uv_ptc_seq_show(struct seq_file *file, void *data) { @@ -535,78 +931,155 @@ static int uv_ptc_seq_show(struct seq_file *file, void *data) if (!cpu) { seq_printf(file, - "# cpu requestor requestee one all sretry dretry ptc_i "); + "# cpu sent stime numuvhubs numuvhubs16 numuvhubs8 "); seq_printf(file, - "sw_ack sflush dflush sok dnomsg dmult starget\n"); + "numuvhubs4 numuvhubs2 numuvhubs1 numcpus dto "); + seq_printf(file, + "retries rok resetp resett giveup sto bz throt "); + seq_printf(file, + "sw_ack recv rtime all "); + seq_printf(file, + "one mult none retry canc nocan reset rcan\n"); } if (cpu < num_possible_cpus() && cpu_online(cpu)) { stat = &per_cpu(ptcstats, cpu); - seq_printf(file, "cpu %d %ld %ld %ld %ld %ld %ld %ld ", - cpu, stat->requestor, - stat->requestee, stat->onetlb, stat->alltlb, - stat->s_retry, stat->d_retry, stat->ptc_i); - seq_printf(file, "%lx %ld %ld %ld %ld %ld %ld\n", + /* source side statistics */ + seq_printf(file, + "cpu %d %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld ", + cpu, stat->s_requestor, cycles_2_us(stat->s_time), + stat->s_ntarguvhub, stat->s_ntarguvhub16, + stat->s_ntarguvhub8, stat->s_ntarguvhub4, + stat->s_ntarguvhub2, stat->s_ntarguvhub1, + stat->s_ntargcpu, stat->s_dtimeout); + seq_printf(file, "%ld %ld %ld %ld %ld %ld %ld %ld ", + stat->s_retry_messages, stat->s_retriesok, + stat->s_resets_plug, stat->s_resets_timeout, + stat->s_giveup, stat->s_stimeout, + stat->s_busy, stat->s_throttles); + /* destination side statistics */ + seq_printf(file, + "%lx %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld\n", uv_read_global_mmr64(uv_cpu_to_pnode(cpu), UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE), - stat->sflush, stat->dflush, - stat->retriesok, stat->nomsg, - stat->multmsg, stat->ntargeted); + stat->d_requestee, cycles_2_us(stat->d_time), + stat->d_alltlb, stat->d_onetlb, stat->d_multmsg, + stat->d_nomsg, stat->d_retries, stat->d_canceled, + stat->d_nocanceled, stat->d_resets, + stat->d_rcanceled); } return 0; } /* + * -1: resetf the statistics * 0: display meaning of the statistics - * >0: retry limit + * >0: maximum concurrent active descriptors per uvhub (throttle) */ static ssize_t uv_ptc_proc_write(struct file *file, const char __user *user, size_t count, loff_t *data) { - long newmode; + int cpu; + long input_arg; char optstr[64]; + struct ptc_stats *stat; + struct bau_control *bcp; if (count == 0 
|| count > sizeof(optstr)) return -EINVAL; if (copy_from_user(optstr, user, count)) return -EFAULT; optstr[count - 1] = '\0'; - if (strict_strtoul(optstr, 10, &newmode) < 0) { + if (strict_strtol(optstr, 10, &input_arg) < 0) { printk(KERN_DEBUG "%s is invalid\n", optstr); return -EINVAL; } - if (newmode == 0) { + if (input_arg == 0) { printk(KERN_DEBUG "# cpu: cpu number\n"); + printk(KERN_DEBUG "Sender statistics:\n"); + printk(KERN_DEBUG + "sent: number of shootdown messages sent\n"); + printk(KERN_DEBUG + "stime: time spent sending messages\n"); + printk(KERN_DEBUG + "numuvhubs: number of hubs targeted with shootdown\n"); + printk(KERN_DEBUG + "numuvhubs16: number times 16 or more hubs targeted\n"); + printk(KERN_DEBUG + "numuvhubs8: number times 8 or more hubs targeted\n"); + printk(KERN_DEBUG + "numuvhubs4: number times 4 or more hubs targeted\n"); + printk(KERN_DEBUG + "numuvhubs2: number times 2 or more hubs targeted\n"); + printk(KERN_DEBUG + "numuvhubs1: number times 1 hub targeted\n"); + printk(KERN_DEBUG + "numcpus: number of cpus targeted with shootdown\n"); + printk(KERN_DEBUG + "dto: number of destination timeouts\n"); + printk(KERN_DEBUG + "retries: destination timeout retries sent\n"); + printk(KERN_DEBUG + "rok: : destination timeouts successfully retried\n"); + printk(KERN_DEBUG + "resetp: ipi-style resource resets for plugs\n"); + printk(KERN_DEBUG + "resett: ipi-style resource resets for timeouts\n"); + printk(KERN_DEBUG + "giveup: fall-backs to ipi-style shootdowns\n"); + printk(KERN_DEBUG + "sto: number of source timeouts\n"); + printk(KERN_DEBUG + "bz: number of stay-busy's\n"); + printk(KERN_DEBUG + "throt: number times spun in throttle\n"); + printk(KERN_DEBUG "Destination side statistics:\n"); printk(KERN_DEBUG - "requestor: times this cpu was the flush requestor\n"); + "sw_ack: image of UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE\n"); printk(KERN_DEBUG - "requestee: times this cpu was requested to flush its TLBs\n"); + "recv: shootdown messages received\n"); printk(KERN_DEBUG - "one: times requested to flush a single address\n"); + "rtime: time spent processing messages\n"); printk(KERN_DEBUG - "all: times requested to flush all TLB's\n"); + "all: shootdown all-tlb messages\n"); printk(KERN_DEBUG - "sretry: number of retries of source-side timeouts\n"); + "one: shootdown one-tlb messages\n"); printk(KERN_DEBUG - "dretry: number of retries of destination-side timeouts\n"); + "mult: interrupts that found multiple messages\n"); printk(KERN_DEBUG - "ptc_i: times UV fell through to IPI-style flushes\n"); + "none: interrupts that found no messages\n"); printk(KERN_DEBUG - "sw_ack: image of UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE\n"); + "retry: number of retry messages processed\n"); printk(KERN_DEBUG - "sflush_us: cycles spent in uv_flush_tlb_others()\n"); + "canc: number messages canceled by retries\n"); printk(KERN_DEBUG - "dflush_us: cycles spent in handling flush requests\n"); - printk(KERN_DEBUG "sok: successes on retry\n"); - printk(KERN_DEBUG "dnomsg: interrupts with no message\n"); + "nocan: number retries that found nothing to cancel\n"); printk(KERN_DEBUG - "dmult: interrupts with multiple messages\n"); - printk(KERN_DEBUG "starget: nodes targeted\n"); + "reset: number of ipi-style reset requests processed\n"); + printk(KERN_DEBUG + "rcan: number messages canceled by reset requests\n"); + } else if (input_arg == -1) { + for_each_present_cpu(cpu) { + stat = &per_cpu(ptcstats, cpu); + memset(stat, 0, sizeof(struct ptc_stats)); + } } else { - uv_bau_retry_limit = newmode; 
- printk(KERN_DEBUG "timeout retry limit:%d\n", - uv_bau_retry_limit); + uv_bau_max_concurrent = input_arg; + bcp = &per_cpu(bau_control, smp_processor_id()); + if (uv_bau_max_concurrent < 1 || + uv_bau_max_concurrent > bcp->cpus_in_uvhub) { + printk(KERN_DEBUG + "Error: BAU max concurrent %d; %d is invalid\n", + bcp->max_concurrent, uv_bau_max_concurrent); + return -EINVAL; + } + printk(KERN_DEBUG "Set BAU max concurrent:%d\n", + uv_bau_max_concurrent); + for_each_present_cpu(cpu) { + bcp = &per_cpu(bau_control, cpu); + bcp->max_concurrent = uv_bau_max_concurrent; + } } return count; @@ -650,79 +1123,30 @@ static int __init uv_ptc_init(void) } /* - * begin the initialization of the per-blade control structures - */ -static struct bau_control * __init uv_table_bases_init(int blade, int node) -{ - int i; - struct bau_msg_status *msp; - struct bau_control *bau_tabp; - - bau_tabp = - kmalloc_node(sizeof(struct bau_control), GFP_KERNEL, node); - BUG_ON(!bau_tabp); - - bau_tabp->msg_statuses = - kmalloc_node(sizeof(struct bau_msg_status) * - DEST_Q_SIZE, GFP_KERNEL, node); - BUG_ON(!bau_tabp->msg_statuses); - - for (i = 0, msp = bau_tabp->msg_statuses; i < DEST_Q_SIZE; i++, msp++) - bau_cpubits_clear(&msp->seen_by, (int) - uv_blade_nr_possible_cpus(blade)); - - uv_bau_table_bases[blade] = bau_tabp; - - return bau_tabp; -} - -/* - * finish the initialization of the per-blade control structures - */ -static void __init -uv_table_bases_finish(int blade, - struct bau_control *bau_tablesp, - struct bau_desc *adp) -{ - struct bau_control *bcp; - int cpu; - - for_each_present_cpu(cpu) { - if (blade != uv_cpu_to_blade_id(cpu)) - continue; - - bcp = (struct bau_control *)&per_cpu(bau_control, cpu); - bcp->bau_msg_head = bau_tablesp->va_queue_first; - bcp->va_queue_first = bau_tablesp->va_queue_first; - bcp->va_queue_last = bau_tablesp->va_queue_last; - bcp->msg_statuses = bau_tablesp->msg_statuses; - bcp->descriptor_base = adp; - } -} - -/* * initialize the sending side's sending buffers */ -static struct bau_desc * __init +static void uv_activation_descriptor_init(int node, int pnode) { int i; + int cpu; unsigned long pa; unsigned long m; unsigned long n; - struct bau_desc *adp; - struct bau_desc *ad2; + struct bau_desc *bau_desc; + struct bau_desc *bd2; + struct bau_control *bcp; /* * each bau_desc is 64 bytes; there are 8 (UV_ITEMS_PER_DESCRIPTOR) - * per cpu; and up to 32 (UV_ADP_SIZE) cpu's per blade + * per cpu; and up to 32 (UV_ADP_SIZE) cpu's per uvhub */ - adp = (struct bau_desc *)kmalloc_node(sizeof(struct bau_desc)* + bau_desc = (struct bau_desc *)kmalloc_node(sizeof(struct bau_desc)* UV_ADP_SIZE*UV_ITEMS_PER_DESCRIPTOR, GFP_KERNEL, node); - BUG_ON(!adp); + BUG_ON(!bau_desc); - pa = uv_gpa(adp); /* need the real nasid*/ - n = uv_gpa_to_pnode(pa); + pa = uv_gpa(bau_desc); /* need the real nasid*/ + n = pa >> uv_nshift; m = pa & uv_mmask; uv_write_global_mmr64(pnode, UVH_LB_BAU_SB_DESCRIPTOR_BASE, @@ -731,96 +1155,188 @@ uv_activation_descriptor_init(int node, int pnode) /* * initializing all 8 (UV_ITEMS_PER_DESCRIPTOR) descriptors for each * cpu even though we only use the first one; one descriptor can - * describe a broadcast to 256 nodes. + * describe a broadcast to 256 uv hubs. 
*/ - for (i = 0, ad2 = adp; i < (UV_ADP_SIZE*UV_ITEMS_PER_DESCRIPTOR); - i++, ad2++) { - memset(ad2, 0, sizeof(struct bau_desc)); - ad2->header.sw_ack_flag = 1; + for (i = 0, bd2 = bau_desc; i < (UV_ADP_SIZE*UV_ITEMS_PER_DESCRIPTOR); + i++, bd2++) { + memset(bd2, 0, sizeof(struct bau_desc)); + bd2->header.sw_ack_flag = 1; /* - * base_dest_nodeid is the first node in the partition, so - * the bit map will indicate partition-relative node numbers. - * note that base_dest_nodeid is actually a nasid. + * base_dest_nodeid is the nasid (pnode<<1) of the first uvhub + * in the partition. The bit map will indicate uvhub numbers, + * which are 0-N in a partition. Pnodes are unique system-wide. */ - ad2->header.base_dest_nodeid = uv_partition_base_pnode << 1; - ad2->header.dest_subnodeid = 0x10; /* the LB */ - ad2->header.command = UV_NET_ENDPOINT_INTD; - ad2->header.int_both = 1; + bd2->header.base_dest_nodeid = uv_partition_base_pnode << 1; + bd2->header.dest_subnodeid = 0x10; /* the LB */ + bd2->header.command = UV_NET_ENDPOINT_INTD; + bd2->header.int_both = 1; /* * all others need to be set to zero: * fairness chaining multilevel count replied_to */ } - return adp; + for_each_present_cpu(cpu) { + if (pnode != uv_blade_to_pnode(uv_cpu_to_blade_id(cpu))) + continue; + bcp = &per_cpu(bau_control, cpu); + bcp->descriptor_base = bau_desc; + } } /* * initialize the destination side's receiving buffers + * entered for each uvhub in the partition + * - node is first node (kernel memory notion) on the uvhub + * - pnode is the uvhub's physical identifier */ -static struct bau_payload_queue_entry * __init -uv_payload_queue_init(int node, int pnode, struct bau_control *bau_tablesp) +static void +uv_payload_queue_init(int node, int pnode) { - struct bau_payload_queue_entry *pqp; - unsigned long pa; int pn; + int cpu; char *cp; + unsigned long pa; + struct bau_payload_queue_entry *pqp; + struct bau_payload_queue_entry *pqp_malloc; + struct bau_control *bcp; pqp = (struct bau_payload_queue_entry *) kmalloc_node( (DEST_Q_SIZE + 1) * sizeof(struct bau_payload_queue_entry), GFP_KERNEL, node); BUG_ON(!pqp); + pqp_malloc = pqp; cp = (char *)pqp + 31; pqp = (struct bau_payload_queue_entry *)(((unsigned long)cp >> 5) << 5); - bau_tablesp->va_queue_first = pqp; + + for_each_present_cpu(cpu) { + if (pnode != uv_cpu_to_pnode(cpu)) + continue; + /* for every cpu on this pnode: */ + bcp = &per_cpu(bau_control, cpu); + bcp->va_queue_first = pqp; + bcp->bau_msg_head = pqp; + bcp->va_queue_last = pqp + (DEST_Q_SIZE - 1); + } /* * need the pnode of where the memory was really allocated */ pa = uv_gpa(pqp); - pn = uv_gpa_to_pnode(pa); + pn = pa >> uv_nshift; uv_write_global_mmr64(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST, ((unsigned long)pn << UV_PAYLOADQ_PNODE_SHIFT) | uv_physnodeaddr(pqp)); uv_write_global_mmr64(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL, uv_physnodeaddr(pqp)); - bau_tablesp->va_queue_last = pqp + (DEST_Q_SIZE - 1); uv_write_global_mmr64(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST, (unsigned long) - uv_physnodeaddr(bau_tablesp->va_queue_last)); + uv_physnodeaddr(pqp + (DEST_Q_SIZE - 1))); + /* in effect, all msg_type's are set to MSG_NOOP */ memset(pqp, 0, sizeof(struct bau_payload_queue_entry) * DEST_Q_SIZE); - - return pqp; } /* - * Initialization of each UV blade's structures + * Initialization of each UV hub's structures */ -static int __init uv_init_blade(int blade) +static void __init uv_init_uvhub(int uvhub, int vector) { int node; int pnode; - unsigned long pa; unsigned long apicid; - struct bau_desc 
*adp; - struct bau_payload_queue_entry *pqp; - struct bau_control *bau_tablesp; - - node = blade_to_first_node(blade); - bau_tablesp = uv_table_bases_init(blade, node); - pnode = uv_blade_to_pnode(blade); - adp = uv_activation_descriptor_init(node, pnode); - pqp = uv_payload_queue_init(node, pnode, bau_tablesp); - uv_table_bases_finish(blade, bau_tablesp, adp); + + node = uvhub_to_first_node(uvhub); + pnode = uv_blade_to_pnode(uvhub); + uv_activation_descriptor_init(node, pnode); + uv_payload_queue_init(node, pnode); /* * the below initialization can't be in firmware because the * messaging IRQ will be determined by the OS */ - apicid = blade_to_first_apicid(blade); - pa = uv_read_global_mmr64(pnode, UVH_BAU_DATA_CONFIG); + apicid = uvhub_to_first_apicid(uvhub); uv_write_global_mmr64(pnode, UVH_BAU_DATA_CONFIG, - ((apicid << 32) | UV_BAU_MESSAGE)); - return 0; + ((apicid << 32) | vector)); +} + +/* + * initialize the bau_control structure for each cpu + */ +static void uv_init_per_cpu(int nuvhubs) +{ + int i, j, k; + int cpu; + int pnode; + int uvhub; + short socket = 0; + struct bau_control *bcp; + struct uvhub_desc *bdp; + struct socket_desc *sdp; + struct bau_control *hmaster = NULL; + struct bau_control *smaster = NULL; + struct socket_desc { + short num_cpus; + short cpu_number[16]; + }; + struct uvhub_desc { + short num_sockets; + short num_cpus; + short uvhub; + short pnode; + struct socket_desc socket[2]; + }; + struct uvhub_desc *uvhub_descs; + + uvhub_descs = (struct uvhub_desc *) + kmalloc(nuvhubs * sizeof(struct uvhub_desc), GFP_KERNEL); + memset(uvhub_descs, 0, nuvhubs * sizeof(struct uvhub_desc)); + for_each_present_cpu(cpu) { + bcp = &per_cpu(bau_control, cpu); + memset(bcp, 0, sizeof(struct bau_control)); + spin_lock_init(&bcp->masks_lock); + bcp->max_concurrent = uv_bau_max_concurrent; + pnode = uv_cpu_hub_info(cpu)->pnode; + uvhub = uv_cpu_hub_info(cpu)->numa_blade_id; + bdp = &uvhub_descs[uvhub]; + bdp->num_cpus++; + bdp->uvhub = uvhub; + bdp->pnode = pnode; + /* time interval to catch a hardware stay-busy bug */ + bcp->timeout_interval = millisec_2_cycles(3); + /* kludge: assume uv_hub.h is constant */ + socket = (cpu_physical_id(cpu)>>5)&1; + if (socket >= bdp->num_sockets) + bdp->num_sockets = socket+1; + sdp = &bdp->socket[socket]; + sdp->cpu_number[sdp->num_cpus] = cpu; + sdp->num_cpus++; + } + socket = 0; + for_each_possible_blade(uvhub) { + bdp = &uvhub_descs[uvhub]; + for (i = 0; i < bdp->num_sockets; i++) { + sdp = &bdp->socket[i]; + for (j = 0; j < sdp->num_cpus; j++) { + cpu = sdp->cpu_number[j]; + bcp = &per_cpu(bau_control, cpu); + bcp->cpu = cpu; + if (j == 0) { + smaster = bcp; + if (i == 0) + hmaster = bcp; + } + bcp->cpus_in_uvhub = bdp->num_cpus; + bcp->cpus_in_socket = sdp->num_cpus; + bcp->socket_master = smaster; + bcp->uvhub_master = hmaster; + for (k = 0; k < DEST_Q_SIZE; k++) + bcp->socket_acknowledge_count[k] = 0; + bcp->uvhub_cpu = + uv_cpu_hub_info(cpu)->blade_processor_id; + } + socket++; + } + } + kfree(uvhub_descs); } /* @@ -828,38 +1344,54 @@ static int __init uv_init_blade(int blade) */ static int __init uv_bau_init(void) { - int blade; - int nblades; + int uvhub; + int pnode; + int nuvhubs; int cur_cpu; + int vector; + unsigned long mmr; if (!is_uv_system()) return 0; + if (nobau) + return 0; + for_each_possible_cpu(cur_cpu) zalloc_cpumask_var_node(&per_cpu(uv_flush_tlb_mask, cur_cpu), GFP_KERNEL, cpu_to_node(cur_cpu)); - uv_bau_retry_limit = 1; + uv_bau_max_concurrent = MAX_BAU_CONCURRENT; + uv_nshift = uv_hub_info->m_val; uv_mmask = 
(1UL << uv_hub_info->m_val) - 1; - nblades = uv_num_possible_blades(); + nuvhubs = uv_num_possible_blades(); - uv_bau_table_bases = (struct bau_control **) - kmalloc(nblades * sizeof(struct bau_control *), GFP_KERNEL); - BUG_ON(!uv_bau_table_bases); + uv_init_per_cpu(nuvhubs); uv_partition_base_pnode = 0x7fffffff; - for (blade = 0; blade < nblades; blade++) - if (uv_blade_nr_possible_cpus(blade) && - (uv_blade_to_pnode(blade) < uv_partition_base_pnode)) - uv_partition_base_pnode = uv_blade_to_pnode(blade); - for (blade = 0; blade < nblades; blade++) - if (uv_blade_nr_possible_cpus(blade)) - uv_init_blade(blade); - - alloc_intr_gate(UV_BAU_MESSAGE, uv_bau_message_intr1); + for (uvhub = 0; uvhub < nuvhubs; uvhub++) + if (uv_blade_nr_possible_cpus(uvhub) && + (uv_blade_to_pnode(uvhub) < uv_partition_base_pnode)) + uv_partition_base_pnode = uv_blade_to_pnode(uvhub); + + vector = UV_BAU_MESSAGE; + for_each_possible_blade(uvhub) + if (uv_blade_nr_possible_cpus(uvhub)) + uv_init_uvhub(uvhub, vector); + uv_enable_timeouts(); + alloc_intr_gate(vector, uv_bau_message_intr1); + + for_each_possible_blade(uvhub) { + pnode = uv_blade_to_pnode(uvhub); + /* INIT the bau */ + uv_write_global_mmr64(pnode, UVH_LB_BAU_SB_ACTIVATION_CONTROL, + ((unsigned long)1 << 63)); + mmr = 1; /* should be 1 to broadcast to both sockets */ + uv_write_global_mmr64(pnode, UVH_BAU_DATA_BROADCAST, mmr); + } return 0; } -__initcall(uv_bau_init); -__initcall(uv_ptc_init); +core_initcall(uv_bau_init); +core_initcall(uv_ptc_init); diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index 1168e4454188..02cfb9b8f5b1 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -108,15 +108,6 @@ static inline void preempt_conditional_cli(struct pt_regs *regs) dec_preempt_count(); } -#ifdef CONFIG_X86_32 -static inline void -die_if_kernel(const char *str, struct pt_regs *regs, long err) -{ - if (!user_mode_vm(regs)) - die(str, regs, err); -} -#endif - static void __kprobes do_trap(int trapnr, int signr, char *str, struct pt_regs *regs, long error_code, siginfo_t *info) @@ -543,11 +534,11 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code) /* DR6 may or may not be cleared by the CPU */ set_debugreg(0, 6); + /* * The processor cleared BTF, so don't mark that we need it set. */ - clear_tsk_thread_flag(tsk, TIF_DEBUGCTLMSR); - tsk->thread.debugctlmsr = 0; + clear_tsk_thread_flag(tsk, TIF_BLOCKSTEP); /* Store the virtualized DR6 value */ tsk->thread.debugreg6 = dr6; @@ -585,55 +576,67 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code) return; } -#ifdef CONFIG_X86_64 -static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr) -{ - if (fixup_exception(regs)) - return 1; - - notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE); - /* Illegal floating point operation in the kernel */ - current->thread.trap_no = trapnr; - die(str, regs, 0); - return 0; -} -#endif - /* * Note that we play around with the 'TS' bit in an attempt to get * the correct behaviour even in the presence of the asynchronous * IRQ13 behaviour */ -void math_error(void __user *ip) +void math_error(struct pt_regs *regs, int error_code, int trapnr) { - struct task_struct *task; + struct task_struct *task = current; siginfo_t info; - unsigned short cwd, swd, err; + unsigned short err; + char *str = (trapnr == 16) ? 
"fpu exception" : "simd exception"; + + if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, SIGFPE) == NOTIFY_STOP) + return; + conditional_sti(regs); + + if (!user_mode_vm(regs)) + { + if (!fixup_exception(regs)) { + task->thread.error_code = error_code; + task->thread.trap_no = trapnr; + die(str, regs, error_code); + } + return; + } /* * Save the info for the exception handler and clear the error. */ - task = current; save_init_fpu(task); - task->thread.trap_no = 16; - task->thread.error_code = 0; + task->thread.trap_no = trapnr; + task->thread.error_code = error_code; info.si_signo = SIGFPE; info.si_errno = 0; - info.si_addr = ip; - /* - * (~cwd & swd) will mask out exceptions that are not set to unmasked - * status. 0x3f is the exception bits in these regs, 0x200 is the - * C1 reg you need in case of a stack fault, 0x040 is the stack - * fault bit. We should only be taking one exception at a time, - * so if this combination doesn't produce any single exception, - * then we have a bad program that isn't synchronizing its FPU usage - * and it will suffer the consequences since we won't be able to - * fully reproduce the context of the exception - */ - cwd = get_fpu_cwd(task); - swd = get_fpu_swd(task); + info.si_addr = (void __user *)regs->ip; + if (trapnr == 16) { + unsigned short cwd, swd; + /* + * (~cwd & swd) will mask out exceptions that are not set to unmasked + * status. 0x3f is the exception bits in these regs, 0x200 is the + * C1 reg you need in case of a stack fault, 0x040 is the stack + * fault bit. We should only be taking one exception at a time, + * so if this combination doesn't produce any single exception, + * then we have a bad program that isn't synchronizing its FPU usage + * and it will suffer the consequences since we won't be able to + * fully reproduce the context of the exception + */ + cwd = get_fpu_cwd(task); + swd = get_fpu_swd(task); - err = swd & ~cwd; + err = swd & ~cwd; + } else { + /* + * The SIMD FPU exceptions are handled a little differently, as there + * is only a single status/control register. Thus, to determine which + * unmasked exception was caught we must mask the exception mask bits + * at 0x1f80, and then use these to mask the exception bits at 0x3f. + */ + unsigned short mxcsr = get_fpu_mxcsr(task); + err = ~(mxcsr >> 7) & mxcsr; + } if (err & 0x001) { /* Invalid op */ /* @@ -662,97 +665,17 @@ void math_error(void __user *ip) dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code) { - conditional_sti(regs); - #ifdef CONFIG_X86_32 ignore_fpu_irq = 1; -#else - if (!user_mode(regs) && - kernel_math_error(regs, "kernel x87 math error", 16)) - return; #endif - math_error((void __user *)regs->ip); -} - -static void simd_math_error(void __user *ip) -{ - struct task_struct *task; - siginfo_t info; - unsigned short mxcsr; - - /* - * Save the info for the exception handler and clear the error. - */ - task = current; - save_init_fpu(task); - task->thread.trap_no = 19; - task->thread.error_code = 0; - info.si_signo = SIGFPE; - info.si_errno = 0; - info.si_code = __SI_FAULT; - info.si_addr = ip; - /* - * The SIMD FPU exceptions are handled a little differently, as there - * is only a single status/control register. Thus, to determine which - * unmasked exception was caught we must mask the exception mask bits - * at 0x1f80, and then use these to mask the exception bits at 0x3f. 
- */ - mxcsr = get_fpu_mxcsr(task); - switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) { - case 0x000: - default: - break; - case 0x001: /* Invalid Op */ - info.si_code = FPE_FLTINV; - break; - case 0x002: /* Denormalize */ - case 0x010: /* Underflow */ - info.si_code = FPE_FLTUND; - break; - case 0x004: /* Zero Divide */ - info.si_code = FPE_FLTDIV; - break; - case 0x008: /* Overflow */ - info.si_code = FPE_FLTOVF; - break; - case 0x020: /* Precision */ - info.si_code = FPE_FLTRES; - break; - } - force_sig_info(SIGFPE, &info, task); + math_error(regs, error_code, 16); } dotraplinkage void do_simd_coprocessor_error(struct pt_regs *regs, long error_code) { - conditional_sti(regs); - -#ifdef CONFIG_X86_32 - if (cpu_has_xmm) { - /* Handle SIMD FPU exceptions on PIII+ processors. */ - ignore_fpu_irq = 1; - simd_math_error((void __user *)regs->ip); - return; - } - /* - * Handle strange cache flush from user space exception - * in all other cases. This is undocumented behaviour. - */ - if (regs->flags & X86_VM_MASK) { - handle_vm86_fault((struct kernel_vm86_regs *)regs, error_code); - return; - } - current->thread.trap_no = 19; - current->thread.error_code = error_code; - die_if_kernel("cache flush denied", regs, error_code); - force_sig(SIGSEGV, current); -#else - if (!user_mode(regs) && - kernel_math_error(regs, "kernel simd math error", 19)) - return; - simd_math_error((void __user *)regs->ip); -#endif + math_error(regs, error_code, 19); } dotraplinkage void diff --git a/arch/x86/kernel/uv_irq.c b/arch/x86/kernel/uv_irq.c index 1d40336b030a..1132129db792 100644 --- a/arch/x86/kernel/uv_irq.c +++ b/arch/x86/kernel/uv_irq.c @@ -44,7 +44,7 @@ static void uv_ack_apic(unsigned int irq) ack_APIC_irq(); } -struct irq_chip uv_irq_chip = { +static struct irq_chip uv_irq_chip = { .name = "UV-CORE", .startup = uv_noop_ret, .shutdown = uv_noop, @@ -141,7 +141,7 @@ int uv_irq_2_mmr_info(int irq, unsigned long *offset, int *pnode) */ static int arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade, - unsigned long mmr_offset, int restrict) + unsigned long mmr_offset, int limit) { const struct cpumask *eligible_cpu = cpumask_of(cpu); struct irq_desc *desc = irq_to_desc(irq); @@ -160,7 +160,7 @@ arch_enable_uv_irq(char *irq_name, unsigned int irq, int cpu, int mmr_blade, if (err != 0) return err; - if (restrict == UV_AFFINITY_CPU) + if (limit == UV_AFFINITY_CPU) desc->status |= IRQ_NO_BALANCING; else desc->status |= IRQ_MOVE_PCNTXT; @@ -214,7 +214,7 @@ static int uv_set_irq_affinity(unsigned int irq, const struct cpumask *mask) unsigned long mmr_value; struct uv_IO_APIC_route_entry *entry; unsigned long mmr_offset; - unsigned mmr_pnode; + int mmr_pnode; if (set_desc_affinity(desc, mask, &dest)) return -1; @@ -248,7 +248,7 @@ static int uv_set_irq_affinity(unsigned int irq, const struct cpumask *mask) * interrupt is raised. 
*/ int uv_setup_irq(char *irq_name, int cpu, int mmr_blade, - unsigned long mmr_offset, int restrict) + unsigned long mmr_offset, int limit) { int irq, ret; @@ -258,7 +258,7 @@ int uv_setup_irq(char *irq_name, int cpu, int mmr_blade, return -EBUSY; ret = arch_enable_uv_irq(irq_name, irq, cpu, mmr_blade, mmr_offset, - restrict); + limit); if (ret == irq) uv_set_irq_2_mmr_info(irq, mmr_offset, mmr_blade); else diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c index 693920b22496..1b950d151e58 100644 --- a/arch/x86/kernel/x8664_ksyms_64.c +++ b/arch/x86/kernel/x8664_ksyms_64.c @@ -54,7 +54,6 @@ EXPORT_SYMBOL(memcpy); EXPORT_SYMBOL(__memcpy); EXPORT_SYMBOL(empty_zero_page); -EXPORT_SYMBOL(init_level4_pgt); #ifndef CONFIG_PARAVIRT EXPORT_SYMBOL(native_load_gs_index); #endif diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c index 782c3a362ec6..37e68fc5e24a 100644 --- a/arch/x86/kernel/xsave.c +++ b/arch/x86/kernel/xsave.c @@ -99,7 +99,7 @@ int save_i387_xstate(void __user *buf) if (err) return err; - if (task_thread_info(tsk)->status & TS_XSAVE) + if (use_xsave()) err = xsave_user(buf); else err = fxsave_user(buf); @@ -109,14 +109,14 @@ int save_i387_xstate(void __user *buf) task_thread_info(tsk)->status &= ~TS_USEDFPU; stts(); } else { - if (__copy_to_user(buf, &tsk->thread.xstate->fxsave, + if (__copy_to_user(buf, &tsk->thread.fpu.state->fxsave, xstate_size)) return -1; } clear_used_math(); /* trigger finit */ - if (task_thread_info(tsk)->status & TS_XSAVE) { + if (use_xsave()) { struct _fpstate __user *fx = buf; struct _xstate __user *x = buf; u64 xstate_bv; @@ -225,7 +225,7 @@ int restore_i387_xstate(void __user *buf) clts(); task_thread_info(current)->status |= TS_USEDFPU; } - if (task_thread_info(tsk)->status & TS_XSAVE) + if (use_xsave()) err = restore_user_xstate(buf); else err = fxrstor_checking((__force struct i387_fxsave_struct *) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 2ba58206812a..737361fcd503 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -2067,7 +2067,7 @@ static int cpuid_interception(struct vcpu_svm *svm) static int iret_interception(struct vcpu_svm *svm) { ++svm->vcpu.stat.nmi_window_exits; - svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET); + svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_IRET); svm->vcpu.arch.hflags |= HF_IRET_MASK; return 1; } @@ -2479,7 +2479,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI; vcpu->arch.hflags |= HF_NMI_MASK; - svm->vmcb->control.intercept |= (1UL << INTERCEPT_IRET); + svm->vmcb->control.intercept |= (1ULL << INTERCEPT_IRET); ++vcpu->stat.nmi_injections; } @@ -2539,10 +2539,10 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) if (masked) { svm->vcpu.arch.hflags |= HF_NMI_MASK; - svm->vmcb->control.intercept |= (1UL << INTERCEPT_IRET); + svm->vmcb->control.intercept |= (1ULL << INTERCEPT_IRET); } else { svm->vcpu.arch.hflags &= ~HF_NMI_MASK; - svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET); + svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_IRET); } } diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index bc933cfb4e66..edca080407a5 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -2703,8 +2703,7 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu) return 0; return !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & - (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS | - GUEST_INTR_STATE_NMI)); + (GUEST_INTR_STATE_MOV_SS | 
GUEST_INTR_STATE_NMI)); } static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu) @@ -3660,8 +3659,11 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) /* We need to handle NMIs before interrupts are enabled */ if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR && - (exit_intr_info & INTR_INFO_VALID_MASK)) + (exit_intr_info & INTR_INFO_VALID_MASK)) { + kvm_before_handle_nmi(&vmx->vcpu); asm("int $2"); + kvm_after_handle_nmi(&vmx->vcpu); + } idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3c4ca98ad27f..dd9bc8fb81ab 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -40,6 +40,7 @@ #include <linux/user-return-notifier.h> #include <linux/srcu.h> #include <linux/slab.h> +#include <linux/perf_event.h> #include <trace/events/kvm.h> #undef TRACE_INCLUDE_FILE #define CREATE_TRACE_POINTS @@ -1712,6 +1713,7 @@ static int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, if (copy_from_user(cpuid_entries, entries, cpuid->nent * sizeof(struct kvm_cpuid_entry))) goto out_free; + vcpu_load(vcpu); for (i = 0; i < cpuid->nent; i++) { vcpu->arch.cpuid_entries[i].function = cpuid_entries[i].function; vcpu->arch.cpuid_entries[i].eax = cpuid_entries[i].eax; @@ -1729,6 +1731,7 @@ static int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, r = 0; kvm_apic_set_version(vcpu); kvm_x86_ops->cpuid_update(vcpu); + vcpu_put(vcpu); out_free: vfree(cpuid_entries); @@ -1749,9 +1752,11 @@ static int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu, if (copy_from_user(&vcpu->arch.cpuid_entries, entries, cpuid->nent * sizeof(struct kvm_cpuid_entry2))) goto out; + vcpu_load(vcpu); vcpu->arch.cpuid_nent = cpuid->nent; kvm_apic_set_version(vcpu); kvm_x86_ops->cpuid_update(vcpu); + vcpu_put(vcpu); return 0; out: @@ -3743,6 +3748,51 @@ static void kvm_timer_init(void) } } +static DEFINE_PER_CPU(struct kvm_vcpu *, current_vcpu); + +static int kvm_is_in_guest(void) +{ + return percpu_read(current_vcpu) != NULL; +} + +static int kvm_is_user_mode(void) +{ + int user_mode = 3; + + if (percpu_read(current_vcpu)) + user_mode = kvm_x86_ops->get_cpl(percpu_read(current_vcpu)); + + return user_mode != 0; +} + +static unsigned long kvm_get_guest_ip(void) +{ + unsigned long ip = 0; + + if (percpu_read(current_vcpu)) + ip = kvm_rip_read(percpu_read(current_vcpu)); + + return ip; +} + +static struct perf_guest_info_callbacks kvm_guest_cbs = { + .is_in_guest = kvm_is_in_guest, + .is_user_mode = kvm_is_user_mode, + .get_guest_ip = kvm_get_guest_ip, +}; + +void kvm_before_handle_nmi(struct kvm_vcpu *vcpu) +{ + percpu_write(current_vcpu, vcpu); +} +EXPORT_SYMBOL_GPL(kvm_before_handle_nmi); + +void kvm_after_handle_nmi(struct kvm_vcpu *vcpu) +{ + percpu_write(current_vcpu, NULL); +} +EXPORT_SYMBOL_GPL(kvm_after_handle_nmi); + int kvm_arch_init(void *opaque) { int r; @@ -3779,6 +3829,8 @@ int kvm_arch_init(void *opaque) kvm_timer_init(); + perf_register_guest_info_callbacks(&kvm_guest_cbs); + return 0; out: @@ -3787,6 +3839,8 @@ out: void kvm_arch_exit(void) { + perf_unregister_guest_info_callbacks(&kvm_guest_cbs); + if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) cpufreq_unregister_notifier(&kvmclock_cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER); diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 2d101639bd8d..b7a404722d2b 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -65,4 +65,7 @@ static inline int is_paging(struct kvm_vcpu *vcpu) return kvm_read_cr0_bits(vcpu, X86_CR0_PG); } +void kvm_before_handle_nmi(struct 
kvm_vcpu *vcpu); +void kvm_after_handle_nmi(struct kvm_vcpu *vcpu); + #endif diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile index 419386c24b82..f871e04b6965 100644 --- a/arch/x86/lib/Makefile +++ b/arch/x86/lib/Makefile @@ -20,17 +20,18 @@ lib-y := delay.o lib-y += thunk_$(BITS).o lib-y += usercopy_$(BITS).o getuser.o putuser.o lib-y += memcpy_$(BITS).o -lib-$(CONFIG_KPROBES) += insn.o inat.o +lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o obj-y += msr.o msr-reg.o msr-reg-export.o ifeq ($(CONFIG_X86_32),y) obj-y += atomic64_32.o + lib-y += atomic64_cx8_32.o lib-y += checksum_32.o lib-y += strstr_32.o lib-y += semaphore_32.o string_32.o ifneq ($(CONFIG_X86_CMPXCHG64),y) - lib-y += cmpxchg8b_emu.o + lib-y += cmpxchg8b_emu.o atomic64_386_32.o endif lib-$(CONFIG_X86_USE_3DNOW) += mmx_32.o else diff --git a/arch/x86/lib/atomic64_32.c b/arch/x86/lib/atomic64_32.c index 824fa0be55a3..540179e8e9fa 100644 --- a/arch/x86/lib/atomic64_32.c +++ b/arch/x86/lib/atomic64_32.c @@ -6,225 +6,54 @@ #include <asm/cmpxchg.h> #include <asm/atomic.h> -static noinline u64 cmpxchg8b(u64 *ptr, u64 old, u64 new) -{ - u32 low = new; - u32 high = new >> 32; - - asm volatile( - LOCK_PREFIX "cmpxchg8b %1\n" - : "+A" (old), "+m" (*ptr) - : "b" (low), "c" (high) - ); - return old; -} - -u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old_val, u64 new_val) -{ - return cmpxchg8b(&ptr->counter, old_val, new_val); -} -EXPORT_SYMBOL(atomic64_cmpxchg); - -/** - * atomic64_xchg - xchg atomic64 variable - * @ptr: pointer to type atomic64_t - * @new_val: value to assign - * - * Atomically xchgs the value of @ptr to @new_val and returns - * the old value. - */ -u64 atomic64_xchg(atomic64_t *ptr, u64 new_val) -{ - /* - * Try first with a (possibly incorrect) assumption about - * what we have there. We'll do two loops most likely, - * but we'll get an ownership MESI transaction straight away - * instead of a read transaction followed by a - * flush-for-ownership transaction: - */ - u64 old_val, real_val = 0; - - do { - old_val = real_val; - - real_val = atomic64_cmpxchg(ptr, old_val, new_val); - - } while (real_val != old_val); - - return old_val; -} -EXPORT_SYMBOL(atomic64_xchg); - -/** - * atomic64_set - set atomic64 variable - * @ptr: pointer to type atomic64_t - * @new_val: value to assign - * - * Atomically sets the value of @ptr to @new_val. - */ -void atomic64_set(atomic64_t *ptr, u64 new_val) -{ - atomic64_xchg(ptr, new_val); -} -EXPORT_SYMBOL(atomic64_set); - -/** -EXPORT_SYMBOL(atomic64_read); - * atomic64_add_return - add and return - * @delta: integer value to add - * @ptr: pointer to type atomic64_t - * - * Atomically adds @delta to @ptr and returns @delta + *@ptr - */ -noinline u64 atomic64_add_return(u64 delta, atomic64_t *ptr) -{ - /* - * Try first with a (possibly incorrect) assumption about - * what we have there. 
We'll do two loops most likely, - * but we'll get an ownership MESI transaction straight away - * instead of a read transaction followed by a - * flush-for-ownership transaction: - */ - u64 old_val, new_val, real_val = 0; - - do { - old_val = real_val; - new_val = old_val + delta; - - real_val = atomic64_cmpxchg(ptr, old_val, new_val); - - } while (real_val != old_val); - - return new_val; -} -EXPORT_SYMBOL(atomic64_add_return); - -u64 atomic64_sub_return(u64 delta, atomic64_t *ptr) -{ - return atomic64_add_return(-delta, ptr); -} -EXPORT_SYMBOL(atomic64_sub_return); - -u64 atomic64_inc_return(atomic64_t *ptr) -{ - return atomic64_add_return(1, ptr); -} -EXPORT_SYMBOL(atomic64_inc_return); - -u64 atomic64_dec_return(atomic64_t *ptr) -{ - return atomic64_sub_return(1, ptr); -} -EXPORT_SYMBOL(atomic64_dec_return); - -/** - * atomic64_add - add integer to atomic64 variable - * @delta: integer value to add - * @ptr: pointer to type atomic64_t - * - * Atomically adds @delta to @ptr. - */ -void atomic64_add(u64 delta, atomic64_t *ptr) -{ - atomic64_add_return(delta, ptr); -} -EXPORT_SYMBOL(atomic64_add); - -/** - * atomic64_sub - subtract the atomic64 variable - * @delta: integer value to subtract - * @ptr: pointer to type atomic64_t - * - * Atomically subtracts @delta from @ptr. - */ -void atomic64_sub(u64 delta, atomic64_t *ptr) -{ - atomic64_add(-delta, ptr); -} -EXPORT_SYMBOL(atomic64_sub); - -/** - * atomic64_sub_and_test - subtract value from variable and test result - * @delta: integer value to subtract - * @ptr: pointer to type atomic64_t - * - * Atomically subtracts @delta from @ptr and returns - * true if the result is zero, or false for all - * other cases. - */ -int atomic64_sub_and_test(u64 delta, atomic64_t *ptr) -{ - u64 new_val = atomic64_sub_return(delta, ptr); - - return new_val == 0; -} -EXPORT_SYMBOL(atomic64_sub_and_test); - -/** - * atomic64_inc - increment atomic64 variable - * @ptr: pointer to type atomic64_t - * - * Atomically increments @ptr by 1. - */ -void atomic64_inc(atomic64_t *ptr) -{ - atomic64_add(1, ptr); -} -EXPORT_SYMBOL(atomic64_inc); - -/** - * atomic64_dec - decrement atomic64 variable - * @ptr: pointer to type atomic64_t - * - * Atomically decrements @ptr by 1. - */ -void atomic64_dec(atomic64_t *ptr) -{ - atomic64_sub(1, ptr); -} -EXPORT_SYMBOL(atomic64_dec); - -/** - * atomic64_dec_and_test - decrement and test - * @ptr: pointer to type atomic64_t - * - * Atomically decrements @ptr by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -int atomic64_dec_and_test(atomic64_t *ptr) -{ - return atomic64_sub_and_test(1, ptr); -} -EXPORT_SYMBOL(atomic64_dec_and_test); - -/** - * atomic64_inc_and_test - increment and test - * @ptr: pointer to type atomic64_t - * - * Atomically increments @ptr by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -int atomic64_inc_and_test(atomic64_t *ptr) -{ - return atomic64_sub_and_test(-1, ptr); -} -EXPORT_SYMBOL(atomic64_inc_and_test); - -/** - * atomic64_add_negative - add and test if negative - * @delta: integer value to add - * @ptr: pointer to type atomic64_t - * - * Atomically adds @delta to @ptr and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
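All of the C helpers being removed above reduce to the same compare-and-exchange retry loop, which the new cmpxchg8b assembly keeps: read the current 64-bit value, compute the desired result, and retry the cmpxchg until no other CPU has changed the variable in between. A minimal sketch of the idiom, assuming the kernel's cmpxchg64() helper:

    #include <linux/types.h>
    #include <asm/cmpxchg.h>     /* assumed location of cmpxchg64() */

    static inline u64 add_return_sketch(u64 delta, u64 *p)
    {
            u64 old, new;

            do {
                    old = *p;                /* optimistic read */
                    new = old + delta;
            } while (cmpxchg64(p, old, new) != old);   /* retry if we raced */

            return new;
    }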
- */ -int atomic64_add_negative(u64 delta, atomic64_t *ptr) -{ - s64 new_val = atomic64_add_return(delta, ptr); - - return new_val < 0; -} -EXPORT_SYMBOL(atomic64_add_negative); +long long atomic64_read_cx8(long long, const atomic64_t *v); +EXPORT_SYMBOL(atomic64_read_cx8); +long long atomic64_set_cx8(long long, const atomic64_t *v); +EXPORT_SYMBOL(atomic64_set_cx8); +long long atomic64_xchg_cx8(long long, unsigned high); +EXPORT_SYMBOL(atomic64_xchg_cx8); +long long atomic64_add_return_cx8(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_add_return_cx8); +long long atomic64_sub_return_cx8(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_sub_return_cx8); +long long atomic64_inc_return_cx8(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_inc_return_cx8); +long long atomic64_dec_return_cx8(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_dec_return_cx8); +long long atomic64_dec_if_positive_cx8(atomic64_t *v); +EXPORT_SYMBOL(atomic64_dec_if_positive_cx8); +int atomic64_inc_not_zero_cx8(atomic64_t *v); +EXPORT_SYMBOL(atomic64_inc_not_zero_cx8); +int atomic64_add_unless_cx8(atomic64_t *v, long long a, long long u); +EXPORT_SYMBOL(atomic64_add_unless_cx8); + +#ifndef CONFIG_X86_CMPXCHG64 +long long atomic64_read_386(long long, const atomic64_t *v); +EXPORT_SYMBOL(atomic64_read_386); +long long atomic64_set_386(long long, const atomic64_t *v); +EXPORT_SYMBOL(atomic64_set_386); +long long atomic64_xchg_386(long long, unsigned high); +EXPORT_SYMBOL(atomic64_xchg_386); +long long atomic64_add_return_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_add_return_386); +long long atomic64_sub_return_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_sub_return_386); +long long atomic64_inc_return_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_inc_return_386); +long long atomic64_dec_return_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_dec_return_386); +long long atomic64_add_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_add_386); +long long atomic64_sub_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_sub_386); +long long atomic64_inc_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_inc_386); +long long atomic64_dec_386(long long a, atomic64_t *v); +EXPORT_SYMBOL(atomic64_dec_386); +long long atomic64_dec_if_positive_386(atomic64_t *v); +EXPORT_SYMBOL(atomic64_dec_if_positive_386); +int atomic64_inc_not_zero_386(atomic64_t *v); +EXPORT_SYMBOL(atomic64_inc_not_zero_386); +int atomic64_add_unless_386(atomic64_t *v, long long a, long long u); +EXPORT_SYMBOL(atomic64_add_unless_386); +#endif diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S new file mode 100644 index 000000000000..4a5979aa6883 --- /dev/null +++ b/arch/x86/lib/atomic64_386_32.S @@ -0,0 +1,174 @@ +/* + * atomic64_t for 386/486 + * + * Copyright © 2010 Luca Barbieri + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
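From a caller's point of view the interface is unchanged: code keeps using the atomic64_* wrappers, which now end up in the _cx8 routines (or, per the Makefile change above, in the _386 fallbacks on kernels built without CONFIG_X86_CMPXCHG64). A small usage sketch; the counter and threshold are made up:

    #include <linux/kernel.h>
    #include <asm/atomic.h>

    static atomic64_t total_bytes = ATOMIC64_INIT(0);

    static void account_bytes(long long len)
    {
            atomic64_add(len, &total_bytes);

            if (atomic64_read(&total_bytes) > (1LL << 32))
                    printk(KERN_INFO "more than 4 GiB accounted\n");
    }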
+ */ + +#include <linux/linkage.h> +#include <asm/alternative-asm.h> +#include <asm/dwarf2.h> + +/* if you want SMP support, implement these with real spinlocks */ +.macro LOCK reg + pushfl + CFI_ADJUST_CFA_OFFSET 4 + cli +.endm + +.macro UNLOCK reg + popfl + CFI_ADJUST_CFA_OFFSET -4 +.endm + +.macro BEGIN func reg +$v = \reg + +ENTRY(atomic64_\func\()_386) + CFI_STARTPROC + LOCK $v + +.macro RETURN + UNLOCK $v + ret +.endm + +.macro END_ + CFI_ENDPROC +ENDPROC(atomic64_\func\()_386) +.purgem RETURN +.purgem END_ +.purgem END +.endm + +.macro END +RETURN +END_ +.endm +.endm + +BEGIN read %ecx + movl ($v), %eax + movl 4($v), %edx +END + +BEGIN set %esi + movl %ebx, ($v) + movl %ecx, 4($v) +END + +BEGIN xchg %esi + movl ($v), %eax + movl 4($v), %edx + movl %ebx, ($v) + movl %ecx, 4($v) +END + +BEGIN add %ecx + addl %eax, ($v) + adcl %edx, 4($v) +END + +BEGIN add_return %ecx + addl ($v), %eax + adcl 4($v), %edx + movl %eax, ($v) + movl %edx, 4($v) +END + +BEGIN sub %ecx + subl %eax, ($v) + sbbl %edx, 4($v) +END + +BEGIN sub_return %ecx + negl %edx + negl %eax + sbbl $0, %edx + addl ($v), %eax + adcl 4($v), %edx + movl %eax, ($v) + movl %edx, 4($v) +END + +BEGIN inc %esi + addl $1, ($v) + adcl $0, 4($v) +END + +BEGIN inc_return %esi + movl ($v), %eax + movl 4($v), %edx + addl $1, %eax + adcl $0, %edx + movl %eax, ($v) + movl %edx, 4($v) +END + +BEGIN dec %esi + subl $1, ($v) + sbbl $0, 4($v) +END + +BEGIN dec_return %esi + movl ($v), %eax + movl 4($v), %edx + subl $1, %eax + sbbl $0, %edx + movl %eax, ($v) + movl %edx, 4($v) +END + +BEGIN add_unless %ecx + addl %eax, %esi + adcl %edx, %edi + addl ($v), %eax + adcl 4($v), %edx + cmpl %eax, %esi + je 3f +1: + movl %eax, ($v) + movl %edx, 4($v) + movl $1, %eax +2: +RETURN +3: + cmpl %edx, %edi + jne 1b + xorl %eax, %eax + jmp 2b +END_ + +BEGIN inc_not_zero %esi + movl ($v), %eax + movl 4($v), %edx + testl %eax, %eax + je 3f +1: + addl $1, %eax + adcl $0, %edx + movl %eax, ($v) + movl %edx, 4($v) + movl $1, %eax +2: +RETURN +3: + testl %edx, %edx + jne 1b + jmp 2b +END_ + +BEGIN dec_if_positive %esi + movl ($v), %eax + movl 4($v), %edx + subl $1, %eax + sbbl $0, %edx + js 1f + movl %eax, ($v) + movl %edx, 4($v) +1: +END diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S new file mode 100644 index 000000000000..71e080de3352 --- /dev/null +++ b/arch/x86/lib/atomic64_cx8_32.S @@ -0,0 +1,224 @@ +/* + * atomic64_t for 586+ + * + * Copyright © 2010 Luca Barbieri + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
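The 386/486 fallback above cannot use cmpxchg8b, so its LOCK/UNLOCK macros simply save EFLAGS and disable interrupts around each operation (pushfl; cli ... popfl); as the file's own comment notes, that is only sufficient because such CPUs are uniprocessor. A C-level analogy of what each routine does, shown here for add_return only (the real code is register-based assembly):

    #include <linux/irqflags.h>

    static long long add_return_386_sketch(long long delta, long long *v)
    {
            unsigned long flags;
            long long ret;

            local_irq_save(flags);           /* pushfl ; cli */
            *v += delta;
            ret = *v;
            local_irq_restore(flags);        /* popfl */

            return ret;
    }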
+ */ + +#include <linux/linkage.h> +#include <asm/alternative-asm.h> +#include <asm/dwarf2.h> + +.macro SAVE reg + pushl %\reg + CFI_ADJUST_CFA_OFFSET 4 + CFI_REL_OFFSET \reg, 0 +.endm + +.macro RESTORE reg + popl %\reg + CFI_ADJUST_CFA_OFFSET -4 + CFI_RESTORE \reg +.endm + +.macro read64 reg + movl %ebx, %eax + movl %ecx, %edx +/* we need LOCK_PREFIX since otherwise cmpxchg8b always does the write */ + LOCK_PREFIX + cmpxchg8b (\reg) +.endm + +ENTRY(atomic64_read_cx8) + CFI_STARTPROC + + read64 %ecx + ret + CFI_ENDPROC +ENDPROC(atomic64_read_cx8) + +ENTRY(atomic64_set_cx8) + CFI_STARTPROC + +1: +/* we don't need LOCK_PREFIX since aligned 64-bit writes + * are atomic on 586 and newer */ + cmpxchg8b (%esi) + jne 1b + + ret + CFI_ENDPROC +ENDPROC(atomic64_set_cx8) + +ENTRY(atomic64_xchg_cx8) + CFI_STARTPROC + + movl %ebx, %eax + movl %ecx, %edx +1: + LOCK_PREFIX + cmpxchg8b (%esi) + jne 1b + + ret + CFI_ENDPROC +ENDPROC(atomic64_xchg_cx8) + +.macro addsub_return func ins insc +ENTRY(atomic64_\func\()_return_cx8) + CFI_STARTPROC + SAVE ebp + SAVE ebx + SAVE esi + SAVE edi + + movl %eax, %esi + movl %edx, %edi + movl %ecx, %ebp + + read64 %ebp +1: + movl %eax, %ebx + movl %edx, %ecx + \ins\()l %esi, %ebx + \insc\()l %edi, %ecx + LOCK_PREFIX + cmpxchg8b (%ebp) + jne 1b + +10: + movl %ebx, %eax + movl %ecx, %edx + RESTORE edi + RESTORE esi + RESTORE ebx + RESTORE ebp + ret + CFI_ENDPROC +ENDPROC(atomic64_\func\()_return_cx8) +.endm + +addsub_return add add adc +addsub_return sub sub sbb + +.macro incdec_return func ins insc +ENTRY(atomic64_\func\()_return_cx8) + CFI_STARTPROC + SAVE ebx + + read64 %esi +1: + movl %eax, %ebx + movl %edx, %ecx + \ins\()l $1, %ebx + \insc\()l $0, %ecx + LOCK_PREFIX + cmpxchg8b (%esi) + jne 1b + +10: + movl %ebx, %eax + movl %ecx, %edx + RESTORE ebx + ret + CFI_ENDPROC +ENDPROC(atomic64_\func\()_return_cx8) +.endm + +incdec_return inc add adc +incdec_return dec sub sbb + +ENTRY(atomic64_dec_if_positive_cx8) + CFI_STARTPROC + SAVE ebx + + read64 %esi +1: + movl %eax, %ebx + movl %edx, %ecx + subl $1, %ebx + sbb $0, %ecx + js 2f + LOCK_PREFIX + cmpxchg8b (%esi) + jne 1b + +2: + movl %ebx, %eax + movl %ecx, %edx + RESTORE ebx + ret + CFI_ENDPROC +ENDPROC(atomic64_dec_if_positive_cx8) + +ENTRY(atomic64_add_unless_cx8) + CFI_STARTPROC + SAVE ebp + SAVE ebx +/* these just push these two parameters on the stack */ + SAVE edi + SAVE esi + + movl %ecx, %ebp + movl %eax, %esi + movl %edx, %edi + + read64 %ebp +1: + cmpl %eax, 0(%esp) + je 4f +2: + movl %eax, %ebx + movl %edx, %ecx + addl %esi, %ebx + adcl %edi, %ecx + LOCK_PREFIX + cmpxchg8b (%ebp) + jne 1b + + movl $1, %eax +3: + addl $8, %esp + CFI_ADJUST_CFA_OFFSET -8 + RESTORE ebx + RESTORE ebp + ret +4: + cmpl %edx, 4(%esp) + jne 2b + xorl %eax, %eax + jmp 3b + CFI_ENDPROC +ENDPROC(atomic64_add_unless_cx8) + +ENTRY(atomic64_inc_not_zero_cx8) + CFI_STARTPROC + SAVE ebx + + read64 %esi +1: + testl %eax, %eax + je 4f +2: + movl %eax, %ebx + movl %edx, %ecx + addl $1, %ebx + adcl $0, %ecx + LOCK_PREFIX + cmpxchg8b (%esi) + jne 1b + + movl $1, %eax +3: + RESTORE ebx + ret +4: + testl %edx, %edx + jne 2b + jmp 3b + CFI_ENDPROC +ENDPROC(atomic64_inc_not_zero_cx8) diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c index aa0987088774..dc8adad10a2f 100644 --- a/arch/x86/math-emu/fpu_aux.c +++ b/arch/x86/math-emu/fpu_aux.c @@ -30,10 +30,10 @@ static void fclex(void) } /* Needs to be externally visible */ -void finit_task(struct task_struct *tsk) +void finit_soft_fpu(struct i387_soft_struct *soft) { - struct 
i387_soft_struct *soft = &tsk->thread.xstate->soft; struct address *oaddr, *iaddr; + memset(soft, 0, sizeof(*soft)); soft->cwd = 0x037f; soft->swd = 0; soft->ftop = 0; /* We don't keep top in the status word internally. */ @@ -52,7 +52,7 @@ void finit_task(struct task_struct *tsk) void finit(void) { - finit_task(current); + finit_soft_fpu(¤t->thread.fpu.state->soft); } /* diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c index 5d87f586f8d7..7718541541d4 100644 --- a/arch/x86/math-emu/fpu_entry.c +++ b/arch/x86/math-emu/fpu_entry.c @@ -681,7 +681,7 @@ int fpregs_soft_set(struct task_struct *target, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { - struct i387_soft_struct *s387 = &target->thread.xstate->soft; + struct i387_soft_struct *s387 = &target->thread.fpu.state->soft; void *space = s387->st_space; int ret; int offset, other, i, tags, regnr, tag, newtop; @@ -733,7 +733,7 @@ int fpregs_soft_get(struct task_struct *target, unsigned int pos, unsigned int count, void *kbuf, void __user *ubuf) { - struct i387_soft_struct *s387 = &target->thread.xstate->soft; + struct i387_soft_struct *s387 = &target->thread.fpu.state->soft; const void *space = s387->st_space; int ret; int offset = (S387->ftop & 7) * 10, other = 80 - offset; diff --git a/arch/x86/math-emu/fpu_system.h b/arch/x86/math-emu/fpu_system.h index 50fa0ec2c8a5..2c614410a5f3 100644 --- a/arch/x86/math-emu/fpu_system.h +++ b/arch/x86/math-emu/fpu_system.h @@ -31,7 +31,7 @@ #define SEG_EXPAND_DOWN(s) (((s).b & ((1 << 11) | (1 << 10))) \ == (1 << 10)) -#define I387 (current->thread.xstate) +#define I387 (current->thread.fpu.state) #define FPU_info (I387->soft.info) #define FPU_CS (*(unsigned short *) &(FPU_info->regs->cs)) diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 06630d26e56d..a4c768397baa 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -6,6 +6,7 @@ nostackp := $(call cc-option, -fno-stack-protector) CFLAGS_physaddr.o := $(nostackp) CFLAGS_setup_nx.o := $(nostackp) +obj-$(CONFIG_X86_PAT) += pat_rbtree.o obj-$(CONFIG_SMP) += tlb.o obj-$(CONFIG_X86_32) += pgtable_32.o iomap_32.o diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c index edc8b95afc1a..bbe5502ee1cb 100644 --- a/arch/x86/mm/pat.c +++ b/arch/x86/mm/pat.c @@ -30,6 +30,8 @@ #include <asm/pat.h> #include <asm/io.h> +#include "pat_internal.h" + #ifdef CONFIG_X86_PAT int __read_mostly pat_enabled = 1; @@ -53,19 +55,15 @@ static inline void pat_disable(const char *reason) #endif -static int debug_enable; +int pat_debug_enable; static int __init pat_debug_setup(char *str) { - debug_enable = 1; + pat_debug_enable = 1; return 0; } __setup("debugpat", pat_debug_setup); -#define dprintk(fmt, arg...) \ - do { if (debug_enable) printk(KERN_INFO fmt, ##arg); } while (0) - - static u64 __read_mostly boot_pat_state; enum { @@ -132,84 +130,7 @@ void pat_init(void) #undef PAT -static char *cattr_name(unsigned long flags) -{ - switch (flags & _PAGE_CACHE_MASK) { - case _PAGE_CACHE_UC: return "uncached"; - case _PAGE_CACHE_UC_MINUS: return "uncached-minus"; - case _PAGE_CACHE_WB: return "write-back"; - case _PAGE_CACHE_WC: return "write-combining"; - default: return "broken"; - } -} - -/* - * The global memtype list keeps track of memory type for specific - * physical memory areas. Conflicting memory types in different - * mappings can cause CPU cache corruption. To avoid this we keep track. 
- * - * The list is sorted based on starting address and can contain multiple - * entries for each address (this allows reference counting for overlapping - * areas). All the aliases have the same cache attributes of course. - * Zero attributes are represented as holes. - * - * The data structure is a list that is also organized as an rbtree - * sorted on the start address of memtype range. - * - * memtype_lock protects both the linear list and rbtree. - */ - -struct memtype { - u64 start; - u64 end; - unsigned long type; - struct list_head nd; - struct rb_node rb; -}; - -static struct rb_root memtype_rbroot = RB_ROOT; -static LIST_HEAD(memtype_list); -static DEFINE_SPINLOCK(memtype_lock); /* protects memtype list */ - -static struct memtype *memtype_rb_search(struct rb_root *root, u64 start) -{ - struct rb_node *node = root->rb_node; - struct memtype *last_lower = NULL; - - while (node) { - struct memtype *data = container_of(node, struct memtype, rb); - - if (data->start < start) { - last_lower = data; - node = node->rb_right; - } else if (data->start > start) { - node = node->rb_left; - } else - return data; - } - - /* Will return NULL if there is no entry with its start <= start */ - return last_lower; -} - -static void memtype_rb_insert(struct rb_root *root, struct memtype *data) -{ - struct rb_node **new = &(root->rb_node); - struct rb_node *parent = NULL; - - while (*new) { - struct memtype *this = container_of(*new, struct memtype, rb); - - parent = *new; - if (data->start <= this->start) - new = &((*new)->rb_left); - else if (data->start > this->start) - new = &((*new)->rb_right); - } - - rb_link_node(&data->rb, parent, new); - rb_insert_color(&data->rb, root); -} +static DEFINE_SPINLOCK(memtype_lock); /* protects memtype accesses */ /* * Does intersection of PAT memory type and MTRR memory type and returns @@ -237,33 +158,6 @@ static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type) return req_type; } -static int -chk_conflict(struct memtype *new, struct memtype *entry, unsigned long *type) -{ - if (new->type != entry->type) { - if (type) { - new->type = entry->type; - *type = entry->type; - } else - goto conflict; - } - - /* check overlaps with more than one entry in the list */ - list_for_each_entry_continue(entry, &memtype_list, nd) { - if (new->end <= entry->start) - break; - else if (new->type != entry->type) - goto conflict; - } - return 0; - - conflict: - printk(KERN_INFO "%s:%d conflicting memory types " - "%Lx-%Lx %s<->%s\n", current->comm, current->pid, new->start, - new->end, cattr_name(new->type), cattr_name(entry->type)); - return -EBUSY; -} - static int pat_pagerange_is_ram(unsigned long start, unsigned long end) { int ram_page = 0, not_rampage = 0; @@ -296,8 +190,6 @@ static int pat_pagerange_is_ram(unsigned long start, unsigned long end) * Here we do two pass: * - Find the memtype of all the pages in the range, look for any conflicts * - In case of no conflicts, set the new memtype for pages in the range - * - * Caller must hold memtype_lock for atomicity. 
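For reference, the conflict rule enforced by the old chk_conflict() above and by the new rbt_memtype_check_insert() is the same: overlapping reservations are allowed only if they agree on the memory type (or if the caller passes a new_type pointer and accepts being downgraded to the existing type); otherwise the request fails with -EBUSY. An illustration only, assuming the addresses are an MMIO range rather than RAM:

    #include <asm/pat.h>
    #include <asm/pgtable_types.h>

    static int pat_conflict_demo(void)
    {
            /* two overlapping write-combining reservations: both succeed */
            reserve_memtype(0x1000, 0x3000, _PAGE_CACHE_WC, NULL);
            reserve_memtype(0x2000, 0x4000, _PAGE_CACHE_WC, NULL);

            /* a different type over the same area, with no new_type to fall
             * back on: rejected with -EBUSY, and a "conflicting memory types"
             * message is logged */
            return reserve_memtype(0x2000, 0x4000, _PAGE_CACHE_UC_MINUS, NULL);
    }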
*/ static int reserve_ram_pages_type(u64 start, u64 end, unsigned long req_type, unsigned long *new_type) @@ -364,9 +256,8 @@ static int free_ram_pages_type(u64 start, u64 end) int reserve_memtype(u64 start, u64 end, unsigned long req_type, unsigned long *new_type) { - struct memtype *new, *entry; + struct memtype *new; unsigned long actual_type; - struct list_head *where; int is_range_ram; int err = 0; @@ -404,9 +295,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type, is_range_ram = pat_pagerange_is_ram(start, end); if (is_range_ram == 1) { - spin_lock(&memtype_lock); err = reserve_ram_pages_type(start, end, req_type, new_type); - spin_unlock(&memtype_lock); return err; } else if (is_range_ram < 0) { @@ -423,42 +312,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type, spin_lock(&memtype_lock); - /* Search for existing mapping that overlaps the current range */ - where = NULL; - list_for_each_entry(entry, &memtype_list, nd) { - if (end <= entry->start) { - where = entry->nd.prev; - break; - } else if (start <= entry->start) { /* end > entry->start */ - err = chk_conflict(new, entry, new_type); - if (!err) { - dprintk("Overlap at 0x%Lx-0x%Lx\n", - entry->start, entry->end); - where = entry->nd.prev; - } - break; - } else if (start < entry->end) { /* start > entry->start */ - err = chk_conflict(new, entry, new_type); - if (!err) { - dprintk("Overlap at 0x%Lx-0x%Lx\n", - entry->start, entry->end); - - /* - * Move to right position in the linked - * list to add this new entry - */ - list_for_each_entry_continue(entry, - &memtype_list, nd) { - if (start <= entry->start) { - where = entry->nd.prev; - break; - } - } - } - break; - } - } - + err = rbt_memtype_check_insert(new, new_type); if (err) { printk(KERN_INFO "reserve_memtype failed 0x%Lx-0x%Lx, " "track %s, req %s\n", @@ -469,13 +323,6 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type, return err; } - if (where) - list_add(&new->nd, where); - else - list_add_tail(&new->nd, &memtype_list); - - memtype_rb_insert(&memtype_rbroot, new); - spin_unlock(&memtype_lock); dprintk("reserve_memtype added 0x%Lx-0x%Lx, track %s, req %s, ret %s\n", @@ -487,7 +334,6 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type, int free_memtype(u64 start, u64 end) { - struct memtype *entry, *saved_entry; int err = -EINVAL; int is_range_ram; @@ -501,9 +347,7 @@ int free_memtype(u64 start, u64 end) is_range_ram = pat_pagerange_is_ram(start, end); if (is_range_ram == 1) { - spin_lock(&memtype_lock); err = free_ram_pages_type(start, end); - spin_unlock(&memtype_lock); return err; } else if (is_range_ram < 0) { @@ -511,46 +355,7 @@ int free_memtype(u64 start, u64 end) } spin_lock(&memtype_lock); - - entry = memtype_rb_search(&memtype_rbroot, start); - if (unlikely(entry == NULL)) - goto unlock_ret; - - /* - * Saved entry points to an entry with start same or less than what - * we searched for. 
Now go through the list in both directions to look - * for the entry that matches with both start and end, with list stored - * in sorted start address - */ - saved_entry = entry; - list_for_each_entry_from(entry, &memtype_list, nd) { - if (entry->start == start && entry->end == end) { - rb_erase(&entry->rb, &memtype_rbroot); - list_del(&entry->nd); - kfree(entry); - err = 0; - break; - } else if (entry->start > start) { - break; - } - } - - if (!err) - goto unlock_ret; - - entry = saved_entry; - list_for_each_entry_reverse(entry, &memtype_list, nd) { - if (entry->start == start && entry->end == end) { - rb_erase(&entry->rb, &memtype_rbroot); - list_del(&entry->nd); - kfree(entry); - err = 0; - break; - } else if (entry->start < start) { - break; - } - } -unlock_ret: + err = rbt_memtype_erase(start, end); spin_unlock(&memtype_lock); if (err) { @@ -583,10 +388,8 @@ static unsigned long lookup_memtype(u64 paddr) if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) { struct page *page; - spin_lock(&memtype_lock); page = pfn_to_page(paddr >> PAGE_SHIFT); rettype = get_page_memtype(page); - spin_unlock(&memtype_lock); /* * -1 from get_page_memtype() implies RAM page is in its * default state and not reserved, and hence of type WB @@ -599,7 +402,7 @@ static unsigned long lookup_memtype(u64 paddr) spin_lock(&memtype_lock); - entry = memtype_rb_search(&memtype_rbroot, paddr); + entry = rbt_memtype_lookup(paddr); if (entry != NULL) rettype = entry->type; else @@ -936,29 +739,25 @@ EXPORT_SYMBOL_GPL(pgprot_writecombine); #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_X86_PAT) -/* get Nth element of the linked list */ static struct memtype *memtype_get_idx(loff_t pos) { - struct memtype *list_node, *print_entry; - int i = 1; + struct memtype *print_entry; + int ret; - print_entry = kmalloc(sizeof(struct memtype), GFP_KERNEL); + print_entry = kzalloc(sizeof(struct memtype), GFP_KERNEL); if (!print_entry) return NULL; spin_lock(&memtype_lock); - list_for_each_entry(list_node, &memtype_list, nd) { - if (pos == i) { - *print_entry = *list_node; - spin_unlock(&memtype_lock); - return print_entry; - } - ++i; - } + ret = rbt_memtype_copy_nth_element(print_entry, pos); spin_unlock(&memtype_lock); - kfree(print_entry); - return NULL; + if (!ret) { + return print_entry; + } else { + kfree(print_entry); + return NULL; + } } static void *memtype_seq_start(struct seq_file *seq, loff_t *pos) diff --git a/arch/x86/mm/pat_internal.h b/arch/x86/mm/pat_internal.h new file mode 100644 index 000000000000..4f39eefa3e61 --- /dev/null +++ b/arch/x86/mm/pat_internal.h @@ -0,0 +1,46 @@ +#ifndef __PAT_INTERNAL_H_ +#define __PAT_INTERNAL_H_ + +extern int pat_debug_enable; + +#define dprintk(fmt, arg...) 
\ + do { if (pat_debug_enable) printk(KERN_INFO fmt, ##arg); } while (0) + +struct memtype { + u64 start; + u64 end; + u64 subtree_max_end; + unsigned long type; + struct rb_node rb; +}; + +static inline char *cattr_name(unsigned long flags) +{ + switch (flags & _PAGE_CACHE_MASK) { + case _PAGE_CACHE_UC: return "uncached"; + case _PAGE_CACHE_UC_MINUS: return "uncached-minus"; + case _PAGE_CACHE_WB: return "write-back"; + case _PAGE_CACHE_WC: return "write-combining"; + default: return "broken"; + } +} + +#ifdef CONFIG_X86_PAT +extern int rbt_memtype_check_insert(struct memtype *new, + unsigned long *new_type); +extern int rbt_memtype_erase(u64 start, u64 end); +extern struct memtype *rbt_memtype_lookup(u64 addr); +extern int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos); +#else +static inline int rbt_memtype_check_insert(struct memtype *new, + unsigned long *new_type) +{ return 0; } +static inline int rbt_memtype_erase(u64 start, u64 end) +{ return 0; } +static inline struct memtype *rbt_memtype_lookup(u64 addr) +{ return NULL; } +static inline int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos) +{ return 0; } +#endif + +#endif /* __PAT_INTERNAL_H_ */ diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c new file mode 100644 index 000000000000..07de4cb8cc30 --- /dev/null +++ b/arch/x86/mm/pat_rbtree.c @@ -0,0 +1,273 @@ +/* + * Handle caching attributes in page tables (PAT) + * + * Authors: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> + * Suresh B Siddha <suresh.b.siddha@intel.com> + * + * Interval tree (augmented rbtree) used to store the PAT memory type + * reservations. + */ + +#include <linux/seq_file.h> +#include <linux/debugfs.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/rbtree.h> +#include <linux/sched.h> +#include <linux/gfp.h> + +#include <asm/pgtable.h> +#include <asm/pat.h> + +#include "pat_internal.h" + +/* + * The memtype tree keeps track of memory type for specific + * physical memory areas. Without proper tracking, conflicting memory + * types in different mappings can cause CPU cache corruption. + * + * The tree is an interval tree (augmented rbtree) with tree ordered + * on starting address. Tree can contain multiple entries for + * different regions which overlap. All the aliases have the same + * cache attributes of course. + * + * memtype_lock protects the rbtree. 
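A note on the interval search that follows: two half-open ranges [s1, e1) and [s2, e2) overlap exactly when s1 < e2 and s2 < e1, which is what is_node_overlap() checks. On top of that, every node caches subtree_max_end, the largest end address anywhere below it, so memtype_rb_lowest_match() can skip an entire left subtree whenever that cached maximum is not greater than the query's start address (for example, a left subtree whose maximum end is 0x9000 cannot contain anything overlapping a lookup that starts at 0xa000). The bare overlap predicate, for reference:

    #include <linux/types.h>

    /* half-open interval overlap test, as used by is_node_overlap() */
    static inline int ranges_overlap(u64 s1, u64 e1, u64 s2, u64 e2)
    {
            return s1 < e2 && s2 < e1;
    }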
+ */ + +static void memtype_rb_augment_cb(struct rb_node *node); +static struct rb_root memtype_rbroot = RB_AUGMENT_ROOT(&memtype_rb_augment_cb); + +static int is_node_overlap(struct memtype *node, u64 start, u64 end) +{ + if (node->start >= end || node->end <= start) + return 0; + + return 1; +} + +static u64 get_subtree_max_end(struct rb_node *node) +{ + u64 ret = 0; + if (node) { + struct memtype *data = container_of(node, struct memtype, rb); + ret = data->subtree_max_end; + } + return ret; +} + +/* Update 'subtree_max_end' for a node, based on node and its children */ +static void update_node_max_end(struct rb_node *node) +{ + struct memtype *data; + u64 max_end, child_max_end; + + if (!node) + return; + + data = container_of(node, struct memtype, rb); + max_end = data->end; + + child_max_end = get_subtree_max_end(node->rb_right); + if (child_max_end > max_end) + max_end = child_max_end; + + child_max_end = get_subtree_max_end(node->rb_left); + if (child_max_end > max_end) + max_end = child_max_end; + + data->subtree_max_end = max_end; +} + +/* Update 'subtree_max_end' for a node and all its ancestors */ +static void update_path_max_end(struct rb_node *node) +{ + u64 old_max_end, new_max_end; + + while (node) { + struct memtype *data = container_of(node, struct memtype, rb); + + old_max_end = data->subtree_max_end; + update_node_max_end(node); + new_max_end = data->subtree_max_end; + + if (new_max_end == old_max_end) + break; + + node = rb_parent(node); + } +} + +/* Find the first (lowest start addr) overlapping range from rb tree */ +static struct memtype *memtype_rb_lowest_match(struct rb_root *root, + u64 start, u64 end) +{ + struct rb_node *node = root->rb_node; + struct memtype *last_lower = NULL; + + while (node) { + struct memtype *data = container_of(node, struct memtype, rb); + + if (get_subtree_max_end(node->rb_left) > start) { + /* Lowest overlap if any must be on left side */ + node = node->rb_left; + } else if (is_node_overlap(data, start, end)) { + last_lower = data; + break; + } else if (start >= data->start) { + /* Lowest overlap if any must be on right side */ + node = node->rb_right; + } else { + break; + } + } + return last_lower; /* Returns NULL if there is no overlap */ +} + +static struct memtype *memtype_rb_exact_match(struct rb_root *root, + u64 start, u64 end) +{ + struct memtype *match; + + match = memtype_rb_lowest_match(root, start, end); + while (match != NULL && match->start < end) { + struct rb_node *node; + + if (match->start == start && match->end == end) + return match; + + node = rb_next(&match->rb); + if (node) + match = container_of(node, struct memtype, rb); + else + match = NULL; + } + + return NULL; /* Returns NULL if there is no exact match */ +} + +static int memtype_rb_check_conflict(struct rb_root *root, + u64 start, u64 end, + unsigned long reqtype, unsigned long *newtype) +{ + struct rb_node *node; + struct memtype *match; + int found_type = reqtype; + + match = memtype_rb_lowest_match(&memtype_rbroot, start, end); + if (match == NULL) + goto success; + + if (match->type != found_type && newtype == NULL) + goto failure; + + dprintk("Overlap at 0x%Lx-0x%Lx\n", match->start, match->end); + found_type = match->type; + + node = rb_next(&match->rb); + while (node) { + match = container_of(node, struct memtype, rb); + + if (match->start >= end) /* Checked all possible matches */ + goto success; + + if (is_node_overlap(match, start, end) && + match->type != found_type) { + goto failure; + } + + node = rb_next(&match->rb); + } +success: + if 
(newtype) + *newtype = found_type; + + return 0; + +failure: + printk(KERN_INFO "%s:%d conflicting memory types " + "%Lx-%Lx %s<->%s\n", current->comm, current->pid, start, + end, cattr_name(found_type), cattr_name(match->type)); + return -EBUSY; +} + +static void memtype_rb_augment_cb(struct rb_node *node) +{ + if (node) + update_path_max_end(node); +} + +static void memtype_rb_insert(struct rb_root *root, struct memtype *newdata) +{ + struct rb_node **node = &(root->rb_node); + struct rb_node *parent = NULL; + + while (*node) { + struct memtype *data = container_of(*node, struct memtype, rb); + + parent = *node; + if (newdata->start <= data->start) + node = &((*node)->rb_left); + else if (newdata->start > data->start) + node = &((*node)->rb_right); + } + + rb_link_node(&newdata->rb, parent, node); + rb_insert_color(&newdata->rb, root); +} + +int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type) +{ + int err = 0; + + err = memtype_rb_check_conflict(&memtype_rbroot, new->start, new->end, + new->type, ret_type); + + if (!err) { + if (ret_type) + new->type = *ret_type; + + memtype_rb_insert(&memtype_rbroot, new); + } + return err; +} + +int rbt_memtype_erase(u64 start, u64 end) +{ + struct memtype *data; + + data = memtype_rb_exact_match(&memtype_rbroot, start, end); + if (!data) + return -EINVAL; + + rb_erase(&data->rb, &memtype_rbroot); + return 0; +} + +struct memtype *rbt_memtype_lookup(u64 addr) +{ + struct memtype *data; + data = memtype_rb_lowest_match(&memtype_rbroot, addr, addr + PAGE_SIZE); + return data; +} + +#if defined(CONFIG_DEBUG_FS) +int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos) +{ + struct rb_node *node; + int i = 1; + + node = rb_first(&memtype_rbroot); + while (node && pos != i) { + node = rb_next(node); + i++; + } + + if (node) { /* pos == i */ + struct memtype *this = container_of(node, struct memtype, rb); + *out = *this; + return 0; + } else { + return 1; + } +} +#endif diff --git a/arch/x86/mm/srat_64.c b/arch/x86/mm/srat_64.c index 28c68762648f..f9897f7a9ef1 100644 --- a/arch/x86/mm/srat_64.c +++ b/arch/x86/mm/srat_64.c @@ -363,6 +363,54 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end) for (i = 0; i < MAX_NUMNODES; i++) cutoff_node(i, start, end); + /* + * Join together blocks on the same node, holes between + * which don't overlap with memory on other nodes. 
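The loop that follows joins two SRAT memory blocks only when they belong to the same node and the hole between them does not overlap memory belonging to any other node. The inner start/end pair computed first is exactly that hole: start = min(end_i, end_j) and end = max(start_i, start_j). A small worked example: if node 0 reports [0, 2 GB) and [3 GB, 4 GB), the hole [2 GB, 3 GB) is tested against every other node's blocks; when, say, node 1 owns [2 GB, 3 GB), the node-0 blocks are left as they are, and otherwise they are merged into a single [0, 4 GB) entry and the now-redundant block is dropped with the memmove() calls.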
+ */ + for (i = 0; i < num_node_memblks; ++i) { + int j, k; + + for (j = i + 1; j < num_node_memblks; ++j) { + unsigned long start, end; + + if (memblk_nodeid[i] != memblk_nodeid[j]) + continue; + start = min(node_memblk_range[i].end, + node_memblk_range[j].end); + end = max(node_memblk_range[i].start, + node_memblk_range[j].start); + for (k = 0; k < num_node_memblks; ++k) { + if (memblk_nodeid[i] == memblk_nodeid[k]) + continue; + if (start < node_memblk_range[k].end && + end > node_memblk_range[k].start) + break; + } + if (k < num_node_memblks) + continue; + start = min(node_memblk_range[i].start, + node_memblk_range[j].start); + end = max(node_memblk_range[i].end, + node_memblk_range[j].end); + printk(KERN_INFO "SRAT: Node %d " + "[%Lx,%Lx) + [%Lx,%Lx) -> [%lx,%lx)\n", + memblk_nodeid[i], + node_memblk_range[i].start, + node_memblk_range[i].end, + node_memblk_range[j].start, + node_memblk_range[j].end, + start, end); + node_memblk_range[i].start = start; + node_memblk_range[i].end = end; + k = --num_node_memblks - j; + memmove(memblk_nodeid + j, memblk_nodeid + j+1, + k * sizeof(*memblk_nodeid)); + memmove(node_memblk_range + j, node_memblk_range + j+1, + k * sizeof(*node_memblk_range)); + --j; + } + } + memnode_shift = compute_hash_shift(node_memblk_range, num_node_memblks, memblk_nodeid); if (memnode_shift < 0) { @@ -461,7 +509,8 @@ void __init acpi_fake_nodes(const struct bootnode *fake_nodes, int num_nodes) * node, it must now point to the fake node ID. */ for (j = 0; j < MAX_LOCAL_APIC; j++) - if (apicid_to_node[j] == nid) + if (apicid_to_node[j] == nid && + fake_apicid_to_node[j] == NUMA_NO_NODE) fake_apicid_to_node[j] = i; } for (i = 0; i < num_nodes; i++) diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c index 2c505ee71014..b28d2f1253bb 100644 --- a/arch/x86/oprofile/nmi_int.c +++ b/arch/x86/oprofile/nmi_int.c @@ -31,8 +31,9 @@ static struct op_x86_model_spec *model; static DEFINE_PER_CPU(struct op_msrs, cpu_msrs); static DEFINE_PER_CPU(unsigned long, saved_lvtpc); -/* 0 == registered but off, 1 == registered and on */ -static int nmi_enabled = 0; +/* must be protected with get_online_cpus()/put_online_cpus(): */ +static int nmi_enabled; +static int ctr_running; struct op_counter_config counter_config[OP_MAX_COUNTER]; @@ -61,12 +62,16 @@ static int profile_exceptions_notify(struct notifier_block *self, { struct die_args *args = (struct die_args *)data; int ret = NOTIFY_DONE; - int cpu = smp_processor_id(); switch (val) { case DIE_NMI: case DIE_NMI_IPI: - model->check_ctrs(args->regs, &per_cpu(cpu_msrs, cpu)); + if (ctr_running) + model->check_ctrs(args->regs, &__get_cpu_var(cpu_msrs)); + else if (!nmi_enabled) + break; + else + model->stop(&__get_cpu_var(cpu_msrs)); ret = NOTIFY_STOP; break; default: @@ -95,24 +100,36 @@ static void nmi_cpu_save_registers(struct op_msrs *msrs) static void nmi_cpu_start(void *dummy) { struct op_msrs const *msrs = &__get_cpu_var(cpu_msrs); - model->start(msrs); + if (!msrs->controls) + WARN_ON_ONCE(1); + else + model->start(msrs); } static int nmi_start(void) { + get_online_cpus(); on_each_cpu(nmi_cpu_start, NULL, 1); + ctr_running = 1; + put_online_cpus(); return 0; } static void nmi_cpu_stop(void *dummy) { struct op_msrs const *msrs = &__get_cpu_var(cpu_msrs); - model->stop(msrs); + if (!msrs->controls) + WARN_ON_ONCE(1); + else + model->stop(msrs); } static void nmi_stop(void) { + get_online_cpus(); on_each_cpu(nmi_cpu_stop, NULL, 1); + ctr_running = 0; + put_online_cpus(); } #ifdef CONFIG_OPROFILE_EVENT_MULTIPLEX @@ -252,7 
+269,10 @@ static int nmi_switch_event(void) if (nmi_multiplex_on() < 0) return -EINVAL; /* not necessary */ - on_each_cpu(nmi_cpu_switch, NULL, 1); + get_online_cpus(); + if (ctr_running) + on_each_cpu(nmi_cpu_switch, NULL, 1); + put_online_cpus(); return 0; } @@ -295,6 +315,7 @@ static void free_msrs(void) kfree(per_cpu(cpu_msrs, i).controls); per_cpu(cpu_msrs, i).controls = NULL; } + nmi_shutdown_mux(); } static int allocate_msrs(void) @@ -307,14 +328,21 @@ static int allocate_msrs(void) per_cpu(cpu_msrs, i).counters = kzalloc(counters_size, GFP_KERNEL); if (!per_cpu(cpu_msrs, i).counters) - return 0; + goto fail; per_cpu(cpu_msrs, i).controls = kzalloc(controls_size, GFP_KERNEL); if (!per_cpu(cpu_msrs, i).controls) - return 0; + goto fail; } + if (!nmi_setup_mux()) + goto fail; + return 1; + +fail: + free_msrs(); + return 0; } static void nmi_cpu_setup(void *dummy) @@ -336,49 +364,6 @@ static struct notifier_block profile_exceptions_nb = { .priority = 2 }; -static int nmi_setup(void) -{ - int err = 0; - int cpu; - - if (!allocate_msrs()) - err = -ENOMEM; - else if (!nmi_setup_mux()) - err = -ENOMEM; - else - err = register_die_notifier(&profile_exceptions_nb); - - if (err) { - free_msrs(); - nmi_shutdown_mux(); - return err; - } - - /* We need to serialize save and setup for HT because the subset - * of msrs are distinct for save and setup operations - */ - - /* Assume saved/restored counters are the same on all CPUs */ - model->fill_in_addresses(&per_cpu(cpu_msrs, 0)); - for_each_possible_cpu(cpu) { - if (!cpu) - continue; - - memcpy(per_cpu(cpu_msrs, cpu).counters, - per_cpu(cpu_msrs, 0).counters, - sizeof(struct op_msr) * model->num_counters); - - memcpy(per_cpu(cpu_msrs, cpu).controls, - per_cpu(cpu_msrs, 0).controls, - sizeof(struct op_msr) * model->num_controls); - - mux_clone(cpu); - } - on_each_cpu(nmi_cpu_setup, NULL, 1); - nmi_enabled = 1; - return 0; -} - static void nmi_cpu_restore_registers(struct op_msrs *msrs) { struct op_msr *counters = msrs->counters; @@ -412,20 +397,24 @@ static void nmi_cpu_shutdown(void *dummy) apic_write(APIC_LVTPC, per_cpu(saved_lvtpc, cpu)); apic_write(APIC_LVTERR, v); nmi_cpu_restore_registers(msrs); + if (model->cpu_down) + model->cpu_down(); } -static void nmi_shutdown(void) +static void nmi_cpu_up(void *dummy) { - struct op_msrs *msrs; + if (nmi_enabled) + nmi_cpu_setup(dummy); + if (ctr_running) + nmi_cpu_start(dummy); +} - nmi_enabled = 0; - on_each_cpu(nmi_cpu_shutdown, NULL, 1); - unregister_die_notifier(&profile_exceptions_nb); - nmi_shutdown_mux(); - msrs = &get_cpu_var(cpu_msrs); - model->shutdown(msrs); - free_msrs(); - put_cpu_var(cpu_msrs); +static void nmi_cpu_down(void *dummy) +{ + if (ctr_running) + nmi_cpu_stop(dummy); + if (nmi_enabled) + nmi_cpu_shutdown(dummy); } static int nmi_create_files(struct super_block *sb, struct dentry *root) @@ -457,7 +446,6 @@ static int nmi_create_files(struct super_block *sb, struct dentry *root) return 0; } -#ifdef CONFIG_SMP static int oprofile_cpu_notifier(struct notifier_block *b, unsigned long action, void *data) { @@ -465,10 +453,10 @@ static int oprofile_cpu_notifier(struct notifier_block *b, unsigned long action, switch (action) { case CPU_DOWN_FAILED: case CPU_ONLINE: - smp_call_function_single(cpu, nmi_cpu_start, NULL, 0); + smp_call_function_single(cpu, nmi_cpu_up, NULL, 0); break; case CPU_DOWN_PREPARE: - smp_call_function_single(cpu, nmi_cpu_stop, NULL, 1); + smp_call_function_single(cpu, nmi_cpu_down, NULL, 1); break; } return NOTIFY_DONE; @@ -477,7 +465,75 @@ static int 
oprofile_cpu_notifier(struct notifier_block *b, unsigned long action, static struct notifier_block oprofile_cpu_nb = { .notifier_call = oprofile_cpu_notifier }; -#endif + +static int nmi_setup(void) +{ + int err = 0; + int cpu; + + if (!allocate_msrs()) + return -ENOMEM; + + /* We need to serialize save and setup for HT because the subset + * of msrs are distinct for save and setup operations + */ + + /* Assume saved/restored counters are the same on all CPUs */ + err = model->fill_in_addresses(&per_cpu(cpu_msrs, 0)); + if (err) + goto fail; + + for_each_possible_cpu(cpu) { + if (!cpu) + continue; + + memcpy(per_cpu(cpu_msrs, cpu).counters, + per_cpu(cpu_msrs, 0).counters, + sizeof(struct op_msr) * model->num_counters); + + memcpy(per_cpu(cpu_msrs, cpu).controls, + per_cpu(cpu_msrs, 0).controls, + sizeof(struct op_msr) * model->num_controls); + + mux_clone(cpu); + } + + nmi_enabled = 0; + ctr_running = 0; + barrier(); + err = register_die_notifier(&profile_exceptions_nb); + if (err) + goto fail; + + get_online_cpus(); + register_cpu_notifier(&oprofile_cpu_nb); + on_each_cpu(nmi_cpu_setup, NULL, 1); + nmi_enabled = 1; + put_online_cpus(); + + return 0; +fail: + free_msrs(); + return err; +} + +static void nmi_shutdown(void) +{ + struct op_msrs *msrs; + + get_online_cpus(); + unregister_cpu_notifier(&oprofile_cpu_nb); + on_each_cpu(nmi_cpu_shutdown, NULL, 1); + nmi_enabled = 0; + ctr_running = 0; + put_online_cpus(); + barrier(); + unregister_die_notifier(&profile_exceptions_nb); + msrs = &get_cpu_var(cpu_msrs); + model->shutdown(msrs); + free_msrs(); + put_cpu_var(cpu_msrs); +} #ifdef CONFIG_PM @@ -687,9 +743,6 @@ int __init op_nmi_init(struct oprofile_operations *ops) return -ENODEV; } -#ifdef CONFIG_SMP - register_cpu_notifier(&oprofile_cpu_nb); -#endif /* default values, can be overwritten by model */ ops->create_files = nmi_create_files; ops->setup = nmi_setup; @@ -716,12 +769,6 @@ int __init op_nmi_init(struct oprofile_operations *ops) void op_nmi_exit(void) { - if (using_nmi) { + if (using_nmi) exit_sysfs(); -#ifdef CONFIG_SMP - unregister_cpu_notifier(&oprofile_cpu_nb); -#endif - } - if (model->exit) - model->exit(); } diff --git a/arch/x86/oprofile/op_model_amd.c b/arch/x86/oprofile/op_model_amd.c index 090cbbec7dbd..b67a6b5aa8d4 100644 --- a/arch/x86/oprofile/op_model_amd.c +++ b/arch/x86/oprofile/op_model_amd.c @@ -30,13 +30,10 @@ #include "op_counter.h" #define NUM_COUNTERS 4 -#define NUM_CONTROLS 4 #ifdef CONFIG_OPROFILE_EVENT_MULTIPLEX #define NUM_VIRT_COUNTERS 32 -#define NUM_VIRT_CONTROLS 32 #else #define NUM_VIRT_COUNTERS NUM_COUNTERS -#define NUM_VIRT_CONTROLS NUM_CONTROLS #endif #define OP_EVENT_MASK 0x0FFF @@ -105,102 +102,6 @@ static u32 get_ibs_caps(void) return ibs_caps; } -#ifdef CONFIG_OPROFILE_EVENT_MULTIPLEX - -static void op_mux_switch_ctrl(struct op_x86_model_spec const *model, - struct op_msrs const * const msrs) -{ - u64 val; - int i; - - /* enable active counters */ - for (i = 0; i < NUM_COUNTERS; ++i) { - int virt = op_x86_phys_to_virt(i); - if (!reset_value[virt]) - continue; - rdmsrl(msrs->controls[i].addr, val); - val &= model->reserved; - val |= op_x86_get_ctrl(model, &counter_config[virt]); - wrmsrl(msrs->controls[i].addr, val); - } -} - -#endif - -/* functions for op_amd_spec */ - -static void op_amd_fill_in_addresses(struct op_msrs * const msrs) -{ - int i; - - for (i = 0; i < NUM_COUNTERS; i++) { - if (reserve_perfctr_nmi(MSR_K7_PERFCTR0 + i)) - msrs->counters[i].addr = MSR_K7_PERFCTR0 + i; - } - - for (i = 0; i < NUM_CONTROLS; i++) { - if 
(reserve_evntsel_nmi(MSR_K7_EVNTSEL0 + i)) - msrs->controls[i].addr = MSR_K7_EVNTSEL0 + i; - } -} - -static void op_amd_setup_ctrs(struct op_x86_model_spec const *model, - struct op_msrs const * const msrs) -{ - u64 val; - int i; - - /* setup reset_value */ - for (i = 0; i < NUM_VIRT_COUNTERS; ++i) { - if (counter_config[i].enabled - && msrs->counters[op_x86_virt_to_phys(i)].addr) - reset_value[i] = counter_config[i].count; - else - reset_value[i] = 0; - } - - /* clear all counters */ - for (i = 0; i < NUM_CONTROLS; ++i) { - if (unlikely(!msrs->controls[i].addr)) { - if (counter_config[i].enabled && !smp_processor_id()) - /* - * counter is reserved, this is on all - * cpus, so report only for cpu #0 - */ - op_x86_warn_reserved(i); - continue; - } - rdmsrl(msrs->controls[i].addr, val); - if (val & ARCH_PERFMON_EVENTSEL_ENABLE) - op_x86_warn_in_use(i); - val &= model->reserved; - wrmsrl(msrs->controls[i].addr, val); - } - - /* avoid a false detection of ctr overflows in NMI handler */ - for (i = 0; i < NUM_COUNTERS; ++i) { - if (unlikely(!msrs->counters[i].addr)) - continue; - wrmsrl(msrs->counters[i].addr, -1LL); - } - - /* enable active counters */ - for (i = 0; i < NUM_COUNTERS; ++i) { - int virt = op_x86_phys_to_virt(i); - if (!reset_value[virt]) - continue; - - /* setup counter registers */ - wrmsrl(msrs->counters[i].addr, -(u64)reset_value[virt]); - - /* setup control registers */ - rdmsrl(msrs->controls[i].addr, val); - val &= model->reserved; - val |= op_x86_get_ctrl(model, &counter_config[virt]); - wrmsrl(msrs->controls[i].addr, val); - } -} - /* * 16-bit Linear Feedback Shift Register (LFSR) * @@ -365,6 +266,125 @@ static void op_amd_stop_ibs(void) wrmsrl(MSR_AMD64_IBSOPCTL, 0); } +#ifdef CONFIG_OPROFILE_EVENT_MULTIPLEX + +static void op_mux_switch_ctrl(struct op_x86_model_spec const *model, + struct op_msrs const * const msrs) +{ + u64 val; + int i; + + /* enable active counters */ + for (i = 0; i < NUM_COUNTERS; ++i) { + int virt = op_x86_phys_to_virt(i); + if (!reset_value[virt]) + continue; + rdmsrl(msrs->controls[i].addr, val); + val &= model->reserved; + val |= op_x86_get_ctrl(model, &counter_config[virt]); + wrmsrl(msrs->controls[i].addr, val); + } +} + +#endif + +/* functions for op_amd_spec */ + +static void op_amd_shutdown(struct op_msrs const * const msrs) +{ + int i; + + for (i = 0; i < NUM_COUNTERS; ++i) { + if (!msrs->counters[i].addr) + continue; + release_perfctr_nmi(MSR_K7_PERFCTR0 + i); + release_evntsel_nmi(MSR_K7_EVNTSEL0 + i); + } +} + +static int op_amd_fill_in_addresses(struct op_msrs * const msrs) +{ + int i; + + for (i = 0; i < NUM_COUNTERS; i++) { + if (!reserve_perfctr_nmi(MSR_K7_PERFCTR0 + i)) + goto fail; + if (!reserve_evntsel_nmi(MSR_K7_EVNTSEL0 + i)) { + release_perfctr_nmi(MSR_K7_PERFCTR0 + i); + goto fail; + } + /* both registers must be reserved */ + msrs->counters[i].addr = MSR_K7_PERFCTR0 + i; + msrs->controls[i].addr = MSR_K7_EVNTSEL0 + i; + continue; + fail: + if (!counter_config[i].enabled) + continue; + op_x86_warn_reserved(i); + op_amd_shutdown(msrs); + return -EBUSY; + } + + return 0; +} + +static void op_amd_setup_ctrs(struct op_x86_model_spec const *model, + struct op_msrs const * const msrs) +{ + u64 val; + int i; + + /* setup reset_value */ + for (i = 0; i < NUM_VIRT_COUNTERS; ++i) { + if (counter_config[i].enabled + && msrs->counters[op_x86_virt_to_phys(i)].addr) + reset_value[i] = counter_config[i].count; + else + reset_value[i] = 0; + } + + /* clear all counters */ + for (i = 0; i < NUM_COUNTERS; ++i) { + if 
(!msrs->controls[i].addr) + continue; + rdmsrl(msrs->controls[i].addr, val); + if (val & ARCH_PERFMON_EVENTSEL_ENABLE) + op_x86_warn_in_use(i); + val &= model->reserved; + wrmsrl(msrs->controls[i].addr, val); + /* + * avoid a false detection of ctr overflows in NMI + * handler + */ + wrmsrl(msrs->counters[i].addr, -1LL); + } + + /* enable active counters */ + for (i = 0; i < NUM_COUNTERS; ++i) { + int virt = op_x86_phys_to_virt(i); + if (!reset_value[virt]) + continue; + + /* setup counter registers */ + wrmsrl(msrs->counters[i].addr, -(u64)reset_value[virt]); + + /* setup control registers */ + rdmsrl(msrs->controls[i].addr, val); + val &= model->reserved; + val |= op_x86_get_ctrl(model, &counter_config[virt]); + wrmsrl(msrs->controls[i].addr, val); + } + + if (ibs_caps) + setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_NMI, 0); +} + +static void op_amd_cpu_shutdown(void) +{ + if (ibs_caps) + setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_FIX, 1); +} + static int op_amd_check_ctrs(struct pt_regs * const regs, struct op_msrs const * const msrs) { @@ -425,42 +445,16 @@ static void op_amd_stop(struct op_msrs const * const msrs) op_amd_stop_ibs(); } -static void op_amd_shutdown(struct op_msrs const * const msrs) -{ - int i; - - for (i = 0; i < NUM_COUNTERS; ++i) { - if (msrs->counters[i].addr) - release_perfctr_nmi(MSR_K7_PERFCTR0 + i); - } - for (i = 0; i < NUM_CONTROLS; ++i) { - if (msrs->controls[i].addr) - release_evntsel_nmi(MSR_K7_EVNTSEL0 + i); - } -} - -static u8 ibs_eilvt_off; - -static inline void apic_init_ibs_nmi_per_cpu(void *arg) -{ - ibs_eilvt_off = setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_NMI, 0); -} - -static inline void apic_clear_ibs_nmi_per_cpu(void *arg) -{ - setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_FIX, 1); -} - -static int init_ibs_nmi(void) +static int __init_ibs_nmi(void) { #define IBSCTL_LVTOFFSETVAL (1 << 8) #define IBSCTL 0x1cc struct pci_dev *cpu_cfg; int nodes; u32 value = 0; + u8 ibs_eilvt_off; - /* per CPU setup */ - on_each_cpu(apic_init_ibs_nmi_per_cpu, NULL, 1); + ibs_eilvt_off = setup_APIC_eilvt_ibs(0, APIC_EILVT_MSG_FIX, 1); nodes = 0; cpu_cfg = NULL; @@ -490,22 +484,15 @@ static int init_ibs_nmi(void) return 0; } -/* uninitialize the APIC for the IBS interrupts if needed */ -static void clear_ibs_nmi(void) -{ - if (ibs_caps) - on_each_cpu(apic_clear_ibs_nmi_per_cpu, NULL, 1); -} - /* initialize the APIC for the IBS interrupts if available */ -static void ibs_init(void) +static void init_ibs(void) { ibs_caps = get_ibs_caps(); if (!ibs_caps) return; - if (init_ibs_nmi()) { + if (__init_ibs_nmi()) { ibs_caps = 0; return; } @@ -514,14 +501,6 @@ static void ibs_init(void) (unsigned)ibs_caps); } -static void ibs_exit(void) -{ - if (!ibs_caps) - return; - - clear_ibs_nmi(); -} - static int (*create_arch_files)(struct super_block *sb, struct dentry *root); static int setup_ibs_files(struct super_block *sb, struct dentry *root) @@ -570,27 +549,22 @@ static int setup_ibs_files(struct super_block *sb, struct dentry *root) static int op_amd_init(struct oprofile_operations *ops) { - ibs_init(); + init_ibs(); create_arch_files = ops->create_files; ops->create_files = setup_ibs_files; return 0; } -static void op_amd_exit(void) -{ - ibs_exit(); -} - struct op_x86_model_spec op_amd_spec = { .num_counters = NUM_COUNTERS, - .num_controls = NUM_CONTROLS, + .num_controls = NUM_COUNTERS, .num_virt_counters = NUM_VIRT_COUNTERS, .reserved = MSR_AMD_EVENTSEL_RESERVED, .event_mask = OP_EVENT_MASK, .init = op_amd_init, - .exit = op_amd_exit, .fill_in_addresses = &op_amd_fill_in_addresses, .setup_ctrs = 
&op_amd_setup_ctrs, + .cpu_down = &op_amd_cpu_shutdown, .check_ctrs = &op_amd_check_ctrs, .start = &op_amd_start, .stop = &op_amd_stop, diff --git a/arch/x86/oprofile/op_model_p4.c b/arch/x86/oprofile/op_model_p4.c index e6a160a4684a..182558dd5515 100644 --- a/arch/x86/oprofile/op_model_p4.c +++ b/arch/x86/oprofile/op_model_p4.c @@ -385,8 +385,26 @@ static unsigned int get_stagger(void) static unsigned long reset_value[NUM_COUNTERS_NON_HT]; +static void p4_shutdown(struct op_msrs const * const msrs) +{ + int i; -static void p4_fill_in_addresses(struct op_msrs * const msrs) + for (i = 0; i < num_counters; ++i) { + if (msrs->counters[i].addr) + release_perfctr_nmi(msrs->counters[i].addr); + } + /* + * some of the control registers are specially reserved in + * conjunction with the counter registers (hence the starting offset). + * This saves a few bits. + */ + for (i = num_counters; i < num_controls; ++i) { + if (msrs->controls[i].addr) + release_evntsel_nmi(msrs->controls[i].addr); + } +} + +static int p4_fill_in_addresses(struct op_msrs * const msrs) { unsigned int i; unsigned int addr, cccraddr, stag; @@ -468,6 +486,18 @@ static void p4_fill_in_addresses(struct op_msrs * const msrs) msrs->controls[i++].addr = MSR_P4_CRU_ESCR5; } } + + for (i = 0; i < num_counters; ++i) { + if (!counter_config[i].enabled) + continue; + if (msrs->controls[i].addr) + continue; + op_x86_warn_reserved(i); + p4_shutdown(msrs); + return -EBUSY; + } + + return 0; } @@ -668,26 +698,6 @@ static void p4_stop(struct op_msrs const * const msrs) } } -static void p4_shutdown(struct op_msrs const * const msrs) -{ - int i; - - for (i = 0; i < num_counters; ++i) { - if (msrs->counters[i].addr) - release_perfctr_nmi(msrs->counters[i].addr); - } - /* - * some of the control registers are specially reserved in - * conjunction with the counter registers (hence the starting offset). - * This saves a few bits. 
- */ - for (i = num_counters; i < num_controls; ++i) { - if (msrs->controls[i].addr) - release_evntsel_nmi(msrs->controls[i].addr); - } -} - - #ifdef CONFIG_SMP struct op_x86_model_spec op_p4_ht2_spec = { .num_counters = NUM_COUNTERS_HT2, diff --git a/arch/x86/oprofile/op_model_ppro.c b/arch/x86/oprofile/op_model_ppro.c index 2bf90fafa7b5..d769cda54082 100644 --- a/arch/x86/oprofile/op_model_ppro.c +++ b/arch/x86/oprofile/op_model_ppro.c @@ -30,19 +30,46 @@ static int counter_width = 32; static u64 *reset_value; -static void ppro_fill_in_addresses(struct op_msrs * const msrs) +static void ppro_shutdown(struct op_msrs const * const msrs) { int i; - for (i = 0; i < num_counters; i++) { - if (reserve_perfctr_nmi(MSR_P6_PERFCTR0 + i)) - msrs->counters[i].addr = MSR_P6_PERFCTR0 + i; + for (i = 0; i < num_counters; ++i) { + if (!msrs->counters[i].addr) + continue; + release_perfctr_nmi(MSR_P6_PERFCTR0 + i); + release_evntsel_nmi(MSR_P6_EVNTSEL0 + i); + } + if (reset_value) { + kfree(reset_value); + reset_value = NULL; } +} + +static int ppro_fill_in_addresses(struct op_msrs * const msrs) +{ + int i; for (i = 0; i < num_counters; i++) { - if (reserve_evntsel_nmi(MSR_P6_EVNTSEL0 + i)) - msrs->controls[i].addr = MSR_P6_EVNTSEL0 + i; + if (!reserve_perfctr_nmi(MSR_P6_PERFCTR0 + i)) + goto fail; + if (!reserve_evntsel_nmi(MSR_P6_EVNTSEL0 + i)) { + release_perfctr_nmi(MSR_P6_PERFCTR0 + i); + goto fail; + } + /* both registers must be reserved */ + msrs->counters[i].addr = MSR_P6_PERFCTR0 + i; + msrs->controls[i].addr = MSR_P6_EVNTSEL0 + i; + continue; + fail: + if (!counter_config[i].enabled) + continue; + op_x86_warn_reserved(i); + ppro_shutdown(msrs); + return -EBUSY; } + + return 0; } @@ -78,26 +105,17 @@ static void ppro_setup_ctrs(struct op_x86_model_spec const *model, /* clear all counters */ for (i = 0; i < num_counters; ++i) { - if (unlikely(!msrs->controls[i].addr)) { - if (counter_config[i].enabled && !smp_processor_id()) - /* - * counter is reserved, this is on all - * cpus, so report only for cpu #0 - */ - op_x86_warn_reserved(i); + if (!msrs->controls[i].addr) continue; - } rdmsrl(msrs->controls[i].addr, val); if (val & ARCH_PERFMON_EVENTSEL_ENABLE) op_x86_warn_in_use(i); val &= model->reserved; wrmsrl(msrs->controls[i].addr, val); - } - - /* avoid a false detection of ctr overflows in NMI handler */ - for (i = 0; i < num_counters; ++i) { - if (unlikely(!msrs->counters[i].addr)) - continue; + /* + * avoid a false detection of ctr overflows in NMI * + * handler + */ wrmsrl(msrs->counters[i].addr, -1LL); } @@ -189,25 +207,6 @@ static void ppro_stop(struct op_msrs const * const msrs) } } -static void ppro_shutdown(struct op_msrs const * const msrs) -{ - int i; - - for (i = 0; i < num_counters; ++i) { - if (msrs->counters[i].addr) - release_perfctr_nmi(MSR_P6_PERFCTR0 + i); - } - for (i = 0; i < num_counters; ++i) { - if (msrs->controls[i].addr) - release_evntsel_nmi(MSR_P6_EVNTSEL0 + i); - } - if (reset_value) { - kfree(reset_value); - reset_value = NULL; - } -} - - struct op_x86_model_spec op_ppro_spec = { .num_counters = 2, .num_controls = 2, @@ -239,11 +238,11 @@ static void arch_perfmon_setup_counters(void) if (eax.split.version_id == 0 && current_cpu_data.x86 == 6 && current_cpu_data.x86_model == 15) { eax.split.version_id = 2; - eax.split.num_events = 2; + eax.split.num_counters = 2; eax.split.bit_width = 40; } - num_counters = eax.split.num_events; + num_counters = eax.split.num_counters; op_arch_perfmon_spec.num_counters = num_counters; op_arch_perfmon_spec.num_controls = 
num_counters; diff --git a/arch/x86/oprofile/op_x86_model.h b/arch/x86/oprofile/op_x86_model.h index ff82a755edd4..89017fa1fd63 100644 --- a/arch/x86/oprofile/op_x86_model.h +++ b/arch/x86/oprofile/op_x86_model.h @@ -40,10 +40,10 @@ struct op_x86_model_spec { u64 reserved; u16 event_mask; int (*init)(struct oprofile_operations *ops); - void (*exit)(void); - void (*fill_in_addresses)(struct op_msrs * const msrs); + int (*fill_in_addresses)(struct op_msrs * const msrs); void (*setup_ctrs)(struct op_x86_model_spec const *model, struct op_msrs const * const msrs); + void (*cpu_down)(void); int (*check_ctrs)(struct pt_regs * const regs, struct op_msrs const * const msrs); void (*start)(struct op_msrs const * const msrs); diff --git a/arch/x86/pci/mrst.c b/arch/x86/pci/mrst.c index 8bf2fcb88d04..7ef3a2735df3 100644 --- a/arch/x86/pci/mrst.c +++ b/arch/x86/pci/mrst.c @@ -109,7 +109,7 @@ static int pci_device_update_fixed(struct pci_bus *bus, unsigned int devfn, decode++; decode = ~(decode - 1); } else { - decode = ~0; + decode = 0; } /* @@ -247,6 +247,10 @@ static void __devinit pci_fixed_bar_fixup(struct pci_dev *dev) u32 size; int i; + /* Must have extended configuration space */ + if (dev->cfg_size < PCIE_CAP_OFFSET + 4) + return; + /* Fixup the BAR sizes for fixed BAR devices and make them unmoveable */ offset = fixed_bar_cap(dev->bus, dev->devfn); if (!offset || PCI_DEVFN(2, 0) == dev->devfn || diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index 22d6dde42619..a96a0619d0b7 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -46,7 +46,7 @@ * * Atomically reads the value of @v. */ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) /** * atomic_set - set atomic variable diff --git a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c index b0a71ecee682..e4804fb05e23 100644 --- a/drivers/acpi/pci_irq.c +++ b/drivers/acpi/pci_irq.c @@ -401,11 +401,13 @@ int acpi_pci_irq_enable(struct pci_dev *dev) * driver reported one, then use it. Exit in any case. 
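The xtensa atomic_read() change above brings that architecture in line with the others: reading the counter through a volatile-qualified pointer forces the compiler to reload it on every call instead of caching it in a register. A minimal sketch of the failure mode this prevents (hypothetical helper, not part of the patch):

static atomic_t ready = ATOMIC_INIT(0);

static void wait_for_ready(void)
{
	/*
	 * Without the volatile cast inside atomic_read(), the compiler
	 * could hoist the load out of the loop and spin forever on a
	 * stale register value; the cast forces a fresh load each time.
	 */
	while (!atomic_read(&ready))
		cpu_relax();
}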
*/ if (gsi < 0) { + u32 dev_gsi; dev_warn(&dev->dev, "PCI INT %c: no GSI", pin_name(pin)); /* Interrupt Line values above 0xF are forbidden */ - if (dev->irq > 0 && (dev->irq <= 0xF)) { - printk(" - using IRQ %d\n", dev->irq); - acpi_register_gsi(&dev->dev, dev->irq, + if (dev->irq > 0 && (dev->irq <= 0xF) && + (acpi_isa_irq_to_gsi(dev->irq, &dev_gsi) == 0)) { + printk(" - using ISA IRQ %d\n", dev->irq); + acpi_register_gsi(&dev->dev, dev_gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW); return 0; diff --git a/drivers/base/iommu.c b/drivers/base/iommu.c index 8ad4ffea6920..6e6b6a11b3ce 100644 --- a/drivers/base/iommu.c +++ b/drivers/base/iommu.c @@ -80,20 +80,6 @@ void iommu_detach_device(struct iommu_domain *domain, struct device *dev) } EXPORT_SYMBOL_GPL(iommu_detach_device); -int iommu_map_range(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot) -{ - return iommu_ops->map(domain, iova, paddr, size, prot); -} -EXPORT_SYMBOL_GPL(iommu_map_range); - -void iommu_unmap_range(struct iommu_domain *domain, unsigned long iova, - size_t size) -{ - iommu_ops->unmap(domain, iova, size); -} -EXPORT_SYMBOL_GPL(iommu_unmap_range); - phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, unsigned long iova) { @@ -107,3 +93,32 @@ int iommu_domain_has_cap(struct iommu_domain *domain, return iommu_ops->domain_has_cap(domain, cap); } EXPORT_SYMBOL_GPL(iommu_domain_has_cap); + +int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, int gfp_order, int prot) +{ + unsigned long invalid_mask; + size_t size; + + size = 0x1000UL << gfp_order; + invalid_mask = size - 1; + + BUG_ON((iova | paddr) & invalid_mask); + + return iommu_ops->map(domain, iova, paddr, gfp_order, prot); +} +EXPORT_SYMBOL_GPL(iommu_map); + +int iommu_unmap(struct iommu_domain *domain, unsigned long iova, int gfp_order) +{ + unsigned long invalid_mask; + size_t size; + + size = 0x1000UL << gfp_order; + invalid_mask = size - 1; + + BUG_ON(iova & invalid_mask); + + return iommu_ops->unmap(domain, iova, gfp_order); +} +EXPORT_SYMBOL_GPL(iommu_unmap); diff --git a/drivers/base/platform.c b/drivers/base/platform.c index 4b4b565c835f..c5fbe198fbdb 100644 --- a/drivers/base/platform.c +++ b/drivers/base/platform.c @@ -187,7 +187,7 @@ EXPORT_SYMBOL_GPL(platform_device_alloc); * released. 
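For context on the drivers/base/iommu.c hunk above: the new iommu_map()/iommu_unmap() entry points take a page order rather than a byte size, and both addresses must be aligned to that granule or the BUG_ON fires. A minimal caller sketch, assuming a domain from iommu_domain_alloc() and a physically contiguous buffer (function and variable names are illustrative):

static int map_dma_window(struct iommu_domain *domain, unsigned long iova,
			  phys_addr_t paddr)
{
	int order = 2;		/* 4 pages: 16 KiB with 4 KiB pages */
	int ret;

	/* iova and paddr must both be aligned to (0x1000UL << order) */
	ret = iommu_map(domain, iova, paddr, order, IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* ... the device performs DMA through the mapping ... */

	iommu_unmap(domain, iova, order);	/* returns the order unmapped */
	return 0;
}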
*/ int platform_device_add_resources(struct platform_device *pdev, - struct resource *res, unsigned int num) + const struct resource *res, unsigned int num) { struct resource *r; @@ -367,7 +367,7 @@ EXPORT_SYMBOL_GPL(platform_device_unregister); */ struct platform_device *platform_device_register_simple(const char *name, int id, - struct resource *res, + const struct resource *res, unsigned int num) { struct platform_device *pdev; diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c index 0182a22c423a..832798aa14f6 100644 --- a/drivers/block/amiflop.c +++ b/drivers/block/amiflop.c @@ -66,6 +66,7 @@ #include <linux/blkdev.h> #include <linux/elevator.h> #include <linux/interrupt.h> +#include <linux/platform_device.h> #include <asm/setup.h> #include <asm/uaccess.h> @@ -1696,34 +1697,18 @@ static struct kobject *floppy_find(dev_t dev, int *part, void *data) return get_disk(unit[drive].gendisk); } -static int __init amiga_floppy_init(void) +static int __init amiga_floppy_probe(struct platform_device *pdev) { int i, ret; - if (!MACH_IS_AMIGA) - return -ENODEV; - - if (!AMIGAHW_PRESENT(AMI_FLOPPY)) - return -ENODEV; - if (register_blkdev(FLOPPY_MAJOR,"fd")) return -EBUSY; - /* - * We request DSKPTR, DSKLEN and DSKDATA only, because the other - * floppy registers are too spreaded over the custom register space - */ - ret = -EBUSY; - if (!request_mem_region(CUSTOM_PHYSADDR+0x20, 8, "amiflop [Paula]")) { - printk("fd: cannot get floppy registers\n"); - goto out_blkdev; - } - ret = -ENOMEM; if ((raw_buf = (char *)amiga_chip_alloc (RAW_BUF_SIZE, "Floppy")) == NULL) { printk("fd: cannot get chip mem buffer\n"); - goto out_memregion; + goto out_blkdev; } ret = -EBUSY; @@ -1792,18 +1777,13 @@ out_irq2: free_irq(IRQ_AMIGA_DSKBLK, NULL); out_irq: amiga_chip_free(raw_buf); -out_memregion: - release_mem_region(CUSTOM_PHYSADDR+0x20, 8); out_blkdev: unregister_blkdev(FLOPPY_MAJOR,"fd"); return ret; } -module_init(amiga_floppy_init); -#ifdef MODULE - #if 0 /* not safe to unload */ -void cleanup_module(void) +static int __exit amiga_floppy_remove(struct platform_device *pdev) { int i; @@ -1820,12 +1800,25 @@ void cleanup_module(void) custom.dmacon = DMAF_DISK; /* disable DMA */ amiga_chip_free(raw_buf); blk_cleanup_queue(floppy_queue); - release_mem_region(CUSTOM_PHYSADDR+0x20, 8); unregister_blkdev(FLOPPY_MAJOR, "fd"); } #endif -#else +static struct platform_driver amiga_floppy_driver = { + .driver = { + .name = "amiga-floppy", + .owner = THIS_MODULE, + }, +}; + +static int __init amiga_floppy_init(void) +{ + return platform_driver_probe(&amiga_floppy_driver, amiga_floppy_probe); +} + +module_init(amiga_floppy_init); + +#ifndef MODULE static int __init amiga_floppy_setup (char *str) { int n; @@ -1840,3 +1833,5 @@ static int __init amiga_floppy_setup (char *str) __setup("floppy=", amiga_floppy_setup); #endif + +MODULE_ALIAS("platform:amiga-floppy"); diff --git a/drivers/block/hd.c b/drivers/block/hd.c index 034e6dfc878c..81c78b3ce2df 100644 --- a/drivers/block/hd.c +++ b/drivers/block/hd.c @@ -164,12 +164,12 @@ unsigned long read_timer(void) unsigned long t, flags; int i; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); t = jiffies * 11932; outb_p(0, 0x43); i = inb_p(0x40); i |= inb(0x40) << 8; - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); return(t - i); } #endif diff --git a/drivers/char/serial167.c b/drivers/char/serial167.c index 8dfd24721a82..78a62ebe75c7 100644 --- a/drivers/char/serial167.c +++ 
b/drivers/char/serial167.c @@ -627,7 +627,6 @@ static irqreturn_t cd2401_rx_interrupt(int irq, void *dev_id) char data; int char_count; int save_cnt; - int len; /* determine the channel and change to that context */ channel = (u_short) (base_addr[CyLICR] >> 2); @@ -1528,7 +1527,6 @@ static int cy_ioctl(struct tty_struct *tty, struct file *file, unsigned int cmd, unsigned long arg) { - unsigned long val; struct cyclades_port *info = tty->driver_data; int ret_val = 0; void __user *argp = (void __user *)arg; diff --git a/drivers/char/sysrq.c b/drivers/char/sysrq.c index 59de2525d303..d4e8b213a462 100644 --- a/drivers/char/sysrq.c +++ b/drivers/char/sysrq.c @@ -289,7 +289,7 @@ static struct sysrq_key_op sysrq_showstate_blocked_op = { static void sysrq_ftrace_dump(int key, struct tty_struct *tty) { - ftrace_dump(); + ftrace_dump(DUMP_ALL); } static struct sysrq_key_op sysrq_ftrace_dump_op = { .handler = sysrq_ftrace_dump, diff --git a/drivers/char/tty_io.c b/drivers/char/tty_io.c index 6da962c9b21c..d71f0fc34b46 100644 --- a/drivers/char/tty_io.c +++ b/drivers/char/tty_io.c @@ -1875,6 +1875,7 @@ got_driver: */ if (filp->f_op == &hung_up_tty_fops) filp->f_op = &tty_fops; + unlock_kernel(); goto retry_open; } unlock_kernel(); diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 75d293eeb3ee..063b2184caf5 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -662,32 +662,20 @@ static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf) return sprintf(buf, "%u\n", policy->cpuinfo.max_freq); } -#define define_one_ro(_name) \ -static struct freq_attr _name = \ -__ATTR(_name, 0444, show_##_name, NULL) - -#define define_one_ro0400(_name) \ -static struct freq_attr _name = \ -__ATTR(_name, 0400, show_##_name, NULL) - -#define define_one_rw(_name) \ -static struct freq_attr _name = \ -__ATTR(_name, 0644, show_##_name, store_##_name) - -define_one_ro0400(cpuinfo_cur_freq); -define_one_ro(cpuinfo_min_freq); -define_one_ro(cpuinfo_max_freq); -define_one_ro(cpuinfo_transition_latency); -define_one_ro(scaling_available_governors); -define_one_ro(scaling_driver); -define_one_ro(scaling_cur_freq); -define_one_ro(bios_limit); -define_one_ro(related_cpus); -define_one_ro(affected_cpus); -define_one_rw(scaling_min_freq); -define_one_rw(scaling_max_freq); -define_one_rw(scaling_governor); -define_one_rw(scaling_setspeed); +cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400); +cpufreq_freq_attr_ro(cpuinfo_min_freq); +cpufreq_freq_attr_ro(cpuinfo_max_freq); +cpufreq_freq_attr_ro(cpuinfo_transition_latency); +cpufreq_freq_attr_ro(scaling_available_governors); +cpufreq_freq_attr_ro(scaling_driver); +cpufreq_freq_attr_ro(scaling_cur_freq); +cpufreq_freq_attr_ro(bios_limit); +cpufreq_freq_attr_ro(related_cpus); +cpufreq_freq_attr_ro(affected_cpus); +cpufreq_freq_attr_rw(scaling_min_freq); +cpufreq_freq_attr_rw(scaling_max_freq); +cpufreq_freq_attr_rw(scaling_governor); +cpufreq_freq_attr_rw(scaling_setspeed); static struct attribute *default_attrs[] = { &cpuinfo_min_freq.attr, diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c index 3a147874a465..526bfbf69611 100644 --- a/drivers/cpufreq/cpufreq_conservative.c +++ b/drivers/cpufreq/cpufreq_conservative.c @@ -178,12 +178,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj, return sprintf(buf, "%u\n", min_sampling_rate); } -#define define_one_ro(_name) \ -static struct global_attr _name = \ -__ATTR(_name, 0444, show_##_name, NULL) - -define_one_ro(sampling_rate_max); 
-define_one_ro(sampling_rate_min); +define_one_global_ro(sampling_rate_max); +define_one_global_ro(sampling_rate_min); /* cpufreq_conservative Governor Tunables */ #define show_one(file_name, object) \ @@ -221,12 +217,8 @@ show_one_old(freq_step); show_one_old(sampling_rate_min); show_one_old(sampling_rate_max); -#define define_one_ro_old(object, _name) \ -static struct freq_attr object = \ -__ATTR(_name, 0444, show_##_name##_old, NULL) - -define_one_ro_old(sampling_rate_min_old, sampling_rate_min); -define_one_ro_old(sampling_rate_max_old, sampling_rate_max); +cpufreq_freq_attr_ro_old(sampling_rate_min); +cpufreq_freq_attr_ro_old(sampling_rate_max); /*** delete after deprecation time ***/ @@ -364,16 +356,12 @@ static ssize_t store_freq_step(struct kobject *a, struct attribute *b, return count; } -#define define_one_rw(_name) \ -static struct global_attr _name = \ -__ATTR(_name, 0644, show_##_name, store_##_name) - -define_one_rw(sampling_rate); -define_one_rw(sampling_down_factor); -define_one_rw(up_threshold); -define_one_rw(down_threshold); -define_one_rw(ignore_nice_load); -define_one_rw(freq_step); +define_one_global_rw(sampling_rate); +define_one_global_rw(sampling_down_factor); +define_one_global_rw(up_threshold); +define_one_global_rw(down_threshold); +define_one_global_rw(ignore_nice_load); +define_one_global_rw(freq_step); static struct attribute *dbs_attributes[] = { &sampling_rate_max.attr, @@ -409,16 +397,12 @@ write_one_old(down_threshold); write_one_old(ignore_nice_load); write_one_old(freq_step); -#define define_one_rw_old(object, _name) \ -static struct freq_attr object = \ -__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old) - -define_one_rw_old(sampling_rate_old, sampling_rate); -define_one_rw_old(sampling_down_factor_old, sampling_down_factor); -define_one_rw_old(up_threshold_old, up_threshold); -define_one_rw_old(down_threshold_old, down_threshold); -define_one_rw_old(ignore_nice_load_old, ignore_nice_load); -define_one_rw_old(freq_step_old, freq_step); +cpufreq_freq_attr_rw_old(sampling_rate); +cpufreq_freq_attr_rw_old(sampling_down_factor); +cpufreq_freq_attr_rw_old(up_threshold); +cpufreq_freq_attr_rw_old(down_threshold); +cpufreq_freq_attr_rw_old(ignore_nice_load); +cpufreq_freq_attr_rw_old(freq_step); static struct attribute *dbs_attributes_old[] = { &sampling_rate_max_old.attr, diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c index bd444dc93cf2..e1314212d8d4 100644 --- a/drivers/cpufreq/cpufreq_ondemand.c +++ b/drivers/cpufreq/cpufreq_ondemand.c @@ -73,6 +73,7 @@ enum {DBS_NORMAL_SAMPLE, DBS_SUB_SAMPLE}; struct cpu_dbs_info_s { cputime64_t prev_cpu_idle; + cputime64_t prev_cpu_iowait; cputime64_t prev_cpu_wall; cputime64_t prev_cpu_nice; struct cpufreq_policy *cur_policy; @@ -108,6 +109,7 @@ static struct dbs_tuners { unsigned int down_differential; unsigned int ignore_nice; unsigned int powersave_bias; + unsigned int io_is_busy; } dbs_tuners_ins = { .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, .down_differential = DEF_FREQUENCY_DOWN_DIFFERENTIAL, @@ -148,6 +150,16 @@ static inline cputime64_t get_cpu_idle_time(unsigned int cpu, cputime64_t *wall) return idle_time; } +static inline cputime64_t get_cpu_iowait_time(unsigned int cpu, cputime64_t *wall) +{ + u64 iowait_time = get_cpu_iowait_time_us(cpu, wall); + + if (iowait_time == -1ULL) + return 0; + + return iowait_time; +} + /* * Find right freq to be set now with powersave_bias on. 
* Returns the freq_hi to be used right now and will set freq_hi_jiffies, @@ -234,12 +246,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj, return sprintf(buf, "%u\n", min_sampling_rate); } -#define define_one_ro(_name) \ -static struct global_attr _name = \ -__ATTR(_name, 0444, show_##_name, NULL) - -define_one_ro(sampling_rate_max); -define_one_ro(sampling_rate_min); +define_one_global_ro(sampling_rate_max); +define_one_global_ro(sampling_rate_min); /* cpufreq_ondemand Governor Tunables */ #define show_one(file_name, object) \ @@ -249,6 +257,7 @@ static ssize_t show_##file_name \ return sprintf(buf, "%u\n", dbs_tuners_ins.object); \ } show_one(sampling_rate, sampling_rate); +show_one(io_is_busy, io_is_busy); show_one(up_threshold, up_threshold); show_one(ignore_nice_load, ignore_nice); show_one(powersave_bias, powersave_bias); @@ -274,12 +283,8 @@ show_one_old(powersave_bias); show_one_old(sampling_rate_min); show_one_old(sampling_rate_max); -#define define_one_ro_old(object, _name) \ -static struct freq_attr object = \ -__ATTR(_name, 0444, show_##_name##_old, NULL) - -define_one_ro_old(sampling_rate_min_old, sampling_rate_min); -define_one_ro_old(sampling_rate_max_old, sampling_rate_max); +cpufreq_freq_attr_ro_old(sampling_rate_min); +cpufreq_freq_attr_ro_old(sampling_rate_max); /*** delete after deprecation time ***/ @@ -299,6 +304,23 @@ static ssize_t store_sampling_rate(struct kobject *a, struct attribute *b, return count; } +static ssize_t store_io_is_busy(struct kobject *a, struct attribute *b, + const char *buf, size_t count) +{ + unsigned int input; + int ret; + + ret = sscanf(buf, "%u", &input); + if (ret != 1) + return -EINVAL; + + mutex_lock(&dbs_mutex); + dbs_tuners_ins.io_is_busy = !!input; + mutex_unlock(&dbs_mutex); + + return count; +} + static ssize_t store_up_threshold(struct kobject *a, struct attribute *b, const char *buf, size_t count) { @@ -376,14 +398,11 @@ static ssize_t store_powersave_bias(struct kobject *a, struct attribute *b, return count; } -#define define_one_rw(_name) \ -static struct global_attr _name = \ -__ATTR(_name, 0644, show_##_name, store_##_name) - -define_one_rw(sampling_rate); -define_one_rw(up_threshold); -define_one_rw(ignore_nice_load); -define_one_rw(powersave_bias); +define_one_global_rw(sampling_rate); +define_one_global_rw(io_is_busy); +define_one_global_rw(up_threshold); +define_one_global_rw(ignore_nice_load); +define_one_global_rw(powersave_bias); static struct attribute *dbs_attributes[] = { &sampling_rate_max.attr, @@ -392,6 +411,7 @@ static struct attribute *dbs_attributes[] = { &up_threshold.attr, &ignore_nice_load.attr, &powersave_bias.attr, + &io_is_busy.attr, NULL }; @@ -415,14 +435,10 @@ write_one_old(up_threshold); write_one_old(ignore_nice_load); write_one_old(powersave_bias); -#define define_one_rw_old(object, _name) \ -static struct freq_attr object = \ -__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old) - -define_one_rw_old(sampling_rate_old, sampling_rate); -define_one_rw_old(up_threshold_old, up_threshold); -define_one_rw_old(ignore_nice_load_old, ignore_nice_load); -define_one_rw_old(powersave_bias_old, powersave_bias); +cpufreq_freq_attr_rw_old(sampling_rate); +cpufreq_freq_attr_rw_old(up_threshold); +cpufreq_freq_attr_rw_old(ignore_nice_load); +cpufreq_freq_attr_rw_old(powersave_bias); static struct attribute *dbs_attributes_old[] = { &sampling_rate_max_old.attr, @@ -470,14 +486,15 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) for_each_cpu(j, policy->cpus) { struct 
cpu_dbs_info_s *j_dbs_info; - cputime64_t cur_wall_time, cur_idle_time; - unsigned int idle_time, wall_time; + cputime64_t cur_wall_time, cur_idle_time, cur_iowait_time; + unsigned int idle_time, wall_time, iowait_time; unsigned int load, load_freq; int freq_avg; j_dbs_info = &per_cpu(od_cpu_dbs_info, j); cur_idle_time = get_cpu_idle_time(j, &cur_wall_time); + cur_iowait_time = get_cpu_iowait_time(j, &cur_wall_time); wall_time = (unsigned int) cputime64_sub(cur_wall_time, j_dbs_info->prev_cpu_wall); @@ -487,6 +504,10 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) j_dbs_info->prev_cpu_idle); j_dbs_info->prev_cpu_idle = cur_idle_time; + iowait_time = (unsigned int) cputime64_sub(cur_iowait_time, + j_dbs_info->prev_cpu_iowait); + j_dbs_info->prev_cpu_iowait = cur_iowait_time; + if (dbs_tuners_ins.ignore_nice) { cputime64_t cur_nice; unsigned long cur_nice_jiffies; @@ -504,6 +525,16 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) idle_time += jiffies_to_usecs(cur_nice_jiffies); } + /* + * For the purpose of ondemand, waiting for disk IO is an + * indication that you're performance critical, and not that + * the system is actually idle. So subtract the iowait time + * from the cpu idle time. + */ + + if (dbs_tuners_ins.io_is_busy && idle_time >= iowait_time) + idle_time -= iowait_time; + if (unlikely(!wall_time || wall_time < idle_time)) continue; @@ -617,6 +648,29 @@ static inline void dbs_timer_exit(struct cpu_dbs_info_s *dbs_info) cancel_delayed_work_sync(&dbs_info->work); } +/* + * Not all CPUs want IO time to be accounted as busy; this depends on how + * efficient idling at a higher frequency/voltage is. + * Pavel Machek says this is not so for various generations of AMD and old + * Intel systems. + * Mike Chan (androidlcom) claims this is also not true for ARM. + * Because of this, whitelist specific known (series) of CPUs by default, and + * leave all others up to the user. + */ +static int should_io_be_busy(void) +{ +#if defined(CONFIG_X86) + /* + * For Intel, Core 2 (model 15) and later have an efficient idle.
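Paraphrasing the dbs_check_cpu() hunk above (plain subtraction in place of cputime64_sub(), names shortened), the effect of the new io_is_busy tunable on the ondemand load estimate is:

/* per-CPU deltas since the previous sample */
wall_time   = cur_wall_time   - prev_wall_time;
idle_time   = cur_idle_time   - prev_idle_time;
iowait_time = cur_iowait_time - prev_iowait_time;

/* with io_is_busy set, time spent waiting on I/O counts as load, not idle */
if (dbs_tuners_ins.io_is_busy && idle_time >= iowait_time)
	idle_time -= iowait_time;

load = 100 * (wall_time - idle_time) / wall_time;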
+ */ + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && + boot_cpu_data.x86 == 6 && + boot_cpu_data.x86_model >= 15) + return 1; +#endif + return 0; +} + static int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event) { @@ -679,6 +733,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, dbs_tuners_ins.sampling_rate = max(min_sampling_rate, latency * LATENCY_MULTIPLIER); + dbs_tuners_ins.io_is_busy = should_io_be_busy(); } mutex_unlock(&dbs_mutex); diff --git a/drivers/input/gameport/gameport.c b/drivers/input/gameport/gameport.c index 7e18bcf05a66..46239e47a260 100644 --- a/drivers/input/gameport/gameport.c +++ b/drivers/input/gameport/gameport.c @@ -59,11 +59,11 @@ static unsigned int get_time_pit(void) unsigned long flags; unsigned int count; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); outb_p(0x00, 0x43); count = inb_p(0x40); count |= inb_p(0x40) << 8; - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); return count; } diff --git a/drivers/input/joystick/analog.c b/drivers/input/joystick/analog.c index 1c0b529c06aa..4afe0a3b4884 100644 --- a/drivers/input/joystick/analog.c +++ b/drivers/input/joystick/analog.c @@ -146,11 +146,11 @@ static unsigned int get_time_pit(void) unsigned long flags; unsigned int count; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); outb_p(0x00, 0x43); count = inb_p(0x40); count |= inb_p(0x40) << 8; - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); return count; } diff --git a/drivers/input/joystick/iforce/iforce-main.c b/drivers/input/joystick/iforce/iforce-main.c index b1edd778639c..405febd94f24 100644 --- a/drivers/input/joystick/iforce/iforce-main.c +++ b/drivers/input/joystick/iforce/iforce-main.c @@ -54,6 +54,9 @@ static signed short btn_avb_wheel[] = static signed short abs_joystick[] = { ABS_X, ABS_Y, ABS_THROTTLE, ABS_HAT0X, ABS_HAT0Y, -1 }; +static signed short abs_joystick_rudder[] = +{ ABS_X, ABS_Y, ABS_THROTTLE, ABS_RUDDER, ABS_HAT0X, ABS_HAT0Y, -1 }; + static signed short abs_avb_pegasus[] = { ABS_X, ABS_Y, ABS_THROTTLE, ABS_RUDDER, ABS_HAT0X, ABS_HAT0Y, ABS_HAT1X, ABS_HAT1Y, -1 }; @@ -76,8 +79,9 @@ static struct iforce_device iforce_device[] = { { 0x061c, 0xc0a4, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce }, //? { 0x061c, 0xc084, "ACT LABS Force RS", btn_wheel, abs_wheel, ff_iforce }, { 0x06f8, 0x0001, "Guillemot Race Leader Force Feedback", btn_wheel, abs_wheel, ff_iforce }, //? + { 0x06f8, 0x0001, "Guillemot Jet Leader Force Feedback", btn_joystick, abs_joystick_rudder, ff_iforce }, { 0x06f8, 0x0004, "Guillemot Force Feedback Racing Wheel", btn_wheel, abs_wheel, ff_iforce }, //? - { 0x06f8, 0x0004, "Gullemot Jet Leader 3D", btn_joystick, abs_joystick, ff_iforce }, //? + { 0x06f8, 0xa302, "Guillemot Jet Leader 3D", btn_joystick, abs_joystick, ff_iforce }, //? 
{ 0x06d6, 0x29bc, "Trust Force Feedback Race Master", btn_wheel, abs_wheel, ff_iforce }, { 0x0000, 0x0000, "Unknown I-Force Device [%04x:%04x]", btn_joystick, abs_joystick, ff_iforce } }; diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c index b41303d3ec54..6c96631ae5d9 100644 --- a/drivers/input/joystick/iforce/iforce-usb.c +++ b/drivers/input/joystick/iforce/iforce-usb.c @@ -212,6 +212,7 @@ static struct usb_device_id iforce_usb_ids [] = { { USB_DEVICE(0x061c, 0xc0a4) }, /* ACT LABS Force RS */ { USB_DEVICE(0x061c, 0xc084) }, /* ACT LABS Force RS */ { USB_DEVICE(0x06f8, 0x0001) }, /* Guillemot Race Leader Force Feedback */ + { USB_DEVICE(0x06f8, 0x0003) }, /* Guillemot Jet Leader Force Feedback */ { USB_DEVICE(0x06f8, 0x0004) }, /* Guillemot Force Feedback Racing Wheel */ { USB_DEVICE(0x06f8, 0xa302) }, /* Guillemot Jet Leader 3D */ { } /* Terminating entry */ diff --git a/drivers/input/misc/pcspkr.c b/drivers/input/misc/pcspkr.c index ea4e1fd12651..f080dd31499b 100644 --- a/drivers/input/misc/pcspkr.c +++ b/drivers/input/misc/pcspkr.c @@ -30,7 +30,7 @@ MODULE_ALIAS("platform:pcspkr"); #include <asm/i8253.h> #else #include <asm/8253pit.h> -static DEFINE_SPINLOCK(i8253_lock); +static DEFINE_RAW_SPINLOCK(i8253_lock); #endif static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int code, int value) @@ -50,7 +50,7 @@ static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int c if (value > 20 && value < 32767) count = PIT_TICK_RATE / value; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); if (count) { /* set command for counter 2, 2 byte write */ @@ -65,7 +65,7 @@ static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int c outb(inb_p(0x61) & 0xFC, 0x61); } - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); return 0; } diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c index 0520c2e19927..112b4ee52ff2 100644 --- a/drivers/input/mouse/elantech.c +++ b/drivers/input/mouse/elantech.c @@ -185,7 +185,7 @@ static void elantech_report_absolute_v1(struct psmouse *psmouse) int fingers; static int old_fingers; - if (etd->fw_version_maj == 0x01) { + if (etd->fw_version < 0x020000) { /* * byte 0: D U p1 p2 1 p3 R L * byte 1: f 0 th tw x9 x8 y9 y8 @@ -227,7 +227,7 @@ static void elantech_report_absolute_v1(struct psmouse *psmouse) input_report_key(dev, BTN_LEFT, packet[0] & 0x01); input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); - if ((etd->fw_version_maj == 0x01) && + if (etd->fw_version < 0x020000 && (etd->capabilities & ETP_CAP_HAS_ROCKER)) { /* rocker up */ input_report_key(dev, BTN_FORWARD, packet[0] & 0x40); @@ -321,7 +321,7 @@ static int elantech_check_parity_v1(struct psmouse *psmouse) unsigned char p1, p2, p3; /* Parity bits are placed differently */ - if (etd->fw_version_maj == 0x01) { + if (etd->fw_version < 0x020000) { /* byte 0: D U p1 p2 1 p3 R L */ p1 = (packet[0] & 0x20) >> 5; p2 = (packet[0] & 0x10) >> 4; @@ -457,7 +457,7 @@ static void elantech_set_input_params(struct psmouse *psmouse) switch (etd->hw_version) { case 1: /* Rocker button */ - if ((etd->fw_version_maj == 0x01) && + if (etd->fw_version < 0x020000 && (etd->capabilities & ETP_CAP_HAS_ROCKER)) { __set_bit(BTN_FORWARD, dev->keybit); __set_bit(BTN_BACK, dev->keybit); @@ -686,15 +686,14 @@ int elantech_init(struct psmouse *psmouse) pr_err("elantech.c: failed to query firmware version.\n"); goto init_fail; } - 
etd->fw_version_maj = param[0]; - etd->fw_version_min = param[2]; + + etd->fw_version = (param[0] << 16) | (param[1] << 8) | param[2]; /* * Assume every version greater than this is new EeePC style * hardware with 6 byte packets */ - if ((etd->fw_version_maj == 0x02 && etd->fw_version_min >= 0x30) || - etd->fw_version_maj > 0x02) { + if (etd->fw_version >= 0x020030) { etd->hw_version = 2; /* For now show extra debug information */ etd->debug = 1; @@ -704,8 +703,9 @@ int elantech_init(struct psmouse *psmouse) etd->hw_version = 1; etd->paritycheck = 1; } - pr_info("elantech.c: assuming hardware version %d, firmware version %d.%d\n", - etd->hw_version, etd->fw_version_maj, etd->fw_version_min); + + pr_info("elantech.c: assuming hardware version %d, firmware version %d.%d.%d\n", + etd->hw_version, param[0], param[1], param[2]); if (synaptics_send_cmd(psmouse, ETP_CAPABILITIES_QUERY, param)) { pr_err("elantech.c: failed to query capabilities.\n"); @@ -720,8 +720,8 @@ int elantech_init(struct psmouse *psmouse) * a touch action starts causing the mouse cursor or scrolled page * to jump. Enable a workaround. */ - if (etd->fw_version_maj == 0x02 && etd->fw_version_min == 0x22) { - pr_info("elantech.c: firmware version 2.34 detected, " + if (etd->fw_version == 0x020022) { + pr_info("elantech.c: firmware version 2.0.34 detected, " "enabling jumpy cursor workaround\n"); etd->jumpy_cursor = 1; } diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h index feac5f7af966..ac57bde1bb9f 100644 --- a/drivers/input/mouse/elantech.h +++ b/drivers/input/mouse/elantech.h @@ -100,11 +100,10 @@ struct elantech_data { unsigned char reg_26; unsigned char debug; unsigned char capabilities; - unsigned char fw_version_maj; - unsigned char fw_version_min; - unsigned char hw_version; unsigned char paritycheck; unsigned char jumpy_cursor; + unsigned char hw_version; + unsigned int fw_version; unsigned char parity[256]; }; diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c index cbc807264940..a3c97315a473 100644 --- a/drivers/input/mouse/psmouse-base.c +++ b/drivers/input/mouse/psmouse-base.c @@ -1394,6 +1394,7 @@ static int psmouse_reconnect(struct serio *serio) struct psmouse *psmouse = serio_get_drvdata(serio); struct psmouse *parent = NULL; struct serio_driver *drv = serio->drv; + unsigned char type; int rc = -1; if (!drv || !psmouse) { @@ -1413,10 +1414,15 @@ static int psmouse_reconnect(struct serio *serio) if (psmouse->reconnect) { if (psmouse->reconnect(psmouse)) goto out; - } else if (psmouse_probe(psmouse) < 0 || - psmouse->type != psmouse_extensions(psmouse, - psmouse_max_proto, false)) { - goto out; + } else { + psmouse_reset(psmouse); + + if (psmouse_probe(psmouse) < 0) + goto out; + + type = psmouse_extensions(psmouse, psmouse_max_proto, false); + if (psmouse->type != type) + goto out; } /* ok, the device type (and capabilities) match the old one, diff --git a/drivers/input/touchscreen/ad7877.c b/drivers/input/touchscreen/ad7877.c index e019d53d1ab4..0d2d7e54b465 100644 --- a/drivers/input/touchscreen/ad7877.c +++ b/drivers/input/touchscreen/ad7877.c @@ -156,9 +156,14 @@ struct ser_req { u16 reset; u16 ref_on; u16 command; - u16 sample; struct spi_message msg; struct spi_transfer xfer[6]; + + /* + * DMA (thus cache coherency maintenance) requires the + * transfer buffers to live in their own cache lines. 
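The ad7877 hunk above illustrates a general rule for SPI/DMA clients: buffers handed to the DMA engine must not share a cache line with CPU-updated fields, otherwise cache maintenance on the buffer can corrupt, or be corrupted by, concurrent CPU writes to its neighbours. A generic sketch of the pattern (struct and field names are illustrative):

struct my_spi_chip {
	struct spi_device	*spi;
	struct mutex		lock;
	u8			mode;

	/*
	 * DMA transfer buffers go last and start on their own cache
	 * line, so flushing/invalidating them never touches the
	 * CPU-owned fields above.
	 */
	u16			rx_buf[8] ____cacheline_aligned;
};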
+ */ + u16 sample ____cacheline_aligned; }; struct ad7877 { @@ -182,8 +187,6 @@ struct ad7877 { u8 averaging; u8 pen_down_acc_interval; - u16 conversion_data[AD7877_NR_SENSE]; - struct spi_transfer xfer[AD7877_NR_SENSE + 2]; struct spi_message msg; @@ -195,6 +198,12 @@ struct ad7877 { spinlock_t lock; struct timer_list timer; /* P: lock */ unsigned pending:1; /* P: lock */ + + /* + * DMA (thus cache coherency maintenance) requires the + * transfer buffers to live in their own cache lines. + */ + u16 conversion_data[AD7877_NR_SENSE] ____cacheline_aligned; }; static int gpio3; diff --git a/drivers/mfd/wm831x-core.c b/drivers/mfd/wm831x-core.c index a3d5728b6449..f2ab025ad97a 100644 --- a/drivers/mfd/wm831x-core.c +++ b/drivers/mfd/wm831x-core.c @@ -349,6 +349,9 @@ int wm831x_auxadc_read(struct wm831x *wm831x, enum wm831x_auxadc input) goto disable; } + /* If an interrupt arrived late clean up after it */ + try_wait_for_completion(&wm831x->auxadc_done); + /* Ignore the result to allow us to soldier on without IRQ hookup */ wait_for_completion_timeout(&wm831x->auxadc_done, msecs_to_jiffies(5)); diff --git a/drivers/mfd/wm8350-core.c b/drivers/mfd/wm8350-core.c index e400a3bed063..b5807484b4c9 100644 --- a/drivers/mfd/wm8350-core.c +++ b/drivers/mfd/wm8350-core.c @@ -363,6 +363,10 @@ int wm8350_read_auxadc(struct wm8350 *wm8350, int channel, int scale, int vref) reg |= 1 << channel | WM8350_AUXADC_POLL; wm8350_reg_write(wm8350, WM8350_DIGITISER_CONTROL_1, reg); + /* If a late IRQ left the completion signalled then consume + * the completion. */ + try_wait_for_completion(&wm8350->auxadc_done); + /* We ignore the result of the completion and just check for a * conversion result, allowing us to soldier on if the IRQ * infrastructure is not set up for the chip. */ diff --git a/drivers/misc/vmware_balloon.c b/drivers/misc/vmware_balloon.c index e7161c4e3798..db9cd0240c6f 100644 --- a/drivers/misc/vmware_balloon.c +++ b/drivers/misc/vmware_balloon.c @@ -41,7 +41,7 @@ #include <linux/workqueue.h> #include <linux/debugfs.h> #include <linux/seq_file.h> -#include <asm/vmware.h> +#include <asm/hypervisor.h> MODULE_AUTHOR("VMware, Inc."); MODULE_DESCRIPTION("VMware Memory Control (Balloon) Driver"); @@ -767,7 +767,7 @@ static int __init vmballoon_init(void) * Check if we are running on VMware's hypervisor and bail out * if we are not. 
*/ - if (!vmware_platform()) + if (x86_hyper != &x86_hyper_vmware) return -ENODEV; vmballoon_wq = create_freezeable_workqueue("vmmemctl"); diff --git a/drivers/mmc/host/at91_mci.c b/drivers/mmc/host/at91_mci.c index a6dd7da37357..336d9f553f3e 100644 --- a/drivers/mmc/host/at91_mci.c +++ b/drivers/mmc/host/at91_mci.c @@ -314,8 +314,8 @@ static void at91_mci_post_dma_read(struct at91mci_host *host) dmabuf = (unsigned *)tmpv; } + flush_kernel_dcache_page(sg_page(sg)); kunmap_atomic(sgbuffer, KM_BIO_SRC_IRQ); - dmac_flush_range((void *)sgbuffer, ((void *)sgbuffer) + amount); data->bytes_xfered += amount; if (size == 0) break; diff --git a/drivers/net/a2065.c b/drivers/net/a2065.c index 541f9a20f519..f142cc21e453 100644 --- a/drivers/net/a2065.c +++ b/drivers/net/a2065.c @@ -672,6 +672,7 @@ static struct zorro_device_id a2065_zorro_tbl[] __devinitdata = { { ZORRO_PROD_AMERISTAR_A2065 }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, a2065_zorro_tbl); static struct zorro_driver a2065_driver = { .name = "a2065", diff --git a/drivers/net/ariadne.c b/drivers/net/ariadne.c index 705373a5308d..39214e512452 100644 --- a/drivers/net/ariadne.c +++ b/drivers/net/ariadne.c @@ -145,6 +145,7 @@ static struct zorro_device_id ariadne_zorro_tbl[] __devinitdata = { { ZORRO_PROD_VILLAGE_TRONIC_ARIADNE }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, ariadne_zorro_tbl); static struct zorro_driver ariadne_driver = { .name = "ariadne", diff --git a/drivers/net/hydra.c b/drivers/net/hydra.c index 24724b4ad709..07d8e5b634f3 100644 --- a/drivers/net/hydra.c +++ b/drivers/net/hydra.c @@ -71,6 +71,7 @@ static struct zorro_device_id hydra_zorro_tbl[] __devinitdata = { { ZORRO_PROD_HYDRA_SYSTEMS_AMIGANET }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, hydra_zorro_tbl); static struct zorro_driver hydra_driver = { .name = "hydra", diff --git a/drivers/net/zorro8390.c b/drivers/net/zorro8390.c index 4f7b9d6a087b..b78a38d9172a 100644 --- a/drivers/net/zorro8390.c +++ b/drivers/net/zorro8390.c @@ -102,6 +102,7 @@ static struct zorro_device_id zorro8390_zorro_tbl[] __devinitdata = { { ZORRO_PROD_INDIVIDUAL_COMPUTERS_X_SURF, }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, zorro8390_zorro_tbl); static struct zorro_driver zorro8390_driver = { .name = "zorro8390", diff --git a/drivers/oprofile/cpu_buffer.c b/drivers/oprofile/cpu_buffer.c index 166b67ea622f..219f79e2210a 100644 --- a/drivers/oprofile/cpu_buffer.c +++ b/drivers/oprofile/cpu_buffer.c @@ -30,23 +30,7 @@ #define OP_BUFFER_FLAGS 0 -/* - * Read and write access is using spin locking. Thus, writing to the - * buffer by NMI handler (x86) could occur also during critical - * sections when reading the buffer. To avoid this, there are 2 - * buffers for independent read and write access. Read access is in - * process context only, write access only in the NMI handler. If the - * read buffer runs empty, both buffers are swapped atomically. There - * is potentially a small window during swapping where the buffers are - * disabled and samples could be lost. - * - * Using 2 buffers is a little bit overhead, but the solution is clear - * and does not require changes in the ring buffer implementation. It - * can be changed to a single buffer solution when the ring buffer - * access is implemented as non-locking atomic code. 
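With the drivers/oprofile/cpu_buffer.c consolidation below, readers and writers share a single ring buffer and the old read/write swap dance disappears. Roughly how a writer uses the reserve/commit pair after this change (a sketch only; the helper and the pc/event/cookie values are illustrative):

static void record_sample(unsigned long pc, unsigned long event,
			  unsigned long cookie)
{
	struct op_entry entry;
	struct op_sample *sample;

	/* reserve the sample header plus two payload words */
	sample = op_cpu_buffer_write_reserve(&entry, 2);
	if (!sample)
		return;		/* buffer full: the sample is dropped */

	sample->eip = pc;
	sample->event = event;
	entry.data[0] = cookie;
	entry.data[1] = 0;

	op_cpu_buffer_write_commit(&entry);
}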
- */ -static struct ring_buffer *op_ring_buffer_read; -static struct ring_buffer *op_ring_buffer_write; +static struct ring_buffer *op_ring_buffer; DEFINE_PER_CPU(struct oprofile_cpu_buffer, op_cpu_buffer); static void wq_sync_buffer(struct work_struct *work); @@ -68,12 +52,9 @@ void oprofile_cpu_buffer_inc_smpl_lost(void) void free_cpu_buffers(void) { - if (op_ring_buffer_read) - ring_buffer_free(op_ring_buffer_read); - op_ring_buffer_read = NULL; - if (op_ring_buffer_write) - ring_buffer_free(op_ring_buffer_write); - op_ring_buffer_write = NULL; + if (op_ring_buffer) + ring_buffer_free(op_ring_buffer); + op_ring_buffer = NULL; } #define RB_EVENT_HDR_SIZE 4 @@ -86,11 +67,8 @@ int alloc_cpu_buffers(void) unsigned long byte_size = buffer_size * (sizeof(struct op_sample) + RB_EVENT_HDR_SIZE); - op_ring_buffer_read = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS); - if (!op_ring_buffer_read) - goto fail; - op_ring_buffer_write = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS); - if (!op_ring_buffer_write) + op_ring_buffer = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS); + if (!op_ring_buffer) goto fail; for_each_possible_cpu(i) { @@ -162,16 +140,11 @@ struct op_sample *op_cpu_buffer_write_reserve(struct op_entry *entry, unsigned long size) { entry->event = ring_buffer_lock_reserve - (op_ring_buffer_write, sizeof(struct op_sample) + + (op_ring_buffer, sizeof(struct op_sample) + size * sizeof(entry->sample->data[0])); - if (entry->event) - entry->sample = ring_buffer_event_data(entry->event); - else - entry->sample = NULL; - - if (!entry->sample) + if (!entry->event) return NULL; - + entry->sample = ring_buffer_event_data(entry->event); entry->size = size; entry->data = entry->sample->data; @@ -180,25 +153,16 @@ struct op_sample int op_cpu_buffer_write_commit(struct op_entry *entry) { - return ring_buffer_unlock_commit(op_ring_buffer_write, entry->event); + return ring_buffer_unlock_commit(op_ring_buffer, entry->event); } struct op_sample *op_cpu_buffer_read_entry(struct op_entry *entry, int cpu) { struct ring_buffer_event *e; - e = ring_buffer_consume(op_ring_buffer_read, cpu, NULL); - if (e) - goto event; - if (ring_buffer_swap_cpu(op_ring_buffer_read, - op_ring_buffer_write, - cpu)) + e = ring_buffer_consume(op_ring_buffer, cpu, NULL, NULL); + if (!e) return NULL; - e = ring_buffer_consume(op_ring_buffer_read, cpu, NULL); - if (e) - goto event; - return NULL; -event: entry->event = e; entry->sample = ring_buffer_event_data(e); entry->size = (ring_buffer_event_length(e) - sizeof(struct op_sample)) @@ -209,8 +173,7 @@ event: unsigned long op_cpu_buffer_entries(int cpu) { - return ring_buffer_entries_cpu(op_ring_buffer_read, cpu) - + ring_buffer_entries_cpu(op_ring_buffer_write, cpu); + return ring_buffer_entries_cpu(op_ring_buffer, cpu); } static int @@ -356,8 +319,16 @@ void oprofile_add_ext_sample(unsigned long pc, struct pt_regs * const regs, void oprofile_add_sample(struct pt_regs * const regs, unsigned long event) { - int is_kernel = !user_mode(regs); - unsigned long pc = profile_pc(regs); + int is_kernel; + unsigned long pc; + + if (likely(regs)) { + is_kernel = !user_mode(regs); + pc = profile_pc(regs); + } else { + is_kernel = 0; /* This value will not be used */ + pc = ESCAPE_CODE; /* as this causes an early return. 
*/ + } __oprofile_add_ext_sample(pc, regs, event, is_kernel); } diff --git a/drivers/oprofile/oprof.c b/drivers/oprofile/oprof.c index dc8a0428260d..b336cd9ee7a1 100644 --- a/drivers/oprofile/oprof.c +++ b/drivers/oprofile/oprof.c @@ -253,22 +253,26 @@ static int __init oprofile_init(void) int err; err = oprofile_arch_init(&oprofile_ops); - if (err < 0 || timer) { printk(KERN_INFO "oprofile: using timer interrupt.\n"); - oprofile_timer_init(&oprofile_ops); + err = oprofile_timer_init(&oprofile_ops); + if (err) + goto out_arch; } - err = oprofilefs_register(); if (err) - oprofile_arch_exit(); + goto out_arch; + return 0; +out_arch: + oprofile_arch_exit(); return err; } static void __exit oprofile_exit(void) { + oprofile_timer_exit(); oprofilefs_unregister(); oprofile_arch_exit(); } diff --git a/drivers/oprofile/oprof.h b/drivers/oprofile/oprof.h index cb92f5c98c1a..47e12cb4ee8b 100644 --- a/drivers/oprofile/oprof.h +++ b/drivers/oprofile/oprof.h @@ -34,7 +34,8 @@ struct super_block; struct dentry; void oprofile_create_files(struct super_block *sb, struct dentry *root); -void oprofile_timer_init(struct oprofile_operations *ops); +int oprofile_timer_init(struct oprofile_operations *ops); +void oprofile_timer_exit(void); int oprofile_set_backtrace(unsigned long depth); int oprofile_set_timeout(unsigned long time); diff --git a/drivers/oprofile/timer_int.c b/drivers/oprofile/timer_int.c index 333f915568c7..dc0ae4d14dff 100644 --- a/drivers/oprofile/timer_int.c +++ b/drivers/oprofile/timer_int.c @@ -13,34 +13,94 @@ #include <linux/oprofile.h> #include <linux/profile.h> #include <linux/init.h> +#include <linux/cpu.h> +#include <linux/hrtimer.h> +#include <asm/irq_regs.h> #include <asm/ptrace.h> #include "oprof.h" -static int timer_notify(struct pt_regs *regs) +static DEFINE_PER_CPU(struct hrtimer, oprofile_hrtimer); + +static enum hrtimer_restart oprofile_hrtimer_notify(struct hrtimer *hrtimer) +{ + oprofile_add_sample(get_irq_regs(), 0); + hrtimer_forward_now(hrtimer, ns_to_ktime(TICK_NSEC)); + return HRTIMER_RESTART; +} + +static void __oprofile_hrtimer_start(void *unused) +{ + struct hrtimer *hrtimer = &__get_cpu_var(oprofile_hrtimer); + + hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer->function = oprofile_hrtimer_notify; + + hrtimer_start(hrtimer, ns_to_ktime(TICK_NSEC), + HRTIMER_MODE_REL_PINNED); +} + +static int oprofile_hrtimer_start(void) { - oprofile_add_sample(regs, 0); + on_each_cpu(__oprofile_hrtimer_start, NULL, 1); return 0; } -static int timer_start(void) +static void __oprofile_hrtimer_stop(int cpu) { - return register_timer_hook(timer_notify); + struct hrtimer *hrtimer = &per_cpu(oprofile_hrtimer, cpu); + + hrtimer_cancel(hrtimer); } +static void oprofile_hrtimer_stop(void) +{ + int cpu; + + for_each_online_cpu(cpu) + __oprofile_hrtimer_stop(cpu); +} -static void timer_stop(void) +static int __cpuinit oprofile_cpu_notify(struct notifier_block *self, + unsigned long action, void *hcpu) { - unregister_timer_hook(timer_notify); + long cpu = (long) hcpu; + + switch (action) { + case CPU_ONLINE: + case CPU_ONLINE_FROZEN: + smp_call_function_single(cpu, __oprofile_hrtimer_start, + NULL, 1); + break; + case CPU_DEAD: + case CPU_DEAD_FROZEN: + __oprofile_hrtimer_stop(cpu); + break; + } + return NOTIFY_OK; } +static struct notifier_block __refdata oprofile_cpu_notifier = { + .notifier_call = oprofile_cpu_notify, +}; -void __init oprofile_timer_init(struct oprofile_operations *ops) +int __init oprofile_timer_init(struct oprofile_operations *ops) { + int rc; + + rc = 
register_hotcpu_notifier(&oprofile_cpu_notifier); + if (rc) + return rc; ops->create_files = NULL; ops->setup = NULL; ops->shutdown = NULL; - ops->start = timer_start; - ops->stop = timer_stop; + ops->start = oprofile_hrtimer_start; + ops->stop = oprofile_hrtimer_stop; ops->cpu_type = "timer"; + return 0; +} + +void __exit oprofile_timer_exit(void) +{ + unregister_hotcpu_notifier(&oprofile_cpu_notifier); } diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c index 417312528ddf..371dc564e2e4 100644 --- a/drivers/pci/intel-iommu.c +++ b/drivers/pci/intel-iommu.c @@ -3626,14 +3626,15 @@ static void intel_iommu_detach_device(struct iommu_domain *domain, domain_remove_one_dev_info(dmar_domain, pdev); } -static int intel_iommu_map_range(struct iommu_domain *domain, - unsigned long iova, phys_addr_t hpa, - size_t size, int iommu_prot) +static int intel_iommu_map(struct iommu_domain *domain, + unsigned long iova, phys_addr_t hpa, + int gfp_order, int iommu_prot) { struct dmar_domain *dmar_domain = domain->priv; u64 max_addr; int addr_width; int prot = 0; + size_t size; int ret; if (iommu_prot & IOMMU_READ) @@ -3643,6 +3644,7 @@ static int intel_iommu_map_range(struct iommu_domain *domain, if ((iommu_prot & IOMMU_CACHE) && dmar_domain->iommu_snooping) prot |= DMA_PTE_SNP; + size = PAGE_SIZE << gfp_order; max_addr = iova + size; if (dmar_domain->max_addr < max_addr) { int min_agaw; @@ -3669,19 +3671,19 @@ static int intel_iommu_map_range(struct iommu_domain *domain, return ret; } -static void intel_iommu_unmap_range(struct iommu_domain *domain, - unsigned long iova, size_t size) +static int intel_iommu_unmap(struct iommu_domain *domain, + unsigned long iova, int gfp_order) { struct dmar_domain *dmar_domain = domain->priv; - - if (!size) - return; + size_t size = PAGE_SIZE << gfp_order; dma_pte_clear_range(dmar_domain, iova >> VTD_PAGE_SHIFT, (iova + size - 1) >> VTD_PAGE_SHIFT); if (dmar_domain->max_addr == iova + size) dmar_domain->max_addr = iova; + + return gfp_order; } static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain, @@ -3714,8 +3716,8 @@ static struct iommu_ops intel_iommu_ops = { .domain_destroy = intel_iommu_domain_destroy, .attach_dev = intel_iommu_attach_device, .detach_dev = intel_iommu_detach_device, - .map = intel_iommu_map_range, - .unmap = intel_iommu_unmap_range, + .map = intel_iommu_map, + .unmap = intel_iommu_unmap, .iova_to_phys = intel_iommu_iova_to_phys, .domain_has_cap = intel_iommu_domain_has_cap, }; diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index 4fe36d2e1049..19b111383f62 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -838,65 +838,11 @@ static void pci_bus_dump_resources(struct pci_bus *bus) } } -static int __init pci_bus_get_depth(struct pci_bus *bus) -{ - int depth = 0; - struct pci_dev *dev; - - list_for_each_entry(dev, &bus->devices, bus_list) { - int ret; - struct pci_bus *b = dev->subordinate; - if (!b) - continue; - - ret = pci_bus_get_depth(b); - if (ret + 1 > depth) - depth = ret + 1; - } - - return depth; -} -static int __init pci_get_max_depth(void) -{ - int depth = 0; - struct pci_bus *bus; - - list_for_each_entry(bus, &pci_root_buses, node) { - int ret; - - ret = pci_bus_get_depth(bus); - if (ret > depth) - depth = ret; - } - - return depth; -} - -/* - * first try will not touch pci bridge res - * second and later try will clear small leaf bridge res - * will stop till to the max deepth if can not find good one - */ void __init pci_assign_unassigned_resources(void) { struct pci_bus 
*bus; - int tried_times = 0; - enum release_type rel_type = leaf_only; - struct resource_list_x head, *list; - unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM | - IORESOURCE_PREFETCH; - unsigned long failed_type; - int max_depth = pci_get_max_depth(); - int pci_try_num; - head.next = NULL; - - pci_try_num = max_depth + 1; - printk(KERN_DEBUG "PCI: max bus depth: %d pci_try_num: %d\n", - max_depth, pci_try_num); - -again: /* Depth first, calculate sizes and alignments of all subordinate buses. */ list_for_each_entry(bus, &pci_root_buses, node) { @@ -904,65 +850,9 @@ again: } /* Depth last, allocate resources and update the hardware. */ list_for_each_entry(bus, &pci_root_buses, node) { - __pci_bus_assign_resources(bus, &head); - } - tried_times++; - - /* any device complain? */ - if (!head.next) - goto enable_and_dump; - failed_type = 0; - for (list = head.next; list;) { - failed_type |= list->flags; - list = list->next; - } - /* - * io port are tight, don't try extra - * or if reach the limit, don't want to try more - */ - failed_type &= type_mask; - if ((failed_type == IORESOURCE_IO) || (tried_times >= pci_try_num)) { - free_failed_list(&head); - goto enable_and_dump; - } - - printk(KERN_DEBUG "PCI: No. %d try to assign unassigned res\n", - tried_times + 1); - - /* third times and later will not check if it is leaf */ - if ((tried_times + 1) > 2) - rel_type = whole_subtree; - - /* - * Try to release leaf bridge's resources that doesn't fit resource of - * child device under that bridge - */ - for (list = head.next; list;) { - bus = list->dev->bus; - pci_bus_release_bridge_resources(bus, list->flags & type_mask, - rel_type); - list = list->next; - } - /* restore size and flags */ - for (list = head.next; list;) { - struct resource *res = list->res; - - res->start = list->start; - res->end = list->end; - res->flags = list->flags; - if (list->dev->subordinate) - res->flags = 0; - - list = list->next; - } - free_failed_list(&head); - - goto again; - -enable_and_dump: - /* Depth last, update the hardware. 
*/ - list_for_each_entry(bus, &pci_root_buses, node) + pci_bus_assign_resources(bus); pci_enable_bridges(bus); + } /* dump the resource on buses */ list_for_each_entry(bus, &pci_root_buses, node) { diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index acf222f91f5a..fa2339cb1681 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -37,6 +37,9 @@ */ #define DASD_CHANQ_MAX_SIZE 4 +#define DASD_SLEEPON_START_TAG (void *) 1 +#define DASD_SLEEPON_END_TAG (void *) 2 + /* * SECTION: exported variables of dasd.c */ @@ -1472,7 +1475,10 @@ void dasd_add_request_tail(struct dasd_ccw_req *cqr) */ static void dasd_wakeup_cb(struct dasd_ccw_req *cqr, void *data) { - wake_up((wait_queue_head_t *) data); + spin_lock_irq(get_ccwdev_lock(cqr->startdev->cdev)); + cqr->callback_data = DASD_SLEEPON_END_TAG; + spin_unlock_irq(get_ccwdev_lock(cqr->startdev->cdev)); + wake_up(&generic_waitq); } static inline int _wait_for_wakeup(struct dasd_ccw_req *cqr) @@ -1482,10 +1488,7 @@ static inline int _wait_for_wakeup(struct dasd_ccw_req *cqr) device = cqr->startdev; spin_lock_irq(get_ccwdev_lock(device->cdev)); - rc = ((cqr->status == DASD_CQR_DONE || - cqr->status == DASD_CQR_NEED_ERP || - cqr->status == DASD_CQR_TERMINATED) && - list_empty(&cqr->devlist)); + rc = (cqr->callback_data == DASD_SLEEPON_END_TAG); spin_unlock_irq(get_ccwdev_lock(device->cdev)); return rc; } @@ -1573,7 +1576,7 @@ static int _dasd_sleep_on(struct dasd_ccw_req *maincqr, int interruptible) wait_event(generic_waitq, !(device->stopped)); cqr->callback = dasd_wakeup_cb; - cqr->callback_data = (void *) &generic_waitq; + cqr->callback_data = DASD_SLEEPON_START_TAG; dasd_add_request_tail(cqr); if (interruptible) { rc = wait_event_interruptible( @@ -1652,7 +1655,7 @@ int dasd_sleep_on_immediatly(struct dasd_ccw_req *cqr) } cqr->callback = dasd_wakeup_cb; - cqr->callback_data = (void *) &generic_waitq; + cqr->callback_data = DASD_SLEEPON_START_TAG; cqr->status = DASD_CQR_QUEUED; list_add(&cqr->devlist, &device->ccw_queue); diff --git a/drivers/scsi/zorro7xx.c b/drivers/scsi/zorro7xx.c index 105449c15fa9..e17764d71476 100644 --- a/drivers/scsi/zorro7xx.c +++ b/drivers/scsi/zorro7xx.c @@ -69,6 +69,7 @@ static struct zorro_device_id zorro7xx_zorro_tbl[] __devinitdata = { }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, zorro7xx_zorro_tbl); static int __devinit zorro7xx_init_one(struct zorro_dev *z, const struct zorro_device_id *ent) diff --git a/drivers/serial/imx.c b/drivers/serial/imx.c index 4315b23590bd..eacb588a9345 100644 --- a/drivers/serial/imx.c +++ b/drivers/serial/imx.c @@ -120,7 +120,8 @@ #define MX2_UCR3_RXDMUXSEL (1<<2) /* RXD Muxed Input Select, on mx2/mx3 */ #define UCR3_INVT (1<<1) /* Inverted Infrared transmission */ #define UCR3_BPEN (1<<0) /* Preset registers enable */ -#define UCR4_CTSTL_32 (32<<10) /* CTS trigger level (32 chars) */ +#define UCR4_CTSTL_SHF 10 /* CTS trigger level shift */ +#define UCR4_CTSTL_MASK 0x3F /* CTS trigger is 6 bits wide */ #define UCR4_INVR (1<<9) /* Inverted infrared reception */ #define UCR4_ENIRI (1<<8) /* Serial infrared interrupt enable */ #define UCR4_WKEN (1<<7) /* Wake interrupt enable */ @@ -591,6 +592,9 @@ static int imx_setup_ufcr(struct imx_port *sport, unsigned int mode) return 0; } +/* half the RX buffer size */ +#define CTSTL 16 + static int imx_startup(struct uart_port *port) { struct imx_port *sport = (struct imx_port *)port; @@ -607,6 +611,10 @@ static int imx_startup(struct uart_port *port) if (USE_IRDA(sport)) temp |= UCR4_IRSC; + /* set the trigger level 
for CTS */ + temp &= ~(UCR4_CTSTL_MASK<< UCR4_CTSTL_SHF); + temp |= CTSTL<< UCR4_CTSTL_SHF; + writel(temp & ~UCR4_DREN, sport->port.membase + UCR4); if (USE_IRDA(sport)) { diff --git a/drivers/serial/mpc52xx_uart.c b/drivers/serial/mpc52xx_uart.c index a176ab4bd65b..02469c31bf0b 100644 --- a/drivers/serial/mpc52xx_uart.c +++ b/drivers/serial/mpc52xx_uart.c @@ -1467,7 +1467,7 @@ mpc52xx_uart_init(void) /* * Map the PSC FIFO Controller and init if on MPC512x. */ - if (psc_ops->fifoc_init) { + if (psc_ops && psc_ops->fifoc_init) { ret = psc_ops->fifoc_init(); if (ret) return ret; diff --git a/drivers/usb/core/inode.c b/drivers/usb/core/inode.c index 4a6366a42129..111a01a747fc 100644 --- a/drivers/usb/core/inode.c +++ b/drivers/usb/core/inode.c @@ -380,6 +380,7 @@ static int usbfs_rmdir(struct inode *dir, struct dentry *dentry) mutex_lock(&inode->i_mutex); dentry_unhash(dentry); if (usbfs_empty(dentry)) { + dont_mount(dentry); drop_nlink(dentry->d_inode); drop_nlink(dentry->d_inode); dput(dentry); diff --git a/drivers/video/amifb.c b/drivers/video/amifb.c index dca48df98444..e5d6b56d4447 100644 --- a/drivers/video/amifb.c +++ b/drivers/video/amifb.c @@ -50,8 +50,9 @@ #include <linux/fb.h> #include <linux/init.h> #include <linux/ioport.h> - +#include <linux/platform_device.h> #include <linux/uaccess.h> + #include <asm/system.h> #include <asm/irq.h> #include <asm/amigahw.h> @@ -1135,7 +1136,7 @@ static int amifb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg * Interface to the low level console driver */ -static void amifb_deinit(void); +static void amifb_deinit(struct platform_device *pdev); /* * Internal routines */ @@ -2246,7 +2247,7 @@ static inline void chipfree(void) * Initialisation */ -static int __init amifb_init(void) +static int __init amifb_probe(struct platform_device *pdev) { int tag, i, err = 0; u_long chipptr; @@ -2261,16 +2262,6 @@ static int __init amifb_init(void) } amifb_setup(option); #endif - if (!MACH_IS_AMIGA || !AMIGAHW_PRESENT(AMI_VIDEO)) - return -ENODEV; - - /* - * We request all registers starting from bplpt[0] - */ - if (!request_mem_region(CUSTOM_PHYSADDR+0xe0, 0x120, - "amifb [Denise/Lisa]")) - return -EBUSY; - custom.dmacon = DMAF_ALL | DMAF_MASTER; switch (amiga_chipset) { @@ -2377,6 +2368,7 @@ default_chipset: fb_info.fbops = &amifb_ops; fb_info.par = &currentpar; fb_info.flags = FBINFO_DEFAULT; + fb_info.device = &pdev->dev; if (!fb_find_mode(&fb_info.var, &fb_info, mode_option, ami_modedb, NUM_TOTAL_MODES, &ami_modedb[defmode], 4)) { @@ -2451,18 +2443,18 @@ default_chipset: return 0; amifb_error: - amifb_deinit(); + amifb_deinit(pdev); return err; } -static void amifb_deinit(void) +static void amifb_deinit(struct platform_device *pdev) { if (fb_info.cmap.len) fb_dealloc_cmap(&fb_info.cmap); + fb_dealloc_cmap(&fb_info.cmap); chipfree(); if (videomemory) iounmap((void*)videomemory); - release_mem_region(CUSTOM_PHYSADDR+0xe0, 0x120); custom.dmacon = DMAF_ALL | DMAF_MASTER; } @@ -3794,14 +3786,35 @@ static void ami_rebuild_copper(void) } } -static void __exit amifb_exit(void) +static int __exit amifb_remove(struct platform_device *pdev) { unregister_framebuffer(&fb_info); - amifb_deinit(); + amifb_deinit(pdev); amifb_video_off(); + return 0; +} + +static struct platform_driver amifb_driver = { + .remove = __exit_p(amifb_remove), + .driver = { + .name = "amiga-video", + .owner = THIS_MODULE, + }, +}; + +static int __init amifb_init(void) +{ + return platform_driver_probe(&amifb_driver, amifb_probe); } module_init(amifb_init); + +static void __exit 
amifb_exit(void) +{ + platform_driver_unregister(&amifb_driver); +} + module_exit(amifb_exit); MODULE_LICENSE("GPL"); +MODULE_ALIAS("platform:amiga-video"); diff --git a/drivers/video/cirrusfb.c b/drivers/video/cirrusfb.c index 8d8dfda2f868..6df7c54db0a3 100644 --- a/drivers/video/cirrusfb.c +++ b/drivers/video/cirrusfb.c @@ -299,6 +299,7 @@ static const struct zorro_device_id cirrusfb_zorro_table[] = { }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, cirrusfb_zorro_table); static const struct { zorro_id id2; diff --git a/drivers/video/fm2fb.c b/drivers/video/fm2fb.c index 6c91c61cdb63..1b0feb8e7244 100644 --- a/drivers/video/fm2fb.c +++ b/drivers/video/fm2fb.c @@ -219,6 +219,7 @@ static struct zorro_device_id fm2fb_devices[] __devinitdata = { { ZORRO_PROD_HELFRICH_RAINBOW_II }, { 0 } }; +MODULE_DEVICE_TABLE(zorro, fm2fb_devices); static struct zorro_driver fm2fb_driver = { .name = "fm2fb", diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig index 0bf5020d0d32..b87ba23442d2 100644 --- a/drivers/watchdog/Kconfig +++ b/drivers/watchdog/Kconfig @@ -175,7 +175,7 @@ config SA1100_WATCHDOG config MPCORE_WATCHDOG tristate "MPcore watchdog" - depends on ARM_MPCORE_PLATFORM && LOCAL_TIMERS + depends on HAVE_ARM_TWD help Watchdog timer embedded into the MPcore system. diff --git a/drivers/watchdog/mpcore_wdt.c b/drivers/watchdog/mpcore_wdt.c index 016c6a791cab..b8ec7aca3c8e 100644 --- a/drivers/watchdog/mpcore_wdt.c +++ b/drivers/watchdog/mpcore_wdt.c @@ -31,8 +31,9 @@ #include <linux/platform_device.h> #include <linux/uaccess.h> #include <linux/slab.h> +#include <linux/io.h> -#include <asm/hardware/arm_twd.h> +#include <asm/smp_twd.h> struct mpcore_wdt { unsigned long timer_alive; @@ -44,7 +45,7 @@ struct mpcore_wdt { }; static struct platform_device *mpcore_wdt_dev; -extern unsigned int mpcore_timer_rate; +static DEFINE_SPINLOCK(wdt_lock); #define TIMER_MARGIN 60 static int mpcore_margin = TIMER_MARGIN; @@ -94,13 +95,15 @@ static irqreturn_t mpcore_wdt_fire(int irq, void *arg) */ static void mpcore_wdt_keepalive(struct mpcore_wdt *wdt) { - unsigned int count; + unsigned long count; + spin_lock(&wdt_lock); /* Assume prescale is set to 256 */ - count = (mpcore_timer_rate / 256) * mpcore_margin; + count = __raw_readl(wdt->base + TWD_WDOG_COUNTER); + count = (0xFFFFFFFFU - count) * (HZ / 5); + count = (count / 256) * mpcore_margin; /* Reload the counter */ - spin_lock(&wdt_lock); writel(count + wdt->perturb, wdt->base + TWD_WDOG_LOAD); wdt->perturb = wdt->perturb ? 
0 : 1; spin_unlock(&wdt_lock); @@ -119,7 +122,6 @@ static void mpcore_wdt_start(struct mpcore_wdt *wdt) { dev_printk(KERN_INFO, wdt->dev, "enabling watchdog.\n"); - spin_lock(&wdt_lock); /* This loads the count register but does NOT start the count yet */ mpcore_wdt_keepalive(wdt); @@ -130,7 +132,6 @@ static void mpcore_wdt_start(struct mpcore_wdt *wdt) /* Enable watchdog - prescale=256, watchdog mode=1, enable=1 */ writel(0x0000FF09, wdt->base + TWD_WDOG_CONTROL); } - spin_unlock(&wdt_lock); } static int mpcore_wdt_set_heartbeat(int t) @@ -360,7 +361,7 @@ static int __devinit mpcore_wdt_probe(struct platform_device *dev) mpcore_wdt_miscdev.parent = &dev->dev; ret = misc_register(&mpcore_wdt_miscdev); if (ret) { - dev_printk(KERN_ERR, _dev, + dev_printk(KERN_ERR, wdt->dev, "cannot register miscdev on minor=%d (err=%d)\n", WATCHDOG_MINOR, ret); goto err_misc; @@ -369,13 +370,13 @@ static int __devinit mpcore_wdt_probe(struct platform_device *dev) ret = request_irq(wdt->irq, mpcore_wdt_fire, IRQF_DISABLED, "mpcore_wdt", wdt); if (ret) { - dev_printk(KERN_ERR, _dev, + dev_printk(KERN_ERR, wdt->dev, "cannot register IRQ%d for watchdog\n", wdt->irq); goto err_irq; } mpcore_wdt_stop(wdt); - platform_set_drvdata(&dev->dev, wdt); + platform_set_drvdata(dev, wdt); mpcore_wdt_dev = dev; return 0; diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c index 2ac4440e7b08..8943b8ccee1a 100644 --- a/drivers/xen/manage.c +++ b/drivers/xen/manage.c @@ -80,12 +80,6 @@ static void do_suspend(void) shutting_down = SHUTDOWN_SUSPEND; - err = stop_machine_create(); - if (err) { - printk(KERN_ERR "xen suspend: failed to setup stop_machine %d\n", err); - goto out; - } - #ifdef CONFIG_PREEMPT /* If the kernel is preemptible, we need to freeze all the processes to prevent them from being in the middle of a pagetable update @@ -93,7 +87,7 @@ static void do_suspend(void) err = freeze_processes(); if (err) { printk(KERN_ERR "xen suspend: freeze failed %d\n", err); - goto out_destroy_sm; + goto out; } #endif @@ -136,12 +130,8 @@ out_resume: out_thaw: #ifdef CONFIG_PREEMPT thaw_processes(); - -out_destroy_sm: -#endif - stop_machine_destroy(); - out: +#endif shutting_down = SHUTDOWN_INVALID; } #endif /* CONFIG_PM_SLEEP */ diff --git a/drivers/zorro/proc.c b/drivers/zorro/proc.c index d47c47fc048f..3c7046d79654 100644 --- a/drivers/zorro/proc.c +++ b/drivers/zorro/proc.c @@ -97,7 +97,7 @@ static void zorro_seq_stop(struct seq_file *m, void *v) static int zorro_seq_show(struct seq_file *m, void *v) { - u_int slot = *(loff_t *)v; + unsigned int slot = *(loff_t *)v; struct zorro_dev *z = &zorro_autocon[slot]; seq_printf(m, "%02x\t%08x\t%08lx\t%08lx\t%02x\n", slot, z->id, @@ -129,7 +129,7 @@ static const struct file_operations zorro_devices_proc_fops = { static struct proc_dir_entry *proc_bus_zorro_dir; -static int __init zorro_proc_attach_device(u_int slot) +static int __init zorro_proc_attach_device(unsigned int slot) { struct proc_dir_entry *entry; char name[4]; @@ -146,7 +146,7 @@ static int __init zorro_proc_attach_device(u_int slot) static int __init zorro_proc_init(void) { - u_int slot; + unsigned int slot; if (MACH_IS_AMIGA && AMIGAHW_PRESENT(ZORRO)) { proc_bus_zorro_dir = proc_mkdir("bus/zorro", NULL); diff --git a/drivers/zorro/zorro-driver.c b/drivers/zorro/zorro-driver.c index 53180a37cc9a..7ee2b6e71786 100644 --- a/drivers/zorro/zorro-driver.c +++ b/drivers/zorro/zorro-driver.c @@ -137,10 +137,34 @@ static int zorro_bus_match(struct device *dev, struct device_driver *drv) return 0; } +static int 
zorro_uevent(struct device *dev, struct kobj_uevent_env *env) +{ +#ifdef CONFIG_HOTPLUG + struct zorro_dev *z; + + if (!dev) + return -ENODEV; + + z = to_zorro_dev(dev); + if (!z) + return -ENODEV; + + if (add_uevent_var(env, "ZORRO_ID=%08X", z->id) || + add_uevent_var(env, "ZORRO_SLOT_NAME=%s", dev_name(dev)) || + add_uevent_var(env, "ZORRO_SLOT_ADDR=%04X", z->slotaddr) || + add_uevent_var(env, "MODALIAS=" ZORRO_DEVICE_MODALIAS_FMT, z->id)) + return -ENOMEM; + + return 0; +#else /* !CONFIG_HOTPLUG */ + return -ENODEV; +#endif /* !CONFIG_HOTPLUG */ +} struct bus_type zorro_bus_type = { .name = "zorro", .match = zorro_bus_match, + .uevent = zorro_uevent, .probe = zorro_device_probe, .remove = zorro_device_remove, }; diff --git a/drivers/zorro/zorro-sysfs.c b/drivers/zorro/zorro-sysfs.c index 1d2a772ea14c..eb924e0a64ce 100644 --- a/drivers/zorro/zorro-sysfs.c +++ b/drivers/zorro/zorro-sysfs.c @@ -77,6 +77,16 @@ static struct bin_attribute zorro_config_attr = { .read = zorro_read_config, }; +static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct zorro_dev *z = to_zorro_dev(dev); + + return sprintf(buf, ZORRO_DEVICE_MODALIAS_FMT "\n", z->id); +} + +static DEVICE_ATTR(modalias, S_IRUGO, modalias_show, NULL); + int zorro_create_sysfs_dev_files(struct zorro_dev *z) { struct device *dev = &z->dev; @@ -89,6 +99,7 @@ int zorro_create_sysfs_dev_files(struct zorro_dev *z) (error = device_create_file(dev, &dev_attr_slotaddr)) || (error = device_create_file(dev, &dev_attr_slotsize)) || (error = device_create_file(dev, &dev_attr_resource)) || + (error = device_create_file(dev, &dev_attr_modalias)) || (error = sysfs_create_bin_file(&dev->kobj, &zorro_config_attr))) return error; diff --git a/drivers/zorro/zorro.c b/drivers/zorro/zorro.c index d45fb34e2d23..6455f3a244c5 100644 --- a/drivers/zorro/zorro.c +++ b/drivers/zorro/zorro.c @@ -15,6 +15,8 @@ #include <linux/zorro.h> #include <linux/bitops.h> #include <linux/string.h> +#include <linux/platform_device.h> +#include <linux/slab.h> #include <asm/setup.h> #include <asm/amigahw.h> @@ -26,24 +28,17 @@ * Zorro Expansion Devices */ -u_int zorro_num_autocon = 0; +unsigned int zorro_num_autocon; struct zorro_dev zorro_autocon[ZORRO_NUM_AUTO]; /* - * Single Zorro bus + * Zorro bus */ -struct zorro_bus zorro_bus = {\ - .resources = { - /* Zorro II regions (on Zorro II/III) */ - { .name = "Zorro II exp", .start = 0x00e80000, .end = 0x00efffff }, - { .name = "Zorro II mem", .start = 0x00200000, .end = 0x009fffff }, - /* Zorro III regions (on Zorro III only) */ - { .name = "Zorro III exp", .start = 0xff000000, .end = 0xffffffff }, - { .name = "Zorro III cfg", .start = 0x40000000, .end = 0x7fffffff } - }, - .name = "Zorro bus" +struct zorro_bus { + struct list_head devices; /* list of devices on this bus */ + struct device dev; }; @@ -53,18 +48,19 @@ struct zorro_bus zorro_bus = {\ struct zorro_dev *zorro_find_device(zorro_id id, struct zorro_dev *from) { - struct zorro_dev *z; + struct zorro_dev *z; - if (!MACH_IS_AMIGA || !AMIGAHW_PRESENT(ZORRO)) - return NULL; + if (!zorro_num_autocon) + return NULL; - for (z = from ? from+1 : &zorro_autocon[0]; - z < zorro_autocon+zorro_num_autocon; - z++) - if (id == ZORRO_WILDCARD || id == z->id) - return z; - return NULL; + for (z = from ? 
from+1 : &zorro_autocon[0]; + z < zorro_autocon+zorro_num_autocon; + z++) + if (id == ZORRO_WILDCARD || id == z->id) + return z; + return NULL; } +EXPORT_SYMBOL(zorro_find_device); /* @@ -83,121 +79,138 @@ struct zorro_dev *zorro_find_device(zorro_id id, struct zorro_dev *from) */ DECLARE_BITMAP(zorro_unused_z2ram, 128); +EXPORT_SYMBOL(zorro_unused_z2ram); static void __init mark_region(unsigned long start, unsigned long end, int flag) { - if (flag) - start += Z2RAM_CHUNKMASK; - else - end += Z2RAM_CHUNKMASK; - start &= ~Z2RAM_CHUNKMASK; - end &= ~Z2RAM_CHUNKMASK; - - if (end <= Z2RAM_START || start >= Z2RAM_END) - return; - start = start < Z2RAM_START ? 0x00000000 : start-Z2RAM_START; - end = end > Z2RAM_END ? Z2RAM_SIZE : end-Z2RAM_START; - while (start < end) { - u32 chunk = start>>Z2RAM_CHUNKSHIFT; if (flag) - set_bit(chunk, zorro_unused_z2ram); + start += Z2RAM_CHUNKMASK; else - clear_bit(chunk, zorro_unused_z2ram); - start += Z2RAM_CHUNKSIZE; - } + end += Z2RAM_CHUNKMASK; + start &= ~Z2RAM_CHUNKMASK; + end &= ~Z2RAM_CHUNKMASK; + + if (end <= Z2RAM_START || start >= Z2RAM_END) + return; + start = start < Z2RAM_START ? 0x00000000 : start-Z2RAM_START; + end = end > Z2RAM_END ? Z2RAM_SIZE : end-Z2RAM_START; + while (start < end) { + u32 chunk = start>>Z2RAM_CHUNKSHIFT; + if (flag) + set_bit(chunk, zorro_unused_z2ram); + else + clear_bit(chunk, zorro_unused_z2ram); + start += Z2RAM_CHUNKSIZE; + } } -static struct resource __init *zorro_find_parent_resource(struct zorro_dev *z) +static struct resource __init *zorro_find_parent_resource( + struct platform_device *bridge, struct zorro_dev *z) { - int i; + int i; - for (i = 0; i < zorro_bus.num_resources; i++) - if (zorro_resource_start(z) >= zorro_bus.resources[i].start && - zorro_resource_end(z) <= zorro_bus.resources[i].end) - return &zorro_bus.resources[i]; - return &iomem_resource; + for (i = 0; i < bridge->num_resources; i++) { + struct resource *r = &bridge->resource[i]; + if (zorro_resource_start(z) >= r->start && + zorro_resource_end(z) <= r->end) + return r; + } + return &iomem_resource; } - /* - * Initialization - */ -static int __init zorro_init(void) +static int __init amiga_zorro_probe(struct platform_device *pdev) { - struct zorro_dev *z; - unsigned int i; - int error; - - if (!MACH_IS_AMIGA || !AMIGAHW_PRESENT(ZORRO)) - return 0; - - pr_info("Zorro: Probing AutoConfig expansion devices: %d device%s\n", - zorro_num_autocon, zorro_num_autocon == 1 ? "" : "s"); - - /* Initialize the Zorro bus */ - INIT_LIST_HEAD(&zorro_bus.devices); - dev_set_name(&zorro_bus.dev, "zorro"); - error = device_register(&zorro_bus.dev); - if (error) { - pr_err("Zorro: Error registering zorro_bus\n"); - return error; - } - - /* Request the resources */ - zorro_bus.num_resources = AMIGAHW_PRESENT(ZORRO3) ? 
4 : 2; - for (i = 0; i < zorro_bus.num_resources; i++) - request_resource(&iomem_resource, &zorro_bus.resources[i]); - - /* Register all devices */ - for (i = 0; i < zorro_num_autocon; i++) { - z = &zorro_autocon[i]; - z->id = (z->rom.er_Manufacturer<<16) | (z->rom.er_Product<<8); - if (z->id == ZORRO_PROD_GVP_EPC_BASE) { - /* GVP quirk */ - unsigned long magic = zorro_resource_start(z)+0x8000; - z->id |= *(u16 *)ZTWO_VADDR(magic) & GVP_PRODMASK; - } - sprintf(z->name, "Zorro device %08x", z->id); - zorro_name_device(z); - z->resource.name = z->name; - if (request_resource(zorro_find_parent_resource(z), &z->resource)) - pr_err("Zorro: Address space collision on device %s %pR\n", - z->name, &z->resource); - dev_set_name(&z->dev, "%02x", i); - z->dev.parent = &zorro_bus.dev; - z->dev.bus = &zorro_bus_type; - error = device_register(&z->dev); + struct zorro_bus *bus; + struct zorro_dev *z; + struct resource *r; + unsigned int i; + int error; + + /* Initialize the Zorro bus */ + bus = kzalloc(sizeof(*bus), GFP_KERNEL); + if (!bus) + return -ENOMEM; + + INIT_LIST_HEAD(&bus->devices); + bus->dev.parent = &pdev->dev; + dev_set_name(&bus->dev, "zorro"); + error = device_register(&bus->dev); if (error) { - pr_err("Zorro: Error registering device %s\n", z->name); - continue; + pr_err("Zorro: Error registering zorro_bus\n"); + kfree(bus); + return error; } - error = zorro_create_sysfs_dev_files(z); - if (error) - dev_err(&z->dev, "Error creating sysfs files\n"); - } - - /* Mark all available Zorro II memory */ - zorro_for_each_dev(z) { - if (z->rom.er_Type & ERTF_MEMLIST) - mark_region(zorro_resource_start(z), zorro_resource_end(z)+1, 1); - } - - /* Unmark all used Zorro II memory */ - for (i = 0; i < m68k_num_memory; i++) - if (m68k_memory[i].addr < 16*1024*1024) - mark_region(m68k_memory[i].addr, - m68k_memory[i].addr+m68k_memory[i].size, 0); - - return 0; + platform_set_drvdata(pdev, bus); + + /* Register all devices */ + pr_info("Zorro: Probing AutoConfig expansion devices: %u device%s\n", + zorro_num_autocon, zorro_num_autocon == 1 ? 
"" : "s"); + + for (i = 0; i < zorro_num_autocon; i++) { + z = &zorro_autocon[i]; + z->id = (z->rom.er_Manufacturer<<16) | (z->rom.er_Product<<8); + if (z->id == ZORRO_PROD_GVP_EPC_BASE) { + /* GVP quirk */ + unsigned long magic = zorro_resource_start(z)+0x8000; + z->id |= *(u16 *)ZTWO_VADDR(magic) & GVP_PRODMASK; + } + sprintf(z->name, "Zorro device %08x", z->id); + zorro_name_device(z); + z->resource.name = z->name; + r = zorro_find_parent_resource(pdev, z); + error = request_resource(r, &z->resource); + if (error) + dev_err(&bus->dev, + "Address space collision on device %s %pR\n", + z->name, &z->resource); + dev_set_name(&z->dev, "%02x", i); + z->dev.parent = &bus->dev; + z->dev.bus = &zorro_bus_type; + error = device_register(&z->dev); + if (error) { + dev_err(&bus->dev, "Error registering device %s\n", + z->name); + continue; + } + error = zorro_create_sysfs_dev_files(z); + if (error) + dev_err(&z->dev, "Error creating sysfs files\n"); + } + + /* Mark all available Zorro II memory */ + zorro_for_each_dev(z) { + if (z->rom.er_Type & ERTF_MEMLIST) + mark_region(zorro_resource_start(z), + zorro_resource_end(z)+1, 1); + } + + /* Unmark all used Zorro II memory */ + for (i = 0; i < m68k_num_memory; i++) + if (m68k_memory[i].addr < 16*1024*1024) + mark_region(m68k_memory[i].addr, + m68k_memory[i].addr+m68k_memory[i].size, + 0); + + return 0; } -subsys_initcall(zorro_init); +static struct platform_driver amiga_zorro_driver = { + .driver = { + .name = "amiga-zorro", + .owner = THIS_MODULE, + }, +}; -EXPORT_SYMBOL(zorro_find_device); -EXPORT_SYMBOL(zorro_unused_z2ram); +static int __init amiga_zorro_init(void) +{ + return platform_driver_probe(&amiga_zorro_driver, amiga_zorro_probe); +} + +module_init(amiga_zorro_init); MODULE_LICENSE("GPL"); diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index e84ef60ffe35..97a97839a867 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1481,12 +1481,17 @@ static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd, ret = -EBADF; goto out_drop_write; } + src = src_file->f_dentry->d_inode; ret = -EINVAL; if (src == inode) goto out_fput; + /* the src must be open for reading */ + if (!(src_file->f_mode & FMODE_READ)) + goto out_fput; + ret = -EISDIR; if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode)) goto out_fput; diff --git a/fs/cachefiles/security.c b/fs/cachefiles/security.c index b5808cdb2232..039b5011d83b 100644 --- a/fs/cachefiles/security.c +++ b/fs/cachefiles/security.c @@ -77,6 +77,8 @@ static int cachefiles_check_cache_dir(struct cachefiles_cache *cache, /* * check the security details of the on-disk cache * - must be called with security override in force + * - must return with a security override in force - even in the case of an + * error */ int cachefiles_determine_cache_security(struct cachefiles_cache *cache, struct dentry *root, @@ -99,6 +101,8 @@ int cachefiles_determine_cache_security(struct cachefiles_cache *cache, * which create files */ ret = set_create_files_as(new, root->d_inode); if (ret < 0) { + abort_creds(new); + cachefiles_begin_secure(cache, _saved_cred); _leave(" = %d [cfa]", ret); return ret; } diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 4b42c2bb603f..a9005d862ed4 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -504,7 +504,6 @@ static void writepages_finish(struct ceph_osd_request *req, int i; struct ceph_snap_context *snapc = req->r_snapc; struct address_space *mapping = inode->i_mapping; - struct writeback_control *wbc = req->r_wbc; __s32 rc = -EIO; u64 bytes = 0; struct ceph_client 
*client = ceph_inode_to_client(inode); @@ -546,10 +545,6 @@ static void writepages_finish(struct ceph_osd_request *req, clear_bdi_congested(&client->backing_dev_info, BLK_RW_ASYNC); - if (i >= wrote) { - dout("inode %p skipping page %p\n", inode, page); - wbc->pages_skipped++; - } ceph_put_snap_context((void *)page->private); page->private = 0; ClearPagePrivate(page); @@ -799,7 +794,6 @@ get_more_pages: alloc_page_vec(client, req); req->r_callback = writepages_finish; req->r_inode = inode; - req->r_wbc = wbc; } /* note position of first page in pvec */ diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index 0c1681806867..d9400534b279 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -858,6 +858,8 @@ static int __ceph_is_any_caps(struct ceph_inode_info *ci) } /* + * Remove a cap. Take steps to deal with a racing iterate_session_caps. + * * caller should hold i_lock. * caller will not hold session s_mutex if called from destroy_inode. */ @@ -866,15 +868,10 @@ void __ceph_remove_cap(struct ceph_cap *cap) struct ceph_mds_session *session = cap->session; struct ceph_inode_info *ci = cap->ci; struct ceph_mds_client *mdsc = &ceph_client(ci->vfs_inode.i_sb)->mdsc; + int removed = 0; dout("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode); - /* remove from inode list */ - rb_erase(&cap->ci_node, &ci->i_caps); - cap->ci = NULL; - if (ci->i_auth_cap == cap) - ci->i_auth_cap = NULL; - /* remove from session list */ spin_lock(&session->s_cap_lock); if (session->s_cap_iterator == cap) { @@ -885,10 +882,18 @@ void __ceph_remove_cap(struct ceph_cap *cap) list_del_init(&cap->session_caps); session->s_nr_caps--; cap->session = NULL; + removed = 1; } + /* protect backpointer with s_cap_lock: see iterate_session_caps */ + cap->ci = NULL; spin_unlock(&session->s_cap_lock); - if (cap->session == NULL) + /* remove from inode list */ + rb_erase(&cap->ci_node, &ci->i_caps); + if (ci->i_auth_cap == cap) + ci->i_auth_cap = NULL; + + if (removed) ceph_put_cap(cap); if (!__ceph_is_any_caps(ci) && ci->i_snap_realm) { diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index 261f3e6c0bcf..85b4d2ffdeba 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -733,6 +733,10 @@ no_change: __ceph_get_fmode(ci, cap_fmode); spin_unlock(&inode->i_lock); } + } else if (cap_fmode >= 0) { + pr_warning("mds issued no caps on %llx.%llx\n", + ceph_vinop(inode)); + __ceph_get_fmode(ci, cap_fmode); } /* update delegation info? */ diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 60a9a4ae47be..24561a557e01 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -736,9 +736,10 @@ static void cleanup_cap_releases(struct ceph_mds_session *session) } /* - * Helper to safely iterate over all caps associated with a session. + * Helper to safely iterate over all caps associated with a session, with + * special care taken to handle a racing __ceph_remove_cap(). * - * caller must hold session s_mutex + * Caller must hold session s_mutex. 
*/ static int iterate_session_caps(struct ceph_mds_session *session, int (*cb)(struct inode *, struct ceph_cap *, @@ -2136,7 +2137,7 @@ static void send_mds_reconnect(struct ceph_mds_client *mdsc, int mds) struct ceph_mds_session *session = NULL; struct ceph_msg *reply; struct rb_node *p; - int err; + int err = -ENOMEM; struct ceph_pagelist *pagelist; pr_info("reconnect to recovering mds%d\n", mds); @@ -2185,7 +2186,7 @@ static void send_mds_reconnect(struct ceph_mds_client *mdsc, int mds) goto fail; err = iterate_session_caps(session, encode_caps_cb, pagelist); if (err < 0) - goto out; + goto fail; /* * snaprealms. we provide mds with the ino, seq (version), and @@ -2213,28 +2214,31 @@ send: reply->nr_pages = calc_pages_for(0, pagelist->length); ceph_con_send(&session->s_con, reply); - if (session) { - session->s_state = CEPH_MDS_SESSION_OPEN; - __wake_requests(mdsc, &session->s_waiting); - } + session->s_state = CEPH_MDS_SESSION_OPEN; + mutex_unlock(&session->s_mutex); + + mutex_lock(&mdsc->mutex); + __wake_requests(mdsc, &session->s_waiting); + mutex_unlock(&mdsc->mutex); + + ceph_put_mds_session(session); -out: up_read(&mdsc->snap_rwsem); - if (session) { - mutex_unlock(&session->s_mutex); - ceph_put_mds_session(session); - } mutex_lock(&mdsc->mutex); return; fail: ceph_msg_put(reply); + up_read(&mdsc->snap_rwsem); + mutex_unlock(&session->s_mutex); + ceph_put_mds_session(session); fail_nomsg: ceph_pagelist_release(pagelist); kfree(pagelist); fail_nopagelist: - pr_err("ENOMEM preparing reconnect for mds%d\n", mds); - goto out; + pr_err("error %d preparing reconnect for mds%d\n", err, mds); + mutex_lock(&mdsc->mutex); + return; } diff --git a/fs/ceph/messenger.c b/fs/ceph/messenger.c index 509f57d9ccb3..cd4fadb6491a 100644 --- a/fs/ceph/messenger.c +++ b/fs/ceph/messenger.c @@ -492,7 +492,14 @@ static void prepare_write_message(struct ceph_connection *con) list_move_tail(&m->list_head, &con->out_sent); } - m->hdr.seq = cpu_to_le64(++con->out_seq); + /* + * only assign outgoing seq # if we haven't sent this message + * yet. if it is requeued, resend with it's original seq. 
+ */ + if (m->needs_out_seq) { + m->hdr.seq = cpu_to_le64(++con->out_seq); + m->needs_out_seq = false; + } dout("prepare_write_message %p seq %lld type %d len %d+%d+%d %d pgs\n", m, con->out_seq, le16_to_cpu(m->hdr.type), @@ -1986,6 +1993,8 @@ void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg) BUG_ON(msg->front.iov_len != le32_to_cpu(msg->hdr.front_len)); + msg->needs_out_seq = true; + /* queue */ mutex_lock(&con->mutex); BUG_ON(!list_empty(&msg->list_head)); @@ -2085,15 +2094,19 @@ struct ceph_msg *ceph_msg_new(int type, int front_len, kref_init(&m->kref); INIT_LIST_HEAD(&m->list_head); + m->hdr.tid = 0; m->hdr.type = cpu_to_le16(type); + m->hdr.priority = cpu_to_le16(CEPH_MSG_PRIO_DEFAULT); + m->hdr.version = 0; m->hdr.front_len = cpu_to_le32(front_len); m->hdr.middle_len = 0; m->hdr.data_len = cpu_to_le32(page_len); m->hdr.data_off = cpu_to_le16(page_off); - m->hdr.priority = cpu_to_le16(CEPH_MSG_PRIO_DEFAULT); + m->hdr.reserved = 0; m->footer.front_crc = 0; m->footer.middle_crc = 0; m->footer.data_crc = 0; + m->footer.flags = 0; m->front_max = front_len; m->front_is_vmalloc = false; m->more_to_follow = false; diff --git a/fs/ceph/messenger.h b/fs/ceph/messenger.h index a343dae73cdc..a5caf91cc971 100644 --- a/fs/ceph/messenger.h +++ b/fs/ceph/messenger.h @@ -86,6 +86,7 @@ struct ceph_msg { struct kref kref; bool front_is_vmalloc; bool more_to_follow; + bool needs_out_seq; int front_max; struct ceph_msgpool *pool; diff --git a/fs/ceph/osd_client.c b/fs/ceph/osd_client.c index c7b4dedaace6..3514f71ff85f 100644 --- a/fs/ceph/osd_client.c +++ b/fs/ceph/osd_client.c @@ -565,7 +565,8 @@ static int __map_osds(struct ceph_osd_client *osdc, { struct ceph_osd_request_head *reqhead = req->r_request->front.iov_base; struct ceph_pg pgid; - int o = -1; + int acting[CEPH_PG_MAX_SIZE]; + int o = -1, num = 0; int err; dout("map_osds %p tid %lld\n", req, req->r_tid); @@ -576,10 +577,16 @@ static int __map_osds(struct ceph_osd_client *osdc, pgid = reqhead->layout.ol_pgid; req->r_pgid = pgid; - o = ceph_calc_pg_primary(osdc->osdmap, pgid); + err = ceph_calc_pg_acting(osdc->osdmap, pgid, acting); + if (err > 0) { + o = acting[0]; + num = err; + } if ((req->r_osd && req->r_osd->o_osd == o && - req->r_sent >= req->r_osd->o_incarnation) || + req->r_sent >= req->r_osd->o_incarnation && + req->r_num_pg_osds == num && + memcmp(req->r_pg_osds, acting, sizeof(acting[0])*num) == 0) || (req->r_osd == NULL && o == -1)) return 0; /* no change */ @@ -587,6 +594,10 @@ static int __map_osds(struct ceph_osd_client *osdc, req->r_tid, le32_to_cpu(pgid.pool), le16_to_cpu(pgid.ps), o, req->r_osd ? 
req->r_osd->o_osd : -1); + /* record full pg acting set */ + memcpy(req->r_pg_osds, acting, sizeof(acting[0]) * num); + req->r_num_pg_osds = num; + if (req->r_osd) { __cancel_request(req); list_del_init(&req->r_osd_item); @@ -612,7 +623,7 @@ static int __map_osds(struct ceph_osd_client *osdc, __remove_osd_from_lru(req->r_osd); list_add(&req->r_osd_item, &req->r_osd->o_requests); } - err = 1; /* osd changed */ + err = 1; /* osd or pg changed */ out: return err; @@ -779,16 +790,18 @@ static void handle_reply(struct ceph_osd_client *osdc, struct ceph_msg *msg, struct ceph_osd_request *req; u64 tid; int numops, object_len, flags; + s32 result; tid = le64_to_cpu(msg->hdr.tid); if (msg->front.iov_len < sizeof(*rhead)) goto bad; numops = le32_to_cpu(rhead->num_ops); object_len = le32_to_cpu(rhead->object_len); + result = le32_to_cpu(rhead->result); if (msg->front.iov_len != sizeof(*rhead) + object_len + numops * sizeof(struct ceph_osd_op)) goto bad; - dout("handle_reply %p tid %llu\n", msg, tid); + dout("handle_reply %p tid %llu result %d\n", msg, tid, (int)result); /* lookup */ mutex_lock(&osdc->request_mutex); @@ -834,7 +847,8 @@ static void handle_reply(struct ceph_osd_client *osdc, struct ceph_msg *msg, dout("handle_reply tid %llu flags %d\n", tid, flags); /* either this is a read, or we got the safe response */ - if ((flags & CEPH_OSD_FLAG_ONDISK) || + if (result < 0 || + (flags & CEPH_OSD_FLAG_ONDISK) || ((flags & CEPH_OSD_FLAG_WRITE) == 0)) __unregister_request(osdc, req); diff --git a/fs/ceph/osd_client.h b/fs/ceph/osd_client.h index b0759911e7c3..ce776989ef6a 100644 --- a/fs/ceph/osd_client.h +++ b/fs/ceph/osd_client.h @@ -48,6 +48,8 @@ struct ceph_osd_request { struct list_head r_osd_item; struct ceph_osd *r_osd; struct ceph_pg r_pgid; + int r_pg_osds[CEPH_PG_MAX_SIZE]; + int r_num_pg_osds; struct ceph_connection *r_con_filling_msg; @@ -66,7 +68,6 @@ struct ceph_osd_request { struct list_head r_unsafe_item; struct inode *r_inode; /* for use by callbacks */ - struct writeback_control *r_wbc; /* ditto */ char r_oid[40]; /* object name */ int r_oid_len; diff --git a/fs/ceph/osdmap.c b/fs/ceph/osdmap.c index 2e2c15eed82a..cfdd8f4388b7 100644 --- a/fs/ceph/osdmap.c +++ b/fs/ceph/osdmap.c @@ -1041,12 +1041,33 @@ static int *calc_pg_raw(struct ceph_osdmap *osdmap, struct ceph_pg pgid, } /* + * Return acting set for given pgid. + */ +int ceph_calc_pg_acting(struct ceph_osdmap *osdmap, struct ceph_pg pgid, + int *acting) +{ + int rawosds[CEPH_PG_MAX_SIZE], *osds; + int i, o, num = CEPH_PG_MAX_SIZE; + + osds = calc_pg_raw(osdmap, pgid, rawosds, &num); + if (!osds) + return -1; + + /* primary is first up osd */ + o = 0; + for (i = 0; i < num; i++) + if (ceph_osd_is_up(osdmap, osds[i])) + acting[o++] = osds[i]; + return o; +} + +/* * Return primary osd for given pgid, or -1 if none. 
*/ int ceph_calc_pg_primary(struct ceph_osdmap *osdmap, struct ceph_pg pgid) { - int rawosds[10], *osds; - int i, num = ARRAY_SIZE(rawosds); + int rawosds[CEPH_PG_MAX_SIZE], *osds; + int i, num = CEPH_PG_MAX_SIZE; osds = calc_pg_raw(osdmap, pgid, rawosds, &num); if (!osds) @@ -1054,9 +1075,7 @@ int ceph_calc_pg_primary(struct ceph_osdmap *osdmap, struct ceph_pg pgid) /* primary is first up osd */ for (i = 0; i < num; i++) - if (ceph_osd_is_up(osdmap, osds[i])) { + if (ceph_osd_is_up(osdmap, osds[i])) return osds[i]; - break; - } return -1; } diff --git a/fs/ceph/osdmap.h b/fs/ceph/osdmap.h index 8bc9f1e4f562..970b547e510d 100644 --- a/fs/ceph/osdmap.h +++ b/fs/ceph/osdmap.h @@ -120,6 +120,8 @@ extern int ceph_calc_object_layout(struct ceph_object_layout *ol, const char *oid, struct ceph_file_layout *fl, struct ceph_osdmap *osdmap); +extern int ceph_calc_pg_acting(struct ceph_osdmap *osdmap, struct ceph_pg pgid, + int *acting); extern int ceph_calc_pg_primary(struct ceph_osdmap *osdmap, struct ceph_pg pgid); diff --git a/fs/ceph/rados.h b/fs/ceph/rados.h index a1fc1d017b58..fd56451a871f 100644 --- a/fs/ceph/rados.h +++ b/fs/ceph/rados.h @@ -58,6 +58,7 @@ struct ceph_timespec { #define CEPH_PG_LAYOUT_LINEAR 2 #define CEPH_PG_LAYOUT_HYBRID 3 +#define CEPH_PG_MAX_SIZE 16 /* max # osds in a single pg */ /* * placement group. diff --git a/fs/ceph/super.c b/fs/ceph/super.c index f888cf487b7c..110857ba9269 100644 --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -47,10 +47,20 @@ const char *ceph_file_part(const char *s, int len) */ static void ceph_put_super(struct super_block *s) { - struct ceph_client *cl = ceph_client(s); + struct ceph_client *client = ceph_sb_to_client(s); dout("put_super\n"); - ceph_mdsc_close_sessions(&cl->mdsc); + ceph_mdsc_close_sessions(&client->mdsc); + + /* + * ensure we release the bdi before put_anon_super releases + * the device name. + */ + if (s->s_bdi == &client->backing_dev_info) { + bdi_unregister(&client->backing_dev_info); + s->s_bdi = NULL; + } + return; } @@ -636,6 +646,8 @@ static void ceph_destroy_client(struct ceph_client *client) destroy_workqueue(client->pg_inv_wq); destroy_workqueue(client->trunc_wq); + bdi_destroy(&client->backing_dev_info); + if (client->msgr) ceph_messenger_destroy(client->msgr); mempool_destroy(client->wb_pagevec_pool); @@ -876,14 +888,14 @@ static int ceph_register_bdi(struct super_block *sb, struct ceph_client *client) { int err; - sb->s_bdi = &client->backing_dev_info; - /* set ra_pages based on rsize mount option? 
*/ if (client->mount_args->rsize >= PAGE_CACHE_SIZE) client->backing_dev_info.ra_pages = (client->mount_args->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_SHIFT; err = bdi_register_dev(&client->backing_dev_info, sb->s_dev); + if (!err) + sb->s_bdi = &client->backing_dev_info; return err; } @@ -957,9 +969,6 @@ static void ceph_kill_sb(struct super_block *s) dout("kill_sb %p\n", s); ceph_mdsc_pre_umount(&client->mdsc); kill_anon_super(s); /* will call put_super after sb is r/o */ - if (s->s_bdi == &client->backing_dev_info) - bdi_unregister(&client->backing_dev_info); - bdi_destroy(&client->backing_dev_info); ceph_destroy_client(client); } diff --git a/fs/cifs/asn1.c b/fs/cifs/asn1.c index a20bea598933..cfd1ce34e0bc 100644 --- a/fs/cifs/asn1.c +++ b/fs/cifs/asn1.c @@ -492,17 +492,13 @@ compare_oid(unsigned long *oid1, unsigned int oid1len, int decode_negTokenInit(unsigned char *security_blob, int length, - enum securityEnum *secType) + struct TCP_Server_Info *server) { struct asn1_ctx ctx; unsigned char *end; unsigned char *sequence_end; unsigned long *oid = NULL; unsigned int cls, con, tag, oidlen, rc; - bool use_ntlmssp = false; - bool use_kerberos = false; - bool use_kerberosu2u = false; - bool use_mskerberos = false; /* cifs_dump_mem(" Received SecBlob ", security_blob, length); */ @@ -510,11 +506,11 @@ decode_negTokenInit(unsigned char *security_blob, int length, /* GSSAPI header */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding negTokenInit header")); + cFYI(1, "Error decoding negTokenInit header"); return 0; } else if ((cls != ASN1_APL) || (con != ASN1_CON) || (tag != ASN1_EOC)) { - cFYI(1, ("cls = %d con = %d tag = %d", cls, con, tag)); + cFYI(1, "cls = %d con = %d tag = %d", cls, con, tag); return 0; } @@ -535,56 +531,52 @@ decode_negTokenInit(unsigned char *security_blob, int length, /* SPNEGO OID not present or garbled -- bail out */ if (!rc) { - cFYI(1, ("Error decoding negTokenInit header")); + cFYI(1, "Error decoding negTokenInit header"); return 0; } /* SPNEGO */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding negTokenInit")); + cFYI(1, "Error decoding negTokenInit"); return 0; } else if ((cls != ASN1_CTX) || (con != ASN1_CON) || (tag != ASN1_EOC)) { - cFYI(1, - ("cls = %d con = %d tag = %d end = %p (%d) exit 0", - cls, con, tag, end, *end)); + cFYI(1, "cls = %d con = %d tag = %d end = %p (%d) exit 0", + cls, con, tag, end, *end); return 0; } /* negTokenInit */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding negTokenInit")); + cFYI(1, "Error decoding negTokenInit"); return 0; } else if ((cls != ASN1_UNI) || (con != ASN1_CON) || (tag != ASN1_SEQ)) { - cFYI(1, - ("cls = %d con = %d tag = %d end = %p (%d) exit 1", - cls, con, tag, end, *end)); + cFYI(1, "cls = %d con = %d tag = %d end = %p (%d) exit 1", + cls, con, tag, end, *end); return 0; } /* sequence */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding 2nd part of negTokenInit")); + cFYI(1, "Error decoding 2nd part of negTokenInit"); return 0; } else if ((cls != ASN1_CTX) || (con != ASN1_CON) || (tag != ASN1_EOC)) { - cFYI(1, - ("cls = %d con = %d tag = %d end = %p (%d) exit 0", - cls, con, tag, end, *end)); + cFYI(1, "cls = %d con = %d tag = %d end = %p (%d) exit 0", + cls, con, tag, end, *end); return 0; } /* sequence of */ if (asn1_header_decode (&ctx, &sequence_end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding 2nd part of negTokenInit")); + cFYI(1, "Error decoding 
2nd part of negTokenInit"); return 0; } else if ((cls != ASN1_UNI) || (con != ASN1_CON) || (tag != ASN1_SEQ)) { - cFYI(1, - ("cls = %d con = %d tag = %d end = %p (%d) exit 1", - cls, con, tag, end, *end)); + cFYI(1, "cls = %d con = %d tag = %d end = %p (%d) exit 1", + cls, con, tag, end, *end); return 0; } @@ -592,37 +584,33 @@ decode_negTokenInit(unsigned char *security_blob, int length, while (!asn1_eoc_decode(&ctx, sequence_end)) { rc = asn1_header_decode(&ctx, &end, &cls, &con, &tag); if (!rc) { - cFYI(1, - ("Error decoding negTokenInit hdr exit2")); + cFYI(1, "Error decoding negTokenInit hdr exit2"); return 0; } if ((tag == ASN1_OJI) && (con == ASN1_PRI)) { if (asn1_oid_decode(&ctx, end, &oid, &oidlen)) { - cFYI(1, ("OID len = %d oid = 0x%lx 0x%lx " - "0x%lx 0x%lx", oidlen, *oid, - *(oid + 1), *(oid + 2), *(oid + 3))); + cFYI(1, "OID len = %d oid = 0x%lx 0x%lx " + "0x%lx 0x%lx", oidlen, *oid, + *(oid + 1), *(oid + 2), *(oid + 3)); if (compare_oid(oid, oidlen, MSKRB5_OID, - MSKRB5_OID_LEN) && - !use_mskerberos) - use_mskerberos = true; + MSKRB5_OID_LEN)) + server->sec_mskerberos = true; else if (compare_oid(oid, oidlen, KRB5U2U_OID, - KRB5U2U_OID_LEN) && - !use_kerberosu2u) - use_kerberosu2u = true; + KRB5U2U_OID_LEN)) + server->sec_kerberosu2u = true; else if (compare_oid(oid, oidlen, KRB5_OID, - KRB5_OID_LEN) && - !use_kerberos) - use_kerberos = true; + KRB5_OID_LEN)) + server->sec_kerberos = true; else if (compare_oid(oid, oidlen, NTLMSSP_OID, NTLMSSP_OID_LEN)) - use_ntlmssp = true; + server->sec_ntlmssp = true; kfree(oid); } } else { - cFYI(1, ("Should be an oid what is going on?")); + cFYI(1, "Should be an oid what is going on?"); } } @@ -632,54 +620,47 @@ decode_negTokenInit(unsigned char *security_blob, int length, no mechListMic (e.g. NTLMSSP instead of KRB5) */ if (ctx.error == ASN1_ERR_DEC_EMPTY) goto decode_negtoken_exit; - cFYI(1, ("Error decoding last part negTokenInit exit3")); + cFYI(1, "Error decoding last part negTokenInit exit3"); return 0; } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) { /* tag = 3 indicating mechListMIC */ - cFYI(1, ("Exit 4 cls = %d con = %d tag = %d end = %p (%d)", - cls, con, tag, end, *end)); + cFYI(1, "Exit 4 cls = %d con = %d tag = %d end = %p (%d)", + cls, con, tag, end, *end); return 0; } /* sequence */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding last part negTokenInit exit5")); + cFYI(1, "Error decoding last part negTokenInit exit5"); return 0; } else if ((cls != ASN1_UNI) || (con != ASN1_CON) || (tag != ASN1_SEQ)) { - cFYI(1, ("cls = %d con = %d tag = %d end = %p (%d)", - cls, con, tag, end, *end)); + cFYI(1, "cls = %d con = %d tag = %d end = %p (%d)", + cls, con, tag, end, *end); } /* sequence of */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding last part negTokenInit exit 7")); + cFYI(1, "Error decoding last part negTokenInit exit 7"); return 0; } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) { - cFYI(1, ("Exit 8 cls = %d con = %d tag = %d end = %p (%d)", - cls, con, tag, end, *end)); + cFYI(1, "Exit 8 cls = %d con = %d tag = %d end = %p (%d)", + cls, con, tag, end, *end); return 0; } /* general string */ if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) { - cFYI(1, ("Error decoding last part negTokenInit exit9")); + cFYI(1, "Error decoding last part negTokenInit exit9"); return 0; } else if ((cls != ASN1_UNI) || (con != ASN1_PRI) || (tag != ASN1_GENSTR)) { - cFYI(1, ("Exit10 cls = %d con = %d tag = %d end = %p (%d)", - cls, con, tag, 
end, *end)); + cFYI(1, "Exit10 cls = %d con = %d tag = %d end = %p (%d)", + cls, con, tag, end, *end); return 0; } - cFYI(1, ("Need to call asn1_octets_decode() function for %s", - ctx.pointer)); /* is this UTF-8 or ASCII? */ + cFYI(1, "Need to call asn1_octets_decode() function for %s", + ctx.pointer); /* is this UTF-8 or ASCII? */ decode_negtoken_exit: - if (use_kerberos) - *secType = Kerberos; - else if (use_mskerberos) - *secType = MSKerberos; - else if (use_ntlmssp) - *secType = RawNTLMSSP; - return 1; } diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c index 42cec2a7c0cf..4fce6e61b34e 100644 --- a/fs/cifs/cifs_debug.c +++ b/fs/cifs/cifs_debug.c @@ -60,10 +60,10 @@ cifs_dump_mem(char *label, void *data, int length) #ifdef CONFIG_CIFS_DEBUG2 void cifs_dump_detail(struct smb_hdr *smb) { - cERROR(1, ("Cmd: %d Err: 0x%x Flags: 0x%x Flgs2: 0x%x Mid: %d Pid: %d", + cERROR(1, "Cmd: %d Err: 0x%x Flags: 0x%x Flgs2: 0x%x Mid: %d Pid: %d", smb->Command, smb->Status.CifsError, - smb->Flags, smb->Flags2, smb->Mid, smb->Pid)); - cERROR(1, ("smb buf %p len %d", smb, smbCalcSize_LE(smb))); + smb->Flags, smb->Flags2, smb->Mid, smb->Pid); + cERROR(1, "smb buf %p len %d", smb, smbCalcSize_LE(smb)); } @@ -75,25 +75,25 @@ void cifs_dump_mids(struct TCP_Server_Info *server) if (server == NULL) return; - cERROR(1, ("Dump pending requests:")); + cERROR(1, "Dump pending requests:"); spin_lock(&GlobalMid_Lock); list_for_each(tmp, &server->pending_mid_q) { mid_entry = list_entry(tmp, struct mid_q_entry, qhead); - cERROR(1, ("State: %d Cmd: %d Pid: %d Tsk: %p Mid %d", + cERROR(1, "State: %d Cmd: %d Pid: %d Tsk: %p Mid %d", mid_entry->midState, (int)mid_entry->command, mid_entry->pid, mid_entry->tsk, - mid_entry->mid)); + mid_entry->mid); #ifdef CONFIG_CIFS_STATS2 - cERROR(1, ("IsLarge: %d buf: %p time rcv: %ld now: %ld", + cERROR(1, "IsLarge: %d buf: %p time rcv: %ld now: %ld", mid_entry->largeBuf, mid_entry->resp_buf, mid_entry->when_received, - jiffies)); + jiffies); #endif /* STATS2 */ - cERROR(1, ("IsMult: %d IsEnd: %d", mid_entry->multiRsp, - mid_entry->multiEnd)); + cERROR(1, "IsMult: %d IsEnd: %d", mid_entry->multiRsp, + mid_entry->multiEnd); if (mid_entry->resp_buf) { cifs_dump_detail(mid_entry->resp_buf); cifs_dump_mem("existing buf: ", @@ -716,7 +716,7 @@ static const struct file_operations cifs_multiuser_mount_proc_fops = { static int cifs_security_flags_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "0x%x\n", extended_security); + seq_printf(m, "0x%x\n", global_secflags); return 0; } @@ -744,13 +744,13 @@ static ssize_t cifs_security_flags_proc_write(struct file *file, /* single char or single char followed by null */ c = flags_string[0]; if (c == '0' || c == 'n' || c == 'N') { - extended_security = CIFSSEC_DEF; /* default */ + global_secflags = CIFSSEC_DEF; /* default */ return count; } else if (c == '1' || c == 'y' || c == 'Y') { - extended_security = CIFSSEC_MAX; + global_secflags = CIFSSEC_MAX; return count; } else if (!isdigit(c)) { - cERROR(1, ("invalid flag %c", c)); + cERROR(1, "invalid flag %c", c); return -EINVAL; } } @@ -758,26 +758,26 @@ static ssize_t cifs_security_flags_proc_write(struct file *file, flags = simple_strtoul(flags_string, NULL, 0); - cFYI(1, ("sec flags 0x%x", flags)); + cFYI(1, "sec flags 0x%x", flags); if (flags <= 0) { - cERROR(1, ("invalid security flags %s", flags_string)); + cERROR(1, "invalid security flags %s", flags_string); return -EINVAL; } if (flags & ~CIFSSEC_MASK) { - cERROR(1, ("attempt to set unsupported security flags 0x%x", - flags & 
~CIFSSEC_MASK)); + cERROR(1, "attempt to set unsupported security flags 0x%x", + flags & ~CIFSSEC_MASK); return -EINVAL; } /* flags look ok - update the global security flags for cifs module */ - extended_security = flags; - if (extended_security & CIFSSEC_MUST_SIGN) { + global_secflags = flags; + if (global_secflags & CIFSSEC_MUST_SIGN) { /* requiring signing implies signing is allowed */ - extended_security |= CIFSSEC_MAY_SIGN; - cFYI(1, ("packet signing now required")); - } else if ((extended_security & CIFSSEC_MAY_SIGN) == 0) { - cFYI(1, ("packet signing disabled")); + global_secflags |= CIFSSEC_MAY_SIGN; + cFYI(1, "packet signing now required"); + } else if ((global_secflags & CIFSSEC_MAY_SIGN) == 0) { + cFYI(1, "packet signing disabled"); } /* BB should we turn on MAY flags for other MUST options? */ return count; diff --git a/fs/cifs/cifs_debug.h b/fs/cifs/cifs_debug.h index 5eb3b83bbfa7..aa316891ac0c 100644 --- a/fs/cifs/cifs_debug.h +++ b/fs/cifs/cifs_debug.h @@ -43,34 +43,54 @@ void dump_smb(struct smb_hdr *, int); */ #ifdef CIFS_DEBUG - /* information message: e.g., configuration, major event */ extern int cifsFYI; -#define cifsfyi(format,arg...) if (cifsFYI & CIFS_INFO) printk(KERN_DEBUG " " __FILE__ ": " format "\n" "" , ## arg) +#define cifsfyi(fmt, arg...) \ +do { \ + if (cifsFYI & CIFS_INFO) \ + printk(KERN_DEBUG "%s: " fmt "\n", __FILE__, ##arg); \ +} while (0) -#define cFYI(button,prspec) if (button) cifsfyi prspec +#define cFYI(set, fmt, arg...) \ +do { \ + if (set) \ + cifsfyi(fmt, ##arg); \ +} while (0) -#define cifswarn(format, arg...) printk(KERN_WARNING ": " format "\n" , ## arg) +#define cifswarn(fmt, arg...) \ + printk(KERN_WARNING fmt "\n", ##arg) /* debug event message: */ extern int cifsERROR; -#define cEVENT(format,arg...) if (cifsERROR) printk(KERN_EVENT __FILE__ ": " format "\n" , ## arg) +#define cEVENT(fmt, arg...) \ +do { \ + if (cifsERROR) \ + printk(KERN_EVENT "%s: " fmt "\n", __FILE__, ##arg); \ +} while (0) /* error event message: e.g., i/o error */ -#define cifserror(format,arg...) if (cifsERROR) printk(KERN_ERR " CIFS VFS: " format "\n" "" , ## arg) +#define cifserror(fmt, arg...) \ +do { \ + if (cifsERROR) \ + printk(KERN_ERR "CIFS VFS: " fmt "\n", ##arg); \ +} while (0) -#define cERROR(button, prspec) if (button) cifserror prspec +#define cERROR(set, fmt, arg...) \ +do { \ + if (set) \ + cifserror(fmt, ##arg); \ +} while (0) /* * debug OFF * --------- */ #else /* _CIFS_DEBUG */ -#define cERROR(button, prspec) -#define cEVENT(format, arg...) -#define cFYI(button, prspec) -#define cifserror(format, arg...) +#define cERROR(set, fmt, arg...) +#define cEVENT(fmt, arg...) +#define cFYI(set, fmt, arg...) +#define cifserror(fmt, arg...) 
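
The cifs_debug.h hunk above replaces the old cFYI(button, (fmt, args)) and cERROR(button, (fmt, args)) wrappers, which forced an extra pair of parentheses at every call site, with variadic macros wrapped in do { } while (0). Below is a minimal user-space sketch of that same pattern, compilable on its own with gcc; the names DBG and dbg_enabled are invented for the example and printf stands in for printk, so it only approximates the kernel macros rather than reproducing them.

#include <stdio.h>

static int dbg_enabled = 1;

/* variadic debug macro in the style of the new cFYI/cERROR */
#define DBG(set, fmt, arg...)					\
do {								\
	if ((set) && dbg_enabled)				\
		printf("%s: " fmt "\n", __FILE__, ##arg);	\
} while (0)

int main(void)
{
	int rc = -22;

	/* call sites now pass a plain format string, no double parentheses */
	DBG(1, "sec flags 0x%x", 0x25);

	/* do { } while (0) keeps the macro safe in an unbraced if/else */
	if (rc)
		DBG(1, "parse failed rc = %d", rc);
	else
		DBG(1, "ok");
	return 0;
}

The do { } while (0) wrapper is what lets the multi-statement body behave as a single statement after an unbraced if, which is why the new definitions use it instead of a bare block.
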
#endif /* _CIFS_DEBUG */ #endif /* _H_CIFS_DEBUG */ diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c index 78e4d2a3a68b..ac19a6f3dae0 100644 --- a/fs/cifs/cifs_dfs_ref.c +++ b/fs/cifs/cifs_dfs_ref.c @@ -85,8 +85,8 @@ static char *cifs_get_share_name(const char *node_name) /* find server name end */ pSep = memchr(UNC+2, '\\', len-2); if (!pSep) { - cERROR(1, ("%s: no server name end in node name: %s", - __func__, node_name)); + cERROR(1, "%s: no server name end in node name: %s", + __func__, node_name); kfree(UNC); return ERR_PTR(-EINVAL); } @@ -142,8 +142,8 @@ char *cifs_compose_mount_options(const char *sb_mountdata, rc = dns_resolve_server_name_to_ip(*devname, &srvIP); if (rc != 0) { - cERROR(1, ("%s: Failed to resolve server part of %s to IP: %d", - __func__, *devname, rc)); + cERROR(1, "%s: Failed to resolve server part of %s to IP: %d", + __func__, *devname, rc); goto compose_mount_options_err; } /* md_len = strlen(...) + 12 for 'sep+prefixpath=' @@ -217,8 +217,8 @@ char *cifs_compose_mount_options(const char *sb_mountdata, strcat(mountdata, fullpath + ref->path_consumed); } - /*cFYI(1,("%s: parent mountdata: %s", __func__,sb_mountdata));*/ - /*cFYI(1, ("%s: submount mountdata: %s", __func__, mountdata ));*/ + /*cFYI(1, "%s: parent mountdata: %s", __func__,sb_mountdata);*/ + /*cFYI(1, "%s: submount mountdata: %s", __func__, mountdata );*/ compose_mount_options_out: kfree(srvIP); @@ -294,11 +294,11 @@ static int add_mount_helper(struct vfsmount *newmnt, struct nameidata *nd, static void dump_referral(const struct dfs_info3_param *ref) { - cFYI(1, ("DFS: ref path: %s", ref->path_name)); - cFYI(1, ("DFS: node path: %s", ref->node_name)); - cFYI(1, ("DFS: fl: %hd, srv_type: %hd", ref->flags, ref->server_type)); - cFYI(1, ("DFS: ref_flags: %hd, path_consumed: %hd", ref->ref_flag, - ref->path_consumed)); + cFYI(1, "DFS: ref path: %s", ref->path_name); + cFYI(1, "DFS: node path: %s", ref->node_name); + cFYI(1, "DFS: fl: %hd, srv_type: %hd", ref->flags, ref->server_type); + cFYI(1, "DFS: ref_flags: %hd, path_consumed: %hd", ref->ref_flag, + ref->path_consumed); } @@ -314,7 +314,7 @@ cifs_dfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd) int rc = 0; struct vfsmount *mnt = ERR_PTR(-ENOENT); - cFYI(1, ("in %s", __func__)); + cFYI(1, "in %s", __func__); BUG_ON(IS_ROOT(dentry)); xid = GetXid(); @@ -352,15 +352,15 @@ cifs_dfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd) /* connect to a node */ len = strlen(referrals[i].node_name); if (len < 2) { - cERROR(1, ("%s: Net Address path too short: %s", - __func__, referrals[i].node_name)); + cERROR(1, "%s: Net Address path too short: %s", + __func__, referrals[i].node_name); rc = -EINVAL; goto out_err; } mnt = cifs_dfs_do_refmount(nd->path.mnt, nd->path.dentry, referrals + i); - cFYI(1, ("%s: cifs_dfs_do_refmount:%s , mnt:%p", __func__, - referrals[i].node_name, mnt)); + cFYI(1, "%s: cifs_dfs_do_refmount:%s , mnt:%p", __func__, + referrals[i].node_name, mnt); /* complete mount procedure if we accured submount */ if (!IS_ERR(mnt)) @@ -378,7 +378,7 @@ out: FreeXid(xid); free_dfs_info_array(referrals, num_referrals); kfree(full_path); - cFYI(1, ("leaving %s" , __func__)); + cFYI(1, "leaving %s" , __func__); return ERR_PTR(rc); out_err: path_put(&nd->path); diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c index 310d12f69a92..379bd7d9c05f 100644 --- a/fs/cifs/cifs_spnego.c +++ b/fs/cifs/cifs_spnego.c @@ -133,9 +133,9 @@ cifs_get_spnego_key(struct cifsSesInfo *sesInfo) dp = description + 
strlen(description); /* for now, only sec=krb5 and sec=mskrb5 are valid */ - if (server->secType == Kerberos) + if (server->sec_kerberos) sprintf(dp, ";sec=krb5"); - else if (server->secType == MSKerberos) + else if (server->sec_mskerberos) sprintf(dp, ";sec=mskrb5"); else goto out; @@ -149,7 +149,7 @@ cifs_get_spnego_key(struct cifsSesInfo *sesInfo) dp = description + strlen(description); sprintf(dp, ";pid=0x%x", current->pid); - cFYI(1, ("key description = %s", description)); + cFYI(1, "key description = %s", description); spnego_key = request_key(&cifs_spnego_key_type, description, ""); #ifdef CONFIG_CIFS_DEBUG2 diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c index d07676bd76d2..430f510a1720 100644 --- a/fs/cifs/cifs_unicode.c +++ b/fs/cifs/cifs_unicode.c @@ -200,9 +200,8 @@ cifs_strtoUCS(__le16 *to, const char *from, int len, /* works for 2.4.0 kernel or later */ charlen = codepage->char2uni(from, len, &wchar_to[i]); if (charlen < 1) { - cERROR(1, - ("strtoUCS: char2uni of %d returned %d", - (int)*from, charlen)); + cERROR(1, "strtoUCS: char2uni of %d returned %d", + (int)*from, charlen); /* A question mark */ to[i] = cpu_to_le16(0x003f); charlen = 1; diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c index 9b716d044bbd..85d7cf7ff2c8 100644 --- a/fs/cifs/cifsacl.c +++ b/fs/cifs/cifsacl.c @@ -87,11 +87,11 @@ int match_sid(struct cifs_sid *ctsid) continue; /* all sub_auth values do not match */ } - cFYI(1, ("matching sid: %s\n", wksidarr[i].sidname)); + cFYI(1, "matching sid: %s\n", wksidarr[i].sidname); return 0; /* sids compare/match */ } - cFYI(1, ("No matching sid")); + cFYI(1, "No matching sid"); return -1; } @@ -208,14 +208,14 @@ static void access_flags_to_mode(__le32 ace_flags, int type, umode_t *pmode, *pbits_to_set &= ~S_IXUGO; return; } else if (type != ACCESS_ALLOWED) { - cERROR(1, ("unknown access control type %d", type)); + cERROR(1, "unknown access control type %d", type); return; } /* else ACCESS_ALLOWED type */ if (flags & GENERIC_ALL) { *pmode |= (S_IRWXUGO & (*pbits_to_set)); - cFYI(DBG2, ("all perms")); + cFYI(DBG2, "all perms"); return; } if ((flags & GENERIC_WRITE) || @@ -228,7 +228,7 @@ static void access_flags_to_mode(__le32 ace_flags, int type, umode_t *pmode, ((flags & FILE_EXEC_RIGHTS) == FILE_EXEC_RIGHTS)) *pmode |= (S_IXUGO & (*pbits_to_set)); - cFYI(DBG2, ("access flags 0x%x mode now 0x%x", flags, *pmode)); + cFYI(DBG2, "access flags 0x%x mode now 0x%x", flags, *pmode); return; } @@ -257,7 +257,7 @@ static void mode_to_access_flags(umode_t mode, umode_t bits_to_use, if (mode & S_IXUGO) *pace_flags |= SET_FILE_EXEC_RIGHTS; - cFYI(DBG2, ("mode: 0x%x, access flags now 0x%x", mode, *pace_flags)); + cFYI(DBG2, "mode: 0x%x, access flags now 0x%x", mode, *pace_flags); return; } @@ -297,24 +297,24 @@ static void dump_ace(struct cifs_ace *pace, char *end_of_acl) /* validate that we do not go past end of acl */ if (le16_to_cpu(pace->size) < 16) { - cERROR(1, ("ACE too small, %d", le16_to_cpu(pace->size))); + cERROR(1, "ACE too small %d", le16_to_cpu(pace->size)); return; } if (end_of_acl < (char *)pace + le16_to_cpu(pace->size)) { - cERROR(1, ("ACL too small to parse ACE")); + cERROR(1, "ACL too small to parse ACE"); return; } num_subauth = pace->sid.num_subauth; if (num_subauth) { int i; - cFYI(1, ("ACE revision %d num_auth %d type %d flags %d size %d", + cFYI(1, "ACE revision %d num_auth %d type %d flags %d size %d", pace->sid.revision, pace->sid.num_subauth, pace->type, - pace->flags, le16_to_cpu(pace->size))); + pace->flags, le16_to_cpu(pace->size)); 
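
The decode_negTokenInit() and cifs_spnego.c hunks above move SPNEGO handling away from picking a single secType while parsing: the parser now only records which mechanisms the server offered, as independent flags on the server structure, and the choice is made afterwards. The fragment below is a rough stand-alone sketch of that record-everything-then-choose shape; the enum values, structure name and preference order are invented for illustration and are not taken from the patch.

#include <stdbool.h>
#include <stdio.h>

/* stand-ins for the mechanism OIDs compared during negTokenInit parsing */
enum mech { MECH_MSKRB5, MECH_KRB5, MECH_KRB5U2U, MECH_NTLMSSP };

/* per-server flags, loosely modelled on the new sec_* fields */
struct srv_flags {
	bool sec_mskerberos;
	bool sec_kerberos;
	bool sec_kerberosu2u;
	bool sec_ntlmssp;
};

/* parsing pass: remember every mechanism offered */
static void record_mech(struct srv_flags *s, enum mech m)
{
	switch (m) {
	case MECH_MSKRB5:	s->sec_mskerberos = true; break;
	case MECH_KRB5:		s->sec_kerberos = true; break;
	case MECH_KRB5U2U:	s->sec_kerberosu2u = true; break;
	case MECH_NTLMSSP:	s->sec_ntlmssp = true; break;
	}
}

/* later pass: pick one mechanism by an illustrative preference order */
static const char *choose(const struct srv_flags *s)
{
	if (s->sec_kerberos)
		return "krb5";
	if (s->sec_mskerberos)
		return "mskrb5";
	if (s->sec_ntlmssp)
		return "ntlmssp";
	return "none";
}

int main(void)
{
	struct srv_flags s = { false, false, false, false };
	enum mech offered[] = { MECH_NTLMSSP, MECH_MSKRB5 };
	unsigned int i;

	for (i = 0; i < sizeof(offered) / sizeof(offered[0]); i++)
		record_mech(&s, offered[i]);
	printf("negotiated: %s\n", choose(&s));
	return 0;
}
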
for (i = 0; i < num_subauth; ++i) { - cFYI(1, ("ACE sub_auth[%d]: 0x%x", i, - le32_to_cpu(pace->sid.sub_auth[i]))); + cFYI(1, "ACE sub_auth[%d]: 0x%x", i, + le32_to_cpu(pace->sid.sub_auth[i])); } /* BB add length check to make sure that we do not have huge @@ -347,13 +347,13 @@ static void parse_dacl(struct cifs_acl *pdacl, char *end_of_acl, /* validate that we do not go past end of acl */ if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) { - cERROR(1, ("ACL too small to parse DACL")); + cERROR(1, "ACL too small to parse DACL"); return; } - cFYI(DBG2, ("DACL revision %d size %d num aces %d", + cFYI(DBG2, "DACL revision %d size %d num aces %d", le16_to_cpu(pdacl->revision), le16_to_cpu(pdacl->size), - le32_to_cpu(pdacl->num_aces))); + le32_to_cpu(pdacl->num_aces)); /* reset rwx permissions for user/group/other. Also, if num_aces is 0 i.e. DACL has no ACEs, @@ -437,25 +437,25 @@ static int parse_sid(struct cifs_sid *psid, char *end_of_acl) /* validate that we do not go past end of ACL - sid must be at least 8 bytes long (assuming no sub-auths - e.g. the null SID */ if (end_of_acl < (char *)psid + 8) { - cERROR(1, ("ACL too small to parse SID %p", psid)); + cERROR(1, "ACL too small to parse SID %p", psid); return -EINVAL; } if (psid->num_subauth) { #ifdef CONFIG_CIFS_DEBUG2 int i; - cFYI(1, ("SID revision %d num_auth %d", - psid->revision, psid->num_subauth)); + cFYI(1, "SID revision %d num_auth %d", + psid->revision, psid->num_subauth); for (i = 0; i < psid->num_subauth; i++) { - cFYI(1, ("SID sub_auth[%d]: 0x%x ", i, - le32_to_cpu(psid->sub_auth[i]))); + cFYI(1, "SID sub_auth[%d]: 0x%x ", i, + le32_to_cpu(psid->sub_auth[i])); } /* BB add length check to make sure that we do not have huge num auths and therefore go off the end */ - cFYI(1, ("RID 0x%x", - le32_to_cpu(psid->sub_auth[psid->num_subauth-1]))); + cFYI(1, "RID 0x%x", + le32_to_cpu(psid->sub_auth[psid->num_subauth-1])); #endif } @@ -482,11 +482,11 @@ static int parse_sec_desc(struct cifs_ntsd *pntsd, int acl_len, le32_to_cpu(pntsd->gsidoffset)); dacloffset = le32_to_cpu(pntsd->dacloffset); dacl_ptr = (struct cifs_acl *)((char *)pntsd + dacloffset); - cFYI(DBG2, ("revision %d type 0x%x ooffset 0x%x goffset 0x%x " + cFYI(DBG2, "revision %d type 0x%x ooffset 0x%x goffset 0x%x " "sacloffset 0x%x dacloffset 0x%x", pntsd->revision, pntsd->type, le32_to_cpu(pntsd->osidoffset), le32_to_cpu(pntsd->gsidoffset), - le32_to_cpu(pntsd->sacloffset), dacloffset)); + le32_to_cpu(pntsd->sacloffset), dacloffset); /* cifs_dump_mem("owner_sid: ", owner_sid_ptr, 64); */ rc = parse_sid(owner_sid_ptr, end_of_acl); if (rc) @@ -500,7 +500,7 @@ static int parse_sec_desc(struct cifs_ntsd *pntsd, int acl_len, parse_dacl(dacl_ptr, end_of_acl, owner_sid_ptr, group_sid_ptr, fattr); else - cFYI(1, ("no ACL")); /* BB grant all or default perms? */ + cFYI(1, "no ACL"); /* BB grant all or default perms? 
*/ /* cifscred->uid = owner_sid_ptr->rid; cifscred->gid = group_sid_ptr->rid; @@ -563,7 +563,7 @@ static struct cifs_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb, FreeXid(xid); - cFYI(1, ("GetCIFSACL rc = %d ACL len %d", rc, *pacllen)); + cFYI(1, "GetCIFSACL rc = %d ACL len %d", rc, *pacllen); return pntsd; } @@ -581,12 +581,12 @@ static struct cifs_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb, &fid, &oplock, NULL, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); if (rc) { - cERROR(1, ("Unable to open file to get ACL")); + cERROR(1, "Unable to open file to get ACL"); goto out; } rc = CIFSSMBGetCIFSACL(xid, cifs_sb->tcon, fid, &pntsd, pacllen); - cFYI(1, ("GetCIFSACL rc = %d ACL len %d", rc, *pacllen)); + cFYI(1, "GetCIFSACL rc = %d ACL len %d", rc, *pacllen); CIFSSMBClose(xid, cifs_sb->tcon, fid); out: @@ -621,7 +621,7 @@ static int set_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb, __u16 fid, rc = CIFSSMBSetCIFSACL(xid, cifs_sb->tcon, fid, pnntsd, acllen); FreeXid(xid); - cFYI(DBG2, ("SetCIFSACL rc = %d", rc)); + cFYI(DBG2, "SetCIFSACL rc = %d", rc); return rc; } @@ -638,12 +638,12 @@ static int set_cifs_acl_by_path(struct cifs_sb_info *cifs_sb, const char *path, &fid, &oplock, NULL, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); if (rc) { - cERROR(1, ("Unable to open file to set ACL")); + cERROR(1, "Unable to open file to set ACL"); goto out; } rc = CIFSSMBSetCIFSACL(xid, cifs_sb->tcon, fid, pnntsd, acllen); - cFYI(DBG2, ("SetCIFSACL rc = %d", rc)); + cFYI(DBG2, "SetCIFSACL rc = %d", rc); CIFSSMBClose(xid, cifs_sb->tcon, fid); out: @@ -659,7 +659,7 @@ static int set_cifs_acl(struct cifs_ntsd *pnntsd, __u32 acllen, struct cifsFileInfo *open_file; int rc; - cFYI(DBG2, ("set ACL for %s from mode 0x%x", path, inode->i_mode)); + cFYI(DBG2, "set ACL for %s from mode 0x%x", path, inode->i_mode); open_file = find_readable_file(CIFS_I(inode)); if (!open_file) @@ -679,7 +679,7 @@ cifs_acl_to_fattr(struct cifs_sb_info *cifs_sb, struct cifs_fattr *fattr, u32 acllen = 0; int rc = 0; - cFYI(DBG2, ("converting ACL to mode for %s", path)); + cFYI(DBG2, "converting ACL to mode for %s", path); if (pfid) pntsd = get_cifs_acl_by_fid(cifs_sb, *pfid, &acllen); @@ -690,7 +690,7 @@ cifs_acl_to_fattr(struct cifs_sb_info *cifs_sb, struct cifs_fattr *fattr, if (pntsd) rc = parse_sec_desc(pntsd, acllen, fattr); if (rc) - cFYI(1, ("parse sec desc failed rc = %d", rc)); + cFYI(1, "parse sec desc failed rc = %d", rc); kfree(pntsd); return; @@ -704,7 +704,7 @@ int mode_to_acl(struct inode *inode, const char *path, __u64 nmode) struct cifs_ntsd *pntsd = NULL; /* acl obtained from server */ struct cifs_ntsd *pnntsd = NULL; /* modified acl to be sent to server */ - cFYI(DBG2, ("set ACL from mode for %s", path)); + cFYI(DBG2, "set ACL from mode for %s", path); /* Get the security descriptor */ pntsd = get_cifs_acl(CIFS_SB(inode->i_sb), inode, path, &secdesclen); @@ -721,19 +721,19 @@ int mode_to_acl(struct inode *inode, const char *path, __u64 nmode) DEFSECDESCLEN : secdesclen; pnntsd = kmalloc(secdesclen, GFP_KERNEL); if (!pnntsd) { - cERROR(1, ("Unable to allocate security descriptor")); + cERROR(1, "Unable to allocate security descriptor"); kfree(pntsd); return -ENOMEM; } rc = build_sec_desc(pntsd, pnntsd, inode, nmode); - cFYI(DBG2, ("build_sec_desc rc: %d", rc)); + cFYI(DBG2, "build_sec_desc rc: %d", rc); if (!rc) { /* Set the security descriptor */ rc = set_cifs_acl(pnntsd, secdesclen, inode, path); - cFYI(DBG2, ("set_cifs_acl rc: %d", rc)); + 
cFYI(DBG2, "set_cifs_acl rc: %d", rc); } kfree(pnntsd); diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c index fbe986430d0c..847628dfdc44 100644 --- a/fs/cifs/cifsencrypt.c +++ b/fs/cifs/cifsencrypt.c @@ -103,7 +103,7 @@ static int cifs_calc_signature2(const struct kvec *iov, int n_vec, if (iov[i].iov_len == 0) continue; if (iov[i].iov_base == NULL) { - cERROR(1, ("null iovec entry")); + cERROR(1, "null iovec entry"); return -EIO; } /* The first entry includes a length field (which does not get @@ -181,8 +181,8 @@ int cifs_verify_signature(struct smb_hdr *cifs_pdu, /* Do not need to verify session setups with signature "BSRSPYL " */ if (memcmp(cifs_pdu->Signature.SecuritySignature, "BSRSPYL ", 8) == 0) - cFYI(1, ("dummy signature received for smb command 0x%x", - cifs_pdu->Command)); + cFYI(1, "dummy signature received for smb command 0x%x", + cifs_pdu->Command); /* save off the origiginal signature so we can modify the smb and check its signature against what the server sent */ @@ -291,7 +291,7 @@ void calc_lanman_hash(const char *password, const char *cryptkey, bool encrypt, if (password) strncpy(password_with_pad, password, CIFS_ENCPWD_SIZE); - if (!encrypt && extended_security & CIFSSEC_MAY_PLNTXT) { + if (!encrypt && global_secflags & CIFSSEC_MAY_PLNTXT) { memset(lnm_session_key, 0, CIFS_SESS_KEY_SIZE); memcpy(lnm_session_key, password_with_pad, CIFS_ENCPWD_SIZE); @@ -398,7 +398,7 @@ void setup_ntlmv2_rsp(struct cifsSesInfo *ses, char *resp_buf, /* calculate buf->ntlmv2_hash */ rc = calc_ntlmv2_hash(ses, nls_cp); if (rc) - cERROR(1, ("could not get v2 hash rc %d", rc)); + cERROR(1, "could not get v2 hash rc %d", rc); CalcNTLMv2_response(ses, resp_buf); /* now calculate the MAC key for NTLMv2 */ diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c index ad235d604a0b..78c02eb4cb1f 100644 --- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -49,10 +49,6 @@ #include "cifs_spnego.h" #define CIFS_MAGIC_NUMBER 0xFF534D42 /* the first four bytes of SMB PDUs */ -#ifdef CONFIG_CIFS_QUOTA -static const struct quotactl_ops cifs_quotactl_ops; -#endif /* QUOTA */ - int cifsFYI = 0; int cifsERROR = 1; int traceSMB = 0; @@ -61,7 +57,7 @@ unsigned int experimEnabled = 0; unsigned int linuxExtEnabled = 1; unsigned int lookupCacheEnabled = 1; unsigned int multiuser_mount = 0; -unsigned int extended_security = CIFSSEC_DEF; +unsigned int global_secflags = CIFSSEC_DEF; /* unsigned int ntlmv2_support = 0; */ unsigned int sign_CIFS_PDUs = 1; static const struct super_operations cifs_super_ops; @@ -86,8 +82,6 @@ extern mempool_t *cifs_sm_req_poolp; extern mempool_t *cifs_req_poolp; extern mempool_t *cifs_mid_poolp; -extern struct kmem_cache *cifs_oplock_cachep; - static int cifs_read_super(struct super_block *sb, void *data, const char *devname, int silent) @@ -135,8 +129,7 @@ cifs_read_super(struct super_block *sb, void *data, if (rc) { if (!silent) - cERROR(1, - ("cifs_mount failed w/return code = %d", rc)); + cERROR(1, "cifs_mount failed w/return code = %d", rc); goto out_mount_failed; } @@ -146,9 +139,6 @@ cifs_read_super(struct super_block *sb, void *data, /* if (cifs_sb->tcon->ses->server->maxBuf > MAX_CIFS_HDR_SIZE + 512) sb->s_blocksize = cifs_sb->tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE; */ -#ifdef CONFIG_CIFS_QUOTA - sb->s_qcop = &cifs_quotactl_ops; -#endif sb->s_blocksize = CIFS_MAX_MSGSIZE; sb->s_blocksize_bits = 14; /* default 2**14 = CIFS_MAX_MSGSIZE */ inode = cifs_root_iget(sb, ROOT_I); @@ -168,7 +158,7 @@ cifs_read_super(struct super_block *sb, void *data, #ifdef 
CONFIG_CIFS_EXPERIMENTAL if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { - cFYI(1, ("export ops supported")); + cFYI(1, "export ops supported"); sb->s_export_op = &cifs_export_ops; } #endif /* EXPERIMENTAL */ @@ -176,7 +166,7 @@ cifs_read_super(struct super_block *sb, void *data, return 0; out_no_root: - cERROR(1, ("cifs_read_super: get root inode failed")); + cERROR(1, "cifs_read_super: get root inode failed"); if (inode) iput(inode); @@ -203,10 +193,10 @@ cifs_put_super(struct super_block *sb) int rc = 0; struct cifs_sb_info *cifs_sb; - cFYI(1, ("In cifs_put_super")); + cFYI(1, "In cifs_put_super"); cifs_sb = CIFS_SB(sb); if (cifs_sb == NULL) { - cFYI(1, ("Empty cifs superblock info passed to unmount")); + cFYI(1, "Empty cifs superblock info passed to unmount"); return; } @@ -214,7 +204,7 @@ cifs_put_super(struct super_block *sb) rc = cifs_umount(sb, cifs_sb); if (rc) - cERROR(1, ("cifs_umount failed with return code %d", rc)); + cERROR(1, "cifs_umount failed with return code %d", rc); #ifdef CONFIG_CIFS_DFS_UPCALL if (cifs_sb->mountdata) { kfree(cifs_sb->mountdata); @@ -300,7 +290,6 @@ static int cifs_permission(struct inode *inode, int mask) static struct kmem_cache *cifs_inode_cachep; static struct kmem_cache *cifs_req_cachep; static struct kmem_cache *cifs_mid_cachep; -struct kmem_cache *cifs_oplock_cachep; static struct kmem_cache *cifs_sm_req_cachep; mempool_t *cifs_sm_req_poolp; mempool_t *cifs_req_poolp; @@ -432,106 +421,6 @@ cifs_show_options(struct seq_file *s, struct vfsmount *m) return 0; } -#ifdef CONFIG_CIFS_QUOTA -int cifs_xquota_set(struct super_block *sb, int quota_type, qid_t qid, - struct fs_disk_quota *pdquota) -{ - int xid; - int rc = 0; - struct cifs_sb_info *cifs_sb = CIFS_SB(sb); - struct cifsTconInfo *pTcon; - - if (cifs_sb) - pTcon = cifs_sb->tcon; - else - return -EIO; - - - xid = GetXid(); - if (pTcon) { - cFYI(1, ("set type: 0x%x id: %d", quota_type, qid)); - } else - rc = -EIO; - - FreeXid(xid); - return rc; -} - -int cifs_xquota_get(struct super_block *sb, int quota_type, qid_t qid, - struct fs_disk_quota *pdquota) -{ - int xid; - int rc = 0; - struct cifs_sb_info *cifs_sb = CIFS_SB(sb); - struct cifsTconInfo *pTcon; - - if (cifs_sb) - pTcon = cifs_sb->tcon; - else - return -EIO; - - xid = GetXid(); - if (pTcon) { - cFYI(1, ("set type: 0x%x id: %d", quota_type, qid)); - } else - rc = -EIO; - - FreeXid(xid); - return rc; -} - -int cifs_xstate_set(struct super_block *sb, unsigned int flags, int operation) -{ - int xid; - int rc = 0; - struct cifs_sb_info *cifs_sb = CIFS_SB(sb); - struct cifsTconInfo *pTcon; - - if (cifs_sb) - pTcon = cifs_sb->tcon; - else - return -EIO; - - xid = GetXid(); - if (pTcon) { - cFYI(1, ("flags: 0x%x operation: 0x%x", flags, operation)); - } else - rc = -EIO; - - FreeXid(xid); - return rc; -} - -int cifs_xstate_get(struct super_block *sb, struct fs_quota_stat *qstats) -{ - int xid; - int rc = 0; - struct cifs_sb_info *cifs_sb = CIFS_SB(sb); - struct cifsTconInfo *pTcon; - - if (cifs_sb) - pTcon = cifs_sb->tcon; - else - return -EIO; - - xid = GetXid(); - if (pTcon) { - cFYI(1, ("pqstats %p", qstats)); - } else - rc = -EIO; - - FreeXid(xid); - return rc; -} - -static const struct quotactl_ops cifs_quotactl_ops = { - .set_xquota = cifs_xquota_set, - .get_xquota = cifs_xquota_get, - .set_xstate = cifs_xstate_set, - .get_xstate = cifs_xstate_get, -}; -#endif - static void cifs_umount_begin(struct super_block *sb) { struct cifs_sb_info *cifs_sb = CIFS_SB(sb); @@ -558,7 +447,7 @@ static void cifs_umount_begin(struct super_block 
*sb) /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */ /* cancel_notify_requests(tcon); */ if (tcon->ses && tcon->ses->server) { - cFYI(1, ("wake up tasks now - umount begin not complete")); + cFYI(1, "wake up tasks now - umount begin not complete"); wake_up_all(&tcon->ses->server->request_q); wake_up_all(&tcon->ses->server->response_q); msleep(1); /* yield */ @@ -609,7 +498,7 @@ cifs_get_sb(struct file_system_type *fs_type, int rc; struct super_block *sb = sget(fs_type, NULL, set_anon_super, NULL); - cFYI(1, ("Devname: %s flags: %d ", dev_name, flags)); + cFYI(1, "Devname: %s flags: %d ", dev_name, flags); if (IS_ERR(sb)) return PTR_ERR(sb); @@ -656,7 +545,6 @@ static loff_t cifs_llseek(struct file *file, loff_t offset, int origin) return generic_file_llseek_unlocked(file, offset, origin); } -#ifdef CONFIG_CIFS_EXPERIMENTAL static int cifs_setlease(struct file *file, long arg, struct file_lock **lease) { /* note that this is called by vfs setlease with the BKL held @@ -685,7 +573,6 @@ static int cifs_setlease(struct file *file, long arg, struct file_lock **lease) else return -EAGAIN; } -#endif struct file_system_type cifs_fs_type = { .owner = THIS_MODULE, @@ -762,10 +649,7 @@ const struct file_operations cifs_file_ops = { #ifdef CONFIG_CIFS_POSIX .unlocked_ioctl = cifs_ioctl, #endif /* CONFIG_CIFS_POSIX */ - -#ifdef CONFIG_CIFS_EXPERIMENTAL .setlease = cifs_setlease, -#endif /* CONFIG_CIFS_EXPERIMENTAL */ }; const struct file_operations cifs_file_direct_ops = { @@ -784,9 +668,7 @@ const struct file_operations cifs_file_direct_ops = { .unlocked_ioctl = cifs_ioctl, #endif /* CONFIG_CIFS_POSIX */ .llseek = cifs_llseek, -#ifdef CONFIG_CIFS_EXPERIMENTAL .setlease = cifs_setlease, -#endif /* CONFIG_CIFS_EXPERIMENTAL */ }; const struct file_operations cifs_file_nobrl_ops = { .read = do_sync_read, @@ -803,10 +685,7 @@ const struct file_operations cifs_file_nobrl_ops = { #ifdef CONFIG_CIFS_POSIX .unlocked_ioctl = cifs_ioctl, #endif /* CONFIG_CIFS_POSIX */ - -#ifdef CONFIG_CIFS_EXPERIMENTAL .setlease = cifs_setlease, -#endif /* CONFIG_CIFS_EXPERIMENTAL */ }; const struct file_operations cifs_file_direct_nobrl_ops = { @@ -824,9 +703,7 @@ const struct file_operations cifs_file_direct_nobrl_ops = { .unlocked_ioctl = cifs_ioctl, #endif /* CONFIG_CIFS_POSIX */ .llseek = cifs_llseek, -#ifdef CONFIG_CIFS_EXPERIMENTAL .setlease = cifs_setlease, -#endif /* CONFIG_CIFS_EXPERIMENTAL */ }; const struct file_operations cifs_dir_ops = { @@ -878,7 +755,7 @@ cifs_init_request_bufs(void) } else { CIFSMaxBufSize &= 0x1FE00; /* Round size to even 512 byte mult*/ } -/* cERROR(1,("CIFSMaxBufSize %d 0x%x",CIFSMaxBufSize,CIFSMaxBufSize)); */ +/* cERROR(1, "CIFSMaxBufSize %d 0x%x",CIFSMaxBufSize,CIFSMaxBufSize); */ cifs_req_cachep = kmem_cache_create("cifs_request", CIFSMaxBufSize + MAX_CIFS_HDR_SIZE, 0, @@ -890,7 +767,7 @@ cifs_init_request_bufs(void) cifs_min_rcv = 1; else if (cifs_min_rcv > 64) { cifs_min_rcv = 64; - cERROR(1, ("cifs_min_rcv set to maximum (64)")); + cERROR(1, "cifs_min_rcv set to maximum (64)"); } cifs_req_poolp = mempool_create_slab_pool(cifs_min_rcv, @@ -921,7 +798,7 @@ cifs_init_request_bufs(void) cifs_min_small = 2; else if (cifs_min_small > 256) { cifs_min_small = 256; - cFYI(1, ("cifs_min_small set to maximum (256)")); + cFYI(1, "cifs_min_small set to maximum (256)"); } cifs_sm_req_poolp = mempool_create_slab_pool(cifs_min_small, @@ -962,15 +839,6 @@ cifs_init_mids(void) return -ENOMEM; } - cifs_oplock_cachep = kmem_cache_create("cifs_oplock_structs", - sizeof(struct 
oplock_q_entry), 0, - SLAB_HWCACHE_ALIGN, NULL); - if (cifs_oplock_cachep == NULL) { - mempool_destroy(cifs_mid_poolp); - kmem_cache_destroy(cifs_mid_cachep); - return -ENOMEM; - } - return 0; } @@ -979,7 +847,6 @@ cifs_destroy_mids(void) { mempool_destroy(cifs_mid_poolp); kmem_cache_destroy(cifs_mid_cachep); - kmem_cache_destroy(cifs_oplock_cachep); } static int __init @@ -1019,10 +886,10 @@ init_cifs(void) if (cifs_max_pending < 2) { cifs_max_pending = 2; - cFYI(1, ("cifs_max_pending set to min of 2")); + cFYI(1, "cifs_max_pending set to min of 2"); } else if (cifs_max_pending > 256) { cifs_max_pending = 256; - cFYI(1, ("cifs_max_pending set to max of 256")); + cFYI(1, "cifs_max_pending set to max of 256"); } rc = cifs_init_inodecache(); @@ -1080,7 +947,7 @@ init_cifs(void) static void __exit exit_cifs(void) { - cFYI(DBG2, ("exit_cifs")); + cFYI(DBG2, "exit_cifs"); cifs_proc_clean(); #ifdef CONFIG_CIFS_DFS_UPCALL cifs_dfs_release_automount_timer(); diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h index 7aa57ecdc437..0242ff9cbf41 100644 --- a/fs/cifs/cifsfs.h +++ b/fs/cifs/cifsfs.h @@ -114,5 +114,5 @@ extern long cifs_ioctl(struct file *filep, unsigned int cmd, unsigned long arg); extern const struct export_operations cifs_export_ops; #endif /* EXPERIMENTAL */ -#define CIFS_VERSION "1.62" +#define CIFS_VERSION "1.64" #endif /* _CIFSFS_H */ diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index ecf0ffbe2b64..a88479ceaad5 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -87,7 +87,6 @@ enum securityEnum { RawNTLMSSP, /* NTLMSSP without SPNEGO, NTLMv2 hash */ /* NTLMSSP, */ /* can use rawNTLMSSP instead of NTLMSSP via SPNEGO */ Kerberos, /* Kerberos via SPNEGO */ - MSKerberos, /* MS Kerberos via SPNEGO */ }; enum protocolEnum { @@ -185,6 +184,12 @@ struct TCP_Server_Info { struct mac_key mac_signing_key; char ntlmv2_hash[16]; unsigned long lstrp; /* when we got last response from this server */ + u16 dialect; /* dialect index that server chose */ + /* extended security flavors that server supports */ + bool sec_kerberos; /* supports plain Kerberos */ + bool sec_mskerberos; /* supports legacy MS Kerberos */ + bool sec_kerberosu2u; /* supports U2U Kerberos */ + bool sec_ntlmssp; /* supports NTLMSSP */ }; /* @@ -502,6 +507,7 @@ struct dfs_info3_param { #define CIFS_FATTR_DFS_REFERRAL 0x1 #define CIFS_FATTR_DELETE_PENDING 0x2 #define CIFS_FATTR_NEED_REVAL 0x4 +#define CIFS_FATTR_INO_COLLISION 0x8 struct cifs_fattr { u32 cf_flags; @@ -717,7 +723,7 @@ GLOBAL_EXTERN unsigned int multiuser_mount; /* if enabled allows new sessions GLOBAL_EXTERN unsigned int oplockEnabled; GLOBAL_EXTERN unsigned int experimEnabled; GLOBAL_EXTERN unsigned int lookupCacheEnabled; -GLOBAL_EXTERN unsigned int extended_security; /* if on, session setup sent +GLOBAL_EXTERN unsigned int global_secflags; /* if on, session setup sent with more secure ntlmssp2 challenge/resp */ GLOBAL_EXTERN unsigned int sign_CIFS_PDUs; /* enable smb packet signing */ GLOBAL_EXTERN unsigned int linuxExtEnabled;/*enable Linux/Unix CIFS extensions*/ diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h index 39e47f46dea5..fb1657e0fdb8 100644 --- a/fs/cifs/cifsproto.h +++ b/fs/cifs/cifsproto.h @@ -39,8 +39,20 @@ extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *, unsigned int /* length */); extern unsigned int _GetXid(void); extern void _FreeXid(unsigned int); -#define GetXid() (int)_GetXid(); cFYI(1,("CIFS VFS: in %s as Xid: %d with uid: %d",__func__, xid,current_fsuid())); -#define FreeXid(curr_xid) 
{_FreeXid(curr_xid); cFYI(1,("CIFS VFS: leaving %s (xid = %d) rc = %d",__func__,curr_xid,(int)rc));} +#define GetXid() \ +({ \ + int __xid = (int)_GetXid(); \ + cFYI(1, "CIFS VFS: in %s as Xid: %d with uid: %d", \ + __func__, __xid, current_fsuid()); \ + __xid; \ +}) + +#define FreeXid(curr_xid) \ +do { \ + _FreeXid(curr_xid); \ + cFYI(1, "CIFS VFS: leaving %s (xid = %d) rc = %d", \ + __func__, curr_xid, (int)rc); \ +} while (0) extern char *build_path_from_dentry(struct dentry *); extern char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb); extern char *build_wildcard_path_from_dentry(struct dentry *direntry); @@ -73,7 +85,7 @@ extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *); extern unsigned int smbCalcSize(struct smb_hdr *ptr); extern unsigned int smbCalcSize_LE(struct smb_hdr *ptr); extern int decode_negTokenInit(unsigned char *security_blob, int length, - enum securityEnum *secType); + struct TCP_Server_Info *server); extern int cifs_convert_address(char *src, void *dst); extern int map_smb_to_linux_error(struct smb_hdr *smb, int logErr); extern void header_assemble(struct smb_hdr *, char /* command */ , @@ -83,7 +95,6 @@ extern int small_smb_init_no_tc(const int smb_cmd, const int wct, struct cifsSesInfo *ses, void **request_buf); extern int CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, - const int stage, const struct nls_table *nls_cp); extern __u16 GetNextMid(struct TCP_Server_Info *server); extern struct timespec cifs_NTtimeToUnix(__le64 utc_nanoseconds_since_1601); @@ -95,8 +106,11 @@ extern struct cifsFileInfo *cifs_new_fileinfo(struct inode *newinode, __u16 fileHandle, struct file *file, struct vfsmount *mnt, unsigned int oflags); extern int cifs_posix_open(char *full_path, struct inode **pinode, - struct vfsmount *mnt, int mode, int oflags, - __u32 *poplock, __u16 *pnetfid, int xid); + struct vfsmount *mnt, + struct super_block *sb, + int mode, int oflags, + __u32 *poplock, __u16 *pnetfid, int xid); +void cifs_fill_uniqueid(struct super_block *sb, struct cifs_fattr *fattr); extern void cifs_unix_basic_to_fattr(struct cifs_fattr *fattr, FILE_UNIX_BASIC_INFO *info, struct cifs_sb_info *cifs_sb); @@ -125,7 +139,9 @@ extern void cifs_dfs_release_automount_timer(void); void cifs_proc_init(void); void cifs_proc_clean(void); -extern int cifs_setup_session(unsigned int xid, struct cifsSesInfo *pSesInfo, +extern int cifs_negotiate_protocol(unsigned int xid, + struct cifsSesInfo *ses); +extern int cifs_setup_session(unsigned int xid, struct cifsSesInfo *ses, struct nls_table *nls_info); extern int CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses); diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c index 5d3f29fef532..c65c3419dd37 100644 --- a/fs/cifs/cifssmb.c +++ b/fs/cifs/cifssmb.c @@ -1,7 +1,7 @@ /* * fs/cifs/cifssmb.c * - * Copyright (C) International Business Machines Corp., 2002,2009 + * Copyright (C) International Business Machines Corp., 2002,2010 * Author(s): Steve French (sfrench@us.ibm.com) * * Contains the routines for constructing the SMB PDUs themselves @@ -130,8 +130,8 @@ cifs_reconnect_tcon(struct cifsTconInfo *tcon, int smb_command) if (smb_command != SMB_COM_WRITE_ANDX && smb_command != SMB_COM_OPEN_ANDX && smb_command != SMB_COM_TREE_DISCONNECT) { - cFYI(1, ("can not send cmd %d while umounting", - smb_command)); + cFYI(1, "can not send cmd %d while umounting", + smb_command); return -ENODEV; } } @@ -157,7 +157,7 @@ cifs_reconnect_tcon(struct cifsTconInfo *tcon, int smb_command) * back on-line */ if (!tcon->retry || 
ses->status == CifsExiting) { - cFYI(1, ("gave up waiting on reconnect in smb_init")); + cFYI(1, "gave up waiting on reconnect in smb_init"); return -EHOSTDOWN; } } @@ -172,7 +172,8 @@ cifs_reconnect_tcon(struct cifsTconInfo *tcon, int smb_command) * reconnect the same SMB session */ mutex_lock(&ses->session_mutex); - if (ses->need_reconnect) + rc = cifs_negotiate_protocol(0, ses); + if (rc == 0 && ses->need_reconnect) rc = cifs_setup_session(0, ses, nls_codepage); /* do we need to reconnect tcon? */ @@ -184,7 +185,7 @@ cifs_reconnect_tcon(struct cifsTconInfo *tcon, int smb_command) mark_open_files_invalid(tcon); rc = CIFSTCon(0, ses, tcon->treeName, tcon, nls_codepage); mutex_unlock(&ses->session_mutex); - cFYI(1, ("reconnect tcon rc = %d", rc)); + cFYI(1, "reconnect tcon rc = %d", rc); if (rc) goto out; @@ -355,7 +356,6 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) struct TCP_Server_Info *server; u16 count; unsigned int secFlags; - u16 dialect; if (ses->server) server = ses->server; @@ -372,9 +372,9 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) if (ses->overrideSecFlg & (~(CIFSSEC_MUST_SIGN | CIFSSEC_MUST_SEAL))) secFlags = ses->overrideSecFlg; /* BB FIXME fix sign flags? */ else /* if override flags set only sign/seal OR them with global auth */ - secFlags = extended_security | ses->overrideSecFlg; + secFlags = global_secflags | ses->overrideSecFlg; - cFYI(1, ("secFlags 0x%x", secFlags)); + cFYI(1, "secFlags 0x%x", secFlags); pSMB->hdr.Mid = GetNextMid(server); pSMB->hdr.Flags2 |= (SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS); @@ -382,14 +382,14 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) if ((secFlags & CIFSSEC_MUST_KRB5) == CIFSSEC_MUST_KRB5) pSMB->hdr.Flags2 |= SMBFLG2_EXT_SEC; else if ((secFlags & CIFSSEC_AUTH_MASK) == CIFSSEC_MAY_KRB5) { - cFYI(1, ("Kerberos only mechanism, enable extended security")); + cFYI(1, "Kerberos only mechanism, enable extended security"); pSMB->hdr.Flags2 |= SMBFLG2_EXT_SEC; } #ifdef CONFIG_CIFS_EXPERIMENTAL else if ((secFlags & CIFSSEC_MUST_NTLMSSP) == CIFSSEC_MUST_NTLMSSP) pSMB->hdr.Flags2 |= SMBFLG2_EXT_SEC; else if ((secFlags & CIFSSEC_AUTH_MASK) == CIFSSEC_MAY_NTLMSSP) { - cFYI(1, ("NTLMSSP only mechanism, enable extended security")); + cFYI(1, "NTLMSSP only mechanism, enable extended security"); pSMB->hdr.Flags2 |= SMBFLG2_EXT_SEC; } #endif @@ -408,10 +408,10 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) if (rc != 0) goto neg_err_exit; - dialect = le16_to_cpu(pSMBr->DialectIndex); - cFYI(1, ("Dialect: %d", dialect)); + server->dialect = le16_to_cpu(pSMBr->DialectIndex); + cFYI(1, "Dialect: %d", server->dialect); /* Check wct = 1 error case */ - if ((pSMBr->hdr.WordCount < 13) || (dialect == BAD_PROT)) { + if ((pSMBr->hdr.WordCount < 13) || (server->dialect == BAD_PROT)) { /* core returns wct = 1, but we do not ask for core - otherwise small wct just comes when dialect index is -1 indicating we could not negotiate a common dialect */ @@ -419,8 +419,8 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) goto neg_err_exit; #ifdef CONFIG_CIFS_WEAK_PW_HASH } else if ((pSMBr->hdr.WordCount == 13) - && ((dialect == LANMAN_PROT) - || (dialect == LANMAN2_PROT))) { + && ((server->dialect == LANMAN_PROT) + || (server->dialect == LANMAN2_PROT))) { __s16 tmp; struct lanman_neg_rsp *rsp = (struct lanman_neg_rsp *)pSMBr; @@ -428,8 +428,8 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) (secFlags & CIFSSEC_MAY_PLNTXT)) server->secType = LANMAN; else { - cERROR(1, ("mount failed 
weak security disabled" - " in /proc/fs/cifs/SecurityFlags")); + cERROR(1, "mount failed weak security disabled" + " in /proc/fs/cifs/SecurityFlags"); rc = -EOPNOTSUPP; goto neg_err_exit; } @@ -462,9 +462,9 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) utc = CURRENT_TIME; ts = cnvrtDosUnixTm(rsp->SrvTime.Date, rsp->SrvTime.Time, 0); - cFYI(1, ("SrvTime %d sec since 1970 (utc: %d) diff: %d", + cFYI(1, "SrvTime %d sec since 1970 (utc: %d) diff: %d", (int)ts.tv_sec, (int)utc.tv_sec, - (int)(utc.tv_sec - ts.tv_sec))); + (int)(utc.tv_sec - ts.tv_sec)); val = (int)(utc.tv_sec - ts.tv_sec); seconds = abs(val); result = (seconds / MIN_TZ_ADJ) * MIN_TZ_ADJ; @@ -478,7 +478,7 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) server->timeAdj = (int)tmp; server->timeAdj *= 60; /* also in seconds */ } - cFYI(1, ("server->timeAdj: %d seconds", server->timeAdj)); + cFYI(1, "server->timeAdj: %d seconds", server->timeAdj); /* BB get server time for time conversions and add @@ -493,14 +493,14 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) goto neg_err_exit; } - cFYI(1, ("LANMAN negotiated")); + cFYI(1, "LANMAN negotiated"); /* we will not end up setting signing flags - as no signing was in LANMAN and server did not return the flags on */ goto signing_check; #else /* weak security disabled */ } else if (pSMBr->hdr.WordCount == 13) { - cERROR(1, ("mount failed, cifs module not built " - "with CIFS_WEAK_PW_HASH support")); + cERROR(1, "mount failed, cifs module not built " + "with CIFS_WEAK_PW_HASH support"); rc = -EOPNOTSUPP; #endif /* WEAK_PW_HASH */ goto neg_err_exit; @@ -512,14 +512,14 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) /* else wct == 17 NTLM */ server->secMode = pSMBr->SecurityMode; if ((server->secMode & SECMODE_USER) == 0) - cFYI(1, ("share mode security")); + cFYI(1, "share mode security"); if ((server->secMode & SECMODE_PW_ENCRYPT) == 0) #ifdef CONFIG_CIFS_WEAK_PW_HASH if ((secFlags & CIFSSEC_MAY_PLNTXT) == 0) #endif /* CIFS_WEAK_PW_HASH */ - cERROR(1, ("Server requests plain text password" - " but client support disabled")); + cERROR(1, "Server requests plain text password" + " but client support disabled"); if ((secFlags & CIFSSEC_MUST_NTLMV2) == CIFSSEC_MUST_NTLMV2) server->secType = NTLMv2; @@ -539,7 +539,7 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) #endif */ else { rc = -EOPNOTSUPP; - cERROR(1, ("Invalid security type")); + cERROR(1, "Invalid security type"); goto neg_err_exit; } /* else ... any others ...? */ @@ -551,7 +551,7 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize), (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); server->max_rw = le32_to_cpu(pSMBr->MaxRawSize); - cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf)); + cFYI(DBG2, "Max buf = %d", ses->server->maxBuf); GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey); server->capabilities = le32_to_cpu(pSMBr->Capabilities); server->timeAdj = (int)(__s16)le16_to_cpu(pSMBr->ServerTimeZone); @@ -582,7 +582,7 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) if (memcmp(server->server_GUID, pSMBr->u.extended_response. GUID, 16) != 0) { - cFYI(1, ("server UID changed")); + cFYI(1, "server UID changed"); memcpy(server->server_GUID, pSMBr->u.extended_response.GUID, 16); @@ -597,13 +597,19 @@ CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses) server->secType = RawNTLMSSP; } else { rc = decode_negTokenInit(pSMBr->u.extended_response. 
- SecurityBlob, - count - 16, - &server->secType); + SecurityBlob, count - 16, + server); if (rc == 1) rc = 0; else rc = -EINVAL; + + if (server->sec_kerberos || server->sec_mskerberos) + server->secType = Kerberos; + else if (server->sec_ntlmssp) + server->secType = RawNTLMSSP; + else + rc = -EOPNOTSUPP; } } else server->capabilities &= ~CAP_EXTENDED_SECURITY; @@ -614,22 +620,21 @@ signing_check: if ((secFlags & CIFSSEC_MAY_SIGN) == 0) { /* MUST_SIGN already includes the MAY_SIGN FLAG so if this is zero it means that signing is disabled */ - cFYI(1, ("Signing disabled")); + cFYI(1, "Signing disabled"); if (server->secMode & SECMODE_SIGN_REQUIRED) { - cERROR(1, ("Server requires " + cERROR(1, "Server requires " "packet signing to be enabled in " - "/proc/fs/cifs/SecurityFlags.")); + "/proc/fs/cifs/SecurityFlags."); rc = -EOPNOTSUPP; } server->secMode &= ~(SECMODE_SIGN_ENABLED | SECMODE_SIGN_REQUIRED); } else if ((secFlags & CIFSSEC_MUST_SIGN) == CIFSSEC_MUST_SIGN) { /* signing required */ - cFYI(1, ("Must sign - secFlags 0x%x", secFlags)); + cFYI(1, "Must sign - secFlags 0x%x", secFlags); if ((server->secMode & (SECMODE_SIGN_ENABLED | SECMODE_SIGN_REQUIRED)) == 0) { - cERROR(1, - ("signing required but server lacks support")); + cERROR(1, "signing required but server lacks support"); rc = -EOPNOTSUPP; } else server->secMode |= SECMODE_SIGN_REQUIRED; @@ -643,7 +648,7 @@ signing_check: neg_err_exit: cifs_buf_release(pSMB); - cFYI(1, ("negprot rc %d", rc)); + cFYI(1, "negprot rc %d", rc); return rc; } @@ -653,7 +658,7 @@ CIFSSMBTDis(const int xid, struct cifsTconInfo *tcon) struct smb_hdr *smb_buffer; int rc = 0; - cFYI(1, ("In tree disconnect")); + cFYI(1, "In tree disconnect"); /* BB: do we need to check this? These should never be NULL. */ if ((tcon->ses == NULL) || (tcon->ses->server == NULL)) @@ -675,7 +680,7 @@ CIFSSMBTDis(const int xid, struct cifsTconInfo *tcon) rc = SendReceiveNoRsp(xid, tcon->ses, smb_buffer, 0); if (rc) - cFYI(1, ("Tree disconnect failed %d", rc)); + cFYI(1, "Tree disconnect failed %d", rc); /* No need to return error on this operation if tid invalidated and closed on server already e.g. due to tcp session crashing */ @@ -691,7 +696,7 @@ CIFSSMBLogoff(const int xid, struct cifsSesInfo *ses) LOGOFF_ANDX_REQ *pSMB; int rc = 0; - cFYI(1, ("In SMBLogoff for session disconnect")); + cFYI(1, "In SMBLogoff for session disconnect"); /* * BB: do we need to check validity of ses and server? 
They should @@ -744,7 +749,7 @@ CIFSPOSIXDelFile(const int xid, struct cifsTconInfo *tcon, const char *fileName, int bytes_returned = 0; __u16 params, param_offset, offset, byte_count; - cFYI(1, ("In POSIX delete")); + cFYI(1, "In POSIX delete"); PsxDelete: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -796,7 +801,7 @@ PsxDelete: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("Posix delete returned %d", rc)); + cFYI(1, "Posix delete returned %d", rc); cifs_buf_release(pSMB); cifs_stats_inc(&tcon->num_deletes); @@ -843,7 +848,7 @@ DelFileRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_deletes); if (rc) - cFYI(1, ("Error in RMFile = %d", rc)); + cFYI(1, "Error in RMFile = %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -862,7 +867,7 @@ CIFSSMBRmDir(const int xid, struct cifsTconInfo *tcon, const char *dirName, int bytes_returned; int name_len; - cFYI(1, ("In CIFSSMBRmDir")); + cFYI(1, "In CIFSSMBRmDir"); RmDirRetry: rc = smb_init(SMB_COM_DELETE_DIRECTORY, 0, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -887,7 +892,7 @@ RmDirRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_rmdirs); if (rc) - cFYI(1, ("Error in RMDir = %d", rc)); + cFYI(1, "Error in RMDir = %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -905,7 +910,7 @@ CIFSSMBMkDir(const int xid, struct cifsTconInfo *tcon, int bytes_returned; int name_len; - cFYI(1, ("In CIFSSMBMkDir")); + cFYI(1, "In CIFSSMBMkDir"); MkDirRetry: rc = smb_init(SMB_COM_CREATE_DIRECTORY, 0, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -930,7 +935,7 @@ MkDirRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_mkdirs); if (rc) - cFYI(1, ("Error in Mkdir = %d", rc)); + cFYI(1, "Error in Mkdir = %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -953,7 +958,7 @@ CIFSPOSIXCreate(const int xid, struct cifsTconInfo *tcon, __u32 posix_flags, OPEN_PSX_REQ *pdata; OPEN_PSX_RSP *psx_rsp; - cFYI(1, ("In POSIX Create")); + cFYI(1, "In POSIX Create"); PsxCreat: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -1007,11 +1012,11 @@ PsxCreat: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Posix create returned %d", rc)); + cFYI(1, "Posix create returned %d", rc); goto psx_create_err; } - cFYI(1, ("copying inode info")); + cFYI(1, "copying inode info"); rc = validate_t2((struct smb_t2_rsp *)pSMBr); if (rc || (pSMBr->ByteCount < sizeof(OPEN_PSX_RSP))) { @@ -1033,11 +1038,11 @@ PsxCreat: /* check to make sure response data is there */ if (psx_rsp->ReturnedLevel != cpu_to_le16(SMB_QUERY_FILE_UNIX_BASIC)) { pRetData->Type = cpu_to_le32(-1); /* unknown */ - cFYI(DBG2, ("unknown type")); + cFYI(DBG2, "unknown type"); } else { if (pSMBr->ByteCount < sizeof(OPEN_PSX_RSP) + sizeof(FILE_UNIX_BASIC_INFO)) { - cERROR(1, ("Open response data too small")); + cERROR(1, "Open response data too small"); pRetData->Type = cpu_to_le32(-1); goto psx_create_err; } @@ -1084,7 +1089,7 @@ static __u16 convert_disposition(int disposition) ofun = SMBOPEN_OCREATE | SMBOPEN_OTRUNC; break; default: - cFYI(1, ("unknown disposition %d", disposition)); + cFYI(1, "unknown disposition %d", disposition); ofun = SMBOPEN_OAPPEND; /* regular open */ } return ofun; @@ -1175,7 +1180,7 @@ OldOpenRetry: (struct smb_hdr *)pSMBr, &bytes_returned, CIFS_LONG_OP); 
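/*
 * Illustrative sketch, not part of the commit: per the CIFSSMBNegotiate and
 * cifsglob.h hunks earlier in this patch, decode_negTokenInit() now records
 * the mechanisms the server advertised as booleans on TCP_Server_Info
 * (sec_kerberos, sec_mskerberos, sec_kerberosu2u, sec_ntlmssp) and the
 * negotiate path derives secType from them. The stand-alone rendering below
 * uses simplified stand-ins (server_caps, pick_sectype) for the kernel
 * structures; only the selection logic is taken from the patch.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

enum sectype { KERBEROS, RAW_NTLMSSP };

struct server_caps {
	bool sec_kerberos;	/* plain Kerberos via SPNEGO */
	bool sec_mskerberos;	/* legacy MS Kerberos OID */
	bool sec_kerberosu2u;	/* user-to-user Kerberos */
	bool sec_ntlmssp;	/* raw NTLMSSP */
};

/* Mirrors the selection added to CIFSSMBNegotiate in the hunk above. */
static int pick_sectype(const struct server_caps *caps, enum sectype *type)
{
	if (caps->sec_kerberos || caps->sec_mskerberos)
		*type = KERBEROS;
	else if (caps->sec_ntlmssp)
		*type = RAW_NTLMSSP;
	else
		return -EOPNOTSUPP;	/* nothing usable was offered */
	return 0;
}

int main(void)
{
	struct server_caps caps = { .sec_ntlmssp = true };
	enum sectype type;

	if (pick_sectype(&caps, &type) == 0)
		printf("negotiated %s\n",
		       type == KERBEROS ? "krb5" : "ntlmssp");
	return 0;
}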
cifs_stats_inc(&tcon->num_opens); if (rc) { - cFYI(1, ("Error in Open = %d", rc)); + cFYI(1, "Error in Open = %d", rc); } else { /* BB verify if wct == 15 */ @@ -1288,7 +1293,7 @@ openRetry: (struct smb_hdr *)pSMBr, &bytes_returned, CIFS_LONG_OP); cifs_stats_inc(&tcon->num_opens); if (rc) { - cFYI(1, ("Error in Open = %d", rc)); + cFYI(1, "Error in Open = %d", rc); } else { *pOplock = pSMBr->OplockLevel; /* 1 byte no need to le_to_cpu */ *netfid = pSMBr->Fid; /* cifs fid stays in le */ @@ -1326,7 +1331,7 @@ CIFSSMBRead(const int xid, struct cifsTconInfo *tcon, const int netfid, int resp_buf_type = 0; struct kvec iov[1]; - cFYI(1, ("Reading %d bytes on fid %d", count, netfid)); + cFYI(1, "Reading %d bytes on fid %d", count, netfid); if (tcon->ses->capabilities & CAP_LARGE_FILES) wct = 12; else { @@ -1371,7 +1376,7 @@ CIFSSMBRead(const int xid, struct cifsTconInfo *tcon, const int netfid, cifs_stats_inc(&tcon->num_reads); pSMBr = (READ_RSP *)iov[0].iov_base; if (rc) { - cERROR(1, ("Send error in read = %d", rc)); + cERROR(1, "Send error in read = %d", rc); } else { int data_length = le16_to_cpu(pSMBr->DataLengthHigh); data_length = data_length << 16; @@ -1381,15 +1386,15 @@ CIFSSMBRead(const int xid, struct cifsTconInfo *tcon, const int netfid, /*check that DataLength would not go beyond end of SMB */ if ((data_length > CIFSMaxBufSize) || (data_length > count)) { - cFYI(1, ("bad length %d for count %d", - data_length, count)); + cFYI(1, "bad length %d for count %d", + data_length, count); rc = -EIO; *nbytes = 0; } else { pReadData = (char *) (&pSMBr->hdr.Protocol) + le16_to_cpu(pSMBr->DataOffset); /* if (rc = copy_to_user(buf, pReadData, data_length)) { - cERROR(1,("Faulting on read rc = %d",rc)); + cERROR(1, "Faulting on read rc = %d",rc); rc = -EFAULT; }*/ /* can not use copy_to_user when using page cache*/ if (*buf) @@ -1433,7 +1438,7 @@ CIFSSMBWrite(const int xid, struct cifsTconInfo *tcon, *nbytes = 0; - /* cFYI(1, ("write at %lld %d bytes", offset, count));*/ + /* cFYI(1, "write at %lld %d bytes", offset, count);*/ if (tcon->ses == NULL) return -ECONNABORTED; @@ -1514,7 +1519,7 @@ CIFSSMBWrite(const int xid, struct cifsTconInfo *tcon, (struct smb_hdr *) pSMBr, &bytes_returned, long_op); cifs_stats_inc(&tcon->num_writes); if (rc) { - cFYI(1, ("Send error in write = %d", rc)); + cFYI(1, "Send error in write = %d", rc); } else { *nbytes = le16_to_cpu(pSMBr->CountHigh); *nbytes = (*nbytes) << 16; @@ -1551,7 +1556,7 @@ CIFSSMBWrite2(const int xid, struct cifsTconInfo *tcon, *nbytes = 0; - cFYI(1, ("write2 at %lld %d bytes", (long long)offset, count)); + cFYI(1, "write2 at %lld %d bytes", (long long)offset, count); if (tcon->ses->capabilities & CAP_LARGE_FILES) { wct = 14; @@ -1606,7 +1611,7 @@ CIFSSMBWrite2(const int xid, struct cifsTconInfo *tcon, long_op); cifs_stats_inc(&tcon->num_writes); if (rc) { - cFYI(1, ("Send error Write2 = %d", rc)); + cFYI(1, "Send error Write2 = %d", rc); } else if (resp_buf_type == 0) { /* presumably this can not happen, but best to be safe */ rc = -EIO; @@ -1651,7 +1656,7 @@ CIFSSMBLock(const int xid, struct cifsTconInfo *tcon, int timeout = 0; __u16 count; - cFYI(1, ("CIFSSMBLock timeout %d numLock %d", (int)waitFlag, numLock)); + cFYI(1, "CIFSSMBLock timeout %d numLock %d", (int)waitFlag, numLock); rc = small_smb_init(SMB_COM_LOCKING_ANDX, 8, tcon, (void **) &pSMB); if (rc) @@ -1699,7 +1704,7 @@ CIFSSMBLock(const int xid, struct cifsTconInfo *tcon, } cifs_stats_inc(&tcon->num_locks); if (rc) - cFYI(1, ("Send error in Lock = %d", rc)); + cFYI(1, "Send error 
in Lock = %d", rc); /* Note: On -EAGAIN error only caller can retry on handle based calls since file handle passed in no longer valid */ @@ -1722,7 +1727,7 @@ CIFSSMBPosixLock(const int xid, struct cifsTconInfo *tcon, __u16 params, param_offset, offset, byte_count, count; struct kvec iov[1]; - cFYI(1, ("Posix Lock")); + cFYI(1, "Posix Lock"); if (pLockData == NULL) return -EINVAL; @@ -1792,7 +1797,7 @@ CIFSSMBPosixLock(const int xid, struct cifsTconInfo *tcon, } if (rc) { - cFYI(1, ("Send error in Posix Lock = %d", rc)); + cFYI(1, "Send error in Posix Lock = %d", rc); } else if (get_flag) { /* lock structure can be returned on get */ __u16 data_offset; @@ -1849,7 +1854,7 @@ CIFSSMBClose(const int xid, struct cifsTconInfo *tcon, int smb_file_id) { int rc = 0; CLOSE_REQ *pSMB = NULL; - cFYI(1, ("In CIFSSMBClose")); + cFYI(1, "In CIFSSMBClose"); /* do not retry on dead session on close */ rc = small_smb_init(SMB_COM_CLOSE, 3, tcon, (void **) &pSMB); @@ -1866,7 +1871,7 @@ CIFSSMBClose(const int xid, struct cifsTconInfo *tcon, int smb_file_id) if (rc) { if (rc != -EINTR) { /* EINTR is expected when user ctl-c to kill app */ - cERROR(1, ("Send error in Close = %d", rc)); + cERROR(1, "Send error in Close = %d", rc); } } @@ -1882,7 +1887,7 @@ CIFSSMBFlush(const int xid, struct cifsTconInfo *tcon, int smb_file_id) { int rc = 0; FLUSH_REQ *pSMB = NULL; - cFYI(1, ("In CIFSSMBFlush")); + cFYI(1, "In CIFSSMBFlush"); rc = small_smb_init(SMB_COM_FLUSH, 1, tcon, (void **) &pSMB); if (rc) @@ -1893,7 +1898,7 @@ CIFSSMBFlush(const int xid, struct cifsTconInfo *tcon, int smb_file_id) rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); cifs_stats_inc(&tcon->num_flushes); if (rc) - cERROR(1, ("Send error in Flush = %d", rc)); + cERROR(1, "Send error in Flush = %d", rc); return rc; } @@ -1910,7 +1915,7 @@ CIFSSMBRename(const int xid, struct cifsTconInfo *tcon, int name_len, name_len2; __u16 count; - cFYI(1, ("In CIFSSMBRename")); + cFYI(1, "In CIFSSMBRename"); renameRetry: rc = smb_init(SMB_COM_RENAME, 1, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -1956,7 +1961,7 @@ renameRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_renames); if (rc) - cFYI(1, ("Send error in rename = %d", rc)); + cFYI(1, "Send error in rename = %d", rc); cifs_buf_release(pSMB); @@ -1980,7 +1985,7 @@ int CIFSSMBRenameOpenFile(const int xid, struct cifsTconInfo *pTcon, int len_of_str; __u16 params, param_offset, offset, count, byte_count; - cFYI(1, ("Rename to File by handle")); + cFYI(1, "Rename to File by handle"); rc = smb_init(SMB_COM_TRANSACTION2, 15, pTcon, (void **) &pSMB, (void **) &pSMBr); if (rc) @@ -2035,7 +2040,7 @@ int CIFSSMBRenameOpenFile(const int xid, struct cifsTconInfo *pTcon, (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&pTcon->num_t2renames); if (rc) - cFYI(1, ("Send error in Rename (by file handle) = %d", rc)); + cFYI(1, "Send error in Rename (by file handle) = %d", rc); cifs_buf_release(pSMB); @@ -2057,7 +2062,7 @@ CIFSSMBCopy(const int xid, struct cifsTconInfo *tcon, const char *fromName, int name_len, name_len2; __u16 count; - cFYI(1, ("In CIFSSMBCopy")); + cFYI(1, "In CIFSSMBCopy"); copyRetry: rc = smb_init(SMB_COM_COPY, 1, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -2102,8 +2107,8 @@ copyRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in copy = %d with %d files copied", - rc, le16_to_cpu(pSMBr->CopyCount))); + cFYI(1, "Send error in copy = %d with %d 
files copied", + rc, le16_to_cpu(pSMBr->CopyCount)); } cifs_buf_release(pSMB); @@ -2127,7 +2132,7 @@ CIFSUnixCreateSymLink(const int xid, struct cifsTconInfo *tcon, int bytes_returned = 0; __u16 params, param_offset, offset, byte_count; - cFYI(1, ("In Symlink Unix style")); + cFYI(1, "In Symlink Unix style"); createSymLinkRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -2192,7 +2197,7 @@ createSymLinkRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_symlinks); if (rc) - cFYI(1, ("Send error in SetPathInfo create symlink = %d", rc)); + cFYI(1, "Send error in SetPathInfo create symlink = %d", rc); cifs_buf_release(pSMB); @@ -2216,7 +2221,7 @@ CIFSUnixCreateHardLink(const int xid, struct cifsTconInfo *tcon, int bytes_returned = 0; __u16 params, param_offset, offset, byte_count; - cFYI(1, ("In Create Hard link Unix style")); + cFYI(1, "In Create Hard link Unix style"); createHardLinkRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -2278,7 +2283,7 @@ createHardLinkRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_hardlinks); if (rc) - cFYI(1, ("Send error in SetPathInfo (hard link) = %d", rc)); + cFYI(1, "Send error in SetPathInfo (hard link) = %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -2299,7 +2304,7 @@ CIFSCreateHardLink(const int xid, struct cifsTconInfo *tcon, int name_len, name_len2; __u16 count; - cFYI(1, ("In CIFSCreateHardLink")); + cFYI(1, "In CIFSCreateHardLink"); winCreateHardLinkRetry: rc = smb_init(SMB_COM_NT_RENAME, 4, tcon, (void **) &pSMB, @@ -2350,7 +2355,7 @@ winCreateHardLinkRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_hardlinks); if (rc) - cFYI(1, ("Send error in hard link (NT rename) = %d", rc)); + cFYI(1, "Send error in hard link (NT rename) = %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -2373,7 +2378,7 @@ CIFSSMBUnixQuerySymLink(const int xid, struct cifsTconInfo *tcon, __u16 params, byte_count; char *data_start; - cFYI(1, ("In QPathSymLinkInfo (Unix) for path %s", searchName)); + cFYI(1, "In QPathSymLinkInfo (Unix) for path %s", searchName); querySymLinkRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, @@ -2420,7 +2425,7 @@ querySymLinkRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QuerySymLinkInfo = %d", rc)); + cFYI(1, "Send error in QuerySymLinkInfo = %d", rc); } else { /* decode response */ @@ -2521,21 +2526,21 @@ validate_ntransact(char *buf, char **ppparm, char **ppdata, /* should we also check that parm and data areas do not overlap? 
*/ if (*ppparm > end_of_smb) { - cFYI(1, ("parms start after end of smb")); + cFYI(1, "parms start after end of smb"); return -EINVAL; } else if (parm_count + *ppparm > end_of_smb) { - cFYI(1, ("parm end after end of smb")); + cFYI(1, "parm end after end of smb"); return -EINVAL; } else if (*ppdata > end_of_smb) { - cFYI(1, ("data starts after end of smb")); + cFYI(1, "data starts after end of smb"); return -EINVAL; } else if (data_count + *ppdata > end_of_smb) { - cFYI(1, ("data %p + count %d (%p) ends after end of smb %p start %p", + cFYI(1, "data %p + count %d (%p) past smb end %p start %p", *ppdata, data_count, (data_count + *ppdata), - end_of_smb, pSMBr)); + end_of_smb, pSMBr); return -EINVAL; } else if (parm_count + data_count > pSMBr->ByteCount) { - cFYI(1, ("parm count and data count larger than SMB")); + cFYI(1, "parm count and data count larger than SMB"); return -EINVAL; } *pdatalen = data_count; @@ -2554,7 +2559,7 @@ CIFSSMBQueryReparseLinkInfo(const int xid, struct cifsTconInfo *tcon, struct smb_com_transaction_ioctl_req *pSMB; struct smb_com_transaction_ioctl_rsp *pSMBr; - cFYI(1, ("In Windows reparse style QueryLink for path %s", searchName)); + cFYI(1, "In Windows reparse style QueryLink for path %s", searchName); rc = smb_init(SMB_COM_NT_TRANSACT, 23, tcon, (void **) &pSMB, (void **) &pSMBr); if (rc) @@ -2583,7 +2588,7 @@ CIFSSMBQueryReparseLinkInfo(const int xid, struct cifsTconInfo *tcon, rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QueryReparseLinkInfo = %d", rc)); + cFYI(1, "Send error in QueryReparseLinkInfo = %d", rc); } else { /* decode response */ __u32 data_offset = le32_to_cpu(pSMBr->DataOffset); __u32 data_count = le32_to_cpu(pSMBr->DataCount); @@ -2607,7 +2612,7 @@ CIFSSMBQueryReparseLinkInfo(const int xid, struct cifsTconInfo *tcon, if ((reparse_buf->LinkNamesBuf + reparse_buf->TargetNameOffset + reparse_buf->TargetNameLen) > end_of_smb) { - cFYI(1, ("reparse buf beyond SMB")); + cFYI(1, "reparse buf beyond SMB"); rc = -EIO; goto qreparse_out; } @@ -2628,12 +2633,12 @@ CIFSSMBQueryReparseLinkInfo(const int xid, struct cifsTconInfo *tcon, } } else { rc = -EIO; - cFYI(1, ("Invalid return data count on " - "get reparse info ioctl")); + cFYI(1, "Invalid return data count on " + "get reparse info ioctl"); } symlinkinfo[buflen] = 0; /* just in case so the caller does not go off the end of the buffer */ - cFYI(1, ("readlink result - %s", symlinkinfo)); + cFYI(1, "readlink result - %s", symlinkinfo); } qreparse_out: @@ -2656,7 +2661,7 @@ static void cifs_convert_ace(posix_acl_xattr_entry *ace, ace->e_perm = cpu_to_le16(cifs_ace->cifs_e_perm); ace->e_tag = cpu_to_le16(cifs_ace->cifs_e_tag); ace->e_id = cpu_to_le32(le64_to_cpu(cifs_ace->cifs_uid)); - /* cFYI(1,("perm %d tag %d id %d",ace->e_perm,ace->e_tag,ace->e_id)); */ + /* cFYI(1, "perm %d tag %d id %d",ace->e_perm,ace->e_tag,ace->e_id); */ return; } @@ -2682,8 +2687,8 @@ static int cifs_copy_posix_acl(char *trgt, char *src, const int buflen, size += sizeof(struct cifs_posix_ace) * count; /* check if we would go beyond end of SMB */ if (size_of_data_area < size) { - cFYI(1, ("bad CIFS POSIX ACL size %d vs. %d", - size_of_data_area, size)); + cFYI(1, "bad CIFS POSIX ACL size %d vs. 
%d", + size_of_data_area, size); return -EINVAL; } } else if (acl_type & ACL_TYPE_DEFAULT) { @@ -2730,7 +2735,7 @@ static __u16 convert_ace_to_cifs_ace(struct cifs_posix_ace *cifs_ace, cifs_ace->cifs_uid = cpu_to_le64(-1); } else cifs_ace->cifs_uid = cpu_to_le64(le32_to_cpu(local_ace->e_id)); - /*cFYI(1,("perm %d tag %d id %d",ace->e_perm,ace->e_tag,ace->e_id));*/ + /*cFYI(1, "perm %d tag %d id %d",ace->e_perm,ace->e_tag,ace->e_id);*/ return rc; } @@ -2748,12 +2753,12 @@ static __u16 ACL_to_cifs_posix(char *parm_data, const char *pACL, return 0; count = posix_acl_xattr_count((size_t)buflen); - cFYI(1, ("setting acl with %d entries from buf of length %d and " + cFYI(1, "setting acl with %d entries from buf of length %d and " "version of %d", - count, buflen, le32_to_cpu(local_acl->a_version))); + count, buflen, le32_to_cpu(local_acl->a_version)); if (le32_to_cpu(local_acl->a_version) != 2) { - cFYI(1, ("unknown POSIX ACL version %d", - le32_to_cpu(local_acl->a_version))); + cFYI(1, "unknown POSIX ACL version %d", + le32_to_cpu(local_acl->a_version)); return 0; } cifs_acl->version = cpu_to_le16(1); @@ -2762,7 +2767,7 @@ static __u16 ACL_to_cifs_posix(char *parm_data, const char *pACL, else if (acl_type == ACL_TYPE_DEFAULT) cifs_acl->default_entry_count = cpu_to_le16(count); else { - cFYI(1, ("unknown ACL type %d", acl_type)); + cFYI(1, "unknown ACL type %d", acl_type); return 0; } for (i = 0; i < count; i++) { @@ -2795,7 +2800,7 @@ CIFSSMBGetPosixACL(const int xid, struct cifsTconInfo *tcon, int name_len; __u16 params, byte_count; - cFYI(1, ("In GetPosixACL (Unix) for path %s", searchName)); + cFYI(1, "In GetPosixACL (Unix) for path %s", searchName); queryAclRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, @@ -2847,7 +2852,7 @@ queryAclRetry: (struct smb_hdr *) pSMBr, &bytes_returned, 0); cifs_stats_inc(&tcon->num_acl_get); if (rc) { - cFYI(1, ("Send error in Query POSIX ACL = %d", rc)); + cFYI(1, "Send error in Query POSIX ACL = %d", rc); } else { /* decode response */ @@ -2884,7 +2889,7 @@ CIFSSMBSetPosixACL(const int xid, struct cifsTconInfo *tcon, int bytes_returned = 0; __u16 params, byte_count, data_count, param_offset, offset; - cFYI(1, ("In SetPosixACL (Unix) for path %s", fileName)); + cFYI(1, "In SetPosixACL (Unix) for path %s", fileName); setAclRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -2939,7 +2944,7 @@ setAclRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("Set POSIX ACL returned %d", rc)); + cFYI(1, "Set POSIX ACL returned %d", rc); setACLerrorExit: cifs_buf_release(pSMB); @@ -2959,7 +2964,7 @@ CIFSGetExtAttr(const int xid, struct cifsTconInfo *tcon, int bytes_returned; __u16 params, byte_count; - cFYI(1, ("In GetExtAttr")); + cFYI(1, "In GetExtAttr"); if (tcon == NULL) return -ENODEV; @@ -2998,7 +3003,7 @@ GetExtAttrRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("error %d in GetExtAttr", rc)); + cFYI(1, "error %d in GetExtAttr", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -3013,7 +3018,7 @@ GetExtAttrRetry: struct file_chattr_info *pfinfo; /* BB Do we need a cast or hash here ? 
*/ if (count != 16) { - cFYI(1, ("Illegal size ret in GetExtAttr")); + cFYI(1, "Illegal size ret in GetExtAttr"); rc = -EIO; goto GetExtAttrOut; } @@ -3043,7 +3048,7 @@ CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid, QUERY_SEC_DESC_REQ *pSMB; struct kvec iov[1]; - cFYI(1, ("GetCifsACL")); + cFYI(1, "GetCifsACL"); *pbuflen = 0; *acl_inf = NULL; @@ -3068,7 +3073,7 @@ CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid, CIFS_STD_OP); cifs_stats_inc(&tcon->num_acl_get); if (rc) { - cFYI(1, ("Send error in QuerySecDesc = %d", rc)); + cFYI(1, "Send error in QuerySecDesc = %d", rc); } else { /* decode response */ __le32 *parm; __u32 parm_len; @@ -3083,7 +3088,7 @@ CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid, goto qsec_out; pSMBr = (struct smb_com_ntransact_rsp *)iov[0].iov_base; - cFYI(1, ("smb %p parm %p data %p", pSMBr, parm, *acl_inf)); + cFYI(1, "smb %p parm %p data %p", pSMBr, parm, *acl_inf); if (le32_to_cpu(pSMBr->ParameterCount) != 4) { rc = -EIO; /* bad smb */ @@ -3095,8 +3100,8 @@ CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid, acl_len = le32_to_cpu(*parm); if (acl_len != *pbuflen) { - cERROR(1, ("acl length %d does not match %d", - acl_len, *pbuflen)); + cERROR(1, "acl length %d does not match %d", + acl_len, *pbuflen); if (*pbuflen > acl_len) *pbuflen = acl_len; } @@ -3105,7 +3110,7 @@ CIFSSMBGetCIFSACL(const int xid, struct cifsTconInfo *tcon, __u16 fid, header followed by the smallest SID */ if ((*pbuflen < sizeof(struct cifs_ntsd) + 8) || (*pbuflen >= 64 * 1024)) { - cERROR(1, ("bad acl length %d", *pbuflen)); + cERROR(1, "bad acl length %d", *pbuflen); rc = -EINVAL; *pbuflen = 0; } else { @@ -3179,9 +3184,9 @@ setCifsAclRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); - cFYI(1, ("SetCIFSACL bytes_returned: %d, rc: %d", bytes_returned, rc)); + cFYI(1, "SetCIFSACL bytes_returned: %d, rc: %d", bytes_returned, rc); if (rc) - cFYI(1, ("Set CIFS ACL returned %d", rc)); + cFYI(1, "Set CIFS ACL returned %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -3205,7 +3210,7 @@ int SMBQueryInformation(const int xid, struct cifsTconInfo *tcon, int bytes_returned; int name_len; - cFYI(1, ("In SMBQPath path %s", searchName)); + cFYI(1, "In SMBQPath path %s", searchName); QInfRetry: rc = smb_init(SMB_COM_QUERY_INFORMATION, 0, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -3231,7 +3236,7 @@ QInfRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QueryInfo = %d", rc)); + cFYI(1, "Send error in QueryInfo = %d", rc); } else if (pFinfo) { struct timespec ts; __u32 time = le32_to_cpu(pSMBr->last_write_time); @@ -3305,7 +3310,7 @@ QFileInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QPathInfo = %d", rc)); + cFYI(1, "Send error in QPathInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -3343,7 +3348,7 @@ CIFSSMBQPathInfo(const int xid, struct cifsTconInfo *tcon, int name_len; __u16 params, byte_count; -/* cFYI(1, ("In QPathInfo path %s", searchName)); */ +/* cFYI(1, "In QPathInfo path %s", searchName); */ QPathInfoRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -3393,7 +3398,7 @@ QPathInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr 
*) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QPathInfo = %d", rc)); + cFYI(1, "Send error in QPathInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -3473,14 +3478,14 @@ UnixQFileInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QPathInfo = %d", rc)); + cFYI(1, "Send error in QPathInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); if (rc || (pSMBr->ByteCount < sizeof(FILE_UNIX_BASIC_INFO))) { - cERROR(1, ("Malformed FILE_UNIX_BASIC_INFO response.\n" + cERROR(1, "Malformed FILE_UNIX_BASIC_INFO response.\n" "Unix Extensions can be disabled on mount " - "by specifying the nosfu mount option.")); + "by specifying the nosfu mount option."); rc = -EIO; /* bad smb */ } else { __u16 data_offset = le16_to_cpu(pSMBr->t2.DataOffset); @@ -3512,7 +3517,7 @@ CIFSSMBUnixQPathInfo(const int xid, struct cifsTconInfo *tcon, int name_len; __u16 params, byte_count; - cFYI(1, ("In QPathInfo (Unix) the path %s", searchName)); + cFYI(1, "In QPathInfo (Unix) the path %s", searchName); UnixQPathInfoRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -3559,14 +3564,14 @@ UnixQPathInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QPathInfo = %d", rc)); + cFYI(1, "Send error in QPathInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); if (rc || (pSMBr->ByteCount < sizeof(FILE_UNIX_BASIC_INFO))) { - cERROR(1, ("Malformed FILE_UNIX_BASIC_INFO response.\n" + cERROR(1, "Malformed FILE_UNIX_BASIC_INFO response.\n" "Unix Extensions can be disabled on mount " - "by specifying the nosfu mount option.")); + "by specifying the nosfu mount option."); rc = -EIO; /* bad smb */ } else { __u16 data_offset = le16_to_cpu(pSMBr->t2.DataOffset); @@ -3600,7 +3605,7 @@ CIFSFindFirst(const int xid, struct cifsTconInfo *tcon, int name_len; __u16 params, byte_count; - cFYI(1, ("In FindFirst for %s", searchName)); + cFYI(1, "In FindFirst for %s", searchName); findFirstRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, @@ -3677,7 +3682,7 @@ findFirstRetry: if (rc) {/* BB add logic to retry regular search if Unix search rejected unexpectedly by server */ /* BB Add code to handle unsupported level rc */ - cFYI(1, ("Error in FindFirst = %d", rc)); + cFYI(1, "Error in FindFirst = %d", rc); cifs_buf_release(pSMB); @@ -3716,7 +3721,7 @@ findFirstRetry: lnoff = le16_to_cpu(parms->LastNameOffset); if (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE < lnoff) { - cERROR(1, ("ignoring corrupt resume name")); + cERROR(1, "ignoring corrupt resume name"); psrch_inf->last_entry = NULL; return rc; } @@ -3744,7 +3749,7 @@ int CIFSFindNext(const int xid, struct cifsTconInfo *tcon, int bytes_returned, name_len; __u16 params, byte_count; - cFYI(1, ("In FindNext")); + cFYI(1, "In FindNext"); if (psrch_inf->endOfSearch) return -ENOENT; @@ -3808,7 +3813,7 @@ int CIFSFindNext(const int xid, struct cifsTconInfo *tcon, cifs_buf_release(pSMB); rc = 0; /* search probably was closed at end of search*/ } else - cFYI(1, ("FindNext returned = %d", rc)); + cFYI(1, "FindNext returned = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -3844,15 +3849,15 @@ int CIFSFindNext(const int xid, struct cifsTconInfo *tcon, lnoff = 
le16_to_cpu(parms->LastNameOffset); if (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE < lnoff) { - cERROR(1, ("ignoring corrupt resume name")); + cERROR(1, "ignoring corrupt resume name"); psrch_inf->last_entry = NULL; return rc; } else psrch_inf->last_entry = psrch_inf->srch_entries_start + lnoff; -/* cFYI(1,("fnxt2 entries in buf %d index_of_last %d", - psrch_inf->entries_in_buffer, psrch_inf->index_of_last_entry)); */ +/* cFYI(1, "fnxt2 entries in buf %d index_of_last %d", + psrch_inf->entries_in_buffer, psrch_inf->index_of_last_entry); */ /* BB fixme add unlock here */ } @@ -3877,7 +3882,7 @@ CIFSFindClose(const int xid, struct cifsTconInfo *tcon, int rc = 0; FINDCLOSE_REQ *pSMB = NULL; - cFYI(1, ("In CIFSSMBFindClose")); + cFYI(1, "In CIFSSMBFindClose"); rc = small_smb_init(SMB_COM_FIND_CLOSE2, 1, tcon, (void **)&pSMB); /* no sense returning error if session restarted @@ -3891,7 +3896,7 @@ CIFSFindClose(const int xid, struct cifsTconInfo *tcon, pSMB->ByteCount = 0; rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); if (rc) - cERROR(1, ("Send error in FindClose = %d", rc)); + cERROR(1, "Send error in FindClose = %d", rc); cifs_stats_inc(&tcon->num_fclose); @@ -3914,7 +3919,7 @@ CIFSGetSrvInodeNumber(const int xid, struct cifsTconInfo *tcon, int name_len, bytes_returned; __u16 params, byte_count; - cFYI(1, ("In GetSrvInodeNum for %s", searchName)); + cFYI(1, "In GetSrvInodeNum for %s", searchName); if (tcon == NULL) return -ENODEV; @@ -3964,7 +3969,7 @@ GetInodeNumberRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("error %d in QueryInternalInfo", rc)); + cFYI(1, "error %d in QueryInternalInfo", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -3979,7 +3984,7 @@ GetInodeNumberRetry: struct file_internal_info *pfinfo; /* BB Do we need a cast or hash here ? */ if (count < 8) { - cFYI(1, ("Illegal size ret in QryIntrnlInf")); + cFYI(1, "Illegal size ret in QryIntrnlInf"); rc = -EIO; goto GetInodeNumOut; } @@ -4020,16 +4025,16 @@ parse_DFS_referrals(TRANSACTION2_GET_DFS_REFER_RSP *pSMBr, *num_of_nodes = le16_to_cpu(pSMBr->NumberOfReferrals); if (*num_of_nodes < 1) { - cERROR(1, ("num_referrals: must be at least > 0," - "but we get num_referrals = %d\n", *num_of_nodes)); + cERROR(1, "num_referrals: must be at least > 0," + "but we get num_referrals = %d\n", *num_of_nodes); rc = -EINVAL; goto parse_DFS_referrals_exit; } ref = (struct dfs_referral_level_3 *) &(pSMBr->referrals); if (ref->VersionNumber != cpu_to_le16(3)) { - cERROR(1, ("Referrals of V%d version are not supported," - "should be V3", le16_to_cpu(ref->VersionNumber))); + cERROR(1, "Referrals of V%d version are not supported," + "should be V3", le16_to_cpu(ref->VersionNumber)); rc = -EINVAL; goto parse_DFS_referrals_exit; } @@ -4038,14 +4043,14 @@ parse_DFS_referrals(TRANSACTION2_GET_DFS_REFER_RSP *pSMBr, data_end = (char *)(&(pSMBr->PathConsumed)) + le16_to_cpu(pSMBr->t2.DataCount); - cFYI(1, ("num_referrals: %d dfs flags: 0x%x ... 
\n", + cFYI(1, "num_referrals: %d dfs flags: 0x%x ...\n", *num_of_nodes, - le32_to_cpu(pSMBr->DFSFlags))); + le32_to_cpu(pSMBr->DFSFlags)); *target_nodes = kzalloc(sizeof(struct dfs_info3_param) * *num_of_nodes, GFP_KERNEL); if (*target_nodes == NULL) { - cERROR(1, ("Failed to allocate buffer for target_nodes\n")); + cERROR(1, "Failed to allocate buffer for target_nodes\n"); rc = -ENOMEM; goto parse_DFS_referrals_exit; } @@ -4121,7 +4126,7 @@ CIFSGetDFSRefer(const int xid, struct cifsSesInfo *ses, *num_of_nodes = 0; *target_nodes = NULL; - cFYI(1, ("In GetDFSRefer the path %s", searchName)); + cFYI(1, "In GetDFSRefer the path %s", searchName); if (ses == NULL) return -ENODEV; getDFSRetry: @@ -4188,7 +4193,7 @@ getDFSRetry: rc = SendReceive(xid, ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in GetDFSRefer = %d", rc)); + cFYI(1, "Send error in GetDFSRefer = %d", rc); goto GetDFSRefExit; } rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4199,9 +4204,9 @@ getDFSRetry: goto GetDFSRefExit; } - cFYI(1, ("Decoding GetDFSRefer response BCC: %d Offset %d", + cFYI(1, "Decoding GetDFSRefer response BCC: %d Offset %d", pSMBr->ByteCount, - le16_to_cpu(pSMBr->t2.DataOffset))); + le16_to_cpu(pSMBr->t2.DataOffset)); /* parse returned result into more usable form */ rc = parse_DFS_referrals(pSMBr, num_of_nodes, @@ -4229,7 +4234,7 @@ SMBOldQFSInfo(const int xid, struct cifsTconInfo *tcon, struct kstatfs *FSData) int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("OldQFSInfo")); + cFYI(1, "OldQFSInfo"); oldQFSInfoRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4262,7 +4267,7 @@ oldQFSInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QFSInfo = %d", rc)); + cFYI(1, "Send error in QFSInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4270,8 +4275,8 @@ oldQFSInfoRetry: rc = -EIO; /* bad smb */ else { __u16 data_offset = le16_to_cpu(pSMBr->t2.DataOffset); - cFYI(1, ("qfsinf resp BCC: %d Offset %d", - pSMBr->ByteCount, data_offset)); + cFYI(1, "qfsinf resp BCC: %d Offset %d", + pSMBr->ByteCount, data_offset); response_data = (FILE_SYSTEM_ALLOC_INFO *) (((char *) &pSMBr->hdr.Protocol) + data_offset); @@ -4283,11 +4288,10 @@ oldQFSInfoRetry: le32_to_cpu(response_data->TotalAllocationUnits); FSData->f_bfree = FSData->f_bavail = le32_to_cpu(response_data->FreeAllocationUnits); - cFYI(1, - ("Blocks: %lld Free: %lld Block size %ld", - (unsigned long long)FSData->f_blocks, - (unsigned long long)FSData->f_bfree, - FSData->f_bsize)); + cFYI(1, "Blocks: %lld Free: %lld Block size %ld", + (unsigned long long)FSData->f_blocks, + (unsigned long long)FSData->f_bfree, + FSData->f_bsize); } } cifs_buf_release(pSMB); @@ -4309,7 +4313,7 @@ CIFSSMBQFSInfo(const int xid, struct cifsTconInfo *tcon, struct kstatfs *FSData) int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("In QFSInfo")); + cFYI(1, "In QFSInfo"); QFSInfoRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4342,7 +4346,7 @@ QFSInfoRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QFSInfo = %d", rc)); + cFYI(1, "Send error in QFSInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4363,11 +4367,10 @@ QFSInfoRetry: 
le64_to_cpu(response_data->TotalAllocationUnits); FSData->f_bfree = FSData->f_bavail = le64_to_cpu(response_data->FreeAllocationUnits); - cFYI(1, - ("Blocks: %lld Free: %lld Block size %ld", - (unsigned long long)FSData->f_blocks, - (unsigned long long)FSData->f_bfree, - FSData->f_bsize)); + cFYI(1, "Blocks: %lld Free: %lld Block size %ld", + (unsigned long long)FSData->f_blocks, + (unsigned long long)FSData->f_bfree, + FSData->f_bsize); } } cifs_buf_release(pSMB); @@ -4389,7 +4392,7 @@ CIFSSMBQFSAttributeInfo(const int xid, struct cifsTconInfo *tcon) int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("In QFSAttributeInfo")); + cFYI(1, "In QFSAttributeInfo"); QFSAttributeRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4423,7 +4426,7 @@ QFSAttributeRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cERROR(1, ("Send error in QFSAttributeInfo = %d", rc)); + cERROR(1, "Send error in QFSAttributeInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4459,7 +4462,7 @@ CIFSSMBQFSDeviceInfo(const int xid, struct cifsTconInfo *tcon) int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("In QFSDeviceInfo")); + cFYI(1, "In QFSDeviceInfo"); QFSDeviceRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4494,7 +4497,7 @@ QFSDeviceRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QFSDeviceInfo = %d", rc)); + cFYI(1, "Send error in QFSDeviceInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4529,7 +4532,7 @@ CIFSSMBQFSUnixInfo(const int xid, struct cifsTconInfo *tcon) int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("In QFSUnixInfo")); + cFYI(1, "In QFSUnixInfo"); QFSUnixRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4563,7 +4566,7 @@ QFSUnixRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cERROR(1, ("Send error in QFSUnixInfo = %d", rc)); + cERROR(1, "Send error in QFSUnixInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4598,7 +4601,7 @@ CIFSSMBSetFSUnixInfo(const int xid, struct cifsTconInfo *tcon, __u64 cap) int bytes_returned = 0; __u16 params, param_offset, offset, byte_count; - cFYI(1, ("In SETFSUnixInfo")); + cFYI(1, "In SETFSUnixInfo"); SETFSUnixRetry: /* BB switch to small buf init to save memory */ rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, @@ -4646,7 +4649,7 @@ SETFSUnixRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cERROR(1, ("Send error in SETFSUnixInfo = %d", rc)); + cERROR(1, "Send error in SETFSUnixInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); if (rc) @@ -4674,7 +4677,7 @@ CIFSSMBQFSPosixInfo(const int xid, struct cifsTconInfo *tcon, int bytes_returned = 0; __u16 params, byte_count; - cFYI(1, ("In QFSPosixInfo")); + cFYI(1, "In QFSPosixInfo"); QFSPosixRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4708,7 +4711,7 @@ QFSPosixRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in 
QFSUnixInfo = %d", rc)); + cFYI(1, "Send error in QFSUnixInfo = %d", rc); } else { /* decode response */ rc = validate_t2((struct smb_t2_rsp *)pSMBr); @@ -4768,7 +4771,7 @@ CIFSSMBSetEOF(const int xid, struct cifsTconInfo *tcon, const char *fileName, int bytes_returned = 0; __u16 params, byte_count, data_count, param_offset, offset; - cFYI(1, ("In SetEOF")); + cFYI(1, "In SetEOF"); SetEOFRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -4834,7 +4837,7 @@ SetEOFRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("SetPathInfo (file size) returned %d", rc)); + cFYI(1, "SetPathInfo (file size) returned %d", rc); cifs_buf_release(pSMB); @@ -4854,8 +4857,8 @@ CIFSSMBSetFileSize(const int xid, struct cifsTconInfo *tcon, __u64 size, int rc = 0; __u16 params, param_offset, offset, byte_count, count; - cFYI(1, ("SetFileSize (via SetFileInfo) %lld", - (long long)size)); + cFYI(1, "SetFileSize (via SetFileInfo) %lld", + (long long)size); rc = small_smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB); if (rc) @@ -4914,9 +4917,7 @@ CIFSSMBSetFileSize(const int xid, struct cifsTconInfo *tcon, __u64 size, pSMB->ByteCount = cpu_to_le16(byte_count); rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); if (rc) { - cFYI(1, - ("Send error in SetFileInfo (SetFileSize) = %d", - rc)); + cFYI(1, "Send error in SetFileInfo (SetFileSize) = %d", rc); } /* Note: On -EAGAIN error only caller can retry on handle based calls @@ -4940,7 +4941,7 @@ CIFSSMBSetFileInfo(const int xid, struct cifsTconInfo *tcon, int rc = 0; __u16 params, param_offset, offset, byte_count, count; - cFYI(1, ("Set Times (via SetFileInfo)")); + cFYI(1, "Set Times (via SetFileInfo)"); rc = small_smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB); if (rc) @@ -4985,7 +4986,7 @@ CIFSSMBSetFileInfo(const int xid, struct cifsTconInfo *tcon, memcpy(data_offset, data, sizeof(FILE_BASIC_INFO)); rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); if (rc) - cFYI(1, ("Send error in Set Time (SetFileInfo) = %d", rc)); + cFYI(1, "Send error in Set Time (SetFileInfo) = %d", rc); /* Note: On -EAGAIN error only caller can retry on handle based calls since file handle passed in no longer valid */ @@ -5002,7 +5003,7 @@ CIFSSMBSetFileDisposition(const int xid, struct cifsTconInfo *tcon, int rc = 0; __u16 params, param_offset, offset, byte_count, count; - cFYI(1, ("Set File Disposition (via SetFileInfo)")); + cFYI(1, "Set File Disposition (via SetFileInfo)"); rc = small_smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB); if (rc) @@ -5044,7 +5045,7 @@ CIFSSMBSetFileDisposition(const int xid, struct cifsTconInfo *tcon, *data_offset = delete_file ? 
1 : 0; rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); if (rc) - cFYI(1, ("Send error in SetFileDisposition = %d", rc)); + cFYI(1, "Send error in SetFileDisposition = %d", rc); return rc; } @@ -5062,7 +5063,7 @@ CIFSSMBSetPathInfo(const int xid, struct cifsTconInfo *tcon, char *data_offset; __u16 params, param_offset, offset, byte_count, count; - cFYI(1, ("In SetTimes")); + cFYI(1, "In SetTimes"); SetTimesRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, @@ -5118,7 +5119,7 @@ SetTimesRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("SetPathInfo (times) returned %d", rc)); + cFYI(1, "SetPathInfo (times) returned %d", rc); cifs_buf_release(pSMB); @@ -5143,7 +5144,7 @@ CIFSSMBSetAttrLegacy(int xid, struct cifsTconInfo *tcon, char *fileName, int bytes_returned; int name_len; - cFYI(1, ("In SetAttrLegacy")); + cFYI(1, "In SetAttrLegacy"); SetAttrLgcyRetry: rc = smb_init(SMB_COM_SETATTR, 8, tcon, (void **) &pSMB, @@ -5169,7 +5170,7 @@ SetAttrLgcyRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("Error in LegacySetAttr = %d", rc)); + cFYI(1, "Error in LegacySetAttr = %d", rc); cifs_buf_release(pSMB); @@ -5231,7 +5232,7 @@ CIFSSMBUnixSetFileInfo(const int xid, struct cifsTconInfo *tcon, int rc = 0; u16 params, param_offset, offset, byte_count, count; - cFYI(1, ("Set Unix Info (via SetFileInfo)")); + cFYI(1, "Set Unix Info (via SetFileInfo)"); rc = small_smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB); if (rc) @@ -5276,7 +5277,7 @@ CIFSSMBUnixSetFileInfo(const int xid, struct cifsTconInfo *tcon, rc = SendReceiveNoRsp(xid, tcon->ses, (struct smb_hdr *) pSMB, 0); if (rc) - cFYI(1, ("Send error in Set Time (SetFileInfo) = %d", rc)); + cFYI(1, "Send error in Set Time (SetFileInfo) = %d", rc); /* Note: On -EAGAIN error only caller can retry on handle based calls since file handle passed in no longer valid */ @@ -5297,7 +5298,7 @@ CIFSSMBUnixSetPathInfo(const int xid, struct cifsTconInfo *tcon, char *fileName, FILE_UNIX_BASIC_INFO *data_offset; __u16 params, param_offset, offset, count, byte_count; - cFYI(1, ("In SetUID/GID/Mode")); + cFYI(1, "In SetUID/GID/Mode"); setPermsRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -5353,7 +5354,7 @@ setPermsRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("SetPathInfo (perms) returned %d", rc)); + cFYI(1, "SetPathInfo (perms) returned %d", rc); cifs_buf_release(pSMB); if (rc == -EAGAIN) @@ -5372,7 +5373,7 @@ int CIFSSMBNotify(const int xid, struct cifsTconInfo *tcon, struct dir_notify_req *dnotify_req; int bytes_returned; - cFYI(1, ("In CIFSSMBNotify for file handle %d", (int)netfid)); + cFYI(1, "In CIFSSMBNotify for file handle %d", (int)netfid); rc = smb_init(SMB_COM_NT_TRANSACT, 23, tcon, (void **) &pSMB, (void **) &pSMBr); if (rc) @@ -5406,7 +5407,7 @@ int CIFSSMBNotify(const int xid, struct cifsTconInfo *tcon, (struct smb_hdr *)pSMBr, &bytes_returned, CIFS_ASYNC_OP); if (rc) { - cFYI(1, ("Error in Notify = %d", rc)); + cFYI(1, "Error in Notify = %d", rc); } else { /* Add file to outstanding requests */ /* BB change to kmem cache alloc */ @@ -5462,7 +5463,7 @@ CIFSSMBQAllEAs(const int xid, struct cifsTconInfo *tcon, char *end_of_smb; __u16 params, byte_count, data_offset; - cFYI(1, ("In Query All EAs path %s", searchName)); + 
cFYI(1, "In Query All EAs path %s", searchName); QAllEAsRetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -5509,7 +5510,7 @@ QAllEAsRetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) { - cFYI(1, ("Send error in QueryAllEAs = %d", rc)); + cFYI(1, "Send error in QueryAllEAs = %d", rc); goto QAllEAsOut; } @@ -5537,16 +5538,16 @@ QAllEAsRetry: (((char *) &pSMBr->hdr.Protocol) + data_offset); list_len = le32_to_cpu(ea_response_data->list_len); - cFYI(1, ("ea length %d", list_len)); + cFYI(1, "ea length %d", list_len); if (list_len <= 8) { - cFYI(1, ("empty EA list returned from server")); + cFYI(1, "empty EA list returned from server"); goto QAllEAsOut; } /* make sure list_len doesn't go past end of SMB */ end_of_smb = (char *)pByteArea(&pSMBr->hdr) + BCC(&pSMBr->hdr); if ((char *)ea_response_data + list_len > end_of_smb) { - cFYI(1, ("EA list appears to go beyond SMB")); + cFYI(1, "EA list appears to go beyond SMB"); rc = -EIO; goto QAllEAsOut; } @@ -5563,7 +5564,7 @@ QAllEAsRetry: temp_ptr += 4; /* make sure we can read name_len and value_len */ if (list_len < 0) { - cFYI(1, ("EA entry goes beyond length of list")); + cFYI(1, "EA entry goes beyond length of list"); rc = -EIO; goto QAllEAsOut; } @@ -5572,7 +5573,7 @@ QAllEAsRetry: value_len = le16_to_cpu(temp_fea->value_len); list_len -= name_len + 1 + value_len; if (list_len < 0) { - cFYI(1, ("EA entry goes beyond length of list")); + cFYI(1, "EA entry goes beyond length of list"); rc = -EIO; goto QAllEAsOut; } @@ -5639,7 +5640,7 @@ CIFSSMBSetEA(const int xid, struct cifsTconInfo *tcon, const char *fileName, int bytes_returned = 0; __u16 params, param_offset, byte_count, offset, count; - cFYI(1, ("In SetEA")); + cFYI(1, "In SetEA"); SetEARetry: rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB, (void **) &pSMBr); @@ -5721,7 +5722,7 @@ SetEARetry: rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, (struct smb_hdr *) pSMBr, &bytes_returned, 0); if (rc) - cFYI(1, ("SetPathInfo (EA) returned %d", rc)); + cFYI(1, "SetPathInfo (EA) returned %d", rc); cifs_buf_release(pSMB); diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index d9566bf8f917..2208f06e4c45 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -102,6 +102,7 @@ struct smb_vol { bool sockopt_tcp_nodelay:1; unsigned short int port; char *prepath; + struct nls_table *local_nls; }; static int ipv4_connect(struct TCP_Server_Info *server); @@ -135,7 +136,7 @@ cifs_reconnect(struct TCP_Server_Info *server) spin_unlock(&GlobalMid_Lock); server->maxBuf = 0; - cFYI(1, ("Reconnecting tcp session")); + cFYI(1, "Reconnecting tcp session"); /* before reconnecting the tcp session, mark the smb session (uid) and the tid bad so they are not used until reconnected */ @@ -153,12 +154,12 @@ cifs_reconnect(struct TCP_Server_Info *server) /* do not want to be sending data on a socket we are freeing */ mutex_lock(&server->srv_mutex); if (server->ssocket) { - cFYI(1, ("State: 0x%x Flags: 0x%lx", server->ssocket->state, - server->ssocket->flags)); + cFYI(1, "State: 0x%x Flags: 0x%lx", server->ssocket->state, + server->ssocket->flags); kernel_sock_shutdown(server->ssocket, SHUT_WR); - cFYI(1, ("Post shutdown state: 0x%x Flags: 0x%lx", + cFYI(1, "Post shutdown state: 0x%x Flags: 0x%lx", server->ssocket->state, - server->ssocket->flags)); + server->ssocket->flags); sock_release(server->ssocket); server->ssocket = NULL; } @@ -187,7 +188,7 @@ cifs_reconnect(struct TCP_Server_Info 
*server) else rc = ipv4_connect(server); if (rc) { - cFYI(1, ("reconnect error %d", rc)); + cFYI(1, "reconnect error %d", rc); msleep(3000); } else { atomic_inc(&tcpSesReconnectCount); @@ -223,7 +224,7 @@ static int check2ndT2(struct smb_hdr *pSMB, unsigned int maxBufSize) /* check for plausible wct, bcc and t2 data and parm sizes */ /* check for parm and data offset going beyond end of smb */ if (pSMB->WordCount != 10) { /* coalesce_t2 depends on this */ - cFYI(1, ("invalid transact2 word count")); + cFYI(1, "invalid transact2 word count"); return -EINVAL; } @@ -237,15 +238,15 @@ static int check2ndT2(struct smb_hdr *pSMB, unsigned int maxBufSize) if (remaining == 0) return 0; else if (remaining < 0) { - cFYI(1, ("total data %d smaller than data in frame %d", - total_data_size, data_in_this_rsp)); + cFYI(1, "total data %d smaller than data in frame %d", + total_data_size, data_in_this_rsp); return -EINVAL; } else { - cFYI(1, ("missing %d bytes from transact2, check next response", - remaining)); + cFYI(1, "missing %d bytes from transact2, check next response", + remaining); if (total_data_size > maxBufSize) { - cERROR(1, ("TotalDataSize %d is over maximum buffer %d", - total_data_size, maxBufSize)); + cERROR(1, "TotalDataSize %d is over maximum buffer %d", + total_data_size, maxBufSize); return -EINVAL; } return remaining; @@ -267,7 +268,7 @@ static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB) total_data_size = le16_to_cpu(pSMBt->t2_rsp.TotalDataCount); if (total_data_size != le16_to_cpu(pSMB2->t2_rsp.TotalDataCount)) { - cFYI(1, ("total data size of primary and secondary t2 differ")); + cFYI(1, "total data size of primary and secondary t2 differ"); } total_in_buf = le16_to_cpu(pSMBt->t2_rsp.DataCount); @@ -282,7 +283,7 @@ static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB) total_in_buf2 = le16_to_cpu(pSMB2->t2_rsp.DataCount); if (remaining < total_in_buf2) { - cFYI(1, ("transact2 2nd response contains too much data")); + cFYI(1, "transact2 2nd response contains too much data"); } /* find end of first SMB data area */ @@ -311,7 +312,7 @@ static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB) pTargetSMB->smb_buf_length = byte_count; if (remaining == total_in_buf2) { - cFYI(1, ("found the last secondary response")); + cFYI(1, "found the last secondary response"); return 0; /* we are done */ } else /* more responses to go */ return 1; @@ -339,7 +340,7 @@ cifs_demultiplex_thread(struct TCP_Server_Info *server) int reconnect; current->flags |= PF_MEMALLOC; - cFYI(1, ("Demultiplex PID: %d", task_pid_nr(current))); + cFYI(1, "Demultiplex PID: %d", task_pid_nr(current)); length = atomic_inc_return(&tcpSesAllocCount); if (length > 1) @@ -353,7 +354,7 @@ cifs_demultiplex_thread(struct TCP_Server_Info *server) if (bigbuf == NULL) { bigbuf = cifs_buf_get(); if (!bigbuf) { - cERROR(1, ("No memory for large SMB response")); + cERROR(1, "No memory for large SMB response"); msleep(3000); /* retry will check if exiting */ continue; @@ -366,7 +367,7 @@ cifs_demultiplex_thread(struct TCP_Server_Info *server) if (smallbuf == NULL) { smallbuf = cifs_small_buf_get(); if (!smallbuf) { - cERROR(1, ("No memory for SMB response")); + cERROR(1, "No memory for SMB response"); msleep(1000); /* retry will check if exiting */ continue; @@ -391,9 +392,9 @@ incomplete_rcv: if (server->tcpStatus == CifsExiting) { break; } else if (server->tcpStatus == CifsNeedReconnect) { - cFYI(1, ("Reconnect after server stopped responding")); + cFYI(1, "Reconnect 
after server stopped responding"); cifs_reconnect(server); - cFYI(1, ("call to reconnect done")); + cFYI(1, "call to reconnect done"); csocket = server->ssocket; continue; } else if ((length == -ERESTARTSYS) || (length == -EAGAIN)) { @@ -411,7 +412,7 @@ incomplete_rcv: continue; } else if (length <= 0) { if (server->tcpStatus == CifsNew) { - cFYI(1, ("tcp session abend after SMBnegprot")); + cFYI(1, "tcp session abend after SMBnegprot"); /* some servers kill the TCP session rather than returning an SMB negprot error, in which case reconnecting here is not going to help, @@ -419,18 +420,18 @@ incomplete_rcv: break; } if (!try_to_freeze() && (length == -EINTR)) { - cFYI(1, ("cifsd thread killed")); + cFYI(1, "cifsd thread killed"); break; } - cFYI(1, ("Reconnect after unexpected peek error %d", - length)); + cFYI(1, "Reconnect after unexpected peek error %d", + length); cifs_reconnect(server); csocket = server->ssocket; wake_up(&server->response_q); continue; } else if (length < pdu_length) { - cFYI(1, ("requested %d bytes but only got %d bytes", - pdu_length, length)); + cFYI(1, "requested %d bytes but only got %d bytes", + pdu_length, length); pdu_length -= length; msleep(1); goto incomplete_rcv; @@ -450,18 +451,18 @@ incomplete_rcv: pdu_length = be32_to_cpu((__force __be32)smb_buffer->smb_buf_length); smb_buffer->smb_buf_length = pdu_length; - cFYI(1, ("rfc1002 length 0x%x", pdu_length+4)); + cFYI(1, "rfc1002 length 0x%x", pdu_length+4); if (temp == (char) RFC1002_SESSION_KEEP_ALIVE) { continue; } else if (temp == (char)RFC1002_POSITIVE_SESSION_RESPONSE) { - cFYI(1, ("Good RFC 1002 session rsp")); + cFYI(1, "Good RFC 1002 session rsp"); continue; } else if (temp == (char)RFC1002_NEGATIVE_SESSION_RESPONSE) { /* we get this from Windows 98 instead of an error on SMB negprot response */ - cFYI(1, ("Negative RFC1002 Session Response Error 0x%x)", - pdu_length)); + cFYI(1, "Negative RFC1002 Session Response Error 0x%x)", + pdu_length); if (server->tcpStatus == CifsNew) { /* if nack on negprot (rather than ret of smb negprot error) reconnecting @@ -484,7 +485,7 @@ incomplete_rcv: continue; } } else if (temp != (char) 0) { - cERROR(1, ("Unknown RFC 1002 frame")); + cERROR(1, "Unknown RFC 1002 frame"); cifs_dump_mem(" Received Data: ", (char *)smb_buffer, length); cifs_reconnect(server); @@ -495,8 +496,8 @@ incomplete_rcv: /* else we have an SMB response */ if ((pdu_length > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4) || (pdu_length < sizeof(struct smb_hdr) - 1 - 4)) { - cERROR(1, ("Invalid size SMB length %d pdu_length %d", - length, pdu_length+4)); + cERROR(1, "Invalid size SMB length %d pdu_length %d", + length, pdu_length+4); cifs_reconnect(server); csocket = server->ssocket; wake_up(&server->response_q); @@ -539,8 +540,8 @@ incomplete_rcv: length = 0; continue; } else if (length <= 0) { - cERROR(1, ("Received no data, expecting %d", - pdu_length - total_read)); + cERROR(1, "Received no data, expecting %d", + pdu_length - total_read); cifs_reconnect(server); csocket = server->ssocket; reconnect = 1; @@ -588,7 +589,7 @@ incomplete_rcv: } } else { if (!isLargeBuf) { - cERROR(1,("1st trans2 resp needs bigbuf")); + cERROR(1, "1st trans2 resp needs bigbuf"); /* BB maybe we can fix this up, switch to already allocated large buffer? */ } else { @@ -630,8 +631,8 @@ multi_t2_fnd: wake_up_process(task_to_wake); } else if (!is_valid_oplock_break(smb_buffer, server) && !isMultiRsp) { - cERROR(1, ("No task to wake, unknown frame received! 
" - "NumMids %d", midCount.counter)); + cERROR(1, "No task to wake, unknown frame received! " + "NumMids %d", midCount.counter); cifs_dump_mem("Received Data is: ", (char *)smb_buffer, sizeof(struct smb_hdr)); #ifdef CONFIG_CIFS_DEBUG2 @@ -708,8 +709,8 @@ multi_t2_fnd: list_for_each(tmp, &server->pending_mid_q) { mid_entry = list_entry(tmp, struct mid_q_entry, qhead); if (mid_entry->midState == MID_REQUEST_SUBMITTED) { - cFYI(1, ("Clearing Mid 0x%x - waking up ", - mid_entry->mid)); + cFYI(1, "Clearing Mid 0x%x - waking up ", + mid_entry->mid); task_to_wake = mid_entry->tsk; if (task_to_wake) wake_up_process(task_to_wake); @@ -728,7 +729,7 @@ multi_t2_fnd: to wait at least 45 seconds before giving up on a request getting a response and going ahead and killing cifsd */ - cFYI(1, ("Wait for exit from demultiplex thread")); + cFYI(1, "Wait for exit from demultiplex thread"); msleep(46000); /* if threads still have not exited they are probably never coming home not much else we can do but free the memory */ @@ -849,7 +850,7 @@ cifs_parse_mount_options(char *options, const char *devname, separator[0] = options[4]; options += 5; } else { - cFYI(1, ("Null separator not allowed")); + cFYI(1, "Null separator not allowed"); } } @@ -974,7 +975,7 @@ cifs_parse_mount_options(char *options, const char *devname, } } else if (strnicmp(data, "sec", 3) == 0) { if (!value || !*value) { - cERROR(1, ("no security value specified")); + cERROR(1, "no security value specified"); continue; } else if (strnicmp(value, "krb5i", 5) == 0) { vol->secFlg |= CIFSSEC_MAY_KRB5 | @@ -982,7 +983,7 @@ cifs_parse_mount_options(char *options, const char *devname, } else if (strnicmp(value, "krb5p", 5) == 0) { /* vol->secFlg |= CIFSSEC_MUST_SEAL | CIFSSEC_MAY_KRB5; */ - cERROR(1, ("Krb5 cifs privacy not supported")); + cERROR(1, "Krb5 cifs privacy not supported"); return 1; } else if (strnicmp(value, "krb5", 4) == 0) { vol->secFlg |= CIFSSEC_MAY_KRB5; @@ -1014,7 +1015,7 @@ cifs_parse_mount_options(char *options, const char *devname, } else if (strnicmp(value, "none", 4) == 0) { vol->nullauth = 1; } else { - cERROR(1, ("bad security option: %s", value)); + cERROR(1, "bad security option: %s", value); return 1; } } else if ((strnicmp(data, "unc", 3) == 0) @@ -1053,7 +1054,7 @@ cifs_parse_mount_options(char *options, const char *devname, a domain name and need special handling? 
*/ if (strnlen(value, 256) < 256) { vol->domainname = value; - cFYI(1, ("Domain name set")); + cFYI(1, "Domain name set"); } else { printk(KERN_WARNING "CIFS: domain name too " "long\n"); @@ -1076,7 +1077,7 @@ cifs_parse_mount_options(char *options, const char *devname, strcpy(vol->prepath+1, value); } else strcpy(vol->prepath, value); - cFYI(1, ("prefix path %s", vol->prepath)); + cFYI(1, "prefix path %s", vol->prepath); } else { printk(KERN_WARNING "CIFS: prefix too long\n"); return 1; @@ -1092,7 +1093,7 @@ cifs_parse_mount_options(char *options, const char *devname, vol->iocharset = value; /* if iocharset not set then load_nls_default is used by caller */ - cFYI(1, ("iocharset set to %s", value)); + cFYI(1, "iocharset set to %s", value); } else { printk(KERN_WARNING "CIFS: iocharset name " "too long.\n"); @@ -1144,14 +1145,14 @@ cifs_parse_mount_options(char *options, const char *devname, } } else if (strnicmp(data, "sockopt", 5) == 0) { if (!value || !*value) { - cERROR(1, ("no socket option specified")); + cERROR(1, "no socket option specified"); continue; } else if (strnicmp(value, "TCP_NODELAY", 11) == 0) { vol->sockopt_tcp_nodelay = 1; } } else if (strnicmp(data, "netbiosname", 4) == 0) { if (!value || !*value || (*value == ' ')) { - cFYI(1, ("invalid (empty) netbiosname")); + cFYI(1, "invalid (empty) netbiosname"); } else { memset(vol->source_rfc1001_name, 0x20, 15); for (i = 0; i < 15; i++) { @@ -1175,7 +1176,7 @@ cifs_parse_mount_options(char *options, const char *devname, } else if (strnicmp(data, "servern", 7) == 0) { /* servernetbiosname specified override *SMBSERVER */ if (!value || !*value || (*value == ' ')) { - cFYI(1, ("empty server netbiosname specified")); + cFYI(1, "empty server netbiosname specified"); } else { /* last byte, type, is 0x20 for servr type */ memset(vol->target_rfc1001_name, 0x20, 16); @@ -1434,7 +1435,7 @@ cifs_find_tcp_session(struct sockaddr_storage *addr, unsigned short int port) ++server->srv_count; write_unlock(&cifs_tcp_ses_lock); - cFYI(1, ("Existing tcp session with server found")); + cFYI(1, "Existing tcp session with server found"); return server; } write_unlock(&cifs_tcp_ses_lock); @@ -1475,7 +1476,7 @@ cifs_get_tcp_session(struct smb_vol *volume_info) memset(&addr, 0, sizeof(struct sockaddr_storage)); - cFYI(1, ("UNC: %s ip: %s", volume_info->UNC, volume_info->UNCip)); + cFYI(1, "UNC: %s ip: %s", volume_info->UNC, volume_info->UNCip); if (volume_info->UNCip && volume_info->UNC) { rc = cifs_convert_address(volume_info->UNCip, &addr); @@ -1487,13 +1488,12 @@ cifs_get_tcp_session(struct smb_vol *volume_info) } else if (volume_info->UNCip) { /* BB using ip addr as tcp_ses name to connect to the DFS root below */ - cERROR(1, ("Connecting to DFS root not implemented yet")); + cERROR(1, "Connecting to DFS root not implemented yet"); rc = -EINVAL; goto out_err; } else /* which tcp_sess DFS root would we conect to */ { - cERROR(1, - ("CIFS mount error: No UNC path (e.g. -o " - "unc=//192.168.1.100/public) specified")); + cERROR(1, "CIFS mount error: No UNC path (e.g. -o " + "unc=//192.168.1.100/public) specified"); rc = -EINVAL; goto out_err; } @@ -1540,7 +1540,7 @@ cifs_get_tcp_session(struct smb_vol *volume_info) ++tcp_ses->srv_count; if (addr.ss_family == AF_INET6) { - cFYI(1, ("attempting ipv6 connect")); + cFYI(1, "attempting ipv6 connect"); /* BB should we allow ipv6 on port 139? 
*/ /* other OS never observed in Wild doing 139 with v6 */ sin_server6->sin6_port = htons(volume_info->port); @@ -1554,7 +1554,7 @@ cifs_get_tcp_session(struct smb_vol *volume_info) rc = ipv4_connect(tcp_ses); } if (rc < 0) { - cERROR(1, ("Error connecting to socket. Aborting operation")); + cERROR(1, "Error connecting to socket. Aborting operation"); goto out_err; } @@ -1567,7 +1567,7 @@ cifs_get_tcp_session(struct smb_vol *volume_info) tcp_ses, "cifsd"); if (IS_ERR(tcp_ses->tsk)) { rc = PTR_ERR(tcp_ses->tsk); - cERROR(1, ("error %d create cifsd thread", rc)); + cERROR(1, "error %d create cifsd thread", rc); module_put(THIS_MODULE); goto out_err; } @@ -1616,6 +1616,7 @@ cifs_put_smb_ses(struct cifsSesInfo *ses) int xid; struct TCP_Server_Info *server = ses->server; + cFYI(1, "%s: ses_count=%d\n", __func__, ses->ses_count); write_lock(&cifs_tcp_ses_lock); if (--ses->ses_count > 0) { write_unlock(&cifs_tcp_ses_lock); @@ -1634,6 +1635,102 @@ cifs_put_smb_ses(struct cifsSesInfo *ses) cifs_put_tcp_session(server); } +static struct cifsSesInfo * +cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info) +{ + int rc = -ENOMEM, xid; + struct cifsSesInfo *ses; + + xid = GetXid(); + + ses = cifs_find_smb_ses(server, volume_info->username); + if (ses) { + cFYI(1, "Existing smb sess found (status=%d)", ses->status); + + /* existing SMB ses has a server reference already */ + cifs_put_tcp_session(server); + + mutex_lock(&ses->session_mutex); + rc = cifs_negotiate_protocol(xid, ses); + if (rc) { + mutex_unlock(&ses->session_mutex); + /* problem -- put our ses reference */ + cifs_put_smb_ses(ses); + FreeXid(xid); + return ERR_PTR(rc); + } + if (ses->need_reconnect) { + cFYI(1, "Session needs reconnect"); + rc = cifs_setup_session(xid, ses, + volume_info->local_nls); + if (rc) { + mutex_unlock(&ses->session_mutex); + /* problem -- put our reference */ + cifs_put_smb_ses(ses); + FreeXid(xid); + return ERR_PTR(rc); + } + } + mutex_unlock(&ses->session_mutex); + FreeXid(xid); + return ses; + } + + cFYI(1, "Existing smb sess not found"); + ses = sesInfoAlloc(); + if (ses == NULL) + goto get_ses_fail; + + /* new SMB session uses our server ref */ + ses->server = server; + if (server->addr.sockAddr6.sin6_family == AF_INET6) + sprintf(ses->serverName, "%pI6", + &server->addr.sockAddr6.sin6_addr); + else + sprintf(ses->serverName, "%pI4", + &server->addr.sockAddr.sin_addr.s_addr); + + if (volume_info->username) + strncpy(ses->userName, volume_info->username, + MAX_USERNAME_SIZE); + + /* volume_info->password freed at unmount */ + if (volume_info->password) { + ses->password = kstrdup(volume_info->password, GFP_KERNEL); + if (!ses->password) + goto get_ses_fail; + } + if (volume_info->domainname) { + int len = strlen(volume_info->domainname); + ses->domainName = kmalloc(len + 1, GFP_KERNEL); + if (ses->domainName) + strcpy(ses->domainName, volume_info->domainname); + } + ses->linux_uid = volume_info->linux_uid; + ses->overrideSecFlg = volume_info->secFlg; + + mutex_lock(&ses->session_mutex); + rc = cifs_negotiate_protocol(xid, ses); + if (!rc) + rc = cifs_setup_session(xid, ses, volume_info->local_nls); + mutex_unlock(&ses->session_mutex); + if (rc) + goto get_ses_fail; + + /* success, put it on the list */ + write_lock(&cifs_tcp_ses_lock); + list_add(&ses->smb_ses_list, &server->smb_ses_list); + write_unlock(&cifs_tcp_ses_lock); + + FreeXid(xid); + return ses; + +get_ses_fail: + sesInfoFree(ses); + FreeXid(xid); + return ERR_PTR(rc); +} + static struct cifsTconInfo * cifs_find_tcon(struct 
cifsSesInfo *ses, const char *unc) { @@ -1662,6 +1759,7 @@ cifs_put_tcon(struct cifsTconInfo *tcon) int xid; struct cifsSesInfo *ses = tcon->ses; + cFYI(1, "%s: tc_count=%d\n", __func__, tcon->tc_count); write_lock(&cifs_tcp_ses_lock); if (--tcon->tc_count > 0) { write_unlock(&cifs_tcp_ses_lock); @@ -1679,6 +1777,80 @@ cifs_put_tcon(struct cifsTconInfo *tcon) cifs_put_smb_ses(ses); } +static struct cifsTconInfo * +cifs_get_tcon(struct cifsSesInfo *ses, struct smb_vol *volume_info) +{ + int rc, xid; + struct cifsTconInfo *tcon; + + tcon = cifs_find_tcon(ses, volume_info->UNC); + if (tcon) { + cFYI(1, "Found match on UNC path"); + /* existing tcon already has a reference */ + cifs_put_smb_ses(ses); + if (tcon->seal != volume_info->seal) + cERROR(1, "transport encryption setting " + "conflicts with existing tid"); + return tcon; + } + + tcon = tconInfoAlloc(); + if (tcon == NULL) { + rc = -ENOMEM; + goto out_fail; + } + + tcon->ses = ses; + if (volume_info->password) { + tcon->password = kstrdup(volume_info->password, GFP_KERNEL); + if (!tcon->password) { + rc = -ENOMEM; + goto out_fail; + } + } + + if (strchr(volume_info->UNC + 3, '\\') == NULL + && strchr(volume_info->UNC + 3, '/') == NULL) { + cERROR(1, "Missing share name"); + rc = -ENODEV; + goto out_fail; + } + + /* BB Do we need to wrap session_mutex around + * this TCon call and Unix SetFS as + * we do on SessSetup and reconnect? */ + xid = GetXid(); + rc = CIFSTCon(xid, ses, volume_info->UNC, tcon, volume_info->local_nls); + FreeXid(xid); + cFYI(1, "CIFS Tcon rc = %d", rc); + if (rc) + goto out_fail; + + if (volume_info->nodfs) { + tcon->Flags &= ~SMB_SHARE_IS_IN_DFS; + cFYI(1, "DFS disabled (%d)", tcon->Flags); + } + tcon->seal = volume_info->seal; + /* we can have only one retry value for a connection + to a share so for resources mounted more than once + to the same server share the last value passed in + for the retry flag is used */ + tcon->retry = volume_info->retry; + tcon->nocase = volume_info->nocase; + tcon->local_lease = volume_info->local_lease; + + write_lock(&cifs_tcp_ses_lock); + list_add(&tcon->tcon_list, &ses->tcon_list); + write_unlock(&cifs_tcp_ses_lock); + + return tcon; + +out_fail: + tconInfoFree(tcon); + return ERR_PTR(rc); +} + + int get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path, const struct nls_table *nls_codepage, unsigned int *pnum_referrals, @@ -1703,8 +1875,7 @@ get_dfs_path(int xid, struct cifsSesInfo *pSesInfo, const char *old_path, strcpy(temp_unc + 2, pSesInfo->serverName); strcpy(temp_unc + 2 + strlen(pSesInfo->serverName), "\\IPC$"); rc = CIFSTCon(xid, pSesInfo, temp_unc, NULL, nls_codepage); - cFYI(1, - ("CIFS Tcon rc = %d ipc_tid = %d", rc, pSesInfo->ipc_tid)); + cFYI(1, "CIFS Tcon rc = %d ipc_tid = %d", rc, pSesInfo->ipc_tid); kfree(temp_unc); } if (rc == 0) @@ -1777,12 +1948,12 @@ ipv4_connect(struct TCP_Server_Info *server) rc = sock_create_kern(PF_INET, SOCK_STREAM, IPPROTO_TCP, &socket); if (rc < 0) { - cERROR(1, ("Error %d creating socket", rc)); + cERROR(1, "Error %d creating socket", rc); return rc; } /* BB other socket options to set KEEPALIVE, NODELAY? 
*/ - cFYI(1, ("Socket created")); + cFYI(1, "Socket created"); server->ssocket = socket; socket->sk->sk_allocation = GFP_NOFS; cifs_reclassify_socket4(socket); @@ -1827,7 +1998,7 @@ ipv4_connect(struct TCP_Server_Info *server) if (!connected) { if (orig_port) server->addr.sockAddr.sin_port = orig_port; - cFYI(1, ("Error %d connecting to server via ipv4", rc)); + cFYI(1, "Error %d connecting to server via ipv4", rc); sock_release(socket); server->ssocket = NULL; return rc; @@ -1855,12 +2026,12 @@ ipv4_connect(struct TCP_Server_Info *server) rc = kernel_setsockopt(socket, SOL_TCP, TCP_NODELAY, (char *)&val, sizeof(val)); if (rc) - cFYI(1, ("set TCP_NODELAY socket option error %d", rc)); + cFYI(1, "set TCP_NODELAY socket option error %d", rc); } - cFYI(1, ("sndbuf %d rcvbuf %d rcvtimeo 0x%lx", + cFYI(1, "sndbuf %d rcvbuf %d rcvtimeo 0x%lx", socket->sk->sk_sndbuf, - socket->sk->sk_rcvbuf, socket->sk->sk_rcvtimeo)); + socket->sk->sk_rcvbuf, socket->sk->sk_rcvtimeo); /* send RFC1001 sessinit */ if (server->addr.sockAddr.sin_port == htons(RFC1001_PORT)) { @@ -1938,13 +2109,13 @@ ipv6_connect(struct TCP_Server_Info *server) rc = sock_create_kern(PF_INET6, SOCK_STREAM, IPPROTO_TCP, &socket); if (rc < 0) { - cERROR(1, ("Error %d creating ipv6 socket", rc)); + cERROR(1, "Error %d creating ipv6 socket", rc); socket = NULL; return rc; } /* BB other socket options to set KEEPALIVE, NODELAY? */ - cFYI(1, ("ipv6 Socket created")); + cFYI(1, "ipv6 Socket created"); server->ssocket = socket; socket->sk->sk_allocation = GFP_NOFS; cifs_reclassify_socket6(socket); @@ -1988,7 +2159,7 @@ ipv6_connect(struct TCP_Server_Info *server) if (!connected) { if (orig_port) server->addr.sockAddr6.sin6_port = orig_port; - cFYI(1, ("Error %d connecting to server via ipv6", rc)); + cFYI(1, "Error %d connecting to server via ipv6", rc); sock_release(socket); server->ssocket = NULL; return rc; @@ -2007,7 +2178,7 @@ ipv6_connect(struct TCP_Server_Info *server) rc = kernel_setsockopt(socket, SOL_TCP, TCP_NODELAY, (char *)&val, sizeof(val)); if (rc) - cFYI(1, ("set TCP_NODELAY socket option error %d", rc)); + cFYI(1, "set TCP_NODELAY socket option error %d", rc); } server->ssocket = socket; @@ -2032,13 +2203,13 @@ void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon, if (vol_info && vol_info->no_linux_ext) { tcon->fsUnixInfo.Capability = 0; tcon->unix_ext = 0; /* Unix Extensions disabled */ - cFYI(1, ("Linux protocol extensions disabled")); + cFYI(1, "Linux protocol extensions disabled"); return; } else if (vol_info) tcon->unix_ext = 1; /* Unix Extensions supported */ if (tcon->unix_ext == 0) { - cFYI(1, ("Unix extensions disabled so not set on reconnect")); + cFYI(1, "Unix extensions disabled so not set on reconnect"); return; } @@ -2054,12 +2225,11 @@ void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon, cap &= ~CIFS_UNIX_POSIX_ACL_CAP; if ((saved_cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) == 0) { if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) - cERROR(1, ("POSIXPATH support change")); + cERROR(1, "POSIXPATH support change"); cap &= ~CIFS_UNIX_POSIX_PATHNAMES_CAP; } else if ((cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) == 0) { - cERROR(1, ("possible reconnect error")); - cERROR(1, - ("server disabled POSIX path support")); + cERROR(1, "possible reconnect error"); + cERROR(1, "server disabled POSIX path support"); } } @@ -2067,7 +2237,7 @@ void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon, if (vol_info && vol_info->no_psx_acl) cap &= ~CIFS_UNIX_POSIX_ACL_CAP; else if (CIFS_UNIX_POSIX_ACL_CAP & cap) { - cFYI(1, 
("negotiated posix acl support")); + cFYI(1, "negotiated posix acl support"); if (sb) sb->s_flags |= MS_POSIXACL; } @@ -2075,7 +2245,7 @@ void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon, if (vol_info && vol_info->posix_paths == 0) cap &= ~CIFS_UNIX_POSIX_PATHNAMES_CAP; else if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) { - cFYI(1, ("negotiate posix pathnames")); + cFYI(1, "negotiate posix pathnames"); if (sb) CIFS_SB(sb)->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS; @@ -2090,39 +2260,38 @@ void reset_cifs_unix_caps(int xid, struct cifsTconInfo *tcon, if (sb && (CIFS_SB(sb)->rsize > 127 * 1024)) { if ((cap & CIFS_UNIX_LARGE_READ_CAP) == 0) { CIFS_SB(sb)->rsize = 127 * 1024; - cFYI(DBG2, - ("larger reads not supported by srv")); + cFYI(DBG2, "larger reads not supported by srv"); } } - cFYI(1, ("Negotiate caps 0x%x", (int)cap)); + cFYI(1, "Negotiate caps 0x%x", (int)cap); #ifdef CONFIG_CIFS_DEBUG2 if (cap & CIFS_UNIX_FCNTL_CAP) - cFYI(1, ("FCNTL cap")); + cFYI(1, "FCNTL cap"); if (cap & CIFS_UNIX_EXTATTR_CAP) - cFYI(1, ("EXTATTR cap")); + cFYI(1, "EXTATTR cap"); if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) - cFYI(1, ("POSIX path cap")); + cFYI(1, "POSIX path cap"); if (cap & CIFS_UNIX_XATTR_CAP) - cFYI(1, ("XATTR cap")); + cFYI(1, "XATTR cap"); if (cap & CIFS_UNIX_POSIX_ACL_CAP) - cFYI(1, ("POSIX ACL cap")); + cFYI(1, "POSIX ACL cap"); if (cap & CIFS_UNIX_LARGE_READ_CAP) - cFYI(1, ("very large read cap")); + cFYI(1, "very large read cap"); if (cap & CIFS_UNIX_LARGE_WRITE_CAP) - cFYI(1, ("very large write cap")); + cFYI(1, "very large write cap"); #endif /* CIFS_DEBUG2 */ if (CIFSSMBSetFSUnixInfo(xid, tcon, cap)) { if (vol_info == NULL) { - cFYI(1, ("resetting capabilities failed")); + cFYI(1, "resetting capabilities failed"); } else - cERROR(1, ("Negotiating Unix capabilities " + cERROR(1, "Negotiating Unix capabilities " "with the server failed. 
Consider " "mounting with the Unix Extensions\n" "disabled, if problems are found, " "by specifying the nounix mount " - "option.")); + "option."); } } @@ -2152,8 +2321,8 @@ static void setup_cifs_sb(struct smb_vol *pvolume_info, struct cifs_sb_info *cifs_sb) { if (pvolume_info->rsize > CIFSMaxBufSize) { - cERROR(1, ("rsize %d too large, using MaxBufSize", - pvolume_info->rsize)); + cERROR(1, "rsize %d too large, using MaxBufSize", + pvolume_info->rsize); cifs_sb->rsize = CIFSMaxBufSize; } else if ((pvolume_info->rsize) && (pvolume_info->rsize <= CIFSMaxBufSize)) @@ -2162,8 +2331,8 @@ static void setup_cifs_sb(struct smb_vol *pvolume_info, cifs_sb->rsize = CIFSMaxBufSize; if (pvolume_info->wsize > PAGEVEC_SIZE * PAGE_CACHE_SIZE) { - cERROR(1, ("wsize %d too large, using 4096 instead", - pvolume_info->wsize)); + cERROR(1, "wsize %d too large, using 4096 instead", + pvolume_info->wsize); cifs_sb->wsize = 4096; } else if (pvolume_info->wsize) cifs_sb->wsize = pvolume_info->wsize; @@ -2181,7 +2350,7 @@ static void setup_cifs_sb(struct smb_vol *pvolume_info, if (cifs_sb->rsize < 2048) { cifs_sb->rsize = 2048; /* Windows ME may prefer this */ - cFYI(1, ("readsize set to minimum: 2048")); + cFYI(1, "readsize set to minimum: 2048"); } /* calculate prepath */ cifs_sb->prepath = pvolume_info->prepath; @@ -2199,8 +2368,8 @@ static void setup_cifs_sb(struct smb_vol *pvolume_info, cifs_sb->mnt_gid = pvolume_info->linux_gid; cifs_sb->mnt_file_mode = pvolume_info->file_mode; cifs_sb->mnt_dir_mode = pvolume_info->dir_mode; - cFYI(1, ("file mode: 0x%x dir mode: 0x%x", - cifs_sb->mnt_file_mode, cifs_sb->mnt_dir_mode)); + cFYI(1, "file mode: 0x%x dir mode: 0x%x", + cifs_sb->mnt_file_mode, cifs_sb->mnt_dir_mode); if (pvolume_info->noperm) cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_PERM; @@ -2229,13 +2398,13 @@ static void setup_cifs_sb(struct smb_vol *pvolume_info, if (pvolume_info->dynperm) cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DYNPERM; if (pvolume_info->direct_io) { - cFYI(1, ("mounting share using direct i/o")); + cFYI(1, "mounting share using direct i/o"); cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DIRECT_IO; } if ((pvolume_info->cifs_acl) && (pvolume_info->dynperm)) - cERROR(1, ("mount option dynperm ignored if cifsacl " - "mount option supported")); + cERROR(1, "mount option dynperm ignored if cifsacl " + "mount option supported"); } static int @@ -2262,7 +2431,7 @@ cleanup_volume_info(struct smb_vol **pvolume_info) { struct smb_vol *volume_info; - if (!pvolume_info && !*pvolume_info) + if (!pvolume_info || !*pvolume_info) return; volume_info = *pvolume_info; @@ -2344,11 +2513,11 @@ try_mount_again: } if (volume_info->nullauth) { - cFYI(1, ("null user")); + cFYI(1, "null user"); volume_info->username = ""; } else if (volume_info->username) { /* BB fixme parse for domain name here */ - cFYI(1, ("Username: %s", volume_info->username)); + cFYI(1, "Username: %s", volume_info->username); } else { cifserror("No username specified"); /* In userspace mount helper we can get user name from alternate @@ -2357,20 +2526,20 @@ try_mount_again: goto out; } - /* this is needed for ASCII cp to Unicode converts */ if (volume_info->iocharset == NULL) { - cifs_sb->local_nls = load_nls_default(); - /* load_nls_default can not return null */ + /* load_nls_default cannot return null */ + volume_info->local_nls = load_nls_default(); } else { - cifs_sb->local_nls = load_nls(volume_info->iocharset); - if (cifs_sb->local_nls == NULL) { - cERROR(1, ("CIFS mount error: iocharset %s not found", - volume_info->iocharset)); + 
volume_info->local_nls = load_nls(volume_info->iocharset); + if (volume_info->local_nls == NULL) { + cERROR(1, "CIFS mount error: iocharset %s not found", + volume_info->iocharset); rc = -ELIBACC; goto out; } } + cifs_sb->local_nls = volume_info->local_nls; /* get a reference to a tcp session */ srvTcp = cifs_get_tcp_session(volume_info); @@ -2379,148 +2548,30 @@ try_mount_again: goto out; } - pSesInfo = cifs_find_smb_ses(srvTcp, volume_info->username); - if (pSesInfo) { - cFYI(1, ("Existing smb sess found (status=%d)", - pSesInfo->status)); - /* - * The existing SMB session already has a reference to srvTcp, - * so we can put back the extra one we got before - */ - cifs_put_tcp_session(srvTcp); - - mutex_lock(&pSesInfo->session_mutex); - if (pSesInfo->need_reconnect) { - cFYI(1, ("Session needs reconnect")); - rc = cifs_setup_session(xid, pSesInfo, - cifs_sb->local_nls); - } - mutex_unlock(&pSesInfo->session_mutex); - } else if (!rc) { - cFYI(1, ("Existing smb sess not found")); - pSesInfo = sesInfoAlloc(); - if (pSesInfo == NULL) { - rc = -ENOMEM; - goto mount_fail_check; - } - - /* new SMB session uses our srvTcp ref */ - pSesInfo->server = srvTcp; - if (srvTcp->addr.sockAddr6.sin6_family == AF_INET6) - sprintf(pSesInfo->serverName, "%pI6", - &srvTcp->addr.sockAddr6.sin6_addr); - else - sprintf(pSesInfo->serverName, "%pI4", - &srvTcp->addr.sockAddr.sin_addr.s_addr); - - write_lock(&cifs_tcp_ses_lock); - list_add(&pSesInfo->smb_ses_list, &srvTcp->smb_ses_list); - write_unlock(&cifs_tcp_ses_lock); - - /* volume_info->password freed at unmount */ - if (volume_info->password) { - pSesInfo->password = kstrdup(volume_info->password, - GFP_KERNEL); - if (!pSesInfo->password) { - rc = -ENOMEM; - goto mount_fail_check; - } - } - if (volume_info->username) - strncpy(pSesInfo->userName, volume_info->username, - MAX_USERNAME_SIZE); - if (volume_info->domainname) { - int len = strlen(volume_info->domainname); - pSesInfo->domainName = kmalloc(len + 1, GFP_KERNEL); - if (pSesInfo->domainName) - strcpy(pSesInfo->domainName, - volume_info->domainname); - } - pSesInfo->linux_uid = volume_info->linux_uid; - pSesInfo->overrideSecFlg = volume_info->secFlg; - mutex_lock(&pSesInfo->session_mutex); - - /* BB FIXME need to pass vol->secFlgs BB */ - rc = cifs_setup_session(xid, pSesInfo, - cifs_sb->local_nls); - mutex_unlock(&pSesInfo->session_mutex); + /* get a reference to a SMB session */ + pSesInfo = cifs_get_smb_ses(srvTcp, volume_info); + if (IS_ERR(pSesInfo)) { + rc = PTR_ERR(pSesInfo); + pSesInfo = NULL; + goto mount_fail_check; } - /* search for existing tcon to this server share */ - if (!rc) { - setup_cifs_sb(volume_info, cifs_sb); - - tcon = cifs_find_tcon(pSesInfo, volume_info->UNC); - if (tcon) { - cFYI(1, ("Found match on UNC path")); - /* existing tcon already has a reference */ - cifs_put_smb_ses(pSesInfo); - if (tcon->seal != volume_info->seal) - cERROR(1, ("transport encryption setting " - "conflicts with existing tid")); - } else { - tcon = tconInfoAlloc(); - if (tcon == NULL) { - rc = -ENOMEM; - goto mount_fail_check; - } - - tcon->ses = pSesInfo; - if (volume_info->password) { - tcon->password = kstrdup(volume_info->password, - GFP_KERNEL); - if (!tcon->password) { - rc = -ENOMEM; - goto mount_fail_check; - } - } - - if ((strchr(volume_info->UNC + 3, '\\') == NULL) - && (strchr(volume_info->UNC + 3, '/') == NULL)) { - cERROR(1, ("Missing share name")); - rc = -ENODEV; - goto mount_fail_check; - } else { - /* BB Do we need to wrap sesSem around - * this TCon call and Unix SetFS as - * we do 
on SessSetup and reconnect? */ - rc = CIFSTCon(xid, pSesInfo, volume_info->UNC, - tcon, cifs_sb->local_nls); - cFYI(1, ("CIFS Tcon rc = %d", rc)); - if (volume_info->nodfs) { - tcon->Flags &= ~SMB_SHARE_IS_IN_DFS; - cFYI(1, ("DFS disabled (%d)", - tcon->Flags)); - } - } - if (rc) - goto remote_path_check; - tcon->seal = volume_info->seal; - write_lock(&cifs_tcp_ses_lock); - list_add(&tcon->tcon_list, &pSesInfo->tcon_list); - write_unlock(&cifs_tcp_ses_lock); - } - - /* we can have only one retry value for a connection - to a share so for resources mounted more than once - to the same server share the last value passed in - for the retry flag is used */ - tcon->retry = volume_info->retry; - tcon->nocase = volume_info->nocase; - tcon->local_lease = volume_info->local_lease; - } - if (pSesInfo) { - if (pSesInfo->capabilities & CAP_LARGE_FILES) - sb->s_maxbytes = MAX_LFS_FILESIZE; - else - sb->s_maxbytes = MAX_NON_LFS; - } + setup_cifs_sb(volume_info, cifs_sb); + if (pSesInfo->capabilities & CAP_LARGE_FILES) + sb->s_maxbytes = MAX_LFS_FILESIZE; + else + sb->s_maxbytes = MAX_NON_LFS; /* BB FIXME fix time_gran to be larger for LANMAN sessions */ sb->s_time_gran = 100; - if (rc) + /* search for existing tcon to this server share */ + tcon = cifs_get_tcon(pSesInfo, volume_info); + if (IS_ERR(tcon)) { + rc = PTR_ERR(tcon); + tcon = NULL; goto remote_path_check; + } cifs_sb->tcon = tcon; @@ -2544,7 +2595,7 @@ try_mount_again: if ((tcon->unix_ext == 0) && (cifs_sb->rsize > (1024 * 127))) { cifs_sb->rsize = 1024 * 127; - cFYI(DBG2, ("no very large read support, rsize now 127K")); + cFYI(DBG2, "no very large read support, rsize now 127K"); } if (!(tcon->ses->capabilities & CAP_LARGE_WRITE_X)) cifs_sb->wsize = min(cifs_sb->wsize, @@ -2593,7 +2644,7 @@ remote_path_check: goto mount_fail_check; } - cFYI(1, ("Getting referral for: %s", full_path)); + cFYI(1, "Getting referral for: %s", full_path); rc = get_dfs_path(xid, pSesInfo , full_path + 1, cifs_sb->local_nls, &num_referrals, &referrals, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); @@ -2707,7 +2758,7 @@ CIFSTCon(unsigned int xid, struct cifsSesInfo *ses, by Samba (not sure whether other servers allow NTLMv2 password here) */ #ifdef CONFIG_CIFS_WEAK_PW_HASH - if ((extended_security & CIFSSEC_MAY_LANMAN) && + if ((global_secflags & CIFSSEC_MAY_LANMAN) && (ses->server->secType == LANMAN)) calc_lanman_hash(tcon->password, ses->server->cryptKey, ses->server->secMode & @@ -2778,13 +2829,13 @@ CIFSTCon(unsigned int xid, struct cifsSesInfo *ses, if (length == 3) { if ((bcc_ptr[0] == 'I') && (bcc_ptr[1] == 'P') && (bcc_ptr[2] == 'C')) { - cFYI(1, ("IPC connection")); + cFYI(1, "IPC connection"); tcon->ipc = 1; } } else if (length == 2) { if ((bcc_ptr[0] == 'A') && (bcc_ptr[1] == ':')) { /* the most common case */ - cFYI(1, ("disk share connection")); + cFYI(1, "disk share connection"); } } bcc_ptr += length + 1; @@ -2797,7 +2848,7 @@ CIFSTCon(unsigned int xid, struct cifsSesInfo *ses, bytes_left, is_unicode, nls_codepage); - cFYI(1, ("nativeFileSystem=%s", tcon->nativeFileSystem)); + cFYI(1, "nativeFileSystem=%s", tcon->nativeFileSystem); if ((smb_buffer_response->WordCount == 3) || (smb_buffer_response->WordCount == 7)) @@ -2805,7 +2856,7 @@ CIFSTCon(unsigned int xid, struct cifsSesInfo *ses, tcon->Flags = le16_to_cpu(pSMBr->OptionalSupport); else tcon->Flags = 0; - cFYI(1, ("Tcon flags: 0x%x ", tcon->Flags)); + cFYI(1, "Tcon flags: 0x%x ", tcon->Flags); } else if ((rc == 0) && tcon == NULL) { /* all we need to save for IPC$ connection */ 
ses->ipc_tid = smb_buffer_response->Tid; @@ -2833,57 +2884,61 @@ cifs_umount(struct super_block *sb, struct cifs_sb_info *cifs_sb) return rc; } -int cifs_setup_session(unsigned int xid, struct cifsSesInfo *pSesInfo, - struct nls_table *nls_info) +int cifs_negotiate_protocol(unsigned int xid, struct cifsSesInfo *ses) { int rc = 0; - int first_time = 0; - struct TCP_Server_Info *server = pSesInfo->server; - - /* what if server changes its buffer size after dropping the session? */ - if (server->maxBuf == 0) /* no need to send on reconnect */ { - rc = CIFSSMBNegotiate(xid, pSesInfo); - if (rc == -EAGAIN) { - /* retry only once on 1st time connection */ - rc = CIFSSMBNegotiate(xid, pSesInfo); - if (rc == -EAGAIN) - rc = -EHOSTDOWN; - } - if (rc == 0) { - spin_lock(&GlobalMid_Lock); - if (server->tcpStatus != CifsExiting) - server->tcpStatus = CifsGood; - else - rc = -EHOSTDOWN; - spin_unlock(&GlobalMid_Lock); + struct TCP_Server_Info *server = ses->server; + + /* only send once per connect */ + if (server->maxBuf != 0) + return 0; + + rc = CIFSSMBNegotiate(xid, ses); + if (rc == -EAGAIN) { + /* retry only once on 1st time connection */ + rc = CIFSSMBNegotiate(xid, ses); + if (rc == -EAGAIN) + rc = -EHOSTDOWN; + } + if (rc == 0) { + spin_lock(&GlobalMid_Lock); + if (server->tcpStatus != CifsExiting) + server->tcpStatus = CifsGood; + else + rc = -EHOSTDOWN; + spin_unlock(&GlobalMid_Lock); - } - first_time = 1; } - if (rc) - goto ss_err_exit; + return rc; +} + + +int cifs_setup_session(unsigned int xid, struct cifsSesInfo *ses, + struct nls_table *nls_info) +{ + int rc = 0; + struct TCP_Server_Info *server = ses->server; - pSesInfo->flags = 0; - pSesInfo->capabilities = server->capabilities; + ses->flags = 0; + ses->capabilities = server->capabilities; if (linuxExtEnabled == 0) - pSesInfo->capabilities &= (~CAP_UNIX); + ses->capabilities &= (~CAP_UNIX); - cFYI(1, ("Security Mode: 0x%x Capabilities: 0x%x TimeAdjust: %d", - server->secMode, server->capabilities, server->timeAdj)); + cFYI(1, "Security Mode: 0x%x Capabilities: 0x%x TimeAdjust: %d", + server->secMode, server->capabilities, server->timeAdj); - rc = CIFS_SessSetup(xid, pSesInfo, first_time, nls_info); + rc = CIFS_SessSetup(xid, ses, nls_info); if (rc) { - cERROR(1, ("Send error in SessSetup = %d", rc)); + cERROR(1, "Send error in SessSetup = %d", rc); } else { - cFYI(1, ("CIFS Session Established successfully")); + cFYI(1, "CIFS Session Established successfully"); spin_lock(&GlobalMid_Lock); - pSesInfo->status = CifsGood; - pSesInfo->need_reconnect = false; + ses->status = CifsGood; + ses->need_reconnect = false; spin_unlock(&GlobalMid_Lock); } -ss_err_exit: return rc; } diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c index e9f7ecc2714b..391816b461ca 100644 --- a/fs/cifs/dir.c +++ b/fs/cifs/dir.c @@ -73,7 +73,7 @@ cifs_bp_rename_retry: namelen += (1 + temp->d_name.len); temp = temp->d_parent; if (temp == NULL) { - cERROR(1, ("corrupt dentry")); + cERROR(1, "corrupt dentry"); return NULL; } } @@ -90,19 +90,18 @@ cifs_bp_rename_retry: full_path[namelen] = dirsep; strncpy(full_path + namelen + 1, temp->d_name.name, temp->d_name.len); - cFYI(0, ("name: %s", full_path + namelen)); + cFYI(0, "name: %s", full_path + namelen); } temp = temp->d_parent; if (temp == NULL) { - cERROR(1, ("corrupt dentry")); + cERROR(1, "corrupt dentry"); kfree(full_path); return NULL; } } if (namelen != pplen + dfsplen) { - cERROR(1, - ("did not end path lookup where expected namelen is %d", - namelen)); + cERROR(1, "did not end path lookup where expected namelen is 
%d", + namelen); /* presumably this is only possible if racing with a rename of one of the parent directories (we can not lock the dentries above us to prevent this, but retrying should be harmless) */ @@ -130,6 +129,12 @@ cifs_bp_rename_retry: return full_path; } +/* + * When called with struct file pointer set to NULL, there is no way we could + * update file->private_data, but getting it stuck on openFileList provides a + * way to access it from cifs_fill_filedata and thereby set file->private_data + * from cifs_open. + */ struct cifsFileInfo * cifs_new_fileinfo(struct inode *newinode, __u16 fileHandle, struct file *file, struct vfsmount *mnt, unsigned int oflags) @@ -173,7 +178,7 @@ cifs_new_fileinfo(struct inode *newinode, __u16 fileHandle, if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { pCifsInode->clientCanCacheAll = true; pCifsInode->clientCanCacheRead = true; - cFYI(1, ("Exclusive Oplock inode %p", newinode)); + cFYI(1, "Exclusive Oplock inode %p", newinode); } else if ((oplock & 0xF) == OPLOCK_READ) pCifsInode->clientCanCacheRead = true; } @@ -183,16 +188,17 @@ cifs_new_fileinfo(struct inode *newinode, __u16 fileHandle, } int cifs_posix_open(char *full_path, struct inode **pinode, - struct vfsmount *mnt, int mode, int oflags, - __u32 *poplock, __u16 *pnetfid, int xid) + struct vfsmount *mnt, struct super_block *sb, + int mode, int oflags, + __u32 *poplock, __u16 *pnetfid, int xid) { int rc; FILE_UNIX_BASIC_INFO *presp_data; __u32 posix_flags = 0; - struct cifs_sb_info *cifs_sb = CIFS_SB(mnt->mnt_sb); + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); struct cifs_fattr fattr; - cFYI(1, ("posix open %s", full_path)); + cFYI(1, "posix open %s", full_path); presp_data = kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL); if (presp_data == NULL) @@ -242,7 +248,8 @@ int cifs_posix_open(char *full_path, struct inode **pinode, /* get new inode and set it up */ if (*pinode == NULL) { - *pinode = cifs_iget(mnt->mnt_sb, &fattr); + cifs_fill_uniqueid(sb, &fattr); + *pinode = cifs_iget(sb, &fattr); if (!*pinode) { rc = -ENOMEM; goto posix_open_ret; @@ -251,7 +258,18 @@ int cifs_posix_open(char *full_path, struct inode **pinode, cifs_fattr_to_inode(*pinode, &fattr); } - cifs_new_fileinfo(*pinode, *pnetfid, NULL, mnt, oflags); + /* + * cifs_fill_filedata() takes care of setting cifsFileInfo pointer to + * file->private_data. + */ + if (mnt) { + struct cifsFileInfo *pfile_info; + + pfile_info = cifs_new_fileinfo(*pinode, *pnetfid, NULL, mnt, + oflags); + if (pfile_info == NULL) + rc = -ENOMEM; + } posix_open_ret: kfree(presp_data); @@ -315,13 +333,14 @@ cifs_create(struct inode *inode, struct dentry *direntry, int mode, if (nd && (nd->flags & LOOKUP_OPEN)) oflags = nd->intent.open.flags; else - oflags = FMODE_READ; + oflags = FMODE_READ | SMB_O_CREAT; if (tcon->unix_ext && (tcon->ses->capabilities & CAP_UNIX) && (CIFS_UNIX_POSIX_PATH_OPS_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability))) { - rc = cifs_posix_open(full_path, &newinode, nd->path.mnt, - mode, oflags, &oplock, &fileHandle, xid); + rc = cifs_posix_open(full_path, &newinode, + nd ? nd->path.mnt : NULL, + inode->i_sb, mode, oflags, &oplock, &fileHandle, xid); /* EIO could indicate that (posix open) operation is not supported, despite what server claimed in capability negotation. 
EREMOTE indicates DFS junction, which is not @@ -358,7 +377,7 @@ cifs_create(struct inode *inode, struct dentry *direntry, int mode, else if ((oflags & O_CREAT) == O_CREAT) disposition = FILE_OPEN_IF; else - cFYI(1, ("Create flag not set in create function")); + cFYI(1, "Create flag not set in create function"); } /* BB add processing to set equivalent of mode - e.g. via CreateX with @@ -394,7 +413,7 @@ cifs_create(struct inode *inode, struct dentry *direntry, int mode, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); } if (rc) { - cFYI(1, ("cifs_create returned 0x%x", rc)); + cFYI(1, "cifs_create returned 0x%x", rc); goto cifs_create_out; } @@ -457,15 +476,22 @@ cifs_create_set_dentry: if (rc == 0) setup_cifs_dentry(tcon, direntry, newinode); else - cFYI(1, ("Create worked, get_inode_info failed rc = %d", rc)); + cFYI(1, "Create worked, get_inode_info failed rc = %d", rc); /* nfsd case - nfs srv does not set nd */ if ((nd == NULL) || (!(nd->flags & LOOKUP_OPEN))) { /* mknod case - do not leave file open */ CIFSSMBClose(xid, tcon, fileHandle); } else if (!(posix_create) && (newinode)) { - cifs_new_fileinfo(newinode, fileHandle, NULL, - nd->path.mnt, oflags); + struct cifsFileInfo *pfile_info; + /* + * cifs_fill_filedata() takes care of setting cifsFileInfo + * pointer to file->private_data. + */ + pfile_info = cifs_new_fileinfo(newinode, fileHandle, NULL, + nd->path.mnt, oflags); + if (pfile_info == NULL) + rc = -ENOMEM; } cifs_create_out: kfree(buf); @@ -531,7 +557,7 @@ int cifs_mknod(struct inode *inode, struct dentry *direntry, int mode, u16 fileHandle; FILE_ALL_INFO *buf; - cFYI(1, ("sfu compat create special file")); + cFYI(1, "sfu compat create special file"); buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); if (buf == NULL) { @@ -616,8 +642,8 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, xid = GetXid(); - cFYI(1, ("parent inode = 0x%p name is: %s and dentry = 0x%p", - parent_dir_inode, direntry->d_name.name, direntry)); + cFYI(1, "parent inode = 0x%p name is: %s and dentry = 0x%p", + parent_dir_inode, direntry->d_name.name, direntry); /* check whether path exists */ @@ -632,7 +658,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, int i; for (i = 0; i < direntry->d_name.len; i++) if (direntry->d_name.name[i] == '\\') { - cFYI(1, ("Invalid file name")); + cFYI(1, "Invalid file name"); FreeXid(xid); return ERR_PTR(-EINVAL); } @@ -657,11 +683,11 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, } if (direntry->d_inode != NULL) { - cFYI(1, ("non-NULL inode in lookup")); + cFYI(1, "non-NULL inode in lookup"); } else { - cFYI(1, ("NULL inode in lookup")); + cFYI(1, "NULL inode in lookup"); } - cFYI(1, ("Full path: %s inode = 0x%p", full_path, direntry->d_inode)); + cFYI(1, "Full path: %s inode = 0x%p", full_path, direntry->d_inode); /* Posix open is only called (at lookup time) for file create now. * For opens (rather than creates), because we do not know if it @@ -678,6 +704,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, (nd->flags & LOOKUP_OPEN) && !pTcon->broken_posix_open && (nd->intent.open.flags & O_CREAT)) { rc = cifs_posix_open(full_path, &newInode, nd->path.mnt, + parent_dir_inode->i_sb, nd->intent.open.create_mode, nd->intent.open.flags, &oplock, &fileHandle, xid); @@ -723,7 +750,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, /* if it was once a directory (but how can we tell?) 
we could do shrink_dcache_parent(direntry); */ } else if (rc != -EACCES) { - cERROR(1, ("Unexpected lookup error %d", rc)); + cERROR(1, "Unexpected lookup error %d", rc); /* We special case check for Access Denied - since that is a common return code */ } @@ -742,8 +769,8 @@ cifs_d_revalidate(struct dentry *direntry, struct nameidata *nd) if (cifs_revalidate_dentry(direntry)) return 0; } else { - cFYI(1, ("neg dentry 0x%p name = %s", - direntry, direntry->d_name.name)); + cFYI(1, "neg dentry 0x%p name = %s", + direntry, direntry->d_name.name); if (time_after(jiffies, direntry->d_time + HZ) || !lookupCacheEnabled) { d_drop(direntry); @@ -758,7 +785,7 @@ cifs_d_revalidate(struct dentry *direntry, struct nameidata *nd) { int rc = 0; - cFYI(1, ("In cifs d_delete, name = %s", direntry->d_name.name)); + cFYI(1, "In cifs d_delete, name = %s", direntry->d_name.name); return rc; } */ diff --git a/fs/cifs/dns_resolve.c b/fs/cifs/dns_resolve.c index 6f8a0e3fb25b..4db2c5e7283f 100644 --- a/fs/cifs/dns_resolve.c +++ b/fs/cifs/dns_resolve.c @@ -106,14 +106,14 @@ dns_resolve_server_name_to_ip(const char *unc, char **ip_addr) /* search for server name delimiter */ len = strlen(unc); if (len < 3) { - cFYI(1, ("%s: unc is too short: %s", __func__, unc)); + cFYI(1, "%s: unc is too short: %s", __func__, unc); return -EINVAL; } len -= 2; name = memchr(unc+2, '\\', len); if (!name) { - cFYI(1, ("%s: probably server name is whole unc: %s", - __func__, unc)); + cFYI(1, "%s: probably server name is whole unc: %s", + __func__, unc); } else { len = (name - unc) - 2/* leading // */; } @@ -127,8 +127,8 @@ dns_resolve_server_name_to_ip(const char *unc, char **ip_addr) name[len] = 0; if (is_ip(name)) { - cFYI(1, ("%s: it is IP, skipping dns upcall: %s", - __func__, name)); + cFYI(1, "%s: it is IP, skipping dns upcall: %s", + __func__, name); data = name; goto skip_upcall; } @@ -138,7 +138,7 @@ dns_resolve_server_name_to_ip(const char *unc, char **ip_addr) len = rkey->type_data.x[0]; data = rkey->payload.data; } else { - cERROR(1, ("%s: unable to resolve: %s", __func__, name)); + cERROR(1, "%s: unable to resolve: %s", __func__, name); goto out; } @@ -148,10 +148,10 @@ skip_upcall: if (*ip_addr) { memcpy(*ip_addr, data, len + 1); if (!IS_ERR(rkey)) - cFYI(1, ("%s: resolved: %s to %s", __func__, + cFYI(1, "%s: resolved: %s to %s", __func__, name, *ip_addr - )); + ); rc = 0; } else { rc = -ENOMEM; diff --git a/fs/cifs/export.c b/fs/cifs/export.c index 6177f7cca16a..993f82045bf6 100644 --- a/fs/cifs/export.c +++ b/fs/cifs/export.c @@ -49,7 +49,7 @@ static struct dentry *cifs_get_parent(struct dentry *dentry) { /* BB need to add code here eventually to enable export via NFSD */ - cFYI(1, ("get parent for %p", dentry)); + cFYI(1, "get parent for %p", dentry); return ERR_PTR(-EACCES); } diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 9b11a8f56f3a..a83541ec9713 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -3,7 +3,7 @@ * * vfs operations that deal with files * - * Copyright (C) International Business Machines Corp., 2002,2007 + * Copyright (C) International Business Machines Corp., 2002,2010 * Author(s): Steve French (sfrench@us.ibm.com) * Jeremy Allison (jra@samba.org) * @@ -108,8 +108,7 @@ static inline int cifs_get_disposition(unsigned int flags) /* all arguments to this function must be checked for validity in caller */ static inline int cifs_posix_open_inode_helper(struct inode *inode, struct file *file, - struct cifsInodeInfo *pCifsInode, - struct cifsFileInfo *pCifsFile, __u32 oplock, + struct cifsInodeInfo 
*pCifsInode, __u32 oplock, u16 netfid) { @@ -136,15 +135,15 @@ cifs_posix_open_inode_helper(struct inode *inode, struct file *file, if (timespec_equal(&file->f_path.dentry->d_inode->i_mtime, &temp) && (file->f_path.dentry->d_inode->i_size == (loff_t)le64_to_cpu(buf->EndOfFile))) { - cFYI(1, ("inode unchanged on server")); + cFYI(1, "inode unchanged on server"); } else { if (file->f_path.dentry->d_inode->i_mapping) { rc = filemap_write_and_wait(file->f_path.dentry->d_inode->i_mapping); if (rc != 0) CIFS_I(file->f_path.dentry->d_inode)->write_behind_rc = rc; } - cFYI(1, ("invalidating remote inode since open detected it " - "changed")); + cFYI(1, "invalidating remote inode since open detected it " + "changed"); invalidate_remote_inode(file->f_path.dentry->d_inode); } */ @@ -152,8 +151,8 @@ psx_client_can_cache: if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { pCifsInode->clientCanCacheAll = true; pCifsInode->clientCanCacheRead = true; - cFYI(1, ("Exclusive Oplock granted on inode %p", - file->f_path.dentry->d_inode)); + cFYI(1, "Exclusive Oplock granted on inode %p", + file->f_path.dentry->d_inode); } else if ((oplock & 0xF) == OPLOCK_READ) pCifsInode->clientCanCacheRead = true; @@ -190,8 +189,8 @@ cifs_fill_filedata(struct file *file) if (file->private_data != NULL) { return pCifsFile; } else if ((file->f_flags & O_CREAT) && (file->f_flags & O_EXCL)) - cERROR(1, ("could not find file instance for " - "new file %p", file)); + cERROR(1, "could not find file instance for " + "new file %p", file); return NULL; } @@ -217,7 +216,7 @@ static inline int cifs_open_inode_helper(struct inode *inode, struct file *file, if (timespec_equal(&file->f_path.dentry->d_inode->i_mtime, &temp) && (file->f_path.dentry->d_inode->i_size == (loff_t)le64_to_cpu(buf->EndOfFile))) { - cFYI(1, ("inode unchanged on server")); + cFYI(1, "inode unchanged on server"); } else { if (file->f_path.dentry->d_inode->i_mapping) { /* BB no need to lock inode until after invalidate @@ -226,8 +225,8 @@ static inline int cifs_open_inode_helper(struct inode *inode, struct file *file, if (rc != 0) CIFS_I(file->f_path.dentry->d_inode)->write_behind_rc = rc; } - cFYI(1, ("invalidating remote inode since open detected it " - "changed")); + cFYI(1, "invalidating remote inode since open detected it " + "changed"); invalidate_remote_inode(file->f_path.dentry->d_inode); } @@ -242,8 +241,8 @@ client_can_cache: if ((*oplock & 0xF) == OPLOCK_EXCLUSIVE) { pCifsInode->clientCanCacheAll = true; pCifsInode->clientCanCacheRead = true; - cFYI(1, ("Exclusive Oplock granted on inode %p", - file->f_path.dentry->d_inode)); + cFYI(1, "Exclusive Oplock granted on inode %p", + file->f_path.dentry->d_inode); } else if ((*oplock & 0xF) == OPLOCK_READ) pCifsInode->clientCanCacheRead = true; @@ -285,8 +284,8 @@ int cifs_open(struct inode *inode, struct file *file) return rc; } - cFYI(1, ("inode = 0x%p file flags are 0x%x for %s", - inode, file->f_flags, full_path)); + cFYI(1, "inode = 0x%p file flags are 0x%x for %s", + inode, file->f_flags, full_path); if (oplockEnabled) oplock = REQ_OPLOCK; @@ -298,27 +297,29 @@ int cifs_open(struct inode *inode, struct file *file) (CIFS_UNIX_POSIX_PATH_OPS_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability))) { int oflags = (int) cifs_posix_convert_flags(file->f_flags); + oflags |= SMB_O_CREAT; /* can not refresh inode info since size could be stale */ rc = cifs_posix_open(full_path, &inode, file->f_path.mnt, - cifs_sb->mnt_file_mode /* ignored */, - oflags, &oplock, &netfid, xid); + inode->i_sb, + cifs_sb->mnt_file_mode /* ignored */, + 
oflags, &oplock, &netfid, xid); if (rc == 0) { - cFYI(1, ("posix open succeeded")); + cFYI(1, "posix open succeeded"); /* no need for special case handling of setting mode on read only files needed here */ pCifsFile = cifs_fill_filedata(file); cifs_posix_open_inode_helper(inode, file, pCifsInode, - pCifsFile, oplock, netfid); + oplock, netfid); goto out; } else if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { if (tcon->ses->serverNOS) - cERROR(1, ("server %s of type %s returned" + cERROR(1, "server %s of type %s returned" " unexpected error on SMB posix open" ", disabling posix open support." " Check if server update available.", tcon->ses->serverName, - tcon->ses->serverNOS)); + tcon->ses->serverNOS); tcon->broken_posix_open = true; } else if ((rc != -EIO) && (rc != -EREMOTE) && (rc != -EOPNOTSUPP)) /* path not found or net err */ @@ -386,7 +387,7 @@ int cifs_open(struct inode *inode, struct file *file) & CIFS_MOUNT_MAP_SPECIAL_CHR); } if (rc) { - cFYI(1, ("cifs_open returned 0x%x", rc)); + cFYI(1, "cifs_open returned 0x%x", rc); goto out; } @@ -469,7 +470,7 @@ static int cifs_reopen_file(struct file *file, bool can_flush) } if (file->f_path.dentry == NULL) { - cERROR(1, ("no valid name if dentry freed")); + cERROR(1, "no valid name if dentry freed"); dump_stack(); rc = -EBADF; goto reopen_error_exit; @@ -477,7 +478,7 @@ static int cifs_reopen_file(struct file *file, bool can_flush) inode = file->f_path.dentry->d_inode; if (inode == NULL) { - cERROR(1, ("inode not valid")); + cERROR(1, "inode not valid"); dump_stack(); rc = -EBADF; goto reopen_error_exit; @@ -499,8 +500,8 @@ reopen_error_exit: return rc; } - cFYI(1, ("inode = 0x%p file flags 0x%x for %s", - inode, file->f_flags, full_path)); + cFYI(1, "inode = 0x%p file flags 0x%x for %s", + inode, file->f_flags, full_path); if (oplockEnabled) oplock = REQ_OPLOCK; @@ -513,10 +514,11 @@ reopen_error_exit: int oflags = (int) cifs_posix_convert_flags(file->f_flags); /* can not refresh inode info since size could be stale */ rc = cifs_posix_open(full_path, NULL, file->f_path.mnt, - cifs_sb->mnt_file_mode /* ignored */, - oflags, &oplock, &netfid, xid); + inode->i_sb, + cifs_sb->mnt_file_mode /* ignored */, + oflags, &oplock, &netfid, xid); if (rc == 0) { - cFYI(1, ("posix reopen succeeded")); + cFYI(1, "posix reopen succeeded"); goto reopen_success; } /* fallthrough to retry open the old way on errors, especially @@ -537,8 +539,8 @@ reopen_error_exit: CIFS_MOUNT_MAP_SPECIAL_CHR); if (rc) { mutex_unlock(&pCifsFile->fh_mutex); - cFYI(1, ("cifs_open returned 0x%x", rc)); - cFYI(1, ("oplock: %d", oplock)); + cFYI(1, "cifs_open returned 0x%x", rc); + cFYI(1, "oplock: %d", oplock); } else { reopen_success: pCifsFile->netfid = netfid; @@ -570,8 +572,8 @@ reopen_success: if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { pCifsInode->clientCanCacheAll = true; pCifsInode->clientCanCacheRead = true; - cFYI(1, ("Exclusive Oplock granted on inode %p", - file->f_path.dentry->d_inode)); + cFYI(1, "Exclusive Oplock granted on inode %p", + file->f_path.dentry->d_inode); } else if ((oplock & 0xF) == OPLOCK_READ) { pCifsInode->clientCanCacheRead = true; pCifsInode->clientCanCacheAll = false; @@ -619,8 +621,7 @@ int cifs_close(struct inode *inode, struct file *file) the struct would be in each open file, but this should give enough time to clear the socket */ - cFYI(DBG2, - ("close delay, write pending")); + cFYI(DBG2, "close delay, write pending"); msleep(timeout); timeout *= 4; } @@ -653,7 +654,7 @@ int cifs_close(struct inode *inode, struct file *file) 
read_lock(&GlobalSMBSeslock); if (list_empty(&(CIFS_I(inode)->openFileList))) { - cFYI(1, ("closing last open instance for inode %p", inode)); + cFYI(1, "closing last open instance for inode %p", inode); /* if the file is not open we do not know if we can cache info on this inode, much less write behind and read ahead */ CIFS_I(inode)->clientCanCacheRead = false; @@ -674,7 +675,7 @@ int cifs_closedir(struct inode *inode, struct file *file) (struct cifsFileInfo *)file->private_data; char *ptmp; - cFYI(1, ("Closedir inode = 0x%p", inode)); + cFYI(1, "Closedir inode = 0x%p", inode); xid = GetXid(); @@ -685,22 +686,22 @@ int cifs_closedir(struct inode *inode, struct file *file) pTcon = cifs_sb->tcon; - cFYI(1, ("Freeing private data in close dir")); + cFYI(1, "Freeing private data in close dir"); write_lock(&GlobalSMBSeslock); if (!pCFileStruct->srch_inf.endOfSearch && !pCFileStruct->invalidHandle) { pCFileStruct->invalidHandle = true; write_unlock(&GlobalSMBSeslock); rc = CIFSFindClose(xid, pTcon, pCFileStruct->netfid); - cFYI(1, ("Closing uncompleted readdir with rc %d", - rc)); + cFYI(1, "Closing uncompleted readdir with rc %d", + rc); /* not much we can do if it fails anyway, ignore rc */ rc = 0; } else write_unlock(&GlobalSMBSeslock); ptmp = pCFileStruct->srch_inf.ntwrk_buf_start; if (ptmp) { - cFYI(1, ("closedir free smb buf in srch struct")); + cFYI(1, "closedir free smb buf in srch struct"); pCFileStruct->srch_inf.ntwrk_buf_start = NULL; if (pCFileStruct->srch_inf.smallBuf) cifs_small_buf_release(ptmp); @@ -748,49 +749,49 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *pfLock) rc = -EACCES; xid = GetXid(); - cFYI(1, ("Lock parm: 0x%x flockflags: " + cFYI(1, "Lock parm: 0x%x flockflags: " "0x%x flocktype: 0x%x start: %lld end: %lld", cmd, pfLock->fl_flags, pfLock->fl_type, pfLock->fl_start, - pfLock->fl_end)); + pfLock->fl_end); if (pfLock->fl_flags & FL_POSIX) - cFYI(1, ("Posix")); + cFYI(1, "Posix"); if (pfLock->fl_flags & FL_FLOCK) - cFYI(1, ("Flock")); + cFYI(1, "Flock"); if (pfLock->fl_flags & FL_SLEEP) { - cFYI(1, ("Blocking lock")); + cFYI(1, "Blocking lock"); wait_flag = true; } if (pfLock->fl_flags & FL_ACCESS) - cFYI(1, ("Process suspended by mandatory locking - " - "not implemented yet")); + cFYI(1, "Process suspended by mandatory locking - " + "not implemented yet"); if (pfLock->fl_flags & FL_LEASE) - cFYI(1, ("Lease on file - not implemented yet")); + cFYI(1, "Lease on file - not implemented yet"); if (pfLock->fl_flags & (~(FL_POSIX | FL_FLOCK | FL_SLEEP | FL_ACCESS | FL_LEASE))) - cFYI(1, ("Unknown lock flags 0x%x", pfLock->fl_flags)); + cFYI(1, "Unknown lock flags 0x%x", pfLock->fl_flags); if (pfLock->fl_type == F_WRLCK) { - cFYI(1, ("F_WRLCK ")); + cFYI(1, "F_WRLCK "); numLock = 1; } else if (pfLock->fl_type == F_UNLCK) { - cFYI(1, ("F_UNLCK")); + cFYI(1, "F_UNLCK"); numUnlock = 1; /* Check if unlock includes more than one lock range */ } else if (pfLock->fl_type == F_RDLCK) { - cFYI(1, ("F_RDLCK")); + cFYI(1, "F_RDLCK"); lockType |= LOCKING_ANDX_SHARED_LOCK; numLock = 1; } else if (pfLock->fl_type == F_EXLCK) { - cFYI(1, ("F_EXLCK")); + cFYI(1, "F_EXLCK"); numLock = 1; } else if (pfLock->fl_type == F_SHLCK) { - cFYI(1, ("F_SHLCK")); + cFYI(1, "F_SHLCK"); lockType |= LOCKING_ANDX_SHARED_LOCK; numLock = 1; } else - cFYI(1, ("Unknown type of lock")); + cFYI(1, "Unknown type of lock"); cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); tcon = cifs_sb->tcon; @@ -833,8 +834,8 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *pfLock) 0 /* wait flag */ ); 
pfLock->fl_type = F_UNLCK; if (rc != 0) - cERROR(1, ("Error unlocking previously locked " - "range %d during test of lock", rc)); + cERROR(1, "Error unlocking previously locked " + "range %d during test of lock", rc); rc = 0; } else { @@ -856,9 +857,9 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *pfLock) 0 /* wait flag */); pfLock->fl_type = F_RDLCK; if (rc != 0) - cERROR(1, ("Error unlocking " + cERROR(1, "Error unlocking " "previously locked range %d " - "during test of lock", rc)); + "during test of lock", rc); rc = 0; } else { pfLock->fl_type = F_WRLCK; @@ -923,9 +924,10 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *pfLock) 1, 0, li->type, false); if (stored_rc) rc = stored_rc; - - list_del(&li->llist); - kfree(li); + else { + list_del(&li->llist); + kfree(li); + } } } mutex_unlock(&fid->lock_mutex); @@ -988,9 +990,8 @@ ssize_t cifs_user_write(struct file *file, const char __user *write_data, pTcon = cifs_sb->tcon; - /* cFYI(1, - (" write %d bytes to offset %lld of %s", write_size, - *poffset, file->f_path.dentry->d_name.name)); */ + /* cFYI(1, " write %d bytes to offset %lld of %s", write_size, + *poffset, file->f_path.dentry->d_name.name); */ if (file->private_data == NULL) return -EBADF; @@ -1091,8 +1092,8 @@ static ssize_t cifs_write(struct file *file, const char *write_data, pTcon = cifs_sb->tcon; - cFYI(1, ("write %zd bytes to offset %lld of %s", write_size, - *poffset, file->f_path.dentry->d_name.name)); + cFYI(1, "write %zd bytes to offset %lld of %s", write_size, + *poffset, file->f_path.dentry->d_name.name); if (file->private_data == NULL) return -EBADF; @@ -1233,7 +1234,7 @@ struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *cifs_inode) it being zero) during stress testcases so we need to check for it */ if (cifs_inode == NULL) { - cERROR(1, ("Null inode passed to cifs_writeable_file")); + cERROR(1, "Null inode passed to cifs_writeable_file"); dump_stack(); return NULL; } @@ -1277,7 +1278,7 @@ refind_writable: again. Note that it would be bad to hold up writepages here (rather than in caller) with continuous retries */ - cFYI(1, ("wp failed on reopen file")); + cFYI(1, "wp failed on reopen file"); read_lock(&GlobalSMBSeslock); /* can not use this handle, no write pending on this one after all */ @@ -1353,7 +1354,7 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) else if (bytes_written < 0) rc = bytes_written; } else { - cFYI(1, ("No writeable filehandles for inode")); + cFYI(1, "No writeable filehandles for inode"); rc = -EIO; } @@ -1525,7 +1526,7 @@ retry: */ open_file = find_writable_file(CIFS_I(mapping->host)); if (!open_file) { - cERROR(1, ("No writable handles for inode")); + cERROR(1, "No writable handles for inode"); rc = -EBADF; } else { long_op = cifs_write_timeout(cifsi, offset); @@ -1538,8 +1539,8 @@ retry: cifs_update_eof(cifsi, offset, bytes_written); if (rc || bytes_written < bytes_to_write) { - cERROR(1, ("Write2 ret %d, wrote %d", - rc, bytes_written)); + cERROR(1, "Write2 ret %d, wrote %d", + rc, bytes_written); /* BB what if continued retry is requested via mount flags? */ if (rc == -ENOSPC) @@ -1600,7 +1601,7 @@ static int cifs_writepage(struct page *page, struct writeback_control *wbc) /* BB add check for wbc flags */ page_cache_get(page); if (!PageUptodate(page)) - cFYI(1, ("ppw - page not up to date")); + cFYI(1, "ppw - page not up to date"); /* * Set the "writeback" flag, and clear "dirty" in the radix tree. 
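Most of the hunks in this patch are a mechanical conversion of the cifs debug calls from the old double-parenthesized form, cFYI(1, ("fmt", args)) and cERROR(1, ("fmt", args)), to plain printf-style calls, cFYI(1, "fmt", args). The macro definitions themselves live in fs/cifs/cifs_debug.h and are not part of the hunks shown here; the following is only a minimal, self-contained sketch of the variadic shape the new call sites assume, with a userspace printf standing in for printk and illustrative names for the debug switches:

#include <stdio.h>

/* Illustrative stand-ins for the kernel's debug switches; the real
 * definitions are in fs/cifs/cifs_debug.h and may differ. */
static int cifsFYI = 1;
#define CIFS_INFO 0x01

/* Hypothetical variadic forms matching the new call sites: the format
 * string and its arguments are passed directly, no inner parentheses. */
#define cFYI(set, fmt, ...) \
	do { \
		if ((set) && (cifsFYI & CIFS_INFO)) \
			printf("CIFS FYI: " fmt "\n", ##__VA_ARGS__); \
	} while (0)

#define cERROR(set, fmt, ...) \
	do { \
		if (set) \
			printf("CIFS VFS: " fmt "\n", ##__VA_ARGS__); \
	} while (0)

int main(void)
{
	int rc = -22;

	/* old style was: cFYI(1, ("cifs_open returned 0x%x", rc)); */
	cFYI(1, "cifs_open returned 0x%x", rc);
	cERROR(1, "Send error in SessSetup = %d", rc);
	return 0;
}

With a variadic macro of this shape, each call site simply drops the inner parentheses and keeps the format string and arguments unchanged, which is exactly the pattern repeated throughout the hunks above and below.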
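One behavioural change hides among the conversions in the cifs_lock() hunk above: on unlock, a queued lock entry is now removed from the per-fid list and freed only when the server unlock succeeded (stored_rc == 0); previously it was deleted unconditionally, so a failed unlock would silently drop the entry. The loop enclosing that snippet is not visible in the hunk, so the following is only a small userspace sketch of the same "unlink and free only on success" pattern, with a stubbed server_unlock() standing in for the SMB round trip:

#include <stdio.h>
#include <stdlib.h>

struct lock_entry {
	long long offset;
	long long length;
	struct lock_entry *next;
};

/* Stub for the server round trip; nonzero means the unlock failed. */
static int server_unlock(const struct lock_entry *e)
{
	return (e->offset % 2) ? -5 /* simulated -EIO */ : 0;
}

/* Unlink and free only the entries whose unlock succeeded;
 * entries that failed stay queued so they are not lost. */
static void unlock_all(struct lock_entry **head)
{
	struct lock_entry **pp = head;

	while (*pp) {
		struct lock_entry *e = *pp;

		if (server_unlock(e) == 0) {
			*pp = e->next;		/* unlink on success */
			free(e);
		} else {
			pp = &e->next;		/* keep the failed entry */
		}
	}
}

int main(void)
{
	struct lock_entry *head = NULL;

	for (long long i = 0; i < 4; i++) {
		struct lock_entry *e = malloc(sizeof(*e));
		if (!e)
			return 1;
		e->offset = i;
		e->length = 1;
		e->next = head;
		head = e;
	}
	unlock_all(&head);
	for (struct lock_entry *e = head; e; e = e->next)
		printf("still queued: offset %lld\n", e->offset);
	return 0;
}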
@@ -1629,8 +1630,8 @@ static int cifs_write_end(struct file *file, struct address_space *mapping, int rc; struct inode *inode = mapping->host; - cFYI(1, ("write_end for page %p from pos %lld with %d bytes", - page, pos, copied)); + cFYI(1, "write_end for page %p from pos %lld with %d bytes", + page, pos, copied); if (PageChecked(page)) { if (copied == len) @@ -1686,8 +1687,8 @@ int cifs_fsync(struct file *file, struct dentry *dentry, int datasync) xid = GetXid(); - cFYI(1, ("Sync file - name: %s datasync: 0x%x", - dentry->d_name.name, datasync)); + cFYI(1, "Sync file - name: %s datasync: 0x%x", + dentry->d_name.name, datasync); rc = filemap_write_and_wait(inode->i_mapping); if (rc == 0) { @@ -1711,7 +1712,7 @@ int cifs_fsync(struct file *file, struct dentry *dentry, int datasync) unsigned int rpages = 0; int rc = 0; - cFYI(1, ("sync page %p",page)); + cFYI(1, "sync page %p", page); mapping = page->mapping; if (!mapping) return 0; @@ -1722,7 +1723,7 @@ int cifs_fsync(struct file *file, struct dentry *dentry, int datasync) /* fill in rpages then result = cifs_pagein_inode(inode, index, rpages); */ /* BB finish */ -/* cFYI(1, ("rpages is %d for sync page of Index %ld", rpages, index)); +/* cFYI(1, "rpages is %d for sync page of Index %ld", rpages, index); #if 0 if (rc < 0) @@ -1756,7 +1757,7 @@ int cifs_flush(struct file *file, fl_owner_t id) CIFS_I(inode)->write_behind_rc = 0; } - cFYI(1, ("Flush inode %p file %p rc %d", inode, file, rc)); + cFYI(1, "Flush inode %p file %p rc %d", inode, file, rc); return rc; } @@ -1788,7 +1789,7 @@ ssize_t cifs_user_read(struct file *file, char __user *read_data, open_file = (struct cifsFileInfo *)file->private_data; if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cFYI(1, ("attempting read on write only file instance")); + cFYI(1, "attempting read on write only file instance"); for (total_read = 0, current_offset = read_data; read_size > total_read; @@ -1869,7 +1870,7 @@ static ssize_t cifs_read(struct file *file, char *read_data, size_t read_size, open_file = (struct cifsFileInfo *)file->private_data; if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cFYI(1, ("attempting read on write only file instance")); + cFYI(1, "attempting read on write only file instance"); for (total_read = 0, current_offset = read_data; read_size > total_read; @@ -1920,7 +1921,7 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) xid = GetXid(); rc = cifs_revalidate_file(file); if (rc) { - cFYI(1, ("Validation prior to mmap failed, error=%d", rc)); + cFYI(1, "Validation prior to mmap failed, error=%d", rc); FreeXid(xid); return rc; } @@ -1931,8 +1932,7 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) static void cifs_copy_cache_pages(struct address_space *mapping, - struct list_head *pages, int bytes_read, char *data, - struct pagevec *plru_pvec) + struct list_head *pages, int bytes_read, char *data) { struct page *page; char *target; @@ -1944,10 +1944,10 @@ static void cifs_copy_cache_pages(struct address_space *mapping, page = list_entry(pages->prev, struct page, lru); list_del(&page->lru); - if (add_to_page_cache(page, mapping, page->index, + if (add_to_page_cache_lru(page, mapping, page->index, GFP_KERNEL)) { page_cache_release(page); - cFYI(1, ("Add page cache failed")); + cFYI(1, "Add page cache failed"); data += PAGE_CACHE_SIZE; bytes_read -= PAGE_CACHE_SIZE; continue; @@ -1970,8 +1970,6 @@ static void cifs_copy_cache_pages(struct address_space *mapping, flush_dcache_page(page); SetPageUptodate(page); unlock_page(page); - if 
(!pagevec_add(plru_pvec, page)) - __pagevec_lru_add_file(plru_pvec); data += PAGE_CACHE_SIZE; } return; @@ -1990,7 +1988,6 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, unsigned int read_size, i; char *smb_read_data = NULL; struct smb_com_read_rsp *pSMBr; - struct pagevec lru_pvec; struct cifsFileInfo *open_file; int buf_type = CIFS_NO_BUFFER; @@ -2004,8 +2001,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); pTcon = cifs_sb->tcon; - pagevec_init(&lru_pvec, 0); - cFYI(DBG2, ("rpages: num pages %d", num_pages)); + cFYI(DBG2, "rpages: num pages %d", num_pages); for (i = 0; i < num_pages; ) { unsigned contig_pages; struct page *tmp_page; @@ -2038,8 +2034,8 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, /* Read size needs to be in multiples of one page */ read_size = min_t(const unsigned int, read_size, cifs_sb->rsize & PAGE_CACHE_MASK); - cFYI(DBG2, ("rpages: read size 0x%x contiguous pages %d", - read_size, contig_pages)); + cFYI(DBG2, "rpages: read size 0x%x contiguous pages %d", + read_size, contig_pages); rc = -EAGAIN; while (rc == -EAGAIN) { if ((open_file->invalidHandle) && @@ -2066,14 +2062,14 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, } } if ((rc < 0) || (smb_read_data == NULL)) { - cFYI(1, ("Read error in readpages: %d", rc)); + cFYI(1, "Read error in readpages: %d", rc); break; } else if (bytes_read > 0) { task_io_account_read(bytes_read); pSMBr = (struct smb_com_read_rsp *)smb_read_data; cifs_copy_cache_pages(mapping, page_list, bytes_read, smb_read_data + 4 /* RFC1001 hdr */ + - le16_to_cpu(pSMBr->DataOffset), &lru_pvec); + le16_to_cpu(pSMBr->DataOffset)); i += bytes_read >> PAGE_CACHE_SHIFT; cifs_stats_bytes_read(pTcon, bytes_read); @@ -2089,9 +2085,9 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, /* break; */ } } else { - cFYI(1, ("No bytes read (%d) at offset %lld . " - "Cleaning remaining pages from readahead list", - bytes_read, offset)); + cFYI(1, "No bytes read (%d) at offset %lld . " + "Cleaning remaining pages from readahead list", + bytes_read, offset); /* BB turn off caching and do new lookup on file size at server? 
*/ break; @@ -2106,8 +2102,6 @@ static int cifs_readpages(struct file *file, struct address_space *mapping, bytes_read = 0; } - pagevec_lru_add_file(&lru_pvec); - /* need to free smb_read_data buf before exit */ if (smb_read_data) { if (buf_type == CIFS_SMALL_BUFFER) @@ -2136,7 +2130,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page, if (rc < 0) goto io_error; else - cFYI(1, ("Bytes read %d", rc)); + cFYI(1, "Bytes read %d", rc); file->f_path.dentry->d_inode->i_atime = current_fs_time(file->f_path.dentry->d_inode->i_sb); @@ -2168,8 +2162,8 @@ static int cifs_readpage(struct file *file, struct page *page) return rc; } - cFYI(1, ("readpage %p at offset %d 0x%x\n", - page, (int)offset, (int)offset)); + cFYI(1, "readpage %p at offset %d 0x%x\n", + page, (int)offset, (int)offset); rc = cifs_readpage_worker(file, page, &offset); @@ -2239,7 +2233,7 @@ static int cifs_write_begin(struct file *file, struct address_space *mapping, struct page *page; int rc = 0; - cFYI(1, ("write_begin from %lld len %d", (long long)pos, len)); + cFYI(1, "write_begin from %lld len %d", (long long)pos, len); page = grab_cache_page_write_begin(mapping, index, flags); if (!page) { @@ -2311,12 +2305,10 @@ cifs_oplock_break(struct slow_work *work) int rc, waitrc = 0; if (inode && S_ISREG(inode->i_mode)) { -#ifdef CONFIG_CIFS_EXPERIMENTAL - if (cinode->clientCanCacheAll == 0) + if (cinode->clientCanCacheRead) break_lease(inode, O_RDONLY); - else if (cinode->clientCanCacheRead == 0) + else break_lease(inode, O_WRONLY); -#endif rc = filemap_fdatawrite(inode->i_mapping); if (cinode->clientCanCacheRead == 0) { waitrc = filemap_fdatawait(inode->i_mapping); @@ -2326,7 +2318,7 @@ cifs_oplock_break(struct slow_work *work) rc = waitrc; if (rc) cinode->write_behind_rc = rc; - cFYI(1, ("Oplock flush inode %p rc %d", inode, rc)); + cFYI(1, "Oplock flush inode %p rc %d", inode, rc); } /* @@ -2338,7 +2330,7 @@ cifs_oplock_break(struct slow_work *work) if (!cfile->closePend && !cfile->oplock_break_cancelled) { rc = CIFSSMBLock(0, cifs_sb->tcon, cfile->netfid, 0, 0, 0, 0, LOCKING_ANDX_OPLOCK_RELEASE, false); - cFYI(1, ("Oplock release rc = %d", rc)); + cFYI(1, "Oplock release rc = %d", rc); } } diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c index 35ec11716213..62b324f26a56 100644 --- a/fs/cifs/inode.c +++ b/fs/cifs/inode.c @@ -1,7 +1,7 @@ /* * fs/cifs/inode.c * - * Copyright (C) International Business Machines Corp., 2002,2008 + * Copyright (C) International Business Machines Corp., 2002,2010 * Author(s): Steve French (sfrench@us.ibm.com) * * This library is free software; you can redistribute it and/or modify @@ -86,30 +86,30 @@ cifs_revalidate_cache(struct inode *inode, struct cifs_fattr *fattr) { struct cifsInodeInfo *cifs_i = CIFS_I(inode); - cFYI(1, ("%s: revalidating inode %llu", __func__, cifs_i->uniqueid)); + cFYI(1, "%s: revalidating inode %llu", __func__, cifs_i->uniqueid); if (inode->i_state & I_NEW) { - cFYI(1, ("%s: inode %llu is new", __func__, cifs_i->uniqueid)); + cFYI(1, "%s: inode %llu is new", __func__, cifs_i->uniqueid); return; } /* don't bother with revalidation if we have an oplock */ if (cifs_i->clientCanCacheRead) { - cFYI(1, ("%s: inode %llu is oplocked", __func__, - cifs_i->uniqueid)); + cFYI(1, "%s: inode %llu is oplocked", __func__, + cifs_i->uniqueid); return; } /* revalidate if mtime or size have changed */ if (timespec_equal(&inode->i_mtime, &fattr->cf_mtime) && cifs_i->server_eof == fattr->cf_eof) { - cFYI(1, ("%s: inode %llu is unchanged", __func__, - cifs_i->uniqueid)); + cFYI(1, 
"%s: inode %llu is unchanged", __func__, + cifs_i->uniqueid); return; } - cFYI(1, ("%s: invalidating inode %llu mapping", __func__, - cifs_i->uniqueid)); + cFYI(1, "%s: invalidating inode %llu mapping", __func__, + cifs_i->uniqueid); cifs_i->invalid_mapping = true; } @@ -137,15 +137,14 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) inode->i_mode = fattr->cf_mode; cifs_i->cifsAttrs = fattr->cf_cifsattrs; - cifs_i->uniqueid = fattr->cf_uniqueid; if (fattr->cf_flags & CIFS_FATTR_NEED_REVAL) cifs_i->time = 0; else cifs_i->time = jiffies; - cFYI(1, ("inode 0x%p old_time=%ld new_time=%ld", inode, - oldtime, cifs_i->time)); + cFYI(1, "inode 0x%p old_time=%ld new_time=%ld", inode, + oldtime, cifs_i->time); cifs_i->delete_pending = fattr->cf_flags & CIFS_FATTR_DELETE_PENDING; @@ -170,6 +169,17 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) cifs_set_ops(inode, fattr->cf_flags & CIFS_FATTR_DFS_REFERRAL); } +void +cifs_fill_uniqueid(struct super_block *sb, struct cifs_fattr *fattr) +{ + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); + + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) + return; + + fattr->cf_uniqueid = iunique(sb, ROOT_I); +} + /* Fill a cifs_fattr struct with info from FILE_UNIX_BASIC_INFO. */ void cifs_unix_basic_to_fattr(struct cifs_fattr *fattr, FILE_UNIX_BASIC_INFO *info, @@ -227,7 +237,7 @@ cifs_unix_basic_to_fattr(struct cifs_fattr *fattr, FILE_UNIX_BASIC_INFO *info, /* safest to call it a file if we do not know */ fattr->cf_mode |= S_IFREG; fattr->cf_dtype = DT_REG; - cFYI(1, ("unknown type %d", le32_to_cpu(info->Type))); + cFYI(1, "unknown type %d", le32_to_cpu(info->Type)); break; } @@ -256,7 +266,7 @@ cifs_create_dfs_fattr(struct cifs_fattr *fattr, struct super_block *sb) { struct cifs_sb_info *cifs_sb = CIFS_SB(sb); - cFYI(1, ("creating fake fattr for DFS referral")); + cFYI(1, "creating fake fattr for DFS referral"); memset(fattr, 0, sizeof(*fattr)); fattr->cf_mode = S_IFDIR | S_IXUGO | S_IRWXU; @@ -305,7 +315,7 @@ int cifs_get_inode_info_unix(struct inode **pinode, struct cifs_sb_info *cifs_sb = CIFS_SB(sb); tcon = cifs_sb->tcon; - cFYI(1, ("Getting info on %s", full_path)); + cFYI(1, "Getting info on %s", full_path); /* could have done a find first instead but this returns more info */ rc = CIFSSMBUnixQPathInfo(xid, tcon, full_path, &find_data, @@ -323,6 +333,7 @@ int cifs_get_inode_info_unix(struct inode **pinode, if (*pinode == NULL) { /* get new inode */ + cifs_fill_uniqueid(sb, &fattr); *pinode = cifs_iget(sb, &fattr); if (!*pinode) rc = -ENOMEM; @@ -373,7 +384,7 @@ cifs_sfu_type(struct cifs_fattr *fattr, const unsigned char *path, &bytes_read, &pbuf, &buf_type); if ((rc == 0) && (bytes_read >= 8)) { if (memcmp("IntxBLK", pbuf, 8) == 0) { - cFYI(1, ("Block device")); + cFYI(1, "Block device"); fattr->cf_mode |= S_IFBLK; fattr->cf_dtype = DT_BLK; if (bytes_read == 24) { @@ -385,7 +396,7 @@ cifs_sfu_type(struct cifs_fattr *fattr, const unsigned char *path, fattr->cf_rdev = MKDEV(mjr, mnr); } } else if (memcmp("IntxCHR", pbuf, 8) == 0) { - cFYI(1, ("Char device")); + cFYI(1, "Char device"); fattr->cf_mode |= S_IFCHR; fattr->cf_dtype = DT_CHR; if (bytes_read == 24) { @@ -397,7 +408,7 @@ cifs_sfu_type(struct cifs_fattr *fattr, const unsigned char *path, fattr->cf_rdev = MKDEV(mjr, mnr); } } else if (memcmp("IntxLNK", pbuf, 7) == 0) { - cFYI(1, ("Symlink")); + cFYI(1, "Symlink"); fattr->cf_mode |= S_IFLNK; fattr->cf_dtype = DT_LNK; } else { @@ -439,10 +450,10 @@ static int cifs_sfu_mode(struct cifs_fattr *fattr, const 
unsigned char *path, else if (rc > 3) { mode = le32_to_cpu(*((__le32 *)ea_value)); fattr->cf_mode &= ~SFBITS_MASK; - cFYI(1, ("special bits 0%o org mode 0%o", mode, - fattr->cf_mode)); + cFYI(1, "special bits 0%o org mode 0%o", mode, + fattr->cf_mode); fattr->cf_mode = (mode & SFBITS_MASK) | fattr->cf_mode; - cFYI(1, ("special mode bits 0%o", mode)); + cFYI(1, "special mode bits 0%o", mode); } return 0; @@ -548,11 +559,11 @@ int cifs_get_inode_info(struct inode **pinode, struct cifs_fattr fattr; pTcon = cifs_sb->tcon; - cFYI(1, ("Getting info on %s", full_path)); + cFYI(1, "Getting info on %s", full_path); if ((pfindData == NULL) && (*pinode != NULL)) { if (CIFS_I(*pinode)->clientCanCacheRead) { - cFYI(1, ("No need to revalidate cached inode sizes")); + cFYI(1, "No need to revalidate cached inode sizes"); return rc; } } @@ -618,7 +629,7 @@ int cifs_get_inode_info(struct inode **pinode, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); if (rc1 || !fattr.cf_uniqueid) { - cFYI(1, ("GetSrvInodeNum rc %d", rc1)); + cFYI(1, "GetSrvInodeNum rc %d", rc1); fattr.cf_uniqueid = iunique(sb, ROOT_I); cifs_autodisable_serverino(cifs_sb); } @@ -634,13 +645,13 @@ int cifs_get_inode_info(struct inode **pinode, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) { tmprc = cifs_sfu_type(&fattr, full_path, cifs_sb, xid); if (tmprc) - cFYI(1, ("cifs_sfu_type failed: %d", tmprc)); + cFYI(1, "cifs_sfu_type failed: %d", tmprc); } #ifdef CONFIG_CIFS_EXPERIMENTAL /* fill in 0777 bits from ACL */ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) { - cFYI(1, ("Getting mode bits from ACL")); + cFYI(1, "Getting mode bits from ACL"); cifs_acl_to_fattr(cifs_sb, &fattr, *pinode, full_path, pfid); } #endif @@ -715,6 +726,16 @@ cifs_find_inode(struct inode *inode, void *opaque) if (CIFS_I(inode)->uniqueid != fattr->cf_uniqueid) return 0; + /* + * uh oh -- it's a directory. We can't use it since hardlinked dirs are + * verboten. Disable serverino and return it as if it were found, the + * caller can discard it, generate a uniqueid and retry the find + */ + if (S_ISDIR(inode->i_mode) && !list_empty(&inode->i_dentry)) { + fattr->cf_flags |= CIFS_FATTR_INO_COLLISION; + cifs_autodisable_serverino(CIFS_SB(inode->i_sb)); + } + return 1; } @@ -734,15 +755,22 @@ cifs_iget(struct super_block *sb, struct cifs_fattr *fattr) unsigned long hash; struct inode *inode; - cFYI(1, ("looking for uniqueid=%llu", fattr->cf_uniqueid)); +retry_iget5_locked: + cFYI(1, "looking for uniqueid=%llu", fattr->cf_uniqueid); /* hash down to 32-bits on 32-bit arch */ hash = cifs_uniqueid_to_ino_t(fattr->cf_uniqueid); inode = iget5_locked(sb, hash, cifs_find_inode, cifs_init_inode, fattr); - - /* we have fattrs in hand, update the inode */ if (inode) { + /* was there a problematic inode number collision? */ + if (fattr->cf_flags & CIFS_FATTR_INO_COLLISION) { + iput(inode); + fattr->cf_uniqueid = iunique(sb, ROOT_I); + fattr->cf_flags &= ~CIFS_FATTR_INO_COLLISION; + goto retry_iget5_locked; + } + cifs_fattr_to_inode(inode, fattr); if (sb->s_flags & MS_NOATIME) inode->i_flags |= S_NOATIME | S_NOCMTIME; @@ -780,7 +808,7 @@ struct inode *cifs_root_iget(struct super_block *sb, unsigned long ino) return ERR_PTR(-ENOMEM); if (rc && cifs_sb->tcon->ipc) { - cFYI(1, ("ipc connection - fake read inode")); + cFYI(1, "ipc connection - fake read inode"); inode->i_mode |= S_IFDIR; inode->i_nlink = 2; inode->i_op = &cifs_ipc_inode_ops; @@ -842,7 +870,7 @@ cifs_set_file_info(struct inode *inode, struct iattr *attrs, int xid, * server times. 
*/ if (set_time && (attrs->ia_valid & ATTR_CTIME)) { - cFYI(1, ("CIFS - CTIME changed")); + cFYI(1, "CIFS - CTIME changed"); info_buf.ChangeTime = cpu_to_le64(cifs_UnixTimeToNT(attrs->ia_ctime)); } else @@ -877,8 +905,8 @@ cifs_set_file_info(struct inode *inode, struct iattr *attrs, int xid, goto out; } - cFYI(1, ("calling SetFileInfo since SetPathInfo for " - "times not supported by this server")); + cFYI(1, "calling SetFileInfo since SetPathInfo for " + "times not supported by this server"); rc = CIFSSMBOpen(xid, pTcon, full_path, FILE_OPEN, SYNCHRONIZE | FILE_WRITE_ATTRIBUTES, CREATE_NOT_DIR, &netfid, &oplock, @@ -1036,7 +1064,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry) struct iattr *attrs = NULL; __u32 dosattr = 0, origattr = 0; - cFYI(1, ("cifs_unlink, dir=0x%p, dentry=0x%p", dir, dentry)); + cFYI(1, "cifs_unlink, dir=0x%p, dentry=0x%p", dir, dentry); xid = GetXid(); @@ -1055,7 +1083,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry) rc = CIFSPOSIXDelFile(xid, tcon, full_path, SMB_POSIX_UNLINK_FILE_TARGET, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); - cFYI(1, ("posix del rc %d", rc)); + cFYI(1, "posix del rc %d", rc); if ((rc == 0) || (rc == -ENOENT)) goto psx_del_no_retry; } @@ -1129,7 +1157,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode) struct inode *newinode = NULL; struct cifs_fattr fattr; - cFYI(1, ("In cifs_mkdir, mode = 0x%x inode = 0x%p", mode, inode)); + cFYI(1, "In cifs_mkdir, mode = 0x%x inode = 0x%p", mode, inode); xid = GetXid(); @@ -1164,7 +1192,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode) kfree(pInfo); goto mkdir_retry_old; } else if (rc) { - cFYI(1, ("posix mkdir returned 0x%x", rc)); + cFYI(1, "posix mkdir returned 0x%x", rc); d_drop(direntry); } else { if (pInfo->Type == cpu_to_le32(-1)) { @@ -1181,6 +1209,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode) direntry->d_op = &cifs_dentry_ops; cifs_unix_basic_to_fattr(&fattr, pInfo, cifs_sb); + cifs_fill_uniqueid(inode->i_sb, &fattr); newinode = cifs_iget(inode->i_sb, &fattr); if (!newinode) { kfree(pInfo); @@ -1190,12 +1219,12 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, int mode) d_instantiate(direntry, newinode); #ifdef CONFIG_CIFS_DEBUG2 - cFYI(1, ("instantiated dentry %p %s to inode %p", - direntry, direntry->d_name.name, newinode)); + cFYI(1, "instantiated dentry %p %s to inode %p", + direntry, direntry->d_name.name, newinode); if (newinode->i_nlink != 2) - cFYI(1, ("unexpected number of links %d", - newinode->i_nlink)); + cFYI(1, "unexpected number of links %d", + newinode->i_nlink); #endif } kfree(pInfo); @@ -1206,7 +1235,7 @@ mkdir_retry_old: rc = CIFSSMBMkDir(xid, pTcon, full_path, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); if (rc) { - cFYI(1, ("cifs_mkdir returned 0x%x", rc)); + cFYI(1, "cifs_mkdir returned 0x%x", rc); d_drop(direntry); } else { mkdir_get_info: @@ -1309,7 +1338,7 @@ int cifs_rmdir(struct inode *inode, struct dentry *direntry) char *full_path = NULL; struct cifsInodeInfo *cifsInode; - cFYI(1, ("cifs_rmdir, inode = 0x%p", inode)); + cFYI(1, "cifs_rmdir, inode = 0x%p", inode); xid = GetXid(); @@ -1511,6 +1540,11 @@ cifs_inode_needs_reval(struct inode *inode) if (time_after_eq(jiffies, cifs_i->time + HZ)) return true; + /* hardlinked files w/ noserverino get "special" treatment */ + if (!(CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) && + S_ISREG(inode->i_mode) && inode->i_nlink != 1) 
+ return true; + return false; } @@ -1577,9 +1611,9 @@ int cifs_revalidate_dentry(struct dentry *dentry) goto check_inval; } - cFYI(1, ("Revalidate: %s inode 0x%p count %d dentry: 0x%p d_time %ld " + cFYI(1, "Revalidate: %s inode 0x%p count %d dentry: 0x%p d_time %ld " "jiffies %ld", full_path, inode, inode->i_count.counter, - dentry, dentry->d_time, jiffies)); + dentry, dentry->d_time, jiffies); if (CIFS_SB(sb)->tcon->unix_ext) rc = cifs_get_inode_info_unix(&inode, full_path, sb, xid); @@ -1673,12 +1707,12 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, rc = CIFSSMBSetFileSize(xid, pTcon, attrs->ia_size, nfid, npid, false); cifsFileInfo_put(open_file); - cFYI(1, ("SetFSize for attrs rc = %d", rc)); + cFYI(1, "SetFSize for attrs rc = %d", rc); if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { unsigned int bytes_written; rc = CIFSSMBWrite(xid, pTcon, nfid, 0, attrs->ia_size, &bytes_written, NULL, NULL, 1); - cFYI(1, ("Wrt seteof rc %d", rc)); + cFYI(1, "Wrt seteof rc %d", rc); } } else rc = -EINVAL; @@ -1692,7 +1726,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, false, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); - cFYI(1, ("SetEOF by path (setattrs) rc = %d", rc)); + cFYI(1, "SetEOF by path (setattrs) rc = %d", rc); if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) { __u16 netfid; int oplock = 0; @@ -1709,7 +1743,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs, attrs->ia_size, &bytes_written, NULL, NULL, 1); - cFYI(1, ("wrt seteof rc %d", rc)); + cFYI(1, "wrt seteof rc %d", rc); CIFSSMBClose(xid, pTcon, netfid); } } @@ -1737,8 +1771,8 @@ cifs_setattr_unix(struct dentry *direntry, struct iattr *attrs) struct cifs_unix_set_info_args *args = NULL; struct cifsFileInfo *open_file; - cFYI(1, ("setattr_unix on file %s attrs->ia_valid=0x%x", - direntry->d_name.name, attrs->ia_valid)); + cFYI(1, "setattr_unix on file %s attrs->ia_valid=0x%x", + direntry->d_name.name, attrs->ia_valid); xid = GetXid(); @@ -1868,8 +1902,8 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs) xid = GetXid(); - cFYI(1, ("setattr on file %s attrs->iavalid 0x%x", - direntry->d_name.name, attrs->ia_valid)); + cFYI(1, "setattr on file %s attrs->iavalid 0x%x", + direntry->d_name.name, attrs->ia_valid); if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_PERM) == 0) { /* check if we have permission to change attrs */ @@ -1926,7 +1960,7 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs) attrs->ia_valid &= ~ATTR_MODE; if (attrs->ia_valid & ATTR_MODE) { - cFYI(1, ("Mode changed to 0%o", attrs->ia_mode)); + cFYI(1, "Mode changed to 0%o", attrs->ia_mode); mode = attrs->ia_mode; } @@ -2012,7 +2046,7 @@ cifs_setattr(struct dentry *direntry, struct iattr *attrs) #if 0 void cifs_delete_inode(struct inode *inode) { - cFYI(1, ("In cifs_delete_inode, inode = 0x%p", inode)); + cFYI(1, "In cifs_delete_inode, inode = 0x%p", inode); /* may have to add back in if and when safe distributed caching of directories added e.g. 
via FindNotify */ } diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c index f94650683a00..505926f1ee6b 100644 --- a/fs/cifs/ioctl.c +++ b/fs/cifs/ioctl.c @@ -47,7 +47,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg) xid = GetXid(); - cFYI(1, ("ioctl file %p cmd %u arg %lu", filep, command, arg)); + cFYI(1, "ioctl file %p cmd %u arg %lu", filep, command, arg); cifs_sb = CIFS_SB(inode->i_sb); @@ -64,12 +64,12 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg) switch (command) { case CIFS_IOC_CHECKUMOUNT: - cFYI(1, ("User unmount attempted")); + cFYI(1, "User unmount attempted"); if (cifs_sb->mnt_uid == current_uid()) rc = 0; else { rc = -EACCES; - cFYI(1, ("uids do not match")); + cFYI(1, "uids do not match"); } break; #ifdef CONFIG_CIFS_POSIX @@ -97,11 +97,11 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg) /* rc= CIFSGetExtAttr(xid,tcon,pSMBFile->netfid, extAttrBits, &ExtAttrMask);*/ } - cFYI(1, ("set flags not implemented yet")); + cFYI(1, "set flags not implemented yet"); break; #endif /* CONFIG_CIFS_POSIX */ default: - cFYI(1, ("unsupported ioctl")); + cFYI(1, "unsupported ioctl"); break; } diff --git a/fs/cifs/link.c b/fs/cifs/link.c index c1a9d4236a8c..473ca8033656 100644 --- a/fs/cifs/link.c +++ b/fs/cifs/link.c @@ -139,7 +139,7 @@ cifs_follow_link(struct dentry *direntry, struct nameidata *nd) if (!full_path) goto out; - cFYI(1, ("Full path: %s inode = 0x%p", full_path, inode)); + cFYI(1, "Full path: %s inode = 0x%p", full_path, inode); rc = CIFSSMBUnixQuerySymLink(xid, tcon, full_path, &target_path, cifs_sb->local_nls); @@ -178,8 +178,8 @@ cifs_symlink(struct inode *inode, struct dentry *direntry, const char *symname) return rc; } - cFYI(1, ("Full path: %s", full_path)); - cFYI(1, ("symname is %s", symname)); + cFYI(1, "Full path: %s", full_path); + cFYI(1, "symname is %s", symname); /* BB what if DFS and this volume is on different share? 
BB */ if (pTcon->unix_ext) @@ -198,8 +198,8 @@ cifs_symlink(struct inode *inode, struct dentry *direntry, const char *symname) inode->i_sb, xid, NULL); if (rc != 0) { - cFYI(1, ("Create symlink ok, getinodeinfo fail rc = %d", - rc)); + cFYI(1, "Create symlink ok, getinodeinfo fail rc = %d", + rc); } else { if (pTcon->nocase) direntry->d_op = &cifs_ci_dentry_ops; diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c index d1474996a812..1394aa37f26c 100644 --- a/fs/cifs/misc.c +++ b/fs/cifs/misc.c @@ -51,7 +51,7 @@ _GetXid(void) if (GlobalTotalActiveXid > GlobalMaxActiveXid) GlobalMaxActiveXid = GlobalTotalActiveXid; if (GlobalTotalActiveXid > 65000) - cFYI(1, ("warning: more than 65000 requests active")); + cFYI(1, "warning: more than 65000 requests active"); xid = GlobalCurrentXid++; spin_unlock(&GlobalMid_Lock); return xid; @@ -88,7 +88,7 @@ void sesInfoFree(struct cifsSesInfo *buf_to_free) { if (buf_to_free == NULL) { - cFYI(1, ("Null buffer passed to sesInfoFree")); + cFYI(1, "Null buffer passed to sesInfoFree"); return; } @@ -126,7 +126,7 @@ void tconInfoFree(struct cifsTconInfo *buf_to_free) { if (buf_to_free == NULL) { - cFYI(1, ("Null buffer passed to tconInfoFree")); + cFYI(1, "Null buffer passed to tconInfoFree"); return; } atomic_dec(&tconInfoAllocCount); @@ -166,7 +166,7 @@ void cifs_buf_release(void *buf_to_free) { if (buf_to_free == NULL) { - /* cFYI(1, ("Null buffer passed to cifs_buf_release"));*/ + /* cFYI(1, "Null buffer passed to cifs_buf_release");*/ return; } mempool_free(buf_to_free, cifs_req_poolp); @@ -202,7 +202,7 @@ cifs_small_buf_release(void *buf_to_free) { if (buf_to_free == NULL) { - cFYI(1, ("Null buffer passed to cifs_small_buf_release")); + cFYI(1, "Null buffer passed to cifs_small_buf_release"); return; } mempool_free(buf_to_free, cifs_sm_req_poolp); @@ -345,19 +345,19 @@ header_assemble(struct smb_hdr *buffer, char smb_command /* command */ , /* with userid/password pairs found on the smb session */ /* for other target tcp/ip addresses BB */ if (current_fsuid() != treeCon->ses->linux_uid) { - cFYI(1, ("Multiuser mode and UID " - "did not match tcon uid")); + cFYI(1, "Multiuser mode and UID " + "did not match tcon uid"); read_lock(&cifs_tcp_ses_lock); list_for_each(temp_item, &treeCon->ses->server->smb_ses_list) { ses = list_entry(temp_item, struct cifsSesInfo, smb_ses_list); if (ses->linux_uid == current_fsuid()) { if (ses->server == treeCon->ses->server) { - cFYI(1, ("found matching uid substitute right smb_uid")); + cFYI(1, "found matching uid substitute right smb_uid"); buffer->Uid = ses->Suid; break; } else { /* BB eventually call cifs_setup_session here */ - cFYI(1, ("local UID found but no smb sess with this server exists")); + cFYI(1, "local UID found but no smb sess with this server exists"); } } } @@ -394,17 +394,16 @@ checkSMBhdr(struct smb_hdr *smb, __u16 mid) if (smb->Command == SMB_COM_LOCKING_ANDX) return 0; else - cERROR(1, ("Received Request not response")); + cERROR(1, "Received Request not response"); } } else { /* bad signature or mid */ if (*(__le32 *) smb->Protocol != cpu_to_le32(0x424d53ff)) - cERROR(1, - ("Bad protocol string signature header %x", - *(unsigned int *) smb->Protocol)); + cERROR(1, "Bad protocol string signature header %x", + *(unsigned int *) smb->Protocol); if (mid != smb->Mid) - cERROR(1, ("Mids do not match")); + cERROR(1, "Mids do not match"); } - cERROR(1, ("bad smb detected. The Mid=%d", smb->Mid)); + cERROR(1, "bad smb detected. 
The Mid=%d", smb->Mid); return 1; } @@ -413,7 +412,7 @@ checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length) { __u32 len = smb->smb_buf_length; __u32 clc_len; /* calculated length */ - cFYI(0, ("checkSMB Length: 0x%x, smb_buf_length: 0x%x", length, len)); + cFYI(0, "checkSMB Length: 0x%x, smb_buf_length: 0x%x", length, len); if (length < 2 + sizeof(struct smb_hdr)) { if ((length >= sizeof(struct smb_hdr) - 1) @@ -437,15 +436,15 @@ checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length) tmp[sizeof(struct smb_hdr)+1] = 0; return 0; } - cERROR(1, ("rcvd invalid byte count (bcc)")); + cERROR(1, "rcvd invalid byte count (bcc)"); } else { - cERROR(1, ("Length less than smb header size")); + cERROR(1, "Length less than smb header size"); } return 1; } if (len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4) { - cERROR(1, ("smb length greater than MaxBufSize, mid=%d", - smb->Mid)); + cERROR(1, "smb length greater than MaxBufSize, mid=%d", + smb->Mid); return 1; } @@ -454,8 +453,8 @@ checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length) clc_len = smbCalcSize_LE(smb); if (4 + len != length) { - cERROR(1, ("Length read does not match RFC1001 length %d", - len)); + cERROR(1, "Length read does not match RFC1001 length %d", + len); return 1; } @@ -466,8 +465,8 @@ checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length) if (((4 + len) & 0xFFFF) == (clc_len & 0xFFFF)) return 0; /* bcc wrapped */ } - cFYI(1, ("Calculated size %d vs length %d mismatch for mid %d", - clc_len, 4 + len, smb->Mid)); + cFYI(1, "Calculated size %d vs length %d mismatch for mid %d", + clc_len, 4 + len, smb->Mid); /* Windows XP can return a few bytes too much, presumably an illegal pad, at the end of byte range lock responses so we allow for that three byte pad, as long as actual @@ -482,8 +481,8 @@ checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length) if ((4+len > clc_len) && (len <= clc_len + 512)) return 0; else { - cERROR(1, ("RFC1001 size %d bigger than SMB for Mid=%d", - len, smb->Mid)); + cERROR(1, "RFC1001 size %d bigger than SMB for Mid=%d", + len, smb->Mid); return 1; } } @@ -501,7 +500,7 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) struct cifsFileInfo *netfile; int rc; - cFYI(1, ("Checking for oplock break or dnotify response")); + cFYI(1, "Checking for oplock break or dnotify response"); if ((pSMB->hdr.Command == SMB_COM_NT_TRANSACT) && (pSMB->hdr.Flags & SMBFLG_RESPONSE)) { struct smb_com_transaction_change_notify_rsp *pSMBr = @@ -513,15 +512,15 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) pnotify = (struct file_notify_information *) ((char *)&pSMBr->hdr.Protocol + data_offset); - cFYI(1, ("dnotify on %s Action: 0x%x", - pnotify->FileName, pnotify->Action)); + cFYI(1, "dnotify on %s Action: 0x%x", + pnotify->FileName, pnotify->Action); /* cifs_dump_mem("Rcvd notify Data: ",buf, sizeof(struct smb_hdr)+60); */ return true; } if (pSMBr->hdr.Status.CifsError) { - cFYI(1, ("notify err 0x%d", - pSMBr->hdr.Status.CifsError)); + cFYI(1, "notify err 0x%d", + pSMBr->hdr.Status.CifsError); return true; } return false; @@ -535,7 +534,7 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) large dirty files cached on the client */ if ((NT_STATUS_INVALID_HANDLE) == le32_to_cpu(pSMB->hdr.Status.CifsError)) { - cFYI(1, ("invalid handle on oplock break")); + cFYI(1, "invalid handle on oplock break"); return true; } else if (ERRbadfid == le16_to_cpu(pSMB->hdr.Status.DosError.Error)) { @@ -547,8 +546,8 @@ 
is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) if (pSMB->hdr.WordCount != 8) return false; - cFYI(1, ("oplock type 0x%d level 0x%d", - pSMB->LockType, pSMB->OplockLevel)); + cFYI(1, "oplock type 0x%d level 0x%d", + pSMB->LockType, pSMB->OplockLevel); if (!(pSMB->LockType & LOCKING_ANDX_OPLOCK_RELEASE)) return false; @@ -579,15 +578,15 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) return true; } - cFYI(1, ("file id match, oplock break")); + cFYI(1, "file id match, oplock break"); pCifsInode = CIFS_I(netfile->pInode); pCifsInode->clientCanCacheAll = false; if (pSMB->OplockLevel == 0) pCifsInode->clientCanCacheRead = false; rc = slow_work_enqueue(&netfile->oplock_break); if (rc) { - cERROR(1, ("failed to enqueue oplock " - "break: %d\n", rc)); + cERROR(1, "failed to enqueue oplock " + "break: %d\n", rc); } else { netfile->oplock_break_cancelled = false; } @@ -597,12 +596,12 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv) } read_unlock(&GlobalSMBSeslock); read_unlock(&cifs_tcp_ses_lock); - cFYI(1, ("No matching file for oplock break")); + cFYI(1, "No matching file for oplock break"); return true; } } read_unlock(&cifs_tcp_ses_lock); - cFYI(1, ("Can not process oplock break for non-existent connection")); + cFYI(1, "Can not process oplock break for non-existent connection"); return true; } @@ -721,11 +720,11 @@ cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb) { if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM; - cERROR(1, ("Autodisabling the use of server inode numbers on " + cERROR(1, "Autodisabling the use of server inode numbers on " "%s. This server doesn't seem to support them " "properly. Hardlinks will not be recognized on this " "mount. Consider mounting with the \"noserverino\" " "option to silence this message.", - cifs_sb->tcon->treeName)); + cifs_sb->tcon->treeName); } } diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c index bd6d6895730d..d35d52889cb5 100644 --- a/fs/cifs/netmisc.c +++ b/fs/cifs/netmisc.c @@ -149,7 +149,7 @@ cifs_inet_pton(const int address_family, const char *cp, void *dst) else if (address_family == AF_INET6) ret = in6_pton(cp, -1 /* len */, dst , '\\', NULL); - cFYI(DBG2, ("address conversion returned %d for %s", ret, cp)); + cFYI(DBG2, "address conversion returned %d for %s", ret, cp); if (ret > 0) ret = 1; return ret; @@ -870,8 +870,8 @@ map_smb_to_linux_error(struct smb_hdr *smb, int logErr) } /* else ERRHRD class errors or junk - return EIO */ - cFYI(1, ("Mapping smb error code %d to POSIX err %d", - smberrcode, rc)); + cFYI(1, "Mapping smb error code %d to POSIX err %d", + smberrcode, rc); /* generic corrective action e.g. 
reconnect SMB session on * ERRbaduid could be added */ @@ -940,20 +940,20 @@ struct timespec cnvrtDosUnixTm(__le16 le_date, __le16 le_time, int offset) SMB_TIME *st = (SMB_TIME *)&time; SMB_DATE *sd = (SMB_DATE *)&date; - cFYI(1, ("date %d time %d", date, time)); + cFYI(1, "date %d time %d", date, time); sec = 2 * st->TwoSeconds; min = st->Minutes; if ((sec > 59) || (min > 59)) - cERROR(1, ("illegal time min %d sec %d", min, sec)); + cERROR(1, "illegal time min %d sec %d", min, sec); sec += (min * 60); sec += 60 * 60 * st->Hours; if (st->Hours > 24) - cERROR(1, ("illegal hours %d", st->Hours)); + cERROR(1, "illegal hours %d", st->Hours); days = sd->Day; month = sd->Month; if ((days > 31) || (month > 12)) { - cERROR(1, ("illegal date, month %d day: %d", month, days)); + cERROR(1, "illegal date, month %d day: %d", month, days); if (month > 12) month = 12; } @@ -979,7 +979,7 @@ struct timespec cnvrtDosUnixTm(__le16 le_date, __le16 le_time, int offset) ts.tv_sec = sec + offset; - /* cFYI(1,("sec after cnvrt dos to unix time %d",sec)); */ + /* cFYI(1, "sec after cnvrt dos to unix time %d",sec); */ ts.tv_nsec = 0; return ts; diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c index 18e0bc1fb593..daf1753af674 100644 --- a/fs/cifs/readdir.c +++ b/fs/cifs/readdir.c @@ -47,15 +47,15 @@ static void dump_cifs_file_struct(struct file *file, char *label) if (file) { cf = file->private_data; if (cf == NULL) { - cFYI(1, ("empty cifs private file data")); + cFYI(1, "empty cifs private file data"); return; } if (cf->invalidHandle) - cFYI(1, ("invalid handle")); + cFYI(1, "invalid handle"); if (cf->srch_inf.endOfSearch) - cFYI(1, ("end of search")); + cFYI(1, "end of search"); if (cf->srch_inf.emptyDir) - cFYI(1, ("empty dir")); + cFYI(1, "empty dir"); } } #else @@ -76,7 +76,7 @@ cifs_readdir_lookup(struct dentry *parent, struct qstr *name, struct inode *inode; struct super_block *sb = parent->d_inode->i_sb; - cFYI(1, ("For %s", name->name)); + cFYI(1, "For %s", name->name); if (parent->d_op && parent->d_op->d_hash) parent->d_op->d_hash(parent, name); @@ -214,7 +214,7 @@ int get_symlink_reparse_path(char *full_path, struct cifs_sb_info *cifs_sb, fid, cifs_sb->local_nls); if (CIFSSMBClose(xid, ptcon, fid)) { - cFYI(1, ("Error closing temporary reparsepoint open)")); + cFYI(1, "Error closing temporary reparsepoint open"); } } } @@ -252,7 +252,7 @@ static int initiate_cifs_search(const int xid, struct file *file) if (full_path == NULL) return -ENOMEM; - cFYI(1, ("Full path: %s start at: %lld", full_path, file->f_pos)); + cFYI(1, "Full path: %s start at: %lld", full_path, file->f_pos); ffirst_retry: /* test for Unix extensions */ @@ -297,7 +297,7 @@ static int cifs_unicode_bytelen(char *str) if (ustr[len] == 0) return len << 1; } - cFYI(1, ("Unicode string longer than PATH_MAX found")); + cFYI(1, "Unicode string longer than PATH_MAX found"); return len << 1; } @@ -314,19 +314,18 @@ static char *nxt_dir_entry(char *old_entry, char *end_of_smb, int level) pfData->FileNameLength; } else new_entry = old_entry + le32_to_cpu(pDirInfo->NextEntryOffset); - cFYI(1, ("new entry %p old entry %p", new_entry, old_entry)); + cFYI(1, "new entry %p old entry %p", new_entry, old_entry); /* validate that new_entry is not past end of SMB */ if (new_entry >= end_of_smb) { - cERROR(1, - ("search entry %p began after end of SMB %p old entry %p", - new_entry, end_of_smb, old_entry)); + cERROR(1, "search entry %p began after end of SMB %p old entry %p", + new_entry, end_of_smb, old_entry); return NULL; } else if (((level == 
SMB_FIND_FILE_INFO_STANDARD) && (new_entry + sizeof(FIND_FILE_STANDARD_INFO) > end_of_smb)) || ((level != SMB_FIND_FILE_INFO_STANDARD) && (new_entry + sizeof(FILE_DIRECTORY_INFO) > end_of_smb))) { - cERROR(1, ("search entry %p extends after end of SMB %p", - new_entry, end_of_smb)); + cERROR(1, "search entry %p extends after end of SMB %p", + new_entry, end_of_smb); return NULL; } else return new_entry; @@ -380,8 +379,8 @@ static int cifs_entry_is_dot(char *current_entry, struct cifsFileInfo *cfile) filename = &pFindData->FileName[0]; len = pFindData->FileNameLength; } else { - cFYI(1, ("Unknown findfirst level %d", - cfile->srch_inf.info_level)); + cFYI(1, "Unknown findfirst level %d", + cfile->srch_inf.info_level); } if (filename) { @@ -481,7 +480,7 @@ static int cifs_save_resume_key(const char *current_entry, len = (unsigned int)pFindData->FileNameLength; cifsFile->srch_inf.resume_key = pFindData->ResumeKey; } else { - cFYI(1, ("Unknown findfirst level %d", level)); + cFYI(1, "Unknown findfirst level %d", level); return -EINVAL; } cifsFile->srch_inf.resume_name_len = len; @@ -525,7 +524,7 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, is_dir_changed(file)) || (index_to_find < first_entry_in_buffer)) { /* close and restart search */ - cFYI(1, ("search backing up - close and restart search")); + cFYI(1, "search backing up - close and restart search"); write_lock(&GlobalSMBSeslock); if (!cifsFile->srch_inf.endOfSearch && !cifsFile->invalidHandle) { @@ -535,7 +534,7 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, } else write_unlock(&GlobalSMBSeslock); if (cifsFile->srch_inf.ntwrk_buf_start) { - cFYI(1, ("freeing SMB ff cache buf on search rewind")); + cFYI(1, "freeing SMB ff cache buf on search rewind"); if (cifsFile->srch_inf.smallBuf) cifs_small_buf_release(cifsFile->srch_inf. 
ntwrk_buf_start); @@ -546,8 +545,8 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, } rc = initiate_cifs_search(xid, file); if (rc) { - cFYI(1, ("error %d reinitiating a search on rewind", - rc)); + cFYI(1, "error %d reinitiating a search on rewind", + rc); return rc; } cifs_save_resume_key(cifsFile->srch_inf.last_entry, cifsFile); @@ -555,7 +554,7 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, while ((index_to_find >= cifsFile->srch_inf.index_of_last_entry) && (rc == 0) && !cifsFile->srch_inf.endOfSearch) { - cFYI(1, ("calling findnext2")); + cFYI(1, "calling findnext2"); rc = CIFSFindNext(xid, pTcon, cifsFile->netfid, &cifsFile->srch_inf); cifs_save_resume_key(cifsFile->srch_inf.last_entry, cifsFile); @@ -575,7 +574,7 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, first_entry_in_buffer = cifsFile->srch_inf.index_of_last_entry - cifsFile->srch_inf.entries_in_buffer; pos_in_buf = index_to_find - first_entry_in_buffer; - cFYI(1, ("found entry - pos_in_buf %d", pos_in_buf)); + cFYI(1, "found entry - pos_in_buf %d", pos_in_buf); for (i = 0; (i < (pos_in_buf)) && (current_entry != NULL); i++) { /* go entry by entry figuring out which is first */ @@ -584,19 +583,19 @@ static int find_cifs_entry(const int xid, struct cifsTconInfo *pTcon, } if ((current_entry == NULL) && (i < pos_in_buf)) { /* BB fixme - check if we should flag this error */ - cERROR(1, ("reached end of buf searching for pos in buf" + cERROR(1, "reached end of buf searching for pos in buf" " %d index to find %lld rc %d", - pos_in_buf, index_to_find, rc)); + pos_in_buf, index_to_find, rc); } rc = 0; *ppCurrentEntry = current_entry; } else { - cFYI(1, ("index not in buffer - could not findnext into it")); + cFYI(1, "index not in buffer - could not findnext into it"); return 0; } if (pos_in_buf >= cifsFile->srch_inf.entries_in_buffer) { - cFYI(1, ("can not return entries pos_in_buf beyond last")); + cFYI(1, "can not return entries pos_in_buf beyond last"); *num_to_ret = 0; } else *num_to_ret = cifsFile->srch_inf.entries_in_buffer - pos_in_buf; @@ -656,12 +655,12 @@ static int cifs_get_name_from_search_buf(struct qstr *pqst, /* one byte length, no name conversion */ len = (unsigned int)pFindData->FileNameLength; } else { - cFYI(1, ("Unknown findfirst level %d", level)); + cFYI(1, "Unknown findfirst level %d", level); return -EINVAL; } if (len > max_len) { - cERROR(1, ("bad search response length %d past smb end", len)); + cERROR(1, "bad search response length %d past smb end", len); return -EINVAL; } @@ -754,7 +753,7 @@ static int cifs_filldir(char *pfindEntry, struct file *file, filldir_t filldir, * case already. Why should we be clobbering other errors from it? 
*/ if (rc) { - cFYI(1, ("filldir rc = %d", rc)); + cFYI(1, "filldir rc = %d", rc); rc = -EOVERFLOW; } dput(tmp_dentry); @@ -786,7 +785,7 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) case 0: if (filldir(direntry, ".", 1, file->f_pos, file->f_path.dentry->d_inode->i_ino, DT_DIR) < 0) { - cERROR(1, ("Filldir for current dir failed")); + cERROR(1, "Filldir for current dir failed"); rc = -ENOMEM; break; } @@ -794,7 +793,7 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) case 1: if (filldir(direntry, "..", 2, file->f_pos, file->f_path.dentry->d_parent->d_inode->i_ino, DT_DIR) < 0) { - cERROR(1, ("Filldir for parent dir failed")); + cERROR(1, "Filldir for parent dir failed"); rc = -ENOMEM; break; } @@ -807,7 +806,7 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) if (file->private_data == NULL) { rc = initiate_cifs_search(xid, file); - cFYI(1, ("initiate cifs search rc %d", rc)); + cFYI(1, "initiate cifs search rc %d", rc); if (rc) { FreeXid(xid); return rc; @@ -821,7 +820,7 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) cifsFile = file->private_data; if (cifsFile->srch_inf.endOfSearch) { if (cifsFile->srch_inf.emptyDir) { - cFYI(1, ("End of search, empty dir")); + cFYI(1, "End of search, empty dir"); rc = 0; break; } @@ -833,16 +832,16 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) rc = find_cifs_entry(xid, pTcon, file, ¤t_entry, &num_to_fill); if (rc) { - cFYI(1, ("fce error %d", rc)); + cFYI(1, "fce error %d", rc); goto rddir2_exit; } else if (current_entry != NULL) { - cFYI(1, ("entry %lld found", file->f_pos)); + cFYI(1, "entry %lld found", file->f_pos); } else { - cFYI(1, ("could not find entry")); + cFYI(1, "could not find entry"); goto rddir2_exit; } - cFYI(1, ("loop through %d times filling dir for net buf %p", - num_to_fill, cifsFile->srch_inf.ntwrk_buf_start)); + cFYI(1, "loop through %d times filling dir for net buf %p", + num_to_fill, cifsFile->srch_inf.ntwrk_buf_start); max_len = smbCalcSize((struct smb_hdr *) cifsFile->srch_inf.ntwrk_buf_start); end_of_smb = cifsFile->srch_inf.ntwrk_buf_start + max_len; @@ -851,8 +850,8 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) for (i = 0; (i < num_to_fill) && (rc == 0); i++) { if (current_entry == NULL) { /* evaluate whether this case is an error */ - cERROR(1, ("past SMB end, num to fill %d i %d", - num_to_fill, i)); + cERROR(1, "past SMB end, num to fill %d i %d", + num_to_fill, i); break; } /* if buggy server returns . and .. late do @@ -867,8 +866,8 @@ int cifs_readdir(struct file *file, void *direntry, filldir_t filldir) file->f_pos++; if (file->f_pos == cifsFile->srch_inf.index_of_last_entry) { - cFYI(1, ("last entry in buf at pos %lld %s", - file->f_pos, tmp_buf)); + cFYI(1, "last entry in buf at pos %lld %s", + file->f_pos, tmp_buf); cifs_save_resume_key(current_entry, cifsFile); break; } else diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c index 7c3fd7463f44..7707389bdf2c 100644 --- a/fs/cifs/sess.c +++ b/fs/cifs/sess.c @@ -35,9 +35,11 @@ extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24); -/* Checks if this is the first smb session to be reconnected after - the socket has been reestablished (so we know whether to use vc 0). - Called while holding the cifs_tcp_ses_lock, so do not block */ +/* + * Checks if this is the first smb session to be reconnected after + * the socket has been reestablished (so we know whether to use vc 0). 
+ * Called while holding the cifs_tcp_ses_lock, so do not block + */ static bool is_first_ses_reconnect(struct cifsSesInfo *ses) { struct list_head *tmp; @@ -284,7 +286,7 @@ decode_unicode_ssetup(char **pbcc_area, int bleft, struct cifsSesInfo *ses, int len; char *data = *pbcc_area; - cFYI(1, ("bleft %d", bleft)); + cFYI(1, "bleft %d", bleft); /* * Windows servers do not always double null terminate their final @@ -301,7 +303,7 @@ decode_unicode_ssetup(char **pbcc_area, int bleft, struct cifsSesInfo *ses, kfree(ses->serverOS); ses->serverOS = cifs_strndup_from_ucs(data, bleft, true, nls_cp); - cFYI(1, ("serverOS=%s", ses->serverOS)); + cFYI(1, "serverOS=%s", ses->serverOS); len = (UniStrnlen((wchar_t *) data, bleft / 2) * 2) + 2; data += len; bleft -= len; @@ -310,7 +312,7 @@ decode_unicode_ssetup(char **pbcc_area, int bleft, struct cifsSesInfo *ses, kfree(ses->serverNOS); ses->serverNOS = cifs_strndup_from_ucs(data, bleft, true, nls_cp); - cFYI(1, ("serverNOS=%s", ses->serverNOS)); + cFYI(1, "serverNOS=%s", ses->serverNOS); len = (UniStrnlen((wchar_t *) data, bleft / 2) * 2) + 2; data += len; bleft -= len; @@ -319,7 +321,7 @@ decode_unicode_ssetup(char **pbcc_area, int bleft, struct cifsSesInfo *ses, kfree(ses->serverDomain); ses->serverDomain = cifs_strndup_from_ucs(data, bleft, true, nls_cp); - cFYI(1, ("serverDomain=%s", ses->serverDomain)); + cFYI(1, "serverDomain=%s", ses->serverDomain); return; } @@ -332,7 +334,7 @@ static int decode_ascii_ssetup(char **pbcc_area, int bleft, int len; char *bcc_ptr = *pbcc_area; - cFYI(1, ("decode sessetup ascii. bleft %d", bleft)); + cFYI(1, "decode sessetup ascii. bleft %d", bleft); len = strnlen(bcc_ptr, bleft); if (len >= bleft) @@ -344,7 +346,7 @@ static int decode_ascii_ssetup(char **pbcc_area, int bleft, if (ses->serverOS) strncpy(ses->serverOS, bcc_ptr, len); if (strncmp(ses->serverOS, "OS/2", 4) == 0) { - cFYI(1, ("OS/2 server")); + cFYI(1, "OS/2 server"); ses->flags |= CIFS_SES_OS2; } @@ -373,7 +375,7 @@ static int decode_ascii_ssetup(char **pbcc_area, int bleft, /* BB For newer servers which do not support Unicode, but thus do return domain here we could add parsing for it later, but it is not very important */ - cFYI(1, ("ascii: bytes left %d", bleft)); + cFYI(1, "ascii: bytes left %d", bleft); return rc; } @@ -384,16 +386,16 @@ static int decode_ntlmssp_challenge(char *bcc_ptr, int blob_len, CHALLENGE_MESSAGE *pblob = (CHALLENGE_MESSAGE *)bcc_ptr; if (blob_len < sizeof(CHALLENGE_MESSAGE)) { - cERROR(1, ("challenge blob len %d too small", blob_len)); + cERROR(1, "challenge blob len %d too small", blob_len); return -EINVAL; } if (memcmp(pblob->Signature, "NTLMSSP", 8)) { - cERROR(1, ("blob signature incorrect %s", pblob->Signature)); + cERROR(1, "blob signature incorrect %s", pblob->Signature); return -EINVAL; } if (pblob->MessageType != NtLmChallenge) { - cERROR(1, ("Incorrect message type %d", pblob->MessageType)); + cERROR(1, "Incorrect message type %d", pblob->MessageType); return -EINVAL; } @@ -447,7 +449,7 @@ static void build_ntlmssp_negotiate_blob(unsigned char *pbuffer, This function returns the length of the data in the blob */ static int build_ntlmssp_auth_blob(unsigned char *pbuffer, struct cifsSesInfo *ses, - const struct nls_table *nls_cp, int first) + const struct nls_table *nls_cp, bool first) { AUTHENTICATE_MESSAGE *sec_blob = (AUTHENTICATE_MESSAGE *)pbuffer; __u32 flags; @@ -546,7 +548,7 @@ static void setup_ntlmssp_neg_req(SESSION_SETUP_ANDX *pSMB, static int setup_ntlmssp_auth_req(SESSION_SETUP_ANDX *pSMB, struct 
cifsSesInfo *ses, - const struct nls_table *nls, int first_time) + const struct nls_table *nls, bool first_time) { int bloblen; @@ -559,8 +561,8 @@ static int setup_ntlmssp_auth_req(SESSION_SETUP_ANDX *pSMB, #endif int -CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time, - const struct nls_table *nls_cp) +CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, + const struct nls_table *nls_cp) { int rc = 0; int wct; @@ -577,13 +579,18 @@ CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses, int first_time, int bytes_remaining; struct key *spnego_key = NULL; __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */ + bool first_time; if (ses == NULL) return -EINVAL; + read_lock(&cifs_tcp_ses_lock); + first_time = is_first_ses_reconnect(ses); + read_unlock(&cifs_tcp_ses_lock); + type = ses->server->secType; - cFYI(1, ("sess setup type %d", type)); + cFYI(1, "sess setup type %d", type); ssetup_ntlmssp_authenticate: if (phase == NtLmChallenge) phase = NtLmAuthenticate; /* if ntlmssp, now final phase */ @@ -664,7 +671,7 @@ ssetup_ntlmssp_authenticate: changed to do higher than lanman dialect and we reconnected would we ever calc signing_key? */ - cFYI(1, ("Negotiating LANMAN setting up strings")); + cFYI(1, "Negotiating LANMAN setting up strings"); /* Unicode not allowed for LANMAN dialects */ ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); #endif @@ -744,7 +751,7 @@ ssetup_ntlmssp_authenticate: unicode_ssetup_strings(&bcc_ptr, ses, nls_cp); } else ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); - } else if (type == Kerberos || type == MSKerberos) { + } else if (type == Kerberos) { #ifdef CONFIG_CIFS_UPCALL struct cifs_spnego_msg *msg; spnego_key = cifs_get_spnego_key(ses); @@ -758,17 +765,17 @@ ssetup_ntlmssp_authenticate: /* check version field to make sure that cifs.upcall is sending us a response in an expected form */ if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) { - cERROR(1, ("incorrect version of cifs.upcall (expected" + cERROR(1, "incorrect version of cifs.upcall (expected" " %d but got %d)", - CIFS_SPNEGO_UPCALL_VERSION, msg->version)); + CIFS_SPNEGO_UPCALL_VERSION, msg->version); rc = -EKEYREJECTED; goto ssetup_exit; } /* bail out if key is too long */ if (msg->sesskey_len > sizeof(ses->server->mac_signing_key.data.krb5)) { - cERROR(1, ("Kerberos signing key too long (%u bytes)", - msg->sesskey_len)); + cERROR(1, "Kerberos signing key too long (%u bytes)", + msg->sesskey_len); rc = -EOVERFLOW; goto ssetup_exit; } @@ -796,7 +803,7 @@ ssetup_ntlmssp_authenticate: /* BB: is this right? */ ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); #else /* ! 
CONFIG_CIFS_UPCALL */ - cERROR(1, ("Kerberos negotiated but upcall support disabled!")); + cERROR(1, "Kerberos negotiated but upcall support disabled!"); rc = -ENOSYS; goto ssetup_exit; #endif /* CONFIG_CIFS_UPCALL */ @@ -804,12 +811,12 @@ ssetup_ntlmssp_authenticate: #ifdef CONFIG_CIFS_EXPERIMENTAL if (type == RawNTLMSSP) { if ((pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) == 0) { - cERROR(1, ("NTLMSSP requires Unicode support")); + cERROR(1, "NTLMSSP requires Unicode support"); rc = -ENOSYS; goto ssetup_exit; } - cFYI(1, ("ntlmssp session setup phase %d", phase)); + cFYI(1, "ntlmssp session setup phase %d", phase); pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC; capabilities |= CAP_EXTENDED_SECURITY; pSMB->req.Capabilities |= cpu_to_le32(capabilities); @@ -827,7 +834,7 @@ ssetup_ntlmssp_authenticate: on the response (challenge) */ smb_buf->Uid = ses->Suid; } else { - cERROR(1, ("invalid phase %d", phase)); + cERROR(1, "invalid phase %d", phase); rc = -ENOSYS; goto ssetup_exit; } @@ -839,12 +846,12 @@ ssetup_ntlmssp_authenticate: } unicode_oslm_strings(&bcc_ptr, nls_cp); } else { - cERROR(1, ("secType %d not supported!", type)); + cERROR(1, "secType %d not supported!", type); rc = -ENOSYS; goto ssetup_exit; } #else - cERROR(1, ("secType %d not supported!", type)); + cERROR(1, "secType %d not supported!", type); rc = -ENOSYS; goto ssetup_exit; #endif @@ -862,7 +869,7 @@ ssetup_ntlmssp_authenticate: CIFS_STD_OP /* not long */ | CIFS_LOG_ERROR); /* SMB request buf freed in SendReceive2 */ - cFYI(1, ("ssetup rc from sendrecv2 is %d", rc)); + cFYI(1, "ssetup rc from sendrecv2 is %d", rc); pSMB = (SESSION_SETUP_ANDX *)iov[0].iov_base; smb_buf = (struct smb_hdr *)iov[0].iov_base; @@ -870,7 +877,7 @@ ssetup_ntlmssp_authenticate: if ((type == RawNTLMSSP) && (smb_buf->Status.CifsError == cpu_to_le32(NT_STATUS_MORE_PROCESSING_REQUIRED))) { if (phase != NtLmNegotiate) { - cERROR(1, ("Unexpected more processing error")); + cERROR(1, "Unexpected more processing error"); goto ssetup_exit; } /* NTLMSSP Negotiate sent now processing challenge (response) */ @@ -882,14 +889,14 @@ ssetup_ntlmssp_authenticate: if ((smb_buf->WordCount != 3) && (smb_buf->WordCount != 4)) { rc = -EIO; - cERROR(1, ("bad word count %d", smb_buf->WordCount)); + cERROR(1, "bad word count %d", smb_buf->WordCount); goto ssetup_exit; } action = le16_to_cpu(pSMB->resp.Action); if (action & GUEST_LOGIN) - cFYI(1, ("Guest login")); /* BB mark SesInfo struct? */ + cFYI(1, "Guest login"); /* BB mark SesInfo struct? 
*/ ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ - cFYI(1, ("UID = %d ", ses->Suid)); + cFYI(1, "UID = %d ", ses->Suid); /* response can have either 3 or 4 word count - Samba sends 3 */ /* and lanman response is 3 */ bytes_remaining = BCC(smb_buf); @@ -899,7 +906,7 @@ ssetup_ntlmssp_authenticate: __u16 blob_len; blob_len = le16_to_cpu(pSMB->resp.SecurityBlobLength); if (blob_len > bytes_remaining) { - cERROR(1, ("bad security blob length %d", blob_len)); + cERROR(1, "bad security blob length %d", blob_len); rc = -EINVAL; goto ssetup_exit; } @@ -933,7 +940,7 @@ ssetup_exit: } kfree(str_area); if (resp_buf_type == CIFS_SMALL_BUFFER) { - cFYI(1, ("ssetup freeing small buf %p", iov[0].iov_base)); + cFYI(1, "ssetup freeing small buf %p", iov[0].iov_base); cifs_small_buf_release(iov[0].iov_base); } else if (resp_buf_type == CIFS_LARGE_BUFFER) cifs_buf_release(iov[0].iov_base); diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c index ad081fe7eb18..82f78c4d6978 100644 --- a/fs/cifs/transport.c +++ b/fs/cifs/transport.c @@ -35,7 +35,6 @@ #include "cifs_debug.h" extern mempool_t *cifs_mid_poolp; -extern struct kmem_cache *cifs_oplock_cachep; static struct mid_q_entry * AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server) @@ -43,7 +42,7 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server) struct mid_q_entry *temp; if (server == NULL) { - cERROR(1, ("Null TCP session in AllocMidQEntry")); + cERROR(1, "Null TCP session in AllocMidQEntry"); return NULL; } @@ -55,7 +54,7 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server) temp->mid = smb_buffer->Mid; /* always LE */ temp->pid = current->pid; temp->command = smb_buffer->Command; - cFYI(1, ("For smb_command %d", temp->command)); + cFYI(1, "For smb_command %d", temp->command); /* do_gettimeofday(&temp->when_sent);*/ /* easier to use jiffies */ /* when mid allocated can be before when sent */ temp->when_alloc = jiffies; @@ -140,7 +139,7 @@ smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec) total_len += iov[i].iov_len; smb_buffer->smb_buf_length = cpu_to_be32(smb_buffer->smb_buf_length); - cFYI(1, ("Sending smb: total_len %d", total_len)); + cFYI(1, "Sending smb: total_len %d", total_len); dump_smb(smb_buffer, len); i = 0; @@ -168,9 +167,8 @@ smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec) reconnect which may clear the network problem. */ if ((i >= 14) || (!server->noblocksnd && (i > 2))) { - cERROR(1, - ("sends on sock %p stuck for 15 seconds", - ssocket)); + cERROR(1, "sends on sock %p stuck for 15 seconds", + ssocket); rc = -EAGAIN; break; } @@ -184,13 +182,13 @@ smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec) total_len = 0; break; } else if (rc > total_len) { - cERROR(1, ("sent %d requested %d", rc, total_len)); + cERROR(1, "sent %d requested %d", rc, total_len); break; } if (rc == 0) { /* should never happen, letting socket clear before retrying is our only obvious option here */ - cERROR(1, ("tcp sent no data")); + cERROR(1, "tcp sent no data"); msleep(500); continue; } @@ -213,8 +211,8 @@ smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec) } if ((total_len > 0) && (total_len != smb_buf_length + 4)) { - cFYI(1, ("partial send (%d remaining), terminating session", - total_len)); + cFYI(1, "partial send (%d remaining), terminating session", + total_len); /* If we have only sent part of an SMB then the next SMB could be taken as the remainder of this one. 
We need to kill the socket so the server throws away the partial @@ -223,7 +221,7 @@ smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec) } if (rc < 0) { - cERROR(1, ("Error %d sending data on socket to server", rc)); + cERROR(1, "Error %d sending data on socket to server", rc); } else rc = 0; @@ -296,7 +294,7 @@ static int allocate_mid(struct cifsSesInfo *ses, struct smb_hdr *in_buf, } if (ses->server->tcpStatus == CifsNeedReconnect) { - cFYI(1, ("tcp session dead - return to caller to retry")); + cFYI(1, "tcp session dead - return to caller to retry"); return -EAGAIN; } @@ -348,7 +346,7 @@ static int wait_for_response(struct cifsSesInfo *ses, lrt += time_to_wait; if (time_after(jiffies, lrt)) { /* No replies for time_to_wait. */ - cERROR(1, ("server not responding")); + cERROR(1, "server not responding"); return -1; } } else { @@ -379,7 +377,7 @@ SendReceiveNoRsp(const unsigned int xid, struct cifsSesInfo *ses, iov[0].iov_len = in_buf->smb_buf_length + 4; flags |= CIFS_NO_RESP; rc = SendReceive2(xid, ses, iov, 1, &resp_buf_type, flags); - cFYI(DBG2, ("SendRcvNoRsp flags %d rc %d", flags, rc)); + cFYI(DBG2, "SendRcvNoRsp flags %d rc %d", flags, rc); return rc; } @@ -402,7 +400,7 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, if ((ses == NULL) || (ses->server == NULL)) { cifs_small_buf_release(in_buf); - cERROR(1, ("Null session")); + cERROR(1, "Null session"); return -EIO; } @@ -471,7 +469,7 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, else if (long_op == CIFS_BLOCKING_OP) timeout = 0x7FFFFFFF; /* large, but not so large as to wrap */ else { - cERROR(1, ("unknown timeout flag %d", long_op)); + cERROR(1, "unknown timeout flag %d", long_op); rc = -EIO; goto out; } @@ -490,8 +488,8 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, spin_lock(&GlobalMid_Lock); if (midQ->resp_buf == NULL) { - cERROR(1, ("No response to cmd %d mid %d", - midQ->command, midQ->mid)); + cERROR(1, "No response to cmd %d mid %d", + midQ->command, midQ->mid); if (midQ->midState == MID_REQUEST_SUBMITTED) { if (ses->server->tcpStatus == CifsExiting) rc = -EHOSTDOWN; @@ -504,7 +502,7 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, if (rc != -EHOSTDOWN) { if (midQ->midState == MID_RETRY_NEEDED) { rc = -EAGAIN; - cFYI(1, ("marking request for retry")); + cFYI(1, "marking request for retry"); } else { rc = -EIO; } @@ -521,8 +519,8 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, receive_len = midQ->resp_buf->smb_buf_length; if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) { - cERROR(1, ("Frame too large received. Length: %d Xid: %d", - receive_len, xid)); + cERROR(1, "Frame too large received. 
Length: %d Xid: %d", + receive_len, xid); rc = -EIO; goto out; } @@ -548,7 +546,7 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, &ses->server->mac_signing_key, midQ->sequence_number+1); if (rc) { - cERROR(1, ("Unexpected SMB signature")); + cERROR(1, "Unexpected SMB signature"); /* BB FIXME add code to kill session */ } } @@ -569,7 +567,7 @@ SendReceive2(const unsigned int xid, struct cifsSesInfo *ses, DeleteMidQEntry */ } else { rc = -EIO; - cFYI(1, ("Bad MID state?")); + cFYI(1, "Bad MID state?"); } out: @@ -591,11 +589,11 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, struct mid_q_entry *midQ; if (ses == NULL) { - cERROR(1, ("Null smb session")); + cERROR(1, "Null smb session"); return -EIO; } if (ses->server == NULL) { - cERROR(1, ("Null tcp session")); + cERROR(1, "Null tcp session"); return -EIO; } @@ -607,8 +605,8 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, use ses->maxReq */ if (in_buf->smb_buf_length > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4) { - cERROR(1, ("Illegal length, greater than maximum frame, %d", - in_buf->smb_buf_length)); + cERROR(1, "Illegal length, greater than maximum frame, %d", + in_buf->smb_buf_length); return -EIO; } @@ -665,7 +663,7 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, else if (long_op == CIFS_BLOCKING_OP) timeout = 0x7FFFFFFF; /* large but no so large as to wrap */ else { - cERROR(1, ("unknown timeout flag %d", long_op)); + cERROR(1, "unknown timeout flag %d", long_op); rc = -EIO; goto out; } @@ -681,8 +679,8 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, spin_lock(&GlobalMid_Lock); if (midQ->resp_buf == NULL) { - cERROR(1, ("No response for cmd %d mid %d", - midQ->command, midQ->mid)); + cERROR(1, "No response for cmd %d mid %d", + midQ->command, midQ->mid); if (midQ->midState == MID_REQUEST_SUBMITTED) { if (ses->server->tcpStatus == CifsExiting) rc = -EHOSTDOWN; @@ -695,7 +693,7 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, if (rc != -EHOSTDOWN) { if (midQ->midState == MID_RETRY_NEEDED) { rc = -EAGAIN; - cFYI(1, ("marking request for retry")); + cFYI(1, "marking request for retry"); } else { rc = -EIO; } @@ -712,8 +710,8 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, receive_len = midQ->resp_buf->smb_buf_length; if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) { - cERROR(1, ("Frame too large received. Length: %d Xid: %d", - receive_len, xid)); + cERROR(1, "Frame too large received. 
Length: %d Xid: %d", + receive_len, xid); rc = -EIO; goto out; } @@ -736,7 +734,7 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, &ses->server->mac_signing_key, midQ->sequence_number+1); if (rc) { - cERROR(1, ("Unexpected SMB signature")); + cERROR(1, "Unexpected SMB signature"); /* BB FIXME add code to kill session */ } } @@ -753,7 +751,7 @@ SendReceive(const unsigned int xid, struct cifsSesInfo *ses, BCC(out_buf) = le16_to_cpu(BCC_LE(out_buf)); } else { rc = -EIO; - cERROR(1, ("Bad MID state?")); + cERROR(1, "Bad MID state?"); } out: @@ -824,13 +822,13 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, struct cifsSesInfo *ses; if (tcon == NULL || tcon->ses == NULL) { - cERROR(1, ("Null smb session")); + cERROR(1, "Null smb session"); return -EIO; } ses = tcon->ses; if (ses->server == NULL) { - cERROR(1, ("Null tcp session")); + cERROR(1, "Null tcp session"); return -EIO; } @@ -842,8 +840,8 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, use ses->maxReq */ if (in_buf->smb_buf_length > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4) { - cERROR(1, ("Illegal length, greater than maximum frame, %d", - in_buf->smb_buf_length)); + cERROR(1, "Illegal length, greater than maximum frame, %d", + in_buf->smb_buf_length); return -EIO; } @@ -933,8 +931,8 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, spin_unlock(&GlobalMid_Lock); receive_len = midQ->resp_buf->smb_buf_length; } else { - cERROR(1, ("No response for cmd %d mid %d", - midQ->command, midQ->mid)); + cERROR(1, "No response for cmd %d mid %d", + midQ->command, midQ->mid); if (midQ->midState == MID_REQUEST_SUBMITTED) { if (ses->server->tcpStatus == CifsExiting) rc = -EHOSTDOWN; @@ -947,7 +945,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, if (rc != -EHOSTDOWN) { if (midQ->midState == MID_RETRY_NEEDED) { rc = -EAGAIN; - cFYI(1, ("marking request for retry")); + cFYI(1, "marking request for retry"); } else { rc = -EIO; } @@ -958,8 +956,8 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, } if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) { - cERROR(1, ("Frame too large received. Length: %d Xid: %d", - receive_len, xid)); + cERROR(1, "Frame too large received. Length: %d Xid: %d", + receive_len, xid); rc = -EIO; goto out; } @@ -968,7 +966,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, if ((out_buf == NULL) || (midQ->midState != MID_RESPONSE_RECEIVED)) { rc = -EIO; - cERROR(1, ("Bad MID state?")); + cERROR(1, "Bad MID state?"); goto out; } @@ -986,7 +984,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifsTconInfo *tcon, &ses->server->mac_signing_key, midQ->sequence_number+1); if (rc) { - cERROR(1, ("Unexpected SMB signature")); + cERROR(1, "Unexpected SMB signature"); /* BB FIXME add code to kill session */ } } diff --git a/fs/cifs/xattr.c b/fs/cifs/xattr.c index f555ce077d4f..a1509207bfa6 100644 --- a/fs/cifs/xattr.c +++ b/fs/cifs/xattr.c @@ -70,12 +70,12 @@ int cifs_removexattr(struct dentry *direntry, const char *ea_name) return rc; } if (ea_name == NULL) { - cFYI(1, ("Null xattr names not supported")); + cFYI(1, "Null xattr names not supported"); } else if (strncmp(ea_name, CIFS_XATTR_USER_PREFIX, 5) && (strncmp(ea_name, CIFS_XATTR_OS2_PREFIX, 4))) { cFYI(1, - ("illegal xattr request %s (only user namespace supported)", - ea_name)); + "illegal xattr request %s (only user namespace supported)", + ea_name); /* BB what if no namespace prefix? 
*/ /* Should we just pass them to server, except for system and perhaps security prefixes? */ @@ -131,19 +131,19 @@ int cifs_setxattr(struct dentry *direntry, const char *ea_name, search server for EAs or streams to returns as xattrs */ if (value_size > MAX_EA_VALUE_SIZE) { - cFYI(1, ("size of EA value too large")); + cFYI(1, "size of EA value too large"); kfree(full_path); FreeXid(xid); return -EOPNOTSUPP; } if (ea_name == NULL) { - cFYI(1, ("Null xattr names not supported")); + cFYI(1, "Null xattr names not supported"); } else if (strncmp(ea_name, CIFS_XATTR_USER_PREFIX, 5) == 0) { if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR) goto set_ea_exit; if (strncmp(ea_name, CIFS_XATTR_DOS_ATTRIB, 14) == 0) - cFYI(1, ("attempt to set cifs inode metadata")); + cFYI(1, "attempt to set cifs inode metadata"); ea_name += 5; /* skip past user. prefix */ rc = CIFSSMBSetEA(xid, pTcon, full_path, ea_name, ea_value, @@ -169,9 +169,9 @@ int cifs_setxattr(struct dentry *direntry, const char *ea_name, ACL_TYPE_ACCESS, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); - cFYI(1, ("set POSIX ACL rc %d", rc)); + cFYI(1, "set POSIX ACL rc %d", rc); #else - cFYI(1, ("set POSIX ACL not supported")); + cFYI(1, "set POSIX ACL not supported"); #endif } else if (strncmp(ea_name, POSIX_ACL_XATTR_DEFAULT, strlen(POSIX_ACL_XATTR_DEFAULT)) == 0) { @@ -182,13 +182,13 @@ int cifs_setxattr(struct dentry *direntry, const char *ea_name, ACL_TYPE_DEFAULT, cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); - cFYI(1, ("set POSIX default ACL rc %d", rc)); + cFYI(1, "set POSIX default ACL rc %d", rc); #else - cFYI(1, ("set default POSIX ACL not supported")); + cFYI(1, "set default POSIX ACL not supported"); #endif } else { - cFYI(1, ("illegal xattr request %s (only user namespace" - " supported)", ea_name)); + cFYI(1, "illegal xattr request %s (only user namespace" + " supported)", ea_name); /* BB what if no namespace prefix? */ /* Should we just pass them to server, except for system and perhaps security prefixes? */ @@ -235,13 +235,13 @@ ssize_t cifs_getxattr(struct dentry *direntry, const char *ea_name, /* return dos attributes as pseudo xattr */ /* return alt name if available as pseudo attr */ if (ea_name == NULL) { - cFYI(1, ("Null xattr names not supported")); + cFYI(1, "Null xattr names not supported"); } else if (strncmp(ea_name, CIFS_XATTR_USER_PREFIX, 5) == 0) { if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR) goto get_ea_exit; if (strncmp(ea_name, CIFS_XATTR_DOS_ATTRIB, 14) == 0) { - cFYI(1, ("attempt to query cifs inode metadata")); + cFYI(1, "attempt to query cifs inode metadata"); /* revalidate/getattr then populate from inode */ } /* BB add else when above is implemented */ ea_name += 5; /* skip past user. 
prefix */ @@ -287,7 +287,7 @@ ssize_t cifs_getxattr(struct dentry *direntry, const char *ea_name, } #endif /* EXPERIMENTAL */ #else - cFYI(1, ("query POSIX ACL not supported yet")); + cFYI(1, "query POSIX ACL not supported yet"); #endif /* CONFIG_CIFS_POSIX */ } else if (strncmp(ea_name, POSIX_ACL_XATTR_DEFAULT, strlen(POSIX_ACL_XATTR_DEFAULT)) == 0) { @@ -299,18 +299,18 @@ ssize_t cifs_getxattr(struct dentry *direntry, const char *ea_name, cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); #else - cFYI(1, ("query POSIX default ACL not supported yet")); + cFYI(1, "query POSIX default ACL not supported yet"); #endif } else if (strncmp(ea_name, CIFS_XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) == 0) { - cFYI(1, ("Trusted xattr namespace not supported yet")); + cFYI(1, "Trusted xattr namespace not supported yet"); } else if (strncmp(ea_name, CIFS_XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN) == 0) { - cFYI(1, ("Security xattr namespace not supported yet")); + cFYI(1, "Security xattr namespace not supported yet"); } else cFYI(1, - ("illegal xattr request %s (only user namespace supported)", - ea_name)); + "illegal xattr request %s (only user namespace supported)", + ea_name); /* We could add an additional check for streams ie if proc/fs/cifs/streamstoxattr is set then diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c index 8e48b52205aa..0b502f80c691 100644 --- a/fs/configfs/dir.c +++ b/fs/configfs/dir.c @@ -645,6 +645,7 @@ static void detach_groups(struct config_group *group) configfs_detach_group(sd->s_element); child->d_inode->i_flags |= S_DEAD; + dont_mount(child); mutex_unlock(&child->d_inode->i_mutex); @@ -840,6 +841,7 @@ static int configfs_attach_item(struct config_item *parent_item, mutex_lock(&dentry->d_inode->i_mutex); configfs_remove_dir(item); dentry->d_inode->i_flags |= S_DEAD; + dont_mount(dentry); mutex_unlock(&dentry->d_inode->i_mutex); d_delete(dentry); } @@ -882,6 +884,7 @@ static int configfs_attach_group(struct config_item *parent_item, if (ret) { configfs_detach_item(item); dentry->d_inode->i_flags |= S_DEAD; + dont_mount(dentry); } configfs_adjust_dir_dirent_depth_after_populate(sd); mutex_unlock(&dentry->d_inode->i_mutex); @@ -1725,6 +1728,7 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys) mutex_unlock(&configfs_symlink_mutex); configfs_detach_group(&group->cg_item); dentry->d_inode->i_flags |= S_DEAD; + dont_mount(dentry); mutex_unlock(&dentry->d_inode->i_mutex); d_delete(dentry); diff --git a/fs/eventpoll.c b/fs/eventpoll.c index bd056a5b4efc..3817149919cb 100644 --- a/fs/eventpoll.c +++ b/fs/eventpoll.c @@ -1140,8 +1140,7 @@ retry: * ep_poll_callback() when events will become available. 
*/ init_waitqueue_entry(&wait, current); - wait.flags |= WQ_FLAG_EXCLUSIVE; - __add_wait_queue(&ep->wq, &wait); + __add_wait_queue_exclusive(&ep->wq, &wait); for (;;) { /* diff --git a/fs/jfs/super.c b/fs/jfs/super.c index 157382fa6256..b66832ac33ac 100644 --- a/fs/jfs/super.c +++ b/fs/jfs/super.c @@ -446,10 +446,8 @@ static int jfs_fill_super(struct super_block *sb, void *data, int silent) /* initialize the mount flag and determine the default error handler */ flag = JFS_ERR_REMOUNT_RO; - if (!parse_options((char *) data, sb, &newLVSize, &flag)) { - kfree(sbi); - return -EINVAL; - } + if (!parse_options((char *) data, sb, &newLVSize, &flag)) + goto out_kfree; sbi->flag = flag; #ifdef CONFIG_JFS_POSIX_ACL @@ -458,7 +456,7 @@ static int jfs_fill_super(struct super_block *sb, void *data, int silent) if (newLVSize) { printk(KERN_ERR "resize option for remount only\n"); - return -EINVAL; + goto out_kfree; } /* @@ -478,7 +476,7 @@ static int jfs_fill_super(struct super_block *sb, void *data, int silent) inode = new_inode(sb); if (inode == NULL) { ret = -ENOMEM; - goto out_kfree; + goto out_unload; } inode->i_ino = 0; inode->i_nlink = 1; @@ -550,9 +548,10 @@ out_mount_failed: make_bad_inode(sbi->direct_inode); iput(sbi->direct_inode); sbi->direct_inode = NULL; -out_kfree: +out_unload: if (sbi->nls_tab) unload_nls(sbi->nls_tab); +out_kfree: kfree(sbi); return ret; } diff --git a/fs/logfs/dev_bdev.c b/fs/logfs/dev_bdev.c index 243c00071f76..9bd2ce2a3040 100644 --- a/fs/logfs/dev_bdev.c +++ b/fs/logfs/dev_bdev.c @@ -303,6 +303,11 @@ static void bdev_put_device(struct super_block *sb) close_bdev_exclusive(logfs_super(sb)->s_bdev, FMODE_READ|FMODE_WRITE); } +static int bdev_can_write_buf(struct super_block *sb, u64 ofs) +{ + return 0; +} + static const struct logfs_device_ops bd_devops = { .find_first_sb = bdev_find_first_sb, .find_last_sb = bdev_find_last_sb, @@ -310,6 +315,7 @@ static const struct logfs_device_ops bd_devops = { .readpage = bdev_readpage, .writeseg = bdev_writeseg, .erase = bdev_erase, + .can_write_buf = bdev_can_write_buf, .sync = bdev_sync, .put_device = bdev_put_device, }; diff --git a/fs/logfs/dev_mtd.c b/fs/logfs/dev_mtd.c index cafb6ef2e05b..a85d47d13e4b 100644 --- a/fs/logfs/dev_mtd.c +++ b/fs/logfs/dev_mtd.c @@ -9,6 +9,7 @@ #include <linux/completion.h> #include <linux/mount.h> #include <linux/sched.h> +#include <linux/slab.h> #define PAGE_OFS(ofs) ((ofs) & (PAGE_SIZE-1)) @@ -126,7 +127,8 @@ static int mtd_readpage(void *_sb, struct page *page) err = mtd_read(sb, page->index << PAGE_SHIFT, PAGE_SIZE, page_address(page)); - if (err == -EUCLEAN) { + if (err == -EUCLEAN || err == -EBADMSG) { + /* -EBADMSG happens regularly on power failures */ err = 0; /* FIXME: force GC this segment */ } @@ -233,12 +235,32 @@ static void mtd_put_device(struct super_block *sb) put_mtd_device(logfs_super(sb)->s_mtd); } +static int mtd_can_write_buf(struct super_block *sb, u64 ofs) +{ + struct logfs_super *super = logfs_super(sb); + void *buf; + int err; + + buf = kmalloc(super->s_writesize, GFP_KERNEL); + if (!buf) + return -ENOMEM; + err = mtd_read(sb, ofs, super->s_writesize, buf); + if (err) + goto out; + if (memchr_inv(buf, 0xff, super->s_writesize)) + err = -EIO; + kfree(buf); +out: + return err; +} + static const struct logfs_device_ops mtd_devops = { .find_first_sb = mtd_find_first_sb, .find_last_sb = mtd_find_last_sb, .readpage = mtd_readpage, .writeseg = mtd_writeseg, .erase = mtd_erase, + .can_write_buf = mtd_can_write_buf, .sync = mtd_sync, .put_device = mtd_put_device, }; @@ -250,5 
+272,7 @@ int logfs_get_sb_mtd(struct file_system_type *type, int flags, const struct logfs_device_ops *devops = &mtd_devops; mtd = get_mtd_device(NULL, mtdnr); + if (IS_ERR(mtd)) + return PTR_ERR(mtd); return logfs_get_sb_device(type, flags, mtd, NULL, devops, mnt); } diff --git a/fs/logfs/file.c b/fs/logfs/file.c index 370f367a933e..0de524071870 100644 --- a/fs/logfs/file.c +++ b/fs/logfs/file.c @@ -161,7 +161,17 @@ static int logfs_writepage(struct page *page, struct writeback_control *wbc) static void logfs_invalidatepage(struct page *page, unsigned long offset) { - move_page_to_btree(page); + struct logfs_block *block = logfs_block(page); + + if (block->reserved_bytes) { + struct super_block *sb = page->mapping->host->i_sb; + struct logfs_super *super = logfs_super(sb); + + super->s_dirty_pages -= block->reserved_bytes; + block->ops->free_block(sb, block); + BUG_ON(bitmap_weight(block->alias_map, LOGFS_BLOCK_FACTOR)); + } else + move_page_to_btree(page); BUG_ON(PagePrivate(page) || page->private); } @@ -212,10 +222,8 @@ int logfs_ioctl(struct inode *inode, struct file *file, unsigned int cmd, int logfs_fsync(struct file *file, struct dentry *dentry, int datasync) { struct super_block *sb = dentry->d_inode->i_sb; - struct logfs_super *super = logfs_super(sb); - /* FIXME: write anchor */ - super->s_devops->sync(sb); + logfs_write_anchor(sb); return 0; } diff --git a/fs/logfs/gc.c b/fs/logfs/gc.c index 76c242fbe1b0..caa4419285dc 100644 --- a/fs/logfs/gc.c +++ b/fs/logfs/gc.c @@ -122,7 +122,7 @@ static void logfs_cleanse_block(struct super_block *sb, u64 ofs, u64 ino, logfs_safe_iput(inode, cookie); } -static u32 logfs_gc_segment(struct super_block *sb, u32 segno, u8 dist) +static u32 logfs_gc_segment(struct super_block *sb, u32 segno) { struct logfs_super *super = logfs_super(sb); struct logfs_segment_header sh; @@ -401,7 +401,7 @@ static int __logfs_gc_once(struct super_block *sb, struct gc_candidate *cand) segno, (u64)segno << super->s_segshift, dist, no_free_segments(sb), valid, super->s_free_bytes); - cleaned = logfs_gc_segment(sb, segno, dist); + cleaned = logfs_gc_segment(sb, segno); log_gc("GC segment #%02x complete - now %x valid\n", segno, valid - cleaned); BUG_ON(cleaned != valid); @@ -632,38 +632,31 @@ static int check_area(struct super_block *sb, int i) { struct logfs_super *super = logfs_super(sb); struct logfs_area *area = super->s_area[i]; - struct logfs_object_header oh; + gc_level_t gc_level; + u32 cleaned, valid, ec; u32 segno = area->a_segno; - u32 ofs = area->a_used_bytes; - __be32 crc; - int err; + u64 ofs = dev_ofs(sb, area->a_segno, area->a_written_bytes); if (!area->a_is_open) return 0; - for (ofs = area->a_used_bytes; - ofs <= super->s_segsize - sizeof(oh); - ofs += (u32)be16_to_cpu(oh.len) + sizeof(oh)) { - err = wbuf_read(sb, dev_ofs(sb, segno, ofs), sizeof(oh), &oh); - if (err) - return err; - - if (!memchr_inv(&oh, 0xff, sizeof(oh))) - break; + if (super->s_devops->can_write_buf(sb, ofs) == 0) + return 0; - crc = logfs_crc32(&oh, sizeof(oh) - 4, 4); - if (crc != oh.crc) { - printk(KERN_INFO "interrupted header at %llx\n", - dev_ofs(sb, segno, ofs)); - return 0; - } - } - if (ofs != area->a_used_bytes) { - printk(KERN_INFO "%x bytes unaccounted data found at %llx\n", - ofs - area->a_used_bytes, - dev_ofs(sb, segno, area->a_used_bytes)); - area->a_used_bytes = ofs; - } + printk(KERN_INFO"LogFS: Possibly incomplete write at %llx\n", ofs); + /* + * The device cannot write back the write buffer. 
Most likely the + * wbuf was already written out and the system crashed at some point + * before the journal commit happened. In that case we wouldn't have + * to do anything. But if the crash happened before the wbuf was + * written out correctly, we must GC this segment. So assume the + * worst and always do the GC run. + */ + area->a_is_open = 0; + valid = logfs_valid_bytes(sb, segno, &ec, &gc_level); + cleaned = logfs_gc_segment(sb, segno); + if (cleaned != valid) + return -EIO; return 0; } diff --git a/fs/logfs/inode.c b/fs/logfs/inode.c index 14ed27274da2..755a92e8daa7 100644 --- a/fs/logfs/inode.c +++ b/fs/logfs/inode.c @@ -193,6 +193,7 @@ static void logfs_init_inode(struct super_block *sb, struct inode *inode) inode->i_ctime = CURRENT_TIME; inode->i_mtime = CURRENT_TIME; inode->i_nlink = 1; + li->li_refcount = 1; INIT_LIST_HEAD(&li->li_freeing_list); for (i = 0; i < LOGFS_EMBEDDED_FIELDS; i++) @@ -326,7 +327,7 @@ static void logfs_set_ino_generation(struct super_block *sb, u64 ino; mutex_lock(&super->s_journal_mutex); - ino = logfs_seek_hole(super->s_master_inode, super->s_last_ino); + ino = logfs_seek_hole(super->s_master_inode, super->s_last_ino + 1); super->s_last_ino = ino; super->s_inos_till_wrap--; if (super->s_inos_till_wrap < 0) { @@ -386,8 +387,7 @@ static void logfs_init_once(void *_li) static int logfs_sync_fs(struct super_block *sb, int wait) { - /* FIXME: write anchor */ - logfs_super(sb)->s_devops->sync(sb); + logfs_write_anchor(sb); return 0; } diff --git a/fs/logfs/journal.c b/fs/logfs/journal.c index fb0a613f885b..4b0e0616b357 100644 --- a/fs/logfs/journal.c +++ b/fs/logfs/journal.c @@ -132,10 +132,9 @@ static int read_area(struct super_block *sb, struct logfs_je_area *a) ofs = dev_ofs(sb, area->a_segno, area->a_written_bytes); if (super->s_writesize > 1) - logfs_buf_recover(area, ofs, a + 1, super->s_writesize); + return logfs_buf_recover(area, ofs, a + 1, super->s_writesize); else - logfs_buf_recover(area, ofs, NULL, 0); - return 0; + return logfs_buf_recover(area, ofs, NULL, 0); } static void *unpack(void *from, void *to) @@ -245,7 +244,7 @@ static int read_je(struct super_block *sb, u64 ofs) read_erasecount(sb, unpack(jh, scratch)); break; case JE_AREA: - read_area(sb, unpack(jh, scratch)); + err = read_area(sb, unpack(jh, scratch)); break; case JE_OBJ_ALIAS: err = logfs_load_object_aliases(sb, unpack(jh, scratch), diff --git a/fs/logfs/logfs.h b/fs/logfs/logfs.h index 0a3df1a0c936..93b55f337245 100644 --- a/fs/logfs/logfs.h +++ b/fs/logfs/logfs.h @@ -144,6 +144,7 @@ struct logfs_area_ops { * @erase: erase one segment * @read: read from the device * @erase: erase part of the device + * @can_write_buf: decide whether wbuf can be written to ofs */ struct logfs_device_ops { struct page *(*find_first_sb)(struct super_block *sb, u64 *ofs); @@ -153,6 +154,7 @@ struct logfs_device_ops { void (*writeseg)(struct super_block *sb, u64 ofs, size_t len); int (*erase)(struct super_block *sb, loff_t ofs, size_t len, int ensure_write); + int (*can_write_buf)(struct super_block *sb, u64 ofs); void (*sync)(struct super_block *sb); void (*put_device)(struct super_block *sb); }; @@ -394,6 +396,7 @@ struct logfs_super { int s_lock_count; mempool_t *s_block_pool; /* struct logfs_block pool */ mempool_t *s_shadow_pool; /* struct logfs_shadow pool */ + struct list_head s_writeback_list; /* writeback pages */ /* * Space accounting: * - s_used_bytes specifies space used to store valid data objects. 
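[Editorial note] The logfs hunks above add a new ->can_write_buf() hook to struct logfs_device_ops. It lets the recovery path in check_area() (fs/logfs/gc.c) ask whether the open segment can still absorb the pending write buffer: the block-device backend always answers yes (returns 0), while the MTD backend reads the region back and answers yes only if it is still erased, i.e. entirely 0xff. The sketch below is a minimal userspace illustration of that erased-region check, not kernel code; the name region_is_erased and the demo buffers are invented for the example.

/*
 * Userspace sketch of the ->can_write_buf() decision used by the MTD
 * backend: a flash region can still take the pending write buffer only
 * if every byte is still in the erased state (0xff).
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Return 0 if buf[0..len) is all 0xff, -EIO otherwise. */
static int region_is_erased(const unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		if (buf[i] != 0xff)
			return -EIO;
	return 0;
}

int main(void)
{
	unsigned char erased[16], dirty[16];

	memset(erased, 0xff, sizeof(erased));
	memset(dirty, 0xff, sizeof(dirty));
	dirty[7] = 0x00;	/* one programmed byte spoils the region */

	printf("erased region: %d\n", region_is_erased(erased, sizeof(erased)));
	printf("dirty region:  %d\n", region_is_erased(dirty, sizeof(dirty)));
	return 0;
}

When the check fails, check_area() in the patch assumes the worst and garbage-collects the open segment instead of trusting a possibly interrupted write.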
@@ -598,19 +601,19 @@ void freeseg(struct super_block *sb, u32 segno); int logfs_init_areas(struct super_block *sb); void logfs_cleanup_areas(struct super_block *sb); int logfs_open_area(struct logfs_area *area, size_t bytes); -void __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, +int __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, int use_filler); -static inline void logfs_buf_write(struct logfs_area *area, u64 ofs, +static inline int logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len) { - __logfs_buf_write(area, ofs, buf, len, 0); + return __logfs_buf_write(area, ofs, buf, len, 0); } -static inline void logfs_buf_recover(struct logfs_area *area, u64 ofs, +static inline int logfs_buf_recover(struct logfs_area *area, u64 ofs, void *buf, size_t len) { - __logfs_buf_write(area, ofs, buf, len, 1); + return __logfs_buf_write(area, ofs, buf, len, 1); } /* super.c */ diff --git a/fs/logfs/readwrite.c b/fs/logfs/readwrite.c index 3159db6958e5..0718d112a1a5 100644 --- a/fs/logfs/readwrite.c +++ b/fs/logfs/readwrite.c @@ -892,6 +892,8 @@ u64 logfs_seek_hole(struct inode *inode, u64 bix) return bix; else if (li->li_data[INDIRECT_INDEX] & LOGFS_FULLY_POPULATED) bix = maxbix(li->li_height); + else if (bix >= maxbix(li->li_height)) + return bix; else { bix = seek_holedata_loop(inode, bix, 0); if (bix < maxbix(li->li_height)) @@ -1093,17 +1095,25 @@ static int logfs_reserve_bytes(struct inode *inode, int bytes) int get_page_reserve(struct inode *inode, struct page *page) { struct logfs_super *super = logfs_super(inode->i_sb); + struct logfs_block *block = logfs_block(page); int ret; - if (logfs_block(page) && logfs_block(page)->reserved_bytes) + if (block && block->reserved_bytes) return 0; logfs_get_wblocks(inode->i_sb, page, WF_LOCK); - ret = logfs_reserve_bytes(inode, 6 * LOGFS_MAX_OBJECTSIZE); + while ((ret = logfs_reserve_bytes(inode, 6 * LOGFS_MAX_OBJECTSIZE)) && + !list_empty(&super->s_writeback_list)) { + block = list_entry(super->s_writeback_list.next, + struct logfs_block, alias_list); + block->ops->write_block(block); + } if (!ret) { alloc_data_block(inode, page); - logfs_block(page)->reserved_bytes += 6 * LOGFS_MAX_OBJECTSIZE; + block = logfs_block(page); + block->reserved_bytes += 6 * LOGFS_MAX_OBJECTSIZE; super->s_dirty_pages += 6 * LOGFS_MAX_OBJECTSIZE; + list_move_tail(&block->alias_list, &super->s_writeback_list); } logfs_put_wblocks(inode->i_sb, page, WF_LOCK); return ret; @@ -1861,7 +1871,7 @@ int logfs_truncate(struct inode *inode, u64 target) size = target; logfs_get_wblocks(sb, NULL, 1); - err = __logfs_truncate(inode, target); + err = __logfs_truncate(inode, size); if (!err) err = __logfs_write_inode(inode, 0); logfs_put_wblocks(sb, NULL, 1); @@ -2249,6 +2259,7 @@ int logfs_init_rw(struct super_block *sb) int min_fill = 3 * super->s_no_blocks; INIT_LIST_HEAD(&super->s_object_alias); + INIT_LIST_HEAD(&super->s_writeback_list); mutex_init(&super->s_write_mutex); super->s_block_pool = mempool_create_kmalloc_pool(min_fill, sizeof(struct logfs_block)); diff --git a/fs/logfs/segment.c b/fs/logfs/segment.c index f77ce2b470ba..a9657afb70ad 100644 --- a/fs/logfs/segment.c +++ b/fs/logfs/segment.c @@ -67,7 +67,7 @@ static struct page *get_mapping_page(struct super_block *sb, pgoff_t index, return page; } -void __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, +int __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, int use_filler) { pgoff_t index = ofs >> PAGE_SHIFT; @@ 
-81,8 +81,10 @@ void __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, copylen = min((ulong)len, PAGE_SIZE - offset); page = get_mapping_page(area->a_sb, index, use_filler); - SetPageUptodate(page); + if (IS_ERR(page)) + return PTR_ERR(page); BUG_ON(!page); /* FIXME: reserve a pool */ + SetPageUptodate(page); memcpy(page_address(page) + offset, buf, copylen); SetPagePrivate(page); page_cache_release(page); @@ -92,6 +94,7 @@ void __logfs_buf_write(struct logfs_area *area, u64 ofs, void *buf, size_t len, offset = 0; index++; } while (len); + return 0; } static void pad_partial_page(struct logfs_area *area) diff --git a/fs/logfs/super.c b/fs/logfs/super.c index 5866ee6e1327..d651e10a1e9c 100644 --- a/fs/logfs/super.c +++ b/fs/logfs/super.c @@ -138,10 +138,14 @@ static int logfs_sb_set(struct super_block *sb, void *_super) sb->s_fs_info = super; sb->s_mtd = super->s_mtd; sb->s_bdev = super->s_bdev; +#ifdef CONFIG_BLOCK if (sb->s_bdev) sb->s_bdi = &bdev_get_queue(sb->s_bdev)->backing_dev_info; +#endif +#ifdef CONFIG_MTD if (sb->s_mtd) sb->s_bdi = sb->s_mtd->backing_dev_info; +#endif return 0; } @@ -333,27 +337,27 @@ static int logfs_get_sb_final(struct super_block *sb, struct vfsmount *mnt) goto fail; sb->s_root = d_alloc_root(rootdir); - if (!sb->s_root) - goto fail2; + if (!sb->s_root) { + iput(rootdir); + goto fail; + } super->s_erase_page = alloc_pages(GFP_KERNEL, 0); if (!super->s_erase_page) - goto fail2; + goto fail; memset(page_address(super->s_erase_page), 0xFF, PAGE_SIZE); /* FIXME: check for read-only mounts */ err = logfs_make_writeable(sb); if (err) - goto fail3; + goto fail1; log_super("LogFS: Finished mounting\n"); simple_set_mnt(mnt, sb); return 0; -fail3: +fail1: __free_page(super->s_erase_page); -fail2: - iput(rootdir); fail: iput(logfs_super(sb)->s_master_inode); return -EIO; @@ -382,7 +386,7 @@ static struct page *find_super_block(struct super_block *sb) if (!first || IS_ERR(first)) return NULL; last = super->s_devops->find_last_sb(sb, &super->s_sb_ofs[1]); - if (!last || IS_ERR(first)) { + if (!last || IS_ERR(last)) { page_cache_release(first); return NULL; } @@ -413,7 +417,7 @@ static int __logfs_read_sb(struct super_block *sb) page = find_super_block(sb); if (!page) - return -EIO; + return -EINVAL; ds = page_address(page); super->s_size = be64_to_cpu(ds->ds_filesystem_size); diff --git a/fs/namei.c b/fs/namei.c index a7dce91a7e42..b86b96fe1dc3 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -1641,7 +1641,7 @@ static struct file *do_last(struct nameidata *nd, struct path *path, if (nd->last.name[nd->last.len]) { if (open_flag & O_CREAT) goto exit; - nd->flags |= LOOKUP_DIRECTORY; + nd->flags |= LOOKUP_DIRECTORY | LOOKUP_FOLLOW; } /* just plain open? 
*/ @@ -1830,6 +1830,8 @@ reval: } if (open_flag & O_DIRECTORY) nd.flags |= LOOKUP_DIRECTORY; + if (!(open_flag & O_NOFOLLOW)) + nd.flags |= LOOKUP_FOLLOW; filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname); while (unlikely(!filp)) { /* trailing symlink */ struct path holder; @@ -1837,7 +1839,7 @@ reval: void *cookie; error = -ELOOP; /* S_ISDIR part is a temporary automount kludge */ - if ((open_flag & O_NOFOLLOW) && !S_ISDIR(inode->i_mode)) + if (!(nd.flags & LOOKUP_FOLLOW) && !S_ISDIR(inode->i_mode)) goto exit_dput; if (count++ == 32) goto exit_dput; @@ -2174,8 +2176,10 @@ int vfs_rmdir(struct inode *dir, struct dentry *dentry) error = security_inode_rmdir(dir, dentry); if (!error) { error = dir->i_op->rmdir(dir, dentry); - if (!error) + if (!error) { dentry->d_inode->i_flags |= S_DEAD; + dont_mount(dentry); + } } } mutex_unlock(&dentry->d_inode->i_mutex); @@ -2259,7 +2263,7 @@ int vfs_unlink(struct inode *dir, struct dentry *dentry) if (!error) { error = dir->i_op->unlink(dir, dentry); if (!error) - dentry->d_inode->i_flags |= S_DEAD; + dont_mount(dentry); } } mutex_unlock(&dentry->d_inode->i_mutex); @@ -2570,17 +2574,20 @@ static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry, return error; target = new_dentry->d_inode; - if (target) { + if (target) mutex_lock(&target->i_mutex); - dentry_unhash(new_dentry); - } if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry)) error = -EBUSY; - else + else { + if (target) + dentry_unhash(new_dentry); error = old_dir->i_op->rename(old_dir, old_dentry, new_dir, new_dentry); + } if (target) { - if (!error) + if (!error) { target->i_flags |= S_DEAD; + dont_mount(new_dentry); + } mutex_unlock(&target->i_mutex); if (d_unhashed(new_dentry)) d_rehash(new_dentry); @@ -2612,7 +2619,7 @@ static int vfs_rename_other(struct inode *old_dir, struct dentry *old_dentry, error = old_dir->i_op->rename(old_dir, old_dentry, new_dir, new_dentry); if (!error) { if (target) - target->i_flags |= S_DEAD; + dont_mount(new_dentry); if (!(old_dir->i_sb->s_type->fs_flags & FS_RENAME_DOES_D_MOVE)) d_move(old_dentry, new_dentry); } diff --git a/fs/namespace.c b/fs/namespace.c index 8174c8ab5c70..f20cb57d1067 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -1432,7 +1432,7 @@ static int graft_tree(struct vfsmount *mnt, struct path *path) err = -ENOENT; mutex_lock(&path->dentry->d_inode->i_mutex); - if (IS_DEADDIR(path->dentry->d_inode)) + if (cant_mount(path->dentry)) goto out_unlock; err = security_sb_check_sb(mnt, path); @@ -1623,7 +1623,7 @@ static int do_move_mount(struct path *path, char *old_name) err = -ENOENT; mutex_lock(&path->dentry->d_inode->i_mutex); - if (IS_DEADDIR(path->dentry->d_inode)) + if (cant_mount(path->dentry)) goto out1; if (d_unlinked(path->dentry)) @@ -2234,7 +2234,7 @@ SYSCALL_DEFINE2(pivot_root, const char __user *, new_root, if (!check_mnt(root.mnt)) goto out2; error = -ENOENT; - if (IS_DEADDIR(new.dentry->d_inode)) + if (cant_mount(old.dentry)) goto out2; if (d_unlinked(new.dentry)) goto out2; diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c index 1afb0a10229f..e27960cd76ab 100644 --- a/fs/notify/inotify/inotify_fsnotify.c +++ b/fs/notify/inotify/inotify_fsnotify.c @@ -28,6 +28,7 @@ #include <linux/path.h> /* struct path */ #include <linux/slab.h> /* kmem_* */ #include <linux/types.h> +#include <linux/sched.h> #include "inotify.h" @@ -146,6 +147,7 @@ static void inotify_free_group_priv(struct fsnotify_group *group) idr_for_each(&group->inotify_data.idr, idr_callback, group); 
idr_remove_all(&group->inotify_data.idr); idr_destroy(&group->inotify_data.idr); + free_uid(group->inotify_data.user); } void inotify_free_event_priv(struct fsnotify_event_private_data *fsn_event_priv) diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c index 472cdf29ef82..e46ca685b9be 100644 --- a/fs/notify/inotify/inotify_user.c +++ b/fs/notify/inotify/inotify_user.c @@ -546,21 +546,24 @@ retry: if (unlikely(!idr_pre_get(&group->inotify_data.idr, GFP_KERNEL))) goto out_err; + /* we are putting the mark on the idr, take a reference */ + fsnotify_get_mark(&tmp_ientry->fsn_entry); + spin_lock(&group->inotify_data.idr_lock); ret = idr_get_new_above(&group->inotify_data.idr, &tmp_ientry->fsn_entry, group->inotify_data.last_wd+1, &tmp_ientry->wd); spin_unlock(&group->inotify_data.idr_lock); if (ret) { + /* we didn't get on the idr, drop the idr reference */ + fsnotify_put_mark(&tmp_ientry->fsn_entry); + /* idr was out of memory allocate and try again */ if (ret == -EAGAIN) goto retry; goto out_err; } - /* we put the mark on the idr, take a reference */ - fsnotify_get_mark(&tmp_ientry->fsn_entry); - /* we are on the idr, now get on the inode */ ret = fsnotify_add_mark(&tmp_ientry->fsn_entry, group, inode); if (ret) { @@ -578,16 +581,13 @@ retry: /* return the watch descriptor for this new entry */ ret = tmp_ientry->wd; - /* match the ref from fsnotify_init_markentry() */ - fsnotify_put_mark(&tmp_ientry->fsn_entry); - /* if this mark added a new event update the group mask */ if (mask & ~group->mask) fsnotify_recalc_group_mask(group); out_err: - if (ret < 0) - kmem_cache_free(inotify_inode_mark_cachep, tmp_ientry); + /* match the ref from fsnotify_init_markentry() */ + fsnotify_put_mark(&tmp_ientry->fsn_entry); return ret; } diff --git a/fs/sysv/dir.c b/fs/sysv/dir.c index 4e50286a4cc3..1dabed286b4c 100644 --- a/fs/sysv/dir.c +++ b/fs/sysv/dir.c @@ -164,8 +164,8 @@ struct sysv_dir_entry *sysv_find_entry(struct dentry *dentry, struct page **res_ name, de->name)) goto found; } + dir_put_page(page); } - dir_put_page(page); if (++n >= npages) n = 0; diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index c99c64dc5f3d..c33749f95b32 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -33,7 +33,7 @@ * Atomically reads the value of @v. Note that the guaranteed * useful range of an atomic_t is only 24 bits. 
*/ -#define atomic_read(v) ((v)->counter) +#define atomic_read(v) (*(volatile int *)&(v)->counter) /** * atomic_set - set atomic variable diff --git a/include/asm-generic/bitops/arch_hweight.h b/include/asm-generic/bitops/arch_hweight.h new file mode 100644 index 000000000000..6a211f40665c --- /dev/null +++ b/include/asm-generic/bitops/arch_hweight.h @@ -0,0 +1,25 @@ +#ifndef _ASM_GENERIC_BITOPS_ARCH_HWEIGHT_H_ +#define _ASM_GENERIC_BITOPS_ARCH_HWEIGHT_H_ + +#include <asm/types.h> + +static inline unsigned int __arch_hweight32(unsigned int w) +{ + return __sw_hweight32(w); +} + +static inline unsigned int __arch_hweight16(unsigned int w) +{ + return __sw_hweight16(w); +} + +static inline unsigned int __arch_hweight8(unsigned int w) +{ + return __sw_hweight8(w); +} + +static inline unsigned long __arch_hweight64(__u64 w) +{ + return __sw_hweight64(w); +} +#endif /* _ASM_GENERIC_BITOPS_HWEIGHT_H_ */ diff --git a/include/asm-generic/bitops/const_hweight.h b/include/asm-generic/bitops/const_hweight.h new file mode 100644 index 000000000000..fa2a50b7ee66 --- /dev/null +++ b/include/asm-generic/bitops/const_hweight.h @@ -0,0 +1,42 @@ +#ifndef _ASM_GENERIC_BITOPS_CONST_HWEIGHT_H_ +#define _ASM_GENERIC_BITOPS_CONST_HWEIGHT_H_ + +/* + * Compile time versions of __arch_hweightN() + */ +#define __const_hweight8(w) \ + ( (!!((w) & (1ULL << 0))) + \ + (!!((w) & (1ULL << 1))) + \ + (!!((w) & (1ULL << 2))) + \ + (!!((w) & (1ULL << 3))) + \ + (!!((w) & (1ULL << 4))) + \ + (!!((w) & (1ULL << 5))) + \ + (!!((w) & (1ULL << 6))) + \ + (!!((w) & (1ULL << 7))) ) + +#define __const_hweight16(w) (__const_hweight8(w) + __const_hweight8((w) >> 8 )) +#define __const_hweight32(w) (__const_hweight16(w) + __const_hweight16((w) >> 16)) +#define __const_hweight64(w) (__const_hweight32(w) + __const_hweight32((w) >> 32)) + +/* + * Generic interface. + */ +#define hweight8(w) (__builtin_constant_p(w) ? __const_hweight8(w) : __arch_hweight8(w)) +#define hweight16(w) (__builtin_constant_p(w) ? __const_hweight16(w) : __arch_hweight16(w)) +#define hweight32(w) (__builtin_constant_p(w) ? __const_hweight32(w) : __arch_hweight32(w)) +#define hweight64(w) (__builtin_constant_p(w) ? __const_hweight64(w) : __arch_hweight64(w)) + +/* + * Interface for known constant arguments + */ +#define HWEIGHT8(w) (BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + __const_hweight8(w)) +#define HWEIGHT16(w) (BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + __const_hweight16(w)) +#define HWEIGHT32(w) (BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + __const_hweight32(w)) +#define HWEIGHT64(w) (BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + __const_hweight64(w)) + +/* + * Type invariant interface to the compile time constant hweight functions. 
+ */ +#define HWEIGHT(w) HWEIGHT64((u64)w) + +#endif /* _ASM_GENERIC_BITOPS_CONST_HWEIGHT_H_ */ diff --git a/include/asm-generic/bitops/hweight.h b/include/asm-generic/bitops/hweight.h index fbbc383771da..a94d6519c7ed 100644 --- a/include/asm-generic/bitops/hweight.h +++ b/include/asm-generic/bitops/hweight.h @@ -1,11 +1,7 @@ #ifndef _ASM_GENERIC_BITOPS_HWEIGHT_H_ #define _ASM_GENERIC_BITOPS_HWEIGHT_H_ -#include <asm/types.h> - -extern unsigned int hweight32(unsigned int w); -extern unsigned int hweight16(unsigned int w); -extern unsigned int hweight8(unsigned int w); -extern unsigned long hweight64(__u64 w); +#include <asm-generic/bitops/arch_hweight.h> +#include <asm-generic/bitops/const_hweight.h> #endif /* _ASM_GENERIC_BITOPS_HWEIGHT_H_ */ diff --git a/include/linux/acpi.h b/include/linux/acpi.h index b926afe8c03e..3da73f5f0ae9 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -116,11 +116,12 @@ extern unsigned long acpi_realmode_flags; int acpi_register_gsi (struct device *dev, u32 gsi, int triggering, int polarity); int acpi_gsi_to_irq (u32 gsi, unsigned int *irq); +int acpi_isa_irq_to_gsi (unsigned isa_irq, u32 *gsi); #ifdef CONFIG_X86_IO_APIC -extern int acpi_get_override_irq(int bus_irq, int *trigger, int *polarity); +extern int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity); #else -#define acpi_get_override_irq(bus, trigger, polarity) (-1) +#define acpi_get_override_irq(gsi, trigger, polarity) (-1) #endif /* * This function undoes the effect of one call to acpi_register_gsi(). diff --git a/include/linux/bitops.h b/include/linux/bitops.h index b796eab5ca75..fc68053378ce 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -10,6 +10,11 @@ #define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) #endif +extern unsigned int __sw_hweight8(unsigned int w); +extern unsigned int __sw_hweight16(unsigned int w); +extern unsigned int __sw_hweight32(unsigned int w); +extern unsigned long __sw_hweight64(__u64 w); + /* * Include this here because some architectures need generic_ffs/fls in * scope @@ -44,31 +49,6 @@ static inline unsigned long hweight_long(unsigned long w) return sizeof(w) == 4 ? hweight32(w) : hweight64(w); } -/* - * Clearly slow versions of the hweightN() functions, their benefit is - * of course compile time evaluation of constant arguments. - */ -#define HWEIGHT8(w) \ - ( BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) + \ - (!!((w) & (1ULL << 0))) + \ - (!!((w) & (1ULL << 1))) + \ - (!!((w) & (1ULL << 2))) + \ - (!!((w) & (1ULL << 3))) + \ - (!!((w) & (1ULL << 4))) + \ - (!!((w) & (1ULL << 5))) + \ - (!!((w) & (1ULL << 6))) + \ - (!!((w) & (1ULL << 7))) ) - -#define HWEIGHT16(w) (HWEIGHT8(w) + HWEIGHT8((w) >> 8)) -#define HWEIGHT32(w) (HWEIGHT16(w) + HWEIGHT16((w) >> 16)) -#define HWEIGHT64(w) (HWEIGHT32(w) + HWEIGHT32((w) >> 32)) - -/* - * Type invariant version that simply casts things to the - * largest type. 
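With the split into arch_hweight.h and const_hweight.h above, hweightN() folds to a constant whenever its argument is a compile-time constant and otherwise goes through __arch_hweightN(), which the generic header implements with the __sw_hweightN() routines now declared in linux/bitops.h. A short, illustrative-only usage sketch:

	#include <linux/kernel.h>
	#include <linux/bitops.h>

	static unsigned int hweight_example(unsigned int runtime_mask)
	{
		/* constant argument: folds to 8 at compile time via
		 * __builtin_constant_p() and __const_hweight32() */
		unsigned int fixed = hweight32(0xf00fU);

		/* HWEIGHT32() additionally insists on a constant argument */
		BUILD_BUG_ON(HWEIGHT32(0xf00fU) != 8);

		/* non-constant argument: ends up in __arch_hweight32(),
		 * i.e. __sw_hweight32() on the generic implementation */
		return fixed + hweight32(runtime_mask);
	}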
- */ -#define HWEIGHT(w) HWEIGHT64((u64)(w)) - /** * rol32 - rotate a 32-bit value left * @word: value to rotate diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 4de02b10007f..9f15150ce8d6 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -278,6 +278,27 @@ struct freq_attr { ssize_t (*store)(struct cpufreq_policy *, const char *, size_t count); }; +#define cpufreq_freq_attr_ro(_name) \ +static struct freq_attr _name = \ +__ATTR(_name, 0444, show_##_name, NULL) + +#define cpufreq_freq_attr_ro_perm(_name, _perm) \ +static struct freq_attr _name = \ +__ATTR(_name, _perm, show_##_name, NULL) + +#define cpufreq_freq_attr_ro_old(_name) \ +static struct freq_attr _name##_old = \ +__ATTR(_name, 0444, show_##_name##_old, NULL) + +#define cpufreq_freq_attr_rw(_name) \ +static struct freq_attr _name = \ +__ATTR(_name, 0644, show_##_name, store_##_name) + +#define cpufreq_freq_attr_rw_old(_name) \ +static struct freq_attr _name##_old = \ +__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old) + + struct global_attr { struct attribute attr; ssize_t (*show)(struct kobject *kobj, @@ -286,6 +307,15 @@ struct global_attr { const char *c, size_t count); }; +#define define_one_global_ro(_name) \ +static struct global_attr _name = \ +__ATTR(_name, 0444, show_##_name, NULL) + +#define define_one_global_rw(_name) \ +static struct global_attr _name = \ +__ATTR(_name, 0644, show_##_name, store_##_name) + + /********************************************************************* * CPUFREQ 2.6. INTERFACE * *********************************************************************/ diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h index a5740fc4d04b..a73454aec333 100644 --- a/include/linux/cpuset.h +++ b/include/linux/cpuset.h @@ -21,8 +21,7 @@ extern int number_of_cpusets; /* How many cpusets are defined in system? 
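The cpufreq.h additions above are pure boilerplate reducers. As a sketch of what they are meant for (the attribute and its callbacks are made-up names, not part of the patch), a driver or governor attribute would now be declared like this:

	static ssize_t show_my_limit(struct cpufreq_policy *policy, char *buf)
	{
		return sprintf(buf, "%u\n", policy->max);
	}

	static ssize_t store_my_limit(struct cpufreq_policy *policy,
				      const char *buf, size_t count)
	{
		/* parse and apply the new limit here */
		return count;
	}

	/* expands to:
	 * static struct freq_attr my_limit =
	 *	__ATTR(my_limit, 0644, show_my_limit, store_my_limit); */
	cpufreq_freq_attr_rw(my_limit);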
*/ extern int cpuset_init(void); extern void cpuset_init_smp(void); extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask); -extern void cpuset_cpus_allowed_locked(struct task_struct *p, - struct cpumask *mask); +extern int cpuset_cpus_allowed_fallback(struct task_struct *p); extern nodemask_t cpuset_mems_allowed(struct task_struct *p); #define cpuset_current_mems_allowed (current->mems_allowed) void cpuset_init_current_mems_allowed(void); @@ -69,9 +68,6 @@ struct seq_file; extern void cpuset_task_status_allowed(struct seq_file *m, struct task_struct *task); -extern void cpuset_lock(void); -extern void cpuset_unlock(void); - extern int cpuset_mem_spread_node(void); static inline int cpuset_do_page_mem_spread(void) @@ -105,10 +101,11 @@ static inline void cpuset_cpus_allowed(struct task_struct *p, { cpumask_copy(mask, cpu_possible_mask); } -static inline void cpuset_cpus_allowed_locked(struct task_struct *p, - struct cpumask *mask) + +static inline int cpuset_cpus_allowed_fallback(struct task_struct *p) { - cpumask_copy(mask, cpu_possible_mask); + cpumask_copy(&p->cpus_allowed, cpu_possible_mask); + return cpumask_any(cpu_active_mask); } static inline nodemask_t cpuset_mems_allowed(struct task_struct *p) @@ -157,9 +154,6 @@ static inline void cpuset_task_status_allowed(struct seq_file *m, { } -static inline void cpuset_lock(void) {} -static inline void cpuset_unlock(void) {} - static inline int cpuset_mem_spread_node(void) { return 0; diff --git a/include/linux/dcache.h b/include/linux/dcache.h index 30b93b2a01a4..eebb617c17d8 100644 --- a/include/linux/dcache.h +++ b/include/linux/dcache.h @@ -186,6 +186,8 @@ d_iput: no no no yes #define DCACHE_FSNOTIFY_PARENT_WATCHED 0x0080 /* Parent inode is watched by some fsnotify listener */ +#define DCACHE_CANT_MOUNT 0x0100 + extern spinlock_t dcache_lock; extern seqlock_t rename_lock; @@ -358,6 +360,18 @@ static inline int d_unlinked(struct dentry *dentry) return d_unhashed(dentry) && !IS_ROOT(dentry); } +static inline int cant_mount(struct dentry *dentry) +{ + return (dentry->d_flags & DCACHE_CANT_MOUNT); +} + +static inline void dont_mount(struct dentry *dentry) +{ + spin_lock(&dentry->d_lock); + dentry->d_flags |= DCACHE_CANT_MOUNT; + spin_unlock(&dentry->d_lock); +} + static inline struct dentry *dget_parent(struct dentry *dentry) { struct dentry *ret; diff --git a/include/linux/debugobjects.h b/include/linux/debugobjects.h index 8c243aaa86a7..597692f1fc8d 100644 --- a/include/linux/debugobjects.h +++ b/include/linux/debugobjects.h @@ -20,12 +20,14 @@ struct debug_obj_descr; * struct debug_obj - representaion of an tracked object * @node: hlist node to link the object into the tracker list * @state: tracked object state + * @astate: current active state * @object: pointer to the real object * @descr: pointer to an object type specific debug description structure */ struct debug_obj { struct hlist_node node; enum debug_obj_state state; + unsigned int astate; void *object; struct debug_obj_descr *descr; }; @@ -60,6 +62,15 @@ extern void debug_object_deactivate(void *addr, struct debug_obj_descr *descr); extern void debug_object_destroy (void *addr, struct debug_obj_descr *descr); extern void debug_object_free (void *addr, struct debug_obj_descr *descr); +/* + * Active state: + * - Set at 0 upon initialization. + * - Must return to 0 before deactivation. 
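The debugobjects change gives every tracked object an extra "astate" field, driven through debug_object_active_state(), declared just below. A sketch of how a subsystem might use it (the descriptor and the state names are made up; only the debugobjects call itself comes from this patch):

	#include <linux/debugobjects.h>

	#define MY_OBJ_IDLE	0
	#define MY_OBJ_QUEUED	1

	static struct debug_obj_descr my_obj_debug_descr;	/* hypothetical */

	static void my_obj_queue(void *obj)
	{
		/* object must currently be idle (astate 0); move it to QUEUED */
		debug_object_active_state(obj, &my_obj_debug_descr,
					  MY_OBJ_IDLE, MY_OBJ_QUEUED);
	}

	static void my_obj_complete(void *obj)
	{
		/* must be back at 0 before the object may be deactivated */
		debug_object_active_state(obj, &my_obj_debug_descr,
					  MY_OBJ_QUEUED, MY_OBJ_IDLE);
	}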
+ */ +extern void +debug_object_active_state(void *addr, struct debug_obj_descr *descr, + unsigned int expect, unsigned int next); + extern void debug_objects_early_init(void); extern void debug_objects_mem_init(void); #else diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 01e6adea07ec..41e46330d9be 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -82,9 +82,13 @@ void clear_ftrace_function(void); extern void ftrace_stub(unsigned long a0, unsigned long a1); #else /* !CONFIG_FUNCTION_TRACER */ -# define register_ftrace_function(ops) do { } while (0) -# define unregister_ftrace_function(ops) do { } while (0) -# define clear_ftrace_function(ops) do { } while (0) +/* + * (un)register_ftrace_function must be a macro since the ops parameter + * must not be evaluated. + */ +#define register_ftrace_function(ops) ({ 0; }) +#define unregister_ftrace_function(ops) ({ 0; }) +static inline void clear_ftrace_function(void) { } static inline void ftrace_kill(void) { } static inline void ftrace_stop(void) { } static inline void ftrace_start(void) { } @@ -237,11 +241,13 @@ extern int skip_trace(unsigned long ip); extern void ftrace_disable_daemon(void); extern void ftrace_enable_daemon(void); #else -# define skip_trace(ip) ({ 0; }) -# define ftrace_force_update() ({ 0; }) -# define ftrace_set_filter(buf, len, reset) do { } while (0) -# define ftrace_disable_daemon() do { } while (0) -# define ftrace_enable_daemon() do { } while (0) +static inline int skip_trace(unsigned long ip) { return 0; } +static inline int ftrace_force_update(void) { return 0; } +static inline void ftrace_set_filter(unsigned char *buf, int len, int reset) +{ +} +static inline void ftrace_disable_daemon(void) { } +static inline void ftrace_enable_daemon(void) { } static inline void ftrace_release_mod(struct module *mod) {} static inline int register_ftrace_command(struct ftrace_func_command *cmd) { @@ -314,16 +320,16 @@ static inline void __ftrace_enabled_restore(int enabled) extern void time_hardirqs_on(unsigned long a0, unsigned long a1); extern void time_hardirqs_off(unsigned long a0, unsigned long a1); #else -# define time_hardirqs_on(a0, a1) do { } while (0) -# define time_hardirqs_off(a0, a1) do { } while (0) + static inline void time_hardirqs_on(unsigned long a0, unsigned long a1) { } + static inline void time_hardirqs_off(unsigned long a0, unsigned long a1) { } #endif #ifdef CONFIG_PREEMPT_TRACER extern void trace_preempt_on(unsigned long a0, unsigned long a1); extern void trace_preempt_off(unsigned long a0, unsigned long a1); #else -# define trace_preempt_on(a0, a1) do { } while (0) -# define trace_preempt_off(a0, a1) do { } while (0) + static inline void trace_preempt_on(unsigned long a0, unsigned long a1) { } + static inline void trace_preempt_off(unsigned long a0, unsigned long a1) { } #endif #ifdef CONFIG_FTRACE_MCOUNT_RECORD @@ -352,6 +358,10 @@ struct ftrace_graph_ret { int depth; }; +/* Type of the callback handlers for tracing function graph*/ +typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */ +typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *); /* entry */ + #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* for init task */ @@ -400,10 +410,6 @@ extern char __irqentry_text_end[]; #define FTRACE_RETFUNC_DEPTH 50 #define FTRACE_RETSTACK_ALLOC_SIZE 32 -/* Type of the callback handlers for tracing function graph*/ -typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */ -typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent 
*); /* entry */ - extern int register_ftrace_graph(trace_func_graph_ret_t retfunc, trace_func_graph_ent_t entryfunc); @@ -441,6 +447,13 @@ static inline void unpause_graph_tracing(void) static inline void ftrace_graph_init_task(struct task_struct *t) { } static inline void ftrace_graph_exit_task(struct task_struct *t) { } +static inline int register_ftrace_graph(trace_func_graph_ret_t retfunc, + trace_func_graph_ent_t entryfunc) +{ + return -1; +} +static inline void unregister_ftrace_graph(void) { } + static inline int task_curr_ret_stack(struct task_struct *tsk) { return -1; @@ -492,7 +505,9 @@ static inline int test_tsk_trace_graph(struct task_struct *tsk) return tsk->trace & TSK_TRACE_FL_GRAPH; } -extern int ftrace_dump_on_oops; +enum ftrace_dump_mode; + +extern enum ftrace_dump_mode ftrace_dump_on_oops; #ifdef CONFIG_PREEMPT #define INIT_TRACE_RECURSION .trace_recursion = 0, @@ -504,18 +519,6 @@ extern int ftrace_dump_on_oops; #define INIT_TRACE_RECURSION #endif -#ifdef CONFIG_HW_BRANCH_TRACER - -void trace_hw_branch(u64 from, u64 to); -void trace_hw_branch_oops(void); - -#else /* CONFIG_HW_BRANCH_TRACER */ - -static inline void trace_hw_branch(u64 from, u64 to) {} -static inline void trace_hw_branch_oops(void) {} - -#endif /* CONFIG_HW_BRANCH_TRACER */ - #ifdef CONFIG_FTRACE_SYSCALLS unsigned long arch_syscall_addr(int nr); diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h index c0f4b364c711..39e71b0a3bfd 100644 --- a/include/linux/ftrace_event.h +++ b/include/linux/ftrace_event.h @@ -58,6 +58,7 @@ struct trace_iterator { /* The below is zeroed out in pipe_read */ struct trace_seq seq; struct trace_entry *ent; + unsigned long lost_events; int leftover; int cpu; u64 ts; diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h index c70d27af03f9..a2d6ea49ec56 100644 --- a/include/linux/hw_breakpoint.h +++ b/include/linux/hw_breakpoint.h @@ -9,9 +9,22 @@ enum { }; enum { - HW_BREAKPOINT_R = 1, - HW_BREAKPOINT_W = 2, - HW_BREAKPOINT_X = 4, + HW_BREAKPOINT_EMPTY = 0, + HW_BREAKPOINT_R = 1, + HW_BREAKPOINT_W = 2, + HW_BREAKPOINT_RW = HW_BREAKPOINT_R | HW_BREAKPOINT_W, + HW_BREAKPOINT_X = 4, + HW_BREAKPOINT_INVALID = HW_BREAKPOINT_RW | HW_BREAKPOINT_X, +}; + +enum bp_type_idx { + TYPE_INST = 0, +#ifdef CONFIG_HAVE_MIXED_BREAKPOINTS_REGS + TYPE_DATA = 0, +#else + TYPE_DATA = 1, +#endif + TYPE_MAX }; #ifdef __KERNEL__ @@ -34,6 +47,12 @@ static inline void hw_breakpoint_init(struct perf_event_attr *attr) attr->sample_period = 1; } +static inline void ptrace_breakpoint_init(struct perf_event_attr *attr) +{ + hw_breakpoint_init(attr); + attr->exclude_kernel = 1; +} + static inline unsigned long hw_breakpoint_addr(struct perf_event *bp) { return bp->attr.bp_addr; diff --git a/include/linux/init_task.h b/include/linux/init_task.h index b1ed1cd8e2a8..7996fc2c9ba9 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -49,7 +49,6 @@ extern struct group_info init_groups; { .first = &init_task.pids[PIDTYPE_PGID].node }, \ { .first = &init_task.pids[PIDTYPE_SID].node }, \ }, \ - .rcu = RCU_HEAD_INIT, \ .level = 0, \ .numbers = { { \ .nr = 0, \ diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 3af4ffd591b9..be22ad83689c 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -37,9 +37,9 @@ struct iommu_ops { int (*attach_dev)(struct iommu_domain *domain, struct device *dev); void (*detach_dev)(struct iommu_domain *domain, struct device *dev); int (*map)(struct iommu_domain *domain, unsigned long iova, - phys_addr_t 
paddr, size_t size, int prot); - void (*unmap)(struct iommu_domain *domain, unsigned long iova, - size_t size); + phys_addr_t paddr, int gfp_order, int prot); + int (*unmap)(struct iommu_domain *domain, unsigned long iova, + int gfp_order); phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, unsigned long iova); int (*domain_has_cap)(struct iommu_domain *domain, @@ -56,10 +56,10 @@ extern int iommu_attach_device(struct iommu_domain *domain, struct device *dev); extern void iommu_detach_device(struct iommu_domain *domain, struct device *dev); -extern int iommu_map_range(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot); -extern void iommu_unmap_range(struct iommu_domain *domain, unsigned long iova, - size_t size); +extern int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, int gfp_order, int prot); +extern int iommu_unmap(struct iommu_domain *domain, unsigned long iova, + int gfp_order); extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, unsigned long iova); extern int iommu_domain_has_cap(struct iommu_domain *domain, @@ -96,16 +96,16 @@ static inline void iommu_detach_device(struct iommu_domain *domain, { } -static inline int iommu_map_range(struct iommu_domain *domain, - unsigned long iova, phys_addr_t paddr, - size_t size, int prot) +static inline int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, int gfp_order, int prot) { return -ENODEV; } -static inline void iommu_unmap_range(struct iommu_domain *domain, - unsigned long iova, size_t size) +static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova, + int gfp_order) { + return -ENODEV; } static inline phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, diff --git a/include/linux/kernel.h b/include/linux/kernel.h index a38d6bd6fde6..fc33af911852 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -492,6 +492,13 @@ static inline void tracing_off(void) { } static inline void tracing_off_permanent(void) { } static inline int tracing_is_on(void) { return 0; } #endif + +enum ftrace_dump_mode { + DUMP_NONE, + DUMP_ALL, + DUMP_ORIG, +}; + #ifdef CONFIG_TRACING extern void tracing_start(void); extern void tracing_stop(void); @@ -573,7 +580,7 @@ __ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap); extern int __ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap); -extern void ftrace_dump(void); +extern void ftrace_dump(enum ftrace_dump_mode oops_dump_mode); #else static inline void ftrace_special(unsigned long arg1, unsigned long arg2, unsigned long arg3) { } @@ -594,7 +601,7 @@ ftrace_vprintk(const char *fmt, va_list ap) { return 0; } -static inline void ftrace_dump(void) { } +static inline void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) { } #endif /* CONFIG_TRACING */ /* diff --git a/include/linux/mm.h b/include/linux/mm.h index 462acaf36f3a..fb19bb92b809 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -19,7 +19,6 @@ struct anon_vma; struct file_ra_state; struct user_struct; struct writeback_control; -struct rlimit; #ifndef CONFIG_DISCONTIGMEM /* Don't use mapnrs, do it properly */ extern unsigned long max_mapnr; @@ -1449,9 +1448,6 @@ int vmemmap_populate_basepages(struct page *start_page, int vmemmap_populate(struct page *start_page, unsigned long pages, int node); void vmemmap_populate_print_last(void); -extern int account_locked_memory(struct mm_struct *mm, struct rlimit *rlim, - size_t size); -extern void 
refund_locked_memory(struct mm_struct *mm, size_t size); enum mf_flags { MF_COUNT_INCREASED = 1 << 0, diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index 55f1f9c9506c..007fbaafead0 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h @@ -500,4 +500,13 @@ struct mdio_device_id { __u32 phy_id_mask; }; +struct zorro_device_id { + __u32 id; /* Device ID or ZORRO_WILDCARD */ + kernel_ulong_t driver_data; /* Data private to the driver */ +}; + +#define ZORRO_WILDCARD (0xffffffff) /* not official */ + +#define ZORRO_DEVICE_MODALIAS_FMT "zorro:i%08X" + #endif /* LINUX_MOD_DEVICETABLE_H */ diff --git a/include/linux/module.h b/include/linux/module.h index 515d53ae6a79..6914fcad4673 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -465,8 +465,7 @@ static inline void __module_get(struct module *module) if (module) { preempt_disable(); __this_cpu_inc(module->refptr->incs); - trace_module_get(module, _THIS_IP_, - __this_cpu_read(module->refptr->incs)); + trace_module_get(module, _THIS_IP_); preempt_enable(); } } @@ -480,8 +479,7 @@ static inline int try_module_get(struct module *module) if (likely(module_is_live(module))) { __this_cpu_inc(module->refptr->incs); - trace_module_get(module, _THIS_IP_, - __this_cpu_read(module->refptr->incs)); + trace_module_get(module, _THIS_IP_); } else ret = 0; diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index c8e375440403..3fd5c82e0e18 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -203,8 +203,19 @@ struct perf_event_attr { enable_on_exec : 1, /* next exec enables */ task : 1, /* trace fork/exit */ watermark : 1, /* wakeup_watermark */ - - __reserved_1 : 49; + /* + * precise_ip: + * + * 0 - SAMPLE_IP can have arbitrary skid + * 1 - SAMPLE_IP must have constant skid + * 2 - SAMPLE_IP requested to have 0 skid + * 3 - SAMPLE_IP must have 0 skid + * + * See also PERF_RECORD_MISC_EXACT_IP + */ + precise_ip : 2, /* skid constraint */ + + __reserved_1 : 47; union { __u32 wakeup_events; /* wakeup every n events */ @@ -287,11 +298,24 @@ struct perf_event_mmap_page { __u64 data_tail; /* user-space written tail */ }; -#define PERF_RECORD_MISC_CPUMODE_MASK (3 << 0) +#define PERF_RECORD_MISC_CPUMODE_MASK (7 << 0) #define PERF_RECORD_MISC_CPUMODE_UNKNOWN (0 << 0) #define PERF_RECORD_MISC_KERNEL (1 << 0) #define PERF_RECORD_MISC_USER (2 << 0) #define PERF_RECORD_MISC_HYPERVISOR (3 << 0) +#define PERF_RECORD_MISC_GUEST_KERNEL (4 << 0) +#define PERF_RECORD_MISC_GUEST_USER (5 << 0) + +/* + * Indicates that the content of PERF_SAMPLE_IP points to + * the actual instruction that triggered the event. See also + * perf_event_attr::precise_ip. 
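The precise_ip comment above defines a two-bit request level for sample skid. A sketch of an in-kernel user filling in perf_event_attr for a reduced-skid hardware event (everything except the precise_ip assignment is ordinary attr setup, not specific to this patch):

	#include <linux/string.h>
	#include <linux/perf_event.h>

	static void init_precise_cycles(struct perf_event_attr *attr)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size		= sizeof(*attr);
		attr->type		= PERF_TYPE_HARDWARE;
		attr->config		= PERF_COUNT_HW_CPU_CYCLES;
		attr->sample_period	= 100000;
		attr->precise_ip	= 2;	/* "requested to have 0 skid" */
	}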
+ */ +#define PERF_RECORD_MISC_EXACT_IP (1 << 14) +/* + * Reserve the last bit to indicate some extended misc field + */ +#define PERF_RECORD_MISC_EXT_RESERVED (1 << 15) struct perf_event_header { __u32 type; @@ -439,6 +463,12 @@ enum perf_callchain_context { # include <asm/perf_event.h> #endif +struct perf_guest_info_callbacks { + int (*is_in_guest) (void); + int (*is_user_mode) (void); + unsigned long (*get_guest_ip) (void); +}; + #ifdef CONFIG_HAVE_HW_BREAKPOINT #include <asm/hw_breakpoint.h> #endif @@ -468,6 +498,17 @@ struct perf_raw_record { void *data; }; +struct perf_branch_entry { + __u64 from; + __u64 to; + __u64 flags; +}; + +struct perf_branch_stack { + __u64 nr; + struct perf_branch_entry entries[0]; +}; + struct task_struct; /** @@ -506,6 +547,8 @@ struct hw_perf_event { struct perf_event; +#define PERF_EVENT_TXN_STARTED 1 + /** * struct pmu - generic performance monitoring unit */ @@ -516,6 +559,16 @@ struct pmu { void (*stop) (struct perf_event *event); void (*read) (struct perf_event *event); void (*unthrottle) (struct perf_event *event); + + /* + * group events scheduling is treated as a transaction, + * add group events as a whole and perform one schedulability test. + * If test fails, roll back the whole group + */ + + void (*start_txn) (const struct pmu *pmu); + void (*cancel_txn) (const struct pmu *pmu); + int (*commit_txn) (const struct pmu *pmu); }; /** @@ -571,6 +624,14 @@ enum perf_group_flag { PERF_GROUP_SOFTWARE = 0x1, }; +#define SWEVENT_HLIST_BITS 8 +#define SWEVENT_HLIST_SIZE (1 << SWEVENT_HLIST_BITS) + +struct swevent_hlist { + struct hlist_head heads[SWEVENT_HLIST_SIZE]; + struct rcu_head rcu_head; +}; + /** * struct perf_event - performance event kernel representation: */ @@ -579,6 +640,7 @@ struct perf_event { struct list_head group_entry; struct list_head event_entry; struct list_head sibling_list; + struct hlist_node hlist_entry; int nr_siblings; int group_flags; struct perf_event *group_leader; @@ -726,6 +788,9 @@ struct perf_cpu_context { int active_oncpu; int max_pertask; int exclusive; + struct swevent_hlist *swevent_hlist; + struct mutex hlist_mutex; + int hlist_refcount; /* * Recursion avoidance: @@ -769,9 +834,6 @@ extern void perf_disable(void); extern void perf_enable(void); extern int perf_event_task_disable(void); extern int perf_event_task_enable(void); -extern int hw_perf_group_sched_in(struct perf_event *group_leader, - struct perf_cpu_context *cpuctx, - struct perf_event_context *ctx); extern void perf_event_update_userpage(struct perf_event *event); extern int perf_event_release_kernel(struct perf_event *event); extern struct perf_event * @@ -902,6 +964,10 @@ static inline void perf_event_mmap(struct vm_area_struct *vma) __perf_event_mmap(vma); } +extern struct perf_guest_info_callbacks *perf_guest_cbs; +extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks); +extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks); + extern void perf_event_comm(struct task_struct *tsk); extern void perf_event_fork(struct task_struct *tsk); @@ -971,6 +1037,11 @@ perf_sw_event(u32 event_id, u64 nr, int nmi, static inline void perf_bp_event(struct perf_event *event, void *data) { } +static inline int perf_register_guest_info_callbacks +(struct perf_guest_info_callbacks *callbacks) { return 0; } +static inline int perf_unregister_guest_info_callbacks +(struct perf_guest_info_callbacks *callbacks) { return 0; } + static inline void perf_event_mmap(struct vm_area_struct *vma) { } static 
inline void perf_event_comm(struct task_struct *tsk) { } static inline void perf_event_fork(struct task_struct *tsk) { } diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h index 212da17d06af..5417944d3687 100644 --- a/include/linux/platform_device.h +++ b/include/linux/platform_device.h @@ -44,12 +44,14 @@ extern int platform_get_irq_byname(struct platform_device *, const char *); extern int platform_add_devices(struct platform_device **, int); extern struct platform_device *platform_device_register_simple(const char *, int id, - struct resource *, unsigned int); + const struct resource *, unsigned int); extern struct platform_device *platform_device_register_data(struct device *, const char *, int, const void *, size_t); extern struct platform_device *platform_device_alloc(const char *name, int id); -extern int platform_device_add_resources(struct platform_device *pdev, struct resource *res, unsigned int num); +extern int platform_device_add_resources(struct platform_device *pdev, + const struct resource *res, + unsigned int num); extern int platform_device_add_data(struct platform_device *pdev, const void *data, size_t size); extern int platform_device_add(struct platform_device *pdev); extern void platform_device_del(struct platform_device *pdev); diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index e1fb60729979..4272521e29e9 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -345,18 +345,6 @@ static inline void user_single_step_siginfo(struct task_struct *tsk, #define arch_ptrace_stop(code, info) do { } while (0) #endif -#ifndef arch_ptrace_untrace -/* - * Do machine-specific work before untracing child. - * - * This is called for a normal detach as well as from ptrace_exit() - * when the tracing task dies. - * - * Called with write_lock(&tasklist_lock) held. 
- */ -#define arch_ptrace_untrace(task) do { } while (0) -#endif - extern int task_current_syscall(struct task_struct *target, long *callno, unsigned long args[6], unsigned int maxargs, unsigned long *sp, unsigned long *pc); diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h index 5210a5c60877..fe1872e5b37e 100644 --- a/include/linux/rbtree.h +++ b/include/linux/rbtree.h @@ -110,6 +110,7 @@ struct rb_node struct rb_root { struct rb_node *rb_node; + void (*augment_cb)(struct rb_node *node); }; @@ -129,7 +130,9 @@ static inline void rb_set_color(struct rb_node *rb, int color) rb->rb_parent_color = (rb->rb_parent_color & ~1) | color; } -#define RB_ROOT (struct rb_root) { NULL, } +#define RB_ROOT (struct rb_root) { NULL, NULL, } +#define RB_AUGMENT_ROOT(x) (struct rb_root) { NULL, x} + #define rb_entry(ptr, type, member) container_of(ptr, type, member) #define RB_EMPTY_ROOT(root) ((root)->rb_node == NULL) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index db266bbed23f..b653b4aaa8a6 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -56,8 +56,6 @@ struct rcu_head { }; /* Exported common interfaces */ -extern void synchronize_rcu_bh(void); -extern void synchronize_sched(void); extern void rcu_barrier(void); extern void rcu_barrier_bh(void); extern void rcu_barrier_sched(void); @@ -66,8 +64,6 @@ extern int sched_expedited_torture_stats(char *page); /* Internal to kernel */ extern void rcu_init(void); -extern int rcu_scheduler_active; -extern void rcu_scheduler_starting(void); #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) #include <linux/rcutree.h> @@ -83,6 +79,14 @@ extern void rcu_scheduler_starting(void); (ptr)->next = NULL; (ptr)->func = NULL; \ } while (0) +static inline void init_rcu_head_on_stack(struct rcu_head *head) +{ +} + +static inline void destroy_rcu_head_on_stack(struct rcu_head *head) +{ +} + #ifdef CONFIG_DEBUG_LOCK_ALLOC extern struct lockdep_map rcu_lock_map; @@ -106,12 +110,13 @@ extern int debug_lockdep_rcu_enabled(void); /** * rcu_read_lock_held - might we be in RCU read-side critical section? * - * If CONFIG_PROVE_LOCKING is selected and enabled, returns nonzero iff in - * an RCU read-side critical section. In absence of CONFIG_PROVE_LOCKING, + * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU + * read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, * this assumes we are in an RCU read-side critical section unless it can * prove otherwise. * - * Check rcu_scheduler_active to prevent false positives during boot. + * Check debug_lockdep_rcu_enabled() to prevent false positives during boot + * and while lockdep is disabled. */ static inline int rcu_read_lock_held(void) { @@ -129,13 +134,15 @@ extern int rcu_read_lock_bh_held(void); /** * rcu_read_lock_sched_held - might we be in RCU-sched read-side critical section? * - * If CONFIG_PROVE_LOCKING is selected and enabled, returns nonzero iff in an - * RCU-sched read-side critical section. In absence of CONFIG_PROVE_LOCKING, - * this assumes we are in an RCU-sched read-side critical section unless it - * can prove otherwise. Note that disabling of preemption (including - * disabling irqs) counts as an RCU-sched read-side critical section. + * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an + * RCU-sched read-side critical section. In absence of + * CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU-sched read-side + * critical section unless it can prove otherwise. 
Note that disabling + * of preemption (including disabling irqs) counts as an RCU-sched + * read-side critical section. * - * Check rcu_scheduler_active to prevent false positives during boot. + * Check debug_lockdep_rcu_enabled() to prevent false positives during boot + * and while lockdep is disabled. */ #ifdef CONFIG_PREEMPT static inline int rcu_read_lock_sched_held(void) @@ -177,7 +184,7 @@ static inline int rcu_read_lock_bh_held(void) #ifdef CONFIG_PREEMPT static inline int rcu_read_lock_sched_held(void) { - return !rcu_scheduler_active || preempt_count() != 0 || irqs_disabled(); + return preempt_count() != 0 || irqs_disabled(); } #else /* #ifdef CONFIG_PREEMPT */ static inline int rcu_read_lock_sched_held(void) @@ -192,6 +199,15 @@ static inline int rcu_read_lock_sched_held(void) extern int rcu_my_thread_group_empty(void); +#define __do_rcu_dereference_check(c) \ + do { \ + static bool __warned; \ + if (debug_lockdep_rcu_enabled() && !__warned && !(c)) { \ + __warned = true; \ + lockdep_rcu_dereference(__FILE__, __LINE__); \ + } \ + } while (0) + /** * rcu_dereference_check - rcu_dereference with debug checking * @p: The pointer to read, prior to dereferencing @@ -221,8 +237,7 @@ extern int rcu_my_thread_group_empty(void); */ #define rcu_dereference_check(p, c) \ ({ \ - if (debug_lockdep_rcu_enabled() && !(c)) \ - lockdep_rcu_dereference(__FILE__, __LINE__); \ + __do_rcu_dereference_check(c); \ rcu_dereference_raw(p); \ }) @@ -239,8 +254,7 @@ extern int rcu_my_thread_group_empty(void); */ #define rcu_dereference_protected(p, c) \ ({ \ - if (debug_lockdep_rcu_enabled() && !(c)) \ - lockdep_rcu_dereference(__FILE__, __LINE__); \ + __do_rcu_dereference_check(c); \ (p); \ }) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index a5195875480a..e2e893144a84 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -29,6 +29,10 @@ void rcu_sched_qs(int cpu); void rcu_bh_qs(int cpu); +static inline void rcu_note_context_switch(int cpu) +{ + rcu_sched_qs(cpu); +} #define __rcu_read_lock() preempt_disable() #define __rcu_read_unlock() preempt_enable() @@ -60,8 +64,6 @@ static inline long rcu_batches_completed_bh(void) return 0; } -extern int rcu_expedited_torture_stats(char *page); - static inline void rcu_force_quiescent_state(void) { } @@ -74,7 +76,17 @@ static inline void rcu_sched_force_quiescent_state(void) { } -#define synchronize_rcu synchronize_sched +extern void synchronize_sched(void); + +static inline void synchronize_rcu(void) +{ + synchronize_sched(); +} + +static inline void synchronize_rcu_bh(void) +{ + synchronize_sched(); +} static inline void synchronize_rcu_expedited(void) { @@ -114,4 +126,17 @@ static inline int rcu_preempt_depth(void) return 0; } +#ifdef CONFIG_DEBUG_LOCK_ALLOC + +extern int rcu_scheduler_active __read_mostly; +extern void rcu_scheduler_starting(void); + +#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ + +static inline void rcu_scheduler_starting(void) +{ +} + +#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */ + #endif /* __LINUX_RCUTINY_H */ diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 42cc3a04779e..c0ed1c056f29 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -34,8 +34,8 @@ struct notifier_block; extern void rcu_sched_qs(int cpu); extern void rcu_bh_qs(int cpu); +extern void rcu_note_context_switch(int cpu); extern int rcu_needs_cpu(int cpu); -extern int rcu_expedited_torture_stats(char *page); #ifdef CONFIG_TREE_PREEMPT_RCU @@ -86,6 +86,8 @@ static inline void __rcu_read_unlock_bh(void) 
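rcu_dereference_check() keeps its meaning; the change above only routes the lockdep complaint through __do_rcu_dereference_check() so each call site warns at most once. Typical usage, unchanged by this patch (struct foo, gp and gp_lock are illustrative names; lockdep_is_held() assumes lockdep is configured in):

	#include <linux/spinlock.h>
	#include <linux/rcupdate.h>

	struct foo {
		int val;
	};

	static DEFINE_SPINLOCK(gp_lock);
	static struct foo *gp;		/* RCU-protected pointer */

	static int read_foo_val(void)
	{
		struct foo *p;

		/* legal under rcu_read_lock() or while holding gp_lock */
		p = rcu_dereference_check(gp, lockdep_is_held(&gp_lock));
		return p ? p->val : -1;
	}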
extern void call_rcu_sched(struct rcu_head *head, void (*func)(struct rcu_head *rcu)); +extern void synchronize_rcu_bh(void); +extern void synchronize_sched(void); extern void synchronize_rcu_expedited(void); static inline void synchronize_rcu_bh_expedited(void) @@ -120,4 +122,7 @@ static inline int rcu_blocking_is_gp(void) return num_online_cpus() == 1; } +extern void rcu_scheduler_starting(void); +extern int rcu_scheduler_active __read_mostly; + #endif /* __LINUX_RCUTREE_H */ diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h index 5fcc31ed5771..25b4f686d918 100644 --- a/include/linux/ring_buffer.h +++ b/include/linux/ring_buffer.h @@ -120,12 +120,16 @@ int ring_buffer_write(struct ring_buffer *buffer, unsigned long length, void *data); struct ring_buffer_event * -ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts); +ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts, + unsigned long *lost_events); struct ring_buffer_event * -ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts); +ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts, + unsigned long *lost_events); struct ring_buffer_iter * -ring_buffer_read_start(struct ring_buffer *buffer, int cpu); +ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu); +void ring_buffer_read_prepare_sync(void); +void ring_buffer_read_start(struct ring_buffer_iter *iter); void ring_buffer_read_finish(struct ring_buffer_iter *iter); struct ring_buffer_event * diff --git a/include/linux/sched.h b/include/linux/sched.h index 2b7b81df78b3..b55e988988b5 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -99,7 +99,6 @@ struct futex_pi_state; struct robust_list_head; struct bio_list; struct fs_struct; -struct bts_context; struct perf_event_context; /* @@ -275,11 +274,17 @@ extern cpumask_var_t nohz_cpu_mask; #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ) extern int select_nohz_load_balancer(int cpu); extern int get_nohz_load_balancer(void); +extern int nohz_ratelimit(int cpu); #else static inline int select_nohz_load_balancer(int cpu) { return 0; } + +static inline int nohz_ratelimit(int cpu) +{ + return 0; +} #endif /* @@ -954,6 +959,7 @@ struct sched_domain { char *name; #endif + unsigned int span_weight; /* * Span of all CPUs in this domain. 
* @@ -1026,12 +1032,17 @@ struct sched_domain; #define WF_SYNC 0x01 /* waker goes to sleep after wakup */ #define WF_FORK 0x02 /* child wakeup after fork */ +#define ENQUEUE_WAKEUP 1 +#define ENQUEUE_WAKING 2 +#define ENQUEUE_HEAD 4 + +#define DEQUEUE_SLEEP 1 + struct sched_class { const struct sched_class *next; - void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup, - bool head); - void (*dequeue_task) (struct rq *rq, struct task_struct *p, int sleep); + void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags); + void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags); void (*yield_task) (struct rq *rq); void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags); @@ -1040,7 +1051,8 @@ struct sched_class { void (*put_prev_task) (struct rq *rq, struct task_struct *p); #ifdef CONFIG_SMP - int (*select_task_rq)(struct task_struct *p, int sd_flag, int flags); + int (*select_task_rq)(struct rq *rq, struct task_struct *p, + int sd_flag, int flags); void (*pre_schedule) (struct rq *this_rq, struct task_struct *task); void (*post_schedule) (struct rq *this_rq); @@ -1077,36 +1089,8 @@ struct load_weight { unsigned long weight, inv_weight; }; -/* - * CFS stats for a schedulable entity (task, task-group etc) - * - * Current field usage histogram: - * - * 4 se->block_start - * 4 se->run_node - * 4 se->sleep_start - * 6 se->load.weight - */ -struct sched_entity { - struct load_weight load; /* for load-balancing */ - struct rb_node run_node; - struct list_head group_node; - unsigned int on_rq; - - u64 exec_start; - u64 sum_exec_runtime; - u64 vruntime; - u64 prev_sum_exec_runtime; - - u64 last_wakeup; - u64 avg_overlap; - - u64 nr_migrations; - - u64 start_runtime; - u64 avg_wakeup; - #ifdef CONFIG_SCHEDSTATS +struct sched_statistics { u64 wait_start; u64 wait_max; u64 wait_count; @@ -1138,6 +1122,24 @@ struct sched_entity { u64 nr_wakeups_affine_attempts; u64 nr_wakeups_passive; u64 nr_wakeups_idle; +}; +#endif + +struct sched_entity { + struct load_weight load; /* for load-balancing */ + struct rb_node run_node; + struct list_head group_node; + unsigned int on_rq; + + u64 exec_start; + u64 sum_exec_runtime; + u64 vruntime; + u64 prev_sum_exec_runtime; + + u64 nr_migrations; + +#ifdef CONFIG_SCHEDSTATS + struct sched_statistics statistics; #endif #ifdef CONFIG_FAIR_GROUP_SCHED @@ -1272,12 +1274,6 @@ struct task_struct { struct list_head ptraced; struct list_head ptrace_entry; - /* - * This is the tracer handle for the ptrace BTS extension. - * This field actually belongs to the ptracer task. - */ - struct bts_context *bts; - /* PID/PID hash table linkage. 
*/ struct pid_link pids[PIDTYPE_MAX]; struct list_head thread_group; @@ -1846,6 +1842,7 @@ extern void sched_clock_idle_sleep_event(void); extern void sched_clock_idle_wakeup_event(u64 delta_ns); #ifdef CONFIG_HOTPLUG_CPU +extern void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p); extern void idle_task_exit(void); #else static inline void idle_task_exit(void) {} @@ -2122,10 +2119,8 @@ extern void set_task_comm(struct task_struct *tsk, char *from); extern char *get_task_comm(char *to, struct task_struct *tsk); #ifdef CONFIG_SMP -extern void wait_task_context_switch(struct task_struct *p); extern unsigned long wait_task_inactive(struct task_struct *, long match_state); #else -static inline void wait_task_context_switch(struct task_struct *p) {} static inline unsigned long wait_task_inactive(struct task_struct *p, long match_state) { diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 4d5ecb222af9..4d5d2f546dbf 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -27,6 +27,8 @@ #ifndef _LINUX_SRCU_H #define _LINUX_SRCU_H +#include <linux/mutex.h> + struct srcu_struct_array { int c[2]; }; @@ -84,8 +86,8 @@ long srcu_batches_completed(struct srcu_struct *sp); /** * srcu_read_lock_held - might we be in SRCU read-side critical section? * - * If CONFIG_PROVE_LOCKING is selected and enabled, returns nonzero iff in - * an SRCU read-side critical section. In absence of CONFIG_PROVE_LOCKING, + * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an SRCU + * read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, * this assumes we are in an SRCU read-side critical section unless it can * prove otherwise. */ diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h index baba3a23a814..6b524a0d02e4 100644 --- a/include/linux/stop_machine.h +++ b/include/linux/stop_machine.h @@ -1,13 +1,101 @@ #ifndef _LINUX_STOP_MACHINE #define _LINUX_STOP_MACHINE -/* "Bogolock": stop the entire machine, disable interrupts. This is a - very heavy lock, which is equivalent to grabbing every spinlock - (and more). So the "read" side to such a lock is anything which - disables preeempt. */ + #include <linux/cpu.h> #include <linux/cpumask.h> +#include <linux/list.h> #include <asm/system.h> +/* + * stop_cpu[s]() is simplistic per-cpu maximum priority cpu + * monopolization mechanism. The caller can specify a non-sleeping + * function to be executed on a single or multiple cpus preempting all + * other processes and monopolizing those cpus until it finishes. + * + * Resources for this mechanism are preallocated when a cpu is brought + * up and requests are guaranteed to be served as long as the target + * cpus are online. 
+ */ +typedef int (*cpu_stop_fn_t)(void *arg); + +#ifdef CONFIG_SMP + +struct cpu_stop_work { + struct list_head list; /* cpu_stopper->works */ + cpu_stop_fn_t fn; + void *arg; + struct cpu_stop_done *done; +}; + +int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg); +void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg, + struct cpu_stop_work *work_buf); +int stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg); +int try_stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg); + +#else /* CONFIG_SMP */ + +#include <linux/workqueue.h> + +struct cpu_stop_work { + struct work_struct work; + cpu_stop_fn_t fn; + void *arg; +}; + +static inline int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg) +{ + int ret = -ENOENT; + preempt_disable(); + if (cpu == smp_processor_id()) + ret = fn(arg); + preempt_enable(); + return ret; +} + +static void stop_one_cpu_nowait_workfn(struct work_struct *work) +{ + struct cpu_stop_work *stwork = + container_of(work, struct cpu_stop_work, work); + preempt_disable(); + stwork->fn(stwork->arg); + preempt_enable(); +} + +static inline void stop_one_cpu_nowait(unsigned int cpu, + cpu_stop_fn_t fn, void *arg, + struct cpu_stop_work *work_buf) +{ + if (cpu == smp_processor_id()) { + INIT_WORK(&work_buf->work, stop_one_cpu_nowait_workfn); + work_buf->fn = fn; + work_buf->arg = arg; + schedule_work(&work_buf->work); + } +} + +static inline int stop_cpus(const struct cpumask *cpumask, + cpu_stop_fn_t fn, void *arg) +{ + if (cpumask_test_cpu(raw_smp_processor_id(), cpumask)) + return stop_one_cpu(raw_smp_processor_id(), fn, arg); + return -ENOENT; +} + +static inline int try_stop_cpus(const struct cpumask *cpumask, + cpu_stop_fn_t fn, void *arg) +{ + return stop_cpus(cpumask, fn, arg); +} + +#endif /* CONFIG_SMP */ + +/* + * stop_machine "Bogolock": stop the entire machine, disable + * interrupts. This is a very heavy lock, which is equivalent to + * grabbing every spinlock (and more). So the "read" side to such a + * lock is anything which disables preeempt. + */ #if defined(CONFIG_STOP_MACHINE) && defined(CONFIG_SMP) /** @@ -36,24 +124,7 @@ int stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus); */ int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus); -/** - * stop_machine_create: create all stop_machine threads - * - * Description: This causes all stop_machine threads to be created before - * stop_machine actually gets called. This can be used by subsystems that - * need a non failing stop_machine infrastructure. - */ -int stop_machine_create(void); - -/** - * stop_machine_destroy: destroy all stop_machine threads - * - * Description: This causes all stop_machine threads which were created with - * stop_machine_create to be destroyed again. 
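A sketch of a caller of the cpu_stop interface declared above; the callback runs on the target CPU with everything else preempted and must not sleep (the function and variable names here are illustrative):

	#include <linux/stop_machine.h>

	static int reset_counter_fn(void *arg)
	{
		unsigned int *counter = arg;

		*counter = 0;		/* runs on the target CPU */
		return 0;
	}

	static int reset_counter_on(unsigned int cpu, unsigned int *counter)
	{
		/* returns the callback's value, or -ENOENT if the cpu is gone */
		return stop_one_cpu(cpu, reset_counter_fn, counter);
	}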
- */ -void stop_machine_destroy(void); - -#else +#else /* CONFIG_STOP_MACHINE && CONFIG_SMP */ static inline int stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus) @@ -65,8 +136,5 @@ static inline int stop_machine(int (*fn)(void *), void *data, return ret; } -static inline int stop_machine_create(void) { return 0; } -static inline void stop_machine_destroy(void) { } - -#endif /* CONFIG_SMP */ -#endif /* _LINUX_STOP_MACHINE */ +#endif /* CONFIG_STOP_MACHINE && CONFIG_SMP */ +#endif /* _LINUX_STOP_MACHINE */ diff --git a/include/linux/tick.h b/include/linux/tick.h index d2ae79e21be3..b232ccc0ee29 100644 --- a/include/linux/tick.h +++ b/include/linux/tick.h @@ -42,6 +42,7 @@ enum tick_nohz_mode { * @idle_waketime: Time when the idle was interrupted * @idle_exittime: Time when the idle state was left * @idle_sleeptime: Sum of the time slept in idle with sched tick stopped + * @iowait_sleeptime: Sum of the time slept in idle with sched tick stopped, with IO outstanding * @sleep_length: Duration of the current idle sleep * @do_timer_lst: CPU was the last one doing do_timer before going idle */ @@ -60,7 +61,7 @@ struct tick_sched { ktime_t idle_waketime; ktime_t idle_exittime; ktime_t idle_sleeptime; - ktime_t idle_lastupdate; + ktime_t iowait_sleeptime; ktime_t sleep_length; unsigned long last_jiffies; unsigned long next_jiffies; @@ -124,6 +125,7 @@ extern void tick_nohz_stop_sched_tick(int inidle); extern void tick_nohz_restart_sched_tick(void); extern ktime_t tick_nohz_get_sleep_length(void); extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time); +extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time); # else static inline void tick_nohz_stop_sched_tick(int inidle) { } static inline void tick_nohz_restart_sched_tick(void) { } @@ -134,6 +136,7 @@ static inline ktime_t tick_nohz_get_sleep_length(void) return len; } static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; } +static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; } # endif /* !NO_HZ */ #endif diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h index 78b4bd3be496..1d85f9a6a199 100644 --- a/include/linux/tracepoint.h +++ b/include/linux/tracepoint.h @@ -33,6 +33,65 @@ struct tracepoint { * Keep in sync with vmlinux.lds.h. */ +/* + * Connect a probe to a tracepoint. + * Internal API, should not be used directly. + */ +extern int tracepoint_probe_register(const char *name, void *probe); + +/* + * Disconnect a probe from a tracepoint. + * Internal API, should not be used directly. + */ +extern int tracepoint_probe_unregister(const char *name, void *probe); + +extern int tracepoint_probe_register_noupdate(const char *name, void *probe); +extern int tracepoint_probe_unregister_noupdate(const char *name, void *probe); +extern void tracepoint_probe_update_all(void); + +struct tracepoint_iter { + struct module *module; + struct tracepoint *tracepoint; +}; + +extern void tracepoint_iter_start(struct tracepoint_iter *iter); +extern void tracepoint_iter_next(struct tracepoint_iter *iter); +extern void tracepoint_iter_stop(struct tracepoint_iter *iter); +extern void tracepoint_iter_reset(struct tracepoint_iter *iter); +extern int tracepoint_get_iter_range(struct tracepoint **tracepoint, + struct tracepoint *begin, struct tracepoint *end); + +/* + * tracepoint_synchronize_unregister must be called between the last tracepoint + * probe unregistration and the end of module exit to make sure there is no + * caller executing a probe when it is freed. 
+ */ +static inline void tracepoint_synchronize_unregister(void) +{ + synchronize_sched(); +} + +#define PARAMS(args...) args + +#ifdef CONFIG_TRACEPOINTS +extern void tracepoint_update_probe_range(struct tracepoint *begin, + struct tracepoint *end); +#else +static inline void tracepoint_update_probe_range(struct tracepoint *begin, + struct tracepoint *end) +{ } +#endif /* CONFIG_TRACEPOINTS */ + +#endif /* _LINUX_TRACEPOINT_H */ + +/* + * Note: we keep the TRACE_EVENT and DECLARE_TRACE outside the include + * file ifdef protection. + * This is due to the way trace events work. If a file includes two + * trace event headers under one "CREATE_TRACE_POINTS" the first include + * will override the TRACE_EVENT and break the second include. + */ + #ifndef DECLARE_TRACE #define TP_PROTO(args...) args @@ -96,9 +155,6 @@ struct tracepoint { #define EXPORT_TRACEPOINT_SYMBOL(name) \ EXPORT_SYMBOL(__tracepoint_##name) -extern void tracepoint_update_probe_range(struct tracepoint *begin, - struct tracepoint *end); - #else /* !CONFIG_TRACEPOINTS */ #define DECLARE_TRACE(name, proto, args) \ static inline void _do_trace_##name(struct tracepoint *tp, proto) \ @@ -119,61 +175,9 @@ extern void tracepoint_update_probe_range(struct tracepoint *begin, #define EXPORT_TRACEPOINT_SYMBOL_GPL(name) #define EXPORT_TRACEPOINT_SYMBOL(name) -static inline void tracepoint_update_probe_range(struct tracepoint *begin, - struct tracepoint *end) -{ } #endif /* CONFIG_TRACEPOINTS */ #endif /* DECLARE_TRACE */ -/* - * Connect a probe to a tracepoint. - * Internal API, should not be used directly. - */ -extern int tracepoint_probe_register(const char *name, void *probe); - -/* - * Disconnect a probe from a tracepoint. - * Internal API, should not be used directly. - */ -extern int tracepoint_probe_unregister(const char *name, void *probe); - -extern int tracepoint_probe_register_noupdate(const char *name, void *probe); -extern int tracepoint_probe_unregister_noupdate(const char *name, void *probe); -extern void tracepoint_probe_update_all(void); - -struct tracepoint_iter { - struct module *module; - struct tracepoint *tracepoint; -}; - -extern void tracepoint_iter_start(struct tracepoint_iter *iter); -extern void tracepoint_iter_next(struct tracepoint_iter *iter); -extern void tracepoint_iter_stop(struct tracepoint_iter *iter); -extern void tracepoint_iter_reset(struct tracepoint_iter *iter); -extern int tracepoint_get_iter_range(struct tracepoint **tracepoint, - struct tracepoint *begin, struct tracepoint *end); - -/* - * tracepoint_synchronize_unregister must be called between the last tracepoint - * probe unregistration and the end of module exit to make sure there is no - * caller executing a probe when it is freed. - */ -static inline void tracepoint_synchronize_unregister(void) -{ - synchronize_sched(); -} - -#define PARAMS(args...) args - -#endif /* _LINUX_TRACEPOINT_H */ - -/* - * Note: we keep the TRACE_EVENT outside the include file ifdef protection. - * This is due to the way trace events work. If a file includes two - * trace event headers under one "CREATE_TRACE_POINTS" the first include - * will override the TRACE_EVENT and break the second include. 
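The declaration shuffle above does not change the unregister protocol: the last probe unregistration must still be followed by tracepoint_synchronize_unregister() before the probe code can be freed. A sketch for a hypothetical tracepoint "foo" (register_trace_foo()/unregister_trace_foo() being the wrappers DECLARE_TRACE() generates for it):

	#include <linux/init.h>

	static void my_probe(unsigned long arg)
	{
		/* probe body */
	}

	static void __exit my_module_exit(void)
	{
		unregister_trace_foo(my_probe);
		/* no CPU may still be inside my_probe once this returns */
		tracepoint_synchronize_unregister();
	}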
- */ - #ifndef TRACE_EVENT /* * For use with the TRACE_EVENT macro: diff --git a/include/linux/types.h b/include/linux/types.h index c42724f8c802..23d237a075e2 100644 --- a/include/linux/types.h +++ b/include/linux/types.h @@ -188,12 +188,12 @@ typedef u32 phys_addr_t; typedef phys_addr_t resource_size_t; typedef struct { - volatile int counter; + int counter; } atomic_t; #ifdef CONFIG_64BIT typedef struct { - volatile long counter; + long counter; } atomic64_t; #endif diff --git a/include/linux/wait.h b/include/linux/wait.h index a48e16b77d5e..76d96d035ea0 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -127,12 +127,26 @@ static inline void __add_wait_queue(wait_queue_head_t *head, wait_queue_t *new) /* * Used for wake-one threads: */ +static inline void __add_wait_queue_exclusive(wait_queue_head_t *q, + wait_queue_t *wait) +{ + wait->flags |= WQ_FLAG_EXCLUSIVE; + __add_wait_queue(q, wait); +} + static inline void __add_wait_queue_tail(wait_queue_head_t *head, - wait_queue_t *new) + wait_queue_t *new) { list_add_tail(&new->task_list, &head->task_list); } +static inline void __add_wait_queue_tail_exclusive(wait_queue_head_t *q, + wait_queue_t *wait) +{ + wait->flags |= WQ_FLAG_EXCLUSIVE; + __add_wait_queue_tail(q, wait); +} + static inline void __remove_wait_queue(wait_queue_head_t *head, wait_queue_t *old) { @@ -404,25 +418,6 @@ do { \ }) /* - * Must be called with the spinlock in the wait_queue_head_t held. - */ -static inline void add_wait_queue_exclusive_locked(wait_queue_head_t *q, - wait_queue_t * wait) -{ - wait->flags |= WQ_FLAG_EXCLUSIVE; - __add_wait_queue_tail(q, wait); -} - -/* - * Must be called with the spinlock in the wait_queue_head_t held. - */ -static inline void remove_wait_queue_locked(wait_queue_head_t *q, - wait_queue_t * wait) -{ - __remove_wait_queue(q, wait); -} - -/* * These are the old interfaces to sleep waiting for an event. * They are racy. DO NOT use them, use the wait_event* interfaces above. * We plan to remove these interfaces. 
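The wait.h hunk above folds the old add_wait_queue_exclusive_locked()/remove_wait_queue_locked() helpers into __add_wait_queue_exclusive() and __add_wait_queue_tail_exclusive(). The calling convention is unchanged: the caller must already hold the waitqueue lock. A sketch assuming nothing beyond what the hunk shows:

	#include <linux/wait.h>
	#include <linux/spinlock.h>

	static void queue_exclusive_waiter(wait_queue_head_t *q, wait_queue_t *wait)
	{
		unsigned long flags;

		spin_lock_irqsave(&q->lock, flags);
		__add_wait_queue_tail_exclusive(q, wait);	/* wake-one waiter */
		spin_unlock_irqrestore(&q->lock, flags);
	}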
diff --git a/include/linux/zorro.h b/include/linux/zorro.h index 913bfc226dda..7bf9db525e9e 100644 --- a/include/linux/zorro.h +++ b/include/linux/zorro.h @@ -38,8 +38,6 @@ typedef __u32 zorro_id; -#define ZORRO_WILDCARD (0xffffffff) /* not official */ - /* Include the ID list */ #include <linux/zorro_ids.h> @@ -116,6 +114,7 @@ struct ConfigDev { #include <linux/init.h> #include <linux/ioport.h> +#include <linux/mod_devicetable.h> #include <asm/zorro.h> @@ -142,29 +141,10 @@ struct zorro_dev { * Zorro bus */ -struct zorro_bus { - struct list_head devices; /* list of devices on this bus */ - unsigned int num_resources; /* number of resources */ - struct resource resources[4]; /* address space routed to this bus */ - struct device dev; - char name[10]; -}; - -extern struct zorro_bus zorro_bus; /* single Zorro bus */ extern struct bus_type zorro_bus_type; /* - * Zorro device IDs - */ - -struct zorro_device_id { - zorro_id id; /* Device ID or ZORRO_WILDCARD */ - unsigned long driver_data; /* Data private to the driver */ -}; - - - /* * Zorro device drivers */ diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h index 5acfb1eb4df9..1dfab5401511 100644 --- a/include/trace/define_trace.h +++ b/include/trace/define_trace.h @@ -65,6 +65,10 @@ #include TRACE_INCLUDE(TRACE_INCLUDE_FILE) +/* Make all open coded DECLARE_TRACE nops */ +#undef DECLARE_TRACE +#define DECLARE_TRACE(name, proto, args) + #ifdef CONFIG_EVENT_TRACING #include <trace/ftrace.h> #endif @@ -75,6 +79,7 @@ #undef DEFINE_EVENT #undef DEFINE_EVENT_PRINT #undef TRACE_HEADER_MULTI_READ +#undef DECLARE_TRACE /* Only undef what we defined in this file */ #ifdef UNDEF_TRACE_INCLUDE_FILE diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h index 5c1dcfc16c60..2821b86de63b 100644 --- a/include/trace/events/lock.h +++ b/include/trace/events/lock.h @@ -35,15 +35,15 @@ TRACE_EVENT(lock_acquire, __get_str(name)) ); -TRACE_EVENT(lock_release, +DECLARE_EVENT_CLASS(lock, - TP_PROTO(struct lockdep_map *lock, int nested, unsigned long ip), + TP_PROTO(struct lockdep_map *lock, unsigned long ip), - TP_ARGS(lock, nested, ip), + TP_ARGS(lock, ip), TP_STRUCT__entry( - __string(name, lock->name) - __field(void *, lockdep_addr) + __string( name, lock->name ) + __field( void *, lockdep_addr ) ), TP_fast_assign( @@ -51,51 +51,30 @@ TRACE_EVENT(lock_release, __entry->lockdep_addr = lock; ), - TP_printk("%p %s", - __entry->lockdep_addr, __get_str(name)) + TP_printk("%p %s", __entry->lockdep_addr, __get_str(name)) ); -#ifdef CONFIG_LOCK_STAT - -TRACE_EVENT(lock_contended, +DEFINE_EVENT(lock, lock_release, TP_PROTO(struct lockdep_map *lock, unsigned long ip), - TP_ARGS(lock, ip), + TP_ARGS(lock, ip) +); - TP_STRUCT__entry( - __string(name, lock->name) - __field(void *, lockdep_addr) - ), +#ifdef CONFIG_LOCK_STAT - TP_fast_assign( - __assign_str(name, lock->name); - __entry->lockdep_addr = lock; - ), +DEFINE_EVENT(lock, lock_contended, - TP_printk("%p %s", - __entry->lockdep_addr, __get_str(name)) -); + TP_PROTO(struct lockdep_map *lock, unsigned long ip), -TRACE_EVENT(lock_acquired, - TP_PROTO(struct lockdep_map *lock, unsigned long ip, s64 waittime), + TP_ARGS(lock, ip) +); - TP_ARGS(lock, ip, waittime), +DEFINE_EVENT(lock, lock_acquired, - TP_STRUCT__entry( - __string(name, lock->name) - __field(s64, wait_nsec) - __field(void *, lockdep_addr) - ), + TP_PROTO(struct lockdep_map *lock, unsigned long ip), - TP_fast_assign( - __assign_str(name, lock->name); - __entry->wait_nsec = waittime; - __entry->lockdep_addr = lock; - ), - 
TP_printk("%p %s (%llu ns)", __entry->lockdep_addr, - __get_str(name), - __entry->wait_nsec) + TP_ARGS(lock, ip) ); #endif diff --git a/include/trace/events/module.h b/include/trace/events/module.h index 4b0f48ba16a6..c7bb2f0482fe 100644 --- a/include/trace/events/module.h +++ b/include/trace/events/module.h @@ -51,11 +51,14 @@ TRACE_EVENT(module_free, TP_printk("%s", __get_str(name)) ); +#ifdef CONFIG_MODULE_UNLOAD +/* trace_module_get/put are only used if CONFIG_MODULE_UNLOAD is defined */ + DECLARE_EVENT_CLASS(module_refcnt, - TP_PROTO(struct module *mod, unsigned long ip, int refcnt), + TP_PROTO(struct module *mod, unsigned long ip), - TP_ARGS(mod, ip, refcnt), + TP_ARGS(mod, ip), TP_STRUCT__entry( __field( unsigned long, ip ) @@ -65,7 +68,7 @@ DECLARE_EVENT_CLASS(module_refcnt, TP_fast_assign( __entry->ip = ip; - __entry->refcnt = refcnt; + __entry->refcnt = __this_cpu_read(mod->refptr->incs) + __this_cpu_read(mod->refptr->decs); __assign_str(name, mod->name); ), @@ -75,17 +78,18 @@ DECLARE_EVENT_CLASS(module_refcnt, DEFINE_EVENT(module_refcnt, module_get, - TP_PROTO(struct module *mod, unsigned long ip, int refcnt), + TP_PROTO(struct module *mod, unsigned long ip), - TP_ARGS(mod, ip, refcnt) + TP_ARGS(mod, ip) ); DEFINE_EVENT(module_refcnt, module_put, - TP_PROTO(struct module *mod, unsigned long ip, int refcnt), + TP_PROTO(struct module *mod, unsigned long ip), - TP_ARGS(mod, ip, refcnt) + TP_ARGS(mod, ip) ); +#endif /* CONFIG_MODULE_UNLOAD */ TRACE_EVENT(module_request, diff --git a/include/trace/events/napi.h b/include/trace/events/napi.h index a8989c4547e7..188deca2f3c7 100644 --- a/include/trace/events/napi.h +++ b/include/trace/events/napi.h @@ -1,4 +1,7 @@ -#ifndef _TRACE_NAPI_H_ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM napi + +#if !defined(_TRACE_NAPI_H) || defined(TRACE_HEADER_MULTI_READ) #define _TRACE_NAPI_H_ #include <linux/netdevice.h> @@ -8,4 +11,7 @@ DECLARE_TRACE(napi_poll, TP_PROTO(struct napi_struct *napi), TP_ARGS(napi)); -#endif +#endif /* _TRACE_NAPI_H_ */ + +/* This part must be outside protection */ +#include <trace/define_trace.h> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h index cfceb0b73e20..4f733ecea46e 100644 --- a/include/trace/events/sched.h +++ b/include/trace/events/sched.h @@ -51,15 +51,12 @@ TRACE_EVENT(sched_kthread_stop_ret, /* * Tracepoint for waiting on task to unschedule: - * - * (NOTE: the 'rq' argument is not used by generic trace events, - * but used by the latency tracer plugin. ) */ TRACE_EVENT(sched_wait_task, - TP_PROTO(struct rq *rq, struct task_struct *p), + TP_PROTO(struct task_struct *p), - TP_ARGS(rq, p), + TP_ARGS(p), TP_STRUCT__entry( __array( char, comm, TASK_COMM_LEN ) @@ -79,15 +76,12 @@ TRACE_EVENT(sched_wait_task, /* * Tracepoint for waking up a task: - * - * (NOTE: the 'rq' argument is not used by generic trace events, - * but used by the latency tracer plugin. 
) */ DECLARE_EVENT_CLASS(sched_wakeup_template, - TP_PROTO(struct rq *rq, struct task_struct *p, int success), + TP_PROTO(struct task_struct *p, int success), - TP_ARGS(rq, p, success), + TP_ARGS(p, success), TP_STRUCT__entry( __array( char, comm, TASK_COMM_LEN ) @@ -111,31 +105,25 @@ DECLARE_EVENT_CLASS(sched_wakeup_template, ); DEFINE_EVENT(sched_wakeup_template, sched_wakeup, - TP_PROTO(struct rq *rq, struct task_struct *p, int success), - TP_ARGS(rq, p, success)); + TP_PROTO(struct task_struct *p, int success), + TP_ARGS(p, success)); /* * Tracepoint for waking up a new task: - * - * (NOTE: the 'rq' argument is not used by generic trace events, - * but used by the latency tracer plugin. ) */ DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new, - TP_PROTO(struct rq *rq, struct task_struct *p, int success), - TP_ARGS(rq, p, success)); + TP_PROTO(struct task_struct *p, int success), + TP_ARGS(p, success)); /* * Tracepoint for task switches, performed by the scheduler: - * - * (NOTE: the 'rq' argument is not used by generic trace events, - * but used by the latency tracer plugin. ) */ TRACE_EVENT(sched_switch, - TP_PROTO(struct rq *rq, struct task_struct *prev, + TP_PROTO(struct task_struct *prev, struct task_struct *next), - TP_ARGS(rq, prev, next), + TP_ARGS(prev, next), TP_STRUCT__entry( __array( char, prev_comm, TASK_COMM_LEN ) diff --git a/include/trace/events/signal.h b/include/trace/events/signal.h index a510b75ac304..814566c99d29 100644 --- a/include/trace/events/signal.h +++ b/include/trace/events/signal.h @@ -100,18 +100,7 @@ TRACE_EVENT(signal_deliver, __entry->sa_handler, __entry->sa_flags) ); -/** - * signal_overflow_fail - called when signal queue is overflow - * @sig: signal number - * @group: signal to process group or not (bool) - * @info: pointer to struct siginfo - * - * Kernel fails to generate 'sig' signal with 'info' siginfo, because - * siginfo queue is overflow, and the signal is dropped. - * 'group' is not 0 if the signal will be sent to a process group. - * 'sig' is always one of RT signals. - */ -TRACE_EVENT(signal_overflow_fail, +DECLARE_EVENT_CLASS(signal_queue_overflow, TP_PROTO(int sig, int group, struct siginfo *info), @@ -135,6 +124,24 @@ TRACE_EVENT(signal_overflow_fail, ); /** + * signal_overflow_fail - called when signal queue is overflow + * @sig: signal number + * @group: signal to process group or not (bool) + * @info: pointer to struct siginfo + * + * Kernel fails to generate 'sig' signal with 'info' siginfo, because + * siginfo queue is overflow, and the signal is dropped. + * 'group' is not 0 if the signal will be sent to a process group. + * 'sig' is always one of RT signals. + */ +DEFINE_EVENT(signal_queue_overflow, signal_overflow_fail, + + TP_PROTO(int sig, int group, struct siginfo *info), + + TP_ARGS(sig, group, info) +); + +/** * signal_lose_info - called when siginfo is lost * @sig: signal number * @group: signal to process group or not (bool) @@ -145,28 +152,13 @@ TRACE_EVENT(signal_overflow_fail, * 'group' is not 0 if the signal will be sent to a process group. * 'sig' is always one of non-RT signals. 
*/ -TRACE_EVENT(signal_lose_info, +DEFINE_EVENT(signal_queue_overflow, signal_lose_info, TP_PROTO(int sig, int group, struct siginfo *info), - TP_ARGS(sig, group, info), - - TP_STRUCT__entry( - __field( int, sig ) - __field( int, group ) - __field( int, errno ) - __field( int, code ) - ), - - TP_fast_assign( - __entry->sig = sig; - __entry->group = group; - TP_STORE_SIGINFO(__entry, info); - ), - - TP_printk("sig=%d group=%d errno=%d code=%d", - __entry->sig, __entry->group, __entry->errno, __entry->code) + TP_ARGS(sig, group, info) ); + #endif /* _TRACE_SIGNAL_H */ /* This part must be outside protection */ diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h index ea6f9d4a20e9..16253db38d73 100644 --- a/include/trace/ftrace.h +++ b/include/trace/ftrace.h @@ -154,9 +154,11 @@ * * field = (typeof(field))entry; * - * p = get_cpu_var(ftrace_event_seq); + * p = &get_cpu_var(ftrace_event_seq); * trace_seq_init(p); - * ret = trace_seq_printf(s, <TP_printk> "\n"); + * ret = trace_seq_printf(s, "%s: ", <call>); + * if (ret) + * ret = trace_seq_printf(s, <TP_printk> "\n"); * put_cpu(); * if (!ret) * return TRACE_TYPE_PARTIAL_LINE; @@ -450,38 +452,38 @@ perf_trace_disable_##name(struct ftrace_event_call *unused) \ * * static void ftrace_raw_event_<call>(proto) * { + * struct ftrace_data_offsets_<call> __maybe_unused __data_offsets; * struct ring_buffer_event *event; * struct ftrace_raw_<call> *entry; <-- defined in stage 1 * struct ring_buffer *buffer; * unsigned long irq_flags; + * int __data_size; * int pc; * * local_save_flags(irq_flags); * pc = preempt_count(); * + * __data_size = ftrace_get_offsets_<call>(&__data_offsets, args); + * * event = trace_current_buffer_lock_reserve(&buffer, * event_<call>.id, - * sizeof(struct ftrace_raw_<call>), + * sizeof(*entry) + __data_size, * irq_flags, pc); * if (!event) * return; * entry = ring_buffer_event_data(event); * - * <assign>; <-- Here we assign the entries by the __field and - * __array macros. + * { <assign>; } <-- Here we assign the entries by the __field and + * __array macros. 
* - * trace_current_buffer_unlock_commit(buffer, event, irq_flags, pc); + * if (!filter_current_check_discard(buffer, event_call, entry, event)) + * trace_current_buffer_unlock_commit(buffer, + * event, irq_flags, pc); * } * * static int ftrace_raw_reg_event_<call>(struct ftrace_event_call *unused) * { - * int ret; - * - * ret = register_trace_<call>(ftrace_raw_event_<call>); - * if (!ret) - * pr_info("event trace: Could not activate trace point " - * "probe to <call>"); - * return ret; + * return register_trace_<call>(ftrace_raw_event_<call>); * } * * static void ftrace_unreg_event_<call>(struct ftrace_event_call *unused) @@ -493,6 +495,8 @@ perf_trace_disable_##name(struct ftrace_event_call *unused) \ * .trace = ftrace_raw_output_<call>, <-- stage 2 * }; * + * static const char print_fmt_<call>[] = <TP_printk>; + * * static struct ftrace_event_call __used * __attribute__((__aligned__(4))) * __attribute__((section("_ftrace_events"))) event_<call> = { @@ -501,6 +505,8 @@ perf_trace_disable_##name(struct ftrace_event_call *unused) \ * .raw_init = trace_event_raw_init, * .regfunc = ftrace_reg_event_<call>, * .unregfunc = ftrace_unreg_event_<call>, + * .print_fmt = print_fmt_<call>, + * .define_fields = ftrace_define_fields_<call>, * } * */ @@ -569,7 +575,6 @@ ftrace_raw_event_id_##call(struct ftrace_event_call *event_call, \ return; \ entry = ring_buffer_event_data(event); \ \ - \ tstruct \ \ { assign; } \ @@ -758,13 +763,12 @@ __attribute__((section("_ftrace_events"))) event_##call = { \ #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ static notrace void \ perf_trace_templ_##call(struct ftrace_event_call *event_call, \ - proto) \ + struct pt_regs *__regs, proto) \ { \ struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\ struct ftrace_raw_##call *entry; \ u64 __addr = 0, __count = 1; \ unsigned long irq_flags; \ - struct pt_regs *__regs; \ int __entry_size; \ int __data_size; \ int rctx; \ @@ -785,20 +789,22 @@ perf_trace_templ_##call(struct ftrace_event_call *event_call, \ \ { assign; } \ \ - __regs = &__get_cpu_var(perf_trace_regs); \ - perf_fetch_caller_regs(__regs, 2); \ - \ perf_trace_buf_submit(entry, __entry_size, rctx, __addr, \ __count, irq_flags, __regs); \ } #undef DEFINE_EVENT -#define DEFINE_EVENT(template, call, proto, args) \ -static notrace void perf_trace_##call(proto) \ -{ \ - struct ftrace_event_call *event_call = &event_##call; \ - \ - perf_trace_templ_##template(event_call, args); \ +#define DEFINE_EVENT(template, call, proto, args) \ +static notrace void perf_trace_##call(proto) \ +{ \ + struct ftrace_event_call *event_call = &event_##call; \ + struct pt_regs *__regs = &get_cpu_var(perf_trace_regs); \ + \ + perf_fetch_caller_regs(__regs, 1); \ + \ + perf_trace_templ_##template(event_call, __regs, args); \ + \ + put_cpu_var(perf_trace_regs); \ } #undef DEFINE_EVENT_PRINT diff --git a/init/Kconfig b/init/Kconfig index eb77e8ccde1c..5fe94b82e4c0 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -604,8 +604,7 @@ config RT_GROUP_SCHED default n help This feature lets you explicitly allocate real CPU bandwidth - to users or control groups (depending on the "Basis for grouping tasks" - setting below. If enabled, it will also make it impossible to + to task groups. If enabled, it will also make it impossible to schedule realtime tasks for non-root users until you allocate realtime bandwidth for them. See Documentation/scheduler/sched-rt-group.txt for more information. 
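A recurring pattern in the include/trace/events/ hunks above (lock.h, module.h, sched.h, signal.h) is replacing several near-identical TRACE_EVENT() definitions with one DECLARE_EVENT_CLASS() plus a DEFINE_EVENT() per event that shares it. A minimal sketch of the pattern, using an invented "foo" template and event names rather than anything from this merge:

/* Illustrative only: the foo_template class and foo_start/foo_stop events
 * are made up to show the class/instance split used in the headers above. */
DECLARE_EVENT_CLASS(foo_template,

	TP_PROTO(struct task_struct *p),

	TP_ARGS(p),

	TP_STRUCT__entry(
		__array(	char,	comm,	TASK_COMM_LEN	)
		__field(	pid_t,	pid			)
	),

	TP_fast_assign(
		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
		__entry->pid = p->pid;
	),

	TP_printk("comm=%s pid=%d", __entry->comm, __entry->pid)
);

DEFINE_EVENT(foo_template, foo_start,
	TP_PROTO(struct task_struct *p),
	TP_ARGS(p));

DEFINE_EVENT(foo_template, foo_stop,
	TP_PROTO(struct task_struct *p),
	TP_ARGS(p));

Every DEFINE_EVENT() instance reuses the class's entry layout, TP_fast_assign() and TP_printk() code, which is why converting lock_release/lock_contended/lock_acquired, the module refcnt events and the signal overflow events this way deletes far more lines from these headers than it adds.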
diff --git a/kernel/Makefile b/kernel/Makefile index a987aa1676b5..149e18ef1ab1 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -68,7 +68,7 @@ obj-$(CONFIG_USER_NS) += user_namespace.o obj-$(CONFIG_PID_NS) += pid_namespace.o obj-$(CONFIG_IKCONFIG) += configs.o obj-$(CONFIG_RESOURCE_COUNTERS) += res_counter.o -obj-$(CONFIG_STOP_MACHINE) += stop_machine.o +obj-$(CONFIG_SMP) += stop_machine.o obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o obj-$(CONFIG_AUDIT) += audit.o auditfilter.o audit_watch.o obj-$(CONFIG_AUDITSYSCALL) += auditsc.o diff --git a/kernel/capability.c b/kernel/capability.c index 9e4697e9b276..2f05303715a5 100644 --- a/kernel/capability.c +++ b/kernel/capability.c @@ -15,7 +15,6 @@ #include <linux/syscalls.h> #include <linux/pid_namespace.h> #include <asm/uaccess.h> -#include "cred-internals.h" /* * Leveraged for setting/resetting capabilities diff --git a/kernel/cgroup.c b/kernel/cgroup.c index 6d870f2d1228..e9ec642932ee 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c @@ -3016,7 +3016,7 @@ static int cgroup_event_wake(wait_queue_t *wait, unsigned mode, unsigned long flags = (unsigned long)key; if (flags & POLLHUP) { - remove_wait_queue_locked(event->wqh, &event->wait); + __remove_wait_queue(event->wqh, &event->wait); spin_lock(&cgrp->event_list_lock); list_del(&event->list); spin_unlock(&cgrp->event_list_lock); diff --git a/kernel/cpu.c b/kernel/cpu.c index 25bba73b1be3..545777574779 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -164,6 +164,7 @@ static inline void check_for_tasks(int cpu) } struct take_cpu_down_param { + struct task_struct *caller; unsigned long mod; void *hcpu; }; @@ -172,6 +173,7 @@ struct take_cpu_down_param { static int __ref take_cpu_down(void *_param) { struct take_cpu_down_param *param = _param; + unsigned int cpu = (unsigned long)param->hcpu; int err; /* Ensure this CPU doesn't handle any more interrupts. */ @@ -182,6 +184,8 @@ static int __ref take_cpu_down(void *_param) raw_notifier_call_chain(&cpu_chain, CPU_DYING | param->mod, param->hcpu); + if (task_cpu(param->caller) == cpu) + move_task_off_dead_cpu(cpu, param->caller); /* Force idle task to run as soon as we yield: it should immediately notice cpu is offline and die quickly. */ sched_idle_next(); @@ -192,10 +196,10 @@ static int __ref take_cpu_down(void *_param) static int __ref _cpu_down(unsigned int cpu, int tasks_frozen) { int err, nr_calls = 0; - cpumask_var_t old_allowed; void *hcpu = (void *)(long)cpu; unsigned long mod = tasks_frozen ? 
CPU_TASKS_FROZEN : 0; struct take_cpu_down_param tcd_param = { + .caller = current, .mod = mod, .hcpu = hcpu, }; @@ -206,9 +210,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen) if (!cpu_online(cpu)) return -EINVAL; - if (!alloc_cpumask_var(&old_allowed, GFP_KERNEL)) - return -ENOMEM; - cpu_hotplug_begin(); set_cpu_active(cpu, false); err = __raw_notifier_call_chain(&cpu_chain, CPU_DOWN_PREPARE | mod, @@ -225,10 +226,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen) goto out_release; } - /* Ensure that we are not runnable on dying cpu */ - cpumask_copy(old_allowed, ¤t->cpus_allowed); - set_cpus_allowed_ptr(current, cpu_active_mask); - err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu)); if (err) { set_cpu_active(cpu, true); @@ -237,7 +234,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen) hcpu) == NOTIFY_BAD) BUG(); - goto out_allowed; + goto out_release; } BUG_ON(cpu_online(cpu)); @@ -255,8 +252,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen) check_for_tasks(cpu); -out_allowed: - set_cpus_allowed_ptr(current, old_allowed); out_release: cpu_hotplug_done(); if (!err) { @@ -264,7 +259,6 @@ out_release: hcpu) == NOTIFY_BAD) BUG(); } - free_cpumask_var(old_allowed); return err; } @@ -272,9 +266,6 @@ int __ref cpu_down(unsigned int cpu) { int err; - err = stop_machine_create(); - if (err) - return err; cpu_maps_update_begin(); if (cpu_hotplug_disabled) { @@ -286,7 +277,6 @@ int __ref cpu_down(unsigned int cpu) out: cpu_maps_update_done(); - stop_machine_destroy(); return err; } EXPORT_SYMBOL(cpu_down); @@ -367,9 +357,6 @@ int disable_nonboot_cpus(void) { int cpu, first_cpu, error; - error = stop_machine_create(); - if (error) - return error; cpu_maps_update_begin(); first_cpu = cpumask_first(cpu_online_mask); /* @@ -400,7 +387,6 @@ int disable_nonboot_cpus(void) printk(KERN_ERR "Non-boot CPUs are not disabled\n"); } cpu_maps_update_done(); - stop_machine_destroy(); return error; } diff --git a/kernel/cpuset.c b/kernel/cpuset.c index d10946748ec2..9a50c5f6e727 100644 --- a/kernel/cpuset.c +++ b/kernel/cpuset.c @@ -2182,19 +2182,52 @@ void __init cpuset_init_smp(void) void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) { mutex_lock(&callback_mutex); - cpuset_cpus_allowed_locked(tsk, pmask); + task_lock(tsk); + guarantee_online_cpus(task_cs(tsk), pmask); + task_unlock(tsk); mutex_unlock(&callback_mutex); } -/** - * cpuset_cpus_allowed_locked - return cpus_allowed mask from a tasks cpuset. - * Must be called with callback_mutex held. - **/ -void cpuset_cpus_allowed_locked(struct task_struct *tsk, struct cpumask *pmask) +int cpuset_cpus_allowed_fallback(struct task_struct *tsk) { - task_lock(tsk); - guarantee_online_cpus(task_cs(tsk), pmask); - task_unlock(tsk); + const struct cpuset *cs; + int cpu; + + rcu_read_lock(); + cs = task_cs(tsk); + if (cs) + cpumask_copy(&tsk->cpus_allowed, cs->cpus_allowed); + rcu_read_unlock(); + + /* + * We own tsk->cpus_allowed, nobody can change it under us. + * + * But we used cs && cs->cpus_allowed lockless and thus can + * race with cgroup_attach_task() or update_cpumask() and get + * the wrong tsk->cpus_allowed. However, both cases imply the + * subsequent cpuset_change_cpumask()->set_cpus_allowed_ptr() + * which takes task_rq_lock(). + * + * If we are called after it dropped the lock we must see all + * changes in tsk_cs()->cpus_allowed. 
Otherwise we can temporary + * set any mask even if it is not right from task_cs() pov, + * the pending set_cpus_allowed_ptr() will fix things. + */ + + cpu = cpumask_any_and(&tsk->cpus_allowed, cpu_active_mask); + if (cpu >= nr_cpu_ids) { + /* + * Either tsk->cpus_allowed is wrong (see above) or it + * is actually empty. The latter case is only possible + * if we are racing with remove_tasks_in_empty_cpuset(). + * Like above we can temporary set any mask and rely on + * set_cpus_allowed_ptr() as synchronization point. + */ + cpumask_copy(&tsk->cpus_allowed, cpu_possible_mask); + cpu = cpumask_any(cpu_active_mask); + } + + return cpu; } void cpuset_init_current_mems_allowed(void) @@ -2383,22 +2416,6 @@ int __cpuset_node_allowed_hardwall(int node, gfp_t gfp_mask) } /** - * cpuset_lock - lock out any changes to cpuset structures - * - * The out of memory (oom) code needs to mutex_lock cpusets - * from being changed while it scans the tasklist looking for a - * task in an overlapping cpuset. Expose callback_mutex via this - * cpuset_lock() routine, so the oom code can lock it, before - * locking the task list. The tasklist_lock is a spinlock, so - * must be taken inside callback_mutex. - */ - -void cpuset_lock(void) -{ - mutex_lock(&callback_mutex); -} - -/** * cpuset_unlock - release lock on cpuset changes * * Undo the lock taken in a previous cpuset_lock() call. diff --git a/kernel/cred-internals.h b/kernel/cred-internals.h deleted file mode 100644 index 2dc4fc2d0bf1..000000000000 --- a/kernel/cred-internals.h +++ /dev/null @@ -1,21 +0,0 @@ -/* Internal credentials stuff - * - * Copyright (C) 2008 Red Hat, Inc. All Rights Reserved. - * Written by David Howells (dhowells@redhat.com) - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public Licence - * as published by the Free Software Foundation; either version - * 2 of the Licence, or (at your option) any later version. - */ - -/* - * user.c - */ -static inline void sched_switch_user(struct task_struct *p) -{ -#ifdef CONFIG_USER_SCHED - sched_move_task(p); -#endif /* CONFIG_USER_SCHED */ -} - diff --git a/kernel/cred.c b/kernel/cred.c index 62af1816c235..8f3672a58a1e 100644 --- a/kernel/cred.c +++ b/kernel/cred.c @@ -17,7 +17,6 @@ #include <linux/init_task.h> #include <linux/security.h> #include <linux/cn_proc.h> -#include "cred-internals.h" #if 0 #define kdebug(FMT, ...) \ @@ -560,8 +559,6 @@ int commit_creds(struct cred *new) atomic_dec(&old->user->processes); alter_cred_subscribers(old, -2); - sched_switch_user(task); - /* send notifications */ if (new->uid != old->uid || new->euid != old->euid || diff --git a/kernel/exit.c b/kernel/exit.c index 7f2683a10ac4..eabca5a73a85 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -55,7 +55,6 @@ #include <asm/unistd.h> #include <asm/pgtable.h> #include <asm/mmu_context.h> -#include "cred-internals.h" static void exit_mm(struct task_struct * tsk); diff --git a/kernel/fork.c b/kernel/fork.c index 4c14942a0ee3..4d57d9e3a6e9 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1112,8 +1112,6 @@ static struct task_struct *copy_process(unsigned long clone_flags, p->memcg_batch.memcg = NULL; #endif - p->bts = NULL; - /* Perform scheduler related setup. Assign this task to a CPU. 
*/ sched_fork(p, clone_flags); diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c index 03808ed342a6..7a56b22e0602 100644 --- a/kernel/hw_breakpoint.c +++ b/kernel/hw_breakpoint.c @@ -40,23 +40,29 @@ #include <linux/percpu.h> #include <linux/sched.h> #include <linux/init.h> +#include <linux/slab.h> #include <linux/cpu.h> #include <linux/smp.h> #include <linux/hw_breakpoint.h> + /* * Constraints data */ /* Number of pinned cpu breakpoints in a cpu */ -static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned); +static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned[TYPE_MAX]); /* Number of pinned task breakpoints in a cpu */ -static DEFINE_PER_CPU(unsigned int, nr_task_bp_pinned[HBP_NUM]); +static DEFINE_PER_CPU(unsigned int *, nr_task_bp_pinned[TYPE_MAX]); /* Number of non-pinned cpu/task breakpoints in a cpu */ -static DEFINE_PER_CPU(unsigned int, nr_bp_flexible); +static DEFINE_PER_CPU(unsigned int, nr_bp_flexible[TYPE_MAX]); + +static int nr_slots[TYPE_MAX]; + +static int constraints_initialized; /* Gather the number of total pinned and un-pinned bp in a cpuset */ struct bp_busy_slots { @@ -67,16 +73,29 @@ struct bp_busy_slots { /* Serialize accesses to the above constraints */ static DEFINE_MUTEX(nr_bp_mutex); +__weak int hw_breakpoint_weight(struct perf_event *bp) +{ + return 1; +} + +static inline enum bp_type_idx find_slot_idx(struct perf_event *bp) +{ + if (bp->attr.bp_type & HW_BREAKPOINT_RW) + return TYPE_DATA; + + return TYPE_INST; +} + /* * Report the maximum number of pinned breakpoints a task * have in this cpu */ -static unsigned int max_task_bp_pinned(int cpu) +static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type) { int i; - unsigned int *tsk_pinned = per_cpu(nr_task_bp_pinned, cpu); + unsigned int *tsk_pinned = per_cpu(nr_task_bp_pinned[type], cpu); - for (i = HBP_NUM -1; i >= 0; i--) { + for (i = nr_slots[type] - 1; i >= 0; i--) { if (tsk_pinned[i] > 0) return i + 1; } @@ -84,7 +103,7 @@ static unsigned int max_task_bp_pinned(int cpu) return 0; } -static int task_bp_pinned(struct task_struct *tsk) +static int task_bp_pinned(struct task_struct *tsk, enum bp_type_idx type) { struct perf_event_context *ctx = tsk->perf_event_ctxp; struct list_head *list; @@ -105,7 +124,8 @@ static int task_bp_pinned(struct task_struct *tsk) */ list_for_each_entry(bp, list, event_entry) { if (bp->attr.type == PERF_TYPE_BREAKPOINT) - count++; + if (find_slot_idx(bp) == type) + count += hw_breakpoint_weight(bp); } raw_spin_unlock_irqrestore(&ctx->lock, flags); @@ -118,18 +138,19 @@ static int task_bp_pinned(struct task_struct *tsk) * a given cpu (cpu > -1) or in all of them (cpu = -1). 
*/ static void -fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp) +fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp, + enum bp_type_idx type) { int cpu = bp->cpu; struct task_struct *tsk = bp->ctx->task; if (cpu >= 0) { - slots->pinned = per_cpu(nr_cpu_bp_pinned, cpu); + slots->pinned = per_cpu(nr_cpu_bp_pinned[type], cpu); if (!tsk) - slots->pinned += max_task_bp_pinned(cpu); + slots->pinned += max_task_bp_pinned(cpu, type); else - slots->pinned += task_bp_pinned(tsk); - slots->flexible = per_cpu(nr_bp_flexible, cpu); + slots->pinned += task_bp_pinned(tsk, type); + slots->flexible = per_cpu(nr_bp_flexible[type], cpu); return; } @@ -137,16 +158,16 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp) for_each_online_cpu(cpu) { unsigned int nr; - nr = per_cpu(nr_cpu_bp_pinned, cpu); + nr = per_cpu(nr_cpu_bp_pinned[type], cpu); if (!tsk) - nr += max_task_bp_pinned(cpu); + nr += max_task_bp_pinned(cpu, type); else - nr += task_bp_pinned(tsk); + nr += task_bp_pinned(tsk, type); if (nr > slots->pinned) slots->pinned = nr; - nr = per_cpu(nr_bp_flexible, cpu); + nr = per_cpu(nr_bp_flexible[type], cpu); if (nr > slots->flexible) slots->flexible = nr; @@ -154,31 +175,49 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp) } /* + * For now, continue to consider flexible as pinned, until we can + * ensure no flexible event can ever be scheduled before a pinned event + * in a same cpu. + */ +static void +fetch_this_slot(struct bp_busy_slots *slots, int weight) +{ + slots->pinned += weight; +} + +/* * Add a pinned breakpoint for the given task in our constraint table */ -static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable) +static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable, + enum bp_type_idx type, int weight) { unsigned int *tsk_pinned; - int count = 0; + int old_count = 0; + int old_idx = 0; + int idx = 0; - count = task_bp_pinned(tsk); + old_count = task_bp_pinned(tsk, type); + old_idx = old_count - 1; + idx = old_idx + weight; - tsk_pinned = per_cpu(nr_task_bp_pinned, cpu); + tsk_pinned = per_cpu(nr_task_bp_pinned[type], cpu); if (enable) { - tsk_pinned[count]++; - if (count > 0) - tsk_pinned[count-1]--; + tsk_pinned[idx]++; + if (old_count > 0) + tsk_pinned[old_idx]--; } else { - tsk_pinned[count]--; - if (count > 0) - tsk_pinned[count-1]++; + tsk_pinned[idx]--; + if (old_count > 0) + tsk_pinned[old_idx]++; } } /* * Add/remove the given breakpoint in our constraint table */ -static void toggle_bp_slot(struct perf_event *bp, bool enable) +static void +toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type, + int weight) { int cpu = bp->cpu; struct task_struct *tsk = bp->ctx->task; @@ -186,20 +225,20 @@ static void toggle_bp_slot(struct perf_event *bp, bool enable) /* Pinned counter task profiling */ if (tsk) { if (cpu >= 0) { - toggle_bp_task_slot(tsk, cpu, enable); + toggle_bp_task_slot(tsk, cpu, enable, type, weight); return; } for_each_online_cpu(cpu) - toggle_bp_task_slot(tsk, cpu, enable); + toggle_bp_task_slot(tsk, cpu, enable, type, weight); return; } /* Pinned counter cpu profiling */ if (enable) - per_cpu(nr_cpu_bp_pinned, bp->cpu)++; + per_cpu(nr_cpu_bp_pinned[type], bp->cpu) += weight; else - per_cpu(nr_cpu_bp_pinned, bp->cpu)--; + per_cpu(nr_cpu_bp_pinned[type], bp->cpu) -= weight; } /* @@ -246,14 +285,29 @@ static void toggle_bp_slot(struct perf_event *bp, bool enable) static int __reserve_bp_slot(struct perf_event *bp) { struct 
bp_busy_slots slots = {0}; + enum bp_type_idx type; + int weight; - fetch_bp_busy_slots(&slots, bp); + /* We couldn't initialize breakpoint constraints on boot */ + if (!constraints_initialized) + return -ENOMEM; + + /* Basic checks */ + if (bp->attr.bp_type == HW_BREAKPOINT_EMPTY || + bp->attr.bp_type == HW_BREAKPOINT_INVALID) + return -EINVAL; + + type = find_slot_idx(bp); + weight = hw_breakpoint_weight(bp); + + fetch_bp_busy_slots(&slots, bp, type); + fetch_this_slot(&slots, weight); /* Flexible counters need to keep at least one slot */ - if (slots.pinned + (!!slots.flexible) == HBP_NUM) + if (slots.pinned + (!!slots.flexible) > nr_slots[type]) return -ENOSPC; - toggle_bp_slot(bp, true); + toggle_bp_slot(bp, true, type, weight); return 0; } @@ -273,7 +327,12 @@ int reserve_bp_slot(struct perf_event *bp) static void __release_bp_slot(struct perf_event *bp) { - toggle_bp_slot(bp, false); + enum bp_type_idx type; + int weight; + + type = find_slot_idx(bp); + weight = hw_breakpoint_weight(bp); + toggle_bp_slot(bp, false, type, weight); } void release_bp_slot(struct perf_event *bp) @@ -308,6 +367,28 @@ int dbg_release_bp_slot(struct perf_event *bp) return 0; } +static int validate_hw_breakpoint(struct perf_event *bp) +{ + int ret; + + ret = arch_validate_hwbkpt_settings(bp); + if (ret) + return ret; + + if (arch_check_bp_in_kernelspace(bp)) { + if (bp->attr.exclude_kernel) + return -EINVAL; + /* + * Don't let unprivileged users set a breakpoint in the trap + * path to avoid trap recursion attacks. + */ + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + } + + return 0; +} + int register_perf_hw_breakpoint(struct perf_event *bp) { int ret; @@ -316,17 +397,7 @@ int register_perf_hw_breakpoint(struct perf_event *bp) if (ret) return ret; - /* - * Ptrace breakpoints can be temporary perf events only - * meant to reserve a slot. In this case, it is created disabled and - * we don't want to check the params right now (as we put a null addr) - * But perf tools create events as disabled and we want to check - * the params for them. 
- * This is a quick hack that will be removed soon, once we remove - * the tmp breakpoints from ptrace - */ - if (!bp->attr.disabled || !bp->overflow_handler) - ret = arch_validate_hwbkpt_settings(bp, bp->ctx->task); + ret = validate_hw_breakpoint(bp); /* if arch_validate_hwbkpt_settings() fails then release bp slot */ if (ret) @@ -373,7 +444,7 @@ int modify_user_hw_breakpoint(struct perf_event *bp, struct perf_event_attr *att if (attr->disabled) goto end; - err = arch_validate_hwbkpt_settings(bp, bp->ctx->task); + err = validate_hw_breakpoint(bp); if (!err) perf_event_enable(bp); @@ -480,7 +551,36 @@ static struct notifier_block hw_breakpoint_exceptions_nb = { static int __init init_hw_breakpoint(void) { + unsigned int **task_bp_pinned; + int cpu, err_cpu; + int i; + + for (i = 0; i < TYPE_MAX; i++) + nr_slots[i] = hw_breakpoint_slots(i); + + for_each_possible_cpu(cpu) { + for (i = 0; i < TYPE_MAX; i++) { + task_bp_pinned = &per_cpu(nr_task_bp_pinned[i], cpu); + *task_bp_pinned = kzalloc(sizeof(int) * nr_slots[i], + GFP_KERNEL); + if (!*task_bp_pinned) + goto err_alloc; + } + } + + constraints_initialized = 1; + return register_die_notifier(&hw_breakpoint_exceptions_nb); + + err_alloc: + for_each_possible_cpu(err_cpu) { + if (err_cpu == cpu) + break; + for (i = 0; i < TYPE_MAX; i++) + kfree(per_cpu(nr_task_bp_pinned[i], cpu)); + } + + return -ENOMEM; } core_initcall(init_hw_breakpoint); diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 0ed46f3e51e9..282035f3ae96 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1588,6 +1588,72 @@ static void __kprobes kill_kprobe(struct kprobe *p) arch_remove_kprobe(p); } +/* Disable one kprobe */ +int __kprobes disable_kprobe(struct kprobe *kp) +{ + int ret = 0; + struct kprobe *p; + + mutex_lock(&kprobe_mutex); + + /* Check whether specified probe is valid. */ + p = __get_valid_kprobe(kp); + if (unlikely(p == NULL)) { + ret = -EINVAL; + goto out; + } + + /* If the probe is already disabled (or gone), just return */ + if (kprobe_disabled(kp)) + goto out; + + kp->flags |= KPROBE_FLAG_DISABLED; + if (p != kp) + /* When kp != p, p is always enabled. */ + try_to_disable_aggr_kprobe(p); + + if (!kprobes_all_disarmed && kprobe_disabled(p)) + disarm_kprobe(p); +out: + mutex_unlock(&kprobe_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(disable_kprobe); + +/* Enable one kprobe */ +int __kprobes enable_kprobe(struct kprobe *kp) +{ + int ret = 0; + struct kprobe *p; + + mutex_lock(&kprobe_mutex); + + /* Check whether specified probe is valid. */ + p = __get_valid_kprobe(kp); + if (unlikely(p == NULL)) { + ret = -EINVAL; + goto out; + } + + if (kprobe_gone(kp)) { + /* This kprobe has gone, we couldn't enable it. */ + ret = -EINVAL; + goto out; + } + + if (p != kp) + kp->flags &= ~KPROBE_FLAG_DISABLED; + + if (!kprobes_all_disarmed && kprobe_disabled(p)) { + p->flags &= ~KPROBE_FLAG_DISABLED; + arm_kprobe(p); + } +out: + mutex_unlock(&kprobe_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(enable_kprobe); + void __kprobes dump_kprobe(struct kprobe *kp) { printk(KERN_WARNING "Dumping kprobe:\n"); @@ -1805,72 +1871,6 @@ static const struct file_operations debugfs_kprobes_operations = { .release = seq_release, }; -/* Disable one kprobe */ -int __kprobes disable_kprobe(struct kprobe *kp) -{ - int ret = 0; - struct kprobe *p; - - mutex_lock(&kprobe_mutex); - - /* Check whether specified probe is valid. 
*/ - p = __get_valid_kprobe(kp); - if (unlikely(p == NULL)) { - ret = -EINVAL; - goto out; - } - - /* If the probe is already disabled (or gone), just return */ - if (kprobe_disabled(kp)) - goto out; - - kp->flags |= KPROBE_FLAG_DISABLED; - if (p != kp) - /* When kp != p, p is always enabled. */ - try_to_disable_aggr_kprobe(p); - - if (!kprobes_all_disarmed && kprobe_disabled(p)) - disarm_kprobe(p); -out: - mutex_unlock(&kprobe_mutex); - return ret; -} -EXPORT_SYMBOL_GPL(disable_kprobe); - -/* Enable one kprobe */ -int __kprobes enable_kprobe(struct kprobe *kp) -{ - int ret = 0; - struct kprobe *p; - - mutex_lock(&kprobe_mutex); - - /* Check whether specified probe is valid. */ - p = __get_valid_kprobe(kp); - if (unlikely(p == NULL)) { - ret = -EINVAL; - goto out; - } - - if (kprobe_gone(kp)) { - /* This kprobe has gone, we couldn't enable it. */ - ret = -EINVAL; - goto out; - } - - if (p != kp) - kp->flags &= ~KPROBE_FLAG_DISABLED; - - if (!kprobes_all_disarmed && kprobe_disabled(p)) { - p->flags &= ~KPROBE_FLAG_DISABLED; - arm_kprobe(p); - } -out: - mutex_unlock(&kprobe_mutex); - return ret; -} -EXPORT_SYMBOL_GPL(enable_kprobe); - static void __kprobes arm_all_kprobes(void) { struct hlist_head *head; diff --git a/kernel/lockdep.c b/kernel/lockdep.c index 2594e1ce41cb..ec21304856d1 100644 --- a/kernel/lockdep.c +++ b/kernel/lockdep.c @@ -431,20 +431,7 @@ static struct stack_trace lockdep_init_trace = { /* * Various lockdep statistics: */ -atomic_t chain_lookup_hits; -atomic_t chain_lookup_misses; -atomic_t hardirqs_on_events; -atomic_t hardirqs_off_events; -atomic_t redundant_hardirqs_on; -atomic_t redundant_hardirqs_off; -atomic_t softirqs_on_events; -atomic_t softirqs_off_events; -atomic_t redundant_softirqs_on; -atomic_t redundant_softirqs_off; -atomic_t nr_unused_locks; -atomic_t nr_cyclic_checks; -atomic_t nr_find_usage_forwards_checks; -atomic_t nr_find_usage_backwards_checks; +DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats); #endif /* @@ -748,7 +735,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force) return NULL; } class = lock_classes + nr_lock_classes++; - debug_atomic_inc(&nr_unused_locks); + debug_atomic_inc(nr_unused_locks); class->key = key; class->name = lock->name; class->subclass = subclass; @@ -818,7 +805,8 @@ static struct lock_list *alloc_list_entry(void) * Add a new dependency to the head of the list: */ static int add_lock_to_list(struct lock_class *class, struct lock_class *this, - struct list_head *head, unsigned long ip, int distance) + struct list_head *head, unsigned long ip, + int distance, struct stack_trace *trace) { struct lock_list *entry; /* @@ -829,11 +817,9 @@ static int add_lock_to_list(struct lock_class *class, struct lock_class *this, if (!entry) return 0; - if (!save_trace(&entry->trace)) - return 0; - entry->class = this; entry->distance = distance; + entry->trace = *trace; /* * Since we never remove from the dependency list, the list can * be walked lockless by other CPUs, it's only allocation @@ -1205,7 +1191,7 @@ check_noncircular(struct lock_list *root, struct lock_class *target, { int result; - debug_atomic_inc(&nr_cyclic_checks); + debug_atomic_inc(nr_cyclic_checks); result = __bfs_forwards(root, target, class_equal, target_entry); @@ -1242,7 +1228,7 @@ find_usage_forwards(struct lock_list *root, enum lock_usage_bit bit, { int result; - debug_atomic_inc(&nr_find_usage_forwards_checks); + debug_atomic_inc(nr_find_usage_forwards_checks); result = __bfs_forwards(root, (void *)bit, usage_match, target_entry); @@ 
-1265,7 +1251,7 @@ find_usage_backwards(struct lock_list *root, enum lock_usage_bit bit, { int result; - debug_atomic_inc(&nr_find_usage_backwards_checks); + debug_atomic_inc(nr_find_usage_backwards_checks); result = __bfs_backwards(root, (void *)bit, usage_match, target_entry); @@ -1635,12 +1621,20 @@ check_deadlock(struct task_struct *curr, struct held_lock *next, */ static int check_prev_add(struct task_struct *curr, struct held_lock *prev, - struct held_lock *next, int distance) + struct held_lock *next, int distance, int trylock_loop) { struct lock_list *entry; int ret; struct lock_list this; struct lock_list *uninitialized_var(target_entry); + /* + * Static variable, serialized by the graph_lock(). + * + * We use this static variable to save the stack trace in case + * we call into this function multiple times due to encountering + * trylocks in the held lock stack. + */ + static struct stack_trace trace; /* * Prove that the new <prev> -> <next> dependency would not @@ -1688,20 +1682,23 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev, } } + if (!trylock_loop && !save_trace(&trace)) + return 0; + /* * Ok, all validations passed, add the new lock * to the previous lock's dependency list: */ ret = add_lock_to_list(hlock_class(prev), hlock_class(next), &hlock_class(prev)->locks_after, - next->acquire_ip, distance); + next->acquire_ip, distance, &trace); if (!ret) return 0; ret = add_lock_to_list(hlock_class(next), hlock_class(prev), &hlock_class(next)->locks_before, - next->acquire_ip, distance); + next->acquire_ip, distance, &trace); if (!ret) return 0; @@ -1731,6 +1728,7 @@ static int check_prevs_add(struct task_struct *curr, struct held_lock *next) { int depth = curr->lockdep_depth; + int trylock_loop = 0; struct held_lock *hlock; /* @@ -1756,7 +1754,8 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next) * added: */ if (hlock->read != 2) { - if (!check_prev_add(curr, hlock, next, distance)) + if (!check_prev_add(curr, hlock, next, + distance, trylock_loop)) return 0; /* * Stop after the first non-trylock entry, @@ -1779,6 +1778,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next) if (curr->held_locks[depth].irq_context != curr->held_locks[depth-1].irq_context) break; + trylock_loop = 1; } return 1; out_bug: @@ -1825,7 +1825,7 @@ static inline int lookup_chain_cache(struct task_struct *curr, list_for_each_entry(chain, hash_head, entry) { if (chain->chain_key == chain_key) { cache_hit: - debug_atomic_inc(&chain_lookup_hits); + debug_atomic_inc(chain_lookup_hits); if (very_verbose(class)) printk("\nhash chain already cached, key: " "%016Lx tail class: [%p] %s\n", @@ -1890,7 +1890,7 @@ cache_hit: chain_hlocks[chain->base + j] = class - lock_classes; } list_add_tail_rcu(&chain->entry, hash_head); - debug_atomic_inc(&chain_lookup_misses); + debug_atomic_inc(chain_lookup_misses); inc_chains(); return 1; @@ -2311,7 +2311,12 @@ void trace_hardirqs_on_caller(unsigned long ip) return; if (unlikely(curr->hardirqs_enabled)) { - debug_atomic_inc(&redundant_hardirqs_on); + /* + * Neither irq nor preemption are disabled here + * so this is racy by nature but loosing one hit + * in a stat is not a big deal. 
+ */ + __debug_atomic_inc(redundant_hardirqs_on); return; } /* we'll do an OFF -> ON transition: */ @@ -2338,7 +2343,7 @@ void trace_hardirqs_on_caller(unsigned long ip) curr->hardirq_enable_ip = ip; curr->hardirq_enable_event = ++curr->irq_events; - debug_atomic_inc(&hardirqs_on_events); + debug_atomic_inc(hardirqs_on_events); } EXPORT_SYMBOL(trace_hardirqs_on_caller); @@ -2370,9 +2375,9 @@ void trace_hardirqs_off_caller(unsigned long ip) curr->hardirqs_enabled = 0; curr->hardirq_disable_ip = ip; curr->hardirq_disable_event = ++curr->irq_events; - debug_atomic_inc(&hardirqs_off_events); + debug_atomic_inc(hardirqs_off_events); } else - debug_atomic_inc(&redundant_hardirqs_off); + debug_atomic_inc(redundant_hardirqs_off); } EXPORT_SYMBOL(trace_hardirqs_off_caller); @@ -2396,7 +2401,7 @@ void trace_softirqs_on(unsigned long ip) return; if (curr->softirqs_enabled) { - debug_atomic_inc(&redundant_softirqs_on); + debug_atomic_inc(redundant_softirqs_on); return; } @@ -2406,7 +2411,7 @@ void trace_softirqs_on(unsigned long ip) curr->softirqs_enabled = 1; curr->softirq_enable_ip = ip; curr->softirq_enable_event = ++curr->irq_events; - debug_atomic_inc(&softirqs_on_events); + debug_atomic_inc(softirqs_on_events); /* * We are going to turn softirqs on, so set the * usage bit for all held locks, if hardirqs are @@ -2436,10 +2441,10 @@ void trace_softirqs_off(unsigned long ip) curr->softirqs_enabled = 0; curr->softirq_disable_ip = ip; curr->softirq_disable_event = ++curr->irq_events; - debug_atomic_inc(&softirqs_off_events); + debug_atomic_inc(softirqs_off_events); DEBUG_LOCKS_WARN_ON(!softirq_count()); } else - debug_atomic_inc(&redundant_softirqs_off); + debug_atomic_inc(redundant_softirqs_off); } static void __lockdep_trace_alloc(gfp_t gfp_mask, unsigned long flags) @@ -2644,7 +2649,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this, return 0; break; case LOCK_USED: - debug_atomic_dec(&nr_unused_locks); + debug_atomic_dec(nr_unused_locks); break; default: if (!debug_locks_off_graph_unlock()) @@ -2750,7 +2755,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, if (!class) return 0; } - debug_atomic_inc((atomic_t *)&class->ops); + atomic_inc((atomic_t *)&class->ops); if (very_verbose(class)) { printk("\nacquire class [%p] %s", class->key, class->name); if (class->name_version > 1) @@ -3227,7 +3232,7 @@ void lock_release(struct lockdep_map *lock, int nested, raw_local_irq_save(flags); check_flags(flags); current->lockdep_recursion = 1; - trace_lock_release(lock, nested, ip); + trace_lock_release(lock, ip); __lock_release(lock, nested, ip); current->lockdep_recursion = 0; raw_local_irq_restore(flags); @@ -3380,7 +3385,7 @@ found_it: hlock->holdtime_stamp = now; } - trace_lock_acquired(lock, ip, waittime); + trace_lock_acquired(lock, ip); stats = get_lock_stats(hlock_class(hlock)); if (waittime) { @@ -3801,8 +3806,11 @@ void lockdep_rcu_dereference(const char *file, const int line) { struct task_struct *curr = current; +#ifndef CONFIG_PROVE_RCU_REPEATEDLY if (!debug_locks_off()) return; +#endif /* #ifdef CONFIG_PROVE_RCU_REPEATEDLY */ + /* Note: the following can be executed concurrently, so be careful. */ printk("\n===================================================\n"); printk( "[ INFO: suspicious rcu_dereference_check() usage. 
]\n"); printk( "---------------------------------------------------\n"); diff --git a/kernel/lockdep_internals.h b/kernel/lockdep_internals.h index a2ee95ad1313..4f560cfedc8f 100644 --- a/kernel/lockdep_internals.h +++ b/kernel/lockdep_internals.h @@ -110,30 +110,60 @@ lockdep_count_backward_deps(struct lock_class *class) #endif #ifdef CONFIG_DEBUG_LOCKDEP + +#include <asm/local.h> /* - * Various lockdep statistics: + * Various lockdep statistics. + * We want them per cpu as they are often accessed in fast path + * and we want to avoid too much cache bouncing. */ -extern atomic_t chain_lookup_hits; -extern atomic_t chain_lookup_misses; -extern atomic_t hardirqs_on_events; -extern atomic_t hardirqs_off_events; -extern atomic_t redundant_hardirqs_on; -extern atomic_t redundant_hardirqs_off; -extern atomic_t softirqs_on_events; -extern atomic_t softirqs_off_events; -extern atomic_t redundant_softirqs_on; -extern atomic_t redundant_softirqs_off; -extern atomic_t nr_unused_locks; -extern atomic_t nr_cyclic_checks; -extern atomic_t nr_cyclic_check_recursions; -extern atomic_t nr_find_usage_forwards_checks; -extern atomic_t nr_find_usage_forwards_recursions; -extern atomic_t nr_find_usage_backwards_checks; -extern atomic_t nr_find_usage_backwards_recursions; -# define debug_atomic_inc(ptr) atomic_inc(ptr) -# define debug_atomic_dec(ptr) atomic_dec(ptr) -# define debug_atomic_read(ptr) atomic_read(ptr) +struct lockdep_stats { + int chain_lookup_hits; + int chain_lookup_misses; + int hardirqs_on_events; + int hardirqs_off_events; + int redundant_hardirqs_on; + int redundant_hardirqs_off; + int softirqs_on_events; + int softirqs_off_events; + int redundant_softirqs_on; + int redundant_softirqs_off; + int nr_unused_locks; + int nr_cyclic_checks; + int nr_cyclic_check_recursions; + int nr_find_usage_forwards_checks; + int nr_find_usage_forwards_recursions; + int nr_find_usage_backwards_checks; + int nr_find_usage_backwards_recursions; +}; + +DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats); + +#define __debug_atomic_inc(ptr) \ + this_cpu_inc(lockdep_stats.ptr); + +#define debug_atomic_inc(ptr) { \ + WARN_ON_ONCE(!irqs_disabled()); \ + __this_cpu_inc(lockdep_stats.ptr); \ +} + +#define debug_atomic_dec(ptr) { \ + WARN_ON_ONCE(!irqs_disabled()); \ + __this_cpu_dec(lockdep_stats.ptr); \ +} + +#define debug_atomic_read(ptr) ({ \ + struct lockdep_stats *__cpu_lockdep_stats; \ + unsigned long long __total = 0; \ + int __cpu; \ + for_each_possible_cpu(__cpu) { \ + __cpu_lockdep_stats = &per_cpu(lockdep_stats, __cpu); \ + __total += __cpu_lockdep_stats->ptr; \ + } \ + __total; \ +}) #else +# define __debug_atomic_inc(ptr) do { } while (0) # define debug_atomic_inc(ptr) do { } while (0) # define debug_atomic_dec(ptr) do { } while (0) # define debug_atomic_read(ptr) 0 diff --git a/kernel/lockdep_proc.c b/kernel/lockdep_proc.c index d4aba4f3584c..59b76c8ce9d7 100644 --- a/kernel/lockdep_proc.c +++ b/kernel/lockdep_proc.c @@ -184,34 +184,34 @@ static const struct file_operations proc_lockdep_chains_operations = { static void lockdep_stats_debug_show(struct seq_file *m) { #ifdef CONFIG_DEBUG_LOCKDEP - unsigned int hi1 = debug_atomic_read(&hardirqs_on_events), - hi2 = debug_atomic_read(&hardirqs_off_events), - hr1 = debug_atomic_read(&redundant_hardirqs_on), - hr2 = debug_atomic_read(&redundant_hardirqs_off), - si1 = debug_atomic_read(&softirqs_on_events), - si2 = debug_atomic_read(&softirqs_off_events), - sr1 = debug_atomic_read(&redundant_softirqs_on), - sr2 = debug_atomic_read(&redundant_softirqs_off); - - 
seq_printf(m, " chain lookup misses: %11u\n", - debug_atomic_read(&chain_lookup_misses)); - seq_printf(m, " chain lookup hits: %11u\n", - debug_atomic_read(&chain_lookup_hits)); - seq_printf(m, " cyclic checks: %11u\n", - debug_atomic_read(&nr_cyclic_checks)); - seq_printf(m, " find-mask forwards checks: %11u\n", - debug_atomic_read(&nr_find_usage_forwards_checks)); - seq_printf(m, " find-mask backwards checks: %11u\n", - debug_atomic_read(&nr_find_usage_backwards_checks)); - - seq_printf(m, " hardirq on events: %11u\n", hi1); - seq_printf(m, " hardirq off events: %11u\n", hi2); - seq_printf(m, " redundant hardirq ons: %11u\n", hr1); - seq_printf(m, " redundant hardirq offs: %11u\n", hr2); - seq_printf(m, " softirq on events: %11u\n", si1); - seq_printf(m, " softirq off events: %11u\n", si2); - seq_printf(m, " redundant softirq ons: %11u\n", sr1); - seq_printf(m, " redundant softirq offs: %11u\n", sr2); + unsigned long long hi1 = debug_atomic_read(hardirqs_on_events), + hi2 = debug_atomic_read(hardirqs_off_events), + hr1 = debug_atomic_read(redundant_hardirqs_on), + hr2 = debug_atomic_read(redundant_hardirqs_off), + si1 = debug_atomic_read(softirqs_on_events), + si2 = debug_atomic_read(softirqs_off_events), + sr1 = debug_atomic_read(redundant_softirqs_on), + sr2 = debug_atomic_read(redundant_softirqs_off); + + seq_printf(m, " chain lookup misses: %11llu\n", + debug_atomic_read(chain_lookup_misses)); + seq_printf(m, " chain lookup hits: %11llu\n", + debug_atomic_read(chain_lookup_hits)); + seq_printf(m, " cyclic checks: %11llu\n", + debug_atomic_read(nr_cyclic_checks)); + seq_printf(m, " find-mask forwards checks: %11llu\n", + debug_atomic_read(nr_find_usage_forwards_checks)); + seq_printf(m, " find-mask backwards checks: %11llu\n", + debug_atomic_read(nr_find_usage_backwards_checks)); + + seq_printf(m, " hardirq on events: %11llu\n", hi1); + seq_printf(m, " hardirq off events: %11llu\n", hi2); + seq_printf(m, " redundant hardirq ons: %11llu\n", hr1); + seq_printf(m, " redundant hardirq offs: %11llu\n", hr2); + seq_printf(m, " softirq on events: %11llu\n", si1); + seq_printf(m, " softirq off events: %11llu\n", si2); + seq_printf(m, " redundant softirq ons: %11llu\n", sr1); + seq_printf(m, " redundant softirq offs: %11llu\n", sr2); #endif } @@ -263,7 +263,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v) #endif } #ifdef CONFIG_DEBUG_LOCKDEP - DEBUG_LOCKS_WARN_ON(debug_atomic_read(&nr_unused_locks) != nr_unused); + DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused); #endif seq_printf(m, " lock-classes: %11lu [max: %lu]\n", nr_lock_classes, MAX_LOCKDEP_KEYS); diff --git a/kernel/module.c b/kernel/module.c index 1016b75b026a..e2564580f3f1 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -59,8 +59,6 @@ #define CREATE_TRACE_POINTS #include <trace/events/module.h> -EXPORT_TRACEPOINT_SYMBOL(module_get); - #if 0 #define DEBUGP printk #else @@ -515,6 +513,9 @@ MODINFO_ATTR(srcversion); static char last_unloaded_module[MODULE_NAME_LEN+1]; #ifdef CONFIG_MODULE_UNLOAD + +EXPORT_TRACEPOINT_SYMBOL(module_get); + /* Init the unload section of the module. */ static void module_unload_init(struct module *mod) { @@ -723,16 +724,8 @@ SYSCALL_DEFINE2(delete_module, const char __user *, name_user, return -EFAULT; name[MODULE_NAME_LEN-1] = '\0'; - /* Create stop_machine threads since free_module relies on - * a non-failing stop_machine call. 
*/ - ret = stop_machine_create(); - if (ret) - return ret; - - if (mutex_lock_interruptible(&module_mutex) != 0) { - ret = -EINTR; - goto out_stop; - } + if (mutex_lock_interruptible(&module_mutex) != 0) + return -EINTR; mod = find_module(name); if (!mod) { @@ -792,8 +785,6 @@ SYSCALL_DEFINE2(delete_module, const char __user *, name_user, out: mutex_unlock(&module_mutex); -out_stop: - stop_machine_destroy(); return ret; } @@ -867,8 +858,7 @@ void module_put(struct module *module) smp_wmb(); /* see comment in module_refcount */ __this_cpu_inc(module->refptr->decs); - trace_module_put(module, _RET_IP_, - __this_cpu_read(module->refptr->decs)); + trace_module_put(module, _RET_IP_); /* Maybe they're waiting for us to drop reference? */ if (unlikely(!module_is_live(module))) wake_up_process(module->waiter); diff --git a/kernel/perf_event.c b/kernel/perf_event.c index 3d1552d3c12b..a4fa381db3c2 100644 --- a/kernel/perf_event.c +++ b/kernel/perf_event.c @@ -16,6 +16,7 @@ #include <linux/file.h> #include <linux/poll.h> #include <linux/slab.h> +#include <linux/hash.h> #include <linux/sysfs.h> #include <linux/dcache.h> #include <linux/percpu.h> @@ -82,14 +83,6 @@ extern __weak const struct pmu *hw_perf_event_init(struct perf_event *event) void __weak hw_perf_disable(void) { barrier(); } void __weak hw_perf_enable(void) { barrier(); } -int __weak -hw_perf_group_sched_in(struct perf_event *group_leader, - struct perf_cpu_context *cpuctx, - struct perf_event_context *ctx) -{ - return 0; -} - void __weak perf_event_print_debug(void) { } static DEFINE_PER_CPU(int, perf_disable_count); @@ -262,6 +255,18 @@ static void update_event_times(struct perf_event *event) event->total_time_running = run_end - event->tstamp_running; } +/* + * Update total_time_enabled and total_time_running for all events in a group. + */ +static void update_group_times(struct perf_event *leader) +{ + struct perf_event *event; + + update_event_times(leader); + list_for_each_entry(event, &leader->sibling_list, group_entry) + update_event_times(event); +} + static struct list_head * ctx_group_list(struct perf_event *event, struct perf_event_context *ctx) { @@ -315,8 +320,6 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx) static void list_del_event(struct perf_event *event, struct perf_event_context *ctx) { - struct perf_event *sibling, *tmp; - if (list_empty(&event->group_entry)) return; ctx->nr_events--; @@ -329,7 +332,7 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx) if (event->group_leader != event) event->group_leader->nr_siblings--; - update_event_times(event); + update_group_times(event); /* * If event was in error state, then keep it @@ -340,6 +343,12 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx) */ if (event->state > PERF_EVENT_STATE_OFF) event->state = PERF_EVENT_STATE_OFF; +} + +static void +perf_destroy_group(struct perf_event *event, struct perf_event_context *ctx) +{ + struct perf_event *sibling, *tmp; /* * If this was a group event with sibling events then @@ -505,18 +514,6 @@ retry: } /* - * Update total_time_enabled and total_time_running for all events in a group. 
- */ -static void update_group_times(struct perf_event *leader) -{ - struct perf_event *event; - - update_event_times(leader); - list_for_each_entry(event, &leader->sibling_list, group_entry) - update_event_times(event); -} - -/* * Cross CPU call to disable a performance event */ static void __perf_event_disable(void *info) @@ -640,15 +637,20 @@ group_sched_in(struct perf_event *group_event, struct perf_cpu_context *cpuctx, struct perf_event_context *ctx) { - struct perf_event *event, *partial_group; + struct perf_event *event, *partial_group = NULL; + const struct pmu *pmu = group_event->pmu; + bool txn = false; int ret; if (group_event->state == PERF_EVENT_STATE_OFF) return 0; - ret = hw_perf_group_sched_in(group_event, cpuctx, ctx); - if (ret) - return ret < 0 ? ret : 0; + /* Check if group transaction availabe */ + if (pmu->start_txn) + txn = true; + + if (txn) + pmu->start_txn(pmu); if (event_sched_in(group_event, cpuctx, ctx)) return -EAGAIN; @@ -663,9 +665,19 @@ group_sched_in(struct perf_event *group_event, } } - return 0; + if (!txn) + return 0; + + ret = pmu->commit_txn(pmu); + if (!ret) { + pmu->cancel_txn(pmu); + return 0; + } group_error: + if (txn) + pmu->cancel_txn(pmu); + /* * Groups can be scheduled in as one unit only, so undo any * partial group before returning: @@ -1367,6 +1379,8 @@ void perf_event_task_sched_in(struct task_struct *task) if (cpuctx->task_ctx == ctx) return; + perf_disable(); + /* * We want to keep the following priority order: * cpu pinned (that don't need to move), task pinned, @@ -1379,6 +1393,8 @@ void perf_event_task_sched_in(struct task_struct *task) ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE); cpuctx->task_ctx = ctx; + + perf_enable(); } #define MAX_INTERRUPTS (~0ULL) @@ -1856,9 +1872,30 @@ int perf_event_release_kernel(struct perf_event *event) { struct perf_event_context *ctx = event->ctx; + /* + * Remove from the PMU, can't get re-enabled since we got + * here because the last ref went. + */ + perf_event_disable(event); + WARN_ON_ONCE(ctx->parent_ctx); - mutex_lock(&ctx->mutex); - perf_event_remove_from_context(event); + /* + * There are two ways this annotation is useful: + * + * 1) there is a lock recursion from perf_event_exit_task + * see the comment there. + * + * 2) there is a lock-inversion with mmap_sem through + * perf_event_read_group(), which takes faults while + * holding ctx->mutex, however this is called after + * the last filedesc died, so there is no possibility + * to trigger the AB-BA case. + */ + mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING); + raw_spin_lock_irq(&ctx->lock); + list_del_event(event, ctx); + perf_destroy_group(event, ctx); + raw_spin_unlock_irq(&ctx->lock); mutex_unlock(&ctx->mutex); mutex_lock(&event->owner->perf_event_mutex); @@ -2642,6 +2679,7 @@ static int perf_fasync(int fd, struct file *filp, int on) } static const struct file_operations perf_fops = { + .llseek = no_llseek, .release = perf_release, .read = perf_read, .poll = perf_poll, @@ -2792,6 +2830,27 @@ void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int ski /* + * We assume there is only KVM supporting the callbacks. + * Later on, we might change it to a list if there is + * another virtualization implementation supporting the callbacks. 
+ */ +struct perf_guest_info_callbacks *perf_guest_cbs; + +int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs) +{ + perf_guest_cbs = cbs; + return 0; +} +EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks); + +int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs) +{ + perf_guest_cbs = NULL; + return 0; +} +EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks); + +/* * Output */ static bool perf_output_space(struct perf_mmap_data *data, unsigned long tail, @@ -3743,7 +3802,7 @@ void __perf_event_mmap(struct vm_area_struct *vma) .event_id = { .header = { .type = PERF_RECORD_MMAP, - .misc = 0, + .misc = PERF_RECORD_MISC_USER, /* .size */ }, /* .pid */ @@ -3961,36 +4020,6 @@ static void perf_swevent_add(struct perf_event *event, u64 nr, perf_swevent_overflow(event, 0, nmi, data, regs); } -static int perf_swevent_is_counting(struct perf_event *event) -{ - /* - * The event is active, we're good! - */ - if (event->state == PERF_EVENT_STATE_ACTIVE) - return 1; - - /* - * The event is off/error, not counting. - */ - if (event->state != PERF_EVENT_STATE_INACTIVE) - return 0; - - /* - * The event is inactive, if the context is active - * we're part of a group that didn't make it on the 'pmu', - * not counting. - */ - if (event->ctx->is_active) - return 0; - - /* - * We're inactive and the context is too, this means the - * task is scheduled out, we're counting events that happen - * to us, like migration events. - */ - return 1; -} - static int perf_tp_event_match(struct perf_event *event, struct perf_sample_data *data); @@ -4014,12 +4043,6 @@ static int perf_swevent_match(struct perf_event *event, struct perf_sample_data *data, struct pt_regs *regs) { - if (event->cpu != -1 && event->cpu != smp_processor_id()) - return 0; - - if (!perf_swevent_is_counting(event)) - return 0; - if (event->attr.type != type) return 0; @@ -4036,18 +4059,53 @@ static int perf_swevent_match(struct perf_event *event, return 1; } -static void perf_swevent_ctx_event(struct perf_event_context *ctx, - enum perf_type_id type, - u32 event_id, u64 nr, int nmi, - struct perf_sample_data *data, - struct pt_regs *regs) +static inline u64 swevent_hash(u64 type, u32 event_id) { + u64 val = event_id | (type << 32); + + return hash_64(val, SWEVENT_HLIST_BITS); +} + +static struct hlist_head * +find_swevent_head(struct perf_cpu_context *ctx, u64 type, u32 event_id) +{ + u64 hash; + struct swevent_hlist *hlist; + + hash = swevent_hash(type, event_id); + + hlist = rcu_dereference(ctx->swevent_hlist); + if (!hlist) + return NULL; + + return &hlist->heads[hash]; +} + +static void do_perf_sw_event(enum perf_type_id type, u32 event_id, + u64 nr, int nmi, + struct perf_sample_data *data, + struct pt_regs *regs) +{ + struct perf_cpu_context *cpuctx; struct perf_event *event; + struct hlist_node *node; + struct hlist_head *head; - list_for_each_entry_rcu(event, &ctx->event_list, event_entry) { + cpuctx = &__get_cpu_var(perf_cpu_context); + + rcu_read_lock(); + + head = find_swevent_head(cpuctx, type, event_id); + + if (!head) + goto end; + + hlist_for_each_entry_rcu(event, node, head, hlist_entry) { if (perf_swevent_match(event, type, event_id, data, regs)) perf_swevent_add(event, nr, nmi, data, regs); } +end: + rcu_read_unlock(); } int perf_swevent_get_recursion_context(void) @@ -4085,27 +4143,6 @@ void perf_swevent_put_recursion_context(int rctx) } EXPORT_SYMBOL_GPL(perf_swevent_put_recursion_context); -static void do_perf_sw_event(enum perf_type_id type, u32 event_id, - u64 nr, int nmi, 
- struct perf_sample_data *data, - struct pt_regs *regs) -{ - struct perf_cpu_context *cpuctx; - struct perf_event_context *ctx; - - cpuctx = &__get_cpu_var(perf_cpu_context); - rcu_read_lock(); - perf_swevent_ctx_event(&cpuctx->ctx, type, event_id, - nr, nmi, data, regs); - /* - * doesn't really matter which of the child contexts the - * events ends up in. - */ - ctx = rcu_dereference(current->perf_event_ctxp); - if (ctx) - perf_swevent_ctx_event(ctx, type, event_id, nr, nmi, data, regs); - rcu_read_unlock(); -} void __perf_sw_event(u32 event_id, u64 nr, int nmi, struct pt_regs *regs, u64 addr) @@ -4131,16 +4168,28 @@ static void perf_swevent_read(struct perf_event *event) static int perf_swevent_enable(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; + struct perf_cpu_context *cpuctx; + struct hlist_head *head; + + cpuctx = &__get_cpu_var(perf_cpu_context); if (hwc->sample_period) { hwc->last_period = hwc->sample_period; perf_swevent_set_period(event); } + + head = find_swevent_head(cpuctx, event->attr.type, event->attr.config); + if (WARN_ON_ONCE(!head)) + return -EINVAL; + + hlist_add_head_rcu(&event->hlist_entry, head); + return 0; } static void perf_swevent_disable(struct perf_event *event) { + hlist_del_rcu(&event->hlist_entry); } static const struct pmu perf_ops_generic = { @@ -4168,15 +4217,8 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer) perf_sample_data_init(&data, 0); data.period = event->hw.last_period; regs = get_irq_regs(); - /* - * In case we exclude kernel IPs or are somehow not in interrupt - * context, provide the next best thing, the user IP. - */ - if ((event->attr.exclude_kernel || !regs) && - !event->attr.exclude_user) - regs = task_pt_regs(current); - if (regs) { + if (regs && !perf_exclude_event(event, regs)) { if (!(event->attr.exclude_idle && current->pid == 0)) if (perf_event_overflow(event, 0, &data, regs)) ret = HRTIMER_NORESTART; @@ -4324,6 +4366,105 @@ static const struct pmu perf_ops_task_clock = { .read = task_clock_perf_event_read, }; +static void swevent_hlist_release_rcu(struct rcu_head *rcu_head) +{ + struct swevent_hlist *hlist; + + hlist = container_of(rcu_head, struct swevent_hlist, rcu_head); + kfree(hlist); +} + +static void swevent_hlist_release(struct perf_cpu_context *cpuctx) +{ + struct swevent_hlist *hlist; + + if (!cpuctx->swevent_hlist) + return; + + hlist = cpuctx->swevent_hlist; + rcu_assign_pointer(cpuctx->swevent_hlist, NULL); + call_rcu(&hlist->rcu_head, swevent_hlist_release_rcu); +} + +static void swevent_hlist_put_cpu(struct perf_event *event, int cpu) +{ + struct perf_cpu_context *cpuctx = &per_cpu(perf_cpu_context, cpu); + + mutex_lock(&cpuctx->hlist_mutex); + + if (!--cpuctx->hlist_refcount) + swevent_hlist_release(cpuctx); + + mutex_unlock(&cpuctx->hlist_mutex); +} + +static void swevent_hlist_put(struct perf_event *event) +{ + int cpu; + + if (event->cpu != -1) { + swevent_hlist_put_cpu(event, event->cpu); + return; + } + + for_each_possible_cpu(cpu) + swevent_hlist_put_cpu(event, cpu); +} + +static int swevent_hlist_get_cpu(struct perf_event *event, int cpu) +{ + struct perf_cpu_context *cpuctx = &per_cpu(perf_cpu_context, cpu); + int err = 0; + + mutex_lock(&cpuctx->hlist_mutex); + + if (!cpuctx->swevent_hlist && cpu_online(cpu)) { + struct swevent_hlist *hlist; + + hlist = kzalloc(sizeof(*hlist), GFP_KERNEL); + if (!hlist) { + err = -ENOMEM; + goto exit; + } + rcu_assign_pointer(cpuctx->swevent_hlist, hlist); + } + cpuctx->hlist_refcount++; + exit: + 
mutex_unlock(&cpuctx->hlist_mutex); + + return err; +} + +static int swevent_hlist_get(struct perf_event *event) +{ + int err; + int cpu, failed_cpu; + + if (event->cpu != -1) + return swevent_hlist_get_cpu(event, event->cpu); + + get_online_cpus(); + for_each_possible_cpu(cpu) { + err = swevent_hlist_get_cpu(event, cpu); + if (err) { + failed_cpu = cpu; + goto fail; + } + } + put_online_cpus(); + + return 0; + fail: + for_each_possible_cpu(cpu) { + if (cpu == failed_cpu) + break; + swevent_hlist_put_cpu(event, cpu); + } + + put_online_cpus(); + return err; +} + #ifdef CONFIG_EVENT_TRACING void perf_tp_event(int event_id, u64 addr, u64 count, void *record, @@ -4357,10 +4498,13 @@ static int perf_tp_event_match(struct perf_event *event, static void tp_perf_event_destroy(struct perf_event *event) { perf_trace_disable(event->attr.config); + swevent_hlist_put(event); } static const struct pmu *tp_perf_event_init(struct perf_event *event) { + int err; + /* * Raw tracepoint data is a severe data leak, only allow root to * have these. @@ -4374,6 +4518,11 @@ static const struct pmu *tp_perf_event_init(struct perf_event *event) return NULL; event->destroy = tp_perf_event_destroy; + err = swevent_hlist_get(event); + if (err) { + perf_trace_disable(event->attr.config); + return ERR_PTR(err); + } return &perf_ops_generic; } @@ -4474,6 +4623,7 @@ static void sw_perf_event_destroy(struct perf_event *event) WARN_ON(event->parent); atomic_dec(&perf_swevent_enabled[event_id]); + swevent_hlist_put(event); } static const struct pmu *sw_perf_event_init(struct perf_event *event) @@ -4512,6 +4662,12 @@ static const struct pmu *sw_perf_event_init(struct perf_event *event) case PERF_COUNT_SW_ALIGNMENT_FAULTS: case PERF_COUNT_SW_EMULATION_FAULTS: if (!event->parent) { + int err; + + err = swevent_hlist_get(event); + if (err) + return ERR_PTR(err); + atomic_inc(&perf_swevent_enabled[event_id]); event->destroy = sw_perf_event_destroy; } @@ -5176,7 +5332,7 @@ void perf_event_exit_task(struct task_struct *child) * * But since its the parent context it won't be the same instance. 
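The software-event hunks above replace the walk over every event in a context with a per-CPU hash table: swevent_hash() packs (type, event_id) into one 64-bit key, hash_64() picks a bucket, and swevent_hlist_get()/swevent_hlist_put() refcount the table from the event init and destroy paths. A stand-alone model of the bucketing, not part of the patch (the table-size constant below is an assumption):

#include <stdio.h>
#include <stdint.h>

#define DEMO_HLIST_BITS	8	/* assumed; the kernel uses SWEVENT_HLIST_BITS */

/* The kernel's hash_64() computes this same golden-ratio product with shifts and adds. */
static unsigned int demo_hash_64(uint64_t val, unsigned int bits)
{
	return (unsigned int)((val * 0x9e37fffffffc0001ULL) >> (64 - bits));
}

static unsigned int demo_swevent_bucket(uint64_t type, uint32_t event_id)
{
	uint64_t val = event_id | (type << 32);	/* same packing as swevent_hash() */

	return demo_hash_64(val, DEMO_HLIST_BITS);
}

int main(void)
{
	/* PERF_TYPE_SOFTWARE == 1, PERF_COUNT_SW_PAGE_FAULTS == 2 */
	printf("page-fault software events land in bucket %u\n",
	       demo_swevent_bucket(1, 2));
	return 0;
}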
*/ - mutex_lock_nested(&child_ctx->mutex, SINGLE_DEPTH_NESTING); + mutex_lock(&child_ctx->mutex); again: list_for_each_entry_safe(child_event, tmp, &child_ctx->pinned_groups, @@ -5384,6 +5540,7 @@ static void __init perf_event_init_all_cpus(void) for_each_possible_cpu(cpu) { cpuctx = &per_cpu(perf_cpu_context, cpu); + mutex_init(&cpuctx->hlist_mutex); __perf_event_init_context(&cpuctx->ctx, NULL); } } @@ -5397,6 +5554,16 @@ static void __cpuinit perf_event_init_cpu(int cpu) spin_lock(&perf_resource_lock); cpuctx->max_pertask = perf_max_events - perf_reserved_percpu; spin_unlock(&perf_resource_lock); + + mutex_lock(&cpuctx->hlist_mutex); + if (cpuctx->hlist_refcount > 0) { + struct swevent_hlist *hlist; + + hlist = kzalloc(sizeof(*hlist), GFP_KERNEL); + WARN_ON_ONCE(!hlist); + rcu_assign_pointer(cpuctx->swevent_hlist, hlist); + } + mutex_unlock(&cpuctx->hlist_mutex); } #ifdef CONFIG_HOTPLUG_CPU @@ -5416,6 +5583,10 @@ static void perf_event_exit_cpu(int cpu) struct perf_cpu_context *cpuctx = &per_cpu(perf_cpu_context, cpu); struct perf_event_context *ctx = &cpuctx->ctx; + mutex_lock(&cpuctx->hlist_mutex); + swevent_hlist_release(cpuctx); + mutex_unlock(&cpuctx->hlist_mutex); + mutex_lock(&ctx->mutex); smp_call_function_single(cpu, __perf_event_exit_cpu, NULL, 1); mutex_unlock(&ctx->mutex); diff --git a/kernel/profile.c b/kernel/profile.c index a55d3a367ae8..dfadc5b729f1 100644 --- a/kernel/profile.c +++ b/kernel/profile.c @@ -127,8 +127,10 @@ int __ref profile_init(void) return 0; prof_buffer = vmalloc(buffer_bytes); - if (prof_buffer) + if (prof_buffer) { + memset(prof_buffer, 0, buffer_bytes); return 0; + } free_cpumask_var(prof_cpu_mask); return -ENOMEM; diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 42ad8ae729a0..6af9cdd558b7 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -14,7 +14,6 @@ #include <linux/mm.h> #include <linux/highmem.h> #include <linux/pagemap.h> -#include <linux/smp_lock.h> #include <linux/ptrace.h> #include <linux/security.h> #include <linux/signal.h> @@ -76,7 +75,6 @@ void __ptrace_unlink(struct task_struct *child) child->parent = child->real_parent; list_del_init(&child->ptrace_entry); - arch_ptrace_untrace(child); if (task_is_traced(child)) ptrace_untrace(child); } @@ -666,10 +664,6 @@ SYSCALL_DEFINE4(ptrace, long, request, long, pid, long, addr, long, data) struct task_struct *child; long ret; - /* - * This lock_kernel fixes a subtle race with suid exec - */ - lock_kernel(); if (request == PTRACE_TRACEME) { ret = ptrace_traceme(); if (!ret) @@ -703,7 +697,6 @@ SYSCALL_DEFINE4(ptrace, long, request, long, pid, long, addr, long, data) out_put_task_struct: put_task_struct(child); out: - unlock_kernel(); return ret; } @@ -813,10 +806,6 @@ asmlinkage long compat_sys_ptrace(compat_long_t request, compat_long_t pid, struct task_struct *child; long ret; - /* - * This lock_kernel fixes a subtle race with suid exec - */ - lock_kernel(); if (request == PTRACE_TRACEME) { ret = ptrace_traceme(); goto out; @@ -846,7 +835,6 @@ asmlinkage long compat_sys_ptrace(compat_long_t request, compat_long_t pid, out_put_task_struct: put_task_struct(child); out: - unlock_kernel(); return ret; } #endif /* CONFIG_COMPAT */ diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c index 49d808e833b0..72a8dc9567f5 100644 --- a/kernel/rcupdate.c +++ b/kernel/rcupdate.c @@ -44,7 +44,6 @@ #include <linux/cpu.h> #include <linux/mutex.h> #include <linux/module.h> -#include <linux/kernel_stat.h> #include <linux/hardirq.h> #ifdef CONFIG_DEBUG_LOCK_ALLOC @@ -64,9 +63,6 @@ struct lockdep_map 
rcu_sched_lock_map = EXPORT_SYMBOL_GPL(rcu_sched_lock_map); #endif -int rcu_scheduler_active __read_mostly; -EXPORT_SYMBOL_GPL(rcu_scheduler_active); - #ifdef CONFIG_DEBUG_LOCK_ALLOC int debug_lockdep_rcu_enabled(void) @@ -97,21 +93,6 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held); #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ /* - * This function is invoked towards the end of the scheduler's initialization - * process. Before this is called, the idle task might contain - * RCU read-side critical sections (during which time, this idle - * task is booting the system). After this function is called, the - * idle tasks are prohibited from containing RCU read-side critical - * sections. - */ -void rcu_scheduler_starting(void) -{ - WARN_ON(num_online_cpus() != 1); - WARN_ON(nr_context_switches() > 0); - rcu_scheduler_active = 1; -} - -/* * Awaken the corresponding synchronize_rcu() instance now that a * grace period has elapsed. */ diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c index 9f6d9ff2572c..38729d3cd236 100644 --- a/kernel/rcutiny.c +++ b/kernel/rcutiny.c @@ -44,9 +44,9 @@ struct rcu_ctrlblk { }; /* Definition for rcupdate control block. */ -static struct rcu_ctrlblk rcu_ctrlblk = { - .donetail = &rcu_ctrlblk.rcucblist, - .curtail = &rcu_ctrlblk.rcucblist, +static struct rcu_ctrlblk rcu_sched_ctrlblk = { + .donetail = &rcu_sched_ctrlblk.rcucblist, + .curtail = &rcu_sched_ctrlblk.rcucblist, }; static struct rcu_ctrlblk rcu_bh_ctrlblk = { @@ -54,6 +54,11 @@ static struct rcu_ctrlblk rcu_bh_ctrlblk = { .curtail = &rcu_bh_ctrlblk.rcucblist, }; +#ifdef CONFIG_DEBUG_LOCK_ALLOC +int rcu_scheduler_active __read_mostly; +EXPORT_SYMBOL_GPL(rcu_scheduler_active); +#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ + #ifdef CONFIG_NO_HZ static long rcu_dynticks_nesting = 1; @@ -108,7 +113,8 @@ static int rcu_qsctr_help(struct rcu_ctrlblk *rcp) */ void rcu_sched_qs(int cpu) { - if (rcu_qsctr_help(&rcu_ctrlblk) + rcu_qsctr_help(&rcu_bh_ctrlblk)) + if (rcu_qsctr_help(&rcu_sched_ctrlblk) + + rcu_qsctr_help(&rcu_bh_ctrlblk)) raise_softirq(RCU_SOFTIRQ); } @@ -173,7 +179,7 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp) */ static void rcu_process_callbacks(struct softirq_action *unused) { - __rcu_process_callbacks(&rcu_ctrlblk); + __rcu_process_callbacks(&rcu_sched_ctrlblk); __rcu_process_callbacks(&rcu_bh_ctrlblk); } @@ -187,7 +193,8 @@ static void rcu_process_callbacks(struct softirq_action *unused) * * Cool, huh? (Due to Josh Triplett.) * - * But we want to make this a static inline later. + * But we want to make this a static inline later. The cond_resched() + * currently makes this problematic. */ void synchronize_sched(void) { @@ -195,12 +202,6 @@ void synchronize_sched(void) } EXPORT_SYMBOL_GPL(synchronize_sched); -void synchronize_rcu_bh(void) -{ - synchronize_sched(); -} -EXPORT_SYMBOL_GPL(synchronize_rcu_bh); - /* * Helper function for call_rcu() and call_rcu_bh(). */ @@ -226,7 +227,7 @@ static void __call_rcu(struct rcu_head *head, */ void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) { - __call_rcu(head, func, &rcu_ctrlblk); + __call_rcu(head, func, &rcu_sched_ctrlblk); } EXPORT_SYMBOL_GPL(call_rcu); @@ -244,11 +245,13 @@ void rcu_barrier(void) { struct rcu_synchronize rcu; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu(&rcu.head, wakeme_after_rcu); /* Wait for it. 
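The rcu_barrier() variants here and the synchronize_*() functions later in this diff all use one idiom: queue an on-stack rcu_head whose callback completes a completion, then sleep on it; the new init_rcu_head_on_stack()/destroy_rcu_head_on_stack() calls bracket the head so debug-objects can track a short-lived, stack-allocated object. A hedged sketch of the idiom as a caller would write it (not part of the patch; demo_ names are invented):

#include <linux/kernel.h>
#include <linux/completion.h>
#include <linux/rcupdate.h>

struct demo_rcu_sync {
	struct rcu_head		head;
	struct completion	done;
};

static void demo_wakeme(struct rcu_head *head)
{
	struct demo_rcu_sync *s = container_of(head, struct demo_rcu_sync, head);

	complete(&s->done);
}

static void demo_wait_for_grace_period(void)
{
	struct demo_rcu_sync s;

	init_rcu_head_on_stack(&s.head);
	init_completion(&s.done);
	call_rcu(&s.head, demo_wakeme);		/* callback runs after a grace period */
	wait_for_completion(&s.done);
	destroy_rcu_head_on_stack(&s.head);	/* head is about to go out of scope */
}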
*/ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(rcu_barrier); @@ -256,11 +259,13 @@ void rcu_barrier_bh(void) { struct rcu_synchronize rcu; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu_bh(&rcu.head, wakeme_after_rcu); /* Wait for it. */ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(rcu_barrier_bh); @@ -268,11 +273,13 @@ void rcu_barrier_sched(void) { struct rcu_synchronize rcu; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu_sched(&rcu.head, wakeme_after_rcu); /* Wait for it. */ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(rcu_barrier_sched); @@ -280,3 +287,5 @@ void __init rcu_init(void) { open_softirq(RCU_SOFTIRQ, rcu_process_callbacks); } + +#include "rcutiny_plugin.h" diff --git a/kernel/rcutiny_plugin.h b/kernel/rcutiny_plugin.h new file mode 100644 index 000000000000..d223a92bc742 --- /dev/null +++ b/kernel/rcutiny_plugin.h @@ -0,0 +1,39 @@ +/* + * Read-Copy Update mechanism for mutual exclusion (tree-based version) + * Internal non-public definitions that provide either classic + * or preemptable semantics. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * + * Copyright IBM Corporation, 2009 + * + * Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com> + */ + +#ifdef CONFIG_DEBUG_LOCK_ALLOC + +#include <linux/kernel_stat.h> + +/* + * During boot, we forgive RCU lockdep issues. After this function is + * invoked, we start taking RCU lockdep issues seriously. 
+ */ +void rcu_scheduler_starting(void) +{ + WARN_ON(nr_context_switches() > 0); + rcu_scheduler_active = 1; +} + +#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ diff --git a/kernel/rcutorture.c b/kernel/rcutorture.c index 58df55bf83ed..6535ac8bc6a5 100644 --- a/kernel/rcutorture.c +++ b/kernel/rcutorture.c @@ -464,9 +464,11 @@ static void rcu_bh_torture_synchronize(void) { struct rcu_bh_torture_synchronize rcu; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); call_rcu_bh(&rcu.head, rcu_bh_torture_wakeme_after_cb); wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } static struct rcu_torture_ops rcu_bh_ops = { @@ -669,7 +671,7 @@ static struct rcu_torture_ops sched_expedited_ops = { .sync = synchronize_sched_expedited, .cb_barrier = NULL, .fqs = rcu_sched_force_quiescent_state, - .stats = rcu_expedited_torture_stats, + .stats = NULL, .irq_capable = 1, .name = "sched_expedited" }; diff --git a/kernel/rcutree.c b/kernel/rcutree.c index 3ec8160fc75f..d4437345706f 100644 --- a/kernel/rcutree.c +++ b/kernel/rcutree.c @@ -46,6 +46,7 @@ #include <linux/cpu.h> #include <linux/mutex.h> #include <linux/time.h> +#include <linux/kernel_stat.h> #include "rcutree.h" @@ -53,8 +54,8 @@ static struct lock_class_key rcu_node_class[NUM_RCU_LVLS]; -#define RCU_STATE_INITIALIZER(name) { \ - .level = { &name.node[0] }, \ +#define RCU_STATE_INITIALIZER(structname) { \ + .level = { &structname.node[0] }, \ .levelcnt = { \ NUM_RCU_LVL_0, /* root of hierarchy. */ \ NUM_RCU_LVL_1, \ @@ -65,13 +66,14 @@ static struct lock_class_key rcu_node_class[NUM_RCU_LVLS]; .signaled = RCU_GP_IDLE, \ .gpnum = -300, \ .completed = -300, \ - .onofflock = __RAW_SPIN_LOCK_UNLOCKED(&name.onofflock), \ + .onofflock = __RAW_SPIN_LOCK_UNLOCKED(&structname.onofflock), \ .orphan_cbs_list = NULL, \ - .orphan_cbs_tail = &name.orphan_cbs_list, \ + .orphan_cbs_tail = &structname.orphan_cbs_list, \ .orphan_qlen = 0, \ - .fqslock = __RAW_SPIN_LOCK_UNLOCKED(&name.fqslock), \ + .fqslock = __RAW_SPIN_LOCK_UNLOCKED(&structname.fqslock), \ .n_force_qs = 0, \ .n_force_qs_ngp = 0, \ + .name = #structname, \ } struct rcu_state rcu_sched_state = RCU_STATE_INITIALIZER(rcu_sched_state); @@ -80,6 +82,9 @@ DEFINE_PER_CPU(struct rcu_data, rcu_sched_data); struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh_state); DEFINE_PER_CPU(struct rcu_data, rcu_bh_data); +int rcu_scheduler_active __read_mostly; +EXPORT_SYMBOL_GPL(rcu_scheduler_active); + /* * Return true if an RCU grace period is in progress. The ACCESS_ONCE()s * permit this function to be invoked without holding the root rcu_node @@ -97,25 +102,32 @@ static int rcu_gp_in_progress(struct rcu_state *rsp) */ void rcu_sched_qs(int cpu) { - struct rcu_data *rdp; + struct rcu_data *rdp = &per_cpu(rcu_sched_data, cpu); - rdp = &per_cpu(rcu_sched_data, cpu); rdp->passed_quiesc_completed = rdp->gpnum - 1; barrier(); rdp->passed_quiesc = 1; - rcu_preempt_note_context_switch(cpu); } void rcu_bh_qs(int cpu) { - struct rcu_data *rdp; + struct rcu_data *rdp = &per_cpu(rcu_bh_data, cpu); - rdp = &per_cpu(rcu_bh_data, cpu); rdp->passed_quiesc_completed = rdp->gpnum - 1; barrier(); rdp->passed_quiesc = 1; } +/* + * Note a context switch. This is a quiescent state for RCU-sched, + * and requires special handling for preemptible RCU. 
+ */ +void rcu_note_context_switch(int cpu) +{ + rcu_sched_qs(cpu); + rcu_preempt_note_context_switch(cpu); +} + #ifdef CONFIG_NO_HZ DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { .dynticks_nesting = 1, @@ -438,6 +450,8 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp) #ifdef CONFIG_RCU_CPU_STALL_DETECTOR +int rcu_cpu_stall_panicking __read_mostly; + static void record_gp_stall_check_time(struct rcu_state *rsp) { rsp->gp_start = jiffies; @@ -470,7 +484,8 @@ static void print_other_cpu_stall(struct rcu_state *rsp) /* OK, time to rat on our buddy... */ - printk(KERN_ERR "INFO: RCU detected CPU stalls:"); + printk(KERN_ERR "INFO: %s detected stalls on CPUs/tasks: {", + rsp->name); rcu_for_each_leaf_node(rsp, rnp) { raw_spin_lock_irqsave(&rnp->lock, flags); rcu_print_task_stall(rnp); @@ -481,7 +496,7 @@ static void print_other_cpu_stall(struct rcu_state *rsp) if (rnp->qsmask & (1UL << cpu)) printk(" %d", rnp->grplo + cpu); } - printk(" (detected by %d, t=%ld jiffies)\n", + printk("} (detected by %d, t=%ld jiffies)\n", smp_processor_id(), (long)(jiffies - rsp->gp_start)); trigger_all_cpu_backtrace(); @@ -497,8 +512,8 @@ static void print_cpu_stall(struct rcu_state *rsp) unsigned long flags; struct rcu_node *rnp = rcu_get_root(rsp); - printk(KERN_ERR "INFO: RCU detected CPU %d stall (t=%lu jiffies)\n", - smp_processor_id(), jiffies - rsp->gp_start); + printk(KERN_ERR "INFO: %s detected stall on CPU %d (t=%lu jiffies)\n", + rsp->name, smp_processor_id(), jiffies - rsp->gp_start); trigger_all_cpu_backtrace(); raw_spin_lock_irqsave(&rnp->lock, flags); @@ -515,6 +530,8 @@ static void check_cpu_stall(struct rcu_state *rsp, struct rcu_data *rdp) long delta; struct rcu_node *rnp; + if (rcu_cpu_stall_panicking) + return; delta = jiffies - rsp->jiffies_stall; rnp = rdp->mynode; if ((rnp->qsmask & rdp->grpmask) && delta >= 0) { @@ -529,6 +546,21 @@ static void check_cpu_stall(struct rcu_state *rsp, struct rcu_data *rdp) } } +static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr) +{ + rcu_cpu_stall_panicking = 1; + return NOTIFY_DONE; +} + +static struct notifier_block rcu_panic_block = { + .notifier_call = rcu_panic, +}; + +static void __init check_cpu_stall_init(void) +{ + atomic_notifier_chain_register(&panic_notifier_list, &rcu_panic_block); +} + #else /* #ifdef CONFIG_RCU_CPU_STALL_DETECTOR */ static void record_gp_stall_check_time(struct rcu_state *rsp) @@ -539,6 +571,10 @@ static void check_cpu_stall(struct rcu_state *rsp, struct rcu_data *rdp) { } +static void __init check_cpu_stall_init(void) +{ +} + #endif /* #else #ifdef CONFIG_RCU_CPU_STALL_DETECTOR */ /* @@ -1125,8 +1161,6 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp) */ void rcu_check_callbacks(int cpu, int user) { - if (!rcu_pending(cpu)) - return; /* if nothing for RCU to do. */ if (user || (idle_cpu(cpu) && rcu_scheduler_active && !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) { @@ -1158,7 +1192,8 @@ void rcu_check_callbacks(int cpu, int user) rcu_bh_qs(cpu); } rcu_preempt_check_callbacks(cpu); - raise_softirq(RCU_SOFTIRQ); + if (rcu_pending(cpu)) + raise_softirq(RCU_SOFTIRQ); } #ifdef CONFIG_SMP @@ -1236,11 +1271,11 @@ static void force_quiescent_state(struct rcu_state *rsp, int relaxed) break; /* grace period idle or initializing, ignore. */ case RCU_SAVE_DYNTICK: - - raw_spin_unlock(&rnp->lock); /* irqs remain disabled */ if (RCU_SIGNAL_INIT != RCU_SAVE_DYNTICK) break; /* So gcc recognizes the dead code. 
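A side note on the stall-detector hunks above: rcu_panic() plus check_cpu_stall_init() hook panic_notifier_list so that stall warnings stop being printed once the machine is already panicking. The same pattern works for any subsystem that should go quiet on panic; a sketch with invented demo_ names, not part of the patch:

#include <linux/kernel.h>
#include <linux/notifier.h>

static int demo_silenced;	/* tested by the subsystem's noisy code path */

static int demo_panic_notify(struct notifier_block *nb, unsigned long ev, void *ptr)
{
	demo_silenced = 1;
	return NOTIFY_DONE;
}

static struct notifier_block demo_panic_nb = {
	.notifier_call = demo_panic_notify,
};

/* Called once from the subsystem's init code. */
static void demo_register_panic_hook(void)
{
	atomic_notifier_chain_register(&panic_notifier_list, &demo_panic_nb);
}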
*/ + raw_spin_unlock(&rnp->lock); /* irqs remain disabled */ + /* Record dyntick-idle state. */ force_qs_rnp(rsp, dyntick_save_progress_counter); raw_spin_lock(&rnp->lock); /* irqs already disabled */ @@ -1449,11 +1484,13 @@ void synchronize_sched(void) if (rcu_blocking_is_gp()) return; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu_sched(&rcu.head, wakeme_after_rcu); /* Wait for it. */ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(synchronize_sched); @@ -1473,11 +1510,13 @@ void synchronize_rcu_bh(void) if (rcu_blocking_is_gp()) return; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu_bh(&rcu.head, wakeme_after_rcu); /* Wait for it. */ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(synchronize_rcu_bh); @@ -1498,8 +1537,20 @@ static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp) check_cpu_stall(rsp, rdp); /* Is the RCU core waiting for a quiescent state from this CPU? */ - if (rdp->qs_pending) { + if (rdp->qs_pending && !rdp->passed_quiesc) { + + /* + * If force_quiescent_state() coming soon and this CPU + * needs a quiescent state, and this is either RCU-sched + * or RCU-bh, force a local reschedule. + */ rdp->n_rp_qs_pending++; + if (!rdp->preemptable && + ULONG_CMP_LT(ACCESS_ONCE(rsp->jiffies_force_qs) - 1, + jiffies)) + set_need_resched(); + } else if (rdp->qs_pending && rdp->passed_quiesc) { + rdp->n_rp_report_qs++; return 1; } @@ -1767,6 +1818,21 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self, } /* + * This function is invoked towards the end of the scheduler's initialization + * process. Before this is called, the idle task might contain + * RCU read-side critical sections (during which time, this idle + * task is booting the system). After this function is called, the + * idle tasks are prohibited from containing RCU read-side critical + * sections. This function also enables RCU lockdep checking. + */ +void rcu_scheduler_starting(void) +{ + WARN_ON(num_online_cpus() != 1); + WARN_ON(nr_context_switches() > 0); + rcu_scheduler_active = 1; +} + +/* * Compute the per-level fanout, either using the exact fanout specified * or balancing the tree, depending on CONFIG_RCU_FANOUT_EXACT. 
*/ @@ -1849,6 +1915,14 @@ static void __init rcu_init_one(struct rcu_state *rsp) INIT_LIST_HEAD(&rnp->blocked_tasks[3]); } } + + rnp = rsp->level[NUM_RCU_LVLS - 1]; + for_each_possible_cpu(i) { + while (i > rnp->grphi) + rnp++; + rsp->rda[i]->mynode = rnp; + rcu_boot_init_percpu_data(i, rsp); + } } /* @@ -1859,19 +1933,11 @@ static void __init rcu_init_one(struct rcu_state *rsp) #define RCU_INIT_FLAVOR(rsp, rcu_data) \ do { \ int i; \ - int j; \ - struct rcu_node *rnp; \ \ - rcu_init_one(rsp); \ - rnp = (rsp)->level[NUM_RCU_LVLS - 1]; \ - j = 0; \ for_each_possible_cpu(i) { \ - if (i > rnp[j].grphi) \ - j++; \ - per_cpu(rcu_data, i).mynode = &rnp[j]; \ (rsp)->rda[i] = &per_cpu(rcu_data, i); \ - rcu_boot_init_percpu_data(i, rsp); \ } \ + rcu_init_one(rsp); \ } while (0) void __init rcu_init(void) @@ -1879,12 +1945,6 @@ void __init rcu_init(void) int cpu; rcu_bootup_announce(); -#ifdef CONFIG_RCU_CPU_STALL_DETECTOR - printk(KERN_INFO "RCU-based detection of stalled CPUs is enabled.\n"); -#endif /* #ifdef CONFIG_RCU_CPU_STALL_DETECTOR */ -#if NUM_RCU_LVL_4 != 0 - printk(KERN_INFO "Experimental four-level hierarchy is enabled.\n"); -#endif /* #if NUM_RCU_LVL_4 != 0 */ RCU_INIT_FLAVOR(&rcu_sched_state, rcu_sched_data); RCU_INIT_FLAVOR(&rcu_bh_state, rcu_bh_data); __rcu_init_preempt(); @@ -1898,6 +1958,7 @@ void __init rcu_init(void) cpu_notifier(rcu_cpu_notify, 0); for_each_online_cpu(cpu) rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu); + check_cpu_stall_init(); } #include "rcutree_plugin.h" diff --git a/kernel/rcutree.h b/kernel/rcutree.h index 4a525a30e08e..14c040b18ed0 100644 --- a/kernel/rcutree.h +++ b/kernel/rcutree.h @@ -223,6 +223,7 @@ struct rcu_data { /* 5) __rcu_pending() statistics. */ unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */ unsigned long n_rp_qs_pending; + unsigned long n_rp_report_qs; unsigned long n_rp_cb_ready; unsigned long n_rp_cpu_needs_gp; unsigned long n_rp_gp_completed; @@ -326,6 +327,7 @@ struct rcu_state { unsigned long jiffies_stall; /* Time at which to check */ /* for CPU stalls. */ #endif /* #ifdef CONFIG_RCU_CPU_STALL_DETECTOR */ + char *name; /* Name of structure. */ }; /* Return values for rcu_preempt_offline_tasks(). */ diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h index 79b53bda8943..0e4f420245d9 100644 --- a/kernel/rcutree_plugin.h +++ b/kernel/rcutree_plugin.h @@ -26,6 +26,45 @@ #include <linux/delay.h> +/* + * Check the RCU kernel configuration parameters and print informative + * messages about anything out of the ordinary. If you like #ifdef, you + * will love this function. 
+ */ +static void __init rcu_bootup_announce_oddness(void) +{ +#ifdef CONFIG_RCU_TRACE + printk(KERN_INFO "\tRCU debugfs-based tracing is enabled.\n"); +#endif +#if (defined(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 64) || (!defined(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 32) + printk(KERN_INFO "\tCONFIG_RCU_FANOUT set to non-default value of %d\n", + CONFIG_RCU_FANOUT); +#endif +#ifdef CONFIG_RCU_FANOUT_EXACT + printk(KERN_INFO "\tHierarchical RCU autobalancing is disabled.\n"); +#endif +#ifdef CONFIG_RCU_FAST_NO_HZ + printk(KERN_INFO + "\tRCU dyntick-idle grace-period acceleration is enabled.\n"); +#endif +#ifdef CONFIG_PROVE_RCU + printk(KERN_INFO "\tRCU lockdep checking is enabled.\n"); +#endif +#ifdef CONFIG_RCU_TORTURE_TEST_RUNNABLE + printk(KERN_INFO "\tRCU torture testing starts during boot.\n"); +#endif +#ifndef CONFIG_RCU_CPU_STALL_DETECTOR + printk(KERN_INFO + "\tRCU-based detection of stalled CPUs is disabled.\n"); +#endif +#ifndef CONFIG_RCU_CPU_STALL_VERBOSE + printk(KERN_INFO "\tVerbose stalled-CPUs detection is disabled.\n"); +#endif +#if NUM_RCU_LVL_4 != 0 + printk(KERN_INFO "\tExperimental four-level hierarchy is enabled.\n"); +#endif +} + #ifdef CONFIG_TREE_PREEMPT_RCU struct rcu_state rcu_preempt_state = RCU_STATE_INITIALIZER(rcu_preempt_state); @@ -38,8 +77,8 @@ static int rcu_preempted_readers_exp(struct rcu_node *rnp); */ static void __init rcu_bootup_announce(void) { - printk(KERN_INFO - "Experimental preemptable hierarchical RCU implementation.\n"); + printk(KERN_INFO "Preemptable hierarchical RCU implementation.\n"); + rcu_bootup_announce_oddness(); } /* @@ -75,13 +114,19 @@ EXPORT_SYMBOL_GPL(rcu_force_quiescent_state); * that this just means that the task currently running on the CPU is * not in a quiescent state. There might be any number of tasks blocked * while in an RCU read-side critical section. + * + * Unlike the other rcu_*_qs() functions, callers to this function + * must disable irqs in order to protect the assignment to + * ->rcu_read_unlock_special. */ static void rcu_preempt_qs(int cpu) { struct rcu_data *rdp = &per_cpu(rcu_preempt_data, cpu); + rdp->passed_quiesc_completed = rdp->gpnum - 1; barrier(); rdp->passed_quiesc = 1; + current->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS; } /* @@ -144,9 +189,8 @@ static void rcu_preempt_note_context_switch(int cpu) * grace period, then the fact that the task has been enqueued * means that we continue to block the current grace period. */ - rcu_preempt_qs(cpu); local_irq_save(flags); - t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS; + rcu_preempt_qs(cpu); local_irq_restore(flags); } @@ -236,7 +280,6 @@ static void rcu_read_unlock_special(struct task_struct *t) */ special = t->rcu_read_unlock_special; if (special & RCU_READ_UNLOCK_NEED_QS) { - t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS; rcu_preempt_qs(smp_processor_id()); } @@ -473,7 +516,6 @@ static void rcu_preempt_check_callbacks(int cpu) struct task_struct *t = current; if (t->rcu_read_lock_nesting == 0) { - t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS; rcu_preempt_qs(cpu); return; } @@ -515,11 +557,13 @@ void synchronize_rcu(void) if (!rcu_scheduler_active) return; + init_rcu_head_on_stack(&rcu.head); init_completion(&rcu.completion); /* Will wake me after RCU finished. */ call_rcu(&rcu.head, wakeme_after_rcu); /* Wait for it. 
*/ wait_for_completion(&rcu.completion); + destroy_rcu_head_on_stack(&rcu.head); } EXPORT_SYMBOL_GPL(synchronize_rcu); @@ -754,6 +798,7 @@ void exit_rcu(void) static void __init rcu_bootup_announce(void) { printk(KERN_INFO "Hierarchical RCU implementation.\n"); + rcu_bootup_announce_oddness(); } /* @@ -1008,6 +1053,8 @@ static DEFINE_PER_CPU(unsigned long, rcu_dyntick_holdoff); int rcu_needs_cpu(int cpu) { int c = 0; + int snap; + int snap_nmi; int thatcpu; /* Check for being in the holdoff period. */ @@ -1015,12 +1062,18 @@ int rcu_needs_cpu(int cpu) return rcu_needs_cpu_quick_check(cpu); /* Don't bother unless we are the last non-dyntick-idle CPU. */ - for_each_cpu_not(thatcpu, nohz_cpu_mask) - if (thatcpu != cpu) { + for_each_online_cpu(thatcpu) { + if (thatcpu == cpu) + continue; + snap = per_cpu(rcu_dynticks, thatcpu).dynticks; + snap_nmi = per_cpu(rcu_dynticks, thatcpu).dynticks_nmi; + smp_mb(); /* Order sampling of snap with end of grace period. */ + if (((snap & 0x1) != 0) || ((snap_nmi & 0x1) != 0)) { per_cpu(rcu_dyntick_drain, cpu) = 0; per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1; return rcu_needs_cpu_quick_check(cpu); } + } /* Check and update the rcu_dyntick_drain sequencing. */ if (per_cpu(rcu_dyntick_drain, cpu) <= 0) { diff --git a/kernel/rcutree_trace.c b/kernel/rcutree_trace.c index d45db2e35d27..36c95b45738e 100644 --- a/kernel/rcutree_trace.c +++ b/kernel/rcutree_trace.c @@ -241,11 +241,13 @@ static const struct file_operations rcugp_fops = { static void print_one_rcu_pending(struct seq_file *m, struct rcu_data *rdp) { seq_printf(m, "%3d%cnp=%ld " - "qsp=%ld cbr=%ld cng=%ld gpc=%ld gps=%ld nf=%ld nn=%ld\n", + "qsp=%ld rpq=%ld cbr=%ld cng=%ld " + "gpc=%ld gps=%ld nf=%ld nn=%ld\n", rdp->cpu, cpu_is_offline(rdp->cpu) ? '!' : ' ', rdp->n_rcu_pending, rdp->n_rp_qs_pending, + rdp->n_rp_report_qs, rdp->n_rp_cb_ready, rdp->n_rp_cpu_needs_gp, rdp->n_rp_gp_completed, diff --git a/kernel/sched.c b/kernel/sched.c index 3c2a54f70ffe..1d93cd0ae4d3 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -55,9 +55,9 @@ #include <linux/cpu.h> #include <linux/cpuset.h> #include <linux/percpu.h> -#include <linux/kthread.h> #include <linux/proc_fs.h> #include <linux/seq_file.h> +#include <linux/stop_machine.h> #include <linux/sysctl.h> #include <linux/syscalls.h> #include <linux/times.h> @@ -503,8 +503,11 @@ struct rq { #define CPU_LOAD_IDX_MAX 5 unsigned long cpu_load[CPU_LOAD_IDX_MAX]; #ifdef CONFIG_NO_HZ + u64 nohz_stamp; unsigned char in_nohz_recently; #endif + unsigned int skip_clock_update; + /* capture load from *all* tasks on this cpu: */ struct load_weight load; unsigned long nr_load_updates; @@ -546,15 +549,13 @@ struct rq { int post_schedule; int active_balance; int push_cpu; + struct cpu_stop_work active_balance_work; /* cpu of this runqueue: */ int cpu; int online; unsigned long avg_load_per_task; - struct task_struct *migration_thread; - struct list_head migration_queue; - u64 rt_avg; u64 age_stamp; u64 idle_stamp; @@ -602,6 +603,13 @@ static inline void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags) { rq->curr->sched_class->check_preempt_curr(rq, p, flags); + + /* + * A queue event has occurred, and we're going to schedule. In + * this case, we can save a useless back to back clock update. 
+ */ + if (test_tsk_need_resched(p)) + rq->skip_clock_update = 1; } static inline int cpu_of(struct rq *rq) @@ -636,7 +644,8 @@ static inline int cpu_of(struct rq *rq) inline void update_rq_clock(struct rq *rq) { - rq->clock = sched_clock_cpu(cpu_of(rq)); + if (!rq->skip_clock_update) + rq->clock = sched_clock_cpu(cpu_of(rq)); } /* @@ -914,16 +923,12 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev) #endif /* __ARCH_WANT_UNLOCKED_CTXSW */ /* - * Check whether the task is waking, we use this to synchronize against - * ttwu() so that task_cpu() reports a stable number. - * - * We need to make an exception for PF_STARTING tasks because the fork - * path might require task_rq_lock() to work, eg. it can call - * set_cpus_allowed_ptr() from the cpuset clone_ns code. + * Check whether the task is waking, we use this to synchronize ->cpus_allowed + * against ttwu(). */ static inline int task_is_waking(struct task_struct *p) { - return unlikely((p->state == TASK_WAKING) && !(p->flags & PF_STARTING)); + return unlikely(p->state == TASK_WAKING); } /* @@ -936,11 +941,9 @@ static inline struct rq *__task_rq_lock(struct task_struct *p) struct rq *rq; for (;;) { - while (task_is_waking(p)) - cpu_relax(); rq = task_rq(p); raw_spin_lock(&rq->lock); - if (likely(rq == task_rq(p) && !task_is_waking(p))) + if (likely(rq == task_rq(p))) return rq; raw_spin_unlock(&rq->lock); } @@ -957,12 +960,10 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags) struct rq *rq; for (;;) { - while (task_is_waking(p)) - cpu_relax(); local_irq_save(*flags); rq = task_rq(p); raw_spin_lock(&rq->lock); - if (likely(rq == task_rq(p) && !task_is_waking(p))) + if (likely(rq == task_rq(p))) return rq; raw_spin_unlock_irqrestore(&rq->lock, *flags); } @@ -1239,6 +1240,17 @@ void wake_up_idle_cpu(int cpu) if (!tsk_is_polling(rq->idle)) smp_send_reschedule(cpu); } + +int nohz_ratelimit(int cpu) +{ + struct rq *rq = cpu_rq(cpu); + u64 diff = rq->clock - rq->nohz_stamp; + + rq->nohz_stamp = rq->clock; + + return diff < (NSEC_PER_SEC / HZ) >> 1; +} + #endif /* CONFIG_NO_HZ */ static u64 sched_avg_period(void) @@ -1781,8 +1793,6 @@ static void double_rq_lock(struct rq *rq1, struct rq *rq2) raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING); } } - update_rq_clock(rq1); - update_rq_clock(rq2); } /* @@ -1813,7 +1823,7 @@ static void cfs_rq_set_shares(struct cfs_rq *cfs_rq, unsigned long shares) } #endif -static void calc_load_account_active(struct rq *this_rq); +static void calc_load_account_idle(struct rq *this_rq); static void update_sysctl(void); static int get_update_sysctl_factor(void); @@ -1870,62 +1880,43 @@ static void set_load_weight(struct task_struct *p) p->se.load.inv_weight = prio_to_wmult[p->static_prio - MAX_RT_PRIO]; } -static void update_avg(u64 *avg, u64 sample) -{ - s64 diff = sample - *avg; - *avg += diff >> 3; -} - -static void -enqueue_task(struct rq *rq, struct task_struct *p, int wakeup, bool head) +static void enqueue_task(struct rq *rq, struct task_struct *p, int flags) { - if (wakeup) - p->se.start_runtime = p->se.sum_exec_runtime; - + update_rq_clock(rq); sched_info_queued(p); - p->sched_class->enqueue_task(rq, p, wakeup, head); + p->sched_class->enqueue_task(rq, p, flags); p->se.on_rq = 1; } -static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep) +static void dequeue_task(struct rq *rq, struct task_struct *p, int flags) { - if (sleep) { - if (p->se.last_wakeup) { - update_avg(&p->se.avg_overlap, - p->se.sum_exec_runtime - 
p->se.last_wakeup); - p->se.last_wakeup = 0; - } else { - update_avg(&p->se.avg_wakeup, - sysctl_sched_wakeup_granularity); - } - } - + update_rq_clock(rq); sched_info_dequeued(p); - p->sched_class->dequeue_task(rq, p, sleep); + p->sched_class->dequeue_task(rq, p, flags); p->se.on_rq = 0; } /* * activate_task - move a task to the runqueue. */ -static void activate_task(struct rq *rq, struct task_struct *p, int wakeup) +static void activate_task(struct rq *rq, struct task_struct *p, int flags) { if (task_contributes_to_load(p)) rq->nr_uninterruptible--; - enqueue_task(rq, p, wakeup, false); + enqueue_task(rq, p, flags); inc_nr_running(rq); } /* * deactivate_task - remove a task from the runqueue. */ -static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep) +static void deactivate_task(struct rq *rq, struct task_struct *p, int flags) { if (task_contributes_to_load(p)) rq->nr_uninterruptible++; - dequeue_task(rq, p, sleep); + dequeue_task(rq, p, flags); dec_nr_running(rq); } @@ -2054,21 +2045,18 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu) __set_task_cpu(p, new_cpu); } -struct migration_req { - struct list_head list; - +struct migration_arg { struct task_struct *task; int dest_cpu; - - struct completion done; }; +static int migration_cpu_stop(void *data); + /* * The task's runqueue lock must be held. * Returns true if you have to wait for migration thread. */ -static int -migrate_task(struct task_struct *p, int dest_cpu, struct migration_req *req) +static bool migrate_task(struct task_struct *p, int dest_cpu) { struct rq *rq = task_rq(p); @@ -2076,58 +2064,7 @@ migrate_task(struct task_struct *p, int dest_cpu, struct migration_req *req) * If the task is not on a runqueue (and not running), then * the next wake-up will properly place the task. */ - if (!p->se.on_rq && !task_running(rq, p)) - return 0; - - init_completion(&req->done); - req->task = p; - req->dest_cpu = dest_cpu; - list_add(&req->list, &rq->migration_queue); - - return 1; -} - -/* - * wait_task_context_switch - wait for a thread to complete at least one - * context switch. - * - * @p must not be current. - */ -void wait_task_context_switch(struct task_struct *p) -{ - unsigned long nvcsw, nivcsw, flags; - int running; - struct rq *rq; - - nvcsw = p->nvcsw; - nivcsw = p->nivcsw; - for (;;) { - /* - * The runqueue is assigned before the actual context - * switch. We need to take the runqueue lock. - * - * We could check initially without the lock but it is - * very likely that we need to take the lock in every - * iteration. - */ - rq = task_rq_lock(p, &flags); - running = task_running(rq, p); - task_rq_unlock(rq, &flags); - - if (likely(!running)) - break; - /* - * The switch count is incremented before the actual - * context switch. We thus wait for two switches to be - * sure at least one completed. - */ - if ((p->nvcsw - nvcsw) > 1) - break; - if ((p->nivcsw - nivcsw) > 1) - break; - - cpu_relax(); - } + return p->se.on_rq || task_running(rq, p); } /* @@ -2185,7 +2122,7 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state) * just go back and repeat. */ rq = task_rq_lock(p, &flags); - trace_sched_wait_task(rq, p); + trace_sched_wait_task(p); running = task_running(rq, p); on_rq = p->se.on_rq; ncsw = 0; @@ -2283,6 +2220,9 @@ void task_oncpu_function_call(struct task_struct *p, } #ifdef CONFIG_SMP +/* + * ->cpus_allowed is protected by either TASK_WAKING or rq->lock held. 
+ */ static int select_fallback_rq(int cpu, struct task_struct *p) { int dest_cpu; @@ -2299,12 +2239,8 @@ static int select_fallback_rq(int cpu, struct task_struct *p) return dest_cpu; /* No more Mr. Nice Guy. */ - if (dest_cpu >= nr_cpu_ids) { - rcu_read_lock(); - cpuset_cpus_allowed_locked(p, &p->cpus_allowed); - rcu_read_unlock(); - dest_cpu = cpumask_any_and(cpu_active_mask, &p->cpus_allowed); - + if (unlikely(dest_cpu >= nr_cpu_ids)) { + dest_cpu = cpuset_cpus_allowed_fallback(p); /* * Don't tell them about moving exiting tasks or * kernel threads (both mm NULL), since they never @@ -2321,17 +2257,12 @@ static int select_fallback_rq(int cpu, struct task_struct *p) } /* - * Gets called from 3 sites (exec, fork, wakeup), since it is called without - * holding rq->lock we need to ensure ->cpus_allowed is stable, this is done - * by: - * - * exec: is unstable, retry loop - * fork & wake-up: serialize ->cpus_allowed against TASK_WAKING + * The caller (fork, wakeup) owns TASK_WAKING, ->cpus_allowed is stable. */ static inline -int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags) +int select_task_rq(struct rq *rq, struct task_struct *p, int sd_flags, int wake_flags) { - int cpu = p->sched_class->select_task_rq(p, sd_flags, wake_flags); + int cpu = p->sched_class->select_task_rq(rq, p, sd_flags, wake_flags); /* * In order not to call set_task_cpu() on a blocking task we need @@ -2349,6 +2280,12 @@ int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags) return cpu; } + +static void update_avg(u64 *avg, u64 sample) +{ + s64 diff = sample - *avg; + *avg += diff >> 3; +} #endif /*** @@ -2370,16 +2307,13 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, { int cpu, orig_cpu, this_cpu, success = 0; unsigned long flags; + unsigned long en_flags = ENQUEUE_WAKEUP; struct rq *rq; - if (!sched_feat(SYNC_WAKEUPS)) - wake_flags &= ~WF_SYNC; - this_cpu = get_cpu(); smp_wmb(); rq = task_rq_lock(p, &flags); - update_rq_clock(rq); if (!(p->state & state)) goto out; @@ -2399,28 +2333,26 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, * * First fix up the nr_uninterruptible count: */ - if (task_contributes_to_load(p)) - rq->nr_uninterruptible--; + if (task_contributes_to_load(p)) { + if (likely(cpu_online(orig_cpu))) + rq->nr_uninterruptible--; + else + this_rq()->nr_uninterruptible--; + } p->state = TASK_WAKING; - if (p->sched_class->task_waking) + if (p->sched_class->task_waking) { p->sched_class->task_waking(rq, p); + en_flags |= ENQUEUE_WAKING; + } - __task_rq_unlock(rq); - - cpu = select_task_rq(p, SD_BALANCE_WAKE, wake_flags); - if (cpu != orig_cpu) { - /* - * Since we migrate the task without holding any rq->lock, - * we need to be careful with task_rq_lock(), since that - * might end up locking an invalid rq. 
- */ + cpu = select_task_rq(rq, p, SD_BALANCE_WAKE, wake_flags); + if (cpu != orig_cpu) set_task_cpu(p, cpu); - } + __task_rq_unlock(rq); rq = cpu_rq(cpu); raw_spin_lock(&rq->lock); - update_rq_clock(rq); /* * We migrated the task without holding either rq->lock, however @@ -2448,36 +2380,20 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, out_activate: #endif /* CONFIG_SMP */ - schedstat_inc(p, se.nr_wakeups); + schedstat_inc(p, se.statistics.nr_wakeups); if (wake_flags & WF_SYNC) - schedstat_inc(p, se.nr_wakeups_sync); + schedstat_inc(p, se.statistics.nr_wakeups_sync); if (orig_cpu != cpu) - schedstat_inc(p, se.nr_wakeups_migrate); + schedstat_inc(p, se.statistics.nr_wakeups_migrate); if (cpu == this_cpu) - schedstat_inc(p, se.nr_wakeups_local); + schedstat_inc(p, se.statistics.nr_wakeups_local); else - schedstat_inc(p, se.nr_wakeups_remote); - activate_task(rq, p, 1); + schedstat_inc(p, se.statistics.nr_wakeups_remote); + activate_task(rq, p, en_flags); success = 1; - /* - * Only attribute actual wakeups done by this task. - */ - if (!in_interrupt()) { - struct sched_entity *se = ¤t->se; - u64 sample = se->sum_exec_runtime; - - if (se->last_wakeup) - sample -= se->last_wakeup; - else - sample -= se->start_runtime; - update_avg(&se->avg_wakeup, sample); - - se->last_wakeup = se->sum_exec_runtime; - } - out_running: - trace_sched_wakeup(rq, p, success); + trace_sched_wakeup(p, success); check_preempt_curr(rq, p, wake_flags); p->state = TASK_RUNNING; @@ -2537,42 +2453,9 @@ static void __sched_fork(struct task_struct *p) p->se.sum_exec_runtime = 0; p->se.prev_sum_exec_runtime = 0; p->se.nr_migrations = 0; - p->se.last_wakeup = 0; - p->se.avg_overlap = 0; - p->se.start_runtime = 0; - p->se.avg_wakeup = sysctl_sched_wakeup_granularity; #ifdef CONFIG_SCHEDSTATS - p->se.wait_start = 0; - p->se.wait_max = 0; - p->se.wait_count = 0; - p->se.wait_sum = 0; - - p->se.sleep_start = 0; - p->se.sleep_max = 0; - p->se.sum_sleep_runtime = 0; - - p->se.block_start = 0; - p->se.block_max = 0; - p->se.exec_max = 0; - p->se.slice_max = 0; - - p->se.nr_migrations_cold = 0; - p->se.nr_failed_migrations_affine = 0; - p->se.nr_failed_migrations_running = 0; - p->se.nr_failed_migrations_hot = 0; - p->se.nr_forced_migrations = 0; - - p->se.nr_wakeups = 0; - p->se.nr_wakeups_sync = 0; - p->se.nr_wakeups_migrate = 0; - p->se.nr_wakeups_local = 0; - p->se.nr_wakeups_remote = 0; - p->se.nr_wakeups_affine = 0; - p->se.nr_wakeups_affine_attempts = 0; - p->se.nr_wakeups_passive = 0; - p->se.nr_wakeups_idle = 0; - + memset(&p->se.statistics, 0, sizeof(p->se.statistics)); #endif INIT_LIST_HEAD(&p->rt.run_list); @@ -2593,11 +2476,11 @@ void sched_fork(struct task_struct *p, int clone_flags) __sched_fork(p); /* - * We mark the process as waking here. This guarantees that + * We mark the process as running here. This guarantees that * nobody will actually run it, and a signal or other external * event cannot wake it up and insert it on the runqueue either. */ - p->state = TASK_WAKING; + p->state = TASK_RUNNING; /* * Revert to default priority/policy on fork if requested. 
@@ -2664,31 +2547,27 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags) int cpu __maybe_unused = get_cpu(); #ifdef CONFIG_SMP + rq = task_rq_lock(p, &flags); + p->state = TASK_WAKING; + /* * Fork balancing, do it here and not earlier because: * - cpus_allowed can change in the fork path * - any previously selected cpu might disappear through hotplug * - * We still have TASK_WAKING but PF_STARTING is gone now, meaning - * ->cpus_allowed is stable, we have preemption disabled, meaning - * cpu_online_mask is stable. + * We set TASK_WAKING so that select_task_rq() can drop rq->lock + * without people poking at ->cpus_allowed. */ - cpu = select_task_rq(p, SD_BALANCE_FORK, 0); + cpu = select_task_rq(rq, p, SD_BALANCE_FORK, 0); set_task_cpu(p, cpu); -#endif - - /* - * Since the task is not on the rq and we still have TASK_WAKING set - * nobody else will migrate this task. - */ - rq = cpu_rq(cpu); - raw_spin_lock_irqsave(&rq->lock, flags); - BUG_ON(p->state != TASK_WAKING); p->state = TASK_RUNNING; - update_rq_clock(rq); + task_rq_unlock(rq, &flags); +#endif + + rq = task_rq_lock(p, &flags); activate_task(rq, p, 0); - trace_sched_wakeup_new(rq, p, 1); + trace_sched_wakeup_new(p, 1); check_preempt_curr(rq, p, WF_FORK); #ifdef CONFIG_SMP if (p->sched_class->task_woken) @@ -2908,7 +2787,7 @@ context_switch(struct rq *rq, struct task_struct *prev, struct mm_struct *mm, *oldmm; prepare_task_switch(rq, prev, next); - trace_sched_switch(rq, prev, next); + trace_sched_switch(prev, next); mm = next->mm; oldmm = prev->active_mm; /* @@ -3025,6 +2904,61 @@ static unsigned long calc_load_update; unsigned long avenrun[3]; EXPORT_SYMBOL(avenrun); +static long calc_load_fold_active(struct rq *this_rq) +{ + long nr_active, delta = 0; + + nr_active = this_rq->nr_running; + nr_active += (long) this_rq->nr_uninterruptible; + + if (nr_active != this_rq->calc_load_active) { + delta = nr_active - this_rq->calc_load_active; + this_rq->calc_load_active = nr_active; + } + + return delta; +} + +#ifdef CONFIG_NO_HZ +/* + * For NO_HZ we delay the active fold to the next LOAD_FREQ update. + * + * When making the ILB scale, we should try to pull this in as well. + */ +static atomic_long_t calc_load_tasks_idle; + +static void calc_load_account_idle(struct rq *this_rq) +{ + long delta; + + delta = calc_load_fold_active(this_rq); + if (delta) + atomic_long_add(delta, &calc_load_tasks_idle); +} + +static long calc_load_fold_idle(void) +{ + long delta = 0; + + /* + * Its got a race, we don't care... + */ + if (atomic_long_read(&calc_load_tasks_idle)) + delta = atomic_long_xchg(&calc_load_tasks_idle, 0); + + return delta; +} +#else +static void calc_load_account_idle(struct rq *this_rq) +{ +} + +static inline long calc_load_fold_idle(void) +{ + return 0; +} +#endif + /** * get_avenrun - get the load average array * @loads: pointer to dest load array @@ -3071,20 +3005,22 @@ void calc_global_load(void) } /* - * Either called from update_cpu_load() or from a cpu going idle + * Called from update_cpu_load() to periodically update this CPU's + * active count. 
*/ static void calc_load_account_active(struct rq *this_rq) { - long nr_active, delta; + long delta; - nr_active = this_rq->nr_running; - nr_active += (long) this_rq->nr_uninterruptible; + if (time_before(jiffies, this_rq->calc_load_update)) + return; - if (nr_active != this_rq->calc_load_active) { - delta = nr_active - this_rq->calc_load_active; - this_rq->calc_load_active = nr_active; + delta = calc_load_fold_active(this_rq); + delta += calc_load_fold_idle(); + if (delta) atomic_long_add(delta, &calc_load_tasks); - } + + this_rq->calc_load_update += LOAD_FREQ; } /* @@ -3116,10 +3052,7 @@ static void update_cpu_load(struct rq *this_rq) this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) >> i; } - if (time_after_eq(jiffies, this_rq->calc_load_update)) { - this_rq->calc_load_update += LOAD_FREQ; - calc_load_account_active(this_rq); - } + calc_load_account_active(this_rq); } #ifdef CONFIG_SMP @@ -3131,44 +3064,27 @@ static void update_cpu_load(struct rq *this_rq) void sched_exec(void) { struct task_struct *p = current; - struct migration_req req; - int dest_cpu, this_cpu; unsigned long flags; struct rq *rq; - -again: - this_cpu = get_cpu(); - dest_cpu = select_task_rq(p, SD_BALANCE_EXEC, 0); - if (dest_cpu == this_cpu) { - put_cpu(); - return; - } + int dest_cpu; rq = task_rq_lock(p, &flags); - put_cpu(); + dest_cpu = p->sched_class->select_task_rq(rq, p, SD_BALANCE_EXEC, 0); + if (dest_cpu == smp_processor_id()) + goto unlock; /* * select_task_rq() can race against ->cpus_allowed */ - if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed) - || unlikely(!cpu_active(dest_cpu))) { - task_rq_unlock(rq, &flags); - goto again; - } + if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed) && + likely(cpu_active(dest_cpu)) && migrate_task(p, dest_cpu)) { + struct migration_arg arg = { p, dest_cpu }; - /* force the process onto the specified CPU */ - if (migrate_task(p, dest_cpu, &req)) { - /* Need to wait for migration thread (might exit: take ref). */ - struct task_struct *mt = rq->migration_thread; - - get_task_struct(mt); task_rq_unlock(rq, &flags); - wake_up_process(mt); - put_task_struct(mt); - wait_for_completion(&req.done); - + stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg); return; } +unlock: task_rq_unlock(rq, &flags); } @@ -3640,23 +3556,9 @@ static inline void schedule_debug(struct task_struct *prev) static void put_prev_task(struct rq *rq, struct task_struct *prev) { - if (prev->state == TASK_RUNNING) { - u64 runtime = prev->se.sum_exec_runtime; - - runtime -= prev->se.prev_sum_exec_runtime; - runtime = min_t(u64, runtime, 2*sysctl_sched_migration_cost); - - /* - * In order to avoid avg_overlap growing stale when we are - * indeed overlapping and hence not getting put to sleep, grow - * the avg_overlap on preemption. - * - * We use the average preemption runtime because that - * correlates to the amount of cache footprint a task can - * build up. 
- */ - update_avg(&prev->se.avg_overlap, runtime); - } + if (prev->se.on_rq) + update_rq_clock(rq); + rq->skip_clock_update = 0; prev->sched_class->put_prev_task(rq, prev); } @@ -3706,7 +3608,7 @@ need_resched: preempt_disable(); cpu = smp_processor_id(); rq = cpu_rq(cpu); - rcu_sched_qs(cpu); + rcu_note_context_switch(cpu); prev = rq->curr; switch_count = &prev->nivcsw; @@ -3719,14 +3621,13 @@ need_resched_nonpreemptible: hrtick_clear(rq); raw_spin_lock_irq(&rq->lock); - update_rq_clock(rq); clear_tsk_need_resched(prev); if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) { if (unlikely(signal_pending_state(prev->state, prev))) prev->state = TASK_RUNNING; else - deactivate_task(rq, prev, 1); + deactivate_task(rq, prev, DEQUEUE_SLEEP); switch_count = &prev->nvcsw; } @@ -4049,8 +3950,7 @@ do_wait_for_common(struct completion *x, long timeout, int state) if (!x->done) { DECLARE_WAITQUEUE(wait, current); - wait.flags |= WQ_FLAG_EXCLUSIVE; - __add_wait_queue_tail(&x->wait, &wait); + __add_wait_queue_tail_exclusive(&x->wait, &wait); do { if (signal_pending_state(state, current)) { timeout = -ERESTARTSYS; @@ -4276,7 +4176,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio) BUG_ON(prio < 0 || prio > MAX_PRIO); rq = task_rq_lock(p, &flags); - update_rq_clock(rq); oldprio = p->prio; prev_class = p->sched_class; @@ -4297,7 +4196,7 @@ void rt_mutex_setprio(struct task_struct *p, int prio) if (running) p->sched_class->set_curr_task(rq); if (on_rq) { - enqueue_task(rq, p, 0, oldprio < prio); + enqueue_task(rq, p, oldprio < prio ? ENQUEUE_HEAD : 0); check_class_changed(rq, p, prev_class, oldprio, running); } @@ -4319,7 +4218,6 @@ void set_user_nice(struct task_struct *p, long nice) * the task might be in the middle of scheduling on another CPU. */ rq = task_rq_lock(p, &flags); - update_rq_clock(rq); /* * The RT priorities are set via sched_setscheduler(), but we still * allow the 'normal' nice value to be set - but as expected @@ -4341,7 +4239,7 @@ void set_user_nice(struct task_struct *p, long nice) delta = p->prio - old_prio; if (on_rq) { - enqueue_task(rq, p, 0, false); + enqueue_task(rq, p, 0); /* * If the task increased its priority or is running and * lowered its priority, then reschedule its CPU: @@ -4602,7 +4500,6 @@ recheck: raw_spin_unlock_irqrestore(&p->pi_lock, flags); goto recheck; } - update_rq_clock(rq); on_rq = p->se.on_rq; running = task_current(rq, p); if (on_rq) @@ -5339,17 +5236,15 @@ static inline void sched_init_granularity(void) /* * This is how migration works: * - * 1) we queue a struct migration_req structure in the source CPU's - * runqueue and wake up that CPU's migration thread. - * 2) we down() the locked semaphore => thread blocks. - * 3) migration thread wakes up (implicitly it forces the migrated - * thread off the CPU) - * 4) it gets the migration request and checks whether the migrated - * task is still in the wrong runqueue. - * 5) if it's in the wrong runqueue then the migration thread removes + * 1) we invoke migration_cpu_stop() on the target CPU using + * stop_one_cpu(). + * 2) stopper starts to run (implicitly forcing the migrated thread + * off the CPU) + * 3) it checks whether the migrated task is still in the wrong runqueue. + * 4) if it's in the wrong runqueue then the migration thread removes * it and puts it into the right queue. - * 6) migration thread up()s the semaphore. - * 7) we wake up and the migration is done. + * 5) stopper completes and stop_one_cpu() returns and the migration + * is done. 
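The numbered list above describes the new mechanism: the per-CPU migration kthread and its migration_req queue are gone, and callers instead pack their arguments into a small struct and hand a callback to stop_one_cpu(), which runs it on the target CPU's stopper thread and returns only after it has finished (this is what sched_exec() and set_cpus_allowed_ptr() now do with migration_arg and migration_cpu_stop). A hedged sketch of that calling pattern, not part of the patch; the demo_ names are invented:

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/stop_machine.h>

struct demo_move_arg {
	struct task_struct	*task;
	int			dest_cpu;
};

static int demo_move_stop(void *data)
{
	struct demo_move_arg *arg = data;

	/*
	 * Runs on the stopper thread of the CPU passed to stop_one_cpu(),
	 * with that CPU's normal tasks preempted, which makes it a safe
	 * place to pull arg->task off the local runqueue and place it on
	 * arg->dest_cpu.
	 */
	pr_debug("moving pid %d to CPU %d\n",
		 task_pid_nr(arg->task), arg->dest_cpu);
	return 0;
}

static void demo_move_task(struct task_struct *p, int src_cpu, int dest_cpu)
{
	struct demo_move_arg arg = { .task = p, .dest_cpu = dest_cpu };

	stop_one_cpu(src_cpu, demo_move_stop, &arg);	/* blocks until demo_move_stop() returns */
}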
*/ /* @@ -5363,12 +5258,23 @@ static inline void sched_init_granularity(void) */ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask) { - struct migration_req req; unsigned long flags; struct rq *rq; + unsigned int dest_cpu; int ret = 0; + /* + * Serialize against TASK_WAKING so that ttwu() and wunt() can + * drop the rq->lock and still rely on ->cpus_allowed. + */ +again: + while (task_is_waking(p)) + cpu_relax(); rq = task_rq_lock(p, &flags); + if (task_is_waking(p)) { + task_rq_unlock(rq, &flags); + goto again; + } if (!cpumask_intersects(new_mask, cpu_active_mask)) { ret = -EINVAL; @@ -5392,15 +5298,12 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask) if (cpumask_test_cpu(task_cpu(p), new_mask)) goto out; - if (migrate_task(p, cpumask_any_and(cpu_active_mask, new_mask), &req)) { + dest_cpu = cpumask_any_and(cpu_active_mask, new_mask); + if (migrate_task(p, dest_cpu)) { + struct migration_arg arg = { p, dest_cpu }; /* Need help from migration thread: drop lock and wait. */ - struct task_struct *mt = rq->migration_thread; - - get_task_struct(mt); task_rq_unlock(rq, &flags); - wake_up_process(mt); - put_task_struct(mt); - wait_for_completion(&req.done); + stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg); tlb_migrate_finish(p->mm); return 0; } @@ -5458,98 +5361,49 @@ fail: return ret; } -#define RCU_MIGRATION_IDLE 0 -#define RCU_MIGRATION_NEED_QS 1 -#define RCU_MIGRATION_GOT_QS 2 -#define RCU_MIGRATION_MUST_SYNC 3 - /* - * migration_thread - this is a highprio system thread that performs - * thread migration by bumping thread off CPU then 'pushing' onto - * another runqueue. + * migration_cpu_stop - this will be executed by a highprio stopper thread + * and performs thread migration by bumping thread off CPU then + * 'pushing' onto another runqueue. */ -static int migration_thread(void *data) -{ - int badcpu; - int cpu = (long)data; - struct rq *rq; - - rq = cpu_rq(cpu); - BUG_ON(rq->migration_thread != current); - - set_current_state(TASK_INTERRUPTIBLE); - while (!kthread_should_stop()) { - struct migration_req *req; - struct list_head *head; - - raw_spin_lock_irq(&rq->lock); - - if (cpu_is_offline(cpu)) { - raw_spin_unlock_irq(&rq->lock); - break; - } - - if (rq->active_balance) { - active_load_balance(rq, cpu); - rq->active_balance = 0; - } - - head = &rq->migration_queue; - - if (list_empty(head)) { - raw_spin_unlock_irq(&rq->lock); - schedule(); - set_current_state(TASK_INTERRUPTIBLE); - continue; - } - req = list_entry(head->next, struct migration_req, list); - list_del_init(head->next); - - if (req->task != NULL) { - raw_spin_unlock(&rq->lock); - __migrate_task(req->task, cpu, req->dest_cpu); - } else if (likely(cpu == (badcpu = smp_processor_id()))) { - req->dest_cpu = RCU_MIGRATION_GOT_QS; - raw_spin_unlock(&rq->lock); - } else { - req->dest_cpu = RCU_MIGRATION_MUST_SYNC; - raw_spin_unlock(&rq->lock); - WARN_ONCE(1, "migration_thread() on CPU %d, expected %d\n", badcpu, cpu); - } - local_irq_enable(); - - complete(&req->done); - } - __set_current_state(TASK_RUNNING); - - return 0; -} - -#ifdef CONFIG_HOTPLUG_CPU - -static int __migrate_task_irq(struct task_struct *p, int src_cpu, int dest_cpu) +static int migration_cpu_stop(void *data) { - int ret; + struct migration_arg *arg = data; + /* + * The original target cpu might have gone down and we might + * be on another cpu but it doesn't matter. 
+ */ local_irq_disable(); - ret = __migrate_task(p, src_cpu, dest_cpu); + __migrate_task(arg->task, raw_smp_processor_id(), arg->dest_cpu); local_irq_enable(); - return ret; + return 0; } +#ifdef CONFIG_HOTPLUG_CPU /* * Figure out where task on dead CPU should go, use force if necessary. */ -static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p) +void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p) { - int dest_cpu; + struct rq *rq = cpu_rq(dead_cpu); + int needs_cpu, uninitialized_var(dest_cpu); + unsigned long flags; -again: - dest_cpu = select_fallback_rq(dead_cpu, p); + local_irq_save(flags); - /* It can have affinity changed while we were choosing. */ - if (unlikely(!__migrate_task_irq(p, dead_cpu, dest_cpu))) - goto again; + raw_spin_lock(&rq->lock); + needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING); + if (needs_cpu) + dest_cpu = select_fallback_rq(dead_cpu, p); + raw_spin_unlock(&rq->lock); + /* + * It can only fail if we race with set_cpus_allowed(), + * in the racer should migrate the task anyway. + */ + if (needs_cpu) + __migrate_task(p, dead_cpu, dest_cpu); + local_irq_restore(flags); } /* @@ -5613,7 +5467,6 @@ void sched_idle_next(void) __setscheduler(rq, p, SCHED_FIFO, MAX_RT_PRIO-1); - update_rq_clock(rq); activate_task(rq, p, 0); raw_spin_unlock_irqrestore(&rq->lock, flags); @@ -5668,7 +5521,6 @@ static void migrate_dead_tasks(unsigned int dead_cpu) for ( ; ; ) { if (!rq->nr_running) break; - update_rq_clock(rq); next = pick_next_task(rq); if (!next) break; @@ -5891,35 +5743,20 @@ static void set_rq_offline(struct rq *rq) static int __cpuinit migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu) { - struct task_struct *p; int cpu = (long)hcpu; unsigned long flags; - struct rq *rq; + struct rq *rq = cpu_rq(cpu); switch (action) { case CPU_UP_PREPARE: case CPU_UP_PREPARE_FROZEN: - p = kthread_create(migration_thread, hcpu, "migration/%d", cpu); - if (IS_ERR(p)) - return NOTIFY_BAD; - kthread_bind(p, cpu); - /* Must be high prio: stop_machine expects to yield to it. */ - rq = task_rq_lock(p, &flags); - __setscheduler(rq, p, SCHED_FIFO, MAX_RT_PRIO-1); - task_rq_unlock(rq, &flags); - get_task_struct(p); - cpu_rq(cpu)->migration_thread = p; rq->calc_load_update = calc_load_update; break; case CPU_ONLINE: case CPU_ONLINE_FROZEN: - /* Strictly unnecessary, as first user will wake it. */ - wake_up_process(cpu_rq(cpu)->migration_thread); - /* Update our root-domain */ - rq = cpu_rq(cpu); raw_spin_lock_irqsave(&rq->lock, flags); if (rq->rd) { BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span)); @@ -5930,61 +5767,24 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu) break; #ifdef CONFIG_HOTPLUG_CPU - case CPU_UP_CANCELED: - case CPU_UP_CANCELED_FROZEN: - if (!cpu_rq(cpu)->migration_thread) - break; - /* Unbind it from offline cpu so it can run. Fall thru. 
*/ - kthread_bind(cpu_rq(cpu)->migration_thread, - cpumask_any(cpu_online_mask)); - kthread_stop(cpu_rq(cpu)->migration_thread); - put_task_struct(cpu_rq(cpu)->migration_thread); - cpu_rq(cpu)->migration_thread = NULL; - break; - case CPU_DEAD: case CPU_DEAD_FROZEN: - cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */ migrate_live_tasks(cpu); - rq = cpu_rq(cpu); - kthread_stop(rq->migration_thread); - put_task_struct(rq->migration_thread); - rq->migration_thread = NULL; /* Idle task back to normal (off runqueue, low prio) */ raw_spin_lock_irq(&rq->lock); - update_rq_clock(rq); deactivate_task(rq, rq->idle, 0); __setscheduler(rq, rq->idle, SCHED_NORMAL, 0); rq->idle->sched_class = &idle_sched_class; migrate_dead_tasks(cpu); raw_spin_unlock_irq(&rq->lock); - cpuset_unlock(); migrate_nr_uninterruptible(rq); BUG_ON(rq->nr_running != 0); calc_global_load_remove(rq); - /* - * No need to migrate the tasks: it was best-effort if - * they didn't take sched_hotcpu_mutex. Just wake up - * the requestors. - */ - raw_spin_lock_irq(&rq->lock); - while (!list_empty(&rq->migration_queue)) { - struct migration_req *req; - - req = list_entry(rq->migration_queue.next, - struct migration_req, list); - list_del_init(&req->list); - raw_spin_unlock_irq(&rq->lock); - complete(&req->done); - raw_spin_lock_irq(&rq->lock); - } - raw_spin_unlock_irq(&rq->lock); break; case CPU_DYING: case CPU_DYING_FROZEN: /* Update our root-domain */ - rq = cpu_rq(cpu); raw_spin_lock_irqsave(&rq->lock, flags); if (rq->rd) { BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span)); @@ -6315,6 +6115,9 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu) struct rq *rq = cpu_rq(cpu); struct sched_domain *tmp; + for (tmp = sd; tmp; tmp = tmp->parent) + tmp->span_weight = cpumask_weight(sched_domain_span(tmp)); + /* Remove the sched domains which do not contribute to scheduling. 
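cpu_attach_domain() above now caches cpumask_weight(sched_domain_span(tmp)) in sd->span_weight once per attach, so the wakeup and load-balance paths touched later in this patch compare plain integers instead of recounting a cpumask. A toy illustration of the same precompute-once idea; the domain layout and helper are made up:

#include <stdio.h>

/* Toy "domain" with a CPU bitmap; span_weight caches its popcount. */
struct toy_domain {
        unsigned long span;             /* one bit per CPU, toy-sized */
        unsigned int span_weight;       /* cached popcount of span */
};

static unsigned int popcount(unsigned long x)
{
        unsigned int n = 0;

        for (; x; x &= x - 1)
                n++;
        return n;
}

int main(void)
{
        struct toy_domain sibling = { .span = 0x3 };    /* CPUs 0-1 */
        struct toy_domain package = { .span = 0xf };    /* CPUs 0-3 */

        /* done once at attach time, like cpu_attach_domain() above */
        sibling.span_weight = popcount(sibling.span);
        package.span_weight = popcount(package.span);

        /* hot paths now compare cached integers, not bitmaps */
        printf("wider domain: %s\n",
               package.span_weight > sibling.span_weight ? "package" : "sibling");
        return 0;
}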
*/ for (tmp = sd; tmp; ) { struct sched_domain *parent = tmp->parent; @@ -7798,10 +7601,8 @@ void __init sched_init(void) rq->push_cpu = 0; rq->cpu = i; rq->online = 0; - rq->migration_thread = NULL; rq->idle_stamp = 0; rq->avg_idle = 2*sysctl_sched_migration_cost; - INIT_LIST_HEAD(&rq->migration_queue); rq_attach_root(rq, &def_root_domain); #endif init_rq_hrtick(rq); @@ -7902,7 +7703,6 @@ static void normalize_task(struct rq *rq, struct task_struct *p) { int on_rq; - update_rq_clock(rq); on_rq = p->se.on_rq; if (on_rq) deactivate_task(rq, p, 0); @@ -7929,9 +7729,9 @@ void normalize_rt_tasks(void) p->se.exec_start = 0; #ifdef CONFIG_SCHEDSTATS - p->se.wait_start = 0; - p->se.sleep_start = 0; - p->se.block_start = 0; + p->se.statistics.wait_start = 0; + p->se.statistics.sleep_start = 0; + p->se.statistics.block_start = 0; #endif if (!rt_task(p)) { @@ -8264,8 +8064,6 @@ void sched_move_task(struct task_struct *tsk) rq = task_rq_lock(tsk, &flags); - update_rq_clock(rq); - running = task_current(rq, tsk); on_rq = tsk->se.on_rq; @@ -8284,7 +8082,7 @@ void sched_move_task(struct task_struct *tsk) if (unlikely(running)) tsk->sched_class->set_curr_task(rq); if (on_rq) - enqueue_task(rq, tsk, 0, false); + enqueue_task(rq, tsk, 0); task_rq_unlock(rq, &flags); } @@ -9098,43 +8896,32 @@ struct cgroup_subsys cpuacct_subsys = { #ifndef CONFIG_SMP -int rcu_expedited_torture_stats(char *page) -{ - return 0; -} -EXPORT_SYMBOL_GPL(rcu_expedited_torture_stats); - void synchronize_sched_expedited(void) { + barrier(); } EXPORT_SYMBOL_GPL(synchronize_sched_expedited); #else /* #ifndef CONFIG_SMP */ -static DEFINE_PER_CPU(struct migration_req, rcu_migration_req); -static DEFINE_MUTEX(rcu_sched_expedited_mutex); +static atomic_t synchronize_sched_expedited_count = ATOMIC_INIT(0); -#define RCU_EXPEDITED_STATE_POST -2 -#define RCU_EXPEDITED_STATE_IDLE -1 - -static int rcu_expedited_state = RCU_EXPEDITED_STATE_IDLE; - -int rcu_expedited_torture_stats(char *page) +static int synchronize_sched_expedited_cpu_stop(void *data) { - int cnt = 0; - int cpu; - - cnt += sprintf(&page[cnt], "state: %d /", rcu_expedited_state); - for_each_online_cpu(cpu) { - cnt += sprintf(&page[cnt], " %d:%d", - cpu, per_cpu(rcu_migration_req, cpu).dest_cpu); - } - cnt += sprintf(&page[cnt], "\n"); - return cnt; + /* + * There must be a full memory barrier on each affected CPU + * between the time that try_stop_cpus() is called and the + * time that it returns. + * + * In the current initial implementation of cpu_stop, the + * above condition is already met when the control reaches + * this point and the following smp_mb() is not strictly + * necessary. Do smp_mb() anyway for documentation and + * robustness against future implementation changes. + */ + smp_mb(); /* See above comment block. */ + return 0; } -EXPORT_SYMBOL_GPL(rcu_expedited_torture_stats); - -static long synchronize_sched_expedited_count; /* * Wait for an rcu-sched grace period to elapse, but use "big hammer" @@ -9148,18 +8935,14 @@ static long synchronize_sched_expedited_count; */ void synchronize_sched_expedited(void) { - int cpu; - unsigned long flags; - bool need_full_sync = 0; - struct rq *rq; - struct migration_req *req; - long snap; - int trycount = 0; + int snap, trycount = 0; smp_mb(); /* ensure prior mod happens before capturing snap. 
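The rewritten synchronize_sched_expedited() in this hunk replaces the per-CPU migration requests with a snapshot-and-retry scheme: snapshot a completion counter, try to stop every online CPU, and if another caller's pass completes after the snapshot, piggy-back on it instead of retrying. A rough single-threaded model of that control flow; the fake try_stop_all_cpus() stands in for try_stop_cpus() and fails once just to exercise the loop:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Completed expedited passes, like synchronize_sched_expedited_count. */
static atomic_int expedited_count;

/* Stand-in for try_stop_cpus(): pretend we lose the race once. */
static bool try_stop_all_cpus(void)
{
        static int attempts;

        return ++attempts >= 2;
}

static void toy_sync_expedited(void)
{
        int snap = atomic_load(&expedited_count) + 1;

        while (!try_stop_all_cpus()) {
                /* A later caller completed a full pass after our snapshot,
                 * so it covered us too and we can return early. */
                if (atomic_load(&expedited_count) - snap > 0) {
                        printf("piggy-backed on a concurrent caller\n");
                        return;
                }
        }
        atomic_fetch_add(&expedited_count, 1);
        printf("stopped all cpus ourselves, count=%d\n",
               atomic_load(&expedited_count));
}

int main(void)
{
        toy_sync_expedited();
        return 0;
}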
*/ - snap = ACCESS_ONCE(synchronize_sched_expedited_count) + 1; + snap = atomic_read(&synchronize_sched_expedited_count) + 1; get_online_cpus(); - while (!mutex_trylock(&rcu_sched_expedited_mutex)) { + while (try_stop_cpus(cpu_online_mask, + synchronize_sched_expedited_cpu_stop, + NULL) == -EAGAIN) { put_online_cpus(); if (trycount++ < 10) udelay(trycount * num_online_cpus()); @@ -9167,41 +8950,15 @@ void synchronize_sched_expedited(void) synchronize_sched(); return; } - if (ACCESS_ONCE(synchronize_sched_expedited_count) - snap > 0) { + if (atomic_read(&synchronize_sched_expedited_count) - snap > 0) { smp_mb(); /* ensure test happens before caller kfree */ return; } get_online_cpus(); } - rcu_expedited_state = RCU_EXPEDITED_STATE_POST; - for_each_online_cpu(cpu) { - rq = cpu_rq(cpu); - req = &per_cpu(rcu_migration_req, cpu); - init_completion(&req->done); - req->task = NULL; - req->dest_cpu = RCU_MIGRATION_NEED_QS; - raw_spin_lock_irqsave(&rq->lock, flags); - list_add(&req->list, &rq->migration_queue); - raw_spin_unlock_irqrestore(&rq->lock, flags); - wake_up_process(rq->migration_thread); - } - for_each_online_cpu(cpu) { - rcu_expedited_state = cpu; - req = &per_cpu(rcu_migration_req, cpu); - rq = cpu_rq(cpu); - wait_for_completion(&req->done); - raw_spin_lock_irqsave(&rq->lock, flags); - if (unlikely(req->dest_cpu == RCU_MIGRATION_MUST_SYNC)) - need_full_sync = 1; - req->dest_cpu = RCU_MIGRATION_IDLE; - raw_spin_unlock_irqrestore(&rq->lock, flags); - } - rcu_expedited_state = RCU_EXPEDITED_STATE_IDLE; - synchronize_sched_expedited_count++; - mutex_unlock(&rcu_sched_expedited_mutex); + atomic_inc(&synchronize_sched_expedited_count); + smp_mb__after_atomic_inc(); /* ensure post-GP actions seen after GP. */ put_online_cpus(); - if (need_full_sync) - synchronize_sched(); } EXPORT_SYMBOL_GPL(synchronize_sched_expedited); diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c index 19be00ba6123..87a330a7185f 100644 --- a/kernel/sched_debug.c +++ b/kernel/sched_debug.c @@ -70,16 +70,16 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, PN(se->vruntime); PN(se->sum_exec_runtime); #ifdef CONFIG_SCHEDSTATS - PN(se->wait_start); - PN(se->sleep_start); - PN(se->block_start); - PN(se->sleep_max); - PN(se->block_max); - PN(se->exec_max); - PN(se->slice_max); - PN(se->wait_max); - PN(se->wait_sum); - P(se->wait_count); + PN(se->statistics.wait_start); + PN(se->statistics.sleep_start); + PN(se->statistics.block_start); + PN(se->statistics.sleep_max); + PN(se->statistics.block_max); + PN(se->statistics.exec_max); + PN(se->statistics.slice_max); + PN(se->statistics.wait_max); + PN(se->statistics.wait_sum); + P(se->statistics.wait_count); #endif P(se->load.weight); #undef PN @@ -104,7 +104,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p) SEQ_printf(m, "%9Ld.%06ld %9Ld.%06ld %9Ld.%06ld", SPLIT_NS(p->se.vruntime), SPLIT_NS(p->se.sum_exec_runtime), - SPLIT_NS(p->se.sum_sleep_runtime)); + SPLIT_NS(p->se.statistics.sum_sleep_runtime)); #else SEQ_printf(m, "%15Ld %15Ld %15Ld.%06ld %15Ld.%06ld %15Ld.%06ld", 0LL, 0LL, 0LL, 0L, 0LL, 0L, 0LL, 0L); @@ -175,11 +175,6 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) task_group_path(tg, path, sizeof(path)); SEQ_printf(m, "\ncfs_rq[%d]:%s\n", cpu, path); -#elif defined(CONFIG_USER_SCHED) && defined(CONFIG_FAIR_GROUP_SCHED) - { - uid_t uid = cfs_rq->tg->uid; - SEQ_printf(m, "\ncfs_rq[%d] for UID: %u\n", cpu, uid); - } #else SEQ_printf(m, "\ncfs_rq[%d]:\n", cpu); #endif @@ -409,40 +404,38 @@ void 
proc_sched_show_task(struct task_struct *p, struct seq_file *m) PN(se.exec_start); PN(se.vruntime); PN(se.sum_exec_runtime); - PN(se.avg_overlap); - PN(se.avg_wakeup); nr_switches = p->nvcsw + p->nivcsw; #ifdef CONFIG_SCHEDSTATS - PN(se.wait_start); - PN(se.sleep_start); - PN(se.block_start); - PN(se.sleep_max); - PN(se.block_max); - PN(se.exec_max); - PN(se.slice_max); - PN(se.wait_max); - PN(se.wait_sum); - P(se.wait_count); - PN(se.iowait_sum); - P(se.iowait_count); + PN(se.statistics.wait_start); + PN(se.statistics.sleep_start); + PN(se.statistics.block_start); + PN(se.statistics.sleep_max); + PN(se.statistics.block_max); + PN(se.statistics.exec_max); + PN(se.statistics.slice_max); + PN(se.statistics.wait_max); + PN(se.statistics.wait_sum); + P(se.statistics.wait_count); + PN(se.statistics.iowait_sum); + P(se.statistics.iowait_count); P(sched_info.bkl_count); P(se.nr_migrations); - P(se.nr_migrations_cold); - P(se.nr_failed_migrations_affine); - P(se.nr_failed_migrations_running); - P(se.nr_failed_migrations_hot); - P(se.nr_forced_migrations); - P(se.nr_wakeups); - P(se.nr_wakeups_sync); - P(se.nr_wakeups_migrate); - P(se.nr_wakeups_local); - P(se.nr_wakeups_remote); - P(se.nr_wakeups_affine); - P(se.nr_wakeups_affine_attempts); - P(se.nr_wakeups_passive); - P(se.nr_wakeups_idle); + P(se.statistics.nr_migrations_cold); + P(se.statistics.nr_failed_migrations_affine); + P(se.statistics.nr_failed_migrations_running); + P(se.statistics.nr_failed_migrations_hot); + P(se.statistics.nr_forced_migrations); + P(se.statistics.nr_wakeups); + P(se.statistics.nr_wakeups_sync); + P(se.statistics.nr_wakeups_migrate); + P(se.statistics.nr_wakeups_local); + P(se.statistics.nr_wakeups_remote); + P(se.statistics.nr_wakeups_affine); + P(se.statistics.nr_wakeups_affine_attempts); + P(se.statistics.nr_wakeups_passive); + P(se.statistics.nr_wakeups_idle); { u64 avg_atom, avg_per_cpu; @@ -493,31 +486,6 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) void proc_sched_set_task(struct task_struct *p) { #ifdef CONFIG_SCHEDSTATS - p->se.wait_max = 0; - p->se.wait_sum = 0; - p->se.wait_count = 0; - p->se.iowait_sum = 0; - p->se.iowait_count = 0; - p->se.sleep_max = 0; - p->se.sum_sleep_runtime = 0; - p->se.block_max = 0; - p->se.exec_max = 0; - p->se.slice_max = 0; - p->se.nr_migrations = 0; - p->se.nr_migrations_cold = 0; - p->se.nr_failed_migrations_affine = 0; - p->se.nr_failed_migrations_running = 0; - p->se.nr_failed_migrations_hot = 0; - p->se.nr_forced_migrations = 0; - p->se.nr_wakeups = 0; - p->se.nr_wakeups_sync = 0; - p->se.nr_wakeups_migrate = 0; - p->se.nr_wakeups_local = 0; - p->se.nr_wakeups_remote = 0; - p->se.nr_wakeups_affine = 0; - p->se.nr_wakeups_affine_attempts = 0; - p->se.nr_wakeups_passive = 0; - p->se.nr_wakeups_idle = 0; - p->sched_info.bkl_count = 0; + memset(&p->se.statistics, 0, sizeof(p->se.statistics)); #endif } diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c index 5a5ea2cd924f..217e4a9393e4 100644 --- a/kernel/sched_fair.c +++ b/kernel/sched_fair.c @@ -35,8 +35,8 @@ * (to see the precise effective timeslice length of your workload, * run vmstat and monitor the context-switches (cs) field) */ -unsigned int sysctl_sched_latency = 5000000ULL; -unsigned int normalized_sysctl_sched_latency = 5000000ULL; +unsigned int sysctl_sched_latency = 6000000ULL; +unsigned int normalized_sysctl_sched_latency = 6000000ULL; /* * The initial- and re-scaling of tunables is configurable @@ -52,15 +52,15 @@ enum sched_tunable_scaling sysctl_sched_tunable_scaling /* * Minimal 
preemption granularity for CPU-bound tasks: - * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds) + * (default: 2 msec * (1 + ilog(ncpus)), units: nanoseconds) */ -unsigned int sysctl_sched_min_granularity = 1000000ULL; -unsigned int normalized_sysctl_sched_min_granularity = 1000000ULL; +unsigned int sysctl_sched_min_granularity = 2000000ULL; +unsigned int normalized_sysctl_sched_min_granularity = 2000000ULL; /* * is kept at sysctl_sched_latency / sysctl_sched_min_granularity */ -static unsigned int sched_nr_latency = 5; +static unsigned int sched_nr_latency = 3; /* * After fork, child runs first. If set to 0 (default) then @@ -505,7 +505,8 @@ __update_curr(struct cfs_rq *cfs_rq, struct sched_entity *curr, { unsigned long delta_exec_weighted; - schedstat_set(curr->exec_max, max((u64)delta_exec, curr->exec_max)); + schedstat_set(curr->statistics.exec_max, + max((u64)delta_exec, curr->statistics.exec_max)); curr->sum_exec_runtime += delta_exec; schedstat_add(cfs_rq, exec_clock, delta_exec); @@ -548,7 +549,7 @@ static void update_curr(struct cfs_rq *cfs_rq) static inline void update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se) { - schedstat_set(se->wait_start, rq_of(cfs_rq)->clock); + schedstat_set(se->statistics.wait_start, rq_of(cfs_rq)->clock); } /* @@ -567,18 +568,18 @@ static void update_stats_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se) static void update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se) { - schedstat_set(se->wait_max, max(se->wait_max, - rq_of(cfs_rq)->clock - se->wait_start)); - schedstat_set(se->wait_count, se->wait_count + 1); - schedstat_set(se->wait_sum, se->wait_sum + - rq_of(cfs_rq)->clock - se->wait_start); + schedstat_set(se->statistics.wait_max, max(se->statistics.wait_max, + rq_of(cfs_rq)->clock - se->statistics.wait_start)); + schedstat_set(se->statistics.wait_count, se->statistics.wait_count + 1); + schedstat_set(se->statistics.wait_sum, se->statistics.wait_sum + + rq_of(cfs_rq)->clock - se->statistics.wait_start); #ifdef CONFIG_SCHEDSTATS if (entity_is_task(se)) { trace_sched_stat_wait(task_of(se), - rq_of(cfs_rq)->clock - se->wait_start); + rq_of(cfs_rq)->clock - se->statistics.wait_start); } #endif - schedstat_set(se->wait_start, 0); + schedstat_set(se->statistics.wait_start, 0); } static inline void @@ -657,39 +658,39 @@ static void enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se) if (entity_is_task(se)) tsk = task_of(se); - if (se->sleep_start) { - u64 delta = rq_of(cfs_rq)->clock - se->sleep_start; + if (se->statistics.sleep_start) { + u64 delta = rq_of(cfs_rq)->clock - se->statistics.sleep_start; if ((s64)delta < 0) delta = 0; - if (unlikely(delta > se->sleep_max)) - se->sleep_max = delta; + if (unlikely(delta > se->statistics.sleep_max)) + se->statistics.sleep_max = delta; - se->sleep_start = 0; - se->sum_sleep_runtime += delta; + se->statistics.sleep_start = 0; + se->statistics.sum_sleep_runtime += delta; if (tsk) { account_scheduler_latency(tsk, delta >> 10, 1); trace_sched_stat_sleep(tsk, delta); } } - if (se->block_start) { - u64 delta = rq_of(cfs_rq)->clock - se->block_start; + if (se->statistics.block_start) { + u64 delta = rq_of(cfs_rq)->clock - se->statistics.block_start; if ((s64)delta < 0) delta = 0; - if (unlikely(delta > se->block_max)) - se->block_max = delta; + if (unlikely(delta > se->statistics.block_max)) + se->statistics.block_max = delta; - se->block_start = 0; - se->sum_sleep_runtime += delta; + se->statistics.block_start = 0; + se->statistics.sum_sleep_runtime 
+= delta; if (tsk) { if (tsk->in_iowait) { - se->iowait_sum += delta; - se->iowait_count++; + se->statistics.iowait_sum += delta; + se->statistics.iowait_count++; trace_sched_stat_iowait(tsk, delta); } @@ -737,20 +738,10 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) vruntime += sched_vslice(cfs_rq, se); /* sleeps up to a single latency don't count. */ - if (!initial && sched_feat(FAIR_SLEEPERS)) { + if (!initial) { unsigned long thresh = sysctl_sched_latency; /* - * Convert the sleeper threshold into virtual time. - * SCHED_IDLE is a special sub-class. We care about - * fairness only relative to other SCHED_IDLE tasks, - * all of which have the same weight. - */ - if (sched_feat(NORMALIZED_SLEEPER) && (!entity_is_task(se) || - task_of(se)->policy != SCHED_IDLE)) - thresh = calc_delta_fair(thresh, se); - - /* * Halve their sleep time's effect, to allow * for a gentler effect of sleepers: */ @@ -766,9 +757,6 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial) se->vruntime = vruntime; } -#define ENQUEUE_WAKEUP 1 -#define ENQUEUE_MIGRATE 2 - static void enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) { @@ -776,7 +764,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) * Update the normalized vruntime before updating min_vruntime * through callig update_curr(). */ - if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATE)) + if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING)) se->vruntime += cfs_rq->min_vruntime; /* @@ -812,7 +800,7 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se) } static void -dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int sleep) +dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) { /* * Update run-time statistics of the 'current'. @@ -820,15 +808,15 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int sleep) update_curr(cfs_rq); update_stats_dequeue(cfs_rq, se); - if (sleep) { + if (flags & DEQUEUE_SLEEP) { #ifdef CONFIG_SCHEDSTATS if (entity_is_task(se)) { struct task_struct *tsk = task_of(se); if (tsk->state & TASK_INTERRUPTIBLE) - se->sleep_start = rq_of(cfs_rq)->clock; + se->statistics.sleep_start = rq_of(cfs_rq)->clock; if (tsk->state & TASK_UNINTERRUPTIBLE) - se->block_start = rq_of(cfs_rq)->clock; + se->statistics.block_start = rq_of(cfs_rq)->clock; } #endif } @@ -845,7 +833,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int sleep) * update can refer to the ->curr item and we need to reflect this * movement in our normalized position. 
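The ENQUEUE_WAKING/DEQUEUE_SLEEP handling above keeps a migrating entity's vruntime relative to the old queue's min_vruntime while it is off the tree, then re-bases it on whichever queue it lands on next. A stripped-down model of that normalization, with no load weights and purely illustrative names:

#include <stdio.h>

struct toy_cfs_rq { unsigned long long min_vruntime; };
struct toy_se     { unsigned long long vruntime; };

/* Leaving the queue for a migration: make vruntime queue-relative. */
static void toy_dequeue(struct toy_cfs_rq *q, struct toy_se *se)
{
        se->vruntime -= q->min_vruntime;
}

/* Arriving on a (possibly different) queue: re-base onto its clock. */
static void toy_enqueue(struct toy_cfs_rq *q, struct toy_se *se)
{
        se->vruntime += q->min_vruntime;
}

int main(void)
{
        struct toy_cfs_rq cpu0 = { .min_vruntime = 1000 };
        struct toy_cfs_rq cpu1 = { .min_vruntime = 5000 };
        struct toy_se se = { .vruntime = 1200 };        /* 200 ahead of cpu0 */

        toy_dequeue(&cpu0, &se);        /* stores the +200 lead */
        toy_enqueue(&cpu1, &se);        /* becomes 5200 on cpu1 */
        printf("vruntime on cpu1: %llu\n", se.vruntime);
        return 0;
}

This is what lets a task carry its relative lag or lead across CPUs whose virtual clocks have advanced by very different amounts.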
*/ - if (!sleep) + if (!(flags & DEQUEUE_SLEEP)) se->vruntime -= cfs_rq->min_vruntime; } @@ -912,7 +900,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se) * when there are only lesser-weight tasks around): */ if (rq_of(cfs_rq)->load.weight >= 2*se->load.weight) { - se->slice_max = max(se->slice_max, + se->statistics.slice_max = max(se->statistics.slice_max, se->sum_exec_runtime - se->prev_sum_exec_runtime); } #endif @@ -1054,16 +1042,10 @@ static inline void hrtick_update(struct rq *rq) * then put the task into the rbtree: */ static void -enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup, bool head) +enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) { struct cfs_rq *cfs_rq; struct sched_entity *se = &p->se; - int flags = 0; - - if (wakeup) - flags |= ENQUEUE_WAKEUP; - if (p->state == TASK_WAKING) - flags |= ENQUEUE_MIGRATE; for_each_sched_entity(se) { if (se->on_rq) @@ -1081,18 +1063,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup, bool head) * decreased. We remove the task from the rbtree and * update the fair scheduling stats: */ -static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int sleep) +static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) { struct cfs_rq *cfs_rq; struct sched_entity *se = &p->se; for_each_sched_entity(se) { cfs_rq = cfs_rq_of(se); - dequeue_entity(cfs_rq, se, sleep); + dequeue_entity(cfs_rq, se, flags); /* Don't dequeue parent if it has other entities besides us */ if (cfs_rq->load.weight) break; - sleep = 1; + flags |= DEQUEUE_SLEEP; } hrtick_update(rq); @@ -1240,7 +1222,6 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu, static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync) { - struct task_struct *curr = current; unsigned long this_load, load; int idx, this_cpu, prev_cpu; unsigned long tl_per_task; @@ -1255,18 +1236,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync) load = source_load(prev_cpu, idx); this_load = target_load(this_cpu, idx); - if (sync) { - if (sched_feat(SYNC_LESS) && - (curr->se.avg_overlap > sysctl_sched_migration_cost || - p->se.avg_overlap > sysctl_sched_migration_cost)) - sync = 0; - } else { - if (sched_feat(SYNC_MORE) && - (curr->se.avg_overlap < sysctl_sched_migration_cost && - p->se.avg_overlap < sysctl_sched_migration_cost)) - sync = 1; - } - /* * If sync wakeup then subtract the (maximum possible) * effect of the currently running task from the load @@ -1306,7 +1275,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync) if (sync && balanced) return 1; - schedstat_inc(p, se.nr_wakeups_affine_attempts); + schedstat_inc(p, se.statistics.nr_wakeups_affine_attempts); tl_per_task = cpu_avg_load_per_task(this_cpu); if (balanced || @@ -1318,7 +1287,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync) * there is no bad imbalance. */ schedstat_inc(sd, ttwu_move_affine); - schedstat_inc(p, se.nr_wakeups_affine); + schedstat_inc(p, se.statistics.nr_wakeups_affine); return 1; } @@ -1406,29 +1375,48 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu) /* * Try and locate an idle CPU in the sched_domain. 
*/ -static int -select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target) +static int select_idle_sibling(struct task_struct *p, int target) { int cpu = smp_processor_id(); int prev_cpu = task_cpu(p); + struct sched_domain *sd; int i; /* - * If this domain spans both cpu and prev_cpu (see the SD_WAKE_AFFINE - * test in select_task_rq_fair) and the prev_cpu is idle then that's - * always a better target than the current cpu. + * If the task is going to be woken-up on this cpu and if it is + * already idle, then it is the right target. */ - if (target == cpu && !cpu_rq(prev_cpu)->cfs.nr_running) + if (target == cpu && idle_cpu(cpu)) + return cpu; + + /* + * If the task is going to be woken-up on the cpu where it previously + * ran and if it is currently idle, then it the right target. + */ + if (target == prev_cpu && idle_cpu(prev_cpu)) return prev_cpu; /* - * Otherwise, iterate the domain and find an elegible idle cpu. + * Otherwise, iterate the domains and find an elegible idle cpu. */ - for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) { - if (!cpu_rq(i)->cfs.nr_running) { - target = i; + for_each_domain(target, sd) { + if (!(sd->flags & SD_SHARE_PKG_RESOURCES)) break; + + for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) { + if (idle_cpu(i)) { + target = i; + break; + } } + + /* + * Lets stop looking for an idle sibling when we reached + * the domain that spans the current cpu and prev_cpu. + */ + if (cpumask_test_cpu(cpu, sched_domain_span(sd)) && + cpumask_test_cpu(prev_cpu, sched_domain_span(sd))) + break; } return target; @@ -1445,7 +1433,8 @@ select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target) * * preempt must be disabled. */ -static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags) +static int +select_task_rq_fair(struct rq *rq, struct task_struct *p, int sd_flag, int wake_flags) { struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL; int cpu = smp_processor_id(); @@ -1456,8 +1445,7 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag int sync = wake_flags & WF_SYNC; if (sd_flag & SD_BALANCE_WAKE) { - if (sched_feat(AFFINE_WAKEUPS) && - cpumask_test_cpu(cpu, &p->cpus_allowed)) + if (cpumask_test_cpu(cpu, &p->cpus_allowed)) want_affine = 1; new_cpu = prev_cpu; } @@ -1491,34 +1479,13 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag } /* - * While iterating the domains looking for a spanning - * WAKE_AFFINE domain, adjust the affine target to any idle cpu - * in cache sharing domains along the way. + * If both cpu and prev_cpu are part of this domain, + * cpu is a valid SD_WAKE_AFFINE target. */ - if (want_affine) { - int target = -1; - - /* - * If both cpu and prev_cpu are part of this domain, - * cpu is a valid SD_WAKE_AFFINE target. - */ - if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) - target = cpu; - - /* - * If there's an idle sibling in this domain, make that - * the wake_affine target instead of the current cpu. 
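The rewritten select_idle_sibling() above prefers the waking CPU or the task's previous CPU when either is already idle, and otherwise scans the cache-sharing siblings for any idle CPU. A compressed userspace model of that preference order; the four-CPU layout and hard-coded sharing mask are hypothetical:

#include <stdbool.h>
#include <stdio.h>

#define TOY_NR_CPUS 4

static bool idle[TOY_NR_CPUS] = { false, true, false, true };

/* CPUs that share a cache with @cpu (toy: pairs {0,1} and {2,3}). */
static unsigned long sharing_mask(int cpu)
{
        return cpu < 2 ? 0x3UL : 0xcUL;
}

static int toy_select_idle_sibling(int cpu, int prev_cpu, int target)
{
        unsigned long mask;
        int i;

        if (target == cpu && idle[cpu])
                return cpu;
        if (target == prev_cpu && idle[prev_cpu])
                return prev_cpu;

        /* Otherwise scan the cache-sharing siblings of the target. */
        mask = sharing_mask(target);
        for (i = 0; i < TOY_NR_CPUS; i++)
                if ((mask & (1UL << i)) && idle[i])
                        return i;

        return target;
}

int main(void)
{
        /* waker on CPU 0 (busy), task last ran on CPU 2 (busy) */
        printf("chosen cpu: %d\n", toy_select_idle_sibling(0, 2, 0));
        return 0;
}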
- */ - if (tmp->flags & SD_SHARE_PKG_RESOURCES) - target = select_idle_sibling(p, tmp, target); - - if (target >= 0) { - if (tmp->flags & SD_WAKE_AFFINE) { - affine_sd = tmp; - want_affine = 0; - } - cpu = target; - } + if (want_affine && (tmp->flags & SD_WAKE_AFFINE) && + cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) { + affine_sd = tmp; + want_affine = 0; } if (!want_sd && !want_affine) @@ -1531,22 +1498,29 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag sd = tmp; } +#ifdef CONFIG_FAIR_GROUP_SCHED if (sched_feat(LB_SHARES_UPDATE)) { /* * Pick the largest domain to update shares over */ tmp = sd; - if (affine_sd && (!tmp || - cpumask_weight(sched_domain_span(affine_sd)) > - cpumask_weight(sched_domain_span(sd)))) + if (affine_sd && (!tmp || affine_sd->span_weight > sd->span_weight)) tmp = affine_sd; - if (tmp) + if (tmp) { + raw_spin_unlock(&rq->lock); update_shares(tmp); + raw_spin_lock(&rq->lock); + } } +#endif - if (affine_sd && wake_affine(affine_sd, p, sync)) - return cpu; + if (affine_sd) { + if (cpu == prev_cpu || wake_affine(affine_sd, p, sync)) + return select_idle_sibling(p, cpu); + else + return select_idle_sibling(p, prev_cpu); + } while (sd) { int load_idx = sd->forkexec_idx; @@ -1576,10 +1550,10 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag /* Now try balancing at a lower domain level of new_cpu */ cpu = new_cpu; - weight = cpumask_weight(sched_domain_span(sd)); + weight = sd->span_weight; sd = NULL; for_each_domain(cpu, tmp) { - if (weight <= cpumask_weight(sched_domain_span(tmp))) + if (weight <= tmp->span_weight) break; if (tmp->flags & sd_flag) sd = tmp; @@ -1591,63 +1565,26 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag } #endif /* CONFIG_SMP */ -/* - * Adaptive granularity - * - * se->avg_wakeup gives the average time a task runs until it does a wakeup, - * with the limit of wakeup_gran -- when it never does a wakeup. - * - * So the smaller avg_wakeup is the faster we want this task to preempt, - * but we don't want to treat the preemptee unfairly and therefore allow it - * to run for at least the amount of time we'd like to run. - * - * NOTE: we use 2*avg_wakeup to increase the probability of actually doing one - * - * NOTE: we use *nr_running to scale with load, this nicely matches the - * degrading latency on load. - */ -static unsigned long -adaptive_gran(struct sched_entity *curr, struct sched_entity *se) -{ - u64 this_run = curr->sum_exec_runtime - curr->prev_sum_exec_runtime; - u64 expected_wakeup = 2*se->avg_wakeup * cfs_rq_of(se)->nr_running; - u64 gran = 0; - - if (this_run < expected_wakeup) - gran = expected_wakeup - this_run; - - return min_t(s64, gran, sysctl_sched_wakeup_granularity); -} - static unsigned long wakeup_gran(struct sched_entity *curr, struct sched_entity *se) { unsigned long gran = sysctl_sched_wakeup_granularity; - if (cfs_rq_of(curr)->curr && sched_feat(ADAPTIVE_GRAN)) - gran = adaptive_gran(curr, se); - /* * Since its curr running now, convert the gran from real-time * to virtual-time in his units. + * + * By using 'se' instead of 'curr' we penalize light tasks, so + * they get preempted easier. That is, if 'se' < 'curr' then + * the resulting gran will be larger, therefore penalizing the + * lighter, if otoh 'se' > 'curr' then the resulting gran will + * be smaller, again penalizing the lighter task. + * + * This is especially important for buddies when the leftmost + * task is higher priority than the buddy. 
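With ADAPTIVE_GRAN and ASYM_GRAN folded away, wakeup_gran() in this hunk always scales the granularity into the wakee's virtual time, so the lighter of the two tasks is the one penalized, as the comment above explains. A toy version of that scaling; the NICE_0_LOAD value and the example weights are illustrative:

#include <stdio.h>

#define TOY_NICE_0_LOAD 1024UL

/* gran scaled into the wakee's virtual time, as calc_delta_fair() would do */
static unsigned long toy_wakeup_gran(unsigned long gran_ns, unsigned long wakee_weight)
{
        if (wakee_weight != TOY_NICE_0_LOAD)
                gran_ns = gran_ns * TOY_NICE_0_LOAD / wakee_weight;
        return gran_ns;
}

int main(void)
{
        unsigned long gran = 1000000;   /* 1 ms base granularity */

        printf("nice-0 wakee: %lu ns\n", toy_wakeup_gran(gran, 1024));
        printf("heavy wakee:  %lu ns\n", toy_wakeup_gran(gran, 2048));
        printf("light wakee:  %lu ns\n", toy_wakeup_gran(gran, 512));
        return 0;
}

A heavier wakee sees a smaller effective granularity (easier preemption), a lighter one a larger granularity (harder preemption).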
*/ - if (sched_feat(ASYM_GRAN)) { - /* - * By using 'se' instead of 'curr' we penalize light tasks, so - * they get preempted easier. That is, if 'se' < 'curr' then - * the resulting gran will be larger, therefore penalizing the - * lighter, if otoh 'se' > 'curr' then the resulting gran will - * be smaller, again penalizing the lighter task. - * - * This is especially important for buddies when the leftmost - * task is higher priority than the buddy. - */ - if (unlikely(se->load.weight != NICE_0_LOAD)) - gran = calc_delta_fair(gran, se); - } else { - if (unlikely(curr->load.weight != NICE_0_LOAD)) - gran = calc_delta_fair(gran, curr); - } + if (unlikely(se->load.weight != NICE_0_LOAD)) + gran = calc_delta_fair(gran, se); return gran; } @@ -1705,7 +1642,6 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_ struct task_struct *curr = rq->curr; struct sched_entity *se = &curr->se, *pse = &p->se; struct cfs_rq *cfs_rq = task_cfs_rq(curr); - int sync = wake_flags & WF_SYNC; int scale = cfs_rq->nr_running >= sched_nr_latency; if (unlikely(rt_prio(p->prio))) @@ -1738,14 +1674,6 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_ if (unlikely(curr->policy == SCHED_IDLE)) goto preempt; - if (sched_feat(WAKEUP_SYNC) && sync) - goto preempt; - - if (sched_feat(WAKEUP_OVERLAP) && - se->avg_overlap < sysctl_sched_migration_cost && - pse->avg_overlap < sysctl_sched_migration_cost) - goto preempt; - if (!sched_feat(WAKEUP_PREEMPT)) return; @@ -1844,13 +1772,13 @@ int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu, * 3) are cache-hot on their current CPU. */ if (!cpumask_test_cpu(this_cpu, &p->cpus_allowed)) { - schedstat_inc(p, se.nr_failed_migrations_affine); + schedstat_inc(p, se.statistics.nr_failed_migrations_affine); return 0; } *all_pinned = 0; if (task_running(rq, p)) { - schedstat_inc(p, se.nr_failed_migrations_running); + schedstat_inc(p, se.statistics.nr_failed_migrations_running); return 0; } @@ -1866,14 +1794,14 @@ int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu, #ifdef CONFIG_SCHEDSTATS if (tsk_cache_hot) { schedstat_inc(sd, lb_hot_gained[idle]); - schedstat_inc(p, se.nr_forced_migrations); + schedstat_inc(p, se.statistics.nr_forced_migrations); } #endif return 1; } if (tsk_cache_hot) { - schedstat_inc(p, se.nr_failed_migrations_hot); + schedstat_inc(p, se.statistics.nr_failed_migrations_hot); return 0; } return 1; @@ -2311,7 +2239,7 @@ unsigned long __weak arch_scale_freq_power(struct sched_domain *sd, int cpu) unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu) { - unsigned long weight = cpumask_weight(sched_domain_span(sd)); + unsigned long weight = sd->span_weight; unsigned long smt_gain = sd->smt_gain; smt_gain /= weight; @@ -2344,7 +2272,7 @@ unsigned long scale_rt_power(int cpu) static void update_cpu_power(struct sched_domain *sd, int cpu) { - unsigned long weight = cpumask_weight(sched_domain_span(sd)); + unsigned long weight = sd->span_weight; unsigned long power = SCHED_LOAD_SCALE; struct sched_group *sdg = sd->groups; @@ -2870,6 +2798,8 @@ static int need_active_balance(struct sched_domain *sd, int sd_idle, int idle) return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2); } +static int active_load_balance_cpu_stop(void *data); + /* * Check this_cpu to ensure it is balanced within domain. Attempt to move * tasks if there is an imbalance. 
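In the load_balance() hunks that follow, active balancing is no longer a wakeup of a dedicated migration thread; it is a fire-and-forget cpu_stop request guarded by ->active_balance. A schematic of that pattern using the stop_one_cpu_nowait() signature from this patch; the container struct and callback are condensed stand-ins, and the raw_spin_lock serialization in the real code is omitted:

#include <linux/stop_machine.h>         /* struct cpu_stop_work, stop_one_cpu_nowait() */

/* Illustrative container, loosely mirroring the rq fields used above. */
struct busiest_like {
        int active_balance;                     /* set while a request is in flight */
        struct cpu_stop_work balance_work;      /* buffer handed to the stopper */
};

/* Runs on the busiest CPU at stopper priority; must not sleep. */
static int push_tasks_away(void *data)
{
        struct busiest_like *busiest = data;

        /* ... move tasks, then allow the buffer to be reused ... */
        busiest->active_balance = 0;
        return 0;
}

static void kick_active_balance(struct busiest_like *busiest, unsigned int cpu)
{
        /* ->active_balance keeps balance_work from being requeued while in use. */
        if (!busiest->active_balance) {
                busiest->active_balance = 1;
                stop_one_cpu_nowait(cpu, push_tasks_away, busiest,
                                    &busiest->balance_work);
        }
}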
@@ -2959,8 +2889,9 @@ redo: if (need_active_balance(sd, sd_idle, idle)) { raw_spin_lock_irqsave(&busiest->lock, flags); - /* don't kick the migration_thread, if the curr - * task on busiest cpu can't be moved to this_cpu + /* don't kick the active_load_balance_cpu_stop, + * if the curr task on busiest cpu can't be + * moved to this_cpu */ if (!cpumask_test_cpu(this_cpu, &busiest->curr->cpus_allowed)) { @@ -2970,14 +2901,22 @@ redo: goto out_one_pinned; } + /* + * ->active_balance synchronizes accesses to + * ->active_balance_work. Once set, it's cleared + * only after active load balance is finished. + */ if (!busiest->active_balance) { busiest->active_balance = 1; busiest->push_cpu = this_cpu; active_balance = 1; } raw_spin_unlock_irqrestore(&busiest->lock, flags); + if (active_balance) - wake_up_process(busiest->migration_thread); + stop_one_cpu_nowait(cpu_of(busiest), + active_load_balance_cpu_stop, busiest, + &busiest->active_balance_work); /* * We've kicked active balancing, reset the failure @@ -3084,24 +3023,29 @@ static void idle_balance(int this_cpu, struct rq *this_rq) } /* - * active_load_balance is run by migration threads. It pushes running tasks - * off the busiest CPU onto idle CPUs. It requires at least 1 task to be - * running on each physical CPU where possible, and avoids physical / - * logical imbalances. - * - * Called with busiest_rq locked. + * active_load_balance_cpu_stop is run by cpu stopper. It pushes + * running tasks off the busiest CPU onto idle CPUs. It requires at + * least 1 task to be running on each physical CPU where possible, and + * avoids physical / logical imbalances. */ -static void active_load_balance(struct rq *busiest_rq, int busiest_cpu) +static int active_load_balance_cpu_stop(void *data) { + struct rq *busiest_rq = data; + int busiest_cpu = cpu_of(busiest_rq); int target_cpu = busiest_rq->push_cpu; + struct rq *target_rq = cpu_rq(target_cpu); struct sched_domain *sd; - struct rq *target_rq; + + raw_spin_lock_irq(&busiest_rq->lock); + + /* make sure the requested cpu hasn't gone down in the meantime */ + if (unlikely(busiest_cpu != smp_processor_id() || + !busiest_rq->active_balance)) + goto out_unlock; /* Is there any task to move? */ if (busiest_rq->nr_running <= 1) - return; - - target_rq = cpu_rq(target_cpu); + goto out_unlock; /* * This condition is "impossible", if it occurs @@ -3112,8 +3056,6 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu) /* move a task from busiest_rq to target_rq */ double_lock_balance(busiest_rq, target_rq); - update_rq_clock(busiest_rq); - update_rq_clock(target_rq); /* Search for an sd spanning us and the target CPU. */ for_each_domain(target_cpu, sd) { @@ -3132,6 +3074,10 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu) schedstat_inc(sd, alb_failed); } double_unlock_balance(busiest_rq, target_rq); +out_unlock: + busiest_rq->active_balance = 0; + raw_spin_unlock_irq(&busiest_rq->lock); + return 0; } #ifdef CONFIG_NO_HZ diff --git a/kernel/sched_features.h b/kernel/sched_features.h index d5059fd761d9..83c66e8ad3ee 100644 --- a/kernel/sched_features.h +++ b/kernel/sched_features.h @@ -1,11 +1,4 @@ /* - * Disregards a certain amount of sleep time (sched_latency_ns) and - * considers the task to be running during that period. This gives it - * a service deficit on wakeup, allowing it to run sooner. - */ -SCHED_FEAT(FAIR_SLEEPERS, 1) - -/* * Only give sleepers 50% of their service deficit. 
This allows * them to run sooner, but does not allow tons of sleepers to * rip the spread apart. @@ -13,13 +6,6 @@ SCHED_FEAT(FAIR_SLEEPERS, 1) SCHED_FEAT(GENTLE_FAIR_SLEEPERS, 1) /* - * By not normalizing the sleep time, heavy tasks get an effective - * longer period, and lighter task an effective shorter period they - * are considered running. - */ -SCHED_FEAT(NORMALIZED_SLEEPER, 0) - -/* * Place new tasks ahead so that they do not starve already running * tasks */ @@ -31,37 +17,6 @@ SCHED_FEAT(START_DEBIT, 1) SCHED_FEAT(WAKEUP_PREEMPT, 1) /* - * Compute wakeup_gran based on task behaviour, clipped to - * [0, sched_wakeup_gran_ns] - */ -SCHED_FEAT(ADAPTIVE_GRAN, 1) - -/* - * When converting the wakeup granularity to virtual time, do it such - * that heavier tasks preempting a lighter task have an edge. - */ -SCHED_FEAT(ASYM_GRAN, 1) - -/* - * Always wakeup-preempt SYNC wakeups, see SYNC_WAKEUPS. - */ -SCHED_FEAT(WAKEUP_SYNC, 0) - -/* - * Wakeup preempt based on task behaviour. Tasks that do not overlap - * don't get preempted. - */ -SCHED_FEAT(WAKEUP_OVERLAP, 0) - -/* - * Use the SYNC wakeup hint, pipes and the likes use this to indicate - * the remote end is likely to consume the data we just wrote, and - * therefore has cache benefit from being placed on the same cpu, see - * also AFFINE_WAKEUPS. - */ -SCHED_FEAT(SYNC_WAKEUPS, 1) - -/* * Based on load and program behaviour, see if it makes sense to place * a newly woken task on the same cpu as the task that woke it -- * improve cache locality. Typically used with SYNC wakeups as @@ -70,16 +25,6 @@ SCHED_FEAT(SYNC_WAKEUPS, 1) SCHED_FEAT(AFFINE_WAKEUPS, 1) /* - * Weaken SYNC hint based on overlap - */ -SCHED_FEAT(SYNC_LESS, 1) - -/* - * Add SYNC hint based on overlap - */ -SCHED_FEAT(SYNC_MORE, 0) - -/* * Prefer to schedule the task we woke last (assuming it failed * wakeup-preemption), since its likely going to consume data we * touched, increases cache locality. 
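Of the sleeper-fairness features, only GENTLE_FAIR_SLEEPERS survives: a waking sleeper is placed at most half a scheduling latency behind min_vruntime rather than a full one (see the place_entity() hunk earlier in this patch). A toy placement calculation with made-up numbers:

#include <stdio.h>

static unsigned long long place_sleeper(unsigned long long min_vruntime,
                                        unsigned long long old_vruntime,
                                        unsigned long long latency_ns,
                                        int gentle)
{
        unsigned long long thresh = gentle ? latency_ns / 2 : latency_ns;
        unsigned long long target = min_vruntime - thresh;     /* sleeper credit */

        /* never move a task backwards in virtual time */
        return old_vruntime > target ? old_vruntime : target;
}

int main(void)
{
        /* a task that slept long enough for its old vruntime to be stale */
        printf("gentle: %llu\n", place_sleeper(100000000, 1000, 6000000, 1));
        printf("full:   %llu\n", place_sleeper(100000000, 1000, 6000000, 0));
        return 0;
}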
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c index a8a6d8a50947..9fa0f402c87c 100644 --- a/kernel/sched_idletask.c +++ b/kernel/sched_idletask.c @@ -6,7 +6,8 @@ */ #ifdef CONFIG_SMP -static int select_task_rq_idle(struct task_struct *p, int sd_flag, int flags) +static int +select_task_rq_idle(struct rq *rq, struct task_struct *p, int sd_flag, int flags) { return task_cpu(p); /* IDLE tasks as never migrated */ } @@ -22,8 +23,7 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl static struct task_struct *pick_next_task_idle(struct rq *rq) { schedstat_inc(rq, sched_goidle); - /* adjust the active tasks as we might go into a long sleep */ - calc_load_account_active(rq); + calc_load_account_idle(rq); return rq->idle; } @@ -32,7 +32,7 @@ static struct task_struct *pick_next_task_idle(struct rq *rq) * message if some code attempts to do it: */ static void -dequeue_task_idle(struct rq *rq, struct task_struct *p, int sleep) +dequeue_task_idle(struct rq *rq, struct task_struct *p, int flags) { raw_spin_unlock_irq(&rq->lock); printk(KERN_ERR "bad: scheduling from the idle thread!\n"); diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c index b5b920ae2ea7..8afb953e31c6 100644 --- a/kernel/sched_rt.c +++ b/kernel/sched_rt.c @@ -613,7 +613,7 @@ static void update_curr_rt(struct rq *rq) if (unlikely((s64)delta_exec < 0)) delta_exec = 0; - schedstat_set(curr->se.exec_max, max(curr->se.exec_max, delta_exec)); + schedstat_set(curr->se.statistics.exec_max, max(curr->se.statistics.exec_max, delta_exec)); curr->se.sum_exec_runtime += delta_exec; account_group_exec_runtime(curr, delta_exec); @@ -888,20 +888,20 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se) * Adding/removing a task to/from a priority array: */ static void -enqueue_task_rt(struct rq *rq, struct task_struct *p, int wakeup, bool head) +enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags) { struct sched_rt_entity *rt_se = &p->rt; - if (wakeup) + if (flags & ENQUEUE_WAKEUP) rt_se->timeout = 0; - enqueue_rt_entity(rt_se, head); + enqueue_rt_entity(rt_se, flags & ENQUEUE_HEAD); if (!task_current(rq, p) && p->rt.nr_cpus_allowed > 1) enqueue_pushable_task(rq, p); } -static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int sleep) +static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags) { struct sched_rt_entity *rt_se = &p->rt; @@ -948,10 +948,9 @@ static void yield_task_rt(struct rq *rq) #ifdef CONFIG_SMP static int find_lowest_rq(struct task_struct *task); -static int select_task_rq_rt(struct task_struct *p, int sd_flag, int flags) +static int +select_task_rq_rt(struct rq *rq, struct task_struct *p, int sd_flag, int flags) { - struct rq *rq = task_rq(p); - if (sd_flag != SD_BALANCE_WAKE) return smp_processor_id(); diff --git a/kernel/softirq.c b/kernel/softirq.c index 7c1a67ef0274..0db913a5c60f 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -716,7 +716,7 @@ static int run_ksoftirqd(void * __bind_cpu) preempt_enable_no_resched(); cond_resched(); preempt_disable(); - rcu_sched_qs((long)__bind_cpu); + rcu_note_context_switch((long)__bind_cpu); } preempt_enable(); set_current_state(TASK_INTERRUPTIBLE); diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c index 9bb9fb1bd79c..b4e7431e7c78 100644 --- a/kernel/stop_machine.c +++ b/kernel/stop_machine.c @@ -1,17 +1,384 @@ -/* Copyright 2008, 2005 Rusty Russell rusty@rustcorp.com.au IBM Corporation. - * GPL v2 and any later version. 
+/* + * kernel/stop_machine.c + * + * Copyright (C) 2008, 2005 IBM Corporation. + * Copyright (C) 2008, 2005 Rusty Russell rusty@rustcorp.com.au + * Copyright (C) 2010 SUSE Linux Products GmbH + * Copyright (C) 2010 Tejun Heo <tj@kernel.org> + * + * This file is released under the GPLv2 and any later version. */ +#include <linux/completion.h> #include <linux/cpu.h> -#include <linux/err.h> +#include <linux/init.h> #include <linux/kthread.h> #include <linux/module.h> +#include <linux/percpu.h> #include <linux/sched.h> #include <linux/stop_machine.h> -#include <linux/syscalls.h> #include <linux/interrupt.h> +#include <linux/kallsyms.h> #include <asm/atomic.h> -#include <asm/uaccess.h> + +/* + * Structure to determine completion condition and record errors. May + * be shared by works on different cpus. + */ +struct cpu_stop_done { + atomic_t nr_todo; /* nr left to execute */ + bool executed; /* actually executed? */ + int ret; /* collected return value */ + struct completion completion; /* fired if nr_todo reaches 0 */ +}; + +/* the actual stopper, one per every possible cpu, enabled on online cpus */ +struct cpu_stopper { + spinlock_t lock; + struct list_head works; /* list of pending works */ + struct task_struct *thread; /* stopper thread */ + bool enabled; /* is this stopper enabled? */ +}; + +static DEFINE_PER_CPU(struct cpu_stopper, cpu_stopper); + +static void cpu_stop_init_done(struct cpu_stop_done *done, unsigned int nr_todo) +{ + memset(done, 0, sizeof(*done)); + atomic_set(&done->nr_todo, nr_todo); + init_completion(&done->completion); +} + +/* signal completion unless @done is NULL */ +static void cpu_stop_signal_done(struct cpu_stop_done *done, bool executed) +{ + if (done) { + if (executed) + done->executed = true; + if (atomic_dec_and_test(&done->nr_todo)) + complete(&done->completion); + } +} + +/* queue @work to @stopper. if offline, @work is completed immediately */ +static void cpu_stop_queue_work(struct cpu_stopper *stopper, + struct cpu_stop_work *work) +{ + unsigned long flags; + + spin_lock_irqsave(&stopper->lock, flags); + + if (stopper->enabled) { + list_add_tail(&work->list, &stopper->works); + wake_up_process(stopper->thread); + } else + cpu_stop_signal_done(work->done, false); + + spin_unlock_irqrestore(&stopper->lock, flags); +} + +/** + * stop_one_cpu - stop a cpu + * @cpu: cpu to stop + * @fn: function to execute + * @arg: argument to @fn + * + * Execute @fn(@arg) on @cpu. @fn is run in a process context with + * the highest priority preempting any task on the cpu and + * monopolizing it. This function returns after the execution is + * complete. + * + * This function doesn't guarantee @cpu stays online till @fn + * completes. If @cpu goes down in the middle, execution may happen + * partially or fully on different cpus. @fn should either be ready + * for that or the caller should ensure that @cpu stays online until + * this function completes. + * + * CONTEXT: + * Might sleep. + * + * RETURNS: + * -ENOENT if @fn(@arg) was not executed because @cpu was offline; + * otherwise, the return value of @fn. + */ +int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg) +{ + struct cpu_stop_done done; + struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = &done }; + + cpu_stop_init_done(&done, 1); + cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu), &work); + wait_for_completion(&done.completion); + return done.executed ? 
done.ret : -ENOENT; +} + +/** + * stop_one_cpu_nowait - stop a cpu but don't wait for completion + * @cpu: cpu to stop + * @fn: function to execute + * @arg: argument to @fn + * + * Similar to stop_one_cpu() but doesn't wait for completion. The + * caller is responsible for ensuring @work_buf is currently unused + * and will remain untouched until stopper starts executing @fn. + * + * CONTEXT: + * Don't care. + */ +void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg, + struct cpu_stop_work *work_buf) +{ + *work_buf = (struct cpu_stop_work){ .fn = fn, .arg = arg, }; + cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu), work_buf); +} + +/* static data for stop_cpus */ +static DEFINE_MUTEX(stop_cpus_mutex); +static DEFINE_PER_CPU(struct cpu_stop_work, stop_cpus_work); + +int __stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg) +{ + struct cpu_stop_work *work; + struct cpu_stop_done done; + unsigned int cpu; + + /* initialize works and done */ + for_each_cpu(cpu, cpumask) { + work = &per_cpu(stop_cpus_work, cpu); + work->fn = fn; + work->arg = arg; + work->done = &done; + } + cpu_stop_init_done(&done, cpumask_weight(cpumask)); + + /* + * Disable preemption while queueing to avoid getting + * preempted by a stopper which might wait for other stoppers + * to enter @fn which can lead to deadlock. + */ + preempt_disable(); + for_each_cpu(cpu, cpumask) + cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu), + &per_cpu(stop_cpus_work, cpu)); + preempt_enable(); + + wait_for_completion(&done.completion); + return done.executed ? done.ret : -ENOENT; +} + +/** + * stop_cpus - stop multiple cpus + * @cpumask: cpus to stop + * @fn: function to execute + * @arg: argument to @fn + * + * Execute @fn(@arg) on online cpus in @cpumask. On each target cpu, + * @fn is run in a process context with the highest priority + * preempting any task on the cpu and monopolizing it. This function + * returns after all executions are complete. + * + * This function doesn't guarantee the cpus in @cpumask stay online + * till @fn completes. If some cpus go down in the middle, execution + * on the cpu may happen partially or fully on different cpus. @fn + * should either be ready for that or the caller should ensure that + * the cpus stay online until this function completes. + * + * All stop_cpus() calls are serialized making it safe for @fn to wait + * for all cpus to start executing it. + * + * CONTEXT: + * Might sleep. + * + * RETURNS: + * -ENOENT if @fn(@arg) was not executed at all because all cpus in + * @cpumask were offline; otherwise, 0 if all executions of @fn + * returned 0, any non zero return value if any returned non zero. + */ +int stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg) +{ + int ret; + + /* static works are used, process one request at a time */ + mutex_lock(&stop_cpus_mutex); + ret = __stop_cpus(cpumask, fn, arg); + mutex_unlock(&stop_cpus_mutex); + return ret; +} + +/** + * try_stop_cpus - try to stop multiple cpus + * @cpumask: cpus to stop + * @fn: function to execute + * @arg: argument to @fn + * + * Identical to stop_cpus() except that it fails with -EAGAIN if + * someone else is already using the facility. + * + * CONTEXT: + * Might sleep. + * + * RETURNS: + * -EAGAIN if someone else is already stopping cpus, -ENOENT if + * @fn(@arg) was not executed at all because all cpus in @cpumask were + * offline; otherwise, 0 if all executions of @fn returned 0, any non + * zero return value if any returned non zero. 
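stop_one_cpu() and stop_cpus()/try_stop_cpus() share the same bookkeeping: a cpu_stop_done whose nr_todo count is decremented by every stopper, with the last one firing the completion. A condensed userspace model of that accounting; the real code records the callback's return value in the stopper thread rather than in the signalling helper, so this toy merges the two for brevity:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy version of cpu_stop_done: shared by one request fanned out to N CPUs. */
struct toy_done {
        atomic_int nr_todo;     /* callbacks still outstanding */
        bool executed;          /* did at least one callback actually run? */
        int ret;                /* first non-zero return value */
};

static void toy_signal_done(struct toy_done *done, bool executed, int ret)
{
        if (executed)
                done->executed = true;
        if (ret && !done->ret)
                done->ret = ret;
        if (atomic_fetch_sub(&done->nr_todo, 1) == 1)
                printf("last stopper done: executed=%d ret=%d\n",
                       done->executed, done->ret);
}

int main(void)
{
        struct toy_done done = { .executed = false, .ret = 0 };

        atomic_init(&done.nr_todo, 3);
        toy_signal_done(&done, true, 0);        /* "CPU 0" ran the callback */
        toy_signal_done(&done, false, 0);       /* "CPU 1" was offline */
        toy_signal_done(&done, true, -1);       /* "CPU 2" ran it and failed */
        return 0;
}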
+ */ +int try_stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg) +{ + int ret; + + /* static works are used, process one request at a time */ + if (!mutex_trylock(&stop_cpus_mutex)) + return -EAGAIN; + ret = __stop_cpus(cpumask, fn, arg); + mutex_unlock(&stop_cpus_mutex); + return ret; +} + +static int cpu_stopper_thread(void *data) +{ + struct cpu_stopper *stopper = data; + struct cpu_stop_work *work; + int ret; + +repeat: + set_current_state(TASK_INTERRUPTIBLE); /* mb paired w/ kthread_stop */ + + if (kthread_should_stop()) { + __set_current_state(TASK_RUNNING); + return 0; + } + + work = NULL; + spin_lock_irq(&stopper->lock); + if (!list_empty(&stopper->works)) { + work = list_first_entry(&stopper->works, + struct cpu_stop_work, list); + list_del_init(&work->list); + } + spin_unlock_irq(&stopper->lock); + + if (work) { + cpu_stop_fn_t fn = work->fn; + void *arg = work->arg; + struct cpu_stop_done *done = work->done; + char ksym_buf[KSYM_NAME_LEN]; + + __set_current_state(TASK_RUNNING); + + /* cpu stop callbacks are not allowed to sleep */ + preempt_disable(); + + ret = fn(arg); + if (ret) + done->ret = ret; + + /* restore preemption and check it's still balanced */ + preempt_enable(); + WARN_ONCE(preempt_count(), + "cpu_stop: %s(%p) leaked preempt count\n", + kallsyms_lookup((unsigned long)fn, NULL, NULL, NULL, + ksym_buf), arg); + + cpu_stop_signal_done(done, true); + } else + schedule(); + + goto repeat; +} + +/* manage stopper for a cpu, mostly lifted from sched migration thread mgmt */ +static int __cpuinit cpu_stop_cpu_callback(struct notifier_block *nfb, + unsigned long action, void *hcpu) +{ + struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 }; + unsigned int cpu = (unsigned long)hcpu; + struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu); + struct task_struct *p; + + switch (action & ~CPU_TASKS_FROZEN) { + case CPU_UP_PREPARE: + BUG_ON(stopper->thread || stopper->enabled || + !list_empty(&stopper->works)); + p = kthread_create(cpu_stopper_thread, stopper, "migration/%d", + cpu); + if (IS_ERR(p)) + return NOTIFY_BAD; + sched_setscheduler_nocheck(p, SCHED_FIFO, ¶m); + get_task_struct(p); + stopper->thread = p; + break; + + case CPU_ONLINE: + kthread_bind(stopper->thread, cpu); + /* strictly unnecessary, as first user will wake it */ + wake_up_process(stopper->thread); + /* mark enabled */ + spin_lock_irq(&stopper->lock); + stopper->enabled = true; + spin_unlock_irq(&stopper->lock); + break; + +#ifdef CONFIG_HOTPLUG_CPU + case CPU_UP_CANCELED: + case CPU_DEAD: + { + struct cpu_stop_work *work; + + /* kill the stopper */ + kthread_stop(stopper->thread); + /* drain remaining works */ + spin_lock_irq(&stopper->lock); + list_for_each_entry(work, &stopper->works, list) + cpu_stop_signal_done(work->done, false); + stopper->enabled = false; + spin_unlock_irq(&stopper->lock); + /* release the stopper */ + put_task_struct(stopper->thread); + stopper->thread = NULL; + break; + } +#endif + } + + return NOTIFY_OK; +} + +/* + * Give it a higher priority so that cpu stopper is available to other + * cpu notifiers. It currently shares the same priority as sched + * migration_notifier. 
+ */ +static struct notifier_block __cpuinitdata cpu_stop_cpu_notifier = { + .notifier_call = cpu_stop_cpu_callback, + .priority = 10, +}; + +static int __init cpu_stop_init(void) +{ + void *bcpu = (void *)(long)smp_processor_id(); + unsigned int cpu; + int err; + + for_each_possible_cpu(cpu) { + struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu); + + spin_lock_init(&stopper->lock); + INIT_LIST_HEAD(&stopper->works); + } + + /* start one for the boot cpu */ + err = cpu_stop_cpu_callback(&cpu_stop_cpu_notifier, CPU_UP_PREPARE, + bcpu); + BUG_ON(err == NOTIFY_BAD); + cpu_stop_cpu_callback(&cpu_stop_cpu_notifier, CPU_ONLINE, bcpu); + register_cpu_notifier(&cpu_stop_cpu_notifier); + + return 0; +} +early_initcall(cpu_stop_init); + +#ifdef CONFIG_STOP_MACHINE /* This controls the threads on each CPU. */ enum stopmachine_state { @@ -26,174 +393,94 @@ enum stopmachine_state { /* Exit */ STOPMACHINE_EXIT, }; -static enum stopmachine_state state; struct stop_machine_data { - int (*fn)(void *); - void *data; - int fnret; + int (*fn)(void *); + void *data; + /* Like num_online_cpus(), but hotplug cpu uses us, so we need this. */ + unsigned int num_threads; + const struct cpumask *active_cpus; + + enum stopmachine_state state; + atomic_t thread_ack; }; -/* Like num_online_cpus(), but hotplug cpu uses us, so we need this. */ -static unsigned int num_threads; -static atomic_t thread_ack; -static DEFINE_MUTEX(lock); -/* setup_lock protects refcount, stop_machine_wq and stop_machine_work. */ -static DEFINE_MUTEX(setup_lock); -/* Users of stop_machine. */ -static int refcount; -static struct workqueue_struct *stop_machine_wq; -static struct stop_machine_data active, idle; -static const struct cpumask *active_cpus; -static void __percpu *stop_machine_work; - -static void set_state(enum stopmachine_state newstate) +static void set_state(struct stop_machine_data *smdata, + enum stopmachine_state newstate) { /* Reset ack counter. */ - atomic_set(&thread_ack, num_threads); + atomic_set(&smdata->thread_ack, smdata->num_threads); smp_wmb(); - state = newstate; + smdata->state = newstate; } /* Last one to ack a state moves to the next state. */ -static void ack_state(void) +static void ack_state(struct stop_machine_data *smdata) { - if (atomic_dec_and_test(&thread_ack)) - set_state(state + 1); + if (atomic_dec_and_test(&smdata->thread_ack)) + set_state(smdata, smdata->state + 1); } -/* This is the actual function which stops the CPU. It runs - * in the context of a dedicated stopmachine workqueue. */ -static void stop_cpu(struct work_struct *unused) +/* This is the cpu_stop function which stops the CPU. */ +static int stop_machine_cpu_stop(void *data) { + struct stop_machine_data *smdata = data; enum stopmachine_state curstate = STOPMACHINE_NONE; - struct stop_machine_data *smdata = &idle; - int cpu = smp_processor_id(); - int err; + int cpu = smp_processor_id(), err = 0; + bool is_active; + + if (!smdata->active_cpus) + is_active = cpu == cpumask_first(cpu_online_mask); + else + is_active = cpumask_test_cpu(cpu, smdata->active_cpus); - if (!active_cpus) { - if (cpu == cpumask_first(cpu_online_mask)) - smdata = &active; - } else { - if (cpumask_test_cpu(cpu, active_cpus)) - smdata = &active; - } /* Simple state machine */ do { /* Chill out and ensure we re-read stopmachine_state. 
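 *
 * Editor's note (descriptive only): the states advance
 * PREPARE -> DISABLE_IRQ -> RUN -> EXIT; every participating cpu spins
 * in this loop, and the last cpu to ack_state() a given state bumps
 * smdata->state, so all cpus step through the state machine in
 * lockstep before any of them returns.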
*/ cpu_relax(); - if (state != curstate) { - curstate = state; + if (smdata->state != curstate) { + curstate = smdata->state; switch (curstate) { case STOPMACHINE_DISABLE_IRQ: local_irq_disable(); hard_irq_disable(); break; case STOPMACHINE_RUN: - /* On multiple CPUs only a single error code - * is needed to tell that something failed. */ - err = smdata->fn(smdata->data); - if (err) - smdata->fnret = err; + if (is_active) + err = smdata->fn(smdata->data); break; default: break; } - ack_state(); + ack_state(smdata); } } while (curstate != STOPMACHINE_EXIT); local_irq_enable(); + return err; } -/* Callback for CPUs which aren't supposed to do anything. */ -static int chill(void *unused) -{ - return 0; -} - -int stop_machine_create(void) -{ - mutex_lock(&setup_lock); - if (refcount) - goto done; - stop_machine_wq = create_rt_workqueue("kstop"); - if (!stop_machine_wq) - goto err_out; - stop_machine_work = alloc_percpu(struct work_struct); - if (!stop_machine_work) - goto err_out; -done: - refcount++; - mutex_unlock(&setup_lock); - return 0; - -err_out: - if (stop_machine_wq) - destroy_workqueue(stop_machine_wq); - mutex_unlock(&setup_lock); - return -ENOMEM; -} -EXPORT_SYMBOL_GPL(stop_machine_create); - -void stop_machine_destroy(void) -{ - mutex_lock(&setup_lock); - refcount--; - if (refcount) - goto done; - destroy_workqueue(stop_machine_wq); - free_percpu(stop_machine_work); -done: - mutex_unlock(&setup_lock); -} -EXPORT_SYMBOL_GPL(stop_machine_destroy); - int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus) { - struct work_struct *sm_work; - int i, ret; - - /* Set up initial state. */ - mutex_lock(&lock); - num_threads = num_online_cpus(); - active_cpus = cpus; - active.fn = fn; - active.data = data; - active.fnret = 0; - idle.fn = chill; - idle.data = NULL; - - set_state(STOPMACHINE_PREPARE); - - /* Schedule the stop_cpu work on all cpus: hold this CPU so one - * doesn't hit this CPU until we're ready. */ - get_cpu(); - for_each_online_cpu(i) { - sm_work = per_cpu_ptr(stop_machine_work, i); - INIT_WORK(sm_work, stop_cpu); - queue_work_on(i, stop_machine_wq, sm_work); - } - /* This will release the thread on our CPU. */ - put_cpu(); - flush_workqueue(stop_machine_wq); - ret = active.fnret; - mutex_unlock(&lock); - return ret; + struct stop_machine_data smdata = { .fn = fn, .data = data, + .num_threads = num_online_cpus(), + .active_cpus = cpus }; + + /* Set the initial state and stop all online cpus. */ + set_state(&smdata, STOPMACHINE_PREPARE); + return stop_cpus(cpu_online_mask, stop_machine_cpu_stop, &smdata); } int stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus) { int ret; - ret = stop_machine_create(); - if (ret) - return ret; /* No CPUs can come up or down during this. 
*/ get_online_cpus(); ret = __stop_machine(fn, data, cpus); put_online_cpus(); - stop_machine_destroy(); return ret; } EXPORT_SYMBOL_GPL(stop_machine); + +#endif /* CONFIG_STOP_MACHINE */ diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c index f992762d7f51..1d7b9bc1c034 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -150,14 +150,32 @@ static void tick_nohz_update_jiffies(ktime_t now) touch_softlockup_watchdog(); } +/* + * Updates the per cpu time idle statistics counters + */ +static void +update_ts_time_stats(struct tick_sched *ts, ktime_t now, u64 *last_update_time) +{ + ktime_t delta; + + if (ts->idle_active) { + delta = ktime_sub(now, ts->idle_entrytime); + ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta); + if (nr_iowait_cpu() > 0) + ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime, delta); + ts->idle_entrytime = now; + } + + if (last_update_time) + *last_update_time = ktime_to_us(now); + +} + static void tick_nohz_stop_idle(int cpu, ktime_t now) { struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu); - ktime_t delta; - delta = ktime_sub(now, ts->idle_entrytime); - ts->idle_lastupdate = now; - ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta); + update_ts_time_stats(ts, now, NULL); ts->idle_active = 0; sched_clock_idle_wakeup_event(0); @@ -165,20 +183,32 @@ static void tick_nohz_stop_idle(int cpu, ktime_t now) static ktime_t tick_nohz_start_idle(struct tick_sched *ts) { - ktime_t now, delta; + ktime_t now; now = ktime_get(); - if (ts->idle_active) { - delta = ktime_sub(now, ts->idle_entrytime); - ts->idle_lastupdate = now; - ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta); - } + + update_ts_time_stats(ts, now, NULL); + ts->idle_entrytime = now; ts->idle_active = 1; sched_clock_idle_sleep_event(); return now; } +/** + * get_cpu_idle_time_us - get the total idle time of a cpu + * @cpu: CPU number to query + * @last_update_time: variable to store update time in + * + * Return the cummulative idle time (since boot) for a given + * CPU, in microseconds. The idle time returned includes + * the iowait time (unlike what "top" and co report). + * + * This time is measured via accounting rather than sampling, + * and is as accurate as ktime_get() is. + * + * This function returns -1 if NOHZ is not enabled. + */ u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time) { struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu); @@ -186,15 +216,38 @@ u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time) if (!tick_nohz_enabled) return -1; - if (ts->idle_active) - *last_update_time = ktime_to_us(ts->idle_lastupdate); - else - *last_update_time = ktime_to_us(ktime_get()); + update_ts_time_stats(ts, ktime_get(), last_update_time); return ktime_to_us(ts->idle_sleeptime); } EXPORT_SYMBOL_GPL(get_cpu_idle_time_us); +/* + * get_cpu_iowait_time_us - get the total iowait time of a cpu + * @cpu: CPU number to query + * @last_update_time: variable to store update time in + * + * Return the cummulative iowait time (since boot) for a given + * CPU, in microseconds. + * + * This time is measured via accounting rather than sampling, + * and is as accurate as ktime_get() is. + * + * This function returns -1 if NOHZ is not enabled. 
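 *
 * Illustrative usage sketch (editor's note, not part of this patch;
 * wall_prev, idle_prev and busy_us are hypothetical locals kept by the
 * caller): consumers such as cpufreq governors typically sample deltas,
 *
 *      u64 wall_now, idle_now, busy_us;
 *
 *      idle_now = get_cpu_idle_time_us(cpu, &wall_now);
 *      busy_us  = (wall_now - wall_prev) - (idle_now - idle_prev);
 *
 * and must be prepared for the -1 return when NOHZ is not enabled.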
+ */ +u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time) +{ + struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu); + + if (!tick_nohz_enabled) + return -1; + + update_ts_time_stats(ts, ktime_get(), last_update_time); + + return ktime_to_us(ts->iowait_sleeptime); +} +EXPORT_SYMBOL_GPL(get_cpu_iowait_time_us); + /** * tick_nohz_stop_sched_tick - stop the idle tick from the idle task * @@ -262,6 +315,9 @@ void tick_nohz_stop_sched_tick(int inidle) goto end; } + if (nohz_ratelimit(cpu)) + goto end; + ts->idle_calls++; /* Read jiffies and the time when jiffies were updated last */ do { diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c index 1a4a7dd78777..ab8f5e33fa92 100644 --- a/kernel/time/timer_list.c +++ b/kernel/time/timer_list.c @@ -176,6 +176,7 @@ static void print_cpu(struct seq_file *m, int cpu, u64 now) P_ns(idle_waketime); P_ns(idle_exittime); P_ns(idle_sleeptime); + P_ns(iowait_sleeptime); P(last_jiffies); P(next_jiffies); P_ns(idle_expires); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 13e13d428cd3..8b1797c4545b 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -44,9 +44,6 @@ config HAVE_FTRACE_MCOUNT_RECORD help See Documentation/trace/ftrace-design.txt -config HAVE_HW_BRANCH_TRACER - bool - config HAVE_SYSCALL_TRACEPOINTS bool help @@ -374,14 +371,6 @@ config STACK_TRACER Say N if unsure. -config HW_BRANCH_TRACER - depends on HAVE_HW_BRANCH_TRACER - bool "Trace hw branches" - select GENERIC_TRACER - help - This tracer records all branches on the system in a circular - buffer, giving access to the last N branches for each cpu. - config KMEMTRACE bool "Trace SLAB allocations" select GENERIC_TRACER diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile index 78edc6490038..ffb1a5b0550e 100644 --- a/kernel/trace/Makefile +++ b/kernel/trace/Makefile @@ -41,7 +41,6 @@ obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o obj-$(CONFIG_BOOT_TRACER) += trace_boot.o obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += trace_functions_graph.o obj-$(CONFIG_TRACE_BRANCH_PROFILING) += trace_branch.o -obj-$(CONFIG_HW_BRANCH_TRACER) += trace_hw_branches.o obj-$(CONFIG_KMEMTRACE) += kmemtrace.o obj-$(CONFIG_WORKQUEUE_TRACER) += trace_workqueue.o obj-$(CONFIG_BLK_DEV_IO_TRACE) += blktrace.o diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 2404b59b3097..32837e19e3bd 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -264,6 +264,7 @@ struct ftrace_profile { unsigned long counter; #ifdef CONFIG_FUNCTION_GRAPH_TRACER unsigned long long time; + unsigned long long time_squared; #endif }; @@ -366,9 +367,9 @@ static int function_stat_headers(struct seq_file *m) { #ifdef CONFIG_FUNCTION_GRAPH_TRACER seq_printf(m, " Function " - "Hit Time Avg\n" + "Hit Time Avg s^2\n" " -------- " - "--- ---- ---\n"); + "--- ---- --- ---\n"); #else seq_printf(m, " Function Hit\n" " -------- ---\n"); @@ -384,6 +385,7 @@ static int function_stat_show(struct seq_file *m, void *v) static DEFINE_MUTEX(mutex); static struct trace_seq s; unsigned long long avg; + unsigned long long stddev; #endif kallsyms_lookup(rec->ip, NULL, NULL, NULL, str); @@ -394,11 +396,25 @@ static int function_stat_show(struct seq_file *m, void *v) avg = rec->time; do_div(avg, rec->counter); + /* Sample standard deviation (s^2) */ + if (rec->counter <= 1) + stddev = 0; + else { + stddev = rec->time_squared - rec->counter * avg * avg; + /* + * Divide only 1000 for ns^2 -> us^2 conversion. + * trace_print_graph_duration will divide 1000 again. 
+ */ + do_div(stddev, (rec->counter - 1) * 1000); + } + mutex_lock(&mutex); trace_seq_init(&s); trace_print_graph_duration(rec->time, &s); trace_seq_puts(&s, " "); trace_print_graph_duration(avg, &s); + trace_seq_puts(&s, " "); + trace_print_graph_duration(stddev, &s); trace_print_seq(m, &s); mutex_unlock(&mutex); #endif @@ -650,6 +666,10 @@ static void profile_graph_return(struct ftrace_graph_ret *trace) if (!stat->hash || !ftrace_profile_enabled) goto out; + /* If the calltime was zero'd ignore it */ + if (!trace->calltime) + goto out; + calltime = trace->rettime - trace->calltime; if (!(trace_flags & TRACE_ITER_GRAPH_TIME)) { @@ -668,8 +688,10 @@ static void profile_graph_return(struct ftrace_graph_ret *trace) } rec = ftrace_find_profiled_func(stat, trace->func); - if (rec) + if (rec) { rec->time += calltime; + rec->time_squared += calltime * calltime; + } out: local_irq_restore(flags); @@ -3212,8 +3234,7 @@ free: } static void -ftrace_graph_probe_sched_switch(struct rq *__rq, struct task_struct *prev, - struct task_struct *next) +ftrace_graph_probe_sched_switch(struct task_struct *prev, struct task_struct *next) { unsigned long long timestamp; int index; @@ -3339,11 +3360,11 @@ void unregister_ftrace_graph(void) goto out; ftrace_graph_active--; - unregister_trace_sched_switch(ftrace_graph_probe_sched_switch); ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub; ftrace_graph_entry = ftrace_graph_entry_stub; ftrace_shutdown(FTRACE_STOP_FUNC_RET); unregister_pm_notifier(&ftrace_suspend_notifier); + unregister_trace_sched_switch(ftrace_graph_probe_sched_switch); out: mutex_unlock(&ftrace_lock); diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c index 41ca394feb22..7f6059c5aa94 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -319,6 +319,11 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data); #define TS_MASK ((1ULL << TS_SHIFT) - 1) #define TS_DELTA_TEST (~TS_MASK) +/* Flag when events were overwritten */ +#define RB_MISSED_EVENTS (1 << 31) +/* Missed count stored at end */ +#define RB_MISSED_STORED (1 << 30) + struct buffer_data_page { u64 time_stamp; /* page time stamp */ local_t commit; /* write committed index */ @@ -338,6 +343,7 @@ struct buffer_page { local_t write; /* index for next write */ unsigned read; /* index for next read */ local_t entries; /* entries on this page */ + unsigned long real_end; /* real end of data */ struct buffer_data_page *page; /* Actual data page */ }; @@ -417,6 +423,12 @@ int ring_buffer_print_page_header(struct trace_seq *s) (unsigned int)sizeof(field.commit), (unsigned int)is_signed_type(long)); + ret = trace_seq_printf(s, "\tfield: int overwrite;\t" + "offset:%u;\tsize:%u;\tsigned:%u;\n", + (unsigned int)offsetof(typeof(field), commit), + 1, + (unsigned int)is_signed_type(long)); + ret = trace_seq_printf(s, "\tfield: char data;\t" "offset:%u;\tsize:%u;\tsigned:%u;\n", (unsigned int)offsetof(typeof(field), data), @@ -440,6 +452,8 @@ struct ring_buffer_per_cpu { struct buffer_page *tail_page; /* write to tail */ struct buffer_page *commit_page; /* committed pages */ struct buffer_page *reader_page; + unsigned long lost_events; + unsigned long last_overrun; local_t commit_overrun; local_t overrun; local_t entries; @@ -1762,6 +1776,13 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, kmemcheck_annotate_bitfield(event, bitfield); /* + * Save the original length to the meta data. + * This will be used by the reader to add lost event + * counter. 
+ */ + tail_page->real_end = tail; + + /* * If this event is bigger than the minimum size, then * we need to be careful that we don't subtract the * write counter enough to allow another writer to slip @@ -1979,17 +2000,13 @@ rb_add_time_stamp(struct ring_buffer_per_cpu *cpu_buffer, u64 *ts, u64 *delta) { struct ring_buffer_event *event; - static int once; int ret; - if (unlikely(*delta > (1ULL << 59) && !once++)) { - printk(KERN_WARNING "Delta way too big! %llu" - " ts=%llu write stamp = %llu\n", - (unsigned long long)*delta, - (unsigned long long)*ts, - (unsigned long long)cpu_buffer->write_stamp); - WARN_ON(1); - } + WARN_ONCE(*delta > (1ULL << 59), + KERN_WARNING "Delta way too big! %llu ts=%llu write stamp = %llu\n", + (unsigned long long)*delta, + (unsigned long long)*ts, + (unsigned long long)cpu_buffer->write_stamp); /* * The delta is too big, we to add a @@ -2838,6 +2855,7 @@ static struct buffer_page * rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) { struct buffer_page *reader = NULL; + unsigned long overwrite; unsigned long flags; int nr_loops = 0; int ret; @@ -2879,6 +2897,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) local_set(&cpu_buffer->reader_page->write, 0); local_set(&cpu_buffer->reader_page->entries, 0); local_set(&cpu_buffer->reader_page->page->commit, 0); + cpu_buffer->reader_page->real_end = 0; spin: /* @@ -2899,6 +2918,18 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) rb_set_list_to_head(cpu_buffer, &cpu_buffer->reader_page->list); /* + * We want to make sure we read the overruns after we set up our + * pointers to the next object. The writer side does a + * cmpxchg to cross pages which acts as the mb on the writer + * side. Note, the reader will constantly fail the swap + * while the writer is updating the pointers, so this + * guarantees that the overwrite recorded here is the one we + * want to compare with the last_overrun. + */ + smp_mb(); + overwrite = local_read(&(cpu_buffer->overrun)); + + /* * Here's the tricky part. * * We need to move the pointer past the header page. @@ -2929,6 +2960,11 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) cpu_buffer->reader_page = reader; rb_reset_reader_page(cpu_buffer); + if (overwrite != cpu_buffer->last_overrun) { + cpu_buffer->lost_events = overwrite - cpu_buffer->last_overrun; + cpu_buffer->last_overrun = overwrite; + } + goto again; out: @@ -3005,8 +3041,14 @@ static void rb_advance_iter(struct ring_buffer_iter *iter) rb_advance_iter(iter); } +static int rb_lost_events(struct ring_buffer_per_cpu *cpu_buffer) +{ + return cpu_buffer->lost_events; +} + static struct ring_buffer_event * -rb_buffer_peek(struct ring_buffer_per_cpu *cpu_buffer, u64 *ts) +rb_buffer_peek(struct ring_buffer_per_cpu *cpu_buffer, u64 *ts, + unsigned long *lost_events) { struct ring_buffer_event *event; struct buffer_page *reader; @@ -3058,6 +3100,8 @@ rb_buffer_peek(struct ring_buffer_per_cpu *cpu_buffer, u64 *ts) ring_buffer_normalize_time_stamp(cpu_buffer->buffer, cpu_buffer->cpu, ts); } + if (lost_events) + *lost_events = rb_lost_events(cpu_buffer); return event; default: @@ -3168,12 +3212,14 @@ static inline int rb_ok_to_lock(void) * @buffer: The ring buffer to read * @cpu: The cpu to peak at * @ts: The timestamp counter of this event. + * @lost_events: a variable to store if events were lost (may be NULL) * * This will return the event that will be read next, but does * not consume the data. 
*/ struct ring_buffer_event * -ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts) +ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts, + unsigned long *lost_events) { struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu]; struct ring_buffer_event *event; @@ -3188,7 +3234,7 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts) local_irq_save(flags); if (dolock) spin_lock(&cpu_buffer->reader_lock); - event = rb_buffer_peek(cpu_buffer, ts); + event = rb_buffer_peek(cpu_buffer, ts, lost_events); if (event && event->type_len == RINGBUF_TYPE_PADDING) rb_advance_reader(cpu_buffer); if (dolock) @@ -3230,13 +3276,17 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts) /** * ring_buffer_consume - return an event and consume it * @buffer: The ring buffer to get the next event from + * @cpu: the cpu to read the buffer from + * @ts: a variable to store the timestamp (may be NULL) + * @lost_events: a variable to store if events were lost (may be NULL) * * Returns the next event in the ring buffer, and that event is consumed. * Meaning, that sequential reads will keep returning a different event, * and eventually empty the ring buffer if the producer is slower. */ struct ring_buffer_event * -ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts) +ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts, + unsigned long *lost_events) { struct ring_buffer_per_cpu *cpu_buffer; struct ring_buffer_event *event = NULL; @@ -3257,9 +3307,11 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts) if (dolock) spin_lock(&cpu_buffer->reader_lock); - event = rb_buffer_peek(cpu_buffer, ts); - if (event) + event = rb_buffer_peek(cpu_buffer, ts, lost_events); + if (event) { + cpu_buffer->lost_events = 0; rb_advance_reader(cpu_buffer); + } if (dolock) spin_unlock(&cpu_buffer->reader_lock); @@ -3276,23 +3328,30 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts) EXPORT_SYMBOL_GPL(ring_buffer_consume); /** - * ring_buffer_read_start - start a non consuming read of the buffer + * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer * @buffer: The ring buffer to read from * @cpu: The cpu buffer to iterate over * - * This starts up an iteration through the buffer. It also disables - * the recording to the buffer until the reading is finished. - * This prevents the reading from being corrupted. This is not - * a consuming read, so a producer is not expected. + * This performs the initial preparations necessary to iterate + * through the buffer. Memory is allocated, buffer recording + * is disabled, and the iterator pointer is returned to the caller. * - * Must be paired with ring_buffer_finish. + * Disabling buffer recordng prevents the reading from being + * corrupted. This is not a consuming read, so a producer is not + * expected. + * + * After a sequence of ring_buffer_read_prepare calls, the user is + * expected to make at least one call to ring_buffer_prepare_sync. + * Afterwards, ring_buffer_read_start is invoked to get things going + * for real. + * + * This overall must be paired with ring_buffer_finish. 
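 *
 * Illustrative sketch (editor's note, not part of this patch; iter[]
 * is a hypothetical per-cpu array holding the returned iterators):
 * the expected calling sequence when iterating several cpus is roughly
 *
 *      for_each_tracing_cpu(cpu)
 *              iter[cpu] = ring_buffer_read_prepare(buffer, cpu);
 *      ring_buffer_read_prepare_sync();
 *      for_each_tracing_cpu(cpu)
 *              ring_buffer_read_start(iter[cpu]);
 *
 * which matches how __tracing_open() uses the API later in this patch.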
*/ struct ring_buffer_iter * -ring_buffer_read_start(struct ring_buffer *buffer, int cpu) +ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu) { struct ring_buffer_per_cpu *cpu_buffer; struct ring_buffer_iter *iter; - unsigned long flags; if (!cpumask_test_cpu(cpu, buffer->cpumask)) return NULL; @@ -3306,15 +3365,52 @@ ring_buffer_read_start(struct ring_buffer *buffer, int cpu) iter->cpu_buffer = cpu_buffer; atomic_inc(&cpu_buffer->record_disabled); + + return iter; +} +EXPORT_SYMBOL_GPL(ring_buffer_read_prepare); + +/** + * ring_buffer_read_prepare_sync - Synchronize a set of prepare calls + * + * All previously invoked ring_buffer_read_prepare calls to prepare + * iterators will be synchronized. Afterwards, read_buffer_read_start + * calls on those iterators are allowed. + */ +void +ring_buffer_read_prepare_sync(void) +{ synchronize_sched(); +} +EXPORT_SYMBOL_GPL(ring_buffer_read_prepare_sync); + +/** + * ring_buffer_read_start - start a non consuming read of the buffer + * @iter: The iterator returned by ring_buffer_read_prepare + * + * This finalizes the startup of an iteration through the buffer. + * The iterator comes from a call to ring_buffer_read_prepare and + * an intervening ring_buffer_read_prepare_sync must have been + * performed. + * + * Must be paired with ring_buffer_finish. + */ +void +ring_buffer_read_start(struct ring_buffer_iter *iter) +{ + struct ring_buffer_per_cpu *cpu_buffer; + unsigned long flags; + + if (!iter) + return; + + cpu_buffer = iter->cpu_buffer; spin_lock_irqsave(&cpu_buffer->reader_lock, flags); arch_spin_lock(&cpu_buffer->lock); rb_iter_reset(iter); arch_spin_unlock(&cpu_buffer->lock); spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); - - return iter; } EXPORT_SYMBOL_GPL(ring_buffer_read_start); @@ -3408,6 +3504,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) cpu_buffer->write_stamp = 0; cpu_buffer->read_stamp = 0; + cpu_buffer->lost_events = 0; + cpu_buffer->last_overrun = 0; + rb_head_page_activate(cpu_buffer); } @@ -3683,6 +3782,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer, struct ring_buffer_event *event; struct buffer_data_page *bpage; struct buffer_page *reader; + unsigned long missed_events; unsigned long flags; unsigned int commit; unsigned int read; @@ -3719,6 +3819,9 @@ int ring_buffer_read_page(struct ring_buffer *buffer, read = reader->read; commit = rb_page_commit(reader); + /* Check if any events were dropped */ + missed_events = cpu_buffer->lost_events; + /* * If this page has been partially read or * if len is not big enough to read the rest of the page or @@ -3779,9 +3882,35 @@ int ring_buffer_read_page(struct ring_buffer *buffer, local_set(&reader->entries, 0); reader->read = 0; *data_page = bpage; + + /* + * Use the real_end for the data size, + * This gives us a chance to store the lost events + * on the page. + */ + if (reader->real_end) + local_set(&bpage->commit, reader->real_end); } ret = read; + cpu_buffer->lost_events = 0; + /* + * Set a flag in the commit field if we lost events + */ + if (missed_events) { + commit = local_read(&bpage->commit); + + /* If there is room at the end of the page to save the + * missed events, then record it there. 
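 *
 * Editor's note (illustrative only, not part of this patch; missed is
 * a hypothetical unsigned long): a reader of the returned page can
 * recover the lost-event count roughly as
 *
 *      commit = local_read(&bpage->commit);
 *      if (commit & RB_MISSED_EVENTS) {
 *              if (commit & RB_MISSED_STORED)
 *                      memcpy(&missed, &bpage->data[commit & 0xfffff],
 *                             sizeof(missed));
 *              commit &= 0xfffff;
 *      }
 *
 * mirroring the flag/mask convention used by ring_buffer_benchmark.c
 * further down in this patch.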
+ */ + if (BUF_PAGE_SIZE - commit >= sizeof(missed_events)) { + memcpy(&bpage->data[commit], &missed_events, + sizeof(missed_events)); + local_add(RB_MISSED_STORED, &bpage->commit); + } + local_add(RB_MISSED_EVENTS, &bpage->commit); + } + out_unlock: spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c index df74c7982255..302f8a614635 100644 --- a/kernel/trace/ring_buffer_benchmark.c +++ b/kernel/trace/ring_buffer_benchmark.c @@ -81,7 +81,7 @@ static enum event_status read_event(int cpu) int *entry; u64 ts; - event = ring_buffer_consume(buffer, cpu, &ts); + event = ring_buffer_consume(buffer, cpu, &ts, NULL); if (!event) return EVENT_DROPPED; @@ -113,7 +113,8 @@ static enum event_status read_page(int cpu) ret = ring_buffer_read_page(buffer, &bpage, PAGE_SIZE, cpu, 1); if (ret >= 0) { rpage = bpage; - commit = local_read(&rpage->commit); + /* The commit may have missed event flags set, clear them */ + commit = local_read(&rpage->commit) & 0xfffff; for (i = 0; i < commit && !kill_test; i += inc) { if (i >= (PAGE_SIZE - offsetof(struct rb_page, data))) { diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 44f916a04065..756d7283318b 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -117,9 +117,12 @@ static cpumask_var_t __read_mostly tracing_buffer_mask; * * It is default off, but you can enable it with either specifying * "ftrace_dump_on_oops" in the kernel command line, or setting - * /proc/sys/kernel/ftrace_dump_on_oops to true. + * /proc/sys/kernel/ftrace_dump_on_oops + * Set 1 if you want to dump buffers of all CPUs + * Set 2 if you want to dump the buffer of the CPU that triggered oops */ -int ftrace_dump_on_oops; + +enum ftrace_dump_mode ftrace_dump_on_oops; static int tracing_set_tracer(const char *buf); @@ -139,8 +142,17 @@ __setup("ftrace=", set_cmdline_ftrace); static int __init set_ftrace_dump_on_oops(char *str) { - ftrace_dump_on_oops = 1; - return 1; + if (*str++ != '=' || !*str) { + ftrace_dump_on_oops = DUMP_ALL; + return 1; + } + + if (!strcmp("orig_cpu", str)) { + ftrace_dump_on_oops = DUMP_ORIG; + return 1; + } + + return 0; } __setup("ftrace_dump_on_oops", set_ftrace_dump_on_oops); @@ -1545,7 +1557,8 @@ static void trace_iterator_increment(struct trace_iterator *iter) } static struct trace_entry * -peek_next_entry(struct trace_iterator *iter, int cpu, u64 *ts) +peek_next_entry(struct trace_iterator *iter, int cpu, u64 *ts, + unsigned long *lost_events) { struct ring_buffer_event *event; struct ring_buffer_iter *buf_iter = iter->buffer_iter[cpu]; @@ -1556,7 +1569,8 @@ peek_next_entry(struct trace_iterator *iter, int cpu, u64 *ts) if (buf_iter) event = ring_buffer_iter_peek(buf_iter, ts); else - event = ring_buffer_peek(iter->tr->buffer, cpu, ts); + event = ring_buffer_peek(iter->tr->buffer, cpu, ts, + lost_events); ftrace_enable_cpu(); @@ -1564,10 +1578,12 @@ peek_next_entry(struct trace_iterator *iter, int cpu, u64 *ts) } static struct trace_entry * -__find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) +__find_next_entry(struct trace_iterator *iter, int *ent_cpu, + unsigned long *missing_events, u64 *ent_ts) { struct ring_buffer *buffer = iter->tr->buffer; struct trace_entry *ent, *next = NULL; + unsigned long lost_events = 0, next_lost = 0; int cpu_file = iter->cpu_file; u64 next_ts = 0, ts; int next_cpu = -1; @@ -1580,7 +1596,7 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) if (cpu_file > 
TRACE_PIPE_ALL_CPU) { if (ring_buffer_empty_cpu(buffer, cpu_file)) return NULL; - ent = peek_next_entry(iter, cpu_file, ent_ts); + ent = peek_next_entry(iter, cpu_file, ent_ts, missing_events); if (ent_cpu) *ent_cpu = cpu_file; @@ -1592,7 +1608,7 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) if (ring_buffer_empty_cpu(buffer, cpu)) continue; - ent = peek_next_entry(iter, cpu, &ts); + ent = peek_next_entry(iter, cpu, &ts, &lost_events); /* * Pick the entry with the smallest timestamp: @@ -1601,6 +1617,7 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) next = ent; next_cpu = cpu; next_ts = ts; + next_lost = lost_events; } } @@ -1610,6 +1627,9 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) if (ent_ts) *ent_ts = next_ts; + if (missing_events) + *missing_events = next_lost; + return next; } @@ -1617,13 +1637,14 @@ __find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) struct trace_entry *trace_find_next_entry(struct trace_iterator *iter, int *ent_cpu, u64 *ent_ts) { - return __find_next_entry(iter, ent_cpu, ent_ts); + return __find_next_entry(iter, ent_cpu, NULL, ent_ts); } /* Find the next real entry, and increment the iterator to the next entry */ static void *find_next_entry_inc(struct trace_iterator *iter) { - iter->ent = __find_next_entry(iter, &iter->cpu, &iter->ts); + iter->ent = __find_next_entry(iter, &iter->cpu, + &iter->lost_events, &iter->ts); if (iter->ent) trace_iterator_increment(iter); @@ -1635,7 +1656,8 @@ static void trace_consume(struct trace_iterator *iter) { /* Don't allow ftrace to trace into the ring buffers */ ftrace_disable_cpu(); - ring_buffer_consume(iter->tr->buffer, iter->cpu, &iter->ts); + ring_buffer_consume(iter->tr->buffer, iter->cpu, &iter->ts, + &iter->lost_events); ftrace_enable_cpu(); } @@ -1786,7 +1808,7 @@ static void print_func_help_header(struct seq_file *m) } -static void +void print_trace_header(struct seq_file *m, struct trace_iterator *iter) { unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK); @@ -1995,7 +2017,7 @@ static enum print_line_t print_bin_fmt(struct trace_iterator *iter) return event ? 
event->binary(iter, 0) : TRACE_TYPE_HANDLED; } -static int trace_empty(struct trace_iterator *iter) +int trace_empty(struct trace_iterator *iter) { int cpu; @@ -2030,6 +2052,10 @@ static enum print_line_t print_trace_line(struct trace_iterator *iter) { enum print_line_t ret; + if (iter->lost_events) + trace_seq_printf(&iter->seq, "CPU:%d [LOST %lu EVENTS]\n", + iter->cpu, iter->lost_events); + if (iter->trace && iter->trace->print_line) { ret = iter->trace->print_line(iter); if (ret != TRACE_TYPE_UNHANDLED) @@ -2058,6 +2084,23 @@ static enum print_line_t print_trace_line(struct trace_iterator *iter) return print_trace_fmt(iter); } +void trace_default_header(struct seq_file *m) +{ + struct trace_iterator *iter = m->private; + + if (iter->iter_flags & TRACE_FILE_LAT_FMT) { + /* print nothing if the buffers are empty */ + if (trace_empty(iter)) + return; + print_trace_header(m, iter); + if (!(trace_flags & TRACE_ITER_VERBOSE)) + print_lat_help_header(m); + } else { + if (!(trace_flags & TRACE_ITER_VERBOSE)) + print_func_help_header(m); + } +} + static int s_show(struct seq_file *m, void *v) { struct trace_iterator *iter = v; @@ -2070,17 +2113,9 @@ static int s_show(struct seq_file *m, void *v) } if (iter->trace && iter->trace->print_header) iter->trace->print_header(m); - else if (iter->iter_flags & TRACE_FILE_LAT_FMT) { - /* print nothing if the buffers are empty */ - if (trace_empty(iter)) - return 0; - print_trace_header(m, iter); - if (!(trace_flags & TRACE_ITER_VERBOSE)) - print_lat_help_header(m); - } else { - if (!(trace_flags & TRACE_ITER_VERBOSE)) - print_func_help_header(m); - } + else + trace_default_header(m); + } else if (iter->leftover) { /* * If we filled the seq_file buffer earlier, we @@ -2166,15 +2201,20 @@ __tracing_open(struct inode *inode, struct file *file) if (iter->cpu_file == TRACE_PIPE_ALL_CPU) { for_each_tracing_cpu(cpu) { - iter->buffer_iter[cpu] = - ring_buffer_read_start(iter->tr->buffer, cpu); + ring_buffer_read_prepare(iter->tr->buffer, cpu); + } + ring_buffer_read_prepare_sync(); + for_each_tracing_cpu(cpu) { + ring_buffer_read_start(iter->buffer_iter[cpu]); tracing_iter_reset(iter, cpu); } } else { cpu = iter->cpu_file; iter->buffer_iter[cpu] = - ring_buffer_read_start(iter->tr->buffer, cpu); + ring_buffer_read_prepare(iter->tr->buffer, cpu); + ring_buffer_read_prepare_sync(); + ring_buffer_read_start(iter->buffer_iter[cpu]); tracing_iter_reset(iter, cpu); } @@ -4324,7 +4364,7 @@ static int trace_panic_handler(struct notifier_block *this, unsigned long event, void *unused) { if (ftrace_dump_on_oops) - ftrace_dump(); + ftrace_dump(ftrace_dump_on_oops); return NOTIFY_OK; } @@ -4341,7 +4381,7 @@ static int trace_die_handler(struct notifier_block *self, switch (val) { case DIE_OOPS: if (ftrace_dump_on_oops) - ftrace_dump(); + ftrace_dump(ftrace_dump_on_oops); break; default: break; @@ -4382,7 +4422,8 @@ trace_printk_seq(struct trace_seq *s) trace_seq_init(s); } -static void __ftrace_dump(bool disable_tracing) +static void +__ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode) { static arch_spinlock_t ftrace_dump_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; @@ -4415,12 +4456,25 @@ static void __ftrace_dump(bool disable_tracing) /* don't look at user memory in panic mode */ trace_flags &= ~TRACE_ITER_SYM_USEROBJ; - printk(KERN_TRACE "Dumping ftrace buffer:\n"); - /* Simulate the iterator */ iter.tr = &global_trace; iter.trace = current_trace; - iter.cpu_file = TRACE_PIPE_ALL_CPU; + + switch (oops_dump_mode) { + case DUMP_ALL: + 
iter.cpu_file = TRACE_PIPE_ALL_CPU; + break; + case DUMP_ORIG: + iter.cpu_file = raw_smp_processor_id(); + break; + case DUMP_NONE: + goto out_enable; + default: + printk(KERN_TRACE "Bad dumping mode, switching to all CPUs dump\n"); + iter.cpu_file = TRACE_PIPE_ALL_CPU; + } + + printk(KERN_TRACE "Dumping ftrace buffer:\n"); /* * We need to stop all tracing on all CPUS to read the @@ -4459,6 +4513,7 @@ static void __ftrace_dump(bool disable_tracing) else printk(KERN_TRACE "---------------------------------\n"); + out_enable: /* Re-enable tracing if requested */ if (!disable_tracing) { trace_flags |= old_userobj; @@ -4475,9 +4530,9 @@ static void __ftrace_dump(bool disable_tracing) } /* By default: disable tracing after the dump */ -void ftrace_dump(void) +void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) { - __ftrace_dump(true); + __ftrace_dump(true, oops_dump_mode); } __init static int tracer_alloc_buffers(void) diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 2825ef2c0b15..d1ce0bec1b3f 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -34,7 +34,6 @@ enum trace_type { TRACE_GRAPH_RET, TRACE_GRAPH_ENT, TRACE_USER_STACK, - TRACE_HW_BRANCHES, TRACE_KMEM_ALLOC, TRACE_KMEM_FREE, TRACE_BLK, @@ -103,29 +102,17 @@ struct syscall_trace_exit { long ret; }; -struct kprobe_trace_entry { +struct kprobe_trace_entry_head { struct trace_entry ent; unsigned long ip; - int nargs; - unsigned long args[]; }; -#define SIZEOF_KPROBE_TRACE_ENTRY(n) \ - (offsetof(struct kprobe_trace_entry, args) + \ - (sizeof(unsigned long) * (n))) - -struct kretprobe_trace_entry { +struct kretprobe_trace_entry_head { struct trace_entry ent; unsigned long func; unsigned long ret_ip; - int nargs; - unsigned long args[]; }; -#define SIZEOF_KRETPROBE_TRACE_ENTRY(n) \ - (offsetof(struct kretprobe_trace_entry, args) + \ - (sizeof(unsigned long) * (n))) - /* * trace_flag_type is an enumeration that holds different * states when a trace occurs. 
These are: @@ -229,7 +216,6 @@ extern void __ftrace_bad_type(void); TRACE_GRAPH_ENT); \ IF_ASSIGN(var, ent, struct ftrace_graph_ret_entry, \ TRACE_GRAPH_RET); \ - IF_ASSIGN(var, ent, struct hw_branch_entry, TRACE_HW_BRANCHES);\ IF_ASSIGN(var, ent, struct kmemtrace_alloc_entry, \ TRACE_KMEM_ALLOC); \ IF_ASSIGN(var, ent, struct kmemtrace_free_entry, \ @@ -378,6 +364,9 @@ void trace_function(struct trace_array *tr, unsigned long ip, unsigned long parent_ip, unsigned long flags, int pc); +void trace_default_header(struct seq_file *m); +void print_trace_header(struct seq_file *m, struct trace_iterator *iter); +int trace_empty(struct trace_iterator *iter); void trace_graph_return(struct ftrace_graph_ret *trace); int trace_graph_entry(struct ftrace_graph_ent *trace); @@ -467,8 +456,6 @@ extern int trace_selftest_startup_sysprof(struct tracer *trace, struct trace_array *tr); extern int trace_selftest_startup_branch(struct tracer *trace, struct trace_array *tr); -extern int trace_selftest_startup_hw_branches(struct tracer *trace, - struct trace_array *tr); extern int trace_selftest_startup_ksym(struct tracer *trace, struct trace_array *tr); #endif /* CONFIG_FTRACE_STARTUP_TEST */ @@ -491,9 +478,29 @@ extern int trace_clock_id; /* Standard output formatting function used for function return traces */ #ifdef CONFIG_FUNCTION_GRAPH_TRACER -extern enum print_line_t print_graph_function(struct trace_iterator *iter); + +/* Flag options */ +#define TRACE_GRAPH_PRINT_OVERRUN 0x1 +#define TRACE_GRAPH_PRINT_CPU 0x2 +#define TRACE_GRAPH_PRINT_OVERHEAD 0x4 +#define TRACE_GRAPH_PRINT_PROC 0x8 +#define TRACE_GRAPH_PRINT_DURATION 0x10 +#define TRACE_GRAPH_PRINT_ABS_TIME 0x20 + +extern enum print_line_t +print_graph_function_flags(struct trace_iterator *iter, u32 flags); +extern void print_graph_headers_flags(struct seq_file *s, u32 flags); extern enum print_line_t trace_print_graph_duration(unsigned long long duration, struct trace_seq *s); +extern void graph_trace_open(struct trace_iterator *iter); +extern void graph_trace_close(struct trace_iterator *iter); +extern int __trace_graph_entry(struct trace_array *tr, + struct ftrace_graph_ent *trace, + unsigned long flags, int pc); +extern void __trace_graph_return(struct trace_array *tr, + struct ftrace_graph_ret *trace, + unsigned long flags, int pc); + #ifdef CONFIG_DYNAMIC_FTRACE /* TODO: make this variable */ @@ -524,7 +531,7 @@ static inline int ftrace_graph_addr(unsigned long addr) #endif /* CONFIG_DYNAMIC_FTRACE */ #else /* CONFIG_FUNCTION_GRAPH_TRACER */ static inline enum print_line_t -print_graph_function(struct trace_iterator *iter) +print_graph_function_flags(struct trace_iterator *iter, u32 flags) { return TRACE_TYPE_UNHANDLED; } diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h index c16a08f399df..dc008c1240da 100644 --- a/kernel/trace/trace_entries.h +++ b/kernel/trace/trace_entries.h @@ -318,18 +318,6 @@ FTRACE_ENTRY(branch, trace_branch, __entry->func, __entry->file, __entry->correct) ); -FTRACE_ENTRY(hw_branch, hw_branch_entry, - - TRACE_HW_BRANCHES, - - F_STRUCT( - __field( u64, from ) - __field( u64, to ) - ), - - F_printk("from: %llx to: %llx", __entry->from, __entry->to) -); - FTRACE_ENTRY(kmem_alloc, kmemtrace_alloc_entry, TRACE_KMEM_ALLOC, diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c index 88c0b6dbd7fe..58092d844a1f 100644 --- a/kernel/trace/trace_events_filter.c +++ b/kernel/trace/trace_events_filter.c @@ -1398,7 +1398,7 @@ int ftrace_profile_set_filter(struct perf_event *event, 
int event_id, } err = -EINVAL; - if (!call) + if (&call->list == &ftrace_events) goto out_unlock; err = -EEXIST; diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 9aed1a5cf553..dd11c830eb84 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -40,7 +40,7 @@ struct fgraph_data { #define TRACE_GRAPH_PRINT_OVERHEAD 0x4 #define TRACE_GRAPH_PRINT_PROC 0x8 #define TRACE_GRAPH_PRINT_DURATION 0x10 -#define TRACE_GRAPH_PRINT_ABS_TIME 0X20 +#define TRACE_GRAPH_PRINT_ABS_TIME 0x20 static struct tracer_opt trace_opts[] = { /* Display overruns? (for self-debug purpose) */ @@ -179,7 +179,7 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer) return ret; } -static int __trace_graph_entry(struct trace_array *tr, +int __trace_graph_entry(struct trace_array *tr, struct ftrace_graph_ent *trace, unsigned long flags, int pc) @@ -246,7 +246,7 @@ int trace_graph_thresh_entry(struct ftrace_graph_ent *trace) return trace_graph_entry(trace); } -static void __trace_graph_return(struct trace_array *tr, +void __trace_graph_return(struct trace_array *tr, struct ftrace_graph_ret *trace, unsigned long flags, int pc) @@ -490,9 +490,10 @@ get_return_for_leaf(struct trace_iterator *iter, * We need to consume the current entry to see * the next one. */ - ring_buffer_consume(iter->tr->buffer, iter->cpu, NULL); + ring_buffer_consume(iter->tr->buffer, iter->cpu, + NULL, NULL); event = ring_buffer_peek(iter->tr->buffer, iter->cpu, - NULL); + NULL, NULL); } if (!event) @@ -526,17 +527,18 @@ get_return_for_leaf(struct trace_iterator *iter, /* Signal a overhead of time execution to the output */ static int -print_graph_overhead(unsigned long long duration, struct trace_seq *s) +print_graph_overhead(unsigned long long duration, struct trace_seq *s, + u32 flags) { /* If duration disappear, we don't need anything */ - if (!(tracer_flags.val & TRACE_GRAPH_PRINT_DURATION)) + if (!(flags & TRACE_GRAPH_PRINT_DURATION)) return 1; /* Non nested entry or return */ if (duration == -1) return trace_seq_printf(s, " "); - if (tracer_flags.val & TRACE_GRAPH_PRINT_OVERHEAD) { + if (flags & TRACE_GRAPH_PRINT_OVERHEAD) { /* Duration exceeded 100 msecs */ if (duration > 100000ULL) return trace_seq_printf(s, "! 
"); @@ -562,7 +564,7 @@ static int print_graph_abs_time(u64 t, struct trace_seq *s) static enum print_line_t print_graph_irq(struct trace_iterator *iter, unsigned long addr, - enum trace_type type, int cpu, pid_t pid) + enum trace_type type, int cpu, pid_t pid, u32 flags) { int ret; struct trace_seq *s = &iter->seq; @@ -572,21 +574,21 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr, return TRACE_TYPE_UNHANDLED; /* Absolute time */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_ABS_TIME) { + if (flags & TRACE_GRAPH_PRINT_ABS_TIME) { ret = print_graph_abs_time(iter->ts, s); if (!ret) return TRACE_TYPE_PARTIAL_LINE; } /* Cpu */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_CPU) { + if (flags & TRACE_GRAPH_PRINT_CPU) { ret = print_graph_cpu(s, cpu); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; } /* Proc */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) { + if (flags & TRACE_GRAPH_PRINT_PROC) { ret = print_graph_proc(s, pid); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; @@ -596,7 +598,7 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr, } /* No overhead */ - ret = print_graph_overhead(-1, s); + ret = print_graph_overhead(-1, s, flags); if (!ret) return TRACE_TYPE_PARTIAL_LINE; @@ -609,7 +611,7 @@ print_graph_irq(struct trace_iterator *iter, unsigned long addr, return TRACE_TYPE_PARTIAL_LINE; /* Don't close the duration column if haven't one */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) + if (flags & TRACE_GRAPH_PRINT_DURATION) trace_seq_printf(s, " |"); ret = trace_seq_printf(s, "\n"); @@ -679,7 +681,8 @@ print_graph_duration(unsigned long long duration, struct trace_seq *s) static enum print_line_t print_graph_entry_leaf(struct trace_iterator *iter, struct ftrace_graph_ent_entry *entry, - struct ftrace_graph_ret_entry *ret_entry, struct trace_seq *s) + struct ftrace_graph_ret_entry *ret_entry, + struct trace_seq *s, u32 flags) { struct fgraph_data *data = iter->private; struct ftrace_graph_ret *graph_ret; @@ -711,12 +714,12 @@ print_graph_entry_leaf(struct trace_iterator *iter, } /* Overhead */ - ret = print_graph_overhead(duration, s); + ret = print_graph_overhead(duration, s, flags); if (!ret) return TRACE_TYPE_PARTIAL_LINE; /* Duration */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) { + if (flags & TRACE_GRAPH_PRINT_DURATION) { ret = print_graph_duration(duration, s); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; @@ -739,7 +742,7 @@ print_graph_entry_leaf(struct trace_iterator *iter, static enum print_line_t print_graph_entry_nested(struct trace_iterator *iter, struct ftrace_graph_ent_entry *entry, - struct trace_seq *s, int cpu) + struct trace_seq *s, int cpu, u32 flags) { struct ftrace_graph_ent *call = &entry->graph_ent; struct fgraph_data *data = iter->private; @@ -759,12 +762,12 @@ print_graph_entry_nested(struct trace_iterator *iter, } /* No overhead */ - ret = print_graph_overhead(-1, s); + ret = print_graph_overhead(-1, s, flags); if (!ret) return TRACE_TYPE_PARTIAL_LINE; /* No time */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) { + if (flags & TRACE_GRAPH_PRINT_DURATION) { ret = trace_seq_printf(s, " | "); if (!ret) return TRACE_TYPE_PARTIAL_LINE; @@ -790,7 +793,7 @@ print_graph_entry_nested(struct trace_iterator *iter, static enum print_line_t print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s, - int type, unsigned long addr) + int type, unsigned long addr, u32 flags) { struct fgraph_data *data = iter->private; struct trace_entry *ent = 
iter->ent; @@ -803,27 +806,27 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s, if (type) { /* Interrupt */ - ret = print_graph_irq(iter, addr, type, cpu, ent->pid); + ret = print_graph_irq(iter, addr, type, cpu, ent->pid, flags); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; } /* Absolute time */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_ABS_TIME) { + if (flags & TRACE_GRAPH_PRINT_ABS_TIME) { ret = print_graph_abs_time(iter->ts, s); if (!ret) return TRACE_TYPE_PARTIAL_LINE; } /* Cpu */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_CPU) { + if (flags & TRACE_GRAPH_PRINT_CPU) { ret = print_graph_cpu(s, cpu); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; } /* Proc */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) { + if (flags & TRACE_GRAPH_PRINT_PROC) { ret = print_graph_proc(s, ent->pid); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; @@ -845,7 +848,7 @@ print_graph_prologue(struct trace_iterator *iter, struct trace_seq *s, static enum print_line_t print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s, - struct trace_iterator *iter) + struct trace_iterator *iter, u32 flags) { struct fgraph_data *data = iter->private; struct ftrace_graph_ent *call = &field->graph_ent; @@ -853,14 +856,14 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s, static enum print_line_t ret; int cpu = iter->cpu; - if (print_graph_prologue(iter, s, TRACE_GRAPH_ENT, call->func)) + if (print_graph_prologue(iter, s, TRACE_GRAPH_ENT, call->func, flags)) return TRACE_TYPE_PARTIAL_LINE; leaf_ret = get_return_for_leaf(iter, field); if (leaf_ret) - ret = print_graph_entry_leaf(iter, field, leaf_ret, s); + ret = print_graph_entry_leaf(iter, field, leaf_ret, s, flags); else - ret = print_graph_entry_nested(iter, field, s, cpu); + ret = print_graph_entry_nested(iter, field, s, cpu, flags); if (data) { /* @@ -879,7 +882,8 @@ print_graph_entry(struct ftrace_graph_ent_entry *field, struct trace_seq *s, static enum print_line_t print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s, - struct trace_entry *ent, struct trace_iterator *iter) + struct trace_entry *ent, struct trace_iterator *iter, + u32 flags) { unsigned long long duration = trace->rettime - trace->calltime; struct fgraph_data *data = iter->private; @@ -909,16 +913,16 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s, } } - if (print_graph_prologue(iter, s, 0, 0)) + if (print_graph_prologue(iter, s, 0, 0, flags)) return TRACE_TYPE_PARTIAL_LINE; /* Overhead */ - ret = print_graph_overhead(duration, s); + ret = print_graph_overhead(duration, s, flags); if (!ret) return TRACE_TYPE_PARTIAL_LINE; /* Duration */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) { + if (flags & TRACE_GRAPH_PRINT_DURATION) { ret = print_graph_duration(duration, s); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; @@ -948,14 +952,15 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s, } /* Overrun */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_OVERRUN) { + if (flags & TRACE_GRAPH_PRINT_OVERRUN) { ret = trace_seq_printf(s, " (Overruns: %lu)\n", trace->overrun); if (!ret) return TRACE_TYPE_PARTIAL_LINE; } - ret = print_graph_irq(iter, trace->func, TRACE_GRAPH_RET, cpu, pid); + ret = print_graph_irq(iter, trace->func, TRACE_GRAPH_RET, + cpu, pid, flags); if (ret == TRACE_TYPE_PARTIAL_LINE) return TRACE_TYPE_PARTIAL_LINE; @@ -963,8 +968,8 @@ print_graph_return(struct 
ftrace_graph_ret *trace, struct trace_seq *s, } static enum print_line_t -print_graph_comment(struct trace_seq *s, struct trace_entry *ent, - struct trace_iterator *iter) +print_graph_comment(struct trace_seq *s, struct trace_entry *ent, + struct trace_iterator *iter, u32 flags) { unsigned long sym_flags = (trace_flags & TRACE_ITER_SYM_MASK); struct fgraph_data *data = iter->private; @@ -976,16 +981,16 @@ print_graph_comment(struct trace_seq *s, struct trace_entry *ent, if (data) depth = per_cpu_ptr(data->cpu_data, iter->cpu)->depth; - if (print_graph_prologue(iter, s, 0, 0)) + if (print_graph_prologue(iter, s, 0, 0, flags)) return TRACE_TYPE_PARTIAL_LINE; /* No overhead */ - ret = print_graph_overhead(-1, s); + ret = print_graph_overhead(-1, s, flags); if (!ret) return TRACE_TYPE_PARTIAL_LINE; /* No time */ - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) { + if (flags & TRACE_GRAPH_PRINT_DURATION) { ret = trace_seq_printf(s, " | "); if (!ret) return TRACE_TYPE_PARTIAL_LINE; @@ -1040,7 +1045,7 @@ print_graph_comment(struct trace_seq *s, struct trace_entry *ent, enum print_line_t -print_graph_function(struct trace_iterator *iter) +print_graph_function_flags(struct trace_iterator *iter, u32 flags) { struct ftrace_graph_ent_entry *field; struct fgraph_data *data = iter->private; @@ -1061,7 +1066,7 @@ print_graph_function(struct trace_iterator *iter) if (data && data->failed) { field = &data->ent; iter->cpu = data->cpu; - ret = print_graph_entry(field, s, iter); + ret = print_graph_entry(field, s, iter, flags); if (ret == TRACE_TYPE_HANDLED && iter->cpu != cpu) { per_cpu_ptr(data->cpu_data, iter->cpu)->ignore = 1; ret = TRACE_TYPE_NO_CONSUME; @@ -1081,32 +1086,49 @@ print_graph_function(struct trace_iterator *iter) struct ftrace_graph_ent_entry saved; trace_assign_type(field, entry); saved = *field; - return print_graph_entry(&saved, s, iter); + return print_graph_entry(&saved, s, iter, flags); } case TRACE_GRAPH_RET: { struct ftrace_graph_ret_entry *field; trace_assign_type(field, entry); - return print_graph_return(&field->ret, s, entry, iter); + return print_graph_return(&field->ret, s, entry, iter, flags); } + case TRACE_STACK: + case TRACE_FN: + /* dont trace stack and functions as comments */ + return TRACE_TYPE_UNHANDLED; + default: - return print_graph_comment(s, entry, iter); + return print_graph_comment(s, entry, iter, flags); } return TRACE_TYPE_HANDLED; } -static void print_lat_header(struct seq_file *s) +static enum print_line_t +print_graph_function(struct trace_iterator *iter) +{ + return print_graph_function_flags(iter, tracer_flags.val); +} + +static enum print_line_t +print_graph_function_event(struct trace_iterator *iter, int flags) +{ + return print_graph_function(iter); +} + +static void print_lat_header(struct seq_file *s, u32 flags) { static const char spaces[] = " " /* 16 spaces */ " " /* 4 spaces */ " "; /* 17 spaces */ int size = 0; - if (tracer_flags.val & TRACE_GRAPH_PRINT_ABS_TIME) + if (flags & TRACE_GRAPH_PRINT_ABS_TIME) size += 16; - if (tracer_flags.val & TRACE_GRAPH_PRINT_CPU) + if (flags & TRACE_GRAPH_PRINT_CPU) size += 4; - if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) + if (flags & TRACE_GRAPH_PRINT_PROC) size += 17; seq_printf(s, "#%.*s _-----=> irqs-off \n", size, spaces); @@ -1117,43 +1139,48 @@ static void print_lat_header(struct seq_file *s) seq_printf(s, "#%.*s|||| / \n", size, spaces); } -static void print_graph_headers(struct seq_file *s) +void print_graph_headers_flags(struct seq_file *s, u32 flags) { int lat = trace_flags & 
TRACE_ITER_LATENCY_FMT; if (lat) - print_lat_header(s); + print_lat_header(s, flags); /* 1st line */ seq_printf(s, "#"); - if (tracer_flags.val & TRACE_GRAPH_PRINT_ABS_TIME) + if (flags & TRACE_GRAPH_PRINT_ABS_TIME) seq_printf(s, " TIME "); - if (tracer_flags.val & TRACE_GRAPH_PRINT_CPU) + if (flags & TRACE_GRAPH_PRINT_CPU) seq_printf(s, " CPU"); - if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) + if (flags & TRACE_GRAPH_PRINT_PROC) seq_printf(s, " TASK/PID "); if (lat) seq_printf(s, "|||||"); - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) + if (flags & TRACE_GRAPH_PRINT_DURATION) seq_printf(s, " DURATION "); seq_printf(s, " FUNCTION CALLS\n"); /* 2nd line */ seq_printf(s, "#"); - if (tracer_flags.val & TRACE_GRAPH_PRINT_ABS_TIME) + if (flags & TRACE_GRAPH_PRINT_ABS_TIME) seq_printf(s, " | "); - if (tracer_flags.val & TRACE_GRAPH_PRINT_CPU) + if (flags & TRACE_GRAPH_PRINT_CPU) seq_printf(s, " | "); - if (tracer_flags.val & TRACE_GRAPH_PRINT_PROC) + if (flags & TRACE_GRAPH_PRINT_PROC) seq_printf(s, " | | "); if (lat) seq_printf(s, "|||||"); - if (tracer_flags.val & TRACE_GRAPH_PRINT_DURATION) + if (flags & TRACE_GRAPH_PRINT_DURATION) seq_printf(s, " | | "); seq_printf(s, " | | | |\n"); } -static void graph_trace_open(struct trace_iterator *iter) +void print_graph_headers(struct seq_file *s) +{ + print_graph_headers_flags(s, tracer_flags.val); +} + +void graph_trace_open(struct trace_iterator *iter) { /* pid and depth on the last trace processed */ struct fgraph_data *data; @@ -1188,7 +1215,7 @@ static void graph_trace_open(struct trace_iterator *iter) pr_warning("function graph tracer: not enough memory\n"); } -static void graph_trace_close(struct trace_iterator *iter) +void graph_trace_close(struct trace_iterator *iter) { struct fgraph_data *data = iter->private; @@ -1198,6 +1225,16 @@ static void graph_trace_close(struct trace_iterator *iter) } } +static struct trace_event graph_trace_entry_event = { + .type = TRACE_GRAPH_ENT, + .trace = print_graph_function_event, +}; + +static struct trace_event graph_trace_ret_event = { + .type = TRACE_GRAPH_RET, + .trace = print_graph_function_event, +}; + static struct tracer graph_trace __read_mostly = { .name = "function_graph", .open = graph_trace_open, @@ -1219,6 +1256,16 @@ static __init int init_graph_trace(void) { max_bytes_for_cpu = snprintf(NULL, 0, "%d", nr_cpu_ids - 1); + if (!register_ftrace_event(&graph_trace_entry_event)) { + pr_warning("Warning: could not register graph trace events\n"); + return 1; + } + + if (!register_ftrace_event(&graph_trace_ret_event)) { + pr_warning("Warning: could not register graph trace events\n"); + return 1; + } + return register_tracer(&graph_trace); } diff --git a/kernel/trace/trace_hw_branches.c b/kernel/trace/trace_hw_branches.c deleted file mode 100644 index 7b97000745f5..000000000000 --- a/kernel/trace/trace_hw_branches.c +++ /dev/null @@ -1,312 +0,0 @@ -/* - * h/w branch tracer for x86 based on BTS - * - * Copyright (C) 2008-2009 Intel Corporation. 
- * Markus Metzger <markus.t.metzger@gmail.com>, 2008-2009 - */ -#include <linux/kallsyms.h> -#include <linux/debugfs.h> -#include <linux/ftrace.h> -#include <linux/module.h> -#include <linux/cpu.h> -#include <linux/smp.h> -#include <linux/fs.h> - -#include <asm/ds.h> - -#include "trace_output.h" -#include "trace.h" - - -#define BTS_BUFFER_SIZE (1 << 13) - -static DEFINE_PER_CPU(struct bts_tracer *, hwb_tracer); -static DEFINE_PER_CPU(unsigned char[BTS_BUFFER_SIZE], hwb_buffer); - -#define this_tracer per_cpu(hwb_tracer, smp_processor_id()) - -static int trace_hw_branches_enabled __read_mostly; -static int trace_hw_branches_suspended __read_mostly; -static struct trace_array *hw_branch_trace __read_mostly; - - -static void bts_trace_init_cpu(int cpu) -{ - per_cpu(hwb_tracer, cpu) = - ds_request_bts_cpu(cpu, per_cpu(hwb_buffer, cpu), - BTS_BUFFER_SIZE, NULL, (size_t)-1, - BTS_KERNEL); - - if (IS_ERR(per_cpu(hwb_tracer, cpu))) - per_cpu(hwb_tracer, cpu) = NULL; -} - -static int bts_trace_init(struct trace_array *tr) -{ - int cpu; - - hw_branch_trace = tr; - trace_hw_branches_enabled = 0; - - get_online_cpus(); - for_each_online_cpu(cpu) { - bts_trace_init_cpu(cpu); - - if (likely(per_cpu(hwb_tracer, cpu))) - trace_hw_branches_enabled = 1; - } - trace_hw_branches_suspended = 0; - put_online_cpus(); - - /* If we could not enable tracing on a single cpu, we fail. */ - return trace_hw_branches_enabled ? 0 : -EOPNOTSUPP; -} - -static void bts_trace_reset(struct trace_array *tr) -{ - int cpu; - - get_online_cpus(); - for_each_online_cpu(cpu) { - if (likely(per_cpu(hwb_tracer, cpu))) { - ds_release_bts(per_cpu(hwb_tracer, cpu)); - per_cpu(hwb_tracer, cpu) = NULL; - } - } - trace_hw_branches_enabled = 0; - trace_hw_branches_suspended = 0; - put_online_cpus(); -} - -static void bts_trace_start(struct trace_array *tr) -{ - int cpu; - - get_online_cpus(); - for_each_online_cpu(cpu) - if (likely(per_cpu(hwb_tracer, cpu))) - ds_resume_bts(per_cpu(hwb_tracer, cpu)); - trace_hw_branches_suspended = 0; - put_online_cpus(); -} - -static void bts_trace_stop(struct trace_array *tr) -{ - int cpu; - - get_online_cpus(); - for_each_online_cpu(cpu) - if (likely(per_cpu(hwb_tracer, cpu))) - ds_suspend_bts(per_cpu(hwb_tracer, cpu)); - trace_hw_branches_suspended = 1; - put_online_cpus(); -} - -static int __cpuinit bts_hotcpu_handler(struct notifier_block *nfb, - unsigned long action, void *hcpu) -{ - int cpu = (long)hcpu; - - switch (action) { - case CPU_ONLINE: - case CPU_DOWN_FAILED: - /* The notification is sent with interrupts enabled. */ - if (trace_hw_branches_enabled) { - bts_trace_init_cpu(cpu); - - if (trace_hw_branches_suspended && - likely(per_cpu(hwb_tracer, cpu))) - ds_suspend_bts(per_cpu(hwb_tracer, cpu)); - } - break; - - case CPU_DOWN_PREPARE: - /* The notification is sent with interrupts enabled. 
*/ - if (likely(per_cpu(hwb_tracer, cpu))) { - ds_release_bts(per_cpu(hwb_tracer, cpu)); - per_cpu(hwb_tracer, cpu) = NULL; - } - } - - return NOTIFY_DONE; -} - -static struct notifier_block bts_hotcpu_notifier __cpuinitdata = { - .notifier_call = bts_hotcpu_handler -}; - -static void bts_trace_print_header(struct seq_file *m) -{ - seq_puts(m, "# CPU# TO <- FROM\n"); -} - -static enum print_line_t bts_trace_print_line(struct trace_iterator *iter) -{ - unsigned long symflags = TRACE_ITER_SYM_OFFSET; - struct trace_entry *entry = iter->ent; - struct trace_seq *seq = &iter->seq; - struct hw_branch_entry *it; - - trace_assign_type(it, entry); - - if (entry->type == TRACE_HW_BRANCHES) { - if (trace_seq_printf(seq, "%4d ", iter->cpu) && - seq_print_ip_sym(seq, it->to, symflags) && - trace_seq_printf(seq, "\t <- ") && - seq_print_ip_sym(seq, it->from, symflags) && - trace_seq_printf(seq, "\n")) - return TRACE_TYPE_HANDLED; - return TRACE_TYPE_PARTIAL_LINE; - } - return TRACE_TYPE_UNHANDLED; -} - -void trace_hw_branch(u64 from, u64 to) -{ - struct ftrace_event_call *call = &event_hw_branch; - struct trace_array *tr = hw_branch_trace; - struct ring_buffer_event *event; - struct ring_buffer *buf; - struct hw_branch_entry *entry; - unsigned long irq1; - int cpu; - - if (unlikely(!tr)) - return; - - if (unlikely(!trace_hw_branches_enabled)) - return; - - local_irq_save(irq1); - cpu = raw_smp_processor_id(); - if (atomic_inc_return(&tr->data[cpu]->disabled) != 1) - goto out; - - buf = tr->buffer; - event = trace_buffer_lock_reserve(buf, TRACE_HW_BRANCHES, - sizeof(*entry), 0, 0); - if (!event) - goto out; - entry = ring_buffer_event_data(event); - tracing_generic_entry_update(&entry->ent, 0, from); - entry->ent.type = TRACE_HW_BRANCHES; - entry->from = from; - entry->to = to; - if (!filter_check_discard(call, entry, buf, event)) - trace_buffer_unlock_commit(buf, event, 0, 0); - - out: - atomic_dec(&tr->data[cpu]->disabled); - local_irq_restore(irq1); -} - -static void trace_bts_at(const struct bts_trace *trace, void *at) -{ - struct bts_struct bts; - int err = 0; - - WARN_ON_ONCE(!trace->read); - if (!trace->read) - return; - - err = trace->read(this_tracer, at, &bts); - if (err < 0) - return; - - switch (bts.qualifier) { - case BTS_BRANCH: - trace_hw_branch(bts.variant.lbr.from, bts.variant.lbr.to); - break; - } -} - -/* - * Collect the trace on the current cpu and write it into the ftrace buffer. - * - * pre: tracing must be suspended on the current cpu - */ -static void trace_bts_cpu(void *arg) -{ - struct trace_array *tr = (struct trace_array *)arg; - const struct bts_trace *trace; - unsigned char *at; - - if (unlikely(!tr)) - return; - - if (unlikely(atomic_read(&tr->data[raw_smp_processor_id()]->disabled))) - return; - - if (unlikely(!this_tracer)) - return; - - trace = ds_read_bts(this_tracer); - if (!trace) - return; - - for (at = trace->ds.top; (void *)at < trace->ds.end; - at += trace->ds.size) - trace_bts_at(trace, at); - - for (at = trace->ds.begin; (void *)at < trace->ds.top; - at += trace->ds.size) - trace_bts_at(trace, at); -} - -static void trace_bts_prepare(struct trace_iterator *iter) -{ - int cpu; - - get_online_cpus(); - for_each_online_cpu(cpu) - if (likely(per_cpu(hwb_tracer, cpu))) - ds_suspend_bts(per_cpu(hwb_tracer, cpu)); - /* - * We need to collect the trace on the respective cpu since ftrace - * implicitly adds the record for the current cpu. - * Once that is more flexible, we could collect the data from any cpu. 
- */ - on_each_cpu(trace_bts_cpu, iter->tr, 1); - - for_each_online_cpu(cpu) - if (likely(per_cpu(hwb_tracer, cpu))) - ds_resume_bts(per_cpu(hwb_tracer, cpu)); - put_online_cpus(); -} - -static void trace_bts_close(struct trace_iterator *iter) -{ - tracing_reset_online_cpus(iter->tr); -} - -void trace_hw_branch_oops(void) -{ - if (this_tracer) { - ds_suspend_bts_noirq(this_tracer); - trace_bts_cpu(hw_branch_trace); - ds_resume_bts_noirq(this_tracer); - } -} - -struct tracer bts_tracer __read_mostly = -{ - .name = "hw-branch-tracer", - .init = bts_trace_init, - .reset = bts_trace_reset, - .print_header = bts_trace_print_header, - .print_line = bts_trace_print_line, - .start = bts_trace_start, - .stop = bts_trace_stop, - .open = trace_bts_prepare, - .close = trace_bts_close, -#ifdef CONFIG_FTRACE_SELFTEST - .selftest = trace_selftest_startup_hw_branches, -#endif /* CONFIG_FTRACE_SELFTEST */ -}; - -__init static int init_bts_trace(void) -{ - register_hotcpu_notifier(&bts_hotcpu_notifier); - return register_tracer(&bts_tracer); -} -device_initcall(init_bts_trace); diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 2974bc7538c7..6fd486e0cef4 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -34,6 +34,9 @@ static int trace_type __read_mostly; static int save_lat_flag; +static void stop_irqsoff_tracer(struct trace_array *tr, int graph); +static int start_irqsoff_tracer(struct trace_array *tr, int graph); + #ifdef CONFIG_PREEMPT_TRACER static inline int preempt_trace(void) @@ -55,6 +58,23 @@ irq_trace(void) # define irq_trace() (0) #endif +#define TRACE_DISPLAY_GRAPH 1 + +static struct tracer_opt trace_opts[] = { +#ifdef CONFIG_FUNCTION_GRAPH_TRACER + /* display latency trace as call graph */ + { TRACER_OPT(display-graph, TRACE_DISPLAY_GRAPH) }, +#endif + { } /* Empty entry */ +}; + +static struct tracer_flags tracer_flags = { + .val = 0, + .opts = trace_opts, +}; + +#define is_graph() (tracer_flags.val & TRACE_DISPLAY_GRAPH) + /* * Sequence count - we record it when starting a measurement and * skip the latency if the sequence has changed - some other section @@ -108,6 +128,202 @@ static struct ftrace_ops trace_ops __read_mostly = }; #endif /* CONFIG_FUNCTION_TRACER */ +#ifdef CONFIG_FUNCTION_GRAPH_TRACER +static int irqsoff_set_flag(u32 old_flags, u32 bit, int set) +{ + int cpu; + + if (!(bit & TRACE_DISPLAY_GRAPH)) + return -EINVAL; + + if (!(is_graph() ^ set)) + return 0; + + stop_irqsoff_tracer(irqsoff_trace, !set); + + for_each_possible_cpu(cpu) + per_cpu(tracing_cpu, cpu) = 0; + + tracing_max_latency = 0; + tracing_reset_online_cpus(irqsoff_trace); + + return start_irqsoff_tracer(irqsoff_trace, set); +} + +static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) +{ + struct trace_array *tr = irqsoff_trace; + struct trace_array_cpu *data; + unsigned long flags; + long disabled; + int ret; + int cpu; + int pc; + + cpu = raw_smp_processor_id(); + if (likely(!per_cpu(tracing_cpu, cpu))) + return 0; + + local_save_flags(flags); + /* slight chance to get a false positive on tracing_cpu */ + if (!irqs_disabled_flags(flags)) + return 0; + + data = tr->data[cpu]; + disabled = atomic_inc_return(&data->disabled); + + if (likely(disabled == 1)) { + pc = preempt_count(); + ret = __trace_graph_entry(tr, trace, flags, pc); + } else + ret = 0; + + atomic_dec(&data->disabled); + return ret; +} + +static void irqsoff_graph_return(struct ftrace_graph_ret *trace) +{ + struct trace_array *tr = irqsoff_trace; + struct trace_array_cpu *data; + 
unsigned long flags; + long disabled; + int cpu; + int pc; + + cpu = raw_smp_processor_id(); + if (likely(!per_cpu(tracing_cpu, cpu))) + return; + + local_save_flags(flags); + /* slight chance to get a false positive on tracing_cpu */ + if (!irqs_disabled_flags(flags)) + return; + + data = tr->data[cpu]; + disabled = atomic_inc_return(&data->disabled); + + if (likely(disabled == 1)) { + pc = preempt_count(); + __trace_graph_return(tr, trace, flags, pc); + } + + atomic_dec(&data->disabled); +} + +static void irqsoff_trace_open(struct trace_iterator *iter) +{ + if (is_graph()) + graph_trace_open(iter); + +} + +static void irqsoff_trace_close(struct trace_iterator *iter) +{ + if (iter->private) + graph_trace_close(iter); +} + +#define GRAPH_TRACER_FLAGS (TRACE_GRAPH_PRINT_CPU | \ + TRACE_GRAPH_PRINT_PROC) + +static enum print_line_t irqsoff_print_line(struct trace_iterator *iter) +{ + u32 flags = GRAPH_TRACER_FLAGS; + + if (trace_flags & TRACE_ITER_LATENCY_FMT) + flags |= TRACE_GRAPH_PRINT_DURATION; + else + flags |= TRACE_GRAPH_PRINT_ABS_TIME; + + /* + * In graph mode call the graph tracer output function, + * otherwise go with the TRACE_FN event handler + */ + if (is_graph()) + return print_graph_function_flags(iter, flags); + + return TRACE_TYPE_UNHANDLED; +} + +static void irqsoff_print_header(struct seq_file *s) +{ + if (is_graph()) { + struct trace_iterator *iter = s->private; + u32 flags = GRAPH_TRACER_FLAGS; + + if (trace_flags & TRACE_ITER_LATENCY_FMT) { + /* print nothing if the buffers are empty */ + if (trace_empty(iter)) + return; + + print_trace_header(s, iter); + flags |= TRACE_GRAPH_PRINT_DURATION; + } else + flags |= TRACE_GRAPH_PRINT_ABS_TIME; + + print_graph_headers_flags(s, flags); + } else + trace_default_header(s); +} + +static void +trace_graph_function(struct trace_array *tr, + unsigned long ip, unsigned long flags, int pc) +{ + u64 time = trace_clock_local(); + struct ftrace_graph_ent ent = { + .func = ip, + .depth = 0, + }; + struct ftrace_graph_ret ret = { + .func = ip, + .depth = 0, + .calltime = time, + .rettime = time, + }; + + __trace_graph_entry(tr, &ent, flags, pc); + __trace_graph_return(tr, &ret, flags, pc); +} + +static void +__trace_function(struct trace_array *tr, + unsigned long ip, unsigned long parent_ip, + unsigned long flags, int pc) +{ + if (!is_graph()) + trace_function(tr, ip, parent_ip, flags, pc); + else { + trace_graph_function(tr, parent_ip, flags, pc); + trace_graph_function(tr, ip, flags, pc); + } +} + +#else +#define __trace_function trace_function + +static int irqsoff_set_flag(u32 old_flags, u32 bit, int set) +{ + return -EINVAL; +} + +static int irqsoff_graph_entry(struct ftrace_graph_ent *trace) +{ + return -1; +} + +static enum print_line_t irqsoff_print_line(struct trace_iterator *iter) +{ + return TRACE_TYPE_UNHANDLED; +} + +static void irqsoff_graph_return(struct ftrace_graph_ret *trace) { } +static void irqsoff_print_header(struct seq_file *s) { } +static void irqsoff_trace_open(struct trace_iterator *iter) { } +static void irqsoff_trace_close(struct trace_iterator *iter) { } +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ + /* * Should this new latency be reported/recorded? 
*/ @@ -150,7 +366,7 @@ check_critical_timing(struct trace_array *tr, if (!report_latency(delta)) goto out_unlock; - trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc); + __trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc); /* Skip 5 functions to get to the irq/preempt enable function */ __trace_stack(tr, flags, 5, pc); @@ -172,7 +388,7 @@ out_unlock: out: data->critical_sequence = max_sequence; data->preempt_timestamp = ftrace_now(cpu); - trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc); + __trace_function(tr, CALLER_ADDR0, parent_ip, flags, pc); } static inline void @@ -204,7 +420,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip) local_save_flags(flags); - trace_function(tr, ip, parent_ip, flags, preempt_count()); + __trace_function(tr, ip, parent_ip, flags, preempt_count()); per_cpu(tracing_cpu, cpu) = 1; @@ -238,7 +454,7 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip) atomic_inc(&data->disabled); local_save_flags(flags); - trace_function(tr, ip, parent_ip, flags, preempt_count()); + __trace_function(tr, ip, parent_ip, flags, preempt_count()); check_critical_timing(tr, data, parent_ip ? : ip, cpu); data->critical_start = 0; atomic_dec(&data->disabled); @@ -347,19 +563,32 @@ void trace_preempt_off(unsigned long a0, unsigned long a1) } #endif /* CONFIG_PREEMPT_TRACER */ -static void start_irqsoff_tracer(struct trace_array *tr) +static int start_irqsoff_tracer(struct trace_array *tr, int graph) { - register_ftrace_function(&trace_ops); - if (tracing_is_enabled()) + int ret = 0; + + if (!graph) + ret = register_ftrace_function(&trace_ops); + else + ret = register_ftrace_graph(&irqsoff_graph_return, + &irqsoff_graph_entry); + + if (!ret && tracing_is_enabled()) tracer_enabled = 1; else tracer_enabled = 0; + + return ret; } -static void stop_irqsoff_tracer(struct trace_array *tr) +static void stop_irqsoff_tracer(struct trace_array *tr, int graph) { tracer_enabled = 0; - unregister_ftrace_function(&trace_ops); + + if (!graph) + unregister_ftrace_function(&trace_ops); + else + unregister_ftrace_graph(); } static void __irqsoff_tracer_init(struct trace_array *tr) @@ -372,12 +601,14 @@ static void __irqsoff_tracer_init(struct trace_array *tr) /* make sure that the tracer is visible */ smp_wmb(); tracing_reset_online_cpus(tr); - start_irqsoff_tracer(tr); + + if (start_irqsoff_tracer(tr, is_graph())) + printk(KERN_ERR "failed to start irqsoff tracer\n"); } static void irqsoff_tracer_reset(struct trace_array *tr) { - stop_irqsoff_tracer(tr); + stop_irqsoff_tracer(tr, is_graph()); if (!save_lat_flag) trace_flags &= ~TRACE_ITER_LATENCY_FMT; @@ -409,9 +640,15 @@ static struct tracer irqsoff_tracer __read_mostly = .start = irqsoff_tracer_start, .stop = irqsoff_tracer_stop, .print_max = 1, + .print_header = irqsoff_print_header, + .print_line = irqsoff_print_line, + .flags = &tracer_flags, + .set_flag = irqsoff_set_flag, #ifdef CONFIG_FTRACE_SELFTEST .selftest = trace_selftest_startup_irqsoff, #endif + .open = irqsoff_trace_open, + .close = irqsoff_trace_close, }; # define register_irqsoff(trace) register_tracer(&trace) #else @@ -435,9 +672,15 @@ static struct tracer preemptoff_tracer __read_mostly = .start = irqsoff_tracer_start, .stop = irqsoff_tracer_stop, .print_max = 1, + .print_header = irqsoff_print_header, + .print_line = irqsoff_print_line, + .flags = &tracer_flags, + .set_flag = irqsoff_set_flag, #ifdef CONFIG_FTRACE_SELFTEST .selftest = trace_selftest_startup_preemptoff, #endif + .open = irqsoff_trace_open, + .close = irqsoff_trace_close, }; 
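[Editor's note] The display-graph plumbing added above is attached identically to the irqsoff, preemptoff and preemptirqsoff tracers: a tracer_opt/tracer_flags pair exposes the option bit, and the set_flag callback tears the tracer down in its old mode and restarts it in the new one. A condensed sketch of that pattern follows; it assumes the struct tracer_opt, struct tracer_flags and TRACER_OPT definitions from kernel/trace/trace.h, and every example_* name is hypothetical, not part of this patch.

/* condensed sketch of the option plumbing above; example_* names are made up */
#define EXAMPLE_DISPLAY_GRAPH	1

static struct tracer_opt example_opts[] = {
	{ TRACER_OPT(display-graph, EXAMPLE_DISPLAY_GRAPH) },
	{ }	/* empty entry terminates the list */
};

static struct tracer_flags example_flags = {
	.val	= 0,
	.opts	= example_opts,
};

#define example_is_graph()	(example_flags.val & EXAMPLE_DISPLAY_GRAPH)

/* hypothetical helpers standing in for start/stop_irqsoff_tracer() */
static void example_stop(int graph) { /* unregister function or graph hooks */ }
static int example_start(int graph) { return 0; /* register the matching hooks */ }

static int example_set_flag(u32 old_flags, u32 bit, int set)
{
	if (!(bit & EXAMPLE_DISPLAY_GRAPH))
		return -EINVAL;
	if (!(example_is_graph() ^ set))
		return 0;			/* no mode change requested */

	example_stop(!set);			/* leave the old mode */
	return example_start(set);		/* come back up in the new one */
}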
# define register_preemptoff(trace) register_tracer(&trace) #else @@ -463,9 +706,15 @@ static struct tracer preemptirqsoff_tracer __read_mostly = .start = irqsoff_tracer_start, .stop = irqsoff_tracer_stop, .print_max = 1, + .print_header = irqsoff_print_header, + .print_line = irqsoff_print_line, + .flags = &tracer_flags, + .set_flag = irqsoff_set_flag, #ifdef CONFIG_FTRACE_SELFTEST .selftest = trace_selftest_startup_preemptirqsoff, #endif + .open = irqsoff_trace_open, + .close = irqsoff_trace_close, }; # define register_preemptirqsoff(trace) register_tracer(&trace) diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c index 1251e367bae9..a7514326052b 100644 --- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -29,6 +29,8 @@ #include <linux/ctype.h> #include <linux/ptrace.h> #include <linux/perf_event.h> +#include <linux/stringify.h> +#include <asm/bitsperlong.h> #include "trace.h" #include "trace_output.h" @@ -40,7 +42,6 @@ /* Reserved field names */ #define FIELD_STRING_IP "__probe_ip" -#define FIELD_STRING_NARGS "__probe_nargs" #define FIELD_STRING_RETIP "__probe_ret_ip" #define FIELD_STRING_FUNC "__probe_func" @@ -52,56 +53,102 @@ const char *reserved_field_names[] = { "common_tgid", "common_lock_depth", FIELD_STRING_IP, - FIELD_STRING_NARGS, FIELD_STRING_RETIP, FIELD_STRING_FUNC, }; -struct fetch_func { - unsigned long (*func)(struct pt_regs *, void *); +/* Printing function type */ +typedef int (*print_type_func_t)(struct trace_seq *, const char *, void *); +#define PRINT_TYPE_FUNC_NAME(type) print_type_##type +#define PRINT_TYPE_FMT_NAME(type) print_type_format_##type + +/* Printing in basic type function template */ +#define DEFINE_BASIC_PRINT_TYPE_FUNC(type, fmt, cast) \ +static __kprobes int PRINT_TYPE_FUNC_NAME(type)(struct trace_seq *s, \ + const char *name, void *data)\ +{ \ + return trace_seq_printf(s, " %s=" fmt, name, (cast)*(type *)data);\ +} \ +static const char PRINT_TYPE_FMT_NAME(type)[] = fmt; + +DEFINE_BASIC_PRINT_TYPE_FUNC(u8, "%x", unsigned int) +DEFINE_BASIC_PRINT_TYPE_FUNC(u16, "%x", unsigned int) +DEFINE_BASIC_PRINT_TYPE_FUNC(u32, "%lx", unsigned long) +DEFINE_BASIC_PRINT_TYPE_FUNC(u64, "%llx", unsigned long long) +DEFINE_BASIC_PRINT_TYPE_FUNC(s8, "%d", int) +DEFINE_BASIC_PRINT_TYPE_FUNC(s16, "%d", int) +DEFINE_BASIC_PRINT_TYPE_FUNC(s32, "%ld", long) +DEFINE_BASIC_PRINT_TYPE_FUNC(s64, "%lld", long long) + +/* Data fetch function type */ +typedef void (*fetch_func_t)(struct pt_regs *, void *, void *); + +struct fetch_param { + fetch_func_t fn; void *data; }; -static __kprobes unsigned long call_fetch(struct fetch_func *f, - struct pt_regs *regs) +static __kprobes void call_fetch(struct fetch_param *fprm, + struct pt_regs *regs, void *dest) { - return f->func(regs, f->data); + return fprm->fn(regs, fprm->data, dest); } -/* fetch handlers */ -static __kprobes unsigned long fetch_register(struct pt_regs *regs, - void *offset) -{ - return regs_get_register(regs, (unsigned int)((unsigned long)offset)); +#define FETCH_FUNC_NAME(kind, type) fetch_##kind##_##type +/* + * Define macro for basic types - we don't need to define s* types, because + * we have to care only about bitwidth at recording time. 
+ */ +#define DEFINE_BASIC_FETCH_FUNCS(kind) \ +DEFINE_FETCH_##kind(u8) \ +DEFINE_FETCH_##kind(u16) \ +DEFINE_FETCH_##kind(u32) \ +DEFINE_FETCH_##kind(u64) + +#define CHECK_BASIC_FETCH_FUNCS(kind, fn) \ + ((FETCH_FUNC_NAME(kind, u8) == fn) || \ + (FETCH_FUNC_NAME(kind, u16) == fn) || \ + (FETCH_FUNC_NAME(kind, u32) == fn) || \ + (FETCH_FUNC_NAME(kind, u64) == fn)) + +/* Data fetch function templates */ +#define DEFINE_FETCH_reg(type) \ +static __kprobes void FETCH_FUNC_NAME(reg, type)(struct pt_regs *regs, \ + void *offset, void *dest) \ +{ \ + *(type *)dest = (type)regs_get_register(regs, \ + (unsigned int)((unsigned long)offset)); \ } - -static __kprobes unsigned long fetch_stack(struct pt_regs *regs, - void *num) -{ - return regs_get_kernel_stack_nth(regs, - (unsigned int)((unsigned long)num)); +DEFINE_BASIC_FETCH_FUNCS(reg) + +#define DEFINE_FETCH_stack(type) \ +static __kprobes void FETCH_FUNC_NAME(stack, type)(struct pt_regs *regs,\ + void *offset, void *dest) \ +{ \ + *(type *)dest = (type)regs_get_kernel_stack_nth(regs, \ + (unsigned int)((unsigned long)offset)); \ } +DEFINE_BASIC_FETCH_FUNCS(stack) -static __kprobes unsigned long fetch_memory(struct pt_regs *regs, void *addr) -{ - unsigned long retval; - - if (probe_kernel_address(addr, retval)) - return 0; - return retval; +#define DEFINE_FETCH_retval(type) \ +static __kprobes void FETCH_FUNC_NAME(retval, type)(struct pt_regs *regs,\ + void *dummy, void *dest) \ +{ \ + *(type *)dest = (type)regs_return_value(regs); \ } - -static __kprobes unsigned long fetch_retvalue(struct pt_regs *regs, - void *dummy) -{ - return regs_return_value(regs); -} - -static __kprobes unsigned long fetch_stack_address(struct pt_regs *regs, - void *dummy) -{ - return kernel_stack_pointer(regs); +DEFINE_BASIC_FETCH_FUNCS(retval) + +#define DEFINE_FETCH_memory(type) \ +static __kprobes void FETCH_FUNC_NAME(memory, type)(struct pt_regs *regs,\ + void *addr, void *dest) \ +{ \ + type retval; \ + if (probe_kernel_address(addr, retval)) \ + *(type *)dest = 0; \ + else \ + *(type *)dest = retval; \ } +DEFINE_BASIC_FETCH_FUNCS(memory) /* Memory fetching by symbol */ struct symbol_cache { @@ -145,51 +192,126 @@ static struct symbol_cache *alloc_symbol_cache(const char *sym, long offset) return sc; } -static __kprobes unsigned long fetch_symbol(struct pt_regs *regs, void *data) -{ - struct symbol_cache *sc = data; - - if (sc->addr) - return fetch_memory(regs, (void *)sc->addr); - else - return 0; +#define DEFINE_FETCH_symbol(type) \ +static __kprobes void FETCH_FUNC_NAME(symbol, type)(struct pt_regs *regs,\ + void *data, void *dest) \ +{ \ + struct symbol_cache *sc = data; \ + if (sc->addr) \ + fetch_memory_##type(regs, (void *)sc->addr, dest); \ + else \ + *(type *)dest = 0; \ } +DEFINE_BASIC_FETCH_FUNCS(symbol) -/* Special indirect memory access interface */ -struct indirect_fetch_data { - struct fetch_func orig; +/* Dereference memory access function */ +struct deref_fetch_param { + struct fetch_param orig; long offset; }; -static __kprobes unsigned long fetch_indirect(struct pt_regs *regs, void *data) -{ - struct indirect_fetch_data *ind = data; - unsigned long addr; - - addr = call_fetch(&ind->orig, regs); - if (addr) { - addr += ind->offset; - return fetch_memory(regs, (void *)addr); - } else - return 0; +#define DEFINE_FETCH_deref(type) \ +static __kprobes void FETCH_FUNC_NAME(deref, type)(struct pt_regs *regs,\ + void *data, void *dest) \ +{ \ + struct deref_fetch_param *dprm = data; \ + unsigned long addr; \ + call_fetch(&dprm->orig, regs, &addr); \ + if 
(addr) { \ + addr += dprm->offset; \ + fetch_memory_##type(regs, (void *)addr, dest); \ + } else \ + *(type *)dest = 0; \ } +DEFINE_BASIC_FETCH_FUNCS(deref) -static __kprobes void free_indirect_fetch_data(struct indirect_fetch_data *data) +static __kprobes void free_deref_fetch_param(struct deref_fetch_param *data) { - if (data->orig.func == fetch_indirect) - free_indirect_fetch_data(data->orig.data); - else if (data->orig.func == fetch_symbol) + if (CHECK_BASIC_FETCH_FUNCS(deref, data->orig.fn)) + free_deref_fetch_param(data->orig.data); + else if (CHECK_BASIC_FETCH_FUNCS(symbol, data->orig.fn)) free_symbol_cache(data->orig.data); kfree(data); } +/* Default (unsigned long) fetch type */ +#define __DEFAULT_FETCH_TYPE(t) u##t +#define _DEFAULT_FETCH_TYPE(t) __DEFAULT_FETCH_TYPE(t) +#define DEFAULT_FETCH_TYPE _DEFAULT_FETCH_TYPE(BITS_PER_LONG) +#define DEFAULT_FETCH_TYPE_STR __stringify(DEFAULT_FETCH_TYPE) + +#define ASSIGN_FETCH_FUNC(kind, type) \ + .kind = FETCH_FUNC_NAME(kind, type) + +#define ASSIGN_FETCH_TYPE(ptype, ftype, sign) \ + {.name = #ptype, \ + .size = sizeof(ftype), \ + .is_signed = sign, \ + .print = PRINT_TYPE_FUNC_NAME(ptype), \ + .fmt = PRINT_TYPE_FMT_NAME(ptype), \ +ASSIGN_FETCH_FUNC(reg, ftype), \ +ASSIGN_FETCH_FUNC(stack, ftype), \ +ASSIGN_FETCH_FUNC(retval, ftype), \ +ASSIGN_FETCH_FUNC(memory, ftype), \ +ASSIGN_FETCH_FUNC(symbol, ftype), \ +ASSIGN_FETCH_FUNC(deref, ftype), \ + } + +/* Fetch type information table */ +static const struct fetch_type { + const char *name; /* Name of type */ + size_t size; /* Byte size of type */ + int is_signed; /* Signed flag */ + print_type_func_t print; /* Print functions */ + const char *fmt; /* Fromat string */ + /* Fetch functions */ + fetch_func_t reg; + fetch_func_t stack; + fetch_func_t retval; + fetch_func_t memory; + fetch_func_t symbol; + fetch_func_t deref; +} fetch_type_table[] = { + ASSIGN_FETCH_TYPE(u8, u8, 0), + ASSIGN_FETCH_TYPE(u16, u16, 0), + ASSIGN_FETCH_TYPE(u32, u32, 0), + ASSIGN_FETCH_TYPE(u64, u64, 0), + ASSIGN_FETCH_TYPE(s8, u8, 1), + ASSIGN_FETCH_TYPE(s16, u16, 1), + ASSIGN_FETCH_TYPE(s32, u32, 1), + ASSIGN_FETCH_TYPE(s64, u64, 1), +}; + +static const struct fetch_type *find_fetch_type(const char *type) +{ + int i; + + if (!type) + type = DEFAULT_FETCH_TYPE_STR; + + for (i = 0; i < ARRAY_SIZE(fetch_type_table); i++) + if (strcmp(type, fetch_type_table[i].name) == 0) + return &fetch_type_table[i]; + return NULL; +} + +/* Special function : only accept unsigned long */ +static __kprobes void fetch_stack_address(struct pt_regs *regs, + void *dummy, void *dest) +{ + *(unsigned long *)dest = kernel_stack_pointer(regs); +} + /** * Kprobe event core functions */ struct probe_arg { - struct fetch_func fetch; - const char *name; + struct fetch_param fetch; + unsigned int offset; /* Offset from argument entry */ + const char *name; /* Name of this argument */ + const char *comm; /* Command of this argument */ + const struct fetch_type *type; /* Type of this argument */ }; /* Flags for trace_probe */ @@ -204,6 +326,7 @@ struct trace_probe { const char *symbol; /* symbol name */ struct ftrace_event_call call; struct trace_event event; + ssize_t size; /* trace entry size */ unsigned int nr_args; struct probe_arg args[]; }; @@ -212,6 +335,7 @@ struct trace_probe { (offsetof(struct trace_probe, args) + \ (sizeof(struct probe_arg) * (n))) + static __kprobes int probe_is_return(struct trace_probe *tp) { return tp->rp.handler != NULL; @@ -222,49 +346,6 @@ static __kprobes const char *probe_symbol(struct trace_probe *tp) return 
tp->symbol ? tp->symbol : "unknown"; } -static int probe_arg_string(char *buf, size_t n, struct fetch_func *ff) -{ - int ret = -EINVAL; - - if (ff->func == fetch_register) { - const char *name; - name = regs_query_register_name((unsigned int)((long)ff->data)); - ret = snprintf(buf, n, "%%%s", name); - } else if (ff->func == fetch_stack) - ret = snprintf(buf, n, "$stack%lu", (unsigned long)ff->data); - else if (ff->func == fetch_memory) - ret = snprintf(buf, n, "@0x%p", ff->data); - else if (ff->func == fetch_symbol) { - struct symbol_cache *sc = ff->data; - if (sc->offset) - ret = snprintf(buf, n, "@%s%+ld", sc->symbol, - sc->offset); - else - ret = snprintf(buf, n, "@%s", sc->symbol); - } else if (ff->func == fetch_retvalue) - ret = snprintf(buf, n, "$retval"); - else if (ff->func == fetch_stack_address) - ret = snprintf(buf, n, "$stack"); - else if (ff->func == fetch_indirect) { - struct indirect_fetch_data *id = ff->data; - size_t l = 0; - ret = snprintf(buf, n, "%+ld(", id->offset); - if (ret >= n) - goto end; - l += ret; - ret = probe_arg_string(buf + l, n - l, &id->orig); - if (ret < 0) - goto end; - l += ret; - ret = snprintf(buf + l, n - l, ")"); - ret += l; - } -end: - if (ret >= n) - return -ENOSPC; - return ret; -} - static int register_probe_event(struct trace_probe *tp); static void unregister_probe_event(struct trace_probe *tp); @@ -347,11 +428,12 @@ error: static void free_probe_arg(struct probe_arg *arg) { - if (arg->fetch.func == fetch_symbol) + if (CHECK_BASIC_FETCH_FUNCS(deref, arg->fetch.fn)) + free_deref_fetch_param(arg->fetch.data); + else if (CHECK_BASIC_FETCH_FUNCS(symbol, arg->fetch.fn)) free_symbol_cache(arg->fetch.data); - else if (arg->fetch.func == fetch_indirect) - free_indirect_fetch_data(arg->fetch.data); kfree(arg->name); + kfree(arg->comm); } static void free_trace_probe(struct trace_probe *tp) @@ -457,28 +539,30 @@ static int split_symbol_offset(char *symbol, unsigned long *offset) #define PARAM_MAX_ARGS 16 #define PARAM_MAX_STACK (THREAD_SIZE / sizeof(unsigned long)) -static int parse_probe_vars(char *arg, struct fetch_func *ff, int is_return) +static int parse_probe_vars(char *arg, const struct fetch_type *t, + struct fetch_param *f, int is_return) { int ret = 0; unsigned long param; if (strcmp(arg, "retval") == 0) { - if (is_return) { - ff->func = fetch_retvalue; - ff->data = NULL; - } else + if (is_return) + f->fn = t->retval; + else ret = -EINVAL; } else if (strncmp(arg, "stack", 5) == 0) { if (arg[5] == '\0') { - ff->func = fetch_stack_address; - ff->data = NULL; + if (strcmp(t->name, DEFAULT_FETCH_TYPE_STR) == 0) + f->fn = fetch_stack_address; + else + ret = -EINVAL; } else if (isdigit(arg[5])) { ret = strict_strtoul(arg + 5, 10, ¶m); if (ret || param > PARAM_MAX_STACK) ret = -EINVAL; else { - ff->func = fetch_stack; - ff->data = (void *)param; + f->fn = t->stack; + f->data = (void *)param; } } else ret = -EINVAL; @@ -488,7 +572,8 @@ static int parse_probe_vars(char *arg, struct fetch_func *ff, int is_return) } /* Recursive argument parser */ -static int __parse_probe_arg(char *arg, struct fetch_func *ff, int is_return) +static int __parse_probe_arg(char *arg, const struct fetch_type *t, + struct fetch_param *f, int is_return) { int ret = 0; unsigned long param; @@ -497,13 +582,13 @@ static int __parse_probe_arg(char *arg, struct fetch_func *ff, int is_return) switch (arg[0]) { case '$': - ret = parse_probe_vars(arg + 1, ff, is_return); + ret = parse_probe_vars(arg + 1, t, f, is_return); break; case '%': /* named register */ ret = 
regs_query_register_offset(arg + 1); if (ret >= 0) { - ff->func = fetch_register; - ff->data = (void *)(unsigned long)ret; + f->fn = t->reg; + f->data = (void *)(unsigned long)ret; ret = 0; } break; @@ -512,26 +597,22 @@ static int __parse_probe_arg(char *arg, struct fetch_func *ff, int is_return) ret = strict_strtoul(arg + 1, 0, ¶m); if (ret) break; - ff->func = fetch_memory; - ff->data = (void *)param; + f->fn = t->memory; + f->data = (void *)param; } else { ret = split_symbol_offset(arg + 1, &offset); if (ret) break; - ff->data = alloc_symbol_cache(arg + 1, offset); - if (ff->data) - ff->func = fetch_symbol; - else - ret = -EINVAL; + f->data = alloc_symbol_cache(arg + 1, offset); + if (f->data) + f->fn = t->symbol; } break; - case '+': /* indirect memory */ + case '+': /* deref memory */ case '-': tmp = strchr(arg, '('); - if (!tmp) { - ret = -EINVAL; + if (!tmp) break; - } *tmp = '\0'; ret = strict_strtol(arg + 1, 0, &offset); if (ret) @@ -541,38 +622,58 @@ static int __parse_probe_arg(char *arg, struct fetch_func *ff, int is_return) arg = tmp + 1; tmp = strrchr(arg, ')'); if (tmp) { - struct indirect_fetch_data *id; + struct deref_fetch_param *dprm; + const struct fetch_type *t2 = find_fetch_type(NULL); *tmp = '\0'; - id = kzalloc(sizeof(struct indirect_fetch_data), - GFP_KERNEL); - if (!id) + dprm = kzalloc(sizeof(struct deref_fetch_param), + GFP_KERNEL); + if (!dprm) return -ENOMEM; - id->offset = offset; - ret = __parse_probe_arg(arg, &id->orig, is_return); + dprm->offset = offset; + ret = __parse_probe_arg(arg, t2, &dprm->orig, + is_return); if (ret) - kfree(id); + kfree(dprm); else { - ff->func = fetch_indirect; - ff->data = (void *)id; + f->fn = t->deref; + f->data = (void *)dprm; } - } else - ret = -EINVAL; + } break; - default: - /* TODO: support custom handler */ - ret = -EINVAL; } + if (!ret && !f->fn) + ret = -EINVAL; return ret; } /* String length checking wrapper */ -static int parse_probe_arg(char *arg, struct fetch_func *ff, int is_return) +static int parse_probe_arg(char *arg, struct trace_probe *tp, + struct probe_arg *parg, int is_return) { + const char *t; + if (strlen(arg) > MAX_ARGSTR_LEN) { pr_info("Argument is too long.: %s\n", arg); return -ENOSPC; } - return __parse_probe_arg(arg, ff, is_return); + parg->comm = kstrdup(arg, GFP_KERNEL); + if (!parg->comm) { + pr_info("Failed to allocate memory for command '%s'.\n", arg); + return -ENOMEM; + } + t = strchr(parg->comm, ':'); + if (t) { + arg[t - parg->comm] = '\0'; + t++; + } + parg->type = find_fetch_type(t); + if (!parg->type) { + pr_info("Unsupported type: %s\n", t); + return -EINVAL; + } + parg->offset = tp->size; + tp->size += parg->type->size; + return __parse_probe_arg(arg, parg->type, &parg->fetch, is_return); } /* Return 1 if name is reserved or already used by another argument */ @@ -602,15 +703,18 @@ static int create_trace_probe(int argc, char **argv) * @ADDR : fetch memory at ADDR (ADDR should be in kernel) * @SYM[+|-offs] : fetch memory at SYM +|- offs (SYM is a data symbol) * %REG : fetch register REG - * Indirect memory fetch: + * Dereferencing memory fetch: * +|-offs(ARG) : fetch memory at ARG +|- offs address. * Alias name of args: * NAME=FETCHARG : set NAME as alias of FETCHARG. + * Type of args: + * FETCHARG:TYPE : use TYPE instead of unsigned long. 
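[Editor's note] The comment above documents the extended kprobe_events grammar this patch parses. To make the new FETCHARG:TYPE form concrete, here are two illustrative probe definitions; they are not taken from the patch, the event names are invented, and the register and stack-offset choices are only placeholders for wherever the traced function's arguments actually live:

	p:myopen do_sys_open dfd=%ax:s32 mode=+4($stack):u16
	r:myopen_ret do_sys_open ret=$retval:s64

With a type suffix, the argument is fetched and recorded with that type's size and signedness, as listed in fetch_type_table above, instead of the default unsigned long.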
*/ struct trace_probe *tp; int i, ret = 0; int is_return = 0, is_delete = 0; - char *symbol = NULL, *event = NULL, *arg = NULL, *group = NULL; + char *symbol = NULL, *event = NULL, *group = NULL; + char *arg, *tmp; unsigned long offset = 0; void *addr = NULL; char buf[MAX_EVENT_NAME_LEN]; @@ -723,13 +827,6 @@ static int create_trace_probe(int argc, char **argv) else arg = argv[i]; - if (conflict_field_name(argv[i], tp->args, i)) { - pr_info("Argument%d name '%s' conflicts with " - "another field.\n", i, argv[i]); - ret = -EINVAL; - goto error; - } - tp->args[i].name = kstrdup(argv[i], GFP_KERNEL); if (!tp->args[i].name) { pr_info("Failed to allocate argument%d name '%s'.\n", @@ -737,9 +834,19 @@ static int create_trace_probe(int argc, char **argv) ret = -ENOMEM; goto error; } + tmp = strchr(tp->args[i].name, ':'); + if (tmp) + *tmp = '_'; /* convert : to _ */ + + if (conflict_field_name(tp->args[i].name, tp->args, i)) { + pr_info("Argument%d name '%s' conflicts with " + "another field.\n", i, argv[i]); + ret = -EINVAL; + goto error; + } /* Parse fetch argument */ - ret = parse_probe_arg(arg, &tp->args[i].fetch, is_return); + ret = parse_probe_arg(arg, tp, &tp->args[i], is_return); if (ret) { pr_info("Parse error at argument%d. (%d)\n", i, ret); kfree(tp->args[i].name); @@ -794,8 +901,7 @@ static void probes_seq_stop(struct seq_file *m, void *v) static int probes_seq_show(struct seq_file *m, void *v) { struct trace_probe *tp = v; - int i, ret; - char buf[MAX_ARGSTR_LEN + 1]; + int i; seq_printf(m, "%c", probe_is_return(tp) ? 'r' : 'p'); seq_printf(m, ":%s/%s", tp->call.system, tp->call.name); @@ -807,15 +913,10 @@ static int probes_seq_show(struct seq_file *m, void *v) else seq_printf(m, " %s", probe_symbol(tp)); - for (i = 0; i < tp->nr_args; i++) { - ret = probe_arg_string(buf, MAX_ARGSTR_LEN, &tp->args[i].fetch); - if (ret < 0) { - pr_warning("Argument%d decoding error(%d).\n", i, ret); - return ret; - } - seq_printf(m, " %s=%s", tp->args[i].name, buf); - } + for (i = 0; i < tp->nr_args; i++) + seq_printf(m, " %s=%s", tp->args[i].name, tp->args[i].comm); seq_printf(m, "\n"); + return 0; } @@ -945,9 +1046,10 @@ static const struct file_operations kprobe_profile_ops = { static __kprobes void kprobe_trace_func(struct kprobe *kp, struct pt_regs *regs) { struct trace_probe *tp = container_of(kp, struct trace_probe, rp.kp); - struct kprobe_trace_entry *entry; + struct kprobe_trace_entry_head *entry; struct ring_buffer_event *event; struct ring_buffer *buffer; + u8 *data; int size, i, pc; unsigned long irq_flags; struct ftrace_event_call *call = &tp->call; @@ -957,7 +1059,7 @@ static __kprobes void kprobe_trace_func(struct kprobe *kp, struct pt_regs *regs) local_save_flags(irq_flags); pc = preempt_count(); - size = SIZEOF_KPROBE_TRACE_ENTRY(tp->nr_args); + size = sizeof(*entry) + tp->size; event = trace_current_buffer_lock_reserve(&buffer, call->id, size, irq_flags, pc); @@ -965,10 +1067,10 @@ static __kprobes void kprobe_trace_func(struct kprobe *kp, struct pt_regs *regs) return; entry = ring_buffer_event_data(event); - entry->nargs = tp->nr_args; entry->ip = (unsigned long)kp->addr; + data = (u8 *)&entry[1]; for (i = 0; i < tp->nr_args; i++) - entry->args[i] = call_fetch(&tp->args[i].fetch, regs); + call_fetch(&tp->args[i].fetch, regs, data + tp->args[i].offset); if (!filter_current_check_discard(buffer, call, entry, event)) trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc); @@ -979,9 +1081,10 @@ static __kprobes void kretprobe_trace_func(struct kretprobe_instance *ri, struct 
pt_regs *regs) { struct trace_probe *tp = container_of(ri->rp, struct trace_probe, rp); - struct kretprobe_trace_entry *entry; + struct kretprobe_trace_entry_head *entry; struct ring_buffer_event *event; struct ring_buffer *buffer; + u8 *data; int size, i, pc; unsigned long irq_flags; struct ftrace_event_call *call = &tp->call; @@ -989,7 +1092,7 @@ static __kprobes void kretprobe_trace_func(struct kretprobe_instance *ri, local_save_flags(irq_flags); pc = preempt_count(); - size = SIZEOF_KRETPROBE_TRACE_ENTRY(tp->nr_args); + size = sizeof(*entry) + tp->size; event = trace_current_buffer_lock_reserve(&buffer, call->id, size, irq_flags, pc); @@ -997,11 +1100,11 @@ static __kprobes void kretprobe_trace_func(struct kretprobe_instance *ri, return; entry = ring_buffer_event_data(event); - entry->nargs = tp->nr_args; entry->func = (unsigned long)tp->rp.kp.addr; entry->ret_ip = (unsigned long)ri->ret_addr; + data = (u8 *)&entry[1]; for (i = 0; i < tp->nr_args; i++) - entry->args[i] = call_fetch(&tp->args[i].fetch, regs); + call_fetch(&tp->args[i].fetch, regs, data + tp->args[i].offset); if (!filter_current_check_discard(buffer, call, entry, event)) trace_nowake_buffer_unlock_commit(buffer, event, irq_flags, pc); @@ -1011,13 +1114,14 @@ static __kprobes void kretprobe_trace_func(struct kretprobe_instance *ri, enum print_line_t print_kprobe_event(struct trace_iterator *iter, int flags) { - struct kprobe_trace_entry *field; + struct kprobe_trace_entry_head *field; struct trace_seq *s = &iter->seq; struct trace_event *event; struct trace_probe *tp; + u8 *data; int i; - field = (struct kprobe_trace_entry *)iter->ent; + field = (struct kprobe_trace_entry_head *)iter->ent; event = ftrace_find_event(field->ent.type); tp = container_of(event, struct trace_probe, event); @@ -1030,9 +1134,10 @@ print_kprobe_event(struct trace_iterator *iter, int flags) if (!trace_seq_puts(s, ")")) goto partial; - for (i = 0; i < field->nargs; i++) - if (!trace_seq_printf(s, " %s=%lx", - tp->args[i].name, field->args[i])) + data = (u8 *)&field[1]; + for (i = 0; i < tp->nr_args; i++) + if (!tp->args[i].type->print(s, tp->args[i].name, + data + tp->args[i].offset)) goto partial; if (!trace_seq_puts(s, "\n")) @@ -1046,13 +1151,14 @@ partial: enum print_line_t print_kretprobe_event(struct trace_iterator *iter, int flags) { - struct kretprobe_trace_entry *field; + struct kretprobe_trace_entry_head *field; struct trace_seq *s = &iter->seq; struct trace_event *event; struct trace_probe *tp; + u8 *data; int i; - field = (struct kretprobe_trace_entry *)iter->ent; + field = (struct kretprobe_trace_entry_head *)iter->ent; event = ftrace_find_event(field->ent.type); tp = container_of(event, struct trace_probe, event); @@ -1071,9 +1177,10 @@ print_kretprobe_event(struct trace_iterator *iter, int flags) if (!trace_seq_puts(s, ")")) goto partial; - for (i = 0; i < field->nargs; i++) - if (!trace_seq_printf(s, " %s=%lx", - tp->args[i].name, field->args[i])) + data = (u8 *)&field[1]; + for (i = 0; i < tp->nr_args; i++) + if (!tp->args[i].type->print(s, tp->args[i].name, + data + tp->args[i].offset)) goto partial; if (!trace_seq_puts(s, "\n")) @@ -1129,29 +1236,43 @@ static int probe_event_raw_init(struct ftrace_event_call *event_call) static int kprobe_event_define_fields(struct ftrace_event_call *event_call) { int ret, i; - struct kprobe_trace_entry field; + struct kprobe_trace_entry_head field; struct trace_probe *tp = (struct trace_probe *)event_call->data; DEFINE_FIELD(unsigned long, ip, FIELD_STRING_IP, 0); - DEFINE_FIELD(int, nargs, 
FIELD_STRING_NARGS, 1); /* Set argument names as fields */ - for (i = 0; i < tp->nr_args; i++) - DEFINE_FIELD(unsigned long, args[i], tp->args[i].name, 0); + for (i = 0; i < tp->nr_args; i++) { + ret = trace_define_field(event_call, tp->args[i].type->name, + tp->args[i].name, + sizeof(field) + tp->args[i].offset, + tp->args[i].type->size, + tp->args[i].type->is_signed, + FILTER_OTHER); + if (ret) + return ret; + } return 0; } static int kretprobe_event_define_fields(struct ftrace_event_call *event_call) { int ret, i; - struct kretprobe_trace_entry field; + struct kretprobe_trace_entry_head field; struct trace_probe *tp = (struct trace_probe *)event_call->data; DEFINE_FIELD(unsigned long, func, FIELD_STRING_FUNC, 0); DEFINE_FIELD(unsigned long, ret_ip, FIELD_STRING_RETIP, 0); - DEFINE_FIELD(int, nargs, FIELD_STRING_NARGS, 1); /* Set argument names as fields */ - for (i = 0; i < tp->nr_args; i++) - DEFINE_FIELD(unsigned long, args[i], tp->args[i].name, 0); + for (i = 0; i < tp->nr_args; i++) { + ret = trace_define_field(event_call, tp->args[i].type->name, + tp->args[i].name, + sizeof(field) + tp->args[i].offset, + tp->args[i].type->size, + tp->args[i].type->is_signed, + FILTER_OTHER); + if (ret) + return ret; + } return 0; } @@ -1176,8 +1297,8 @@ static int __set_print_fmt(struct trace_probe *tp, char *buf, int len) pos += snprintf(buf + pos, LEN_OR_ZERO, "\"%s", fmt); for (i = 0; i < tp->nr_args; i++) { - pos += snprintf(buf + pos, LEN_OR_ZERO, " %s=%%lx", - tp->args[i].name); + pos += snprintf(buf + pos, LEN_OR_ZERO, " %s=%s", + tp->args[i].name, tp->args[i].type->fmt); } pos += snprintf(buf + pos, LEN_OR_ZERO, "\", %s", arg); @@ -1219,12 +1340,13 @@ static __kprobes void kprobe_perf_func(struct kprobe *kp, { struct trace_probe *tp = container_of(kp, struct trace_probe, rp.kp); struct ftrace_event_call *call = &tp->call; - struct kprobe_trace_entry *entry; + struct kprobe_trace_entry_head *entry; + u8 *data; int size, __size, i; unsigned long irq_flags; int rctx; - __size = SIZEOF_KPROBE_TRACE_ENTRY(tp->nr_args); + __size = sizeof(*entry) + tp->size; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); if (WARN_ONCE(size > PERF_MAX_TRACE_SIZE, @@ -1235,10 +1357,10 @@ static __kprobes void kprobe_perf_func(struct kprobe *kp, if (!entry) return; - entry->nargs = tp->nr_args; entry->ip = (unsigned long)kp->addr; + data = (u8 *)&entry[1]; for (i = 0; i < tp->nr_args; i++) - entry->args[i] = call_fetch(&tp->args[i].fetch, regs); + call_fetch(&tp->args[i].fetch, regs, data + tp->args[i].offset); perf_trace_buf_submit(entry, size, rctx, entry->ip, 1, irq_flags, regs); } @@ -1249,12 +1371,13 @@ static __kprobes void kretprobe_perf_func(struct kretprobe_instance *ri, { struct trace_probe *tp = container_of(ri->rp, struct trace_probe, rp); struct ftrace_event_call *call = &tp->call; - struct kretprobe_trace_entry *entry; + struct kretprobe_trace_entry_head *entry; + u8 *data; int size, __size, i; unsigned long irq_flags; int rctx; - __size = SIZEOF_KRETPROBE_TRACE_ENTRY(tp->nr_args); + __size = sizeof(*entry) + tp->size; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); if (WARN_ONCE(size > PERF_MAX_TRACE_SIZE, @@ -1265,11 +1388,11 @@ static __kprobes void kretprobe_perf_func(struct kretprobe_instance *ri, if (!entry) return; - entry->nargs = tp->nr_args; entry->func = (unsigned long)tp->rp.kp.addr; entry->ret_ip = (unsigned long)ri->ret_addr; + data = (u8 *)&entry[1]; for (i = 0; i < tp->nr_args; i++) - entry->args[i] = call_fetch(&tp->args[i].fetch, regs); + 
call_fetch(&tp->args[i].fetch, regs, data + tp->args[i].offset); perf_trace_buf_submit(entry, size, rctx, entry->ret_ip, 1, irq_flags, regs); diff --git a/kernel/trace/trace_ksym.c b/kernel/trace/trace_ksym.c index d59cd6879477..8eaf00749b65 100644 --- a/kernel/trace/trace_ksym.c +++ b/kernel/trace/trace_ksym.c @@ -34,12 +34,6 @@ #include <asm/atomic.h> -/* - * For now, let us restrict the no. of symbols traced simultaneously to number - * of available hardware breakpoint registers. - */ -#define KSYM_TRACER_MAX HBP_NUM - #define KSYM_TRACER_OP_LEN 3 /* rw- */ struct trace_ksym { @@ -53,7 +47,6 @@ struct trace_ksym { static struct trace_array *ksym_trace_array; -static unsigned int ksym_filter_entry_count; static unsigned int ksym_tracing_enabled; static HLIST_HEAD(ksym_filter_head); @@ -181,13 +174,6 @@ int process_new_ksym_entry(char *ksymname, int op, unsigned long addr) struct trace_ksym *entry; int ret = -ENOMEM; - if (ksym_filter_entry_count >= KSYM_TRACER_MAX) { - printk(KERN_ERR "ksym_tracer: Maximum limit:(%d) reached. No" - " new requests for tracing can be accepted now.\n", - KSYM_TRACER_MAX); - return -ENOSPC; - } - entry = kzalloc(sizeof(struct trace_ksym), GFP_KERNEL); if (!entry) return -ENOMEM; @@ -203,13 +189,17 @@ int process_new_ksym_entry(char *ksymname, int op, unsigned long addr) if (IS_ERR(entry->ksym_hbp)) { ret = PTR_ERR(entry->ksym_hbp); - printk(KERN_INFO "ksym_tracer request failed. Try again" - " later!!\n"); + if (ret == -ENOSPC) { + printk(KERN_ERR "ksym_tracer: Maximum limit reached." + " No new requests for tracing can be accepted now.\n"); + } else { + printk(KERN_INFO "ksym_tracer request failed. Try again" + " later!!\n"); + } goto err; } hlist_add_head_rcu(&(entry->ksym_hlist), &ksym_filter_head); - ksym_filter_entry_count++; return 0; @@ -265,7 +255,6 @@ static void __ksym_trace_reset(void) hlist_for_each_entry_safe(entry, node, node1, &ksym_filter_head, ksym_hlist) { unregister_wide_hw_breakpoint(entry->ksym_hbp); - ksym_filter_entry_count--; hlist_del_rcu(&(entry->ksym_hlist)); synchronize_rcu(); kfree(entry); @@ -338,7 +327,6 @@ static ssize_t ksym_trace_filter_write(struct file *file, goto out_unlock; } /* Error or "symbol:---" case: drop it */ - ksym_filter_entry_count--; hlist_del_rcu(&(entry->ksym_hlist)); synchronize_rcu(); kfree(entry); diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c index 8e46b3323cdc..2404c129a8c9 100644 --- a/kernel/trace/trace_output.c +++ b/kernel/trace/trace_output.c @@ -253,7 +253,7 @@ void *trace_seq_reserve(struct trace_seq *s, size_t len) void *ret; if (s->full) - return 0; + return NULL; if (len > ((PAGE_SIZE - 1) - s->len)) { s->full = 1; diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c index 5fca0f51fde4..a55fccfede5d 100644 --- a/kernel/trace/trace_sched_switch.c +++ b/kernel/trace/trace_sched_switch.c @@ -50,8 +50,7 @@ tracing_sched_switch_trace(struct trace_array *tr, } static void -probe_sched_switch(struct rq *__rq, struct task_struct *prev, - struct task_struct *next) +probe_sched_switch(struct task_struct *prev, struct task_struct *next) { struct trace_array_cpu *data; unsigned long flags; @@ -109,7 +108,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr, } static void -probe_sched_wakeup(struct rq *__rq, struct task_struct *wakee, int success) +probe_sched_wakeup(struct task_struct *wakee, int success) { struct trace_array_cpu *data; unsigned long flags; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 
0271742abb8d..8052446ceeaa 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -107,8 +107,7 @@ static void probe_wakeup_migrate_task(struct task_struct *task, int cpu) } static void notrace -probe_wakeup_sched_switch(struct rq *rq, struct task_struct *prev, - struct task_struct *next) +probe_wakeup_sched_switch(struct task_struct *prev, struct task_struct *next) { struct trace_array_cpu *data; cycle_t T0, T1, delta; @@ -200,7 +199,7 @@ static void wakeup_reset(struct trace_array *tr) } static void -probe_wakeup(struct rq *rq, struct task_struct *p, int success) +probe_wakeup(struct task_struct *p, int success) { struct trace_array_cpu *data; int cpu = smp_processor_id(); diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 81003b4d617f..250e7f9bd2f0 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -17,7 +17,6 @@ static inline int trace_valid_entry(struct trace_entry *entry) case TRACE_BRANCH: case TRACE_GRAPH_ENT: case TRACE_GRAPH_RET: - case TRACE_HW_BRANCHES: case TRACE_KSYM: return 1; } @@ -30,7 +29,7 @@ static int trace_test_buffer_cpu(struct trace_array *tr, int cpu) struct trace_entry *entry; unsigned int loops = 0; - while ((event = ring_buffer_consume(tr->buffer, cpu, NULL))) { + while ((event = ring_buffer_consume(tr->buffer, cpu, NULL, NULL))) { entry = ring_buffer_event_data(event); /* @@ -256,7 +255,8 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) /* Maximum number of functions to trace before diagnosing a hang */ #define GRAPH_MAX_FUNC_TEST 100000000 -static void __ftrace_dump(bool disable_tracing); +static void +__ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode); static unsigned int graph_hang_thresh; /* Wrap the real function entry probe to avoid possible hanging */ @@ -267,7 +267,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace) ftrace_graph_stop(); printk(KERN_WARNING "BUG: Function graph tracer hang!\n"); if (ftrace_dump_on_oops) - __ftrace_dump(false); + __ftrace_dump(false, DUMP_ALL); return 0; } @@ -755,62 +755,6 @@ trace_selftest_startup_branch(struct tracer *trace, struct trace_array *tr) } #endif /* CONFIG_BRANCH_TRACER */ -#ifdef CONFIG_HW_BRANCH_TRACER -int -trace_selftest_startup_hw_branches(struct tracer *trace, - struct trace_array *tr) -{ - struct trace_iterator *iter; - struct tracer tracer; - unsigned long count; - int ret; - - if (!trace->open) { - printk(KERN_CONT "missing open function..."); - return -1; - } - - ret = tracer_init(trace, tr); - if (ret) { - warn_failed_init_tracer(trace, ret); - return ret; - } - - /* - * The hw-branch tracer needs to collect the trace from the various - * cpu trace buffers - before tracing is stopped. 
- */ - iter = kzalloc(sizeof(*iter), GFP_KERNEL); - if (!iter) - return -ENOMEM; - - memcpy(&tracer, trace, sizeof(tracer)); - - iter->trace = &tracer; - iter->tr = tr; - iter->pos = -1; - mutex_init(&iter->mutex); - - trace->open(iter); - - mutex_destroy(&iter->mutex); - kfree(iter); - - tracing_stop(); - - ret = trace_test_buffer(tr, &count); - trace->reset(tr); - tracing_start(); - - if (!ret && !count) { - printk(KERN_CONT "no entries found.."); - ret = -1; - } - - return ret; -} -#endif /* CONFIG_HW_BRANCH_TRACER */ - #ifdef CONFIG_KSYM_TRACER static int ksym_selftest_dummy; diff --git a/kernel/user.c b/kernel/user.c index 766467b3bcb7..7e72614b736d 100644 --- a/kernel/user.c +++ b/kernel/user.c @@ -16,7 +16,6 @@ #include <linux/interrupt.h> #include <linux/module.h> #include <linux/user_namespace.h> -#include "cred-internals.h" struct user_namespace init_user_ns = { .kref = { @@ -137,9 +136,6 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid) struct hlist_head *hashent = uidhashentry(ns, uid); struct user_struct *up, *new; - /* Make uid_hash_find() + uids_user_create() + uid_hash_insert() - * atomic. - */ spin_lock_irq(&uidhash_lock); up = uid_hash_find(uid, hashent); spin_unlock_irq(&uidhash_lock); @@ -161,11 +157,6 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid) spin_lock_irq(&uidhash_lock); up = uid_hash_find(uid, hashent); if (up) { - /* This case is not possible when CONFIG_USER_SCHED - * is defined, since we serialize alloc_uid() using - * uids_mutex. Hence no need to call - * sched_destroy_user() or remove_user_sysfs_dir(). - */ key_put(new->uid_keyring); key_put(new->session_keyring); kmem_cache_free(uid_cachep, new); @@ -178,8 +169,6 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid) return up; - put_user_ns(new->user_ns); - kmem_cache_free(uid_cachep, new); out_unlock: return NULL; } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 935248bdbc47..d85be90d5888 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -512,6 +512,18 @@ config PROVE_RCU Say N if you are unsure. +config PROVE_RCU_REPEATEDLY + bool "RCU debugging: don't disable PROVE_RCU on first splat" + depends on PROVE_RCU + default n + help + By itself, PROVE_RCU will disable checking upon issuing the + first warning (or "splat"). This feature prevents such + disabling, allowing multiple RCU-lockdep warnings to be printed + on a single reboot. + + Say N if you are unsure. + config LOCKDEP bool depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT @@ -793,7 +805,7 @@ config RCU_CPU_STALL_DETECTOR config RCU_CPU_STALL_VERBOSE bool "Print additional per-task information for RCU_CPU_STALL_DETECTOR" depends on RCU_CPU_STALL_DETECTOR && TREE_PREEMPT_RCU - default n + default y help This option causes RCU to printk detailed per-task information for any tasks that are stalling the current RCU grace period. @@ -1086,6 +1098,13 @@ config DMA_API_DEBUG This option causes a performance degredation. Use only if you want to debug device drivers. If unsure, say N. +config ATOMIC64_SELFTEST + bool "Perform an atomic64_t self-test at boot" + help + Enable this option to test the atomic64_t functions at boot. + + If unsure, say N. 
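[Editor's note] CONFIG_ATOMIC64_SELFTEST added above enables lib/atomic64_test.c (introduced further below), which BUG_ON()s whenever an atomic64_t operation yields an unexpected value. Its add_unless checks assume the corrected return value that the lib/atomic64.c hunk below provides for the generic spinlock implementation: atomic64_add_unless() now returns non-zero when it actually performed the addition and zero when the counter already equalled u, matching the atomic_add_unless() convention. A minimal usage sketch under that reading, with a hypothetical helper name:

/* sketch only: assumes the fixed atomic64_add_unless() semantics shown below */
static inline int example_get_ref_unless_zero(atomic64_t *refcount)
{
	/* non-zero iff the counter was not already zero and was incremented */
	return atomic64_add_unless(refcount, 1LL, 0LL);
}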
+ source "samples/Kconfig" source "lib/Kconfig.kgdb" diff --git a/lib/Makefile b/lib/Makefile index 0d4015205c64..9e6d3c29d73a 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -39,7 +39,10 @@ lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o lib-$(CONFIG_GENERIC_FIND_FIRST_BIT) += find_next_bit.o lib-$(CONFIG_GENERIC_FIND_NEXT_BIT) += find_next_bit.o obj-$(CONFIG_GENERIC_FIND_LAST_BIT) += find_last_bit.o + +CFLAGS_hweight.o = $(subst $(quote),,$(CONFIG_ARCH_HWEIGHT_CFLAGS)) obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o + obj-$(CONFIG_LOCK_KERNEL) += kernel_lock.o obj-$(CONFIG_BTREE) += btree.o obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o @@ -101,6 +104,8 @@ obj-$(CONFIG_GENERIC_CSUM) += checksum.o obj-$(CONFIG_GENERIC_ATOMIC64) += atomic64.o +obj-$(CONFIG_ATOMIC64_SELFTEST) += atomic64_test.o + hostprogs-y := gen_crc32table clean-files := crc32table.h diff --git a/lib/atomic64.c b/lib/atomic64.c index 8bee16ec7524..a21c12bc727c 100644 --- a/lib/atomic64.c +++ b/lib/atomic64.c @@ -162,12 +162,12 @@ int atomic64_add_unless(atomic64_t *v, long long a, long long u) { unsigned long flags; spinlock_t *lock = lock_addr(v); - int ret = 1; + int ret = 0; spin_lock_irqsave(lock, flags); if (v->counter != u) { v->counter += a; - ret = 0; + ret = 1; } spin_unlock_irqrestore(lock, flags); return ret; diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c new file mode 100644 index 000000000000..65e482caf5e9 --- /dev/null +++ b/lib/atomic64_test.c @@ -0,0 +1,164 @@ +/* + * Testsuite for atomic64_t functions + * + * Copyright © 2010 Luca Barbieri + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ +#include <linux/init.h> +#include <asm/atomic.h> + +#define INIT(c) do { atomic64_set(&v, c); r = c; } while (0) +static __init int test_atomic64(void) +{ + long long v0 = 0xaaa31337c001d00dLL; + long long v1 = 0xdeadbeefdeafcafeLL; + long long v2 = 0xfaceabadf00df001LL; + long long onestwos = 0x1111111122222222LL; + long long one = 1LL; + + atomic64_t v = ATOMIC64_INIT(v0); + long long r = v0; + BUG_ON(v.counter != r); + + atomic64_set(&v, v1); + r = v1; + BUG_ON(v.counter != r); + BUG_ON(atomic64_read(&v) != r); + + INIT(v0); + atomic64_add(onestwos, &v); + r += onestwos; + BUG_ON(v.counter != r); + + INIT(v0); + atomic64_add(-one, &v); + r += -one; + BUG_ON(v.counter != r); + + INIT(v0); + r += onestwos; + BUG_ON(atomic64_add_return(onestwos, &v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + r += -one; + BUG_ON(atomic64_add_return(-one, &v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + atomic64_sub(onestwos, &v); + r -= onestwos; + BUG_ON(v.counter != r); + + INIT(v0); + atomic64_sub(-one, &v); + r -= -one; + BUG_ON(v.counter != r); + + INIT(v0); + r -= onestwos; + BUG_ON(atomic64_sub_return(onestwos, &v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + r -= -one; + BUG_ON(atomic64_sub_return(-one, &v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + atomic64_inc(&v); + r += one; + BUG_ON(v.counter != r); + + INIT(v0); + r += one; + BUG_ON(atomic64_inc_return(&v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + atomic64_dec(&v); + r -= one; + BUG_ON(v.counter != r); + + INIT(v0); + r -= one; + BUG_ON(atomic64_dec_return(&v) != r); + BUG_ON(v.counter != r); + + INIT(v0); + BUG_ON(atomic64_xchg(&v, v1) != v0); + r = v1; + BUG_ON(v.counter != r); + + INIT(v0); + BUG_ON(atomic64_cmpxchg(&v, v0, v1) != v0); + r = v1; + BUG_ON(v.counter != r); + + INIT(v0); + BUG_ON(atomic64_cmpxchg(&v, v2, v1) != v0); + BUG_ON(v.counter != r); + + INIT(v0); + BUG_ON(atomic64_add_unless(&v, one, v0)); + BUG_ON(v.counter != r); + + INIT(v0); + BUG_ON(!atomic64_add_unless(&v, one, v1)); + r += one; + BUG_ON(v.counter != r); + +#if defined(CONFIG_X86) || defined(CONFIG_MIPS) || defined(CONFIG_PPC) || defined(_ASM_GENERIC_ATOMIC64_H) + INIT(onestwos); + BUG_ON(atomic64_dec_if_positive(&v) != (onestwos - 1)); + r -= one; + BUG_ON(v.counter != r); + + INIT(0); + BUG_ON(atomic64_dec_if_positive(&v) != -one); + BUG_ON(v.counter != r); + + INIT(-one); + BUG_ON(atomic64_dec_if_positive(&v) != (-one - one)); + BUG_ON(v.counter != r); +#else +#warning Please implement atomic64_dec_if_positive for your architecture, and add it to the IF above +#endif + + INIT(onestwos); + BUG_ON(!atomic64_inc_not_zero(&v)); + r += one; + BUG_ON(v.counter != r); + + INIT(0); + BUG_ON(atomic64_inc_not_zero(&v)); + BUG_ON(v.counter != r); + + INIT(-one); + BUG_ON(!atomic64_inc_not_zero(&v)); + r += one; + BUG_ON(v.counter != r); + +#ifdef CONFIG_X86 + printk(KERN_INFO "atomic64 test passed for %s platform %s CX8 and %s SSE\n", +#ifdef CONFIG_X86_64 + "x86-64", +#elif defined(CONFIG_X86_CMPXCHG64) + "i586+", +#else + "i386+", +#endif + boot_cpu_has(X86_FEATURE_CX8) ? "with" : "without", + boot_cpu_has(X86_FEATURE_XMM) ? 
"with" : "without"); +#else + printk(KERN_INFO "atomic64 test passed\n"); +#endif + + return 0; +} + +core_initcall(test_atomic64); diff --git a/lib/btree.c b/lib/btree.c index 41859a820218..c9c6f0351526 100644 --- a/lib/btree.c +++ b/lib/btree.c @@ -95,7 +95,8 @@ static unsigned long *btree_node_alloc(struct btree_head *head, gfp_t gfp) unsigned long *node; node = mempool_alloc(head->mempool, gfp); - memset(node, 0, NODESIZE); + if (likely(node)) + memset(node, 0, NODESIZE); return node; } diff --git a/lib/debugobjects.c b/lib/debugobjects.c index b862b30369ff..deebcc57d4e6 100644 --- a/lib/debugobjects.c +++ b/lib/debugobjects.c @@ -141,6 +141,7 @@ alloc_object(void *addr, struct debug_bucket *b, struct debug_obj_descr *descr) obj->object = addr; obj->descr = descr; obj->state = ODEBUG_STATE_NONE; + obj->astate = 0; hlist_del(&obj->node); hlist_add_head(&obj->node, &b->list); @@ -252,8 +253,10 @@ static void debug_print_object(struct debug_obj *obj, char *msg) if (limit < 5 && obj->descr != descr_test) { limit++; - WARN(1, KERN_ERR "ODEBUG: %s %s object type: %s\n", msg, - obj_states[obj->state], obj->descr->name); + WARN(1, KERN_ERR "ODEBUG: %s %s (active state %u) " + "object type: %s\n", + msg, obj_states[obj->state], obj->astate, + obj->descr->name); } debug_objects_warnings++; } @@ -447,7 +450,10 @@ void debug_object_deactivate(void *addr, struct debug_obj_descr *descr) case ODEBUG_STATE_INIT: case ODEBUG_STATE_INACTIVE: case ODEBUG_STATE_ACTIVE: - obj->state = ODEBUG_STATE_INACTIVE; + if (!obj->astate) + obj->state = ODEBUG_STATE_INACTIVE; + else + debug_print_object(obj, "deactivate"); break; case ODEBUG_STATE_DESTROYED: @@ -553,6 +559,53 @@ out_unlock: raw_spin_unlock_irqrestore(&db->lock, flags); } +/** + * debug_object_active_state - debug checks object usage state machine + * @addr: address of the object + * @descr: pointer to an object specific debug description structure + * @expect: expected state + * @next: state to move to if expected state is found + */ +void +debug_object_active_state(void *addr, struct debug_obj_descr *descr, + unsigned int expect, unsigned int next) +{ + struct debug_bucket *db; + struct debug_obj *obj; + unsigned long flags; + + if (!debug_objects_enabled) + return; + + db = get_bucket((unsigned long) addr); + + raw_spin_lock_irqsave(&db->lock, flags); + + obj = lookup_object(addr, db); + if (obj) { + switch (obj->state) { + case ODEBUG_STATE_ACTIVE: + if (obj->astate == expect) + obj->astate = next; + else + debug_print_object(obj, "active_state"); + break; + + default: + debug_print_object(obj, "active_state"); + break; + } + } else { + struct debug_obj o = { .object = addr, + .state = ODEBUG_STATE_NOTAVAILABLE, + .descr = descr }; + + debug_print_object(&o, "active_state"); + } + + raw_spin_unlock_irqrestore(&db->lock, flags); +} + #ifdef CONFIG_DEBUG_OBJECTS_FREE static void __debug_check_no_obj_freed(const void *address, unsigned long size) { @@ -774,7 +827,7 @@ static int __init fixup_free(void *addr, enum debug_obj_state state) } } -static int +static int __init check_results(void *addr, enum debug_obj_state state, int fixups, int warnings) { struct debug_bucket *db; @@ -917,7 +970,7 @@ void __init debug_objects_early_init(void) /* * Convert the statically allocated objects to dynamic ones: */ -static int debug_objects_replace_static_objects(void) +static int __init debug_objects_replace_static_objects(void) { struct debug_bucket *db = obj_hash; struct hlist_node *node, *tmp; diff --git a/lib/hweight.c b/lib/hweight.c index 
63ee4eb1228d..3c79d50814cf 100644 --- a/lib/hweight.c +++ b/lib/hweight.c @@ -9,7 +9,7 @@ * The Hamming Weight of a number is the total number of bits set in it. */ -unsigned int hweight32(unsigned int w) +unsigned int __sw_hweight32(unsigned int w) { #ifdef ARCH_HAS_FAST_MULTIPLIER w -= (w >> 1) & 0x55555555; @@ -24,29 +24,30 @@ unsigned int hweight32(unsigned int w) return (res + (res >> 16)) & 0x000000FF; #endif } -EXPORT_SYMBOL(hweight32); +EXPORT_SYMBOL(__sw_hweight32); -unsigned int hweight16(unsigned int w) +unsigned int __sw_hweight16(unsigned int w) { unsigned int res = w - ((w >> 1) & 0x5555); res = (res & 0x3333) + ((res >> 2) & 0x3333); res = (res + (res >> 4)) & 0x0F0F; return (res + (res >> 8)) & 0x00FF; } -EXPORT_SYMBOL(hweight16); +EXPORT_SYMBOL(__sw_hweight16); -unsigned int hweight8(unsigned int w) +unsigned int __sw_hweight8(unsigned int w) { unsigned int res = w - ((w >> 1) & 0x55); res = (res & 0x33) + ((res >> 2) & 0x33); return (res + (res >> 4)) & 0x0F; } -EXPORT_SYMBOL(hweight8); +EXPORT_SYMBOL(__sw_hweight8); -unsigned long hweight64(__u64 w) +unsigned long __sw_hweight64(__u64 w) { #if BITS_PER_LONG == 32 - return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w); + return __sw_hweight32((unsigned int)(w >> 32)) + + __sw_hweight32((unsigned int)w); #elif BITS_PER_LONG == 64 #ifdef ARCH_HAS_FAST_MULTIPLIER w -= (w >> 1) & 0x5555555555555555ul; @@ -63,4 +64,4 @@ unsigned long hweight64(__u64 w) #endif #endif } -EXPORT_SYMBOL(hweight64); +EXPORT_SYMBOL(__sw_hweight64); diff --git a/lib/rbtree.c b/lib/rbtree.c index e2aa3be29858..15e10b1afdd2 100644 --- a/lib/rbtree.c +++ b/lib/rbtree.c @@ -44,6 +44,11 @@ static void __rb_rotate_left(struct rb_node *node, struct rb_root *root) else root->rb_node = right; rb_set_parent(node, right); + + if (root->augment_cb) { + root->augment_cb(node); + root->augment_cb(right); + } } static void __rb_rotate_right(struct rb_node *node, struct rb_root *root) @@ -67,12 +72,20 @@ static void __rb_rotate_right(struct rb_node *node, struct rb_root *root) else root->rb_node = left; rb_set_parent(node, left); + + if (root->augment_cb) { + root->augment_cb(node); + root->augment_cb(left); + } } void rb_insert_color(struct rb_node *node, struct rb_root *root) { struct rb_node *parent, *gparent; + if (root->augment_cb) + root->augment_cb(node); + while ((parent = rb_parent(node)) && rb_is_red(parent)) { gparent = rb_parent(parent); @@ -227,12 +240,15 @@ void rb_erase(struct rb_node *node, struct rb_root *root) else { struct rb_node *old = node, *left; + int old_parent_cb = 0; + int successor_parent_cb = 0; node = node->rb_right; while ((left = node->rb_left) != NULL) node = left; if (rb_parent(old)) { + old_parent_cb = 1; if (rb_parent(old)->rb_left == old) rb_parent(old)->rb_left = node; else @@ -247,8 +263,10 @@ void rb_erase(struct rb_node *node, struct rb_root *root) if (parent == old) { parent = node; } else { + successor_parent_cb = 1; if (child) rb_set_parent(child, parent); + parent->rb_left = child; node->rb_right = old->rb_right; @@ -259,6 +277,24 @@ void rb_erase(struct rb_node *node, struct rb_root *root) node->rb_left = old->rb_left; rb_set_parent(old->rb_left, node); + if (root->augment_cb) { + /* + * Here, three different nodes can have new children. + * The parent of the successor node that was selected + * to replace the node to be erased. + * The node that is getting erased and is now replaced + * by its successor. + * The parent of the node getting erased-replaced. 
+ */ + if (successor_parent_cb) + root->augment_cb(parent); + + root->augment_cb(node); + + if (old_parent_cb) + root->augment_cb(rb_parent(old)); + } + goto color; } @@ -267,15 +303,19 @@ void rb_erase(struct rb_node *node, struct rb_root *root) if (child) rb_set_parent(child, parent); - if (parent) - { + + if (parent) { if (parent->rb_left == node) parent->rb_left = child; else parent->rb_right = child; - } - else + + if (root->augment_cb) + root->augment_cb(parent); + + } else { root->rb_node = child; + } color: if (color == RB_BLACK) diff --git a/lib/rwsem.c b/lib/rwsem.c index 3e3365e5665e..ceba8e28807a 100644 --- a/lib/rwsem.c +++ b/lib/rwsem.c @@ -136,9 +136,10 @@ __rwsem_do_wake(struct rw_semaphore *sem, int downgrading) out: return sem; - /* undo the change to count, but check for a transition 1->0 */ + /* undo the change to the active count, but check for a transition + * 1->0 */ undo: - if (rwsem_atomic_update(-RWSEM_ACTIVE_BIAS, sem) != 0) + if (rwsem_atomic_update(-RWSEM_ACTIVE_BIAS, sem) & RWSEM_ACTIVE_MASK) goto out; goto try_again; } diff --git a/mm/mlock.c b/mm/mlock.c index 8f4e2dfceec1..3f82720e0515 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -607,44 +607,3 @@ void user_shm_unlock(size_t size, struct user_struct *user) spin_unlock(&shmlock_user_lock); free_uid(user); } - -int account_locked_memory(struct mm_struct *mm, struct rlimit *rlim, - size_t size) -{ - unsigned long lim, vm, pgsz; - int error = -ENOMEM; - - pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; - - down_write(&mm->mmap_sem); - - lim = ACCESS_ONCE(rlim[RLIMIT_AS].rlim_cur) >> PAGE_SHIFT; - vm = mm->total_vm + pgsz; - if (lim < vm) - goto out; - - lim = ACCESS_ONCE(rlim[RLIMIT_MEMLOCK].rlim_cur) >> PAGE_SHIFT; - vm = mm->locked_vm + pgsz; - if (lim < vm) - goto out; - - mm->total_vm += pgsz; - mm->locked_vm += pgsz; - - error = 0; - out: - up_write(&mm->mmap_sem); - return error; -} - -void refund_locked_memory(struct mm_struct *mm, size_t size) -{ - unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; - - down_write(&mm->mmap_sem); - - mm->total_vm -= pgsz; - mm->locked_vm -= pgsz; - - up_write(&mm->mmap_sem); -} diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index f9bdf264473d..cbcd654215e6 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -245,3 +245,7 @@ quiet_cmd_lzo = LZO $@ cmd_lzo = (cat $(filter-out FORCE,$^) | \ lzop -9 && $(call size_append, $(filter-out FORCE,$^))) > $@ || \ (rm -f $@ ; false) + +# misc stuff +# --------------------------------------------------------------------------- +quote:=" diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c index 36a60a853173..9cf2400580a7 100644 --- a/scripts/mod/file2alias.c +++ b/scripts/mod/file2alias.c @@ -818,6 +818,16 @@ static int do_mdio_entry(const char *filename, return 1; } +/* Looks like: zorro:iN. */ +static int do_zorro_entry(const char *filename, struct zorro_device_id *id, + char *alias) +{ + id->id = TO_NATIVE(id->id); + strcpy(alias, "zorro:"); + ADD(alias, "i", id->id != ZORRO_WILDCARD, id->id); + return 1; +} + /* Ignore any prefix, eg. 
some architectures prepend _ */ static inline int sym_is(const char *symbol, const char *name) { @@ -969,6 +979,10 @@ void handle_moddevtable(struct module *mod, struct elf_info *info, do_table(symval, sym->st_size, sizeof(struct mdio_device_id), "mdio", do_mdio_entry, mod); + else if (sym_is(symname, "__mod_zorro_device_table")) + do_table(symval, sym->st_size, + sizeof(struct zorro_device_id), "zorro", + do_zorro_entry, mod); free(zeros); } diff --git a/security/min_addr.c b/security/min_addr.c index e86f297522bf..f728728f193b 100644 --- a/security/min_addr.c +++ b/security/min_addr.c @@ -33,7 +33,7 @@ int mmap_min_addr_handler(struct ctl_table *table, int write, { int ret; - if (!capable(CAP_SYS_RAWIO)) + if (write && !capable(CAP_SYS_RAWIO)) return -EPERM; ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos); diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c index 872887624030..20b5982c996b 100644 --- a/sound/core/pcm_native.c +++ b/sound/core/pcm_native.c @@ -36,6 +36,9 @@ #include <sound/timer.h> #include <sound/minors.h> #include <asm/io.h> +#if defined(CONFIG_MIPS) && defined(CONFIG_DMA_NONCOHERENT) +#include <dma-coherence.h> +#endif /* * Compatibility @@ -3184,6 +3187,10 @@ static int snd_pcm_default_mmap(struct snd_pcm_substream *substream, substream->runtime->dma_area, substream->runtime->dma_addr, area->vm_end - area->vm_start); +#elif defined(CONFIG_MIPS) && defined(CONFIG_DMA_NONCOHERENT) + if (substream->dma_buffer.dev.type == SNDRV_DMA_TYPE_DEV && + !plat_device_is_coherent(substream->dma_buffer.dev.dev)) + area->vm_page_prot = pgprot_noncached(area->vm_page_prot); #endif /* ARCH_HAS_DMA_MMAP_COHERENT */ /* mmap with fault handler */ area->vm_ops = &snd_pcm_vm_ops_data_fault; diff --git a/sound/drivers/pcsp/pcsp.h b/sound/drivers/pcsp/pcsp.h index 1e123077923d..4ff6c8cc5077 100644 --- a/sound/drivers/pcsp/pcsp.h +++ b/sound/drivers/pcsp/pcsp.h @@ -16,7 +16,7 @@ #include <asm/i8253.h> #else #include <asm/8253pit.h> -static DEFINE_SPINLOCK(i8253_lock); +static DEFINE_RAW_SPINLOCK(i8253_lock); #endif #define PCSP_SOUND_VERSION 0x400 /* read 4.00 */ diff --git a/sound/drivers/pcsp/pcsp_input.c b/sound/drivers/pcsp/pcsp_input.c index 0444cdeb4bec..b5e2b54c2604 100644 --- a/sound/drivers/pcsp/pcsp_input.c +++ b/sound/drivers/pcsp/pcsp_input.c @@ -21,7 +21,7 @@ static void pcspkr_do_sound(unsigned int count) { unsigned long flags; - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); if (count) { /* set command for counter 2, 2 byte write */ @@ -36,7 +36,7 @@ static void pcspkr_do_sound(unsigned int count) outb(inb_p(0x61) & 0xFC, 0x61); } - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); } void pcspkr_stop_sound(void) diff --git a/sound/drivers/pcsp/pcsp_lib.c b/sound/drivers/pcsp/pcsp_lib.c index d77ffa9a9387..ce9e7d170c0d 100644 --- a/sound/drivers/pcsp/pcsp_lib.c +++ b/sound/drivers/pcsp/pcsp_lib.c @@ -66,7 +66,7 @@ static u64 pcsp_timer_update(struct snd_pcsp *chip) timer_cnt = val * CUR_DIV() / 256; if (timer_cnt && chip->enable) { - spin_lock_irqsave(&i8253_lock, flags); + raw_spin_lock_irqsave(&i8253_lock, flags); if (!nforce_wa) { outb_p(chip->val61, 0x61); outb_p(timer_cnt, 0x42); @@ -75,7 +75,7 @@ static u64 pcsp_timer_update(struct snd_pcsp *chip) outb(chip->val61 ^ 2, 0x61); chip->thalf = 1; } - spin_unlock_irqrestore(&i8253_lock, flags); + raw_spin_unlock_irqrestore(&i8253_lock, flags); } chip->ns_rem = PCSP_PERIOD_NS(); @@ -159,10 +159,10 @@ static int 
pcsp_start_playing(struct snd_pcsp *chip) return -EIO; } - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); chip->val61 = inb(0x61) | 0x03; outb_p(0x92, 0x43); /* binary, mode 1, LSB only, ch 2 */ - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); atomic_set(&chip->timer_active, 1); chip->thalf = 0; @@ -179,11 +179,11 @@ static void pcsp_stop_playing(struct snd_pcsp *chip) return; atomic_set(&chip->timer_active, 0); - spin_lock(&i8253_lock); + raw_spin_lock(&i8253_lock); /* restore the timer */ outb_p(0xb6, 0x43); /* binary, mode 3, LSB/MSB, ch 2 */ outb(chip->val61 & 0xFC, 0x61); - spin_unlock(&i8253_lock); + raw_spin_unlock(&i8253_lock); } /* diff --git a/sound/oss/dmasound/dmasound_paula.c b/sound/oss/dmasound/dmasound_paula.c index bb14e4c67e89..87910e992133 100644 --- a/sound/oss/dmasound/dmasound_paula.c +++ b/sound/oss/dmasound/dmasound_paula.c @@ -21,6 +21,7 @@ #include <linux/ioport.h> #include <linux/soundcard.h> #include <linux/interrupt.h> +#include <linux/platform_device.h> #include <asm/uaccess.h> #include <asm/setup.h> @@ -710,31 +711,41 @@ static MACHINE machAmiga = { /*** Config & Setup **********************************************************/ -static int __init dmasound_paula_init(void) +static int __init amiga_audio_probe(struct platform_device *pdev) { - int err; - - if (MACH_IS_AMIGA && AMIGAHW_PRESENT(AMI_AUDIO)) { - if (!request_mem_region(CUSTOM_PHYSADDR+0xa0, 0x40, - "dmasound [Paula]")) - return -EBUSY; - dmasound.mach = machAmiga; - dmasound.mach.default_hard = def_hard ; - dmasound.mach.default_soft = def_soft ; - err = dmasound_init(); - if (err) - release_mem_region(CUSTOM_PHYSADDR+0xa0, 0x40); - return err; - } else - return -ENODEV; + dmasound.mach = machAmiga; + dmasound.mach.default_hard = def_hard ; + dmasound.mach.default_soft = def_soft ; + return dmasound_init(); } -static void __exit dmasound_paula_cleanup(void) +static int __exit amiga_audio_remove(struct platform_device *pdev) { dmasound_deinit(); - release_mem_region(CUSTOM_PHYSADDR+0xa0, 0x40); + return 0; +} + +static struct platform_driver amiga_audio_driver = { + .remove = __exit_p(amiga_audio_remove), + .driver = { + .name = "amiga-audio", + .owner = THIS_MODULE, + }, +}; + +static int __init amiga_audio_init(void) +{ + return platform_driver_probe(&amiga_audio_driver, amiga_audio_probe); } -module_init(dmasound_paula_init); -module_exit(dmasound_paula_cleanup); +module_init(amiga_audio_init); + +static void __exit amiga_audio_exit(void) +{ + platform_driver_unregister(&amiga_audio_driver); +} + +module_exit(amiga_audio_exit); + MODULE_LICENSE("GPL"); +MODULE_ALIAS("platform:amiga-audio"); diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index d8213e2231a6..feabb44c7ca4 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -1197,9 +1197,10 @@ static int patch_cxt5045(struct hda_codec *codec) case 0x103c: case 0x1631: case 0x1734: - /* HP, Packard Bell, & Fujitsu-Siemens laptops have really bad - * sound over 0dB on NID 0x17. Fix max PCM level to 0 dB - * (originally it has 0x2b steps with 0dB offset 0x14) + case 0x17aa: + /* HP, Packard Bell, Fujitsu-Siemens & Lenovo laptops have + * really bad sound over 0dB on NID 0x17. 
Fix max PCM level to + * 0 dB (originally it has 0x2b steps with 0dB offset 0x14) */ snd_hda_override_amp_caps(codec, 0x17, HDA_INPUT, (0x14 << AC_AMPCAP_OFFSET_SHIFT) | diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c index 12825aa03106..a0e06d82da1f 100644 --- a/sound/pci/hda/patch_sigmatel.c +++ b/sound/pci/hda/patch_sigmatel.c @@ -104,6 +104,7 @@ enum { STAC_DELL_M4_2, STAC_DELL_M4_3, STAC_HP_M4, + STAC_HP_DV4, STAC_HP_DV5, STAC_HP_HDX, STAC_HP_DV4_1222NR, @@ -1691,6 +1692,7 @@ static unsigned int *stac92hd71bxx_brd_tbl[STAC_92HD71BXX_MODELS] = { [STAC_DELL_M4_2] = dell_m4_2_pin_configs, [STAC_DELL_M4_3] = dell_m4_3_pin_configs, [STAC_HP_M4] = NULL, + [STAC_HP_DV4] = NULL, [STAC_HP_DV5] = NULL, [STAC_HP_HDX] = NULL, [STAC_HP_DV4_1222NR] = NULL, @@ -1703,6 +1705,7 @@ static const char *stac92hd71bxx_models[STAC_92HD71BXX_MODELS] = { [STAC_DELL_M4_2] = "dell-m4-2", [STAC_DELL_M4_3] = "dell-m4-3", [STAC_HP_M4] = "hp-m4", + [STAC_HP_DV4] = "hp-dv4", [STAC_HP_DV5] = "hp-dv5", [STAC_HP_HDX] = "hp-hdx", [STAC_HP_DV4_1222NR] = "hp-dv4-1222nr", @@ -1721,7 +1724,7 @@ static struct snd_pci_quirk stac92hd71bxx_cfg_tbl[] = { SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x3080, "HP", STAC_HP_DV5), SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x30f0, - "HP dv4-7", STAC_HP_DV5), + "HP dv4-7", STAC_HP_DV4), SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x3600, "HP dv4-7", STAC_HP_DV5), SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x3610, @@ -4766,6 +4769,9 @@ static void set_hp_led_gpio(struct hda_codec *codec) struct sigmatel_spec *spec = codec->spec; unsigned int gpio; + if (spec->gpio_led) + return; + gpio = snd_hda_param_read(codec, codec->afg, AC_PAR_GPIO_CAP); gpio &= AC_GPIO_IO_COUNT; if (gpio > 3) @@ -5675,6 +5681,9 @@ again: spec->num_smuxes = 1; spec->num_dmuxes = 1; /* fallthrough */ + case STAC_HP_DV4: + spec->gpio_led = 0x01; + /* fallthrough */ case STAC_HP_DV5: snd_hda_codec_set_pincfg(codec, 0x0d, 0x90170010); stac92xx_auto_set_pinctl(codec, 0x0d, AC_PINCTL_OUT_EN); @@ -5688,6 +5697,7 @@ again: spec->num_dmics = 1; spec->num_dmuxes = 1; spec->num_smuxes = 1; + spec->gpio_led = 0x08; break; } @@ -5744,7 +5754,8 @@ again: } /* enable bass on HP dv7 */ - if (spec->board_config == STAC_HP_DV5) { + if (spec->board_config == STAC_HP_DV4 || + spec->board_config == STAC_HP_DV5) { unsigned int cap; cap = snd_hda_param_read(codec, 0x1, AC_PAR_GPIO_CAP); cap &= AC_GPIO_IO_COUNT; diff --git a/sound/pci/ice1712/maya44.c b/sound/pci/ice1712/maya44.c index 3e1c20ae2f1c..726fd4b92e19 100644 --- a/sound/pci/ice1712/maya44.c +++ b/sound/pci/ice1712/maya44.c @@ -347,7 +347,7 @@ static int maya_gpio_sw_put(struct snd_kcontrol *kcontrol, /* known working input slots (0-4) */ #define MAYA_LINE_IN 1 /* in-2 */ -#define MAYA_MIC_IN 4 /* in-5 */ +#define MAYA_MIC_IN 3 /* in-4 */ static void wm8776_select_input(struct snd_maya44 *chip, int idx, int line) { @@ -393,8 +393,8 @@ static int maya_rec_src_put(struct snd_kcontrol *kcontrol, int changed; mutex_lock(&chip->mutex); - changed = maya_set_gpio_bits(chip->ice, GPIO_MIC_RELAY, - sel ? GPIO_MIC_RELAY : 0); + changed = maya_set_gpio_bits(chip->ice, 1 << GPIO_MIC_RELAY, + sel ? (1 << GPIO_MIC_RELAY) : 0); wm8776_select_input(chip, 0, sel ? 
MAYA_MIC_IN : MAYA_LINE_IN); mutex_unlock(&chip->mutex); return changed; diff --git a/sound/pci/oxygen/xonar_cs43xx.c b/sound/pci/oxygen/xonar_cs43xx.c index 16c226bfcd2b..7c4986b27f2b 100644 --- a/sound/pci/oxygen/xonar_cs43xx.c +++ b/sound/pci/oxygen/xonar_cs43xx.c @@ -56,6 +56,7 @@ #include <sound/pcm_params.h> #include <sound/tlv.h> #include "xonar.h" +#include "cm9780.h" #include "cs4398.h" #include "cs4362a.h" @@ -172,6 +173,8 @@ static void xonar_d1_init(struct oxygen *chip) oxygen_clear_bits16(chip, OXYGEN_GPIO_DATA, GPIO_D1_FRONT_PANEL | GPIO_D1_INPUT_ROUTE); + oxygen_ac97_set_bits(chip, 0, CM9780_JACK, CM9780_FMIC2MIC); + xonar_init_cs53x1(chip); xonar_enable_output(chip); diff --git a/tools/perf/Documentation/perf-annotate.txt b/tools/perf/Documentation/perf-annotate.txt index c9dcade06831..5164a655c39f 100644 --- a/tools/perf/Documentation/perf-annotate.txt +++ b/tools/perf/Documentation/perf-annotate.txt @@ -1,5 +1,5 @@ perf-annotate(1) -============== +================ NAME ---- diff --git a/tools/perf/Documentation/perf-bench.txt b/tools/perf/Documentation/perf-bench.txt index ae525ac5a2ce..a3dbadb26ef5 100644 --- a/tools/perf/Documentation/perf-bench.txt +++ b/tools/perf/Documentation/perf-bench.txt @@ -1,5 +1,5 @@ perf-bench(1) -============ +============= NAME ---- @@ -19,12 +19,12 @@ COMMON OPTIONS -f:: --format=:: Specify format style. -Current available format styles are, +Current available format styles are: 'default':: Default style. This is mainly for human reading. --------------------- -% perf bench sched pipe # with no style specify +% perf bench sched pipe # with no style specified (executing 1000000 pipe operations between two tasks) Total time:5.855 sec 5.855061 usecs/op @@ -79,7 +79,7 @@ options (20 sender and receiver processes per group) Total time:0.308 sec -% perf bench sched messaging -t -g 20 # be multi-thread,with 20 groups +% perf bench sched messaging -t -g 20 # be multi-thread, with 20 groups (20 sender and receiver threads per group) (20 groups == 800 threads run) diff --git a/tools/perf/Documentation/perf-buildid-cache.txt b/tools/perf/Documentation/perf-buildid-cache.txt index 88bc3b519746..5d1a9500277f 100644 --- a/tools/perf/Documentation/perf-buildid-cache.txt +++ b/tools/perf/Documentation/perf-buildid-cache.txt @@ -8,7 +8,7 @@ perf-buildid-cache - Manage build-id cache. SYNOPSIS -------- [verse] -'perf buildid-list <options>' +'perf buildid-cache <options>' DESCRIPTION ----------- @@ -30,4 +30,4 @@ OPTIONS SEE ALSO -------- -linkperf:perf-record[1], linkperf:perf-report[1] +linkperf:perf-record[1], linkperf:perf-report[1], linkperf:perf-buildid-list[1] diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt index 8974e208cba6..20d97d84ea1c 100644 --- a/tools/perf/Documentation/perf-diff.txt +++ b/tools/perf/Documentation/perf-diff.txt @@ -1,5 +1,5 @@ perf-diff(1) -============== +============ NAME ---- diff --git a/tools/perf/Documentation/perf-inject.txt b/tools/perf/Documentation/perf-inject.txt new file mode 100644 index 000000000000..025630d43cd2 --- /dev/null +++ b/tools/perf/Documentation/perf-inject.txt @@ -0,0 +1,35 @@ +perf-inject(1) +============== + +NAME +---- +perf-inject - Filter to augment the events stream with additional information + +SYNOPSIS +-------- +[verse] +'perf inject <options>' + +DESCRIPTION +----------- +perf-inject reads a perf-record event stream and repipes it to stdout. 
At any +point the processing code can inject other events into the event stream - in +this case build-ids (-b option) are read and injected as needed into the event +stream. + +Build-ids are just the first user of perf-inject - potentially anything that +needs userspace processing to augment the events stream with additional +information could make use of this facility. + +OPTIONS +------- +-b:: +--build-ids=:: + Inject build-ids into the output stream +-v:: +--verbose:: + Be more verbose. + +SEE ALSO +-------- +linkperf:perf-record[1], linkperf:perf-report[1], linkperf:perf-archive[1] diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt index eac4d852e7cd..a52fcde894c7 100644 --- a/tools/perf/Documentation/perf-kmem.txt +++ b/tools/perf/Documentation/perf-kmem.txt @@ -1,5 +1,5 @@ perf-kmem(1) -============== +============ NAME ---- diff --git a/tools/perf/Documentation/perf-kvm.txt b/tools/perf/Documentation/perf-kvm.txt new file mode 100644 index 000000000000..d004e19fe6d6 --- /dev/null +++ b/tools/perf/Documentation/perf-kvm.txt @@ -0,0 +1,68 @@ +perf-kvm(1) +=========== + +NAME +---- +perf-kvm - Tool to trace/measure kvm guest os + +SYNOPSIS +-------- +[verse] +'perf kvm' [--host] [--guest] [--guestmount=<path> + [--guestkallsyms=<path> --guestmodules=<path> | --guestvmlinux=<path>]] + {top|record|report|diff|buildid-list} +'perf kvm' [--host] [--guest] [--guestkallsyms=<path> --guestmodules=<path> + | --guestvmlinux=<path>] {top|record|report|diff|buildid-list} + +DESCRIPTION +----------- +There are a couple of variants of perf kvm: + + 'perf kvm [options] top <command>' to generates and displays + a performance counter profile of guest os in realtime + of an arbitrary workload. + + 'perf kvm record <command>' to record the performance couinter profile + of an arbitrary workload and save it into a perf data file. If both + --host and --guest are input, the perf data file name is perf.data.kvm. + If there is no --host but --guest, the file name is perf.data.guest. + If there is no --guest but --host, the file name is perf.data.host. + + 'perf kvm report' to display the performance counter profile information + recorded via perf kvm record. + + 'perf kvm diff' to displays the performance difference amongst two perf.data + files captured via perf record. + + 'perf kvm buildid-list' to display the buildids found in a perf data file, + so that other tools can be used to fetch packages with matching symbol tables + for use by perf report. + +OPTIONS +------- +--host=:: + Collect host side performance profile. +--guest=:: + Collect guest side performance profile. +--guestmount=<path>:: + Guest os root file system mount directory. Users mounts guest os + root directories under <path> by a specific filesystem access method, + typically, sshfs. For example, start 2 guest os. The one's pid is 8888 + and the other's is 9999. + #mkdir ~/guestmount; cd ~/guestmount + #sshfs -o allow_other,direct_io -p 5551 localhost:/ 8888/ + #sshfs -o allow_other,direct_io -p 5552 localhost:/ 9999/ + #perf kvm --host --guest --guestmount=~/guestmount top +--guestkallsyms=<path>:: + Guest os /proc/kallsyms file copy. 'perf' kvm' reads it to get guest + kernel symbols. Users copy it out from guest os. +--guestmodules=<path>:: + Guest os /proc/modules file copy. 'perf' kvm' reads it to get guest + kernel module information. Users copy it out from guest os. +--guestvmlinux=<path>:: + Guest os kernel vmlinux. 
+ +SEE ALSO +-------- +linkperf:perf-top[1], linkperf:perf-record[1], linkperf:perf-report[1], +linkperf:perf-diff[1], linkperf:perf-buildid-list[1] diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt index 8290b9422668..43e3dd284b90 100644 --- a/tools/perf/Documentation/perf-list.txt +++ b/tools/perf/Documentation/perf-list.txt @@ -15,6 +15,35 @@ DESCRIPTION This command displays the symbolic event types which can be selected in the various perf commands with the -e option. +RAW HARDWARE EVENT DESCRIPTOR +----------------------------- +Even when an event is not available in a symbolic form within perf right now, +it can be encoded in a per processor specific way. + +For instance For x86 CPUs NNN represents the raw register encoding with the +layout of IA32_PERFEVTSELx MSRs (see [Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide] Figure 30-1 Layout +of IA32_PERFEVTSELx MSRs) or AMD's PerfEvtSeln (see [AMD64 Architecture Programmer’s Manual Volume 2: System Programming], Page 344, +Figure 13-7 Performance Event-Select Register (PerfEvtSeln)). + +Example: + +If the Intel docs for a QM720 Core i7 describe an event as: + + Event Umask Event Mask + Num. Value Mnemonic Description Comment + + A8H 01H LSD.UOPS Counts the number of micro-ops Use cmask=1 and + delivered by loop stream detector invert to count + cycles + +raw encoding of 0x1A8 can be used: + + perf stat -e r1a8 -a sleep 1 + perf record -e r1a8 ... + +You should refer to the processor specific documentation for getting these +details. Some of them are referenced in the SEE ALSO section below. + OPTIONS ------- None @@ -22,4 +51,6 @@ None SEE ALSO -------- linkperf:perf-stat[1], linkperf:perf-top[1], -linkperf:perf-record[1] +linkperf:perf-record[1], +http://www.intel.com/Assets/PDF/manual/253669.pdf[Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide], +http://support.amd.com/us/Processor_TechDocs/24593.pdf[AMD64 Architecture Programmer’s Manual Volume 2: System Programming] diff --git a/tools/perf/Documentation/perf-probe.txt b/tools/perf/Documentation/perf-probe.txt index 34202b1be0bb..94a258c96a44 100644 --- a/tools/perf/Documentation/perf-probe.txt +++ b/tools/perf/Documentation/perf-probe.txt @@ -57,6 +57,14 @@ OPTIONS --force:: Forcibly add events with existing name. +-n:: +--dry-run:: + Dry run. With this option, --add and --del doesn't execute actual + adding and removal operations. + +--max-probes:: + Set the maximum number of probe points for an event. Default is 128. + PROBE SYNTAX ------------ Probe points are defined by following syntax. @@ -74,13 +82,22 @@ Probe points are defined by following syntax. 'EVENT' specifies the name of new event, if omitted, it will be set the name of the probed function. Currently, event group name is set as 'probe'. 'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, ':RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. And ';PTN' means lazy matching pattern (see LAZY MATCHING). Note that ';PTN' must be the end of the probe point definition. In addition, '@SRC' specifies a source file which has that function. 
It is also possible to specify a probe point by the source line number or lazy matching by using 'SRC:ALN' or 'SRC;PTN' syntax, where 'SRC' is the source file path, ':ALN' is the line number and ';PTN' is the lazy matching pattern. -'ARG' specifies the arguments of this probe point. You can use the name of local variable, or kprobe-tracer argument format (e.g. $retval, %ax, etc). +'ARG' specifies the arguments of this probe point, (see PROBE ARGUMENT). + +PROBE ARGUMENT +-------------- +Each probe argument follows below syntax. + + [NAME=]LOCALVAR|$retval|%REG|@SYMBOL[:TYPE] + +'NAME' specifies the name of this argument (optional). You can use the name of local variable, local data structure member (e.g. var->field, var.field2), or kprobe-tracer argument format (e.g. $retval, %ax, etc). Note that the name of this argument will be set as the last member name if you specify a local data structure member (e.g. field2 for 'var->field1.field2'.) +'TYPE' casts the type of this argument (optional). If omitted, perf probe automatically set the type based on debuginfo. LINE SYNTAX ----------- Line range is descripted by following syntax. - "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]" + "FUNC[:RLN[+NUM|-RLN2]]|SRC:ALN[+NUM|-ALN2]" FUNC specifies the function name of showing lines. 'RLN' is the start line number from function entry line, and 'RLN2' is the end line number. As same as diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt index fc46c0b40f6e..34e255fc3e2f 100644 --- a/tools/perf/Documentation/perf-record.txt +++ b/tools/perf/Documentation/perf-record.txt @@ -58,7 +58,7 @@ OPTIONS -f:: --force:: - Overwrite existing data file. + Overwrite existing data file. (deprecated) -c:: --count=:: @@ -69,8 +69,8 @@ OPTIONS Output file name. -i:: ---inherit:: - Child tasks inherit counters. +--no-inherit:: + Child tasks do not inherit counters. -F:: --freq=:: Profile at this frequency. @@ -101,7 +101,7 @@ OPTIONS -R:: --raw-samples:: -Collect raw sample records from all opened counters (typically for tracepoint counters). +Collect raw sample records from all opened counters (default for tracepoint counters). SEE ALSO -------- diff --git a/tools/perf/Documentation/perf-sched.txt b/tools/perf/Documentation/perf-sched.txt index 1ce79198997b..8417644a6166 100644 --- a/tools/perf/Documentation/perf-sched.txt +++ b/tools/perf/Documentation/perf-sched.txt @@ -12,7 +12,7 @@ SYNOPSIS DESCRIPTION ----------- -There's four variants of perf sched: +There are four variants of perf sched: 'perf sched record <command>' to record the scheduling events of an arbitrary workload. @@ -27,7 +27,7 @@ There's four variants of perf sched: via perf sched record. (this is done by starting up mockup threads that mimic the workload based on the events in the trace. These threads can then replay the timings (CPU runtime and sleep patterns) - of the workload as it occured when it was recorded - and can repeat + of the workload as it occurred when it was recorded - and can repeat it a number of times, measuring its performance.) OPTIONS diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt index 484080dd5b6f..2cab8e8c33d0 100644 --- a/tools/perf/Documentation/perf-stat.txt +++ b/tools/perf/Documentation/perf-stat.txt @@ -31,8 +31,8 @@ OPTIONS hexadecimal event descriptor. 
-i:: ---inherit:: - child tasks inherit counters +--no-inherit:: + child tasks do not inherit counters -p:: --pid=<pid>:: stat events on existing pid diff --git a/tools/perf/Documentation/perf-test.txt b/tools/perf/Documentation/perf-test.txt new file mode 100644 index 000000000000..1c4b5f5b7f71 --- /dev/null +++ b/tools/perf/Documentation/perf-test.txt @@ -0,0 +1,22 @@ +perf-test(1) +============ + +NAME +---- +perf-test - Runs sanity tests. + +SYNOPSIS +-------- +[verse] +'perf test <options>' + +DESCRIPTION +----------- +This command does assorted sanity tests, initially thru linked routines but +also will look for a directory with more tests in the form of scripts. + +OPTIONS +------- +-v:: +--verbose:: + Be more verbose. diff --git a/tools/perf/Documentation/perf-trace-perl.txt b/tools/perf/Documentation/perf-trace-perl.txt index d729cee8d987..ee6525ee6d69 100644 --- a/tools/perf/Documentation/perf-trace-perl.txt +++ b/tools/perf/Documentation/perf-trace-perl.txt @@ -49,12 +49,10 @@ available as calls back into the perf executable (see below). As an example, the following perf record command can be used to record all sched_wakeup events in the system: - # perf record -c 1 -f -a -M -R -e sched:sched_wakeup + # perf record -a -e sched:sched_wakeup Traces meant to be processed using a script should be recorded with -the above options: -c 1 says to sample every event, -a to enable -system-wide collection, -M to multiplex the output, and -R to collect -raw samples. +the above option: -a to enable system-wide collection. The format file for the sched_wakep event defines the following fields (see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format): diff --git a/tools/perf/Documentation/perf-trace-python.txt b/tools/perf/Documentation/perf-trace-python.txt index a241aca77184..693be804dd3d 100644 --- a/tools/perf/Documentation/perf-trace-python.txt +++ b/tools/perf/Documentation/perf-trace-python.txt @@ -1,5 +1,5 @@ perf-trace-python(1) -================== +==================== NAME ---- @@ -93,7 +93,7 @@ don't care how it exited, so we'll use 'perf record' to record only the sys_enter events: ---- -# perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter +# perf record -a -e raw_syscalls:sys_enter ^C[ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 56.545 MB perf.data (~2470503 samples) ] @@ -182,7 +182,7 @@ mean either that the record step recorded event types that it wasn't really interested in, or the script was run against a trace file that doesn't correspond to the script. -The script generated by -g option option simply prints a line for each +The script generated by -g option simply prints a line for each event found in the trace stream i.e. it basically just dumps the event and its parameter values to stdout. The print_header() function is simply a utility function used for that purpose. Let's rename the @@ -359,7 +359,7 @@ your script: # cat kernel-source/tools/perf/scripts/python/bin/syscall-counts-record #!/bin/bash -perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter +perf record -a -e raw_syscalls:sys_enter ---- The 'report' script is also a shell script with the same base name as @@ -449,12 +449,10 @@ available as calls back into the perf executable (see below). 
As an example, the following perf record command can be used to record all sched_wakeup events in the system: - # perf record -c 1 -f -a -M -R -e sched:sched_wakeup + # perf record -a -e sched:sched_wakeup Traces meant to be processed using a script should be recorded with -the above options: -c 1 says to sample every event, -a to enable -system-wide collection, -M to multiplex the output, and -R to collect -raw samples. +the above option: -a to enable system-wide collection. The format file for the sched_wakep event defines the following fields (see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format): @@ -584,7 +582,7 @@ files: flag_str(event_name, field_name, field_value) - returns the string represention corresponding to field_value for the flag field field_name of event event_name symbol_str(event_name, field_name, field_value) - returns the string represention corresponding to field_value for the symbolic field field_name of event event_name -The *autodict* function returns a special special kind of Python +The *autodict* function returns a special kind of Python dictionary that implements Perl's 'autovivifying' hashes in Python i.e. with autovivifying hashes, you can assign nested hash values without having to go to the trouble of creating intermediate levels if diff --git a/tools/perf/Documentation/perf-trace.txt b/tools/perf/Documentation/perf-trace.txt index 8879299cd9df..122ec9dc4853 100644 --- a/tools/perf/Documentation/perf-trace.txt +++ b/tools/perf/Documentation/perf-trace.txt @@ -1,5 +1,5 @@ perf-trace(1) -============== +============= NAME ---- diff --git a/tools/perf/Makefile b/tools/perf/Makefile index bc0f670a8338..3d8f31ed771d 100644 --- a/tools/perf/Makefile +++ b/tools/perf/Makefile @@ -1,3 +1,7 @@ +ifeq ("$(origin O)", "command line") + OUTPUT := $(O)/ +endif + # The default target of this Makefile is... all:: @@ -150,10 +154,17 @@ all:: # Define LDFLAGS=-static to build a static binary. # # Define EXTRA_CFLAGS=-m64 or EXTRA_CFLAGS=-m32 as appropriate for cross-builds. +# +# Define NO_DWARF if you do not want debug-info analysis feature at all. -PERF-VERSION-FILE: .FORCE-PERF-VERSION-FILE - @$(SHELL_PATH) util/PERF-VERSION-GEN --include PERF-VERSION-FILE +$(shell sh -c 'mkdir -p $(OUTPUT)scripts/python/Perf-Trace-Util/' 2> /dev/null) +$(shell sh -c 'mkdir -p $(OUTPUT)scripts/perl/Perf-Trace-Util/' 2> /dev/null) +$(shell sh -c 'mkdir -p $(OUTPUT)util/scripting-engines/' 2> /dev/null) +$(shell sh -c 'mkdir $(OUTPUT)bench' 2> /dev/null) + +$(OUTPUT)PERF-VERSION-FILE: .FORCE-PERF-VERSION-FILE + @$(SHELL_PATH) util/PERF-VERSION-GEN $(OUTPUT) +-include $(OUTPUT)PERF-VERSION-FILE uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not') uname_M := $(shell sh -c 'uname -m 2>/dev/null || echo not') @@ -162,6 +173,22 @@ uname_R := $(shell sh -c 'uname -r 2>/dev/null || echo not') uname_P := $(shell sh -c 'uname -p 2>/dev/null || echo not') uname_V := $(shell sh -c 'uname -v 2>/dev/null || echo not') +ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \ + -e s/arm.*/arm/ -e s/sa110/arm/ \ + -e s/s390x/s390/ -e s/parisc64/parisc/ \ + -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ + -e s/sh[234].*/sh/ ) + +# Additional ARCH settings for x86 +ifeq ($(ARCH),i386) + ARCH := x86 +endif +ifeq ($(ARCH),x86_64) + ARCH := x86 +endif + +$(shell sh -c 'mkdir -p $(OUTPUT)arch/$(ARCH)/util/' 2> /dev/null) + # CFLAGS and LDFLAGS are for the users to override from the command line. 
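A quick sketch of the separate-output-directory build that the new O= handling above enables (the directory name is an illustrative assumption; the Makefile creates the needed subdirectories under $(OUTPUT) itself):

    # place all objects and generated files under /tmp/pbuild
    mkdir -p /tmp/pbuild
    make -C tools/perf O=/tmp/pbuild
    # the binary is built as $(OUTPUT)perf, i.e. /tmp/pbuild/perf
    /tmp/pbuild/perf --version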
# @@ -274,7 +301,7 @@ endif # Those must not be GNU-specific; they are shared with perl/ which may # be built by a different compiler. (Note that this is an artifact now # but it still might be nice to keep that distinction.) -BASIC_CFLAGS = -Iutil/include +BASIC_CFLAGS = -Iutil/include -Iarch/$(ARCH)/include BASIC_LDFLAGS = # Guard against environment variables @@ -308,7 +335,7 @@ PROGRAMS += $(EXTRA_PROGRAMS) # # Single 'perf' binary right now: # -PROGRAMS += perf +PROGRAMS += $(OUTPUT)perf # List built-in command $C whose implementation cmd_$C() is not in # builtin-$C.o but is linked in as part of some other command. @@ -318,7 +345,7 @@ PROGRAMS += perf ALL_PROGRAMS = $(PROGRAMS) $(SCRIPTS) # what 'all' will build but not install in perfexecdir -OTHER_PROGRAMS = perf$X +OTHER_PROGRAMS = $(OUTPUT)perf$X # Set paths to tools early so that they can be used for version tests. ifndef SHELL_PATH @@ -330,7 +357,7 @@ endif export PERL_PATH -LIB_FILE=libperf.a +LIB_FILE=$(OUTPUT)libperf.a LIB_H += ../../include/linux/perf_event.h LIB_H += ../../include/linux/rbtree.h @@ -350,12 +377,13 @@ LIB_H += util/include/linux/rbtree.h LIB_H += util/include/linux/string.h LIB_H += util/include/linux/types.h LIB_H += util/include/asm/asm-offsets.h -LIB_H += util/include/asm/bitops.h LIB_H += util/include/asm/bug.h LIB_H += util/include/asm/byteorder.h +LIB_H += util/include/asm/hweight.h LIB_H += util/include/asm/swab.h LIB_H += util/include/asm/system.h LIB_H += util/include/asm/uaccess.h +LIB_H += util/include/dwarf-regs.h LIB_H += perf.h LIB_H += util/cache.h LIB_H += util/callchain.h @@ -375,7 +403,6 @@ LIB_H += util/header.h LIB_H += util/help.h LIB_H += util/session.h LIB_H += util/strbuf.h -LIB_H += util/string.h LIB_H += util/strlist.h LIB_H += util/svghelper.h LIB_H += util/run-command.h @@ -389,79 +416,83 @@ LIB_H += util/thread.h LIB_H += util/trace-event.h LIB_H += util/probe-finder.h LIB_H += util/probe-event.h +LIB_H += util/pstack.h LIB_H += util/cpumap.h -LIB_OBJS += util/abspath.o -LIB_OBJS += util/alias.o -LIB_OBJS += util/build-id.o -LIB_OBJS += util/config.o -LIB_OBJS += util/ctype.o -LIB_OBJS += util/debugfs.o -LIB_OBJS += util/environment.o -LIB_OBJS += util/event.o -LIB_OBJS += util/exec_cmd.o -LIB_OBJS += util/help.o -LIB_OBJS += util/levenshtein.o -LIB_OBJS += util/parse-options.o -LIB_OBJS += util/parse-events.o -LIB_OBJS += util/path.o -LIB_OBJS += util/rbtree.o -LIB_OBJS += util/bitmap.o -LIB_OBJS += util/hweight.o -LIB_OBJS += util/find_next_bit.o -LIB_OBJS += util/run-command.o -LIB_OBJS += util/quote.o -LIB_OBJS += util/strbuf.o -LIB_OBJS += util/string.o -LIB_OBJS += util/strlist.o -LIB_OBJS += util/usage.o -LIB_OBJS += util/wrapper.o -LIB_OBJS += util/sigchain.o -LIB_OBJS += util/symbol.o -LIB_OBJS += util/color.o -LIB_OBJS += util/pager.o -LIB_OBJS += util/header.o -LIB_OBJS += util/callchain.o -LIB_OBJS += util/values.o -LIB_OBJS += util/debug.o -LIB_OBJS += util/map.o -LIB_OBJS += util/session.o -LIB_OBJS += util/thread.o -LIB_OBJS += util/trace-event-parse.o -LIB_OBJS += util/trace-event-read.o -LIB_OBJS += util/trace-event-info.o -LIB_OBJS += util/trace-event-scripting.o -LIB_OBJS += util/svghelper.o -LIB_OBJS += util/sort.o -LIB_OBJS += util/hist.o -LIB_OBJS += util/probe-event.o -LIB_OBJS += util/util.o -LIB_OBJS += util/cpumap.o - -BUILTIN_OBJS += builtin-annotate.o - -BUILTIN_OBJS += builtin-bench.o +LIB_OBJS += $(OUTPUT)util/abspath.o +LIB_OBJS += $(OUTPUT)util/alias.o +LIB_OBJS += $(OUTPUT)util/build-id.o +LIB_OBJS += $(OUTPUT)util/config.o +LIB_OBJS += 
$(OUTPUT)util/ctype.o +LIB_OBJS += $(OUTPUT)util/debugfs.o +LIB_OBJS += $(OUTPUT)util/environment.o +LIB_OBJS += $(OUTPUT)util/event.o +LIB_OBJS += $(OUTPUT)util/exec_cmd.o +LIB_OBJS += $(OUTPUT)util/help.o +LIB_OBJS += $(OUTPUT)util/levenshtein.o +LIB_OBJS += $(OUTPUT)util/parse-options.o +LIB_OBJS += $(OUTPUT)util/parse-events.o +LIB_OBJS += $(OUTPUT)util/path.o +LIB_OBJS += $(OUTPUT)util/rbtree.o +LIB_OBJS += $(OUTPUT)util/bitmap.o +LIB_OBJS += $(OUTPUT)util/hweight.o +LIB_OBJS += $(OUTPUT)util/run-command.o +LIB_OBJS += $(OUTPUT)util/quote.o +LIB_OBJS += $(OUTPUT)util/strbuf.o +LIB_OBJS += $(OUTPUT)util/string.o +LIB_OBJS += $(OUTPUT)util/strlist.o +LIB_OBJS += $(OUTPUT)util/usage.o +LIB_OBJS += $(OUTPUT)util/wrapper.o +LIB_OBJS += $(OUTPUT)util/sigchain.o +LIB_OBJS += $(OUTPUT)util/symbol.o +LIB_OBJS += $(OUTPUT)util/color.o +LIB_OBJS += $(OUTPUT)util/pager.o +LIB_OBJS += $(OUTPUT)util/header.o +LIB_OBJS += $(OUTPUT)util/callchain.o +LIB_OBJS += $(OUTPUT)util/values.o +LIB_OBJS += $(OUTPUT)util/debug.o +LIB_OBJS += $(OUTPUT)util/map.o +LIB_OBJS += $(OUTPUT)util/pstack.o +LIB_OBJS += $(OUTPUT)util/session.o +LIB_OBJS += $(OUTPUT)util/thread.o +LIB_OBJS += $(OUTPUT)util/trace-event-parse.o +LIB_OBJS += $(OUTPUT)util/trace-event-read.o +LIB_OBJS += $(OUTPUT)util/trace-event-info.o +LIB_OBJS += $(OUTPUT)util/trace-event-scripting.o +LIB_OBJS += $(OUTPUT)util/svghelper.o +LIB_OBJS += $(OUTPUT)util/sort.o +LIB_OBJS += $(OUTPUT)util/hist.o +LIB_OBJS += $(OUTPUT)util/probe-event.o +LIB_OBJS += $(OUTPUT)util/util.o +LIB_OBJS += $(OUTPUT)util/cpumap.o + +BUILTIN_OBJS += $(OUTPUT)builtin-annotate.o + +BUILTIN_OBJS += $(OUTPUT)builtin-bench.o # Benchmark modules -BUILTIN_OBJS += bench/sched-messaging.o -BUILTIN_OBJS += bench/sched-pipe.o -BUILTIN_OBJS += bench/mem-memcpy.o - -BUILTIN_OBJS += builtin-diff.o -BUILTIN_OBJS += builtin-help.o -BUILTIN_OBJS += builtin-sched.o -BUILTIN_OBJS += builtin-buildid-list.o -BUILTIN_OBJS += builtin-buildid-cache.o -BUILTIN_OBJS += builtin-list.o -BUILTIN_OBJS += builtin-record.o -BUILTIN_OBJS += builtin-report.o -BUILTIN_OBJS += builtin-stat.o -BUILTIN_OBJS += builtin-timechart.o -BUILTIN_OBJS += builtin-top.o -BUILTIN_OBJS += builtin-trace.o -BUILTIN_OBJS += builtin-probe.o -BUILTIN_OBJS += builtin-kmem.o -BUILTIN_OBJS += builtin-lock.o +BUILTIN_OBJS += $(OUTPUT)bench/sched-messaging.o +BUILTIN_OBJS += $(OUTPUT)bench/sched-pipe.o +BUILTIN_OBJS += $(OUTPUT)bench/mem-memcpy.o + +BUILTIN_OBJS += $(OUTPUT)builtin-diff.o +BUILTIN_OBJS += $(OUTPUT)builtin-help.o +BUILTIN_OBJS += $(OUTPUT)builtin-sched.o +BUILTIN_OBJS += $(OUTPUT)builtin-buildid-list.o +BUILTIN_OBJS += $(OUTPUT)builtin-buildid-cache.o +BUILTIN_OBJS += $(OUTPUT)builtin-list.o +BUILTIN_OBJS += $(OUTPUT)builtin-record.o +BUILTIN_OBJS += $(OUTPUT)builtin-report.o +BUILTIN_OBJS += $(OUTPUT)builtin-stat.o +BUILTIN_OBJS += $(OUTPUT)builtin-timechart.o +BUILTIN_OBJS += $(OUTPUT)builtin-top.o +BUILTIN_OBJS += $(OUTPUT)builtin-trace.o +BUILTIN_OBJS += $(OUTPUT)builtin-probe.o +BUILTIN_OBJS += $(OUTPUT)builtin-kmem.o +BUILTIN_OBJS += $(OUTPUT)builtin-lock.o +BUILTIN_OBJS += $(OUTPUT)builtin-kvm.o +BUILTIN_OBJS += $(OUTPUT)builtin-test.o +BUILTIN_OBJS += $(OUTPUT)builtin-inject.o PERFLIBS = $(LIB_FILE) @@ -476,6 +507,15 @@ PERFLIBS = $(LIB_FILE) -include config.mak.autogen -include config.mak +ifndef NO_DWARF +ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo '\#include <version.h>'; echo '\#ifndef _ELFUTILS_PREREQ'; echo '\#error'; echo '\#endif'; echo 'int main(void) { Dwarf 
*dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) + msg := $(warning No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev); + NO_DWARF := 1 +endif # Dwarf support +endif # NO_DWARF + +-include arch/$(ARCH)/Makefile + ifeq ($(uname_S),Darwin) ifndef NO_FINK ifeq ($(shell test -d /sw/lib && echo y),y) @@ -492,6 +532,10 @@ ifeq ($(uname_S),Darwin) PTHREAD_LIBS = endif +ifneq ($(OUTPUT),) + BASIC_CFLAGS += -I$(OUTPUT) +endif + ifeq ($(shell sh -c "(echo '\#include <libelf.h>'; echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) ifneq ($(shell sh -c "(echo '\#include <gnu/libc-version.h>'; echo 'int main(void) { const char * version = gnu_get_libc_version(); return (long)version; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]/glibc-static); @@ -504,14 +548,29 @@ else msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]); endif -ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo 'int main(void) { Dwarf *dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) - msg := $(warning No libdw.h found or old libdw.h found, disables dwarf support. Please install elfutils-devel/elfutils-dev); - BASIC_CFLAGS += -DNO_DWARF_SUPPORT +ifndef NO_DWARF +ifeq ($(origin PERF_HAVE_DWARF_REGS), undefined) + msg := $(warning DWARF register mappings have not been defined for architecture $(ARCH), DWARF support disabled); else - BASIC_CFLAGS += -I/usr/include/elfutils + BASIC_CFLAGS += -I/usr/include/elfutils -DDWARF_SUPPORT EXTLIBS += -lelf -ldw - LIB_OBJS += util/probe-finder.o + LIB_OBJS += $(OUTPUT)util/probe-finder.o +endif # PERF_HAVE_DWARF_REGS +endif # NO_DWARF + +ifdef NO_NEWT + BASIC_CFLAGS += -DNO_NEWT_SUPPORT +else +ifneq ($(shell sh -c "(echo '\#include <newt.h>'; echo 'int main(void) { newtInit(); newtCls(); return newtFinished(); }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -lnewt -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) + msg := $(warning newt not found, disables TUI support. 
Please install newt-devel or libnewt-dev); + BASIC_CFLAGS += -DNO_NEWT_SUPPORT +else + # Fedora has /usr/include/slang/slang.h, but ubuntu /usr/include/slang.h + BASIC_CFLAGS += -I/usr/include/slang + EXTLIBS += -lnewt -lslang + LIB_OBJS += $(OUTPUT)util/newt.o endif +endif # NO_NEWT ifndef NO_LIBPERL PERL_EMBED_LDOPTS = `perl -MExtUtils::Embed -e ldopts 2>/dev/null` @@ -522,8 +581,8 @@ ifneq ($(shell sh -c "(echo '\#include <EXTERN.h>'; echo '\#include <perl.h>'; e BASIC_CFLAGS += -DNO_LIBPERL else ALL_LDFLAGS += $(PERL_EMBED_LDOPTS) - LIB_OBJS += util/scripting-engines/trace-event-perl.o - LIB_OBJS += scripts/perl/Perf-Trace-Util/Context.o + LIB_OBJS += $(OUTPUT)util/scripting-engines/trace-event-perl.o + LIB_OBJS += $(OUTPUT)scripts/perl/Perf-Trace-Util/Context.o endif ifndef NO_LIBPYTHON @@ -531,16 +590,19 @@ PYTHON_EMBED_LDOPTS = `python-config --ldflags 2>/dev/null` PYTHON_EMBED_CCOPTS = `python-config --cflags 2>/dev/null` endif -ifneq ($(shell sh -c "(echo '\#include <Python.h>'; echo 'int main(void) { Py_Initialize(); return 0; }') | $(CC) -x c - $(PYTHON_EMBED_CCOPTS) -o /dev/null $(PYTHON_EMBED_LDOPTS) > /dev/null 2>&1 && echo y"), y) +ifneq ($(shell sh -c "(echo '\#include <Python.h>'; echo 'int main(void) { Py_Initialize(); return 0; }') | $(CC) -x c - $(PYTHON_EMBED_CCOPTS) -o $(BITBUCKET) $(PYTHON_EMBED_LDOPTS) > /dev/null 2>&1 && echo y"), y) BASIC_CFLAGS += -DNO_LIBPYTHON else ALL_LDFLAGS += $(PYTHON_EMBED_LDOPTS) - LIB_OBJS += util/scripting-engines/trace-event-python.o - LIB_OBJS += scripts/python/Perf-Trace-Util/Context.o + LIB_OBJS += $(OUTPUT)util/scripting-engines/trace-event-python.o + LIB_OBJS += $(OUTPUT)scripts/python/Perf-Trace-Util/Context.o endif ifdef NO_DEMANGLE BASIC_CFLAGS += -DNO_DEMANGLE +else ifdef HAVE_CPLUS_DEMANGLE + EXTLIBS += -liberty + BASIC_CFLAGS += -DHAVE_CPLUS_DEMANGLE else has_bfd := $(shell sh -c "(echo '\#include <bfd.h>'; echo 'int main(void) { bfd_demangle(0, 0, 0); return 0; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) -lbfd "$(QUIET_STDERR)" && echo y") @@ -607,53 +669,53 @@ ifdef NO_C99_FORMAT endif ifdef SNPRINTF_RETURNS_BOGUS COMPAT_CFLAGS += -DSNPRINTF_RETURNS_BOGUS - COMPAT_OBJS += compat/snprintf.o + COMPAT_OBJS += $(OUTPUT)compat/snprintf.o endif ifdef FREAD_READS_DIRECTORIES COMPAT_CFLAGS += -DFREAD_READS_DIRECTORIES - COMPAT_OBJS += compat/fopen.o + COMPAT_OBJS += $(OUTPUT)compat/fopen.o endif ifdef NO_SYMLINK_HEAD BASIC_CFLAGS += -DNO_SYMLINK_HEAD endif ifdef NO_STRCASESTR COMPAT_CFLAGS += -DNO_STRCASESTR - COMPAT_OBJS += compat/strcasestr.o + COMPAT_OBJS += $(OUTPUT)compat/strcasestr.o endif ifdef NO_STRTOUMAX COMPAT_CFLAGS += -DNO_STRTOUMAX - COMPAT_OBJS += compat/strtoumax.o + COMPAT_OBJS += $(OUTPUT)compat/strtoumax.o endif ifdef NO_STRTOULL COMPAT_CFLAGS += -DNO_STRTOULL endif ifdef NO_SETENV COMPAT_CFLAGS += -DNO_SETENV - COMPAT_OBJS += compat/setenv.o + COMPAT_OBJS += $(OUTPUT)compat/setenv.o endif ifdef NO_MKDTEMP COMPAT_CFLAGS += -DNO_MKDTEMP - COMPAT_OBJS += compat/mkdtemp.o + COMPAT_OBJS += $(OUTPUT)compat/mkdtemp.o endif ifdef NO_UNSETENV COMPAT_CFLAGS += -DNO_UNSETENV - COMPAT_OBJS += compat/unsetenv.o + COMPAT_OBJS += $(OUTPUT)compat/unsetenv.o endif ifdef NO_SYS_SELECT_H BASIC_CFLAGS += -DNO_SYS_SELECT_H endif ifdef NO_MMAP COMPAT_CFLAGS += -DNO_MMAP - COMPAT_OBJS += compat/mmap.o + COMPAT_OBJS += $(OUTPUT)compat/mmap.o else ifdef USE_WIN32_MMAP COMPAT_CFLAGS += -DUSE_WIN32_MMAP - COMPAT_OBJS += compat/win32mmap.o + COMPAT_OBJS += $(OUTPUT)compat/win32mmap.o endif endif ifdef 
NO_PREAD COMPAT_CFLAGS += -DNO_PREAD - COMPAT_OBJS += compat/pread.o + COMPAT_OBJS += $(OUTPUT)compat/pread.o endif ifdef NO_FAST_WORKING_DIRECTORY BASIC_CFLAGS += -DNO_FAST_WORKING_DIRECTORY @@ -675,10 +737,10 @@ else endif endif ifdef NO_INET_NTOP - LIB_OBJS += compat/inet_ntop.o + LIB_OBJS += $(OUTPUT)compat/inet_ntop.o endif ifdef NO_INET_PTON - LIB_OBJS += compat/inet_pton.o + LIB_OBJS += $(OUTPUT)compat/inet_pton.o endif ifdef NO_ICONV @@ -695,15 +757,15 @@ endif ifdef PPC_SHA1 SHA1_HEADER = "ppc/sha1.h" - LIB_OBJS += ppc/sha1.o ppc/sha1ppc.o + LIB_OBJS += $(OUTPUT)ppc/sha1.o ppc/sha1ppc.o else ifdef ARM_SHA1 SHA1_HEADER = "arm/sha1.h" - LIB_OBJS += arm/sha1.o arm/sha1_arm.o + LIB_OBJS += $(OUTPUT)arm/sha1.o $(OUTPUT)arm/sha1_arm.o else ifdef MOZILLA_SHA1 SHA1_HEADER = "mozilla-sha1/sha1.h" - LIB_OBJS += mozilla-sha1/sha1.o + LIB_OBJS += $(OUTPUT)mozilla-sha1/sha1.o else SHA1_HEADER = <openssl/sha.h> EXTLIBS += $(LIB_4_CRYPTO) @@ -715,15 +777,15 @@ ifdef NO_PERL_MAKEMAKER endif ifdef NO_HSTRERROR COMPAT_CFLAGS += -DNO_HSTRERROR - COMPAT_OBJS += compat/hstrerror.o + COMPAT_OBJS += $(OUTPUT)compat/hstrerror.o endif ifdef NO_MEMMEM COMPAT_CFLAGS += -DNO_MEMMEM - COMPAT_OBJS += compat/memmem.o + COMPAT_OBJS += $(OUTPUT)compat/memmem.o endif ifdef INTERNAL_QSORT COMPAT_CFLAGS += -DINTERNAL_QSORT - COMPAT_OBJS += compat/qsort.o + COMPAT_OBJS += $(OUTPUT)compat/qsort.o endif ifdef RUNTIME_PREFIX COMPAT_CFLAGS += -DRUNTIME_PREFIX @@ -803,7 +865,7 @@ export TAR INSTALL DESTDIR SHELL_PATH SHELL = $(SHELL_PATH) -all:: .perf.dev.null shell_compatibility_test $(ALL_PROGRAMS) $(BUILT_INS) $(OTHER_PROGRAMS) PERF-BUILD-OPTIONS +all:: .perf.dev.null shell_compatibility_test $(ALL_PROGRAMS) $(BUILT_INS) $(OTHER_PROGRAMS) $(OUTPUT)PERF-BUILD-OPTIONS ifneq (,$X) $(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) perf$X)), test '$p' -ef '$p$X' || $(RM) '$p';) endif @@ -815,39 +877,39 @@ please_set_SHELL_PATH_to_a_more_modern_shell: shell_compatibility_test: please_set_SHELL_PATH_to_a_more_modern_shell -strip: $(PROGRAMS) perf$X - $(STRIP) $(STRIP_OPTS) $(PROGRAMS) perf$X +strip: $(PROGRAMS) $(OUTPUT)perf$X + $(STRIP) $(STRIP_OPTS) $(PROGRAMS) $(OUTPUT)perf$X -perf.o: perf.c common-cmds.h PERF-CFLAGS +$(OUTPUT)perf.o: perf.c $(OUTPUT)common-cmds.h $(OUTPUT)PERF-CFLAGS $(QUIET_CC)$(CC) -DPERF_VERSION='"$(PERF_VERSION)"' \ '-DPERF_HTML_PATH="$(htmldir_SQ)"' \ - $(ALL_CFLAGS) -c $(filter %.c,$^) + $(ALL_CFLAGS) -c $(filter %.c,$^) -o $@ -perf$X: perf.o $(BUILTIN_OBJS) $(PERFLIBS) - $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ perf.o \ +$(OUTPUT)perf$X: $(OUTPUT)perf.o $(BUILTIN_OBJS) $(PERFLIBS) + $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(OUTPUT)perf.o \ $(BUILTIN_OBJS) $(ALL_LDFLAGS) $(LIBS) -builtin-help.o: builtin-help.c common-cmds.h PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) \ +$(OUTPUT)builtin-help.o: builtin-help.c $(OUTPUT)common-cmds.h $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) \ '-DPERF_HTML_PATH="$(htmldir_SQ)"' \ '-DPERF_MAN_PATH="$(mandir_SQ)"' \ '-DPERF_INFO_PATH="$(infodir_SQ)"' $< -builtin-timechart.o: builtin-timechart.c common-cmds.h PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) \ +$(OUTPUT)builtin-timechart.o: builtin-timechart.c $(OUTPUT)common-cmds.h $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) \ '-DPERF_HTML_PATH="$(htmldir_SQ)"' \ '-DPERF_MAN_PATH="$(mandir_SQ)"' \ '-DPERF_INFO_PATH="$(infodir_SQ)"' $< -$(BUILT_INS): perf$X +$(BUILT_INS): $(OUTPUT)perf$X $(QUIET_BUILT_IN)$(RM) $@ && \ ln perf$X $@ 2>/dev/null || 
\ ln -s perf$X $@ 2>/dev/null || \ cp perf$X $@ -common-cmds.h: util/generate-cmdlist.sh command-list.txt +$(OUTPUT)common-cmds.h: util/generate-cmdlist.sh command-list.txt -common-cmds.h: $(wildcard Documentation/perf-*.txt) +$(OUTPUT)common-cmds.h: $(wildcard Documentation/perf-*.txt) $(QUIET_GEN). util/generate-cmdlist.sh > $@+ && mv $@+ $@ $(patsubst %.sh,%,$(SCRIPT_SH)) : % : %.sh @@ -859,7 +921,7 @@ $(patsubst %.sh,%,$(SCRIPT_SH)) : % : %.sh -e 's/@@NO_CURL@@/$(NO_CURL)/g' \ $@.sh >$@+ && \ chmod +x $@+ && \ - mv $@+ $@ + mv $@+ $(OUTPUT)$@ configure: configure.ac $(QUIET_GEN)$(RM) $@ $<+ && \ @@ -869,60 +931,50 @@ configure: configure.ac $(RM) $<+ # These can record PERF_VERSION -perf.o perf.spec \ +$(OUTPUT)perf.o perf.spec \ $(patsubst %.sh,%,$(SCRIPT_SH)) \ $(patsubst %.perl,%,$(SCRIPT_PERL)) \ - : PERF-VERSION-FILE + : $(OUTPUT)PERF-VERSION-FILE -%.o: %.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) $< -%.s: %.c PERF-CFLAGS +$(OUTPUT)%.o: %.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $< +$(OUTPUT)%.s: %.c $(OUTPUT)PERF-CFLAGS $(QUIET_CC)$(CC) -S $(ALL_CFLAGS) $< -%.o: %.S - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) $< +$(OUTPUT)%.o: %.S + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $< -util/exec_cmd.o: util/exec_cmd.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) \ +$(OUTPUT)util/exec_cmd.o: util/exec_cmd.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) \ '-DPERF_EXEC_PATH="$(perfexecdir_SQ)"' \ '-DBINDIR="$(bindir_relative_SQ)"' \ '-DPREFIX="$(prefix_SQ)"' \ $< -builtin-init-db.o: builtin-init-db.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DDEFAULT_PERF_TEMPLATE_DIR='"$(template_dir_SQ)"' $< - -util/config.o: util/config.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o $*.o -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< - -util/rbtree.o: ../../lib/rbtree.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/rbtree.o -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< - -# some perf warning policies can't fit to lib/bitmap.c, eg: it warns about variable shadowing -# from <string.h> that comes from kernel headers wrapping. 
-KBITMAP_FLAGS=`echo $(ALL_CFLAGS) | sed s/-Wshadow// | sed s/-Wswitch-default// | sed s/-Wextra//` +$(OUTPUT)builtin-init-db.o: builtin-init-db.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -DDEFAULT_PERF_TEMPLATE_DIR='"$(template_dir_SQ)"' $< -util/bitmap.o: ../../lib/bitmap.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/bitmap.o -c $(KBITMAP_FLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< +$(OUTPUT)util/config.o: util/config.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< -util/hweight.o: ../../lib/hweight.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/hweight.o -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< +$(OUTPUT)util/newt.o: util/newt.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -DENABLE_SLFUTURE_CONST $< -util/find_next_bit.o: ../../lib/find_next_bit.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/find_next_bit.o -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< +$(OUTPUT)util/rbtree.o: ../../lib/rbtree.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -DETC_PERFCONFIG='"$(ETC_PERFCONFIG_SQ)"' $< -util/scripting-engines/trace-event-perl.o: util/scripting-engines/trace-event-perl.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/scripting-engines/trace-event-perl.o -c $(ALL_CFLAGS) $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow $< +$(OUTPUT)util/scripting-engines/trace-event-perl.o: util/scripting-engines/trace-event-perl.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow $< -scripts/perl/Perf-Trace-Util/Context.o: scripts/perl/Perf-Trace-Util/Context.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o scripts/perl/Perf-Trace-Util/Context.o -c $(ALL_CFLAGS) $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs $< +$(OUTPUT)scripts/perl/Perf-Trace-Util/Context.o: scripts/perl/Perf-Trace-Util/Context.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs $< -util/scripting-engines/trace-event-python.o: util/scripting-engines/trace-event-python.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o util/scripting-engines/trace-event-python.o -c $(ALL_CFLAGS) $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow $< +$(OUTPUT)util/scripting-engines/trace-event-python.o: util/scripting-engines/trace-event-python.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow $< -scripts/python/Perf-Trace-Util/Context.o: scripts/python/Perf-Trace-Util/Context.c PERF-CFLAGS - $(QUIET_CC)$(CC) -o scripts/python/Perf-Trace-Util/Context.o -c $(ALL_CFLAGS) $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs $< +$(OUTPUT)scripts/python/Perf-Trace-Util/Context.o: scripts/python/Perf-Trace-Util/Context.c $(OUTPUT)PERF-CFLAGS + $(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs $< -perf-%$X: %.o $(PERFLIBS) +$(OUTPUT)perf-%$X: %.o $(PERFLIBS) $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(LIBS) $(LIB_OBJS) $(BUILTIN_OBJS): $(LIB_H) @@ -963,17 +1015,17 @@ cscope: 
TRACK_CFLAGS = $(subst ','\'',$(ALL_CFLAGS)):\ $(bindir_SQ):$(perfexecdir_SQ):$(template_dir_SQ):$(prefix_SQ) -PERF-CFLAGS: .FORCE-PERF-CFLAGS +$(OUTPUT)PERF-CFLAGS: .FORCE-PERF-CFLAGS @FLAGS='$(TRACK_CFLAGS)'; \ - if test x"$$FLAGS" != x"`cat PERF-CFLAGS 2>/dev/null`" ; then \ + if test x"$$FLAGS" != x"`cat $(OUTPUT)PERF-CFLAGS 2>/dev/null`" ; then \ echo 1>&2 " * new build flags or prefix"; \ - echo "$$FLAGS" >PERF-CFLAGS; \ + echo "$$FLAGS" >$(OUTPUT)PERF-CFLAGS; \ fi # We need to apply sq twice, once to protect from the shell -# that runs PERF-BUILD-OPTIONS, and then again to protect it +# that runs $(OUTPUT)PERF-BUILD-OPTIONS, and then again to protect it # and the first level quoting from the shell that runs "echo". -PERF-BUILD-OPTIONS: .FORCE-PERF-BUILD-OPTIONS +$(OUTPUT)PERF-BUILD-OPTIONS: .FORCE-PERF-BUILD-OPTIONS @echo SHELL_PATH=\''$(subst ','\'',$(SHELL_PATH_SQ))'\' >$@ @echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@ @echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@ @@ -994,7 +1046,7 @@ all:: $(TEST_PROGRAMS) export NO_SVN_TESTS -check: common-cmds.h +check: $(OUTPUT)common-cmds.h if sparse; \ then \ for i in *.c */*.c; \ @@ -1028,10 +1080,10 @@ export perfexec_instdir install: all $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(bindir_SQ)' - $(INSTALL) perf$X '$(DESTDIR_SQ)$(bindir_SQ)' + $(INSTALL) $(OUTPUT)perf$X '$(DESTDIR_SQ)$(bindir_SQ)' $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace' $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/bin' - $(INSTALL) perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' + $(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' $(INSTALL) scripts/perl/Perf-Trace-Util/lib/Perf/Trace/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace' $(INSTALL) scripts/perl/*.pl -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl' $(INSTALL) scripts/perl/bin/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/bin' @@ -1045,7 +1097,7 @@ ifdef BUILT_INS $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' $(INSTALL) $(BUILT_INS) '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' ifneq (,$X) - $(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) perf$X)), $(RM) '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/$p';) + $(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) $(OUTPUT)perf$X)), $(RM) '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/$p';) endif endif @@ -1129,14 +1181,14 @@ clean: $(RM) *.o */*.o */*/*.o */*/*/*.o $(LIB_FILE) $(RM) $(ALL_PROGRAMS) $(BUILT_INS) perf$X $(RM) $(TEST_PROGRAMS) - $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags cscope* + $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(RM) -r autom4te.cache $(RM) config.log config.mak.autogen config.mak.append config.status config.cache $(RM) -r $(PERF_TARNAME) .doc-tmp-dir $(RM) $(PERF_TARNAME).tar.gz perf-core_$(PERF_VERSION)-*.tar.gz $(RM) $(htmldocs).tar.gz $(manpages).tar.gz $(MAKE) -C Documentation/ clean - $(RM) PERF-VERSION-FILE PERF-CFLAGS PERF-BUILD-OPTIONS + $(RM) $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)PERF-CFLAGS $(OUTPUT)PERF-BUILD-OPTIONS .PHONY: all install clean strip .PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell diff --git a/tools/perf/arch/powerpc/Makefile b/tools/perf/arch/powerpc/Makefile new file mode 100644 index 000000000000..15130b50dfe3 --- /dev/null +++ b/tools/perf/arch/powerpc/Makefile @@ -0,0 +1,4 @@ +ifndef NO_DWARF 
+PERF_HAVE_DWARF_REGS := 1 +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/dwarf-regs.o +endif diff --git a/tools/perf/arch/powerpc/util/dwarf-regs.c b/tools/perf/arch/powerpc/util/dwarf-regs.c new file mode 100644 index 000000000000..48ae0c5e3f73 --- /dev/null +++ b/tools/perf/arch/powerpc/util/dwarf-regs.c @@ -0,0 +1,88 @@ +/* + * Mapping of DWARF debug register numbers into register names. + * + * Copyright (C) 2010 Ian Munsie, IBM Corporation. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + */ + +#include <libio.h> +#include <dwarf-regs.h> + + +struct pt_regs_dwarfnum { + const char *name; + unsigned int dwarfnum; +}; + +#define STR(s) #s +#define REG_DWARFNUM_NAME(r, num) {.name = r, .dwarfnum = num} +#define GPR_DWARFNUM_NAME(num) \ + {.name = STR(%gpr##num), .dwarfnum = num} +#define REG_DWARFNUM_END {.name = NULL, .dwarfnum = 0} + +/* + * Reference: + * http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi-1.9.html + */ +static const struct pt_regs_dwarfnum regdwarfnum_table[] = { + GPR_DWARFNUM_NAME(0), + GPR_DWARFNUM_NAME(1), + GPR_DWARFNUM_NAME(2), + GPR_DWARFNUM_NAME(3), + GPR_DWARFNUM_NAME(4), + GPR_DWARFNUM_NAME(5), + GPR_DWARFNUM_NAME(6), + GPR_DWARFNUM_NAME(7), + GPR_DWARFNUM_NAME(8), + GPR_DWARFNUM_NAME(9), + GPR_DWARFNUM_NAME(10), + GPR_DWARFNUM_NAME(11), + GPR_DWARFNUM_NAME(12), + GPR_DWARFNUM_NAME(13), + GPR_DWARFNUM_NAME(14), + GPR_DWARFNUM_NAME(15), + GPR_DWARFNUM_NAME(16), + GPR_DWARFNUM_NAME(17), + GPR_DWARFNUM_NAME(18), + GPR_DWARFNUM_NAME(19), + GPR_DWARFNUM_NAME(20), + GPR_DWARFNUM_NAME(21), + GPR_DWARFNUM_NAME(22), + GPR_DWARFNUM_NAME(23), + GPR_DWARFNUM_NAME(24), + GPR_DWARFNUM_NAME(25), + GPR_DWARFNUM_NAME(26), + GPR_DWARFNUM_NAME(27), + GPR_DWARFNUM_NAME(28), + GPR_DWARFNUM_NAME(29), + GPR_DWARFNUM_NAME(30), + GPR_DWARFNUM_NAME(31), + REG_DWARFNUM_NAME("%msr", 66), + REG_DWARFNUM_NAME("%ctr", 109), + REG_DWARFNUM_NAME("%link", 108), + REG_DWARFNUM_NAME("%xer", 101), + REG_DWARFNUM_NAME("%dar", 119), + REG_DWARFNUM_NAME("%dsisr", 118), + REG_DWARFNUM_END, +}; + +/** + * get_arch_regstr() - lookup register name from its DWARF register number + * @n: the DWARF register number + * + * get_arch_regstr() returns the name of the register in struct + * regdwarfnum_table from its DWARF register number. If the register is not + * found in the table, this returns NULL. + */ +const char *get_arch_regstr(unsigned int n) +{ + const struct pt_regs_dwarfnum *roff; + for (roff = regdwarfnum_table; roff->name != NULL; roff++) + if (roff->dwarfnum == n) + return roff->name; + return NULL; +} diff --git a/tools/perf/arch/x86/Makefile b/tools/perf/arch/x86/Makefile new file mode 100644 index 000000000000..15130b50dfe3 --- /dev/null +++ b/tools/perf/arch/x86/Makefile @@ -0,0 +1,4 @@ +ifndef NO_DWARF +PERF_HAVE_DWARF_REGS := 1 +LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/dwarf-regs.o +endif diff --git a/tools/perf/arch/x86/util/dwarf-regs.c b/tools/perf/arch/x86/util/dwarf-regs.c new file mode 100644 index 000000000000..a794d3081928 --- /dev/null +++ b/tools/perf/arch/x86/util/dwarf-regs.c @@ -0,0 +1,75 @@ +/* + * dwarf-regs.c : Mapping of DWARF debug register numbers into register names.
+ * Extracted from probe-finder.c + * + * Written by Masami Hiramatsu <mhiramat@redhat.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + * + */ + +#include <libio.h> +#include <dwarf-regs.h> + +/* + * Generic dwarf analysis helpers + */ + +#define X86_32_MAX_REGS 8 +const char *x86_32_regs_table[X86_32_MAX_REGS] = { + "%ax", + "%cx", + "%dx", + "%bx", + "$stack", /* Stack address instead of %sp */ + "%bp", + "%si", + "%di", +}; + +#define X86_64_MAX_REGS 16 +const char *x86_64_regs_table[X86_64_MAX_REGS] = { + "%ax", + "%dx", + "%cx", + "%bx", + "%si", + "%di", + "%bp", + "%sp", + "%r8", + "%r9", + "%r10", + "%r11", + "%r12", + "%r13", + "%r14", + "%r15", +}; + +/* TODO: switching by dwarf address size */ +#ifdef __x86_64__ +#define ARCH_MAX_REGS X86_64_MAX_REGS +#define arch_regs_table x86_64_regs_table +#else +#define ARCH_MAX_REGS X86_32_MAX_REGS +#define arch_regs_table x86_32_regs_table +#endif + +/* Return architecture dependent register string (for kprobe-tracer) */ +const char *get_arch_regstr(unsigned int n) +{ + return (n < ARCH_MAX_REGS) ? arch_regs_table[n] : NULL; +} diff --git a/tools/perf/bench/mem-memcpy.c b/tools/perf/bench/mem-memcpy.c index 89773178e894..38dae7465142 100644 --- a/tools/perf/bench/mem-memcpy.c +++ b/tools/perf/bench/mem-memcpy.c @@ -10,7 +10,6 @@ #include "../perf.h" #include "../util/util.h" #include "../util/parse-options.h" -#include "../util/string.h" #include "../util/header.h" #include "bench.h" @@ -24,7 +23,7 @@ static const char *length_str = "1MB"; static const char *routine = "default"; -static int use_clock = 0; +static bool use_clock = false; static int clock_fd; static const struct option options[] = { diff --git a/tools/perf/bench/sched-messaging.c b/tools/perf/bench/sched-messaging.c index 81cee78181fa..d1d1b30f99c1 100644 --- a/tools/perf/bench/sched-messaging.c +++ b/tools/perf/bench/sched-messaging.c @@ -31,9 +31,9 @@ #define DATASIZE 100 -static int use_pipes = 0; +static bool use_pipes = false; static unsigned int loops = 100; -static unsigned int thread_mode = 0; +static bool thread_mode = false; static unsigned int num_groups = 10; struct sender_context { @@ -256,10 +256,8 @@ static const struct option options[] = { "Use pipe() instead of socketpair()"), OPT_BOOLEAN('t', "thread", &thread_mode, "Be multi thread instead of multi process"), - OPT_INTEGER('g', "group", &num_groups, - "Specify number of groups"), - OPT_INTEGER('l', "loop", &loops, - "Specify number of loops"), + OPT_UINTEGER('g', "group", &num_groups, "Specify number of groups"), + OPT_UINTEGER('l', "loop", &loops, "Specify number of loops"), OPT_END() }; diff --git a/tools/perf/bench/sched-pipe.c b/tools/perf/bench/sched-pipe.c index 4f77c7c27640..d9ab3ce446ac 100644 --- a/tools/perf/bench/sched-pipe.c +++ b/tools/perf/bench/sched-pipe.c @@ -93,7 +93,7 @@ int bench_sched_pipe(int argc, const
char **argv, switch (bench_format) { case BENCH_FORMAT_DEFAULT: - printf("# Extecuted %d pipe operations between two tasks\n\n", + printf("# Executed %d pipe operations between two tasks\n\n", loops); result_usec = diff.tv_sec * 1000000; diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c index 6ad7148451c5..77bcc9b130f5 100644 --- a/tools/perf/builtin-annotate.c +++ b/tools/perf/builtin-annotate.c @@ -14,7 +14,6 @@ #include "util/cache.h" #include <linux/rbtree.h> #include "util/symbol.h" -#include "util/string.h" #include "perf.h" #include "util/debug.h" @@ -29,80 +28,16 @@ static char const *input_name = "perf.data"; -static int force; +static bool force; -static int full_paths; +static bool full_paths; -static int print_line; - -struct sym_hist { - u64 sum; - u64 ip[0]; -}; - -struct sym_ext { - struct rb_node node; - double percent; - char *path; -}; - -struct sym_priv { - struct sym_hist *hist; - struct sym_ext *ext; -}; +static bool print_line; static const char *sym_hist_filter; -static int sym__alloc_hist(struct symbol *self) -{ - struct sym_priv *priv = symbol__priv(self); - const int size = (sizeof(*priv->hist) + - (self->end - self->start) * sizeof(u64)); - - priv->hist = zalloc(size); - return priv->hist == NULL ? -1 : 0; -} - -/* - * collect histogram counts - */ -static int annotate__hist_hit(struct hist_entry *he, u64 ip) -{ - unsigned int sym_size, offset; - struct symbol *sym = he->sym; - struct sym_priv *priv; - struct sym_hist *h; - - he->count++; - - if (!sym || !he->map) - return 0; - - priv = symbol__priv(sym); - if (priv->hist == NULL && sym__alloc_hist(sym) < 0) - return -ENOMEM; - - sym_size = sym->end - sym->start; - offset = ip - sym->start; - - pr_debug3("%s: ip=%#Lx\n", __func__, he->map->unmap_ip(he->map, ip)); - - if (offset >= sym_size) - return 0; - - h = priv->hist; - h->sum++; - h->ip[offset]++; - - pr_debug3("%#Lx %s: count++ [ip: %#Lx, %#Lx] => %Ld\n", he->sym->start, - he->sym->name, ip, ip - he->sym->start, h->ip[offset]); - return 0; -} - -static int perf_session__add_hist_entry(struct perf_session *self, - struct addr_location *al, u64 count) +static int hists__add_entry(struct hists *self, struct addr_location *al) { - bool hit; struct hist_entry *he; if (sym_hist_filter != NULL && @@ -116,11 +51,11 @@ static int perf_session__add_hist_entry(struct perf_session *self, return 0; } - he = __perf_session__add_hist_entry(&self->hists, al, NULL, count, &hit); + he = __hists__add_entry(self, al, NULL, 1); if (he == NULL) return -ENOMEM; - return annotate__hist_hit(he, al->addr); + return hist_entry__inc_addr_samples(he, al->addr); } static int process_sample_event(event_t *event, struct perf_session *session) @@ -136,7 +71,7 @@ static int process_sample_event(event_t *event, struct perf_session *session) return -1; } - if (!al.filtered && perf_session__add_hist_entry(session, &al, 1)) { + if (!al.filtered && hists__add_entry(&session->hists, &al)) { pr_warning("problem incrementing symbol count, " "skipping event\n"); return -1; @@ -145,106 +80,11 @@ static int process_sample_event(event_t *event, struct perf_session *session) return 0; } -struct objdump_line { - struct list_head node; - s64 offset; - char *line; -}; - -static struct objdump_line *objdump_line__new(s64 offset, char *line) -{ - struct objdump_line *self = malloc(sizeof(*self)); - - if (self != NULL) { - self->offset = offset; - self->line = line; - } - - return self; -} - -static void objdump_line__free(struct objdump_line *self) -{ - free(self->line); - 
free(self); -} - -static void objdump__add_line(struct list_head *head, struct objdump_line *line) -{ - list_add_tail(&line->node, head); -} - -static struct objdump_line *objdump__get_next_ip_line(struct list_head *head, - struct objdump_line *pos) -{ - list_for_each_entry_continue(pos, head, node) - if (pos->offset >= 0) - return pos; - - return NULL; -} - -static int parse_line(FILE *file, struct hist_entry *he, - struct list_head *head) -{ - struct symbol *sym = he->sym; - struct objdump_line *objdump_line; - char *line = NULL, *tmp, *tmp2; - size_t line_len; - s64 line_ip, offset = -1; - char *c; - - if (getline(&line, &line_len, file) < 0) - return -1; - - if (!line) - return -1; - - c = strchr(line, '\n'); - if (c) - *c = 0; - - line_ip = -1; - - /* - * Strip leading spaces: - */ - tmp = line; - while (*tmp) { - if (*tmp != ' ') - break; - tmp++; - } - - if (*tmp) { - /* - * Parse hexa addresses followed by ':' - */ - line_ip = strtoull(tmp, &tmp2, 16); - if (*tmp2 != ':') - line_ip = -1; - } - - if (line_ip != -1) { - u64 start = map__rip_2objdump(he->map, sym->start); - offset = line_ip - start; - } - - objdump_line = objdump_line__new(offset, line); - if (objdump_line == NULL) { - free(line); - return -1; - } - objdump__add_line(head, objdump_line); - - return 0; -} - static int objdump_line__print(struct objdump_line *self, struct list_head *head, struct hist_entry *he, u64 len) { - struct symbol *sym = he->sym; + struct symbol *sym = he->ms.sym; static const char *prev_line; static const char *prev_color; @@ -327,7 +167,7 @@ static void insert_source_line(struct sym_ext *sym_ext) static void free_source_line(struct hist_entry *he, int len) { - struct sym_priv *priv = symbol__priv(he->sym); + struct sym_priv *priv = symbol__priv(he->ms.sym); struct sym_ext *sym_ext = priv->ext; int i; @@ -346,7 +186,7 @@ static void free_source_line(struct hist_entry *he, int len) static void get_source_line(struct hist_entry *he, int len, const char *filename) { - struct symbol *sym = he->sym; + struct symbol *sym = he->ms.sym; u64 start; int i; char cmd[PATH_MAX * 2]; @@ -361,7 +201,7 @@ get_source_line(struct hist_entry *he, int len, const char *filename) if (!priv->ext) return; - start = he->map->unmap_ip(he->map, sym->start); + start = he->ms.map->unmap_ip(he->ms.map, sym->start); for (i = 0; i < len; i++) { char *path = NULL; @@ -425,7 +265,7 @@ static void print_summary(const char *filename) static void hist_entry__print_hits(struct hist_entry *self) { - struct symbol *sym = self->sym; + struct symbol *sym = self->ms.sym; struct sym_priv *priv = symbol__priv(sym); struct sym_hist *h = priv->hist; u64 len = sym->end - sym->start, offset; @@ -439,23 +279,17 @@ static void hist_entry__print_hits(struct hist_entry *self) static void annotate_sym(struct hist_entry *he) { - struct map *map = he->map; + struct map *map = he->ms.map; struct dso *dso = map->dso; - struct symbol *sym = he->sym; + struct symbol *sym = he->ms.sym; const char *filename = dso->long_name, *d_filename; u64 len; - char command[PATH_MAX*2]; - FILE *file; LIST_HEAD(head); struct objdump_line *pos, *n; - if (!filename) + if (hist_entry__annotate(he, &head) < 0) return; - pr_debug("%s: filename=%s, sym=%s, start=%#Lx, end=%#Lx\n", __func__, - filename, sym->name, map->unmap_ip(map, sym->start), - map->unmap_ip(map, sym->end)); - if (full_paths) d_filename = filename; else @@ -472,29 +306,6 @@ static void annotate_sym(struct hist_entry *he) printf(" Percent | Source code & Disassembly of %s\n", d_filename); 
printf("------------------------------------------------\n"); - if (verbose >= 2) - printf("annotating [%p] %30s : [%p] %30s\n", - dso, dso->long_name, sym, sym->name); - - sprintf(command, "objdump --start-address=0x%016Lx --stop-address=0x%016Lx -dS %s|grep -v %s", - map__rip_2objdump(map, sym->start), - map__rip_2objdump(map, sym->end), - filename, filename); - - if (verbose >= 3) - printf("doing: %s\n", command); - - file = popen(command, "r"); - if (!file) - return; - - while (!feof(file)) { - if (parse_line(file, he, &head) < 0) - break; - } - - pclose(file); - if (verbose) hist_entry__print_hits(he); @@ -508,25 +319,25 @@ static void annotate_sym(struct hist_entry *he) free_source_line(he, len); } -static void perf_session__find_annotations(struct perf_session *self) +static void hists__find_annotations(struct hists *self) { struct rb_node *nd; - for (nd = rb_first(&self->hists); nd; nd = rb_next(nd)) { + for (nd = rb_first(&self->entries); nd; nd = rb_next(nd)) { struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node); struct sym_priv *priv; - if (he->sym == NULL) + if (he->ms.sym == NULL) continue; - priv = symbol__priv(he->sym); + priv = symbol__priv(he->ms.sym); if (priv->hist == NULL) continue; annotate_sym(he); /* * Since we have a hist_entry per IP for the same symbol, free - * he->sym->hist to signal we already processed this symbol. + * he->ms.sym->hist to signal we already processed this symbol. */ free(priv->hist); priv->hist = NULL; @@ -545,7 +356,7 @@ static int __cmd_annotate(void) int ret; struct perf_session *session; - session = perf_session__new(input_name, O_RDONLY, force); + session = perf_session__new(input_name, O_RDONLY, force, false); if (session == NULL) return -ENOMEM; @@ -554,7 +365,7 @@ static int __cmd_annotate(void) goto out_delete; if (dump_trace) { - event__print_totals(); + perf_session__fprintf_nr_events(session, stdout); goto out_delete; } @@ -562,11 +373,11 @@ static int __cmd_annotate(void) perf_session__fprintf(session, stdout); if (verbose > 2) - dsos__fprintf(stdout); + perf_session__fprintf_dsos(session, stdout); - perf_session__collapse_resort(&session->hists); - perf_session__output_resort(&session->hists, session->event_total[0]); - perf_session__find_annotations(session); + hists__collapse_resort(&session->hists); + hists__output_resort(&session->hists); + hists__find_annotations(&session->hists); out_delete: perf_session__delete(session); @@ -581,10 +392,12 @@ static const char * const annotate_usage[] = { static const struct option options[] = { OPT_STRING('i', "input", &input_name, "file", "input file name"), + OPT_STRING('d', "dsos", &symbol_conf.dso_list_str, "dso[,dso...]", + "only consider symbols in these dsos"), OPT_STRING('s', "symbol", &sym_hist_filter, "symbol", "symbol to annotate"), OPT_BOOLEAN('f', "force", &force, "don't complain, do it"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c index 46996774e559..fcb96269852a 100644 --- a/tools/perf/builtin-bench.c +++ b/tools/perf/builtin-bench.c @@ -95,7 +95,7 @@ static void dump_suites(int subsys_index) return; } -static char *bench_format_str; +static const char *bench_format_str; int bench_format = BENCH_FORMAT_DEFAULT; static const struct option bench_options[] = { @@ -126,7 +126,7 @@ static void print_usage(void) printf("\n"); } -static int 
bench_str2int(char *str) +static int bench_str2int(const char *str) { if (!str) return BENCH_FORMAT_DEFAULT; diff --git a/tools/perf/builtin-buildid-cache.c b/tools/perf/builtin-buildid-cache.c index 30a05f552c96..f8e3d1852029 100644 --- a/tools/perf/builtin-buildid-cache.c +++ b/tools/perf/builtin-buildid-cache.c @@ -27,7 +27,7 @@ static const struct option buildid_cache_options[] = { "file list", "file(s) to add"), OPT_STRING('r', "remove", &remove_name_list_str, "file list", "file(s) to remove"), - OPT_BOOLEAN('v', "verbose", &verbose, "be more verbose"), + OPT_INCR('v', "verbose", &verbose, "be more verbose"), OPT_END() }; diff --git a/tools/perf/builtin-buildid-list.c b/tools/perf/builtin-buildid-list.c index d0675c02f81e..44a47e13bd67 100644 --- a/tools/perf/builtin-buildid-list.c +++ b/tools/perf/builtin-buildid-list.c @@ -16,7 +16,7 @@ #include "util/symbol.h" static char const *input_name = "perf.data"; -static int force; +static bool force; static bool with_hits; static const char * const buildid_list_usage[] = { @@ -29,7 +29,7 @@ static const struct option options[] = { OPT_STRING('i', "input", &input_name, "file", "input file name"), OPT_BOOLEAN('f', "force", &force, "don't complain, do it"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose"), OPT_END() }; @@ -39,14 +39,14 @@ static int __cmd_buildid_list(void) int err = -1; struct perf_session *session; - session = perf_session__new(input_name, O_RDONLY, force); + session = perf_session__new(input_name, O_RDONLY, force, false); if (session == NULL) return -1; if (with_hits) perf_session__process_events(session, &build_id__mark_dso_hit_ops); - dsos__fprintf_buildid(stdout, with_hits); + perf_session__fprintf_dsos_buildid(session, stdout, with_hits); perf_session__delete(session); return err; diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c index 1ea15d8aeed1..a6e2fdc7a04e 100644 --- a/tools/perf/builtin-diff.c +++ b/tools/perf/builtin-diff.c @@ -19,23 +19,15 @@ static char const *input_old = "perf.data.old", *input_new = "perf.data"; static char diff__default_sort_order[] = "dso,symbol"; -static int force; +static bool force; static bool show_displacement; -static int perf_session__add_hist_entry(struct perf_session *self, - struct addr_location *al, u64 count) +static int hists__add_entry(struct hists *self, + struct addr_location *al, u64 period) { - bool hit; - struct hist_entry *he = __perf_session__add_hist_entry(&self->hists, - al, NULL, - count, &hit); - if (he == NULL) - return -ENOMEM; - - if (hit) - he->count += count; - - return 0; + if (__hists__add_entry(self, al, NULL, period) != NULL) + return 0; + return -ENOMEM; } static int diff__process_sample_event(event_t *event, struct perf_session *session) @@ -57,12 +49,12 @@ static int diff__process_sample_event(event_t *event, struct perf_session *sessi event__parse_sample(event, session->sample_type, &data); - if (perf_session__add_hist_entry(session, &al, data.period)) { - pr_warning("problem incrementing symbol count, skipping event\n"); + if (hists__add_entry(&session->hists, &al, data.period)) { + pr_warning("problem incrementing symbol period, skipping event\n"); return -1; } - session->events_stats.total += data.period; + session->hists.stats.total_period += data.period; return 0; } @@ -95,35 +87,34 @@ static void perf_session__insert_hist_entry_by_name(struct rb_root *root, rb_insert_color(&he->rb_node, root); } -static void perf_session__resort_hist_entries(struct perf_session *self) +static void 
hists__resort_entries(struct hists *self) { unsigned long position = 1; struct rb_root tmp = RB_ROOT; - struct rb_node *next = rb_first(&self->hists); + struct rb_node *next = rb_first(&self->entries); while (next != NULL) { struct hist_entry *n = rb_entry(next, struct hist_entry, rb_node); next = rb_next(&n->rb_node); - rb_erase(&n->rb_node, &self->hists); + rb_erase(&n->rb_node, &self->entries); n->position = position++; perf_session__insert_hist_entry_by_name(&tmp, n); } - self->hists = tmp; + self->entries = tmp; } -static void perf_session__set_hist_entries_positions(struct perf_session *self) +static void hists__set_positions(struct hists *self) { - perf_session__output_resort(&self->hists, self->events_stats.total); - perf_session__resort_hist_entries(self); + hists__output_resort(self); + hists__resort_entries(self); } -static struct hist_entry * -perf_session__find_hist_entry(struct perf_session *self, - struct hist_entry *he) +static struct hist_entry *hists__find_entry(struct hists *self, + struct hist_entry *he) { - struct rb_node *n = self->hists.rb_node; + struct rb_node *n = self->entries.rb_node; while (n) { struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node); @@ -140,14 +131,13 @@ perf_session__find_hist_entry(struct perf_session *self, return NULL; } -static void perf_session__match_hists(struct perf_session *old_session, - struct perf_session *new_session) +static void hists__match(struct hists *older, struct hists *newer) { struct rb_node *nd; - for (nd = rb_first(&new_session->hists); nd; nd = rb_next(nd)) { + for (nd = rb_first(&newer->entries); nd; nd = rb_next(nd)) { struct hist_entry *pos = rb_entry(nd, struct hist_entry, rb_node); - pos->pair = perf_session__find_hist_entry(old_session, pos); + pos->pair = hists__find_entry(older, pos); } } @@ -156,8 +146,8 @@ static int __cmd_diff(void) int ret, i; struct perf_session *session[2]; - session[0] = perf_session__new(input_old, O_RDONLY, force); - session[1] = perf_session__new(input_new, O_RDONLY, force); + session[0] = perf_session__new(input_old, O_RDONLY, force, false); + session[1] = perf_session__new(input_new, O_RDONLY, force, false); if (session[0] == NULL || session[1] == NULL) return -ENOMEM; @@ -167,15 +157,13 @@ static int __cmd_diff(void) goto out_delete; } - perf_session__output_resort(&session[1]->hists, - session[1]->events_stats.total); + hists__output_resort(&session[1]->hists); if (show_displacement) - perf_session__set_hist_entries_positions(session[0]); + hists__set_positions(&session[0]->hists); - perf_session__match_hists(session[0], session[1]); - perf_session__fprintf_hists(&session[1]->hists, session[0], - show_displacement, stdout, - session[1]->events_stats.total); + hists__match(&session[0]->hists, &session[1]->hists); + hists__fprintf(&session[1]->hists, &session[0]->hists, + show_displacement, stdout); out_delete: for (i = 0; i < 2; ++i) perf_session__delete(session[i]); @@ -188,7 +176,7 @@ static const char * const diff_usage[] = { }; static const struct option options[] = { - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('m', "displacement", &show_displacement, "Show position displacement relative to baseline"), @@ -225,6 +213,10 @@ int cmd_diff(int argc, const char **argv, const char *prefix __used) input_new = argv[1]; } else input_new = argv[0]; + } else if (symbol_conf.default_guest_vmlinux_name || + symbol_conf.default_guest_kallsyms) { + input_old = "perf.data.host"; + input_new 
= "perf.data.guest"; } symbol_conf.exclude_other = false; diff --git a/tools/perf/builtin-help.c b/tools/perf/builtin-help.c index 215b584007b1..6d5a8a7faf48 100644 --- a/tools/perf/builtin-help.c +++ b/tools/perf/builtin-help.c @@ -29,14 +29,14 @@ enum help_format { HELP_FORMAT_WEB, }; -static int show_all = 0; +static bool show_all = false; static enum help_format help_format = HELP_FORMAT_MAN; static struct option builtin_help_options[] = { OPT_BOOLEAN('a', "all", &show_all, "print all available commands"), - OPT_SET_INT('m', "man", &help_format, "show man page", HELP_FORMAT_MAN), - OPT_SET_INT('w', "web", &help_format, "show manual in web browser", + OPT_SET_UINT('m', "man", &help_format, "show man page", HELP_FORMAT_MAN), + OPT_SET_UINT('w', "web", &help_format, "show manual in web browser", HELP_FORMAT_WEB), - OPT_SET_INT('i', "info", &help_format, "show info page", + OPT_SET_UINT('i', "info", &help_format, "show info page", HELP_FORMAT_INFO), OPT_END(), }; diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c new file mode 100644 index 000000000000..8e3e47b064ce --- /dev/null +++ b/tools/perf/builtin-inject.c @@ -0,0 +1,228 @@ +/* + * builtin-inject.c + * + * Builtin inject command: Examine the live mode (stdin) event stream + * and repipe it to stdout while optionally injecting additional + * events into it. + */ +#include "builtin.h" + +#include "perf.h" +#include "util/session.h" +#include "util/debug.h" + +#include "util/parse-options.h" + +static char const *input_name = "-"; +static bool inject_build_ids; + +static int event__repipe(event_t *event __used, + struct perf_session *session __used) +{ + uint32_t size; + void *buf = event; + + size = event->header.size; + + while (size) { + int ret = write(STDOUT_FILENO, buf, size); + if (ret < 0) + return -errno; + + size -= ret; + buf += ret; + } + + return 0; +} + +static int event__repipe_mmap(event_t *self, struct perf_session *session) +{ + int err; + + err = event__process_mmap(self, session); + event__repipe(self, session); + + return err; +} + +static int event__repipe_task(event_t *self, struct perf_session *session) +{ + int err; + + err = event__process_task(self, session); + event__repipe(self, session); + + return err; +} + +static int event__repipe_tracing_data(event_t *self, + struct perf_session *session) +{ + int err; + + event__repipe(self, session); + err = event__process_tracing_data(self, session); + + return err; +} + +static int dso__read_build_id(struct dso *self) +{ + if (self->has_build_id) + return 0; + + if (filename__read_build_id(self->long_name, self->build_id, + sizeof(self->build_id)) > 0) { + self->has_build_id = true; + return 0; + } + + return -1; +} + +static int dso__inject_build_id(struct dso *self, struct perf_session *session) +{ + u16 misc = PERF_RECORD_MISC_USER; + struct machine *machine; + int err; + + if (dso__read_build_id(self) < 0) { + pr_debug("no build_id found for %s\n", self->long_name); + return -1; + } + + machine = perf_session__find_host_machine(session); + if (machine == NULL) { + pr_err("Can't find machine for session\n"); + return -1; + } + + if (self->kernel) + misc = PERF_RECORD_MISC_KERNEL; + + err = event__synthesize_build_id(self, misc, event__repipe, + machine, session); + if (err) { + pr_err("Can't synthesize build_id event for %s\n", self->long_name); + return -1; + } + + return 0; +} + +static int event__inject_buildid(event_t *event, struct perf_session *session) +{ + struct addr_location al; + struct thread *thread; + u8 cpumode; + + cpumode = 
event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; + + thread = perf_session__findnew(session, event->ip.pid); + if (thread == NULL) { + pr_err("problem processing %d event, skipping it.\n", + event->header.type); + goto repipe; + } + + thread__find_addr_map(thread, session, cpumode, MAP__FUNCTION, + event->ip.pid, event->ip.ip, &al); + + if (al.map != NULL) { + if (!al.map->dso->hit) { + al.map->dso->hit = 1; + if (map__load(al.map, NULL) >= 0) { + dso__inject_build_id(al.map->dso, session); + /* + * If this fails, too bad, let the other side + * account this as unresolved. + */ + } else + pr_warning("no symbols found in %s, maybe " + "install a debug package?\n", + al.map->dso->long_name); + } + } + +repipe: + event__repipe(event, session); + return 0; +} + +struct perf_event_ops inject_ops = { + .sample = event__repipe, + .mmap = event__repipe, + .comm = event__repipe, + .fork = event__repipe, + .exit = event__repipe, + .lost = event__repipe, + .read = event__repipe, + .throttle = event__repipe, + .unthrottle = event__repipe, + .attr = event__repipe, + .event_type = event__repipe, + .tracing_data = event__repipe, + .build_id = event__repipe, +}; + +extern volatile int session_done; + +static void sig_handler(int sig __attribute__((__unused__))) +{ + session_done = 1; +} + +static int __cmd_inject(void) +{ + struct perf_session *session; + int ret = -EINVAL; + + signal(SIGINT, sig_handler); + + if (inject_build_ids) { + inject_ops.sample = event__inject_buildid; + inject_ops.mmap = event__repipe_mmap; + inject_ops.fork = event__repipe_task; + inject_ops.tracing_data = event__repipe_tracing_data; + } + + session = perf_session__new(input_name, O_RDONLY, false, true); + if (session == NULL) + return -ENOMEM; + + ret = perf_session__process_events(session, &inject_ops); + + perf_session__delete(session); + + return ret; +} + +static const char * const report_usage[] = { + "perf inject [<options>]", + NULL +}; + +static const struct option options[] = { + OPT_BOOLEAN('b', "build-ids", &inject_build_ids, + "Inject build-ids into the output stream"), + OPT_INCR('v', "verbose", &verbose, + "be more verbose (show build ids, etc)"), + OPT_END() +}; + +int cmd_inject(int argc, const char **argv, const char *prefix __used) +{ + argc = parse_options(argc, argv, options, report_usage, 0); + + /* + * Any (unrecognized) arguments left? + */ + if (argc) + usage_with_options(report_usage, options); + + if (symbol__init() < 0) + return -1; + + return __cmd_inject(); +} diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index 924a9518931a..31f60a2535e0 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -335,8 +335,9 @@ static int process_sample_event(event_t *event, struct perf_session *session) } static struct perf_event_ops event_ops = { - .sample = process_sample_event, - .comm = event__process_comm, + .sample = process_sample_event, + .comm = event__process_comm, + .ordered_samples = true, }; static double fragmentation(unsigned long n_req, unsigned long n_alloc) @@ -351,6 +352,7 @@ static void __print_result(struct rb_root *root, struct perf_session *session, int n_lines, int is_caller) { struct rb_node *next; + struct machine *machine; printf("%.102s\n", graph_dotted_line); printf(" %-34s |", is_caller ? 
"Callsite": "Alloc Ptr"); @@ -359,23 +361,29 @@ static void __print_result(struct rb_root *root, struct perf_session *session, next = rb_first(root); + machine = perf_session__find_host_machine(session); + if (!machine) { + pr_err("__print_result: couldn't find kernel information\n"); + return; + } while (next && n_lines--) { struct alloc_stat *data = rb_entry(next, struct alloc_stat, node); struct symbol *sym = NULL; + struct map *map; char buf[BUFSIZ]; u64 addr; if (is_caller) { addr = data->call_site; if (!raw_ip) - sym = map_groups__find_function(&session->kmaps, addr, NULL); + sym = machine__find_kernel_function(machine, addr, &map, NULL); } else addr = data->ptr; if (sym != NULL) snprintf(buf, sizeof(buf), "%s+%Lx", sym->name, - addr - sym->start); + addr - map->unmap_ip(map, sym->start)); else snprintf(buf, sizeof(buf), "%#Lx", addr); printf(" %-34s |", buf); @@ -484,10 +492,13 @@ static void sort_result(void) static int __cmd_kmem(void) { int err = -EINVAL; - struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0); + struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0, false); if (session == NULL) return -ENOMEM; + if (perf_session__create_kernel_maps(session) < 0) + goto out_delete; + if (!perf_session__has_traces(session, "kmem record")) goto out_delete; @@ -718,7 +729,6 @@ static const char *record_args[] = { "record", "-a", "-R", - "-M", "-f", "-c", "1", "-e", "kmem:kmalloc", diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c new file mode 100644 index 000000000000..34d1e853829d --- /dev/null +++ b/tools/perf/builtin-kvm.c @@ -0,0 +1,144 @@ +#include "builtin.h" +#include "perf.h" + +#include "util/util.h" +#include "util/cache.h" +#include "util/symbol.h" +#include "util/thread.h" +#include "util/header.h" +#include "util/session.h" + +#include "util/parse-options.h" +#include "util/trace-event.h" + +#include "util/debug.h" + +#include <sys/prctl.h> + +#include <semaphore.h> +#include <pthread.h> +#include <math.h> + +static const char *file_name; +static char name_buffer[256]; + +bool perf_host = 1; +bool perf_guest; + +static const char * const kvm_usage[] = { + "perf kvm [<options>] {top|record|report|diff|buildid-list}", + NULL +}; + +static const struct option kvm_options[] = { + OPT_STRING('i', "input", &file_name, "file", + "Input file name"), + OPT_STRING('o', "output", &file_name, "file", + "Output file name"), + OPT_BOOLEAN(0, "guest", &perf_guest, + "Collect guest os data"), + OPT_BOOLEAN(0, "host", &perf_host, + "Collect guest os data"), + OPT_STRING(0, "guestmount", &symbol_conf.guestmount, "directory", + "guest mount directory under which every guest os" + " instance has a subdir"), + OPT_STRING(0, "guestvmlinux", &symbol_conf.default_guest_vmlinux_name, + "file", "file saving guest os vmlinux"), + OPT_STRING(0, "guestkallsyms", &symbol_conf.default_guest_kallsyms, + "file", "file saving guest os /proc/kallsyms"), + OPT_STRING(0, "guestmodules", &symbol_conf.default_guest_modules, + "file", "file saving guest os /proc/modules"), + OPT_END() +}; + +static int __cmd_record(int argc, const char **argv) +{ + int rec_argc, i = 0, j; + const char **rec_argv; + + rec_argc = argc + 2; + rec_argv = calloc(rec_argc + 1, sizeof(char *)); + rec_argv[i++] = strdup("record"); + rec_argv[i++] = strdup("-o"); + rec_argv[i++] = strdup(file_name); + for (j = 1; j < argc; j++, i++) + rec_argv[i] = argv[j]; + + BUG_ON(i != rec_argc); + + return cmd_record(i, rec_argv, NULL); +} + +static int __cmd_report(int argc, const char 
**argv) +{ + int rec_argc, i = 0, j; + const char **rec_argv; + + rec_argc = argc + 2; + rec_argv = calloc(rec_argc + 1, sizeof(char *)); + rec_argv[i++] = strdup("report"); + rec_argv[i++] = strdup("-i"); + rec_argv[i++] = strdup(file_name); + for (j = 1; j < argc; j++, i++) + rec_argv[i] = argv[j]; + + BUG_ON(i != rec_argc); + + return cmd_report(i, rec_argv, NULL); +} + +static int __cmd_buildid_list(int argc, const char **argv) +{ + int rec_argc, i = 0, j; + const char **rec_argv; + + rec_argc = argc + 2; + rec_argv = calloc(rec_argc + 1, sizeof(char *)); + rec_argv[i++] = strdup("buildid-list"); + rec_argv[i++] = strdup("-i"); + rec_argv[i++] = strdup(file_name); + for (j = 1; j < argc; j++, i++) + rec_argv[i] = argv[j]; + + BUG_ON(i != rec_argc); + + return cmd_buildid_list(i, rec_argv, NULL); +} + +int cmd_kvm(int argc, const char **argv, const char *prefix __used) +{ + perf_host = perf_guest = 0; + + argc = parse_options(argc, argv, kvm_options, kvm_usage, + PARSE_OPT_STOP_AT_NON_OPTION); + if (!argc) + usage_with_options(kvm_usage, kvm_options); + + if (!perf_host) + perf_guest = 1; + + if (!file_name) { + if (perf_host && !perf_guest) + sprintf(name_buffer, "perf.data.host"); + else if (!perf_host && perf_guest) + sprintf(name_buffer, "perf.data.guest"); + else + sprintf(name_buffer, "perf.data.kvm"); + file_name = name_buffer; + } + + if (!strncmp(argv[0], "rec", 3)) + return __cmd_record(argc, argv); + else if (!strncmp(argv[0], "rep", 3)) + return __cmd_report(argc, argv); + else if (!strncmp(argv[0], "diff", 4)) + return cmd_diff(argc, argv, NULL); + else if (!strncmp(argv[0], "top", 3)) + return cmd_top(argc, argv, NULL); + else if (!strncmp(argv[0], "buildid-list", 12)) + return __cmd_buildid_list(argc, argv); + else + usage_with_options(kvm_usage, kvm_options); + + return 0; +} diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c index e12c844df1e2..821c1586a22b 100644 --- a/tools/perf/builtin-lock.c +++ b/tools/perf/builtin-lock.c @@ -23,6 +23,8 @@ #include <linux/list.h> #include <linux/hash.h> +static struct perf_session *session; + /* based on kernel/lockdep.c */ #define LOCKHASH_BITS 12 #define LOCKHASH_SIZE (1UL << LOCKHASH_BITS) @@ -32,9 +34,6 @@ static struct list_head lockhash_table[LOCKHASH_SIZE]; #define __lockhashfn(key) hash_long((unsigned long)key, LOCKHASH_BITS) #define lockhashentry(key) (lockhash_table + __lockhashfn((key))) -#define LOCK_STATE_UNLOCKED 0 /* initial state */ -#define LOCK_STATE_LOCKED 1 - struct lock_stat { struct list_head hash_entry; struct rb_node rb; /* used for sorting */ @@ -47,20 +46,151 @@ struct lock_stat { void *addr; /* address of lockdep_map, used as ID */ char *name; /* for strcpy(), we cannot use const */ - int state; - u64 prev_event_time; /* timestamp of previous event */ - - unsigned int nr_acquired; unsigned int nr_acquire; + unsigned int nr_acquired; unsigned int nr_contended; unsigned int nr_release; + unsigned int nr_readlock; + unsigned int nr_trylock; /* these times are in nano sec. */ u64 wait_time_total; u64 wait_time_min; u64 wait_time_max; + + int discard; /* flag of blacklist */ }; +/* + * States of lock_seq_stat + * + * UNINITIALIZED is required for detecting first event of acquire. + * As the nature of lock events, there is no guarantee + * that the first event for the locks are acquire, + * it can be acquired, contended or release. 
+ */ +#define SEQ_STATE_UNINITIALIZED 0 /* initial state */ +#define SEQ_STATE_RELEASED 1 +#define SEQ_STATE_ACQUIRING 2 +#define SEQ_STATE_ACQUIRED 3 +#define SEQ_STATE_READ_ACQUIRED 4 +#define SEQ_STATE_CONTENDED 5 + +/* + * MAX_LOCK_DEPTH + * Imported from include/linux/sched.h. + * Should this be synchronized? + */ +#define MAX_LOCK_DEPTH 48 + +/* + * struct lock_seq_stat: + * Place to put on state of one lock sequence + * 1) acquire -> acquired -> release + * 2) acquire -> contended -> acquired -> release + * 3) acquire (with read or try) -> release + * 4) Are there other patterns? + */ +struct lock_seq_stat { + struct list_head list; + int state; + u64 prev_event_time; + void *addr; + + int read_count; +}; + +struct thread_stat { + struct rb_node rb; + + u32 tid; + struct list_head seq_list; +}; + +static struct rb_root thread_stats; + +static struct thread_stat *thread_stat_find(u32 tid) +{ + struct rb_node *node; + struct thread_stat *st; + + node = thread_stats.rb_node; + while (node) { + st = container_of(node, struct thread_stat, rb); + if (st->tid == tid) + return st; + else if (tid < st->tid) + node = node->rb_left; + else + node = node->rb_right; + } + + return NULL; +} + +static void thread_stat_insert(struct thread_stat *new) +{ + struct rb_node **rb = &thread_stats.rb_node; + struct rb_node *parent = NULL; + struct thread_stat *p; + + while (*rb) { + p = container_of(*rb, struct thread_stat, rb); + parent = *rb; + + if (new->tid < p->tid) + rb = &(*rb)->rb_left; + else if (new->tid > p->tid) + rb = &(*rb)->rb_right; + else + BUG_ON("inserting invalid thread_stat\n"); + } + + rb_link_node(&new->rb, parent, rb); + rb_insert_color(&new->rb, &thread_stats); +} + +static struct thread_stat *thread_stat_findnew_after_first(u32 tid) +{ + struct thread_stat *st; + + st = thread_stat_find(tid); + if (st) + return st; + + st = zalloc(sizeof(struct thread_stat)); + if (!st) + die("memory allocation failed\n"); + + st->tid = tid; + INIT_LIST_HEAD(&st->seq_list); + + thread_stat_insert(st); + + return st; +} + +static struct thread_stat *thread_stat_findnew_first(u32 tid); +static struct thread_stat *(*thread_stat_findnew)(u32 tid) = + thread_stat_findnew_first; + +static struct thread_stat *thread_stat_findnew_first(u32 tid) +{ + struct thread_stat *st; + + st = zalloc(sizeof(struct thread_stat)); + if (!st) + die("memory allocation failed\n"); + st->tid = tid; + INIT_LIST_HEAD(&st->seq_list); + + rb_link_node(&st->rb, NULL, &thread_stats.rb_node); + rb_insert_color(&st->rb, &thread_stats); + + thread_stat_findnew = thread_stat_findnew_after_first; + return st; +} + /* build simple key function one is bigger than two */ #define SINGLE_KEY(member) \ static int lock_stat_key_ ## member(struct lock_stat *one, \ @@ -175,8 +305,6 @@ static struct lock_stat *lock_stat_findnew(void *addr, const char *name) goto alloc_failed; strcpy(new->name, name); - /* LOCK_STATE_UNLOCKED == 0 isn't guaranteed forever */ - new->state = LOCK_STATE_UNLOCKED; new->wait_time_min = ULLONG_MAX; list_add(&new->hash_entry, entry); @@ -188,8 +316,6 @@ alloc_failed: static char const *input_name = "perf.data"; -static int profile_cpu = -1; - struct raw_event_sample { u32 size; char data[0]; @@ -198,6 +324,7 @@ struct raw_event_sample { struct trace_acquire_event { void *addr; const char *name; + int flag; }; struct trace_acquired_event { @@ -241,120 +368,258 @@ struct trace_lock_handler { struct thread *thread); }; +static struct lock_seq_stat *get_seq(struct thread_stat *ts, void *addr) +{ + struct lock_seq_stat 
*seq; + + list_for_each_entry(seq, &ts->seq_list, list) { + if (seq->addr == addr) + return seq; + } + + seq = zalloc(sizeof(struct lock_seq_stat)); + if (!seq) + die("Not enough memory\n"); + seq->state = SEQ_STATE_UNINITIALIZED; + seq->addr = addr; + + list_add(&seq->list, &ts->seq_list); + return seq; +} + +enum broken_state { + BROKEN_ACQUIRE, + BROKEN_ACQUIRED, + BROKEN_CONTENDED, + BROKEN_RELEASE, + BROKEN_MAX, +}; + +static int bad_hist[BROKEN_MAX]; + +enum acquire_flags { + TRY_LOCK = 1, + READ_LOCK = 2, +}; + static void report_lock_acquire_event(struct trace_acquire_event *acquire_event, struct event *__event __used, int cpu __used, - u64 timestamp, + u64 timestamp __used, struct thread *thread __used) { - struct lock_stat *st; + struct lock_stat *ls; + struct thread_stat *ts; + struct lock_seq_stat *seq; + + ls = lock_stat_findnew(acquire_event->addr, acquire_event->name); + if (ls->discard) + return; - st = lock_stat_findnew(acquire_event->addr, acquire_event->name); + ts = thread_stat_findnew(thread->pid); + seq = get_seq(ts, acquire_event->addr); - switch (st->state) { - case LOCK_STATE_UNLOCKED: + switch (seq->state) { + case SEQ_STATE_UNINITIALIZED: + case SEQ_STATE_RELEASED: + if (!acquire_event->flag) { + seq->state = SEQ_STATE_ACQUIRING; + } else { + if (acquire_event->flag & TRY_LOCK) + ls->nr_trylock++; + if (acquire_event->flag & READ_LOCK) + ls->nr_readlock++; + seq->state = SEQ_STATE_READ_ACQUIRED; + seq->read_count = 1; + ls->nr_acquired++; + } + break; + case SEQ_STATE_READ_ACQUIRED: + if (acquire_event->flag & READ_LOCK) { + seq->read_count++; + ls->nr_acquired++; + goto end; + } else { + goto broken; + } break; - case LOCK_STATE_LOCKED: + case SEQ_STATE_ACQUIRED: + case SEQ_STATE_ACQUIRING: + case SEQ_STATE_CONTENDED: +broken: + /* broken lock sequence, discard it */ + ls->discard = 1; + bad_hist[BROKEN_ACQUIRE]++; + list_del(&seq->list); + free(seq); + goto end; break; default: - BUG_ON(1); + BUG_ON("Unknown state of lock sequence found!\n"); break; } - st->prev_event_time = timestamp; + ls->nr_acquire++; + seq->prev_event_time = timestamp; +end: + return; } static void report_lock_acquired_event(struct trace_acquired_event *acquired_event, struct event *__event __used, int cpu __used, - u64 timestamp, + u64 timestamp __used, struct thread *thread __used) { - struct lock_stat *st; + struct lock_stat *ls; + struct thread_stat *ts; + struct lock_seq_stat *seq; + u64 contended_term; + + ls = lock_stat_findnew(acquired_event->addr, acquired_event->name); + if (ls->discard) + return; - st = lock_stat_findnew(acquired_event->addr, acquired_event->name); + ts = thread_stat_findnew(thread->pid); + seq = get_seq(ts, acquired_event->addr); - switch (st->state) { - case LOCK_STATE_UNLOCKED: - st->state = LOCK_STATE_LOCKED; - st->nr_acquired++; + switch (seq->state) { + case SEQ_STATE_UNINITIALIZED: + /* orphan event, do nothing */ + return; + case SEQ_STATE_ACQUIRING: + break; + case SEQ_STATE_CONTENDED: + contended_term = timestamp - seq->prev_event_time; + ls->wait_time_total += contended_term; + if (contended_term < ls->wait_time_min) + ls->wait_time_min = contended_term; + if (ls->wait_time_max < contended_term) + ls->wait_time_max = contended_term; break; - case LOCK_STATE_LOCKED: + case SEQ_STATE_RELEASED: + case SEQ_STATE_ACQUIRED: + case SEQ_STATE_READ_ACQUIRED: + /* broken lock sequence, discard it */ + ls->discard = 1; + bad_hist[BROKEN_ACQUIRED]++; + list_del(&seq->list); + free(seq); + goto end; break; + default: - BUG_ON(1); + BUG_ON("Unknown state of lock 
sequence found!\n"); break; } - st->prev_event_time = timestamp; + seq->state = SEQ_STATE_ACQUIRED; + ls->nr_acquired++; + seq->prev_event_time = timestamp; +end: + return; } static void report_lock_contended_event(struct trace_contended_event *contended_event, struct event *__event __used, int cpu __used, - u64 timestamp, + u64 timestamp __used, struct thread *thread __used) { - struct lock_stat *st; + struct lock_stat *ls; + struct thread_stat *ts; + struct lock_seq_stat *seq; - st = lock_stat_findnew(contended_event->addr, contended_event->name); + ls = lock_stat_findnew(contended_event->addr, contended_event->name); + if (ls->discard) + return; - switch (st->state) { - case LOCK_STATE_UNLOCKED: + ts = thread_stat_findnew(thread->pid); + seq = get_seq(ts, contended_event->addr); + + switch (seq->state) { + case SEQ_STATE_UNINITIALIZED: + /* orphan event, do nothing */ + return; + case SEQ_STATE_ACQUIRING: break; - case LOCK_STATE_LOCKED: - st->nr_contended++; + case SEQ_STATE_RELEASED: + case SEQ_STATE_ACQUIRED: + case SEQ_STATE_READ_ACQUIRED: + case SEQ_STATE_CONTENDED: + /* broken lock sequence, discard it */ + ls->discard = 1; + bad_hist[BROKEN_CONTENDED]++; + list_del(&seq->list); + free(seq); + goto end; break; default: - BUG_ON(1); + BUG_ON("Unknown state of lock sequence found!\n"); break; } - st->prev_event_time = timestamp; + seq->state = SEQ_STATE_CONTENDED; + ls->nr_contended++; + seq->prev_event_time = timestamp; +end: + return; } static void report_lock_release_event(struct trace_release_event *release_event, struct event *__event __used, int cpu __used, - u64 timestamp, + u64 timestamp __used, struct thread *thread __used) { - struct lock_stat *st; - u64 hold_time; + struct lock_stat *ls; + struct thread_stat *ts; + struct lock_seq_stat *seq; - st = lock_stat_findnew(release_event->addr, release_event->name); + ls = lock_stat_findnew(release_event->addr, release_event->name); + if (ls->discard) + return; - switch (st->state) { - case LOCK_STATE_UNLOCKED: - break; - case LOCK_STATE_LOCKED: - st->state = LOCK_STATE_UNLOCKED; - hold_time = timestamp - st->prev_event_time; + ts = thread_stat_findnew(thread->pid); + seq = get_seq(ts, release_event->addr); - if (timestamp < st->prev_event_time) { - /* terribly, this can happen... 
*/ + switch (seq->state) { + case SEQ_STATE_UNINITIALIZED: + goto end; + break; + case SEQ_STATE_ACQUIRED: + break; + case SEQ_STATE_READ_ACQUIRED: + seq->read_count--; + BUG_ON(seq->read_count < 0); + if (!seq->read_count) { + ls->nr_release++; goto end; } - - if (st->wait_time_min > hold_time) - st->wait_time_min = hold_time; - if (st->wait_time_max < hold_time) - st->wait_time_max = hold_time; - st->wait_time_total += hold_time; - - st->nr_release++; + break; + case SEQ_STATE_ACQUIRING: + case SEQ_STATE_CONTENDED: + case SEQ_STATE_RELEASED: + /* broken lock sequence, discard it */ + ls->discard = 1; + bad_hist[BROKEN_RELEASE]++; + goto free_seq; break; default: - BUG_ON(1); + BUG_ON("Unknown state of lock sequence found!\n"); break; } + ls->nr_release++; +free_seq: + list_del(&seq->list); + free(seq); end: - st->prev_event_time = timestamp; + return; } /* lock oriented handlers */ @@ -381,6 +646,7 @@ process_lock_acquire_event(void *data, tmp = raw_field_value(event, "lockdep_addr", data); memcpy(&acquire_event.addr, &tmp, sizeof(void *)); acquire_event.name = (char *)raw_field_ptr(event, "name", data); + acquire_event.flag = (int)raw_field_value(event, "flag", data); if (trace_handler->acquire_event) trace_handler->acquire_event(&acquire_event, event, cpu, timestamp, thread); @@ -441,8 +707,7 @@ process_lock_release_event(void *data, } static void -process_raw_event(void *data, int cpu, - u64 timestamp, struct thread *thread) +process_raw_event(void *data, int cpu, u64 timestamp, struct thread *thread) { struct event *event; int type; @@ -460,173 +725,19 @@ process_raw_event(void *data, int cpu, process_lock_release_event(data, event, cpu, timestamp, thread); } -struct raw_event_queue { - u64 timestamp; - int cpu; - void *data; - struct thread *thread; - struct list_head list; -}; - -static LIST_HEAD(raw_event_head); - -#define FLUSH_PERIOD (5 * NSEC_PER_SEC) - -static u64 flush_limit = ULLONG_MAX; -static u64 last_flush = 0; -struct raw_event_queue *last_inserted; - -static void flush_raw_event_queue(u64 limit) -{ - struct raw_event_queue *tmp, *iter; - - list_for_each_entry_safe(iter, tmp, &raw_event_head, list) { - if (iter->timestamp > limit) - return; - - if (iter == last_inserted) - last_inserted = NULL; - - process_raw_event(iter->data, iter->cpu, iter->timestamp, - iter->thread); - - last_flush = iter->timestamp; - list_del(&iter->list); - free(iter->data); - free(iter); - } -} - -static void __queue_raw_event_end(struct raw_event_queue *new) -{ - struct raw_event_queue *iter; - - list_for_each_entry_reverse(iter, &raw_event_head, list) { - if (iter->timestamp < new->timestamp) { - list_add(&new->list, &iter->list); - return; - } - } - - list_add(&new->list, &raw_event_head); -} - -static void __queue_raw_event_before(struct raw_event_queue *new, - struct raw_event_queue *iter) +static void print_bad_events(int bad, int total) { - list_for_each_entry_continue_reverse(iter, &raw_event_head, list) { - if (iter->timestamp < new->timestamp) { - list_add(&new->list, &iter->list); - return; - } - } - - list_add(&new->list, &raw_event_head); -} - -static void __queue_raw_event_after(struct raw_event_queue *new, - struct raw_event_queue *iter) -{ - list_for_each_entry_continue(iter, &raw_event_head, list) { - if (iter->timestamp > new->timestamp) { - list_add_tail(&new->list, &iter->list); - return; - } - } - list_add_tail(&new->list, &raw_event_head); -} - -/* The queue is ordered by time */ -static void __queue_raw_event(struct raw_event_queue *new) -{ - if (!last_inserted) { - 
__queue_raw_event_end(new); - return; - } - - /* - * Most of the time the current event has a timestamp - * very close to the last event inserted, unless we just switched - * to another event buffer. Having a sorting based on a list and - * on the last inserted event that is close to the current one is - * probably more efficient than an rbtree based sorting. - */ - if (last_inserted->timestamp >= new->timestamp) - __queue_raw_event_before(new, last_inserted); - else - __queue_raw_event_after(new, last_inserted); -} - -static void queue_raw_event(void *data, int raw_size, int cpu, - u64 timestamp, struct thread *thread) -{ - struct raw_event_queue *new; - - if (flush_limit == ULLONG_MAX) - flush_limit = timestamp + FLUSH_PERIOD; - - if (timestamp < last_flush) { - printf("Warning: Timestamp below last timeslice flush\n"); - return; - } - - new = malloc(sizeof(*new)); - if (!new) - die("Not enough memory\n"); - - new->timestamp = timestamp; - new->cpu = cpu; - new->thread = thread; - - new->data = malloc(raw_size); - if (!new->data) - die("Not enough memory\n"); - - memcpy(new->data, data, raw_size); - - __queue_raw_event(new); - last_inserted = new; - - /* - * We want to have a slice of events covering 2 * FLUSH_PERIOD - * If FLUSH_PERIOD is big enough, it ensures every events that occured - * in the first half of the timeslice have all been buffered and there - * are none remaining (we need that because of the weakly ordered - * event recording we have). Then once we reach the 2 * FLUSH_PERIOD - * timeslice, we flush the first half to be gentle with the memory - * (the second half can still get new events in the middle, so wait - * another period to flush it) - */ - if (new->timestamp > flush_limit && - new->timestamp - flush_limit > FLUSH_PERIOD) { - flush_limit += FLUSH_PERIOD; - flush_raw_event_queue(flush_limit); - } -} - -static int process_sample_event(event_t *event, struct perf_session *session) -{ - struct thread *thread; - struct sample_data data; - - bzero(&data, sizeof(struct sample_data)); - event__parse_sample(event, session->sample_type, &data); - thread = perf_session__findnew(session, data.pid); - - if (thread == NULL) { - pr_debug("problem processing %d event, skipping it.\n", - event->header.type); - return -1; - } - - dump_printf(" ... 
thread: %s:%d\n", thread->comm, thread->pid); - - if (profile_cpu != -1 && profile_cpu != (int) data.cpu) - return 0; - - queue_raw_event(data.raw_data, data.raw_size, data.cpu, data.time, thread); - - return 0; + /* Output for debug, this have to be removed */ + int i; + const char *name[4] = + { "acquire", "acquired", "contended", "release" }; + + pr_info("\n=== output for debug===\n\n"); + pr_info("bad: %d, total: %d\n", bad, total); + pr_info("bad rate: %f %%\n", (double)bad / (double)total * 100); + pr_info("histogram of events caused bad sequence\n"); + for (i = 0; i < BROKEN_MAX; i++) + pr_info(" %10s: %d\n", name[i], bad_hist[i]); } /* TODO: various way to print, coloring, nano or milli sec */ @@ -634,26 +745,30 @@ static void print_result(void) { struct lock_stat *st; char cut_name[20]; + int bad, total; - printf("%18s ", "ID"); - printf("%20s ", "Name"); - printf("%10s ", "acquired"); - printf("%10s ", "contended"); + pr_info("%20s ", "Name"); + pr_info("%10s ", "acquired"); + pr_info("%10s ", "contended"); - printf("%15s ", "total wait (ns)"); - printf("%15s ", "max wait (ns)"); - printf("%15s ", "min wait (ns)"); + pr_info("%15s ", "total wait (ns)"); + pr_info("%15s ", "max wait (ns)"); + pr_info("%15s ", "min wait (ns)"); - printf("\n\n"); + pr_info("\n\n"); + bad = total = 0; while ((st = pop_from_result())) { + total++; + if (st->discard) { + bad++; + continue; + } bzero(cut_name, 20); - printf("%p ", st->addr); - if (strlen(st->name) < 16) { /* output raw name */ - printf("%20s ", st->name); + pr_info("%20s ", st->name); } else { strncpy(cut_name, st->name, 16); cut_name[16] = '.'; @@ -661,18 +776,39 @@ static void print_result(void) cut_name[18] = '.'; cut_name[19] = '\0'; /* cut off name for saving output style */ - printf("%20s ", cut_name); + pr_info("%20s ", cut_name); } - printf("%10u ", st->nr_acquired); - printf("%10u ", st->nr_contended); + pr_info("%10u ", st->nr_acquired); + pr_info("%10u ", st->nr_contended); - printf("%15llu ", st->wait_time_total); - printf("%15llu ", st->wait_time_max); - printf("%15llu ", st->wait_time_min == ULLONG_MAX ? + pr_info("%15llu ", st->wait_time_total); + pr_info("%15llu ", st->wait_time_max); + pr_info("%15llu ", st->wait_time_min == ULLONG_MAX ? 
0 : st->wait_time_min); - printf("\n"); + pr_info("\n"); } + + print_bad_events(bad, total); +} + +static bool info_threads, info_map; + +static void dump_threads(void) +{ + struct thread_stat *st; + struct rb_node *node; + struct thread *t; + + pr_info("%10s: comm\n", "Thread ID"); + + node = rb_first(&thread_stats); + while (node) { + st = container_of(node, struct thread_stat, rb); + t = perf_session__findnew(session, st->tid); + pr_info("%10d: %s\n", st->tid, t->comm); + node = rb_next(node); + }; } static void dump_map(void) @@ -680,23 +816,53 @@ static void dump_map(void) unsigned int i; struct lock_stat *st; + pr_info("Address of instance: name of class\n"); for (i = 0; i < LOCKHASH_SIZE; i++) { list_for_each_entry(st, &lockhash_table[i], hash_entry) { - printf("%p: %s\n", st->addr, st->name); + pr_info(" %p: %s\n", st->addr, st->name); } } } +static void dump_info(void) +{ + if (info_threads) + dump_threads(); + else if (info_map) + dump_map(); + else + die("Unknown type of information\n"); +} + +static int process_sample_event(event_t *self, struct perf_session *s) +{ + struct sample_data data; + struct thread *thread; + + bzero(&data, sizeof(data)); + event__parse_sample(self, s->sample_type, &data); + + thread = perf_session__findnew(s, data.tid); + if (thread == NULL) { + pr_debug("problem processing %d event, skipping it.\n", + self->header.type); + return -1; + } + + process_raw_event(data.raw_data, data.cpu, data.time, thread); + + return 0; +} + static struct perf_event_ops eops = { .sample = process_sample_event, .comm = event__process_comm, + .ordered_samples = true, }; -static struct perf_session *session; - static int read_events(void) { - session = perf_session__new(input_name, O_RDONLY, 0); + session = perf_session__new(input_name, O_RDONLY, 0, false); if (!session) die("Initializing perf session failed\n"); @@ -720,7 +886,6 @@ static void __cmd_report(void) setup_pager(); select_key(); read_events(); - flush_raw_event_queue(ULLONG_MAX); sort_result(); print_result(); } @@ -737,6 +902,19 @@ static const struct option report_options[] = { OPT_END() }; +static const char * const info_usage[] = { + "perf lock info [<options>]", + NULL +}; + +static const struct option info_options[] = { + OPT_BOOLEAN('t', "threads", &info_threads, + "dump thread list in perf.data"), + OPT_BOOLEAN('m', "map", &info_map, + "map of lock instances (name:address table)"), + OPT_END() +}; + static const char * const lock_usage[] = { "perf lock [<options>] {record|trace|report}", NULL @@ -744,14 +922,13 @@ static const char * const lock_usage[] = { static const struct option lock_options[] = { OPT_STRING('i', "input", &input_name, "file", "input file name"), - OPT_BOOLEAN('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), OPT_END() }; static const char *record_args[] = { "record", - "-a", "-R", "-f", "-m", "1024", @@ -808,12 +985,18 @@ int cmd_lock(int argc, const char **argv, const char *prefix __used) } else if (!strcmp(argv[0], "trace")) { /* Aliased to 'perf trace' */ return cmd_trace(argc, argv, prefix); - } else if (!strcmp(argv[0], "map")) { + } else if (!strcmp(argv[0], "info")) { + if (argc) { + argc = parse_options(argc, argv, + info_options, info_usage, 0); + if (argc) + usage_with_options(info_usage, info_options); + } /* recycling report_lock_ops */ trace_handler = &report_lock_ops; setup_pager(); 
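		/*
		 * Note on the flow here: eops above now sets .ordered_samples = true,
		 * so the session layer hands samples to process_sample_event()
		 * already sorted by timestamp; the raw_event_queue /
		 * flush_raw_event_queue() machinery deleted above is what that
		 * replaces.
		 */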
read_events(); - dump_map(); + dump_info(); } else { usage_with_options(lock_usage, lock_options); } diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c index 152d6c9b1fa4..61c6d70732c9 100644 --- a/tools/perf/builtin-probe.c +++ b/tools/perf/builtin-probe.c @@ -36,13 +36,10 @@ #include "builtin.h" #include "util/util.h" #include "util/strlist.h" -#include "util/event.h" +#include "util/symbol.h" #include "util/debug.h" #include "util/debugfs.h" -#include "util/symbol.h" -#include "util/thread.h" #include "util/parse-options.h" -#include "util/parse-events.h" /* For debugfs_path */ #include "util/probe-finder.h" #include "util/probe-event.h" @@ -50,103 +47,84 @@ /* Session management structure */ static struct { - bool need_dwarf; bool list_events; bool force_add; bool show_lines; - int nr_probe; - struct probe_point probes[MAX_PROBES]; + int nevents; + struct perf_probe_event events[MAX_PROBES]; struct strlist *dellist; - struct map_groups kmap_groups; - struct map *kmaps[MAP__NR_TYPES]; struct line_range line_range; -} session; + int max_probe_points; +} params; /* Parse an event definition. Note that any error must die. */ -static void parse_probe_event(const char *str) +static int parse_probe_event(const char *str) { - struct probe_point *pp = &session.probes[session.nr_probe]; + struct perf_probe_event *pev = ¶ms.events[params.nevents]; + int ret; - pr_debug("probe-definition(%d): %s\n", session.nr_probe, str); - if (++session.nr_probe == MAX_PROBES) + pr_debug("probe-definition(%d): %s\n", params.nevents, str); + if (++params.nevents == MAX_PROBES) die("Too many probes (> %d) are specified.", MAX_PROBES); - /* Parse perf-probe event into probe_point */ - parse_perf_probe_event(str, pp, &session.need_dwarf); + /* Parse a perf-probe command into event */ + ret = parse_perf_probe_command(str, pev); + pr_debug("%d arguments\n", pev->nargs); - pr_debug("%d arguments\n", pp->nr_args); + return ret; } -static void parse_probe_event_argv(int argc, const char **argv) +static int parse_probe_event_argv(int argc, const char **argv) { - int i, len; + int i, len, ret; char *buf; /* Bind up rest arguments */ len = 0; for (i = 0; i < argc; i++) len += strlen(argv[i]) + 1; - buf = zalloc(len + 1); - if (!buf) - die("Failed to allocate memory for binding arguments."); + buf = xzalloc(len + 1); len = 0; for (i = 0; i < argc; i++) len += sprintf(&buf[len], "%s ", argv[i]); - parse_probe_event(buf); + ret = parse_probe_event(buf); free(buf); + return ret; } static int opt_add_probe_event(const struct option *opt __used, const char *str, int unset __used) { if (str) - parse_probe_event(str); - return 0; + return parse_probe_event(str); + else + return 0; } static int opt_del_probe_event(const struct option *opt __used, const char *str, int unset __used) { if (str) { - if (!session.dellist) - session.dellist = strlist__new(true, NULL); - strlist__add(session.dellist, str); + if (!params.dellist) + params.dellist = strlist__new(true, NULL); + strlist__add(params.dellist, str); } return 0; } -/* Currently just checking function name from symbol map */ -static void evaluate_probe_point(struct probe_point *pp) -{ - struct symbol *sym; - sym = map__find_symbol_by_name(session.kmaps[MAP__FUNCTION], - pp->function, NULL); - if (!sym) - die("Kernel symbol \'%s\' not found - probe not added.", - pp->function); -} - -#ifndef NO_DWARF_SUPPORT -static int open_vmlinux(void) -{ - if (map__load(session.kmaps[MAP__FUNCTION], NULL) < 0) { - pr_debug("Failed to load kernel map.\n"); - return -EINVAL; - } 
- pr_debug("Try to open %s\n", - session.kmaps[MAP__FUNCTION]->dso->long_name); - return open(session.kmaps[MAP__FUNCTION]->dso->long_name, O_RDONLY); -} - +#ifdef DWARF_SUPPORT static int opt_show_lines(const struct option *opt __used, const char *str, int unset __used) { + int ret = 0; + if (str) - parse_line_range_desc(str, &session.line_range); - INIT_LIST_HEAD(&session.line_range.line_list); - session.show_lines = true; - return 0; + ret = parse_line_range_desc(str, ¶ms.line_range); + INIT_LIST_HEAD(¶ms.line_range.line_list); + params.show_lines = true; + + return ret; } #endif @@ -155,29 +133,25 @@ static const char * const probe_usage[] = { "perf probe [<options>] --add 'PROBEDEF' [--add 'PROBEDEF' ...]", "perf probe [<options>] --del '[GROUP:]EVENT' ...", "perf probe --list", -#ifndef NO_DWARF_SUPPORT +#ifdef DWARF_SUPPORT "perf probe --line 'LINEDESC'", #endif NULL }; static const struct option options[] = { - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show parsed arguments, etc)"), -#ifndef NO_DWARF_SUPPORT - OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name, - "file", "vmlinux pathname"), -#endif - OPT_BOOLEAN('l', "list", &session.list_events, + OPT_BOOLEAN('l', "list", ¶ms.list_events, "list up current probe events"), OPT_CALLBACK('d', "del", NULL, "[GROUP:]EVENT", "delete a probe event.", opt_del_probe_event), OPT_CALLBACK('a', "add", NULL, -#ifdef NO_DWARF_SUPPORT - "[EVENT=]FUNC[+OFF|%return] [ARG ...]", -#else +#ifdef DWARF_SUPPORT "[EVENT=]FUNC[@SRC][+OFF|%return|:RL|;PT]|SRC:AL|SRC;PT" - " [ARG ...]", + " [[NAME=]ARG ...]", +#else + "[EVENT=]FUNC[+OFF|%return] [[NAME=]ARG ...]", #endif "probe point definition, where\n" "\t\tGROUP:\tGroup name (optional)\n" @@ -185,51 +159,35 @@ static const struct option options[] = { "\t\tFUNC:\tFunction name\n" "\t\tOFF:\tOffset from function entry (in byte)\n" "\t\t%return:\tPut the probe at function return\n" -#ifdef NO_DWARF_SUPPORT - "\t\tARG:\tProbe argument (only \n" -#else +#ifdef DWARF_SUPPORT "\t\tSRC:\tSource code path\n" "\t\tRL:\tRelative line number from function entry.\n" "\t\tAL:\tAbsolute line number in file.\n" "\t\tPT:\tLazy expression of line code.\n" "\t\tARG:\tProbe argument (local variable name or\n" -#endif "\t\t\tkprobe-tracer argument format.)\n", +#else + "\t\tARG:\tProbe argument (kprobe-tracer argument format.)\n", +#endif opt_add_probe_event), - OPT_BOOLEAN('f', "force", &session.force_add, "forcibly add events" + OPT_BOOLEAN('f', "force", ¶ms.force_add, "forcibly add events" " with existing name"), -#ifndef NO_DWARF_SUPPORT +#ifdef DWARF_SUPPORT OPT_CALLBACK('L', "line", NULL, - "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]", + "FUNC[:RLN[+NUM|-RLN2]]|SRC:ALN[+NUM|-ALN2]", "Show source code lines.", opt_show_lines), + OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name, + "file", "vmlinux pathname"), #endif + OPT__DRY_RUN(&probe_event_dry_run), + OPT_INTEGER('\0', "max-probes", ¶ms.max_probe_points, + "Set how many probe points can be found for a probe."), OPT_END() }; -/* Initialize symbol maps for vmlinux */ -static void init_vmlinux(void) -{ - symbol_conf.sort_by_name = true; - if (symbol_conf.vmlinux_name == NULL) - symbol_conf.try_vmlinux_path = true; - else - pr_debug("Use vmlinux: %s\n", symbol_conf.vmlinux_name); - if (symbol__init() < 0) - die("Failed to init symbol map."); - - map_groups__init(&session.kmap_groups); - if (map_groups__create_kernel_maps(&session.kmap_groups, - session.kmaps) < 0) - die("Failed to create kernel maps."); -} - int 
cmd_probe(int argc, const char **argv, const char *prefix __used) { - int i, ret; -#ifndef NO_DWARF_SUPPORT - int fd; -#endif - struct probe_point *pp; + int ret; argc = parse_options(argc, argv, options, probe_usage, PARSE_OPT_STOP_AT_NON_OPTION); @@ -238,123 +196,69 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used) pr_warning(" Error: '-' is not supported.\n"); usage_with_options(probe_usage, options); } - parse_probe_event_argv(argc, argv); + ret = parse_probe_event_argv(argc, argv); + if (ret < 0) { + pr_err(" Error: Parse Error. (%d)\n", ret); + return ret; + } } - if ((!session.nr_probe && !session.dellist && !session.list_events && - !session.show_lines)) - usage_with_options(probe_usage, options); + if (params.max_probe_points == 0) + params.max_probe_points = MAX_PROBES; - if (debugfs_valid_mountpoint(debugfs_path) < 0) - die("Failed to find debugfs path."); + if ((!params.nevents && !params.dellist && !params.list_events && + !params.show_lines)) + usage_with_options(probe_usage, options); - if (session.list_events) { - if (session.nr_probe != 0 || session.dellist) { - pr_warning(" Error: Don't use --list with" - " --add/--del.\n"); + if (params.list_events) { + if (params.nevents != 0 || params.dellist) { + pr_err(" Error: Don't use --list with --add/--del.\n"); usage_with_options(probe_usage, options); } - if (session.show_lines) { - pr_warning(" Error: Don't use --list with --line.\n"); + if (params.show_lines) { + pr_err(" Error: Don't use --list with --line.\n"); usage_with_options(probe_usage, options); } - show_perf_probe_events(); - return 0; + ret = show_perf_probe_events(); + if (ret < 0) + pr_err(" Error: Failed to show event list. (%d)\n", + ret); + return ret; } -#ifndef NO_DWARF_SUPPORT - if (session.show_lines) { - if (session.nr_probe != 0 || session.dellist) { +#ifdef DWARF_SUPPORT + if (params.show_lines) { + if (params.nevents != 0 || params.dellist) { pr_warning(" Error: Don't use --line with" " --add/--del.\n"); usage_with_options(probe_usage, options); } - init_vmlinux(); - fd = open_vmlinux(); - if (fd < 0) - die("Could not open debuginfo file."); - ret = find_line_range(fd, &session.line_range); - if (ret <= 0) - die("Source line is not found.\n"); - close(fd); - show_line_range(&session.line_range); - return 0; - } -#endif - if (session.dellist) { - del_trace_kprobe_events(session.dellist); - strlist__delete(session.dellist); - if (session.nr_probe == 0) - return 0; + ret = show_line_range(¶ms.line_range); + if (ret < 0) + pr_err(" Error: Failed to show lines. (%d)\n", ret); + return ret; } +#endif - /* Add probes */ - init_vmlinux(); - - if (session.need_dwarf) -#ifdef NO_DWARF_SUPPORT - die("Debuginfo-analysis is not supported"); -#else /* !NO_DWARF_SUPPORT */ - pr_debug("Some probes require debuginfo.\n"); - - fd = open_vmlinux(); - if (fd < 0) { - if (session.need_dwarf) - die("Could not open debuginfo file."); - - pr_debug("Could not open vmlinux/module file." - " Try to use symbols.\n"); - goto end_dwarf; - } - - /* Searching probe points */ - for (i = 0; i < session.nr_probe; i++) { - pp = &session.probes[i]; - if (pp->found) - continue; - - lseek(fd, SEEK_SET, 0); - ret = find_probe_point(fd, pp); - if (ret > 0) - continue; - if (ret == 0) { /* No error but failed to find probe point. */ - synthesize_perf_probe_point(pp); - die("Probe point '%s' not found. 
- probe not added.", - pp->probes[0]); - } - /* Error path */ - if (session.need_dwarf) { - if (ret == -ENOENT) - pr_warning("No dwarf info found in the vmlinux - please rebuild with CONFIG_DEBUG_INFO=y.\n"); - die("Could not analyze debuginfo."); + if (params.dellist) { + ret = del_perf_probe_events(params.dellist); + strlist__delete(params.dellist); + if (ret < 0) { + pr_err(" Error: Failed to delete events. (%d)\n", ret); + return ret; } - pr_debug("An error occurred in debuginfo analysis." - " Try to use symbols.\n"); - break; } - close(fd); - -end_dwarf: -#endif /* !NO_DWARF_SUPPORT */ - /* Synthesize probes without dwarf */ - for (i = 0; i < session.nr_probe; i++) { - pp = &session.probes[i]; - if (pp->found) /* This probe is already found. */ - continue; - - evaluate_probe_point(pp); - ret = synthesize_trace_kprobe_event(pp); - if (ret == -E2BIG) - die("probe point definition becomes too long."); - else if (ret < 0) - die("Failed to synthesize a probe point."); + if (params.nevents) { + ret = add_perf_probe_events(params.events, params.nevents, + params.force_add, + params.max_probe_points); + if (ret < 0) { + pr_err(" Error: Failed to add events. (%d)\n", ret); + return ret; + } } - - /* Settng up probe points */ - add_trace_kprobe_events(session.probes, session.nr_probe, - session.force_add); return 0; } diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index 3b8b6387c47c..cb46c7d0ea99 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -15,7 +15,6 @@ #include "util/util.h" #include "util/parse-options.h" #include "util/parse-events.h" -#include "util/string.h" #include "util/header.h" #include "util/event.h" @@ -27,31 +26,41 @@ #include <unistd.h> #include <sched.h> -static int fd[MAX_NR_CPUS][MAX_COUNTERS]; +enum write_mode_t { + WRITE_FORCE, + WRITE_APPEND +}; + +static int *fd[MAX_NR_CPUS][MAX_COUNTERS]; -static long default_interval = 0; +static u64 user_interval = ULLONG_MAX; +static u64 default_interval = 0; static int nr_cpus = 0; static unsigned int page_size; static unsigned int mmap_pages = 128; +static unsigned int user_freq = UINT_MAX; static int freq = 1000; static int output; +static int pipe_output = 0; static const char *output_name = "perf.data"; static int group = 0; -static unsigned int realtime_prio = 0; -static int raw_samples = 0; -static int system_wide = 0; +static int realtime_prio = 0; +static bool raw_samples = false; +static bool system_wide = false; static int profile_cpu = -1; static pid_t target_pid = -1; +static pid_t target_tid = -1; +static pid_t *all_tids = NULL; +static int thread_num = 0; static pid_t child_pid = -1; -static int inherit = 1; -static int force = 0; -static int append_file = 0; -static int call_graph = 0; -static int inherit_stat = 0; -static int no_samples = 0; -static int sample_address = 0; -static int multiplex = 0; +static bool no_inherit = false; +static enum write_mode_t write_mode = WRITE_FORCE; +static bool call_graph = false; +static bool inherit_stat = false; +static bool no_samples = false; +static bool sample_address = false; +static bool multiplex = false; static int multiplex_fd = -1; static long samples = 0; @@ -60,7 +69,7 @@ static struct timeval this_read; static u64 bytes_written = 0; -static struct pollfd event_array[MAX_NR_CPUS * MAX_COUNTERS]; +static struct pollfd *event_array; static int nr_poll = 0; static int nr_cpu = 0; @@ -77,7 +86,7 @@ struct mmap_data { unsigned int prev; }; -static struct mmap_data mmap_array[MAX_NR_CPUS][MAX_COUNTERS]; +static struct 
mmap_data *mmap_array[MAX_NR_CPUS][MAX_COUNTERS]; static unsigned long mmap_read_head(struct mmap_data *md) { @@ -101,6 +110,11 @@ static void mmap_write_tail(struct mmap_data *md, unsigned long tail) pc->data_tail = tail; } +static void advance_output(size_t size) +{ + bytes_written += size; +} + static void write_output(void *buf, size_t size) { while (size) { @@ -225,12 +239,13 @@ static struct perf_header_attr *get_header_attr(struct perf_event_attr *a, int n return h_attr; } -static void create_counter(int counter, int cpu, pid_t pid) +static void create_counter(int counter, int cpu) { char *filter = filters[counter]; struct perf_event_attr *attr = attrs + counter; struct perf_header_attr *h_attr; int track = !counter; /* only the first counter needs these */ + int thread_index; int ret; struct { u64 count; @@ -248,10 +263,19 @@ static void create_counter(int counter, int cpu, pid_t pid) if (nr_counters > 1) attr->sample_type |= PERF_SAMPLE_ID; - if (freq) { - attr->sample_type |= PERF_SAMPLE_PERIOD; - attr->freq = 1; - attr->sample_freq = freq; + /* + * We default some events to a 1 default interval. But keep + * it a weak assumption overridable by the user. + */ + if (!attr->sample_period || (user_freq != UINT_MAX && + user_interval != ULLONG_MAX)) { + if (freq) { + attr->sample_type |= PERF_SAMPLE_PERIOD; + attr->freq = 1; + attr->sample_freq = freq; + } else { + attr->sample_period = default_interval; + } } if (no_samples) @@ -274,119 +298,130 @@ static void create_counter(int counter, int cpu, pid_t pid) attr->mmap = track; attr->comm = track; - attr->inherit = inherit; - attr->disabled = 1; + attr->inherit = !no_inherit; + if (target_pid == -1 && target_tid == -1 && !system_wide) { + attr->disabled = 1; + attr->enable_on_exec = 1; + } + for (thread_index = 0; thread_index < thread_num; thread_index++) { try_again: - fd[nr_cpu][counter] = sys_perf_event_open(attr, pid, cpu, group_fd, 0); - - if (fd[nr_cpu][counter] < 0) { - int err = errno; - - if (err == EPERM || err == EACCES) - die("Permission error - are you root?\n"); - else if (err == ENODEV && profile_cpu != -1) - die("No such device - did you specify an out-of-range profile CPU?\n"); + fd[nr_cpu][counter][thread_index] = sys_perf_event_open(attr, + all_tids[thread_index], cpu, group_fd, 0); + + if (fd[nr_cpu][counter][thread_index] < 0) { + int err = errno; + + if (err == EPERM || err == EACCES) + die("Permission error - are you root?\n" + "\t Consider tweaking" + " /proc/sys/kernel/perf_event_paranoid.\n"); + else if (err == ENODEV && profile_cpu != -1) { + die("No such device - did you specify" + " an out-of-range profile CPU?\n"); + } - /* - * If it's cycles then fall back to hrtimer - * based cpu-clock-tick sw counter, which - * is always available even if no PMU support: - */ - if (attr->type == PERF_TYPE_HARDWARE - && attr->config == PERF_COUNT_HW_CPU_CYCLES) { - - if (verbose) - warning(" ... trying to fall back to cpu-clock-ticks\n"); - attr->type = PERF_TYPE_SOFTWARE; - attr->config = PERF_COUNT_SW_CPU_CLOCK; - goto try_again; - } - printf("\n"); - error("perfcounter syscall returned with %d (%s)\n", - fd[nr_cpu][counter], strerror(err)); + /* + * If it's cycles then fall back to hrtimer + * based cpu-clock-tick sw counter, which + * is always available even if no PMU support: + */ + if (attr->type == PERF_TYPE_HARDWARE + && attr->config == PERF_COUNT_HW_CPU_CYCLES) { + + if (verbose) + warning(" ... 
trying to fall back to cpu-clock-ticks\n"); + attr->type = PERF_TYPE_SOFTWARE; + attr->config = PERF_COUNT_SW_CPU_CLOCK; + goto try_again; + } + printf("\n"); + error("perfcounter syscall returned with %d (%s)\n", + fd[nr_cpu][counter][thread_index], strerror(err)); #if defined(__i386__) || defined(__x86_64__) - if (attr->type == PERF_TYPE_HARDWARE && err == EOPNOTSUPP) - die("No hardware sampling interrupt available. No APIC? If so then you can boot the kernel with the \"lapic\" boot parameter to force-enable it.\n"); + if (attr->type == PERF_TYPE_HARDWARE && err == EOPNOTSUPP) + die("No hardware sampling interrupt available." + " No APIC? If so then you can boot the kernel" + " with the \"lapic\" boot parameter to" + " force-enable it.\n"); #endif - die("No CONFIG_PERF_EVENTS=y kernel support configured?\n"); - exit(-1); - } + die("No CONFIG_PERF_EVENTS=y kernel support configured?\n"); + exit(-1); + } - h_attr = get_header_attr(attr, counter); - if (h_attr == NULL) - die("nomem\n"); + h_attr = get_header_attr(attr, counter); + if (h_attr == NULL) + die("nomem\n"); - if (!file_new) { - if (memcmp(&h_attr->attr, attr, sizeof(*attr))) { - fprintf(stderr, "incompatible append\n"); - exit(-1); + if (!file_new) { + if (memcmp(&h_attr->attr, attr, sizeof(*attr))) { + fprintf(stderr, "incompatible append\n"); + exit(-1); + } } - } - if (read(fd[nr_cpu][counter], &read_data, sizeof(read_data)) == -1) { - perror("Unable to read perf file descriptor\n"); - exit(-1); - } + if (read(fd[nr_cpu][counter][thread_index], &read_data, sizeof(read_data)) == -1) { + perror("Unable to read perf file descriptor\n"); + exit(-1); + } - if (perf_header_attr__add_id(h_attr, read_data.id) < 0) { - pr_warning("Not enough memory to add id\n"); - exit(-1); - } + if (perf_header_attr__add_id(h_attr, read_data.id) < 0) { + pr_warning("Not enough memory to add id\n"); + exit(-1); + } - assert(fd[nr_cpu][counter] >= 0); - fcntl(fd[nr_cpu][counter], F_SETFL, O_NONBLOCK); + assert(fd[nr_cpu][counter][thread_index] >= 0); + fcntl(fd[nr_cpu][counter][thread_index], F_SETFL, O_NONBLOCK); - /* - * First counter acts as the group leader: - */ - if (group && group_fd == -1) - group_fd = fd[nr_cpu][counter]; - if (multiplex && multiplex_fd == -1) - multiplex_fd = fd[nr_cpu][counter]; + /* + * First counter acts as the group leader: + */ + if (group && group_fd == -1) + group_fd = fd[nr_cpu][counter][thread_index]; + if (multiplex && multiplex_fd == -1) + multiplex_fd = fd[nr_cpu][counter][thread_index]; - if (multiplex && fd[nr_cpu][counter] != multiplex_fd) { + if (multiplex && fd[nr_cpu][counter][thread_index] != multiplex_fd) { - ret = ioctl(fd[nr_cpu][counter], PERF_EVENT_IOC_SET_OUTPUT, multiplex_fd); - assert(ret != -1); - } else { - event_array[nr_poll].fd = fd[nr_cpu][counter]; - event_array[nr_poll].events = POLLIN; - nr_poll++; - - mmap_array[nr_cpu][counter].counter = counter; - mmap_array[nr_cpu][counter].prev = 0; - mmap_array[nr_cpu][counter].mask = mmap_pages*page_size - 1; - mmap_array[nr_cpu][counter].base = mmap(NULL, (mmap_pages+1)*page_size, - PROT_READ|PROT_WRITE, MAP_SHARED, fd[nr_cpu][counter], 0); - if (mmap_array[nr_cpu][counter].base == MAP_FAILED) { - error("failed to mmap with %d (%s)\n", errno, strerror(errno)); - exit(-1); + ret = ioctl(fd[nr_cpu][counter][thread_index], PERF_EVENT_IOC_SET_OUTPUT, multiplex_fd); + assert(ret != -1); + } else { + event_array[nr_poll].fd = fd[nr_cpu][counter][thread_index]; + event_array[nr_poll].events = POLLIN; + nr_poll++; + + 
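/*
+				 * Per-thread ring buffer setup: with the new [cpu][counter][thread]
+				 * layout each fd gets its own mmap area of (mmap_pages + 1) pages,
+				 * one control page read by mmap_read_head()/mmap_write_tail() plus
+				 * mmap_pages of data covered by .mask.
+				 */
+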
mmap_array[nr_cpu][counter][thread_index].counter = counter; + mmap_array[nr_cpu][counter][thread_index].prev = 0; + mmap_array[nr_cpu][counter][thread_index].mask = mmap_pages*page_size - 1; + mmap_array[nr_cpu][counter][thread_index].base = mmap(NULL, (mmap_pages+1)*page_size, + PROT_READ|PROT_WRITE, MAP_SHARED, fd[nr_cpu][counter][thread_index], 0); + if (mmap_array[nr_cpu][counter][thread_index].base == MAP_FAILED) { + error("failed to mmap with %d (%s)\n", errno, strerror(errno)); + exit(-1); + } } - } - if (filter != NULL) { - ret = ioctl(fd[nr_cpu][counter], - PERF_EVENT_IOC_SET_FILTER, filter); - if (ret) { - error("failed to set filter with %d (%s)\n", errno, - strerror(errno)); - exit(-1); + if (filter != NULL) { + ret = ioctl(fd[nr_cpu][counter][thread_index], + PERF_EVENT_IOC_SET_FILTER, filter); + if (ret) { + error("failed to set filter with %d (%s)\n", errno, + strerror(errno)); + exit(-1); + } } } - - ioctl(fd[nr_cpu][counter], PERF_EVENT_IOC_ENABLE); } -static void open_counters(int cpu, pid_t pid) +static void open_counters(int cpu) { int counter; group_fd = -1; for (counter = 0; counter < nr_counters; counter++) - create_counter(counter, cpu, pid); + create_counter(counter, cpu); nr_cpu++; } @@ -406,10 +441,80 @@ static int process_buildids(void) static void atexit_header(void) { - session->header.data_size += bytes_written; + if (!pipe_output) { + session->header.data_size += bytes_written; + + process_buildids(); + perf_header__write(&session->header, output, true); + } +} + +static void event__synthesize_guest_os(struct machine *machine, void *data) +{ + int err; + char *guest_kallsyms; + char path[PATH_MAX]; + struct perf_session *psession = data; + + if (machine__is_host(machine)) + return; + + /* + *As for guest kernel when processing subcommand record&report, + *we arrange module mmap prior to guest kernel mmap and trigger + *a preload dso because default guest module symbols are loaded + *from guest kallsyms instead of /lib/modules/XXX/XXX. This + *method is used to avoid symbol missing when the first addr is + *in module instead of in guest kernel. + */ + err = event__synthesize_modules(process_synthesized_event, + psession, machine); + if (err < 0) + pr_err("Couldn't record guest kernel [%d]'s reference" + " relocation symbol.\n", machine->pid); + + if (machine__is_default_guest(machine)) + guest_kallsyms = (char *) symbol_conf.default_guest_kallsyms; + else { + sprintf(path, "%s/proc/kallsyms", machine->root_dir); + guest_kallsyms = path; + } + + /* + * We use _stext for guest kernel because guest kernel's /proc/kallsyms + * have no _text sometimes. 
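+	 * The calls below therefore try the "_text" reference symbol first and
+	 * fall back to "_stext" when that synthesis fails, mirroring what is
+	 * done for the host kernel in __cmd_record().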
+ */ + err = event__synthesize_kernel_mmap(process_synthesized_event, + psession, machine, "_text"); + if (err < 0) + err = event__synthesize_kernel_mmap(process_synthesized_event, + psession, machine, "_stext"); + if (err < 0) + pr_err("Couldn't record guest kernel [%d]'s reference" + " relocation symbol.\n", machine->pid); +} + +static struct perf_event_header finished_round_event = { + .size = sizeof(struct perf_event_header), + .type = PERF_RECORD_FINISHED_ROUND, +}; + +static void mmap_read_all(void) +{ + int i, counter, thread; - process_buildids(); - perf_header__write(&session->header, output, true); + for (i = 0; i < nr_cpu; i++) { + for (counter = 0; counter < nr_counters; counter++) { + for (thread = 0; thread < thread_num; thread++) { + if (mmap_array[i][counter][thread].base) + mmap_read(&mmap_array[i][counter][thread]); + } + + } + } + + if (perf_header__has_feat(&session->header, HEADER_TRACE_INFO)) + write_output(&finished_round_event, sizeof(finished_round_event)); } static int __cmd_record(int argc, const char **argv) @@ -421,8 +526,9 @@ static int __cmd_record(int argc, const char **argv) int err; unsigned long waking = 0; int child_ready_pipe[2], go_pipe[2]; - const bool forks = target_pid == -1 && argc > 0; + const bool forks = argc > 0; char buf; + struct machine *machine; page_size = sysconf(_SC_PAGE_SIZE); @@ -435,70 +541,63 @@ static int __cmd_record(int argc, const char **argv) exit(-1); } - if (!stat(output_name, &st) && st.st_size) { - if (!force) { - if (!append_file) { - pr_err("Error, output file %s exists, use -A " - "to append or -f to overwrite.\n", - output_name); - exit(-1); - } - } else { + if (!strcmp(output_name, "-")) + pipe_output = 1; + else if (!stat(output_name, &st) && st.st_size) { + if (write_mode == WRITE_FORCE) { char oldname[PATH_MAX]; snprintf(oldname, sizeof(oldname), "%s.old", output_name); unlink(oldname); rename(output_name, oldname); } - } else { - append_file = 0; + } else if (write_mode == WRITE_APPEND) { + write_mode = WRITE_FORCE; } flags = O_CREAT|O_RDWR; - if (append_file) + if (write_mode == WRITE_APPEND) file_new = 0; else flags |= O_TRUNC; - output = open(output_name, flags, S_IRUSR|S_IWUSR); + if (pipe_output) + output = STDOUT_FILENO; + else + output = open(output_name, flags, S_IRUSR | S_IWUSR); if (output < 0) { perror("failed to create output file"); exit(-1); } - session = perf_session__new(output_name, O_WRONLY, force); + session = perf_session__new(output_name, O_WRONLY, + write_mode == WRITE_FORCE, false); if (session == NULL) { pr_err("Not enough memory for reading perf file header\n"); return -1; } if (!file_new) { - err = perf_header__read(&session->header, output); + err = perf_header__read(session, output); if (err < 0) return err; } - if (raw_samples) { + if (have_tracepoints(attrs, nr_counters)) perf_header__set_feat(&session->header, HEADER_TRACE_INFO); - } else { - for (i = 0; i < nr_counters; i++) { - if (attrs[i].sample_type & PERF_SAMPLE_RAW) { - perf_header__set_feat(&session->header, HEADER_TRACE_INFO); - break; - } - } - } atexit(atexit_header); if (forks) { - pid = fork(); + child_pid = fork(); if (pid < 0) { perror("failed to fork"); exit(-1); } - if (!pid) { + if (!child_pid) { + if (pipe_output) + dup2(2, 1); close(child_ready_pipe[0]); close(go_pipe[1]); fcntl(go_pipe[0], F_SETFD, FD_CLOEXEC); @@ -527,10 +626,8 @@ static int __cmd_record(int argc, const char **argv) exit(-1); } - child_pid = pid; - - if (!system_wide) - target_pid = pid; + if (!system_wide && target_tid == -1 && target_pid == -1) 
+ all_tids[0] = child_pid; close(child_ready_pipe[1]); close(go_pipe[0]); @@ -544,16 +641,19 @@ static int __cmd_record(int argc, const char **argv) close(child_ready_pipe[0]); } - - if ((!system_wide && !inherit) || profile_cpu != -1) { - open_counters(profile_cpu, target_pid); + if ((!system_wide && no_inherit) || profile_cpu != -1) { + open_counters(profile_cpu); } else { nr_cpus = read_cpu_map(); for (i = 0; i < nr_cpus; i++) - open_counters(cpumap[i], target_pid); + open_counters(cpumap[i]); } - if (file_new) { + if (pipe_output) { + err = perf_header__write_pipe(output); + if (err < 0) + return err; + } else if (file_new) { err = perf_header__write(&session->header, output, false); if (err < 0) return err; @@ -561,21 +661,70 @@ static int __cmd_record(int argc, const char **argv) post_processing_offset = lseek(output, 0, SEEK_CUR); + if (pipe_output) { + err = event__synthesize_attrs(&session->header, + process_synthesized_event, + session); + if (err < 0) { + pr_err("Couldn't synthesize attrs.\n"); + return err; + } + + err = event__synthesize_event_types(process_synthesized_event, + session); + if (err < 0) { + pr_err("Couldn't synthesize event_types.\n"); + return err; + } + + if (have_tracepoints(attrs, nr_counters)) { + /* + * FIXME err <= 0 here actually means that + * there were no tracepoints so its not really + * an error, just that we don't need to + * synthesize anything. We really have to + * return this more properly and also + * propagate errors that now are calling die() + */ + err = event__synthesize_tracing_data(output, attrs, + nr_counters, + process_synthesized_event, + session); + if (err <= 0) { + pr_err("Couldn't record tracing data.\n"); + return err; + } + advance_output(err); + } + } + + machine = perf_session__find_host_machine(session); + if (!machine) { + pr_err("Couldn't find native kernel information.\n"); + return -1; + } + err = event__synthesize_kernel_mmap(process_synthesized_event, - session, "_text"); + session, machine, "_text"); + if (err < 0) + err = event__synthesize_kernel_mmap(process_synthesized_event, + session, machine, "_stext"); if (err < 0) { pr_err("Couldn't record kernel reference relocation symbol.\n"); return err; } - err = event__synthesize_modules(process_synthesized_event, session); + err = event__synthesize_modules(process_synthesized_event, + session, machine); if (err < 0) { pr_err("Couldn't record kernel reference relocation symbol.\n"); return err; } + if (perf_guest) + perf_session__process_machines(session, event__synthesize_guest_os); if (!system_wide && profile_cpu == -1) - event__synthesize_thread(target_pid, process_synthesized_event, + event__synthesize_thread(target_tid, process_synthesized_event, session); else event__synthesize_threads(process_synthesized_event, session); @@ -598,13 +747,9 @@ static int __cmd_record(int argc, const char **argv) for (;;) { int hits = samples; + int thread; - for (i = 0; i < nr_cpu; i++) { - for (counter = 0; counter < nr_counters; counter++) { - if (mmap_array[i][counter].base) - mmap_read(&mmap_array[i][counter]); - } - } + mmap_read_all(); if (hits == samples) { if (done) @@ -615,8 +760,15 @@ static int __cmd_record(int argc, const char **argv) if (done) { for (i = 0; i < nr_cpu; i++) { - for (counter = 0; counter < nr_counters; counter++) - ioctl(fd[i][counter], PERF_EVENT_IOC_DISABLE); + for (counter = 0; + counter < nr_counters; + counter++) { + for (thread = 0; + thread < thread_num; + thread++) + ioctl(fd[i][counter][thread], + PERF_EVENT_IOC_DISABLE); + } } } } @@ -641,6 
+793,8 @@ static const char * const record_usage[] = { NULL }; +static bool force, append_file; + static const struct option options[] = { OPT_CALLBACK('e', "event", NULL, "event", "event selector. use 'perf list' to list available events", @@ -648,7 +802,9 @@ static const struct option options[] = { OPT_CALLBACK(0, "filter", NULL, "filter", "event filter", parse_filter), OPT_INTEGER('p', "pid", &target_pid, - "record events on existing pid"), + "record events on existing process id"), + OPT_INTEGER('t', "tid", &target_tid, + "record events on existing thread id"), OPT_INTEGER('r', "realtime", &realtime_prio, "collect data with this RT SCHED_FIFO priority"), OPT_BOOLEAN('R', "raw-samples", &raw_samples, @@ -660,20 +816,17 @@ static const struct option options[] = { OPT_INTEGER('C', "profile_cpu", &profile_cpu, "CPU to profile on"), OPT_BOOLEAN('f', "force", &force, - "overwrite existing data file"), - OPT_LONG('c', "count", &default_interval, - "event period to sample"), + "overwrite existing data file (deprecated)"), + OPT_U64('c', "count", &user_interval, "event period to sample"), OPT_STRING('o', "output", &output_name, "file", "output file name"), - OPT_BOOLEAN('i', "inherit", &inherit, - "child tasks inherit counters"), - OPT_INTEGER('F', "freq", &freq, - "profile at this frequency"), - OPT_INTEGER('m', "mmap-pages", &mmap_pages, - "number of mmap data pages"), + OPT_BOOLEAN('i', "no-inherit", &no_inherit, + "child tasks do not inherit counters"), + OPT_UINTEGER('F', "freq", &user_freq, "profile at this frequency"), + OPT_UINTEGER('m', "mmap-pages", &mmap_pages, "number of mmap data pages"), OPT_BOOLEAN('g', "call-graph", &call_graph, "do call-graph (stack chain/backtrace) recording"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show counter open errors, etc)"), OPT_BOOLEAN('s', "stat", &inherit_stat, "per thread counts"), @@ -688,13 +841,24 @@ static const struct option options[] = { int cmd_record(int argc, const char **argv, const char *prefix __used) { - int counter; + int i,j; argc = parse_options(argc, argv, options, record_usage, PARSE_OPT_STOP_AT_NON_OPTION); - if (!argc && target_pid == -1 && !system_wide && profile_cpu == -1) + if (!argc && target_pid == -1 && target_tid == -1 && + !system_wide && profile_cpu == -1) usage_with_options(record_usage, options); + if (force && append_file) { + fprintf(stderr, "Can't overwrite and append at the same time." 
+ " You need to choose between -f and -A"); + usage_with_options(record_usage, options); + } else if (append_file) { + write_mode = WRITE_APPEND; + } else { + write_mode = WRITE_FORCE; + } + symbol__init(); if (!nr_counters) { @@ -703,6 +867,42 @@ int cmd_record(int argc, const char **argv, const char *prefix __used) attrs[0].config = PERF_COUNT_HW_CPU_CYCLES; } + if (target_pid != -1) { + target_tid = target_pid; + thread_num = find_all_tid(target_pid, &all_tids); + if (thread_num <= 0) { + fprintf(stderr, "Can't find all threads of pid %d\n", + target_pid); + usage_with_options(record_usage, options); + } + } else { + all_tids=malloc(sizeof(pid_t)); + if (!all_tids) + return -ENOMEM; + + all_tids[0] = target_tid; + thread_num = 1; + } + + for (i = 0; i < MAX_NR_CPUS; i++) { + for (j = 0; j < MAX_COUNTERS; j++) { + fd[i][j] = malloc(sizeof(int)*thread_num); + mmap_array[i][j] = zalloc( + sizeof(struct mmap_data)*thread_num); + if (!fd[i][j] || !mmap_array[i][j]) + return -ENOMEM; + } + } + event_array = malloc( + sizeof(struct pollfd)*MAX_NR_CPUS*MAX_COUNTERS*thread_num); + if (!event_array) + return -ENOMEM; + + if (user_interval != ULLONG_MAX) + default_interval = user_interval; + if (user_freq != UINT_MAX) + freq = user_freq; + /* * User specified count overrides default frequency. */ @@ -715,12 +915,5 @@ int cmd_record(int argc, const char **argv, const char *prefix __used) exit(EXIT_FAILURE); } - for (counter = 0; counter < nr_counters; counter++) { - if (attrs[counter].sample_period) - continue; - - attrs[counter].sample_period = default_interval; - } - return __cmd_record(argc, argv); } diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c index f815de25d0fc..1d3c1003b43a 100644 --- a/tools/perf/builtin-report.c +++ b/tools/perf/builtin-report.c @@ -14,7 +14,6 @@ #include "util/cache.h" #include <linux/rbtree.h> #include "util/symbol.h" -#include "util/string.h" #include "util/callchain.h" #include "util/strlist.h" #include "util/values.h" @@ -33,28 +32,29 @@ static char const *input_name = "perf.data"; -static int force; +static bool force; static bool hide_unresolved; static bool dont_use_callchains; -static int show_threads; +static bool show_threads; static struct perf_read_values show_threads_values; -static char default_pretty_printing_style[] = "normal"; -static char *pretty_printing_style = default_pretty_printing_style; +static const char default_pretty_printing_style[] = "normal"; +static const char *pretty_printing_style = default_pretty_printing_style; static char callchain_default_opt[] = "fractal,0.5"; -static struct event_stat_id *get_stats(struct perf_session *self, - u64 event_stream, u32 type, u64 config) +static struct hists *perf_session__hists_findnew(struct perf_session *self, + u64 event_stream, u32 type, + u64 config) { - struct rb_node **p = &self->stats_by_id.rb_node; + struct rb_node **p = &self->hists_tree.rb_node; struct rb_node *parent = NULL; - struct event_stat_id *iter, *new; + struct hists *iter, *new; while (*p != NULL) { parent = *p; - iter = rb_entry(parent, struct event_stat_id, rb_node); + iter = rb_entry(parent, struct hists, rb_node); if (iter->config == config) return iter; @@ -65,15 +65,15 @@ static struct event_stat_id *get_stats(struct perf_session *self, p = &(*p)->rb_left; } - new = malloc(sizeof(struct event_stat_id)); + new = malloc(sizeof(struct hists)); if (new == NULL) return NULL; - memset(new, 0, sizeof(struct event_stat_id)); + memset(new, 0, sizeof(struct hists)); new->event_stream = event_stream; new->config 
= config; new->type = type; rb_link_node(&new->rb_node, parent, p); - rb_insert_color(&new->rb_node, &self->stats_by_id); + rb_insert_color(&new->rb_node, &self->hists_tree); return new; } @@ -81,70 +81,71 @@ static int perf_session__add_hist_entry(struct perf_session *self, struct addr_location *al, struct sample_data *data) { - struct symbol **syms = NULL, *parent = NULL; - bool hit; + struct map_symbol *syms = NULL; + struct symbol *parent = NULL; + int err = -ENOMEM; struct hist_entry *he; - struct event_stat_id *stats; + struct hists *hists; struct perf_event_attr *attr; - if ((sort__has_parent || symbol_conf.use_callchain) && data->callchain) + if ((sort__has_parent || symbol_conf.use_callchain) && data->callchain) { syms = perf_session__resolve_callchain(self, al->thread, data->callchain, &parent); + if (syms == NULL) + return -ENOMEM; + } attr = perf_header__find_attr(data->id, &self->header); if (attr) - stats = get_stats(self, data->id, attr->type, attr->config); + hists = perf_session__hists_findnew(self, data->id, attr->type, attr->config); else - stats = get_stats(self, data->id, 0, 0); - if (stats == NULL) - return -ENOMEM; - he = __perf_session__add_hist_entry(&stats->hists, al, parent, - data->period, &hit); + hists = perf_session__hists_findnew(self, data->id, 0, 0); + if (hists == NULL) + goto out_free_syms; + he = __hists__add_entry(hists, al, parent, data->period); if (he == NULL) - return -ENOMEM; - - if (hit) - he->count += data->period; - + goto out_free_syms; + err = 0; if (symbol_conf.use_callchain) { - if (!hit) - callchain_init(&he->callchain); - append_chain(&he->callchain, data->callchain, syms); - free(syms); + err = append_chain(he->callchain, data->callchain, syms); + if (err) + goto out_free_syms; } - - return 0; -} - -static int validate_chain(struct ip_callchain *chain, event_t *event) -{ - unsigned int chain_size; - - chain_size = event->header.size; - chain_size -= (unsigned long)&event->ip.__more_data - (unsigned long)event; - - if (chain->nr*sizeof(u64) > chain_size) - return -1; - - return 0; + /* + * Only in the newt browser we are doing integrated annotation, + * so we don't allocated the extra space needed because the stdio + * code will not use it. + */ + if (use_browser) + err = hist_entry__inc_addr_samples(he, al->addr); +out_free_syms: + free(syms); + return err; } static int add_event_total(struct perf_session *session, struct sample_data *data, struct perf_event_attr *attr) { - struct event_stat_id *stats; + struct hists *hists; if (attr) - stats = get_stats(session, data->id, attr->type, attr->config); + hists = perf_session__hists_findnew(session, data->id, + attr->type, attr->config); else - stats = get_stats(session, data->id, 0, 0); + hists = perf_session__hists_findnew(session, data->id, 0, 0); - if (!stats) + if (!hists) return -ENOMEM; - stats->stats.total += data->period; - session->events_stats.total += data->period; + hists->stats.total_period += data->period; + /* + * FIXME: add_event_total should be moved from here to + * perf_session__process_event so that the proper hist is passed to + * the event_op methods. + */ + hists__inc_nr_events(hists, PERF_RECORD_SAMPLE); + session->hists.stats.total_period += data->period; return 0; } @@ -164,7 +165,7 @@ static int process_sample_event(event_t *event, struct perf_session *session) dump_printf("... 
chain: nr:%Lu\n", data.callchain->nr); - if (validate_chain(data.callchain, event) < 0) { + if (!ip_callchain__valid(data.callchain, event)) { pr_debug("call-chain problem with event, " "skipping it.\n"); return 0; @@ -187,14 +188,14 @@ static int process_sample_event(event_t *event, struct perf_session *session) return 0; if (perf_session__add_hist_entry(session, &al, &data)) { - pr_debug("problem incrementing symbol count, skipping event\n"); + pr_debug("problem incrementing symbol period, skipping event\n"); return -1; } attr = perf_header__find_attr(data.id, &session->header); if (add_event_total(session, &data, attr)) { - pr_debug("problem adding event count\n"); + pr_debug("problem adding event period\n"); return -1; } @@ -260,15 +261,43 @@ static struct perf_event_ops event_ops = { .fork = event__process_task, .lost = event__process_lost, .read = process_read_event, + .attr = event__process_attr, + .event_type = event__process_event_type, + .tracing_data = event__process_tracing_data, + .build_id = event__process_build_id, }; +extern volatile int session_done; + +static void sig_handler(int sig __used) +{ + session_done = 1; +} + +static size_t hists__fprintf_nr_sample_events(struct hists *self, + const char *evname, FILE *fp) +{ + size_t ret; + char unit; + unsigned long nr_events = self->stats.nr_events[PERF_RECORD_SAMPLE]; + + nr_events = convert_unit(nr_events, &unit); + ret = fprintf(fp, "# Events: %lu%c", nr_events, unit); + if (evname != NULL) + ret += fprintf(fp, " %s", evname); + return ret + fprintf(fp, "\n#\n"); +} + static int __cmd_report(void) { int ret = -EINVAL; struct perf_session *session; struct rb_node *next; + const char *help = "For a higher level overview, try: perf report --sort comm,dso"; + + signal(SIGINT, sig_handler); - session = perf_session__new(input_name, O_RDONLY, force); + session = perf_session__new(input_name, O_RDONLY, force, false); if (session == NULL) return -ENOMEM; @@ -284,7 +313,7 @@ static int __cmd_report(void) goto out_delete; if (dump_trace) { - event__print_totals(); + perf_session__fprintf_nr_events(session, stdout); goto out_delete; } @@ -292,39 +321,42 @@ static int __cmd_report(void) perf_session__fprintf(session, stdout); if (verbose > 2) - dsos__fprintf(stdout); + perf_session__fprintf_dsos(session, stdout); - next = rb_first(&session->stats_by_id); + next = rb_first(&session->hists_tree); while (next) { - struct event_stat_id *stats; - - stats = rb_entry(next, struct event_stat_id, rb_node); - perf_session__collapse_resort(&stats->hists); - perf_session__output_resort(&stats->hists, stats->stats.total); - if (rb_first(&session->stats_by_id) == - rb_last(&session->stats_by_id)) - fprintf(stdout, "# Samples: %Ld\n#\n", - stats->stats.total); - else - fprintf(stdout, "# Samples: %Ld %s\n#\n", - stats->stats.total, - __event_name(stats->type, stats->config)); - - perf_session__fprintf_hists(&stats->hists, NULL, false, stdout, - stats->stats.total); - fprintf(stdout, "\n\n"); - next = rb_next(&stats->rb_node); + struct hists *hists; + + hists = rb_entry(next, struct hists, rb_node); + hists__collapse_resort(hists); + hists__output_resort(hists); + if (use_browser) + hists__browse(hists, help, input_name); + else { + const char *evname = NULL; + if (rb_first(&session->hists.entries) != + rb_last(&session->hists.entries)) + evname = __event_name(hists->type, hists->config); + + hists__fprintf_nr_sample_events(hists, evname, stdout); + + hists__fprintf(hists, NULL, false, stdout); + fprintf(stdout, "\n\n"); + } + + next = 
rb_next(&hists->rb_node); } - if (sort_order == default_sort_order && - parent_pattern == default_parent_pattern) - fprintf(stdout, "#\n# (For a higher level overview, try: perf report --sort comm,dso)\n#\n"); + if (!use_browser && sort_order == default_sort_order && + parent_pattern == default_parent_pattern) { + fprintf(stdout, "#\n# (%s)\n#\n", help); - if (show_threads) { - bool raw_printing_style = !strcmp(pretty_printing_style, "raw"); - perf_read_values_display(stdout, &show_threads_values, - raw_printing_style); - perf_read_values_destroy(&show_threads_values); + if (show_threads) { + bool style = !strcmp(pretty_printing_style, "raw"); + perf_read_values_display(stdout, &show_threads_values, + style); + perf_read_values_destroy(&show_threads_values); + } } out_delete: perf_session__delete(session); @@ -335,7 +367,7 @@ static int parse_callchain_opt(const struct option *opt __used, const char *arg, int unset) { - char *tok; + char *tok, *tok2; char *endptr; /* @@ -380,10 +412,13 @@ parse_callchain_opt(const struct option *opt __used, const char *arg, if (!tok) goto setup; + tok2 = strtok(NULL, ","); callchain_param.min_percent = strtod(tok, &endptr); if (tok == endptr) return -1; + if (tok2) + callchain_param.print_limit = strtod(tok2, &endptr); setup: if (register_callchain_param(&callchain_param) < 0) { fprintf(stderr, "Can't register callchain params\n"); @@ -400,7 +435,7 @@ static const char * const report_usage[] = { static const struct option options[] = { OPT_STRING('i', "input", &input_name, "file", "input file name"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), @@ -419,12 +454,14 @@ static const struct option options[] = { "sort by key(s): pid, comm, dso, symbol, parent"), OPT_BOOLEAN('P', "full-paths", &symbol_conf.full_paths, "Don't shorten the pathnames taking into account the cwd"), + OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization, + "Show sample percentage for different cpu modes"), OPT_STRING('p', "parent", &parent_pattern, "regex", "regex filter to identify parent, see: '--sort parent'"), OPT_BOOLEAN('x', "exclude-other", &symbol_conf.exclude_other, "Only display entries with parent-match"), OPT_CALLBACK_DEFAULT('g', "call-graph", NULL, "output_type,min_percent", - "Display callchains using output_type and min percent threshold. " + "Display callchains using output_type (graph, flat, fractal, or none) and min percent threshold. " "Default: fractal,0.5", &parse_callchain_opt, callchain_default_opt), OPT_STRING('d', "dsos", &symbol_conf.dso_list_str, "dso[,dso...]", "only consider symbols in these dsos"), @@ -447,7 +484,15 @@ int cmd_report(int argc, const char **argv, const char *prefix __used) { argc = parse_options(argc, argv, options, report_usage, 0); - setup_pager(); + if (strcmp(input_name, "-") != 0) + setup_browser(); + /* + * Only in the newt browser we are doing integrated annotation, + * so don't allocate extra space that won't be used in the stdio + * implementation. 
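+	 * That is why priv_size is only raised to sizeof(struct sym_priv) below
+	 * when use_browser is set: the per-symbol space is filled by
+	 * hist_entry__inc_addr_samples() in the sample path.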
+ */ + if (use_browser) + symbol_conf.priv_size = sizeof(struct sym_priv); if (symbol__init() < 0) return -1; @@ -455,7 +500,8 @@ int cmd_report(int argc, const char **argv, const char *prefix __used) setup_sorting(report_usage, options); if (parent_pattern != default_parent_pattern) { - sort_dimension__add("parent"); + if (sort_dimension__add("parent") < 0) + return -1; sort_parent.elide = 1; } else symbol_conf.exclude_other = false; diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c index 4f5a03e43444..f67bce2a83b4 100644 --- a/tools/perf/builtin-sched.c +++ b/tools/perf/builtin-sched.c @@ -22,7 +22,7 @@ static char const *input_name = "perf.data"; static char default_sort_order[] = "avg, max, switch, runtime"; -static char *sort_order = default_sort_order; +static const char *sort_order = default_sort_order; static int profile_cpu = -1; @@ -68,10 +68,10 @@ enum sched_event_type { struct sched_atom { enum sched_event_type type; + int specific_wait; u64 timestamp; u64 duration; unsigned long nr; - int specific_wait; sem_t *wait_sem; struct task_desc *wakee; }; @@ -105,7 +105,7 @@ static u64 sum_runtime; static u64 sum_fluct; static u64 run_avg; -static unsigned long replay_repeat = 10; +static unsigned int replay_repeat = 10; static unsigned long nr_timestamps; static unsigned long nr_unordered_timestamps; static unsigned long nr_state_machine_bugs; @@ -1641,30 +1641,26 @@ static int process_sample_event(event_t *event, struct perf_session *session) return 0; } -static int process_lost_event(event_t *event __used, - struct perf_session *session __used) -{ - nr_lost_chunks++; - nr_lost_events += event->lost.lost; - - return 0; -} - static struct perf_event_ops event_ops = { - .sample = process_sample_event, - .comm = event__process_comm, - .lost = process_lost_event, + .sample = process_sample_event, + .comm = event__process_comm, + .lost = event__process_lost, + .ordered_samples = true, }; static int read_events(void) { int err = -EINVAL; - struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0); + struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0, false); if (session == NULL) return -ENOMEM; - if (perf_session__has_traces(session, "record -R")) + if (perf_session__has_traces(session, "record -R")) { err = perf_session__process_events(session, &event_ops); + nr_events = session->hists.stats.nr_events[0]; + nr_lost_events = session->hists.stats.total_lost; + nr_lost_chunks = session->hists.stats.nr_events[PERF_RECORD_LOST]; + } perf_session__delete(session); return err; @@ -1790,7 +1786,7 @@ static const char * const sched_usage[] = { static const struct option sched_options[] = { OPT_STRING('i', "input", &input_name, "file", "input file name"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), @@ -1805,7 +1801,7 @@ static const char * const latency_usage[] = { static const struct option latency_options[] = { OPT_STRING('s', "sort", &sort_order, "key[,key2...]", "sort by key(s): runtime, switch, avg, max"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_INTEGER('C', "CPU", &profile_cpu, "CPU to profile on"), @@ -1820,9 +1816,9 @@ static const char * const replay_usage[] = { }; static const struct option replay_options[] = { - OPT_INTEGER('r', "repeat", &replay_repeat, - "repeat the workload replay N times 
(-1: infinite)"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_UINTEGER('r', "repeat", &replay_repeat, + "repeat the workload replay N times (-1: infinite)"), + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), @@ -1850,7 +1846,6 @@ static const char *record_args[] = { "record", "-a", "-R", - "-M", "-f", "-m", "1024", "-c", "1", diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 95db31cff6fd..ff8c413b7e73 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -46,6 +46,7 @@ #include "util/debug.h" #include "util/header.h" #include "util/cpumap.h" +#include "util/thread.h" #include <sys/prctl.h> #include <math.h> @@ -66,18 +67,21 @@ static struct perf_event_attr default_attrs[] = { }; -static int system_wide = 0; +static bool system_wide = false; static unsigned int nr_cpus = 0; static int run_idx = 0; static int run_count = 1; -static int inherit = 1; -static int scale = 1; +static bool no_inherit = false; +static bool scale = true; static pid_t target_pid = -1; +static pid_t target_tid = -1; +static pid_t *all_tids = NULL; +static int thread_num = 0; static pid_t child_pid = -1; -static int null_run = 0; +static bool null_run = false; -static int fd[MAX_NR_CPUS][MAX_COUNTERS]; +static int *fd[MAX_NR_CPUS][MAX_COUNTERS]; static int event_scaled[MAX_COUNTERS]; @@ -140,9 +144,11 @@ struct stats runtime_branches_stats; #define ERR_PERF_OPEN \ "Error: counter %d, sys_perf_event_open() syscall returned with %d (%s)\n" -static void create_perf_stat_counter(int counter, int pid) +static int create_perf_stat_counter(int counter) { struct perf_event_attr *attr = attrs + counter; + int thread; + int ncreated = 0; if (scale) attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | @@ -152,21 +158,33 @@ static void create_perf_stat_counter(int counter, int pid) unsigned int cpu; for (cpu = 0; cpu < nr_cpus; cpu++) { - fd[cpu][counter] = sys_perf_event_open(attr, -1, cpumap[cpu], -1, 0); - if (fd[cpu][counter] < 0 && verbose) - fprintf(stderr, ERR_PERF_OPEN, counter, - fd[cpu][counter], strerror(errno)); + fd[cpu][counter][0] = sys_perf_event_open(attr, + -1, cpumap[cpu], -1, 0); + if (fd[cpu][counter][0] < 0) + pr_debug(ERR_PERF_OPEN, counter, + fd[cpu][counter][0], strerror(errno)); + else + ++ncreated; } } else { - attr->inherit = inherit; - attr->disabled = 1; - attr->enable_on_exec = 1; - - fd[0][counter] = sys_perf_event_open(attr, pid, -1, -1, 0); - if (fd[0][counter] < 0 && verbose) - fprintf(stderr, ERR_PERF_OPEN, counter, - fd[0][counter], strerror(errno)); + attr->inherit = !no_inherit; + if (target_pid == -1 && target_tid == -1) { + attr->disabled = 1; + attr->enable_on_exec = 1; + } + for (thread = 0; thread < thread_num; thread++) { + fd[0][counter][thread] = sys_perf_event_open(attr, + all_tids[thread], -1, -1, 0); + if (fd[0][counter][thread] < 0) + pr_debug(ERR_PERF_OPEN, counter, + fd[0][counter][thread], + strerror(errno)); + else + ++ncreated; + } } + + return ncreated; } /* @@ -190,25 +208,28 @@ static void read_counter(int counter) unsigned int cpu; size_t res, nv; int scaled; - int i; + int i, thread; count[0] = count[1] = count[2] = 0; nv = scale ? 
3 : 1; for (cpu = 0; cpu < nr_cpus; cpu++) { - if (fd[cpu][counter] < 0) - continue; - - res = read(fd[cpu][counter], single_count, nv * sizeof(u64)); - assert(res == nv * sizeof(u64)); - - close(fd[cpu][counter]); - fd[cpu][counter] = -1; - - count[0] += single_count[0]; - if (scale) { - count[1] += single_count[1]; - count[2] += single_count[2]; + for (thread = 0; thread < thread_num; thread++) { + if (fd[cpu][counter][thread] < 0) + continue; + + res = read(fd[cpu][counter][thread], + single_count, nv * sizeof(u64)); + assert(res == nv * sizeof(u64)); + + close(fd[cpu][counter][thread]); + fd[cpu][counter][thread] = -1; + + count[0] += single_count[0]; + if (scale) { + count[1] += single_count[1]; + count[2] += single_count[2]; + } } } @@ -250,10 +271,9 @@ static int run_perf_stat(int argc __used, const char **argv) { unsigned long long t0, t1; int status = 0; - int counter; - int pid = target_pid; + int counter, ncreated = 0; int child_ready_pipe[2], go_pipe[2]; - const bool forks = (target_pid == -1 && argc > 0); + const bool forks = (argc > 0); char buf; if (!system_wide) @@ -265,10 +285,10 @@ static int run_perf_stat(int argc __used, const char **argv) } if (forks) { - if ((pid = fork()) < 0) + if ((child_pid = fork()) < 0) perror("failed to fork"); - if (!pid) { + if (!child_pid) { close(child_ready_pipe[0]); close(go_pipe[1]); fcntl(go_pipe[0], F_SETFD, FD_CLOEXEC); @@ -297,7 +317,8 @@ static int run_perf_stat(int argc __used, const char **argv) exit(-1); } - child_pid = pid; + if (target_tid == -1 && target_pid == -1 && !system_wide) + all_tids[0] = child_pid; /* * Wait for the child to be ready to exec. @@ -310,7 +331,16 @@ static int run_perf_stat(int argc __used, const char **argv) } for (counter = 0; counter < nr_counters; counter++) - create_perf_stat_counter(counter, pid); + ncreated += create_perf_stat_counter(counter); + + if (ncreated == 0) { + pr_err("No permission to collect %sstats.\n" + "Consider tweaking /proc/sys/kernel/perf_event_paranoid.\n", + system_wide ? "system-wide " : ""); + if (child_pid != -1) + kill(child_pid, SIGTERM); + return -1; + } /* * Enable counters and exec the command: @@ -321,7 +351,7 @@ static int run_perf_stat(int argc __used, const char **argv) close(go_pipe[1]); wait(&status); } else { - while(!done); + while(!done) sleep(1); } t1 = rdclock(); @@ -429,12 +459,14 @@ static void print_stat(int argc, const char **argv) fprintf(stderr, "\n"); fprintf(stderr, " Performance counter stats for "); - if(target_pid == -1) { + if(target_pid == -1 && target_tid == -1) { fprintf(stderr, "\'%s", argv[0]); for (i = 1; i < argc; i++) fprintf(stderr, " %s", argv[i]); - }else - fprintf(stderr, "task pid \'%d", target_pid); + } else if (target_pid != -1) + fprintf(stderr, "process id \'%d", target_pid); + else + fprintf(stderr, "thread id \'%d", target_tid); fprintf(stderr, "\'"); if (run_count > 1) @@ -459,7 +491,7 @@ static volatile int signr = -1; static void skip_signal(int signo) { - if(target_pid != -1) + if(child_pid == -1) done = 1; signr = signo; @@ -486,15 +518,17 @@ static const struct option options[] = { OPT_CALLBACK('e', "event", NULL, "event", "event selector. 
use 'perf list' to list available events", parse_events), - OPT_BOOLEAN('i', "inherit", &inherit, - "child tasks inherit counters"), + OPT_BOOLEAN('i', "no-inherit", &no_inherit, + "child tasks do not inherit counters"), OPT_INTEGER('p', "pid", &target_pid, - "stat events on existing pid"), + "stat events on existing process id"), + OPT_INTEGER('t', "tid", &target_tid, + "stat events on existing thread id"), OPT_BOOLEAN('a', "all-cpus", &system_wide, "system-wide collection from all CPUs"), OPT_BOOLEAN('c', "scale", &scale, "scale/normalize counters"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show counter open errors, etc)"), OPT_INTEGER('r', "repeat", &run_count, "repeat command and print average + stddev (max: 100)"), @@ -506,10 +540,11 @@ static const struct option options[] = { int cmd_stat(int argc, const char **argv, const char *prefix __used) { int status; + int i,j; argc = parse_options(argc, argv, options, stat_usage, PARSE_OPT_STOP_AT_NON_OPTION); - if (!argc && target_pid == -1) + if (!argc && target_pid == -1 && target_tid == -1) usage_with_options(stat_usage, options); if (run_count <= 0) usage_with_options(stat_usage, options); @@ -525,6 +560,31 @@ int cmd_stat(int argc, const char **argv, const char *prefix __used) else nr_cpus = 1; + if (target_pid != -1) { + target_tid = target_pid; + thread_num = find_all_tid(target_pid, &all_tids); + if (thread_num <= 0) { + fprintf(stderr, "Can't find all threads of pid %d\n", + target_pid); + usage_with_options(stat_usage, options); + } + } else { + all_tids=malloc(sizeof(pid_t)); + if (!all_tids) + return -ENOMEM; + + all_tids[0] = target_tid; + thread_num = 1; + } + + for (i = 0; i < MAX_NR_CPUS; i++) { + for (j = 0; j < MAX_COUNTERS; j++) { + fd[i][j] = malloc(sizeof(int)*thread_num); + if (!fd[i][j]) + return -ENOMEM; + } + } + /* * We dont want to block the signals - that would cause * child tasks to inherit that and Ctrl-C would not work. @@ -543,7 +603,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __used) status = run_perf_stat(argc, argv); } - print_stat(argc, argv); + if (status != -1) + print_stat(argc, argv); return status; } diff --git a/tools/perf/builtin-test.c b/tools/perf/builtin-test.c new file mode 100644 index 000000000000..035b9fa063a9 --- /dev/null +++ b/tools/perf/builtin-test.c @@ -0,0 +1,281 @@ +/* + * builtin-test.c + * + * Builtin regression testing command: ever growing number of sanity tests + */ +#include "builtin.h" + +#include "util/cache.h" +#include "util/debug.h" +#include "util/parse-options.h" +#include "util/session.h" +#include "util/symbol.h" +#include "util/thread.h" + +static long page_size; + +static int vmlinux_matches_kallsyms_filter(struct map *map __used, struct symbol *sym) +{ + bool *visited = symbol__priv(sym); + *visited = true; + return 0; +} + +static int test__vmlinux_matches_kallsyms(void) +{ + int err = -1; + struct rb_node *nd; + struct symbol *sym; + struct map *kallsyms_map, *vmlinux_map; + struct machine kallsyms, vmlinux; + enum map_type type = MAP__FUNCTION; + struct ref_reloc_sym ref_reloc_sym = { .name = "_stext", }; + + /* + * Step 1: + * + * Init the machines that will hold kernel, modules obtained from + * both vmlinux + .ko files and from /proc/kallsyms split by modules. + */ + machine__init(&kallsyms, "", HOST_KERNEL_ID); + machine__init(&vmlinux, "", HOST_KERNEL_ID); + + /* + * Step 2: + * + * Create the kernel maps for kallsyms and the DSO where we will then + * load /proc/kallsyms. 
Also create the modules maps from /proc/modules + * and find the .ko files that match them in /lib/modules/`uname -r`/. + */ + if (machine__create_kernel_maps(&kallsyms) < 0) { + pr_debug("machine__create_kernel_maps "); + return -1; + } + + /* + * Step 3: + * + * Load and split /proc/kallsyms into multiple maps, one per module. + */ + if (machine__load_kallsyms(&kallsyms, "/proc/kallsyms", type, NULL) <= 0) { + pr_debug("dso__load_kallsyms "); + goto out; + } + + /* + * Step 4: + * + * kallsyms will be internally on demand sorted by name so that we can + * find the reference relocation * symbol, i.e. the symbol we will use + * to see if the running kernel was relocated by checking if it has the + * same value in the vmlinux file we load. + */ + kallsyms_map = machine__kernel_map(&kallsyms, type); + + sym = map__find_symbol_by_name(kallsyms_map, ref_reloc_sym.name, NULL); + if (sym == NULL) { + pr_debug("dso__find_symbol_by_name "); + goto out; + } + + ref_reloc_sym.addr = sym->start; + + /* + * Step 5: + * + * Now repeat step 2, this time for the vmlinux file we'll auto-locate. + */ + if (machine__create_kernel_maps(&vmlinux) < 0) { + pr_debug("machine__create_kernel_maps "); + goto out; + } + + vmlinux_map = machine__kernel_map(&vmlinux, type); + map__kmap(vmlinux_map)->ref_reloc_sym = &ref_reloc_sym; + + /* + * Step 6: + * + * Locate a vmlinux file in the vmlinux path that has a buildid that + * matches the one of the running kernel. + * + * While doing that look if we find the ref reloc symbol, if we find it + * we'll have its ref_reloc_symbol.unrelocated_addr and then + * maps__reloc_vmlinux will notice and set proper ->[un]map_ip routines + * to fixup the symbols. + */ + if (machine__load_vmlinux_path(&vmlinux, type, + vmlinux_matches_kallsyms_filter) <= 0) { + pr_debug("machine__load_vmlinux_path "); + goto out; + } + + err = 0; + /* + * Step 7: + * + * Now look at the symbols in the vmlinux DSO and check if we find all of them + * in the kallsyms dso. For the ones that are in both, check its names and + * end addresses too. + */ + for (nd = rb_first(&vmlinux_map->dso->symbols[type]); nd; nd = rb_next(nd)) { + struct symbol *pair; + + sym = rb_entry(nd, struct symbol, rb_node); + pair = machine__find_kernel_symbol(&kallsyms, type, sym->start, NULL, NULL); + + if (pair && pair->start == sym->start) { +next_pair: + if (strcmp(sym->name, pair->name) == 0) { + /* + * kallsyms don't have the symbol end, so we + * set that by using the next symbol start - 1, + * in some cases we get this up to a page + * wrong, trace_kmalloc when I was developing + * this code was one such example, 2106 bytes + * off the real size. More than that and we + * _really_ have a problem. 
+ */ + s64 skew = sym->end - pair->end; + if (llabs(skew) < page_size) + continue; + + pr_debug("%#Lx: diff end addr for %s v: %#Lx k: %#Lx\n", + sym->start, sym->name, sym->end, pair->end); + } else { + struct rb_node *nnd = rb_prev(&pair->rb_node); + + if (nnd) { + struct symbol *next = rb_entry(nnd, struct symbol, rb_node); + + if (next->start == sym->start) { + pair = next; + goto next_pair; + } + } + pr_debug("%#Lx: diff name v: %s k: %s\n", + sym->start, sym->name, pair->name); + } + } else + pr_debug("%#Lx: %s not on kallsyms\n", sym->start, sym->name); + + err = -1; + } + + if (!verbose) + goto out; + + pr_info("Maps only in vmlinux:\n"); + + for (nd = rb_first(&vmlinux.kmaps.maps[type]); nd; nd = rb_next(nd)) { + struct map *pos = rb_entry(nd, struct map, rb_node), *pair; + /* + * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while + * the kernel will have the path for the vmlinux file being used, + * so use the short name, less descriptive but the same ("[kernel]" in + * both cases. + */ + pair = map_groups__find_by_name(&kallsyms.kmaps, type, + (pos->dso->kernel ? + pos->dso->short_name : + pos->dso->name)); + if (pair) + pair->priv = 1; + else + map__fprintf(pos, stderr); + } + + pr_info("Maps in vmlinux with a different name in kallsyms:\n"); + + for (nd = rb_first(&vmlinux.kmaps.maps[type]); nd; nd = rb_next(nd)) { + struct map *pos = rb_entry(nd, struct map, rb_node), *pair; + + pair = map_groups__find(&kallsyms.kmaps, type, pos->start); + if (pair == NULL || pair->priv) + continue; + + if (pair->start == pos->start) { + pair->priv = 1; + pr_info(" %Lx-%Lx %Lx %s in kallsyms as", + pos->start, pos->end, pos->pgoff, pos->dso->name); + if (pos->pgoff != pair->pgoff || pos->end != pair->end) + pr_info(": \n*%Lx-%Lx %Lx", + pair->start, pair->end, pair->pgoff); + pr_info(" %s\n", pair->dso->name); + pair->priv = 1; + } + } + + pr_info("Maps only in kallsyms:\n"); + + for (nd = rb_first(&kallsyms.kmaps.maps[type]); + nd; nd = rb_next(nd)) { + struct map *pos = rb_entry(nd, struct map, rb_node); + + if (!pos->priv) + map__fprintf(pos, stderr); + } +out: + return err; +} + +static struct test { + const char *desc; + int (*func)(void); +} tests[] = { + { + .desc = "vmlinux symtab matches kallsyms", + .func = test__vmlinux_matches_kallsyms, + }, + { + .func = NULL, + }, +}; + +static int __cmd_test(void) +{ + int i = 0; + + page_size = sysconf(_SC_PAGE_SIZE); + + while (tests[i].func) { + int err; + pr_info("%2d: %s:", i + 1, tests[i].desc); + pr_debug("\n--- start ---\n"); + err = tests[i].func(); + pr_debug("---- end ----\n%s:", tests[i].desc); + pr_info(" %s\n", err ? 
"FAILED!\n" : "Ok"); + ++i; + } + + return 0; +} + +static const char * const test_usage[] = { + "perf test [<options>]", + NULL, +}; + +static const struct option test_options[] = { + OPT_INTEGER('v', "verbose", &verbose, + "be more verbose (show symbol address, etc)"), + OPT_END() +}; + +int cmd_test(int argc, const char **argv, const char *prefix __used) +{ + argc = parse_options(argc, argv, test_options, test_usage, 0); + if (argc) + usage_with_options(test_usage, test_options); + + symbol_conf.priv_size = sizeof(int); + symbol_conf.sort_by_name = true; + symbol_conf.try_vmlinux_path = true; + + if (symbol__init() < 0) + return -1; + + setup_pager(); + + return __cmd_test(); +} diff --git a/tools/perf/builtin-timechart.c b/tools/perf/builtin-timechart.c index 0d4d8ff7914b..5a52ed9fc10b 100644 --- a/tools/perf/builtin-timechart.c +++ b/tools/perf/builtin-timechart.c @@ -21,7 +21,6 @@ #include "util/cache.h" #include <linux/rbtree.h> #include "util/symbol.h" -#include "util/string.h" #include "util/callchain.h" #include "util/strlist.h" @@ -43,7 +42,7 @@ static u64 turbo_frequency; static u64 first_time, last_time; -static int power_only; +static bool power_only; struct per_pid; @@ -78,8 +77,6 @@ struct per_pid { struct per_pidcomm *all; struct per_pidcomm *current; - - int painted; }; @@ -146,9 +143,6 @@ struct wake_event { static struct power_event *power_events; static struct wake_event *wake_events; -struct sample_wrapper *all_samples; - - struct process_filter; struct process_filter { char *name; @@ -569,88 +563,6 @@ static void end_sample_processing(void) } } -static u64 sample_time(event_t *event, const struct perf_session *session) -{ - int cursor; - - cursor = 0; - if (session->sample_type & PERF_SAMPLE_IP) - cursor++; - if (session->sample_type & PERF_SAMPLE_TID) - cursor++; - if (session->sample_type & PERF_SAMPLE_TIME) - return event->sample.array[cursor]; - return 0; -} - - -/* - * We first queue all events, sorted backwards by insertion. - * The order will get flipped later. 
- */ -static int queue_sample_event(event_t *event, struct perf_session *session) -{ - struct sample_wrapper *copy, *prev; - int size; - - size = event->sample.header.size + sizeof(struct sample_wrapper) + 8; - - copy = malloc(size); - if (!copy) - return 1; - - memset(copy, 0, size); - - copy->next = NULL; - copy->timestamp = sample_time(event, session); - - memcpy(&copy->data, event, event->sample.header.size); - - /* insert in the right place in the list */ - - if (!all_samples) { - /* first sample ever */ - all_samples = copy; - return 0; - } - - if (all_samples->timestamp < copy->timestamp) { - /* insert at the head of the list */ - copy->next = all_samples; - all_samples = copy; - return 0; - } - - prev = all_samples; - while (prev->next) { - if (prev->next->timestamp < copy->timestamp) { - copy->next = prev->next; - prev->next = copy; - return 0; - } - prev = prev->next; - } - /* insert at the end of the list */ - prev->next = copy; - - return 0; -} - -static void sort_queued_samples(void) -{ - struct sample_wrapper *cursor, *next; - - cursor = all_samples; - all_samples = NULL; - - while (cursor) { - next = cursor->next; - cursor->next = all_samples; - all_samples = cursor; - cursor = next; - } -} - /* * Sort the pid datastructure */ @@ -1014,31 +926,17 @@ static void write_svg_file(const char *filename) svg_close(); } -static void process_samples(struct perf_session *session) -{ - struct sample_wrapper *cursor; - event_t *event; - - sort_queued_samples(); - - cursor = all_samples; - while (cursor) { - event = (void *)&cursor->data; - cursor = cursor->next; - process_sample_event(event, session); - } -} - static struct perf_event_ops event_ops = { - .comm = process_comm_event, - .fork = process_fork_event, - .exit = process_exit_event, - .sample = queue_sample_event, + .comm = process_comm_event, + .fork = process_fork_event, + .exit = process_exit_event, + .sample = process_sample_event, + .ordered_samples = true, }; static int __cmd_timechart(void) { - struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0); + struct perf_session *session = perf_session__new(input_name, O_RDONLY, 0, false); int ret = -EINVAL; if (session == NULL) @@ -1051,8 +949,6 @@ static int __cmd_timechart(void) if (ret) goto out_delete; - process_samples(session); - end_sample_processing(); sort_pids(); @@ -1075,7 +971,6 @@ static const char *record_args[] = { "record", "-a", "-R", - "-M", "-f", "-c", "1", "-e", "power:power_start", diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 1f529321607e..397290a0a76e 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -55,9 +55,9 @@ #include <linux/unistd.h> #include <linux/types.h> -static int fd[MAX_NR_CPUS][MAX_COUNTERS]; +static int *fd[MAX_NR_CPUS][MAX_COUNTERS]; -static int system_wide = 0; +static bool system_wide = false; static int default_interval = 0; @@ -65,18 +65,21 @@ static int count_filter = 5; static int print_entries; static int target_pid = -1; -static int inherit = 0; +static int target_tid = -1; +static pid_t *all_tids = NULL; +static int thread_num = 0; +static bool inherit = false; static int profile_cpu = -1; static int nr_cpus = 0; -static unsigned int realtime_prio = 0; -static int group = 0; +static int realtime_prio = 0; +static bool group = false; static unsigned int page_size; static unsigned int mmap_pages = 16; static int freq = 1000; /* 1 KHz */ static int delay_secs = 2; -static int zero = 0; -static int dump_symtab = 0; +static bool zero = false; +static bool dump_symtab =
false; static bool hide_kernel_symbols = false; static bool hide_user_symbols = false; @@ -93,7 +96,7 @@ struct source_line { struct source_line *next; }; -static char *sym_filter = NULL; +static const char *sym_filter = NULL; struct sym_entry *sym_filter_entry = NULL; struct sym_entry *sym_filter_entry_sched = NULL; static int sym_pcnt_filter = 5; @@ -133,7 +136,7 @@ static inline struct symbol *sym_entry__symbol(struct sym_entry *self) return ((void *)self) + symbol_conf.priv_size; } -static void get_term_dimensions(struct winsize *ws) +void get_term_dimensions(struct winsize *ws) { char *s = getenv("LINES"); @@ -169,7 +172,7 @@ static void sig_winch_handler(int sig __used) update_print_entries(&winsize); } -static void parse_source(struct sym_entry *syme) +static int parse_source(struct sym_entry *syme) { struct symbol *sym; struct sym_entry_source *source; @@ -180,12 +183,21 @@ static void parse_source(struct sym_entry *syme) u64 len; if (!syme) - return; + return -1; + + sym = sym_entry__symbol(syme); + map = syme->map; + + /* + * We can't annotate with just /proc/kallsyms + */ + if (map->dso->origin == DSO__ORIG_KERNEL) + return -1; if (syme->src == NULL) { syme->src = zalloc(sizeof(*source)); if (syme->src == NULL) - return; + return -1; pthread_mutex_init(&syme->src->lock, NULL); } @@ -195,9 +207,6 @@ static void parse_source(struct sym_entry *syme) pthread_mutex_lock(&source->lock); goto out_assign; } - - sym = sym_entry__symbol(syme); - map = syme->map; path = map->dso->long_name; len = sym->end - sym->start; @@ -209,7 +218,7 @@ static void parse_source(struct sym_entry *syme) file = popen(command, "r"); if (!file) - return; + return -1; pthread_mutex_lock(&source->lock); source->lines_tail = &source->lines; @@ -245,6 +254,7 @@ static void parse_source(struct sym_entry *syme) out_assign: sym_filter_entry = syme; pthread_mutex_unlock(&source->lock); + return 0; } static void __zero_source_counters(struct sym_entry *syme) @@ -410,7 +420,9 @@ static double sym_weight(const struct sym_entry *sym) } static long samples; -static long userspace_samples; +static long kernel_samples, us_samples; +static long exact_samples; +static long guest_us_samples, guest_kernel_samples; static const char CONSOLE_CLEAR[] = "[H[2J"; static void __list_insert_active_sym(struct sym_entry *syme) @@ -450,7 +462,11 @@ static void print_sym_table(void) int printed = 0, j; int counter, snap = !display_weighted ? 
sym_counter : 0; float samples_per_sec = samples/delay_secs; - float ksamples_per_sec = (samples-userspace_samples)/delay_secs; + float ksamples_per_sec = kernel_samples/delay_secs; + float us_samples_per_sec = (us_samples)/delay_secs; + float guest_kernel_samples_per_sec = (guest_kernel_samples)/delay_secs; + float guest_us_samples_per_sec = (guest_us_samples)/delay_secs; + float esamples_percent = (100.0*exact_samples)/samples; float sum_ksamples = 0.0; struct sym_entry *syme, *n; struct rb_root tmp = RB_ROOT; @@ -458,7 +474,8 @@ static void print_sym_table(void) int sym_width = 0, dso_width = 0, dso_short_width = 0; const int win_width = winsize.ws_col - 1; - samples = userspace_samples = 0; + samples = us_samples = kernel_samples = exact_samples = 0; + guest_kernel_samples = guest_us_samples = 0; /* Sort the active symbols */ pthread_mutex_lock(&active_symbols_lock); @@ -489,9 +506,30 @@ static void print_sym_table(void) puts(CONSOLE_CLEAR); printf("%-*.*s\n", win_width, win_width, graph_dotted_line); - printf( " PerfTop:%8.0f irqs/sec kernel:%4.1f%% [", - samples_per_sec, - 100.0 - (100.0*((samples_per_sec-ksamples_per_sec)/samples_per_sec))); + if (!perf_guest) { + printf(" PerfTop:%8.0f irqs/sec kernel:%4.1f%%" + " exact: %4.1f%% [", + samples_per_sec, + 100.0 - (100.0 * ((samples_per_sec - ksamples_per_sec) / + samples_per_sec)), + esamples_percent); + } else { + printf(" PerfTop:%8.0f irqs/sec kernel:%4.1f%% us:%4.1f%%" + " guest kernel:%4.1f%% guest us:%4.1f%%" + " exact: %4.1f%% [", + samples_per_sec, + 100.0 - (100.0 * ((samples_per_sec-ksamples_per_sec) / + samples_per_sec)), + 100.0 - (100.0 * ((samples_per_sec-us_samples_per_sec) / + samples_per_sec)), + 100.0 - (100.0 * ((samples_per_sec - + guest_kernel_samples_per_sec) / + samples_per_sec)), + 100.0 - (100.0 * ((samples_per_sec - + guest_us_samples_per_sec) / + samples_per_sec)), + esamples_percent); + } if (nr_counters == 1 || !display_weighted) { printf("%Ld", (u64)attrs[0].sample_period); @@ -514,13 +552,15 @@ static void print_sym_table(void) if (target_pid != -1) printf(" (target_pid: %d", target_pid); + else if (target_tid != -1) + printf(" (target_tid: %d", target_tid); else printf(" (all"); if (profile_cpu != -1) printf(", cpu: %d)\n", profile_cpu); else { - if (target_pid != -1) + if (target_tid != -1) printf(")\n"); else printf(", %d CPUs)\n", nr_cpus); @@ -582,7 +622,6 @@ static void print_sym_table(void) syme = rb_entry(nd, struct sym_entry, rb_node); sym = sym_entry__symbol(syme); - if (++printed > print_entries || (int)syme->snap_count < count_filter) continue; @@ -746,7 +785,7 @@ static int key_mapped(int c) return 0; } -static void handle_keypress(int c) +static void handle_keypress(struct perf_session *session, int c) { if (!key_mapped(c)) { struct pollfd stdin_poll = { .fd = 0, .events = POLLIN }; @@ -815,7 +854,7 @@ static void handle_keypress(int c) case 'Q': printf("exiting.\n"); if (dump_symtab) - dsos__fprintf(stderr); + perf_session__fprintf_dsos(session, stderr); exit(0); case 's': prompt_symbol(&sym_filter_entry, "Enter details symbol"); @@ -839,7 +878,7 @@ static void handle_keypress(int c) display_weighted = ~display_weighted; break; case 'z': - zero = ~zero; + zero = !zero; break; default: break; @@ -851,6 +890,7 @@ static void *display_thread(void *arg __used) struct pollfd stdin_poll = { .fd = 0, .events = POLLIN }; struct termios tc, save; int delay_msecs, c; + struct perf_session *session = (struct perf_session *) arg; tcgetattr(0, &save); tc = save; @@ -871,7 +911,7 @@ repeat: c = 
getc(stdin); tcsetattr(0, TCSAFLUSH, &save); - handle_keypress(c); + handle_keypress(session, c); goto repeat; return NULL; @@ -942,24 +982,48 @@ static void event__process_sample(const event_t *self, u64 ip = self->ip.ip; struct sym_entry *syme; struct addr_location al; + struct machine *machine; u8 origin = self->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; ++samples; switch (origin) { case PERF_RECORD_MISC_USER: - ++userspace_samples; + ++us_samples; if (hide_user_symbols) return; + machine = perf_session__find_host_machine(session); break; case PERF_RECORD_MISC_KERNEL: + ++kernel_samples; if (hide_kernel_symbols) return; + machine = perf_session__find_host_machine(session); + break; + case PERF_RECORD_MISC_GUEST_KERNEL: + ++guest_kernel_samples; + machine = perf_session__find_machine(session, self->ip.pid); break; + case PERF_RECORD_MISC_GUEST_USER: + ++guest_us_samples; + /* + * TODO: we don't process guest user from host side + * except simple counting. + */ + return; default: return; } + if (!machine && perf_guest) { + pr_err("Can't find guest [%d]'s kernel information\n", + self->ip.pid); + return; + } + + if (self->header.misc & PERF_RECORD_MISC_EXACT_IP) + exact_samples++; + if (event__preprocess_sample(self, session, &al, symbol_filter) < 0 || al.filtered) return; @@ -976,7 +1040,7 @@ static void event__process_sample(const event_t *self, * --hide-kernel-symbols, even if the user specifies an * invalid --vmlinux ;-) */ - if (al.map == session->vmlinux_maps[MAP__FUNCTION] && + if (al.map == machine->vmlinux_maps[MAP__FUNCTION] && RB_EMPTY_ROOT(&al.map->dso->symbols[MAP__FUNCTION])) { pr_err("The %s file can't be used\n", symbol_conf.vmlinux_name); @@ -990,7 +1054,17 @@ static void event__process_sample(const event_t *self, if (sym_filter_entry_sched) { sym_filter_entry = sym_filter_entry_sched; sym_filter_entry_sched = NULL; - parse_source(sym_filter_entry); + if (parse_source(sym_filter_entry) < 0) { + struct symbol *sym = sym_entry__symbol(sym_filter_entry); + + pr_err("Can't annotate %s", sym->name); + if (sym_filter_entry->map->dso->origin == DSO__ORIG_KERNEL) { + pr_err(": No vmlinux file was found in the path:\n"); + vmlinux_path__fprintf(stderr); + } else + pr_err(".\n"); + exit(1); + } } syme = symbol__priv(al.sym); @@ -1106,16 +1180,21 @@ static void perf_session__mmap_read_counter(struct perf_session *self, md->prev = old; } -static struct pollfd event_array[MAX_NR_CPUS * MAX_COUNTERS]; -static struct mmap_data mmap_array[MAX_NR_CPUS][MAX_COUNTERS]; +static struct pollfd *event_array; +static struct mmap_data *mmap_array[MAX_NR_CPUS][MAX_COUNTERS]; static void perf_session__mmap_read(struct perf_session *self) { - int i, counter; + int i, counter, thread_index; for (i = 0; i < nr_cpus; i++) { for (counter = 0; counter < nr_counters; counter++) - perf_session__mmap_read_counter(self, &mmap_array[i][counter]); + for (thread_index = 0; + thread_index < thread_num; + thread_index++) { + perf_session__mmap_read_counter(self, + &mmap_array[i][counter][thread_index]); + } } } @@ -1126,9 +1205,10 @@ static void start_counter(int i, int counter) { struct perf_event_attr *attr; int cpu; + int thread_index; cpu = profile_cpu; - if (target_pid == -1 && profile_cpu == -1) + if (target_tid == -1 && profile_cpu == -1) cpu = cpumap[i]; attr = attrs + counter; @@ -1144,55 +1224,58 @@ static void start_counter(int i, int counter) attr->inherit = (cpu < 0) && inherit; attr->mmap = 1; + for (thread_index = 0; thread_index < thread_num; thread_index++) { try_again: - fd[i][counter] = 
sys_perf_event_open(attr, target_pid, cpu, group_fd, 0); - - if (fd[i][counter] < 0) { - int err = errno; + fd[i][counter][thread_index] = sys_perf_event_open(attr, + all_tids[thread_index], cpu, group_fd, 0); + + if (fd[i][counter][thread_index] < 0) { + int err = errno; + + if (err == EPERM || err == EACCES) + die("No permission - are you root?\n"); + /* + * If it's cycles then fall back to hrtimer + * based cpu-clock-tick sw counter, which + * is always available even if no PMU support: + */ + if (attr->type == PERF_TYPE_HARDWARE + && attr->config == PERF_COUNT_HW_CPU_CYCLES) { + + if (verbose) + warning(" ... trying to fall back to cpu-clock-ticks\n"); + + attr->type = PERF_TYPE_SOFTWARE; + attr->config = PERF_COUNT_SW_CPU_CLOCK; + goto try_again; + } + printf("\n"); + error("perfcounter syscall returned with %d (%s)\n", + fd[i][counter][thread_index], strerror(err)); + die("No CONFIG_PERF_EVENTS=y kernel support configured?\n"); + exit(-1); + } + assert(fd[i][counter][thread_index] >= 0); + fcntl(fd[i][counter][thread_index], F_SETFL, O_NONBLOCK); - if (err == EPERM || err == EACCES) - die("No permission - are you root?\n"); /* - * If it's cycles then fall back to hrtimer - * based cpu-clock-tick sw counter, which - * is always available even if no PMU support: + * First counter acts as the group leader: */ - if (attr->type == PERF_TYPE_HARDWARE - && attr->config == PERF_COUNT_HW_CPU_CYCLES) { - - if (verbose) - warning(" ... trying to fall back to cpu-clock-ticks\n"); - - attr->type = PERF_TYPE_SOFTWARE; - attr->config = PERF_COUNT_SW_CPU_CLOCK; - goto try_again; - } - printf("\n"); - error("perfcounter syscall returned with %d (%s)\n", - fd[i][counter], strerror(err)); - die("No CONFIG_PERF_EVENTS=y kernel support configured?\n"); - exit(-1); + if (group && group_fd == -1) + group_fd = fd[i][counter][thread_index]; + + event_array[nr_poll].fd = fd[i][counter][thread_index]; + event_array[nr_poll].events = POLLIN; + nr_poll++; + + mmap_array[i][counter][thread_index].counter = counter; + mmap_array[i][counter][thread_index].prev = 0; + mmap_array[i][counter][thread_index].mask = mmap_pages*page_size - 1; + mmap_array[i][counter][thread_index].base = mmap(NULL, (mmap_pages+1)*page_size, + PROT_READ, MAP_SHARED, fd[i][counter][thread_index], 0); + if (mmap_array[i][counter][thread_index].base == MAP_FAILED) + die("failed to mmap with %d (%s)\n", errno, strerror(errno)); } - assert(fd[i][counter] >= 0); - fcntl(fd[i][counter], F_SETFL, O_NONBLOCK); - - /* - * First counter acts as the group leader: - */ - if (group && group_fd == -1) - group_fd = fd[i][counter]; - - event_array[nr_poll].fd = fd[i][counter]; - event_array[nr_poll].events = POLLIN; - nr_poll++; - - mmap_array[i][counter].counter = counter; - mmap_array[i][counter].prev = 0; - mmap_array[i][counter].mask = mmap_pages*page_size - 1; - mmap_array[i][counter].base = mmap(NULL, (mmap_pages+1)*page_size, - PROT_READ, MAP_SHARED, fd[i][counter], 0); - if (mmap_array[i][counter].base == MAP_FAILED) - die("failed to mmap with %d (%s)\n", errno, strerror(errno)); } static int __cmd_top(void) @@ -1204,12 +1287,12 @@ static int __cmd_top(void) * FIXME: perf_session__new should allow passing a O_MMAP, so that all this * mmap reading, etc is encapsulated in it. Use O_WRONLY for now. 
*/ - struct perf_session *session = perf_session__new(NULL, O_WRONLY, false); + struct perf_session *session = perf_session__new(NULL, O_WRONLY, false, false); if (session == NULL) return -ENOMEM; - if (target_pid != -1) - event__synthesize_thread(target_pid, event__process, session); + if (target_tid != -1) + event__synthesize_thread(target_tid, event__process, session); else event__synthesize_threads(event__process, session); @@ -1220,11 +1303,11 @@ static int __cmd_top(void) } /* Wait for a minimal set of events before starting the snapshot */ - poll(event_array, nr_poll, 100); + poll(&event_array[0], nr_poll, 100); perf_session__mmap_read(session); - if (pthread_create(&thread, NULL, display_thread, NULL)) { + if (pthread_create(&thread, NULL, display_thread, session)) { printf("Could not create display thread.\n"); exit(-1); } @@ -1263,7 +1346,9 @@ static const struct option options[] = { OPT_INTEGER('c', "count", &default_interval, "event period to sample"), OPT_INTEGER('p', "pid", &target_pid, - "profile events on existing pid"), + "profile events on existing process id"), + OPT_INTEGER('t', "tid", &target_tid, + "profile events on existing thread id"), OPT_BOOLEAN('a', "all-cpus", &system_wide, "system-wide collection from all CPUs"), OPT_INTEGER('C', "CPU", &profile_cpu, @@ -1272,8 +1357,7 @@ static const struct option options[] = { "file", "vmlinux pathname"), OPT_BOOLEAN('K', "hide_kernel_symbols", &hide_kernel_symbols, "hide kernel symbols"), - OPT_INTEGER('m', "mmap-pages", &mmap_pages, - "number of mmap data pages"), + OPT_UINTEGER('m', "mmap-pages", &mmap_pages, "number of mmap data pages"), OPT_INTEGER('r', "realtime", &realtime_prio, "collect data with this RT SCHED_FIFO priority"), OPT_INTEGER('d', "delay", &delay_secs, @@ -1296,7 +1380,7 @@ static const struct option options[] = { "display this many functions"), OPT_BOOLEAN('U', "hide_user_symbols", &hide_user_symbols, "hide user symbols"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show counter open errors, etc)"), OPT_END() }; @@ -1304,6 +1388,7 @@ static const struct option options[] = { int cmd_top(int argc, const char **argv, const char *prefix __used) { int counter; + int i,j; page_size = sysconf(_SC_PAGE_SIZE); @@ -1311,8 +1396,39 @@ int cmd_top(int argc, const char **argv, const char *prefix __used) if (argc) usage_with_options(top_usage, options); + if (target_pid != -1) { + target_tid = target_pid; + thread_num = find_all_tid(target_pid, &all_tids); + if (thread_num <= 0) { + fprintf(stderr, "Can't find all threads of pid %d\n", + target_pid); + usage_with_options(top_usage, options); + } + } else { + all_tids=malloc(sizeof(pid_t)); + if (!all_tids) + return -ENOMEM; + + all_tids[0] = target_tid; + thread_num = 1; + } + + for (i = 0; i < MAX_NR_CPUS; i++) { + for (j = 0; j < MAX_COUNTERS; j++) { + fd[i][j] = malloc(sizeof(int)*thread_num); + mmap_array[i][j] = zalloc( + sizeof(struct mmap_data)*thread_num); + if (!fd[i][j] || !mmap_array[i][j]) + return -ENOMEM; + } + } + event_array = malloc( + sizeof(struct pollfd)*MAX_NR_CPUS*MAX_COUNTERS*thread_num); + if (!event_array) + return -ENOMEM; + /* CPU and PID are mutually exclusive */ - if (target_pid != -1 && profile_cpu != -1) { + if (target_tid > 0 && profile_cpu != -1) { printf("WARNING: PID switch overriding CPU\n"); sleep(1); profile_cpu = -1; @@ -1353,7 +1469,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __used) attrs[counter].sample_period = default_interval; } - if (target_pid != -1 
|| profile_cpu != -1) + if (target_tid != -1 || profile_cpu != -1) nr_cpus = 1; else nr_cpus = read_cpu_map(); diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c index 407041d20de0..dddf3f01b5ab 100644 --- a/tools/perf/builtin-trace.c +++ b/tools/perf/builtin-trace.c @@ -11,6 +11,8 @@ static char const *script_name; static char const *generate_script_lang; +static bool debug_ordering; +static u64 last_timestamp; static int default_start_script(const char *script __unused, int argc __unused, @@ -51,6 +53,8 @@ static void setup_scripting(void) static int cleanup_scripting(void) { + pr_debug("\nperf trace script stopped\n"); + return scripting_ops->stop_script(); } @@ -87,6 +91,14 @@ static int process_sample_event(event_t *event, struct perf_session *session) } if (session->sample_type & PERF_SAMPLE_RAW) { + if (debug_ordering) { + if (data.time < last_timestamp) { + pr_err("Samples misordered, previous: %llu " + "this: %llu\n", last_timestamp, + data.time); + } + last_timestamp = data.time; + } /* * FIXME: better resolve from pid from the struct trace_entry * field, although it should be the same than this perf @@ -97,17 +109,31 @@ static int process_sample_event(event_t *event, struct perf_session *session) data.time, thread->comm); } - session->events_stats.total += data.period; + session->hists.stats.total_period += data.period; return 0; } static struct perf_event_ops event_ops = { .sample = process_sample_event, .comm = event__process_comm, + .attr = event__process_attr, + .event_type = event__process_event_type, + .tracing_data = event__process_tracing_data, + .build_id = event__process_build_id, + .ordered_samples = true, }; +extern volatile int session_done; + +static void sig_handler(int sig __unused) +{ + session_done = 1; +} + static int __cmd_trace(struct perf_session *session) { + signal(SIGINT, sig_handler); + return perf_session__process_events(session, &event_ops); } @@ -505,7 +531,7 @@ static const char * const trace_usage[] = { static const struct option options[] = { OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace, "dump raw trace in ASCII"), - OPT_BOOLEAN('v', "verbose", &verbose, + OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('L', "Latency", &latency_format, "show latency attributes (irqs/preemption disabled, etc)"), @@ -518,6 +544,8 @@ static const struct option options[] = { "generate perf-trace.xx script in specified language"), OPT_STRING('i', "input", &input_name, "file", "input file name"), + OPT_BOOLEAN('d', "debug-ordering", &debug_ordering, + "check that samples time ordering is monotonic"), OPT_END() }; @@ -548,6 +576,65 @@ int cmd_trace(int argc, const char **argv, const char *prefix __used) suffix = REPORT_SUFFIX; } + if (!suffix && argc >= 2 && strncmp(argv[1], "-", strlen("-")) != 0) { + char *record_script_path, *report_script_path; + int live_pipe[2]; + pid_t pid; + + record_script_path = get_script_path(argv[1], RECORD_SUFFIX); + if (!record_script_path) { + fprintf(stderr, "record script not found\n"); + return -1; + } + + report_script_path = get_script_path(argv[1], REPORT_SUFFIX); + if (!report_script_path) { + fprintf(stderr, "report script not found\n"); + return -1; + } + + if (pipe(live_pipe) < 0) { + perror("failed to create pipe"); + exit(-1); + } + + pid = fork(); + if (pid < 0) { + perror("failed to fork"); + exit(-1); + } + + if (!pid) { + dup2(live_pipe[1], 1); + close(live_pipe[0]); + + __argv = malloc(5 * sizeof(const char *)); + __argv[0] = "/bin/sh"; + __argv[1] = 
record_script_path; + __argv[2] = "-o"; + __argv[3] = "-"; + __argv[4] = NULL; + + execvp("/bin/sh", (char **)__argv); + exit(-1); + } + + dup2(live_pipe[0], 0); + close(live_pipe[1]); + + __argv = malloc((argc + 3) * sizeof(const char *)); + __argv[0] = "/bin/sh"; + __argv[1] = report_script_path; + for (i = 2; i < argc; i++) + __argv[i] = argv[i]; + __argv[i++] = "-i"; + __argv[i++] = "-"; + __argv[i++] = NULL; + + execvp("/bin/sh", (char **)__argv); + exit(-1); + } + if (suffix) { script_path = get_script_path(argv[2], suffix); if (!script_path) { @@ -576,11 +663,12 @@ int cmd_trace(int argc, const char **argv, const char *prefix __used) if (!script_name) setup_pager(); - session = perf_session__new(input_name, O_RDONLY, 0); + session = perf_session__new(input_name, O_RDONLY, 0, false); if (session == NULL) return -ENOMEM; - if (!perf_session__has_traces(session, "record -R")) + if (strcmp(input_name, "-") && + !perf_session__has_traces(session, "record -R")) return -EINVAL; if (generate_script_lang) { @@ -617,6 +705,7 @@ int cmd_trace(int argc, const char **argv, const char *prefix __used) err = scripting_ops->start_script(script_name, argc, argv); if (err) goto out; + pr_debug("perf trace started with script %s\n\n", script_name); } err = __cmd_trace(session); diff --git a/tools/perf/builtin.h b/tools/perf/builtin.h index 10fe49e7048a..921245b28583 100644 --- a/tools/perf/builtin.h +++ b/tools/perf/builtin.h @@ -32,5 +32,8 @@ extern int cmd_version(int argc, const char **argv, const char *prefix); extern int cmd_probe(int argc, const char **argv, const char *prefix); extern int cmd_kmem(int argc, const char **argv, const char *prefix); extern int cmd_lock(int argc, const char **argv, const char *prefix); +extern int cmd_kvm(int argc, const char **argv, const char *prefix); +extern int cmd_test(int argc, const char **argv, const char *prefix); +extern int cmd_inject(int argc, const char **argv, const char *prefix); #endif diff --git a/tools/perf/command-list.txt b/tools/perf/command-list.txt index db6ee94d4a8e..949d77fc0b97 100644 --- a/tools/perf/command-list.txt +++ b/tools/perf/command-list.txt @@ -8,6 +8,7 @@ perf-bench mainporcelain common perf-buildid-cache mainporcelain common perf-buildid-list mainporcelain common perf-diff mainporcelain common +perf-inject mainporcelain common perf-list mainporcelain common perf-sched mainporcelain common perf-record mainporcelain common @@ -19,3 +20,5 @@ perf-trace mainporcelain common perf-probe mainporcelain common perf-kmem mainporcelain common perf-lock mainporcelain common +perf-kvm mainporcelain common +perf-test mainporcelain common diff --git a/tools/perf/perf-archive.sh b/tools/perf/perf-archive.sh index 910468e6e01c..2e7a4f417e20 100644 --- a/tools/perf/perf-archive.sh +++ b/tools/perf/perf-archive.sh @@ -30,4 +30,7 @@ done tar cfj $PERF_DATA.tar.bz2 -C $DEBUGDIR -T $MANIFEST rm -f $MANIFEST $BUILDIDS +echo -e "Now please run:\n" +echo -e "$ tar xvf $PERF_DATA.tar.bz2 -C ~/.debug\n" +echo "wherever you need to run 'perf report' on." 
exit 0 diff --git a/tools/perf/perf.c b/tools/perf/perf.c index cd32c200cdb3..08e0e5d2b50e 100644 --- a/tools/perf/perf.c +++ b/tools/perf/perf.c @@ -13,9 +13,10 @@ #include "util/quote.h" #include "util/run-command.h" #include "util/parse-events.h" -#include "util/string.h" #include "util/debugfs.h" +bool use_browser; + const char perf_usage_string[] = "perf [--version] [--help] COMMAND [ARGS]"; @@ -262,6 +263,8 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv) set_debugfs_path(); status = p->fn(argc, argv, prefix); + exit_browser(status); + if (status) return status & 0xff; @@ -304,6 +307,9 @@ static void handle_internal_command(int argc, const char **argv) { "probe", cmd_probe, 0 }, { "kmem", cmd_kmem, 0 }, { "lock", cmd_lock, 0 }, + { "kvm", cmd_kvm, 0 }, + { "test", cmd_test, 0 }, + { "inject", cmd_inject, 0 }, }; unsigned int i; static const char ext[] = STRIP_EXTENSION; diff --git a/tools/perf/perf.h b/tools/perf/perf.h index 6fb379bc1d1f..ef7aa0a0c526 100644 --- a/tools/perf/perf.h +++ b/tools/perf/perf.h @@ -1,6 +1,10 @@ #ifndef _PERF_PERF_H #define _PERF_PERF_H +struct winsize; + +void get_term_dimensions(struct winsize *ws); + #if defined(__i386__) #include "../../arch/x86/include/asm/unistd.h" #define rmb() asm volatile("lock; addl $0,0(%%esp)" ::: "memory") @@ -76,6 +80,7 @@ #include "../../include/linux/perf_event.h" #include "util/types.h" +#include <stdbool.h> /* * prctl(PR_TASK_PERF_EVENTS_DISABLE) will (cheaply) disable all @@ -102,8 +107,6 @@ static inline unsigned long long rdclock(void) #define __user #define asmlinkage -#define __used __attribute__((__unused__)) - #define unlikely(x) __builtin_expect(!!(x), 0) #define min(x, y) ({ \ typeof(x) _min1 = (x); \ @@ -129,4 +132,6 @@ struct ip_callchain { u64 ips[0]; }; +extern bool perf_host, perf_guest; + #endif diff --git a/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/Util.pm b/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/Util.pm index f869c48dc9b0..d94b40c8ac85 100644 --- a/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/Util.pm +++ b/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/Util.pm @@ -15,6 +15,7 @@ our @EXPORT_OK = ( @{ $EXPORT_TAGS{'all'} } ); our @EXPORT = qw( avg nsecs nsecs_secs nsecs_nsecs nsecs_usecs print_nsecs +clear_term ); our $VERSION = '0.01'; @@ -55,6 +56,11 @@ sub nsecs_str { return $str; } +sub clear_term +{ + print "\x1b[H\x1b[2J"; +} + 1; __END__ =head1 NAME diff --git a/tools/perf/scripts/perl/bin/check-perf-trace-record b/tools/perf/scripts/perl/bin/check-perf-trace-record index e6cb1474f8e8..423ad6aed056 100644 --- a/tools/perf/scripts/perl/bin/check-perf-trace-record +++ b/tools/perf/scripts/perl/bin/check-perf-trace-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e kmem:kmalloc -e irq:softirq_entry -e kmem:kfree +perf record -a -e kmem:kmalloc -e irq:softirq_entry -e kmem:kfree diff --git a/tools/perf/scripts/perl/bin/failed-syscalls-record b/tools/perf/scripts/perl/bin/failed-syscalls-record index f8885d389e6f..eb5846bcb565 100644 --- a/tools/perf/scripts/perl/bin/failed-syscalls-record +++ b/tools/perf/scripts/perl/bin/failed-syscalls-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e raw_syscalls:sys_exit +perf record -a -e raw_syscalls:sys_exit $@ diff --git a/tools/perf/scripts/perl/bin/failed-syscalls-report b/tools/perf/scripts/perl/bin/failed-syscalls-report index 8bfc660e5056..e3a5e55d54ff 100644 --- a/tools/perf/scripts/perl/bin/failed-syscalls-report +++ 
b/tools/perf/scripts/perl/bin/failed-syscalls-report @@ -1,4 +1,10 @@ #!/bin/bash # description: system-wide failed syscalls # args: [comm] -perf trace -s ~/libexec/perf-core/scripts/perl/failed-syscalls.pl $1 +if [ $# -gt 0 ] ; then + if ! expr match "$1" "-" > /dev/null ; then + comm=$1 + shift + fi +fi +perf trace $@ -s ~/libexec/perf-core/scripts/perl/failed-syscalls.pl $comm diff --git a/tools/perf/scripts/perl/bin/rw-by-file-record b/tools/perf/scripts/perl/bin/rw-by-file-record index b25056ebf963..5bfaae5a6cba 100644 --- a/tools/perf/scripts/perl/bin/rw-by-file-record +++ b/tools/perf/scripts/perl/bin/rw-by-file-record @@ -1,2 +1,3 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e syscalls:sys_enter_read -e syscalls:sys_enter_write +perf record -a -e syscalls:sys_enter_read -e syscalls:sys_enter_write $@ + diff --git a/tools/perf/scripts/perl/bin/rw-by-file-report b/tools/perf/scripts/perl/bin/rw-by-file-report index eddb9ccce6a5..d83070b7eeb5 100644 --- a/tools/perf/scripts/perl/bin/rw-by-file-report +++ b/tools/perf/scripts/perl/bin/rw-by-file-report @@ -1,7 +1,13 @@ #!/bin/bash # description: r/w activity for a program, by file # args: <comm> -perf trace -s ~/libexec/perf-core/scripts/perl/rw-by-file.pl $1 +if [ $# -lt 1 ] ; then + echo "usage: rw-by-file <comm>" + exit +fi +comm=$1 +shift +perf trace $@ -s ~/libexec/perf-core/scripts/perl/rw-by-file.pl $comm diff --git a/tools/perf/scripts/perl/bin/rw-by-pid-record b/tools/perf/scripts/perl/bin/rw-by-pid-record index 8903979c5b6c..6e0b2f7755ac 100644 --- a/tools/perf/scripts/perl/bin/rw-by-pid-record +++ b/tools/perf/scripts/perl/bin/rw-by-pid-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write +perf record -a -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write $@ diff --git a/tools/perf/scripts/perl/bin/rw-by-pid-report b/tools/perf/scripts/perl/bin/rw-by-pid-report index 7f44c25cc857..7ef46983f62f 100644 --- a/tools/perf/scripts/perl/bin/rw-by-pid-report +++ b/tools/perf/scripts/perl/bin/rw-by-pid-report @@ -1,6 +1,6 @@ #!/bin/bash # description: system-wide r/w activity -perf trace -s ~/libexec/perf-core/scripts/perl/rw-by-pid.pl +perf trace $@ -s ~/libexec/perf-core/scripts/perl/rw-by-pid.pl diff --git a/tools/perf/scripts/perl/bin/rwtop-record b/tools/perf/scripts/perl/bin/rwtop-record new file mode 100644 index 000000000000..6e0b2f7755ac --- /dev/null +++ b/tools/perf/scripts/perl/bin/rwtop-record @@ -0,0 +1,2 @@ +#!/bin/bash +perf record -a -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write $@ diff --git a/tools/perf/scripts/perl/bin/rwtop-report b/tools/perf/scripts/perl/bin/rwtop-report new file mode 100644 index 000000000000..93e698cd3f38 --- /dev/null +++ b/tools/perf/scripts/perl/bin/rwtop-report @@ -0,0 +1,23 @@ +#!/bin/bash +# description: system-wide r/w top +# args: [interval] +n_args=0 +for i in "$@" +do + if expr match "$i" "-" > /dev/null ; then + break + fi + n_args=$(( $n_args + 1 )) +done +if [ "$n_args" -gt 1 ] ; then + echo "usage: rwtop-report [interval]" + exit +fi +if [ "$n_args" -gt 0 ] ; then + interval=$1 + shift +fi +perf trace $@ -s ~/libexec/perf-core/scripts/perl/rwtop.pl $interval + + + diff --git a/tools/perf/scripts/perl/bin/wakeup-latency-record b/tools/perf/scripts/perl/bin/wakeup-latency-record index 6abedda911a4..9f2acaaae9f0 100644 --- 
a/tools/perf/scripts/perl/bin/wakeup-latency-record +++ b/tools/perf/scripts/perl/bin/wakeup-latency-record @@ -1,5 +1,5 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e sched:sched_switch -e sched:sched_wakeup +perf record -a -e sched:sched_switch -e sched:sched_wakeup $@ diff --git a/tools/perf/scripts/perl/bin/wakeup-latency-report b/tools/perf/scripts/perl/bin/wakeup-latency-report index fce3adcb3249..a0d898f9ca1d 100644 --- a/tools/perf/scripts/perl/bin/wakeup-latency-report +++ b/tools/perf/scripts/perl/bin/wakeup-latency-report @@ -1,6 +1,6 @@ #!/bin/bash # description: system-wide min/max/avg wakeup latency -perf trace -s ~/libexec/perf-core/scripts/perl/wakeup-latency.pl +perf trace $@ -s ~/libexec/perf-core/scripts/perl/wakeup-latency.pl diff --git a/tools/perf/scripts/perl/bin/workqueue-stats-record b/tools/perf/scripts/perl/bin/workqueue-stats-record index fce6637b19ba..85301f2471ff 100644 --- a/tools/perf/scripts/perl/bin/workqueue-stats-record +++ b/tools/perf/scripts/perl/bin/workqueue-stats-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e workqueue:workqueue_creation -e workqueue:workqueue_destruction -e workqueue:workqueue_execution -e workqueue:workqueue_insertion +perf record -a -e workqueue:workqueue_creation -e workqueue:workqueue_destruction -e workqueue:workqueue_execution -e workqueue:workqueue_insertion $@ diff --git a/tools/perf/scripts/perl/bin/workqueue-stats-report b/tools/perf/scripts/perl/bin/workqueue-stats-report index 71cfbd182fb9..35081132ef97 100644 --- a/tools/perf/scripts/perl/bin/workqueue-stats-report +++ b/tools/perf/scripts/perl/bin/workqueue-stats-report @@ -1,6 +1,6 @@ #!/bin/bash # description: workqueue stats (ins/exe/create/destroy) -perf trace -s ~/libexec/perf-core/scripts/perl/workqueue-stats.pl +perf trace $@ -s ~/libexec/perf-core/scripts/perl/workqueue-stats.pl diff --git a/tools/perf/scripts/perl/failed-syscalls.pl b/tools/perf/scripts/perl/failed-syscalls.pl index c18e7e27a84b..94bc25a347eb 100644 --- a/tools/perf/scripts/perl/failed-syscalls.pl +++ b/tools/perf/scripts/perl/failed-syscalls.pl @@ -11,6 +11,8 @@ use Perf::Trace::Core; use Perf::Trace::Context; use Perf::Trace::Util; +my $for_comm = shift; + my %failed_syscalls; sub raw_syscalls::sys_exit @@ -33,6 +35,8 @@ sub trace_end foreach my $comm (sort {$failed_syscalls{$b} <=> $failed_syscalls{$a}} keys %failed_syscalls) { - printf("%-20s %10s\n", $comm, $failed_syscalls{$comm}); + next if ($for_comm && $comm ne $for_comm); + + printf("%-20s %10s\n", $comm, $failed_syscalls{$comm}); } } diff --git a/tools/perf/scripts/perl/rw-by-pid.pl b/tools/perf/scripts/perl/rw-by-pid.pl index da601fae1a00..9db23c9daf55 100644 --- a/tools/perf/scripts/perl/rw-by-pid.pl +++ b/tools/perf/scripts/perl/rw-by-pid.pl @@ -79,12 +79,12 @@ sub trace_end printf("%6s %-20s %10s %10s %10s\n", "------", "--------------------", "-----------", "----------", "----------"); - foreach my $pid (sort {$reads{$b}{bytes_read} <=> - $reads{$a}{bytes_read}} keys %reads) { - my $comm = $reads{$pid}{comm}; - my $total_reads = $reads{$pid}{total_reads}; - my $bytes_requested = $reads{$pid}{bytes_requested}; - my $bytes_read = $reads{$pid}{bytes_read}; + foreach my $pid (sort { ($reads{$b}{bytes_read} || 0) <=> + ($reads{$a}{bytes_read} || 0) } keys %reads) { + my $comm = $reads{$pid}{comm} || ""; + my $total_reads = $reads{$pid}{total_reads} || 0; + my $bytes_requested = $reads{$pid}{bytes_requested} || 0; + my $bytes_read = $reads{$pid}{bytes_read} || 0; printf("%6s %-20s %10s %10s %10s\n", $pid, 
$comm, $total_reads, $bytes_requested, $bytes_read); @@ -96,16 +96,23 @@ sub trace_end printf("%6s %20s %6s %10s\n", "------", "--------------------", "------", "----------"); - foreach my $pid (keys %reads) { - my $comm = $reads{$pid}{comm}; - foreach my $err (sort {$reads{$b}{comm} cmp $reads{$a}{comm}} - keys %{$reads{$pid}{errors}}) { - my $errors = $reads{$pid}{errors}{$err}; + my @errcounts = (); - printf("%6d %-20s %6d %10s\n", $pid, $comm, $err, $errors); + foreach my $pid (keys %reads) { + foreach my $error (keys %{$reads{$pid}{errors}}) { + my $comm = $reads{$pid}{comm} || ""; + my $errcount = $reads{$pid}{errors}{$error} || 0; + push @errcounts, [$pid, $comm, $error, $errcount]; } } + @errcounts = sort { $b->[3] <=> $a->[3] } @errcounts; + + for my $i (0 .. $#errcounts) { + printf("%6d %-20s %6d %10s\n", $errcounts[$i][0], + $errcounts[$i][1], $errcounts[$i][2], $errcounts[$i][3]); + } + printf("\nwrite counts by pid:\n\n"); printf("%6s %20s %10s %10s\n", "pid", "comm", @@ -113,11 +120,11 @@ sub trace_end printf("%6s %-20s %10s %10s\n", "------", "--------------------", "-----------", "----------"); - foreach my $pid (sort {$writes{$b}{bytes_written} <=> - $writes{$a}{bytes_written}} keys %writes) { - my $comm = $writes{$pid}{comm}; - my $total_writes = $writes{$pid}{total_writes}; - my $bytes_written = $writes{$pid}{bytes_written}; + foreach my $pid (sort { ($writes{$b}{bytes_written} || 0) <=> + ($writes{$a}{bytes_written} || 0)} keys %writes) { + my $comm = $writes{$pid}{comm} || ""; + my $total_writes = $writes{$pid}{total_writes} || 0; + my $bytes_written = $writes{$pid}{bytes_written} || 0; printf("%6s %-20s %10s %10s\n", $pid, $comm, $total_writes, $bytes_written); @@ -129,16 +136,23 @@ sub trace_end printf("%6s %20s %6s %10s\n", "------", "--------------------", "------", "----------"); - foreach my $pid (keys %writes) { - my $comm = $writes{$pid}{comm}; - foreach my $err (sort {$writes{$b}{comm} cmp $writes{$a}{comm}} - keys %{$writes{$pid}{errors}}) { - my $errors = $writes{$pid}{errors}{$err}; + @errcounts = (); - printf("%6d %-20s %6d %10s\n", $pid, $comm, $err, $errors); + foreach my $pid (keys %writes) { + foreach my $error (keys %{$writes{$pid}{errors}}) { + my $comm = $writes{$pid}{comm} || ""; + my $errcount = $writes{$pid}{errors}{$error} || 0; + push @errcounts, [$pid, $comm, $error, $errcount]; } } + @errcounts = sort { $b->[3] <=> $a->[3] } @errcounts; + + for my $i (0 .. $#errcounts) { + printf("%6d %-20s %6d %10s\n", $errcounts[$i][0], + $errcounts[$i][1], $errcounts[$i][2], $errcounts[$i][3]); + } + print_unhandled(); } diff --git a/tools/perf/scripts/perl/rwtop.pl b/tools/perf/scripts/perl/rwtop.pl new file mode 100644 index 000000000000..4bb3ecd33472 --- /dev/null +++ b/tools/perf/scripts/perl/rwtop.pl @@ -0,0 +1,199 @@ +#!/usr/bin/perl -w +# (c) 2010, Tom Zanussi <tzanussi@gmail.com> +# Licensed under the terms of the GNU GPL License version 2 + +# read/write top +# +# Periodically displays system-wide r/w call activity, broken down by +# pid. If an [interval] arg is specified, the display will be +# refreshed every [interval] seconds. The default interval is 3 +# seconds. 
+ +use 5.010000; +use strict; +use warnings; + +use lib "$ENV{'PERF_EXEC_PATH'}/scripts/perl/Perf-Trace-Util/lib"; +use lib "./Perf-Trace-Util/lib"; +use Perf::Trace::Core; +use Perf::Trace::Util; + +my $default_interval = 3; +my $nlines = 20; +my $print_thread; +my $print_pending = 0; + +my %reads; +my %writes; + +my $interval = shift; +if (!$interval) { + $interval = $default_interval; +} + +sub syscalls::sys_exit_read +{ + my ($event_name, $context, $common_cpu, $common_secs, $common_nsecs, + $common_pid, $common_comm, + $nr, $ret) = @_; + + print_check(); + + if ($ret > 0) { + $reads{$common_pid}{bytes_read} += $ret; + } else { + if (!defined ($reads{$common_pid}{bytes_read})) { + $reads{$common_pid}{bytes_read} = 0; + } + $reads{$common_pid}{errors}{$ret}++; + } +} + +sub syscalls::sys_enter_read +{ + my ($event_name, $context, $common_cpu, $common_secs, $common_nsecs, + $common_pid, $common_comm, + $nr, $fd, $buf, $count) = @_; + + print_check(); + + $reads{$common_pid}{bytes_requested} += $count; + $reads{$common_pid}{total_reads}++; + $reads{$common_pid}{comm} = $common_comm; +} + +sub syscalls::sys_exit_write +{ + my ($event_name, $context, $common_cpu, $common_secs, $common_nsecs, + $common_pid, $common_comm, + $nr, $ret) = @_; + + print_check(); + + if ($ret <= 0) { + $writes{$common_pid}{errors}{$ret}++; + } +} + +sub syscalls::sys_enter_write +{ + my ($event_name, $context, $common_cpu, $common_secs, $common_nsecs, + $common_pid, $common_comm, + $nr, $fd, $buf, $count) = @_; + + print_check(); + + $writes{$common_pid}{bytes_written} += $count; + $writes{$common_pid}{total_writes}++; + $writes{$common_pid}{comm} = $common_comm; +} + +sub trace_begin +{ + $SIG{ALRM} = \&set_print_pending; + alarm 1; +} + +sub trace_end +{ + print_unhandled(); + print_totals(); +} + +sub print_check() +{ + if ($print_pending == 1) { + $print_pending = 0; + print_totals(); + } +} + +sub set_print_pending() +{ + $print_pending = 1; + alarm $interval; +} + +sub print_totals +{ + my $count; + + $count = 0; + + clear_term(); + + printf("\nread counts by pid:\n\n"); + + printf("%6s %20s %10s %10s %10s\n", "pid", "comm", + "# reads", "bytes_req", "bytes_read"); + printf("%6s %-20s %10s %10s %10s\n", "------", "--------------------", + "----------", "----------", "----------"); + + foreach my $pid (sort { ($reads{$b}{bytes_read} || 0) <=> + ($reads{$a}{bytes_read} || 0) } keys %reads) { + my $comm = $reads{$pid}{comm} || ""; + my $total_reads = $reads{$pid}{total_reads} || 0; + my $bytes_requested = $reads{$pid}{bytes_requested} || 0; + my $bytes_read = $reads{$pid}{bytes_read} || 0; + + printf("%6s %-20s %10s %10s %10s\n", $pid, $comm, + $total_reads, $bytes_requested, $bytes_read); + + if (++$count == $nlines) { + last; + } + } + + $count = 0; + + printf("\nwrite counts by pid:\n\n"); + + printf("%6s %20s %10s %13s\n", "pid", "comm", + "# writes", "bytes_written"); + printf("%6s %-20s %10s %13s\n", "------", "--------------------", + "----------", "-------------"); + + foreach my $pid (sort { ($writes{$b}{bytes_written} || 0) <=> + ($writes{$a}{bytes_written} || 0)} keys %writes) { + my $comm = $writes{$pid}{comm} || ""; + my $total_writes = $writes{$pid}{total_writes} || 0; + my $bytes_written = $writes{$pid}{bytes_written} || 0; + + printf("%6s %-20s %10s %13s\n", $pid, $comm, + $total_writes, $bytes_written); + + if (++$count == $nlines) { + last; + } + } + + %reads = (); + %writes = (); +} + +my %unhandled; + +sub print_unhandled +{ + if ((scalar keys %unhandled) == 0) { + return; + } + + print 
"\nunhandled events:\n\n"; + + printf("%-40s %10s\n", "event", "count"); + printf("%-40s %10s\n", "----------------------------------------", + "-----------"); + + foreach my $event_name (keys %unhandled) { + printf("%-40s %10d\n", $event_name, $unhandled{$event_name}); + } +} + +sub trace_unhandled +{ + my ($event_name, $context, $common_cpu, $common_secs, $common_nsecs, + $common_pid, $common_comm) = @_; + + $unhandled{$event_name}++; +} diff --git a/tools/perf/scripts/perl/wakeup-latency.pl b/tools/perf/scripts/perl/wakeup-latency.pl index ed58ef284e23..d9143dcec6c6 100644 --- a/tools/perf/scripts/perl/wakeup-latency.pl +++ b/tools/perf/scripts/perl/wakeup-latency.pl @@ -22,8 +22,8 @@ my %last_wakeup; my $max_wakeup_latency; my $min_wakeup_latency; -my $total_wakeup_latency; -my $total_wakeups; +my $total_wakeup_latency = 0; +my $total_wakeups = 0; sub sched::sched_switch { @@ -67,8 +67,12 @@ sub trace_end { printf("wakeup_latency stats:\n\n"); print "total_wakeups: $total_wakeups\n"; - printf("avg_wakeup_latency (ns): %u\n", - avg($total_wakeup_latency, $total_wakeups)); + if ($total_wakeups) { + printf("avg_wakeup_latency (ns): %u\n", + avg($total_wakeup_latency, $total_wakeups)); + } else { + printf("avg_wakeup_latency (ns): N/A\n"); + } printf("min_wakeup_latency (ns): %u\n", $min_wakeup_latency); printf("max_wakeup_latency (ns): %u\n", $max_wakeup_latency); diff --git a/tools/perf/scripts/perl/workqueue-stats.pl b/tools/perf/scripts/perl/workqueue-stats.pl index 511302c8a494..b84b12699b70 100644 --- a/tools/perf/scripts/perl/workqueue-stats.pl +++ b/tools/perf/scripts/perl/workqueue-stats.pl @@ -71,9 +71,9 @@ sub trace_end printf("%3s %6s %6s\t%-20s\n", "---", "---", "----", "----"); foreach my $pidhash (@cpus) { while ((my $pid, my $wqhash) = each %$pidhash) { - my $ins = $$wqhash{'inserted'}; - my $exe = $$wqhash{'executed'}; - my $comm = $$wqhash{'comm'}; + my $ins = $$wqhash{'inserted'} || 0; + my $exe = $$wqhash{'executed'} || 0; + my $comm = $$wqhash{'comm'} || ""; if ($ins || $exe) { printf("%3u %6u %6u\t%-20s\n", $cpu, $ins, $exe, $comm); } @@ -87,9 +87,9 @@ sub trace_end printf("%3s %6s %6s\t%-20s\n", "---", "-------", "---------", "----"); foreach my $pidhash (@cpus) { while ((my $pid, my $wqhash) = each %$pidhash) { - my $created = $$wqhash{'created'}; - my $destroyed = $$wqhash{'destroyed'}; - my $comm = $$wqhash{'comm'}; + my $created = $$wqhash{'created'} || 0; + my $destroyed = $$wqhash{'destroyed'} || 0; + my $comm = $$wqhash{'comm'} || ""; if ($created || $destroyed) { printf("%3u %6u %6u\t%-20s\n", $cpu, $created, $destroyed, $comm); diff --git a/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py b/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py index 83e91435ed09..9689bc0acd9f 100644 --- a/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py +++ b/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py @@ -23,3 +23,6 @@ def nsecs_nsecs(nsecs): def nsecs_str(nsecs): str = "%5u.%09u" % (nsecs_secs(nsecs), nsecs_nsecs(nsecs)), return str + +def clear_term(): + print("\x1b[H\x1b[2J") diff --git a/tools/perf/scripts/python/bin/failed-syscalls-by-pid-record b/tools/perf/scripts/python/bin/failed-syscalls-by-pid-record index f8885d389e6f..eb5846bcb565 100644 --- a/tools/perf/scripts/python/bin/failed-syscalls-by-pid-record +++ b/tools/perf/scripts/python/bin/failed-syscalls-by-pid-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e raw_syscalls:sys_exit +perf record -a -e raw_syscalls:sys_exit $@ diff 
--git a/tools/perf/scripts/python/bin/failed-syscalls-by-pid-report b/tools/perf/scripts/python/bin/failed-syscalls-by-pid-report index 1e0c0a860c87..30293545fcc2 100644 --- a/tools/perf/scripts/python/bin/failed-syscalls-by-pid-report +++ b/tools/perf/scripts/python/bin/failed-syscalls-by-pid-report @@ -1,4 +1,10 @@ #!/bin/bash # description: system-wide failed syscalls, by pid # args: [comm] -perf trace -s ~/libexec/perf-core/scripts/python/failed-syscalls-by-pid.py $1 +if [ $# -gt 0 ] ; then + if ! expr match "$1" "-" > /dev/null ; then + comm=$1 + shift + fi +fi +perf trace $@ -s ~/libexec/perf-core/scripts/python/failed-syscalls-by-pid.py $comm diff --git a/tools/perf/scripts/python/bin/sctop-record b/tools/perf/scripts/python/bin/sctop-record new file mode 100644 index 000000000000..1fc5998b721d --- /dev/null +++ b/tools/perf/scripts/python/bin/sctop-record @@ -0,0 +1,2 @@ +#!/bin/bash +perf record -a -e raw_syscalls:sys_enter $@ diff --git a/tools/perf/scripts/python/bin/sctop-report b/tools/perf/scripts/python/bin/sctop-report new file mode 100644 index 000000000000..b01c842ae7b4 --- /dev/null +++ b/tools/perf/scripts/python/bin/sctop-report @@ -0,0 +1,24 @@ +#!/bin/bash +# description: syscall top +# args: [comm] [interval] +n_args=0 +for i in "$@" +do + if expr match "$i" "-" > /dev/null ; then + break + fi + n_args=$(( $n_args + 1 )) +done +if [ "$n_args" -gt 2 ] ; then + echo "usage: sctop-report [comm] [interval]" + exit +fi +if [ "$n_args" -gt 1 ] ; then + comm=$1 + interval=$2 + shift 2 +elif [ "$n_args" -gt 0 ] ; then + interval=$1 + shift +fi +perf trace $@ -s ~/libexec/perf-core/scripts/python/sctop.py $comm $interval diff --git a/tools/perf/scripts/python/bin/syscall-counts-by-pid-record b/tools/perf/scripts/python/bin/syscall-counts-by-pid-record index 45a8c50359da..1fc5998b721d 100644 --- a/tools/perf/scripts/python/bin/syscall-counts-by-pid-record +++ b/tools/perf/scripts/python/bin/syscall-counts-by-pid-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter +perf record -a -e raw_syscalls:sys_enter $@ diff --git a/tools/perf/scripts/python/bin/syscall-counts-by-pid-report b/tools/perf/scripts/python/bin/syscall-counts-by-pid-report index f8044d192271..9e9d8ddd72ce 100644 --- a/tools/perf/scripts/python/bin/syscall-counts-by-pid-report +++ b/tools/perf/scripts/python/bin/syscall-counts-by-pid-report @@ -1,4 +1,10 @@ #!/bin/bash # description: system-wide syscall counts, by pid # args: [comm] -perf trace -s ~/libexec/perf-core/scripts/python/syscall-counts-by-pid.py $1 +if [ $# -gt 0 ] ; then + if ! 
expr match "$1" "-" > /dev/null ; then + comm=$1 + shift + fi +fi +perf trace $@ -s ~/libexec/perf-core/scripts/python/syscall-counts-by-pid.py $comm diff --git a/tools/perf/scripts/python/bin/syscall-counts-record b/tools/perf/scripts/python/bin/syscall-counts-record index 45a8c50359da..1fc5998b721d 100644 --- a/tools/perf/scripts/python/bin/syscall-counts-record +++ b/tools/perf/scripts/python/bin/syscall-counts-record @@ -1,2 +1,2 @@ #!/bin/bash -perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter +perf record -a -e raw_syscalls:sys_enter $@ diff --git a/tools/perf/scripts/python/bin/syscall-counts-report b/tools/perf/scripts/python/bin/syscall-counts-report index a366aa61612f..dc076b618796 100644 --- a/tools/perf/scripts/python/bin/syscall-counts-report +++ b/tools/perf/scripts/python/bin/syscall-counts-report @@ -1,4 +1,10 @@ #!/bin/bash # description: system-wide syscall counts # args: [comm] -perf trace -s ~/libexec/perf-core/scripts/python/syscall-counts.py $1 +if [ $# -gt 0 ] ; then + if ! expr match "$1" "-" > /dev/null ; then + comm=$1 + shift + fi +fi +perf trace $@ -s ~/libexec/perf-core/scripts/python/syscall-counts.py $comm diff --git a/tools/perf/scripts/python/sctop.py b/tools/perf/scripts/python/sctop.py new file mode 100644 index 000000000000..6cafad40c296 --- /dev/null +++ b/tools/perf/scripts/python/sctop.py @@ -0,0 +1,78 @@ +# system call top +# (c) 2010, Tom Zanussi <tzanussi@gmail.com> +# Licensed under the terms of the GNU GPL License version 2 +# +# Periodically displays system-wide system call totals, broken down by +# syscall. If a [comm] arg is specified, only syscalls called by +# [comm] are displayed. If an [interval] arg is specified, the display +# will be refreshed every [interval] seconds. The default interval is +# 3 seconds. 
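
The script body that follows starts a background thread from trace_begin(); that thread clears the terminal, prints the accumulated per-syscall counts in descending order, resets them, and sleeps for [interval] seconds, while the sys_enter handler only increments counters. A rough, self-contained sketch of the same refresh loop, using the modern threading module rather than the script's Python 2 thread module (all names below are illustrative, not from the patch):

# Minimal sketch of the periodic-display pattern used by sctop.py:
# a background thread wakes up every `interval` seconds, prints the
# counts gathered so far sorted by count, then resets them.
import threading
import time

counts = {}          # syscall id -> number of calls seen this interval
lock = threading.Lock()

def record_event(syscall_id):
    with lock:
        counts[syscall_id] = counts.get(syscall_id, 0) + 1

def poll_loop(interval):
    while True:
        time.sleep(interval)
        with lock:
            snapshot = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
            counts.clear()
        for syscall_id, n in snapshot:
            print("%-40d %10d" % (syscall_id, n))

threading.Thread(target=poll_loop, args=(3,), daemon=True).start()
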
+ +import thread +import time +import os +import sys + +sys.path.append(os.environ['PERF_EXEC_PATH'] + \ + '/scripts/python/Perf-Trace-Util/lib/Perf/Trace') + +from perf_trace_context import * +from Core import * +from Util import * + +usage = "perf trace -s syscall-counts.py [comm] [interval]\n"; + +for_comm = None +default_interval = 3 +interval = default_interval + +if len(sys.argv) > 3: + sys.exit(usage) + +if len(sys.argv) > 2: + for_comm = sys.argv[1] + interval = int(sys.argv[2]) +elif len(sys.argv) > 1: + try: + interval = int(sys.argv[1]) + except ValueError: + for_comm = sys.argv[1] + interval = default_interval + +syscalls = autodict() + +def trace_begin(): + thread.start_new_thread(print_syscall_totals, (interval,)) + pass + +def raw_syscalls__sys_enter(event_name, context, common_cpu, + common_secs, common_nsecs, common_pid, common_comm, + id, args): + if for_comm is not None: + if common_comm != for_comm: + return + try: + syscalls[id] += 1 + except TypeError: + syscalls[id] = 1 + +def print_syscall_totals(interval): + while 1: + clear_term() + if for_comm is not None: + print "\nsyscall events for %s:\n\n" % (for_comm), + else: + print "\nsyscall events:\n\n", + + print "%-40s %10s\n" % ("event", "count"), + print "%-40s %10s\n" % ("----------------------------------------", \ + "----------"), + + for id, val in sorted(syscalls.iteritems(), key = lambda(k, v): (v, k), \ + reverse = True): + try: + print "%-40d %10d\n" % (id, val), + except TypeError: + pass + syscalls.clear() + time.sleep(interval) diff --git a/tools/perf/util/PERF-VERSION-GEN b/tools/perf/util/PERF-VERSION-GEN index 54552a00a117..49ece7921914 100755 --- a/tools/perf/util/PERF-VERSION-GEN +++ b/tools/perf/util/PERF-VERSION-GEN @@ -1,6 +1,10 @@ #!/bin/sh -GVF=PERF-VERSION-FILE +if [ $# -eq 1 ] ; then + OUTPUT=$1 +fi + +GVF=${OUTPUT}PERF-VERSION-FILE DEF_VER=v0.0.2.PERF LF=' diff --git a/tools/perf/util/bitmap.c b/tools/perf/util/bitmap.c new file mode 100644 index 000000000000..5e230acae1e9 --- /dev/null +++ b/tools/perf/util/bitmap.c @@ -0,0 +1,21 @@ +/* + * From lib/bitmap.c + * Helper functions for bitmap.h. + * + * This source code is licensed under the GNU General Public License, + * Version 2. See the file COPYING for more details. 
+ */ +#include <linux/bitmap.h> + +int __bitmap_weight(const unsigned long *bitmap, int bits) +{ + int k, w = 0, lim = bits/BITS_PER_LONG; + + for (k = 0; k < lim; k++) + w += hweight_long(bitmap[k]); + + if (bits % BITS_PER_LONG) + w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits)); + + return w; +} diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c index 04904b35ba81..0f60a3906808 100644 --- a/tools/perf/util/build-id.c +++ b/tools/perf/util/build-id.c @@ -24,7 +24,7 @@ static int build_id__mark_dso_hit(event_t *event, struct perf_session *session) } thread__find_addr_map(thread, session, cpumode, MAP__FUNCTION, - event->ip.ip, &al); + event->ip.pid, event->ip.ip, &al); if (al.map != NULL) al.map->dso->hit = 1; diff --git a/tools/perf/util/cache.h b/tools/perf/util/cache.h index 918eb376abe3..4b9aab7f0405 100644 --- a/tools/perf/util/cache.h +++ b/tools/perf/util/cache.h @@ -1,6 +1,7 @@ #ifndef __PERF_CACHE_H #define __PERF_CACHE_H +#include <stdbool.h> #include "util.h" #include "strbuf.h" #include "../perf.h" @@ -69,6 +70,19 @@ extern const char *pager_program; extern int pager_in_use(void); extern int pager_use_color; +extern bool use_browser; + +#ifdef NO_NEWT_SUPPORT +static inline void setup_browser(void) +{ + setup_pager(); +} +static inline void exit_browser(bool wait_for_ok __used) {} +#else +void setup_browser(void); +void exit_browser(bool wait_for_ok); +#endif + extern const char *editor_program; extern const char *excludes_file; diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c index b3b71258272a..21a52e0a4435 100644 --- a/tools/perf/util/callchain.c +++ b/tools/perf/util/callchain.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2009, Frederic Weisbecker <fweisbec@gmail.com> + * Copyright (C) 2009-2010, Frederic Weisbecker <fweisbec@gmail.com> * * Handle the callchains from the stream in an ad-hoc radix tree and then * sort them in an rbtree. 
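
The callchain.c changes below replace the raw ip array plus parallel symbol array with a resolved_chain of (ip, map/symbol) pairs; filter_context() builds that array up front, dropping the PERF_CONTEXT_* marker entries so the tree-insertion code never sees them. A rough sketch of that filtering step, in Python with illustrative names (the marker cutoff value is an assumption, not taken from the patch):

# Sketch of what filter_context() does: walk the raw callchain, drop
# the PERF_CONTEXT_* marker entries, and keep each remaining ip paired
# with the symbol resolved for it.
PERF_CONTEXT_MAX = (1 << 64) - 4095   # assumed u64 cutoff for context markers

def filter_context(ips, syms):
    """Return [(ip, sym), ...] with context markers removed."""
    resolved = []
    for ip, sym in zip(ips, syms):
        if ip >= PERF_CONTEXT_MAX:
            continue                  # context marker, not a real return address
        resolved.append((ip, sym))
    return resolved
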
@@ -17,6 +17,13 @@ #include "callchain.h" +bool ip_callchain__valid(struct ip_callchain *chain, event_t *event) +{ + unsigned int chain_size = event->header.size; + chain_size -= (unsigned long)&event->ip.__more_data - (unsigned long)event; + return chain->nr * sizeof(u64) <= chain_size; +} + #define chain_for_each_child(child, parent) \ list_for_each_entry(child, &parent->children, brothers) @@ -160,7 +167,7 @@ create_child(struct callchain_node *parent, bool inherit_children) { struct callchain_node *new; - new = malloc(sizeof(*new)); + new = zalloc(sizeof(*new)); if (!new) { perror("not enough memory to create child for code path tree"); return NULL; @@ -183,25 +190,36 @@ create_child(struct callchain_node *parent, bool inherit_children) return new; } + +struct resolved_ip { + u64 ip; + struct map_symbol ms; +}; + +struct resolved_chain { + u64 nr; + struct resolved_ip ips[0]; +}; + + /* * Fill the node with callchain values */ static void -fill_node(struct callchain_node *node, struct ip_callchain *chain, - int start, struct symbol **syms) +fill_node(struct callchain_node *node, struct resolved_chain *chain, int start) { unsigned int i; for (i = start; i < chain->nr; i++) { struct callchain_list *call; - call = malloc(sizeof(*call)); + call = zalloc(sizeof(*call)); if (!call) { perror("not enough memory for the code path tree"); return; } - call->ip = chain->ips[i]; - call->sym = syms[i]; + call->ip = chain->ips[i].ip; + call->ms = chain->ips[i].ms; list_add_tail(&call->list, &node->val); } node->val_nr = chain->nr - start; @@ -210,13 +228,13 @@ fill_node(struct callchain_node *node, struct ip_callchain *chain, } static void -add_child(struct callchain_node *parent, struct ip_callchain *chain, - int start, struct symbol **syms) +add_child(struct callchain_node *parent, struct resolved_chain *chain, + int start) { struct callchain_node *new; new = create_child(parent, false); - fill_node(new, chain, start, syms); + fill_node(new, chain, start); new->children_hit = 0; new->hit = 1; @@ -228,9 +246,8 @@ add_child(struct callchain_node *parent, struct ip_callchain *chain, * Then create another child to host the given callchain of new branch */ static void -split_add_child(struct callchain_node *parent, struct ip_callchain *chain, - struct callchain_list *to_split, int idx_parents, int idx_local, - struct symbol **syms) +split_add_child(struct callchain_node *parent, struct resolved_chain *chain, + struct callchain_list *to_split, int idx_parents, int idx_local) { struct callchain_node *new; struct list_head *old_tail; @@ -257,7 +274,7 @@ split_add_child(struct callchain_node *parent, struct ip_callchain *chain, /* create a new child for the new branch if any */ if (idx_total < chain->nr) { parent->hit = 0; - add_child(parent, chain, idx_total, syms); + add_child(parent, chain, idx_total); parent->children_hit++; } else { parent->hit = 1; @@ -265,32 +282,33 @@ split_add_child(struct callchain_node *parent, struct ip_callchain *chain, } static int -__append_chain(struct callchain_node *root, struct ip_callchain *chain, - unsigned int start, struct symbol **syms); +__append_chain(struct callchain_node *root, struct resolved_chain *chain, + unsigned int start); static void -__append_chain_children(struct callchain_node *root, struct ip_callchain *chain, - struct symbol **syms, unsigned int start) +__append_chain_children(struct callchain_node *root, + struct resolved_chain *chain, + unsigned int start) { struct callchain_node *rnode; /* lookup in childrens */ chain_for_each_child(rnode, root) { 
- unsigned int ret = __append_chain(rnode, chain, start, syms); + unsigned int ret = __append_chain(rnode, chain, start); if (!ret) goto inc_children_hit; } /* nothing in children, add to the current node */ - add_child(root, chain, start, syms); + add_child(root, chain, start); inc_children_hit: root->children_hit++; } static int -__append_chain(struct callchain_node *root, struct ip_callchain *chain, - unsigned int start, struct symbol **syms) +__append_chain(struct callchain_node *root, struct resolved_chain *chain, + unsigned int start) { struct callchain_list *cnode; unsigned int i = start; @@ -302,13 +320,19 @@ __append_chain(struct callchain_node *root, struct ip_callchain *chain, * anywhere inside a function. */ list_for_each_entry(cnode, &root->val, list) { + struct symbol *sym; + if (i == chain->nr) break; - if (cnode->sym && syms[i]) { - if (cnode->sym->start != syms[i]->start) + + sym = chain->ips[i].ms.sym; + + if (cnode->ms.sym && sym) { + if (cnode->ms.sym->start != sym->start) break; - } else if (cnode->ip != chain->ips[i]) + } else if (cnode->ip != chain->ips[i].ip) break; + if (!found) found = true; i++; @@ -320,7 +344,7 @@ __append_chain(struct callchain_node *root, struct ip_callchain *chain, /* we match only a part of the node. Split it and add the new chain */ if (i - start < root->val_nr) { - split_add_child(root, chain, cnode, start, i - start, syms); + split_add_child(root, chain, cnode, start, i - start); return 0; } @@ -331,15 +355,50 @@ __append_chain(struct callchain_node *root, struct ip_callchain *chain, } /* We match the node and still have a part remaining */ - __append_chain_children(root, chain, syms, i); + __append_chain_children(root, chain, i); return 0; } -void append_chain(struct callchain_node *root, struct ip_callchain *chain, - struct symbol **syms) +static void filter_context(struct ip_callchain *old, struct resolved_chain *new, + struct map_symbol *syms) { + int i, j = 0; + + for (i = 0; i < (int)old->nr; i++) { + if (old->ips[i] >= PERF_CONTEXT_MAX) + continue; + + new->ips[j].ip = old->ips[i]; + new->ips[j].ms = syms[i]; + j++; + } + + new->nr = j; +} + + +int append_chain(struct callchain_node *root, struct ip_callchain *chain, + struct map_symbol *syms) +{ + struct resolved_chain *filtered; + if (!chain->nr) - return; - __append_chain_children(root, chain, syms, 0); + return 0; + + filtered = zalloc(sizeof(*filtered) + + chain->nr * sizeof(struct resolved_ip)); + if (!filtered) + return -ENOMEM; + + filter_context(chain, filtered, syms); + + if (!filtered->nr) + goto end; + + __append_chain_children(root, filtered, 0); +end: + free(filtered); + + return 0; } diff --git a/tools/perf/util/callchain.h b/tools/perf/util/callchain.h index ad4626de4c2b..1cba1f5504e7 100644 --- a/tools/perf/util/callchain.h +++ b/tools/perf/util/callchain.h @@ -4,6 +4,7 @@ #include "../perf.h" #include <linux/list.h> #include <linux/rbtree.h> +#include "event.h" #include "util.h" #include "symbol.h" @@ -33,13 +34,14 @@ typedef void (*sort_chain_func_t)(struct rb_root *, struct callchain_node *, struct callchain_param { enum chain_mode mode; + u32 print_limit; double min_percent; sort_chain_func_t sort; }; struct callchain_list { u64 ip; - struct symbol *sym; + struct map_symbol ms; struct list_head list; }; @@ -56,6 +58,8 @@ static inline u64 cumul_hits(struct callchain_node *node) } int register_callchain_param(struct callchain_param *param); -void append_chain(struct callchain_node *root, struct ip_callchain *chain, - struct symbol **syms); +int 
append_chain(struct callchain_node *root, struct ip_callchain *chain, + struct map_symbol *syms); + +bool ip_callchain__valid(struct ip_callchain *chain, event_t *event); #endif /* __PERF_CALLCHAIN_H */ diff --git a/tools/perf/util/color.c b/tools/perf/util/color.c index e88bca55a599..e191eb9a667f 100644 --- a/tools/perf/util/color.c +++ b/tools/perf/util/color.c @@ -166,6 +166,31 @@ int perf_color_default_config(const char *var, const char *value, void *cb) return perf_default_config(var, value, cb); } +static int __color_vsnprintf(char *bf, size_t size, const char *color, + const char *fmt, va_list args, const char *trail) +{ + int r = 0; + + /* + * Auto-detect: + */ + if (perf_use_color_default < 0) { + if (isatty(1) || pager_in_use()) + perf_use_color_default = 1; + else + perf_use_color_default = 0; + } + + if (perf_use_color_default && *color) + r += snprintf(bf, size, "%s", color); + r += vsnprintf(bf + r, size - r, fmt, args); + if (perf_use_color_default && *color) + r += snprintf(bf + r, size - r, "%s", PERF_COLOR_RESET); + if (trail) + r += snprintf(bf + r, size - r, "%s", trail); + return r; +} + static int __color_vfprintf(FILE *fp, const char *color, const char *fmt, va_list args, const char *trail) { @@ -191,11 +216,28 @@ static int __color_vfprintf(FILE *fp, const char *color, const char *fmt, return r; } +int color_vsnprintf(char *bf, size_t size, const char *color, + const char *fmt, va_list args) +{ + return __color_vsnprintf(bf, size, color, fmt, args, NULL); +} + int color_vfprintf(FILE *fp, const char *color, const char *fmt, va_list args) { return __color_vfprintf(fp, color, fmt, args, NULL); } +int color_snprintf(char *bf, size_t size, const char *color, + const char *fmt, ...) +{ + va_list args; + int r; + + va_start(args, fmt); + r = color_vsnprintf(bf, size, color, fmt, args); + va_end(args); + return r; +} int color_fprintf(FILE *fp, const char *color, const char *fmt, ...) 
{ @@ -274,3 +316,9 @@ int percent_color_fprintf(FILE *fp, const char *fmt, double percent) return r; } + +int percent_color_snprintf(char *bf, size_t size, const char *fmt, double percent) +{ + const char *color = get_percent_color(percent); + return color_snprintf(bf, size, color, fmt, percent); +} diff --git a/tools/perf/util/color.h b/tools/perf/util/color.h index 24e8809210bb..dea082b79602 100644 --- a/tools/perf/util/color.h +++ b/tools/perf/util/color.h @@ -32,10 +32,14 @@ int perf_color_default_config(const char *var, const char *value, void *cb); int perf_config_colorbool(const char *var, const char *value, int stdout_is_tty); void color_parse(const char *value, const char *var, char *dst); void color_parse_mem(const char *value, int len, const char *var, char *dst); +int color_vsnprintf(char *bf, size_t size, const char *color, + const char *fmt, va_list args); int color_vfprintf(FILE *fp, const char *color, const char *fmt, va_list args); int color_fprintf(FILE *fp, const char *color, const char *fmt, ...); +int color_snprintf(char *bf, size_t size, const char *color, const char *fmt, ...); int color_fprintf_ln(FILE *fp, const char *color, const char *fmt, ...); int color_fwrite_lines(FILE *fp, const char *color, size_t count, const char *buf); +int percent_color_snprintf(char *bf, size_t size, const char *fmt, double percent); int percent_color_fprintf(FILE *fp, const char *fmt, double percent); const char *get_percent_color(double percent); diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c index 0905600c3851..dd824cf3b628 100644 --- a/tools/perf/util/debug.c +++ b/tools/perf/util/debug.c @@ -6,13 +6,14 @@ #include <stdarg.h> #include <stdio.h> +#include "cache.h" #include "color.h" #include "event.h" #include "debug.h" #include "util.h" int verbose = 0; -int dump_trace = 0; +bool dump_trace = false; int eprintf(int level, const char *fmt, ...) { @@ -21,7 +22,10 @@ int eprintf(int level, const char *fmt, ...) if (verbose >= level) { va_start(args, fmt); - ret = vfprintf(stderr, fmt, args); + if (use_browser) + ret = browser__show_help(fmt, args); + else + ret = vfprintf(stderr, fmt, args); va_end(args); } diff --git a/tools/perf/util/debug.h b/tools/perf/util/debug.h index c6c24c522dea..047ac3324ebe 100644 --- a/tools/perf/util/debug.h +++ b/tools/perf/util/debug.h @@ -2,14 +2,38 @@ #ifndef __PERF_DEBUG_H #define __PERF_DEBUG_H +#include <stdbool.h> #include "event.h" extern int verbose; -extern int dump_trace; +extern bool dump_trace; -int eprintf(int level, - const char *fmt, ...) __attribute__((format(printf, 2, 3))); int dump_printf(const char *fmt, ...) 
__attribute__((format(printf, 1, 2))); void trace_event(event_t *event); +struct ui_progress; + +#ifdef NO_NEWT_SUPPORT +static inline int browser__show_help(const char *format __used, va_list ap __used) +{ + return 0; +} + +static inline struct ui_progress *ui_progress__new(const char *title __used, + u64 total __used) +{ + return (struct ui_progress *)1; +} + +static inline void ui_progress__update(struct ui_progress *self __used, + u64 curr __used) {} + +static inline void ui_progress__delete(struct ui_progress *self __used) {} +#else +int browser__show_help(const char *format, va_list ap); +struct ui_progress *ui_progress__new(const char *title, u64 total); +void ui_progress__update(struct ui_progress *self, u64 curr); +void ui_progress__delete(struct ui_progress *self); +#endif + #endif /* __PERF_DEBUG_H */ diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c index 705ec63548b4..50771b5813ee 100644 --- a/tools/perf/util/event.c +++ b/tools/perf/util/event.c @@ -7,6 +7,23 @@ #include "strlist.h" #include "thread.h" +const char *event__name[] = { + [0] = "TOTAL", + [PERF_RECORD_MMAP] = "MMAP", + [PERF_RECORD_LOST] = "LOST", + [PERF_RECORD_COMM] = "COMM", + [PERF_RECORD_EXIT] = "EXIT", + [PERF_RECORD_THROTTLE] = "THROTTLE", + [PERF_RECORD_UNTHROTTLE] = "UNTHROTTLE", + [PERF_RECORD_FORK] = "FORK", + [PERF_RECORD_READ] = "READ", + [PERF_RECORD_SAMPLE] = "SAMPLE", + [PERF_RECORD_HEADER_ATTR] = "ATTR", + [PERF_RECORD_HEADER_EVENT_TYPE] = "EVENT_TYPE", + [PERF_RECORD_HEADER_TRACING_DATA] = "TRACING_DATA", + [PERF_RECORD_HEADER_BUILD_ID] = "BUILD_ID", +}; + static pid_t event__synthesize_comm(pid_t pid, int full, event__handler_t process, struct perf_session *session) @@ -112,7 +129,11 @@ static int event__synthesize_mmap_events(pid_t pid, pid_t tgid, event_t ev = { .header = { .type = PERF_RECORD_MMAP, - .misc = 0, /* Just like the kernel, see kernel/perf_event.c __perf_event_mmap */ + /* + * Just like the kernel, see __perf_event_mmap + * in kernel/perf_event.c + */ + .misc = PERF_RECORD_MISC_USER, }, }; int n; @@ -130,6 +151,7 @@ static int event__synthesize_mmap_events(pid_t pid, pid_t tgid, continue; pbf += n + 3; if (*pbf == 'x') { /* vm_exec */ + u64 vm_pgoff; char *execname = strchr(bf, '/'); /* Catch VDSO */ @@ -139,6 +161,14 @@ static int event__synthesize_mmap_events(pid_t pid, pid_t tgid, if (execname == NULL) continue; + pbf += 3; + n = hex2u64(pbf, &vm_pgoff); + /* pgoff is in bytes, not pages */ + if (n >= 0) + ev.mmap.pgoff = vm_pgoff << getpagesize(); + else + ev.mmap.pgoff = 0; + size = strlen(execname); execname[size - 1] = '\0'; /* Remove \n */ memcpy(ev.mmap.filename, execname, size); @@ -158,11 +188,23 @@ static int event__synthesize_mmap_events(pid_t pid, pid_t tgid, } int event__synthesize_modules(event__handler_t process, - struct perf_session *session) + struct perf_session *session, + struct machine *machine) { struct rb_node *nd; + struct map_groups *kmaps = &machine->kmaps; + u16 misc; + + /* + * kernel uses 0 for user space maps, see kernel/perf_event.c + * __perf_event_mmap + */ + if (machine__is_host(machine)) + misc = PERF_RECORD_MISC_KERNEL; + else + misc = PERF_RECORD_MISC_GUEST_KERNEL; - for (nd = rb_first(&session->kmaps.maps[MAP__FUNCTION]); + for (nd = rb_first(&kmaps->maps[MAP__FUNCTION]); nd; nd = rb_next(nd)) { event_t ev; size_t size; @@ -173,12 +215,13 @@ int event__synthesize_modules(event__handler_t process, size = ALIGN(pos->dso->long_name_len + 1, sizeof(u64)); memset(&ev, 0, sizeof(ev)); - ev.mmap.header.misc = 1; /* kernel uses 0 for 
user space maps, see kernel/perf_event.c __perf_event_mmap */ + ev.mmap.header.misc = misc; ev.mmap.header.type = PERF_RECORD_MMAP; ev.mmap.header.size = (sizeof(ev.mmap) - (sizeof(ev.mmap.filename) - size)); ev.mmap.start = pos->start; ev.mmap.len = pos->end - pos->start; + ev.mmap.pid = machine->pid; memcpy(ev.mmap.filename, pos->dso->long_name, pos->dso->long_name_len + 1); @@ -241,13 +284,18 @@ static int find_symbol_cb(void *arg, const char *name, char type, u64 start) int event__synthesize_kernel_mmap(event__handler_t process, struct perf_session *session, + struct machine *machine, const char *symbol_name) { size_t size; + const char *filename, *mmap_name; + char path[PATH_MAX]; + char name_buff[PATH_MAX]; + struct map *map; + event_t ev = { .header = { .type = PERF_RECORD_MMAP, - .misc = 1, /* kernel uses 0 for user space maps, see kernel/perf_event.c __perf_event_mmap */ }, }; /* @@ -257,16 +305,37 @@ int event__synthesize_kernel_mmap(event__handler_t process, */ struct process_symbol_args args = { .name = symbol_name, }; - if (kallsyms__parse("/proc/kallsyms", &args, find_symbol_cb) <= 0) + mmap_name = machine__mmap_name(machine, name_buff, sizeof(name_buff)); + if (machine__is_host(machine)) { + /* + * kernel uses PERF_RECORD_MISC_USER for user space maps, + * see kernel/perf_event.c __perf_event_mmap + */ + ev.header.misc = PERF_RECORD_MISC_KERNEL; + filename = "/proc/kallsyms"; + } else { + ev.header.misc = PERF_RECORD_MISC_GUEST_KERNEL; + if (machine__is_default_guest(machine)) + filename = (char *) symbol_conf.default_guest_kallsyms; + else { + sprintf(path, "%s/proc/kallsyms", machine->root_dir); + filename = path; + } + } + + if (kallsyms__parse(filename, &args, find_symbol_cb) <= 0) return -ENOENT; + map = machine->vmlinux_maps[MAP__FUNCTION]; size = snprintf(ev.mmap.filename, sizeof(ev.mmap.filename), - "[kernel.kallsyms.%s]", symbol_name) + 1; + "%s%s", mmap_name, symbol_name) + 1; size = ALIGN(size, sizeof(u64)); - ev.mmap.header.size = (sizeof(ev.mmap) - (sizeof(ev.mmap.filename) - size)); + ev.mmap.header.size = (sizeof(ev.mmap) - + (sizeof(ev.mmap.filename) - size)); ev.mmap.pgoff = args.start; - ev.mmap.start = session->vmlinux_maps[MAP__FUNCTION]->start; - ev.mmap.len = session->vmlinux_maps[MAP__FUNCTION]->end - ev.mmap.start ; + ev.mmap.start = map->start; + ev.mmap.len = map->end - ev.mmap.start; + ev.mmap.pid = machine->pid; return process(&ev, session); } @@ -316,26 +385,54 @@ int event__process_comm(event_t *self, struct perf_session *session) int event__process_lost(event_t *self, struct perf_session *session) { dump_printf(": id:%Ld: lost:%Ld\n", self->lost.id, self->lost.lost); - session->events_stats.lost += self->lost.lost; + session->hists.stats.total_lost += self->lost.lost; return 0; } -int event__process_mmap(event_t *self, struct perf_session *session) +static void event_set_kernel_mmap_len(struct map **maps, event_t *self) +{ + maps[MAP__FUNCTION]->start = self->mmap.start; + maps[MAP__FUNCTION]->end = self->mmap.start + self->mmap.len; + /* + * Be a bit paranoid here, some perf.data file came with + * a zero sized synthesized MMAP event for the kernel. 
+ */ + if (maps[MAP__FUNCTION]->end == 0) + maps[MAP__FUNCTION]->end = ~0UL; +} + +static int event__process_kernel_mmap(event_t *self, + struct perf_session *session) { - struct thread *thread; struct map *map; + char kmmap_prefix[PATH_MAX]; + struct machine *machine; + enum dso_kernel_type kernel_type; + bool is_kernel_mmap; + + machine = perf_session__findnew_machine(session, self->mmap.pid); + if (!machine) { + pr_err("Can't find id %d's machine\n", self->mmap.pid); + goto out_problem; + } - dump_printf(" %d/%d: [%#Lx(%#Lx) @ %#Lx]: %s\n", - self->mmap.pid, self->mmap.tid, self->mmap.start, - self->mmap.len, self->mmap.pgoff, self->mmap.filename); + machine__mmap_name(machine, kmmap_prefix, sizeof(kmmap_prefix)); + if (machine__is_host(machine)) + kernel_type = DSO_TYPE_KERNEL; + else + kernel_type = DSO_TYPE_GUEST_KERNEL; - if (self->mmap.pid == 0) { - static const char kmmap_prefix[] = "[kernel.kallsyms."; + is_kernel_mmap = memcmp(self->mmap.filename, + kmmap_prefix, + strlen(kmmap_prefix)) == 0; + if (self->mmap.filename[0] == '/' || + (!is_kernel_mmap && self->mmap.filename[0] == '[')) { - if (self->mmap.filename[0] == '/') { - char short_module_name[1024]; - char *name = strrchr(self->mmap.filename, '/'), *dot; + char short_module_name[1024]; + char *name, *dot; + if (self->mmap.filename[0] == '/') { + name = strrchr(self->mmap.filename, '/'); if (name == NULL) goto out_problem; @@ -343,58 +440,84 @@ int event__process_mmap(event_t *self, struct perf_session *session) dot = strrchr(name, '.'); if (dot == NULL) goto out_problem; - snprintf(short_module_name, sizeof(short_module_name), - "[%.*s]", (int)(dot - name), name); + "[%.*s]", (int)(dot - name), name); strxfrchar(short_module_name, '-', '_'); - - map = perf_session__new_module_map(session, - self->mmap.start, - self->mmap.filename); - if (map == NULL) - goto out_problem; - - name = strdup(short_module_name); - if (name == NULL) - goto out_problem; - - map->dso->short_name = name; - map->end = map->start + self->mmap.len; - } else if (memcmp(self->mmap.filename, kmmap_prefix, - sizeof(kmmap_prefix) - 1) == 0) { - const char *symbol_name = (self->mmap.filename + - sizeof(kmmap_prefix) - 1); + } else + strcpy(short_module_name, self->mmap.filename); + + map = machine__new_module(machine, self->mmap.start, + self->mmap.filename); + if (map == NULL) + goto out_problem; + + name = strdup(short_module_name); + if (name == NULL) + goto out_problem; + + map->dso->short_name = name; + map->end = map->start + self->mmap.len; + } else if (is_kernel_mmap) { + const char *symbol_name = (self->mmap.filename + + strlen(kmmap_prefix)); + /* + * Should be there already, from the build-id table in + * the header. + */ + struct dso *kernel = __dsos__findnew(&machine->kernel_dsos, + kmmap_prefix); + if (kernel == NULL) + goto out_problem; + + kernel->kernel = kernel_type; + if (__machine__create_kernel_maps(machine, kernel) < 0) + goto out_problem; + + event_set_kernel_mmap_len(machine->vmlinux_maps, self); + perf_session__set_kallsyms_ref_reloc_sym(machine->vmlinux_maps, + symbol_name, + self->mmap.pgoff); + if (machine__is_default_guest(machine)) { /* - * Should be there already, from the build-id table in - * the header. 
+ * preload dso of guest kernel and modules */ - struct dso *kernel = __dsos__findnew(&dsos__kernel, - "[kernel.kallsyms]"); - if (kernel == NULL) - goto out_problem; - - kernel->kernel = 1; - if (__perf_session__create_kernel_maps(session, kernel) < 0) - goto out_problem; + dso__load(kernel, machine->vmlinux_maps[MAP__FUNCTION], + NULL); + } + } + return 0; +out_problem: + return -1; +} - session->vmlinux_maps[MAP__FUNCTION]->start = self->mmap.start; - session->vmlinux_maps[MAP__FUNCTION]->end = self->mmap.start + self->mmap.len; - /* - * Be a bit paranoid here, some perf.data file came with - * a zero sized synthesized MMAP event for the kernel. - */ - if (session->vmlinux_maps[MAP__FUNCTION]->end == 0) - session->vmlinux_maps[MAP__FUNCTION]->end = ~0UL; +int event__process_mmap(event_t *self, struct perf_session *session) +{ + struct machine *machine; + struct thread *thread; + struct map *map; + u8 cpumode = self->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; + int ret = 0; - perf_session__set_kallsyms_ref_reloc_sym(session, symbol_name, - self->mmap.pgoff); - } + dump_printf(" %d/%d: [%#Lx(%#Lx) @ %#Lx]: %s\n", + self->mmap.pid, self->mmap.tid, self->mmap.start, + self->mmap.len, self->mmap.pgoff, self->mmap.filename); + + if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL || + cpumode == PERF_RECORD_MISC_KERNEL) { + ret = event__process_kernel_mmap(self, session); + if (ret < 0) + goto out_problem; return 0; } + machine = perf_session__find_host_machine(session); + if (machine == NULL) + goto out_problem; thread = perf_session__findnew(session, self->mmap.pid); - map = map__new(&self->mmap, MAP__FUNCTION, - session->cwd, session->cwdlen); + map = map__new(&machine->user_dsos, self->mmap.start, + self->mmap.len, self->mmap.pgoff, + self->mmap.pid, self->mmap.filename, + MAP__FUNCTION, session->cwd, session->cwdlen); if (thread == NULL || map == NULL) goto out_problem; @@ -434,22 +557,56 @@ int event__process_task(event_t *self, struct perf_session *session) void thread__find_addr_map(struct thread *self, struct perf_session *session, u8 cpumode, - enum map_type type, u64 addr, + enum map_type type, pid_t pid, u64 addr, struct addr_location *al) { struct map_groups *mg = &self->mg; + struct machine *machine = NULL; al->thread = self; al->addr = addr; + al->cpumode = cpumode; + al->filtered = false; - if (cpumode == PERF_RECORD_MISC_KERNEL) { + if (cpumode == PERF_RECORD_MISC_KERNEL && perf_host) { al->level = 'k'; - mg = &session->kmaps; - } else if (cpumode == PERF_RECORD_MISC_USER) + machine = perf_session__find_host_machine(session); + if (machine == NULL) { + al->map = NULL; + return; + } + mg = &machine->kmaps; + } else if (cpumode == PERF_RECORD_MISC_USER && perf_host) { al->level = '.'; - else { - al->level = 'H'; + machine = perf_session__find_host_machine(session); + } else if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL && perf_guest) { + al->level = 'g'; + machine = perf_session__find_machine(session, pid); + if (machine == NULL) { + al->map = NULL; + return; + } + mg = &machine->kmaps; + } else { + /* + * 'u' means guest os user space. + * TODO: We don't support guest user space. Might support late. 
+ */ + if (cpumode == PERF_RECORD_MISC_GUEST_USER && perf_guest) + al->level = 'u'; + else + al->level = 'H'; al->map = NULL; + + if ((cpumode == PERF_RECORD_MISC_GUEST_USER || + cpumode == PERF_RECORD_MISC_GUEST_KERNEL) && + !perf_guest) + al->filtered = true; + if ((cpumode == PERF_RECORD_MISC_USER || + cpumode == PERF_RECORD_MISC_KERNEL) && + !perf_host) + al->filtered = true; + return; } try_again: @@ -464,8 +621,10 @@ try_again: * "[vdso]" dso, but for now lets use the old trick of looking * in the whole kernel symbol list. */ - if ((long long)al->addr < 0 && mg != &session->kmaps) { - mg = &session->kmaps; + if ((long long)al->addr < 0 && + cpumode == PERF_RECORD_MISC_KERNEL && + machine && mg != &machine->kmaps) { + mg = &machine->kmaps; goto try_again; } } else @@ -474,11 +633,11 @@ try_again: void thread__find_addr_location(struct thread *self, struct perf_session *session, u8 cpumode, - enum map_type type, u64 addr, + enum map_type type, pid_t pid, u64 addr, struct addr_location *al, symbol_filter_t filter) { - thread__find_addr_map(self, session, cpumode, type, addr, al); + thread__find_addr_map(self, session, cpumode, type, pid, addr, al); if (al->map != NULL) al->sym = map__find_symbol(al->map, al->addr, filter); else @@ -490,8 +649,10 @@ static void dso__calc_col_width(struct dso *self) if (!symbol_conf.col_width_list_str && !symbol_conf.field_sep && (!symbol_conf.dso_list || strlist__has_entry(symbol_conf.dso_list, self->name))) { - unsigned int slen = strlen(self->name); - if (slen > dsos__col_width) + u16 slen = self->short_name_len; + if (verbose) + slen = self->long_name_len; + if (dsos__col_width < slen) dsos__col_width = slen; } @@ -512,31 +673,55 @@ int event__preprocess_sample(const event_t *self, struct perf_session *session, goto out_filtered; dump_printf(" ... thread: %s:%d\n", thread->comm, thread->pid); + /* + * Have we already created the kernel maps for the host machine? + * + * This should have happened earlier, when we processed the kernel MMAP + * events, but for older perf.data files there was no such thing, so do + * it now. + */ + if (cpumode == PERF_RECORD_MISC_KERNEL && + session->host_machine.vmlinux_maps[MAP__FUNCTION] == NULL) + machine__create_kernel_maps(&session->host_machine); - thread__find_addr_location(thread, session, cpumode, MAP__FUNCTION, - self->ip.ip, al, filter); + thread__find_addr_map(thread, session, cpumode, MAP__FUNCTION, + self->ip.pid, self->ip.ip, al); dump_printf(" ...... dso: %s\n", al->map ? al->map->dso->long_name : al->level == 'H' ? "[hypervisor]" : "<not found>"); - /* - * We have to do this here as we may have a dso with no symbol hit that - * has a name longer than the ones with symbols sampled. 
- */ - if (al->map && !sort_dso.elide && !al->map->dso->slen_calculated) - dso__calc_col_width(al->map->dso); - - if (symbol_conf.dso_list && - (!al->map || !al->map->dso || - !(strlist__has_entry(symbol_conf.dso_list, al->map->dso->short_name) || - (al->map->dso->short_name != al->map->dso->long_name && - strlist__has_entry(symbol_conf.dso_list, al->map->dso->long_name))))) - goto out_filtered; + al->sym = NULL; + + if (al->map) { + if (symbol_conf.dso_list && + (!al->map || !al->map->dso || + !(strlist__has_entry(symbol_conf.dso_list, + al->map->dso->short_name) || + (al->map->dso->short_name != al->map->dso->long_name && + strlist__has_entry(symbol_conf.dso_list, + al->map->dso->long_name))))) + goto out_filtered; + /* + * We have to do this here as we may have a dso with no symbol + * hit that has a name longer than the ones with symbols + * sampled. + */ + if (!sort_dso.elide && !al->map->dso->slen_calculated) + dso__calc_col_width(al->map->dso); + + al->sym = map__find_symbol(al->map, al->addr, filter); + } else { + const unsigned int unresolved_col_width = BITS_PER_LONG / 4; + + if (dsos__col_width < unresolved_col_width && + !symbol_conf.col_width_list_str && !symbol_conf.field_sep && + !symbol_conf.dso_list) + dsos__col_width = unresolved_col_width; + } if (symbol_conf.sym_list && al->sym && !strlist__has_entry(symbol_conf.sym_list, al->sym->name)) goto out_filtered; - al->filtered = false; return 0; out_filtered: @@ -570,6 +755,7 @@ int event__parse_sample(event_t *event, u64 type, struct sample_data *data) array++; } + data->id = -1ULL; if (type & PERF_SAMPLE_ID) { data->id = *array; array++; diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h index a33b94952e34..8577085db067 100644 --- a/tools/perf/util/event.h +++ b/tools/perf/util/event.h @@ -68,21 +68,54 @@ struct sample_data { u64 addr; u64 id; u64 stream_id; - u32 cpu; u64 period; - struct ip_callchain *callchain; + u32 cpu; u32 raw_size; void *raw_data; + struct ip_callchain *callchain; }; #define BUILD_ID_SIZE 20 struct build_id_event { struct perf_event_header header; + pid_t pid; u8 build_id[ALIGN(BUILD_ID_SIZE, sizeof(u64))]; char filename[]; }; +enum perf_user_event_type { /* above any possible kernel type */ + PERF_RECORD_HEADER_ATTR = 64, + PERF_RECORD_HEADER_EVENT_TYPE = 65, + PERF_RECORD_HEADER_TRACING_DATA = 66, + PERF_RECORD_HEADER_BUILD_ID = 67, + PERF_RECORD_FINISHED_ROUND = 68, + PERF_RECORD_HEADER_MAX +}; + +struct attr_event { + struct perf_event_header header; + struct perf_event_attr attr; + u64 id[]; +}; + +#define MAX_EVENT_NAME 64 + +struct perf_trace_event_type { + u64 event_id; + char name[MAX_EVENT_NAME]; +}; + +struct event_type_event { + struct perf_event_header header; + struct perf_trace_event_type event_type; +}; + +struct tracing_data_event { + struct perf_event_header header; + u32 size; +}; + typedef union event_union { struct perf_event_header header; struct ip_event ip; @@ -92,22 +125,12 @@ typedef union event_union { struct lost_event lost; struct read_event read; struct sample_event sample; + struct attr_event attr; + struct event_type_event event_type; + struct tracing_data_event tracing_data; + struct build_id_event build_id; } event_t; -struct events_stats { - u64 total; - u64 lost; -}; - -struct event_stat_id { - struct rb_node rb_node; - struct rb_root hists; - struct events_stats stats; - u64 config; - u64 event_stream; - u32 type; -}; - void event__print_totals(void); struct perf_session; @@ -119,10 +142,13 @@ int event__synthesize_thread(pid_t pid, event__handler_t 
process, void event__synthesize_threads(event__handler_t process, struct perf_session *session); int event__synthesize_kernel_mmap(event__handler_t process, - struct perf_session *session, - const char *symbol_name); + struct perf_session *session, + struct machine *machine, + const char *symbol_name); + int event__synthesize_modules(event__handler_t process, - struct perf_session *session); + struct perf_session *session, + struct machine *machine); int event__process_comm(event_t *self, struct perf_session *session); int event__process_lost(event_t *self, struct perf_session *session); @@ -134,4 +160,6 @@ int event__preprocess_sample(const event_t *self, struct perf_session *session, struct addr_location *al, symbol_filter_t filter); int event__parse_sample(event_t *event, u64 type, struct sample_data *data); +extern const char *event__name[]; + #endif /* __PERF_RECORD_H */ diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index 6c9aa16ee51f..8847bec64c54 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -99,13 +99,6 @@ int perf_header__add_attr(struct perf_header *self, return 0; } -#define MAX_EVENT_NAME 64 - -struct perf_trace_event_type { - u64 event_id; - char name[MAX_EVENT_NAME]; -}; - static int event_count; static struct perf_trace_event_type *events; @@ -197,7 +190,8 @@ static int write_padded(int fd, const void *bf, size_t count, continue; \ else -static int __dsos__write_buildid_table(struct list_head *head, u16 misc, int fd) +static int __dsos__write_buildid_table(struct list_head *head, pid_t pid, + u16 misc, int fd) { struct dso *pos; @@ -212,6 +206,7 @@ static int __dsos__write_buildid_table(struct list_head *head, u16 misc, int fd) len = ALIGN(len, NAME_ALIGN); memset(&b, 0, sizeof(b)); memcpy(&b.build_id, pos->build_id, sizeof(pos->build_id)); + b.pid = pid; b.header.misc = misc; b.header.size = sizeof(b) + len; err = do_write(fd, &b, sizeof(b)); @@ -226,13 +221,32 @@ static int __dsos__write_buildid_table(struct list_head *head, u16 misc, int fd) return 0; } -static int dsos__write_buildid_table(int fd) +static int dsos__write_buildid_table(struct perf_header *header, int fd) { - int err = __dsos__write_buildid_table(&dsos__kernel, - PERF_RECORD_MISC_KERNEL, fd); - if (err == 0) - err = __dsos__write_buildid_table(&dsos__user, - PERF_RECORD_MISC_USER, fd); + struct perf_session *session = container_of(header, + struct perf_session, header); + struct rb_node *nd; + int err = 0; + u16 kmisc, umisc; + + for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + if (machine__is_host(pos)) { + kmisc = PERF_RECORD_MISC_KERNEL; + umisc = PERF_RECORD_MISC_USER; + } else { + kmisc = PERF_RECORD_MISC_GUEST_KERNEL; + umisc = PERF_RECORD_MISC_GUEST_USER; + } + + err = __dsos__write_buildid_table(&pos->kernel_dsos, pos->pid, + kmisc, fd); + if (err == 0) + err = __dsos__write_buildid_table(&pos->user_dsos, + pos->pid, umisc, fd); + if (err) + break; + } return err; } @@ -349,9 +363,12 @@ static int __dsos__cache_build_ids(struct list_head *head, const char *debugdir) return err; } -static int dsos__cache_build_ids(void) +static int dsos__cache_build_ids(struct perf_header *self) { - int err_kernel, err_user; + struct perf_session *session = container_of(self, + struct perf_session, header); + struct rb_node *nd; + int ret = 0; char debugdir[PATH_MAX]; snprintf(debugdir, sizeof(debugdir), "%s/%s", getenv("HOME"), @@ -360,9 +377,28 @@ static int dsos__cache_build_ids(void) if 
(mkdir(debugdir, 0755) != 0 && errno != EEXIST) return -1; - err_kernel = __dsos__cache_build_ids(&dsos__kernel, debugdir); - err_user = __dsos__cache_build_ids(&dsos__user, debugdir); - return err_kernel || err_user ? -1 : 0; + for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + ret |= __dsos__cache_build_ids(&pos->kernel_dsos, debugdir); + ret |= __dsos__cache_build_ids(&pos->user_dsos, debugdir); + } + return ret ? -1 : 0; +} + +static bool dsos__read_build_ids(struct perf_header *self, bool with_hits) +{ + bool ret = false; + struct perf_session *session = container_of(self, + struct perf_session, header); + struct rb_node *nd; + + for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + ret |= __dsos__read_build_ids(&pos->kernel_dsos, with_hits); + ret |= __dsos__read_build_ids(&pos->user_dsos, with_hits); + } + + return ret; } static int perf_header__adds_write(struct perf_header *self, int fd) @@ -373,7 +409,7 @@ static int perf_header__adds_write(struct perf_header *self, int fd) u64 sec_start; int idx = 0, err; - if (dsos__read_build_ids(true)) + if (dsos__read_build_ids(self, true)) perf_header__set_feat(self, HEADER_BUILD_ID); nr_sections = bitmap_weight(self->adds_features, HEADER_FEAT_BITS); @@ -400,7 +436,6 @@ static int perf_header__adds_write(struct perf_header *self, int fd) trace_sec->size = lseek(fd, 0, SEEK_CUR) - trace_sec->offset; } - if (perf_header__has_feat(self, HEADER_BUILD_ID)) { struct perf_file_section *buildid_sec; @@ -408,14 +443,14 @@ static int perf_header__adds_write(struct perf_header *self, int fd) /* Write build-ids */ buildid_sec->offset = lseek(fd, 0, SEEK_CUR); - err = dsos__write_buildid_table(fd); + err = dsos__write_buildid_table(self, fd); if (err < 0) { pr_debug("failed to write buildid table\n"); goto out_free; } buildid_sec->size = lseek(fd, 0, SEEK_CUR) - buildid_sec->offset; - dsos__cache_build_ids(); + dsos__cache_build_ids(self); } lseek(fd, sec_start, SEEK_SET); @@ -427,6 +462,25 @@ out_free: return err; } +int perf_header__write_pipe(int fd) +{ + struct perf_pipe_file_header f_header; + int err; + + f_header = (struct perf_pipe_file_header){ + .magic = PERF_MAGIC, + .size = sizeof(f_header), + }; + + err = do_write(fd, &f_header, sizeof(f_header)); + if (err < 0) { + pr_debug("failed to write perf pipe header\n"); + return err; + } + + return 0; +} + int perf_header__write(struct perf_header *self, int fd, bool at_exit) { struct perf_file_header f_header; @@ -518,25 +572,10 @@ int perf_header__write(struct perf_header *self, int fd, bool at_exit) return 0; } -static int do_read(int fd, void *buf, size_t size) -{ - while (size) { - int ret = read(fd, buf, size); - - if (ret <= 0) - return -1; - - size -= ret; - buf += ret; - } - - return 0; -} - static int perf_header__getbuffer64(struct perf_header *self, int fd, void *buf, size_t size) { - if (do_read(fd, buf, size)) + if (do_read(fd, buf, size) <= 0) return -1; if (self->needs_swap) @@ -592,7 +631,7 @@ int perf_file_header__read(struct perf_file_header *self, { lseek(fd, 0, SEEK_SET); - if (do_read(fd, self, sizeof(*self)) || + if (do_read(fd, self, sizeof(*self)) <= 0 || memcmp(&self->magic, __perf_magic, sizeof(self->magic))) return -1; @@ -636,6 +675,93 @@ int perf_file_header__read(struct perf_file_header *self, return 0; } +static int __event_process_build_id(struct build_id_event *bev, + char *filename, + struct perf_session 
*session) +{ + int err = -1; + struct list_head *head; + struct machine *machine; + u16 misc; + struct dso *dso; + enum dso_kernel_type dso_type; + + machine = perf_session__findnew_machine(session, bev->pid); + if (!machine) + goto out; + + misc = bev->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; + + switch (misc) { + case PERF_RECORD_MISC_KERNEL: + dso_type = DSO_TYPE_KERNEL; + head = &machine->kernel_dsos; + break; + case PERF_RECORD_MISC_GUEST_KERNEL: + dso_type = DSO_TYPE_GUEST_KERNEL; + head = &machine->kernel_dsos; + break; + case PERF_RECORD_MISC_USER: + case PERF_RECORD_MISC_GUEST_USER: + dso_type = DSO_TYPE_USER; + head = &machine->user_dsos; + break; + default: + goto out; + } + + dso = __dsos__findnew(head, filename); + if (dso != NULL) { + char sbuild_id[BUILD_ID_SIZE * 2 + 1]; + + dso__set_build_id(dso, &bev->build_id); + + if (filename[0] == '[') + dso->kernel = dso_type; + + build_id__sprintf(dso->build_id, sizeof(dso->build_id), + sbuild_id); + pr_debug("build id event received for %s: %s\n", + dso->long_name, sbuild_id); + } + + err = 0; +out: + return err; +} + +static int perf_header__read_build_ids(struct perf_header *self, + int input, u64 offset, u64 size) +{ + struct perf_session *session = container_of(self, + struct perf_session, header); + struct build_id_event bev; + char filename[PATH_MAX]; + u64 limit = offset + size; + int err = -1; + + while (offset < limit) { + ssize_t len; + + if (read(input, &bev, sizeof(bev)) != sizeof(bev)) + goto out; + + if (self->needs_swap) + perf_event_header__bswap(&bev.header); + + len = bev.header.size - sizeof(bev); + if (read(input, filename, len) != len) + goto out; + + __event_process_build_id(&bev, filename, session); + + offset += bev.header.size; + } + err = 0; +out: + return err; +} + static int perf_file_section__process(struct perf_file_section *self, struct perf_header *ph, int feat, int fd) @@ -648,7 +774,7 @@ static int perf_file_section__process(struct perf_file_section *self, switch (feat) { case HEADER_TRACE_INFO: - trace_report(fd); + trace_report(fd, false); break; case HEADER_BUILD_ID: @@ -662,13 +788,56 @@ static int perf_file_section__process(struct perf_file_section *self, return 0; } -int perf_header__read(struct perf_header *self, int fd) +static int perf_file_header__read_pipe(struct perf_pipe_file_header *self, + struct perf_header *ph, int fd, + bool repipe) +{ + if (do_read(fd, self, sizeof(*self)) <= 0 || + memcmp(&self->magic, __perf_magic, sizeof(self->magic))) + return -1; + + if (repipe && do_write(STDOUT_FILENO, self, sizeof(*self)) < 0) + return -1; + + if (self->size != sizeof(*self)) { + u64 size = bswap_64(self->size); + + if (size != sizeof(*self)) + return -1; + + ph->needs_swap = true; + } + + return 0; +} + +static int perf_header__read_pipe(struct perf_session *session, int fd) { + struct perf_header *self = &session->header; + struct perf_pipe_file_header f_header; + + if (perf_file_header__read_pipe(&f_header, self, fd, + session->repipe) < 0) { + pr_debug("incompatible file format\n"); + return -EINVAL; + } + + session->fd = fd; + + return 0; +} + +int perf_header__read(struct perf_session *session, int fd) +{ + struct perf_header *self = &session->header; struct perf_file_header f_header; struct perf_file_attr f_attr; u64 f_id; int nr_attrs, nr_ids, i, j; + if (session->fd_pipe) + return perf_header__read_pipe(session, fd); + if (perf_file_header__read(&f_header, self, fd) < 0) { pr_debug("incompatible file format\n"); return -EINVAL; @@ -753,6 +922,14 @@ perf_header__find_attr(u64 
id, struct perf_header *header) { int i; + /* + * We set id to -1 if the data file doesn't contain sample + * ids. Check for this and avoid walking through the entire + * list of ids which may be large. + */ + if (id == -1ULL) + return NULL; + for (i = 0; i < header->attrs; i++) { struct perf_header_attr *attr = header->attr[i]; int j; @@ -765,3 +942,231 @@ perf_header__find_attr(u64 id, struct perf_header *header) return NULL; } + +int event__synthesize_attr(struct perf_event_attr *attr, u16 ids, u64 *id, + event__handler_t process, + struct perf_session *session) +{ + event_t *ev; + size_t size; + int err; + + size = sizeof(struct perf_event_attr); + size = ALIGN(size, sizeof(u64)); + size += sizeof(struct perf_event_header); + size += ids * sizeof(u64); + + ev = malloc(size); + + ev->attr.attr = *attr; + memcpy(ev->attr.id, id, ids * sizeof(u64)); + + ev->attr.header.type = PERF_RECORD_HEADER_ATTR; + ev->attr.header.size = size; + + err = process(ev, session); + + free(ev); + + return err; +} + +int event__synthesize_attrs(struct perf_header *self, + event__handler_t process, + struct perf_session *session) +{ + struct perf_header_attr *attr; + int i, err = 0; + + for (i = 0; i < self->attrs; i++) { + attr = self->attr[i]; + + err = event__synthesize_attr(&attr->attr, attr->ids, attr->id, + process, session); + if (err) { + pr_debug("failed to create perf header attribute\n"); + return err; + } + } + + return err; +} + +int event__process_attr(event_t *self, struct perf_session *session) +{ + struct perf_header_attr *attr; + unsigned int i, ids, n_ids; + + attr = perf_header_attr__new(&self->attr.attr); + if (attr == NULL) + return -ENOMEM; + + ids = self->header.size; + ids -= (void *)&self->attr.id - (void *)self; + n_ids = ids / sizeof(u64); + + for (i = 0; i < n_ids; i++) { + if (perf_header_attr__add_id(attr, self->attr.id[i]) < 0) { + perf_header_attr__delete(attr); + return -ENOMEM; + } + } + + if (perf_header__add_attr(&session->header, attr) < 0) { + perf_header_attr__delete(attr); + return -ENOMEM; + } + + perf_session__update_sample_type(session); + + return 0; +} + +int event__synthesize_event_type(u64 event_id, char *name, + event__handler_t process, + struct perf_session *session) +{ + event_t ev; + size_t size = 0; + int err = 0; + + memset(&ev, 0, sizeof(ev)); + + ev.event_type.event_type.event_id = event_id; + memset(ev.event_type.event_type.name, 0, MAX_EVENT_NAME); + strncpy(ev.event_type.event_type.name, name, MAX_EVENT_NAME - 1); + + ev.event_type.header.type = PERF_RECORD_HEADER_EVENT_TYPE; + size = strlen(name); + size = ALIGN(size, sizeof(u64)); + ev.event_type.header.size = sizeof(ev.event_type) - + (sizeof(ev.event_type.event_type.name) - size); + + err = process(&ev, session); + + return err; +} + +int event__synthesize_event_types(event__handler_t process, + struct perf_session *session) +{ + struct perf_trace_event_type *type; + int i, err = 0; + + for (i = 0; i < event_count; i++) { + type = &events[i]; + + err = event__synthesize_event_type(type->event_id, type->name, + process, session); + if (err) { + pr_debug("failed to create perf header event type\n"); + return err; + } + } + + return err; +} + +int event__process_event_type(event_t *self, + struct perf_session *session __unused) +{ + if (perf_header__push_event(self->event_type.event_type.event_id, + self->event_type.event_type.name) < 0) + return -ENOMEM; + + return 0; +} + +int event__synthesize_tracing_data(int fd, struct perf_event_attr *pattrs, + int nb_events, + event__handler_t process, + 
struct perf_session *session __unused) +{ + event_t ev; + ssize_t size = 0, aligned_size = 0, padding; + int err = 0; + + memset(&ev, 0, sizeof(ev)); + + ev.tracing_data.header.type = PERF_RECORD_HEADER_TRACING_DATA; + size = read_tracing_data_size(fd, pattrs, nb_events); + if (size <= 0) + return size; + aligned_size = ALIGN(size, sizeof(u64)); + padding = aligned_size - size; + ev.tracing_data.header.size = sizeof(ev.tracing_data); + ev.tracing_data.size = aligned_size; + + process(&ev, session); + + err = read_tracing_data(fd, pattrs, nb_events); + write_padded(fd, NULL, 0, padding); + + return aligned_size; +} + +int event__process_tracing_data(event_t *self, + struct perf_session *session) +{ + ssize_t size_read, padding, size = self->tracing_data.size; + off_t offset = lseek(session->fd, 0, SEEK_CUR); + char buf[BUFSIZ]; + + /* setup for reading amidst mmap */ + lseek(session->fd, offset + sizeof(struct tracing_data_event), + SEEK_SET); + + size_read = trace_report(session->fd, session->repipe); + + padding = ALIGN(size_read, sizeof(u64)) - size_read; + + if (read(session->fd, buf, padding) < 0) + die("reading input file"); + if (session->repipe) { + int retw = write(STDOUT_FILENO, buf, padding); + if (retw <= 0 || retw != padding) + die("repiping tracing data padding"); + } + + if (size_read + padding != size) + die("tracing data size mismatch"); + + return size_read + padding; +} + +int event__synthesize_build_id(struct dso *pos, u16 misc, + event__handler_t process, + struct machine *machine, + struct perf_session *session) +{ + event_t ev; + size_t len; + int err = 0; + + if (!pos->hit) + return err; + + memset(&ev, 0, sizeof(ev)); + + len = pos->long_name_len + 1; + len = ALIGN(len, NAME_ALIGN); + memcpy(&ev.build_id.build_id, pos->build_id, sizeof(pos->build_id)); + ev.build_id.header.type = PERF_RECORD_HEADER_BUILD_ID; + ev.build_id.header.misc = misc; + ev.build_id.pid = machine->pid; + ev.build_id.header.size = sizeof(ev.build_id) + len; + memcpy(&ev.build_id.filename, pos->long_name, pos->long_name_len); + + err = process(&ev, session); + + return err; +} + +int event__process_build_id(event_t *self, + struct perf_session *session) +{ + __event_process_build_id(&self->build_id, + self->build_id.filename, + session); + return 0; +} diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h index 82a6af72d4cc..402ac2454cf8 100644 --- a/tools/perf/util/header.h +++ b/tools/perf/util/header.h @@ -39,6 +39,11 @@ struct perf_file_header { DECLARE_BITMAP(adds_features, HEADER_FEAT_BITS); }; +struct perf_pipe_file_header { + u64 magic; + u64 size; +}; + struct perf_header; int perf_file_header__read(struct perf_file_header *self, @@ -47,21 +52,22 @@ int perf_file_header__read(struct perf_file_header *self, struct perf_header { int frozen; int attrs, size; + bool needs_swap; struct perf_header_attr **attr; s64 attr_offset; u64 data_offset; u64 data_size; u64 event_offset; u64 event_size; - bool needs_swap; DECLARE_BITMAP(adds_features, HEADER_FEAT_BITS); }; int perf_header__init(struct perf_header *self); void perf_header__exit(struct perf_header *self); -int perf_header__read(struct perf_header *self, int fd); +int perf_header__read(struct perf_session *session, int fd); int perf_header__write(struct perf_header *self, int fd, bool at_exit); +int perf_header__write_pipe(int fd); int perf_header__add_attr(struct perf_header *self, struct perf_header_attr *attr); @@ -89,4 +95,33 @@ int build_id_cache__add_s(const char *sbuild_id, const char *debugdir, const char *name, bool 
is_kallsyms); int build_id_cache__remove_s(const char *sbuild_id, const char *debugdir); +int event__synthesize_attr(struct perf_event_attr *attr, u16 ids, u64 *id, + event__handler_t process, + struct perf_session *session); +int event__synthesize_attrs(struct perf_header *self, + event__handler_t process, + struct perf_session *session); +int event__process_attr(event_t *self, struct perf_session *session); + +int event__synthesize_event_type(u64 event_id, char *name, + event__handler_t process, + struct perf_session *session); +int event__synthesize_event_types(event__handler_t process, + struct perf_session *session); +int event__process_event_type(event_t *self, + struct perf_session *session); + +int event__synthesize_tracing_data(int fd, struct perf_event_attr *pattrs, + int nb_events, + event__handler_t process, + struct perf_session *session); +int event__process_tracing_data(event_t *self, + struct perf_session *session); + +int event__synthesize_build_id(struct dso *pos, u16 misc, + event__handler_t process, + struct machine *machine, + struct perf_session *session); +int event__process_build_id(event_t *self, struct perf_session *session); + #endif /* __PERF_HEADER_H */ diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c index 2be33c7dbf03..9a71c94f057a 100644 --- a/tools/perf/util/hist.c +++ b/tools/perf/util/hist.c @@ -1,3 +1,4 @@ +#include "util.h" #include "hist.h" #include "session.h" #include "sort.h" @@ -8,25 +9,69 @@ struct callchain_param callchain_param = { .min_percent = 0.5 }; +static void hist_entry__add_cpumode_period(struct hist_entry *self, + unsigned int cpumode, u64 period) +{ + switch (cpumode) { + case PERF_RECORD_MISC_KERNEL: + self->period_sys += period; + break; + case PERF_RECORD_MISC_USER: + self->period_us += period; + break; + case PERF_RECORD_MISC_GUEST_KERNEL: + self->period_guest_sys += period; + break; + case PERF_RECORD_MISC_GUEST_USER: + self->period_guest_us += period; + break; + default: + break; + } +} + /* - * histogram, sorted on item, collects counts + * histogram, sorted on item, collects periods */ -struct hist_entry *__perf_session__add_hist_entry(struct rb_root *hists, - struct addr_location *al, - struct symbol *sym_parent, - u64 count, bool *hit) +static struct hist_entry *hist_entry__new(struct hist_entry *template) +{ + size_t callchain_size = symbol_conf.use_callchain ? 
sizeof(struct callchain_node) : 0; + struct hist_entry *self = malloc(sizeof(*self) + callchain_size); + + if (self != NULL) { + *self = *template; + self->nr_events = 1; + if (symbol_conf.use_callchain) + callchain_init(self->callchain); + } + + return self; +} + +static void hists__inc_nr_entries(struct hists *self, struct hist_entry *entry) { - struct rb_node **p = &hists->rb_node; + if (entry->ms.sym && self->max_sym_namelen < entry->ms.sym->namelen) + self->max_sym_namelen = entry->ms.sym->namelen; + ++self->nr_entries; +} + +struct hist_entry *__hists__add_entry(struct hists *self, + struct addr_location *al, + struct symbol *sym_parent, u64 period) +{ + struct rb_node **p = &self->entries.rb_node; struct rb_node *parent = NULL; struct hist_entry *he; struct hist_entry entry = { .thread = al->thread, - .map = al->map, - .sym = al->sym, + .ms = { + .map = al->map, + .sym = al->sym, + }, .ip = al->addr, .level = al->level, - .count = count, + .period = period, .parent = sym_parent, }; int cmp; @@ -38,8 +83,9 @@ struct hist_entry *__perf_session__add_hist_entry(struct rb_root *hists, cmp = hist_entry__cmp(&entry, he); if (!cmp) { - *hit = true; - return he; + he->period += period; + ++he->nr_events; + goto out; } if (cmp < 0) @@ -48,13 +94,14 @@ struct hist_entry *__perf_session__add_hist_entry(struct rb_root *hists, p = &(*p)->rb_right; } - he = malloc(sizeof(*he)); + he = hist_entry__new(&entry); if (!he) return NULL; - *he = entry; rb_link_node(&he->rb_node, parent, p); - rb_insert_color(&he->rb_node, hists); - *hit = false; + rb_insert_color(&he->rb_node, &self->entries); + hists__inc_nr_entries(self, he); +out: + hist_entry__add_cpumode_period(he, al->cpumode, period); return he; } @@ -65,7 +112,7 @@ hist_entry__cmp(struct hist_entry *left, struct hist_entry *right) int64_t cmp = 0; list_for_each_entry(se, &hist_entry__sort_list, list) { - cmp = se->cmp(left, right); + cmp = se->se_cmp(left, right); if (cmp) break; } @@ -82,7 +129,7 @@ hist_entry__collapse(struct hist_entry *left, struct hist_entry *right) list_for_each_entry(se, &hist_entry__sort_list, list) { int64_t (*f)(struct hist_entry *, struct hist_entry *); - f = se->collapse ?: se->cmp; + f = se->se_collapse ?: se->se_cmp; cmp = f(left, right); if (cmp) @@ -101,7 +148,7 @@ void hist_entry__free(struct hist_entry *he) * collapse the histogram */ -static void collapse__insert_entry(struct rb_root *root, struct hist_entry *he) +static bool collapse__insert_entry(struct rb_root *root, struct hist_entry *he) { struct rb_node **p = &root->rb_node; struct rb_node *parent = NULL; @@ -115,9 +162,9 @@ static void collapse__insert_entry(struct rb_root *root, struct hist_entry *he) cmp = hist_entry__collapse(iter, he); if (!cmp) { - iter->count += he->count; + iter->period += he->period; hist_entry__free(he); - return; + return false; } if (cmp < 0) @@ -128,9 +175,10 @@ static void collapse__insert_entry(struct rb_root *root, struct hist_entry *he) rb_link_node(&he->rb_node, parent, p); rb_insert_color(&he->rb_node, root); + return true; } -void perf_session__collapse_resort(struct rb_root *hists) +void hists__collapse_resort(struct hists *self) { struct rb_root tmp; struct rb_node *next; @@ -140,72 +188,77 @@ void perf_session__collapse_resort(struct rb_root *hists) return; tmp = RB_ROOT; - next = rb_first(hists); + next = rb_first(&self->entries); + self->nr_entries = 0; + self->max_sym_namelen = 0; while (next) { n = rb_entry(next, struct hist_entry, rb_node); next = rb_next(&n->rb_node); - rb_erase(&n->rb_node, hists); - 
collapse__insert_entry(&tmp, n); + rb_erase(&n->rb_node, &self->entries); + if (collapse__insert_entry(&tmp, n)) + hists__inc_nr_entries(self, n); } - *hists = tmp; + self->entries = tmp; } /* - * reverse the map, sort on count. + * reverse the map, sort on period. */ -static void perf_session__insert_output_hist_entry(struct rb_root *root, - struct hist_entry *he, - u64 min_callchain_hits) +static void __hists__insert_output_entry(struct rb_root *entries, + struct hist_entry *he, + u64 min_callchain_hits) { - struct rb_node **p = &root->rb_node; + struct rb_node **p = &entries->rb_node; struct rb_node *parent = NULL; struct hist_entry *iter; if (symbol_conf.use_callchain) - callchain_param.sort(&he->sorted_chain, &he->callchain, + callchain_param.sort(&he->sorted_chain, he->callchain, min_callchain_hits, &callchain_param); while (*p != NULL) { parent = *p; iter = rb_entry(parent, struct hist_entry, rb_node); - if (he->count > iter->count) + if (he->period > iter->period) p = &(*p)->rb_left; else p = &(*p)->rb_right; } rb_link_node(&he->rb_node, parent, p); - rb_insert_color(&he->rb_node, root); + rb_insert_color(&he->rb_node, entries); } -void perf_session__output_resort(struct rb_root *hists, u64 total_samples) +void hists__output_resort(struct hists *self) { struct rb_root tmp; struct rb_node *next; struct hist_entry *n; u64 min_callchain_hits; - min_callchain_hits = - total_samples * (callchain_param.min_percent / 100); + min_callchain_hits = self->stats.total_period * (callchain_param.min_percent / 100); tmp = RB_ROOT; - next = rb_first(hists); + next = rb_first(&self->entries); + + self->nr_entries = 0; + self->max_sym_namelen = 0; while (next) { n = rb_entry(next, struct hist_entry, rb_node); next = rb_next(&n->rb_node); - rb_erase(&n->rb_node, hists); - perf_session__insert_output_hist_entry(&tmp, n, - min_callchain_hits); + rb_erase(&n->rb_node, &self->entries); + __hists__insert_output_entry(&tmp, n, min_callchain_hits); + hists__inc_nr_entries(self, n); } - *hists = tmp; + self->entries = tmp; } static size_t callchain__fprintf_left_margin(FILE *fp, int left_margin) @@ -237,7 +290,7 @@ static size_t ipchain__fprintf_graph_line(FILE *fp, int depth, int depth_mask, } static size_t ipchain__fprintf_graph(FILE *fp, struct callchain_list *chain, - int depth, int depth_mask, int count, + int depth, int depth_mask, int period, u64 total_samples, int hits, int left_margin) { @@ -250,7 +303,7 @@ static size_t ipchain__fprintf_graph(FILE *fp, struct callchain_list *chain, ret += fprintf(fp, "|"); else ret += fprintf(fp, " "); - if (!count && i == depth - 1) { + if (!period && i == depth - 1) { double percent; percent = hits * 100.0 / total_samples; @@ -258,8 +311,8 @@ static size_t ipchain__fprintf_graph(FILE *fp, struct callchain_list *chain, } else ret += fprintf(fp, "%s", " "); } - if (chain->sym) - ret += fprintf(fp, "%s\n", chain->sym->name); + if (chain->ms.sym) + ret += fprintf(fp, "%s\n", chain->ms.sym->name); else ret += fprintf(fp, "%p\n", (void *)(long)chain->ip); @@ -278,7 +331,7 @@ static void init_rem_hits(void) } strcpy(rem_sq_bracket->name, "[...]"); - rem_hits.sym = rem_sq_bracket; + rem_hits.ms.sym = rem_sq_bracket; } static size_t __callchain__fprintf_graph(FILE *fp, struct callchain_node *self, @@ -293,6 +346,7 @@ static size_t __callchain__fprintf_graph(FILE *fp, struct callchain_node *self, u64 remaining; size_t ret = 0; int i; + uint entries_printed = 0; if (callchain_param.mode == CHAIN_GRAPH_REL) new_total = self->children_hit; @@ -328,8 +382,6 @@ static size_t 
__callchain__fprintf_graph(FILE *fp, struct callchain_node *self, left_margin); i = 0; list_for_each_entry(chain, &child->val, list) { - if (chain->ip >= PERF_CONTEXT_MAX) - continue; ret += ipchain__fprintf_graph(fp, chain, depth, new_depth_mask, i++, new_total, @@ -341,6 +393,8 @@ static size_t __callchain__fprintf_graph(FILE *fp, struct callchain_node *self, new_depth_mask | (1 << depth), left_margin); node = next; + if (++entries_printed == callchain_param.print_limit) + break; } if (callchain_param.mode == CHAIN_GRAPH_REL && @@ -366,11 +420,9 @@ static size_t callchain__fprintf_graph(FILE *fp, struct callchain_node *self, bool printed = false; int i = 0; int ret = 0; + u32 entries_printed = 0; list_for_each_entry(chain, &self->val, list) { - if (chain->ip >= PERF_CONTEXT_MAX) - continue; - if (!i++ && sort__first_dimension == SORT_SYM) continue; @@ -385,10 +437,13 @@ static size_t callchain__fprintf_graph(FILE *fp, struct callchain_node *self, } else ret += callchain__fprintf_left_margin(fp, left_margin); - if (chain->sym) - ret += fprintf(fp, " %s\n", chain->sym->name); + if (chain->ms.sym) + ret += fprintf(fp, " %s\n", chain->ms.sym->name); else ret += fprintf(fp, " %p\n", (void *)(long)chain->ip); + + if (++entries_printed == callchain_param.print_limit) + break; } ret += __callchain__fprintf_graph(fp, self, total_samples, 1, 1, left_margin); @@ -411,8 +466,8 @@ static size_t callchain__fprintf_flat(FILE *fp, struct callchain_node *self, list_for_each_entry(chain, &self->val, list) { if (chain->ip >= PERF_CONTEXT_MAX) continue; - if (chain->sym) - ret += fprintf(fp, " %s\n", chain->sym->name); + if (chain->ms.sym) + ret += fprintf(fp, " %s\n", chain->ms.sym->name); else ret += fprintf(fp, " %p\n", (void *)(long)chain->ip); @@ -427,6 +482,7 @@ static size_t hist_entry_callchain__fprintf(FILE *fp, struct hist_entry *self, struct rb_node *rb_node; struct callchain_node *chain; size_t ret = 0; + u32 entries_printed = 0; rb_node = rb_first(&self->sorted_chain); while (rb_node) { @@ -449,55 +505,88 @@ static size_t hist_entry_callchain__fprintf(FILE *fp, struct hist_entry *self, break; } ret += fprintf(fp, "\n"); + if (++entries_printed == callchain_param.print_limit) + break; rb_node = rb_next(rb_node); } return ret; } -static size_t hist_entry__fprintf(struct hist_entry *self, - struct perf_session *pair_session, - bool show_displacement, - long displacement, FILE *fp, - u64 session_total) +int hist_entry__snprintf(struct hist_entry *self, char *s, size_t size, + struct hists *pair_hists, bool show_displacement, + long displacement, bool color, u64 session_total) { struct sort_entry *se; - u64 count, total; + u64 period, total, period_sys, period_us, period_guest_sys, period_guest_us; const char *sep = symbol_conf.field_sep; - size_t ret; + int ret; if (symbol_conf.exclude_other && !self->parent) return 0; - if (pair_session) { - count = self->pair ? self->pair->count : 0; - total = pair_session->events_stats.total; + if (pair_hists) { + period = self->pair ? self->pair->period : 0; + total = pair_hists->stats.total_period; + period_sys = self->pair ? self->pair->period_sys : 0; + period_us = self->pair ? self->pair->period_us : 0; + period_guest_sys = self->pair ? self->pair->period_guest_sys : 0; + period_guest_us = self->pair ? 
self->pair->period_guest_us : 0; } else { - count = self->count; + period = self->period; total = session_total; + period_sys = self->period_sys; + period_us = self->period_us; + period_guest_sys = self->period_guest_sys; + period_guest_us = self->period_guest_us; } - if (total) - ret = percent_color_fprintf(fp, sep ? "%.2f" : " %6.2f%%", - (count * 100.0) / total); - else - ret = fprintf(fp, sep ? "%lld" : "%12lld ", count); + if (total) { + if (color) + ret = percent_color_snprintf(s, size, + sep ? "%.2f" : " %6.2f%%", + (period * 100.0) / total); + else + ret = snprintf(s, size, sep ? "%.2f" : " %6.2f%%", + (period * 100.0) / total); + if (symbol_conf.show_cpu_utilization) { + ret += percent_color_snprintf(s + ret, size - ret, + sep ? "%.2f" : " %6.2f%%", + (period_sys * 100.0) / total); + ret += percent_color_snprintf(s + ret, size - ret, + sep ? "%.2f" : " %6.2f%%", + (period_us * 100.0) / total); + if (perf_guest) { + ret += percent_color_snprintf(s + ret, + size - ret, + sep ? "%.2f" : " %6.2f%%", + (period_guest_sys * 100.0) / + total); + ret += percent_color_snprintf(s + ret, + size - ret, + sep ? "%.2f" : " %6.2f%%", + (period_guest_us * 100.0) / + total); + } + } + } else + ret = snprintf(s, size, sep ? "%lld" : "%12lld ", period); if (symbol_conf.show_nr_samples) { if (sep) - fprintf(fp, "%c%lld", *sep, count); + ret += snprintf(s + ret, size - ret, "%c%lld", *sep, period); else - fprintf(fp, "%11lld", count); + ret += snprintf(s + ret, size - ret, "%11lld", period); } - if (pair_session) { + if (pair_hists) { char bf[32]; double old_percent = 0, new_percent = 0, diff; if (total > 0) - old_percent = (count * 100.0) / total; + old_percent = (period * 100.0) / total; if (session_total > 0) - new_percent = (self->count * 100.0) / session_total; + new_percent = (self->period * 100.0) / session_total; diff = new_percent - old_percent; @@ -507,9 +596,9 @@ static size_t hist_entry__fprintf(struct hist_entry *self, snprintf(bf, sizeof(bf), " "); if (sep) - ret += fprintf(fp, "%c%s", *sep, bf); + ret += snprintf(s + ret, size - ret, "%c%s", *sep, bf); else - ret += fprintf(fp, "%11.11s", bf); + ret += snprintf(s + ret, size - ret, "%11.11s", bf); if (show_displacement) { if (displacement) @@ -518,9 +607,9 @@ static size_t hist_entry__fprintf(struct hist_entry *self, snprintf(bf, sizeof(bf), " "); if (sep) - fprintf(fp, "%c%s", *sep, bf); + ret += snprintf(s + ret, size - ret, "%c%s", *sep, bf); else - fprintf(fp, "%6.6s", bf); + ret += snprintf(s + ret, size - ret, "%6.6s", bf); } } @@ -528,33 +617,43 @@ static size_t hist_entry__fprintf(struct hist_entry *self, if (se->elide) continue; - fprintf(fp, "%s", sep ?: " "); - ret += se->print(fp, self, se->width ? *se->width : 0); + ret += snprintf(s + ret, size - ret, "%s", sep ?: " "); + ret += se->se_snprintf(self, s + ret, size - ret, + se->se_width ? *se->se_width : 0); } - ret += fprintf(fp, "\n"); + return ret; +} - if (symbol_conf.use_callchain) { - int left_margin = 0; +int hist_entry__fprintf(struct hist_entry *self, struct hists *pair_hists, + bool show_displacement, long displacement, FILE *fp, + u64 session_total) +{ + char bf[512]; + hist_entry__snprintf(self, bf, sizeof(bf), pair_hists, + show_displacement, displacement, + true, session_total); + return fprintf(fp, "%s\n", bf); +} - if (sort__first_dimension == SORT_COMM) { - se = list_first_entry(&hist_entry__sort_list, typeof(*se), - list); - left_margin = se->width ? 
*se->width : 0; - left_margin -= thread__comm_len(self->thread); - } +static size_t hist_entry__fprintf_callchain(struct hist_entry *self, FILE *fp, + u64 session_total) +{ + int left_margin = 0; - hist_entry_callchain__fprintf(fp, self, session_total, - left_margin); + if (sort__first_dimension == SORT_COMM) { + struct sort_entry *se = list_first_entry(&hist_entry__sort_list, + typeof(*se), list); + left_margin = se->se_width ? *se->se_width : 0; + left_margin -= thread__comm_len(self->thread); } - return ret; + return hist_entry_callchain__fprintf(fp, self, session_total, + left_margin); } -size_t perf_session__fprintf_hists(struct rb_root *hists, - struct perf_session *pair, - bool show_displacement, FILE *fp, - u64 session_total) +size_t hists__fprintf(struct hists *self, struct hists *pair, + bool show_displacement, FILE *fp) { struct sort_entry *se; struct rb_node *nd; @@ -563,7 +662,7 @@ size_t perf_session__fprintf_hists(struct rb_root *hists, long displacement = 0; unsigned int width; const char *sep = symbol_conf.field_sep; - char *col_width = symbol_conf.col_width_list_str; + const char *col_width = symbol_conf.col_width_list_str; init_rem_hits(); @@ -576,6 +675,24 @@ size_t perf_session__fprintf_hists(struct rb_root *hists, fputs(" Samples ", fp); } + if (symbol_conf.show_cpu_utilization) { + if (sep) { + ret += fprintf(fp, "%csys", *sep); + ret += fprintf(fp, "%cus", *sep); + if (perf_guest) { + ret += fprintf(fp, "%cguest sys", *sep); + ret += fprintf(fp, "%cguest us", *sep); + } + } else { + ret += fprintf(fp, " sys "); + ret += fprintf(fp, " us "); + if (perf_guest) { + ret += fprintf(fp, " guest sys "); + ret += fprintf(fp, " guest us "); + } + } + } + if (pair) { if (sep) ret += fprintf(fp, "%cDelta", *sep); @@ -594,22 +711,22 @@ size_t perf_session__fprintf_hists(struct rb_root *hists, if (se->elide) continue; if (sep) { - fprintf(fp, "%c%s", *sep, se->header); + fprintf(fp, "%c%s", *sep, se->se_header); continue; } - width = strlen(se->header); - if (se->width) { + width = strlen(se->se_header); + if (se->se_width) { if (symbol_conf.col_width_list_str) { if (col_width) { - *se->width = atoi(col_width); + *se->se_width = atoi(col_width); col_width = strchr(col_width, ','); if (col_width) ++col_width; } } - width = *se->width = max(*se->width, width); + width = *se->se_width = max(*se->se_width, width); } - fprintf(fp, " %*s", width, se->header); + fprintf(fp, " %*s", width, se->se_header); } fprintf(fp, "\n"); @@ -631,10 +748,10 @@ size_t perf_session__fprintf_hists(struct rb_root *hists, continue; fprintf(fp, " "); - if (se->width) - width = *se->width; + if (se->se_width) + width = *se->se_width; else - width = strlen(se->header); + width = strlen(se->se_header); for (i = 0; i < width; i++) fprintf(fp, "."); } @@ -642,7 +759,7 @@ size_t perf_session__fprintf_hists(struct rb_root *hists, fprintf(fp, "\n#\n"); print_entries: - for (nd = rb_first(hists); nd; nd = rb_next(nd)) { + for (nd = rb_first(&self->entries); nd; nd = rb_next(nd)) { struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node); if (show_displacement) { @@ -654,10 +771,14 @@ print_entries: ++position; } ret += hist_entry__fprintf(h, pair, show_displacement, - displacement, fp, session_total); - if (h->map == NULL && verbose > 1) { + displacement, fp, self->stats.total_period); + + if (symbol_conf.use_callchain) + ret += hist_entry__fprintf_callchain(h, fp, self->stats.total_period); + + if (h->ms.map == NULL && verbose > 1) { __map_groups__fprintf_maps(&h->thread->mg, - MAP__FUNCTION, fp); + 
MAP__FUNCTION, verbose, fp); fprintf(fp, "%.10s end\n", graph_dotted_line); } } @@ -666,3 +787,271 @@ print_entries: return ret; } + +enum hist_filter { + HIST_FILTER__DSO, + HIST_FILTER__THREAD, +}; + +void hists__filter_by_dso(struct hists *self, const struct dso *dso) +{ + struct rb_node *nd; + + self->nr_entries = self->stats.total_period = 0; + self->stats.nr_events[PERF_RECORD_SAMPLE] = 0; + self->max_sym_namelen = 0; + + for (nd = rb_first(&self->entries); nd; nd = rb_next(nd)) { + struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node); + + if (symbol_conf.exclude_other && !h->parent) + continue; + + if (dso != NULL && (h->ms.map == NULL || h->ms.map->dso != dso)) { + h->filtered |= (1 << HIST_FILTER__DSO); + continue; + } + + h->filtered &= ~(1 << HIST_FILTER__DSO); + if (!h->filtered) { + ++self->nr_entries; + self->stats.total_period += h->period; + self->stats.nr_events[PERF_RECORD_SAMPLE] += h->nr_events; + if (h->ms.sym && + self->max_sym_namelen < h->ms.sym->namelen) + self->max_sym_namelen = h->ms.sym->namelen; + } + } +} + +void hists__filter_by_thread(struct hists *self, const struct thread *thread) +{ + struct rb_node *nd; + + self->nr_entries = self->stats.total_period = 0; + self->stats.nr_events[PERF_RECORD_SAMPLE] = 0; + self->max_sym_namelen = 0; + + for (nd = rb_first(&self->entries); nd; nd = rb_next(nd)) { + struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node); + + if (thread != NULL && h->thread != thread) { + h->filtered |= (1 << HIST_FILTER__THREAD); + continue; + } + h->filtered &= ~(1 << HIST_FILTER__THREAD); + if (!h->filtered) { + ++self->nr_entries; + self->stats.total_period += h->period; + self->stats.nr_events[PERF_RECORD_SAMPLE] += h->nr_events; + if (h->ms.sym && + self->max_sym_namelen < h->ms.sym->namelen) + self->max_sym_namelen = h->ms.sym->namelen; + } + } +} + +static int symbol__alloc_hist(struct symbol *self) +{ + struct sym_priv *priv = symbol__priv(self); + const int size = (sizeof(*priv->hist) + + (self->end - self->start) * sizeof(u64)); + + priv->hist = zalloc(size); + return priv->hist == NULL ? 
-1 : 0; +} + +int hist_entry__inc_addr_samples(struct hist_entry *self, u64 ip) +{ + unsigned int sym_size, offset; + struct symbol *sym = self->ms.sym; + struct sym_priv *priv; + struct sym_hist *h; + + if (!sym || !self->ms.map) + return 0; + + priv = symbol__priv(sym); + if (priv->hist == NULL && symbol__alloc_hist(sym) < 0) + return -ENOMEM; + + sym_size = sym->end - sym->start; + offset = ip - sym->start; + + pr_debug3("%s: ip=%#Lx\n", __func__, self->ms.map->unmap_ip(self->ms.map, ip)); + + if (offset >= sym_size) + return 0; + + h = priv->hist; + h->sum++; + h->ip[offset]++; + + pr_debug3("%#Lx %s: period++ [ip: %#Lx, %#Lx] => %Ld\n", self->ms.sym->start, + self->ms.sym->name, ip, ip - self->ms.sym->start, h->ip[offset]); + return 0; +} + +static struct objdump_line *objdump_line__new(s64 offset, char *line) +{ + struct objdump_line *self = malloc(sizeof(*self)); + + if (self != NULL) { + self->offset = offset; + self->line = line; + } + + return self; +} + +void objdump_line__free(struct objdump_line *self) +{ + free(self->line); + free(self); +} + +static void objdump__add_line(struct list_head *head, struct objdump_line *line) +{ + list_add_tail(&line->node, head); +} + +struct objdump_line *objdump__get_next_ip_line(struct list_head *head, + struct objdump_line *pos) +{ + list_for_each_entry_continue(pos, head, node) + if (pos->offset >= 0) + return pos; + + return NULL; +} + +static int hist_entry__parse_objdump_line(struct hist_entry *self, FILE *file, + struct list_head *head) +{ + struct symbol *sym = self->ms.sym; + struct objdump_line *objdump_line; + char *line = NULL, *tmp, *tmp2, *c; + size_t line_len; + s64 line_ip, offset = -1; + + if (getline(&line, &line_len, file) < 0) + return -1; + + if (!line) + return -1; + + while (line_len != 0 && isspace(line[line_len - 1])) + line[--line_len] = '\0'; + + c = strchr(line, '\n'); + if (c) + *c = 0; + + line_ip = -1; + + /* + * Strip leading spaces: + */ + tmp = line; + while (*tmp) { + if (*tmp != ' ') + break; + tmp++; + } + + if (*tmp) { + /* + * Parse hexa addresses followed by ':' + */ + line_ip = strtoull(tmp, &tmp2, 16); + if (*tmp2 != ':') + line_ip = -1; + } + + if (line_ip != -1) { + u64 start = map__rip_2objdump(self->ms.map, sym->start); + offset = line_ip - start; + } + + objdump_line = objdump_line__new(offset, line); + if (objdump_line == NULL) { + free(line); + return -1; + } + objdump__add_line(head, objdump_line); + + return 0; +} + +int hist_entry__annotate(struct hist_entry *self, struct list_head *head) +{ + struct symbol *sym = self->ms.sym; + struct map *map = self->ms.map; + struct dso *dso = map->dso; + const char *filename = dso->long_name; + char command[PATH_MAX * 2]; + FILE *file; + u64 len; + + if (!filename) + return -1; + + if (dso->origin == DSO__ORIG_KERNEL) { + if (dso->annotate_warned) + return 0; + dso->annotate_warned = 1; + pr_err("Can't annotate %s: No vmlinux file was found in the " + "path:\n", sym->name); + vmlinux_path__fprintf(stderr); + return -1; + } + + pr_debug("%s: filename=%s, sym=%s, start=%#Lx, end=%#Lx\n", __func__, + filename, sym->name, map->unmap_ip(map, sym->start), + map->unmap_ip(map, sym->end)); + + len = sym->end - sym->start; + + pr_debug("annotating [%p] %30s : [%p] %30s\n", + dso, dso->long_name, sym, sym->name); + + snprintf(command, sizeof(command), + "objdump --start-address=0x%016Lx --stop-address=0x%016Lx -dS %s|grep -v %s|expand", + map__rip_2objdump(map, sym->start), + map__rip_2objdump(map, sym->end), + filename, filename); + + pr_debug("Executing: %s\n", 
command); + + file = popen(command, "r"); + if (!file) + return -1; + + while (!feof(file)) + if (hist_entry__parse_objdump_line(self, file, head) < 0) + break; + + pclose(file); + return 0; +} + +void hists__inc_nr_events(struct hists *self, u32 type) +{ + ++self->stats.nr_events[0]; + ++self->stats.nr_events[type]; +} + +size_t hists__fprintf_nr_events(struct hists *self, FILE *fp) +{ + int i; + size_t ret = 0; + + for (i = 0; i < PERF_RECORD_HEADER_MAX; ++i) { + if (!event__name[i]) + continue; + ret += fprintf(fp, "%10s events: %10d\n", + event__name[i], self->stats.nr_events[i]); + } + + return ret; +} diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h index 16f360cce5bf..6f17dcd8412c 100644 --- a/tools/perf/util/hist.h +++ b/tools/perf/util/hist.h @@ -6,24 +6,104 @@ extern struct callchain_param callchain_param; -struct perf_session; struct hist_entry; struct addr_location; struct symbol; struct rb_root; -struct hist_entry *__perf_session__add_hist_entry(struct rb_root *hists, - struct addr_location *al, - struct symbol *parent, - u64 count, bool *hit); +struct objdump_line { + struct list_head node; + s64 offset; + char *line; +}; + +void objdump_line__free(struct objdump_line *self); +struct objdump_line *objdump__get_next_ip_line(struct list_head *head, + struct objdump_line *pos); + +struct sym_hist { + u64 sum; + u64 ip[0]; +}; + +struct sym_ext { + struct rb_node node; + double percent; + char *path; +}; + +struct sym_priv { + struct sym_hist *hist; + struct sym_ext *ext; +}; + +/* + * The kernel collects the number of events it couldn't send in a stretch and + * when possible sends this number in a PERF_RECORD_LOST event. The number of + * such "chunks" of lost events is stored in .nr_events[PERF_EVENT_LOST] while + * total_lost tells exactly how many events the kernel in fact lost, i.e. it is + * the sum of all struct lost_event.lost fields reported. + * + * The total_period is needed because by default auto-freq is used, so + * multipling nr_events[PERF_EVENT_SAMPLE] by a frequency isn't possible to get + * the total number of low level events, it is necessary to to sum all struct + * sample_event.period and stash the result in total_period. 
+ */ +struct events_stats { + u64 total_period; + u64 total_lost; + u32 nr_events[PERF_RECORD_HEADER_MAX]; + u32 nr_unknown_events; +}; + +struct hists { + struct rb_node rb_node; + struct rb_root entries; + u64 nr_entries; + struct events_stats stats; + u64 config; + u64 event_stream; + u32 type; + u32 max_sym_namelen; +}; + +struct hist_entry *__hists__add_entry(struct hists *self, + struct addr_location *al, + struct symbol *parent, u64 period); extern int64_t hist_entry__cmp(struct hist_entry *, struct hist_entry *); extern int64_t hist_entry__collapse(struct hist_entry *, struct hist_entry *); +int hist_entry__fprintf(struct hist_entry *self, struct hists *pair_hists, + bool show_displacement, long displacement, FILE *fp, + u64 total); +int hist_entry__snprintf(struct hist_entry *self, char *bf, size_t size, + struct hists *pair_hists, bool show_displacement, + long displacement, bool color, u64 total); void hist_entry__free(struct hist_entry *); -void perf_session__output_resort(struct rb_root *hists, u64 total_samples); -void perf_session__collapse_resort(struct rb_root *hists); -size_t perf_session__fprintf_hists(struct rb_root *hists, - struct perf_session *pair, - bool show_displacement, FILE *fp, - u64 session_total); +void hists__output_resort(struct hists *self); +void hists__collapse_resort(struct hists *self); + +void hists__inc_nr_events(struct hists *self, u32 type); +size_t hists__fprintf_nr_events(struct hists *self, FILE *fp); + +size_t hists__fprintf(struct hists *self, struct hists *pair, + bool show_displacement, FILE *fp); + +int hist_entry__inc_addr_samples(struct hist_entry *self, u64 ip); +int hist_entry__annotate(struct hist_entry *self, struct list_head *head); + +void hists__filter_by_dso(struct hists *self, const struct dso *dso); +void hists__filter_by_thread(struct hists *self, const struct thread *thread); + +#ifdef NO_NEWT_SUPPORT +static inline int hists__browse(struct hists *self __used, + const char *helpline __used, + const char *input_name __used) +{ + return 0; +} +#else +int hists__browse(struct hists *self, const char *helpline, + const char *input_name); +#endif #endif /* __PERF_HIST_H */ diff --git a/tools/perf/util/hweight.c b/tools/perf/util/hweight.c new file mode 100644 index 000000000000..5c1d0d099f0d --- /dev/null +++ b/tools/perf/util/hweight.c @@ -0,0 +1,31 @@ +#include <linux/bitops.h> + +/** + * hweightN - returns the hamming weight of a N-bit word + * @x: the word to weigh + * + * The Hamming Weight of a number is the total number of bits set in it. 
+ */ + +unsigned int hweight32(unsigned int w) +{ + unsigned int res = w - ((w >> 1) & 0x55555555); + res = (res & 0x33333333) + ((res >> 2) & 0x33333333); + res = (res + (res >> 4)) & 0x0F0F0F0F; + res = res + (res >> 8); + return (res + (res >> 16)) & 0x000000FF; +} + +unsigned long hweight64(__u64 w) +{ +#if BITS_PER_LONG == 32 + return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w); +#elif BITS_PER_LONG == 64 + __u64 res = w - ((w >> 1) & 0x5555555555555555ul); + res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul); + res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful; + res = res + (res >> 8); + res = res + (res >> 16); + return (res + (res >> 32)) & 0x00000000000000FFul; +#endif +} diff --git a/tools/perf/util/include/asm/bitops.h b/tools/perf/util/include/asm/bitops.h deleted file mode 100644 index 58e9817ffae0..000000000000 --- a/tools/perf/util/include/asm/bitops.h +++ /dev/null @@ -1,18 +0,0 @@ -#ifndef _PERF_ASM_BITOPS_H_ -#define _PERF_ASM_BITOPS_H_ - -#include <sys/types.h> -#include "../../types.h" -#include <linux/compiler.h> - -/* CHECKME: Not sure both always match */ -#define BITS_PER_LONG __WORDSIZE - -#include "../../../../include/asm-generic/bitops/__fls.h" -#include "../../../../include/asm-generic/bitops/fls.h" -#include "../../../../include/asm-generic/bitops/fls64.h" -#include "../../../../include/asm-generic/bitops/__ffs.h" -#include "../../../../include/asm-generic/bitops/ffz.h" -#include "../../../../include/asm-generic/bitops/hweight.h" - -#endif diff --git a/tools/perf/util/include/asm/hweight.h b/tools/perf/util/include/asm/hweight.h new file mode 100644 index 000000000000..36cf26d434a5 --- /dev/null +++ b/tools/perf/util/include/asm/hweight.h @@ -0,0 +1,8 @@ +#ifndef PERF_HWEIGHT_H +#define PERF_HWEIGHT_H + +#include <linux/types.h> +unsigned int hweight32(unsigned int w); +unsigned long hweight64(__u64 w); + +#endif /* PERF_HWEIGHT_H */ diff --git a/tools/perf/util/include/dwarf-regs.h b/tools/perf/util/include/dwarf-regs.h new file mode 100644 index 000000000000..cf6727e99c44 --- /dev/null +++ b/tools/perf/util/include/dwarf-regs.h @@ -0,0 +1,8 @@ +#ifndef _PERF_DWARF_REGS_H_ +#define _PERF_DWARF_REGS_H_ + +#ifdef DWARF_SUPPORT +const char *get_arch_regstr(unsigned int n); +#endif + +#endif diff --git a/tools/perf/util/include/linux/bitmap.h b/tools/perf/util/include/linux/bitmap.h index 94507639a8c4..eda4416efa0a 100644 --- a/tools/perf/util/include/linux/bitmap.h +++ b/tools/perf/util/include/linux/bitmap.h @@ -1,3 +1,35 @@ -#include "../../../../include/linux/bitmap.h" -#include "../../../../include/asm-generic/bitops/find.h" -#include <linux/errno.h> +#ifndef _PERF_BITOPS_H +#define _PERF_BITOPS_H + +#include <string.h> +#include <linux/bitops.h> + +int __bitmap_weight(const unsigned long *bitmap, int bits); + +#define BITMAP_LAST_WORD_MASK(nbits) \ +( \ + ((nbits) % BITS_PER_LONG) ? 
\ + (1UL<<((nbits) % BITS_PER_LONG))-1 : ~0UL \ +) + +#define small_const_nbits(nbits) \ + (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG) + +static inline void bitmap_zero(unsigned long *dst, int nbits) +{ + if (small_const_nbits(nbits)) + *dst = 0UL; + else { + int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); + } +} + +static inline int bitmap_weight(const unsigned long *src, int nbits) +{ + if (small_const_nbits(nbits)) + return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits)); + return __bitmap_weight(src, nbits); +} + +#endif /* _PERF_BITOPS_H */ diff --git a/tools/perf/util/include/linux/bitops.h b/tools/perf/util/include/linux/bitops.h index 8d63116e9435..bb4ac2e05385 100644 --- a/tools/perf/util/include/linux/bitops.h +++ b/tools/perf/util/include/linux/bitops.h @@ -1,13 +1,12 @@ #ifndef _PERF_LINUX_BITOPS_H_ #define _PERF_LINUX_BITOPS_H_ -#define __KERNEL__ +#include <linux/kernel.h> +#include <asm/hweight.h> -#define CONFIG_GENERIC_FIND_NEXT_BIT -#define CONFIG_GENERIC_FIND_FIRST_BIT -#include "../../../../include/linux/bitops.h" - -#undef __KERNEL__ +#define BITS_PER_LONG __WORDSIZE +#define BITS_PER_BYTE 8 +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) static inline void set_bit(int nr, unsigned long *addr) { @@ -20,10 +19,9 @@ static __always_inline int test_bit(unsigned int nr, const unsigned long *addr) (((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0; } -unsigned long generic_find_next_zero_le_bit(const unsigned long *addr, unsigned - long size, unsigned long offset); - -unsigned long generic_find_next_le_bit(const unsigned long *addr, unsigned - long size, unsigned long offset); +static inline unsigned long hweight_long(unsigned long w) +{ + return sizeof(w) == 4 ? hweight32(w) : hweight64(w); +} #endif diff --git a/tools/perf/util/include/linux/compiler.h b/tools/perf/util/include/linux/compiler.h index dfb0713ed47f..791f9dd27ebf 100644 --- a/tools/perf/util/include/linux/compiler.h +++ b/tools/perf/util/include/linux/compiler.h @@ -7,4 +7,6 @@ #define __user #define __attribute_const__ +#define __used __attribute__((__unused__)) + #endif diff --git a/tools/perf/util/include/linux/kernel.h b/tools/perf/util/include/linux/kernel.h index f2611655ab51..1eb804fd3fbf 100644 --- a/tools/perf/util/include/linux/kernel.h +++ b/tools/perf/util/include/linux/kernel.h @@ -28,6 +28,8 @@ (type *)((char *)__mptr - offsetof(type, member)); }) #endif +#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); })) + #ifndef max #define max(x, y) ({ \ typeof(x) _max1 = (x); \ @@ -85,16 +87,19 @@ simple_strtoul(const char *nptr, char **endptr, int base) return strtoul(nptr, endptr, base); } +int eprintf(int level, + const char *fmt, ...) __attribute__((format(printf, 2, 3))); + #ifndef pr_fmt #define pr_fmt(fmt) fmt #endif #define pr_err(fmt, ...) \ - do { fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__); } while (0) + eprintf(0, pr_fmt(fmt), ##__VA_ARGS__) #define pr_warning(fmt, ...) \ - do { fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__); } while (0) + eprintf(0, pr_fmt(fmt), ##__VA_ARGS__) #define pr_info(fmt, ...) \ - do { fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__); } while (0) + eprintf(0, pr_fmt(fmt), ##__VA_ARGS__) #define pr_debug(fmt, ...) \ eprintf(1, pr_fmt(fmt), ##__VA_ARGS__) #define pr_debugN(n, fmt, ...) 
\ diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c index e509cd59c67d..e672f2fef65b 100644 --- a/tools/perf/util/map.c +++ b/tools/perf/util/map.c @@ -1,9 +1,11 @@ -#include "event.h" #include "symbol.h" +#include <errno.h> +#include <limits.h> #include <stdlib.h> #include <string.h> #include <stdio.h> -#include "debug.h" +#include <unistd.h> +#include "map.h" const char *map_type__name[MAP__NR_TYPES] = { [MAP__FUNCTION] = "Functions", @@ -36,15 +38,16 @@ void map__init(struct map *self, enum map_type type, self->map_ip = map__map_ip; self->unmap_ip = map__unmap_ip; RB_CLEAR_NODE(&self->rb_node); + self->groups = NULL; } -struct map *map__new(struct mmap_event *event, enum map_type type, - char *cwd, int cwdlen) +struct map *map__new(struct list_head *dsos__list, u64 start, u64 len, + u64 pgoff, u32 pid, char *filename, + enum map_type type, char *cwd, int cwdlen) { struct map *self = malloc(sizeof(*self)); if (self != NULL) { - const char *filename = event->filename; char newfilename[PATH_MAX]; struct dso *dso; int anon; @@ -62,16 +65,15 @@ struct map *map__new(struct mmap_event *event, enum map_type type, anon = is_anon_memory(filename); if (anon) { - snprintf(newfilename, sizeof(newfilename), "/tmp/perf-%d.map", event->pid); + snprintf(newfilename, sizeof(newfilename), "/tmp/perf-%d.map", pid); filename = newfilename; } - dso = dsos__findnew(filename); + dso = __dsos__findnew(dsos__list, filename); if (dso == NULL) goto out_delete; - map__init(self, type, event->start, event->start + event->len, - event->pgoff, dso); + map__init(self, type, start, start + len, pgoff, dso); if (anon) { set_identity: @@ -235,3 +237,392 @@ u64 map__objdump_2ip(struct map *map, u64 addr) map->unmap_ip(map, addr); /* RIP -> IP */ return ip; } + +void map_groups__init(struct map_groups *self) +{ + int i; + for (i = 0; i < MAP__NR_TYPES; ++i) { + self->maps[i] = RB_ROOT; + INIT_LIST_HEAD(&self->removed_maps[i]); + } + self->machine = NULL; +} + +void map_groups__flush(struct map_groups *self) +{ + int type; + + for (type = 0; type < MAP__NR_TYPES; type++) { + struct rb_root *root = &self->maps[type]; + struct rb_node *next = rb_first(root); + + while (next) { + struct map *pos = rb_entry(next, struct map, rb_node); + next = rb_next(&pos->rb_node); + rb_erase(&pos->rb_node, root); + /* + * We may have references to this map, for + * instance in some hist_entry instances, so + * just move them to a separate list. 
+ */ + list_add_tail(&pos->node, &self->removed_maps[pos->type]); + } + } +} + +struct symbol *map_groups__find_symbol(struct map_groups *self, + enum map_type type, u64 addr, + struct map **mapp, + symbol_filter_t filter) +{ + struct map *map = map_groups__find(self, type, addr); + + if (map != NULL) { + if (mapp != NULL) + *mapp = map; + return map__find_symbol(map, map->map_ip(map, addr), filter); + } + + return NULL; +} + +struct symbol *map_groups__find_symbol_by_name(struct map_groups *self, + enum map_type type, + const char *name, + struct map **mapp, + symbol_filter_t filter) +{ + struct rb_node *nd; + + for (nd = rb_first(&self->maps[type]); nd; nd = rb_next(nd)) { + struct map *pos = rb_entry(nd, struct map, rb_node); + struct symbol *sym = map__find_symbol_by_name(pos, name, filter); + + if (sym == NULL) + continue; + if (mapp != NULL) + *mapp = pos; + return sym; + } + + return NULL; +} + +size_t __map_groups__fprintf_maps(struct map_groups *self, + enum map_type type, int verbose, FILE *fp) +{ + size_t printed = fprintf(fp, "%s:\n", map_type__name[type]); + struct rb_node *nd; + + for (nd = rb_first(&self->maps[type]); nd; nd = rb_next(nd)) { + struct map *pos = rb_entry(nd, struct map, rb_node); + printed += fprintf(fp, "Map:"); + printed += map__fprintf(pos, fp); + if (verbose > 2) { + printed += dso__fprintf(pos->dso, type, fp); + printed += fprintf(fp, "--\n"); + } + } + + return printed; +} + +size_t map_groups__fprintf_maps(struct map_groups *self, int verbose, FILE *fp) +{ + size_t printed = 0, i; + for (i = 0; i < MAP__NR_TYPES; ++i) + printed += __map_groups__fprintf_maps(self, i, verbose, fp); + return printed; +} + +static size_t __map_groups__fprintf_removed_maps(struct map_groups *self, + enum map_type type, + int verbose, FILE *fp) +{ + struct map *pos; + size_t printed = 0; + + list_for_each_entry(pos, &self->removed_maps[type], node) { + printed += fprintf(fp, "Map:"); + printed += map__fprintf(pos, fp); + if (verbose > 1) { + printed += dso__fprintf(pos->dso, type, fp); + printed += fprintf(fp, "--\n"); + } + } + return printed; +} + +static size_t map_groups__fprintf_removed_maps(struct map_groups *self, + int verbose, FILE *fp) +{ + size_t printed = 0, i; + for (i = 0; i < MAP__NR_TYPES; ++i) + printed += __map_groups__fprintf_removed_maps(self, i, verbose, fp); + return printed; +} + +size_t map_groups__fprintf(struct map_groups *self, int verbose, FILE *fp) +{ + size_t printed = map_groups__fprintf_maps(self, verbose, fp); + printed += fprintf(fp, "Removed maps:\n"); + return printed + map_groups__fprintf_removed_maps(self, verbose, fp); +} + +int map_groups__fixup_overlappings(struct map_groups *self, struct map *map, + int verbose, FILE *fp) +{ + struct rb_root *root = &self->maps[map->type]; + struct rb_node *next = rb_first(root); + + while (next) { + struct map *pos = rb_entry(next, struct map, rb_node); + next = rb_next(&pos->rb_node); + + if (!map__overlap(pos, map)) + continue; + + if (verbose >= 2) { + fputs("overlapping maps:\n", fp); + map__fprintf(map, fp); + map__fprintf(pos, fp); + } + + rb_erase(&pos->rb_node, root); + /* + * We may have references to this map, for instance in some + * hist_entry instances, so just move them to a separate + * list. 
+ */ + list_add_tail(&pos->node, &self->removed_maps[map->type]); + /* + * Now check if we need to create new maps for areas not + * overlapped by the new map: + */ + if (map->start > pos->start) { + struct map *before = map__clone(pos); + + if (before == NULL) + return -ENOMEM; + + before->end = map->start - 1; + map_groups__insert(self, before); + if (verbose >= 2) + map__fprintf(before, fp); + } + + if (map->end < pos->end) { + struct map *after = map__clone(pos); + + if (after == NULL) + return -ENOMEM; + + after->start = map->end + 1; + map_groups__insert(self, after); + if (verbose >= 2) + map__fprintf(after, fp); + } + } + + return 0; +} + +/* + * XXX This should not really _copy_ te maps, but refcount them. + */ +int map_groups__clone(struct map_groups *self, + struct map_groups *parent, enum map_type type) +{ + struct rb_node *nd; + for (nd = rb_first(&parent->maps[type]); nd; nd = rb_next(nd)) { + struct map *map = rb_entry(nd, struct map, rb_node); + struct map *new = map__clone(map); + if (new == NULL) + return -ENOMEM; + map_groups__insert(self, new); + } + return 0; +} + +static u64 map__reloc_map_ip(struct map *map, u64 ip) +{ + return ip + (s64)map->pgoff; +} + +static u64 map__reloc_unmap_ip(struct map *map, u64 ip) +{ + return ip - (s64)map->pgoff; +} + +void map__reloc_vmlinux(struct map *self) +{ + struct kmap *kmap = map__kmap(self); + s64 reloc; + + if (!kmap->ref_reloc_sym || !kmap->ref_reloc_sym->unrelocated_addr) + return; + + reloc = (kmap->ref_reloc_sym->unrelocated_addr - + kmap->ref_reloc_sym->addr); + + if (!reloc) + return; + + self->map_ip = map__reloc_map_ip; + self->unmap_ip = map__reloc_unmap_ip; + self->pgoff = reloc; +} + +void maps__insert(struct rb_root *maps, struct map *map) +{ + struct rb_node **p = &maps->rb_node; + struct rb_node *parent = NULL; + const u64 ip = map->start; + struct map *m; + + while (*p != NULL) { + parent = *p; + m = rb_entry(parent, struct map, rb_node); + if (ip < m->start) + p = &(*p)->rb_left; + else + p = &(*p)->rb_right; + } + + rb_link_node(&map->rb_node, parent, p); + rb_insert_color(&map->rb_node, maps); +} + +struct map *maps__find(struct rb_root *maps, u64 ip) +{ + struct rb_node **p = &maps->rb_node; + struct rb_node *parent = NULL; + struct map *m; + + while (*p != NULL) { + parent = *p; + m = rb_entry(parent, struct map, rb_node); + if (ip < m->start) + p = &(*p)->rb_left; + else if (ip > m->end) + p = &(*p)->rb_right; + else + return m; + } + + return NULL; +} + +int machine__init(struct machine *self, const char *root_dir, pid_t pid) +{ + map_groups__init(&self->kmaps); + RB_CLEAR_NODE(&self->rb_node); + INIT_LIST_HEAD(&self->user_dsos); + INIT_LIST_HEAD(&self->kernel_dsos); + + self->kmaps.machine = self; + self->pid = pid; + self->root_dir = strdup(root_dir); + return self->root_dir == NULL ? 
-ENOMEM : 0; +} + +struct machine *machines__add(struct rb_root *self, pid_t pid, + const char *root_dir) +{ + struct rb_node **p = &self->rb_node; + struct rb_node *parent = NULL; + struct machine *pos, *machine = malloc(sizeof(*machine)); + + if (!machine) + return NULL; + + if (machine__init(machine, root_dir, pid) != 0) { + free(machine); + return NULL; + } + + while (*p != NULL) { + parent = *p; + pos = rb_entry(parent, struct machine, rb_node); + if (pid < pos->pid) + p = &(*p)->rb_left; + else + p = &(*p)->rb_right; + } + + rb_link_node(&machine->rb_node, parent, p); + rb_insert_color(&machine->rb_node, self); + + return machine; +} + +struct machine *machines__find(struct rb_root *self, pid_t pid) +{ + struct rb_node **p = &self->rb_node; + struct rb_node *parent = NULL; + struct machine *machine; + struct machine *default_machine = NULL; + + while (*p != NULL) { + parent = *p; + machine = rb_entry(parent, struct machine, rb_node); + if (pid < machine->pid) + p = &(*p)->rb_left; + else if (pid > machine->pid) + p = &(*p)->rb_right; + else + return machine; + if (!machine->pid) + default_machine = machine; + } + + return default_machine; +} + +struct machine *machines__findnew(struct rb_root *self, pid_t pid) +{ + char path[PATH_MAX]; + const char *root_dir; + struct machine *machine = machines__find(self, pid); + + if (!machine || machine->pid != pid) { + if (pid == HOST_KERNEL_ID || pid == DEFAULT_GUEST_KERNEL_ID) + root_dir = ""; + else { + if (!symbol_conf.guestmount) + goto out; + sprintf(path, "%s/%d", symbol_conf.guestmount, pid); + if (access(path, R_OK)) { + pr_err("Can't access file %s\n", path); + goto out; + } + root_dir = path; + } + machine = machines__add(self, pid, root_dir); + } + +out: + return machine; +} + +void machines__process(struct rb_root *self, machine__process_t process, void *data) +{ + struct rb_node *nd; + + for (nd = rb_first(self); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + process(pos, data); + } +} + +char *machine__mmap_name(struct machine *self, char *bf, size_t size) +{ + if (machine__is_host(self)) + snprintf(bf, size, "[%s]", "kernel.kallsyms"); + else if (machine__is_default_guest(self)) + snprintf(bf, size, "[%s]", "guest.kernel.kallsyms"); + else + snprintf(bf, size, "[%s.%d]", "guest.kernel.kallsyms", self->pid); + + return bf; +} diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h index b756368076c6..f39134512829 100644 --- a/tools/perf/util/map.h +++ b/tools/perf/util/map.h @@ -4,7 +4,9 @@ #include <linux/compiler.h> #include <linux/list.h> #include <linux/rbtree.h> -#include <linux/types.h> +#include <stdio.h> +#include <stdbool.h> +#include "types.h" enum map_type { MAP__FUNCTION = 0, @@ -18,6 +20,7 @@ extern const char *map_type__name[MAP__NR_TYPES]; struct dso; struct ref_reloc_sym; struct map_groups; +struct machine; struct map { union { @@ -27,6 +30,7 @@ struct map { u64 start; u64 end; enum map_type type; + u32 priv; u64 pgoff; /* ip -> dso rip */ @@ -35,6 +39,7 @@ struct map { u64 (*unmap_ip)(struct map *, u64); struct dso *dso; + struct map_groups *groups; }; struct kmap { @@ -42,6 +47,32 @@ struct kmap { struct map_groups *kmaps; }; +struct map_groups { + struct rb_root maps[MAP__NR_TYPES]; + struct list_head removed_maps[MAP__NR_TYPES]; + struct machine *machine; +}; + +/* Native host kernel uses -1 as pid index in machine */ +#define HOST_KERNEL_ID (-1) +#define DEFAULT_GUEST_KERNEL_ID (0) + +struct machine { + struct rb_node rb_node; + pid_t pid; + char *root_dir; + struct 
list_head user_dsos; + struct list_head kernel_dsos; + struct map_groups kmaps; + struct map *vmlinux_maps[MAP__NR_TYPES]; +}; + +static inline +struct map *machine__kernel_map(struct machine *self, enum map_type type) +{ + return self->vmlinux_maps[type]; +} + static inline struct kmap *map__kmap(struct map *self) { return (struct kmap *)(self + 1); @@ -68,14 +99,14 @@ u64 map__rip_2objdump(struct map *map, u64 rip); u64 map__objdump_2ip(struct map *map, u64 addr); struct symbol; -struct mmap_event; typedef int (*symbol_filter_t)(struct map *map, struct symbol *sym); void map__init(struct map *self, enum map_type type, u64 start, u64 end, u64 pgoff, struct dso *dso); -struct map *map__new(struct mmap_event *event, enum map_type, - char *cwd, int cwdlen); +struct map *map__new(struct list_head *dsos__list, u64 start, u64 len, + u64 pgoff, u32 pid, char *filename, + enum map_type type, char *cwd, int cwdlen); void map__delete(struct map *self); struct map *map__clone(struct map *self); int map__overlap(struct map *l, struct map *r); @@ -91,4 +122,96 @@ void map__fixup_end(struct map *self); void map__reloc_vmlinux(struct map *self); +size_t __map_groups__fprintf_maps(struct map_groups *self, + enum map_type type, int verbose, FILE *fp); +void maps__insert(struct rb_root *maps, struct map *map); +struct map *maps__find(struct rb_root *maps, u64 addr); +void map_groups__init(struct map_groups *self); +int map_groups__clone(struct map_groups *self, + struct map_groups *parent, enum map_type type); +size_t map_groups__fprintf(struct map_groups *self, int verbose, FILE *fp); +size_t map_groups__fprintf_maps(struct map_groups *self, int verbose, FILE *fp); + +typedef void (*machine__process_t)(struct machine *self, void *data); + +void machines__process(struct rb_root *self, machine__process_t process, void *data); +struct machine *machines__add(struct rb_root *self, pid_t pid, + const char *root_dir); +struct machine *machines__find_host(struct rb_root *self); +struct machine *machines__find(struct rb_root *self, pid_t pid); +struct machine *machines__findnew(struct rb_root *self, pid_t pid); +char *machine__mmap_name(struct machine *self, char *bf, size_t size); +int machine__init(struct machine *self, const char *root_dir, pid_t pid); + +/* + * Default guest kernel is defined by parameter --guestkallsyms + * and --guestmodules + */ +static inline bool machine__is_default_guest(struct machine *self) +{ + return self ? self->pid == DEFAULT_GUEST_KERNEL_ID : false; +} + +static inline bool machine__is_host(struct machine *self) +{ + return self ? 
self->pid == HOST_KERNEL_ID : false; +} + +static inline void map_groups__insert(struct map_groups *self, struct map *map) +{ + maps__insert(&self->maps[map->type], map); + map->groups = self; +} + +static inline struct map *map_groups__find(struct map_groups *self, + enum map_type type, u64 addr) +{ + return maps__find(&self->maps[type], addr); +} + +struct symbol *map_groups__find_symbol(struct map_groups *self, + enum map_type type, u64 addr, + struct map **mapp, + symbol_filter_t filter); + +struct symbol *map_groups__find_symbol_by_name(struct map_groups *self, + enum map_type type, + const char *name, + struct map **mapp, + symbol_filter_t filter); + +static inline +struct symbol *machine__find_kernel_symbol(struct machine *self, + enum map_type type, u64 addr, + struct map **mapp, + symbol_filter_t filter) +{ + return map_groups__find_symbol(&self->kmaps, type, addr, mapp, filter); +} + +static inline +struct symbol *machine__find_kernel_function(struct machine *self, u64 addr, + struct map **mapp, + symbol_filter_t filter) +{ + return machine__find_kernel_symbol(self, MAP__FUNCTION, addr, mapp, filter); +} + +static inline +struct symbol *map_groups__find_function_by_name(struct map_groups *self, + const char *name, struct map **mapp, + symbol_filter_t filter) +{ + return map_groups__find_symbol_by_name(self, MAP__FUNCTION, name, mapp, filter); +} + +int map_groups__fixup_overlappings(struct map_groups *self, struct map *map, + int verbose, FILE *fp); + +struct map *map_groups__find_by_name(struct map_groups *self, + enum map_type type, const char *name); +struct map *machine__new_module(struct machine *self, u64 start, const char *filename); + +void map_groups__flush(struct map_groups *self); + #endif /* __PERF_MAP_H */ diff --git a/tools/perf/util/newt.c b/tools/perf/util/newt.c new file mode 100644 index 000000000000..ccb7c5bb269e --- /dev/null +++ b/tools/perf/util/newt.c @@ -0,0 +1,1084 @@ +#define _GNU_SOURCE +#include <stdio.h> +#undef _GNU_SOURCE + +#include <slang.h> +#include <stdlib.h> +#include <newt.h> +#include <sys/ttydefaults.h> + +#include "cache.h" +#include "hist.h" +#include "pstack.h" +#include "session.h" +#include "sort.h" +#include "symbol.h" + +#if SLANG_VERSION < 20104 +#define slsmg_printf(msg, args...) 
SLsmg_printf((char *)msg, ##args) +#define slsmg_write_nstring(msg, len) SLsmg_write_nstring((char *)msg, len) +#define sltt_set_color(obj, name, fg, bg) SLtt_set_color(obj,(char *)name,\ + (char *)fg, (char *)bg) +#else +#define slsmg_printf SLsmg_printf +#define slsmg_write_nstring SLsmg_write_nstring +#define sltt_set_color SLtt_set_color +#endif + +struct ui_progress { + newtComponent form, scale; +}; + +struct ui_progress *ui_progress__new(const char *title, u64 total) +{ + struct ui_progress *self = malloc(sizeof(*self)); + + if (self != NULL) { + int cols; + newtGetScreenSize(&cols, NULL); + cols -= 4; + newtCenteredWindow(cols, 1, title); + self->form = newtForm(NULL, NULL, 0); + if (self->form == NULL) + goto out_free_self; + self->scale = newtScale(0, 0, cols, total); + if (self->scale == NULL) + goto out_free_form; + newtFormAddComponent(self->form, self->scale); + newtRefresh(); + } + + return self; + +out_free_form: + newtFormDestroy(self->form); +out_free_self: + free(self); + return NULL; +} + +void ui_progress__update(struct ui_progress *self, u64 curr) +{ + newtScaleSet(self->scale, curr); + newtRefresh(); +} + +void ui_progress__delete(struct ui_progress *self) +{ + newtFormDestroy(self->form); + newtPopWindow(); + free(self); +} + +static void ui_helpline__pop(void) +{ + newtPopHelpLine(); +} + +static void ui_helpline__push(const char *msg) +{ + newtPushHelpLine(msg); +} + +static void ui_helpline__vpush(const char *fmt, va_list ap) +{ + char *s; + + if (vasprintf(&s, fmt, ap) < 0) + vfprintf(stderr, fmt, ap); + else { + ui_helpline__push(s); + free(s); + } +} + +static void ui_helpline__fpush(const char *fmt, ...) +{ + va_list ap; + + va_start(ap, fmt); + ui_helpline__vpush(fmt, ap); + va_end(ap); +} + +static void ui_helpline__puts(const char *msg) +{ + ui_helpline__pop(); + ui_helpline__push(msg); +} + +static char browser__last_msg[1024]; + +int browser__show_help(const char *format, va_list ap) +{ + int ret; + static int backlog; + + ret = vsnprintf(browser__last_msg + backlog, + sizeof(browser__last_msg) - backlog, format, ap); + backlog += ret; + + if (browser__last_msg[backlog - 1] == '\n') { + ui_helpline__puts(browser__last_msg); + newtRefresh(); + backlog = 0; + } + + return ret; +} + +static void newt_form__set_exit_keys(newtComponent self) +{ + newtFormAddHotKey(self, NEWT_KEY_LEFT); + newtFormAddHotKey(self, NEWT_KEY_ESCAPE); + newtFormAddHotKey(self, 'Q'); + newtFormAddHotKey(self, 'q'); + newtFormAddHotKey(self, CTRL('c')); +} + +static newtComponent newt_form__new(void) +{ + newtComponent self = newtForm(NULL, NULL, 0); + if (self) + newt_form__set_exit_keys(self); + return self; +} + +static int popup_menu(int argc, char * const argv[]) +{ + struct newtExitStruct es; + int i, rc = -1, max_len = 5; + newtComponent listbox, form = newt_form__new(); + + if (form == NULL) + return -1; + + listbox = newtListbox(0, 0, argc, NEWT_FLAG_RETURNEXIT); + if (listbox == NULL) + goto out_destroy_form; + + newtFormAddComponent(form, listbox); + + for (i = 0; i < argc; ++i) { + int len = strlen(argv[i]); + if (len > max_len) + max_len = len; + if (newtListboxAddEntry(listbox, argv[i], (void *)(long)i)) + goto out_destroy_form; + } + + newtCenteredWindow(max_len, argc, NULL); + newtFormRun(form, &es); + rc = newtListboxGetCurrent(listbox) - NULL; + if (es.reason == NEWT_EXIT_HOTKEY) + rc = -1; + newtPopWindow(); +out_destroy_form: + newtFormDestroy(form); + return rc; +} + +static int ui__help_window(const char *text) +{ + struct newtExitStruct es; + newtComponent tb, 
form = newt_form__new(); + int rc = -1; + int max_len = 0, nr_lines = 0; + const char *t; + + if (form == NULL) + return -1; + + t = text; + while (1) { + const char *sep = strchr(t, '\n'); + int len; + + if (sep == NULL) + sep = strchr(t, '\0'); + len = sep - t; + if (max_len < len) + max_len = len; + ++nr_lines; + if (*sep == '\0') + break; + t = sep + 1; + } + + tb = newtTextbox(0, 0, max_len, nr_lines, 0); + if (tb == NULL) + goto out_destroy_form; + + newtTextboxSetText(tb, text); + newtFormAddComponent(form, tb); + newtCenteredWindow(max_len, nr_lines, NULL); + newtFormRun(form, &es); + newtPopWindow(); + rc = 0; +out_destroy_form: + newtFormDestroy(form); + return rc; +} + +static bool dialog_yesno(const char *msg) +{ + /* newtWinChoice should really be accepting const char pointers... */ + char yes[] = "Yes", no[] = "No"; + return newtWinChoice(NULL, yes, no, (char *)msg) == 1; +} + +#define HE_COLORSET_TOP 50 +#define HE_COLORSET_MEDIUM 51 +#define HE_COLORSET_NORMAL 52 +#define HE_COLORSET_SELECTED 53 +#define HE_COLORSET_CODE 54 + +static int ui_browser__percent_color(double percent, bool current) +{ + if (current) + return HE_COLORSET_SELECTED; + if (percent >= MIN_RED) + return HE_COLORSET_TOP; + if (percent >= MIN_GREEN) + return HE_COLORSET_MEDIUM; + return HE_COLORSET_NORMAL; +} + +struct ui_browser { + newtComponent form, sb; + u64 index, first_visible_entry_idx; + void *first_visible_entry, *entries; + u16 top, left, width, height; + void *priv; + u32 nr_entries; +}; + +static void ui_browser__refresh_dimensions(struct ui_browser *self) +{ + int cols, rows; + newtGetScreenSize(&cols, &rows); + + if (self->width > cols - 4) + self->width = cols - 4; + self->height = rows - 5; + if (self->height > self->nr_entries) + self->height = self->nr_entries; + self->top = (rows - self->height) / 2; + self->left = (cols - self->width) / 2; +} + +static void ui_browser__reset_index(struct ui_browser *self) +{ + self->index = self->first_visible_entry_idx = 0; + self->first_visible_entry = NULL; +} + +static int objdump_line__show(struct objdump_line *self, struct list_head *head, + int width, struct hist_entry *he, int len, + bool current_entry) +{ + if (self->offset != -1) { + struct symbol *sym = he->ms.sym; + unsigned int hits = 0; + double percent = 0.0; + int color; + struct sym_priv *priv = symbol__priv(sym); + struct sym_ext *sym_ext = priv->ext; + struct sym_hist *h = priv->hist; + s64 offset = self->offset; + struct objdump_line *next = objdump__get_next_ip_line(head, self); + + while (offset < (s64)len && + (next == NULL || offset < next->offset)) { + if (sym_ext) { + percent += sym_ext[offset].percent; + } else + hits += h->ip[offset]; + + ++offset; + } + + if (sym_ext == NULL && h->sum) + percent = 100.0 * hits / h->sum; + + color = ui_browser__percent_color(percent, current_entry); + SLsmg_set_color(color); + slsmg_printf(" %7.2f ", percent); + if (!current_entry) + SLsmg_set_color(HE_COLORSET_CODE); + } else { + int color = ui_browser__percent_color(0, current_entry); + SLsmg_set_color(color); + slsmg_write_nstring(" ", 9); + } + + SLsmg_write_char(':'); + slsmg_write_nstring(" ", 8); + if (!*self->line) + slsmg_write_nstring(" ", width - 18); + else + slsmg_write_nstring(self->line, width - 18); + + return 0; +} + +static int ui_browser__refresh_entries(struct ui_browser *self) +{ + struct objdump_line *pos; + struct list_head *head = self->entries; + struct hist_entry *he = self->priv; + int row = 0; + int len = he->ms.sym->end - he->ms.sym->start; + + if 
(self->first_visible_entry == NULL || self->first_visible_entry == self->entries) + self->first_visible_entry = head->next; + + pos = list_entry(self->first_visible_entry, struct objdump_line, node); + + list_for_each_entry_from(pos, head, node) { + bool current_entry = (self->first_visible_entry_idx + row) == self->index; + SLsmg_gotorc(self->top + row, self->left); + objdump_line__show(pos, head, self->width, + he, len, current_entry); + if (++row == self->height) + break; + } + + SLsmg_set_color(HE_COLORSET_NORMAL); + SLsmg_fill_region(self->top + row, self->left, + self->height - row, self->width, ' '); + + return 0; +} + +static int ui_browser__run(struct ui_browser *self, const char *title, + struct newtExitStruct *es) +{ + if (self->form) { + newtFormDestroy(self->form); + newtPopWindow(); + } + + ui_browser__refresh_dimensions(self); + newtCenteredWindow(self->width + 2, self->height, title); + self->form = newt_form__new(); + if (self->form == NULL) + return -1; + + self->sb = newtVerticalScrollbar(self->width + 1, 0, self->height, + HE_COLORSET_NORMAL, + HE_COLORSET_SELECTED); + if (self->sb == NULL) + return -1; + + newtFormAddHotKey(self->form, NEWT_KEY_UP); + newtFormAddHotKey(self->form, NEWT_KEY_DOWN); + newtFormAddHotKey(self->form, NEWT_KEY_PGUP); + newtFormAddHotKey(self->form, NEWT_KEY_PGDN); + newtFormAddHotKey(self->form, NEWT_KEY_HOME); + newtFormAddHotKey(self->form, NEWT_KEY_END); + + if (ui_browser__refresh_entries(self) < 0) + return -1; + newtFormAddComponent(self->form, self->sb); + + while (1) { + unsigned int offset; + + newtFormRun(self->form, es); + + if (es->reason != NEWT_EXIT_HOTKEY) + break; + switch (es->u.key) { + case NEWT_KEY_DOWN: + if (self->index == self->nr_entries - 1) + break; + ++self->index; + if (self->index == self->first_visible_entry_idx + self->height) { + struct list_head *pos = self->first_visible_entry; + ++self->first_visible_entry_idx; + self->first_visible_entry = pos->next; + } + break; + case NEWT_KEY_UP: + if (self->index == 0) + break; + --self->index; + if (self->index < self->first_visible_entry_idx) { + struct list_head *pos = self->first_visible_entry; + --self->first_visible_entry_idx; + self->first_visible_entry = pos->prev; + } + break; + case NEWT_KEY_PGDN: + if (self->first_visible_entry_idx + self->height > self->nr_entries - 1) + break; + + offset = self->height; + if (self->index + offset > self->nr_entries - 1) + offset = self->nr_entries - 1 - self->index; + self->index += offset; + self->first_visible_entry_idx += offset; + + while (offset--) { + struct list_head *pos = self->first_visible_entry; + self->first_visible_entry = pos->next; + } + + break; + case NEWT_KEY_PGUP: + if (self->first_visible_entry_idx == 0) + break; + + if (self->first_visible_entry_idx < self->height) + offset = self->first_visible_entry_idx; + else + offset = self->height; + + self->index -= offset; + self->first_visible_entry_idx -= offset; + + while (offset--) { + struct list_head *pos = self->first_visible_entry; + self->first_visible_entry = pos->prev; + } + break; + case NEWT_KEY_HOME: + ui_browser__reset_index(self); + break; + case NEWT_KEY_END: { + struct list_head *head = self->entries; + offset = self->height - 1; + + if (offset > self->nr_entries) + offset = self->nr_entries; + + self->index = self->first_visible_entry_idx = self->nr_entries - 1 - offset; + self->first_visible_entry = head->prev; + while (offset-- != 0) { + struct list_head *pos = self->first_visible_entry; + self->first_visible_entry = pos->prev; + } + } + 
break; + case NEWT_KEY_ESCAPE: + case NEWT_KEY_LEFT: + case CTRL('c'): + case 'Q': + case 'q': + return 0; + default: + continue; + } + if (ui_browser__refresh_entries(self) < 0) + return -1; + } + return 0; +} + +/* + * When debugging newt problems it was useful to be able to "unroll" + * the calls to newtCheckBoxTreeAdd{Array,Item}, so that we can generate + * a source file with the sequence of calls to these methods, to then + * tweak the arrays to get the intended results, so I'm keeping this code + * here, may be useful again in the future. + */ +#undef NEWT_DEBUG + +static void newt_checkbox_tree__add(newtComponent tree, const char *str, + void *priv, int *indexes) +{ +#ifdef NEWT_DEBUG + /* Print the newtCheckboxTreeAddArray to tinker with its index arrays */ + int i = 0, len = 40 - strlen(str); + + fprintf(stderr, + "\tnewtCheckboxTreeAddItem(tree, %*.*s\"%s\", (void *)%p, 0, ", + len, len, " ", str, priv); + while (indexes[i] != NEWT_ARG_LAST) { + if (indexes[i] != NEWT_ARG_APPEND) + fprintf(stderr, " %d,", indexes[i]); + else + fprintf(stderr, " %s,", "NEWT_ARG_APPEND"); + ++i; + } + fprintf(stderr, " %s", " NEWT_ARG_LAST);\n"); + fflush(stderr); +#endif + newtCheckboxTreeAddArray(tree, str, priv, 0, indexes); +} + +static char *callchain_list__sym_name(struct callchain_list *self, + char *bf, size_t bfsize) +{ + if (self->ms.sym) + return self->ms.sym->name; + + snprintf(bf, bfsize, "%#Lx", self->ip); + return bf; +} + +static void __callchain__append_graph_browser(struct callchain_node *self, + newtComponent tree, u64 total, + int *indexes, int depth) +{ + struct rb_node *node; + u64 new_total, remaining; + int idx = 0; + + if (callchain_param.mode == CHAIN_GRAPH_REL) + new_total = self->children_hit; + else + new_total = total; + + remaining = new_total; + node = rb_first(&self->rb_root); + while (node) { + struct callchain_node *child = rb_entry(node, struct callchain_node, rb_node); + struct rb_node *next = rb_next(node); + u64 cumul = cumul_hits(child); + struct callchain_list *chain; + int first = true, printed = 0; + int chain_idx = -1; + remaining -= cumul; + + indexes[depth] = NEWT_ARG_APPEND; + indexes[depth + 1] = NEWT_ARG_LAST; + + list_for_each_entry(chain, &child->val, list) { + char ipstr[BITS_PER_LONG / 4 + 1], + *alloc_str = NULL; + const char *str = callchain_list__sym_name(chain, ipstr, sizeof(ipstr)); + + if (first) { + double percent = cumul * 100.0 / new_total; + + first = false; + if (asprintf(&alloc_str, "%2.2f%% %s", percent, str) < 0) + str = "Not enough memory!"; + else + str = alloc_str; + } else { + indexes[depth] = idx; + indexes[depth + 1] = NEWT_ARG_APPEND; + indexes[depth + 2] = NEWT_ARG_LAST; + ++chain_idx; + } + newt_checkbox_tree__add(tree, str, &chain->ms, indexes); + free(alloc_str); + ++printed; + } + + indexes[depth] = idx; + if (chain_idx != -1) + indexes[depth + 1] = chain_idx; + if (printed != 0) + ++idx; + __callchain__append_graph_browser(child, tree, new_total, indexes, + depth + (chain_idx != -1 ? 
2 : 1)); + node = next; + } +} + +static void callchain__append_graph_browser(struct callchain_node *self, + newtComponent tree, u64 total, + int *indexes, int parent_idx) +{ + struct callchain_list *chain; + int i = 0; + + indexes[1] = NEWT_ARG_APPEND; + indexes[2] = NEWT_ARG_LAST; + + list_for_each_entry(chain, &self->val, list) { + char ipstr[BITS_PER_LONG / 4 + 1], *str; + + if (chain->ip >= PERF_CONTEXT_MAX) + continue; + + if (!i++ && sort__first_dimension == SORT_SYM) + continue; + + str = callchain_list__sym_name(chain, ipstr, sizeof(ipstr)); + newt_checkbox_tree__add(tree, str, &chain->ms, indexes); + } + + indexes[1] = parent_idx; + indexes[2] = NEWT_ARG_APPEND; + indexes[3] = NEWT_ARG_LAST; + __callchain__append_graph_browser(self, tree, total, indexes, 2); +} + +static void hist_entry__append_callchain_browser(struct hist_entry *self, + newtComponent tree, u64 total, int parent_idx) +{ + struct rb_node *rb_node; + int indexes[1024] = { [0] = parent_idx, }; + int idx = 0; + struct callchain_node *chain; + + rb_node = rb_first(&self->sorted_chain); + while (rb_node) { + chain = rb_entry(rb_node, struct callchain_node, rb_node); + switch (callchain_param.mode) { + case CHAIN_FLAT: + break; + case CHAIN_GRAPH_ABS: /* falldown */ + case CHAIN_GRAPH_REL: + callchain__append_graph_browser(chain, tree, total, indexes, idx++); + break; + case CHAIN_NONE: + default: + break; + } + rb_node = rb_next(rb_node); + } +} + +static size_t hist_entry__append_browser(struct hist_entry *self, + newtComponent tree, u64 total) +{ + char s[256]; + size_t ret; + + if (symbol_conf.exclude_other && !self->parent) + return 0; + + ret = hist_entry__snprintf(self, s, sizeof(s), NULL, + false, 0, false, total); + if (symbol_conf.use_callchain) { + int indexes[2]; + + indexes[0] = NEWT_ARG_APPEND; + indexes[1] = NEWT_ARG_LAST; + newt_checkbox_tree__add(tree, s, &self->ms, indexes); + } else + newtListboxAppendEntry(tree, s, &self->ms); + + return ret; +} + +static void hist_entry__annotate_browser(struct hist_entry *self) +{ + struct ui_browser browser; + struct newtExitStruct es; + struct objdump_line *pos, *n; + LIST_HEAD(head); + + if (self->ms.sym == NULL) + return; + + if (hist_entry__annotate(self, &head) < 0) + return; + + ui_helpline__push("Press <- or ESC to exit"); + + memset(&browser, 0, sizeof(browser)); + browser.entries = &head; + browser.priv = self; + list_for_each_entry(pos, &head, node) { + size_t line_len = strlen(pos->line); + if (browser.width < line_len) + browser.width = line_len; + ++browser.nr_entries; + } + + browser.width += 18; /* Percentage */ + ui_browser__run(&browser, self->ms.sym->name, &es); + newtFormDestroy(browser.form); + newtPopWindow(); + list_for_each_entry_safe(pos, n, &head, node) { + list_del(&pos->node); + objdump_line__free(pos); + } + ui_helpline__pop(); +} + +static const void *newt__symbol_tree_get_current(newtComponent self) +{ + if (symbol_conf.use_callchain) + return newtCheckboxTreeGetCurrent(self); + return newtListboxGetCurrent(self); +} + +static void hist_browser__selection(newtComponent self, void *data) +{ + const struct map_symbol **symbol_ptr = data; + *symbol_ptr = newt__symbol_tree_get_current(self); +} + +struct hist_browser { + newtComponent form, tree; + const struct map_symbol *selection; +}; + +static struct hist_browser *hist_browser__new(void) +{ + struct hist_browser *self = malloc(sizeof(*self)); + + if (self != NULL) + self->form = NULL; + + return self; +} + +static void hist_browser__delete(struct hist_browser *self) +{ + 
newtFormDestroy(self->form); + newtPopWindow(); + free(self); +} + +static int hist_browser__populate(struct hist_browser *self, struct hists *hists, + const char *title) +{ + int max_len = 0, idx, cols, rows; + struct ui_progress *progress; + struct rb_node *nd; + u64 curr_hist = 0; + char seq[] = ".", unit; + char str[256]; + unsigned long nr_events = hists->stats.nr_events[PERF_RECORD_SAMPLE]; + + if (self->form) { + newtFormDestroy(self->form); + newtPopWindow(); + } + + nr_events = convert_unit(nr_events, &unit); + snprintf(str, sizeof(str), "Events: %lu%c ", + nr_events, unit); + newtDrawRootText(0, 0, str); + + newtGetScreenSize(NULL, &rows); + + if (symbol_conf.use_callchain) + self->tree = newtCheckboxTreeMulti(0, 0, rows - 5, seq, + NEWT_FLAG_SCROLL); + else + self->tree = newtListbox(0, 0, rows - 5, + (NEWT_FLAG_SCROLL | + NEWT_FLAG_RETURNEXIT)); + + newtComponentAddCallback(self->tree, hist_browser__selection, + &self->selection); + + progress = ui_progress__new("Adding entries to the browser...", + hists->nr_entries); + if (progress == NULL) + return -1; + + idx = 0; + for (nd = rb_first(&hists->entries); nd; nd = rb_next(nd)) { + struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node); + int len; + + if (h->filtered) + continue; + + len = hist_entry__append_browser(h, self->tree, hists->stats.total_period); + if (len > max_len) + max_len = len; + if (symbol_conf.use_callchain) + hist_entry__append_callchain_browser(h, self->tree, + hists->stats.total_period, idx++); + ++curr_hist; + if (curr_hist % 5) + ui_progress__update(progress, curr_hist); + } + + ui_progress__delete(progress); + + newtGetScreenSize(&cols, &rows); + + if (max_len > cols) + max_len = cols - 3; + + if (!symbol_conf.use_callchain) + newtListboxSetWidth(self->tree, max_len); + + newtCenteredWindow(max_len + (symbol_conf.use_callchain ? 5 : 0), + rows - 5, title); + self->form = newt_form__new(); + if (self->form == NULL) + return -1; + + newtFormAddHotKey(self->form, 'A'); + newtFormAddHotKey(self->form, 'a'); + newtFormAddHotKey(self->form, 'D'); + newtFormAddHotKey(self->form, 'd'); + newtFormAddHotKey(self->form, 'T'); + newtFormAddHotKey(self->form, 't'); + newtFormAddHotKey(self->form, '?'); + newtFormAddHotKey(self->form, 'H'); + newtFormAddHotKey(self->form, 'h'); + newtFormAddHotKey(self->form, NEWT_KEY_F1); + newtFormAddHotKey(self->form, NEWT_KEY_RIGHT); + newtFormAddComponents(self->form, self->tree, NULL); + self->selection = newt__symbol_tree_get_current(self->tree); + + return 0; +} + +static struct hist_entry *hist_browser__selected_entry(struct hist_browser *self) +{ + int *indexes; + + if (!symbol_conf.use_callchain) + goto out; + + indexes = newtCheckboxTreeFindItem(self->tree, (void *)self->selection); + if (indexes) { + bool is_hist_entry = indexes[1] == NEWT_ARG_LAST; + free(indexes); + if (is_hist_entry) + goto out; + } + return NULL; +out: + return container_of(self->selection, struct hist_entry, ms); +} + +static struct thread *hist_browser__selected_thread(struct hist_browser *self) +{ + struct hist_entry *he = hist_browser__selected_entry(self); + return he ? he->thread : NULL; +} + +static int hist_browser__title(char *bf, size_t size, const char *input_name, + const struct dso *dso, const struct thread *thread) +{ + int printed = 0; + + if (thread) + printed += snprintf(bf + printed, size - printed, + "Thread: %s(%d)", + (thread->comm_set ? thread->comm : ""), + thread->pid); + if (dso) + printed += snprintf(bf + printed, size - printed, + "%sDSO: %s", thread ? 
" " : "", + dso->short_name); + return printed ?: snprintf(bf, size, "Report: %s", input_name); +} + +int hists__browse(struct hists *self, const char *helpline, const char *input_name) +{ + struct hist_browser *browser = hist_browser__new(); + struct pstack *fstack = pstack__new(2); + const struct thread *thread_filter = NULL; + const struct dso *dso_filter = NULL; + struct newtExitStruct es; + char msg[160]; + int err = -1; + + if (browser == NULL) + return -1; + + fstack = pstack__new(2); + if (fstack == NULL) + goto out; + + ui_helpline__push(helpline); + + hist_browser__title(msg, sizeof(msg), input_name, + dso_filter, thread_filter); + if (hist_browser__populate(browser, self, msg) < 0) + goto out_free_stack; + + while (1) { + const struct thread *thread; + const struct dso *dso; + char *options[16]; + int nr_options = 0, choice = 0, i, + annotate = -2, zoom_dso = -2, zoom_thread = -2; + + newtFormRun(browser->form, &es); + + thread = hist_browser__selected_thread(browser); + dso = browser->selection->map ? browser->selection->map->dso : NULL; + + if (es.reason == NEWT_EXIT_HOTKEY) { + if (es.u.key == NEWT_KEY_F1) + goto do_help; + + switch (toupper(es.u.key)) { + case 'A': + goto do_annotate; + case 'D': + goto zoom_dso; + case 'T': + goto zoom_thread; + case 'H': + case '?': +do_help: + ui__help_window("-> Zoom into DSO/Threads & Annotate current symbol\n" + "<- Zoom out\n" + "a Annotate current symbol\n" + "h/?/F1 Show this window\n" + "d Zoom into current DSO\n" + "t Zoom into current Thread\n" + "q/CTRL+C Exit browser"); + continue; + default:; + } + if (toupper(es.u.key) == 'Q' || + es.u.key == CTRL('c')) + break; + if (es.u.key == NEWT_KEY_ESCAPE) { + if (dialog_yesno("Do you really want to exit?")) + break; + else + continue; + } + + if (es.u.key == NEWT_KEY_LEFT) { + const void *top; + + if (pstack__empty(fstack)) + continue; + top = pstack__pop(fstack); + if (top == &dso_filter) + goto zoom_out_dso; + if (top == &thread_filter) + goto zoom_out_thread; + continue; + } + } + + if (browser->selection->sym != NULL && + asprintf(&options[nr_options], "Annotate %s", + browser->selection->sym->name) > 0) + annotate = nr_options++; + + if (thread != NULL && + asprintf(&options[nr_options], "Zoom %s %s(%d) thread", + (thread_filter ? "out of" : "into"), + (thread->comm_set ? thread->comm : ""), + thread->pid) > 0) + zoom_thread = nr_options++; + + if (dso != NULL && + asprintf(&options[nr_options], "Zoom %s %s DSO", + (dso_filter ? "out of" : "into"), + (dso->kernel ? "the Kernel" : dso->short_name)) > 0) + zoom_dso = nr_options++; + + options[nr_options++] = (char *)"Exit"; + + choice = popup_menu(nr_options, options); + + for (i = 0; i < nr_options - 1; ++i) + free(options[i]); + + if (choice == nr_options - 1) + break; + + if (choice == -1) + continue; + + if (choice == annotate) { + struct hist_entry *he; +do_annotate: + if (browser->selection->map->dso->origin == DSO__ORIG_KERNEL) { + ui_helpline__puts("No vmlinux file found, can't " + "annotate with just a " + "kallsyms file"); + continue; + } + + he = hist_browser__selected_entry(browser); + if (he == NULL) + continue; + + hist_entry__annotate_browser(he); + } else if (choice == zoom_dso) { +zoom_dso: + if (dso_filter) { + pstack__remove(fstack, &dso_filter); +zoom_out_dso: + ui_helpline__pop(); + dso_filter = NULL; + } else { + if (dso == NULL) + continue; + ui_helpline__fpush("To zoom out press <- or -> + \"Zoom out of %s DSO\"", + dso->kernel ? 
"the Kernel" : dso->short_name); + dso_filter = dso; + pstack__push(fstack, &dso_filter); + } + hists__filter_by_dso(self, dso_filter); + hist_browser__title(msg, sizeof(msg), input_name, + dso_filter, thread_filter); + if (hist_browser__populate(browser, self, msg) < 0) + goto out; + } else if (choice == zoom_thread) { +zoom_thread: + if (thread_filter) { + pstack__remove(fstack, &thread_filter); +zoom_out_thread: + ui_helpline__pop(); + thread_filter = NULL; + } else { + ui_helpline__fpush("To zoom out press <- or -> + \"Zoom out of %s(%d) thread\"", + thread->comm_set ? thread->comm : "", + thread->pid); + thread_filter = thread; + pstack__push(fstack, &thread_filter); + } + hists__filter_by_thread(self, thread_filter); + hist_browser__title(msg, sizeof(msg), input_name, + dso_filter, thread_filter); + if (hist_browser__populate(browser, self, msg) < 0) + goto out; + } + } + err = 0; +out_free_stack: + pstack__delete(fstack); +out: + hist_browser__delete(browser); + return err; +} + +static struct newtPercentTreeColors { + const char *topColorFg, *topColorBg; + const char *mediumColorFg, *mediumColorBg; + const char *normalColorFg, *normalColorBg; + const char *selColorFg, *selColorBg; + const char *codeColorFg, *codeColorBg; +} defaultPercentTreeColors = { + "red", "lightgray", + "green", "lightgray", + "black", "lightgray", + "lightgray", "magenta", + "blue", "lightgray", +}; + +void setup_browser(void) +{ + struct newtPercentTreeColors *c = &defaultPercentTreeColors; + if (!isatty(1)) + return; + + use_browser = true; + newtInit(); + newtCls(); + ui_helpline__puts(" "); + sltt_set_color(HE_COLORSET_TOP, NULL, c->topColorFg, c->topColorBg); + sltt_set_color(HE_COLORSET_MEDIUM, NULL, c->mediumColorFg, c->mediumColorBg); + sltt_set_color(HE_COLORSET_NORMAL, NULL, c->normalColorFg, c->normalColorBg); + sltt_set_color(HE_COLORSET_SELECTED, NULL, c->selColorFg, c->selColorBg); + sltt_set_color(HE_COLORSET_CODE, NULL, c->codeColorFg, c->codeColorBg); +} + +void exit_browser(bool wait_for_ok) +{ + if (use_browser) { + if (wait_for_ok) { + char title[] = "Fatal Error", ok[] = "Ok"; + newtWinMessage(title, ok, browser__last_msg); + } + newtFinished(); + } +} diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c index 05d0c5c2030c..9bf0f402ca73 100644 --- a/tools/perf/util/parse-events.c +++ b/tools/perf/util/parse-events.c @@ -5,6 +5,7 @@ #include "parse-events.h" #include "exec_cmd.h" #include "string.h" +#include "symbol.h" #include "cache.h" #include "header.h" #include "debugfs.h" @@ -409,7 +410,6 @@ static enum event_result parse_single_tracepoint_event(char *sys_name, const char *evt_name, unsigned int evt_length, - char *flags, struct perf_event_attr *attr, const char **strp) { @@ -418,14 +418,6 @@ parse_single_tracepoint_event(char *sys_name, u64 id; int fd; - if (flags) { - if (!strncmp(flags, "record", strlen(flags))) { - attr->sample_type |= PERF_SAMPLE_RAW; - attr->sample_type |= PERF_SAMPLE_TIME; - attr->sample_type |= PERF_SAMPLE_CPU; - } - } - snprintf(evt_path, MAXPATHLEN, "%s/%s/%s/id", debugfs_path, sys_name, evt_name); @@ -444,6 +436,13 @@ parse_single_tracepoint_event(char *sys_name, attr->type = PERF_TYPE_TRACEPOINT; *strp = evt_name + evt_length; + attr->sample_type |= PERF_SAMPLE_RAW; + attr->sample_type |= PERF_SAMPLE_TIME; + attr->sample_type |= PERF_SAMPLE_CPU; + + attr->sample_period = 1; + + return EVT_HANDLED; } @@ -532,8 +531,7 @@ static enum event_result parse_tracepoint_event(const char **strp, flags); } else return 
parse_single_tracepoint_event(sys_name, evt_name, - evt_length, flags, - attr, strp); + evt_length, attr, strp); } static enum event_result @@ -690,19 +688,29 @@ static enum event_result parse_event_modifier(const char **strp, struct perf_event_attr *attr) { const char *str = *strp; - int eu = 1, ek = 1, eh = 1; + int exclude = 0; + int eu = 0, ek = 0, eh = 0, precise = 0; if (*str++ != ':') return 0; while (*str) { - if (*str == 'u') + if (*str == 'u') { + if (!exclude) + exclude = eu = ek = eh = 1; eu = 0; - else if (*str == 'k') + } else if (*str == 'k') { + if (!exclude) + exclude = eu = ek = eh = 1; ek = 0; - else if (*str == 'h') + } else if (*str == 'h') { + if (!exclude) + exclude = eu = ek = eh = 1; eh = 0; - else + } else if (*str == 'p') { + precise++; + } else break; + ++str; } if (str >= *strp + 2) { @@ -710,6 +718,7 @@ parse_event_modifier(const char **strp, struct perf_event_attr *attr) attr->exclude_user = eu; attr->exclude_kernel = ek; attr->exclude_hv = eh; + attr->precise_ip = precise; return 1; } return 0; @@ -934,7 +943,8 @@ void print_events(void) printf("\n"); printf(" %-42s [%s]\n", - "rNNN", event_type_descriptors[PERF_TYPE_RAW]); + "rNNN (see 'perf list --help' on how to encode it)", + event_type_descriptors[PERF_TYPE_RAW]); printf("\n"); printf(" %-42s [%s]\n", diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h index b8c1f64bc935..fc4ab3fe877a 100644 --- a/tools/perf/util/parse-events.h +++ b/tools/perf/util/parse-events.h @@ -13,6 +13,7 @@ struct tracepoint_path { }; extern struct tracepoint_path *tracepoint_id_to_path(u64 config); +extern bool have_tracepoints(struct perf_event_attr *pattrs, int nb_events); extern int nr_counters; diff --git a/tools/perf/util/parse-options.c b/tools/perf/util/parse-options.c index efebd5b476b3..99d02aa57dbf 100644 --- a/tools/perf/util/parse-options.c +++ b/tools/perf/util/parse-options.c @@ -49,8 +49,9 @@ static int get_value(struct parse_opt_ctx_t *p, break; /* FALLTHROUGH */ case OPTION_BOOLEAN: + case OPTION_INCR: case OPTION_BIT: - case OPTION_SET_INT: + case OPTION_SET_UINT: case OPTION_SET_PTR: return opterror(opt, "takes no value", flags); case OPTION_END: @@ -58,7 +59,9 @@ static int get_value(struct parse_opt_ctx_t *p, case OPTION_GROUP: case OPTION_STRING: case OPTION_INTEGER: + case OPTION_UINTEGER: case OPTION_LONG: + case OPTION_U64: default: break; } @@ -73,11 +76,15 @@ static int get_value(struct parse_opt_ctx_t *p, return 0; case OPTION_BOOLEAN: + *(bool *)opt->value = unset ? false : true; + return 0; + + case OPTION_INCR: *(int *)opt->value = unset ? 0 : *(int *)opt->value + 1; return 0; - case OPTION_SET_INT: - *(int *)opt->value = unset ? 0 : opt->defval; + case OPTION_SET_UINT: + *(unsigned int *)opt->value = unset ? 
0 : opt->defval; return 0; case OPTION_SET_PTR: @@ -120,6 +127,22 @@ static int get_value(struct parse_opt_ctx_t *p, return opterror(opt, "expects a numerical value", flags); return 0; + case OPTION_UINTEGER: + if (unset) { + *(unsigned int *)opt->value = 0; + return 0; + } + if (opt->flags & PARSE_OPT_OPTARG && !p->opt) { + *(unsigned int *)opt->value = opt->defval; + return 0; + } + if (get_arg(p, opt, flags, &arg)) + return -1; + *(unsigned int *)opt->value = strtol(arg, (char **)&s, 10); + if (*s) + return opterror(opt, "expects a numerical value", flags); + return 0; + case OPTION_LONG: if (unset) { *(long *)opt->value = 0; @@ -136,6 +159,22 @@ static int get_value(struct parse_opt_ctx_t *p, return opterror(opt, "expects a numerical value", flags); return 0; + case OPTION_U64: + if (unset) { + *(u64 *)opt->value = 0; + return 0; + } + if (opt->flags & PARSE_OPT_OPTARG && !p->opt) { + *(u64 *)opt->value = opt->defval; + return 0; + } + if (get_arg(p, opt, flags, &arg)) + return -1; + *(u64 *)opt->value = strtoull(arg, (char **)&s, 10); + if (*s) + return opterror(opt, "expects a numerical value", flags); + return 0; + case OPTION_END: case OPTION_ARGUMENT: case OPTION_GROUP: @@ -441,7 +480,10 @@ int usage_with_options_internal(const char * const *usagestr, switch (opts->type) { case OPTION_ARGUMENT: break; + case OPTION_LONG: + case OPTION_U64: case OPTION_INTEGER: + case OPTION_UINTEGER: if (opts->flags & PARSE_OPT_OPTARG) if (opts->long_name) pos += fprintf(stderr, "[=<n>]"); @@ -473,14 +515,14 @@ int usage_with_options_internal(const char * const *usagestr, pos += fprintf(stderr, " ..."); } break; - default: /* OPTION_{BIT,BOOLEAN,SET_INT,SET_PTR} */ + default: /* OPTION_{BIT,BOOLEAN,SET_UINT,SET_PTR} */ case OPTION_END: case OPTION_GROUP: case OPTION_BIT: case OPTION_BOOLEAN: - case OPTION_SET_INT: + case OPTION_INCR: + case OPTION_SET_UINT: case OPTION_SET_PTR: - case OPTION_LONG: break; } @@ -500,6 +542,7 @@ int usage_with_options_internal(const char * const *usagestr, void usage_with_options(const char * const *usagestr, const struct option *opts) { + exit_browser(false); usage_with_options_internal(usagestr, opts, 0); exit(129); } diff --git a/tools/perf/util/parse-options.h b/tools/perf/util/parse-options.h index 948805af43c2..c7d72dce54b2 100644 --- a/tools/perf/util/parse-options.h +++ b/tools/perf/util/parse-options.h @@ -1,6 +1,9 @@ #ifndef __PERF_PARSE_OPTIONS_H #define __PERF_PARSE_OPTIONS_H +#include <linux/kernel.h> +#include <stdbool.h> + enum parse_opt_type { /* special types */ OPTION_END, @@ -8,14 +11,17 @@ enum parse_opt_type { OPTION_GROUP, /* options with no arguments */ OPTION_BIT, - OPTION_BOOLEAN, /* _INCR would have been a better name */ - OPTION_SET_INT, + OPTION_BOOLEAN, + OPTION_INCR, + OPTION_SET_UINT, OPTION_SET_PTR, /* options with arguments (usually) */ OPTION_STRING, OPTION_INTEGER, OPTION_LONG, OPTION_CALLBACK, + OPTION_U64, + OPTION_UINTEGER, }; enum parse_opt_flags { @@ -73,7 +79,7 @@ typedef int parse_opt_cb(const struct option *, const char *arg, int unset); * * `defval`:: * default value to fill (*->value) with for PARSE_OPT_OPTARG. - * OPTION_{BIT,SET_INT,SET_PTR} store the {mask,integer,pointer} to put in + * OPTION_{BIT,SET_UINT,SET_PTR} store the {mask,integer,pointer} to put in * the value when met. * CALLBACKS can use it like they want. 
*/ @@ -90,16 +96,21 @@ struct option { intptr_t defval; }; +#define check_vtype(v, type) ( BUILD_BUG_ON_ZERO(!__builtin_types_compatible_p(typeof(v), type)) + v ) + #define OPT_END() { .type = OPTION_END } #define OPT_ARGUMENT(l, h) { .type = OPTION_ARGUMENT, .long_name = (l), .help = (h) } #define OPT_GROUP(h) { .type = OPTION_GROUP, .help = (h) } -#define OPT_BIT(s, l, v, h, b) { .type = OPTION_BIT, .short_name = (s), .long_name = (l), .value = (v), .help = (h), .defval = (b) } -#define OPT_BOOLEAN(s, l, v, h) { .type = OPTION_BOOLEAN, .short_name = (s), .long_name = (l), .value = (v), .help = (h) } -#define OPT_SET_INT(s, l, v, h, i) { .type = OPTION_SET_INT, .short_name = (s), .long_name = (l), .value = (v), .help = (h), .defval = (i) } +#define OPT_BIT(s, l, v, h, b) { .type = OPTION_BIT, .short_name = (s), .long_name = (l), .value = check_vtype(v, int *), .help = (h), .defval = (b) } +#define OPT_BOOLEAN(s, l, v, h) { .type = OPTION_BOOLEAN, .short_name = (s), .long_name = (l), .value = check_vtype(v, bool *), .help = (h) } +#define OPT_INCR(s, l, v, h) { .type = OPTION_INCR, .short_name = (s), .long_name = (l), .value = check_vtype(v, int *), .help = (h) } +#define OPT_SET_UINT(s, l, v, h, i) { .type = OPTION_SET_UINT, .short_name = (s), .long_name = (l), .value = check_vtype(v, unsigned int *), .help = (h), .defval = (i) } #define OPT_SET_PTR(s, l, v, h, p) { .type = OPTION_SET_PTR, .short_name = (s), .long_name = (l), .value = (v), .help = (h), .defval = (p) } -#define OPT_INTEGER(s, l, v, h) { .type = OPTION_INTEGER, .short_name = (s), .long_name = (l), .value = (v), .help = (h) } -#define OPT_LONG(s, l, v, h) { .type = OPTION_LONG, .short_name = (s), .long_name = (l), .value = (v), .help = (h) } -#define OPT_STRING(s, l, v, a, h) { .type = OPTION_STRING, .short_name = (s), .long_name = (l), .value = (v), (a), .help = (h) } +#define OPT_INTEGER(s, l, v, h) { .type = OPTION_INTEGER, .short_name = (s), .long_name = (l), .value = check_vtype(v, int *), .help = (h) } +#define OPT_UINTEGER(s, l, v, h) { .type = OPTION_UINTEGER, .short_name = (s), .long_name = (l), .value = check_vtype(v, unsigned int *), .help = (h) } +#define OPT_LONG(s, l, v, h) { .type = OPTION_LONG, .short_name = (s), .long_name = (l), .value = check_vtype(v, long *), .help = (h) } +#define OPT_U64(s, l, v, h) { .type = OPTION_U64, .short_name = (s), .long_name = (l), .value = check_vtype(v, u64 *), .help = (h) } +#define OPT_STRING(s, l, v, a, h) { .type = OPTION_STRING, .short_name = (s), .long_name = (l), .value = check_vtype(v, const char **), (a), .help = (h) } #define OPT_DATE(s, l, v, h) \ { .type = OPTION_CALLBACK, .short_name = (s), .long_name = (l), .value = (v), .argh = "time", .help = (h), .callback = parse_opt_approxidate_cb } #define OPT_CALLBACK(s, l, v, a, h, f) \ diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c index 7c004b6ef24f..914c67095d96 100644 --- a/tools/perf/util/probe-event.c +++ b/tools/perf/util/probe-event.c @@ -33,20 +33,27 @@ #include <limits.h> #undef _GNU_SOURCE +#include "util.h" #include "event.h" #include "string.h" #include "strlist.h" #include "debug.h" #include "cache.h" #include "color.h" -#include "parse-events.h" /* For debugfs_path */ +#include "symbol.h" +#include "thread.h" +#include "debugfs.h" +#include "trace-event.h" /* For __unused */ #include "probe-event.h" +#include "probe-finder.h" #define MAX_CMDLEN 256 #define MAX_PROBE_ARGS 128 #define PERFPROBE_GROUP "probe" -#define semantic_error(msg ...) 
die("Semantic error :" msg) +bool probe_event_dry_run; /* Dry run flag */ + +#define semantic_error(msg ...) pr_err("Semantic error :" msg) /* If there is no space to write, returns -E2BIG. */ static int e_snprintf(char *str, size_t size, const char *format, ...) @@ -64,7 +71,275 @@ static int e_snprintf(char *str, size_t size, const char *format, ...) return ret; } -void parse_line_range_desc(const char *arg, struct line_range *lr) +static char *synthesize_perf_probe_point(struct perf_probe_point *pp); +static struct machine machine; + +/* Initialize symbol maps and path of vmlinux */ +static int init_vmlinux(void) +{ + struct dso *kernel; + int ret; + + symbol_conf.sort_by_name = true; + if (symbol_conf.vmlinux_name == NULL) + symbol_conf.try_vmlinux_path = true; + else + pr_debug("Use vmlinux: %s\n", symbol_conf.vmlinux_name); + ret = symbol__init(); + if (ret < 0) { + pr_debug("Failed to init symbol map.\n"); + goto out; + } + + ret = machine__init(&machine, "/", 0); + if (ret < 0) + goto out; + + kernel = dso__new_kernel(symbol_conf.vmlinux_name); + if (kernel == NULL) + die("Failed to create kernel dso."); + + ret = __machine__create_kernel_maps(&machine, kernel); + if (ret < 0) + pr_debug("Failed to create kernel maps.\n"); + +out: + if (ret < 0) + pr_warning("Failed to init vmlinux path.\n"); + return ret; +} + +#ifdef DWARF_SUPPORT +static int open_vmlinux(void) +{ + if (map__load(machine.vmlinux_maps[MAP__FUNCTION], NULL) < 0) { + pr_debug("Failed to load kernel map.\n"); + return -EINVAL; + } + pr_debug("Try to open %s\n", machine.vmlinux_maps[MAP__FUNCTION]->dso->long_name); + return open(machine.vmlinux_maps[MAP__FUNCTION]->dso->long_name, O_RDONLY); +} + +/* Convert trace point to probe point with debuginfo */ +static int convert_to_perf_probe_point(struct kprobe_trace_point *tp, + struct perf_probe_point *pp) +{ + struct symbol *sym; + int fd, ret = -ENOENT; + + sym = map__find_symbol_by_name(machine.vmlinux_maps[MAP__FUNCTION], + tp->symbol, NULL); + if (sym) { + fd = open_vmlinux(); + if (fd >= 0) { + ret = find_perf_probe_point(fd, + sym->start + tp->offset, pp); + close(fd); + } + } + if (ret <= 0) { + pr_debug("Failed to find corresponding probes from " + "debuginfo. Use kprobe event information.\n"); + pp->function = strdup(tp->symbol); + if (pp->function == NULL) + return -ENOMEM; + pp->offset = tp->offset; + } + pp->retprobe = tp->retprobe; + + return 0; +} + +/* Try to find perf_probe_event with debuginfo */ +static int try_to_find_kprobe_trace_events(struct perf_probe_event *pev, + struct kprobe_trace_event **tevs, + int max_tevs) +{ + bool need_dwarf = perf_probe_event_need_dwarf(pev); + int fd, ntevs; + + fd = open_vmlinux(); + if (fd < 0) { + if (need_dwarf) { + pr_warning("Failed to open debuginfo file.\n"); + return fd; + } + pr_debug("Could not open vmlinux. Try to use symbols.\n"); + return 0; + } + + /* Searching trace events corresponding to probe event */ + ntevs = find_kprobe_trace_events(fd, pev, tevs, max_tevs); + close(fd); + + if (ntevs > 0) { /* Succeeded to find trace events */ + pr_debug("find %d kprobe_trace_events.\n", ntevs); + return ntevs; + } + + if (ntevs == 0) { /* No error but failed to find probe point. 
*/ + pr_warning("Probe point '%s' not found.\n", + synthesize_perf_probe_point(&pev->point)); + return -ENOENT; + } + /* Error path : ntevs < 0 */ + pr_debug("An error occurred in debuginfo analysis (%d).\n", ntevs); + if (ntevs == -EBADF) { + pr_warning("Warning: No dwarf info found in the vmlinux - " + "please rebuild kernel with CONFIG_DEBUG_INFO=y.\n"); + if (!need_dwarf) { + pr_debug("Trying to use symbols.\nn"); + return 0; + } + } + return ntevs; +} + +#define LINEBUF_SIZE 256 +#define NR_ADDITIONAL_LINES 2 + +static int show_one_line(FILE *fp, int l, bool skip, bool show_num) +{ + char buf[LINEBUF_SIZE]; + const char *color = PERF_COLOR_BLUE; + + if (fgets(buf, LINEBUF_SIZE, fp) == NULL) + goto error; + if (!skip) { + if (show_num) + fprintf(stdout, "%7d %s", l, buf); + else + color_fprintf(stdout, color, " %s", buf); + } + + while (strlen(buf) == LINEBUF_SIZE - 1 && + buf[LINEBUF_SIZE - 2] != '\n') { + if (fgets(buf, LINEBUF_SIZE, fp) == NULL) + goto error; + if (!skip) { + if (show_num) + fprintf(stdout, "%s", buf); + else + color_fprintf(stdout, color, "%s", buf); + } + } + + return 0; +error: + if (feof(fp)) + pr_warning("Source file is shorter than expected.\n"); + else + pr_warning("File read error: %s\n", strerror(errno)); + + return -1; +} + +/* + * Show line-range always requires debuginfo to find source file and + * line number. + */ +int show_line_range(struct line_range *lr) +{ + int l = 1; + struct line_node *ln; + FILE *fp; + int fd, ret; + + /* Search a line range */ + ret = init_vmlinux(); + if (ret < 0) + return ret; + + fd = open_vmlinux(); + if (fd < 0) { + pr_warning("Failed to open debuginfo file.\n"); + return fd; + } + + ret = find_line_range(fd, lr); + close(fd); + if (ret == 0) { + pr_warning("Specified source line is not found.\n"); + return -ENOENT; + } else if (ret < 0) { + pr_warning("Debuginfo analysis failed. 
(%d)\n", ret); + return ret; + } + + setup_pager(); + + if (lr->function) + fprintf(stdout, "<%s:%d>\n", lr->function, + lr->start - lr->offset); + else + fprintf(stdout, "<%s:%d>\n", lr->file, lr->start); + + fp = fopen(lr->path, "r"); + if (fp == NULL) { + pr_warning("Failed to open %s: %s\n", lr->path, + strerror(errno)); + return -errno; + } + /* Skip to starting line number */ + while (l < lr->start && ret >= 0) + ret = show_one_line(fp, l++, true, false); + if (ret < 0) + goto end; + + list_for_each_entry(ln, &lr->line_list, list) { + while (ln->line > l && ret >= 0) + ret = show_one_line(fp, (l++) - lr->offset, + false, false); + if (ret >= 0) + ret = show_one_line(fp, (l++) - lr->offset, + false, true); + if (ret < 0) + goto end; + } + + if (lr->end == INT_MAX) + lr->end = l + NR_ADDITIONAL_LINES; + while (l <= lr->end && !feof(fp) && ret >= 0) + ret = show_one_line(fp, (l++) - lr->offset, false, false); +end: + fclose(fp); + return ret; +} + +#else /* !DWARF_SUPPORT */ + +static int convert_to_perf_probe_point(struct kprobe_trace_point *tp, + struct perf_probe_point *pp) +{ + pp->function = strdup(tp->symbol); + if (pp->function == NULL) + return -ENOMEM; + pp->offset = tp->offset; + pp->retprobe = tp->retprobe; + + return 0; +} + +static int try_to_find_kprobe_trace_events(struct perf_probe_event *pev, + struct kprobe_trace_event **tevs __unused, + int max_tevs __unused) +{ + if (perf_probe_event_need_dwarf(pev)) { + pr_warning("Debuginfo-analysis is not supported.\n"); + return -ENOSYS; + } + return 0; +} + +int show_line_range(struct line_range *lr __unused) +{ + pr_warning("Debuginfo-analysis is not supported.\n"); + return -ENOSYS; +} + +#endif + +int parse_line_range_desc(const char *arg, struct line_range *lr) { const char *ptr; char *tmp; @@ -75,29 +350,45 @@ void parse_line_range_desc(const char *arg, struct line_range *lr) */ ptr = strchr(arg, ':'); if (ptr) { - lr->start = (unsigned int)strtoul(ptr + 1, &tmp, 0); - if (*tmp == '+') - lr->end = lr->start + (unsigned int)strtoul(tmp + 1, - &tmp, 0); - else if (*tmp == '-') - lr->end = (unsigned int)strtoul(tmp + 1, &tmp, 0); + lr->start = (int)strtoul(ptr + 1, &tmp, 0); + if (*tmp == '+') { + lr->end = lr->start + (int)strtoul(tmp + 1, &tmp, 0); + lr->end--; /* + * Adjust the number of lines here. + * If the number of lines == 1, the + * the end of line should be equal to + * the start of line. + */ + } else if (*tmp == '-') + lr->end = (int)strtoul(tmp + 1, &tmp, 0); else - lr->end = 0; - pr_debug("Line range is %u to %u\n", lr->start, lr->end); - if (lr->end && lr->start > lr->end) + lr->end = INT_MAX; + pr_debug("Line range is %d to %d\n", lr->start, lr->end); + if (lr->start > lr->end) { semantic_error("Start line must be smaller" - " than end line."); - if (*tmp != '\0') - semantic_error("Tailing with invalid character '%d'.", + " than end line.\n"); + return -EINVAL; + } + if (*tmp != '\0') { + semantic_error("Tailing with invalid character '%d'.\n", *tmp); + return -EINVAL; + } tmp = strndup(arg, (ptr - arg)); - } else + } else { tmp = strdup(arg); + lr->end = INT_MAX; + } + + if (tmp == NULL) + return -ENOMEM; if (strchr(tmp, '.')) lr->file = tmp; else lr->function = tmp; + + return 0; } /* Check the name is good for event/group */ @@ -113,8 +404,9 @@ static bool check_event_name(const char *name) } /* Parse probepoint definition. 
*/ -static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp) +static int parse_perf_probe_point(char *arg, struct perf_probe_event *pev) { + struct perf_probe_point *pp = &pev->point; char *ptr, *tmp; char c, nc = 0; /* @@ -129,13 +421,19 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp) if (ptr && *ptr == '=') { /* Event name */ *ptr = '\0'; tmp = ptr + 1; - ptr = strchr(arg, ':'); - if (ptr) /* Group name is not supported yet. */ - semantic_error("Group name is not supported yet."); - if (!check_event_name(arg)) + if (strchr(arg, ':')) { + semantic_error("Group name is not supported yet.\n"); + return -ENOTSUP; + } + if (!check_event_name(arg)) { semantic_error("%s is bad for event name -it must " - "follow C symbol-naming rule.", arg); - pp->event = strdup(arg); + "follow C symbol-naming rule.\n", arg); + return -EINVAL; + } + pev->event = strdup(arg); + if (pev->event == NULL) + return -ENOMEM; + pev->group = NULL; arg = tmp; } @@ -145,12 +443,15 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp) *ptr++ = '\0'; } + tmp = strdup(arg); + if (tmp == NULL) + return -ENOMEM; + /* Check arg is function or file and copy it */ - if (strchr(arg, '.')) /* File */ - pp->file = strdup(arg); + if (strchr(tmp, '.')) /* File */ + pp->file = tmp; else /* Function */ - pp->function = strdup(arg); - DIE_IF(pp->file == NULL && pp->function == NULL); + pp->function = tmp; /* Parse other options */ while (ptr) { @@ -158,6 +459,8 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp) c = nc; if (c == ';') { /* Lazy pattern must be the last part */ pp->lazy_line = strdup(arg); + if (pp->lazy_line == NULL) + return -ENOMEM; break; } ptr = strpbrk(arg, ";:+@%"); @@ -168,266 +471,658 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp) switch (c) { case ':': /* Line number */ pp->line = strtoul(arg, &tmp, 0); - if (*tmp != '\0') + if (*tmp != '\0') { semantic_error("There is non-digit char" - " in line number."); + " in line number.\n"); + return -EINVAL; + } break; case '+': /* Byte offset from a symbol */ pp->offset = strtoul(arg, &tmp, 0); - if (*tmp != '\0') + if (*tmp != '\0') { semantic_error("There is non-digit character" - " in offset."); + " in offset.\n"); + return -EINVAL; + } break; case '@': /* File name */ - if (pp->file) - semantic_error("SRC@SRC is not allowed."); + if (pp->file) { + semantic_error("SRC@SRC is not allowed.\n"); + return -EINVAL; + } pp->file = strdup(arg); - DIE_IF(pp->file == NULL); + if (pp->file == NULL) + return -ENOMEM; break; case '%': /* Probe places */ if (strcmp(arg, "return") == 0) { pp->retprobe = 1; - } else /* Others not supported yet */ - semantic_error("%%%s is not supported.", arg); + } else { /* Others not supported yet */ + semantic_error("%%%s is not supported.\n", arg); + return -ENOTSUP; + } break; - default: - DIE_IF("Program has a bug."); + default: /* Buggy case */ + pr_err("This program has a bug at %s:%d.\n", + __FILE__, __LINE__); + return -ENOTSUP; break; } } /* Exclusion check */ - if (pp->lazy_line && pp->line) + if (pp->lazy_line && pp->line) { semantic_error("Lazy pattern can't be used with line number."); + return -EINVAL; + } - if (pp->lazy_line && pp->offset) + if (pp->lazy_line && pp->offset) { semantic_error("Lazy pattern can't be used with offset."); + return -EINVAL; + } - if (pp->line && pp->offset) + if (pp->line && pp->offset) { semantic_error("Offset can't be used with line number."); + return -EINVAL; + } - if 
(!pp->line && !pp->lazy_line && pp->file && !pp->function) + if (!pp->line && !pp->lazy_line && pp->file && !pp->function) { semantic_error("File always requires line number or " "lazy pattern."); + return -EINVAL; + } - if (pp->offset && !pp->function) + if (pp->offset && !pp->function) { semantic_error("Offset requires an entry function."); + return -EINVAL; + } - if (pp->retprobe && !pp->function) + if (pp->retprobe && !pp->function) { semantic_error("Return probe requires an entry function."); + return -EINVAL; + } - if ((pp->offset || pp->line || pp->lazy_line) && pp->retprobe) + if ((pp->offset || pp->line || pp->lazy_line) && pp->retprobe) { semantic_error("Offset/Line/Lazy pattern can't be used with " "return probe."); + return -EINVAL; + } - pr_debug("symbol:%s file:%s line:%d offset:%d return:%d lazy:%s\n", + pr_debug("symbol:%s file:%s line:%d offset:%lu return:%d lazy:%s\n", pp->function, pp->file, pp->line, pp->offset, pp->retprobe, pp->lazy_line); + return 0; } -/* Parse perf-probe event definition */ -void parse_perf_probe_event(const char *str, struct probe_point *pp, - bool *need_dwarf) +/* Parse perf-probe event argument */ +static int parse_perf_probe_arg(char *str, struct perf_probe_arg *arg) { - char **argv; - int argc, i; + char *tmp; + struct perf_probe_arg_field **fieldp; + + pr_debug("parsing arg: %s into ", str); - *need_dwarf = false; + tmp = strchr(str, '='); + if (tmp) { + arg->name = strndup(str, tmp - str); + if (arg->name == NULL) + return -ENOMEM; + pr_debug("name:%s ", arg->name); + str = tmp + 1; + } - argv = argv_split(str, &argc); - if (!argv) - die("argv_split failed."); - if (argc > MAX_PROBE_ARGS + 1) - semantic_error("Too many arguments"); + tmp = strchr(str, ':'); + if (tmp) { /* Type setting */ + *tmp = '\0'; + arg->type = strdup(tmp + 1); + if (arg->type == NULL) + return -ENOMEM; + pr_debug("type:%s ", arg->type); + } + tmp = strpbrk(str, "-."); + if (!is_c_varname(str) || !tmp) { + /* A variable, register, symbol or special value */ + arg->var = strdup(str); + if (arg->var == NULL) + return -ENOMEM; + pr_debug("%s\n", arg->var); + return 0; + } + + /* Structure fields */ + arg->var = strndup(str, tmp - str); + if (arg->var == NULL) + return -ENOMEM; + pr_debug("%s, ", arg->var); + fieldp = &arg->field; + + do { + *fieldp = zalloc(sizeof(struct perf_probe_arg_field)); + if (*fieldp == NULL) + return -ENOMEM; + if (*tmp == '.') { + str = tmp + 1; + (*fieldp)->ref = false; + } else if (tmp[1] == '>') { + str = tmp + 2; + (*fieldp)->ref = true; + } else { + semantic_error("Argument parse error: %s\n", str); + return -EINVAL; + } + + tmp = strpbrk(str, "-."); + if (tmp) { + (*fieldp)->name = strndup(str, tmp - str); + if ((*fieldp)->name == NULL) + return -ENOMEM; + pr_debug("%s(%d), ", (*fieldp)->name, (*fieldp)->ref); + fieldp = &(*fieldp)->next; + } + } while (tmp); + (*fieldp)->name = strdup(str); + if ((*fieldp)->name == NULL) + return -ENOMEM; + pr_debug("%s(%d)\n", (*fieldp)->name, (*fieldp)->ref); + + /* If no name is specified, set the last field name */ + if (!arg->name) { + arg->name = strdup((*fieldp)->name); + if (arg->name == NULL) + return -ENOMEM; + } + return 0; +} + +/* Parse perf-probe event command */ +int parse_perf_probe_command(const char *cmd, struct perf_probe_event *pev) +{ + char **argv; + int argc, i, ret = 0; + + argv = argv_split(cmd, &argc); + if (!argv) { + pr_debug("Failed to split arguments.\n"); + return -ENOMEM; + } + if (argc - 1 > MAX_PROBE_ARGS) { + semantic_error("Too many probe arguments (%d).\n", argc - 1); 
+ ret = -ERANGE; + goto out; + } /* Parse probe point */ - parse_perf_probe_probepoint(argv[0], pp); - if (pp->file || pp->line || pp->lazy_line) - *need_dwarf = true; + ret = parse_perf_probe_point(argv[0], pev); + if (ret < 0) + goto out; /* Copy arguments and ensure return probe has no C argument */ - pp->nr_args = argc - 1; - pp->args = zalloc(sizeof(char *) * pp->nr_args); - for (i = 0; i < pp->nr_args; i++) { - pp->args[i] = strdup(argv[i + 1]); - if (!pp->args[i]) - die("Failed to copy argument."); - if (is_c_varname(pp->args[i])) { - if (pp->retprobe) - semantic_error("You can't specify local" - " variable for kretprobe"); - *need_dwarf = true; + pev->nargs = argc - 1; + pev->args = zalloc(sizeof(struct perf_probe_arg) * pev->nargs); + if (pev->args == NULL) { + ret = -ENOMEM; + goto out; + } + for (i = 0; i < pev->nargs && ret >= 0; i++) { + ret = parse_perf_probe_arg(argv[i + 1], &pev->args[i]); + if (ret >= 0 && + is_c_varname(pev->args[i].var) && pev->point.retprobe) { + semantic_error("You can't specify local variable for" + " kretprobe.\n"); + ret = -EINVAL; } } - +out: argv_free(argv); + + return ret; +} + +/* Return true if this perf_probe_event requires debuginfo */ +bool perf_probe_event_need_dwarf(struct perf_probe_event *pev) +{ + int i; + + if (pev->point.file || pev->point.line || pev->point.lazy_line) + return true; + + for (i = 0; i < pev->nargs; i++) + if (is_c_varname(pev->args[i].var)) + return true; + + return false; } /* Parse kprobe_events event into struct probe_point */ -void parse_trace_kprobe_event(const char *str, struct probe_point *pp) +int parse_kprobe_trace_command(const char *cmd, struct kprobe_trace_event *tev) { + struct kprobe_trace_point *tp = &tev->point; char pr; char *p; int ret, i, argc; char **argv; - pr_debug("Parsing kprobe_events: %s\n", str); - argv = argv_split(str, &argc); - if (!argv) - die("argv_split failed."); - if (argc < 2) - semantic_error("Too less arguments."); + pr_debug("Parsing kprobe_events: %s\n", cmd); + argv = argv_split(cmd, &argc); + if (!argv) { + pr_debug("Failed to split arguments.\n"); + return -ENOMEM; + } + if (argc < 2) { + semantic_error("Too few probe arguments.\n"); + ret = -ERANGE; + goto out; + } /* Scan event and group name. 
*/ ret = sscanf(argv[0], "%c:%a[^/ \t]/%a[^ \t]", - &pr, (float *)(void *)&pp->group, - (float *)(void *)&pp->event); - if (ret != 3) - semantic_error("Failed to parse event name: %s", argv[0]); - pr_debug("Group:%s Event:%s probe:%c\n", pp->group, pp->event, pr); + &pr, (float *)(void *)&tev->group, + (float *)(void *)&tev->event); + if (ret != 3) { + semantic_error("Failed to parse event name: %s\n", argv[0]); + ret = -EINVAL; + goto out; + } + pr_debug("Group:%s Event:%s probe:%c\n", tev->group, tev->event, pr); - pp->retprobe = (pr == 'r'); + tp->retprobe = (pr == 'r'); /* Scan function name and offset */ - ret = sscanf(argv[1], "%a[^+]+%d", (float *)(void *)&pp->function, - &pp->offset); + ret = sscanf(argv[1], "%a[^+]+%lu", (float *)(void *)&tp->symbol, + &tp->offset); if (ret == 1) - pp->offset = 0; - - /* kprobe_events doesn't have this information */ - pp->line = 0; - pp->file = NULL; + tp->offset = 0; - pp->nr_args = argc - 2; - pp->args = zalloc(sizeof(char *) * pp->nr_args); - for (i = 0; i < pp->nr_args; i++) { + tev->nargs = argc - 2; + tev->args = zalloc(sizeof(struct kprobe_trace_arg) * tev->nargs); + if (tev->args == NULL) { + ret = -ENOMEM; + goto out; + } + for (i = 0; i < tev->nargs; i++) { p = strchr(argv[i + 2], '='); if (p) /* We don't need which register is assigned. */ - *p = '\0'; - pp->args[i] = strdup(argv[i + 2]); - if (!pp->args[i]) - die("Failed to copy argument."); + *p++ = '\0'; + else + p = argv[i + 2]; + tev->args[i].name = strdup(argv[i + 2]); + /* TODO: parse regs and offset */ + tev->args[i].value = strdup(p); + if (tev->args[i].name == NULL || tev->args[i].value == NULL) { + ret = -ENOMEM; + goto out; + } } - + ret = 0; +out: argv_free(argv); + return ret; } -/* Synthesize only probe point (not argument) */ -int synthesize_perf_probe_point(struct probe_point *pp) +/* Compose only probe arg */ +int synthesize_perf_probe_arg(struct perf_probe_arg *pa, char *buf, size_t len) { - char *buf; - char offs[64] = "", line[64] = ""; + struct perf_probe_arg_field *field = pa->field; int ret; + char *tmp = buf; - pp->probes[0] = buf = zalloc(MAX_CMDLEN); - pp->found = 1; - if (!buf) - die("Failed to allocate memory by zalloc."); + if (pa->name && pa->var) + ret = e_snprintf(tmp, len, "%s=%s", pa->name, pa->var); + else + ret = e_snprintf(tmp, len, "%s", pa->name ? pa->name : pa->var); + if (ret <= 0) + goto error; + tmp += ret; + len -= ret; + + while (field) { + ret = e_snprintf(tmp, len, "%s%s", field->ref ? 
"->" : ".", + field->name); + if (ret <= 0) + goto error; + tmp += ret; + len -= ret; + field = field->next; + } + + if (pa->type) { + ret = e_snprintf(tmp, len, ":%s", pa->type); + if (ret <= 0) + goto error; + tmp += ret; + len -= ret; + } + + return tmp - buf; +error: + pr_debug("Failed to synthesize perf probe argument: %s", + strerror(-ret)); + return ret; +} + +/* Compose only probe point (not argument) */ +static char *synthesize_perf_probe_point(struct perf_probe_point *pp) +{ + char *buf, *tmp; + char offs[32] = "", line[32] = "", file[32] = ""; + int ret, len; + + buf = zalloc(MAX_CMDLEN); + if (buf == NULL) { + ret = -ENOMEM; + goto error; + } if (pp->offset) { - ret = e_snprintf(offs, 64, "+%d", pp->offset); + ret = e_snprintf(offs, 32, "+%lu", pp->offset); if (ret <= 0) goto error; } if (pp->line) { - ret = e_snprintf(line, 64, ":%d", pp->line); + ret = e_snprintf(line, 32, ":%d", pp->line); + if (ret <= 0) + goto error; + } + if (pp->file) { + len = strlen(pp->file) - 31; + if (len < 0) + len = 0; + tmp = strchr(pp->file + len, '/'); + if (!tmp) + tmp = pp->file + len; + ret = e_snprintf(file, 32, "@%s", tmp + 1); if (ret <= 0) goto error; } if (pp->function) - ret = e_snprintf(buf, MAX_CMDLEN, "%s%s%s%s", pp->function, - offs, pp->retprobe ? "%return" : "", line); + ret = e_snprintf(buf, MAX_CMDLEN, "%s%s%s%s%s", pp->function, + offs, pp->retprobe ? "%return" : "", line, + file); else - ret = e_snprintf(buf, MAX_CMDLEN, "%s%s", pp->file, line); - if (ret <= 0) { + ret = e_snprintf(buf, MAX_CMDLEN, "%s%s", file, line); + if (ret <= 0) + goto error; + + return buf; error: - free(pp->probes[0]); - pp->probes[0] = NULL; - pp->found = 0; - } - return ret; + pr_debug("Failed to synthesize perf probe point: %s", + strerror(-ret)); + if (buf) + free(buf); + return NULL; } -int synthesize_perf_probe_event(struct probe_point *pp) +#if 0 +char *synthesize_perf_probe_command(struct perf_probe_event *pev) { char *buf; int i, len, ret; - len = synthesize_perf_probe_point(pp); - if (len < 0) - return 0; + buf = synthesize_perf_probe_point(&pev->point); + if (!buf) + return NULL; - buf = pp->probes[0]; - for (i = 0; i < pp->nr_args; i++) { + len = strlen(buf); + for (i = 0; i < pev->nargs; i++) { ret = e_snprintf(&buf[len], MAX_CMDLEN - len, " %s", - pp->args[i]); - if (ret <= 0) - goto error; + pev->args[i].name); + if (ret <= 0) { + free(buf); + return NULL; + } len += ret; } - pp->found = 1; - return pp->found; -error: - free(pp->probes[0]); - pp->probes[0] = NULL; + return buf; +} +#endif + +static int __synthesize_kprobe_trace_arg_ref(struct kprobe_trace_arg_ref *ref, + char **buf, size_t *buflen, + int depth) +{ + int ret; + if (ref->next) { + depth = __synthesize_kprobe_trace_arg_ref(ref->next, buf, + buflen, depth + 1); + if (depth < 0) + goto out; + } + + ret = e_snprintf(*buf, *buflen, "%+ld(", ref->offset); + if (ret < 0) + depth = ret; + else { + *buf += ret; + *buflen -= ret; + } +out: + return depth; - return ret; } -int synthesize_trace_kprobe_event(struct probe_point *pp) +static int synthesize_kprobe_trace_arg(struct kprobe_trace_arg *arg, + char *buf, size_t buflen) { + int ret, depth = 0; + char *tmp = buf; + + /* Argument name or separator */ + if (arg->name) + ret = e_snprintf(buf, buflen, " %s=", arg->name); + else + ret = e_snprintf(buf, buflen, " "); + if (ret < 0) + return ret; + buf += ret; + buflen -= ret; + + /* Dereferencing arguments */ + if (arg->ref) { + depth = __synthesize_kprobe_trace_arg_ref(arg->ref, &buf, + &buflen, 1); + if (depth < 0) + return depth; 
+ } + + /* Print argument value */ + ret = e_snprintf(buf, buflen, "%s", arg->value); + if (ret < 0) + return ret; + buf += ret; + buflen -= ret; + + /* Closing */ + while (depth--) { + ret = e_snprintf(buf, buflen, ")"); + if (ret < 0) + return ret; + buf += ret; + buflen -= ret; + } + /* Print argument type */ + if (arg->type) { + ret = e_snprintf(buf, buflen, ":%s", arg->type); + if (ret <= 0) + return ret; + buf += ret; + } + + return buf - tmp; +} + +char *synthesize_kprobe_trace_command(struct kprobe_trace_event *tev) +{ + struct kprobe_trace_point *tp = &tev->point; char *buf; int i, len, ret; - pp->probes[0] = buf = zalloc(MAX_CMDLEN); - if (!buf) - die("Failed to allocate memory by zalloc."); - ret = e_snprintf(buf, MAX_CMDLEN, "%s+%d", pp->function, pp->offset); - if (ret <= 0) + buf = zalloc(MAX_CMDLEN); + if (buf == NULL) + return NULL; + + len = e_snprintf(buf, MAX_CMDLEN, "%c:%s/%s %s+%lu", + tp->retprobe ? 'r' : 'p', + tev->group, tev->event, + tp->symbol, tp->offset); + if (len <= 0) goto error; - len = ret; - for (i = 0; i < pp->nr_args; i++) { - ret = e_snprintf(&buf[len], MAX_CMDLEN - len, " %s", - pp->args[i]); + for (i = 0; i < tev->nargs; i++) { + ret = synthesize_kprobe_trace_arg(&tev->args[i], buf + len, + MAX_CMDLEN - len); if (ret <= 0) goto error; len += ret; } - pp->found = 1; - return pp->found; + return buf; error: - free(pp->probes[0]); - pp->probes[0] = NULL; + free(buf); + return NULL; +} + +int convert_to_perf_probe_event(struct kprobe_trace_event *tev, + struct perf_probe_event *pev) +{ + char buf[64] = ""; + int i, ret; + + /* Convert event/group name */ + pev->event = strdup(tev->event); + pev->group = strdup(tev->group); + if (pev->event == NULL || pev->group == NULL) + return -ENOMEM; + + /* Convert trace_point to probe_point */ + ret = convert_to_perf_probe_point(&tev->point, &pev->point); + if (ret < 0) + return ret; + + /* Convert trace_arg to probe_arg */ + pev->nargs = tev->nargs; + pev->args = zalloc(sizeof(struct perf_probe_arg) * pev->nargs); + if (pev->args == NULL) + return -ENOMEM; + for (i = 0; i < tev->nargs && ret >= 0; i++) { + if (tev->args[i].name) + pev->args[i].name = strdup(tev->args[i].name); + else { + ret = synthesize_kprobe_trace_arg(&tev->args[i], + buf, 64); + pev->args[i].name = strdup(buf); + } + if (pev->args[i].name == NULL && ret >= 0) + ret = -ENOMEM; + } + + if (ret < 0) + clear_perf_probe_event(pev); return ret; } -static int open_kprobe_events(int flags, int mode) +void clear_perf_probe_event(struct perf_probe_event *pev) +{ + struct perf_probe_point *pp = &pev->point; + struct perf_probe_arg_field *field, *next; + int i; + + if (pev->event) + free(pev->event); + if (pev->group) + free(pev->group); + if (pp->file) + free(pp->file); + if (pp->function) + free(pp->function); + if (pp->lazy_line) + free(pp->lazy_line); + for (i = 0; i < pev->nargs; i++) { + if (pev->args[i].name) + free(pev->args[i].name); + if (pev->args[i].var) + free(pev->args[i].var); + if (pev->args[i].type) + free(pev->args[i].type); + field = pev->args[i].field; + while (field) { + next = field->next; + if (field->name) + free(field->name); + free(field); + field = next; + } + } + if (pev->args) + free(pev->args); + memset(pev, 0, sizeof(*pev)); +} + +void clear_kprobe_trace_event(struct kprobe_trace_event *tev) +{ + struct kprobe_trace_arg_ref *ref, *next; + int i; + + if (tev->event) + free(tev->event); + if (tev->group) + free(tev->group); + if (tev->point.symbol) + free(tev->point.symbol); + for (i = 0; i < tev->nargs; i++) { + if 
(tev->args[i].name) + free(tev->args[i].name); + if (tev->args[i].value) + free(tev->args[i].value); + if (tev->args[i].type) + free(tev->args[i].type); + ref = tev->args[i].ref; + while (ref) { + next = ref->next; + free(ref); + ref = next; + } + } + if (tev->args) + free(tev->args); + memset(tev, 0, sizeof(*tev)); +} + +static int open_kprobe_events(bool readwrite) { char buf[PATH_MAX]; + const char *__debugfs; int ret; - ret = e_snprintf(buf, PATH_MAX, "%s/../kprobe_events", debugfs_path); - if (ret < 0) - die("Failed to make kprobe_events path."); + __debugfs = debugfs_find_mountpoint(); + if (__debugfs == NULL) { + pr_warning("Debugfs is not mounted.\n"); + return -ENOENT; + } + + ret = e_snprintf(buf, PATH_MAX, "%stracing/kprobe_events", __debugfs); + if (ret >= 0) { + pr_debug("Opening %s write=%d\n", buf, readwrite); + if (readwrite && !probe_event_dry_run) + ret = open(buf, O_RDWR, O_APPEND); + else + ret = open(buf, O_RDONLY, 0); + } - ret = open(buf, flags, mode); if (ret < 0) { if (errno == ENOENT) - die("kprobe_events file does not exist -" - " please rebuild with CONFIG_KPROBE_EVENT."); + pr_warning("kprobe_events file does not exist - please" + " rebuild kernel with CONFIG_KPROBE_EVENT.\n"); else - die("Could not open kprobe_events file: %s", - strerror(errno)); + pr_warning("Failed to open kprobe_events file: %s\n", + strerror(errno)); } return ret; } /* Get raw string list of current kprobe_events */ -static struct strlist *get_trace_kprobe_event_rawlist(int fd) +static struct strlist *get_kprobe_trace_command_rawlist(int fd) { int ret, idx; FILE *fp; @@ -447,271 +1142,486 @@ static struct strlist *get_trace_kprobe_event_rawlist(int fd) if (p[idx] == '\n') p[idx] = '\0'; ret = strlist__add(sl, buf); - if (ret < 0) - die("strlist__add failed: %s", strerror(-ret)); + if (ret < 0) { + pr_debug("strlist__add failed: %s\n", strerror(-ret)); + strlist__delete(sl); + return NULL; + } } fclose(fp); return sl; } -/* Free and zero clear probe_point */ -static void clear_probe_point(struct probe_point *pp) -{ - int i; - - if (pp->event) - free(pp->event); - if (pp->group) - free(pp->group); - if (pp->function) - free(pp->function); - if (pp->file) - free(pp->file); - if (pp->lazy_line) - free(pp->lazy_line); - for (i = 0; i < pp->nr_args; i++) - free(pp->args[i]); - if (pp->args) - free(pp->args); - for (i = 0; i < pp->found; i++) - free(pp->probes[i]); - memset(pp, 0, sizeof(*pp)); -} - /* Show an event */ -static void show_perf_probe_event(const char *event, const char *place, - struct probe_point *pp) +static int show_perf_probe_event(struct perf_probe_event *pev) { int i, ret; char buf[128]; + char *place; + + /* Synthesize only event probe point */ + place = synthesize_perf_probe_point(&pev->point); + if (!place) + return -EINVAL; - ret = e_snprintf(buf, 128, "%s:%s", pp->group, event); + ret = e_snprintf(buf, 128, "%s:%s", pev->group, pev->event); if (ret < 0) - die("Failed to copy event: %s", strerror(-ret)); - printf(" %-40s (on %s", buf, place); + return ret; + + printf(" %-20s (on %s", buf, place); - if (pp->nr_args > 0) { + if (pev->nargs > 0) { printf(" with"); - for (i = 0; i < pp->nr_args; i++) - printf(" %s", pp->args[i]); + for (i = 0; i < pev->nargs; i++) { + ret = synthesize_perf_probe_arg(&pev->args[i], + buf, 128); + if (ret < 0) + break; + printf(" %s", buf); + } } printf(")\n"); + free(place); + return ret; } /* List up current perf-probe events */ -void show_perf_probe_events(void) +int show_perf_probe_events(void) { - int fd; - struct probe_point pp; + int fd, 
ret; + struct kprobe_trace_event tev; + struct perf_probe_event pev; struct strlist *rawlist; struct str_node *ent; setup_pager(); - memset(&pp, 0, sizeof(pp)); + ret = init_vmlinux(); + if (ret < 0) + return ret; + + memset(&tev, 0, sizeof(tev)); + memset(&pev, 0, sizeof(pev)); - fd = open_kprobe_events(O_RDONLY, 0); - rawlist = get_trace_kprobe_event_rawlist(fd); + fd = open_kprobe_events(false); + if (fd < 0) + return fd; + + rawlist = get_kprobe_trace_command_rawlist(fd); close(fd); + if (!rawlist) + return -ENOENT; strlist__for_each(ent, rawlist) { - parse_trace_kprobe_event(ent->s, &pp); - /* Synthesize only event probe point */ - synthesize_perf_probe_point(&pp); - /* Show an event */ - show_perf_probe_event(pp.event, pp.probes[0], &pp); - clear_probe_point(&pp); + ret = parse_kprobe_trace_command(ent->s, &tev); + if (ret >= 0) { + ret = convert_to_perf_probe_event(&tev, &pev); + if (ret >= 0) + ret = show_perf_probe_event(&pev); + } + clear_perf_probe_event(&pev); + clear_kprobe_trace_event(&tev); + if (ret < 0) + break; } - strlist__delete(rawlist); + + return ret; } /* Get current perf-probe event names */ -static struct strlist *get_perf_event_names(int fd, bool include_group) +static struct strlist *get_kprobe_trace_event_names(int fd, bool include_group) { char buf[128]; struct strlist *sl, *rawlist; struct str_node *ent; - struct probe_point pp; + struct kprobe_trace_event tev; + int ret = 0; - memset(&pp, 0, sizeof(pp)); - rawlist = get_trace_kprobe_event_rawlist(fd); + memset(&tev, 0, sizeof(tev)); + rawlist = get_kprobe_trace_command_rawlist(fd); sl = strlist__new(true, NULL); strlist__for_each(ent, rawlist) { - parse_trace_kprobe_event(ent->s, &pp); + ret = parse_kprobe_trace_command(ent->s, &tev); + if (ret < 0) + break; if (include_group) { - if (e_snprintf(buf, 128, "%s:%s", pp.group, - pp.event) < 0) - die("Failed to copy group:event name."); - strlist__add(sl, buf); + ret = e_snprintf(buf, 128, "%s:%s", tev.group, + tev.event); + if (ret >= 0) + ret = strlist__add(sl, buf); } else - strlist__add(sl, pp.event); - clear_probe_point(&pp); + ret = strlist__add(sl, tev.event); + clear_kprobe_trace_event(&tev); + if (ret < 0) + break; } - strlist__delete(rawlist); + if (ret < 0) { + strlist__delete(sl); + return NULL; + } return sl; } -static void write_trace_kprobe_event(int fd, const char *buf) +static int write_kprobe_trace_event(int fd, struct kprobe_trace_event *tev) { - int ret; + int ret = 0; + char *buf = synthesize_kprobe_trace_command(tev); + + if (!buf) { + pr_debug("Failed to synthesize kprobe trace event.\n"); + return -EINVAL; + } pr_debug("Writing event: %s\n", buf); - ret = write(fd, buf, strlen(buf)); - if (ret <= 0) - die("Failed to write event: %s", strerror(errno)); + if (!probe_event_dry_run) { + ret = write(fd, buf, strlen(buf)); + if (ret <= 0) + pr_warning("Failed to write event: %s\n", + strerror(errno)); + } + free(buf); + return ret; } -static void get_new_event_name(char *buf, size_t len, const char *base, - struct strlist *namelist, bool allow_suffix) +static int get_new_event_name(char *buf, size_t len, const char *base, + struct strlist *namelist, bool allow_suffix) { int i, ret; /* Try no suffix */ ret = e_snprintf(buf, len, "%s", base); - if (ret < 0) - die("snprintf() failed: %s", strerror(-ret)); + if (ret < 0) { + pr_debug("snprintf() failed: %s\n", strerror(-ret)); + return ret; + } if (!strlist__has_entry(namelist, buf)) - return; + return 0; if (!allow_suffix) { pr_warning("Error: event \"%s\" already exists. 
" "(Use -f to force duplicates.)\n", base); - die("Can't add new event."); + return -EEXIST; } /* Try to add suffix */ for (i = 1; i < MAX_EVENT_INDEX; i++) { ret = e_snprintf(buf, len, "%s_%d", base, i); - if (ret < 0) - die("snprintf() failed: %s", strerror(-ret)); + if (ret < 0) { + pr_debug("snprintf() failed: %s\n", strerror(-ret)); + return ret; + } if (!strlist__has_entry(namelist, buf)) break; } - if (i == MAX_EVENT_INDEX) - die("Too many events are on the same function."); + if (i == MAX_EVENT_INDEX) { + pr_warning("Too many events are on the same function.\n"); + ret = -ERANGE; + } + + return ret; } -void add_trace_kprobe_events(struct probe_point *probes, int nr_probes, - bool force_add) +static int __add_kprobe_trace_events(struct perf_probe_event *pev, + struct kprobe_trace_event *tevs, + int ntevs, bool allow_suffix) { - int i, j, fd; - struct probe_point *pp; - char buf[MAX_CMDLEN]; - char event[64]; + int i, fd, ret; + struct kprobe_trace_event *tev = NULL; + char buf[64]; + const char *event, *group; struct strlist *namelist; - bool allow_suffix; - fd = open_kprobe_events(O_RDWR, O_APPEND); + fd = open_kprobe_events(true); + if (fd < 0) + return fd; /* Get current event names */ - namelist = get_perf_event_names(fd, false); - - for (j = 0; j < nr_probes; j++) { - pp = probes + j; - if (!pp->event) - pp->event = strdup(pp->function); - if (!pp->group) - pp->group = strdup(PERFPROBE_GROUP); - DIE_IF(!pp->event || !pp->group); - /* If force_add is true, suffix search is allowed */ - allow_suffix = force_add; - for (i = 0; i < pp->found; i++) { - /* Get an unused new event name */ - get_new_event_name(event, 64, pp->event, namelist, - allow_suffix); - snprintf(buf, MAX_CMDLEN, "%c:%s/%s %s\n", - pp->retprobe ? 'r' : 'p', - pp->group, event, - pp->probes[i]); - write_trace_kprobe_event(fd, buf); - printf("Added new event:\n"); - /* Get the first parameter (probe-point) */ - sscanf(pp->probes[i], "%s", buf); - show_perf_probe_event(event, buf, pp); - /* Add added event name to namelist */ - strlist__add(namelist, event); - /* - * Probes after the first probe which comes from same - * user input are always allowed to add suffix, because - * there might be several addresses corresponding to - * one code line. - */ - allow_suffix = true; + namelist = get_kprobe_trace_event_names(fd, false); + if (!namelist) { + pr_debug("Failed to get current event list.\n"); + return -EIO; + } + + ret = 0; + printf("Add new event%s\n", (ntevs > 1) ? 
"s:" : ":"); + for (i = 0; i < ntevs; i++) { + tev = &tevs[i]; + if (pev->event) + event = pev->event; + else + if (pev->point.function) + event = pev->point.function; + else + event = tev->point.symbol; + if (pev->group) + group = pev->group; + else + group = PERFPROBE_GROUP; + + /* Get an unused new event name */ + ret = get_new_event_name(buf, 64, event, + namelist, allow_suffix); + if (ret < 0) + break; + event = buf; + + tev->event = strdup(event); + tev->group = strdup(group); + if (tev->event == NULL || tev->group == NULL) { + ret = -ENOMEM; + break; } + ret = write_kprobe_trace_event(fd, tev); + if (ret < 0) + break; + /* Add added event name to namelist */ + strlist__add(namelist, event); + + /* Trick here - save current event/group */ + event = pev->event; + group = pev->group; + pev->event = tev->event; + pev->group = tev->group; + show_perf_probe_event(pev); + /* Trick here - restore current event/group */ + pev->event = (char *)event; + pev->group = (char *)group; + + /* + * Probes after the first probe which comes from same + * user input are always allowed to add suffix, because + * there might be several addresses corresponding to + * one code line. + */ + allow_suffix = true; + } + + if (ret >= 0) { + /* Show how to use the event. */ + printf("\nYou can now use it on all perf tools, such as:\n\n"); + printf("\tperf record -e %s:%s -aR sleep 1\n\n", tev->group, + tev->event); } - /* Show how to use the event. */ - printf("\nYou can now use it on all perf tools, such as:\n\n"); - printf("\tperf record -e %s:%s -a sleep 1\n\n", PERFPROBE_GROUP, event); strlist__delete(namelist); close(fd); + return ret; +} + +static int convert_to_kprobe_trace_events(struct perf_probe_event *pev, + struct kprobe_trace_event **tevs, + int max_tevs) +{ + struct symbol *sym; + int ret = 0, i; + struct kprobe_trace_event *tev; + + /* Convert perf_probe_event with debuginfo */ + ret = try_to_find_kprobe_trace_events(pev, tevs, max_tevs); + if (ret != 0) + return ret; + + /* Allocate trace event buffer */ + tev = *tevs = zalloc(sizeof(struct kprobe_trace_event)); + if (tev == NULL) + return -ENOMEM; + + /* Copy parameters */ + tev->point.symbol = strdup(pev->point.function); + if (tev->point.symbol == NULL) { + ret = -ENOMEM; + goto error; + } + tev->point.offset = pev->point.offset; + tev->nargs = pev->nargs; + if (tev->nargs) { + tev->args = zalloc(sizeof(struct kprobe_trace_arg) + * tev->nargs); + if (tev->args == NULL) { + ret = -ENOMEM; + goto error; + } + for (i = 0; i < tev->nargs; i++) { + if (pev->args[i].name) { + tev->args[i].name = strdup(pev->args[i].name); + if (tev->args[i].name == NULL) { + ret = -ENOMEM; + goto error; + } + } + tev->args[i].value = strdup(pev->args[i].var); + if (tev->args[i].value == NULL) { + ret = -ENOMEM; + goto error; + } + if (pev->args[i].type) { + tev->args[i].type = strdup(pev->args[i].type); + if (tev->args[i].type == NULL) { + ret = -ENOMEM; + goto error; + } + } + } + } + + /* Currently just checking function name from symbol map */ + sym = map__find_symbol_by_name(machine.vmlinux_maps[MAP__FUNCTION], + tev->point.symbol, NULL); + if (!sym) { + pr_warning("Kernel symbol \'%s\' not found.\n", + tev->point.symbol); + ret = -ENOENT; + goto error; + } + + return 1; +error: + clear_kprobe_trace_event(tev); + free(tev); + *tevs = NULL; + return ret; +} + +struct __event_package { + struct perf_probe_event *pev; + struct kprobe_trace_event *tevs; + int ntevs; +}; + +int add_perf_probe_events(struct perf_probe_event *pevs, int npevs, + bool force_add, int 
max_tevs) +{ + int i, j, ret; + struct __event_package *pkgs; + + pkgs = zalloc(sizeof(struct __event_package) * npevs); + if (pkgs == NULL) + return -ENOMEM; + + /* Init vmlinux path */ + ret = init_vmlinux(); + if (ret < 0) + return ret; + + /* Loop 1: convert all events */ + for (i = 0; i < npevs; i++) { + pkgs[i].pev = &pevs[i]; + /* Convert with or without debuginfo */ + ret = convert_to_kprobe_trace_events(pkgs[i].pev, + &pkgs[i].tevs, max_tevs); + if (ret < 0) + goto end; + pkgs[i].ntevs = ret; + } + + /* Loop 2: add all events */ + for (i = 0; i < npevs && ret >= 0; i++) + ret = __add_kprobe_trace_events(pkgs[i].pev, pkgs[i].tevs, + pkgs[i].ntevs, force_add); +end: + /* Loop 3: cleanup trace events */ + for (i = 0; i < npevs; i++) + for (j = 0; j < pkgs[i].ntevs; j++) + clear_kprobe_trace_event(&pkgs[i].tevs[j]); + + return ret; } -static void __del_trace_kprobe_event(int fd, struct str_node *ent) +static int __del_trace_kprobe_event(int fd, struct str_node *ent) { char *p; char buf[128]; + int ret; /* Convert from perf-probe event to trace-kprobe event */ - if (e_snprintf(buf, 128, "-:%s", ent->s) < 0) - die("Failed to copy event."); + ret = e_snprintf(buf, 128, "-:%s", ent->s); + if (ret < 0) + goto error; + p = strchr(buf + 2, ':'); - if (!p) - die("Internal error: %s should have ':' but not.", ent->s); + if (!p) { + pr_debug("Internal error: %s should have ':' but not.\n", + ent->s); + ret = -ENOTSUP; + goto error; + } *p = '/'; - write_trace_kprobe_event(fd, buf); + pr_debug("Writing event: %s\n", buf); + ret = write(fd, buf, strlen(buf)); + if (ret < 0) + goto error; + printf("Remove event: %s\n", ent->s); + return 0; +error: + pr_warning("Failed to delete event: %s\n", strerror(-ret)); + return ret; } -static void del_trace_kprobe_event(int fd, const char *group, - const char *event, struct strlist *namelist) +static int del_trace_kprobe_event(int fd, const char *group, + const char *event, struct strlist *namelist) { char buf[128]; struct str_node *ent, *n; - int found = 0; + int found = 0, ret = 0; - if (e_snprintf(buf, 128, "%s:%s", group, event) < 0) - die("Failed to copy event."); + ret = e_snprintf(buf, 128, "%s:%s", group, event); + if (ret < 0) { + pr_err("Failed to copy event."); + return ret; + } if (strpbrk(buf, "*?")) { /* Glob-exp */ strlist__for_each_safe(ent, n, namelist) if (strglobmatch(ent->s, buf)) { found++; - __del_trace_kprobe_event(fd, ent); + ret = __del_trace_kprobe_event(fd, ent); + if (ret < 0) + break; strlist__remove(namelist, ent); } } else { ent = strlist__find(namelist, buf); if (ent) { found++; - __del_trace_kprobe_event(fd, ent); - strlist__remove(namelist, ent); + ret = __del_trace_kprobe_event(fd, ent); + if (ret >= 0) + strlist__remove(namelist, ent); } } - if (found == 0) - pr_info("Info: event \"%s\" does not exist, could not remove it.\n", buf); + if (found == 0 && ret >= 0) + pr_info("Info: Event \"%s\" does not exist.\n", buf); + + return ret; } -void del_trace_kprobe_events(struct strlist *dellist) +int del_perf_probe_events(struct strlist *dellist) { - int fd; + int fd, ret = 0; const char *group, *event; char *p, *str; struct str_node *ent; struct strlist *namelist; - fd = open_kprobe_events(O_RDWR, O_APPEND); + fd = open_kprobe_events(true); + if (fd < 0) + return fd; + /* Get current event names */ - namelist = get_perf_event_names(fd, true); + namelist = get_kprobe_trace_event_names(fd, true); + if (namelist == NULL) + return -EINVAL; strlist__for_each(ent, dellist) { str = strdup(ent->s); - if (!str) - die("Failed to copy 
event."); + if (str == NULL) { + ret = -ENOMEM; + break; + } pr_debug("Parsing: %s\n", str); p = strchr(str, ':'); if (p) { @@ -723,80 +1633,14 @@ void del_trace_kprobe_events(struct strlist *dellist) event = str; } pr_debug("Group: %s, Event: %s\n", group, event); - del_trace_kprobe_event(fd, group, event, namelist); + ret = del_trace_kprobe_event(fd, group, event, namelist); free(str); + if (ret < 0) + break; } strlist__delete(namelist); close(fd); -} -#define LINEBUF_SIZE 256 -#define NR_ADDITIONAL_LINES 2 - -static void show_one_line(FILE *fp, unsigned int l, bool skip, bool show_num) -{ - char buf[LINEBUF_SIZE]; - const char *color = PERF_COLOR_BLUE; - - if (fgets(buf, LINEBUF_SIZE, fp) == NULL) - goto error; - if (!skip) { - if (show_num) - fprintf(stdout, "%7u %s", l, buf); - else - color_fprintf(stdout, color, " %s", buf); - } - - while (strlen(buf) == LINEBUF_SIZE - 1 && - buf[LINEBUF_SIZE - 2] != '\n') { - if (fgets(buf, LINEBUF_SIZE, fp) == NULL) - goto error; - if (!skip) { - if (show_num) - fprintf(stdout, "%s", buf); - else - color_fprintf(stdout, color, "%s", buf); - } - } - return; -error: - if (feof(fp)) - die("Source file is shorter than expected."); - else - die("File read error: %s", strerror(errno)); + return ret; } -void show_line_range(struct line_range *lr) -{ - unsigned int l = 1; - struct line_node *ln; - FILE *fp; - - setup_pager(); - - if (lr->function) - fprintf(stdout, "<%s:%d>\n", lr->function, - lr->start - lr->offset); - else - fprintf(stdout, "<%s:%d>\n", lr->file, lr->start); - - fp = fopen(lr->path, "r"); - if (fp == NULL) - die("Failed to open %s: %s", lr->path, strerror(errno)); - /* Skip to starting line number */ - while (l < lr->start) - show_one_line(fp, l++, true, false); - - list_for_each_entry(ln, &lr->line_list, list) { - while (ln->line > l) - show_one_line(fp, (l++) - lr->offset, false, false); - show_one_line(fp, (l++) - lr->offset, false, true); - } - - if (lr->end == INT_MAX) - lr->end = l + NR_ADDITIONAL_LINES; - while (l < lr->end && !feof(fp)) - show_one_line(fp, (l++) - lr->offset, false, false); - - fclose(fp); -} diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h index 711287d4baea..e9db1a214ca4 100644 --- a/tools/perf/util/probe-event.h +++ b/tools/perf/util/probe-event.h @@ -2,21 +2,125 @@ #define _PROBE_EVENT_H #include <stdbool.h> -#include "probe-finder.h" #include "strlist.h" -extern void parse_line_range_desc(const char *arg, struct line_range *lr); -extern void parse_perf_probe_event(const char *str, struct probe_point *pp, - bool *need_dwarf); -extern int synthesize_perf_probe_point(struct probe_point *pp); -extern int synthesize_perf_probe_event(struct probe_point *pp); -extern void parse_trace_kprobe_event(const char *str, struct probe_point *pp); -extern int synthesize_trace_kprobe_event(struct probe_point *pp); -extern void add_trace_kprobe_events(struct probe_point *probes, int nr_probes, - bool force_add); -extern void del_trace_kprobe_events(struct strlist *dellist); -extern void show_perf_probe_events(void); -extern void show_line_range(struct line_range *lr); +extern bool probe_event_dry_run; + +/* kprobe-tracer tracing point */ +struct kprobe_trace_point { + char *symbol; /* Base symbol */ + unsigned long offset; /* Offset from symbol */ + bool retprobe; /* Return probe flag */ +}; + +/* kprobe-tracer tracing argument referencing offset */ +struct kprobe_trace_arg_ref { + struct kprobe_trace_arg_ref *next; /* Next reference */ + long offset; /* Offset value */ +}; + +/* kprobe-tracer tracing 
argument */ +struct kprobe_trace_arg { + char *name; /* Argument name */ + char *value; /* Base value */ + char *type; /* Type name */ + struct kprobe_trace_arg_ref *ref; /* Referencing offset */ +}; + +/* kprobe-tracer tracing event (point + arg) */ +struct kprobe_trace_event { + char *event; /* Event name */ + char *group; /* Group name */ + struct kprobe_trace_point point; /* Trace point */ + int nargs; /* Number of args */ + struct kprobe_trace_arg *args; /* Arguments */ +}; + +/* Perf probe probing point */ +struct perf_probe_point { + char *file; /* File path */ + char *function; /* Function name */ + int line; /* Line number */ + bool retprobe; /* Return probe flag */ + char *lazy_line; /* Lazy matching pattern */ + unsigned long offset; /* Offset from function entry */ +}; + +/* Perf probe probing argument field chain */ +struct perf_probe_arg_field { + struct perf_probe_arg_field *next; /* Next field */ + char *name; /* Name of the field */ + bool ref; /* Referencing flag */ +}; + +/* Perf probe probing argument */ +struct perf_probe_arg { + char *name; /* Argument name */ + char *var; /* Variable name */ + char *type; /* Type name */ + struct perf_probe_arg_field *field; /* Structure fields */ +}; + +/* Perf probe probing event (point + arg) */ +struct perf_probe_event { + char *event; /* Event name */ + char *group; /* Group name */ + struct perf_probe_point point; /* Probe point */ + int nargs; /* Number of arguments */ + struct perf_probe_arg *args; /* Arguments */ +}; + + +/* Line number container */ +struct line_node { + struct list_head list; + int line; +}; + +/* Line range */ +struct line_range { + char *file; /* File name */ + char *function; /* Function name */ + int start; /* Start line number */ + int end; /* End line number */ + int offset; /* Start line offset */ + char *path; /* Real path name */ + struct list_head line_list; /* Visible lines */ +}; + +/* Command string to events */ +extern int parse_perf_probe_command(const char *cmd, + struct perf_probe_event *pev); +extern int parse_kprobe_trace_command(const char *cmd, + struct kprobe_trace_event *tev); + +/* Events to command string */ +extern char *synthesize_perf_probe_command(struct perf_probe_event *pev); +extern char *synthesize_kprobe_trace_command(struct kprobe_trace_event *tev); +extern int synthesize_perf_probe_arg(struct perf_probe_arg *pa, char *buf, + size_t len); + +/* Check the perf_probe_event needs debuginfo */ +extern bool perf_probe_event_need_dwarf(struct perf_probe_event *pev); + +/* Convert from kprobe_trace_event to perf_probe_event */ +extern int convert_to_perf_probe_event(struct kprobe_trace_event *tev, + struct perf_probe_event *pev); + +/* Release event contents */ +extern void clear_perf_probe_event(struct perf_probe_event *pev); +extern void clear_kprobe_trace_event(struct kprobe_trace_event *tev); + +/* Command string to line-range */ +extern int parse_line_range_desc(const char *cmd, struct line_range *lr); + + +extern int add_perf_probe_events(struct perf_probe_event *pevs, int npevs, + bool force_add, int max_probe_points); +extern int del_perf_probe_events(struct strlist *dellist); +extern int show_perf_probe_events(void); +extern int show_line_range(struct line_range *lr); + /* Maximum index number of event-name postfix */ #define MAX_EVENT_INDEX 1024 diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index c171a243d05b..562b1443e785 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -31,6 +31,7 @@ #include <string.h> 
#include <stdarg.h> #include <ctype.h> +#include <dwarf-regs.h> #include "string.h" #include "event.h" @@ -38,57 +39,8 @@ #include "util.h" #include "probe-finder.h" - -/* - * Generic dwarf analysis helpers - */ - -#define X86_32_MAX_REGS 8 -const char *x86_32_regs_table[X86_32_MAX_REGS] = { - "%ax", - "%cx", - "%dx", - "%bx", - "$stack", /* Stack address instead of %sp */ - "%bp", - "%si", - "%di", -}; - -#define X86_64_MAX_REGS 16 -const char *x86_64_regs_table[X86_64_MAX_REGS] = { - "%ax", - "%dx", - "%cx", - "%bx", - "%si", - "%di", - "%bp", - "%sp", - "%r8", - "%r9", - "%r10", - "%r11", - "%r12", - "%r13", - "%r14", - "%r15", -}; - -/* TODO: switching by dwarf address size */ -#ifdef __x86_64__ -#define ARCH_MAX_REGS X86_64_MAX_REGS -#define arch_regs_table x86_64_regs_table -#else -#define ARCH_MAX_REGS X86_32_MAX_REGS -#define arch_regs_table x86_32_regs_table -#endif - -/* Return architecture dependent register string (for kprobe-tracer) */ -static const char *get_arch_regstr(unsigned int n) -{ - return (n <= ARCH_MAX_REGS) ? arch_regs_table[n] : NULL; -} +/* Kprobe tracer basic type is up to u64 */ +#define MAX_BASIC_TYPE_BITS 64 /* * Compare the tail of two strings. @@ -108,7 +60,7 @@ static int strtailcmp(const char *s1, const char *s2) /* Line number list operations */ /* Add a line to line number list */ -static void line_list__add_line(struct list_head *head, unsigned int line) +static int line_list__add_line(struct list_head *head, int line) { struct line_node *ln; struct list_head *p; @@ -119,21 +71,23 @@ static void line_list__add_line(struct list_head *head, unsigned int line) p = &ln->list; goto found; } else if (ln->line == line) /* Already exist */ - return ; + return 1; } /* List is empty, or the smallest entry */ p = head; found: pr_debug("line list: add a line %u\n", line); ln = zalloc(sizeof(struct line_node)); - DIE_IF(ln == NULL); + if (ln == NULL) + return -ENOMEM; ln->line = line; INIT_LIST_HEAD(&ln->list); list_add(&ln->list, p); + return 0; } /* Check if the line in line number list */ -static int line_list__has_line(struct list_head *head, unsigned int line) +static int line_list__has_line(struct list_head *head, int line) { struct line_node *ln; @@ -184,9 +138,129 @@ static const char *cu_find_realpath(Dwarf_Die *cu_die, const char *fname) if (strtailcmp(src, fname) == 0) break; } + if (i == nfiles) + return NULL; return src; } +/* Compare diename and tname */ +static bool die_compare_name(Dwarf_Die *dw_die, const char *tname) +{ + const char *name; + name = dwarf_diename(dw_die); + return name ? 
strcmp(tname, name) : -1; +} + +/* Get type die, but skip qualifiers and typedef */ +static Dwarf_Die *die_get_real_type(Dwarf_Die *vr_die, Dwarf_Die *die_mem) +{ + Dwarf_Attribute attr; + int tag; + + do { + if (dwarf_attr(vr_die, DW_AT_type, &attr) == NULL || + dwarf_formref_die(&attr, die_mem) == NULL) + return NULL; + + tag = dwarf_tag(die_mem); + vr_die = die_mem; + } while (tag == DW_TAG_const_type || + tag == DW_TAG_restrict_type || + tag == DW_TAG_volatile_type || + tag == DW_TAG_shared_type || + tag == DW_TAG_typedef); + + return die_mem; +} + +static bool die_is_signed_type(Dwarf_Die *tp_die) +{ + Dwarf_Attribute attr; + Dwarf_Word ret; + + if (dwarf_attr(tp_die, DW_AT_encoding, &attr) == NULL || + dwarf_formudata(&attr, &ret) != 0) + return false; + + return (ret == DW_ATE_signed_char || ret == DW_ATE_signed || + ret == DW_ATE_signed_fixed); +} + +static int die_get_byte_size(Dwarf_Die *tp_die) +{ + Dwarf_Attribute attr; + Dwarf_Word ret; + + if (dwarf_attr(tp_die, DW_AT_byte_size, &attr) == NULL || + dwarf_formudata(&attr, &ret) != 0) + return 0; + + return (int)ret; +} + +/* Get data_member_location offset */ +static int die_get_data_member_location(Dwarf_Die *mb_die, Dwarf_Word *offs) +{ + Dwarf_Attribute attr; + Dwarf_Op *expr; + size_t nexpr; + int ret; + + if (dwarf_attr(mb_die, DW_AT_data_member_location, &attr) == NULL) + return -ENOENT; + + if (dwarf_formudata(&attr, offs) != 0) { + /* DW_AT_data_member_location should be DW_OP_plus_uconst */ + ret = dwarf_getlocation(&attr, &expr, &nexpr); + if (ret < 0 || nexpr == 0) + return -ENOENT; + + if (expr[0].atom != DW_OP_plus_uconst || nexpr != 1) { + pr_debug("Unable to get offset:Unexpected OP %x (%zd)\n", + expr[0].atom, nexpr); + return -ENOTSUP; + } + *offs = (Dwarf_Word)expr[0].number; + } + return 0; +} + +/* Return values for die_find callbacks */ +enum { + DIE_FIND_CB_FOUND = 0, /* End of Search */ + DIE_FIND_CB_CHILD = 1, /* Search only children */ + DIE_FIND_CB_SIBLING = 2, /* Search only siblings */ + DIE_FIND_CB_CONTINUE = 3, /* Search children and siblings */ +}; + +/* Search a child die */ +static Dwarf_Die *die_find_child(Dwarf_Die *rt_die, + int (*callback)(Dwarf_Die *, void *), + void *data, Dwarf_Die *die_mem) +{ + Dwarf_Die child_die; + int ret; + + ret = dwarf_child(rt_die, die_mem); + if (ret != 0) + return NULL; + + do { + ret = callback(die_mem, data); + if (ret == DIE_FIND_CB_FOUND) + return die_mem; + + if ((ret & DIE_FIND_CB_CHILD) && + die_find_child(die_mem, callback, data, &child_die)) { + memcpy(die_mem, &child_die, sizeof(Dwarf_Die)); + return die_mem; + } + } while ((ret & DIE_FIND_CB_SIBLING) && + dwarf_siblingof(die_mem, die_mem) == 0); + + return NULL; +} + struct __addr_die_search_param { Dwarf_Addr addr; Dwarf_Die *die_mem; @@ -205,8 +279,8 @@ static int __die_search_func_cb(Dwarf_Die *fn_die, void *data) } /* Search a real subprogram including this line, */ -static Dwarf_Die *die_get_real_subprogram(Dwarf_Die *cu_die, Dwarf_Addr addr, - Dwarf_Die *die_mem) +static Dwarf_Die *die_find_real_subprogram(Dwarf_Die *cu_die, Dwarf_Addr addr, + Dwarf_Die *die_mem) { struct __addr_die_search_param ad; ad.addr = addr; @@ -218,77 +292,64 @@ static Dwarf_Die *die_get_real_subprogram(Dwarf_Die *cu_die, Dwarf_Addr addr, return die_mem; } -/* Similar to dwarf_getfuncs, but returns inlined_subroutine if exists. 
*/ -static Dwarf_Die *die_get_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, - Dwarf_Die *die_mem) +/* die_find callback for inline function search */ +static int __die_find_inline_cb(Dwarf_Die *die_mem, void *data) { - Dwarf_Die child_die; - int ret; + Dwarf_Addr *addr = data; - ret = dwarf_child(sp_die, die_mem); - if (ret != 0) - return NULL; + if (dwarf_tag(die_mem) == DW_TAG_inlined_subroutine && + dwarf_haspc(die_mem, *addr)) + return DIE_FIND_CB_FOUND; - do { - if (dwarf_tag(die_mem) == DW_TAG_inlined_subroutine && - dwarf_haspc(die_mem, addr)) - return die_mem; - - if (die_get_inlinefunc(die_mem, addr, &child_die)) { - memcpy(die_mem, &child_die, sizeof(Dwarf_Die)); - return die_mem; - } - } while (dwarf_siblingof(die_mem, die_mem) == 0); - - return NULL; + return DIE_FIND_CB_CONTINUE; } -/* Compare diename and tname */ -static bool die_compare_name(Dwarf_Die *dw_die, const char *tname) +/* Similar to dwarf_getfuncs, but returns inlined_subroutine if exists. */ +static Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, + Dwarf_Die *die_mem) { - const char *name; - name = dwarf_diename(dw_die); - DIE_IF(name == NULL); - return strcmp(tname, name); + return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem); } -/* Get entry pc(or low pc, 1st entry of ranges) of the die */ -static Dwarf_Addr die_get_entrypc(Dwarf_Die *dw_die) +static int __die_find_variable_cb(Dwarf_Die *die_mem, void *data) { - Dwarf_Addr epc; - int ret; + const char *name = data; + int tag; - ret = dwarf_entrypc(dw_die, &epc); - DIE_IF(ret == -1); - return epc; + tag = dwarf_tag(die_mem); + if ((tag == DW_TAG_formal_parameter || + tag == DW_TAG_variable) && + (die_compare_name(die_mem, name) == 0)) + return DIE_FIND_CB_FOUND; + + return DIE_FIND_CB_CONTINUE; } -/* Get a variable die */ +/* Find a variable called 'name' */ static Dwarf_Die *die_find_variable(Dwarf_Die *sp_die, const char *name, Dwarf_Die *die_mem) { - Dwarf_Die child_die; - int tag; - int ret; + return die_find_child(sp_die, __die_find_variable_cb, (void *)name, + die_mem); +} - ret = dwarf_child(sp_die, die_mem); - if (ret != 0) - return NULL; +static int __die_find_member_cb(Dwarf_Die *die_mem, void *data) +{ + const char *name = data; - do { - tag = dwarf_tag(die_mem); - if ((tag == DW_TAG_formal_parameter || - tag == DW_TAG_variable) && - (die_compare_name(die_mem, name) == 0)) - return die_mem; + if ((dwarf_tag(die_mem) == DW_TAG_member) && + (die_compare_name(die_mem, name) == 0)) + return DIE_FIND_CB_FOUND; - if (die_find_variable(die_mem, name, &child_die)) { - memcpy(die_mem, &child_die, sizeof(Dwarf_Die)); - return die_mem; - } - } while (dwarf_siblingof(die_mem, die_mem) == 0); + return DIE_FIND_CB_SIBLING; +} - return NULL; +/* Find a member called 'name' */ +static Dwarf_Die *die_find_member(Dwarf_Die *st_die, const char *name, + Dwarf_Die *die_mem) +{ + return die_find_child(st_die, __die_find_member_cb, (void *)name, + die_mem); } /* @@ -296,19 +357,22 @@ static Dwarf_Die *die_find_variable(Dwarf_Die *sp_die, const char *name, */ /* Show a location */ -static void show_location(Dwarf_Op *op, struct probe_finder *pf) +static int convert_location(Dwarf_Op *op, struct probe_finder *pf) { unsigned int regn; Dwarf_Word offs = 0; - int deref = 0, ret; + bool ref = false; const char *regs; + struct kprobe_trace_arg *tvar = pf->tvar; - /* TODO: support CFA */ /* If this is based on frame buffer, set the offset */ if (op->atom == DW_OP_fbreg) { - if (pf->fb_ops == NULL) - die("The attribute of frame base is not 
supported.\n"); - deref = 1; + if (pf->fb_ops == NULL) { + pr_warning("The attribute of frame base is not " + "supported.\n"); + return -ENOTSUP; + } + ref = true; offs = op->number; op = &pf->fb_ops[0]; } @@ -316,35 +380,164 @@ static void show_location(Dwarf_Op *op, struct probe_finder *pf) if (op->atom >= DW_OP_breg0 && op->atom <= DW_OP_breg31) { regn = op->atom - DW_OP_breg0; offs += op->number; - deref = 1; + ref = true; } else if (op->atom >= DW_OP_reg0 && op->atom <= DW_OP_reg31) { regn = op->atom - DW_OP_reg0; } else if (op->atom == DW_OP_bregx) { regn = op->number; offs += op->number2; - deref = 1; + ref = true; } else if (op->atom == DW_OP_regx) { regn = op->number; - } else - die("DW_OP %d is not supported.", op->atom); + } else { + pr_warning("DW_OP %x is not supported.\n", op->atom); + return -ENOTSUP; + } regs = get_arch_regstr(regn); - if (!regs) - die("%u exceeds max register number.", regn); + if (!regs) { + pr_warning("Mapping for DWARF register number %u missing on this architecture.", regn); + return -ERANGE; + } + + tvar->value = strdup(regs); + if (tvar->value == NULL) + return -ENOMEM; + + if (ref) { + tvar->ref = zalloc(sizeof(struct kprobe_trace_arg_ref)); + if (tvar->ref == NULL) + return -ENOMEM; + tvar->ref->offset = (long)offs; + } + return 0; +} + +static int convert_variable_type(Dwarf_Die *vr_die, + struct kprobe_trace_arg *targ) +{ + Dwarf_Die type; + char buf[16]; + int ret; + + if (die_get_real_type(vr_die, &type) == NULL) { + pr_warning("Failed to get a type information of %s.\n", + dwarf_diename(vr_die)); + return -ENOENT; + } + + ret = die_get_byte_size(&type) * 8; + if (ret) { + /* Check the bitwidth */ + if (ret > MAX_BASIC_TYPE_BITS) { + pr_info("%s exceeds max-bitwidth." + " Cut down to %d bits.\n", + dwarf_diename(&type), MAX_BASIC_TYPE_BITS); + ret = MAX_BASIC_TYPE_BITS; + } + + ret = snprintf(buf, 16, "%c%d", + die_is_signed_type(&type) ? 
's' : 'u', ret); + if (ret < 0 || ret >= 16) { + if (ret >= 16) + ret = -E2BIG; + pr_warning("Failed to convert variable type: %s\n", + strerror(-ret)); + return ret; + } + targ->type = strdup(buf); + if (targ->type == NULL) + return -ENOMEM; + } + return 0; +} + +static int convert_variable_fields(Dwarf_Die *vr_die, const char *varname, + struct perf_probe_arg_field *field, + struct kprobe_trace_arg_ref **ref_ptr, + Dwarf_Die *die_mem) +{ + struct kprobe_trace_arg_ref *ref = *ref_ptr; + Dwarf_Die type; + Dwarf_Word offs; + int ret; + + pr_debug("converting %s in %s\n", field->name, varname); + if (die_get_real_type(vr_die, &type) == NULL) { + pr_warning("Failed to get the type of %s.\n", varname); + return -ENOENT; + } + + /* Check the pointer and dereference */ + if (dwarf_tag(&type) == DW_TAG_pointer_type) { + if (!field->ref) { + pr_err("Semantic error: %s must be referred by '->'\n", + field->name); + return -EINVAL; + } + /* Get the type pointed by this pointer */ + if (die_get_real_type(&type, &type) == NULL) { + pr_warning("Failed to get the type of %s.\n", varname); + return -ENOENT; + } + /* Verify it is a data structure */ + if (dwarf_tag(&type) != DW_TAG_structure_type) { + pr_warning("%s is not a data structure.\n", varname); + return -EINVAL; + } + + ref = zalloc(sizeof(struct kprobe_trace_arg_ref)); + if (ref == NULL) + return -ENOMEM; + if (*ref_ptr) + (*ref_ptr)->next = ref; + else + *ref_ptr = ref; + } else { + /* Verify it is a data structure */ + if (dwarf_tag(&type) != DW_TAG_structure_type) { + pr_warning("%s is not a data structure.\n", varname); + return -EINVAL; + } + if (field->ref) { + pr_err("Semantic error: %s must be referred by '.'\n", + field->name); + return -EINVAL; + } + if (!ref) { + pr_warning("Structure on a register is not " + "supported yet.\n"); + return -ENOTSUP; + } + } + + if (die_find_member(&type, field->name, die_mem) == NULL) { + pr_warning("%s(tyep:%s) has no member %s.\n", varname, + dwarf_diename(&type), field->name); + return -EINVAL; + } - if (deref) - ret = snprintf(pf->buf, pf->len, " %s=%+jd(%s)", - pf->var, (intmax_t)offs, regs); + /* Get the offset of the field */ + ret = die_get_data_member_location(die_mem, &offs); + if (ret < 0) { + pr_warning("Failed to get the offset of %s.\n", field->name); + return ret; + } + ref->offset += (long)offs; + + /* Converting next field */ + if (field->next) + return convert_variable_fields(die_mem, field->name, + field->next, &ref, die_mem); else - ret = snprintf(pf->buf, pf->len, " %s=%s", pf->var, regs); - DIE_IF(ret < 0); - DIE_IF(ret >= pf->len); + return 0; } /* Show a variables in kprobe event format */ -static void show_variable(Dwarf_Die *vr_die, struct probe_finder *pf) +static int convert_variable(Dwarf_Die *vr_die, struct probe_finder *pf) { Dwarf_Attribute attr; + Dwarf_Die die_mem; Dwarf_Op *expr; size_t nexpr; int ret; @@ -356,142 +549,191 @@ static void show_variable(Dwarf_Die *vr_die, struct probe_finder *pf) if (ret <= 0 || nexpr == 0) goto error; - show_location(expr, pf); + ret = convert_location(expr, pf); + if (ret == 0 && pf->pvar->field) { + ret = convert_variable_fields(vr_die, pf->pvar->var, + pf->pvar->field, &pf->tvar->ref, + &die_mem); + vr_die = &die_mem; + } + if (ret == 0) { + if (pf->pvar->type) { + pf->tvar->type = strdup(pf->pvar->type); + if (pf->tvar->type == NULL) + ret = -ENOMEM; + } else + ret = convert_variable_type(vr_die, pf->tvar); + } /* *expr will be cached in libdw. Don't free it. 
*/ - return ; + return ret; error: /* TODO: Support const_value */ - die("Failed to find the location of %s at this address.\n" - " Perhaps, it has been optimized out.", pf->var); + pr_err("Failed to find the location of %s at this address.\n" + " Perhaps, it has been optimized out.\n", pf->pvar->var); + return -ENOENT; } /* Find a variable in a subprogram die */ -static void find_variable(Dwarf_Die *sp_die, struct probe_finder *pf) +static int find_variable(Dwarf_Die *sp_die, struct probe_finder *pf) { - int ret; Dwarf_Die vr_die; + char buf[32], *ptr; + int ret; - /* TODO: Support struct members and arrays */ - if (!is_c_varname(pf->var)) { - /* Output raw parameters */ - ret = snprintf(pf->buf, pf->len, " %s", pf->var); - DIE_IF(ret < 0); - DIE_IF(ret >= pf->len); - return ; + /* TODO: Support arrays */ + if (pf->pvar->name) + pf->tvar->name = strdup(pf->pvar->name); + else { + ret = synthesize_perf_probe_arg(pf->pvar, buf, 32); + if (ret < 0) + return ret; + ptr = strchr(buf, ':'); /* Change type separator to _ */ + if (ptr) + *ptr = '_'; + pf->tvar->name = strdup(buf); + } + if (pf->tvar->name == NULL) + return -ENOMEM; + + if (!is_c_varname(pf->pvar->var)) { + /* Copy raw parameters */ + pf->tvar->value = strdup(pf->pvar->var); + if (pf->tvar->value == NULL) + return -ENOMEM; + else + return 0; } - pr_debug("Searching '%s' variable in context.\n", pf->var); + pr_debug("Searching '%s' variable in context.\n", + pf->pvar->var); /* Search child die for local variables and parameters. */ - if (!die_find_variable(sp_die, pf->var, &vr_die)) - die("Failed to find '%s' in this function.", pf->var); - - show_variable(&vr_die, pf); + if (!die_find_variable(sp_die, pf->pvar->var, &vr_die)) { + pr_warning("Failed to find '%s' in this function.\n", + pf->pvar->var); + return -ENOENT; + } + return convert_variable(&vr_die, pf); } /* Show a probe point to output buffer */ -static void show_probe_point(Dwarf_Die *sp_die, struct probe_finder *pf) +static int convert_probe_point(Dwarf_Die *sp_die, struct probe_finder *pf) { - struct probe_point *pp = pf->pp; + struct kprobe_trace_event *tev; Dwarf_Addr eaddr; Dwarf_Die die_mem; const char *name; - char tmp[MAX_PROBE_BUFFER]; - int ret, i, len; + int ret, i; Dwarf_Attribute fb_attr; size_t nops; + if (pf->ntevs == pf->max_tevs) { + pr_warning("Too many( > %d) probe point found.\n", + pf->max_tevs); + return -ERANGE; + } + tev = &pf->tevs[pf->ntevs++]; + /* If no real subprogram, find a real one */ if (!sp_die || dwarf_tag(sp_die) != DW_TAG_subprogram) { - sp_die = die_get_real_subprogram(&pf->cu_die, + sp_die = die_find_real_subprogram(&pf->cu_die, pf->addr, &die_mem); - if (!sp_die) - die("Probe point is not found in subprograms."); + if (!sp_die) { + pr_warning("Failed to find probe point in any " + "functions.\n"); + return -ENOENT; + } } - /* Output name of probe point */ + /* Copy the name of probe point */ name = dwarf_diename(sp_die); if (name) { - dwarf_entrypc(sp_die, &eaddr); - ret = snprintf(tmp, MAX_PROBE_BUFFER, "%s+%lu", name, - (unsigned long)(pf->addr - eaddr)); - /* Copy the function name if possible */ - if (!pp->function) { - pp->function = strdup(name); - pp->offset = (size_t)(pf->addr - eaddr); + if (dwarf_entrypc(sp_die, &eaddr) != 0) { + pr_warning("Failed to get entry pc of %s\n", + dwarf_diename(sp_die)); + return -ENOENT; } - } else { + tev->point.symbol = strdup(name); + if (tev->point.symbol == NULL) + return -ENOMEM; + tev->point.offset = (unsigned long)(pf->addr - eaddr); + } else /* This function has no name. 
*/ - ret = snprintf(tmp, MAX_PROBE_BUFFER, "0x%jx", - (uintmax_t)pf->addr); - if (!pp->function) { - /* TODO: Use _stext */ - pp->function = strdup(""); - pp->offset = (size_t)pf->addr; - } - } - DIE_IF(ret < 0); - DIE_IF(ret >= MAX_PROBE_BUFFER); - len = ret; - pr_debug("Probe point found: %s\n", tmp); + tev->point.offset = (unsigned long)pf->addr; + + pr_debug("Probe point found: %s+%lu\n", tev->point.symbol, + tev->point.offset); /* Get the frame base attribute/ops */ dwarf_attr(sp_die, DW_AT_frame_base, &fb_attr); ret = dwarf_getlocation_addr(&fb_attr, pf->addr, &pf->fb_ops, &nops, 1); - if (ret <= 0 || nops == 0) + if (ret <= 0 || nops == 0) { pf->fb_ops = NULL; + } else if (nops == 1 && pf->fb_ops[0].atom == DW_OP_call_frame_cfa && + pf->cfi != NULL) { + Dwarf_Frame *frame; + if (dwarf_cfi_addrframe(pf->cfi, pf->addr, &frame) != 0 || + dwarf_frame_cfa(frame, &pf->fb_ops, &nops) != 0) { + pr_warning("Failed to get CFA on 0x%jx\n", + (uintmax_t)pf->addr); + return -ENOENT; + } + } /* Find each argument */ - /* TODO: use dwarf_cfi_addrframe */ - for (i = 0; i < pp->nr_args; i++) { - pf->var = pp->args[i]; - pf->buf = &tmp[len]; - pf->len = MAX_PROBE_BUFFER - len; - find_variable(sp_die, pf); - len += strlen(pf->buf); + tev->nargs = pf->pev->nargs; + tev->args = zalloc(sizeof(struct kprobe_trace_arg) * tev->nargs); + if (tev->args == NULL) + return -ENOMEM; + for (i = 0; i < pf->pev->nargs; i++) { + pf->pvar = &pf->pev->args[i]; + pf->tvar = &tev->args[i]; + ret = find_variable(sp_die, pf); + if (ret != 0) + return ret; } /* *pf->fb_ops will be cached in libdw. Don't free it. */ pf->fb_ops = NULL; - - if (pp->found == MAX_PROBES) - die("Too many( > %d) probe point found.\n", MAX_PROBES); - - pp->probes[pp->found] = strdup(tmp); - pp->found++; + return 0; } /* Find probe point from its line number */ -static void find_probe_point_by_line(struct probe_finder *pf) +static int find_probe_point_by_line(struct probe_finder *pf) { Dwarf_Lines *lines; Dwarf_Line *line; size_t nlines, i; Dwarf_Addr addr; int lineno; - int ret; + int ret = 0; - ret = dwarf_getsrclines(&pf->cu_die, &lines, &nlines); - DIE_IF(ret != 0); + if (dwarf_getsrclines(&pf->cu_die, &lines, &nlines) != 0) { + pr_warning("No source lines found in this CU.\n"); + return -ENOENT; + } - for (i = 0; i < nlines; i++) { + for (i = 0; i < nlines && ret == 0; i++) { line = dwarf_onesrcline(lines, i); - dwarf_lineno(line, &lineno); - if (lineno != pf->lno) + if (dwarf_lineno(line, &lineno) != 0 || + lineno != pf->lno) continue; /* TODO: Get fileno from line, but how? */ if (strtailcmp(dwarf_linesrc(line, NULL, NULL), pf->fname) != 0) continue; - ret = dwarf_lineaddr(line, &addr); - DIE_IF(ret != 0); + if (dwarf_lineaddr(line, &addr) != 0) { + pr_warning("Failed to get the address of the line.\n"); + return -ENOENT; + } pr_debug("Probe line found: line[%d]:%d addr:0x%jx\n", (int)i, lineno, (uintmax_t)addr); pf->addr = addr; - show_probe_point(NULL, pf); + ret = convert_probe_point(NULL, pf); /* Continuing, because target line might be inlined. 
*/ } + return ret; } /* Find lines which match lazy pattern */ @@ -499,16 +741,27 @@ static int find_lazy_match_lines(struct list_head *head, const char *fname, const char *pat) { char *fbuf, *p1, *p2; - int fd, line, nlines = 0; + int fd, ret, line, nlines = 0; struct stat st; fd = open(fname, O_RDONLY); - if (fd < 0) - die("failed to open %s", fname); - DIE_IF(fstat(fd, &st) < 0); - fbuf = malloc(st.st_size + 2); - DIE_IF(fbuf == NULL); - DIE_IF(read(fd, fbuf, st.st_size) < 0); + if (fd < 0) { + pr_warning("Failed to open %s: %s\n", fname, strerror(-fd)); + return fd; + } + + ret = fstat(fd, &st); + if (ret < 0) { + pr_warning("Failed to get the size of %s: %s\n", + fname, strerror(errno)); + return ret; + } + fbuf = xmalloc(st.st_size + 2); + ret = read(fd, fbuf, st.st_size); + if (ret < 0) { + pr_warning("Failed to read %s: %s\n", fname, strerror(errno)); + return ret; + } close(fd); fbuf[st.st_size] = '\n'; /* Dummy line */ fbuf[st.st_size + 1] = '\0'; @@ -528,7 +781,7 @@ static int find_lazy_match_lines(struct list_head *head, } /* Find probe points from lazy pattern */ -static void find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf) +static int find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf) { Dwarf_Lines *lines; Dwarf_Line *line; @@ -536,37 +789,46 @@ static void find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf) Dwarf_Addr addr; Dwarf_Die die_mem; int lineno; - int ret; + int ret = 0; if (list_empty(&pf->lcache)) { /* Matching lazy line pattern */ ret = find_lazy_match_lines(&pf->lcache, pf->fname, - pf->pp->lazy_line); - if (ret <= 0) - die("No matched lines found in %s.", pf->fname); + pf->pev->point.lazy_line); + if (ret == 0) { + pr_debug("No matched lines found in %s.\n", pf->fname); + return 0; + } else if (ret < 0) + return ret; } - ret = dwarf_getsrclines(&pf->cu_die, &lines, &nlines); - DIE_IF(ret != 0); - for (i = 0; i < nlines; i++) { + if (dwarf_getsrclines(&pf->cu_die, &lines, &nlines) != 0) { + pr_warning("No source lines found in this CU.\n"); + return -ENOENT; + } + + for (i = 0; i < nlines && ret >= 0; i++) { line = dwarf_onesrcline(lines, i); - dwarf_lineno(line, &lineno); - if (!line_list__has_line(&pf->lcache, lineno)) + if (dwarf_lineno(line, &lineno) != 0 || + !line_list__has_line(&pf->lcache, lineno)) continue; /* TODO: Get fileno from line, but how? */ if (strtailcmp(dwarf_linesrc(line, NULL, NULL), pf->fname) != 0) continue; - ret = dwarf_lineaddr(line, &addr); - DIE_IF(ret != 0); + if (dwarf_lineaddr(line, &addr) != 0) { + pr_debug("Failed to get the address of line %d.\n", + lineno); + continue; + } if (sp_die) { /* Address filtering 1: does sp_die include addr? */ if (!dwarf_haspc(sp_die, addr)) continue; /* Address filtering 2: No child include addr? */ - if (die_get_inlinefunc(sp_die, addr, &die_mem)) + if (die_find_inlinefunc(sp_die, addr, &die_mem)) continue; } @@ -574,27 +836,44 @@ static void find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf) (int)i, lineno, (unsigned long long)addr); pf->addr = addr; - show_probe_point(sp_die, pf); + ret = convert_probe_point(sp_die, pf); /* Continuing, because target line might be inlined. */ } /* TODO: deallocate lines, but how? 
*/ + return ret; } +/* Callback parameter with return value */ +struct dwarf_callback_param { + void *data; + int retval; +}; + static int probe_point_inline_cb(Dwarf_Die *in_die, void *data) { - struct probe_finder *pf = (struct probe_finder *)data; - struct probe_point *pp = pf->pp; + struct dwarf_callback_param *param = data; + struct probe_finder *pf = param->data; + struct perf_probe_point *pp = &pf->pev->point; + Dwarf_Addr addr; if (pp->lazy_line) - find_probe_point_lazy(in_die, pf); + param->retval = find_probe_point_lazy(in_die, pf); else { /* Get probe address */ - pf->addr = die_get_entrypc(in_die); + if (dwarf_entrypc(in_die, &addr) != 0) { + pr_warning("Failed to get entry pc of %s.\n", + dwarf_diename(in_die)); + param->retval = -ENOENT; + return DWARF_CB_ABORT; + } + pf->addr = addr; pf->addr += pp->offset; pr_debug("found inline addr: 0x%jx\n", (uintmax_t)pf->addr); - show_probe_point(in_die, pf); + param->retval = convert_probe_point(in_die, pf); + if (param->retval < 0) + return DWARF_CB_ABORT; } return DWARF_CB_OK; @@ -603,59 +882,88 @@ static int probe_point_inline_cb(Dwarf_Die *in_die, void *data) /* Search function from function name */ static int probe_point_search_cb(Dwarf_Die *sp_die, void *data) { - struct probe_finder *pf = (struct probe_finder *)data; - struct probe_point *pp = pf->pp; + struct dwarf_callback_param *param = data; + struct probe_finder *pf = param->data; + struct perf_probe_point *pp = &pf->pev->point; /* Check tag and diename */ if (dwarf_tag(sp_die) != DW_TAG_subprogram || die_compare_name(sp_die, pp->function) != 0) - return 0; + return DWARF_CB_OK; pf->fname = dwarf_decl_file(sp_die); if (pp->line) { /* Function relative line */ dwarf_decl_line(sp_die, &pf->lno); pf->lno += pp->line; - find_probe_point_by_line(pf); + param->retval = find_probe_point_by_line(pf); } else if (!dwarf_func_inline(sp_die)) { /* Real function */ if (pp->lazy_line) - find_probe_point_lazy(sp_die, pf); + param->retval = find_probe_point_lazy(sp_die, pf); else { - pf->addr = die_get_entrypc(sp_die); + if (dwarf_entrypc(sp_die, &pf->addr) != 0) { + pr_warning("Failed to get entry pc of %s.\n", + dwarf_diename(sp_die)); + param->retval = -ENOENT; + return DWARF_CB_ABORT; + } pf->addr += pp->offset; /* TODO: Check the address in this function */ - show_probe_point(sp_die, pf); + param->retval = convert_probe_point(sp_die, pf); } - } else + } else { + struct dwarf_callback_param _param = {.data = (void *)pf, + .retval = 0}; /* Inlined function: search instances */ - dwarf_func_inline_instances(sp_die, probe_point_inline_cb, pf); + dwarf_func_inline_instances(sp_die, probe_point_inline_cb, + &_param); + param->retval = _param.retval; + } - return 1; /* Exit; no same symbol in this CU. */ + return DWARF_CB_ABORT; /* Exit; no same symbol in this CU. 
*/ } -static void find_probe_point_by_func(struct probe_finder *pf) +static int find_probe_point_by_func(struct probe_finder *pf) { - dwarf_getfuncs(&pf->cu_die, probe_point_search_cb, pf, 0); + struct dwarf_callback_param _param = {.data = (void *)pf, + .retval = 0}; + dwarf_getfuncs(&pf->cu_die, probe_point_search_cb, &_param, 0); + return _param.retval; } -/* Find a probe point */ -int find_probe_point(int fd, struct probe_point *pp) +/* Find kprobe_trace_events specified by perf_probe_event from debuginfo */ +int find_kprobe_trace_events(int fd, struct perf_probe_event *pev, + struct kprobe_trace_event **tevs, int max_tevs) { - struct probe_finder pf = {.pp = pp}; + struct probe_finder pf = {.pev = pev, .max_tevs = max_tevs}; + struct perf_probe_point *pp = &pev->point; Dwarf_Off off, noff; size_t cuhl; Dwarf_Die *diep; Dwarf *dbg; + int ret = 0; + + pf.tevs = zalloc(sizeof(struct kprobe_trace_event) * max_tevs); + if (pf.tevs == NULL) + return -ENOMEM; + *tevs = pf.tevs; + pf.ntevs = 0; dbg = dwarf_begin(fd, DWARF_C_READ); - if (!dbg) - return -ENOENT; + if (!dbg) { + pr_warning("No dwarf info found in the vmlinux - " + "please rebuild with CONFIG_DEBUG_INFO=y.\n"); + return -EBADF; + } + + /* Get the call frame information from this dwarf */ + pf.cfi = dwarf_getcfi(dbg); - pp->found = 0; off = 0; line_list__init(&pf.lcache); /* Loop on CUs (Compilation Unit) */ - while (!dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL)) { + while (!dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL) && + ret >= 0) { /* Get the DIE(Debugging Information Entry) of this CU */ diep = dwarf_offdie(dbg, off + cuhl, &pf.cu_die); if (!diep) @@ -669,12 +977,12 @@ int find_probe_point(int fd, struct probe_point *pp) if (!pp->file || pf.fname) { if (pp->function) - find_probe_point_by_func(&pf); + ret = find_probe_point_by_func(&pf); else if (pp->lazy_line) - find_probe_point_lazy(NULL, &pf); + ret = find_probe_point_lazy(NULL, &pf); else { pf.lno = pp->line; - find_probe_point_by_line(&pf); + ret = find_probe_point_by_line(&pf); } } off = noff; @@ -682,41 +990,169 @@ int find_probe_point(int fd, struct probe_point *pp) line_list__free(&pf.lcache); dwarf_end(dbg); - return pp->found; + return (ret < 0) ? 
ret : pf.ntevs; +} + +/* Reverse search */ +int find_perf_probe_point(int fd, unsigned long addr, + struct perf_probe_point *ppt) +{ + Dwarf_Die cudie, spdie, indie; + Dwarf *dbg; + Dwarf_Line *line; + Dwarf_Addr laddr, eaddr; + const char *tmp; + int lineno, ret = 0; + bool found = false; + + dbg = dwarf_begin(fd, DWARF_C_READ); + if (!dbg) + return -EBADF; + + /* Find cu die */ + if (!dwarf_addrdie(dbg, (Dwarf_Addr)addr, &cudie)) { + ret = -EINVAL; + goto end; + } + + /* Find a corresponding line */ + line = dwarf_getsrc_die(&cudie, (Dwarf_Addr)addr); + if (line) { + if (dwarf_lineaddr(line, &laddr) == 0 && + (Dwarf_Addr)addr == laddr && + dwarf_lineno(line, &lineno) == 0) { + tmp = dwarf_linesrc(line, NULL, NULL); + if (tmp) { + ppt->line = lineno; + ppt->file = strdup(tmp); + if (ppt->file == NULL) { + ret = -ENOMEM; + goto end; + } + found = true; + } + } + } + + /* Find a corresponding function */ + if (die_find_real_subprogram(&cudie, (Dwarf_Addr)addr, &spdie)) { + tmp = dwarf_diename(&spdie); + if (!tmp || dwarf_entrypc(&spdie, &eaddr) != 0) + goto end; + + if (ppt->line) { + if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr, + &indie)) { + /* addr in an inline function */ + tmp = dwarf_diename(&indie); + if (!tmp) + goto end; + ret = dwarf_decl_line(&indie, &lineno); + } else { + if (eaddr == addr) { /* Function entry */ + lineno = ppt->line; + ret = 0; + } else + ret = dwarf_decl_line(&spdie, &lineno); + } + if (ret == 0) { + /* Make a relative line number */ + ppt->line -= lineno; + goto found; + } + } + /* We don't have a line number, let's use offset */ + ppt->offset = addr - (unsigned long)eaddr; +found: + ppt->function = strdup(tmp); + if (ppt->function == NULL) { + ret = -ENOMEM; + goto end; + } + found = true; + } + +end: + dwarf_end(dbg); + if (ret >= 0) + ret = found ? 
1 : 0; + return ret; +} + +/* Add a line and store the src path */ +static int line_range_add_line(const char *src, unsigned int lineno, + struct line_range *lr) +{ + /* Copy real path */ + if (!lr->path) { + lr->path = strdup(src); + if (lr->path == NULL) + return -ENOMEM; + } + return line_list__add_line(&lr->line_list, lineno); +} + +/* Search function declaration lines */ +static int line_range_funcdecl_cb(Dwarf_Die *sp_die, void *data) +{ + struct dwarf_callback_param *param = data; + struct line_finder *lf = param->data; + const char *src; + int lineno; + + src = dwarf_decl_file(sp_die); + if (src && strtailcmp(src, lf->fname) != 0) + return DWARF_CB_OK; + + if (dwarf_decl_line(sp_die, &lineno) != 0 || + (lf->lno_s > lineno || lf->lno_e < lineno)) + return DWARF_CB_OK; + + param->retval = line_range_add_line(src, lineno, lf->lr); + if (param->retval < 0) + return DWARF_CB_ABORT; + return DWARF_CB_OK; +} + +static int find_line_range_func_decl_lines(struct line_finder *lf) +{ + struct dwarf_callback_param param = {.data = (void *)lf, .retval = 0}; + dwarf_getfuncs(&lf->cu_die, line_range_funcdecl_cb, &param, 0); + return param.retval; } /* Find line range from its line number */ -static void find_line_range_by_line(Dwarf_Die *sp_die, struct line_finder *lf) +static int find_line_range_by_line(Dwarf_Die *sp_die, struct line_finder *lf) { Dwarf_Lines *lines; Dwarf_Line *line; size_t nlines, i; Dwarf_Addr addr; - int lineno; - int ret; + int lineno, ret = 0; const char *src; Dwarf_Die die_mem; line_list__init(&lf->lr->line_list); - ret = dwarf_getsrclines(&lf->cu_die, &lines, &nlines); - DIE_IF(ret != 0); + if (dwarf_getsrclines(&lf->cu_die, &lines, &nlines) != 0) { + pr_warning("No source lines found in this CU.\n"); + return -ENOENT; + } + /* Search probable lines on lines list */ for (i = 0; i < nlines; i++) { line = dwarf_onesrcline(lines, i); - ret = dwarf_lineno(line, &lineno); - DIE_IF(ret != 0); - if (lf->lno_s > lineno || lf->lno_e < lineno) + if (dwarf_lineno(line, &lineno) != 0 || + (lf->lno_s > lineno || lf->lno_e < lineno)) continue; if (sp_die) { /* Address filtering 1: does sp_die include addr? */ - ret = dwarf_lineaddr(line, &addr); - DIE_IF(ret != 0); - if (!dwarf_haspc(sp_die, addr)) + if (dwarf_lineaddr(line, &addr) != 0 || + !dwarf_haspc(sp_die, addr)) continue; /* Address filtering 2: No child include addr? */ - if (die_get_inlinefunc(sp_die, addr, &die_mem)) + if (die_find_inlinefunc(sp_die, addr, &die_mem)) continue; } @@ -725,30 +1161,49 @@ static void find_line_range_by_line(Dwarf_Die *sp_die, struct line_finder *lf) if (strtailcmp(src, lf->fname) != 0) continue; - /* Copy real path */ - if (!lf->lr->path) - lf->lr->path = strdup(src); - line_list__add_line(&lf->lr->line_list, (unsigned int)lineno); + ret = line_range_add_line(src, lineno, lf->lr); + if (ret < 0) + return ret; } + + /* + * Dwarf lines doesn't include function declarations. We have to + * check functions list or given function.
+ */ + if (sp_die) { + src = dwarf_decl_file(sp_die); + if (src && dwarf_decl_line(sp_die, &lineno) == 0 && + (lf->lno_s <= lineno && lf->lno_e >= lineno)) + ret = line_range_add_line(src, lineno, lf->lr); + } else + ret = find_line_range_func_decl_lines(lf); + /* Update status */ - if (!list_empty(&lf->lr->line_list)) - lf->found = 1; + if (ret >= 0) + if (!list_empty(&lf->lr->line_list)) + ret = lf->found = 1; + else + ret = 0; /* Lines are not found */ else { free(lf->lr->path); lf->lr->path = NULL; } + return ret; } static int line_range_inline_cb(Dwarf_Die *in_die, void *data) { - find_line_range_by_line(in_die, (struct line_finder *)data); + struct dwarf_callback_param *param = data; + + param->retval = find_line_range_by_line(in_die, param->data); return DWARF_CB_ABORT; /* No need to find other instances */ } /* Search function from function name */ static int line_range_search_cb(Dwarf_Die *sp_die, void *data) { - struct line_finder *lf = (struct line_finder *)data; + struct dwarf_callback_param *param = data; + struct line_finder *lf = param->data; struct line_range *lr = lf->lr; if (dwarf_tag(sp_die) == DW_TAG_subprogram && @@ -757,44 +1212,55 @@ static int line_range_search_cb(Dwarf_Die *sp_die, void *data) dwarf_decl_line(sp_die, &lr->offset); pr_debug("fname: %s, lineno:%d\n", lf->fname, lr->offset); lf->lno_s = lr->offset + lr->start; - if (!lr->end) + if (lf->lno_s < 0) /* Overflow */ + lf->lno_s = INT_MAX; + lf->lno_e = lr->offset + lr->end; + if (lf->lno_e < 0) /* Overflow */ lf->lno_e = INT_MAX; - else - lf->lno_e = lr->offset + lr->end; + pr_debug("New line range: %d to %d\n", lf->lno_s, lf->lno_e); lr->start = lf->lno_s; lr->end = lf->lno_e; - if (dwarf_func_inline(sp_die)) + if (dwarf_func_inline(sp_die)) { + struct dwarf_callback_param _param; + _param.data = (void *)lf; + _param.retval = 0; dwarf_func_inline_instances(sp_die, - line_range_inline_cb, lf); - else - find_line_range_by_line(sp_die, lf); - return 1; + line_range_inline_cb, + &_param); + param->retval = _param.retval; + } else + param->retval = find_line_range_by_line(sp_die, lf); + return DWARF_CB_ABORT; } - return 0; + return DWARF_CB_OK; } -static void find_line_range_by_func(struct line_finder *lf) +static int find_line_range_by_func(struct line_finder *lf) { - dwarf_getfuncs(&lf->cu_die, line_range_search_cb, lf, 0); + struct dwarf_callback_param param = {.data = (void *)lf, .retval = 0}; + dwarf_getfuncs(&lf->cu_die, line_range_search_cb, &param, 0); + return param.retval; } int find_line_range(int fd, struct line_range *lr) { struct line_finder lf = {.lr = lr, .found = 0}; - int ret; + int ret = 0; Dwarf_Off off = 0, noff; size_t cuhl; Dwarf_Die *diep; Dwarf *dbg; dbg = dwarf_begin(fd, DWARF_C_READ); - if (!dbg) - return -ENOENT; + if (!dbg) { + pr_warning("No dwarf info found in the vmlinux - " + "please rebuild with CONFIG_DEBUG_INFO=y.\n"); + return -EBADF; + } /* Loop on CUs (Compilation Unit) */ - while (!lf.found) { - ret = dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL); - if (ret != 0) + while (!lf.found && ret >= 0) { + if (dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL) != 0) break; /* Get the DIE(Debugging Information Entry) of this CU */ @@ -810,20 +1276,18 @@ int find_line_range(int fd, struct line_range *lr) if (!lr->file || lf.fname) { if (lr->function) - find_line_range_by_func(&lf); + ret = find_line_range_by_func(&lf); else { lf.lno_s = lr->start; - if (!lr->end) - lf.lno_e = INT_MAX; - else - lf.lno_e = lr->end; - find_line_range_by_line(NULL, &lf); + lf.lno_e = lr->end;
+ ret = find_line_range_by_line(NULL, &lf); } } off = noff; } pr_debug("path: %lx\n", (unsigned long)lr->path); dwarf_end(dbg); - return lf.found; + + return (ret < 0) ? ret : lf.found; } diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h index 21f7354397b4..66f1980e3855 100644 --- a/tools/perf/util/probe-finder.h +++ b/tools/perf/util/probe-finder.h @@ -3,6 +3,7 @@ #include <stdbool.h> #include "util.h" +#include "probe-event.h" #define MAX_PATH_LEN 256 #define MAX_PROBE_BUFFER 1024 @@ -14,67 +15,39 @@ static inline int is_c_varname(const char *name) return isalpha(name[0]) || name[0] == '_'; } -struct probe_point { - char *event; /* Event name */ - char *group; /* Event group */ +#ifdef DWARF_SUPPORT +/* Find kprobe_trace_events specified by perf_probe_event from debuginfo */ +extern int find_kprobe_trace_events(int fd, struct perf_probe_event *pev, + struct kprobe_trace_event **tevs, + int max_tevs); - /* Inputs */ - char *file; /* File name */ - int line; /* Line number */ - char *lazy_line; /* Lazy line pattern */ +/* Find a perf_probe_point from debuginfo */ +extern int find_perf_probe_point(int fd, unsigned long addr, + struct perf_probe_point *ppt); - char *function; /* Function name */ - int offset; /* Offset bytes */ - - int nr_args; /* Number of arguments */ - char **args; /* Arguments */ - - int retprobe; /* Return probe */ - - /* Output */ - int found; /* Number of found probe points */ - char *probes[MAX_PROBES]; /* Output buffers (will be allocated)*/ -}; - -/* Line number container */ -struct line_node { - struct list_head list; - unsigned int line; -}; - -/* Line range */ -struct line_range { - char *file; /* File name */ - char *function; /* Function name */ - unsigned int start; /* Start line number */ - unsigned int end; /* End line number */ - int offset; /* Start line offset */ - char *path; /* Real path name */ - struct list_head line_list; /* Visible lines */ -}; - -#ifndef NO_DWARF_SUPPORT -extern int find_probe_point(int fd, struct probe_point *pp); extern int find_line_range(int fd, struct line_range *lr); #include <dwarf.h> #include <libdw.h> struct probe_finder { - struct probe_point *pp; /* Target probe point */ + struct perf_probe_event *pev; /* Target probe event */ + struct kprobe_trace_event *tevs; /* Result trace events */ + int ntevs; /* Number of trace events */ + int max_tevs; /* Max number of trace events */ /* For function searching */ - Dwarf_Addr addr; /* Address */ - const char *fname; /* File name */ int lno; /* Line number */ + Dwarf_Addr addr; /* Address */ + const char *fname; /* Real file name */ Dwarf_Die cu_die; /* Current CU */ + struct list_head lcache; /* Line cache for lazy match */ /* For variable searching */ + Dwarf_CFI *cfi; /* Call Frame Information */ Dwarf_Op *fb_ops; /* Frame base attribute */ - const char *var; /* Current variable name */ - char *buf; /* Current output buffer */ - int len; /* Length of output buffer */ - struct list_head lcache; /* Line cache for lazy match */ + struct perf_probe_arg *pvar; /* Current target variable */ + struct kprobe_trace_arg *tvar; /* Current result variable */ }; struct line_finder { @@ -87,6 +60,6 @@ struct line_finder { int found; }; -#endif /* NO_DWARF_SUPPORT */ +#endif /* DWARF_SUPPORT */ #endif /*_PROBE_FINDER_H */ diff --git a/tools/perf/util/pstack.c b/tools/perf/util/pstack.c new file mode 100644 index 000000000000..13d36faf64eb --- /dev/null +++ b/tools/perf/util/pstack.c @@ -0,0 +1,75 @@ +/* + * Simple pointer stack + * + * (c) 2010 Arnaldo Carvalho de 
Melo <acme@redhat.com> + */ + +#include "util.h" +#include "pstack.h" +#include <linux/kernel.h> +#include <stdlib.h> + +struct pstack { + unsigned short top; + unsigned short max_nr_entries; + void *entries[0]; +}; + +struct pstack *pstack__new(unsigned short max_nr_entries) +{ + struct pstack *self = zalloc((sizeof(*self) + + max_nr_entries * sizeof(void *))); + if (self != NULL) + self->max_nr_entries = max_nr_entries; + return self; +} + +void pstack__delete(struct pstack *self) +{ + free(self); +} + +bool pstack__empty(const struct pstack *self) +{ + return self->top == 0; +} + +void pstack__remove(struct pstack *self, void *key) +{ + unsigned short i = self->top, last_index = self->top - 1; + + while (i-- != 0) { + if (self->entries[i] == key) { + if (i < last_index) + memmove(self->entries + i, + self->entries + i + 1, + (last_index - i) * sizeof(void *)); + --self->top; + return; + } + } + pr_err("%s: %p not on the pstack!\n", __func__, key); +} + +void pstack__push(struct pstack *self, void *key) +{ + if (self->top == self->max_nr_entries) { + pr_err("%s: top=%d, overflow!\n", __func__, self->top); + return; + } + self->entries[self->top++] = key; +} + +void *pstack__pop(struct pstack *self) +{ + void *ret; + + if (self->top == 0) { + pr_err("%s: underflow!\n", __func__); + return NULL; + } + + ret = self->entries[--self->top]; + self->entries[self->top] = NULL; + return ret; +} diff --git a/tools/perf/util/pstack.h b/tools/perf/util/pstack.h new file mode 100644 index 000000000000..5ad07023504b --- /dev/null +++ b/tools/perf/util/pstack.h @@ -0,0 +1,12 @@ +#ifndef _PERF_PSTACK_ +#define _PERF_PSTACK_ + +struct pstack; +struct pstack *pstack__new(unsigned short max_nr_entries); +void pstack__delete(struct pstack *self); +bool pstack__empty(const struct pstack *self); +void pstack__remove(struct pstack *self, void *key); +void pstack__push(struct pstack *self, void *key); +void *pstack__pop(struct pstack *self); + +#endif /* _PERF_PSTACK_ */ diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c index 5376378e0cfc..b059dc50cc2d 100644 --- a/tools/perf/util/scripting-engines/trace-event-perl.c +++ b/tools/perf/util/scripting-engines/trace-event-perl.c @@ -371,7 +371,6 @@ static int perl_start_script(const char *script, int argc, const char **argv) run_start_sub(); free(command_line); - fprintf(stderr, "perf trace started with Perl script %s\n\n", script); return 0; error: perl_free(my_perl); @@ -394,8 +393,6 @@ static int perl_stop_script(void) perl_destruct(my_perl); perl_free(my_perl); - fprintf(stderr, "\nperf trace Perl script stopped\n"); - return 0; } diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c index 6a72f14c5986..81f39cab3aaa 100644 --- a/tools/perf/util/scripting-engines/trace-event-python.c +++ b/tools/perf/util/scripting-engines/trace-event-python.c @@ -374,8 +374,6 @@ static int python_start_script(const char *script, int argc, const char **argv) } free(command_line); - fprintf(stderr, "perf trace started with Python script %s\n\n", - script); return err; error: @@ -407,8 +405,6 @@ out: Py_XDECREF(main_module); Py_Finalize(); - fprintf(stderr, "\nperf trace Python script stopped\n"); - return err; } diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index eed1cb889008..25bfca4f10f0 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -14,6 +14,16 @@ static int perf_session__open(struct 
perf_session *self, bool force) { struct stat input_stat; + if (!strcmp(self->filename, "-")) { + self->fd_pipe = true; + self->fd = STDIN_FILENO; + + if (perf_header__read(self, self->fd) < 0) + pr_err("incompatible file format"); + + return 0; + } + self->fd = open(self->filename, O_RDONLY); if (self->fd < 0) { pr_err("failed to open file: %s", self->filename); @@ -38,7 +48,7 @@ static int perf_session__open(struct perf_session *self, bool force) goto out_close; } - if (perf_header__read(&self->header, self->fd) < 0) { + if (perf_header__read(self, self->fd) < 0) { pr_err("incompatible file format"); goto out_close; } @@ -52,12 +62,21 @@ out_close: return -1; } -static inline int perf_session__create_kernel_maps(struct perf_session *self) +void perf_session__update_sample_type(struct perf_session *self) +{ + self->sample_type = perf_header__sample_type(&self->header); +} + +int perf_session__create_kernel_maps(struct perf_session *self) { - return map_groups__create_kernel_maps(&self->kmaps, self->vmlinux_maps); + int ret = machine__create_kernel_maps(&self->host_machine); + + if (ret >= 0) + ret = machines__create_guest_kernel_maps(&self->machines); + return ret; } -struct perf_session *perf_session__new(const char *filename, int mode, bool force) +struct perf_session *perf_session__new(const char *filename, int mode, bool force, bool repipe) { size_t len = filename ? strlen(filename) + 1 : 0; struct perf_session *self = zalloc(sizeof(*self) + len); @@ -70,13 +89,15 @@ struct perf_session *perf_session__new(const char *filename, int mode, bool forc memcpy(self->filename, filename, len); self->threads = RB_ROOT; - self->stats_by_id = RB_ROOT; + self->hists_tree = RB_ROOT; self->last_match = NULL; self->mmap_window = 32; self->cwd = NULL; self->cwdlen = 0; - self->unknown_events = 0; - map_groups__init(&self->kmaps); + self->machines = RB_ROOT; + self->repipe = repipe; + INIT_LIST_HEAD(&self->ordered_samples.samples_head); + machine__init(&self->host_machine, "", HOST_KERNEL_ID); if (mode == O_RDONLY) { if (perf_session__open(self, force) < 0) @@ -90,7 +111,7 @@ struct perf_session *perf_session__new(const char *filename, int mode, bool forc goto out_delete; } - self->sample_type = perf_header__sample_type(&self->header); + perf_session__update_sample_type(self); out: return self; out_free: @@ -117,22 +138,17 @@ static bool symbol__match_parent_regex(struct symbol *sym) return 0; } -struct symbol **perf_session__resolve_callchain(struct perf_session *self, - struct thread *thread, - struct ip_callchain *chain, - struct symbol **parent) +struct map_symbol *perf_session__resolve_callchain(struct perf_session *self, + struct thread *thread, + struct ip_callchain *chain, + struct symbol **parent) { u8 cpumode = PERF_RECORD_MISC_USER; - struct symbol **syms = NULL; unsigned int i; + struct map_symbol *syms = calloc(chain->nr, sizeof(*syms)); - if (symbol_conf.use_callchain) { - syms = calloc(chain->nr, sizeof(*syms)); - if (!syms) { - fprintf(stderr, "Can't allocate memory for symbols\n"); - exit(-1); - } - } + if (!syms) + return NULL; for (i = 0; i < chain->nr; i++) { u64 ip = chain->ips[i]; @@ -152,15 +168,17 @@ struct symbol **perf_session__resolve_callchain(struct perf_session *self, continue; } + al.filtered = false; thread__find_addr_location(thread, self, cpumode, - MAP__FUNCTION, ip, &al, NULL); + MAP__FUNCTION, thread->pid, ip, &al, NULL); if (al.sym != NULL) { if (sort__has_parent && !*parent && symbol__match_parent_regex(al.sym)) *parent = al.sym; if (!symbol_conf.use_callchain) 
break; - syms[i] = al.sym; + syms[i].map = al.map; + syms[i].sym = al.sym; } } @@ -174,6 +192,18 @@ static int process_event_stub(event_t *event __used, return 0; } +static int process_finished_round_stub(event_t *event __used, + struct perf_session *session __used, + struct perf_event_ops *ops __used) +{ + dump_printf(": unhandled!\n"); + return 0; +} + +static int process_finished_round(event_t *event, + struct perf_session *session, + struct perf_event_ops *ops); + static void perf_event_ops__fill_defaults(struct perf_event_ops *handler) { if (handler->sample == NULL) @@ -194,29 +224,20 @@ static void perf_event_ops__fill_defaults(struct perf_event_ops *handler) handler->throttle = process_event_stub; if (handler->unthrottle == NULL) handler->unthrottle = process_event_stub; -} - -static const char *event__name[] = { - [0] = "TOTAL", - [PERF_RECORD_MMAP] = "MMAP", - [PERF_RECORD_LOST] = "LOST", - [PERF_RECORD_COMM] = "COMM", - [PERF_RECORD_EXIT] = "EXIT", - [PERF_RECORD_THROTTLE] = "THROTTLE", - [PERF_RECORD_UNTHROTTLE] = "UNTHROTTLE", - [PERF_RECORD_FORK] = "FORK", - [PERF_RECORD_READ] = "READ", - [PERF_RECORD_SAMPLE] = "SAMPLE", -}; - -unsigned long event__total[PERF_RECORD_MAX]; - -void event__print_totals(void) -{ - int i; - for (i = 0; i < PERF_RECORD_MAX; ++i) - pr_info("%10s events: %10ld\n", - event__name[i], event__total[i]); + if (handler->attr == NULL) + handler->attr = process_event_stub; + if (handler->event_type == NULL) + handler->event_type = process_event_stub; + if (handler->tracing_data == NULL) + handler->tracing_data = process_event_stub; + if (handler->build_id == NULL) + handler->build_id = process_event_stub; + if (handler->finished_round == NULL) { + if (handler->ordered_samples) + handler->finished_round = process_finished_round; + else + handler->finished_round = process_finished_round_stub; + } } void mem_bswap_64(void *src, int byte_size) @@ -270,6 +291,37 @@ static void event__read_swap(event_t *self) self->read.id = bswap_64(self->read.id); } +static void event__attr_swap(event_t *self) +{ + size_t size; + + self->attr.attr.type = bswap_32(self->attr.attr.type); + self->attr.attr.size = bswap_32(self->attr.attr.size); + self->attr.attr.config = bswap_64(self->attr.attr.config); + self->attr.attr.sample_period = bswap_64(self->attr.attr.sample_period); + self->attr.attr.sample_type = bswap_64(self->attr.attr.sample_type); + self->attr.attr.read_format = bswap_64(self->attr.attr.read_format); + self->attr.attr.wakeup_events = bswap_32(self->attr.attr.wakeup_events); + self->attr.attr.bp_type = bswap_32(self->attr.attr.bp_type); + self->attr.attr.bp_addr = bswap_64(self->attr.attr.bp_addr); + self->attr.attr.bp_len = bswap_64(self->attr.attr.bp_len); + + size = self->header.size; + size -= (void *)&self->attr.id - (void *)self; + mem_bswap_64(self->attr.id, size); +} + +static void event__event_type_swap(event_t *self) +{ + self->event_type.event_type.event_id = + bswap_64(self->event_type.event_type.event_id); +} + +static void event__tracing_data_swap(event_t *self) +{ + self->tracing_data.size = bswap_32(self->tracing_data.size); +} + typedef void (*event__swap_op)(event_t *self); static event__swap_op event__swap_ops[] = { @@ -280,9 +332,212 @@ static event__swap_op event__swap_ops[] = { [PERF_RECORD_LOST] = event__all64_swap, [PERF_RECORD_READ] = event__read_swap, [PERF_RECORD_SAMPLE] = event__all64_swap, - [PERF_RECORD_MAX] = NULL, + [PERF_RECORD_HEADER_ATTR] = event__attr_swap, + [PERF_RECORD_HEADER_EVENT_TYPE] = event__event_type_swap, + 
[PERF_RECORD_HEADER_TRACING_DATA] = event__tracing_data_swap, + [PERF_RECORD_HEADER_BUILD_ID] = NULL, + [PERF_RECORD_HEADER_MAX] = NULL, }; +struct sample_queue { + u64 timestamp; + struct sample_event *event; + struct list_head list; +}; + +static void flush_sample_queue(struct perf_session *s, + struct perf_event_ops *ops) +{ + struct list_head *head = &s->ordered_samples.samples_head; + u64 limit = s->ordered_samples.next_flush; + struct sample_queue *tmp, *iter; + + if (!ops->ordered_samples || !limit) + return; + + list_for_each_entry_safe(iter, tmp, head, list) { + if (iter->timestamp > limit) + return; + + if (iter == s->ordered_samples.last_inserted) + s->ordered_samples.last_inserted = NULL; + + ops->sample((event_t *)iter->event, s); + + s->ordered_samples.last_flush = iter->timestamp; + list_del(&iter->list); + free(iter->event); + free(iter); + } +} + +/* + * When perf record finishes a pass on every buffers, it records this pseudo + * event. + * We record the max timestamp t found in the pass n. + * Assuming these timestamps are monotonic across cpus, we know that if + * a buffer still has events with timestamps below t, they will be all + * available and then read in the pass n + 1. + * Hence when we start to read the pass n + 2, we can safely flush every + * events with timestamps below t. + * + * ============ PASS n ================= + * CPU 0 | CPU 1 + * | + * cnt1 timestamps | cnt2 timestamps + * 1 | 2 + * 2 | 3 + * - | 4 <--- max recorded + * + * ============ PASS n + 1 ============== + * CPU 0 | CPU 1 + * | + * cnt1 timestamps | cnt2 timestamps + * 3 | 5 + * 4 | 6 + * 5 | 7 <---- max recorded + * + * Flush every events below timestamp 4 + * + * ============ PASS n + 2 ============== + * CPU 0 | CPU 1 + * | + * cnt1 timestamps | cnt2 timestamps + * 6 | 8 + * 7 | 9 + * - | 10 + * + * Flush every events below timestamp 7 + * etc... 
+ */ +static int process_finished_round(event_t *event __used, + struct perf_session *session, + struct perf_event_ops *ops) +{ + flush_sample_queue(session, ops); + session->ordered_samples.next_flush = session->ordered_samples.max_timestamp; + + return 0; +} + +static void __queue_sample_end(struct sample_queue *new, struct list_head *head) +{ + struct sample_queue *iter; + + list_for_each_entry_reverse(iter, head, list) { + if (iter->timestamp < new->timestamp) { + list_add(&new->list, &iter->list); + return; + } + } + + list_add(&new->list, head); +} + +static void __queue_sample_before(struct sample_queue *new, + struct sample_queue *iter, + struct list_head *head) +{ + list_for_each_entry_continue_reverse(iter, head, list) { + if (iter->timestamp < new->timestamp) { + list_add(&new->list, &iter->list); + return; + } + } + + list_add(&new->list, head); +} + +static void __queue_sample_after(struct sample_queue *new, + struct sample_queue *iter, + struct list_head *head) +{ + list_for_each_entry_continue(iter, head, list) { + if (iter->timestamp > new->timestamp) { + list_add_tail(&new->list, &iter->list); + return; + } + } + list_add_tail(&new->list, head); +} + +/* The queue is ordered by time */ +static void __queue_sample_event(struct sample_queue *new, + struct perf_session *s) +{ + struct sample_queue *last_inserted = s->ordered_samples.last_inserted; + struct list_head *head = &s->ordered_samples.samples_head; + + + if (!last_inserted) { + __queue_sample_end(new, head); + return; + } + + /* + * Most of the time the current event has a timestamp + * very close to the last event inserted, unless we just switched + * to another event buffer. Having a sorting based on a list and + * on the last inserted event that is close to the current one is + * probably more efficient than an rbtree based sorting. 
+ */ + if (last_inserted->timestamp >= new->timestamp) + __queue_sample_before(new, last_inserted, head); + else + __queue_sample_after(new, last_inserted, head); +} + +static int queue_sample_event(event_t *event, struct sample_data *data, + struct perf_session *s) +{ + u64 timestamp = data->time; + struct sample_queue *new; + + + if (timestamp < s->ordered_samples.last_flush) { + printf("Warning: Timestamp below last timeslice flush\n"); + return -EINVAL; + } + + new = malloc(sizeof(*new)); + if (!new) + return -ENOMEM; + + new->timestamp = timestamp; + + new->event = malloc(event->header.size); + if (!new->event) { + free(new); + return -ENOMEM; + } + + memcpy(new->event, event, event->header.size); + + __queue_sample_event(new, s); + s->ordered_samples.last_inserted = new; + + if (new->timestamp > s->ordered_samples.max_timestamp) + s->ordered_samples.max_timestamp = new->timestamp; + + return 0; +} + +static int perf_session__process_sample(event_t *event, struct perf_session *s, + struct perf_event_ops *ops) +{ + struct sample_data data; + + if (!ops->ordered_samples) + return ops->sample(event, s); + + bzero(&data, sizeof(struct sample_data)); + event__parse_sample(event, s->sample_type, &data); + + queue_sample_event(event, &data, s); + + return 0; +} + static int perf_session__process_event(struct perf_session *self, event_t *event, struct perf_event_ops *ops, @@ -290,12 +545,11 @@ static int perf_session__process_event(struct perf_session *self, { trace_event(event); - if (event->header.type < PERF_RECORD_MAX) { + if (event->header.type < PERF_RECORD_HEADER_MAX) { dump_printf("%#Lx [%#x]: PERF_RECORD_%s", offset + head, event->header.size, event__name[event->header.type]); - ++event__total[0]; - ++event__total[event->header.type]; + hists__inc_nr_events(&self->hists, event->header.type); } if (self->header.needs_swap && event__swap_ops[event->header.type]) @@ -303,7 +557,7 @@ static int perf_session__process_event(struct perf_session *self, switch (event->header.type) { case PERF_RECORD_SAMPLE: - return ops->sample(event, self); + return perf_session__process_sample(event, self, ops); case PERF_RECORD_MMAP: return ops->mmap(event, self); case PERF_RECORD_COMM: @@ -320,8 +574,20 @@ static int perf_session__process_event(struct perf_session *self, return ops->throttle(event, self); case PERF_RECORD_UNTHROTTLE: return ops->unthrottle(event, self); + case PERF_RECORD_HEADER_ATTR: + return ops->attr(event, self); + case PERF_RECORD_HEADER_EVENT_TYPE: + return ops->event_type(event, self); + case PERF_RECORD_HEADER_TRACING_DATA: + /* setup for reading amidst mmap */ + lseek(self->fd, offset + head, SEEK_SET); + return ops->tracing_data(event, self); + case PERF_RECORD_HEADER_BUILD_ID: + return ops->build_id(event, self); + case PERF_RECORD_FINISHED_ROUND: + return ops->finished_round(event, self, ops); default: - self->unknown_events++; + ++self->hists.stats.nr_unknown_events; return -1; } } @@ -333,56 +599,114 @@ void perf_event_header__bswap(struct perf_event_header *self) self->size = bswap_16(self->size); } -int perf_header__read_build_ids(struct perf_header *self, - int input, u64 offset, u64 size) +static struct thread *perf_session__register_idle_thread(struct perf_session *self) { - struct build_id_event bev; - char filename[PATH_MAX]; - u64 limit = offset + size; - int err = -1; - - while (offset < limit) { - struct dso *dso; - ssize_t len; - struct list_head *head = &dsos__user; + struct thread *thread = perf_session__findnew(self, 0); - if (read(input, &bev, sizeof(bev)) != 
sizeof(bev)) - goto out; + if (thread == NULL || thread__set_comm(thread, "swapper")) { + pr_err("problem inserting idle task.\n"); + thread = NULL; + } - if (self->needs_swap) - perf_event_header__bswap(&bev.header); + return thread; +} - len = bev.header.size - sizeof(bev); - if (read(input, filename, len) != len) - goto out; +int do_read(int fd, void *buf, size_t size) +{ + void *buf_start = buf; - if (bev.header.misc & PERF_RECORD_MISC_KERNEL) - head = &dsos__kernel; + while (size) { + int ret = read(fd, buf, size); - dso = __dsos__findnew(head, filename); - if (dso != NULL) { - dso__set_build_id(dso, &bev.build_id); - if (head == &dsos__kernel && filename[0] == '[') - dso->kernel = 1; - } + if (ret <= 0) + return ret; - offset += bev.header.size; + size -= ret; + buf += ret; } - err = 0; -out: - return err; + + return buf - buf_start; } -static struct thread *perf_session__register_idle_thread(struct perf_session *self) +#define session_done() (*(volatile int *)(&session_done)) +volatile int session_done; + +static int __perf_session__process_pipe_events(struct perf_session *self, + struct perf_event_ops *ops) { - struct thread *thread = perf_session__findnew(self, 0); + event_t event; + uint32_t size; + int skip = 0; + u64 head; + int err; + void *p; - if (thread == NULL || thread__set_comm(thread, "swapper")) { - pr_err("problem inserting idle task.\n"); - thread = NULL; + perf_event_ops__fill_defaults(ops); + + head = 0; +more: + err = do_read(self->fd, &event, sizeof(struct perf_event_header)); + if (err <= 0) { + if (err == 0) + goto done; + + pr_err("failed to read event header\n"); + goto out_err; } - return thread; + if (self->header.needs_swap) + perf_event_header__bswap(&event.header); + + size = event.header.size; + if (size == 0) + size = 8; + + p = &event; + p += sizeof(struct perf_event_header); + + if (size - sizeof(struct perf_event_header)) { + err = do_read(self->fd, p, + size - sizeof(struct perf_event_header)); + if (err <= 0) { + if (err == 0) { + pr_err("unexpected end of event stream\n"); + goto done; + } + + pr_err("failed to read event data\n"); + goto out_err; + } + } + + if (size == 0 || + (skip = perf_session__process_event(self, &event, ops, + 0, head)) < 0) { + dump_printf("%#Lx [%#x]: skipping unknown header type: %d\n", + head, event.header.size, event.header.type); + /* + * assume we lost track of the stream, check alignment, and + * increment a single u64 in the hope to catch on again 'soon'. 
+ */ + if (unlikely(head & 7)) + head &= ~7ULL; + + size = 8; + } + + head += size; + + dump_printf("\n%#Lx [%#x]: event: %d\n", + head, event.header.size, event.header.type); + + if (skip > 0) + head += skip; + + if (!session_done()) + goto more; +done: + err = 0; +out_err: + return err; } int __perf_session__process_events(struct perf_session *self, @@ -396,6 +720,10 @@ int __perf_session__process_events(struct perf_session *self, event_t *event; uint32_t size; char *buf; + struct ui_progress *progress = ui_progress__new("Processing events...", + self->size); + if (progress == NULL) + return -1; perf_event_ops__fill_defaults(ops); @@ -424,6 +752,7 @@ remap: more: event = (event_t *)(buf + head); + ui_progress__update(progress, offset); if (self->header.needs_swap) perf_event_header__bswap(&event->header); @@ -473,7 +802,11 @@ more: goto more; done: err = 0; + /* do the final flush for ordered samples */ + self->ordered_samples.next_flush = ULLONG_MAX; + flush_sample_queue(self, ops); out_err: + ui_progress__delete(progress); return err; } @@ -502,9 +835,13 @@ out_getcwd_err: self->cwdlen = strlen(self->cwd); } - err = __perf_session__process_events(self, self->header.data_offset, - self->header.data_size, - self->size, ops); + if (!self->fd_pipe) + err = __perf_session__process_events(self, + self->header.data_offset, + self->header.data_size, + self->size, ops); + else + err = __perf_session__process_pipe_events(self, ops); out_err: return err; } @@ -519,56 +856,41 @@ bool perf_session__has_traces(struct perf_session *self, const char *msg) return true; } -int perf_session__set_kallsyms_ref_reloc_sym(struct perf_session *self, +int perf_session__set_kallsyms_ref_reloc_sym(struct map **maps, const char *symbol_name, u64 addr) { char *bracket; enum map_type i; + struct ref_reloc_sym *ref; - self->ref_reloc_sym.name = strdup(symbol_name); - if (self->ref_reloc_sym.name == NULL) + ref = zalloc(sizeof(struct ref_reloc_sym)); + if (ref == NULL) return -ENOMEM; - bracket = strchr(self->ref_reloc_sym.name, ']'); + ref->name = strdup(symbol_name); + if (ref->name == NULL) { + free(ref); + return -ENOMEM; + } + + bracket = strchr(ref->name, ']'); if (bracket) *bracket = '\0'; - self->ref_reloc_sym.addr = addr; + ref->addr = addr; for (i = 0; i < MAP__NR_TYPES; ++i) { - struct kmap *kmap = map__kmap(self->vmlinux_maps[i]); - kmap->ref_reloc_sym = &self->ref_reloc_sym; + struct kmap *kmap = map__kmap(maps[i]); + kmap->ref_reloc_sym = ref; } return 0; } -static u64 map__reloc_map_ip(struct map *map, u64 ip) -{ - return ip + (s64)map->pgoff; -} - -static u64 map__reloc_unmap_ip(struct map *map, u64 ip) -{ - return ip - (s64)map->pgoff; -} - -void map__reloc_vmlinux(struct map *self) +size_t perf_session__fprintf_dsos(struct perf_session *self, FILE *fp) { - struct kmap *kmap = map__kmap(self); - s64 reloc; - - if (!kmap->ref_reloc_sym || !kmap->ref_reloc_sym->unrelocated_addr) - return; - - reloc = (kmap->ref_reloc_sym->unrelocated_addr - - kmap->ref_reloc_sym->addr); - - if (!reloc) - return; - - self->map_ip = map__reloc_map_ip; - self->unmap_ip = map__reloc_unmap_ip; - self->pgoff = reloc; + return __dsos__fprintf(&self->host_machine.kernel_dsos, fp) + + __dsos__fprintf(&self->host_machine.user_dsos, fp) + + machines__fprintf_dsos(&self->machines, fp); } diff --git a/tools/perf/util/session.h b/tools/perf/util/session.h index 5c33417eebb3..e7fce486ebe2 100644 --- a/tools/perf/util/session.h +++ b/tools/perf/util/session.h @@ -1,6 +1,7 @@ #ifndef __PERF_SESSION_H #define __PERF_SESSION_H +#include 
"hist.h" #include "event.h" #include "header.h" #include "symbol.h" @@ -8,45 +9,69 @@ #include <linux/rbtree.h> #include "../../../include/linux/perf_event.h" +struct sample_queue; struct ip_callchain; struct thread; +struct ordered_samples { + u64 last_flush; + u64 next_flush; + u64 max_timestamp; + struct list_head samples_head; + struct sample_queue *last_inserted; +}; + struct perf_session { struct perf_header header; unsigned long size; unsigned long mmap_window; - struct map_groups kmaps; struct rb_root threads; struct thread *last_match; - struct map *vmlinux_maps[MAP__NR_TYPES]; - struct events_stats events_stats; - struct rb_root stats_by_id; - unsigned long event_total[PERF_RECORD_MAX]; - unsigned long unknown_events; - struct rb_root hists; + struct machine host_machine; + struct rb_root machines; + struct rb_root hists_tree; + /* + * FIXME: should point to the first entry in hists_tree and + * be a hists instance. Right now its only 'report' + * that is using ->hists_tree while all the rest use + * ->hists. + */ + struct hists hists; u64 sample_type; - struct ref_reloc_sym ref_reloc_sym; int fd; + bool fd_pipe; + bool repipe; int cwdlen; char *cwd; + struct ordered_samples ordered_samples; char filename[0]; }; +struct perf_event_ops; + typedef int (*event_op)(event_t *self, struct perf_session *session); +typedef int (*event_op2)(event_t *self, struct perf_session *session, + struct perf_event_ops *ops); struct perf_event_ops { - event_op sample, - mmap, - comm, - fork, - exit, - lost, - read, - throttle, - unthrottle; + event_op sample, + mmap, + comm, + fork, + exit, + lost, + read, + throttle, + unthrottle, + attr, + event_type, + tracing_data, + build_id; + event_op2 finished_round; + bool ordered_samples; }; -struct perf_session *perf_session__new(const char *filename, int mode, bool force); +struct perf_session *perf_session__new(const char *filename, int mode, bool force, bool repipe); void perf_session__delete(struct perf_session *self); void perf_event_header__bswap(struct perf_event_header *self); @@ -57,33 +82,66 @@ int __perf_session__process_events(struct perf_session *self, int perf_session__process_events(struct perf_session *self, struct perf_event_ops *event_ops); -struct symbol **perf_session__resolve_callchain(struct perf_session *self, - struct thread *thread, - struct ip_callchain *chain, - struct symbol **parent); +struct map_symbol *perf_session__resolve_callchain(struct perf_session *self, + struct thread *thread, + struct ip_callchain *chain, + struct symbol **parent); bool perf_session__has_traces(struct perf_session *self, const char *msg); -int perf_header__read_build_ids(struct perf_header *self, int input, - u64 offset, u64 file_size); - -int perf_session__set_kallsyms_ref_reloc_sym(struct perf_session *self, +int perf_session__set_kallsyms_ref_reloc_sym(struct map **maps, const char *symbol_name, u64 addr); void mem_bswap_64(void *src, int byte_size); -static inline int __perf_session__create_kernel_maps(struct perf_session *self, - struct dso *kernel) +int perf_session__create_kernel_maps(struct perf_session *self); + +int do_read(int fd, void *buf, size_t size); +void perf_session__update_sample_type(struct perf_session *self); + +static inline +struct machine *perf_session__find_host_machine(struct perf_session *self) +{ + return &self->host_machine; +} + +static inline +struct machine *perf_session__find_machine(struct perf_session *self, pid_t pid) +{ + if (pid == HOST_KERNEL_ID) + return &self->host_machine; + return 
machines__find(&self->machines, pid); +} + +static inline +struct machine *perf_session__findnew_machine(struct perf_session *self, pid_t pid) +{ + if (pid == HOST_KERNEL_ID) + return &self->host_machine; + return machines__findnew(&self->machines, pid); +} + +static inline +void perf_session__process_machines(struct perf_session *self, + machine__process_t process) +{ + process(&self->host_machine, self); + return machines__process(&self->machines, process, self); +} + +size_t perf_session__fprintf_dsos(struct perf_session *self, FILE *fp); + +static inline +size_t perf_session__fprintf_dsos_buildid(struct perf_session *self, FILE *fp, + bool with_hits) { - return __map_groups__create_kernel_maps(&self->kmaps, - self->vmlinux_maps, kernel); + return machines__fprintf_dsos_buildid(&self->machines, fp, with_hits); } -static inline struct map * - perf_session__new_module_map(struct perf_session *self, - u64 start, const char *filename) +static inline +size_t perf_session__fprintf_nr_events(struct perf_session *self, FILE *fp) { - return map_groups__new_module(&self->kmaps, start, filename); + return hists__fprintf_nr_events(&self->hists, fp); } #endif /* __PERF_SESSION_H */ diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c index cb0f327de9e8..2316cb5a4116 100644 --- a/tools/perf/util/sort.c +++ b/tools/perf/util/sort.c @@ -1,10 +1,10 @@ #include "sort.h" regex_t parent_regex; -char default_parent_pattern[] = "^sys_|^do_page_fault"; -char *parent_pattern = default_parent_pattern; -char default_sort_order[] = "comm,dso,symbol"; -char *sort_order = default_sort_order; +const char default_parent_pattern[] = "^sys_|^do_page_fault"; +const char *parent_pattern = default_parent_pattern; +const char default_sort_order[] = "comm,dso,symbol"; +const char *sort_order = default_sort_order; int sort__need_collapse = 0; int sort__has_parent = 0; @@ -18,39 +18,50 @@ char * field_sep; LIST_HEAD(hist_entry__sort_list); +static int hist_entry__thread_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width); +static int hist_entry__comm_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width); +static int hist_entry__dso_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width); +static int hist_entry__sym_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width); +static int hist_entry__parent_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width); + struct sort_entry sort_thread = { - .header = "Command: Pid", - .cmp = sort__thread_cmp, - .print = sort__thread_print, - .width = &threads__col_width, + .se_header = "Command: Pid", + .se_cmp = sort__thread_cmp, + .se_snprintf = hist_entry__thread_snprintf, + .se_width = &threads__col_width, }; struct sort_entry sort_comm = { - .header = "Command", - .cmp = sort__comm_cmp, - .collapse = sort__comm_collapse, - .print = sort__comm_print, - .width = &comms__col_width, + .se_header = "Command", + .se_cmp = sort__comm_cmp, + .se_collapse = sort__comm_collapse, + .se_snprintf = hist_entry__comm_snprintf, + .se_width = &comms__col_width, }; struct sort_entry sort_dso = { - .header = "Shared Object", - .cmp = sort__dso_cmp, - .print = sort__dso_print, - .width = &dsos__col_width, + .se_header = "Shared Object", + .se_cmp = sort__dso_cmp, + .se_snprintf = hist_entry__dso_snprintf, + .se_width = &dsos__col_width, }; struct sort_entry sort_sym = { - .header = "Symbol", - .cmp = sort__sym_cmp, - .print = sort__sym_print, + .se_header = "Symbol", + .se_cmp 
= sort__sym_cmp, + .se_snprintf = hist_entry__sym_snprintf, }; struct sort_entry sort_parent = { - .header = "Parent symbol", - .cmp = sort__parent_cmp, - .print = sort__parent_print, - .width = &parent_symbol__col_width, + .se_header = "Parent symbol", + .se_cmp = sort__parent_cmp, + .se_snprintf = hist_entry__parent_snprintf, + .se_width = &parent_symbol__col_width, }; struct sort_dimension { @@ -85,45 +96,38 @@ sort__thread_cmp(struct hist_entry *left, struct hist_entry *right) return right->thread->pid - left->thread->pid; } -int repsep_fprintf(FILE *fp, const char *fmt, ...) +static int repsep_snprintf(char *bf, size_t size, const char *fmt, ...) { int n; va_list ap; va_start(ap, fmt); - if (!field_sep) - n = vfprintf(fp, fmt, ap); - else { - char *bf = NULL; - n = vasprintf(&bf, fmt, ap); - if (n > 0) { - char *sep = bf; - - while (1) { - sep = strchr(sep, *field_sep); - if (sep == NULL) - break; - *sep = '.'; - } + n = vsnprintf(bf, size, fmt, ap); + if (field_sep && n > 0) { + char *sep = bf; + + while (1) { + sep = strchr(sep, *field_sep); + if (sep == NULL) + break; + *sep = '.'; } - fputs(bf, fp); - free(bf); } va_end(ap); return n; } -size_t -sort__thread_print(FILE *fp, struct hist_entry *self, unsigned int width) +static int hist_entry__thread_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width) { - return repsep_fprintf(fp, "%*s:%5d", width - 6, + return repsep_snprintf(bf, size, "%*s:%5d", width, self->thread->comm ?: "", self->thread->pid); } -size_t -sort__comm_print(FILE *fp, struct hist_entry *self, unsigned int width) +static int hist_entry__comm_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width) { - return repsep_fprintf(fp, "%*s", width, self->thread->comm); + return repsep_snprintf(bf, size, "%*s", width, self->thread->comm); } /* --sort dso */ @@ -131,8 +135,8 @@ sort__comm_print(FILE *fp, struct hist_entry *self, unsigned int width) int64_t sort__dso_cmp(struct hist_entry *left, struct hist_entry *right) { - struct dso *dso_l = left->map ? left->map->dso : NULL; - struct dso *dso_r = right->map ? right->map->dso : NULL; + struct dso *dso_l = left->ms.map ? left->ms.map->dso : NULL; + struct dso *dso_r = right->ms.map ? right->ms.map->dso : NULL; const char *dso_name_l, *dso_name_r; if (!dso_l || !dso_r) @@ -149,16 +153,16 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right) return strcmp(dso_name_l, dso_name_r); } -size_t -sort__dso_print(FILE *fp, struct hist_entry *self, unsigned int width) +static int hist_entry__dso_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width) { - if (self->map && self->map->dso) { - const char *dso_name = !verbose ? self->map->dso->short_name : - self->map->dso->long_name; - return repsep_fprintf(fp, "%-*s", width, dso_name); + if (self->ms.map && self->ms.map->dso) { + const char *dso_name = !verbose ? self->ms.map->dso->short_name : + self->ms.map->dso->long_name; + return repsep_snprintf(bf, size, "%-*s", width, dso_name); } - return repsep_fprintf(fp, "%*llx", width, (u64)self->ip); + return repsep_snprintf(bf, size, "%*Lx", width, self->ip); } /* --sort symbol */ @@ -168,31 +172,31 @@ sort__sym_cmp(struct hist_entry *left, struct hist_entry *right) { u64 ip_l, ip_r; - if (left->sym == right->sym) + if (left->ms.sym == right->ms.sym) return 0; - ip_l = left->sym ? left->sym->start : left->ip; - ip_r = right->sym ? right->sym->start : right->ip; + ip_l = left->ms.sym ? left->ms.sym->start : left->ip; + ip_r = right->ms.sym ? 
right->ms.sym->start : right->ip; return (int64_t)(ip_r - ip_l); } - -size_t -sort__sym_print(FILE *fp, struct hist_entry *self, unsigned int width __used) +static int hist_entry__sym_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width __used) { size_t ret = 0; if (verbose) { - char o = self->map ? dso__symtab_origin(self->map->dso) : '!'; - ret += repsep_fprintf(fp, "%#018llx %c ", (u64)self->ip, o); + char o = self->ms.map ? dso__symtab_origin(self->ms.map->dso) : '!'; + ret += repsep_snprintf(bf, size, "%#018llx %c ", self->ip, o); } - ret += repsep_fprintf(fp, "[%c] ", self->level); - if (self->sym) - ret += repsep_fprintf(fp, "%s", self->sym->name); + ret += repsep_snprintf(bf + ret, size - ret, "[%c] ", self->level); + if (self->ms.sym) + ret += repsep_snprintf(bf + ret, size - ret, "%s", + self->ms.sym->name); else - ret += repsep_fprintf(fp, "%#016llx", (u64)self->ip); + ret += repsep_snprintf(bf + ret, size - ret, "%#016llx", self->ip); return ret; } @@ -231,10 +235,10 @@ sort__parent_cmp(struct hist_entry *left, struct hist_entry *right) return strcmp(sym_l->name, sym_r->name); } -size_t -sort__parent_print(FILE *fp, struct hist_entry *self, unsigned int width) +static int hist_entry__parent_snprintf(struct hist_entry *self, char *bf, + size_t size, unsigned int width) { - return repsep_fprintf(fp, "%-*s", width, + return repsep_snprintf(bf, size, "%-*s", width, self->parent ? self->parent->name : "[other]"); } @@ -251,7 +255,7 @@ int sort_dimension__add(const char *tok) if (strncasecmp(tok, sd->name, strlen(tok))) continue; - if (sd->entry->collapse) + if (sd->entry->se_collapse) sort__need_collapse = 1; if (sd->entry == &sort_parent) { @@ -260,9 +264,8 @@ int sort_dimension__add(const char *tok) char err[BUFSIZ]; regerror(ret, &parent_regex, err, sizeof(err)); - fprintf(stderr, "Invalid regex: %s\n%s", - parent_pattern, err); - exit(-1); + pr_err("Invalid regex: %s\n%s", parent_pattern, err); + return -EINVAL; } sort__has_parent = 1; } diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h index 753f9ea99fb0..0d61c4082f43 100644 --- a/tools/perf/util/sort.h +++ b/tools/perf/util/sort.h @@ -25,10 +25,10 @@ #include "sort.h" extern regex_t parent_regex; -extern char *sort_order; -extern char default_parent_pattern[]; -extern char *parent_pattern; -extern char default_sort_order[]; +extern const char *sort_order; +extern const char default_parent_pattern[]; +extern const char *parent_pattern; +extern const char default_sort_order[]; extern int sort__need_collapse; extern int sort__has_parent; extern char *field_sep; @@ -43,19 +43,24 @@ extern enum sort_type sort__first_dimension; struct hist_entry { struct rb_node rb_node; - u64 count; + u64 period; + u64 period_sys; + u64 period_us; + u64 period_guest_sys; + u64 period_guest_us; + struct map_symbol ms; struct thread *thread; - struct map *map; - struct symbol *sym; u64 ip; + u32 nr_events; char level; - struct symbol *parent; - struct callchain_node callchain; + u8 filtered; + struct symbol *parent; union { unsigned long position; struct hist_entry *pair; struct rb_root sorted_chain; }; + struct callchain_node callchain[0]; }; enum sort_type { @@ -73,12 +78,13 @@ enum sort_type { struct sort_entry { struct list_head list; - const char *header; + const char *se_header; - int64_t (*cmp)(struct hist_entry *, struct hist_entry *); - int64_t (*collapse)(struct hist_entry *, struct hist_entry *); - size_t (*print)(FILE *fp, struct hist_entry *, unsigned int width); - unsigned int *width; + int64_t 
(*se_cmp)(struct hist_entry *, struct hist_entry *); + int64_t (*se_collapse)(struct hist_entry *, struct hist_entry *); + int (*se_snprintf)(struct hist_entry *self, char *bf, size_t size, + unsigned int width); + unsigned int *se_width; bool elide; }; @@ -87,7 +93,6 @@ extern struct list_head hist_entry__sort_list; void setup_sorting(const char * const usagestr[], const struct option *opts); -extern int repsep_fprintf(FILE *fp, const char *fmt, ...); extern size_t sort__thread_print(FILE *, struct hist_entry *, unsigned int); extern size_t sort__comm_print(FILE *, struct hist_entry *, unsigned int); extern size_t sort__dso_print(FILE *, struct hist_entry *, unsigned int); diff --git a/tools/perf/util/string.c b/tools/perf/util/string.c index a175949ed216..0409fc7c0058 100644 --- a/tools/perf/util/string.c +++ b/tools/perf/util/string.c @@ -1,48 +1,5 @@ -#include "string.h" #include "util.h" - -static int hex(char ch) -{ - if ((ch >= '0') && (ch <= '9')) - return ch - '0'; - if ((ch >= 'a') && (ch <= 'f')) - return ch - 'a' + 10; - if ((ch >= 'A') && (ch <= 'F')) - return ch - 'A' + 10; - return -1; -} - -/* - * While we find nice hex chars, build a long_val. - * Return number of chars processed. - */ -int hex2u64(const char *ptr, u64 *long_val) -{ - const char *p = ptr; - *long_val = 0; - - while (*p) { - const int hex_val = hex(*p); - - if (hex_val < 0) - break; - - *long_val = (*long_val << 4) | hex_val; - p++; - } - - return p - ptr; -} - -char *strxfrchar(char *s, char from, char to) -{ - char *p = s; - - while ((p = strchr(p, from)) != NULL) - *p++ = to; - - return s; -} +#include "string.h" #define K 1024LL /* diff --git a/tools/perf/util/string.h b/tools/perf/util/string.h deleted file mode 100644 index 542e44de3719..000000000000 --- a/tools/perf/util/string.h +++ /dev/null @@ -1,18 +0,0 @@ -#ifndef __PERF_STRING_H_ -#define __PERF_STRING_H_ - -#include <stdbool.h> -#include "types.h" - -int hex2u64(const char *ptr, u64 *val); -char *strxfrchar(char *s, char from, char to); -s64 perf_atoll(const char *str); -char **argv_split(const char *str, int *argcp); -void argv_free(char **argv); -bool strglobmatch(const char *str, const char *pat); -bool strlazymatch(const char *str, const char *pat); - -#define _STR(x) #x -#define STR(x) _STR(x) - -#endif /* __PERF_STRING_H */ diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index c458c4a371d1..a06131f6259a 100644 --- a/tools/perf/util/symbol.c +++ b/tools/perf/util/symbol.c @@ -1,13 +1,19 @@ -#include "util.h" -#include "../perf.h" -#include "sort.h" -#include "string.h" +#define _GNU_SOURCE +#include <ctype.h> +#include <dirent.h> +#include <errno.h> +#include <libgen.h> +#include <stdlib.h> +#include <stdio.h> +#include <string.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <sys/param.h> +#include <fcntl.h> +#include <unistd.h> #include "symbol.h" -#include "thread.h" +#include "strlist.h" -#include "debug.h" - -#include <asm/bug.h> #include <libelf.h> #include <gelf.h> #include <elf.h> @@ -18,22 +24,12 @@ #define NT_GNU_BUILD_ID 3 #endif -enum dso_origin { - DSO__ORIG_KERNEL = 0, - DSO__ORIG_JAVA_JIT, - DSO__ORIG_BUILD_ID_CACHE, - DSO__ORIG_FEDORA, - DSO__ORIG_UBUNTU, - DSO__ORIG_BUILDID, - DSO__ORIG_DSO, - DSO__ORIG_KMODULE, - DSO__ORIG_NOT_FOUND, -}; - static void dsos__add(struct list_head *head, struct dso *dso); static struct map *map__new2(u64 start, struct dso *dso, enum map_type type); static int dso__load_kernel_sym(struct dso *self, struct map *map, symbol_filter_t filter); +static int 
dso__load_guest_kernel_sym(struct dso *self, struct map *map, + symbol_filter_t filter); static int vmlinux_path__nr_entries; static char **vmlinux_path; @@ -126,16 +122,17 @@ static void map_groups__fixup_end(struct map_groups *self) static struct symbol *symbol__new(u64 start, u64 len, const char *name) { size_t namelen = strlen(name) + 1; - struct symbol *self = zalloc(symbol_conf.priv_size + - sizeof(*self) + namelen); + struct symbol *self = calloc(1, (symbol_conf.priv_size + + sizeof(*self) + namelen)); if (self == NULL) return NULL; if (symbol_conf.priv_size) self = ((void *)self) + symbol_conf.priv_size; - self->start = start; - self->end = len ? start + len - 1 : start; + self->start = start; + self->end = len ? start + len - 1 : start; + self->namelen = namelen - 1; pr_debug4("%s: %s %#Lx-%#Lx\n", __func__, name, start, self->end); @@ -178,7 +175,7 @@ static void dso__set_basename(struct dso *self) struct dso *dso__new(const char *name) { - struct dso *self = zalloc(sizeof(*self) + strlen(name) + 1); + struct dso *self = calloc(1, sizeof(*self) + strlen(name) + 1); if (self != NULL) { int i; @@ -192,6 +189,8 @@ struct dso *dso__new(const char *name) self->loaded = 0; self->sorted_by_name = 0; self->has_build_id = 0; + self->kernel = DSO_TYPE_USER; + INIT_LIST_HEAD(&self->node); } return self; @@ -408,12 +407,9 @@ int kallsyms__parse(const char *filename, void *arg, char *symbol_name; line_len = getline(&line, &n, file); - if (line_len < 0) + if (line_len < 0 || !line) break; - if (!line) - goto out_failure; - line[--line_len] = '\0'; /* \n */ len = hex2u64(line, &start); @@ -465,6 +461,7 @@ static int map__process_kallsym_symbol(void *arg, const char *name, * map__split_kallsyms, when we have split the maps per module */ symbols__insert(root, sym); + return 0; } @@ -489,6 +486,7 @@ static int dso__split_kallsyms(struct dso *self, struct map *map, symbol_filter_t filter) { struct map_groups *kmaps = map__kmap(map)->kmaps; + struct machine *machine = kmaps->machine; struct map *curr_map = map; struct symbol *pos; int count = 0; @@ -510,15 +508,33 @@ static int dso__split_kallsyms(struct dso *self, struct map *map, *module++ = '\0'; if (strcmp(curr_map->dso->short_name, module)) { - curr_map = map_groups__find_by_name(kmaps, map->type, module); + if (curr_map != map && + self->kernel == DSO_TYPE_GUEST_KERNEL && + machine__is_default_guest(machine)) { + /* + * We assume all symbols of a module are + * continuous in * kallsyms, so curr_map + * points to a module and all its + * symbols are in its kmap. Mark it as + * loaded. 
+ */ + dso__set_loaded(curr_map->dso, + curr_map->type); + } + + curr_map = map_groups__find_by_name(kmaps, + map->type, module); if (curr_map == NULL) { - pr_debug("/proc/{kallsyms,modules} " + pr_debug("%s/proc/{kallsyms,modules} " "inconsistency while looking " - "for \"%s\" module!\n", module); - return -1; + "for \"%s\" module!\n", + machine->root_dir, module); + curr_map = map; + goto discard_symbol; } - if (curr_map->dso->loaded) + if (curr_map->dso->loaded && + !machine__is_default_guest(machine)) goto discard_symbol; } /* @@ -531,13 +547,21 @@ static int dso__split_kallsyms(struct dso *self, struct map *map, char dso_name[PATH_MAX]; struct dso *dso; - snprintf(dso_name, sizeof(dso_name), "[kernel].%d", - kernel_range++); + if (self->kernel == DSO_TYPE_GUEST_KERNEL) + snprintf(dso_name, sizeof(dso_name), + "[guest.kernel].%d", + kernel_range++); + else + snprintf(dso_name, sizeof(dso_name), + "[kernel].%d", + kernel_range++); dso = dso__new(dso_name); if (dso == NULL) return -1; + dso->kernel = self->kernel; + curr_map = map__new2(pos->start, dso, map->type); if (curr_map == NULL) { dso__delete(dso); @@ -561,6 +585,12 @@ discard_symbol: rb_erase(&pos->rb_node, root); } } + if (curr_map != map && + self->kernel == DSO_TYPE_GUEST_KERNEL && + machine__is_default_guest(kmaps->machine)) { + dso__set_loaded(curr_map->dso, curr_map->type); + } + return count; } @@ -571,7 +601,10 @@ int dso__load_kallsyms(struct dso *self, const char *filename, return -1; symbols__fixup_end(&self->symbols[map->type]); - self->origin = DSO__ORIG_KERNEL; + if (self->kernel == DSO_TYPE_GUEST_KERNEL) + self->origin = DSO__ORIG_GUEST_KERNEL; + else + self->origin = DSO__ORIG_KERNEL; return dso__split_kallsyms(self, map, filter); } @@ -870,8 +903,8 @@ out_close: if (err == 0) return nr; out: - pr_warning("%s: problems reading %s PLT info.\n", - __func__, self->long_name); + pr_debug("%s: problems reading %s PLT info.\n", + __func__, self->long_name); return 0; } @@ -958,7 +991,7 @@ static int dso__load_sym(struct dso *self, struct map *map, const char *name, nr_syms = shdr.sh_size / shdr.sh_entsize; memset(&sym, 0, sizeof(sym)); - if (!self->kernel) { + if (self->kernel == DSO_TYPE_USER) { self->adjust_symbols = (ehdr.e_type == ET_EXEC || elf_section_by_name(elf, &ehdr, &shdr, ".gnu.prelink_undo", @@ -990,7 +1023,7 @@ static int dso__load_sym(struct dso *self, struct map *map, const char *name, section_name = elf_sec__name(&shdr, secstrs); - if (self->kernel || kmodule) { + if (self->kernel != DSO_TYPE_USER || kmodule) { char dso_name[PATH_MAX]; if (strcmp(section_name, @@ -1017,6 +1050,7 @@ static int dso__load_sym(struct dso *self, struct map *map, const char *name, curr_dso = dso__new(dso_name); if (curr_dso == NULL) goto out_elf_end; + curr_dso->kernel = self->kernel; curr_map = map__new2(start, curr_dso, map->type); if (curr_map == NULL) { @@ -1025,9 +1059,9 @@ static int dso__load_sym(struct dso *self, struct map *map, const char *name, } curr_map->map_ip = identity__map_ip; curr_map->unmap_ip = identity__map_ip; - curr_dso->origin = DSO__ORIG_KERNEL; + curr_dso->origin = self->origin; map_groups__insert(kmap->kmaps, curr_map); - dsos__add(&dsos__kernel, curr_dso); + dsos__add(&self->node, curr_dso); dso__set_loaded(curr_dso, map->type); } else curr_dso = curr_map->dso; @@ -1089,7 +1123,7 @@ static bool dso__build_id_equal(const struct dso *self, u8 *build_id) return memcmp(self->build_id, build_id, sizeof(self->build_id)) == 0; } -static bool __dsos__read_build_ids(struct list_head *head, bool with_hits) 
+bool __dsos__read_build_ids(struct list_head *head, bool with_hits) { bool have_build_id = false; struct dso *pos; @@ -1107,13 +1141,6 @@ static bool __dsos__read_build_ids(struct list_head *head, bool with_hits) return have_build_id; } -bool dsos__read_build_ids(bool with_hits) -{ - bool kbuildids = __dsos__read_build_ids(&dsos__kernel, with_hits), - ubuildids = __dsos__read_build_ids(&dsos__user, with_hits); - return kbuildids || ubuildids; -} - /* * Align offset to 4 bytes as needed for note name and descriptor data. */ @@ -1248,6 +1275,8 @@ char dso__symtab_origin(const struct dso *self) [DSO__ORIG_BUILDID] = 'b', [DSO__ORIG_DSO] = 'd', [DSO__ORIG_KMODULE] = 'K', + [DSO__ORIG_GUEST_KERNEL] = 'g', + [DSO__ORIG_GUEST_KMODULE] = 'G', }; if (self == NULL || self->origin == DSO__ORIG_NOT_FOUND) @@ -1263,11 +1292,20 @@ int dso__load(struct dso *self, struct map *map, symbol_filter_t filter) char build_id_hex[BUILD_ID_SIZE * 2 + 1]; int ret = -1; int fd; + struct machine *machine; + const char *root_dir; dso__set_loaded(self, map->type); - if (self->kernel) + if (self->kernel == DSO_TYPE_KERNEL) return dso__load_kernel_sym(self, map, filter); + else if (self->kernel == DSO_TYPE_GUEST_KERNEL) + return dso__load_guest_kernel_sym(self, map, filter); + + if (map->groups && map->groups->machine) + machine = map->groups->machine; + else + machine = NULL; name = malloc(size); if (!name) @@ -1321,6 +1359,13 @@ more: case DSO__ORIG_DSO: snprintf(name, size, "%s", self->long_name); break; + case DSO__ORIG_GUEST_KMODULE: + if (map->groups && map->groups->machine) + root_dir = map->groups->machine->root_dir; + else + root_dir = ""; + snprintf(name, size, "%s%s", root_dir, self->long_name); + break; default: goto out; @@ -1374,7 +1419,8 @@ struct map *map_groups__find_by_name(struct map_groups *self, return NULL; } -static int dso__kernel_module_get_build_id(struct dso *self) +static int dso__kernel_module_get_build_id(struct dso *self, + const char *root_dir) { char filename[PATH_MAX]; /* @@ -1384,8 +1430,8 @@ static int dso__kernel_module_get_build_id(struct dso *self) const char *name = self->short_name + 1; snprintf(filename, sizeof(filename), - "/sys/module/%.*s/notes/.note.gnu.build-id", - (int)strlen(name - 1), name); + "%s/sys/module/%.*s/notes/.note.gnu.build-id", + root_dir, (int)strlen(name) - 1, name); if (sysfs__read_build_id(filename, self->build_id, sizeof(self->build_id)) == 0) @@ -1394,26 +1440,33 @@ static int dso__kernel_module_get_build_id(struct dso *self) return 0; } -static int map_groups__set_modules_path_dir(struct map_groups *self, char *dirname) +static int map_groups__set_modules_path_dir(struct map_groups *self, + const char *dir_name) { struct dirent *dent; - DIR *dir = opendir(dirname); + DIR *dir = opendir(dir_name); if (!dir) { - pr_debug("%s: cannot open %s dir\n", __func__, dirname); + pr_debug("%s: cannot open %s dir\n", __func__, dir_name); return -1; } while ((dent = readdir(dir)) != NULL) { char path[PATH_MAX]; + struct stat st; - if (dent->d_type == DT_DIR) { + /*sshfs might return bad dent->d_type, so we have to stat*/ + sprintf(path, "%s/%s", dir_name, dent->d_name); + if (stat(path, &st)) + continue; + + if (S_ISDIR(st.st_mode)) { if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, "..")) continue; snprintf(path, sizeof(path), "%s/%s", - dirname, dent->d_name); + dir_name, dent->d_name); if (map_groups__set_modules_path_dir(self, path) < 0) goto failure; } else { @@ -1433,13 +1486,13 @@ static int map_groups__set_modules_path_dir(struct map_groups *self, 
char *dirna continue; snprintf(path, sizeof(path), "%s/%s", - dirname, dent->d_name); + dir_name, dent->d_name); long_name = strdup(path); if (long_name == NULL) goto failure; dso__set_long_name(map->dso, long_name); - dso__kernel_module_get_build_id(map->dso); + dso__kernel_module_get_build_id(map->dso, ""); } } @@ -1449,18 +1502,47 @@ failure: return -1; } -static int map_groups__set_modules_path(struct map_groups *self) +static char *get_kernel_version(const char *root_dir) { - struct utsname uts; + char version[PATH_MAX]; + FILE *file; + char *name, *tmp; + const char *prefix = "Linux version "; + + sprintf(version, "%s/proc/version", root_dir); + file = fopen(version, "r"); + if (!file) + return NULL; + + version[0] = '\0'; + tmp = fgets(version, sizeof(version), file); + fclose(file); + + name = strstr(version, prefix); + if (!name) + return NULL; + name += strlen(prefix); + tmp = strchr(name, ' '); + if (tmp) + *tmp = '\0'; + + return strdup(name); +} + +static int machine__set_modules_path(struct machine *self) +{ + char *version; char modules_path[PATH_MAX]; - if (uname(&uts) < 0) + version = get_kernel_version(self->root_dir); + if (!version) return -1; - snprintf(modules_path, sizeof(modules_path), "/lib/modules/%s/kernel", - uts.release); + snprintf(modules_path, sizeof(modules_path), "%s/lib/modules/%s/kernel", + self->root_dir, version); + free(version); - return map_groups__set_modules_path_dir(self, modules_path); + return map_groups__set_modules_path_dir(&self->kmaps, modules_path); } /* @@ -1470,8 +1552,8 @@ static int map_groups__set_modules_path(struct map_groups *self) */ static struct map *map__new2(u64 start, struct dso *dso, enum map_type type) { - struct map *self = zalloc(sizeof(*self) + - (dso->kernel ? sizeof(struct kmap) : 0)); + struct map *self = calloc(1, (sizeof(*self) + + (dso->kernel ? 
sizeof(struct kmap) : 0))); if (self != NULL) { /* * ->end will be filled after we load all the symbols @@ -1482,11 +1564,11 @@ static struct map *map__new2(u64 start, struct dso *dso, enum map_type type) return self; } -struct map *map_groups__new_module(struct map_groups *self, u64 start, - const char *filename) +struct map *machine__new_module(struct machine *self, u64 start, + const char *filename) { struct map *map; - struct dso *dso = __dsos__findnew(&dsos__kernel, filename); + struct dso *dso = __dsos__findnew(&self->kernel_dsos, filename); if (dso == NULL) return NULL; @@ -1495,18 +1577,31 @@ struct map *map_groups__new_module(struct map_groups *self, u64 start, if (map == NULL) return NULL; - dso->origin = DSO__ORIG_KMODULE; - map_groups__insert(self, map); + if (machine__is_host(self)) + dso->origin = DSO__ORIG_KMODULE; + else + dso->origin = DSO__ORIG_GUEST_KMODULE; + map_groups__insert(&self->kmaps, map); return map; } -static int map_groups__create_modules(struct map_groups *self) +static int machine__create_modules(struct machine *self) { char *line = NULL; size_t n; - FILE *file = fopen("/proc/modules", "r"); + FILE *file; struct map *map; + const char *modules; + char path[PATH_MAX]; + + if (machine__is_default_guest(self)) + modules = symbol_conf.default_guest_modules; + else { + sprintf(path, "%s/proc/modules", self->root_dir); + modules = path; + } + file = fopen(modules, "r"); if (file == NULL) return -1; @@ -1538,16 +1633,16 @@ static int map_groups__create_modules(struct map_groups *self) *sep = '\0'; snprintf(name, sizeof(name), "[%s]", line); - map = map_groups__new_module(self, start, name); + map = machine__new_module(self, start, name); if (map == NULL) goto out_delete_line; - dso__kernel_module_get_build_id(map->dso); + dso__kernel_module_get_build_id(map->dso, self->root_dir); } free(line); fclose(file); - return map_groups__set_modules_path(self); + return machine__set_modules_path(self); out_delete_line: free(line); @@ -1714,8 +1809,56 @@ out_fixup: return err; } -LIST_HEAD(dsos__user); -LIST_HEAD(dsos__kernel); +static int dso__load_guest_kernel_sym(struct dso *self, struct map *map, + symbol_filter_t filter) +{ + int err; + const char *kallsyms_filename = NULL; + struct machine *machine; + char path[PATH_MAX]; + + if (!map->groups) { + pr_debug("Guest kernel map hasn't the point to groups\n"); + return -1; + } + machine = map->groups->machine; + + if (machine__is_default_guest(machine)) { + /* + * if the user specified a vmlinux filename, use it and only + * it, reporting errors to the user if it cannot be used. 
+ * Or use file guest_kallsyms inputted by user on commandline + */ + if (symbol_conf.default_guest_vmlinux_name != NULL) { + err = dso__load_vmlinux(self, map, + symbol_conf.default_guest_vmlinux_name, filter); + goto out_try_fixup; + } + + kallsyms_filename = symbol_conf.default_guest_kallsyms; + if (!kallsyms_filename) + return -1; + } else { + sprintf(path, "%s/proc/kallsyms", machine->root_dir); + kallsyms_filename = path; + } + + err = dso__load_kallsyms(self, kallsyms_filename, map, filter); + if (err > 0) + pr_debug("Using %s for symbols\n", kallsyms_filename); + +out_try_fixup: + if (err > 0) { + if (kallsyms_filename != NULL) { + machine__mmap_name(machine, path, sizeof(path)); + dso__set_long_name(self, strdup(path)); + } + map__fixup_start(map); + map__fixup_end(map); + } + + return err; +} static void dsos__add(struct list_head *head, struct dso *dso) { @@ -1747,21 +1890,32 @@ struct dso *__dsos__findnew(struct list_head *head, const char *name) return dso; } -static void __dsos__fprintf(struct list_head *head, FILE *fp) +size_t __dsos__fprintf(struct list_head *head, FILE *fp) { struct dso *pos; + size_t ret = 0; list_for_each_entry(pos, head, node) { int i; for (i = 0; i < MAP__NR_TYPES; ++i) - dso__fprintf(pos, i, fp); + ret += dso__fprintf(pos, i, fp); } + + return ret; } -void dsos__fprintf(FILE *fp) +size_t machines__fprintf_dsos(struct rb_root *self, FILE *fp) { - __dsos__fprintf(&dsos__kernel, fp); - __dsos__fprintf(&dsos__user, fp); + struct rb_node *nd; + size_t ret = 0; + + for (nd = rb_first(self); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + ret += __dsos__fprintf(&pos->kernel_dsos, fp); + ret += __dsos__fprintf(&pos->user_dsos, fp); + } + + return ret; } static size_t __dsos__fprintf_buildid(struct list_head *head, FILE *fp, @@ -1779,10 +1933,17 @@ static size_t __dsos__fprintf_buildid(struct list_head *head, FILE *fp, return ret; } -size_t dsos__fprintf_buildid(FILE *fp, bool with_hits) +size_t machines__fprintf_dsos_buildid(struct rb_root *self, FILE *fp, bool with_hits) { - return (__dsos__fprintf_buildid(&dsos__kernel, fp, with_hits) + - __dsos__fprintf_buildid(&dsos__user, fp, with_hits)); + struct rb_node *nd; + size_t ret = 0; + + for (nd = rb_first(self); nd; nd = rb_next(nd)) { + struct machine *pos = rb_entry(nd, struct machine, rb_node); + ret += __dsos__fprintf_buildid(&pos->kernel_dsos, fp, with_hits); + ret += __dsos__fprintf_buildid(&pos->user_dsos, fp, with_hits); + } + return ret; } struct dso *dso__new_kernel(const char *name) @@ -1791,55 +1952,98 @@ struct dso *dso__new_kernel(const char *name) if (self != NULL) { dso__set_short_name(self, "[kernel]"); - self->kernel = 1; + self->kernel = DSO_TYPE_KERNEL; } return self; } -void dso__read_running_kernel_build_id(struct dso *self) +static struct dso *dso__new_guest_kernel(struct machine *machine, + const char *name) { - if (sysfs__read_build_id("/sys/kernel/notes", self->build_id, + char bf[PATH_MAX]; + struct dso *self = dso__new(name ?: machine__mmap_name(machine, bf, sizeof(bf))); + + if (self != NULL) { + dso__set_short_name(self, "[guest.kernel]"); + self->kernel = DSO_TYPE_GUEST_KERNEL; + } + + return self; +} + +void dso__read_running_kernel_build_id(struct dso *self, struct machine *machine) +{ + char path[PATH_MAX]; + + if (machine__is_default_guest(machine)) + return; + sprintf(path, "%s/sys/kernel/notes", machine->root_dir); + if (sysfs__read_build_id(path, self->build_id, sizeof(self->build_id)) == 0) self->has_build_id = true; } -static 
struct dso *dsos__create_kernel(const char *vmlinux) +static struct dso *machine__create_kernel(struct machine *self) { - struct dso *kernel = dso__new_kernel(vmlinux); + const char *vmlinux_name = NULL; + struct dso *kernel; - if (kernel != NULL) { - dso__read_running_kernel_build_id(kernel); - dsos__add(&dsos__kernel, kernel); + if (machine__is_host(self)) { + vmlinux_name = symbol_conf.vmlinux_name; + kernel = dso__new_kernel(vmlinux_name); + } else { + if (machine__is_default_guest(self)) + vmlinux_name = symbol_conf.default_guest_vmlinux_name; + kernel = dso__new_guest_kernel(self, vmlinux_name); } + if (kernel != NULL) { + dso__read_running_kernel_build_id(kernel, self); + dsos__add(&self->kernel_dsos, kernel); + } return kernel; } -int __map_groups__create_kernel_maps(struct map_groups *self, - struct map *vmlinux_maps[MAP__NR_TYPES], - struct dso *kernel) +int __machine__create_kernel_maps(struct machine *self, struct dso *kernel) { enum map_type type; for (type = 0; type < MAP__NR_TYPES; ++type) { struct kmap *kmap; - vmlinux_maps[type] = map__new2(0, kernel, type); - if (vmlinux_maps[type] == NULL) + self->vmlinux_maps[type] = map__new2(0, kernel, type); + if (self->vmlinux_maps[type] == NULL) return -1; - vmlinux_maps[type]->map_ip = - vmlinux_maps[type]->unmap_ip = identity__map_ip; + self->vmlinux_maps[type]->map_ip = + self->vmlinux_maps[type]->unmap_ip = identity__map_ip; - kmap = map__kmap(vmlinux_maps[type]); - kmap->kmaps = self; - map_groups__insert(self, vmlinux_maps[type]); + kmap = map__kmap(self->vmlinux_maps[type]); + kmap->kmaps = &self->kmaps; + map_groups__insert(&self->kmaps, self->vmlinux_maps[type]); } return 0; } +int machine__create_kernel_maps(struct machine *self) +{ + struct dso *kernel = machine__create_kernel(self); + + if (kernel == NULL || + __machine__create_kernel_maps(self, kernel) < 0) + return -1; + + if (symbol_conf.use_modules && machine__create_modules(self) < 0) + pr_debug("Problems creating module maps, continuing anyway...\n"); + /* + * Now that we have all the maps created, just set the ->end of them: + */ + map_groups__fixup_end(&self->kmaps); + return 0; +} + static void vmlinux_path__exit(void) { while (--vmlinux_path__nr_entries >= 0) { @@ -1895,6 +2099,17 @@ out_fail: return -1; } +size_t vmlinux_path__fprintf(FILE *fp) +{ + int i; + size_t printed = 0; + + for (i = 0; i < vmlinux_path__nr_entries; ++i) + printed += fprintf(fp, "[%d] %s\n", i, vmlinux_path[i]); + + return printed; +} + static int setup_list(struct strlist **list, const char *list_str, const char *list_name) { @@ -1945,22 +2160,129 @@ out_free_comm_list: return -1; } -int map_groups__create_kernel_maps(struct map_groups *self, - struct map *vmlinux_maps[MAP__NR_TYPES]) +int machines__create_kernel_maps(struct rb_root *self, pid_t pid) { - struct dso *kernel = dsos__create_kernel(symbol_conf.vmlinux_name); + struct machine *machine = machines__findnew(self, pid); - if (kernel == NULL) + if (machine == NULL) return -1; - if (__map_groups__create_kernel_maps(self, vmlinux_maps, kernel) < 0) - return -1; + return machine__create_kernel_maps(machine); +} - if (symbol_conf.use_modules && map_groups__create_modules(self) < 0) - pr_debug("Problems creating module maps, continuing anyway...\n"); - /* - * Now that we have all the maps created, just set the ->end of them: - */ - map_groups__fixup_end(self); - return 0; +static int hex(char ch) +{ + if ((ch >= '0') && (ch <= '9')) + return ch - '0'; + if ((ch >= 'a') && (ch <= 'f')) + return ch - 'a' + 10; + if ((ch >= 'A') && (ch 
<= 'F')) + return ch - 'A' + 10; + return -1; +} + +/* + * While we find nice hex chars, build a long_val. + * Return number of chars processed. + */ +int hex2u64(const char *ptr, u64 *long_val) +{ + const char *p = ptr; + *long_val = 0; + + while (*p) { + const int hex_val = hex(*p); + + if (hex_val < 0) + break; + + *long_val = (*long_val << 4) | hex_val; + p++; + } + + return p - ptr; +} + +char *strxfrchar(char *s, char from, char to) +{ + char *p = s; + + while ((p = strchr(p, from)) != NULL) + *p++ = to; + + return s; +} + +int machines__create_guest_kernel_maps(struct rb_root *self) +{ + int ret = 0; + struct dirent **namelist = NULL; + int i, items = 0; + char path[PATH_MAX]; + pid_t pid; + + if (symbol_conf.default_guest_vmlinux_name || + symbol_conf.default_guest_modules || + symbol_conf.default_guest_kallsyms) { + machines__create_kernel_maps(self, DEFAULT_GUEST_KERNEL_ID); + } + + if (symbol_conf.guestmount) { + items = scandir(symbol_conf.guestmount, &namelist, NULL, NULL); + if (items <= 0) + return -ENOENT; + for (i = 0; i < items; i++) { + if (!isdigit(namelist[i]->d_name[0])) { + /* Filter out . and .. */ + continue; + } + pid = atoi(namelist[i]->d_name); + sprintf(path, "%s/%s/proc/kallsyms", + symbol_conf.guestmount, + namelist[i]->d_name); + ret = access(path, R_OK); + if (ret) { + pr_debug("Can't access file %s\n", path); + goto failure; + } + machines__create_kernel_maps(self, pid); + } +failure: + free(namelist); + } + + return ret; +} + +int machine__load_kallsyms(struct machine *self, const char *filename, + enum map_type type, symbol_filter_t filter) +{ + struct map *map = self->vmlinux_maps[type]; + int ret = dso__load_kallsyms(map->dso, filename, map, filter); + + if (ret > 0) { + dso__set_loaded(map->dso, type); + /* + * Since /proc/kallsyms will have multiple sessions for the + * kernel, with modules between them, fixup the end of all + * sections. 
+ */ + __map_groups__fixup_end(&self->kmaps, type); + } + + return ret; +} + +int machine__load_vmlinux_path(struct machine *self, enum map_type type, + symbol_filter_t filter) +{ + struct map *map = self->vmlinux_maps[type]; + int ret = dso__load_vmlinux_path(map->dso, map, filter); + + if (ret > 0) { + dso__set_loaded(map->dso, type); + map__reloc_vmlinux(map); + } + + return ret; } diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h index f30a37428919..032469e41876 100644 --- a/tools/perf/util/symbol.h +++ b/tools/perf/util/symbol.h @@ -3,10 +3,11 @@ #include <linux/types.h> #include <stdbool.h> -#include "types.h" +#include <stdint.h> +#include "map.h" #include <linux/list.h> #include <linux/rbtree.h> -#include "event.h" +#include <stdio.h> #define DEBUG_CACHE_DIR ".debug" @@ -29,6 +30,9 @@ static inline char *bfd_demangle(void __used *v, const char __used *c, #endif #endif +int hex2u64(const char *ptr, u64 *val); +char *strxfrchar(char *s, char from, char to); + /* * libelf 0.8.x and earlier do not support ELF_C_READ_MMAP; * for newer versions we can use mmap to reduce memory usage: @@ -44,10 +48,13 @@ static inline char *bfd_demangle(void __used *v, const char __used *c, #define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */ #endif +#define BUILD_ID_SIZE 20 + struct symbol { struct rb_node rb_node; u64 start; u64 end; + u16 namelen; char name[0]; }; @@ -63,10 +70,15 @@ struct symbol_conf { show_nr_samples, use_callchain, exclude_other, - full_paths; + full_paths, + show_cpu_utilization; const char *vmlinux_name, *field_sep; - char *dso_list_str, + const char *default_guest_vmlinux_name, + *default_guest_kallsyms, + *default_guest_modules; + const char *guestmount; + const char *dso_list_str, *comm_list_str, *sym_list_str, *col_width_list_str; @@ -88,6 +100,11 @@ struct ref_reloc_sym { u64 unrelocated_addr; }; +struct map_symbol { + struct map *map; + struct symbol *sym; +}; + struct addr_location { struct thread *thread; struct map *map; @@ -95,6 +112,13 @@ struct addr_location { u64 addr; char level; bool filtered; + unsigned int cpumode; +}; + +enum dso_kernel_type { + DSO_TYPE_USER = 0, + DSO_TYPE_KERNEL, + DSO_TYPE_GUEST_KERNEL }; struct dso { @@ -104,8 +128,9 @@ struct dso { u8 adjust_symbols:1; u8 slen_calculated:1; u8 has_build_id:1; - u8 kernel:1; + enum dso_kernel_type kernel; u8 hit:1; + u8 annotate_warned:1; unsigned char origin; u8 sorted_by_name; u8 loaded; @@ -131,42 +156,65 @@ static inline void dso__set_loaded(struct dso *self, enum map_type type) void dso__sort_by_name(struct dso *self, enum map_type type); -extern struct list_head dsos__user, dsos__kernel; - struct dso *__dsos__findnew(struct list_head *head, const char *name); -static inline struct dso *dsos__findnew(const char *name) -{ - return __dsos__findnew(&dsos__user, name); -} - int dso__load(struct dso *self, struct map *map, symbol_filter_t filter); int dso__load_vmlinux_path(struct dso *self, struct map *map, symbol_filter_t filter); int dso__load_kallsyms(struct dso *self, const char *filename, struct map *map, symbol_filter_t filter); -void dsos__fprintf(FILE *fp); -size_t dsos__fprintf_buildid(FILE *fp, bool with_hits); +int machine__load_kallsyms(struct machine *self, const char *filename, + enum map_type type, symbol_filter_t filter); +int machine__load_vmlinux_path(struct machine *self, enum map_type type, + symbol_filter_t filter); + +size_t __dsos__fprintf(struct list_head *head, FILE *fp); + +size_t machines__fprintf_dsos(struct rb_root *self, FILE *fp); +size_t 
machines__fprintf_dsos_buildid(struct rb_root *self, FILE *fp, bool with_hits); size_t dso__fprintf_buildid(struct dso *self, FILE *fp); size_t dso__fprintf(struct dso *self, enum map_type type, FILE *fp); + +enum dso_origin { + DSO__ORIG_KERNEL = 0, + DSO__ORIG_GUEST_KERNEL, + DSO__ORIG_JAVA_JIT, + DSO__ORIG_BUILD_ID_CACHE, + DSO__ORIG_FEDORA, + DSO__ORIG_UBUNTU, + DSO__ORIG_BUILDID, + DSO__ORIG_DSO, + DSO__ORIG_GUEST_KMODULE, + DSO__ORIG_KMODULE, + DSO__ORIG_NOT_FOUND, +}; + char dso__symtab_origin(const struct dso *self); void dso__set_long_name(struct dso *self, char *name); void dso__set_build_id(struct dso *self, void *build_id); -void dso__read_running_kernel_build_id(struct dso *self); +void dso__read_running_kernel_build_id(struct dso *self, struct machine *machine); struct symbol *dso__find_symbol(struct dso *self, enum map_type type, u64 addr); struct symbol *dso__find_symbol_by_name(struct dso *self, enum map_type type, const char *name); int filename__read_build_id(const char *filename, void *bf, size_t size); int sysfs__read_build_id(const char *filename, void *bf, size_t size); -bool dsos__read_build_ids(bool with_hits); +bool __dsos__read_build_ids(struct list_head *head, bool with_hits); int build_id__sprintf(const u8 *self, int len, char *bf); int kallsyms__parse(const char *filename, void *arg, int (*process_symbol)(void *arg, const char *name, char type, u64 start)); +int __machine__create_kernel_maps(struct machine *self, struct dso *kernel); +int machine__create_kernel_maps(struct machine *self); + +int machines__create_kernel_maps(struct rb_root *self, pid_t pid); +int machines__create_guest_kernel_maps(struct rb_root *self); + int symbol__init(void); bool symbol_type__is_a(char symbol_type, enum map_type map_type); +size_t vmlinux_path__fprintf(FILE *fp); + #endif /* __PERF_SYMBOL */ diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c index fa968312ee7d..1f7ecd47f499 100644 --- a/tools/perf/util/thread.c +++ b/tools/perf/util/thread.c @@ -7,13 +7,35 @@ #include "util.h" #include "debug.h" -void map_groups__init(struct map_groups *self) +int find_all_tid(int pid, pid_t ** all_tid) { + char name[256]; + int items; + struct dirent **namelist = NULL; + int ret = 0; int i; - for (i = 0; i < MAP__NR_TYPES; ++i) { - self->maps[i] = RB_ROOT; - INIT_LIST_HEAD(&self->removed_maps[i]); + + sprintf(name, "/proc/%d/task", pid); + items = scandir(name, &namelist, NULL, NULL); + if (items <= 0) + return -ENOENT; + *all_tid = malloc(sizeof(pid_t) * items); + if (!*all_tid) { + ret = -ENOMEM; + goto failure; } + + for (i = 0; i < items; i++) + (*all_tid)[i] = atoi(namelist[i]->d_name); + + ret = items; + +failure: + for (i=0; i<items; i++) + free(namelist[i]); + free(namelist); + + return ret; } static struct thread *thread__new(pid_t pid) @@ -31,28 +53,6 @@ static struct thread *thread__new(pid_t pid) return self; } -static void map_groups__flush(struct map_groups *self) -{ - int type; - - for (type = 0; type < MAP__NR_TYPES; type++) { - struct rb_root *root = &self->maps[type]; - struct rb_node *next = rb_first(root); - - while (next) { - struct map *pos = rb_entry(next, struct map, rb_node); - next = rb_next(&pos->rb_node); - rb_erase(&pos->rb_node, root); - /* - * We may have references to this map, for - * instance in some hist_entry instances, so - * just move them to a separate list. 
- */ - list_add_tail(&pos->node, &self->removed_maps[pos->type]); - } - } -} - int thread__set_comm(struct thread *self, const char *comm) { int err; @@ -79,69 +79,10 @@ int thread__comm_len(struct thread *self) return self->comm_len; } -size_t __map_groups__fprintf_maps(struct map_groups *self, - enum map_type type, FILE *fp) -{ - size_t printed = fprintf(fp, "%s:\n", map_type__name[type]); - struct rb_node *nd; - - for (nd = rb_first(&self->maps[type]); nd; nd = rb_next(nd)) { - struct map *pos = rb_entry(nd, struct map, rb_node); - printed += fprintf(fp, "Map:"); - printed += map__fprintf(pos, fp); - if (verbose > 2) { - printed += dso__fprintf(pos->dso, type, fp); - printed += fprintf(fp, "--\n"); - } - } - - return printed; -} - -size_t map_groups__fprintf_maps(struct map_groups *self, FILE *fp) -{ - size_t printed = 0, i; - for (i = 0; i < MAP__NR_TYPES; ++i) - printed += __map_groups__fprintf_maps(self, i, fp); - return printed; -} - -static size_t __map_groups__fprintf_removed_maps(struct map_groups *self, - enum map_type type, FILE *fp) -{ - struct map *pos; - size_t printed = 0; - - list_for_each_entry(pos, &self->removed_maps[type], node) { - printed += fprintf(fp, "Map:"); - printed += map__fprintf(pos, fp); - if (verbose > 1) { - printed += dso__fprintf(pos->dso, type, fp); - printed += fprintf(fp, "--\n"); - } - } - return printed; -} - -static size_t map_groups__fprintf_removed_maps(struct map_groups *self, FILE *fp) -{ - size_t printed = 0, i; - for (i = 0; i < MAP__NR_TYPES; ++i) - printed += __map_groups__fprintf_removed_maps(self, i, fp); - return printed; -} - -static size_t map_groups__fprintf(struct map_groups *self, FILE *fp) -{ - size_t printed = map_groups__fprintf_maps(self, fp); - printed += fprintf(fp, "Removed maps:\n"); - return printed + map_groups__fprintf_removed_maps(self, fp); -} - static size_t thread__fprintf(struct thread *self, FILE *fp) { return fprintf(fp, "Thread %d %s\n", self->pid, self->comm) + - map_groups__fprintf(&self->mg, fp); + map_groups__fprintf(&self->mg, verbose, fp); } struct thread *perf_session__findnew(struct perf_session *self, pid_t pid) @@ -183,127 +124,12 @@ struct thread *perf_session__findnew(struct perf_session *self, pid_t pid) return th; } -static int map_groups__fixup_overlappings(struct map_groups *self, - struct map *map) -{ - struct rb_root *root = &self->maps[map->type]; - struct rb_node *next = rb_first(root); - - while (next) { - struct map *pos = rb_entry(next, struct map, rb_node); - next = rb_next(&pos->rb_node); - - if (!map__overlap(pos, map)) - continue; - - if (verbose >= 2) { - fputs("overlapping maps:\n", stderr); - map__fprintf(map, stderr); - map__fprintf(pos, stderr); - } - - rb_erase(&pos->rb_node, root); - /* - * We may have references to this map, for instance in some - * hist_entry instances, so just move them to a separate - * list. 
- */ - list_add_tail(&pos->node, &self->removed_maps[map->type]); - /* - * Now check if we need to create new maps for areas not - * overlapped by the new map: - */ - if (map->start > pos->start) { - struct map *before = map__clone(pos); - - if (before == NULL) - return -ENOMEM; - - before->end = map->start - 1; - map_groups__insert(self, before); - if (verbose >= 2) - map__fprintf(before, stderr); - } - - if (map->end < pos->end) { - struct map *after = map__clone(pos); - - if (after == NULL) - return -ENOMEM; - - after->start = map->end + 1; - map_groups__insert(self, after); - if (verbose >= 2) - map__fprintf(after, stderr); - } - } - - return 0; -} - -void maps__insert(struct rb_root *maps, struct map *map) -{ - struct rb_node **p = &maps->rb_node; - struct rb_node *parent = NULL; - const u64 ip = map->start; - struct map *m; - - while (*p != NULL) { - parent = *p; - m = rb_entry(parent, struct map, rb_node); - if (ip < m->start) - p = &(*p)->rb_left; - else - p = &(*p)->rb_right; - } - - rb_link_node(&map->rb_node, parent, p); - rb_insert_color(&map->rb_node, maps); -} - -struct map *maps__find(struct rb_root *maps, u64 ip) -{ - struct rb_node **p = &maps->rb_node; - struct rb_node *parent = NULL; - struct map *m; - - while (*p != NULL) { - parent = *p; - m = rb_entry(parent, struct map, rb_node); - if (ip < m->start) - p = &(*p)->rb_left; - else if (ip > m->end) - p = &(*p)->rb_right; - else - return m; - } - - return NULL; -} - void thread__insert_map(struct thread *self, struct map *map) { - map_groups__fixup_overlappings(&self->mg, map); + map_groups__fixup_overlappings(&self->mg, map, verbose, stderr); map_groups__insert(&self->mg, map); } -/* - * XXX This should not really _copy_ te maps, but refcount them. - */ -static int map_groups__clone(struct map_groups *self, - struct map_groups *parent, enum map_type type) -{ - struct rb_node *nd; - for (nd = rb_first(&parent->maps[type]); nd; nd = rb_next(nd)) { - struct map *map = rb_entry(nd, struct map, rb_node); - struct map *new = map__clone(map); - if (new == NULL) - return -ENOMEM; - map_groups__insert(self, new); - } - return 0; -} - int thread__fork(struct thread *self, struct thread *parent) { int i; @@ -336,15 +162,3 @@ size_t perf_session__fprintf(struct perf_session *self, FILE *fp) return ret; } - -struct symbol *map_groups__find_symbol(struct map_groups *self, - enum map_type type, u64 addr, - symbol_filter_t filter) -{ - struct map *map = map_groups__find(self, type, addr); - - if (map != NULL) - return map__find_symbol(map, map->map_ip(map, addr), filter); - - return NULL; -} diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h index dcf70303e58e..1dfd9ff8bdcd 100644 --- a/tools/perf/util/thread.h +++ b/tools/perf/util/thread.h @@ -5,14 +5,6 @@ #include <unistd.h> #include "symbol.h" -struct map_groups { - struct rb_root maps[MAP__NR_TYPES]; - struct list_head removed_maps[MAP__NR_TYPES]; -}; - -size_t __map_groups__fprintf_maps(struct map_groups *self, - enum map_type type, FILE *fp); - struct thread { struct rb_node rb_node; struct map_groups mg; @@ -23,29 +15,16 @@ struct thread { int comm_len; }; -void map_groups__init(struct map_groups *self); +struct perf_session; + +int find_all_tid(int pid, pid_t ** all_tid); int thread__set_comm(struct thread *self, const char *comm); int thread__comm_len(struct thread *self); struct thread *perf_session__findnew(struct perf_session *self, pid_t pid); void thread__insert_map(struct thread *self, struct map *map); int thread__fork(struct thread *self, struct thread 
*parent); -size_t map_groups__fprintf_maps(struct map_groups *self, FILE *fp); size_t perf_session__fprintf(struct perf_session *self, FILE *fp); -void maps__insert(struct rb_root *maps, struct map *map); -struct map *maps__find(struct rb_root *maps, u64 addr); - -static inline void map_groups__insert(struct map_groups *self, struct map *map) -{ - maps__insert(&self->maps[map->type], map); -} - -static inline struct map *map_groups__find(struct map_groups *self, - enum map_type type, u64 addr) -{ - return maps__find(&self->maps[type], addr); -} - static inline struct map *thread__find_map(struct thread *self, enum map_type type, u64 addr) { @@ -54,34 +33,12 @@ static inline struct map *thread__find_map(struct thread *self, void thread__find_addr_map(struct thread *self, struct perf_session *session, u8 cpumode, - enum map_type type, u64 addr, + enum map_type type, pid_t pid, u64 addr, struct addr_location *al); void thread__find_addr_location(struct thread *self, struct perf_session *session, u8 cpumode, - enum map_type type, u64 addr, + enum map_type type, pid_t pid, u64 addr, struct addr_location *al, symbol_filter_t filter); -struct symbol *map_groups__find_symbol(struct map_groups *self, - enum map_type type, u64 addr, - symbol_filter_t filter); - -static inline struct symbol *map_groups__find_function(struct map_groups *self, - u64 addr, - symbol_filter_t filter) -{ - return map_groups__find_symbol(self, MAP__FUNCTION, addr, filter); -} - -struct map *map_groups__find_by_name(struct map_groups *self, - enum map_type type, const char *name); - -int __map_groups__create_kernel_maps(struct map_groups *self, - struct map *vmlinux_maps[MAP__NR_TYPES], - struct dso *kernel); -int map_groups__create_kernel_maps(struct map_groups *self, - struct map *vmlinux_maps[MAP__NR_TYPES]); - -struct map *map_groups__new_module(struct map_groups *self, u64 start, - const char *filename); #endif /* __PERF_THREAD_H */ diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c index 5ea8973ad331..b1572601286c 100644 --- a/tools/perf/util/trace-event-info.c +++ b/tools/perf/util/trace-event-info.c @@ -154,10 +154,17 @@ static void put_tracing_file(char *file) free(file); } +static ssize_t calc_data_size; + static ssize_t write_or_die(const void *buf, size_t len) { int ret; + if (calc_data_size) { + calc_data_size += len; + return len; + } + ret = write(output_fd, buf, len); if (ret < 0) die("writing to '%s'", output_file); @@ -480,6 +487,17 @@ get_tracepoints_path(struct perf_event_attr *pattrs, int nb_events) return nr_tracepoints > 0 ? 
path.next : NULL; } +bool have_tracepoints(struct perf_event_attr *pattrs, int nb_events) +{ + int i; + + for (i = 0; i < nb_events; i++) + if (pattrs[i].type == PERF_TYPE_TRACEPOINT) + return true; + + return false; +} + int read_tracing_data(int fd, struct perf_event_attr *pattrs, int nb_events) { char buf[BUFSIZ]; @@ -526,3 +544,20 @@ int read_tracing_data(int fd, struct perf_event_attr *pattrs, int nb_events) return 0; } + +ssize_t read_tracing_data_size(int fd, struct perf_event_attr *pattrs, + int nb_events) +{ + ssize_t size; + int err = 0; + + calc_data_size = 1; + err = read_tracing_data(fd, pattrs, nb_events); + size = calc_data_size - 1; + calc_data_size = 0; + + if (err < 0) + return err; + + return size; +} diff --git a/tools/perf/util/trace-event-parse.c b/tools/perf/util/trace-event-parse.c index 613c9cc90570..73a02223c629 100644 --- a/tools/perf/util/trace-event-parse.c +++ b/tools/perf/util/trace-event-parse.c @@ -37,10 +37,12 @@ int header_page_ts_offset; int header_page_ts_size; int header_page_size_offset; int header_page_size_size; +int header_page_overwrite_offset; +int header_page_overwrite_size; int header_page_data_offset; int header_page_data_size; -int latency_format; +bool latency_format; static char *input_buf; static unsigned long long input_buf_ptr; @@ -628,23 +630,32 @@ static int test_type(enum event_type type, enum event_type expect) return 0; } -static int test_type_token(enum event_type type, char *token, - enum event_type expect, const char *expect_tok) +static int __test_type_token(enum event_type type, char *token, + enum event_type expect, const char *expect_tok, + bool warn) { if (type != expect) { - warning("Error: expected type %d but read %d", - expect, type); + if (warn) + warning("Error: expected type %d but read %d", + expect, type); return -1; } if (strcmp(token, expect_tok) != 0) { - warning("Error: expected '%s' but read '%s'", - expect_tok, token); + if (warn) + warning("Error: expected '%s' but read '%s'", + expect_tok, token); return -1; } return 0; } +static int test_type_token(enum event_type type, char *token, + enum event_type expect, const char *expect_tok) +{ + return __test_type_token(type, token, expect, expect_tok, true); +} + static int __read_expect_type(enum event_type expect, char **tok, int newline_ok) { enum event_type type; @@ -661,7 +672,8 @@ static int read_expect_type(enum event_type expect, char **tok) return __read_expect_type(expect, tok, 1); } -static int __read_expected(enum event_type expect, const char *str, int newline_ok) +static int __read_expected(enum event_type expect, const char *str, + int newline_ok, bool warn) { enum event_type type; char *token; @@ -672,7 +684,7 @@ static int __read_expected(enum event_type expect, const char *str, int newline_ else type = read_token_item(&token); - ret = test_type_token(type, token, expect, str); + ret = __test_type_token(type, token, expect, str, warn); free_token(token); @@ -681,12 +693,12 @@ static int __read_expected(enum event_type expect, const char *str, int newline_ static int read_expected(enum event_type expect, const char *str) { - return __read_expected(expect, str, 1); + return __read_expected(expect, str, 1, true); } static int read_expected_item(enum event_type expect, const char *str) { - return __read_expected(expect, str, 0); + return __read_expected(expect, str, 0, true); } static char *event_read_name(void) @@ -744,7 +756,7 @@ static int field_is_string(struct format_field *field) static int field_is_dynamic(struct format_field *field) { - if 
(!strcmp(field->type, "__data_loc")) + if (!strncmp(field->type, "__data_loc", 10)) return 1; return 0; @@ -3087,88 +3099,6 @@ static void print_args(struct print_arg *args) } } -static void parse_header_field(const char *field, - int *offset, int *size) -{ - char *token; - int type; - - if (read_expected(EVENT_ITEM, "field") < 0) - return; - if (read_expected(EVENT_OP, ":") < 0) - return; - - /* type */ - if (read_expect_type(EVENT_ITEM, &token) < 0) - goto fail; - free_token(token); - - if (read_expected(EVENT_ITEM, field) < 0) - return; - if (read_expected(EVENT_OP, ";") < 0) - return; - if (read_expected(EVENT_ITEM, "offset") < 0) - return; - if (read_expected(EVENT_OP, ":") < 0) - return; - if (read_expect_type(EVENT_ITEM, &token) < 0) - goto fail; - *offset = atoi(token); - free_token(token); - if (read_expected(EVENT_OP, ";") < 0) - return; - if (read_expected(EVENT_ITEM, "size") < 0) - return; - if (read_expected(EVENT_OP, ":") < 0) - return; - if (read_expect_type(EVENT_ITEM, &token) < 0) - goto fail; - *size = atoi(token); - free_token(token); - if (read_expected(EVENT_OP, ";") < 0) - return; - type = read_token(&token); - if (type != EVENT_NEWLINE) { - /* newer versions of the kernel have a "signed" type */ - if (type != EVENT_ITEM) - goto fail; - - if (strcmp(token, "signed") != 0) - goto fail; - - free_token(token); - - if (read_expected(EVENT_OP, ":") < 0) - return; - - if (read_expect_type(EVENT_ITEM, &token)) - goto fail; - - free_token(token); - if (read_expected(EVENT_OP, ";") < 0) - return; - - if (read_expect_type(EVENT_NEWLINE, &token)) - goto fail; - } - fail: - free_token(token); -} - -int parse_header_page(char *buf, unsigned long size) -{ - init_input_buf(buf, size); - - parse_header_field("timestamp", &header_page_ts_offset, - &header_page_ts_size); - parse_header_field("commit", &header_page_size_offset, - &header_page_size_size); - parse_header_field("data", &header_page_data_offset, - &header_page_data_size); - - return 0; -} - int parse_ftrace_file(char *buf, unsigned long size) { struct format_field *field; diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c index 7cd1193918c7..cb54cd002f49 100644 --- a/tools/perf/util/trace-event-read.c +++ b/tools/perf/util/trace-event-read.c @@ -50,14 +50,51 @@ static int long_size; static unsigned long page_size; +static ssize_t calc_data_size; +static bool repipe; + +/* If it fails, the next read will report it */ +static void skip(int size) +{ + lseek(input_fd, size, SEEK_CUR); +} + +static int do_read(int fd, void *buf, int size) +{ + int rsize = size; + + while (size) { + int ret = read(fd, buf, size); + + if (ret <= 0) + return -1; + + if (repipe) { + int retw = write(STDOUT_FILENO, buf, ret); + + if (retw <= 0 || retw != ret) + die("repiping input file"); + } + + size -= ret; + buf += ret; + } + + return rsize; +} + static int read_or_die(void *data, int size) { int r; - r = read(input_fd, data, size); - if (r != size) + r = do_read(input_fd, data, size); + if (r <= 0) die("reading input file (size expected=%d received=%d)", size, r); + + if (calc_data_size) + calc_data_size += r; + return r; } @@ -82,57 +119,36 @@ static char *read_string(void) char buf[BUFSIZ]; char *str = NULL; int size = 0; - int i; off_t r; + char c; for (;;) { - r = read(input_fd, buf, BUFSIZ); + r = read(input_fd, &c, 1); if (r < 0) die("reading input file"); if (!r) die("no data"); - for (i = 0; i < r; i++) { - if (!buf[i]) - break; - } - if (i < r) - break; + if (repipe) { + int retw = write(STDOUT_FILENO, 
&c, 1); - if (str) { - size += BUFSIZ; - str = realloc(str, size); - if (!str) - die("malloc of size %d", size); - memcpy(str + (size - BUFSIZ), buf, BUFSIZ); - } else { - size = BUFSIZ; - str = malloc_or_die(size); - memcpy(str, buf, size); + if (retw <= 0 || retw != r) + die("repiping input file string"); } - } - /* trailing \0: */ - i++; - - /* move the file descriptor to the end of the string */ - r = lseek(input_fd, -(r - i), SEEK_CUR); - if (r == (off_t)-1) - die("lseek"); - - if (str) { - size += i; - str = realloc(str, size); - if (!str) - die("malloc of size %d", size); - memcpy(str + (size - i), buf, i); - } else { - size = i; - str = malloc_or_die(i); - memcpy(str, buf, i); + buf[size++] = c; + + if (!c) + break; } + if (calc_data_size) + calc_data_size += size; + + str = malloc_or_die(size); + memcpy(str, buf, size); + return str; } @@ -174,7 +190,6 @@ static void read_ftrace_printk(void) static void read_header_files(void) { unsigned long long size; - char *header_page; char *header_event; char buf[BUFSIZ]; @@ -184,10 +199,7 @@ static void read_header_files(void) die("did not read header page"); size = read8(); - header_page = malloc_or_die(size); - read_or_die(header_page, size); - parse_header_page(header_page, size); - free(header_page); + skip(size); /* * The size field in the page is of type long, @@ -459,7 +471,7 @@ struct record *trace_read_data(int cpu) return data; } -void trace_report(int fd) +ssize_t trace_report(int fd, bool __repipe) { char buf[BUFSIZ]; char test[] = { 23, 8, 68 }; @@ -467,6 +479,10 @@ void trace_report(int fd) int show_version = 0; int show_funcs = 0; int show_printk = 0; + ssize_t size; + + calc_data_size = 1; + repipe = __repipe; input_fd = fd; @@ -499,14 +515,18 @@ void trace_report(int fd) read_proc_kallsyms(); read_ftrace_printk(); + size = calc_data_size - 1; + calc_data_size = 0; + repipe = false; + if (show_funcs) { print_funcs(); - return; + return size; } if (show_printk) { print_printk(); - return; + return size; } - return; + return size; } diff --git a/tools/perf/util/trace-event.h b/tools/perf/util/trace-event.h index c3269b937db4..406d452956db 100644 --- a/tools/perf/util/trace-event.h +++ b/tools/perf/util/trace-event.h @@ -1,6 +1,7 @@ #ifndef __PERF_TRACE_EVENTS_H #define __PERF_TRACE_EVENTS_H +#include <stdbool.h> #include "parse-events.h" #define __unused __attribute__((unused)) @@ -162,7 +163,7 @@ struct record *trace_read_data(int cpu); void parse_set_info(int nr_cpus, int long_sz); -void trace_report(int fd); +ssize_t trace_report(int fd, bool repipe); void *malloc_or_die(unsigned int size); @@ -241,9 +242,8 @@ extern int header_page_size_size; extern int header_page_data_offset; extern int header_page_data_size; -extern int latency_format; +extern bool latency_format; -int parse_header_page(char *buf, unsigned long size); int trace_parse_common_type(void *data); int trace_parse_common_pid(void *data); int parse_common_pc(void *data); @@ -258,6 +258,8 @@ void *raw_field_ptr(struct event *event, const char *name, void *data); unsigned long long eval_flag(const char *flag); int read_tracing_data(int fd, struct perf_event_attr *pattrs, int nb_events); +ssize_t read_tracing_data_size(int fd, struct perf_event_attr *pattrs, + int nb_events); /* taken from kernel/trace/trace.h */ enum trace_flag_type { diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c index f9b890fde681..214265674ddd 100644 --- a/tools/perf/util/util.c +++ b/tools/perf/util/util.c @@ -92,3 +92,25 @@ out_close_from: out: return err; } + +unsigned long 
convert_unit(unsigned long value, char *unit) +{ + *unit = ' '; + + if (value > 1000) { + value /= 1000; + *unit = 'K'; + } + + if (value > 1000) { + value /= 1000; + *unit = 'M'; + } + + if (value > 1000) { + value /= 1000; + *unit = 'G'; + } + + return value; +} diff --git a/tools/perf/util/util.h b/tools/perf/util/util.h index 0f5b2a6f1080..0795bf304b19 100644 --- a/tools/perf/util/util.h +++ b/tools/perf/util/util.h @@ -42,12 +42,14 @@ #define _ALL_SOURCE 1 #define _GNU_SOURCE 1 #define _BSD_SOURCE 1 +#define HAS_BOOL #include <unistd.h> #include <stdio.h> #include <sys/stat.h> #include <sys/statfs.h> #include <fcntl.h> +#include <stdbool.h> #include <stddef.h> #include <stdlib.h> #include <stdarg.h> @@ -78,6 +80,7 @@ #include <pwd.h> #include <inttypes.h> #include "../../../include/linux/magic.h" +#include "types.h" #ifndef NO_ICONV @@ -295,6 +298,13 @@ extern void *xmemdupz(const void *data, size_t len); extern char *xstrndup(const char *str, size_t len); extern void *xrealloc(void *ptr, size_t size) __attribute__((weak)); +static inline void *xzalloc(size_t size) +{ + void *buf = xmalloc(size); + + return memset(buf, 0, size); +} + static inline void *zalloc(size_t size) { return calloc(1, size); @@ -309,6 +319,7 @@ static inline int has_extension(const char *filename, const char *ext) { size_t len = strlen(filename); size_t extlen = strlen(ext); + return len > extlen && !memcmp(filename + len - extlen, ext, extlen); } @@ -322,6 +333,7 @@ static inline int has_extension(const char *filename, const char *ext) #undef isalnum #undef tolower #undef toupper + extern unsigned char sane_ctype[256]; #define GIT_SPACE 0x01 #define GIT_DIGIT 0x02 @@ -406,4 +418,14 @@ void git_qsort(void *base, size_t nmemb, size_t size, int mkdir_p(char *path, mode_t mode); int copyfile(const char *from, const char *to); +s64 perf_atoll(const char *str); +char **argv_split(const char *str, int *argcp); +void argv_free(char **argv); +bool strglobmatch(const char *str, const char *pat); +bool strlazymatch(const char *str, const char *pat); +unsigned long convert_unit(unsigned long value, char *unit); + +#define _STR(x) #x +#define STR(x) _STR(x) + #endif diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c index 03a5eb22da2b..7c79c1d76d0c 100644 --- a/virt/kvm/ioapic.c +++ b/virt/kvm/ioapic.c @@ -197,7 +197,7 @@ int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int level) union kvm_ioapic_redirect_entry entry; int ret = 1; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); if (irq >= 0 && irq < IOAPIC_NUM_PINS) { entry = ioapic->redirtbl[irq]; level ^= entry.fields.polarity; @@ -214,7 +214,7 @@ int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int level) } trace_kvm_ioapic_set_irq(entry.bits, irq, ret == 0); } - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); return ret; } @@ -238,9 +238,9 @@ static void __kvm_ioapic_update_eoi(struct kvm_ioapic *ioapic, int vector, * is dropped it will be put into irr and will be delivered * after ack notifier returns. 
*/ - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, i); - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); if (trigger_mode != IOAPIC_LEVEL_TRIG) continue; @@ -259,9 +259,9 @@ void kvm_ioapic_update_eoi(struct kvm *kvm, int vector, int trigger_mode) smp_rmb(); if (!test_bit(vector, ioapic->handled_vectors)) return; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); __kvm_ioapic_update_eoi(ioapic, vector, trigger_mode); - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); } static inline struct kvm_ioapic *to_ioapic(struct kvm_io_device *dev) @@ -287,7 +287,7 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len, ASSERT(!(addr & 0xf)); /* check alignment */ addr &= 0xff; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); switch (addr) { case IOAPIC_REG_SELECT: result = ioapic->ioregsel; @@ -301,7 +301,7 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len, result = 0; break; } - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); switch (len) { case 8: @@ -338,7 +338,7 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len, } addr &= 0xff; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); switch (addr) { case IOAPIC_REG_SELECT: ioapic->ioregsel = data; @@ -356,7 +356,7 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len, default: break; } - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); return 0; } @@ -386,7 +386,7 @@ int kvm_ioapic_init(struct kvm *kvm) ioapic = kzalloc(sizeof(struct kvm_ioapic), GFP_KERNEL); if (!ioapic) return -ENOMEM; - mutex_init(&ioapic->lock); + spin_lock_init(&ioapic->lock); kvm->arch.vioapic = ioapic; kvm_ioapic_reset(ioapic); kvm_iodevice_init(&ioapic->dev, &ioapic_mmio_ops); @@ -419,9 +419,9 @@ int kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state) if (!ioapic) return -EINVAL; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); memcpy(state, ioapic, sizeof(struct kvm_ioapic_state)); - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); return 0; } @@ -431,9 +431,9 @@ int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state) if (!ioapic) return -EINVAL; - mutex_lock(&ioapic->lock); + spin_lock(&ioapic->lock); memcpy(ioapic, state, sizeof(struct kvm_ioapic_state)); update_handled_vectors(ioapic); - mutex_unlock(&ioapic->lock); + spin_unlock(&ioapic->lock); return 0; } diff --git a/virt/kvm/ioapic.h b/virt/kvm/ioapic.h index 8a751b78a430..0b190c34ccc3 100644 --- a/virt/kvm/ioapic.h +++ b/virt/kvm/ioapic.h @@ -45,7 +45,7 @@ struct kvm_ioapic { struct kvm_io_device dev; struct kvm *kvm; void (*ack_notifier)(void *opaque, int irq); - struct mutex lock; + spinlock_t lock; DECLARE_BITMAP(handled_vectors, 256); }; diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c index 80fd3ad3b2de..11692b9e8830 100644 --- a/virt/kvm/iommu.c +++ b/virt/kvm/iommu.c @@ -32,12 +32,30 @@ static int kvm_iommu_unmap_memslots(struct kvm *kvm); static void kvm_iommu_put_pages(struct kvm *kvm, gfn_t base_gfn, unsigned long npages); +static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, unsigned long size) +{ + gfn_t end_gfn; + pfn_t pfn; + + pfn = gfn_to_pfn_memslot(kvm, slot, gfn); + end_gfn = gfn + (size >> PAGE_SHIFT); + gfn += 1; + + if (is_error_pfn(pfn)) + return pfn; + + while (gfn < end_gfn) + gfn_to_pfn_memslot(kvm, slot, gfn++); + + return pfn; +} + int kvm_iommu_map_pages(struct kvm *kvm, struct 
kvm_memory_slot *slot) { - gfn_t gfn = slot->base_gfn; - unsigned long npages = slot->npages; + gfn_t gfn, end_gfn; pfn_t pfn; - int i, r = 0; + int r = 0; struct iommu_domain *domain = kvm->arch.iommu_domain; int flags; @@ -45,31 +63,62 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) if (!domain) return 0; + gfn = slot->base_gfn; + end_gfn = gfn + slot->npages; + flags = IOMMU_READ | IOMMU_WRITE; if (kvm->arch.iommu_flags & KVM_IOMMU_CACHE_COHERENCY) flags |= IOMMU_CACHE; - for (i = 0; i < npages; i++) { - /* check if already mapped */ - if (iommu_iova_to_phys(domain, gfn_to_gpa(gfn))) + + while (gfn < end_gfn) { + unsigned long page_size; + + /* Check if already mapped */ + if (iommu_iova_to_phys(domain, gfn_to_gpa(gfn))) { + gfn += 1; + continue; + } + + /* Get the page size we could use to map */ + page_size = kvm_host_page_size(kvm, gfn); + + /* Make sure the page_size does not exceed the memslot */ + while ((gfn + (page_size >> PAGE_SHIFT)) > end_gfn) + page_size >>= 1; + + /* Make sure gfn is aligned to the page size we want to map */ + while ((gfn << PAGE_SHIFT) & (page_size - 1)) + page_size >>= 1; + + /* + * Pin all pages we are about to map in memory. This is + * important because we unmap and unpin in 4kb steps later. + */ + pfn = kvm_pin_pages(kvm, slot, gfn, page_size); + if (is_error_pfn(pfn)) { + gfn += 1; continue; + } - pfn = gfn_to_pfn_memslot(kvm, slot, gfn); - r = iommu_map_range(domain, - gfn_to_gpa(gfn), - pfn_to_hpa(pfn), - PAGE_SIZE, flags); + /* Map into IO address space */ + r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn), + get_order(page_size), flags); if (r) { printk(KERN_ERR "kvm_iommu_map_address:" "iommu failed to map pfn=%lx\n", pfn); goto unmap_pages; } - gfn++; + + gfn += page_size >> PAGE_SHIFT; + + } + return 0; unmap_pages: - kvm_iommu_put_pages(kvm, slot->base_gfn, i); + kvm_iommu_put_pages(kvm, slot->base_gfn, gfn); return r; } @@ -189,27 +238,47 @@ out_unmap: return r; } +static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn, unsigned long npages) +{ + unsigned long i; + + for (i = 0; i < npages; ++i) + kvm_release_pfn_clean(pfn + i); +} + static void kvm_iommu_put_pages(struct kvm *kvm, gfn_t base_gfn, unsigned long npages) { - gfn_t gfn = base_gfn; + struct iommu_domain *domain; + gfn_t end_gfn, gfn; pfn_t pfn; - struct iommu_domain *domain = kvm->arch.iommu_domain; - unsigned long i; u64 phys; + domain = kvm->arch.iommu_domain; + end_gfn = base_gfn + npages; + gfn = base_gfn; + /* check if iommu exists and in use */ if (!domain) return; - for (i = 0; i < npages; i++) { + while (gfn < end_gfn) { + unsigned long unmap_pages; + int order; + + /* Get physical address */ phys = iommu_iova_to_phys(domain, gfn_to_gpa(gfn)); - pfn = phys >> PAGE_SHIFT; - kvm_release_pfn_clean(pfn); - gfn++; - } + pfn = phys >> PAGE_SHIFT; + + /* Unmap address from IO address space */ + order = iommu_unmap(domain, gfn_to_gpa(gfn), PAGE_SIZE); + unmap_pages = 1ULL << order; - iommu_unmap_range(domain, gfn_to_gpa(base_gfn), PAGE_SIZE * npages); + /* Unpin all pages we just unmapped to not leak any memory */ + kvm_unpin_pages(kvm, pfn, unmap_pages); + + gfn += unmap_pages; + } } static int kvm_iommu_unmap_memslots(struct kvm *kvm) |