* Merge tag 'trace-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace — Linus Torvalds, 2020-12-17 (57 files, -415/+1140)

  Pull tracing updates from Steven Rostedt:

  "The major update in this release is a new arch config option called
  CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS. Currently, only x86_64 enables it.

  All the ftrace callbacks now take a struct ftrace_regs instead of a
  struct pt_regs. If the architecture has HAVE_DYNAMIC_FTRACE_WITH_ARGS
  enabled, then the ftrace_regs will have enough information to read the
  arguments of the function being traced, as well as access to the stack
  pointer. This way, if a user (like live kernel patching) only cares
  about the arguments, it can avoid using the heavier-weight "regs"
  callback, which puts enough information in the struct ftrace_regs to
  simulate a breakpoint exception (needed for kprobes).

  A new config option audits the time stamps of the ftrace ring buffer at
  almost every event recorded.

  Ftrace recursion protection has been cleaned up to move the protection
  into the callback itself (this saves an extra function call for those
  callbacks).

  Perf now handles its own RCU protection and does not depend on ftrace
  to do it for it (saving another function call).

  A new debug option adds a "recursed_functions" file to tracefs that
  lists all the places that triggered the recursion protection of the
  function tracer. This will show where things need to be fixed, as
  recursion slows down the function tracer.

  The eval enum mapping updates done at boot up are now offloaded to a
  work queue, as they caused a noticeable pause on slow embedded boards.

  Various clean ups and last-minute fixes"

  * tag 'trace-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (33 commits)
    tracing: Offload eval map updates to a work queue
    Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"
    ring-buffer: Add rb_check_bpage in __rb_allocate_pages
    ring-buffer: Fix two typos in comments
    tracing: Drop unneeded assignment in ring_buffer_resize()
    tracing: Disable ftrace selftests when any tracer is running
    seq_buf: Avoid type mismatch for seq_buf_init
    ring-buffer: Fix a typo in function description
    ring-buffer: Remove obsolete rb_event_is_commit()
    ring-buffer: Add test to validate the time stamp deltas
    ftrace/documentation: Fix RST C code blocks
    tracing: Clean up after filter logic rewriting
    tracing: Remove the useless value assignment in test_create_synth_event()
    livepatch: Use the default ftrace_ops instead of REGS when ARGS is available
    ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default
    ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs
    MAINTAINERS: assign ./fs/tracefs to TRACING
    tracing: Fix some typos in comments
    ftrace: Remove unused varible 'ret'
    ring-buffer: Add recording of ring buffer recursion into recursed_functions
    ...
| * tracing: Offload eval map updates to a work queue — Steven Rostedt (VMware), 2020-12-15 (1 file, -1/+31)

    In order for tracepoints to export their enums to user space, the
    TRACE_DEFINE_ENUM() macro is used. On boot up, the strings shown in
    the tracefs "print fmt" lines are processed, and all the enums
    registered by TRACE_DEFINE_ENUM() are replaced with their integer
    values. This way, user space tools that read the raw binary data know
    how to evaluate the raw events.

    This is currently done in an initcall, but it has been noticed that
    slow embedded boards that have tracing enabled may take a few seconds
    to process them all, and a few seconds of slowdown on an embedded
    device is detrimental to the system.

    Instead, offload the work to a work queue and make sure that it is
    finished by destroying the work queue (which flushes all work) in a
    late initcall. This allows the system to continue to boot while the
    updates run in the background, which speeds up the boot time.

    Note, the strings being updated are only used by user space, so
    finishing the process before the system is fully booted prevents any
    race issues.

    Link: https://lore.kernel.org/r/68d7b3327052757d0cd6359a6c9015a85b437232.camel@pengutronix.de
    Reported-by: Lucas Stach <l.stach@pengutronix.de>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
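    The deferral pattern reads roughly like this (a minimal sketch;
    eval_map_wq, eval_map_work and trace_event_update_all() are
    illustrative names standing in for the actual eval-map processing):

      static struct workqueue_struct *eval_map_wq;
      static struct work_struct eval_map_work;

      static void eval_map_work_func(struct work_struct *work)
      {
              /* Replace TRACE_DEFINE_ENUM() names in "print fmt"
               * strings with their integer values. */
              trace_event_update_all();
      }

      static int __init trace_eval_init(void)
      {
              eval_map_wq = alloc_workqueue("eval_map_wq", WQ_UNBOUND, 0);
              if (!eval_map_wq)
                      return -ENOMEM;

              INIT_WORK(&eval_map_work, eval_map_work_func);
              queue_work(eval_map_wq, &eval_map_work);
              return 0;
      }
      early_initcall(trace_eval_init);

      static int __init trace_eval_sync(void)
      {
              /* Destroying the workqueue flushes all pending work, so
               * user space never sees a partially updated "print fmt". */
              if (eval_map_wq)
                      destroy_workqueue(eval_map_wq);
              return 0;
      }
      late_initcall_sync(trace_eval_sync);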
| * Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"Steven Rostedt (VMware)2020-12-142-4/+29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | It was believed that metag was the only architecture that required the ring buffer to keep 8 byte words aligned on 8 byte architectures, and with its removal, it was assumed that the ring buffer code did not need to handle this case. It appears that sparc64 also requires this. The following was reported on a sparc64 boot up: kernel: futex hash table entries: 65536 (order: 9, 4194304 bytes, linear) kernel: Running postponed tracer tests: kernel: Testing tracer function: kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140 kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140 kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140 kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140 kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140 kernel: PASSED Need to put back the 64BIT aligned code for the ring buffer. Link: https://lore.kernel.org/r/CADxRZqzXQRYgKc=y-KV=S_yHL+Y8Ay2mh5ezeZUnpRvg+syWKw@mail.gmail.com Cc: stable@vger.kernel.org Fixes: 86b3de60a0b6 ("ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS") Reported-by: Anatoly Pugachev <matorola@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ring-buffer: Add rb_check_bpage in __rb_allocate_pages — Qiujun Huang, 2020-12-14 (1 file, -8/+11)

    It is better to check that each page is aligned to 4 bytes, since the
    2 least significant bits of the address will be used as flags.

    Link: https://lkml.kernel.org/r/20201015113842.2921-1-hqjagain@gmail.com
    Signed-off-by: Qiujun Huang <hqjagain@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
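    A minimal sketch of such a check (rb_bpage_aligned and RB_FLAG_MASK
    are illustrative names; the mask mirrors the two flag bits):

      #define RB_FLAG_MASK 3UL

      /* The ring buffer stores flags in the two least significant bits
       * of a buffer page's address, so every page must be at least
       * 4-byte aligned. */
      static inline bool rb_bpage_aligned(void *bpage)
      {
              return ((unsigned long)bpage & RB_FLAG_MASK) == 0;
      }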
| * ring-buffer: Fix two typos in comments — Qiujun Huang, 2020-12-14 (1 file, -2/+2)

    s/inerrupting/interrupting/
    s/beween/between/

    Link: https://lkml.kernel.org/r/20201014152749.29986-1-hqjagain@gmail.com
    Signed-off-by: Qiujun Huang <hqjagain@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * tracing: Drop unneeded assignment in ring_buffer_resize() — Lukas Bulwahn, 2020-12-14 (1 file, -2/+0)

    Since commit 0a1754b2a97e ("ring-buffer: Return 0 on success from
    ring_buffer_resize()"), computing the size is not needed anymore.

    Drop unneeded assignment in ring_buffer_resize().

    Link: https://lkml.kernel.org/r/20201214084503.3079-1-lukas.bulwahn@gmail.com
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * tracing: Disable ftrace selftests when any tracer is running — Masami Hiramatsu, 2020-12-14 (6 files, -14/+25)

    Disable the ftrace selftests when any tracer (kernel command line
    options like ftrace=, trace_events=, kprobe_events=, and boot-time
    tracing) starts running, because the selftests can disturb it.

    Currently ftrace= and trace_events= are checked, but kprobe_events
    uses a different flag, and boot-time tracing wasn't checked at all.
    This unifies the disabled flag so that all of those boot-time tracing
    features set it.

    This also fixes warnings in the kprobe-event selftest
    (CONFIG_FTRACE_STARTUP_TEST=y and CONFIG_KPROBE_EVENTS=y) with
    boot-time tracing (ftrace.event.kprobes.EVENT.probes) like below:

      [   59.803496] trace_kprobe: Testing kprobe tracing:
      [   59.804258] ------------[ cut here ]------------
      [   59.805682] WARNING: CPU: 3 PID: 1 at kernel/trace/trace_kprobe.c:1987 kprobe_trace_self_tests_ib
      [   59.806944] Modules linked in:
      [   59.807335] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.10.0-rc7+ #172
      [   59.808029] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/204
      [   59.808999] RIP: 0010:kprobe_trace_self_tests_init+0x5f/0x42b
      [   59.809696] Code: e8 03 00 00 48 c7 c7 30 8e 07 82 e8 6d 3c 46 ff 48 c7 c6 00 b2 1a 81 48 c7 c7 7
      [   59.812439] RSP: 0018:ffffc90000013e78 EFLAGS: 00010282
      [   59.813038] RAX: 00000000ffffffef RBX: 0000000000000000 RCX: 0000000000049443
      [   59.813780] RDX: 0000000000049403 RSI: 0000000000049403 RDI: 000000000002deb0
      [   59.814589] RBP: ffffc90000013e90 R08: 0000000000000001 R09: 0000000000000001
      [   59.815349] R10: 0000000000000001 R11: 0000000000000000 R12: 00000000ffffffef
      [   59.816138] R13: ffff888004613d80 R14: ffffffff82696940 R15: ffff888004429138
      [   59.816877] FS:  0000000000000000(0000) GS:ffff88807dcc0000(0000) knlGS:0000000000000000
      [   59.817772] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   59.818395] CR2: 0000000001a8dd38 CR3: 0000000002222000 CR4: 00000000000006a0
      [   59.819144] Call Trace:
      [   59.819469]  ? init_kprobe_trace+0x6b/0x6b
      [   59.819948]  do_one_initcall+0x5f/0x300
      [   59.820392]  ? rcu_read_lock_sched_held+0x4f/0x80
      [   59.820916]  kernel_init_freeable+0x22a/0x271
      [   59.821416]  ? rest_init+0x241/0x241
      [   59.821841]  kernel_init+0xe/0x10f
      [   59.822251]  ret_from_fork+0x22/0x30
      [   59.822683] irq event stamp: 16403349
      [   59.823121] hardirqs last  enabled at (16403359): [<ffffffff810db81e>] console_unlock+0x48e/0x580
      [   59.824074] hardirqs last disabled at (16403368): [<ffffffff810db786>] console_unlock+0x3f6/0x580
      [   59.825036] softirqs last  enabled at (16403200): [<ffffffff81c0033a>] __do_softirq+0x33a/0x484
      [   59.825982] softirqs last disabled at (16403087): [<ffffffff81a00f02>] asm_call_irq_on_stack+0x10
      [   59.827034] ---[ end trace 200c544775cdfeb3 ]---
      [   59.827635] trace_kprobe: error on probing function entry.

    Link: https://lkml.kernel.org/r/160741764955.3448999.3347769358299456915.stgit@devnote2
    Fixes: 4d655281eb1b ("tracing/boot Add kprobe event support")
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * seq_buf: Avoid type mismatch for seq_buf_init — Arnd Bergmann, 2020-12-08 (2 files, -3/+3)

    Building with W=2 prints a number of warnings for one function that
    has a pointer type mismatch:

      linux/seq_buf.h: In function 'seq_buf_init':
      linux/seq_buf.h:35:12: warning: pointer targets in assignment from 'unsigned char *' to 'char *' differ in signedness [-Wpointer-sign]

    Change the type in the function prototype according to the type in
    the structure.

    Link: https://lkml.kernel.org/r/20201026161108.3707783-1-arnd@kernel.org
    Fixes: 9a7777935c34 ("tracing: Convert seq_buf fields to be like seq_file fields")
    Reviewed-by: Cezary Rojewski <cezary.rojewski@intel.com>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
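    The fix amounts to matching the parameter type to the char *buffer
    field of struct seq_buf, roughly:

      /* Before: seq_buf_init(struct seq_buf *s, unsigned char *buf, ...) */
      static inline void seq_buf_init(struct seq_buf *s, char *buf,
                                      unsigned int size)
      {
              s->buffer = buf;        /* s->buffer is char *, types agree */
              s->size = size;
              seq_buf_clear(s);
      }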
| * ring-buffer: Fix a typo in function description — Qiujun Huang, 2020-12-08 (1 file, -1/+1)

    s/ring_buffer_commit_discard/ring_buffer_discard_commit/

    Link: https://lkml.kernel.org/r/20201112151800.14382-1-hqjagain@gmail.com
    Signed-off-by: Qiujun Huang <hqjagain@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ring-buffer: Remove obsolete rb_event_is_commit() — Lukas Bulwahn, 2020-12-08 (1 file, -17/+0)

    Commit a389d86f7fd0 ("ring-buffer: Have nested events still record
    running time stamp") removed the only uses of rb_event_is_commit() in
    rb_update_event() and rb_update_write_stamp().

    Hence, since then, make CC=clang W=1 warns:

      kernel/trace/ring_buffer.c:2763:1: warning: unused function 'rb_event_is_commit' [-Wunused-function]

    Remove this obsolete function.

    Link: https://lkml.kernel.org/r/20201117053703.11275-1-lukas.bulwahn@gmail.com
    Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ring-buffer: Add test to validate the time stamp deltas — Steven Rostedt (VMware), 2020-12-02 (2 files, -0/+170)

    While debugging a situation where a delta for an event was calculated
    wrong, I realized there was nothing making sure that the deltas of
    events are correct. If a single event has an incorrect delta, then
    all events after it will also have one. If the discrepancy gets large
    enough, it could cause the time stamps to go backwards when crossing
    sub buffers, which record a full 64-bit time stamp to which the new
    deltas are added.

    Add a way to validate the deltas at almost every event and when
    crossing a buffer page. This will help make sure that the deltas are
    always correct, and this test will detect if they are ever corrupted.

    The test adds a high overhead to the ring buffer recording, as it
    does the audit for almost every event, and should only be used for
    testing the ring buffer.

    This will catch the bug that is fixed by commit 55ea4cf40380
    ("ring-buffer: Update write stamp with the correct ts"), which is not
    applied when this commit is applied.

    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
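    Conceptually the audit works like this (a simplified sketch, not the
    in-kernel implementation, which must also cope with events written
    from interrupt context):

      static u64 expected_ts;

      /* Called for almost every event: accumulate the stored delta. */
      static void audit_event_delta(u64 delta)
      {
              expected_ts += delta;
      }

      /* Called when crossing a sub buffer, which records a full
       * 64-bit time stamp. */
      static void audit_page_ts(u64 full_ts)
      {
              if (expected_ts != full_ts)
                      pr_warn("ring-buffer: delta drift: %llu != %llu\n",
                              expected_ts, full_ts);
              expected_ts = full_ts;
      }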
| * ftrace/documentation: Fix RST C code blocks — Steven Rostedt (VMware), 2020-11-18 (1 file, -0/+6)

    Some C code in the ftrace-users.rst document is missing RST C block
    annotation, which has to be added.

    Link: https://lore.kernel.org/r/20201116173502.392a769c@canb.auug.org.au
    Acked-by: Jonathan Corbet <corbet@lwn.net>
    Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * tracing: Clean up after filter logic rewriting — Lukas Bulwahn, 2020-11-16 (1 file, -21/+0)

    The functions event_{set,clear,}_no_set_filter_flag were only used in
    replace_system_preds() [now renamed to process_system_preds()].

    Commit 80765597bc58 ("tracing: Rewrite filter logic to be simpler and
    faster") removed the use of those functions in replace_system_preds().
    Since then, the functions event_{set,clear,}_no_set_filter_flag were
    unused. Fortunately, make CC=clang W=1 indicates this with
    -Wunused-function warnings on those three functions.

    So, clean up these obsolete unused functions.

    Link: https://lkml.kernel.org/r/20201115155336.20248-1-lukas.bulwahn@gmail.com
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * tracing: Remove the useless value assignment in test_create_synth_event() — Kaixu Xia, 2020-11-13 (1 file, -1/+1)

    The value of variable ret is overwritten on the delete branch in
    test_create_synth_event(), and we care more about the above error
    than this delete portion. Remove it.

    Link: https://lkml.kernel.org/r/1605283360-6804-1-git-send-email-kaixuxia@tencent.com
    Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
    Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * livepatch: Use the default ftrace_ops instead of REGS when ARGS is available — Steven Rostedt (VMware), 2020-11-13 (8 files, -9/+29)

    When CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS is available, the ftrace
    call will be able to set the ip of the calling function. This will
    improve the performance of live kernel patching, where it does not
    need all the regs to be stored just to change the instruction
    pointer.

    If all archs that support live kernel patching also support
    HAVE_DYNAMIC_FTRACE_WITH_ARGS, then the architecture specific
    function klp_arch_set_pc() could be made generic.

    It is possible that an arch can support HAVE_DYNAMIC_FTRACE_WITH_ARGS
    but not HAVE_DYNAMIC_FTRACE_WITH_REGS and still have access to live
    patching.

    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: live-patching@vger.kernel.org
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
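    With ARGS, the live-patch handler only needs to redirect the
    instruction pointer. A heavily condensed sketch of the idea, where
    klp_lookup_new_func() is a hypothetical stand-in for the live-patch
    function lookup and ftrace_instruction_pointer_set() is the helper
    this series describes:

      static void klp_ftrace_handler(unsigned long ip,
                                     unsigned long parent_ip,
                                     struct ftrace_ops *fops,
                                     struct ftrace_regs *fregs)
      {
              /* Find the patched replacement for 'ip' (details elided),
               * then redirect execution without needing a full pt_regs. */
              unsigned long new_func = klp_lookup_new_func(ip);

              ftrace_instruction_pointer_set(fregs, new_func);
      }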
| * ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default — Steven Rostedt (VMware), 2020-11-13 (5 files, -3/+40)

    Currently, the only way to get access to the registers of a function
    via an ftrace callback is to set the "FL_SAVE_REGS" bit in the
    ftrace_ops. But as this saves all regs as if a breakpoint were to
    trigger (for use with kprobes), it is expensive.

    The regs are already saved on the stack for the default ftrace
    callbacks, as that is required; otherwise a function being traced
    would get the wrong arguments and possibly crash. And on x86, the
    arguments are already stored where they would be in a pt_regs
    structure, so that the same code can serve both the default and the
    regs version of a callback. It therefore makes sense to always pass
    that information to all functions.

    If an architecture does this (as x86_64 now does), it sets
    HAVE_DYNAMIC_FTRACE_WITH_ARGS, which lets the generic code know that
    it can have access to the arguments without having to set the flags.

    This also includes having the stack pointer saved, which could be
    used for accessing arguments on the stack, as well as having the
    function graph tracer not require its own trampoline!

    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs — Steven Rostedt (VMware), 2020-11-13 (18 files, -45/+71)

    In preparation for having the arguments of a function passed to
    callbacks attached to functions by default, change the default
    callback prototype to receive a struct ftrace_regs as the fourth
    parameter instead of a pt_regs.

    Callbacks that set the FL_SAVE_REGS flag in their ftrace_ops flags
    will now need to get the pt_regs via an ftrace_get_regs() helper
    call. If this is called by a callback whose ftrace_ops did not have
    the FL_SAVE_REGS flag set, that helper function will return NULL.

    This will allow the ftrace_regs to hold just enough to get the
    parameters and stack pointer, without the worry that callbacks may
    see a pt_regs that is not completely filled.

    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
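    The new callback shape, sketched (my_callback and my_ops are
    illustrative names; the prototype and ftrace_get_regs() are as
    described above):

      static void my_callback(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *op,
                              struct ftrace_regs *fregs)
      {
              /* Returns NULL unless my_ops set FTRACE_OPS_FL_SAVE_REGS. */
              struct pt_regs *regs = ftrace_get_regs(fregs);

              if (!regs)
                      return;   /* only args and stack pointer guaranteed */

              /* ... inspect the fully populated pt_regs ... */
      }

      static struct ftrace_ops my_ops = {
              .func  = my_callback,
              .flags = FTRACE_OPS_FL_SAVE_REGS,
      };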
| * MAINTAINERS: assign ./fs/tracefs to TRACING — Lukas Bulwahn, 2020-11-11 (1 file, -0/+1)

    A check with:

      ./scripts/get_maintainer.pl --letters -f fs/tracefs/

    shows that tracefs is not assigned to the TRACING section in
    MAINTAINERS. Add the file pattern for the TRACING section to rectify
    that.

    Link: https://lkml.kernel.org/r/20201109122250.31915-1-lukas.bulwahn@gmail.com
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * tracing: Fix some typos in comments — Qiujun Huang, 2020-11-11 (15 files, -25/+25)

    s/detetector/detector/
    s/enfoced/enforced/
    s/writen/written/
    s/actualy/actually/
    s/bascially/basically/
    s/Regarldess/Regardless/
    s/zeroes/zeros/
    s/followd/followed/
    s/incrememented/incremented/
    s/separatelly/separately/
    s/accesible/accessible/
    s/sythetic/synthetic/
    s/enabed/enabled/
    s/heurisitc/heuristic/
    s/assocated/associated/
    s/otherwides/otherwise/
    s/specfied/specified/
    s/seaching/searching/
    s/hierachry/hierarchy/
    s/internel/internal/
    s/Thise/This/

    Link: https://lkml.kernel.org/r/20201029150554.3354-1-hqjagain@gmail.com
    Signed-off-by: Qiujun Huang <hqjagain@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ftrace: Remove unused varible 'ret' — Alex Shi, 2020-11-11 (1 file, -4/+2)

    The 'ret' variable in two functions is not used, and one of them is a
    void function. Remove them to avoid this gcc warning:

      kernel/trace/ftrace.c:4166:6: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
      kernel/trace/ftrace.c:5571:6: warning: variable 'ret' set but not used [-Wunused-but-set-variable]

    Link: https://lkml.kernel.org/r/1604674486-52350-1-git-send-email-alex.shi@linux.alibaba.com
    Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ring-buffer: Add recording of ring buffer recursion into recursed_functions — Steven Rostedt (VMware), 2020-11-11 (2 files, -1/+25)

    Add a new config RING_BUFFER_RECORD_RECURSION that will place
    functions that recurse from the ring buffer into the ftrace
    recursed_functions file.

    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ftrace: Clean up the recursion code a bit — Steven Rostedt (VMware), 2020-11-11 (1 file, -15/+7)

    In trace_test_and_set_recursion(), current->trace_recursion is placed
    into a variable, and that variable should be used for the processing,
    as there's no reason to dereference current multiple times.

    In trace_clear_recursion(), current->trace_recursion is modified and
    there's no reason to copy it over to a variable.

    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * fgraph: Make overruns 4 bytes in graph stack structure — Steven Rostedt (VMware), 2020-11-11 (3 files, -5/+5)

    Inspecting the data structures of the function graph tracer, I found
    that the overrun value is an unsigned long, which is 8 bytes on a
    64-bit machine, while the depth is an int (4 bytes). The overrun can
    simply be an unsigned int (4 bytes), which packs the
    ftrace_graph_ret structure better.

    The depth is moved up next to the func, as it is used more often with
    func, which improves cache locality.

    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
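    The resulting layout, roughly (field order per the commit; on 64-bit
    this drops the struct from 40 to 32 bytes by turning former padding
    into the two 4-byte fields):

      struct ftrace_graph_ret {
              unsigned long func;        /* traced function, hot field  */
              int depth;                 /* moved up next to func       */
              unsigned int overrun;      /* was unsigned long (8 bytes) */
              unsigned long long calltime;
              unsigned long long rettime;
      };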
| * ftrace: Add recording of functions that caused recursion — Steven Rostedt (VMware), 2020-11-06 (17 files, -20/+306)

    This adds CONFIG_FTRACE_RECORD_RECURSION that will record to a file
    "recursed_functions" all the functions that caused recursion while a
    callback to the function tracer was running.

    Link: https://lkml.kernel.org/r/20201106023548.102375687@goodmis.org
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Vasily Gorbik <gor@linux.ibm.com>
    Cc: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: x86@kernel.org
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Anton Vorontsov <anton@enomsg.org>
    Cc: Colin Cross <ccross@android.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Cc: linux-doc@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-csky@vger.kernel.org
    Cc: linux-parisc@vger.kernel.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: linux-s390@vger.kernel.org
    Cc: live-patching@vger.kernel.org
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ftrace: Reverse what the RECURSION flag means in the ftrace_ops — Steven Rostedt (VMware), 2020-11-06 (8 files, -49/+79)

    Now that all callbacks are recursion safe, reverse the meaning of the
    RECURSION flag and rename it from RECURSION_SAFE to simply RECURSION.
    Now only callbacks that request recursion protection will have the
    added trampoline to provide it.

    Also remove the outdated comment about "PER_CPU" when determining
    whether to use the ftrace_ops_assist_func.

    Link: https://lkml.kernel.org/r/20201028115613.742454631@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023547.904270143@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: linux-doc@vger.kernel.org
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * perf/ftrace: Check for rcu_is_watching() in callback function — Steven Rostedt (VMware), 2020-11-06 (1 file, -1/+3)

    If a ftrace callback requires "rcu_is_watching", then it adds the
    FTRACE_OPS_FL_RCU flag and it will not be called if RCU is not
    "watching". But this means that it will use a trampoline when called,
    and this slows down the function tracing a tad. By checking
    rcu_is_watching() from within the callback, it no longer needs the
    RCU flag set in the ftrace_ops and it can be safely called directly.

    Link: https://lkml.kernel.org/r/20201028115613.591878956@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023547.711035826@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * perf/ftrace: Add recursion protection to the ftrace callback — Steven Rostedt (VMware), 2020-11-06 (1 file, -1/+8)

    If a ftrace callback does not supply its own recursion protection and
    does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace
    will make a helper trampoline to do so before calling the callback,
    instead of just calling the callback directly.

    The default for ftrace_ops is going to change. It will expect that
    handlers provide their own recursion protection, unless its
    ftrace_ops states otherwise.

    Link: https://lkml.kernel.org/r/20201028115613.444477858@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023547.466892083@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * livepatch: Trigger WARNING if livepatch function fails due to recursion — Steven Rostedt (VMware), 2020-11-06 (1 file, -1/+1)

    If for some reason a function is called that triggers the recursion
    detection of live patching, trigger a warning. By not executing the
    live patch code, it is possible that the old unpatched function will
    be called, placing the system into an unknown state.

    Link: https://lore.kernel.org/r/20201029145709.GD16774@alley
    Link: https://lkml.kernel.org/r/20201106023547.312639435@goodmis.org
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: live-patching@vger.kernel.org
    Suggested-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * livepatch/ftrace: Add recursion protection to the ftrace callback — Steven Rostedt (VMware), 2020-11-06 (1 file, -0/+5)

    If a ftrace callback does not supply its own recursion protection and
    does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace
    will make a helper trampoline to do so before calling the callback,
    instead of just calling the callback directly.

    The default for ftrace_ops is going to change. It will expect that
    handlers provide their own recursion protection, unless its
    ftrace_ops states otherwise.

    Link: https://lkml.kernel.org/r/20201028115613.291169246@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023547.122802424@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: live-patching@vger.kernel.org
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * kprobes/ftrace: Add recursion protection to the ftrace callback — Steven Rostedt (VMware), 2020-11-06 (5 files, -11/+56)

    If a ftrace callback does not supply its own recursion protection and
    does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace
    will make a helper trampoline to do so before calling the callback,
    instead of just calling the callback directly.

    The default for ftrace_ops is going to change. It will expect that
    handlers provide their own recursion protection, unless its
    ftrace_ops states otherwise.

    Link: https://lkml.kernel.org/r/20201028115613.140212174@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023546.944907560@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Vasily Gorbik <gor@linux.ibm.com>
    Cc: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: x86@kernel.org
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
    Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: linux-csky@vger.kernel.org
    Cc: linux-parisc@vger.kernel.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: linux-s390@vger.kernel.org
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * pstore/ftrace: Add recursion protection to the ftrace callback — Steven Rostedt (VMware), 2020-11-06 (1 file, -0/+6)

    If a ftrace callback does not supply its own recursion protection and
    does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace
    will make a helper trampoline to do so before calling the callback,
    instead of just calling the callback directly.

    The default for ftrace_ops is going to change. It will expect that
    handlers provide their own recursion protection, unless its
    ftrace_ops states otherwise.

    Link: https://lkml.kernel.org/r/20201028115612.990886844@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023546.720372267@goodmis.org
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Meyer <thomas@m3y3r.de>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
| * ftrace: Optimize testing what context current is in — Steven Rostedt (VMware), 2020-11-06 (1 file, -13/+20)

    The preempt_count() is not a simple location in memory; it could be
    part of per_cpu code or more. Each access to preempt_count(), or one
    of its accessor functions (like in_interrupt()), takes several
    cycles. Reading preempt_count() once, and then testing that value to
    find the context, is slightly faster than using in_nmi() and
    in_interrupt().

    Link: https://lkml.kernel.org/r/20201028115612.780796355@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023546.558881845@goodmis.org
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
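    A sketch of the single-read test (the TRACE_CTX_* values are
    illustrative; NMI_MASK, HARDIRQ_MASK and SOFTIRQ_OFFSET are the
    standard preempt_count() bit masks from <linux/preempt.h>):

      enum { TRACE_CTX_NMI, TRACE_CTX_IRQ, TRACE_CTX_SOFTIRQ,
             TRACE_CTX_NORMAL };

      static __always_inline int trace_get_context_bit(void)
      {
              unsigned long pc = preempt_count();   /* read it only once */

              if (!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
                      return TRACE_CTX_NORMAL;

              return pc & NMI_MASK     ? TRACE_CTX_NMI :
                     pc & HARDIRQ_MASK ? TRACE_CTX_IRQ :
                                         TRACE_CTX_SOFTIRQ;
      }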
| * ftrace: Add ftrace_test_recursion_trylock() helper function — Steven Rostedt (VMware), 2020-11-06 (2 files, -7/+30)

    To make it easier for ftrace callbacks to have recursion protection,
    provide ftrace_test_recursion_trylock() and
    ftrace_test_recursion_unlock() helpers that test for recursion.

    Link: https://lkml.kernel.org/r/20201028115612.634927593@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023546.378584067@goodmis.org
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
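    Typical usage in a callback (my_callback is an illustrative name;
    the helper pair is as named above):

      static void my_callback(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *op,
                              struct ftrace_regs *fregs)
      {
              int bit;

              bit = ftrace_test_recursion_trylock(ip, parent_ip);
              if (bit < 0)
                      return;         /* recursion detected, bail out */

              /* ... the actual callback work goes here ... */

              ftrace_test_recursion_unlock(bit);
      }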
| * ftrace: Move the recursion testing into global headers — Steven Rostedt (VMware), 2020-11-06 (3 files, -177/+188)

    Currently, if a callback is registered to a ftrace function and its
    ftrace_ops does not have the RECURSION flag set, it is encapsulated
    in a helper function that does the recursion for it.

    Really, all the callbacks should have their own recursion protection
    for performance reasons. But they should not all implement their own.
    Move the recursion helpers to global headers, so that all callbacks
    can use them.

    Link: https://lkml.kernel.org/r/20201028115612.460535535@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023546.166456258@goodmis.org
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
* Merge tag 'modules-for-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux — Linus Torvalds, 2020-12-17 (5 files, -110/+142)

  Pull modules updates from Jessica Yu:

  "Summary of modules changes for the 5.11 merge window:

   - Fix a race condition between systemd/udev and the module loader.
     The module loader was sending a uevent before the module was fully
     initialized (i.e., before its init function has been called). This
     means udev can start processing the module uevent before the module
     has finished initializing, and some udev rules expect that the
     module has initialized already upon receiving the uevent. This
     resulted in some systemd mount units failing if udev processes the
     event faster than the module can finish init. This is fixed by
     delaying the uevent until after the module has called its init
     routine.

   - Make the linker array sections for kernel params and module version
     attributes more robust by switching to use the alignment of the
     type in question. Namely, linker section arrays will be constructed
     using the alignment required by the struct (using __alignof__()) as
     opposed to a specific value such as sizeof(void *) or sizeof(long).
     This is less likely to cause breakages should the size of the type
     ever change (Johan Hovold)

   - Fix module state inconsistency by setting it back to GOING when a
     module fails to load and is on its way out (Miroslav Benes)

   - Some comment and code cleanups (Sergey Shtylyov)"

  * tag 'modules-for-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux:
    module: delay kobject uevent until after module init call
    module: drop semicolon from version macro
    init: use type alignment for kernel parameters
    params: clean up module-param macros
    params: use type alignment for kernel parameters
    params: drop redundant "unused" attributes
    module: simplify version-attribute handling
    module: drop version-attribute alignment
    module: fix comment style
    module: add more 'kernel-doc' comments
    module: fix up 'kernel-doc' comments
    module: only handle errors with the *switch* statement in module_sig_check()
    module: avoid *goto*s in module_sig_check()
    module: merge repetitive strings in module_sig_check()
    module: set MODULE_STATE_GOING state when a module fails to load
| * module: delay kobject uevent until after module init call — Jessica Yu, 2020-12-09 (1 file, -2/+3)

    Apparently there has been a longstanding race between udev/systemd
    and the module loader. Currently, the module loader sends a uevent
    right after sysfs initialization, but before the module calls its
    init function. However, some udev rules expect that the module has
    initialized already upon receiving the uevent.

    This race has been triggered recently (see link in references) in
    some systemd mount unit files. For instance, the configfs module
    creates the /sys/kernel/config mount point in its init function,
    however the module loader issues the uevent before this happens.
    sys-kernel-config.mount expects to be able to mount
    /sys/kernel/config upon receipt of the module loading uevent, but if
    the configfs module has not called its init function yet, then this
    directory will not exist and the mount unit fails. A similar
    situation exists for sys-fs-fuse-connections.mount, as the fuse
    sysfs mount point is created during the fuse module's init function.
    If udev is faster than module initialization then the mount unit
    would fail in a similar fashion.

    To fix this race, delay the module KOBJ_ADD uevent until after the
    module has finished calling its init routine.

    References: https://github.com/systemd/systemd/issues/17586
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Tested-By: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
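    A heavily condensed sketch of the resulting ordering in
    do_init_module() (error unwinding elided; the real code is in
    kernel/module.c):

      static noinline int do_init_module(struct module *mod)
      {
              int ret = 0;

              if (mod->init)
                      ret = do_one_initcall(mod->init);
              if (ret < 0)
                      return ret;     /* real code unwinds via goto */

              mod->state = MODULE_STATE_LIVE;

              /* Only now is udev told about the module, so rules can
               * rely on init having completed. */
              kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD);
              return 0;
      }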
| * module: drop semicolon from version macro — Johan Hovold, 2020-12-07 (1 file, -1/+1)

    Drop the trailing semicolon from the MODULE_VERSION() macro
    definition which was left when removing the array-of-pointer
    indirection.

    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * init: use type alignment for kernel parameters — Johan Hovold, 2020-12-01 (1 file, -1/+1)

    Specify type alignment for kernel parameters instead of sizeof(long).

    The alignment attribute is used to prevent gcc from increasing the
    alignment of objects with static extent as an optimisation, something
    which would mess up the __setup array stride.

    Using __alignof__(struct obs_kernel_param) rather than sizeof(long)
    is preferred since it better indicates why it is there and doesn't
    break should the type size or alignment change.

    Note that on m68k the alignment of struct obs_kernel_param is
    actually two and that adding a 1- or 2-byte field to the 12-byte
    struct would cause a breakage with the current 4-byte alignment.

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * params: clean up module-param macros — Johan Hovold, 2020-11-25 (1 file, -4/+4)

    Clean up the module-param macros by adding some indentation and using
    the __aligned() macro to improve readability.

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * params: use type alignment for kernel parameters — Johan Hovold, 2020-11-25 (1 file, -2/+2)

    Specify type alignment for kernel parameters instead of
    sizeof(void *).

    The alignment attribute is used to prevent gcc from increasing the
    alignment of objects with static extent as an optimisation, something
    which would mess up the __param array stride.

    Using __alignof__(struct kernel_param) rather than sizeof(void *) is
    preferred since it better indicates why it is there and doesn't break
    should the type size or alignment change.

    Note that on m68k the alignment of struct kernel_param is actually
    two and that adding a 1- or 2-byte field to the 20-byte struct would
    cause a breakage with the current 4-byte alignment.

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
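    The general shape of the fix, sketched (__param_entry is an
    illustrative macro, not the kernel's; __used, __section and
    __aligned are the usual kernel attribute helpers):

      /* Every entry in the "__param" linker section gets the struct's
       * own alignment, so the section's stride is exactly
       * sizeof(struct kernel_param) and the array can be walked safely. */
      #define __param_entry(name)                             \
              static struct kernel_param name                 \
              __used __section("__param")                     \
              __aligned(__alignof__(struct kernel_param))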
| * params: drop redundant "unused" attributes — Johan Hovold, 2020-11-25 (1 file, -2/+2)

    Drop the redundant "unused" attributes from module-parameter
    structures already marked "used".

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: simplify version-attribute handling — Johan Hovold, 2020-11-25 (2 files, -19/+17)

    Instead of using the array-of-pointers trick to avoid having gcc mess
    up the built-in module-version array stride, specify type alignment
    when declaring entries to prevent gcc from increasing alignment.

    This is essentially an alternative (one-line) fix to the problem
    addressed by commit b4bc842802db ("module: deal with alignment issues
    in built-in module versions").

    gcc can increase the alignment of larger objects with static extent
    as an optimisation, but this can be suppressed by using the aligned
    attribute when declaring variables.

    Note that we have been relying on this behaviour for kernel
    parameters for 16 years and it indeed hasn't changed since the
    introduction of the aligned attribute in gcc-3.1.

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: drop version-attribute alignment — Johan Hovold, 2020-11-25 (1 file, -1/+1)

    Commit 98562ad8cb03 ("module: explicitly align
    module_version_attribute structure") added an alignment attribute to
    the struct module_version_attribute type in order to fix an alignment
    issue on m68k, where the structure is 2-byte aligned while
    MODULE_VERSION() forced the __modver section entries to be 4-byte
    aligned (sizeof(void *)).

    This was essentially an alternative fix to the problem addressed by
    b4bc842802db ("module: deal with alignment issues in built-in module
    versions"), which used the array-of-pointer trick to prevent gcc from
    increasing alignment of the version attribute entries. And with the
    pointer indirection in place there's no need to increase the
    alignment of the type.

    Link: https://lore.kernel.org/lkml/20201103175711.10731-1-johan@kernel.org
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: fix comment style — Sergey Shtylyov, 2020-11-09 (1 file, -43/+74)

    Many comments in this module do not comply with the preferred
    multi-line comment style as reported by 'scripts/checkpatch.pl':

      WARNING: Block comments use * on subsequent lines
      WARNING: Block comments use a trailing */ on a separate line

    Fix those comments, along with (unreported for some reason?) the
    starts of the multi-line comments not being /* on their own line...

    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: add more 'kernel-doc' comments — Sergey Shtylyov, 2020-11-09 (1 file, -8/+8)

    Some functions have the proper 'kernel-doc' comments but these don't
    start with the proper /** -- fix that, along with adding () to the
    function name on the following lines to fully comply with the
    'kernel-doc' format.

    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: fix up 'kernel-doc' comments — Sergey Shtylyov, 2020-11-09 (1 file, -6/+4)

    Some 'kernel-doc' function comments do not fully comply with the
    specified format due to:

    - missing () after the function name;
    - "RETURNS:"/"Returns:" instead of "Return:" when documenting the
      function's result;
    - an empty line before describing the function's arguments.

    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: only handle errors with the *switch* statement in module_sig_check() — Sergey Shtylyov, 2020-11-04 (1 file, -12/+14)

    Let's handle the successful call of mod_verify_sig() right after that
    call, making the *switch* statement only handle the real errors, and
    then move the comment from the first *case* before *switch* itself
    and the comment before *default* after it. Fix the comment style, add
    article/comma/dash, spell out "nomem" as "lack of memory" in these
    comments, while at it...

    Suggested-by: Joe Perches <joe@perches.com>
    Reviewed-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: avoid *goto*s in module_sig_check() — Sergey Shtylyov, 2020-11-04 (1 file, -10/+10)

    Let's move the common handling of the non-fatal errors after the
    *switch* statement -- this avoids *goto*s inside that *switch*...

    Suggested-by: Joe Perches <joe@perches.com>
    Reviewed-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
| * module: merge repetitive strings in module_sig_check() — Sergey Shtylyov, 2020-11-04 (1 file, -4/+5)

    The 'reason' variable in module_sig_check() points to 3 strings
    across the *switch* statement, all needlessly starting with the same
    text. Let's put the starting text into the pr_notice() call -- it
    saves 21 bytes of the object code (x86 gcc 10.2.1).

    Suggested-by: Joe Perches <joe@perches.com>
    Reviewed-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Sergey Shtylyov <s.shtylyov@omprussia.ru>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>
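    A condensed sketch of the result (strings and surrounding logic
    approximate; err and info come from the enclosing function, and the
    exact wording lives in kernel/module.c):

      const char *reason;

      switch (err) {
      case -ENODATA:
              reason = "unsigned module";
              break;
      case -ENOPKG:
              reason = "module with unsupported crypto";
              break;
      case -ENOKEY:
              reason = "module with unavailable key";
              break;
      default:
              return err;
      }

      /* The shared "Loading of ..." prefix now appears only once. */
      pr_notice("%s: loading of %s is rejected\n", info->name, reason);
      return -EKEYREJECTED;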
| * module: set MODULE_STATE_GOING state when a module fails to load — Miroslav Benes, 2020-10-29 (1 file, -0/+1)

    If a module fails to load due to an error in
    prepare_coming_module(), the following error handling in
    load_module() runs with MODULE_STATE_COMING as the module's state.
    Fix it by correctly setting MODULE_STATE_GOING under the
    "bug_cleanup" label.

    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jessica Yu <jeyu@kernel.org>