Commit log (each entry lists author, date, files changed, and lines -removed/+added)
* bpf: Use bpf_prog_run_pin_on_cpu() at simple call sites. (David Miller, 2020-02-25; 5 files, -18/+6)
All of these cases are strictly of the form:

    preempt_disable();
    BPF_PROG_RUN(...);
    preempt_enable();

Replace this with bpf_prog_run_pin_on_cpu(), which wraps BPF_PROG_RUN() with:

    migrate_disable();
    BPF_PROG_RUN(...);
    migrate_enable();

On non-RT kernels this maps to preempt_disable/enable(), and on RT kernels it solely prevents migration, which is sufficient because there is no requirement to prevent reentrancy into any BPF program from a preempting task. The only requirement is that the program stays on the same CPU. Therefore, this is a trivially correct transformation.

The seccomp case does not need protection across its whole filter loop; it only needs protection per BPF filter program.

[ tglx: Converted to bpf_prog_run_pin_on_cpu() ]

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.691493094@linutronix.de
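As a concrete illustration of the transformation (a minimal sketch; the function and variable names are made up, not taken from a particular call site):

    /* Before: open-coded pinning of the program to the current CPU. */
    static u32 run_filter_before(const struct bpf_prog *prog, struct sk_buff *skb)
    {
            u32 ret;

            preempt_disable();
            ret = BPF_PROG_RUN(prog, skb);
            preempt_enable();
            return ret;
    }

    /* After: the helper hides the migrate_disable/enable() pair. */
    static u32 run_filter_after(const struct bpf_prog *prog, struct sk_buff *skb)
    {
            return bpf_prog_run_pin_on_cpu(prog, skb);
    }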
* bpf: Replace cant_sleep() with cant_migrate() (Thomas Gleixner, 2020-02-25; 1 file, -1/+1)
As discussed in the previous change, which introduced bpf_prog_run_pin_on_cpu(), BPF only requires migration to be disabled to guarantee that a program runs on one CPU. If RT substitutes the preempt-disable based migration protection, then the cant_sleep() check will obviously trigger because preemption is not disabled. Replace it with cant_migrate(), which maps to cant_sleep() on a non-RT kernel and will verify that migration is disabled on a full RT kernel.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.583038889@linutronix.de
* bpf: Provide bpf_prog_run_pin_on_cpu() helper (Thomas Gleixner, 2020-02-25; 1 file, -2/+24)
BPF programs are required to run to completion on one CPU because they use per-CPU storage, but according to Alexei they do not need reentrancy protection: BPF programs running in thread context can obviously always be 'preempted' by hard and soft interrupts and by instrumentation, and the same program can run concurrently on a different CPU.

The mechanism currently used to keep a program on one CPU is to wrap the invocation in a preempt_disable/enable() pair. Disabling preemption also disables migration for a task. preempt_disable/enable() is used because there is no explicit way to reliably disable only migration.

Provide a separate macro to invoke a BPF program which can be used in migratable task context. It wraps BPF_PROG_RUN() in a migrate_disable/enable() pair, which on non-RT kernels maps to preempt_disable/enable(). On RT kernels this merely disables migration. Both methods ensure that the invoked BPF program runs on one CPU to completion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.474592620@linutronix.de
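A minimal sketch of what such a wrapper can look like (illustrative, not the verbatim upstream definition):

    /* Run a BPF program pinned to the current CPU.  migrate_disable() maps
     * to preempt_disable() on non-RT kernels and only disables migration on
     * RT kernels, which is all the per-CPU guarantee requires. */
    static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
                                              const void *ctx)
    {
            u32 ret;

            migrate_disable();
            ret = BPF_PROG_RUN(prog, ctx);
            migrate_enable();
            return ret;
    }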
* bpf: Don't iterate over possible CPUs with interrupts disabled (Thomas Gleixner, 2020-02-25; 1 file, -10/+10)
pcpu_freelist_populate() disables interrupts and then iterates over the possible CPUs. Interrupts are disabled only to silence lockdep, because the invoked ___pcpu_freelist_push() takes spin locks. Neither the interrupt disabling nor the locking is required in this function, because it is called during initialization and the resulting map is not yet visible to anything. Split the actual push assignment out into an inline function, call it from the loop, and remove the interrupt disabling.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.365930116@linutronix.de
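A sketch of the split-out push step (the helper name is illustrative; the upstream naming may differ):

    /* Lock-free push used only while the freelist is populated at map init
     * time, before the map is visible to any other context. */
    static void pcpu_freelist_push_node(struct pcpu_freelist_head *head,
                                        struct pcpu_freelist_node *node)
    {
            node->next = head->first;
            head->first = node;
    }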
* bpf: Remove recursion prevention from rcu free callback (Thomas Gleixner, 2020-02-25; 1 file, -8/+0)
If an element is freed via RCU, then recursion into BPF instrumentation functions is not a concern. The element is already detached from the map, and the RCU callback does not hold any locks on which a kprobe, perf event or tracepoint attached BPF program could deadlock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.259118710@linutronix.de
* perf/bpf: Remove preempt disable around BPF invocation (Thomas Gleixner, 2020-02-25; 1 file, -2/+0)
The BPF invocation from the perf event overflow handler does not require disabling preemption, because it is called from NMI or at least hard interrupt context, which is already non-preemptible.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.151953573@linutronix.de
* bpf/trace: Remove redundant preempt_disable from trace_call_bpf() (Thomas Gleixner, 2020-02-25; 1 file, -2/+1)
Similar to __bpf_trace_run(), the preempt_disable/enable() pair here is redundant: the callers already run with preemption disabled, e.g. a trace point invoked via __DO_TRACE(), which disables preemption _before_ calling any of the functions attached to it. Remove it and add a cant_sleep() check.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.059995527@linutronix.de
* bpf: disable preemption for bpf progs attached to uprobe (Alexei Starovoitov, 2020-02-25; 1 file, -2/+9)
trace_call_bpf() no longer disables preemption on its own. All callers of this function have to do it explicitly.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
* bpf/trace: Remove EXPORT from trace_call_bpf() (Thomas Gleixner, 2020-02-25; 1 file, -1/+0)
All callers are built in. There is no point in exporting this.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* bpf/tracing: Remove redundant preempt_disable() in __bpf_trace_run() (Thomas Gleixner, 2020-02-25; 1 file, -2/+1)
__bpf_trace_run() disables preemption around the BPF_PROG_RUN() invocation. This is redundant because __bpf_trace_run() is invoked from a trace point via __DO_TRACE(), which already disables preemption _before_ invoking any of the functions attached to the trace point. Remove it and add a cant_sleep() check.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145642.847220186@linutronix.de
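A sketch of the resulting shape of the function (simplified; the real function may carry additional bookkeeping):

    static __always_inline void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
    {
            cant_sleep();           /* callers arrive with preemption disabled */
            rcu_read_lock();
            (void) BPF_PROG_RUN(prog, args);
            rcu_read_unlock();
    }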
* bpf: Update locking comment in hashtab code (Thomas Gleixner, 2020-02-25; 1 file, -4/+20)
The comment where the bucket lock is acquired says:

    /* bpf_map_update_elem() can be called in_irq() */

which is not really helpful, and aside from that it does not explain the subtle details of the hash bucket locks, especially in the context of BPF and perf, kprobes and tracing.

Add a comment at the top of the file which explains the protection scopes and the details of how potential deadlocks are prevented.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145642.755793061@linutronix.de
* bpf: Enforce preallocation for instrumentation programs on RT (Thomas Gleixner, 2020-02-25; 1 file, -4/+9)
Aside from the general unsafety of run-time map allocation for instrumentation-type programs, RT-enabled kernels have another constraint: the instrumentation programs are invoked with preemption disabled, but the memory allocator spinlocks cannot be acquired in atomic context because they are converted to 'sleeping' spinlocks on RT.

Therefore enforce map preallocation for these program types when RT is enabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145642.648784007@linutronix.de
* bpf: Tighten the requirements for preallocated hash maps (Thomas Gleixner, 2020-02-25; 1 file, -11/+28)
The assumption that only programs attached to perf NMI events can deadlock on memory allocators is wrong. Assume the following simplified callchain:

    kmalloc() from regular non-BPF context
      cache empty
        freelist empty
          lock(zone->lock);
            tracepoint or kprobe
              BPF()
                update_elem()
                  lock(bucket)
                    kmalloc()
                      cache empty
                        freelist empty
                          lock(zone->lock);  <- DEADLOCK

There are other ways to create wreckage which do not involve locking:

    kmalloc() from regular non-BPF context
      local_irq_save();
      ...
      obj = slab_first();
        kprobe()
          BPF()
            update_elem()
              lock(bucket)
                kmalloc()
                  local_irq_save();
                  ...
                  obj = slab_first();  <- Same object as above
      ...

So preallocation _must_ be enforced for all variants of intrusive instrumentation. Unfortunately, immediate enforcement would break backwards compatibility, so for now such programs are still allowed to run, but a one-time warning is emitted in dmesg and the verifier emits a warning in the verifier log as well, so developers are made aware of this and can fix their programs before the enforcement becomes mandatory.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145642.540542802@linutronix.de
* Merge tag 'sched-for-bpf-2020-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into bpf-next (Alexei Starovoitov, 2020-02-22; 2 files, -0/+37)
Two migrate-disable related stubs for BPF to base the RT patches on.
| * sched: Provide cant_migrate() (Thomas Gleixner, 2020-02-20; 1 file, -0/+7)
Some code paths rely on preempt_disable() to prevent migration on a non-RT kernel. These preempt_disable/enable() pairs are substituted by migrate_disable/enable() pairs or other forms of RT-specific protection. On RT these protections prevent migration but not preemption. Obviously a cant_sleep() check in such a section will trigger on RT, because preemption is not disabled.

Provide a cant_migrate() macro which maps to cant_sleep() on a non-RT kernel and to an empty placeholder for RT for now. The placeholder will be changed to a proper debug check along with the RT-specific migration protection mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200214161503.070487511@linutronix.de
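A minimal sketch of such a macro, following the description above (the exact placement and config guard are assumptions):

    #ifndef CONFIG_PREEMPT_RT
    # define cant_migrate()         cant_sleep()
    #else
      /* Empty for now; becomes a real debug check together with the RT
       * migration-protection mechanism. */
    # define cant_migrate()         do { } while (0)
    #endif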
| * sched/rt: Provide migrate_disable/enable() inlines (Thomas Gleixner, 2020-02-20; 1 file, -0/+30)
Code which solely needs to prevent migration of a task uses preempt_disable()/enable() pairs. This is the only reliable way to do so, as setting the task affinity to a single CPU can be undone by a setaffinity operation from a different task/process.

RT provides a separate migrate_disable/enable() mechanism which does not disable preemption, to achieve the semantic requirements of an (almost) fully preemptible kernel.

As it is unclear from looking at a given code path whether the intention is to disable preemption or migration, introduce migrate_disable/enable() inline functions which can be used to annotate code which merely needs to disable migration. Map them to preempt_disable/enable() for now. The RT substitution will be provided later.

Code which is annotated that way documents that it has no requirement to protect against reentrancy of a preempting task. Either this is not required at all, or the call sites are already serialized by other means.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/r/878slclv1u.fsf@nanos.tec.linutronix.de
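A sketch of the stub inlines described here (surrounding header guards omitted):

    /* Annotate code that only needs to stay on the current CPU.  Until the
     * RT substitution exists, these map directly to preempt_disable/enable(). */
    static inline void migrate_disable(void)
    {
            preempt_disable();
    }

    static inline void migrate_enable(void)
    {
            preempt_enable();
    }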
* | Merge branch 'mlxfw-Improve-error-reporting-and-FW-reactivate-support' (David S. Miller, 2020-02-22; 5 files, -96/+308)
Saeed Mahameed says:

====================
mlxfw: Improve error reporting and FW reactivate support

This patchset improves mlxfw error reporting to netlink and to the kernel log.

V2:
 - Use proper error codes, EBUSY/EIO instead of EALREADY/EREMOTEIO
 - Fix typo.

From Eran and me.

1) Patch #1 makes the mlxfw/mlxsw FW flash devlink status notifications generic, and enables them for mlx5.
2) Patches #2..#5 improve the mlxfw flash error messages by reporting detailed mlxfw FSM error messages to netlink and the kernel log.
3) Patches #6,7, from Eran: add a FW reactivate flow to mlxfw and mlx5.
====================

Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlx5: Add fsm_reactivate callback support (Eran Ben Elisha, 2020-02-22; 1 file, -0/+39)
Add support for FSM reactivation via the MIRC (Management Image Re-activation Control) set and query commands. For the re-activation flow, the driver shall first run MIRC set, and then wait until FW is done (by querying MIRC status).

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlxfw: Add reactivate flow support to FSM burn flow (Eran Ben Elisha, 2020-02-22; 2 files, -4/+119)
Expose an fsm_reactivate callback in the mlxfw_dev_ops struct. FSM reactivation is needed before flashing the new image in order to flush the old image that was flashed but is not running. If the mlxfw_dev does not support reactivation, this step is skipped; but if a later image flash fails, a hint is provided via the extack to advise the user that the failure might be related to it.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlxfw: Use MLXFW_ERR_MSG macro for error reporting (Saeed Mahameed, 2020-02-22; 1 file, -21/+24)
Instead of always calling both mlxfw_err and NL_SET_ERR_MSG_MOD with the same message, use the dedicated macro.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlxfw: Convert pr_* to dev_* in mlxfw_fsm.c (Saeed Mahameed, 2020-02-22; 2 files, -38/+54)
Introduce mlxfw_{info, err, dbg} macros and make them call the corresponding dev_* macros, then convert all instances of pr_* to mlxfw_*. This allows printing the name of the device mlxfw is operating on.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
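A rough sketch of such wrappers (the "dev" member used to reach the struct device is assumed for illustration only; the real struct layout may differ):

    #define mlxfw_err(mlxfw_dev, fmt, ...) \
            dev_err((mlxfw_dev)->dev, fmt, ##__VA_ARGS__)
    #define mlxfw_info(mlxfw_dev, fmt, ...) \
            dev_info((mlxfw_dev)->dev, fmt, ##__VA_ARGS__)
    #define mlxfw_dbg(mlxfw_dev, fmt, ...) \
            dev_dbg((mlxfw_dev)->dev, fmt, ##__VA_ARGS__)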
| * | net/mlxfw: More error messages coverage (Saeed Mahameed, 2020-02-22; 1 file, -9/+26)
Make sure mlxfw_firmware_flash reports a detailed, user-readable error message on every possible error path, basically every time mlxfw_dev->ops->*() is called and returns an error, or when image initialization fails.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlxfw: Improve FSM err message reporting and return codes (Saeed Mahameed, 2020-02-22; 1 file, -29/+65)
Report unique, standard error codes corresponding to the specific FW flash error. In addition, add more detailed error messages to netlink.

Before:

    $ devlink dev flash pci/0000:05:00.0 file ...
    Error: mlxfw: Firmware flash failed.
    devlink answers: Invalid argument

After:

    $ devlink dev flash pci/0000:05:00.0 file ...
    Error: mlxfw: Firmware flash failed: pending reset.
    devlink answers: Device busy

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net/mlxfw: Generic mlx FW flash status notify (Saeed Mahameed, 2020-02-22; 5 files, -30/+16)
FW flash status notification is currently implemented via a callback to the calling mlx module, and all it does is call devlink_flash_update_status_notify with the specific module's devlink instance. Instead of repeating the whole process for all mlx modules and re-implementing the status_notify callback again and again, just provide the devlink instance as part of mlxfw_dev when calling mlxfw_firmware_flash, and let mlxfw do the devlink status updates directly.

This will be very useful for adding status notify support to mlx5, as already done in this patch with a single line that provides the devlink instance to mlxfw_firmware_flash. mlxfw now depends on NET_DEVLINK, as all other mlx modules do.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (David S. Miller, 2020-02-22; 33 files, -161/+2433)
Daniel Borkmann says:

====================
pull-request: bpf-next 2020-02-21

The following pull-request contains BPF updates for your *net-next* tree.

We've added 25 non-merge commits during the last 4 day(s) which contain a total of 33 files changed, 2433 insertions(+), 161 deletions(-).

The main changes are:

1) Allow adding TCP listen sockets into sock_map/hash so they can be used with reuseport BPF programs, from Jakub Sitnicki.

2) Add a new bpf_program__set_attach_target() helper, adding libbpf support for specifying the tracepoint/function dynamically, from Eelco Chaudron.

3) Add the bpf_read_branch_records() BPF helper, which helps use cases like profile-guided optimizations, from Daniel Xu.

4) Enable bpf_perf_event_read_value() in all tracing programs, from Song Liu.

5) Relax the mandatory BTF check if BTF is only used by libbpf itself, e.g. to process BTF-defined maps, from Andrii Nakryiko.

6) Move the BPF selftests -mcpu compilation attribute from 'probe' to 'v3', as it has been observed that the former fails in environments with low memlock, from Yonghong Song.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
| * Merge branch 'bpf-sockmap-listen' (Daniel Borkmann, 2020-02-21; 19 files, -103/+1910)
| |\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Jakub Sitnicki says: ==================== This patch set turns SOCK{MAP,HASH} into generic collections for TCP sockets, both listening and established. Adding support for listening sockets enables us to use these BPF map types with reuseport BPF programs. Why? SOCKMAP and SOCKHASH, in comparison to REUSEPORT_SOCKARRAY, allow the socket to be in more than one map at the same time. Having a BPF map type that can hold listening sockets, and gracefully co-exist with reuseport BPF is important if, in the future, we want BPF programs that run at socket lookup time [0]. Cover letter for v1 of this series tells the full story of how we got here [1]. Although SOCK{MAP,HASH} are not a drop-in replacement for SOCKARRAY just yet, because UDP support is lacking, it's a step in this direction. We're working with Lorenz on extending SOCK{MAP,HASH} to hold UDP sockets, and expect to post RFC series for sockmap + UDP in the near future. I've dropped Acks from all patches that have been touched since v6. The audit for missing READ_ONCE annotations for access to sk_prot is ongoing. Thus far I've found one location specific to TCP listening sockets that needed annotating. This got fixed it in this iteration. I wonder if sparse checker could be put to work to identify places where we have sk_prot access while not holding sk_lock... The patch series depends on another one, posted earlier [2], that has been split out of it. v6 -> v7: - Extended the series to cover SOCKHASH. (patches 4-8, 10-11) (John) - Rebased onto recent bpf-next. Resolved conflicts in recent fixes to sk_state checks on sockmap/sockhash update path. (patch 4) - Added missing READ_ONCE annotation in sock_copy. (patch 1) - Split out patches that simplify sk_psock_restore_proto [2]. v5 -> v6: - Added a fix-up for patch 1 which I forgot to commit in v5. Sigh. v4 -> v5: - Rebase onto recent bpf-next to resolve conflicts. (Daniel) v3 -> v4: - Make tcp_bpf_clone parameter names consistent across function declaration and definition. (Martin) - Use sock_map_redirect_okay helper everywhere we need to take a different action for listening sockets. (Lorenz) - Expand comment explaining the need for a callback from reuseport to sockarray code in reuseport_detach_sock. (Martin) - Mention the possibility of using a u64 counter for reuseport IDs in the future in the description for patch 10. 
(Martin) v2 -> v3: - Generate reuseport ID when group is created. Please see patch 10 description for details. (Martin) - Fix the build when CONFIG_NET_SOCK_MSG is not selected by either CONFIG_BPF_STREAM_PARSER or CONFIG_TLS. (kbuild bot & John) - Allow updating sockmap from BPF on BPF_SOCK_OPS_TCP_LISTEN_CB callback. An oversight in previous iterations. Users may want to populate the sockmap with listening sockets from BPF as well. - Removed RCU read lock assertion in sock_map_lookup_sys. (Martin) - Get rid of a warning when child socket was cloned with parent's psock state. (John) - Check for tcp_bpf_unhash rather than tcp_bpf_recvmsg when deciding if sk_proto needs restoring on clone. Check for recvmsg in the context of listening socket cloning was confusing. (Martin) - Consolidate sock_map_sk_is_suitable with sock_map_update_okay. This led to adding dedicated predicates for sockhash. Update self-tests accordingly. (John) - Annotate unlikely branch in bpf_{sk,msg}_redirect_map when socket isn't in a map, or isn't a valid redirect target. (John) - Document paired READ/WRITE_ONCE annotations and cover shared access in more detail in patch 2 description. (John) - Correct a couple of log messages in sockmap_listen self-tests so the message reflects the actual failure. - Rework reuseport tests from sockmap_listen suite so that ENOENT error from bpf_sk_select_reuseport handler does not happen on happy path. v1 -> v2: - af_ops->syn_recv_sock callback is no longer overridden and burdened with restoring sk_prot and clearing sk_user_data in the child socket. As child socket is already hashed when syn_recv_sock returns, it is too late to put it in the right state. Instead patches 3 & 4 address restoring sk_prot and clearing sk_user_data before we hash the child socket. (Pointed out by Martin Lau) - Annotate shared access to sk->sk_prot with READ_ONCE/WRITE_ONCE macros as we write to it from sk_msg while socket might be getting cloned on another CPU. (Suggested by John Fastabend) - Convert tests for SOCKMAP holding listening sockets to return-on-error style, and hook them up to test_progs. Also use BPF skeleton for setup. Add new tests to cover the race scenario discovered during v1 review. RFC -> v1: - Switch from overriding proto->accept to af_ops->syn_recv_sock, which happens earlier. Clearing the psock state after accept() does not work for child sockets that become orphaned (never got accepted). v4-mapped sockets need special care. - Return the socket cookie on SOCKMAP lookup from syscall to be on par with REUSEPORT_SOCKARRAY. Requires SOCKMAP to take u64 on lookup/update from syscall. - Make bpf_sk_redirect_map (ingress) and bpf_msg_redirect_map (egress) SOCKMAP helpers fail when target socket is a listening one. - Make bpf_sk_select_reuseport helper fail when target is a TCP established socket. - Teach libbpf to recognize SK_REUSEPORT program type from section name. - Add a dedicated set of tests for SOCKMAP holding listening sockets, covering map operations, overridden socket callbacks, and BPF helpers. [0] https://lore.kernel.org/bpf/20190828072250.29828-1-jakub@cloudflare.com/ [1] https://lore.kernel.org/bpf/20191123110751.6729-1-jakub@cloudflare.com/ [2] https://lore.kernel.org/bpf/20200217121530.754315-1-jakub@cloudflare.com/ ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * | selftests/bpf: Tests for sockmap/sockhash holding listening sockets (Jakub Sitnicki, 2020-02-21; 2 files, -0/+1594)
Now that the SOCKMAP and SOCKHASH map types can store listening sockets, the user-space and BPF APIs are open to a new set of potential pitfalls. Exercise the map operations, with extra attention to code paths susceptible to races between map ops and socket cloning, and the BPF helpers that work with SOCKMAP/SOCKHASH, to gain confidence that everything works as expected.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-12-jakub@cloudflare.com
| | * | selftests/bpf: Extend SK_REUSEPORT tests to cover SOCKMAP/SOCKHASH (Jakub Sitnicki, 2020-02-21; 1 file, -10/+53)
Parametrize the SK_REUSEPORT tests so that the map type for storing sockets is not hard-coded in the test setup routine. This, together with careful state cleanup after the tests, lets us run the test cases for REUSEPORT_ARRAY, SOCKMAP, and SOCKHASH, giving test coverage for all supported map types. The last two support only TCP sockets at the moment.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-11-jakub@cloudflare.com
| | * | net: Generate reuseport group ID on group creation (Jakub Sitnicki, 2020-02-21; 4 files, -47/+22)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 736b46027eb4 ("net: Add ID (if needed) to sock_reuseport and expose reuseport_lock") has introduced lazy generation of reuseport group IDs that survive group resize. By comparing the identifier we check if BPF reuseport program is not trying to select a socket from a BPF map that belongs to a different reuseport group than the one the packet is for. Because SOCKARRAY used to be the only BPF map type that can be used with reuseport BPF, it was possible to delay the generation of reuseport group ID until a socket from the group was inserted into BPF map for the first time. Now that SOCK{MAP,HASH} can be used with reuseport BPF we have two options, either generate the reuseport ID on map update, like SOCKARRAY does, or allocate an ID from the start when reuseport group gets created. This patch takes the latter approach to keep sockmap free of calls into reuseport code. This streamlines the reuseport_id access as its lifetime now matches the longevity of reuseport object. The cost of this simplification, however, is that we allocate reuseport IDs for all SO_REUSEPORT users. Even those that don't use SOCKARRAY in their setups. With the way identifiers are currently generated, we can have at most S32_MAX reuseport groups, which hopefully is sufficient. If we ever get close to the limit, we can switch an u64 counter like sk_cookie. Another change is that we now always call into SOCKARRAY logic to unlink the socket from the map when unhashing or closing the socket. Previously we did it only when at least one socket from the group was in a BPF map. It is worth noting that this doesn't conflict with sockmap tear-down in case a socket is in a SOCK{MAP,HASH} and belongs to a reuseport group. sockmap tear-down happens first: prot->unhash `- tcp_bpf_unhash |- tcp_bpf_remove | `- while (sk_psock_link_pop(psock)) | `- sk_psock_unlink | `- sock_map_delete_from_link | `- __sock_map_delete | `- sock_map_unref | `- sk_psock_put | `- sk_psock_drop | `- rcu_assign_sk_user_data(sk, NULL) `- inet_unhash `- reuseport_detach_sock `- bpf_sk_reuseport_detach `- WRITE_ONCE(sk->sk_user_data, NULL) Suggested-by: Martin Lau <kafai@fb.com> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200218171023.844439-10-jakub@cloudflare.com
| | * | bpf: Allow selecting reuseport socket from a SOCKMAP/SOCKHASH (Jakub Sitnicki, 2020-02-21; 2 files, -8/+17)
SOCKMAP and SOCKHASH now support storing references to listening sockets. Nothing keeps us from using these map types as a collection of sockets to select from in BPF reuseport programs. Whitelist these map types with the bpf_sk_select_reuseport helper.

The restriction that the socket has to be a member of a reuseport group still applies. Sockets in SOCKMAP/SOCKHASH that don't have sk_reuseport_cb set are not a valid target, and we signal that with -EINVAL.

The main benefit of this change is that, in contrast to REUSEPORT_SOCKARRAY, SOCK{MAP,HASH} don't impose the restriction that a listening socket can be in just one BPF map at the same time.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200218171023.844439-9-jakub@cloudflare.com
| | * | bpf, sockmap: Let all kernel-land lookup values in SOCKMAP/SOCKHASH (Jakub Sitnicki, 2020-02-21; 1 file, -2/+7)
Don't require kernel code, like BPF helpers, that needs access to SOCK{MAP,HASH} map contents to live in net/core/sock_map.c. Expose the lookup operation to all of kernel-land. Lookup from BPF context is not whitelisted yet, while syscalls have a dedicated lookup handler.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-8-jakub@cloudflare.com
| | * | bpf, sockmap: Return socket cookie on lookup from syscall (Jakub Sitnicki, 2020-02-21; 1 file, -4/+53)
Tooling that populates the SOCK{MAP,HASH} with sockets from user-space needs a way to inspect its contents. Returning the struct sock * that the map holds to user-space is neither safe nor useful. An approach established by REUSEPORT_SOCKARRAY is to return a socket cookie (a unique identifier) instead.

Since socket cookies are u64 values, SOCK{MAP,HASH} need to support such a value size for lookup to be possible. This requires special handling on update, though. Attempts to do a lookup on a map holding u32 values will be met with an ENOSPC error.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-7-jakub@cloudflare.com
| | * | bpf, sockmap: Don't set up upcalls and progs for listening sockets (Jakub Sitnicki, 2020-02-21; 1 file, -7/+45)
Now that sockmap/sockhash can hold listening sockets, when setting up the psock we will (1) grab references to the verdict/parser progs, and (2) override the socket upcalls sk_data_ready and sk_write_space. However, since we cannot redirect to listening sockets, we don't need to link the socket to the BPF progs. And more importantly, we don't want the listening socket to have overridden upcalls, because they would get inherited by child sockets cloned from it.

Introduce a separate initialization path for listening sockets that does not change the upcalls and ignores the BPF progs.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-6-jakub@cloudflare.com
| | * | bpf, sockmap: Allow inserting listening TCP sockets into sockmap (Jakub Sitnicki, 2020-02-21; 2 files, -20/+45)
In order for the sockmap/sockhash types to become generic collections for storing TCP sockets, we need to loosen the checks during map update while tightening the checks in the redirect helpers.

Currently sock{map,hash} require the TCP socket to be in established state, which prevents inserting listening sockets. Change the update pre-checks so the socket can also be in listening state. Since it doesn't make sense to redirect with sock{map,hash} to listening sockets, add appropriate socket state checks to the BPF redirect helpers too.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-5-jakub@cloudflare.com
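An illustrative sketch of the relaxed update-side check (the predicate name is made up; the exact set of accepted states in the patch may differ):

    /* A TCP socket may be inserted into sockmap/sockhash when it is either
     * established or listening; the redirect helpers keep the stricter check. */
    static bool sock_map_update_state_allowed(const struct sock *sk)
    {
            return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
    }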
| | * | tcp_bpf: Don't let child socket inherit parent protocol ops on copy (Jakub Sitnicki, 2020-02-21; 3 files, -0/+23)
Prepare for cloning listening sockets that have their protocol callbacks overridden by sk_msg. Child sockets must not inherit parent callbacks that access state stored in sk_user_data owned by the parent. Restore the child socket's protocol callbacks before it gets hashed and before any of the callbacks can get invoked.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200218171023.844439-4-jakub@cloudflare.com
| | * | net, sk_msg: Clear sk_user_data pointer on clone if tagged (Jakub Sitnicki, 2020-02-21; 3 files, -3/+42)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | sk_user_data can hold a pointer to an object that is not intended to be shared between the parent socket and the child that gets a pointer copy on clone. This is the case when sk_user_data points at reference-counted object, like struct sk_psock. One way to resolve it is to tag the pointer with a no-copy flag by repurposing its lowest bit. Based on the bit-flag value we clear the child sk_user_data pointer after cloning the parent socket. The no-copy flag is stored in the pointer itself as opposed to externally, say in socket flags, to guarantee that the pointer and the flag are copied from parent to child socket in an atomic fashion. Parent socket state is subject to change while copying, we don't hold any locks at that time. This approach relies on an assumption that sk_user_data holds a pointer to an object aligned at least 2 bytes. A manual audit of existing users of rcu_dereference_sk_user_data helper confirms our assumption. Also, an RCU-protected sk_user_data is not likely to hold a pointer to a char value or a pathological case of "struct { char c; }". To be safe, warn when the flag-bit is set when setting sk_user_data to catch any future misuses. It is worth considering why clearing sk_user_data unconditionally is not an option. There exist users, DRBD, NVMe, and Xen drivers being among them, that rely on the pointer being copied when cloning the listening socket. Potentially we could distinguish these users by checking if the listening socket has been created in kernel-space via sock_create_kern, and hence has sk_kern_sock flag set. However, this is not the case for NVMe and Xen drivers, which create sockets without marking them as belonging to the kernel. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200218171023.844439-3-jakub@cloudflare.com
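A condensed sketch of the lowest-bit tagging scheme described above (constant and helper names are illustrative):

    /* The low bit of sk_user_data marks a pointer that must not be copied
     * to a child socket on clone; the real pointer is recovered by masking. */
    #define SK_USER_DATA_NOCOPY     1UL
    #define SK_USER_DATA_PTRMASK    (~SK_USER_DATA_NOCOPY)

    static inline bool sk_user_data_is_nocopy(const struct sock *sk)
    {
            return (uintptr_t)sk->sk_user_data & SK_USER_DATA_NOCOPY;
    }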
| | * | net, sk_msg: Annotate lockless access to sk_prot on clone (Jakub Sitnicki, 2020-02-21; 5 files, -7/+14)
The sk_msg and ULP frameworks override the protocol callbacks pointer in sk->sk_prot, while tcp accesses it locklessly when cloning the listening socket, that is, with neither sk_lock nor sk_callback_lock held.

Once we enable the use of listening sockets with sockmap (and hence sk_msg), there will be shared access to sk->sk_prot if the socket is being cloned while it is inserted into or deleted from the sockmap on another CPU:

Read side:

    tcp_v4_rcv
      sk = __inet_lookup_skb(...)
      tcp_check_req(sk)
        inet_csk(sk)->icsk_af_ops->syn_recv_sock
          tcp_v4_syn_recv_sock
            tcp_create_openreq_child
              inet_csk_clone_lock
                sk_clone_lock
                  READ_ONCE(sk->sk_prot)

Write side:

    sock_map_ops->map_update_elem
      sock_map_update_elem
        sock_map_update_common
          sock_map_link_no_progs
            tcp_bpf_init
              tcp_bpf_update_sk_prot
                sk_psock_update_proto
                  WRITE_ONCE(sk->sk_prot, ops)

    sock_map_ops->map_delete_elem
      sock_map_delete_elem
        __sock_map_delete
          sock_map_unref
            sk_psock_put
              sk_psock_drop
                sk_psock_restore_proto
                  tcp_update_ulp
                    WRITE_ONCE(sk->sk_prot, proto)

Mark the shared access with READ_ONCE/WRITE_ONCE annotations.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200218171023.844439-2-jakub@cloudflare.com
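A minimal sketch of the annotation pattern (the function names are illustrative, not the actual call sites):

    /* Writer side, e.g. sockmap installing its proto ops without sk_lock. */
    static void example_swap_proto(struct sock *sk, struct proto *ops)
    {
            WRITE_ONCE(sk->sk_prot, ops);
    }

    /* Reader side, e.g. the clone path copying the parent's proto pointer. */
    static struct proto *example_current_proto(const struct sock *sk)
    {
            return READ_ONCE(sk->sk_prot);
    }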
| * | docs/bpf: Update bpf development Q/A file (Yonghong Song, 2020-02-21; 1 file, -17/+12)
bpf now has its own mailing list, bpf@vger.kernel.org. Update the bpf_devel_QA.rst file to reflect this.

Also, llvm has switched to github, with llvm and clang in the same repo: https://github.com/llvm/llvm-project.git. Update the QA file with the newer build instructions.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200221004354.930952-1-yhs@fb.com
| * | selftests/bpf: Fix trampoline_count clean up logic (Andrii Nakryiko, 2020-02-21; 1 file, -7/+18)
Libbpf's Travis CI tests caught this issue. Ensure bpf_link and bpf_object clean-up is performed correctly.

Fixes: d633d57902a5 ("selftest/bpf: Add test for allowed trampolines count")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20200220230546.769250-1-andriin@fb.com
| * Merge branch 'set_attach_target' (Alexei Starovoitov, 2020-02-21; 5 files, -9/+54)
Eelco Chaudron says:

====================
Currently, when you want to attach a trace program to a bpf program, the section name needs to match the tracepoint/function semantics. However, the addition of the bpf_program__set_attach_target() API allows you to specify the tracepoint/function dynamically.

The call flow would look something like this:

    xdp_fd = bpf_prog_get_fd_by_id(id);
    trace_obj = bpf_object__open_file("func.o", NULL);
    prog = bpf_object__find_program_by_title(trace_obj, "fentry/myfunc");
    bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
    bpf_program__set_attach_target(prog, xdp_fd, "xdpfilt_blk_all");
    bpf_object__load(trace_obj)

v1 -> v2: Remove requirement for attach type hint in API
v2 -> v3: Moved common warning to __find_vmlinux_btf_id, requested by Andrii
          Updated the xdp_bpf2bpf test to use this new API
v3 -> v4: Split up patch, update libbpf.map version
v4 -> v5: Fix return code, and prog assignment in test case
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| | * | selftests/bpf: Update xdp_bpf2bpf test to use new set_attach_target API (Eelco Chaudron, 2020-02-21; 2 files, -5/+15)
Use the new bpf_program__set_attach_target() API in the xdp_bpf2bpf selftest so it can be referenced as an example of how to use it.

Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/158220520562.127661.14289388017034825841.stgit@xdp-tutorial
| | * | libbpf: Add support for dynamic program attach target (Eelco Chaudron, 2020-02-21; 3 files, -4/+36)
Currently, when you want to attach a trace program to a bpf program, the section name needs to match the tracepoint/function semantics. However, the addition of the bpf_program__set_attach_target() API allows you to specify the tracepoint/function dynamically.

The call flow would look something like this:

    xdp_fd = bpf_prog_get_fd_by_id(id);
    trace_obj = bpf_object__open_file("func.o", NULL);
    prog = bpf_object__find_program_by_title(trace_obj, "fentry/myfunc");
    bpf_program__set_expected_attach_type(prog, BPF_TRACE_FENTRY);
    bpf_program__set_attach_target(prog, xdp_fd, "xdpfilt_blk_all");
    bpf_object__load(trace_obj)

Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/158220519486.127661.7964708960649051384.stgit@xdp-tutorial
| | * | libbpf: Bump libbpf current version to v0.0.8 (Eelco Chaudron, 2020-02-21; 1 file, -0/+3)
A new development cycle starts; bump to v0.0.8.

Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/158220518424.127661.8278643006567775528.stgit@xdp-tutorial
| * | libbpf: Relax check whether BTF is mandatory (Andrii Nakryiko, 2020-02-20; 1 file, -3/+1)
If a BPF program is using BTF-defined maps, BTF is required only for libbpf itself to process the map definitions. If, after that, BTF fails to be loaded into the kernel (e.g., if the kernel doesn't support BTF at all), this shouldn't prevent a valid BPF program from loading. The existing retry-without-BTF logic for creating maps will succeed in creating such maps without any problems. So the presence of a .maps section shouldn't make BTF required for the kernel. Update the check accordingly.

Validated by ensuring that a simple BPF program with BTF-defined maps is still loaded on an old kernel without BTF support and that the map is correctly parsed and created.

Fixes: abd29c931459 ("libbpf: allow specifying map definitions using BTF")
Reported-by: Julia Kartseva <hex@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200220062635.1497872-1-andriin@fb.com
| * | selftests/bpf: Fix build of sockmap_ktls.c (Alexei Starovoitov, 2020-02-20; 1 file, -0/+1)
The selftests fail to build with:

    tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c: In function ‘test_sockmap_ktls_disconnect_after_delete’:
    tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c:72:37: error: ‘TCP_ULP’ undeclared (first use in this function)
       72 |  err = setsockopt(cli, IPPROTO_TCP, TCP_ULP, "tls", strlen("tls"));
          |                                     ^~~~~~~

Similar to the commit that fixes the build of sockmap_basic.c on systems with an old /usr/include, fix the build of sockmap_ktls.c.

Fixes: d1ba1204f2ee ("selftests/bpf: Test unhashing kTLS socket after removing from map")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200219205514.3353788-1-ast@kernel.org
| * | selftests/bpf: Change llvm flag -mcpu=probe to -mcpu=v3 (Yonghong Song, 2020-02-20; 1 file, -2/+2)
The latest llvm supports cpu version v3, which is cpu version v1 plus some additional 64-bit jmp insns and 32-bit jmp insn support. In the selftests/bpf Makefile, the llvm flag -mcpu=probe probes the host system. Depending on the compilation environment, it is possible that the probe fails, e.g., due to a memlock issue, which results in generated code with cpu version v1. This may cause confusion, as the same compiler and the same C code generate different byte code in different environments. Change the llvm flag -mcpu=probe to -mcpu=v3 so the generated code is the same regardless of the compilation environment.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200219004236.2291125-1-yhs@fb.com
| * Merge branch 'bpf_read_branch_records' (Alexei Starovoitov, 2020-02-20; 5 files, -2/+309)
| |\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Daniel Xu says: ==================== Branch records are a CPU feature that can be configured to record certain branches that are taken during code execution. This data is particularly interesting for profile guided optimizations. perf has had branch record support for a while but the data collection can be a bit coarse grained. We (Facebook) have seen in experiments that associating metadata with branch records can improve results (after postprocessing). We generally use bpf_probe_read_*() to get metadata out of userspace. That's why bpf support for branch records is useful. Aside from this particular use case, having branch data available to bpf progs can be useful to get stack traces out of userspace applications that omit frame pointers. Changes in v8: - Use globals instead of perf buffer - Call test_perf_branches__detach() before destroying skeleton - Fix typo in docs Changes in v7: - Const-ify and static-ify local var - Documentation formatting Changes in v6: - Move #ifdef a little to avoid unused variable warnings on !x86 - Test negative condition in selftest (-EINVAL on improperly configured perf event) - Skip positive condition selftest on setups that don't support branch records Changes in v5: - Rename bpf_perf_prog_read_branches() -> bpf_read_branch_records() - Rename BPF_F_GET_BR_SIZE -> BPF_F_GET_BRANCH_RECORDS_SIZE - Squash tools/ bpf.h sync into selftest commit Changes in v4: - Add BPF_F_GET_BR_SIZE flag - Return -ENOENT on unsupported architectures - Only accept initialized memory in helper - Check buffer size is multiple of sizeof(struct perf_branch_entry) - Use bpf skeleton in selftest - Add commit messages - Spelling and formatting Changes in v3: - Document filling unused buffer with zero - Formatting fixes - Rebase Changes in v2: - Change to a bpf helper instead of context access - Avoid mentioning Intel specific things ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| | * | selftests/bpf: Add bpf_read_branch_records() selftest (Daniel Xu, 2020-02-20; 3 files, -1/+244)
Add a selftest to test:

 * default bpf_read_branch_records() behavior
 * BPF_F_GET_BRANCH_RECORDS_SIZE flag behavior
 * error path on non branch record perf events
 * using helper to write to stack
 * using helper to write to global

On host with hardware counter support:

    # ./test_progs -t perf_branches
    #27/1 perf_branches_hw:OK
    #27/2 perf_branches_no_hw:OK
    #27 perf_branches:OK
    Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED

On host without hardware counter support (VM):

    # ./test_progs -t perf_branches
    #27/1 perf_branches_hw:OK
    #27/2 perf_branches_no_hw:OK
    #27 perf_branches:OK
    Summary: 1/2 PASSED, 1 SKIPPED, 0 FAILED

Also sync tools/include/uapi/linux/bpf.h.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200218030432.4600-3-dxu@dxuuu.xyz
| | * | bpf: Add bpf_read_branch_records() helper (Daniel Xu, 2020-02-19; 2 files, -1/+65)
Branch records are a CPU feature that can be configured to record certain branches that are taken during code execution. This data is particularly interesting for profile-guided optimizations. perf has had branch record support for a while, but the data collection can be a bit coarse-grained.

We (Facebook) have seen in experiments that associating metadata with branch records can improve results (after postprocessing). We generally use bpf_probe_read_*() to get metadata out of userspace. That's why bpf support for branch records is useful.

Aside from this particular use case, having branch data available to bpf progs can be useful to get stack traces out of userspace applications that omit frame pointers.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200218030432.4600-2-dxu@dxuuu.xyz
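A hedged sketch of how a perf_event BPF program might use the helper (the section name, buffer size and use of global variables are illustrative choices, not prescribed by the patch):

    #include <linux/bpf.h>
    #include <linux/perf_event.h>
    #include <bpf/bpf_helpers.h>

    #define MAX_ENTRIES 32

    /* Globals the program fills in; user space reads them after the run. */
    struct perf_branch_entry entries[MAX_ENTRIES] = {};
    int required_bytes = 0;
    int written = 0;

    SEC("perf_event")
    int on_overflow(void *ctx)
    {
            /* Size query: NULL buffer plus the size flag returns bytes needed. */
            required_bytes = bpf_read_branch_records(ctx, NULL, 0,
                                                     BPF_F_GET_BRANCH_RECORDS_SIZE);

            /* Copy records into the buffer; a negative return means an error,
             * e.g. the perf event was not configured for branch sampling. */
            written = bpf_read_branch_records(ctx, entries, sizeof(entries), 0);
            return 0;
    }

    char _license[] SEC("license") = "GPL";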
| * Merge branch 'bpf-skmsg-simplify-restore' (Daniel Borkmann, 2020-02-19; 2 files, -16/+124)
Jakub Sitnicki says:

====================
This series has been split out from "Extend SOCKMAP to store listening sockets" [0]. I think it stands on its own, and makes the latter series smaller, which will make the review easier, hopefully.

The essence is that we don't need to do a complicated dance in sk_psock_restore_proto, if we agree that the contract with tcp_update_ulp is to restore callbacks even when the socket doesn't use ULP. This is what tcp_update_ulp currently does, and we just make use of it.

The series is accompanied by a test for a particularly tricky case of restoring callbacks when we have both sockmap and tls callbacks configured in sk->sk_prot.

[0] https://lore.kernel.org/bpf/20200127131057.150941-1-jakub@cloudflare.com/
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>