path: root/tools/lib/find_bit.c
Date        Commit message  (Author; files changed, lines removed/added)
2019-05-25  tools: bpf: sync uapi header bpf.h  (Jiong Wang; 1 file, -0/+18)
Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-25  bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32"  (Jiong Wang; 2 files, -1/+21)
x86_64 and AArch64 are perhaps the two arches that run the bpf testsuite most frequently, yet the zero extension insertion pass is not enabled for them because their hardware already performs the extension. It is critical to guarantee the correctness of this pass, as it is expected to be enabled by default for a number of other arches, for example PowerPC, SPARC, arm, NFP etc. Therefore, it would be very useful if there were a way to test this pass on, for example, x86_64. The test methodology employed by this set is "poisoning" useless bits: the high 32 bits of a definition are randomized if they are identified as not being used by any later insn. Such randomization is only enabled under a testing mode gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32". Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
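As a rough illustration (not taken from the patch itself), a test harness could request this poisoning when loading a program through the bpf(2) syscall. The flag name comes from the uapi synced above; the program type and surrounding code are placeholders:

  #include <linux/bpf.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Sketch: load a BPF program with high-32-bit randomization enabled so
   * the verifier's zero-extension analysis can be stress-tested on arches
   * such as x86_64 that normally skip the insertion pass. */
  static int load_prog_rnd_hi32(const struct bpf_insn *insns, unsigned int insn_cnt)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.prog_type  = BPF_PROG_TYPE_SOCKET_FILTER;      /* placeholder type */
          attr.insns      = (__u64)(unsigned long)insns;
          attr.insn_cnt   = insn_cnt;
          attr.license    = (__u64)(unsigned long)"GPL";
          attr.prog_flags = BPF_F_TEST_RND_HI32;               /* poison unused high bits */

          return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
  }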
2019-05-25  bpf: verifier: insert zero extension according to analysis result  (Jiong Wang; 4 files, -0/+52)
After the previous patches, the verifier marks an insn if it really needs zero extension on dst_reg. It is then up to back-ends to decide how to use that information to eliminate unnecessary zero extension code-gen during JIT compilation. One approach is for the verifier to insert explicit zero extensions, in a generic way, for those insns that need them; JIT back-ends then do not generate zero extension for sub-register writes by default. However, only those back-ends which lack hardware zero extension want this optimization. Back-ends like x86_64 and AArch64 have hardware zero extension support, so the insertion should be disabled for them. This patch introduces a new target hook "bpf_jit_needs_zext" which returns false by default, meaning verifier zero extension insertion is disabled by default. A back-end can override this hook to return true if it does not have hardware support and wants the verifier to insert zero extension explicitly. Offload targets do not use this native target hook; instead, they can get the optimization results using bpf_prog_offload_ops.finalize. NOTE: arches can have diverse features; it is possible for one arch to have hardware zero extension support for some sub-register write insns but not for all. For example, PowerPC and SPARC have zero-extended loads, but not for alu32. So when verifier zero extension insertion is enabled, these JIT back-ends need to peephole insns to remove zero extensions inserted for insns that actually have hardware zero extension support. The peephole can be as simple as looking at the next insn: if it is the special zero extension insn, it is safe to eliminate it when the current insn has hardware zero extension support. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
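A hedged sketch of how a back-end without hardware zero extension might opt in; the hook name and its weak default come from this patch, while the arch file placement is illustrative:

  #include <linux/filter.h>

  /* In the arch JIT (e.g. arch/<arch>/net/bpf_jit_comp.c): override the
   * weak default, which returns false, so the verifier inserts explicit
   * zero extensions for this back-end. */
  bool bpf_jit_needs_zext(void)
  {
          return true;
  }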
2019-05-25  bpf: introduce new mov32 variant for doing explicit zero extension  (Jiong Wang; 1 file, -0/+14)
The encoding for this new variant is based on the BPF_X format. The "imm" field used to be 0 only; now it can also be 1, which means zero extension is done unconditionally:

  .code    = BPF_ALU | BPF_MOV | BPF_X
  .dst_reg = DST
  .src_reg = SRC
  .imm     = 1

We use this new form for doing zero extension, for which the verifier will guarantee SRC == DST. Implications on JIT back-ends when doing code-gen for BPF_ALU | BPF_MOV | BPF_X:
1. No change if the hardware already does zero extension unconditionally for sub-register writes.
2. Otherwise, when seeing imm == 1, just generate insns to clear the high 32 bits. There is no need to generate insns for the move itself because, when imm == 1, dst_reg is the same as src_reg.
The interpreter does not need to change either; it already does unconditional zero extension for mov32. One helper macro, BPF_ZEXT_REG, is added to help create a zero extension insn using this new mov32 variant, and one helper function, insn_is_zext, is added to check whether an insn is a zero extension on dst. These will be widely used by a few JIT back-ends in later patches in this set. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
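The helpers named above plausibly look along these lines (a sketch reconstructed from the description, not quoted from the patch; exact placement in include/linux/filter.h is assumed):

  /* Sketch: build the special mov32 with imm == 1, i.e. an explicit
   * zero extension of the low 32 bits of DST. */
  #define BPF_ZEXT_REG(DST)                                 \
          ((struct bpf_insn) {                              \
                  .code    = BPF_ALU | BPF_MOV | BPF_X,     \
                  .dst_reg = DST,                           \
                  .src_reg = DST,                           \
                  .off     = 0,                             \
                  .imm     = 1 })

  /* Sketch: recognize the insn above so a JIT can skip or fold it. */
  static inline bool insn_is_zext(const struct bpf_insn *insn)
  {
          return insn->code == (BPF_ALU | BPF_MOV | BPF_X) && insn->imm == 1;
  }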
2019-05-25  bpf: verifier: mark patched-insn with sub-register zext flag  (Jiong Wang; 1 file, -4/+33)
Patched insns do not go through generic verification and therefore have no zero extension information collected during insn walking. We don't bother analyzing them at the moment; for any sub-register def that comes from them, we just conservatively mark it as needing zero extension. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-25  bpf: verifier: mark verified-insn with sub-register zext flag  (Jiong Wang; 2 files, -16/+171)
The eBPF ISA specification requires the high 32 bits to be cleared whenever a low 32-bit sub-register is written. This applies to the destination register of ALU32 etc. JIT back-ends must guarantee this semantic when doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so the corresponding JIT back-ends don't need to do extra work. However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches (PowerPC, SPARC etc.) need to do explicit zero extension to meet this requirement, otherwise code like the following will fail:

  u64_value = (u64) u32_value
  ... other uses of u64_value

This is because the compiler can exploit the semantic described above and omit the zero extensions when extending u32_value to u64_value, so these JIT back-ends are expected to guarantee it by inserting extra zero extensions, which however can be a significant increase in code size. Some benchmarks show there can be ~40% sub-register writes out of total insns, meaning at least ~40% extra code-gen. One observation is that these extra zero extensions are not always necessary. Take the code snippet above for example: it is possible u32_value will never be cast into a u64, in which case the high 32 bits of u32_value can be ignored and the extra zero extension eliminated. This patch implements this idea: insns defining sub-registers are marked when the high 32 bits of the defined sub-register matter. For the unmarked insns, it is safe to eliminate the high 32-bit clearance. Algo:
- Split read flags into READ32 and READ64.
- Record the index of the insn that does a sub-register write. Keep the index inside the reg state and update it during verifier insn walking.
- A full register read on a sub-register marks its definition insn as needing zero extension on the dst register. A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
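To make the analysis concrete, a hypothetical two-insn fragment (built with the standard insn macros from include/linux/filter.h; the fragment itself is illustrative, not from the patch):

  /* r0's low 32 bits are written by a mov32 (a sub-register def), then
   * all 64 bits of r0 are read by a 64-bit add.  The read64 forces the
   * verifier to mark the mov32 as needing zero extension on back-ends
   * without hardware zext; if the 64-bit read were absent, the
   * clearance could be skipped. */
  struct bpf_insn frag[] = {
          BPF_MOV32_IMM(BPF_REG_0, -1),                  /* sub-register write */
          BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),  /* 64-bit read of r0  */
  };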
2019-05-24  tools/bpf: add selftest in test_progs for bpf_send_signal() helper  (Yonghong Song; 3 files, -0/+250)
The test covered both nmi and tracepoint perf events. $ ./test_progs ... test_send_signal_tracepoint:PASS:tracepoint 0 nsec ... test_send_signal_common:PASS:tracepoint 0 nsec ... test_send_signal_common:PASS:perf_event 0 nsec ... test_send_signal:OK Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-24  tools/bpf: sync bpf uapi header bpf.h to tools directory  (Yonghong Song; 1 file, -1/+16)
The bpf uapi header include/uapi/linux/bpf.h is sync'ed to tools/include/uapi/linux/bpf.h. Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-24  bpf: implement bpf_send_signal() helper  (Yonghong Song; 2 files, -1/+88)
This patch tries to solve the following specific use case. Currently, a bpf program can already collect stack traces through the kernel function get_perf_callchain() when certain events happen (e.g., a cache miss counter or cpu clock counter overflows). But such stack traces are not enough for jitted programs, e.g., hhvm (jitted PHP). To get a real stack trace, the jit engine's internal data structures need to be traversed in order to get the real user functions. The bpf program itself may not be the best place to traverse the jit engine, as the traversal logic could be complex and it is not a stable interface either. Instead, hhvm implements a signal handler, e.g. for SIGALRM, and a set of program locations at which it can dump stack traces. When it receives a signal, it dumps the stack at the next such program location. Such a mechanism can be implemented in the following way:
- a perf ring buffer is created between the bpf program and the tracing app.
- once a particular event happens, the bpf program writes to the ring buffer and the tracing app gets notified.
- the tracing app sends SIGALRM to hhvm.
But this method can have large delays, skewing the profiling results. This patch implements the bpf_send_signal() helper to send a signal to hhvm in real time, resulting in the intended stack traces. Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
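A minimal sketch of a tracing-side program using the new helper; the section name, includes and the chosen signal number are placeholders, not taken from the patch:

  #include <linux/bpf.h>
  #include <linux/bpf_perf_event.h>
  #include "bpf_helpers.h"

  /* Sketch: on each sampled perf event, ask the kernel to deliver a
   * signal to the current (traced) task so its user-space handler can
   * dump a jit-aware stack at a safe program location. */
  SEC("perf_event")
  int notify_profilee(struct bpf_perf_event_data *ctx)
  {
          bpf_send_signal(14 /* SIGALRM, illustrative choice */);
          return 0;
  }

  char _license[] SEC("license") = "GPL";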
2019-05-24  bpftool: update bash-completion w/ new c option for btf dump  (Andrii Nakryiko; 1 file, -4/+17)
Add bash completion for new C btf dump option. Cc: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  bpftool/docs: add description of btf dump C option  (Andrii Nakryiko; 1 file, -15/+20)
Document optional **c** option for btf dump subcommand. Cc: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  bpftool: add C output format option to btf dump subcommand  (Andrii Nakryiko; 1 file, -2/+73)
Utilize libbpf's new btf_dump API to emit BTF as C definitions. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  selftests/bpf: add btf_dump BTF-to-C conversion tests  (Andrii Nakryiko; 10 files, -1/+824)
Add new test_btf_dump set of tests, validating BTF-to-C conversion correctness. Tests rely on clang to generate BTF from provided C test cases. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  libbpf: add btf_dump API for BTF-to-C conversion  (Andrii Nakryiko; 4 files, -1/+1359)
BTF contains enough type information to allow generating a valid, compilable C header with the correct layout of structs/unions and all the typedef/enum definitions. This patch adds a new "object", btf_dump, to facilitate dumping BTF as valid C. btf_dump__dump_type() is the main API; it takes care of dumping out (through a user-provided printf-like callback function) C definitions for a given type ID and its required dependencies. This allows not just dumping out the entirety of BTF types, but also selective filtering based on user-provided criteria with a minimal set of dependent types. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
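A rough usage sketch; the btf_dump__new() argument list and error handling here are assumptions based on the description above, so consult the libbpf header for the authoritative signature:

  #include <stdarg.h>
  #include <stdio.h>
  #include <bpf/btf.h>

  /* Sketch: print the C definition of one type (plus its dependencies)
   * to stdout through a printf-like callback. */
  static void dump_printf(void *ctx, const char *fmt, va_list args)
  {
          vprintf(fmt, args);
  }

  static int dump_one_type(const struct btf *btf, __u32 type_id)
  {
          struct btf_dump *d;
          int err;

          d = btf_dump__new(btf, NULL /* btf_ext */, NULL /* opts */, dump_printf);
          if (!d)                 /* error handling simplified for the sketch */
                  return -1;

          err = btf_dump__dump_type(d, type_id);
          btf_dump__free(d);
          return err;
  }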
2019-05-24  libbpf: switch btf_dedup() to hashmap for dedup table  (Andrii Nakryiko; 1 file, -116/+85)
Utilize libbpf's hashmap as a multimap for the dedup_table implementation. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  selftests/bpf: add tests for libbpf's hashmap  (Andrii Nakryiko; 3 files, -1/+384)
Test all APIs for internal hashmap implementation. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  libbpf: add resizable non-thread safe internal hashmap  (Andrii Nakryiko; 3 files, -1/+403)
There is a need for fast point lookups inside libbpf for multiple use cases (e.g., name resolution for BTF-to-C conversion, by-name lookups in BTF for the upcoming BPF CO-RE relocation support, etc.). This patch implements a simple, resizable, non-thread-safe hashmap using singly linked list chains. Four different insert strategies are supported:
- HASHMAP_ADD - only add key/value if the key doesn't exist yet;
- HASHMAP_SET - add the key/value pair if the key doesn't exist yet; otherwise, update the value;
- HASHMAP_UPDATE - update the value if the key already exists; otherwise, do nothing and return -ENOENT;
- HASHMAP_APPEND - always add the key/value pair, even if the key already exists. This turns the hashmap into a multimap by allowing multiple values to be associated with the same key.
The most useful read API for such a hashmap is hashmap__for_each_key_entry() iteration. If hashmap__find() is still used, it will return the last inserted key/value entry (the first in a bucket chain). For HASHMAP_SET and HASHMAP_UPDATE, the old key/value pair is returned so that the calling code can handle proper memory management, if necessary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
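A hedged usage sketch of the internal API described above; the callback signatures and key/value conventions are inferred from the description and this is not the test code from the later selftest patch:

  #include <stdio.h>
  #include <string.h>
  #include "hashmap.h"    /* libbpf-internal header */

  /* Sketch: hash and equality callbacks for NUL-terminated string keys. */
  static size_t str_hash(const void *key, void *ctx)
  {
          const char *s = key;
          size_t h = 0;

          while (*s)
                  h = h * 31 + (unsigned char)*s++;
          return h;
  }

  static bool str_equal(const void *k1, const void *k2, void *ctx)
  {
          return strcmp(k1, k2) == 0;
  }

  static void demo(void)
  {
          struct hashmap *map = hashmap__new(str_hash, str_equal, NULL);
          void *val;

          hashmap__add(map, "btf_dump", (void *)(long)1);  /* HASHMAP_ADD semantics */
          if (hashmap__find(map, "btf_dump", &val))
                  printf("found: %ld\n", (long)val);
          hashmap__free(map);
  }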
2019-05-24  selftests/bpf: use btf__parse_elf to check presence of BTF/BTF.ext  (Andrii Nakryiko; 1 file, -58/+13)
Switch test_btf.c to rely on btf__parse_elf to check presence of BTF and BTF.ext data, instead of implementing its own ELF parsing. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  bpftool: use libbpf's btf__parse_elf API  (Andrii Nakryiko; 1 file, -109/+8)
Use btf__parse_elf() API, provided by libbpf, instead of implementing ELF parsing by itself. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  libbpf: add btf__parse_elf API to load .BTF and .BTF.ext  (Andrii Nakryiko; 3 files, -0/+135)
Loading BTF and BTF.ext from an ELF file is a common need. Instead of requiring every user to re-implement it, let's provide this API from libbpf itself. It's mostly copy/paste from the `bpftool btf dump` implementation, which will be switched to libbpf's version in the next patch. btf__parse_elf allows loading BTF and, optionally, BTF.ext. This is also useful for tests that need to load and work with BTF from test ELF files. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
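In use it looks roughly like the following; the return-value conventions shown are assumptions, so check btf.h for the authoritative signature:

  #include <bpf/btf.h>

  /* Sketch: parse .BTF (and .BTF.ext, if present) out of an ELF object. */
  static int load_btf_from_elf(const char *path)
  {
          struct btf_ext *btf_ext = NULL;
          struct btf *btf;

          btf = btf__parse_elf(path, &btf_ext);   /* pass NULL to skip .BTF.ext */
          if (!btf)                               /* error handling simplified */
                  return -1;

          /* ... use btf / btf_ext ... */

          btf_ext__free(btf_ext);
          btf__free(btf);
          return 0;
  }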
2019-05-24  libbpf: ensure libbpf.h is included along libbpf_internal.h  (Andrii Nakryiko; 1 file, -0/+2)
libbpf_internal.h expects a number of things from libbpf.h to be defined. This patch makes sure that libbpf.h is always included. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  samples: bpf: Do not define bpf_printk macro  (Michal Rostecki; 10 files, -72/+2)
The bpf_printk macro was moved to bpf_helpers.h which is included in all example programs. Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  selftests: bpf: Move bpf_printk to bpf_helpers.h  (Michal Rostecki; 7 files, -42/+8)
bpf_printk is a macro commonly used to print debug messages from BPF programs, and it was copied into many selftests and samples. Since all of them include bpf_helpers.h, this change moves the macro there. Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
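The macro in question is essentially the familiar bpf_trace_printk() wrapper; a sketch of its shape, reconstructed from common usage rather than quoted from the patch:

  /* Sketch: place the format string on the BPF stack and hand it to the
   * bpf_trace_printk() helper, which writes to the trace pipe. */
  #define bpf_printk(fmt, ...)                                    \
  ({                                                              \
          char ____fmt[] = fmt;                                   \
          bpf_trace_printk(____fmt, sizeof(____fmt),              \
                           ##__VA_ARGS__);                        \
  })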
2019-05-24  bpf: convert explored_states to hash table  (Alexei Starovoitov; 2 files, -5/+19)
All prune points inside a callee bpf function will most likely have different callsites. For example, if function foo() is called from two callsites, half of the explored states in all prune points in foo() will be useless for subsequent walking of one of those callsites. Fortunately the explored_states pruning heuristic keeps the number of states per prune point small, but walking these states is still a waste of cpu time when the callsite of the current state is different from the callsite of the explored state. To improve the pruning logic, convert explored_states into a hash table and use a simple insn_idx ^ callsite hash to select the hash bucket. This optimization has no effect on programs without bpf2bpf calls and drastically improves programs with calls. In the latter case it reduces total memory consumption in 1M scale tests by almost 3 times (peak_states drops from 5752 to 2016). Care should be taken when comparing the states for equivalency: since the same hash bucket can now contain states with different indices, the insn_idx has to be part of verifier_state and compared. Different hash table sizes and different hash functions were explored, but the results were not significantly better than this patch. They can be improved in the future. The hit/miss heuristic does not count an index miscompare as a miss; otherwise verifier stats become unstable when experimenting with different hash functions. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
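Bucket selection as described boils down to something like the following; the table size and helper name are made up for illustration, and only the insn_idx ^ callsite hash is from the commit:

  /* Sketch: pick the explored_states bucket for a verifier state. */
  #define STATE_HTAB_BUCKETS 1024   /* illustrative size, not the kernel's */

  static unsigned int explored_state_bucket(unsigned int insn_idx,
                                            unsigned int callsite)
  {
          return (insn_idx ^ callsite) % STATE_HTAB_BUCKETS;
  }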
2019-05-24  bpf: split explored_states  (Alexei Starovoitov; 2 files, -18/+14)
Split explored_states into a prune_point boolean mark and a linked list of explored states. This removes the STATE_LIST_MARK hack and allows marks to be separate from states. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-24  bpf: cleanup explored_states  (Alexei Starovoitov; 1 file, -9/+20)
Clean up explored_states to prepare for the introduction of a hash table. No functional changes. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-23  selftests/bpf: add pyperf scale test  (Alexei Starovoitov; 5 files, -13/+298)
Add a snippet of the pyperf bpf program used to collect python stack traces, as a scale test for the verifier. At 189 loop iterations llvm 9.0 starts ignoring '#pragma unroll' and generates a partially unrolled loop instead. Hence use 50, 100, and 180 loop iterations to stress test. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-23  selftests/bpf: adjust verifier scale test  (Alexei Starovoitov; 1 file, -14/+17)
Adjust scale tests to check for new jmp sequence limit. BPF_JGT had to be changed to BPF_JEQ because the verifier was too smart. It tracked the known safe range of R0 values and pruned the search earlier before hitting exact 8192 limit. bpf_semi_rand_get() was too (un)?lucky. k = 0; was missing in bpf_fill_scale2. It was testing a bit shorter sequence of jumps than intended. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-23  bpf: bump jmp sequence limit  (Alexei Starovoitov; 1 file, -3/+4)
The limit of 1024 subsequent jumps was causing otherwise valid programs to be rejected. Bump it to 8192 and make the error more verbose. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-23  libbpf: emit diff of mismatched public API, if any  (Andrii Nakryiko; 1 file, -0/+10)
It's easy to have a mismatch between "intended to be public" and actually exposed API functions. While the Makefile does check for this mismatch, when it occurs it's not trivial to determine which functions are accidentally exposed. This patch dumps out a diff showing what's not supposed to be exposed, facilitating easier fixing. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-23  hv_sock: perf: loop in send() to maximize bandwidth  (Sunil Muthuswamy; 1 file, -14/+31)
Currently, the hv_sock send() iterates once over the buffer, puts data into the VMBUS channel and returns. It doesn't take advantage of the case where a reader is simultaneously draining data from the channel. In such a case, send() can maximize the bandwidth (and consequently minimize the cpu cycles) by iterating until the channel is found to be full.

Perf data:
Total Data Transfer: 10GB/iteration
Single threaded reader/writer, Linux hvsocket writer with Windows hvsocket reader
Packet size: 64KB
CPU sys time was captured using the 'time' command for the writer to send 10GB of data.
'Send Buffer Loop' is with the patch applied. The values below are over 10 iterations.

  |---------------------------------------------------------------|
  |        |        Current        |     Send Buffer Loop         |
  |---------------------------------------------------------------|
  |        | Throughput | CPU sys  | Throughput | CPU sys         |
  |        | (MB/s)     | time (s) | (MB/s)     | time (s)        |
  |---------------------------------------------------------------|
  | Min    |    407     |  7.048   |    401     |  5.958          |
  | Max    |    455     |  7.563   |    542     |  6.993          |
  | Avg    |    440     |  7.411   |    451     |  6.639          |
  | Median |    446     |  7.417   |    447     |  6.761          |
  |---------------------------------------------------------------|

Observations:
1. The avg throughput doesn't really change much with this change for this scenario. This is most probably because the bottleneck on throughput is somewhere else.
2. The average system (or kernel) cpu time goes down by 10%+ with this change, for the same amount of data transfer.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com> Reviewed-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  hv_sock: perf: Allow the socket buffer size options to influence the actual socket buffers  (Sunil Muthuswamy; 1 file, -10/+40)
Currently, the hv_sock buffer size is static and can't scale to the bandwidth requirements of the application. This change allows applications to influence the socket buffer sizes using the SO_SNDBUF and SO_RCVBUF socket options.

A few interesting points to note:
1. Since VMBUS does not allow a resize operation of the ring size, the socket buffer size option should be set prior to establishing the connection for it to take effect.
2. Setting the socket option comes with the cost of that much memory being reserved/allocated by the kernel, for the lifetime of the connection.

Perf data:
Total Data Transfer: 1GB
Single threaded reader/writer
Results below are summarized over 10 iterations.

Linux hvsocket writer + Windows hvsocket reader:
  |---------------------------------------------------------------------------------------------|
  | Packet size -> |      128B       |       1KB       |        4KB          |        64KB       |
  |---------------------------------------------------------------------------------------------|
  | SO_SNDBUF size |            Throughput in MB/s (min/max/avg/median)                          |
  |       v        |                                                                             |
  |---------------------------------------------------------------------------------------------|
  | Default        | 109/118/114/116 | 636/774/701/700 | 435/507/480/476     | 410/491/462/470   |
  | 16KB           | 110/116/112/111 | 575/705/662/671 | 749/900/854/869     | 592/824/692/676   |
  | 32KB           | 108/120/115/115 | 703/823/767/772 | 718/878/850/866     | 1593/2124/2000/2085 |
  | 64KB           | 108/119/114/114 | 592/732/683/688 | 805/934/903/911     | 1784/1943/1862/1843 |
  |---------------------------------------------------------------------------------------------|

Windows hvsocket writer + Linux hvsocket reader:
  |---------------------------------------------------------------------------------------------|
  | Packet size -> |      128B       |       1KB       |        4KB          |        64KB       |
  |---------------------------------------------------------------------------------------------|
  | SO_RCVBUF size |            Throughput in MB/s (min/max/avg/median)                          |
  |       v        |                                                                             |
  |---------------------------------------------------------------------------------------------|
  | Default        | 69/82/75/73     | 313/343/333/336 | 418/477/446/445     | 659/701/676/678   |
  | 16KB           | 69/83/76/77     | 350/401/375/382 | 506/548/517/516     | 602/624/615/615   |
  | 32KB           | 62/83/73/73     | 471/529/496/494 | 830/1046/935/939    | 944/1180/1070/1100 |
  | 64KB           | 64/70/68/69     | 467/533/501/497 | 1260/1590/1430/1431 | 1605/1819/1670/1660 |
  |---------------------------------------------------------------------------------------------|

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com> Reviewed-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
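Point 1 above implies a user-space usage pattern like the following sketch (plain socket calls; the buffer size and the connection setup around it are illustrative): set the buffer size before connect() so the ring can be sized accordingly.

  #include <sys/socket.h>

  /* Sketch: request a larger send buffer before the connection is
   * established, since the VMBUS ring cannot be resized afterwards. */
  static int set_sndbuf_then_connect(int fd, const struct sockaddr *addr,
                                     socklen_t addrlen)
  {
          int sndbuf = 64 * 1024;   /* illustrative value */

          if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
                  return -1;
          return connect(fd, addr, addrlen);
  }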
2019-05-23  ipv4/igmp: shrink struct ip_sf_list  (Eric Dumazet; 1 file, -1/+1)
Removing two 4-byte holes allows using the kmalloc-32 kmem cache instead of kmalloc-64 on 64-bit kernels. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  neighbor: Add tracepoint to __neigh_create  (David Ahern; 2 files, -0/+51)
Add tracepoint to __neigh_create to enable debugging of new entries. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  selftests: pmtu: Simplify cleanup and namespace names  (David Ahern; 1 file, -9/+8)
The point of the pause-on-fail argument is to leave the setup as is after a test fails to allow a user to debug why it failed. Move the cleanup after posting the result to the user to make it so. Random names for the namespaces are not user friendly when trying to debug a failure. Make them simpler and more direct for the tests. Run cleanup at the beginning to ensure they are cleaned up if they already exist. Remove cleanup_done. There is no harm in doing cleanup twice; just ignore any errors related to not existing - which is already done. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  selftests: fib-onlink: Make quiet by default  (David Ahern; 1 file, -5/+43)
Add VERBOSE argument to fib-onlink-tests.sh and make output quiet by default. Add getopt parsing of inputs and support for -v (verbose) and -p (pause on fail). Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: Set strict_start_type for routes and rules  (David Ahern; 3 files, -0/+3)
New userspace on an older kernel can send unknown and unsupported attributes, resulting in an incomplete config which is almost always wrong for routing (the few exceptions are passthrough settings like the protocol that installed the route). Set strict_start_type in the policies for IPv4 and IPv6 routes and rules to detect new, unsupported attributes and fail the route add. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv4: Rename and export nh_update_mtu  (David Ahern; 2 files, -2/+3)
Rename nh_update_mtu to fib_nhc_update_mtu and export for use by the nexthop code. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv4: export fib_info_update_nh_saddr  (David Ahern; 2 files, -7/+7)
Add scope as an input argument, rather than relying on the fib_info reference in fib_nh, and export fib_info_update_nh_saddr. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv4: export fib_flush  (David Ahern; 2 files, -1/+2)
As nexthops are deleted, fib entries referencing them are marked dead. Export fib_flush so those entries can be removed in a timely manner. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv4: export fib_check_nh  (David Ahern; 2 files, -6/+8)
Change fib_check_nh to take net, table and scope as input arguments instead of struct fib_config, and export it for use by the nexthop code. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv4: Add function to send route updates  (David Ahern; 2 files, -0/+74)
Add fib_info_notify_update to walk the fib and send RTM_NEWROUTE notifications with NLM_F_REPLACE set for entries linked to a fib_info that have the nh_updated flag set. This helper will be used by the nexthop code to notify userspace of routes that are impacted when a nexthop config is updated via replace. The new function and its helper are similar to how fib_flush and fib_table_flush work for address delete and link down events. This notification is needed for legacy apps that do not understand the new nexthop object. Apps that are nexthop aware can use the RTA_NH_ID attribute in the route notification to just ignore it. In the future this should be wrapped in a sysctl to allow OS'es that are fully updated to avoid the notification storm. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv6: export function to send route updates  (David Ahern; 5 files, -4/+46)
Add fib6_rt_update to send RTM_NEWROUTE with NLM_F_REPLACE set. This helper will be used by the nexthop code to notify userspace of routes that are impacted when a nexthop config is updated via replace. This notification is needed for legacy apps that do not understand the new nexthop object. Apps that are nexthop aware can use the RTA_NH_ID attribute in the route notification to just ignore it. In the future this should be wrapped in a sysctl to allow OS'es that are fully updated to avoid the notification storm. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv6: Add hook to bump sernum for a route to stubs  (David Ahern; 4 files, -0/+11)
Add hook to ipv6 stub to bump the sernum up to the root node for a route. This is needed by the nexthop code when a nexthop config changes. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  ipv6: Add delete route hook to stubs  (David Ahern; 3 files, -0/+8)
Add ip6_del_rt to the IPv6 stub. The hook is needed by the nexthop code to remove entries linked to a nexthop that is getting deleted. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: phy: Make phy_basic_t1_features use base100t1.  (Andrew Lunn; 1 file, -1/+1)
Now that there is a link mode for 100BaseT1, use it in phy_basic_t1_features so T1 PHY drivers will indicate this mode via the Ethtool API. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: phy: Add support for 100BaseT1 and 1000BaseT1  (Andrew Lunn; 2 files, -1/+5)
Add link modes for 100Mbps and 1Gbps over a single pair. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: phy: dp83867: Allocate state struct in probe  (Trent Piepho; 1 file, -14/+19)
This was being done in the config function the first time the PHY was configured. It should be done in the probe method instead. Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Trent Piepho <tpiepho@impinj.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: phy: dp83867: Validate FIFO depth property  (Trent Piepho; 1 file, -2/+15)
Ensure the property is in the valid range and fail when reading the DT if it is not. Also add an error message for the existing failure when the required property is not present. Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Trent Piepho <tpiepho@impinj.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-23  net: phy: dp83867: IO impedance is not dependent on RGMII delay  (Trent Piepho; 1 file, -10/+9)
The driver would only set the IO impedance value when RGMII internal delays were enabled. There is no reason for this. Move the IO impedance block out of the RGMII delay block. Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Trent Piepho <tpiepho@impinj.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>