path: root/net
* udp: Do not pass MSS as parameter to GSO segmentation (Alexander Duyck, 2018-05-09, 2 files, -3/+5)

    There is no point in passing MSS as a parameter for the GSO
    segmentation call, as it is already available via the shared info
    for the skb itself.

    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Acked-by: Willem de Bruijn <willemb@google.com>
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
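A hedged sketch of the shape of this change (function body illustrative, not the exact kernel diff): the segmentation helper reads the MSS from the skb's shared info rather than taking it as an argument.

```c
#include <linux/skbuff.h>
#include <linux/udp.h>

/* Sketch only: the point is where the MSS comes from. */
static struct sk_buff *udp_gso_segment_sketch(struct sk_buff *gso_skb,
					      netdev_features_t features)
{
	/* MSS is already recorded in the GSO skb's shared info,
	 * so callers no longer pass it in.
	 */
	unsigned int mss = skb_shinfo(gso_skb)->gso_size;

	if (gso_skb->len <= sizeof(struct udphdr) + mss)
		return ERR_PTR(-EINVAL);

	return skb_segment(gso_skb, features);
}
```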
* udp: Record gso_segs when supporting UDP segmentation offload (Alexander Duyck, 2018-05-09, 1 file, -0/+2)

    We need to record the number of segments that will be generated
    when this frame is segmented. The expectation is that if gso_size
    is set then gso_segs is set as well. Without this some drivers such
    as ixgbe get confused if they attempt to offload this as they
    record 0 segments for the entire packet instead of the correct
    value.

    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Acked-by: Willem de Bruijn <willemb@google.com>
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
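Roughly, the fix amounts to deriving the segment count from the payload length and gso_size where the USO skb is assembled (a hedged sketch; the exact operands in the kernel may differ):

```c
/* Hedged sketch: record gso_segs alongside gso_size so drivers like
 * ixgbe see the real segment count instead of 0.
 */
skb_shinfo(skb)->gso_size = cork->gso_size;
skb_shinfo(skb)->gso_type = SKB_GSO_UDP_L4;
skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len - sizeof(struct udphdr),
					 cork->gso_size);
```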
* flow_dissector: do not rely on implicit casts (Paolo Abeni, 2018-05-08, 1 file, -1/+1)

    This change fixes a couple of type mismatches reported by the
    sparse tool, explicitly using the requested type for the offending
    arguments.

    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    Acked-by: Jon Maloy <jon.maloy@ericsson.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: core: rework basic flow dissection helper (Paolo Abeni, 2018-05-08, 2 files, -11/+12)

    When the core networking needs to detect the transport offset in a
    given packet and parse it explicitly, a full-blown flow_keys struct
    is used for storage. This patch introduces a smaller key store,
    reworks the basic flow dissect helper to use it, and applies the
    new helper where possible - namely in skb_probe_transport_header().
    The flow dissector data structures used are renamed to match their
    new role more closely.

    The above gives ~50% performance improvement in micro benchmarking
    around skb_probe_transport_header() and ~30% around
    eth_get_headlen(), mostly due to the smaller memset. A small but
    measurable improvement is seen in macro benchmarks as well.

    v1 -> v2: use the new helper in eth_get_headlen() and
    skb_get_poff(), as per DaveM's suggestion

    Suggested-by: David Miller <davem@davemloft.net>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
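The smaller key store is, in essence, a struct carrying only the control and basic keys that transport-offset detection needs (a sketch consistent with the description above):

```c
/* A full struct flow_keys also carries address/port/tag/keyid keys;
 * basic dissection only needs these two, so the memset shrinks a lot.
 */
struct flow_keys_basic {
	struct flow_dissector_key_control control;
	struct flow_dissector_key_basic basic;
};
```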
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next (David S. Miller, 2018-05-08, 4 files, -8/+11)

    Minor conflict in ip_output.c, overlapping changes to the body of
    an if() statement.

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * xfrm: use a dedicated slab cache for struct xfrm_state (Mathias Krause, 2018-05-04, 1 file, -2/+7)

    struct xfrm_state is rather large (768 bytes here) and therefore
    wastes quite a lot of memory as it falls into the kmalloc-1024 slab
    cache, leaving 256 bytes of unused memory per XFRM state object --
    a net waste of 25%.

    Using a dedicated slab cache for struct xfrm_state reduces the
    level of internal fragmentation to a minimum. On my configuration
    SLUB chooses to create a slab cache covering 4 pages holding 21
    objects, resulting in an average memory waste of ~13 bytes per
    object -- a net waste of only 1.6%.

    In my tests this led to memory savings of roughly 2.3MB for 10k
    XFRM states.

    Signed-off-by: Mathias Krause <minipli@googlemail.com>
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
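A hedged sketch of the technique (init hook and helper names illustrative): a kmem cache sized exactly for the object avoids the rounding up to the kmalloc-1024 bucket.

```c
#include <linux/slab.h>

static struct kmem_cache *xfrm_state_cache;

static int __init xfrm_state_cache_init(void)	/* illustrative hook */
{
	/* Objects now come from a cache sized for struct xfrm_state. */
	xfrm_state_cache = kmem_cache_create("xfrm_state",
					     sizeof(struct xfrm_state), 0,
					     SLAB_HWCACHE_ALIGN, NULL);
	return xfrm_state_cache ? 0 : -ENOMEM;
}

static struct xfrm_state *xfrm_state_alloc_sketch(gfp_t gfp)
{
	return kmem_cache_zalloc(xfrm_state_cache, gfp);
}
```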
| * change the comment of vti6_ioctl (Sun Lianwen, 2018-05-01, 1 file, -1/+1)

    The comment of vti6_ioctl() is wrong: it says vti6_tnl_ioctl
    instead of vti6_ioctl.

    Signed-off-by: Sun Lianwen <sunlw.fnst@cn.fujitsu.com>
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
| * xfrm: remove VLA usage in __xfrm6_sort() (Kees Cook, 2018-04-26, 1 file, -4/+2)

    In the quest to remove all stack VLA usage from the kernel[1], just
    use XFRM_MAX_DEPTH as already done for the "class" array. In one
    case the loop will run up to 5 times, in the other caller up to 6.

    [1] https://lkml.org/lkml/2018/3/7/621

    Co-developed-by: Andreas Christoforou <andreaschristofo@gmail.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Stefano Brivio <sbrivio@redhat.com>
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
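The shape of the change, as a hedged before/after sketch:

```c
/* before: stack VLA, sized at run time */
int count[maxclass];

/* after: fixed upper bound, matching the existing "class" array */
int count[XFRM_MAX_DEPTH] = {};
```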
| * udp: enable UDP checksum offload for ESP (Jacek Kalwas, 2018-04-18, 1 file, -1/+1)

    Even when the NIC supports ESP TX checksum offload, skb->ip_summed
    is not set to CHECKSUM_PARTIAL, which results in the checksum being
    calculated by software. The fix enables ESP TX CSUM for UDP by
    extending the condition with a check for NETIF_F_HW_ESP_TX_CSUM.

    Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
* | net: ipv6/gre: Add GRO support (Eran Ben Elisha, 2018-05-08, 1 file, -10/+27)

    Add GRO capability for IPv6 GRE tunnel and ip6erspan tap, via the
    gro_cells infrastructure.

    Performance testing: 55% higher bandwidth. Measured bandwidth of a
    single-thread IPv4 TCP stream over an IPv6 GRE tunnel while GRO on
    the physical interface is disabled.

    CPU: Intel Xeon E312xx (Sandy Bridge)
    NIC: Mellanox Technologies MT27700 Family [ConnectX-4]

    Before (GRO not working in tunnel): 2.47 Gbits/sec
    After (GRO working in tunnel):      3.85 Gbits/sec

    Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
    Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
    CC: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
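A hedged sketch of the gro_cells pattern the tunnel adopts (tunnel struct illustrative): initialize per-netdev GRO cells once, then hand decapsulated skbs to gro_cells_receive() instead of netif_rx() so they can be merged.

```c
#include <net/gro_cells.h>

struct my_tunnel {			/* illustrative private data */
	struct gro_cells gro_cells;
};

static int my_tunnel_init(struct net_device *dev, struct my_tunnel *t)
{
	/* Allocates one NAPI-backed cell per CPU. */
	return gro_cells_init(&t->gro_cells, dev);
}

static int my_tunnel_rx(struct my_tunnel *t, struct sk_buff *skb)
{
	/* Queues the decapsulated skb for GRO instead of netif_rx(). */
	return gro_cells_receive(&t->gro_cells, skb);
}
```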
* | net: ipv6: Fix typo in ipv6_find_hdr() documentation (Tariq Toukan, 2018-05-08, 1 file, -1/+1)

    Fix 'an' to 'and', and use a comma instead of a period.

    Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/9p: correct the variable name in v9fs_get_trans_by_name() comment (Sun Lianwen, 2018-05-08, 1 file, -1/+1)

    The parameter of v9fs_get_trans_by_name(char *s) is named "s", not
    "name".

    Signed-off-by: Sun Lianwen <sunlw.fnst@cn.fujitsu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | vlan: correct the file path in vlan_dev_change_flags() comment (Sun Lianwen, 2018-05-08, 1 file, -1/+3)

    The vlan_flags enum is defined in the include/uapi/linux/if_vlan.h
    file, not in include/linux/if_vlan.h.

    Signed-off-by: Sun Lianwen <sunlw.fnst@cn.fujitsu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (David S. Miller, 2018-05-08, 15 files, -90/+1734)

    Minor conflict: a CHECK was placed into an if() statement in
    net-next, whilst a newline was added to that CHECK call in 'net'.
    Thanks to Daniel for the merge resolution.

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | bpf: add skb_load_bytes_relative helper (Daniel Borkmann, 2018-05-04, 1 file, -0/+45)

    This adds a small BPF helper similar to bpf_skb_load_bytes() that
    is able to load relative to mac/net header offset from the skb's
    linear data. Compared to bpf_skb_load_bytes(), it takes a fifth
    argument, namely start_header, which is either BPF_HDR_START_MAC
    or BPF_HDR_START_NET. This allows for a more flexible alternative
    compared to LD_ABS/LD_IND with negative offset. It's enabled for
    tc BPF programs as well as sock filter program types, where it's
    mainly useful in reuseport programs to ease access to lower header
    data.

    Reference: https://lists.iovisor.org/pipermail/iovisor-dev/2017-March/000698.html

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
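A hedged usage sketch from a tc BPF program: load four bytes at a fixed offset past the network header, independent of where the linear data begins.

```c
/* Inside a tc (cls_act) BPF program; skb is the struct __sk_buff *ctx. */
__u32 word;

if (bpf_skb_load_bytes_relative(skb, 0, &word, sizeof(word),
				BPF_HDR_START_NET) < 0)
	return TC_ACT_OK;	/* out of bounds: let the packet pass */
```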
| * | bpf: implement ld_abs/ld_ind in native bpf (Daniel Borkmann, 2018-05-04, 1 file, -11/+225)

    The main part of this work is to finally allow removal of LD_ABS
    and LD_IND from the BPF core by reimplementing them through native
    eBPF instead. Both LD_ABS/LD_IND were carried over from cBPF, and
    keeping them around in native eBPF caused way more trouble than
    they were actually worth. To list just some of the security issues
    in the past:

      * fdfaf64e7539 ("x86: bpf_jit: support negative offsets")
      * 35607b02dbef ("sparc: bpf_jit: fix loads from negative offsets")
      * e0ee9c12157d ("x86: bpf_jit: fix two bugs in eBPF JIT compiler")
      * 07aee9439454 ("bpf, sparc: fix usage of wrong reg for load_skb_regs after call")
      * 6d59b7dbf72e ("bpf, s390x: do not reload skb pointers in non-skb context")
      * 87338c8e2cbb ("bpf, ppc64: do not reload skb pointers in non-skb context")

    For programs in native eBPF, LD_ABS/LD_IND are pretty much legacy
    these days due to their limitations and the more efficient and
    flexible alternatives developed over time, such as direct packet
    access. LD_ABS/LD_IND only cover 1/2/4 byte loads into a register,
    the load happens in host endianness, and their exception handling
    can yield unexpected behavior. The latter is explained in depth in
    f6b1b3bf0d5f ("bpf: fix subprog verifier bypass by div/mod by 0
    exception") with similar cases of exceptions we had. In native
    eBPF, more recent program types will disable LD_ABS/LD_IND
    altogether through may_access_skb() in the verifier, and given the
    limitations in terms of exception handling, they are also disabled
    in programs that use BPF-to-BPF calls.

    In terms of cBPF, LD_ABS/LD_IND is used in networking programs to
    access packet data. It is not used in seccomp-BPF, but it is used
    by programs doing socket filtering or reuseport demuxing with
    cBPF. This is mostly relevant for applications that have not yet
    migrated to native eBPF.

    The main complexity and source of bugs in LD_ABS/LD_IND comes from
    their implementation in the various JITs. Most of them keep the
    model around from cBPF times by implementing a fastpath written in
    asm. They typically use two CPU registers, hidden from the BPF
    program, for caching the skb's headlen (skb->len - skb->data_len)
    and skb->data. Throughout the JIT phase this requires keeping
    track of whether LD_ABS/LD_IND is used and, if so, the two
    registers need to be recached each time a BPF helper would change
    the underlying packet data in the native eBPF case. At least in
    the eBPF case, available CPU registers are rare, and the
    additional exit path out of the asm-written JIT helper also makes
    it inflexible, since not all parts of the JITer are in control
    from plain C. A LD_ABS/LD_IND implementation in eBPF therefore
    allows to significantly reduce the complexity in JITs, with
    comparable performance results for them, e.g.:

                   test_bpf    tcpdump port 22    tcpdump complex
      x64
      - before     15 21 10    14 19 18
      - after       7 10 10     7 10 15
      arm64
      - before     40 91 92    40 91 151
      - after      51 64 73    51 62 113

    For cBPF we now track any usage of LD_ABS/LD_IND in
    bpf_convert_filter() and cache the skb's headlen and data in the
    cBPF prologue. The BPF_REG_TMP gets remapped from R8 to R2 since
    it's mainly just used as a local temporary variable. This also
    allows the image on x86_64 to shrink slightly for seccomp programs
    since mapping to %rsi is not an ereg. In callee-saved R8 and R9 we
    now track skb data and headlen, respectively. For normal prologue
    emission in the JITs this does not add any extra instructions
    since R8 and R9 are pushed to the stack in any case from the eBPF
    side. cBPF uses the convert_bpf_ld_abs() emitter, which probes the
    fast path inline already and falls back to the
    bpf_skb_load_helper_{8,16,32}() helpers relying on the cached skb
    data and headlen as well. R8 and R9 never need to be reloaded due
    to bpf_helper_changes_pkt_data(), since all skb access in cBPF is
    read-only. Then, for the case of native eBPF, we use the
    bpf_gen_ld_abs() emitter, which calls the
    bpf_skb_load_helper_{8,16,32}_no_cache() helper unconditionally,
    and neither caches skb data and headlen nor has an inlined fast
    path. The reason for the latter is that native eBPF does not have
    any extra registers available anyway, but even if there were, it
    avoids any reload of skb data and headlen in the first place.
    Additionally, for the negative offsets, we provide an alternative
    bpf_skb_load_bytes_relative() helper in eBPF which operates
    similarly to bpf_skb_load_bytes() and allows for more flexibility.
    Tested myself on x64, arm64, s390x; tested by Sandipan on ppc64.

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
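For reference, a classic cBPF LD_ABS load as written with the uapi macros; its semantics (1/2/4 byte host-endian load with an implicit out-of-bounds program exit) are exactly what the rewrite has to reproduce in native eBPF:

```c
#include <linux/filter.h>

/* Load the 2-byte EtherType at offset 12; an out-of-bounds offset
 * aborts the program rather than trapping.
 */
struct sock_filter insns[] = {
	BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
	BPF_STMT(BPF_RET | BPF_A, 0),	/* return the loaded value */
};
```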
| * | bpf: migrate ebpf ld_abs/ld_ind tests to test_verifier (Daniel Borkmann, 2018-05-04, 1 file, -4/+2)

    Remove all eBPF tests involving LD_ABS/LD_IND from test_bpf.ko. The
    reason is that the eBPF tests from the test_bpf module do not go
    via the BPF verifier, and therefore any instruction rewrites from
    the verifier cannot take place. Therefore, move them into
    test_verifier, which runs from user space, so that the verifier can
    rewrite LD_ABS/LD_IND internally in upcoming patches. It will have
    the same effect since runtime tests are also performed from there.
    This also allows to finally unexport bpf_skb_vlan_{push,pop}_proto
    and keep them internal to the core kernel.

    Additionally, also add further cBPF LD_ABS/LD_IND test coverage
    into the test_bpf.ko suite.

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | bpf: prefix cbpf internal helpers with bpf_ (Daniel Borkmann, 2018-05-04, 1 file, -9/+9)

    No change in functionality, just remove the '__' prefix and replace
    it with a 'bpf_' prefix instead. We later add a couple more helpers
    for cBPF, and keeping the scheme with '__' is suboptimal there.

    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: statistics support (Magnus Karlsson, 2018-05-04, 2 files, -1/+49)

    In this commit, a new getsockopt is added: XDP_STATISTICS. This is
    used to obtain stats from the sockets.

    v2: getsockopt now returns size of stats structure.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
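A hedged userspace sketch of the new getsockopt (struct and field names per this series' if_xdp.h UAPI):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>

static void dump_xsk_stats(int xsk_fd)
{
	struct xdp_statistics stats;
	socklen_t optlen = sizeof(stats);

	if (getsockopt(xsk_fd, SOL_XDP, XDP_STATISTICS,
		       &stats, &optlen) == 0)
		printf("rx_dropped=%llu rx_invalid=%llu tx_invalid=%llu\n",
		       (unsigned long long)stats.rx_dropped,
		       (unsigned long long)stats.rx_invalid_descs,
		       (unsigned long long)stats.tx_invalid_descs);
}
```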
| * | xsk: support for Tx (Magnus Karlsson, 2018-05-04, 2 files, -4/+200)

    Here, Tx support is added. The user fills the Tx queue with frames
    to be sent by the kernel, and lets the kernel know using the
    sendmsg syscall.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
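A hedged sketch of the Tx kick described above: once descriptors are on the Tx ring, an empty sendmsg() tells the kernel to start transmitting.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

static void xsk_tx_kick(int xsk_fd)
{
	struct msghdr msg;

	memset(&msg, 0, sizeof(msg));
	/* No payload: the frames to send are already on the Tx ring. */
	if (sendmsg(xsk_fd, &msg, MSG_DONTWAIT) < 0)
		perror("sendmsg");
}
```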
| * | dev: packet: make packet_direct_xmit a common function (Magnus Karlsson, 2018-05-04, 2 files, -37/+43)

    The new dev_direct_xmit will be used by AF_XDP in later commits.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: add Tx queue setup and mmap support (Magnus Karlsson, 2018-05-04, 1 file, -2/+6)

    Another setsockopt (XDP_TX_QUEUE) is added to let the process
    allocate a queue, where the user process can pass frames to be
    transmitted by the kernel. The mmapping of the queue is done using
    the XDP_PGOFF_TX_QUEUE offset.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: add umem completion queue support and mmap (Magnus Karlsson, 2018-05-04, 3 files, -2/+13)

    Here, we add another setsockopt for registered user memory (umem)
    called XDP_UMEM_COMPLETION_QUEUE. Using this socket option, the
    process can ask the kernel to allocate a queue (ring buffer) and
    also mmap it (XDP_UMEM_PGOFF_COMPLETION_QUEUE) into the process.

    The queue is used to explicitly pass ownership of umem frames from
    the kernel to the user process. This will be used by the TX path to
    tell user space that a certain frame has been transmitted and user
    space can use it for something else, if it wishes.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: wire up XDP_SKB side of AF_XDP (Björn Töpel, 2018-05-04, 2 files, -19/+33)

    This commit wires up the xskmap to the XDP_SKB layer.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: wire up XDP_DRV side of AF_XDP (Björn Töpel, 2018-05-04, 1 file, -3/+20)

    This commit wires up the xskmap to the XDP_DRV layer.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP (Björn Töpel, 2018-05-04, 1 file, -0/+5)

    The xskmap is yet another BPF map, very much inspired by
    dev/cpu/sockmap, and is a holder of AF_XDP sockets. A user
    application adds AF_XDP sockets into the map, and by using the
    bpf_redirect_map helper, an XDP program can redirect XDP frames to
    an AF_XDP socket.

    Note that a socket that is bound to a certain ifindex/queue index
    will *only* accept XDP frames from that netdev/queue index. If an
    XDP program tries to redirect from a netdev/queue index other than
    what the socket is bound to, the frame will not be received on the
    socket.

    A socket can reside in multiple maps.

    v3: Fixed race and simplified code.
    v2: Removed one indirection in map lookup.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
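A hedged sketch of an XDP program using the new map type (map definition style as in the samples of that era; assumes xdp_md exposes rx_queue_index): frames arriving on a given queue are redirected to whatever AF_XDP socket user space has installed at that index.

```c
#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC() and helper stubs from samples/tools */

struct bpf_map_def SEC("maps") xsks_map = {
	.type        = BPF_MAP_TYPE_XSKMAP,
	.key_size    = sizeof(int),
	.value_size  = sizeof(int),
	.max_entries = 4,
};

SEC("xdp")
int xdp_redirect_xsk(struct xdp_md *ctx)
{
	int index = ctx->rx_queue_index;

	/* Pass the frame to the AF_XDP socket bound to this queue. */
	return bpf_redirect_map(&xsks_map, index, 0);
}
```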
| * | xsk: add Rx receive functions and poll support (Björn Töpel, 2018-05-04, 4 files, -5/+215)

    Here the actual receive functions of AF_XDP are implemented, which
    in a later commit will be called from the XDP layers. There's one
    set of functions for the XDP_DRV side and another for XDP_SKB
    (generic).

    A new XDP API, xdp_return_buff, is also introduced. It is analogous
    to xdp_return_frame, but acts upon a struct xdp_buff. The API will
    be used by AF_XDP in future commits.

    Support for the poll syscall is also implemented.

    v2: xskq_validate_id did not update cons_tail. The entries variable
    was calculated twice in xskq_nb_avail. Squashed xdp_return_buff
    commit.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: add support for bind for Rx (Magnus Karlsson, 2018-05-04, 5 files, -1/+138)

    Here, the bind syscall is added. Binding an AF_XDP socket means
    associating the socket with an umem, a netdev and a queue index.
    This can be done in two ways.

    The first way is creating a "socket from scratch". Create the umem
    using the XDP_UMEM_REG setsockopt and an associated fill queue with
    XDP_UMEM_FILL_QUEUE. Create the Rx queue using the XDP_RX_QUEUE
    setsockopt. Call bind, passing ifindex and queue index ("channel"
    in ethtool speak).

    The second way to bind a socket is to simply skip the
    umem/netdev/queue index setup and pass another already set up
    AF_XDP socket. The new socket will then have the same
    umem/netdev/queue index as the parent, so it will share the same
    umem. You must also set the flags field in the socket address to
    XDP_SHARED_UMEM.

    v2: Use PTR_ERR instead of passing error variable explicitly.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
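A hedged userspace sketch of the first ("from scratch") variant, using the sockaddr_xdp layout described above:

```c
#include <string.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/if_xdp.h>

#ifndef AF_XDP
#define AF_XDP 44	/* address family number as added by this series */
#endif

static int xsk_bind_queue(int xsk_fd, const char *ifname, __u32 queue_id)
{
	struct sockaddr_xdp sxdp;

	memset(&sxdp, 0, sizeof(sxdp));
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex(ifname);
	sxdp.sxdp_queue_id = queue_id;	/* "channel" in ethtool speak */

	return bind(xsk_fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
}
```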
| * | xsk: add Rx queue setup and mmap support (Björn Töpel, 2018-05-04, 3 files, -12/+42)

    Another setsockopt (XDP_RX_QUEUE) is added to let the process
    allocate a queue, where the kernel can pass completed Rx frames to
    the user process. The mmapping of the queue is done using the
    XDP_PGOFF_RX_QUEUE offset.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * | xsk: add umem fill queue support and mmap (Magnus Karlsson, 2018-05-04, 6 files, -2/+168)

    Here, we add another setsockopt for registered user memory (umem)
    called XDP_UMEM_FILL_QUEUE. Using this socket option, the process
    can ask the kernel to allocate a queue (ring buffer) and also mmap
    it (XDP_UMEM_PGOFF_FILL_QUEUE) into the process.

    The queue is used to explicitly pass ownership of umem frames from
    the user process to the kernel. These frames will in a later patch
    be filled in with Rx packet data by the kernel.

    v2: Fixed potential crash in xsk_mmap.

    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
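A hedged sketch of the described sequence (the ring's byte size is taken as a parameter here; the real layout and size are defined by the kernel side of this series):

```c
#include <sys/mman.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>

static void *setup_fill_queue(int xsk_fd, int ndescs, size_t ring_sz)
{
	/* Ask the kernel to allocate a fill ring with ndescs entries. */
	if (setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_FILL_QUEUE,
		       &ndescs, sizeof(ndescs)) < 0)
		return NULL;

	/* Map the ring via its dedicated page offset; ring_sz must
	 * match the kernel's ring layout (hedged here).
	 */
	return mmap(NULL, ring_sz, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_POPULATE, xsk_fd,
		    XDP_UMEM_PGOFF_FILL_QUEUE);
}
```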
| * | xsk: add user memory registration support sockopt (Björn Töpel, 2018-05-04, 6 files, -0/+531)

    In this commit the base structure of the AF_XDP address family is
    set up. Further, we introduce the ability to register a window of
    user memory to the kernel via the XDP_UMEM_REG setsockopt syscall.
    The memory window is viewed by an AF_XDP socket as a set of equally
    large frames. After a user memory registration, all frames are
    "owned" by the user application, and not the kernel.

    v2: More robust checks on umem creation and unaccount on error.
    Call set_page_dirty_lock on cleanup. Simplified xdp_umem_reg.

    Co-authored-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
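A hedged sketch of the registration call (struct field names per this series' UAPI; frame counts and buffer allocation illustrative):

```c
#include <sys/socket.h>
#include <linux/if_xdp.h>

#define FRAME_SIZE 2048		/* illustrative */
#define NUM_FRAMES 4096		/* illustrative */

static int xsk_umem_register(int xsk_fd, void *bufs)
{
	struct xdp_umem_reg mr = {
		.addr = (__u64)(unsigned long)bufs,	/* page-aligned */
		.len = (__u64)NUM_FRAMES * FRAME_SIZE,
		.frame_size = FRAME_SIZE,
		.frame_headroom = 0,
	};

	return setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));
}
```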
| * | net: initial AF_XDP skeleton (Björn Töpel, 2018-05-04, 3 files, -4/+16)

    Buildable skeleton of AF_XDP without any functionality. Just what
    it takes to register a new address family.

    Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next (David S. Miller, 2018-05-07, 81 files, -1663/+2471)

    Pablo Neira Ayuso says:

    ====================
    Netfilter/IPVS updates for net-next

    The following patchset contains Netfilter/IPVS updates for your
    net-next tree. The more relevant updates in this batch are:

    1) Add Maglev support to IPVS. Moreover, store the latest server
       weight in IPVS since this is needed by Maglev. Patches from
       Inju Song.

    2) Preparation works to add iptables flowtable support, patches
       from Felix Fietkau.

    3) Hand over flows back to the conntrack slow path, via the new
       teardown state, in case a TCP RST/FIN packet is seen, also from
       Felix.

    4) Add support for extended netlink error reporting for nf_tables.

    5) Support for timeouts larger than 23 days in nf_tables, patch
       from Florian Westphal.

    6) Always set an upper limit to dynamic sets, also from Florian.

    7) Allow the number generator to make map lookups, from Laura
       Garcia.

    8) Use hash_32() instead of opencoded hashing in IPVS, from
       Vincent Bernat.

    9) Extend the ip6tables SRH match to support previous, next and
       last SID, from Ahmed Abdelsalam.

    10) Move Passive OS fingerprinting to nf_osf.c, from Fernando
        Fernandez.

    11) Expose nf_conntrack_max through ctnetlink, from Florent
        Fourcot.

    12) Several housekeeping patches for xt_NFLOG, x_tables and
        ebtables, from Taehee Yoo.

    13) Unify meta bridge with core nft_meta, then make nft_meta
        built-in. Make rt and exthdr built-in too, again from Florian.

    14) Missing initialization of tbl->entries in IPVS, from Cong
        Wang.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | | netfilter: nft_dynset: fix timeout updates on 32bit (Florian Westphal, 2018-05-07, 1 file, -1/+1)

    This must now use a 64bit jiffies value, else we set a bogus
    timeout on 32bit.

    Fixes: 8e1102d5a1596 ("netfilter: nf_tables: support timeouts larger than 23 days")
    Signed-off-by: Florian Westphal <fw@strlen.de>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | netfilter: ctnetlink: export nf_conntrack_max (Florent Fourcot, 2018-05-07, 2 files, -0/+4)

    The IPCTNL_MSG_CT_GET_STATS netlink command allows monitoring the
    current number of conntrack entries. However, if one wants to
    compare it with the maximum (and detect exhaustion), the only
    solution is currently to read the sysctl value.

    This patch adds the nf_conntrack_max value to the netlink message,
    simplifying monitoring for applications built on the netlink API.

    Signed-off-by: Florent Fourcot <florent.fourcot@wifirst.fr>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | netfilter: extract Passive OS fingerprint infrastructure from xt_osf (Fernando Fernandez Mancera, 2018-05-07, 4 files, -200/+225)

    Add nf_osf_ttl() and nf_osf_match() into nf_osf.c to prepare for
    nf_tables support.

    Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | netfilter: nf_nat: remove unused ct arg from lookup functions (Florian Westphal, 2018-05-06, 6 files, -26/+14)

    Signed-off-by: Florian Westphal <fw@strlen.de>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | netfilter: ip6t_srh: extend SRH matching for previous, next and last SID (Ahmed Abdelsalam, 2018-05-06, 1 file, -9/+164)

    The IPv6 Segment Routing Header (SRH) contains a list of SIDs to be
    crossed by an SR encapsulated packet. Each SID is encoded as an
    IPv6 prefix.

    When a firewall receives an SR encapsulated packet, it should be
    able to identify which node previously processed the packet
    (previous SID), which node is going to process the packet next
    (next SID), and which node is the last to process the packet (last
    SID), which represents the final destination of the packet in case
    of inline SR mode.

    An example use-case of these features could be a SID list that
    includes two firewalls. When the second firewall receives a packet,
    it can check whether the packet has been processed by the first
    firewall or not. Based on that check, it decides to apply all
    rules, apply just a subset of the rules, or totally skip all rules
    and forward the packet to the next SID.

    This patch extends the SRH match to support matching the previous
    SID, next SID, and last SID.

    Signed-off-by: Ahmed Abdelsalam <amsalam20@gmail.com>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | netfilter: nft_numgen: enable hashing of one element (Laura Garcia Liebana, 2018-05-06, 1 file, -1/+1)

    The modulus in the hash function was limited to > 1, as initially
    there was no sense in hashing over just one element. Nevertheless,
    there are certain cases, especially for load balancing, where this
    case needs to be addressed.

    This patch fixes the following error:

    Error: Could not process rule: Numerical result out of range
    add rule ip nftlb lb01 dnat to jhash ip saddr mod 1 map { 0: 192.168.0.10 }

    The fix is to force the hash to 0 when the modulus is 1.

    Signed-off-by: Laura Garcia Liebana <nevola@gmail.com>
| * | | netfilter: nft_numgen: add map lookups for numgen statements (Laura Garcia Liebana, 2018-05-06, 1 file, -5/+80)

    This patch includes a new attribute in the numgen structure to
    allow the lookup of an element based on the number generator as a
    key. For this purpose, different ops have been included to extend
    the current numgen inc functions.

    Currently, this is only supported for numgen incremental
    operations; it will be supported for random in a follow-up patch.

    Signed-off-by: Laura Garcia Liebana <nevola@gmail.com>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | ipvs: initialize tbl->entries in ip_vs_lblc_init_svc() (Cong Wang, 2018-04-27, 1 file, -0/+1)

    Similarly, tbl->entries is not initialized after kmalloc(),
    therefore causes an uninit-value warning in
    ip_vs_lblc_check_expire(), as reported by syzbot.

    Reported-by: <syzbot+3e9695f147fb529aa9bc@syzkaller.appspotmail.com>
    Cc: Simon Horman <horms@verge.net.au>
    Cc: Julian Anastasov <ja@ssi.bg>
    Cc: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
    Acked-by: Julian Anastasov <ja@ssi.bg>
    Acked-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | ipvs: initialize tbl->entries after allocation (Cong Wang, 2018-04-27, 1 file, -0/+1)

    tbl->entries is not initialized after kmalloc(), therefore causes
    an uninit-value warning in ip_vs_lblc_check_expire(), as reported
    by syzbot.

    Reported-by: <syzbot+3dfdea57819073a04f21@syzkaller.appspotmail.com>
    Cc: Simon Horman <horms@verge.net.au>
    Cc: Julian Anastasov <ja@ssi.bg>
    Cc: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
    Acked-by: Julian Anastasov <ja@ssi.bg>
    Acked-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | Merge tag 'ipvs-for-v4.18' of http://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs-next (Pablo Neira Ayuso, 2018-04-27, 9 files, -6/+592)

    Simon Horman says:

    ====================
    IPVS Updates for v4.18

    Please consider these IPVS enhancements for v4.18.

    * Whitespace cleanup.

    * Add Maglev hashing algorithm as an IPVS scheduler.

      Inju Song says: "Implements the Google's Maglev hashing algorithm
      as an IPVS scheduler. Basically it provides consistent hashing
      but offers some special features about disruption and load
      balancing.

      1) minimal disruption: when the set of destinations changes, a
         connection will likely be sent to the same destination as it
         was before.

      2) load balancing: each destination will receive an almost equal
         number of connections.

      See also: [3.4 Consistent Hashing] in
      https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-eisenbud.pdf"

    * Fix to correct the implementation of Knuth's multiplicative
      hashing, which is used in the sh/dh/lblc/lblcr algorithms.
      Instead, the implementation provided by the hash_32() macro is
      used.
    ====================

    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| | * | | ipvs: fix multiplicative hashing in sh/dh/lblc/lblcr algorithms (Vincent Bernat, 2018-04-09, 4 files, -4/+8)

    The sh/dh/lblc/lblcr algorithms are using Knuth's multiplicative
    hashing incorrectly. Replace its use by the hash_32() macro, which
    correctly implements this algorithm. It doesn't use the same
    constant, but it shouldn't matter.

    Signed-off-by: Vincent Bernat <vincent@bernat.im>
    Acked-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
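A hedged sketch of what the corrected hashing looks like (simplified from the sh scheduler; IPv6 address folding and port mixing omitted):

```c
#include <linux/hash.h>

/* hash_32() implements Knuth multiplicative hashing correctly:
 * multiply by a constant, then keep the top IP_VS_SH_TAB_BITS bits.
 */
static inline unsigned int
ip_vs_addr_hashkey(__be32 addr_fold, unsigned int offset)
{
	return (offset + hash_32(ntohl(addr_fold), IP_VS_SH_TAB_BITS))
		& IP_VS_SH_TAB_MASK;
}
```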
| | * | | netfilter: ipvs: Add configurations of Maglev hashing (Inju Song, 2018-04-09, 2 files, -0/+38)

    To build the Maglev hashing scheduler, add some configuration to
    Kconfig and Makefile.

    - The compile configurations of MH are added to the Kconfig.
    - The MH build rule is added to the Makefile.

    Signed-off-by: Inju Song <inju.song@navercorp.com>
    Signed-off-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
| | * | | netfilter: ipvs: Add Maglev hashing scheduler (Inju Song, 2018-04-09, 1 file, -0/+540)

    Implements Google's Maglev hashing algorithm as an IPVS scheduler.
    Basically it provides consistent hashing, but offers some special
    features regarding disruption and load balancing.

    1) minimal disruption: when the set of destinations changes, a
       connection will likely be sent to the same destination as it
       was before.

    2) load balancing: each destination will receive an almost equal
       number of connections.

    See also for detail: [3.4 Consistent Hashing] in
    https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-eisenbud.pdf

    Signed-off-by: Inju Song <inju.song@navercorp.com>
    Signed-off-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
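For reference, a hedged sketch of Maglev table population as described in §3.4 of the paper (not the kernel implementation): M is the prime table size, and each destination i derives offset[i] and skip[i] from hashes of its name. Destinations walk their own permutation of slots and claim the first free one, round-robin, which yields both properties above.

```c
#define MAGLEV_MAX_DESTS 64	/* illustrative bound */

/* Assumes ndest >= 1, M prime, and skip[i] in [1, M-1] so each
 * destination's permutation visits every slot exactly once.
 */
static void maglev_populate(void *dest[], unsigned int ndest,
			    const unsigned int offset[],
			    const unsigned int skip[],
			    void *table[], unsigned int M)
{
	unsigned int next[MAGLEV_MAX_DESTS] = { 0 };
	unsigned int n = 0, i;

	while (n < M) {
		for (i = 0; i < ndest && n < M; i++) {
			unsigned int c;

			/* Walk dest i's permutation to a free slot. */
			do {
				c = (offset[i] + next[i] * skip[i]) % M;
				next[i]++;
			} while (table[c]);

			table[c] = dest[i];
			n++;
		}
	}
}
```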
| | * | | netfilter: ipvs: Keep latest weight of destination (Inju Song, 2018-04-09, 1 file, -0/+4)

    The hashing table in a scheduler such as source hash or Maglev hash
    should ignore a weight change to 0, while allowing the weight to
    change between non-0 values. So, struct ip_vs_dest needs to keep
    the latest non-0 weight.

    Signed-off-by: Inju Song <inju.song@navercorp.com>
    Signed-off-by: Julian Anastasov <ja@ssi.bg>
    Signed-off-by: Simon Horman <horms@verge.net.au>
| | * | | netfilter: ipvs: Fix space before '[' error. (Arvind Yadav, 2018-04-09, 1 file, -2/+2)

    Fix checkpatch.pl error:
    ERROR: space prohibited before open square bracket '['.

    Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
    Signed-off-by: Simon Horman <horms@verge.net.au>
| * | | | netfilter: nf_tables: merge exthdr expression into nft core (Florian Westphal, 2018-04-27, 4 files, -29/+4)

    before:
        text    data    bss     dec     hex  filename
        5056     844      0    5900    170c  net/netfilter/nft_exthdr.ko
      102456    2316    401  105173   19ad5  net/netfilter/nf_tables.ko

    after:
      106410    2392    401  109203   1aa93  net/netfilter/nf_tables.ko

    Signed-off-by: Florian Westphal <fw@strlen.de>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
| * | | | netfilter: nf_tables: merge rt expression into nft core (Florian Westphal, 2018-04-27, 4 files, -29/+3)

    before:
        text    data    bss     dec     hex  filename
        2657     844      0    3501     dad  net/netfilter/nft_rt.ko
      100826    2240    401  103467   1942b  net/netfilter/nf_tables.ko

    after:
        2657     844      0    3501     dad  net/netfilter/nft_rt.ko
      102456    2316    401  105173   19ad5  net/netfilter/nf_tables.ko

    Signed-off-by: Florian Westphal <fw@strlen.de>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>