2017-06-07  net: phy: add XAUI and 10GBASE-KR PHY connection types  (Russell King; 3 files, -1/+10)
XAUI allows XGMII to reach an extended distance by using an XGXS layer at each end of the MAC to PHY link, operating over four Serdes lanes. 10GBASE-KR is a single-lane Serdes backplane ethernet connection method with autonegotiation on the link. Some PHYs use this to connect to the ethernet interface at 10G speeds, switching to other connection types when utilising slower speeds. 10GBASE-KR is also used for XFI and SFI to connect to XFP and SFP fiber modules.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
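As a standalone sketch, the two new connection types slot into the phy_interface_t enum like this (illustrative excerpt only; the real definition in include/linux/phy.h carries many more modes):

	/* Illustrative sketch; see include/linux/phy.h for the full enum. */
	typedef enum {
		PHY_INTERFACE_MODE_XGMII,	/* existing 10G parallel interface */
		PHY_INTERFACE_MODE_XAUI,	/* XGMII over four Serdes lanes via XGXS */
		PHY_INTERFACE_MODE_10GKR,	/* 10GBASE-KR backplane; also used for XFI/SFI */
	} phy_interface_t;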
2017-06-07  net: phy: split out 10G genphy support  (Russell King; 2 files, -86/+83)
Move the old 10G genphy support to sit beside the new clause 45 library functions, so all the 10G phy code is together.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-07  net: phy: hook up clause 45 autonegotiation restart  (Russell King; 2 files, -2/+22)
genphy_restart_aneg() can only restart autonegotiation on clause 22 PHYs. Add a phy_restart_aneg() function which selects between the clause 22 and clause 45 restart functionality depending on the PHY type and whether the Clause 45 PHY supports the Clause 22 register set.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
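In sketch form, the selection looks like this (helper names follow the clause 45 library added in this series; a Clause 45 PHY advertises Clause 22 register support via bit 0 of its devices-in-package bitmap):

	int phy_restart_aneg(struct phy_device *phydev)
	{
		/* Clause 45 PHY without the Clause 22 register set:
		 * restart via the MMD registers instead. */
		if (phydev->is_c45 &&
		    !(phydev->c45_ids.devices_in_package & BIT(0)))
			return genphy_c45_restart_aneg(phydev);

		return genphy_restart_aneg(phydev);
	}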
2017-06-07  net: phy: avoid genphy_aneg_done() for PHYs without clause 22 support  (Russell King; 1 file, -0/+6)
Avoid calling genphy_aneg_done() for PHYs that do not implement the Clause 22 register set. Clause 45 PHYs may implement the Clause 22 register set along with the Clause 22 extension MMD. Hence, we can't simply block access to the Clause 22 functions based on the PHY being a Clause 45 PHY.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-07  net: phy: add 802.3 clause 45 support to phylib  (Russell King; 4 files, -15/+253)
Add generic helpers for 802.3 clause 45 PHYs for >= 10Gbps support.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  compiler, clang: suppress warning for unused static inline functions  (David Rientjes; 1 file, -0/+7)
GCC explicitly does not warn for unused static inline functions for -Wunused-function. The manual states:

	Warn whenever a static function is declared but not defined or
	a non-inline static function is unused.

Clang does warn for static inline functions that are unused. It turns out that suppressing the warnings avoids potentially complex #ifdef directives, which also reduces LOC. Suppress the warning for clang.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
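The suppression amounts to roughly the following in include/linux/compiler-clang.h (a sketch of the approach, not a verbatim quote of the patch):

	/*
	 * GCC does not warn about unused static inline functions for
	 * -Wunused-function; mark every inline function as potentially
	 * unused so clang behaves the same.
	 */
	#define inline inline __attribute__((unused))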
2017-06-06  sparc64: delete old wrap code  (Pavel Tatashin; 6 files, -45/+1)
The old method, which used xcall and softint to get a new context id, is deleted; it is replaced by a method that uses per_cpu_secondary_mm without xcall to perform the context wrap.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc64: new context wrap  (Pavel Tatashin; 1 file, -27/+54)
The current wrap implementation has a race issue: it is called outside of the ctx_alloc_lock, and also does not wait for all CPUs to complete the wrap. This means that a thread can get a new context with a new version while another thread might still be running with the same context. The problem is especially severe on CPUs with shared TLBs, like sun4v. I used the following test to very quickly reproduce the problem:
- start over 8K processes (must be more than the number of context IDs)
- write and read values at a memory location in every process

Very quickly memory corruptions start happening, and what we read back does not equal what we wrote.

Several approaches were explored before settling on this one:

Approach 1: Move smp_new_mmu_context_version() inside ctx_alloc_lock, and wait for every process to complete the wrap. (Note: every CPU must WAIT before leaving smp_new_mmu_context_version_client() until every one arrives.) This approach ends up with deadlocks, as some threads own locks which other threads are waiting for, and they never receive softint until these threads exit smp_new_mmu_context_version_client(). Since we do not allow the exit, deadlock happens.

Approach 2: Handle wrap right during the mondo interrupt. Use etrap/rtrap to enter into C code, and issue new versions to every CPU. This approach adds some overhead to runtime: in switch_mm() we must add some checks to make sure that versions have not changed due to wrap while we were loading the new secondary context. (This could be protected by PSTATE_IE, but that degrades performance on M7 and older CPUs, as it takes 50 cycles for each access.) Also, we still need a global per-cpu array of mms to know where we need to load new contexts; otherwise we can change context to a thread that is going away (if we received the mondo between switch_mm() and switch_to() time). Finally, there are some issues with window registers in rtrap() when context IDs are changed during CPU mondo time.

The approach in this patch is the simplest and has almost no impact on runtime. We use the array with mms where the last secondary contexts were loaded onto CPUs and bump their versions to the new generation without changing context IDs. If a new process comes in to get a context ID, it will go through get_new_mmu_context() because of the version mismatch. But the running processes do not need to be interrupted, and the wrap is quicker as we do not need to xcall and wait for everyone to receive and complete the wrap.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
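A condensed sketch of the chosen approach (names follow the description above and the per_cpu_secondary_mm patch below; the context-bitmap rebuild and kernel-context reservation are elided):

	static void mmu_context_wrap(void)
	{
		unsigned long old_ver = tlb_context_cache & CTX_VERSION_MASK;
		unsigned long new_ver = old_ver + CTX_FIRST_VERSION;
		struct mm_struct *mm;
		int cpu;

		tlb_context_cache = new_ver;

		/* Make sure mms stored into per_cpu_secondary_mm after this
		 * point see the new version and take get_new_mmu_context(). */
		smp_mb();

		/* Bump the version of every mm that currently owns a valid
		 * secondary context, without changing its context id. */
		for_each_online_cpu(cpu) {
			mm = per_cpu(per_cpu_secondary_mm, cpu);
			if (!mm || mm == &init_mm)
				continue;
			if ((mm->context.sparc64_ctx_val & CTX_VERSION_MASK) == old_ver)
				mm->context.sparc64_ctx_val =
					(mm->context.sparc64_ctx_val & ~CTX_VERSION_MASK) | new_ver;
		}
	}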
2017-06-06  sparc64: add per-cpu mm of secondary contexts  (Pavel Tatashin; 2 files, -2/+4)
The new wrap is going to use information from this array to figure out mm's that currently have valid secondary contexts setup.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc64: redefine first version  (Pavel Tatashin; 2 files, -4/+4)
CTX_FIRST_VERSION defines the first context version, but it also defines the first context. This patch redefines it to only include the first context version.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc64: combine activate_mm and switch_mm  (Pavel Tatashin; 1 file, -20/+1)
The only difference between these two functions is that in activate_mm we unconditionally flush the context. However, there is no need to keep this difference after fixing a bug where the cpumask was not reset on a wrap. So, in this patch we combine them.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc64: reset mm cpumask after wrap  (Pavel Tatashin; 1 file, -0/+2)
After a wrap (getting a new context version) a process must get a new context id, which means that we would need to flush the context id from the TLB before running for the first time with this ID on every CPU. But, we use mm_cpumask to determine if this process has been running on this CPU before, and this mask is not reset after a wrap. So, there are two possible fixes for this issue:

1. Clear the mm cpumask whenever the mm gets a new context id
2. Unconditionally flush the context every time the process runs on a CPU

This patch implements the first solution.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
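A minimal sketch of solution 1, placed in get_new_mmu_context() right after the mm is handed its fresh id (placement illustrative):

	mm->context.sparc64_ctx_val = new_ctx;
	/* No CPU holds a valid TLB entry for the new id: drop the
	 * history so the first run on any CPU flushes the context. */
	cpumask_clear(mm_cpumask(mm));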
2017-06-06  sparc/mm/hugepages: Fix setup_hugepagesz for invalid values.  (Liam R. Howlett; 1 file, -1/+2)
hugetlb_bad_size needs to be called on invalid values. Also change the pr_warn to a pr_err to better align with other platforms.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc: Machine description indices can vary  (James Clarke; 2 files, -4/+65)
VIO devices were being looked up by their index in the machine description node block, but this often varies over time as devices are added and removed. Instead, store the ID and look up using the type, config handle and ID.

Signed-off-by: James Clarke <jrtc27@jrtc27.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=112541
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  sparc64: mm: fix copy_tsb to correctly copy huge page TSBs  (Mike Kravetz; 2 files, -6/+12)
When a TSB grows beyond its current capacity, a new TSB is allocated and copy_tsb is called to copy entries from the old TSB to the new. A hash shift based on page size is used to calculate the index of an entry in the TSB. copy_tsb has hard-coded PAGE_SHIFT in these calculations. However, for huge page TSBs the value REAL_HPAGE_SHIFT should be used. As a result, when copy_tsb is called for a huge page TSB the entries are placed at the incorrect index in the newly allocated TSB. When doing a hardware table walk, the MMU does not match these entries and we end up in the TSB miss handling code. This code will then create and write an entry to the correct index in the TSB. We take a performance hit for the table walk miss and recreation of these entries.

Pass a new parameter to copy_tsb that is the page size shift to be used when copying the TSB.

Suggested-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
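The indexing in question, sketched in C (copy_tsb itself is assembly in arch/sparc/kernel/tsb.S; this helper is purely illustrative):

	/* Old code hard-coded shift = PAGE_SHIFT; huge-page TSBs need
	 * REAL_HPAGE_SHIFT, which the caller now passes in. */
	static inline unsigned long tsb_index(unsigned long vaddr,
					      unsigned long shift,
					      unsigned long nentries)
	{
		return (vaddr >> shift) & (nentries - 1);
	}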
2017-06-06  arch/sparc: support NR_CPUS = 4096  (Jane Chu; 2 files, -6/+15)
Linux SPARC64 limits NR_CPUS to 4064 because init_cpu_send_mondo_info() only allocates a single page for NR_CPUS mondo entries. Thus we cannot use all 4096 CPUs on some SPARC platforms.

To fix, allocate (2^order) pages where order is set according to the size of cpu_list for possible cpus. Since cpu_list_pa and cpu_mondo_block_pa are not used in asm code, there are no imm13 offsets from the base PA that will break because they can only reach one page.

Orabug: 25505750

Signed-off-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Atish Patra <atish.patra@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
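The sizing change, in sketch form ('entry_size' is an illustrative name; the actual entry layout is in the cpu mondo setup code):

	/* Size the mondo block for all possible cpus instead of
	 * assuming a single page always suffices. */
	size = entry_size * num_possible_cpus();
	order = get_order(size);
	block = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);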
2017-06-06  bpf: cgroup skb progs cannot access ld_abs/ind  (Daniel Borkmann; 1 file, -1/+0)
Commit fb9a307d11d6 ("bpf: Allow CGROUP_SKB eBPF program to access sk_buff") enabled programs of BPF_PROG_TYPE_CGROUP_SKB type to use ld_abs/ind instructions. However, at this point, we cannot use them, since offsets relative to SKF_LL_OFF will end up pointing skb_mac_header(skb) out of bounds: in the egress path the mac header is not yet set at that point in time, but only after __dev_queue_xmit() did a general reset on it. bpf_internal_load_pointer_neg_helper() will then end up reading data from a wrong offset.

BPF_PROG_TYPE_CGROUP_SKB programs can already use bpf_skb_load_bytes() to access packet data, which is also more flexible than the insns carried over from cBPF.

Fixes: fb9a307d11d6 ("bpf: Allow CGROUP_SKB eBPF program to access sk_buff")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Chenbo Feng <fengc@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
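A sketch of the supported alternative, reading packet data through bpf_skb_load_bytes() (the SEC macro and helper declaration are written out here for self-containment, in the style of the era's bpf_helpers.h; program logic is illustrative):

	#include <linux/bpf.h>
	#include <linux/ip.h>

	#define SEC(name) __attribute__((section(name), used))

	static int (*bpf_skb_load_bytes)(void *ctx, int off, void *to, int len) =
		(void *) BPF_FUNC_skb_load_bytes;

	SEC("cgroup/skb")
	int allow_ip_only(struct __sk_buff *skb)
	{
		struct iphdr iph;

		/* Read the IP header via the helper; unlike LD_ABS/LD_IND
		 * this is valid on egress, where the mac header is unset. */
		if (bpf_skb_load_bytes(skb, 0, &iph, sizeof(iph)) < 0)
			return 0;	/* drop */

		return 1;		/* allow */
	}

	char _license[] SEC("license") = "GPL";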
2017-06-06  net: stmmac: fix a broken u32 less than zero check  (Colin Ian King; 1 file, -1/+1)
The check that queue is less than or equal to zero is always true because queue is a u32; queue is decremented and will wrap around and never go -ve. Fix this by making queue an int.

Detected by CoverityScan, CID#1428988 ("Unsigned compared against 0")

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
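The bug class, reduced to a sketch (not the driver loop itself; the same pattern is fixed again in the jumbo_frm patch further down):

	u32 queue = count - 1;

	while (queue >= 0) {		/* always true: a u32 never goes -ve */
		stop_queue(queue);	/* hypothetical callee */
		queue--;		/* wraps from 0 to UINT_MAX instead */
	}
	/* Fix: declare 'int queue' so the comparison can terminate. */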
2017-06-06  net: stmmac: fix completely hung TX when using TSO  (Niklas Cassel; 1 file, -1/+1)
stmmac_tso_allocator can fail to set the Last Descriptor bit on a descriptor that actually was the last descriptor. This happens when the buffer of the last descriptor ends up having a size of exactly TSO_MAX_BUFF_SIZE.

When the IP eventually reaches the next last descriptor, which actually has the bit set, the DMA will hang. When the DMA hangs, we get a tx timeout; however, since stmmac does not do a complete reset of the IP in stmmac_tx_timeout, we end up in a state with completely hung TX.

Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Acked-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  tun: use symmetric hash  (Jason Wang; 1 file, -3/+3)
Tun actually expects a symmetric hash for queue selection to work correctly; otherwise packets belonging to a single flow may be redirected to the wrong queue. So this patch switches to using __skb_get_hash_symmetric().

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
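The change itself is mechanical; each hash site in drivers/net/tun.c becomes (sketch in diff form):

	-	rxhash = skb_get_hash(skb);
	+	rxhash = __skb_get_hash_symmetric(skb);

A symmetric hash yields the same value regardless of source/destination ordering, so both directions of a flow resolve to the same hash and the flow stays on one queue.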
2017-06-06  net: ethoc: enable NAPI before poll may be scheduled  (Max Filippov; 1 file, -1/+2)
ethoc_reset enables device interrupts, and ethoc_interrupt may schedule a NAPI poll before NAPI is enabled in ethoc_open, which results in the device being unable to send or receive anything until it's closed and reopened. In case the device is flooded with ingress packets it may be unable to recover at all. Move napi_enable above ethoc_reset in ethoc_open to fix that.

Fixes: a1702857724f ("net: Add support for the OpenCores 10/100 Mbps Ethernet MAC.")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
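In sketch form, the resulting ordering inside ethoc_open() (surrounding setup elided):

	napi_enable(&priv->napi);	/* enable NAPI first... */
	ethoc_reset(priv);		/* ...since reset enables device IRQs,
					 * whose handler may schedule a poll */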
2017-06-06  net: bridge: fix a null pointer dereference in br_afspec  (Nikolay Aleksandrov; 1 file, -1/+1)
We might call br_afspec() with p == NULL, which is a valid use case if the action is on the bridge device itself, but the bridge tunnel code dereferences the p pointer without checking, so check if p is null first.

Reported-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Fixes: efa5356b0d97 ("bridge: per vlan dst_metadata netlink support")
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
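A sketch of the one-line guard in diff form (assuming the vlan tunnel case in br_afspec(); attribute and flag names per the bridge vlan tunnel code):

		case IFLA_BRIDGE_VLAN_TUNNEL_INFO:
	-		if (!(p->flags & BR_VLAN_TUNNEL))
	+		if (!p || !(p->flags & BR_VLAN_TUNNEL))
				return -EINVAL;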
2017-06-06  net: dsa: mv88e6xxx: fix 6085 frame mode masking  (Vivien Didelot; 1 file, -1/+1)
The register bits used for the frame mode were masked with DSA (0x1) instead of the mask value (0x3) in the 6085 implementation of port_set_frame_mode. Fix this.

Fixes: 56995cbc3540 ("net: dsa: mv88e6xxx: Refactor CPU and DSA port setup")
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
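The shape of the fix, sketched in diff form (macro names illustrative; the point is clearing the full two-bit field rather than just the DSA value):

	-	reg &= ~PORT_CONTROL_FRAME_MODE_DSA;	/* 0x1: clears one bit of the field */
	+	reg &= ~PORT_CONTROL_FRAME_MODE_MASK;	/* 0x3: clears the whole field */
		reg |= mode;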
2017-06-06  ravb: Fix use-after-free on `ifconfig eth0 down`  (Eugeniu Rosca; 1 file, -12/+12)
Commit a47b70ea86bd ("ravb: unmap descriptors when freeing rings") has introduced the issue seen in [1], reproduced on the H3ULCB board.

Fix this by relocating the RX skb ringbuffer free operation, so that swiotlb page unmapping can be done first. Freeing of aligned TX buffers is not relevant to the issue seen in [1]. Still, reposition the TX free calls as well, to have all kfree() operations performed consistently _after_ dma_unmap_*()/dma_free_*().

[1] Console screenshot with the problem reproduced:

salvator-x login: root
root@salvator-x:~# ifconfig eth0 up
Micrel KSZ9031 Gigabit PHY e6800000.ethernet-ffffffff:00: \
	attached PHY driver [Micrel KSZ9031 Gigabit PHY] \
	(mii_bus:phy_addr=e6800000.ethernet-ffffffff:00, irq=235)
IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
root@salvator-x:~#
root@salvator-x:~# ifconfig eth0 down

==================================================================
BUG: KASAN: use-after-free in swiotlb_tbl_unmap_single+0xc4/0x35c
Write of size 1538 at addr ffff8006d884f780 by task ifconfig/1649

CPU: 0 PID: 1649 Comm: ifconfig Not tainted 4.12.0-rc4-00004-g112eb07287d1 #32
Hardware name: Renesas H3ULCB board based on r8a7795 (DT)
Call trace:
[<ffff20000808f11c>] dump_backtrace+0x0/0x3a4
[<ffff20000808f4d4>] show_stack+0x14/0x1c
[<ffff20000865970c>] dump_stack+0xf8/0x150
[<ffff20000831f8b0>] print_address_description+0x7c/0x330
[<ffff200008320010>] kasan_report+0x2e0/0x2f4
[<ffff20000831eac0>] check_memory_region+0x20/0x14c
[<ffff20000831f054>] memcpy+0x48/0x68
[<ffff20000869ed50>] swiotlb_tbl_unmap_single+0xc4/0x35c
[<ffff20000869fcf4>] unmap_single+0x90/0xa4
[<ffff20000869fd14>] swiotlb_unmap_page+0xc/0x14
[<ffff2000080a2974>] __swiotlb_unmap_page+0xcc/0xe4
[<ffff2000088acdb8>] ravb_ring_free+0x514/0x870
[<ffff2000088b25dc>] ravb_close+0x288/0x36c
[<ffff200008aaf8c4>] __dev_close_many+0x14c/0x174
[<ffff200008aaf9b4>] __dev_close+0xc8/0x144
[<ffff200008ac2100>] __dev_change_flags+0xd8/0x194
[<ffff200008ac221c>] dev_change_flags+0x60/0xb0
[<ffff200008ba2dec>] devinet_ioctl+0x484/0x9d4
[<ffff200008ba7b78>] inet_ioctl+0x190/0x194
[<ffff200008a78c44>] sock_do_ioctl+0x78/0xa8
[<ffff200008a7a128>] sock_ioctl+0x110/0x3c4
[<ffff200008365a70>] vfs_ioctl+0x90/0xa0
[<ffff200008365dbc>] do_vfs_ioctl+0x148/0xc38
[<ffff2000083668f0>] SyS_ioctl+0x44/0x74
[<ffff200008083770>] el0_svc_naked+0x24/0x28

The buggy address belongs to the page:
page:ffff7e001b6213c0 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x4000000000000000()
raw: 4000000000000000 0000000000000000 0000000000000000 00000000ffffffff
raw: 0000000000000000 ffff7e001b6213e0 0000000000000000 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8006d884f680: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8006d884f700: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff8006d884f780: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                   ^
 ffff8006d884f800: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8006d884f880: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================
Disabling lock debugging due to kernel taint
root@salvator-x:~#

Fixes: a47b70ea86bd ("ravb: unmap descriptors when freeing rings")
Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Test for bpf ID  (Martin KaFai Lau; 6 files, -1/+341)
Add a test to exercise the bpf_prog/map id generation, bpf_(prog|map)_get_next_id(), bpf_(prog|map)_get_fd_by_id() and bpf_get_obj_info_by_fd().

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Add BPF_OBJ_GET_INFO_BY_FD  (Martin KaFai Lau; 3 files, -19/+174)
A single BPF_OBJ_GET_INFO_BY_FD cmd is used to obtain the info for both bpf_prog and bpf_map. The kernel can figure out whether the fd is associated with a bpf_prog or a bpf_map.

The suggested struct bpf_prog_info and struct bpf_map_info are not meant to be a complete list, and that is not the goal of this patch. New fields can be added in future patches.

The focus of this patch is to create the interface, the BPF_OBJ_GET_INFO_BY_FD cmd, for exposing the bpf_prog's and bpf_map's info. The obj's info, which will be extended (and get bigger) over time, is separated from the bpf_attr to avoid bloating the bpf_attr.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
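A userspace sketch of driving the new command through the bpf(2) syscall (attr field names per the uapi struct this patch introduces; wrapper name illustrative):

	#include <linux/bpf.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Fetch info for a prog or map fd; the kernel fills *info and
	 * writes back the number of bytes it actually filled in. */
	static int obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len)
	{
		union bpf_attr attr;
		int err;

		memset(&attr, 0, sizeof(attr));
		attr.info.bpf_fd = bpf_fd;
		attr.info.info_len = *info_len;
		attr.info.info = (__u64)(unsigned long)info;

		err = syscall(__NR_bpf, BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
		if (!err)
			*info_len = attr.info.info_len;
		return err;
	}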
2017-06-06  bpf: Add jited_len to struct bpf_prog  (Martin KaFai Lau; 6 files, -0/+6)
Add jited_len to struct bpf_prog. It will be useful for the struct bpf_prog_info which will be added in a later patch.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Add BPF_MAP_GET_FD_BY_ID  (Martin KaFai Lau; 2 files, -10/+87)
Add the BPF_MAP_GET_FD_BY_ID command to allow a user to get a fd from a bpf_map's ID.

bpf_map_inc_not_zero() is added and is called with map_idr_lock held. __bpf_map_put() is also added, with a 'bool do_idr_lock' param to decide if the map_idr_lock should be acquired when freeing the map->id. In the error path of bpf_map_inc_not_zero(), it may have to call __bpf_map_put(map, false), which does not need to take the map_idr_lock when freeing the map->id.

It is currently limited to CAP_SYS_ADMIN, which we can consider lifting in followup patches.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Add BPF_PROG_GET_FD_BY_ID  (Martin KaFai Lau; 2 files, -12/+87)
Add the BPF_PROG_GET_FD_BY_ID command to allow a user to get a fd from a bpf_prog's ID.

bpf_prog_inc_not_zero() is added and is called with prog_idr_lock held. __bpf_prog_put() is also added, with a 'bool do_idr_lock' param to decide if the prog_idr_lock should be acquired when freeing the prog->id. In the error path of bpf_prog_inc_not_zero(), it may have to call __bpf_prog_put(prog, false), which does not need to take the prog_idr_lock when freeing the prog->id.

It is currently limited to CAP_SYS_ADMIN, which we can consider lifting in followup patches.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Add BPF_(PROG|MAP)_GET_NEXT_ID command  (Martin KaFai Lau; 2 files, -0/+45)
This patch adds BPF_PROG_GET_NEXT_ID and BPF_MAP_GET_NEXT_ID to allow userspace to iterate all bpf_prog IDs and bpf_map IDs. The API tries to be consistent with the existing BPF_MAP_GET_NEXT_KEY. It is currently limited to CAP_SYS_ADMIN, which we can consider lifting in followup patches.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
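Combined with the GET_FD_BY_ID commands above, this enables a simple walk over all loaded programs (sketch using the tools/lib/bpf wrappers added by the test patch in this series):

	__u32 id = 0;
	int fd;

	/* An ID can disappear between the two calls, so treat a
	 * failed fd lookup as a benign race. */
	while (!bpf_prog_get_next_id(id, &id)) {
		fd = bpf_prog_get_fd_by_id(id);
		if (fd < 0)
			continue;
		/* e.g. query it with BPF_OBJ_GET_INFO_BY_FD, then: */
		close(fd);
	}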
2017-06-06  bpf: Introduce bpf_map ID  (Martin KaFai Lau; 2 files, -1/+34)
This patch generates a unique ID for each created bpf_map. The approach is similar to the earlier patch for the bpf_prog ID. It is worth noting that the bpf_map's ID and the bpf_prog's ID are in two independent ID spaces, and both have the same valid range: [1, INT_MAX).

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  bpf: Introduce bpf_prog ID  (Martin KaFai Lau; 2 files, -1/+40)
This patch generates a unique ID for each BPF_PROG_LOAD-ed prog. It is worth noting that each BPF_PROG_LOAD-ed prog will have a different ID even if they have the same bpf instructions.

The ID is generated by the existing idr_alloc_cyclic(). The ID range is [1, INT_MAX). It is allocated in a cyclic manner, so an ID will only get reused after 2 billion BPF_PROG_LOADs.

The bpf_prog_alloc_id() is done after bpf_prog_select_runtime() because the jit process may have allocated a new prog. Hence, we need to ensure the value of the pointer 'prog' will not change any more before storing the prog in the prog_idr. After bpf_prog_select_runtime(), the prog is read-only. Hence, the id is stored in 'struct bpf_prog_aux'.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
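A sketch of the allocation described above (lock and field names as in the description; error handling elided):

	spin_lock_bh(&prog_idr_lock);
	/* Cyclic allocation: ids advance monotonically and only wrap
	 * after INT_MAX allocations, so reuse is far apart in time. */
	id = idr_alloc_cyclic(&prog_idr, prog, 1, INT_MAX, GFP_ATOMIC);
	if (id > 0)
		prog->aux->id = id;
	spin_unlock_bh(&prog_idr_lock);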
2017-06-06  net/ipv6: Fix CALIPSO causing GPF with datagram support  (Richard Haines; 1 file, -1/+5)
When using CALIPSO with IPPROTO_UDP it is possible to trigger a GPF as the IP header may have moved. Also update the payload length after adding the CALIPSO option.

Signed-off-by: Richard Haines <richard_c_haines@btinternet.com>
Acked-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
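The hazard in sketch form (the option-insertion helper name is hypothetical; the point is re-reading the header pointer after anything that may move skb data, then fixing up payload_len):

	struct ipv6hdr *hdr;

	/* Adding the option may expand or move the skb data, so any
	 * previously cached header pointer is stale afterwards. */
	ret = insert_calipso_option(skb);	/* hypothetical helper */
	if (ret)
		return ret;

	hdr = ipv6_hdr(skb);			/* re-fetch: may have moved */
	hdr->payload_len = htons(skb->len - sizeof(*hdr));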
2017-06-06  cxgb4: implement ndo_set_vf_rate()  (Ganesh Goudar; 2 files, -0/+107)
Implement ndo_set_vf_rate() for the mgmt interface to support rate-limiting of VF traffic using the 'ip' command.

Based on the original work of Kumar Sanghvi <kumaras@chelsio.com>

Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
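For reference, the hook being implemented, as declared in struct net_device_ops (rates in Mbit/s):

	int	(*ndo_set_vf_rate)(struct net_device *dev, int vf,
				   int min_tx_rate, int max_tx_rate);

which 'ip' reaches via, for example (interface name illustrative):

	# ip link set dev mgmtpf0 vf 0 max_tx_rate 100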
2017-06-06  ppp: mppe: Use vsnprintf extension %phN  (Joe Perches; 1 file, -11/+4)
Using this extension reduces the object size.

$ size drivers/net/ppp/ppp_mppe.o*
   text	   data	    bss	    dec	    hex	filename
   5683	    216	      8	   5907	   1713	drivers/net/ppp/ppp_mppe.o.new
   5808	    216	      8	   6032	   1790	drivers/net/ppp/ppp_mppe.o.old

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
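Usage sketch of the extension: %*phN consumes an int length and a byte pointer and prints up to 64 bytes as contiguous hex, replacing hand-rolled "%02x" loops into a temporary buffer:

	u8 key[16];

	pr_debug("%s: key: %*phN\n", __func__, (int)sizeof(key), key);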
2017-06-06  net: stmmac: ensure jumbo_frm error return is correctly checked for -ve value  (Colin Ian King; 1 file, -1/+2)
The current comparison of entry < 0 will never be true since entry is an unsigned integer. Make entry an int to ensure -ve error return values from the call to jumbo_frm are correctly caught.

Detected by CoverityScan, CID#1238760 ("Macro compares unsigned to 0")

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  net: phy: Delete unused function phy_ethtool_gset  (yuval.shaia@oracle.com; 3 files, -26/+0)
It's unused, so remove it.

Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: do early device setup for z/VM IQD NICs  (Julian Wiedmann; 3 files, -11/+37)
qeth currently supports early setup for OSM and OSN devices. This patch adds early setup support for z/VM HiperSockets, since they can only be coupled to L3 networks.

Based on an initial version by Dmitriy Lakhvich.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: add support for early L3 device setup  (Ursula Braun; 3 files, -5/+25)
Similar to how qeth currently does early L2 setup of OSM and OSN devices, add support for early setup of L3-only devices. This adds a qeth_l3_devtype that contains all core and l3-specific sysfs attributes, so that they can be created in one go while probing.

This just adds the infrastructure; exploitation of the support happens in a subsequent patch.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: silence qeth_fix_features()  (Julian Wiedmann; 1 file, -4/+1)
Noting the lack of TSO support on every feature change is just silly, in particular since the requested features might not even affect NETIF_F_TSO.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: consolidate pack buffer flushing  (Julian Wiedmann; 1 file, -40/+25)
qeth_switch_to_nonpacking_if_needed() contains an open-coded version of qeth_flush_buffers_on_no_pci(). Extract a single helper instead.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: add missing strings for IPA return codes  (Julian Wiedmann; 1 file, -0/+2)
commit 76b11f8e270f ("qeth: HiperSockets Network Traffic Analyzer") missed adding the human-readable translations when adding new RCs.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: log bridgeport capabilities  (Julian Wiedmann; 1 file, -0/+8)
Bridgeport is an L2-specific feature, and we should write its capabilities to a debug entry.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: query IPv6 IPA support on HiperSockets  (Julian Wiedmann; 1 file, -3/+4)
HiperSocket devices don't need the full IPv6 initialization, but we should still query the supported assists for logging purposes.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: remove skb_is_nonlinear() check on IQD  (Julian Wiedmann; 1 file, -7/+1)
qeth doesn't advertise NETIF_F_SG for L3 IQDs. So trust the stack to not hand us any nonlinear skbs, and remove an always-true condition. Given that data_offset < 0 is no longer possible on IQDs, apply a small cleanup to the subsequent code.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  s390/qeth: remove support for IPA_IP_FRAGMENTATION  (Julian Wiedmann; 3 files, -29/+2)
This Assist was never actually implemented in any hardware, so just remove the leftovers.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Reviewed-by: Hans Wippel <hwippel@linux.vnet.ibm.com>
Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  spectrum_flower: Implement gact trap TC action offload  (Jiri Pirko; 1 file, -0/+4)
Just use the previously prepared infrastructure and offload the gact trap action to ACL.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Yotam Gigi <yotamg@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  acl: Introduce ACL trap action  (Jiri Pirko; 4 files, -2/+45)
Use the trap/discard flex action to implement trap.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Yotam Gigi <yotamg@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  mlxsw: spectrum: Introduce ACL trap  (Jiri Pirko; 2 files, -1/+4)
Introduce an ACL trap and put it into the ip2me trap group.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Yotam Gigi <yotamg@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-06  mlxsw: pci: Fix size of trap_id field in CQE  (Jiri Pirko; 1 file, -1/+1)
The "trap_id" is 9 bits long. So far, this was not a problem since we used only traps with ids that fit into 8 bits. But the ACL traps that are going to be introduced use the 9th bit.

Fixes: eda6500a987a ("mlxsw: Add PCI bus implementation")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Yotam Gigi <yotamg@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>