Commit message (Author, Date, Files, Lines changed)
* arch: Use asm-generic/socket.h when possible (Deepa Dinamani, 2019-02-03, 9 files, -366/+5)
Many architectures maintain an arch specific copy of the file even though there are no differences with the asm-generic one. Allow these architectures to use the generic one instead.
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Cc: chris@zankel.net
Cc: fenghua.yu@intel.com
Cc: tglx@linutronix.de
Cc: schwidefsky@de.ibm.com
Cc: linux-ia64@vger.kernel.org
Cc: linux-xtensa@linux-xtensa.org
Cc: linux-s390@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
* socket: move compat timeout handling into sock.c (Arnd Bergmann, 2019-02-03, 3 files, -89/+46)
This is a cleanup to prepare for the addition of 64-bit time_t in SO_SNDTIMEO/SO_RCVTIMEO. The existing compat handler seems unnecessarily complex and error-prone; moving it all into the main setsockopt()/getsockopt() implementation requires half as much code and is easier to extend. 32-bit user space can now use old_timeval32 on both 32-bit and 64-bit machines, while 64-bit code can use __old_kernel_timeval.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* selftests: add missing include unistd (Deepa Dinamani, 2019-02-03, 1 file, -0/+1)
Compiling rxtimestamp.c generates error messages due to a missing declaration for the write() library call. Add the missing unistd.h include to provide the declaration and silence the error.
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* cxgb4/cxgb4vf: Program hash region for {t4/t4vf}_change_mac() (Arjun Vynipadath, 2019-02-03, 4 files, -40/+136)
The {t4/t4vf}_change_mac() APIs were only doing additions to the MPS_TCAM. This will fail when the number of TCAM entries is limited, particularly in VFs. This fix programs the hash region with the MAC address when TCAM addition fails for {t4/t4vf}_change_mac(). Since the locally maintained driver list for hash entries is shared across mac_{sync/unsync}(), an extra parameter if_mac is added to track the address added through {t4/t4vf}_change_mac().
Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4/igmp: Don't drop IGMP pkt with zeros src addr (Edward Chron, 2019-02-03, 1 file, -1/+2)
Don't drop IGMP packets with a source address of all zeros; these are IGMP proxy reports. This is documented in Section 2.1.1, IGMP Forwarding Rules, of RFC 4541 (IGMP and MLD Snooping Switches Considerations).
Signed-off-by: Edward Chron <echron@arista.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net: phy: realtek: add generic Realtek PHY driver (Heiner Kallweit, 2019-02-03, 1 file, -0/+9)
The integrated PHYs of later RTL8168 network chips report the generic PHYID 0x001cc800 (Realtek OUI, model and revision number both set to zero), therefore the genphy driver is currently used. To be able to use the paged version of e.g. phy_write() we need a PHY driver with the read_page and write_page callbacks implemented. So basically make a copy of the genphy driver, just with the read_page and write_page callbacks set.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
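For context, a minimal sketch of what a phy_driver entry with paged-access callbacks can look like; the helper names and the page-select register value are illustrative assumptions, not taken from the patch:

	#include <linux/phy.h>

	#define RTL_PAGE_SELECT	0x1f	/* assumed page-select register */

	/* Sketch: the read_page/write_page callbacks let the phylib paged
	 * accessors (e.g. phy_write_paged()) switch pages around an access. */
	static int rtl_read_page(struct phy_device *phydev)
	{
		return __phy_read(phydev, RTL_PAGE_SELECT);
	}

	static int rtl_write_page(struct phy_device *phydev, int page)
	{
		return __phy_write(phydev, RTL_PAGE_SELECT, page);
	}

	static struct phy_driver rtl_generic_driver = {
		.phy_id		= 0x001cc800,
		.phy_id_mask	= 0xffffffff,	/* assumed exact match */
		.name		= "Generic Realtek PHY",
		.read_page	= rtl_read_page,
		.write_page	= rtl_write_page,
	};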
* atheros: atl2: fix an indentation issue on a return statement (Colin Ian King, 2019-02-03, 1 file, -1/+1)
A return statement is not indented correctly; fix this by adding an extra tab.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* atl1c: fix indentation issue on an if statement (Colin Ian King, 2019-02-03, 1 file, -4/+4)
An if statement is indented one level too deep, fix this by removing the extra tabs. Also add some spaces to the dev_warn arguments to clean up checkpatch warnings.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* bna: fix indentation issue on call to bfa_ioc_pf_failed (Colin Ian King, 2019-02-03, 1 file, -1/+1)
The call to bfa_ioc_pf_failed is indented too far, fix this by removing a tab.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* chelsio: clean up indentation issue (Colin Ian King, 2019-02-03, 1 file, -2/+1)
The assignment to size is indented too far, fix this and join two lines into one.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net: nixge: Update device-tree bindings with v3.00 (Alex Williams, 2019-02-03, 1 file, -4/+12)
Now the DMA engine is free to float elsewhere in the system map.
Signed-off-by: Alex Williams <alex.williams@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net: nixge: Separate ctrl and dma resources (Alex Williams, 2019-02-03, 1 file, -16/+58)
The DMA engine is a separate entity altogether, and this allows the DMA controller's address to float elsewhere in the FPGA's map.
Signed-off-by: Alex Williams <alex.williams@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* r8169: remove rtl_wol_pll_power_down (Heiner Kallweit, 2019-02-03, 1 file, -12/+4)
rtl_wol_pll_power_down() is used in only one place, and removing it makes the code simpler and more readable.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'hns3-next' (David S. Miller, 2019-02-02, 11 files, -159/+216)
Huazhong Tan says:
====================
code optimizations & bugfixes for HNS3 driver

This patchset includes bugfixes and code optimizations for the HNS3 ethernet controller driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: MAC table entry count function increases operation 0 value protection measures (liuzhongzhu, 2019-02-02, 1 file, -2/+5)
When updating the available MAC VLAN table counts, the MAC VLAN table entry count function now adds protection against zero-value operations.
Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: modify the upper limit judgment condition (liuzhongzhu, 2019-02-02, 1 file, -2/+2)
To prevent an abnormal value of the variable from becoming larger than desc_num, change the upper limit judgment condition to >=.
Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: don't allow user to change vlan filter state (Jian Shen, 2019-02-02, 1 file, -2/+1)
When the user disables the vlan filter and then adds a vlan device, the driver is not notified to update the vlan filter. In this case, when the user enables the vlan filter again, packets with the new vlan tag will be filtered out by the vlan filter.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: optimize the maximum TC macro (liuzhongzhu, 2019-02-02, 3 files, -7/+6)
There are multiple macros for the largest number of TCs in the system; consolidate them into HCLGE_MAX_TC_NUM.
Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: fix the problem that the supported port is empty (liuzhongzhu, 2019-02-02, 5 files, -4/+77)
When running ethtool ethX to display device information on a VF, the supported port and link mode items are empty. This patch fixes it.
Fixes: e2cb1dec9779 ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Signed-off-by: liuzhongzhu <liuzhongzhu@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: fix a wrong checking in the hclge_tx_buffer_calc() (Huazhong Tan, 2019-02-02, 1 file, -4/+5)
Only when the TC is enabled do we need to check whether the buffer is enough; otherwise it may lead to a spurious -ENOMEM case.
Fixes: 9ffe79a9c2ee ("net: hns3: Support for dynamically assigning tx buffer to TC")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: move some set_bit statement into hclge_prepare_mac_addr (Weihang Li, 2019-02-02, 1 file, -13/+11)
This patch does not change the code logic. The same set_bit statements are called by add/rm_uc/mc_addr_common; move these statements into hclge_prepare_mac_addr to reduce duplicated code.
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: add hclge_cmd_check_retval() to parse command's return value (Weihang Li, 2019-02-02, 1 file, -27/+33)
To simplify the code, this patch adds hclge_cmd_check_retval() to check the return value of a command. Also, according to the IMP's description, when a command consists of several descriptors the IMP saves the return value in the last descriptor, so hclge_cmd_check_retval() just checks the last one in this case.
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: code optimization for hclge_rx_buffer_calc (Yunsheng Lin, 2019-02-02, 1 file, -77/+63)
There are four steps to calculate the rx private buffer, and each step can be done in a function to avoid code duplication and aid code readability. This patch adds three separate functions to do the job. Also, the function names more or less make the comments redundant, so remove some obvious comments.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: Modify parameter type from int to bool in set_gro_en (Yonglong Liu, 2019-02-02, 4 files, -20/+12)
The second parameter to the hook function set_gro_en is always passed true/false, so change its type from int to bool.
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: fix an issue for hns3_update_new_int_gl (Peng Li, 2019-02-02, 1 file, -1/+1)
HNS3 supports setting rx-usecs|tx-usecs to 0, but the value will not update dynamically when adaptive-tx or adaptive-rx is enabled. This patch removes the redundant check.
Fixes: a95e1f8666e9 ("net: hns3: change the time interval of int_gl calculating")
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: hns3: fix a code style issue for hns3_update_new_int_gl() (Peng Li, 2019-02-02, 1 file, -1/+1)
Use the same code style for rx_group and tx_group in hns3_update_new_int_gl().
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (David S. Miller, 2019-02-02, 50 files, -262/+2197)
Alexei Starovoitov says:
====================
pull-request: bpf-next 2019-02-01

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:
1) introduce bpf_spin_lock, from Alexei.
2) convert xdp samples to libbpf, from Maciej.
3) skip verifier tests for unsupported program/map types, from Stanislav.
4) powerpc64 JIT support for BTF line info, from Sandipan.
5) assorted fixes, from Valdis, Jesper, Jiong.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
| * Merge branch 'shifts-cleanup' (Alexei Starovoitov, 2019-02-02, 1 file, -10/+82)
Jiong Wang says:
====================
The NFP JIT back-end is missing support for several ALU32 logic shifts. Also, shifts with a shift amount of zero are not handled properly. This set cleans up these issues.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| | * nfp: bpf: complete ALU32 logic shift support (Jiong Wang, 2019-02-02, 1 file, -5/+67)
Support for the following ALU32 logic shifts is missing:

  BPF_ALU | BPF_LSH | BPF_X
  BPF_ALU | BPF_RSH | BPF_X
  BPF_ALU | BPF_RSH | BPF_K

For BPF_RSH | BPF_K, it can be implemented using the NFP direct shift instruction. For the other BPF_X shifts, NFP indirect shift sequences need to be used. A separate code-gen hook is assigned to each instruction to keep the implementation clear.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| | * nfp: bpf: correct the behavior for shifts by zero (Jiong Wang, 2019-02-02, 1 file, -10/+20)
Shifts by zero do nothing and should be treated as nops. Even though the compiler is not supposed to generate such instructions and hand-written assembly is unlikely to contain them, they are legal instructions with defined behavior. This patch corrects the existing shift code-gen to make sure it does nothing when the shift amount is zero, except for ALU32, for which the high bits need to be cleared. For shift amounts bigger than the type size, the NFP JIT back-end already errors out for immediate shifts, and only the low 5 bits are taken into account for indirect shifts, which is the same as x86.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * selftests/bpf: remove generated verifier/tests.h on 'make clean' (Stanislav Fomichev, 2019-02-02, 1 file, -3/+5)
'make clean' is supposed to remove generated files.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
| * Merge branch 'bpf-xdp-sample-libbpf' (Daniel Borkmann, 2019-02-01, 16 files, -200/+796)
Maciej Fijalkowski says:
====================
This patchset tries to address the situation where:
* user loads a particular xdp sample application that does stats polling
* user loads another sample application on the same interface
* then, user sends SIGINT/SIGTERM to the app that was attached as the first one
* second application ends up with an unloaded xdp program

The 1st patch contains a helper libbpf function for getting the map fd by a given map name. In patch 2 Jesper removes the read_trace_pipe usage from xdp_redirect_cpu, which was a blocker for converting this sample to libbpf usage. The 3rd patch updates a bunch of xdp samples to make use of libbpf. Patch 4 adjusts RLIMIT_MEMLOCK for two samples touched in this patchset. In patch 5 extack messages are added for cases where dev_change_xdp_fd returns with an error, so the user has an idea why the xdp program was not attached to the interface. Patch 6 makes the samples' behavior similar to what iproute2 does when loading an xdp prog - the "force" flag is introduced. Patch 7 introduces the libbpf function that queries the driver from user space for the currently attached xdp prog id. Use it in samples that do polling by checking the prog id in the signal handler and comparing it with the previously stored one, which is the scope of patch 8.

Thanks!

v1->v2:
* add a libbpf helper for getting a prog via relative index
* include xdp_redirect_cpu into conversion

v2->v3: mostly addressing Daniel's/Jesper's comments
* get rid of the helper from v1->v2
* feed the xdp_redirect_cpu with program name instead of number

v3->v4:
* fix help message in xdp_sample_pkts

v4->v5:
* in get_link_xdp_fd, assign prog_id only when libbpf_nl_get_link returned with 0
* add extack messages in dev_change_xdp_fd
* check the return value of bpf_get_link_xdp_id when exiting from sample progs

v5->v6:
* rebase
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * samples/bpf: Check the prog id before exiting (Maciej Fijalkowski, 2019-02-01, 10 files, -48/+308)
Check the program id within the signal handler in the polling xdp samples that were previously converted to libbpf usage. This avoids unloading a program that was not attached by the sample that is exiting. Also handle the case where bpf_get_link_xdp_id did not return an error but no xdp program was found on the interface.
Reported-by: Michal Papaj <michal.papaj@intel.com>
Reported-by: Jakub Spizewski <jakub.spizewski@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * libbpf: Add support for getting xdp prog id on ifindex (Maciej Fijalkowski, 2019-02-01, 3 files, -0/+87)
Since we have dedicated netlink attributes for xdp setup on a particular interface, it is now possible to retrieve the program id that is currently attached to the interface. The use case is targeted at sample xdp programs, which will store the program id just after loading the bpf program onto the interface. On shutdown, the sample will make sure that it can unload the program by querying the interface again and verifying that both program ids match.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
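A minimal user-space sketch of the shutdown check described above, assuming the libbpf calls bpf_get_link_xdp_id() and bpf_set_link_xdp_fd(); the local helper and variable names are made up for illustration:

	#include <stdio.h>
	#include <linux/types.h>
	#include <bpf/libbpf.h>

	static __u32 our_prog_id;	/* stored right after attaching */

	/* Sketch: detach on exit only if the id on the interface is still ours. */
	static void detach_if_ours(int ifindex, __u32 xdp_flags)
	{
		__u32 cur_id = 0;

		if (bpf_get_link_xdp_id(ifindex, &cur_id, xdp_flags)) {
			fprintf(stderr, "query of xdp prog id failed\n");
			return;
		}
		if (cur_id == our_prog_id)
			bpf_set_link_xdp_fd(ifindex, -1, xdp_flags);	/* detach */
		else if (!cur_id)
			fprintf(stderr, "no xdp prog attached on ifindex %d\n", ifindex);
		else
			fprintf(stderr, "prog %u was attached by someone else\n", cur_id);
	}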
| | * samples/bpf: Add a "force" flag to XDP samplesMaciej Fijalkowski2019-02-0110-40/+119
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Make xdp samples consistent with iproute2 behavior and set the XDP_FLAGS_UPDATE_IF_NOEXIST by default when setting the xdp program on interface. Provide an option for user to force the program loading, which as a result will not include the mentioned flag in bpf_set_link_xdp_fd call. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
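A hedged sketch of the described default-vs-force behavior; the "force" plumbing is assumed, while the flag itself comes from the uapi:

	#include <stdbool.h>
	#include <linux/if_link.h>
	#include <bpf/libbpf.h>

	/* Sketch: refuse to replace an already attached program by default;
	 * only when the user forces it do we omit XDP_FLAGS_UPDATE_IF_NOEXIST. */
	static int attach_prog(int ifindex, int prog_fd, bool force)
	{
		__u32 xdp_flags = 0;

		if (!force)
			xdp_flags |= XDP_FLAGS_UPDATE_IF_NOEXIST;

		return bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags);
	}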
| | * xdp: Provide extack messages when prog attachment failed (Maciej Fijalkowski, 2019-02-01, 1 file, -3/+9)
In order to provide more meaningful messages to the user when loading an xdp program onto a network interface fails, add extack messages within dev_change_xdp_fd.
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * samples/bpf: Extend RLIMIT_MEMLOCK for xdp_{sample_pkts, router_ipv4} (Maciej Fijalkowski, 2019-02-01, 2 files, -0/+14)
There is a common problem with xdp samples that happens when the user wants to run a particular sample while some bpf program is already loaded. The default 64 kB RLIMIT_MEMLOCK resource limit will cause the following error (assuming that the failing xdp sample was converted to libbpf usage):

libbpf: Error in bpf_object__probe_name():Operation not permitted(1). Couldn't load basic 'r0 = 0' BPF program.
libbpf: failed to load object './xdp_sample_pkts_kern.o'

Fix it in xdp_sample_pkts and xdp_router_ipv4 by setting RLIMIT_MEMLOCK to RLIM_INFINITY.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
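The fix amounts to a couple of lines at program start; a sketch, under the stated assumption that the sample loads its objects through libbpf afterwards:

	#include <stdio.h>
	#include <sys/resource.h>

	int main(void)
	{
		struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };

		/* Lift the default 64 kB locked-memory limit so BPF maps and
		 * programs can be created before libbpf loads the object. */
		if (setrlimit(RLIMIT_MEMLOCK, &r)) {
			perror("setrlimit(RLIMIT_MEMLOCK)");
			return 1;
		}
		/* ... open and load the BPF object via libbpf ... */
		return 0;
	}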
| | * samples/bpf: Convert XDP samples to libbpf usage (Maciej Fijalkowski, 2019-02-01, 6 files, -103/+253)
Some of the XDP samples that attach the bpf program to the interface via libbpf's bpf_set_link_xdp_fd are still using bpf_load.c for loading and manipulating the ebpf program and maps. Convert them to do this through libbpf and remove bpf_load from the picture. While at it, remove what looks like a debug leftover in xdp_redirect_map_user.c. In xdp_redirect_cpu, change the way the program to be loaded onto the interface is chosen - the user now needs to pass the program's section name instead of the relative number. In case of a typo, print out the section names to choose from.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * samples/bpf: xdp_redirect_cpu has no need for read_trace_pipe (Jesper Dangaard Brouer, 2019-02-01, 1 file, -10/+0)
The sample xdp_redirect_cpu is not using the helper bpf_trace_printk. Thus it makes no sense that the --debug option is reading from /sys/kernel/debug/tracing/trace_pipe via read_trace_pipe. Simply remove it.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * libbpf: Add a helper for retrieving a map fd for a given name (Maciej Fijalkowski, 2019-02-01, 3 files, -0/+10)
XDP samples mostly cooperate with eBPF maps through their file descriptors. In the case of an eBPF program that contains multiple maps it can be tiresome to iterate through them and call bpf_map__fd for each one. Add a helper mostly based on bpf_object__find_map_by_name, but instead of returning the struct bpf_map pointer, return the map fd.
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
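A short usage sketch; the helper name bpf_object__find_map_fd_by_name() is an assumption here, since the commit message does not spell it out:

	#include <stdio.h>
	#include <bpf/libbpf.h>

	/* Sketch: grab a map fd directly by name from a loaded bpf_object,
	 * instead of iterating all maps and calling bpf_map__fd() on each. */
	static int get_map_fd(struct bpf_object *obj, const char *name)
	{
		int fd = bpf_object__find_map_fd_by_name(obj, name);

		if (fd < 0)
			fprintf(stderr, "map '%s' not found in object\n", name);
		return fd;
	}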
| * bpf: powerpc64: add JIT support for bpf line info (Sandipan Das, 2019-02-01, 1 file, -0/+1)
This adds support for generating bpf line info for JITed programs.
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| * Merge branch 'bpf-spinlocks' (Daniel Borkmann, 2019-02-01, 26 files, -39/+1248)
Alexei Starovoitov says:
====================
Many algorithms need to read and modify several variables atomically. Until now it was hard to impossible to implement such algorithms in BPF. Hence introduce support for bpf_spin_lock.

The api consists of 'struct bpf_spin_lock' that should be placed inside a hash/array/cgroup_local_storage element and the bpf_spin_lock/unlock() helper functions. Example:

    struct hash_elem {
        int cnt;
        struct bpf_spin_lock lock;
    };

    struct hash_elem *val = bpf_map_lookup_elem(&hash_map, &key);
    if (val) {
        bpf_spin_lock(&val->lock);
        val->cnt++;
        bpf_spin_unlock(&val->lock);
    }

and the BPF_F_LOCK flag for the lookup/update bpf syscall commands that allows user space to read/write map elements under lock. Together these primitives allow race free access to map elements from bpf programs and from user space.

Key restriction: root only.
Key requirement: maps must be annotated with BTF.

This concept was discussed at Linux Plumbers Conference 2018. Thank you everyone who participated and helped to iron out details of api and implementation.

Patch 1: bpf_spin_lock support in the verifier, BTF, hash, array.
Patch 2: bpf_spin_lock in cgroup local storage.
Patches 3,4,5: tests
Patch 6: BPF_F_LOCK flag to lookup/update
Patches 7,8,9: tests

v6->v7:
- fixed this_cpu->__this_cpu per Peter's suggestion and added Ack.
- simplified bpf_spin_lock and load/store overlap check in the verifier as suggested by Andrii
- rebase

v5->v6:
- adopted arch_spinlock approach suggested by Peter
- switched to spin_lock_irqsave equivalent as the simplest way to avoid deadlocks in the rare case of nested networking progs (cgroup-bpf prog in preempt_disable vs clsbpf in softirq sharing the same map with bpf_spin_lock). bpf_spin_lock is only allowed in networking progs that don't have arbitrary entry points, unlike tracing progs.
- rebase and split test_verifier tests

v4->v5:
- disallow bpf_spin_lock for tracing progs due to insufficient preemption checks
- socket filter progs cannot use bpf_spin_lock due to missing preempt_disable
- fix atomic_set_release. Spotted by Peter.
- fixed hash_of_maps

v3->v4:
- fix BPF_EXIST | BPF_NOEXIST check in patch 6. Spotted by Jakub. Thanks!
- rebase

v2->v3:
- fixed build on ia64 and archs where qspinlock is not supported
- fixed missing lock init during lookup w/o BPF_F_LOCK. Spotted by Martin

v1->v2:
- addressed several issues spotted by Daniel and Martin in patch 1
- added test11 to patch 4 as suggested by Daniel
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * selftests/bpf: test for BPF_F_LOCK (Alexei Starovoitov, 2019-02-01, 3 files, -1/+141)
Add a C based test that runs 4 bpf programs in parallel updating the same hash and array maps, and another 2 threads that read from these two maps via the lookup(key, value, BPF_F_LOCK) api, to make sure user space sees consistent values in both hash and array elements while it races with the kernel bpf progs.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * libbpf: introduce bpf_map_lookup_elem_flags() (Alexei Starovoitov, 2019-02-01, 3 files, -0/+16)
Introduce the int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags) helper to look up array/hash/cgroup_local_storage elements with the BPF_F_LOCK flag.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
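A usage sketch built directly on the signature quoted above; the value layout is an assumption for illustration and must match the map's BTF-described value type:

	#include <linux/bpf.h>
	#include <linux/types.h>
	#include <bpf/bpf.h>

	struct map_value {
		struct bpf_spin_lock lock;	/* must be described in the map's BTF */
		int cnt;
	};

	/* Sketch: the kernel takes the element's spin lock before copying the
	 * value out, so user space sees a consistent snapshot. */
	static int read_locked(int map_fd, __u32 key, struct map_value *out)
	{
		return bpf_map_lookup_elem_flags(map_fd, &key, out, BPF_F_LOCK);
	}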
| | * tools/bpf: sync uapi/bpf.h (Alexei Starovoitov, 2019-02-01, 1 file, -0/+1)
add BPF_F_LOCK definition to tools/include/uapi/linux/bpf.h
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * bpf: introduce BPF_F_LOCK flag (Alexei Starovoitov, 2019-02-01, 7 files, -14/+110)
Introduce the BPF_F_LOCK flag for the map_lookup and map_update syscall commands and for the map_update() helper function. In all these cases take the lock of the existing element (which was provided in the BTF description) before copying (in or out) the rest of the map value.

Implementation details that are part of the uapi:

Array: the array map takes the element lock for lookup/update.

Hash: the hash map also takes the lock for lookup/update and tries to avoid the bucket lock. If an old element exists it takes the element lock and updates the element in place. If the element doesn't exist it allocates a new one and inserts it into the hash table while holding the bucket lock. In the rare case the hashmap has to take both the bucket lock and the element lock to update an old value in place.

Cgroup local storage: similar to array; update in place and lookup are done with the lock taken.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
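A user-space sketch of the update side under these semantics; the helper comes from libbpf, while the value layout is an assumption for illustration and must match the map's BTF:

	#include <linux/bpf.h>
	#include <linux/types.h>
	#include <bpf/bpf.h>

	struct map_value {
		struct bpf_spin_lock lock;	/* lock field itself is never copied */
		long counter;
	};

	/* Sketch: the kernel locks the existing element before copying the new
	 * value in, so concurrent bpf programs never observe a torn update. */
	static int update_locked(int map_fd, __u32 key, long counter)
	{
		struct map_value val = { .counter = counter };

		return bpf_map_update_elem(map_fd, &key, &val, BPF_F_LOCK);
	}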
| | * selftests/bpf: add bpf_spin_lock C test (Alexei Starovoitov, 2019-02-01, 4 files, -2/+155)
Add a bpf_spin_lock C based test that requires the latest llvm with BTF support.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * selftests/bpf: add bpf_spin_lock verifier tests (Alexei Starovoitov, 2019-02-01, 2 files, -1/+434)
Add bpf_spin_lock tests to test_verifier.c that don't require the latest llvm with BTF support.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * tools/bpf: sync include/uapi/linux/bpf.h (Alexei Starovoitov, 2019-02-01, 1 file, -1/+6)
sync bpf.h
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
| | * bpf: add support for bpf_spin_lock to cgroup local storage (Alexei Starovoitov, 2019-02-01, 3 files, -1/+6)
Allow 'struct bpf_spin_lock' to reside inside cgroup local storage.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>