path: root/drivers/irqchip/irq-renesas-irqc.c
Age    Commit message    Author    Files    Lines
39 hourscachestat: fix page cache statistics permission checkingLinus Torvalds1-0/+17
When the 'cachestat()' system call was added in commit cf264e1329fb ("cachestat: implement cachestat syscall"), it was meant to be a much more convenient (and performant) version of mincore() that didn't need mapping things into the user virtual address space in order to work. But it ended up missing the "check for writability or ownership" fix for mincore(), done in commit 134fca9063ad ("mm/mincore.c: make mincore() more conservative"). This just adds equivalent logic to 'cachestat()', modified for the file context (rather than vma). Reported-by: Sudheendra Raghav Neela <sneela@tugraz.at> Fixes: cf264e1329fb ("cachestat: implement cachestat syscall") Tested-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Nhat Pham <nphamcs@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
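A minimal sketch of the kind of gate this adds, mirroring the can_do_mincore() logic but keyed on the file instead of a vma; the helper name and exact calls below are illustrative rather than the literal diff:

        static int can_do_cachestat(struct file *f)
        {
                /* writers and file owners may see residency data */
                if (f->f_mode & FMODE_WRITE)
                        return 0;
                if (inode_owner_or_capable(file_mnt_idmap(f), file_inode(f)))
                        return 0;
                /* everyone else must at least be permitted to write the file */
                return file_permission(f, MAY_WRITE);
        }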
2 daysMAINTAINERS: ipmi: update my email addressCorey Minyard1-1/+1
Old email still works, and will indefinitely, but I'm switching to a new one. Signed-off-by: Corey Minyard <corey@minyard.net>
2 daysKEYS: trusted: dcp: fix improper sg use with CONFIG_VMAP_STACK=yDavid Gstir1-4/+18
With vmalloc stack addresses enabled (CONFIG_VMAP_STACK=y) DCP trusted keys can crash during en- and decryption of the blob encryption key via the DCP crypto driver. This is caused by improperly using sg_init_one() with vmalloc'd stack buffers (plain_key_blob). Fix this by always using kmalloc() for buffers we give to the DCP crypto driver. Cc: stable@vger.kernel.org # v6.10+ Fixes: 0e28bf61a5f9 ("KEYS: trusted: dcp: fix leak of blob encryption key") Signed-off-by: David Gstir <david@sigma-star.at> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
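A rough sketch of the bounce-buffer fix: sg_init_one() needs a linearly-mapped address, so the stack blob is copied into kmalloc'd memory first (blob_len and sg_in are assumed names here; plain_key_blob is the buffer named above):

        buf = kmemdup(plain_key_blob, blob_len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        sg_init_one(&sg_in, buf, blob_len);
        /* hand sg_in to the DCP crypto driver for en-/decryption */

        memcpy(plain_key_blob, buf, blob_len);  /* copy the result back */
        kfree_sensitive(buf);                   /* key material: wipe before freeing */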
2 dayskeys: drop shadowing dead prototypeChristian Göttsche1-1/+1
The global variable pkcs7 does not exist. Drop the variable declaration, but keep the struct prototype needed for is_key_on_revocation_list(). Reported by clang: ./include/keys/system_keyring.h:104:67: warning: declaration shadows a variable in the global scope [-Wshadow] 104 | static inline int is_key_on_revocation_list(struct pkcs7_message *pkcs7) | ^ ./include/keys/system_keyring.h:76:30: note: previous declaration is here 76 | extern struct pkcs7_message *pkcs7; | ^ Fixes: 56c5812623f9 ("certs: Add EFI_CERT_X509_GUID support for dbx entries") Signed-off-by: Christian Göttsche <cgzones@googlemail.com> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
3 daysrseq: Fix rseq unregistration regressionMathieu Desnoyers1-1/+1
A logic inversion in rseq_reset_rseq_cpu_node_id() causes the rseq unregistration to fail when rseq_validate_ro_fields() succeeds rather than the opposite. This affects both CONFIG_DEBUG_RSEQ=y and CONFIG_DEBUG_RSEQ=n. Fixes: 7d5265ffcd8b ("rseq: Validate read-only fields under DEBUG_RSEQ config") Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250116205956.836074-1-mathieu.desnoyers@efficios.com
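The shape of a one-line inversion fix, assuming the common kernel convention that the validation helper returns 0 on success (an illustration of the inversion, not the literal diff):

        /* buggy: aborts unregistration when validation *succeeds* */
        if (!rseq_validate_ro_fields(t))
                return -EFAULT;

        /* fixed: abort only when validating the read-only fields fails */
        if (rseq_validate_ro_fields(t))
                return -EFAULT;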
3 daysInput: synaptics - fix crash when enabling pass-through portDmitry Torokhov2-14/+43
When enabling a pass-through port an interrupt might come before the psmouse driver binds to the pass-through port. However, the synaptics sub-driver tries to access the psmouse instance presumably associated with the pass-through port to figure out whether only 1 byte of response or the entire protocol packet needs to be forwarded to the pass-through port, and may crash if a psmouse instance has not been attached to the port yet. Fix the crash by introducing open() and close() methods for the port and checking whether the port is open before trying to access the psmouse instance. Because psmouse calls serio_open() only after attaching the psmouse instance to the serio port instance, this prevents the potential crash. Reported-by: Takashi Iwai <tiwai@suse.de> Fixes: 100e16959c3c ("Input: libps2 - attach ps2dev instances as serio port's drvdata") Link: https://bugzilla.suse.com/show_bug.cgi?id=1219522 Cc: stable@vger.kernel.org Reviewed-by: Takashi Iwai <tiwai@suse.de> Link: https://lore.kernel.org/r/Z4qSHORvPn7EU2j1@google.com Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
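Roughly what the new open()/close() gating looks like; the struct and field names below are illustrative placeholders, not the driver's actual ones:

        static int synaptics_pt_open(struct serio *serio)
        {
                struct synaptics_pt_state *state = serio->port_data; /* hypothetical */

                /* psmouse calls serio_open() only after attaching its instance,
                 * so once this runs the child's drvdata is safe to dereference */
                state->open = true;
                return 0;
        }

        static void synaptics_pt_close(struct serio *serio)
        {
                struct synaptics_pt_state *state = serio->port_data; /* hypothetical */

                state->open = false;
        }

The interrupt path then checks state->open before touching the child psmouse instance.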
3 daysInput: atkbd - map F23 key to support default copilot shortcutMark Pearson1-1/+1
Microsoft defined Meta+Shift+F23 as the Copilot shortcut instead of a dedicated keycode, and multiple vendors have their keyboards emit this sequence in response to users pressing a dedicated "Copilot" key. Unfortunately the default keymap table in atkbd does not map scancode 0x6e (F23), so the key combination does not work even if userspace is ready to handle it. Because this behavior is common across multiple vendors and the scancode is currently unused, map 0x6e to keycode 193 (KEY_F23) so that the key sequence is generated properly. MS documentation for the scan code: https://learn.microsoft.com/en-us/windows/win32/inputdev/about-keyboard-input#scan-codes Confirmed on Lenovo, HP and Dell machines by Canonical. Tested on Lenovo T14s G6 AMD. Signed-off-by: Mark Pearson <mpearson-lenovo@squebb.ca> Link: https://lore.kernel.org/r/20250107034554.25843-1-mpearson-lenovo@squebb.ca Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
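Conceptually the change is a single keymap entry; shown here in designated-initializer form purely for readability (the real atkbd_set2_keycode table in atkbd.c is a positional array):

        [0x6e] = 193,   /* scan code 0x6e (F23) -> KEY_F23, emitted by Copilot keys */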
3 daysnet/rose: prevent integer overflows in rose_setsockopt()Nikita Zhandarovich1-8/+8
In case of possible unpredictably large arguments passed to rose_setsockopt() and multiplied by extra values on top of that, integer overflows may occur. Do the safest minimum and fix these issues by checking the contents of 'opt' and returning -EINVAL if they are too large. Also, switch to unsigned int and remove useless check for negative 'opt' in ROSE_IDLE case. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Nikita Zhandarovich <n.zhandarovich@fintech.ru> Link: https://patch.msgid.link/20250115164220.19954-1-n.zhandarovich@fintech.ru Signed-off-by: Jakub Kicinski <kuba@kernel.org>
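A sketch of the bound, assuming the existing scaling by HZ (ROSE_IDLE uses 60 * HZ, so its bound differs accordingly); 'rose' stands for rose_sk(sk):

        unsigned int opt;

        if (copy_from_sockptr(&opt, optval, sizeof(opt)))
                return -EFAULT;

        switch (optname) {
        case ROSE_T1:
                /* later multiplied by HZ; reject values that would overflow */
                if (opt < 1 || opt > UINT_MAX / HZ)
                        return -EINVAL;
                rose->t1 = opt * HZ;
                break;
        /* ROSE_T2/T3/HB/IDLE get an equivalent bound for their own scale */
        }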
3 daysnet: phylink: fix regression when binding a PHYRussell King (Oracle)1-1/+5
Some PHYs don't support clause 45 access, and return -EOPNOTSUPP from phy_modify_mmd(), which causes phylink_bringup_phy() to fail. Prevent this failure by allowing -EOPNOTSUPP to also mean success. Reported-by: Jiawen Wu <jiawenwu@trustnetic.com> Tested-by: Jiawen Wu <jiawenwu@trustnetic.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/E1tZp1a-001V62-DT@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
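The tolerance amounts to treating -EOPNOTSUPP from the clause-45 accessor as "nothing to do"; a sketch with placeholder register/mask arguments (not the actual ones phylink_bringup_phy() touches):

        ret = phy_modify_mmd(phy, MDIO_MMD_PCS, reg, mask, 0);
        if (ret == -EOPNOTSUPP)         /* PHY has no clause 45 access: fine */
                ret = 0;
        if (ret < 0)
                return ret;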
3 daysnet: ethernet: ti: am65-cpsw: streamline TX queue creation and cleanupRoger Quadros1-46/+77
Introduce am65_cpsw_create_txqs() and am65_cpsw_destroy_txqs() and use them. Signed-off-by: Roger Quadros <rogerq@kernel.org> Link: https://patch.msgid.link/20250117-am65-cpsw-streamline-v2-3-91a29c97e569@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: ethernet: ti: am65-cpsw: streamline RX queue creation and cleanupRoger Quadros1-124/+119
Introduce am65_cpsw_create_rxqs() and am65_cpsw_destroy_rxqs() and use them. Signed-off-by: Roger Quadros <rogerq@kernel.org> Link: https://patch.msgid.link/20250117-am65-cpsw-streamline-v2-2-91a29c97e569@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: ethernet: ti: am65-cpsw: ensure proper channel cleanup in error pathRoger Quadros1-23/+44
We are missing netif_napi_del() and am65_cpsw_nuss_free_tx/rx_chns() in error path when am65_cpsw_nuss_init_tx/rx_chns() is used anywhere other than at probe(). i.e. am65_cpsw_nuss_update_tx_rx_chns and am65_cpsw_nuss_resume() As reported, in am65_cpsw_nuss_update_tx_rx_chns(), if am65_cpsw_nuss_init_tx_chns() partially fails then devm_add_action(dev, am65_cpsw_nuss_free_tx_chns,..) is added but the cleanup via am65_cpsw_nuss_free_tx_chns() will not run. Same issue exists for am65_cpsw_nuss_init_tx/rx_chns() failures in am65_cpsw_nuss_resume() as well. This would otherwise require more instances of devm_add/remove_action and is clearly more of a distraction than any benefit. So, drop devm_add/remove_action for am65_cpsw_nuss_free_tx/rx_chns() and call am65_cpsw_nuss_free_tx/rx_chns() and netif_napi_del() where required. Reported-by: Siddharth Vadapalli <s-vadapalli@ti.com> Closes: https://lore.kernel.org/all/m4rhkzcr7dlylxr54udyt6lal5s2q4krrvmyay6gzgzhcu4q2c@r34snfumzqxy/ Signed-off-by: Roger Quadros <rogerq@kernel.org> Link: https://patch.msgid.link/20250117-am65-cpsw-streamline-v2-1-91a29c97e569@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Convert inet6_rtm_deladdr() to per-netns RTNL.Kuniyuki Iwashima1-4/+8
Let's register inet6_rtm_deladdr() with RTNL_FLAG_DOIT_PERNET and hold rtnl_net_lock() before inet6_addr_del(). Now inet6_addr_del() is always called under per-netns RTNL. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-12-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Convert inet6_rtm_newaddr() to per-netns RTNL.Kuniyuki Iwashima1-8/+17
Let's register inet6_rtm_newaddr() with RTNL_FLAG_DOIT_PERNET and hold rtnl_net_lock() before __dev_get_by_index(). Now inet6_addr_add() and inet6_addr_modify() are always called under per-netns RTNL. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-11-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Move lifetime validation to inet6_rtm_newaddr().Kuniyuki Iwashima1-58/+35
inet6_addr_add() and inet6_addr_modify() have the same code to validate IPv6 lifetime that is done under RTNL. Let's factorise it out to inet6_rtm_newaddr() so that we can validate the lifetime without RTNL later. Note that inet6_addr_add() is called from addrconf_add_ifaddr(), but the lifetime is INFINITY_LIFE_TIME in the path, so expires and flags are 0. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-10-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Set cfg.ifa_flags before device lookup in inet6_rtm_newaddr().Kuniyuki Iwashima1-7/+7
We will convert inet6_rtm_newaddr() to per-netns RTNL. Except for IFA_F_OPTIMISTIC, cfg.ifa_flags can be set before __dev_get_by_index(). Let's move the ifa_flags setup before __dev_get_by_index() so that we can set ifa_flags without RTNL. Also, it is now moved before tb[IFA_CACHEINFO] in preparation for the next patch. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-9-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Pass dev to inet6_addr_add().Kuniyuki Iwashima1-10/+10
inet6_addr_add() is called from inet6_rtm_newaddr() and addrconf_add_ifaddr(). inet6_addr_add() looks up dev by __dev_get_by_index(), but it's already done in inet6_rtm_newaddr(). Let's move the 2nd lookup to addrconf_add_ifaddr() and pass dev to inet6_addr_add(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-8-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Convert inet6_ioctl() to per-netns RTNL.Kuniyuki Iwashima1-6/+6
These functions are called from inet6_ioctl() with a socket's netns and hold RTNL. * SIOCSIFADDR : addrconf_add_ifaddr() * SIOCDIFADDR : addrconf_del_ifaddr() * SIOCSIFDSTADDR : addrconf_set_dstaddr() Let's use rtnl_net_lock(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-7-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Hold rtnl_net_lock() in addrconf_init() and addrconf_cleanup().Kuniyuki Iwashima1-5/+5
addrconf_init() holds RTNL for blackhole_netdev, which is the global device in init_net. addrconf_cleanup() holds RTNL to clean up devices in init_net too. Let's use rtnl_net_lock(&init_net) there. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-6-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Hold rtnl_net_lock() in addrconf_dad_work().Kuniyuki Iwashima1-3/+6
addrconf_dad_work() is per-address work and holds RTNL internally. We can fetch netns as dev_net(ifp->idev->dev). Let's use rtnl_net_lock(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Hold rtnl_net_lock() in addrconf_verify_work().Kuniyuki Iwashima1-2/+2
addrconf_verify_work() is per-netns work to call addrconf_verify_rtnl() under RTNL. Let's use rtnl_net_lock(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Convert net.ipv6.conf.${DEV}.XXX sysctl to per-netns RTNL.Kuniyuki Iwashima1-30/+30
net.ipv6.conf.${DEV}.XXX sysctl are changed under RTNL: * forwarding * ignore_routes_with_linkdown * disable_ipv6 * proxy_ndp * addr_gen_mode * stable_secret * disable_policy Let's use rtnl_net_lock() there. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysipv6: Add __in6_dev_get_rtnl_net().Kuniyuki Iwashima1-0/+5
We will convert rtnl_lock() to rtnl_net_lock(), and we want to convert __in6_dev_get() too. __in6_dev_get() uses rcu_dereference_rtnl(), but as written in its comment, rtnl_dereference() or rcu_dereference() is preferable. Let's add __in6_dev_get_rtnl_net() that uses rtnl_net_dereference(). We can add the RCU version helper later if needed. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250115080608.28127-2-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: stmmac: Drop redundant skb_mark_for_recycle() for SKB fragsFurong Xu1-6/+0
After commit df542f669307 ("net: stmmac: Switch to zero-copy in non-XDP RX path"), SKBs are always marked for recycle, so it is redundant to mark them again when new frags are appended. Signed-off-by: Furong Xu <0x1207@gmail.com> Link: https://patch.msgid.link/20250117062805.192393-1-0x1207@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: mii: Fix the Speed display when the network cable is not connectedXiangqian Zhang1-0/+3
With two different models of USB network card (drivers r8152 and asix), ethtool reports Speed = 10Mb/s even when no network cable is connected. The problem reproduces on Linux 3.10, 4.19, 5.4 and 6.12, and also exists on the latest kernel. Both drivers call mii_ethtool_get_link_ksettings, but cmd->base.speed in this function can only be SPEED_1000, SPEED_100 or SPEED_10. When the network cable is not connected, set cmd->base.speed = SPEED_UNKNOWN. Signed-off-by: Xiangqian Zhang <zhangxiangqian@kylinos.cn> Link: https://patch.msgid.link/20250117094603.4192594-1-zhangxiangqian@kylinos.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
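One plausible shape of the fix inside mii_ethtool_get_link_ksettings(); whether the real patch keys off BMSR or the carrier state is an assumption here:

        bmsr = mii->mdio_read(dev, mii->phy_id, MII_BMSR);
        if (!(bmsr & BMSR_LSTATUS)) {
                /* no link: don't report a stale 10/100/1000 value */
                cmd->base.speed = SPEED_UNKNOWN;
                cmd->base.duplex = DUPLEX_UNKNOWN;
        }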
3 dayssysctl net: Remove macro checks for CONFIG_SYSCTLDenis Kirjanov2-8/+0
Since the dccp and llc makefiles already gate the sysctl code compilation with xxx-$(CONFIG_SYSCTL), we can drop the checks. Signed-off-by: Denis Kirjanov <kirjanov@gmail.com> Link: https://patch.msgid.link/20250119134254.19250-1-kirjanov@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 dayseth: bnxt: update header sizing defaultsJakub Kicinski1-1/+6
300-400B RPC requests are fairly common. With the current default 256B HDS threshold, bnxt ends up splitting those, lowering PCIe bandwidth efficiency and increasing the number of memory allocations. Increase the HDS threshold to fit 4 buffers in a 4k page. This works out to 640B as the threshold on a typical kernel config. This change increases performance by 4.5% for a microbenchmark which receives 400B RPCs and sends empty responses. Admittedly this is just a single benchmark, but 256B works out to just 6 (so 2 more) packets per head page, because shinfo size dominates the headers. Now that we use page pool for the header pages I was also tempted to default rx_copybreak to 0, but in synthetic testing the copybreak size doesn't seem to make much difference. Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-8-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
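Rough arithmetic behind the 640B figure on a typical 64-bit config; the per-buffer bookkeeping size is config dependent, so treat these numbers as illustrative:

        /*
         * 4096 B page / 4 header buffers             = 1024 B per buffer
         * - SKB_DATA_ALIGN(skb_shared_info) etc.     ~  384 B bookkeeping
         * ----------------------------------------------------------------
         *                                            ~  640 B usable as HDS threshold
         */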
3 dayseth: bnxt: allocate enough buffer space to meet HDS thresholdJakub Kicinski1-3/+4
Now that we can configure HDS threshold separately from the rx_copybreak HDS threshold may be higher than rx_copybreak. We need to make sure that we have enough space for the headers. Fixes: 6b43673a25c3 ("bnxt_en: add support for hds-thresh ethtool command") Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-7-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: ethtool: populate the default HDS params in the coreJakub Kicinski3-6/+4
The core has the current HDS config, it can pre-populate the values for the drivers. While at it, remove the zero-setting in netdevsim. Zero are the default values since the config is zalloc'ed. Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-6-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 dayseth: bnxt: apply hds_thrs settings correctlyJakub Kicinski1-1/+1
Use the pending config for hds_thrs. Core will only update the "current" one after we return success. Without this change 2 reconfigs would be required for the setting to reach the device. Fixes: 6b43673a25c3 ("bnxt_en: add support for hds-thresh ethtool command") Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-5-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: provide pending ring configuration in net_deviceJakub Kicinski4-8/+29
Record the pending configuration in net_device struct. ethtool core duplicates the current config and the specific handlers (for now just ringparam) can modify it. Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: ethtool: store netdev in a temp variable in ethnl_default_set_doit()Jakub Kicinski1-3/+6
For ease of review of the next patch store the dev pointer on the stack, instead of referring to req_info.dev every time. No functional changes. Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: move HDS config from ethtool stateJakub Kicinski10-24/+43
Separate the HDS config from the ethtool state struct. The HDS config contains just simple parameters, not state. Having it as a separate struct will make it easier to clone / copy and also long term potentially make it per-queue. Reviewed-by: Michael Chan <michael.chan@broadcom.com> Link: https://patch.msgid.link/20250119020518.1962249-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Use consume_skb() in connect() and sendmsg().Kuniyuki Iwashima1-14/+14
This is based on Donald Hunter's patch. These functions could fail for various reasons, sometimes triggering kfree_skb(). * unix_stream_connect() : connect() * unix_stream_sendmsg() : sendmsg() * queue_oob() : sendmsg(MSG_OOB) * unix_dgram_sendmsg() : sendmsg() Such kfree_skb() is tied to the errno of connect() and sendmsg(), and we need not define skb drop reasons. Let's use consume_skb() not to churn kfree_skb() events. Link: https://lore.kernel.org/netdev/eb30b164-7f86-46bf-a5d3-0f8bda5e9398@redhat.com/ Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-10-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Reuse out_pipe label in unix_stream_sendmsg().Kuniyuki Iwashima1-14/+9
This is a follow-up of commit d460b04bc452 ("af_unix: Clean up error paths in unix_stream_sendmsg()."). If we initialise skb with NULL in unix_stream_sendmsg(), we can reuse the existing out_pipe label for the SEND_SHUTDOWN check. Let's rename it and adjust the existing label as out_pipe_lock. While at it, size and data_len are moved to the while loop scope. Suggested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-9-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in unix_dgram_disconnected().Kuniyuki Iwashima2-1/+10
unix_dgram_disconnected() is called from two places: 1. when a connect()ed socket dis-connect()s or re-connect()s to another socket 2. when sendmsg() fails because the peer socket that the client has connect()ed to has been close()d Then, the client's recv queue is purged to remove all messages from the old peer socket. Let's define a new drop reason for that case. # echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable # python3 >>> from socket import * >>> >>> # s1 has a message from s2 >>> s1, s2 = socketpair(AF_UNIX, SOCK_DGRAM) >>> s2.send(b'hello world') >>> >>> # re-connect() drops the message from s2 >>> s3 = socket(AF_UNIX, SOCK_DGRAM) >>> s3.bind('') >>> s1.connect(s3.getsockname()) # cat /sys/kernel/tracing/trace_pipe python3-250 ... kfree_skb: ... location=skb_queue_purge_reason+0xdc/0x110 reason: UNIX_DISCONNECT Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-8-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in unix_stream_read_skb().Kuniyuki Iwashima1-2/+2
unix_stream_read_skb() is called when BPF SOCKMAP reads some data from a socket in the map. SOCKMAP does not support MSG_OOB, and reading OOB results in a drop. Let's set drop reasons respectively. * SOCKET_CLOSE : the socket in SOCKMAP was close()d * UNIX_SKIP_OOB : OOB was read from the socket in SOCKMAP Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-7-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in manage_oob().Kuniyuki Iwashima2-1/+7
AF_UNIX SOCK_STREAM socket supports MSG_OOB. When OOB data is sent to a socket, recv() will break at that point. If the next recv() does not have MSG_OOB, the normal data following the OOB data is returned. Then, the OOB skb is dropped. Let's define a new drop reason for that case in manage_oob(). # echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable # python3 >>> from socket import * >>> s1, s2 = socketpair(AF_UNIX) >>> s1.send(b'a', MSG_OOB) >>> s1.send(b'b') >>> s2.recv(2) b'b' # cat /sys/kernel/tracing/trace_pipe ... python3-223 ... kfree_skb: ... location=unix_stream_read_generic+0x59e/0xc20 reason: UNIX_SKIP_OOB Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-6-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in __unix_gc().Kuniyuki Iwashima1-1/+1
Inflight file descriptors by SCM_RIGHTS hold references to the struct file. AF_UNIX sockets could hold references to each other, forming reference cycles. Once such sockets are close()d without the fd recv()ed, they will be unaccessible from userspace but remain in kernel. __unix_gc() garbage-collects skb with the dead file descriptors and frees them by __skb_queue_purge(). Let's set SKB_DROP_REASON_SOCKET_CLOSE there. # echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable # python3 >>> from socket import * >>> from array import array >>> >>> # Create a reference cycle >>> s1 = socket(AF_UNIX, SOCK_DGRAM) >>> s1.bind('') >>> s1.sendmsg([b"nop"], [(SOL_SOCKET, SCM_RIGHTS, array("i", [s1.fileno()]))], 0, s1.getsockname()) >>> s1.close() >>> >>> # Trigger GC >>> s2 = socket(AF_UNIX) >>> s2.close() # cat /sys/kernel/tracing/trace_pipe ... kworker/u16:2-42 ... kfree_skb: ... location=__unix_gc+0x4ad/0x580 reason: SOCKET_CLOSE Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-5-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in unix_sock_destructor().Kuniyuki Iwashima1-1/+1
unix_sock_destructor() is called as sk->sk_destruct() just before the socket is actually freed. Let's use SKB_DROP_REASON_SOCKET_CLOSE for skb_queue_purge(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-4-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysaf_unix: Set drop reason in unix_release_sock().Kuniyuki Iwashima2-2/+5
unix_release_sock() is called when the last refcnt of struct file is released. Let's define a new drop reason SKB_DROP_REASON_SOCKET_CLOSE and set it for kfree_skb() in unix_release_sock(). # echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable # python3 >>> from socket import * >>> s1, s2 = socketpair(AF_UNIX) >>> s1.send(b'hello world') >>> s2.close() # cat /sys/kernel/tracing/trace_pipe ... python3-280 ... kfree_skb: ... protocol=0 location=unix_release_sock+0x260/0x420 reason: SOCKET_CLOSE To be precise, unix_release_sock() is also called for a new child socket in unix_stream_connect() when something fails, but the new sk does not have skb in the recv queue then and no event is logged. Note that only tcp_inbound_ao_hash() uses a similar drop reason, SKB_DROP_REASON_TCP_CLOSE, and this can be generalised later. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-3-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysnet: dropreason: Gather SOCKET_ drop reasons.Kuniyuki Iwashima1-6/+6
The following patch adds a new drop reason starting with the SOCKET_ prefix. Let's gather the existing SOCKET_ reasons. Note that the order is not part of uAPI. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://patch.msgid.link/20250116053441.5758-2-kuniyu@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
3 daysselftests/net/ipsec: Fix Null pointer dereference in rtattr_pack()Liu Ye1-1/+2
Address Null pointer dereference / undefined behavior in rtattr_pack (note that size is 0 in the bad case). Flagged by cppcheck as: tools/testing/selftests/net/ipsec.c:230:25: warning: Possible null pointer dereference: payload [nullPointer] memcpy(RTA_DATA(attr), payload, size); ^ tools/testing/selftests/net/ipsec.c:1618:54: note: Calling function 'rtattr_pack', 4th argument 'NULL' value is 0 if (rtattr_pack(&req.nh, sizeof(req), XFRMA_IF_ID, NULL, 0)) { ^ tools/testing/selftests/net/ipsec.c:230:25: note: Null pointer dereference memcpy(RTA_DATA(attr), payload, size); ^ Signed-off-by: Liu Ye <liuye@kylinos.cn> Link: https://patch.msgid.link/20250116013037.29470-1-liuye@kylinos.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
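The guard is essentially the whole fix; roughly (the selftest passes a NULL payload with size 0 when packing an attribute that carries no data):

        /* rtattr_pack() in tools/testing/selftests/net/ipsec.c - sketch of the guard */
        if (payload)
                memcpy(RTA_DATA(attr), payload, size);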
3 daysselftests/bpf: Add some tests related to 'may_goto 0' insnsYonghong Song3-0/+129
Add both asm-based and C-based tests which have 'may_goto 0' insns. For the following code in C-based test, int i, tmp[3]; for (i = 0; i < 3 && can_loop; i++) tmp[i] = 0; The clang compiler (clang 19 and 20) generates may_goto 2 may_goto 1 may_goto 0 r1 = 0 r2 = 0 r3 = 0 The above asm codes are due to llvm pass SROAPass. This ensures the successful verification since tmp[0-2] are initialized. Otherwise, the code without SROAPass like may_goto 5 r1 = 0 may_goto 3 r2 = 0 may_goto 1 r3 = 0 will have verification failure. Although from the source code C-based test should have verification failure, clang compiler optimization generates code with successful verification. If gcc generates different asm codes than clang, the following code can be used for gcc: int i, tmp[3]; for (i = 0; i < 3; i++) tmp[i] = 0; Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20250118192034.2124952-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 daysbpf: Remove 'may_goto 0' instruction in opt_remove_nops()Yonghong Song1-3/+8
Since 'may_goto 0' insns are actually no-ops, let us remove them. Otherwise, the verifier will generate code like /* r10 - 8 stores the implicit loop count */ r11 = *(u64 *)(r10 -8) if r11 == 0x0 goto pc+2 r11 -= 1 *(u64 *)(r10 -8) = r11 which is pure overhead. The following code pattern (from the previous commit) is also handled: may_goto 2 may_goto 1 may_goto 0 With this commit, the above three 'may_goto' insns are all eliminated. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20250118192029.2124584-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 daysbpf: Allow 'may_goto 0' instruction in verifierYonghong Song1-3/+2
Commit 011832b97b31 ("bpf: Introduce may_goto instruction") added support for the may_goto insn. The 'may_goto 0' insn was disallowed since it is equivalent to a nop, as both branches go to the next insn. But it is possible that compiler transformations generate a 'may_goto 0' insn. Emil Tsalapatis from Meta reported such a case which caused a verification failure. For example, for the following code, int i, tmp[3]; for (i = 0; i < 3 && can_loop; i++) tmp[i] = 0; ... clang 20 may generate code like may_goto 2; may_goto 1; may_goto 0; r1 = 0; /* tmp[0] = 0; */ r2 = 0; /* tmp[1] = 0; */ r3 = 0; /* tmp[2] = 0; */ Let us permit the 'may_goto 0' insn to avoid verification failure for code like the above. Reported-by: Emil Tsalapatis <etsal@meta.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/r/20250118192024.2124059-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 daysselftests/bpf: Add test case for the freeing of bpf_timerHou Tao2-0/+236
The main purpose of the test is to demonstrate the locking problem when freeing a bpf_timer under PREEMPT_RT. When freeing a bpf_timer that is running on another CPU in bpf_timer_cancel_and_free(), hrtimer_cancel() will try to acquire a spin-lock (namely softirq_expiry_lock); however, the freeing procedure already holds a raw spin-lock. The test first creates two threads: one to start timers and the other to free timers. The start-timers thread starts the timers and then wakes up the free-timers thread to free them once the starts complete. After freeing, the free-timers thread wakes up the start-timers thread to complete the current iteration. A loop of 10 iterations is used. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250117101816.2101857-6-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 daysbpf: Cancel the running bpf_timer through kworker for PREEMPT_RTHou Tao1-2/+16
During the update procedure, when overwrite element in a pre-allocated htab, the freeing of old_element is protected by the bucket lock. The reason why the bucket lock is necessary is that the old_element has already been stashed in htab->extra_elems after alloc_htab_elem() returns. If freeing the old_element after the bucket lock is unlocked, the stashed element may be reused by concurrent update procedure and the freeing of old_element will run concurrently with the reuse of the old_element. However, the invocation of check_and_free_fields() may acquire a spin-lock which violates the lockdep rule because its caller has already held a raw-spin-lock (bucket lock). The following warning will be reported when such race happens: BUG: scheduling while atomic: test_progs/676/0x00000003 3 locks held by test_progs/676: #0: ffffffff864b0240 (rcu_read_lock_trace){....}-{0:0}, at: bpf_prog_test_run_syscall+0x2c0/0x830 #1: ffff88810e961188 (&htab->lockdep_key){....}-{2:2}, at: htab_map_update_elem+0x306/0x1500 #2: ffff8881f4eac1b8 (&base->softirq_expiry_lock){....}-{2:2}, at: hrtimer_cancel_wait_running+0xe9/0x1b0 Modules linked in: bpf_testmod(O) Preemption disabled at: [<ffffffff817837a3>] htab_map_update_elem+0x293/0x1500 CPU: 0 UID: 0 PID: 676 Comm: test_progs Tainted: G ... 6.12.0+ #11 Tainted: [W]=WARN, [O]=OOT_MODULE Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)... Call Trace: <TASK> dump_stack_lvl+0x57/0x70 dump_stack+0x10/0x20 __schedule_bug+0x120/0x170 __schedule+0x300c/0x4800 schedule_rtlock+0x37/0x60 rtlock_slowlock_locked+0x6d9/0x54c0 rt_spin_lock+0x168/0x230 hrtimer_cancel_wait_running+0xe9/0x1b0 hrtimer_cancel+0x24/0x30 bpf_timer_delete_work+0x1d/0x40 bpf_timer_cancel_and_free+0x5e/0x80 bpf_obj_free_fields+0x262/0x4a0 check_and_free_fields+0x1d0/0x280 htab_map_update_elem+0x7fc/0x1500 bpf_prog_9f90bc20768e0cb9_overwrite_cb+0x3f/0x43 bpf_prog_ea601c4649694dbd_overwrite_timer+0x5d/0x7e bpf_prog_test_run_syscall+0x322/0x830 __sys_bpf+0x135d/0x3ca0 __x64_sys_bpf+0x75/0xb0 x64_sys_call+0x1b5/0xa10 do_syscall_64+0x3b/0xc0 entry_SYSCALL_64_after_hwframe+0x4b/0x53 ... </TASK> It seems feasible to break the reuse and refill of per-cpu extra_elems into two independent parts: reuse the per-cpu extra_elems with bucket lock being held and refill the old_element as per-cpu extra_elems after the bucket lock is unlocked. However, it will make the concurrent overwrite procedures on the same CPU return unexpected -E2BIG error when the map is full. Therefore, the patch fixes the lock problem by breaking the cancelling of bpf_timer into two steps for PREEMPT_RT: 1) use hrtimer_try_to_cancel() and check its return value 2) if the timer is running, use hrtimer_cancel() through a kworker to cancel it again Considering that the current implementation of hrtimer_cancel() will try to acquire a being held softirq_expiry_lock when the current timer is running, these steps above are reasonable. However, it also has downside. When the timer is running, the cancelling of the timer is delayed when releasing the last map uref. The delay is also fixable (e.g., break the cancelling of bpf timer into two parts: one part in locked scope, another one in unlocked scope), it can be revised later if necessary. It is a bit hard to decide the right fix tag. One reason is that the problem depends on PREEMPT_RT which is enabled in v6.12. 
Considering that softirq_expiry_lock has existed since v5.4 and bpf_timer was introduced in v5.15, the bpf_timer commit is used in the Fixes tag and an extra Depends-on tag is added to state the dependency on PREEMPT_RT. Fixes: b00628b1c7d5 ("bpf: Introduce bpf timers.") Depends-on: v6.12+ with PREEMPT_RT enabled Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Closes: https://lore.kernel.org/bpf/20241106084527.4gPrMnHt@linutronix.de Signed-off-by: Hou Tao <houtao1@huawei.com> Reviewed-by: Toke Høiland-Jørgensen <toke@kernel.org> Link: https://lore.kernel.org/r/20250117101816.2101857-5-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
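A condensed sketch of the two-step cancel described above; the timer and work field names are illustrative, and the real code lives in bpf_timer_cancel_and_free():

        if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
                /* hrtimer_cancel() may spin on softirq_expiry_lock, which must not
                 * be taken under the raw bucket lock, so try the non-blocking
                 * cancel first and defer the blocking cancel to a kworker */
                if (hrtimer_try_to_cancel(&t->timer) < 0)
                        schedule_work(&t->delete_work); /* worker calls hrtimer_cancel() */
        } else {
                hrtimer_cancel(&t->timer);
        }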
3 daysbpf: Free element after unlock in __htab_map_lookup_and_delete_elem()Hou Tao1-4/+6
The freeing of special fields in a map value may acquire a spin-lock (e.g., the freeing of bpf_timer); however, the lookup_and_delete_elem procedure already holds a raw spin-lock, which violates the lockdep rules. The running context of __htab_map_lookup_and_delete_elem() has already disabled migration, therefore it is OK to invoke free_htab_elem() after unlocking the bucket lock. Fix the potential problem by freeing the element after unlocking the bucket lock in __htab_map_lookup_and_delete_elem(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250117101816.2101857-4-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
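The reordering in a nutshell, a sketch using the hashtab.c helper names (surrounding details omitted): unlink under the bucket lock, free only after it is released, which is safe here because migration is already disabled:

        hlist_nulls_del_rcu(&l->hash_node);
        htab_unlock_bucket(htab, b, hash, flags);
        /* may reach bpf_timer and other special fields that take their own
         * locks, so it must run outside the raw bucket lock */
        free_htab_elem(htab, l);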
3 daysbpf: Bail out early in __htab_map_lookup_and_delete_elem()Hou Tao1-25/+26
Use a goto statement to bail out early when the target element is not found, instead of using a large else branch to handle the more likely case. This change doesn't affect functionality and simply makes the code cleaner. Signed-off-by: Hou Tao <houtao1@huawei.com> Reviewed-by: Toke Høiland-Jørgensen <toke@kernel.org> Link: https://lore.kernel.org/r/20250117101816.2101857-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
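The pattern in general form, a sketch rather than the literal diff:

        l = lookup_elem_raw(head, hash, key, key_size);
        if (!l) {
                ret = -ENOENT;
                goto out_unlock;
        }

        /* likely path continues at the top indentation level:
         * copy the value out and delete the element */

out_unlock:
        htab_unlock_bucket(htab, b, hash, bflags);
        return ret;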