path: root/drivers/hwmon/ltc4151.c
Commit log, most recent first. Each entry: date, subject, (author, files changed, -deleted/+added lines).
2023-08-13  platform/x86: lenovo-ymc: Only bind on machines with a convertible DMI chassis-type  (Hans de Goede, 1 file, -0/+25)
The lenovo-ymc driver is causing the keyboard + touchpad to stop working on some regular laptop models such as the Lenovo ThinkBook 13s G2 ITL 20V9. The problem is that there are YMC WMI GUID methods in the ACPI tables of these laptops despite them not being Yogas, and lenovo-ymc loading causes libinput to see a SW_TABLET_MODE switch with state 1. This in turn causes libinput to ignore events from the builtin keyboard and touchpad, since it filters those out for a Yoga in tablet mode. Similar issues with false-positive SW_TABLET_MODE=1 reporting have been seen with the intel-hid driver. Copy the intel-hid driver approach to fix this and only bind to the WMI device on machines where the DMI chassis-type indicates the machine is a convertible. Add a 'force' module parameter to allow overriding the chassis-type check so that users can easily test if the YMC interface works on models which report an unexpected chassis-type. Fixes: e82882cdd241 ("platform/x86: Add driver for Yoga Tablet Mode switch") Link: https://bugzilla.redhat.com/show_bug.cgi?id=2229373 Cc: André Apitzsch <git@apitzsch.eu> Cc: stable@vger.kernel.org Tested-by: Andrew Kallmeyer <kallmeyeras@gmail.com> Tested-by: Gergő Köteles <soyer@irl.hu> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20230812144818.383230-1-hdegoede@redhat.com
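The chassis-type gate described above amounts to roughly the following. This is a minimal sketch: the allowlist, parameter and function names are illustrative rather than the driver's actual ones (SMBIOS chassis type 31 is "Convertible", 32 is "Detachable").

  #include <linux/dmi.h>
  #include <linux/errno.h>
  #include <linux/module.h>

  /* Illustrative allowlist: only convertible/detachable chassis types bind. */
  static const struct dmi_system_id ymc_chassis_allowlist[] = {
          { .matches = { DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "31") } },
          { .matches = { DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "32") } },
          { }
  };

  static bool force;
  module_param(force, bool, 0444);
  MODULE_PARM_DESC(force, "Force binding even on non-convertible chassis types");

  static int ymc_chassis_check_example(void)
  {
          /* Refuse to bind unless DMI says convertible/detachable, mirroring
           * the intel-hid approach mentioned in the changelog. */
          if (!dmi_check_system(ymc_chassis_allowlist) && !force)
                  return -ENODEV;
          return 0;       /* continue with normal WMI/input setup */
  }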
2023-08-13  platform: mellanox: Change register offset addresses  (Vadim Pasternak, 1 file, -4/+4)
Move debug register offsets to different location due to hardware changes. Fixes: dd635e33b5c9 ("platform: mellanox: Introduce support of new Nvidia L1 switch") Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Michael Shych <michaelsh@nvidia.com> Link: https://lore.kernel.org/r/20230813083735.39090-5-vadimp@nvidia.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-08-13  platform: mellanox: mlx-platform: Modify graceful shutdown callback and power down mask  (Vadim Pasternak, 1 file, -2/+2)
Use kernel_power_off() instead of kernel_halt() to pass through machine_power_off() -> pm_power_off(), otherwise auxiliary power does not go off. Change "power down" bitmask. Fixes: dd635e33b5c9 ("platform: mellanox: Introduce support of new Nvidia L1 switch") Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Michael Shych <michaelsh@nvidia.com> Link: https://lore.kernel.org/r/20230813083735.39090-4-vadimp@nvidia.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-08-13  platform: mellanox: mlx-platform: Fix signals polarity and latch mask  (Vadim Pasternak, 1 file, -4/+4)
Change polarity of chassis health and power signals and fix latch reset mask for L1 switch. Fixes: dd635e33b5c9 ("platform: mellanox: Introduce support of new Nvidia L1 switch") Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Michael Shych <michaelsh@nvidia.com> Link: https://lore.kernel.org/r/20230813083735.39090-3-vadimp@nvidia.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-08-13  platform: mellanox: Fix order in exit flow  (Vadim Pasternak, 1 file, -2/+1)
Fix exit flow order: call mlxplat_post_exit() after mlxplat_i2c_main_exit() in order to unregister the main i2c driver before the "mlxplat" driver. Fixes: 0170f616f496 ("platform: mellanox: Split initialization procedure") Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Michael Shych <michaelsh@nvidia.com> Link: https://lore.kernel.org/r/20230813083735.39090-2-vadimp@nvidia.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-08-12  locking: remove spin_lock_prefetch  (Mateusz Guzik, 9 files, -47/+1)
The only remaining consumer is new_inode, where it showed up in 2001 as commit c37fa164f793 ("v2.4.9.9 -> v2.4.9.10") in a historical repo [1] with a changelog which does not mention it. Since then the line got only touched up to keep compiling. While it may have been of benefit back in the day, it is guaranteed to at best not get in the way in the multicore setting -- as the code performs *a lot* of work between the prefetch and actual lock acquire, any contention means the cacheline is already invalid by the time the routine calls spin_lock(). In short, it adds spurious traffic. On top of that, prefetch is notoriously tricky to use for single-threaded purposes, making it questionable from the get go. As such, remove it. I admit upfront I did not see value in benchmarking this change, but I can do it if that is deemed appropriate. Removal from new_inode and of the entire thing are in the same patch as requested by Linus, so whatever weird looks can be directed at that guy. Link: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/fs/inode.c?id=c37fa164f793735b32aa3f53154ff1a7659e6442 [1] Signed-off-by: Mateusz Guzik <mjguzik@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
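For context, the pattern being removed looked roughly like the sketch below (a historical approximation, not the exact tree state); the prefetch buys nothing because the allocation work between it and the eventual spin_lock() gives any contending CPU time to invalidate the cacheline again.

  #include <linux/fs.h>
  #include <linux/prefetch.h>

  /* Approximate old shape of fs/inode.c:new_inode() with the prefetch. */
  static struct inode *new_inode_old_shape(struct super_block *sb)
  {
          struct inode *inode;

          /* Removed: by the time inode_sb_list_add() actually takes the
           * lock, plenty of work has happened and the line may be stale. */
          spin_lock_prefetch(&sb->s_inode_list_lock);

          inode = new_inode_pseudo(sb);           /* does the real allocation work */
          if (inode)
                  inode_sb_list_add(inode);       /* takes sb->s_inode_list_lock */
          return inode;
  }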
2023-08-12  tpm_tis: Opt-in interrupts  (Jarkko Sakkinen, 1 file, -1/+1)
Cc: stable@vger.kernel.org # v6.4+ Link: https://lore.kernel.org/linux-integrity/CAHk-=whRVp4h8uWOX1YO+Y99+44u4s=XxMK4v00B6F1mOfqPLg@mail.gmail.com/ Fixes: e644b2f498d2 ("tpm, tpm_tis: Enable interrupt test") Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-08-12  tpm: tpm_tis: Fix UPX-i11 DMI_MATCH condition  (Peter Ujfalusi, 1 file, -1/+1)
The patch which made it to the kernel somehow changed the match condition from DMI_MATCH(DMI_PRODUCT_NAME, "UPX-TGL01") to DMI_MATCH(DMI_PRODUCT_VERSION, "UPX-TGL"). Revert back to the correct match condition to disable the interrupt mode on the board. Cc: stable@vger.kernel.org # v6.4+ Fixes: edb13d7bb034 ("tpm: tpm_tis: Disable interrupts *only* for AEON UPX-i11") Link: https://lore.kernel.org/lkml/20230524085844.11580-1-peter.ujfalusi@linux.intel.com/ Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
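In context, the corrected quirk entry looks something like the sketch below; only the DMI_MATCH() line is taken from the changelog, while the surrounding .callback/.ident fields and the table name are illustrative.

  #include <linux/dmi.h>

  /* Illustrative callback: the real driver flags the board as "no IRQs". */
  static int example_disable_tpm_irq(const struct dmi_system_id *d)
  {
          return 1;
  }

  static const struct dmi_system_id tpm_tis_dmi_quirks_example[] = {
          {
                  .callback = example_disable_tpm_irq,
                  .ident = "UPX-TGL",     /* AEON UPX-i11 */
                  .matches = {
                          /* was: DMI_MATCH(DMI_PRODUCT_VERSION, "UPX-TGL") */
                          DMI_MATCH(DMI_PRODUCT_NAME, "UPX-TGL01"),
                  },
          },
          { }
  };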
2023-08-11  x86/cpu/amd: Enable Zenbleed fix for AMD Custom APU 0405  (Cristian Ciocaltea, 1 file, -0/+1)
Commit 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix") provided a fix for the Zen2 VZEROUPPER data corruption bug affecting a range of CPU models, but the AMD Custom APU 0405 found on SteamDeck was not listed, although it is clearly affected by the vulnerability. Add this CPU variant to the Zenbleed erratum list, in order to unconditionally enable the fallback fix until a proper microcode update is available. Fixes: 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix") Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@collabora.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230811203705.1699914-1-cristian.ciocaltea@collabora.com
2023-08-11  gpio: ws16c48: Fix off-by-one error in WS16C48 resource region extent  (William Breathitt Gray, 1 file, -1/+1)
The WinSystems WS16C48 I/O address region spans offsets 0x0 through 0xA, which is a total of 11 bytes. Fix the WS16C48_EXTENT define to the correct value of 11 so that access to necessary device registers is properly requested in the ws16c48_probe() callback by the devm_request_region() function call. Fixes: 2c05a0f29f41 ("gpio: ws16c48: Implement and utilize register structures") Cc: stable@vger.kernel.org Cc: Paul Demetrotion <pdemetrotion@winsystems.com> Signed-off-by: William Breathitt Gray <william.gray@linaro.org> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
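The arithmetic and the request call in question, as a small hedged sketch (the define name matches the changelog; the probe helper here is illustrative):

  #include <linux/device.h>
  #include <linux/errno.h>
  #include <linux/ioport.h>

  /* Offsets 0x0 through 0xA inclusive span 0xA - 0x0 + 1 = 11 bytes. */
  #define WS16C48_EXTENT 11

  static int ws16c48_request_region_example(struct device *dev,
                                            unsigned long base)
  {
          /* With the previous, one-too-small extent, the register at
           * base + 0xA fell outside the reserved I/O region. */
          if (!devm_request_region(dev, base, WS16C48_EXTENT, "ws16c48"))
                  return -EBUSY;
          return 0;
  }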
2023-08-11  driver core: cpu: Fix the fallback cpu_show_gds() name  (Borislav Petkov (AMD), 1 file, -2/+2)
In 6524c798b727 ("driver core: cpu: Make cpu_show_not_affected() static") I fat-fingered the name of cpu_show_gds(). Usually, I'd rebase but since those are extraordinary embargoed times, the commit above was already pulled into another tree so no no. Therefore, fix it ontop. Fixes: 6524c798b727 ("driver core: cpu: Make cpu_show_not_affected() static") Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230811095831.27513-1-bp@alien8.de
2023-08-11  nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll  (Ming Lei, 1 file, -2/+0)
Now nvme_ns_chr_uring_cmd_iopoll() has switched to request based io polling, and the associated NS is guaranteed to be live in case of io polling, so request is guaranteed to be valid because blk-mq uses pre-allocated request pool. Remove the rcu read lock in nvme_ns_chr_uring_cmd_iopoll(), which isn't needed any more after switching to request based io polling. Fix "BUG: sleeping function called from invalid context" because set_page_dirty_lock() from blk_rq_unmap_user() may sleep. Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands") Reported-by: Guangwu Zhang <guazhang@redhat.com> Cc: Kanchan Joshi <joshi.k@samsung.com> Cc: Anuj Gupta <anuj20.g@samsung.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Tested-by: Guangwu Zhang <guazhang@redhat.com> Link: https://lore.kernel.org/r/20230809020440.174682-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-10  parisc: perf: Make cpu_device variable static  (Helge Deller, 1 file, -1/+1)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: ftrace: Add declaration for ftrace_function_trampoline()  (Helge Deller, 2 files, -1/+5)
Make sparse happy by adding declaration for ftrace_function_trampoline(). Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: boot: Nuke some sparse warnings in decompressor  (Helge Deller, 1 file, -5/+5)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: processor: Include asm/smp.h for init_per_cpu()  (Helge Deller, 1 file, -0/+1)
Fix sparse warning that init_per_cpu() isn't declared. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: unaligned: Include linux/sysctl.h for unaligned_enabled  (Helge Deller, 1 file, -0/+1)
Fix sparse warning that unaligned_enabled wasn't declared. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: Move proc_mckinley_root and proc_runway_root to sba_iommu  (Helge Deller, 3 files, -49/+7)
Clean up the procfs root entries for gsc, runway, and mckinley busses. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  RDMA/bnxt_re: Initialize dpi_tbl_lock mutex  (Kashyap Desai, 1 file, -0/+1)
Fix the missing dpi_tbl_lock mutex initialization. Fixes: 0ac20faf5d83 ("RDMA/bnxt_re: Reorg the bar mapping") Link: https://lore.kernel.org/r/1691642677-21369-4-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-08-10  RDMA/bnxt_re: Fix error handling in probe failure path  (Kalesh AP, 1 file, -0/+2)
During bnxt_re_dev_init(), when bnxt_re_setup_chip_ctx() fails, unregister with L2 first before bailing out of probe. Fixes: ae8637e13185 ("RDMA/bnxt_re: Add chip context to identify 57500 series") Link: https://lore.kernel.org/r/1691642677-21369-3-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-08-10  RDMA/bnxt_re: Properly order ib_device_unalloc() to avoid UAF  (Selvin Xavier, 1 file, -1/+1)
ib_dealloc_device() should be called only after device cleanup. Fix the dealloc sequence. Fixes: 6d758147c7b8 ("RDMA/bnxt_re: Use auxiliary driver interface") Link: https://lore.kernel.org/r/1691642677-21369-2-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-08-10  net: hns3: fix strscpy causing content truncation issue  (Hao Chen, 2 files, -4/+4)
hns3_dbg_fill_content()/hclge_dbg_fill_content() aims to integrate some items into a string for content, and we add '\n' and '\0' in the last two bytes of content. strscpy() will add '\0' in the last byte of the destination buffer (one of the items), which results in the content print finishing ahead of schedule and some dump content being truncated. One error log shows as below:
  cat mac_list/uc
  UC MAC_LIST:
Expected:
  UC MAC_LIST:
  FUNC_ID  MAC_ADDR           STATE
  pf       00:2b:19:05:03:00  ACTIVE
The destination buffer is length-bounded and not required to be NUL-terminated, so just change strscpy() to memcpy() to fix it. Fixes: 1cf3d5567f27 ("net: hns3: fix strncpy() not using dest-buf length as length issue") Signed-off-by: Hao Chen <chenhao418@huawei.com> Signed-off-by: Jijie Shao <shaojijie@huawei.com> Link: https://lore.kernel.org/r/20230809020902.1941471-1-shaojijie@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
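A toy illustration of why the switch matters, with made-up column widths and helper names: strscpy() always NUL-terminates its bounded destination, so a terminator lands in the middle of the assembled line and the print stops there, whereas memcpy() copies exactly the bytes of the item.

  #include <linux/string.h>

  #define ITEM_WIDTH 10   /* illustrative fixed column width */

  /* Hypothetical helper in the spirit of hns3_dbg_fill_content(): copy one
   * fixed-width column into the output line without NUL-terminating it;
   * the caller appends the line's own '\n' and '\0' at the very end. */
  static void fill_item(char *pos, const char *item)
  {
          size_t len = strnlen(item, ITEM_WIDTH);

          /* strscpy(pos, item, ITEM_WIDTH) would write '\0' here and cut the
           * rest of the already-assembled content off at this column. */
          memcpy(pos, item, len);
          memset(pos + len, ' ', ITEM_WIDTH - len);       /* pad, don't terminate */
  }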
2023-08-10  net: tls: set MSG_SPLICE_PAGES consistently  (Jakub Kicinski, 1 file, -3/+0)
We used to change the flags for the last segment, because non-last segments had the MSG_SENDPAGE_NOTLAST flag set. That flag is no longer a thing so remove the setting. Since flags most likely don't have MSG_SPLICE_PAGES set this avoids passing parts of the sg as splice and parts as non-splice. Before commit under Fixes we'd have called tcp_sendpage() which would add the MSG_SPLICE_PAGES. Why this leads to trouble remains unclear but Tariq reports hitting the WARN_ON(!sendpage_ok()) due to page refcount of 0. Fixes: e117dcfd646e ("tls: Inline do_tcp_sendpages()") Reported-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/all/4c49176f-147a-4283-f1b1-32aac7b4b996@gmail.com/ Tested-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20230808180917.1243540-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-10  ibmvnic: Ensure login failure recovery is safe from other resets  (Nick Child, 1 file, -21/+47)
If a login request fails, the recovery process should be protected against parallel resets. It is a known issue that freeing and registering CRQ's in quick succession can result in a failover CRQ from the VIOS. Processing a failover during login recovery is dangerous for two reasons:
1. This will result in two parallel initialization processes; this can cause serious issues during login.
2. It is possible that the failover CRQ is received but never executed. We get notified of a pending failover through a transport event CRQ. The reset is not performed until an INIT CRQ request is received.
Previously, if CRQ init fails during login recovery, then the ibmvnic irq is freed and the login process returned error. If failover_pending is true (a transport event was received), then the ibmvnic device would never be able to process the reset since it cannot receive the CRQ_INIT request due to the irq being freed. This left the device in an inoperable state. Therefore, the login failure recovery process must be hardened against these possible issues. Possible failovers (due to quick CRQ free and init) must be avoided and any issues during re-initialization should be dealt with instead of being propagated up the stack. This logic is similar to that of ibmvnic_probe(). Fixes: dff515a3e71d ("ibmvnic: Harden device login requests") Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230809221038.51296-5-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-10  ibmvnic: Do partial reset on login failure  (Nick Child, 1 file, -6/+40)
Perform a partial reset before sending a login request if any of the following are true:
1. If a previous request times out. This can be dangerous because the VIOS could still receive the old login request at any point after the timeout. Therefore, it is best to re-register the CRQ's and sub-CRQ's before retrying.
2. If the previous request returns an error that is not described in PAPR. PAPR provides procedures if the login returns with partial success or aborted return codes (section L.5.1) but other values do not have a defined procedure.
Previously, these conditions just returned error from the login function rather than trying to resolve the issue. This can cause further issues since most callers of the login function are not prepared to handle an error when logging in. This improper cleanup can lead to the device being permanently DOWN'd. For example, if the VIOS believes that the device is already logged in then it will return INVALID_STATE (-7). If we never re-register CRQ's then it will always think that the device is already logged in. This leaves the device inoperable. The partial reset involves freeing the sub-CRQs, freeing the CRQ then registering and initializing a new CRQ and sub-CRQs. This essentially restarts all communication with VIOS to allow for a fresh login attempt that will be unhindered by any previous failed attempts. Fixes: dff515a3e71d ("ibmvnic: Harden device login requests") Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230809221038.51296-4-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-10  ibmvnic: Handle DMA unmapping of login buffs in release functions  (Nick Child, 1 file, -5/+10)
Rather than leaving the DMA unmapping of the login buffers to the login response handler, move this work into the login release functions. Previously, these functions were only used for freeing the allocated buffers. This could lead to issues if there are more than one outstanding login buffer requests, which is possible if a login request times out. If a login request times out, then there is another call to send login. The send login function makes a call to the login buffer release function. In the past, this freed the buffers but did not DMA unmap. Therefore, the VIOS could still write to the old login (now freed) buffer. It is for this reason that it is a good idea to leave the DMA unmap call to the login buffers release function. Since the login buffer release functions now handle DMA unmapping, remove the duplicate DMA unmapping in handle_login_rsp(). Fixes: dff515a3e71d ("ibmvnic: Harden device login requests") Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230809221038.51296-3-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-10  ibmvnic: Unmap DMA login rsp buffer on send login fail  (Nick Child, 1 file, -1/+4)
If the LOGIN CRQ fails to send then we must DMA unmap the response buffer. Previously, if the CRQ failed then the memory was freed without DMA unmapping. Fixes: c98d9cc4170d ("ibmvnic: send_login should check for crq errors") Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230809221038.51296-2-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-10  ibmvnic: Enforce stronger sanity checks on login response  (Nick Child, 1 file, -0/+18)
Ensure that all offsets in a login response buffer are within the size of the allocated response buffer. Any offsets or lengths that surpass the allocation are likely the result of an incomplete response buffer. In these cases, a full reset is necessary. When attempting to login, the ibmvnic device will allocate a response buffer and pass a reference to the VIOS. The VIOS will then send the ibmvnic device a LOGIN_RSP CRQ to signal that the buffer has been filled with data. If the ibmvnic device does not get a response in 20 seconds, the old buffer is freed and a new login request is sent. With 2 outstanding requests, any LOGIN_RSP CRQ's could be for the older login request. If this is the case then the login response buffer (which is for the newer login request) could be incomplete and contain invalid data. Therefore, we must enforce strict sanity checks on the response buffer values. Testing has shown that the `off_rxadd_buff_size` value is filled in last by the VIOS and will be the smoking gun for these circumstances. Until VIOS can implement a mechanism for tracking outstanding response buffers and a method for mapping a LOGIN_RSP CRQ to a particular login response buffer, the best ibmvnic can do in this situation is perform a full reset. Fixes: dff515a3e71d ("ibmvnic: Harden device login requests") Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230809221038.51296-1-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
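A hedged sketch of the kind of bounds check described: every offset/length pair in the response has to land inside the buffer the driver allocated, otherwise the reply is treated as garbage and a full reset is scheduled. The struct layout and field names below (other than off_rxadd_buff_size, which the changelog names) are illustrative, not the real ibmvnic definitions.

  #include <linux/types.h>
  #include <asm/byteorder.h>

  struct login_rsp_example {              /* illustrative layout */
          __be32 off_txsubm_subcrqs;      /* hypothetical field */
          __be32 num_txsubm_subcrqs;      /* hypothetical field */
          __be32 off_rxadd_buff_size;     /* named in the changelog */
  };

  static bool login_rsp_offsets_sane(const struct login_rsp_example *rsp,
                                     size_t rsp_buf_sz)
  {
          size_t off = be32_to_cpu(rsp->off_txsubm_subcrqs);
          size_t num = be32_to_cpu(rsp->num_txsubm_subcrqs);

          /* An offset or count pointing past the allocation means the VIOS
           * reply is incomplete (likely a stale answer to an older login);
           * the caller should do a full reset instead of trusting it. */
          if (off > rsp_buf_sz || num > (rsp_buf_sz - off) / sizeof(__be64))
                  return false;

          return be32_to_cpu(rsp->off_rxadd_buff_size) < rsp_buf_sz;
  }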
2023-08-10  net: mana: Fix MANA VF unload when hardware is unresponsive  (Souradeep Chakrabarti, 1 file, -4/+33)
When unloading the MANA driver, mana_dealloc_queues() waits for the MANA hardware to complete any inflight packets and set the pending send count to zero. But if the hardware has failed, mana_dealloc_queues() could wait forever. Fix this by adding a timeout to the wait. Set the timeout to 120 seconds, which is a somewhat arbitrary value that is more than long enough for functional hardware to complete any sends. Cc: stable@vger.kernel.org Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)") Signed-off-by: Souradeep Chakrabarti <schakrabarti@linux.microsoft.com> Link: https://lore.kernel.org/r/1691576525-24271-1-git-send-email-schakrabarti@linux.microsoft.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
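A sketch of such a bounded wait (the 120-second figure is from the changelog; the counter and helper names are hypothetical): working hardware drains the pending sends long before the deadline, while dead hardware no longer hangs the unload path forever.

  #include <linux/atomic.h>
  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/jiffies.h>

  /* Hypothetical helper: wait for the pending-send counter to drain,
   * but give up after 120 seconds if the hardware never responds. */
  static int wait_for_pending_sends_example(atomic_t *pending_sends)
  {
          unsigned long deadline = jiffies + 120 * HZ;

          while (atomic_read(pending_sends) > 0) {
                  if (time_after(jiffies, deadline))
                          return -ETIMEDOUT;      /* hardware is unresponsive */
                  usleep_range(1000, 2000);
          }
          return 0;
  }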
2023-08-10  parisc: dma: Add prototype for pcxl_dma_start  (Helge Deller, 3 files, -5/+3)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: parisc_ksyms: Include libgcc.h for libgcc prototypes  (Helge Deller, 1 file, -6/+1)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: ucmpdi2: Fix no previous prototype for '__ucmpdi2' warning  (Helge Deller, 1 file, -1/+2)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  x86: Move gds_ucode_mitigated() declaration to header  (Arnd Bergmann, 2 files, -2/+2)
The declaration got placed in the .c file of the caller, but that causes a warning for the definition: arch/x86/kernel/cpu/bugs.c:682:6: error: no previous prototype for 'gds_ucode_mitigated' [-Werror=missing-prototypes] Move it to a header where both sides can observe it instead. Fixes: 81ac7e5d74174 ("KVM: Add GDS_NO support to KVM") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Tested-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Cc: stable@kernel.org Link: https://lore.kernel.org/all/20230809130530.1913368-2-arnd%40kernel.org
2023-08-10  x86/speculation: Add cpu_show_gds() prototype  (Arnd Bergmann, 1 file, -0/+2)
The newly added function has two definitions but no prototypes: drivers/base/cpu.c:605:16: error: no previous prototype for 'cpu_show_gds' [-Werror=missing-prototypes] Add a declaration next to the other ones for this file to avoid the warning. Fixes: 8974eb588283b ("x86/speculation: Add Gather Data Sampling mitigation") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Tested-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Cc: stable@kernel.org Link: https://lore.kernel.org/all/20230809130530.1913368-1-arnd%40kernel.org
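Both of these prototype fixes have the same shape: put an extern declaration in a header that both the definition and its callers include, so -Wmissing-prototypes sees a previous prototype. Roughly (the header locations are assumed, not taken from the patches):

  /* Sketch of header excerpts; real locations may differ. */
  #include <linux/device.h>
  #include <linux/types.h>

  /* e.g. in an x86 header shared by bugs.c and KVM: */
  extern bool gds_ucode_mitigated(void);

  /* e.g. in include/linux/cpu.h, next to the other cpu_show_*() prototypes: */
  extern ssize_t cpu_show_gds(struct device *dev,
                              struct device_attribute *attr, char *buf);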
2023-08-10  riscv: Implement flush_cache_vmap()  (Alexandre Ghiti, 1 file, -0/+4)
The RISC-V kernel needs a sfence.vma after a page table modification: we used to rely on the vmalloc fault handling to emit an sfence.vma, but commit 7d3332be011e ("riscv: mm: Pre-allocate PGD entries for vmalloc/modules area") got rid of this path for 64-bit kernels, so now we need to explicitly emit a sfence.vma in flush_cache_vmap(). Note that we don't need to implement flush_cache_vunmap() as the generic code should emit a flush tlb after unmapping a vmalloc region. Fixes: 7d3332be011e ("riscv: mm: Pre-allocate PGD entries for vmalloc/modules area") Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230725132246.817726-1-alexghiti@rivosinc.com Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
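Given that reasoning, the implementation presumably amounts to little more than issuing the fence over the mapped range, along these lines (sketch of a header definition, not the verbatim patch):

  /* arch/riscv/include/asm/cacheflush.h style sketch: a vmap page-table
   * update needs an sfence.vma, which flush_tlb_kernel_range() emits. */
  #define flush_cache_vmap(start, end)    flush_tlb_kernel_range(start, end)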
2023-08-10  riscv: Do not allow vmap pud mappings for 3-level page table  (Alexandre Ghiti, 1 file, -1/+3)
The vmalloc_fault() path was removed and to avoid syncing the vmalloc PGD mappings, they are now preallocated. But if the kernel can use a PUD mapping (which in sv39 is actually a PGD mapping) for large vmalloc allocation, it will free the current unused preallocated PGD mapping and install a new leaf one. Since there is no sync anymore, some page tables lack this new mapping and that triggers a panic. So only allow PUD mappings for sv48 and sv57. Fixes: 7d3332be011e ("riscv: mm: Pre-allocate PGD entries for vmalloc/modules area") Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230808130709.1502614-1-alexghiti@rivosinc.com Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-08-10  parisc: firmware: Mark pdc_result buffers local  (Helge Deller, 1 file, -2/+2)
This fixes a sparse warning which suggest to make those static. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: firmware: Fix sparse context imbalance warnings  (Helge Deller, 1 file, -2/+2)
Tell sparse about correct context for pdc_cpu_rendezvous_*lock() functions. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: signal: Fix sparse incorrect type in assignment warning  (Helge Deller, 1 file, -1/+1)
Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: ioremap: Fix sparse warnings  (Helge Deller, 1 file, -5/+4)
Fix sparse warning: incorrect type in assignment (different base types) expected unsigned long [usertype] addr got void *addr Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: fault: Use C99 array initializers  (Helge Deller, 1 file, -25/+25)
Sparse wants C99 array initializers. Signed-off-by: Helge Deller <deller@gmx.de>
2023-08-10  parisc: pdt: Use PTR_ERR_OR_ZERO() to simplify code  (Yang Yingliang, 1 file, -3/+1)
Return PTR_ERR_OR_ZERO() instead of return 0 or PTR_ERR() to simplify code. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Helge Deller <deller@gmx.de>
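The transformation is the usual one enabled by PTR_ERR_OR_ZERO() from include/linux/err.h; schematically (the pointer here stands in for, say, a kthread_create() result, and the function names are illustrative):

  #include <linux/err.h>

  /* before: open-coded error extraction */
  static int check_before(void *ptr)
  {
          if (IS_ERR(ptr))
                  return PTR_ERR(ptr);
          return 0;
  }

  /* after: PTR_ERR_OR_ZERO() collapses the branch into one line */
  static int check_after(void *ptr)
  {
          return PTR_ERR_OR_ZERO(ptr);
  }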
2023-08-10  parisc: Fix lightweight spinlock checks to not break futexes  (Helge Deller, 4 files, -6/+27)
The lightweight spinlock checks verify that a spinlock either has the value 0 (spinlock locked) or that no bits other than those in __ARCH_SPIN_LOCK_UNLOCKED_VAL are set. This breaks the current LWS code, which writes the address of the lock into the lock word to unlock it, which was an optimization to save one assembler instruction. Fix it by making spinlock_types.h accessible for asm code, change the LWS spinlock-unlocking code to write __ARCH_SPIN_LOCK_UNLOCKED_VAL into the lock word, and add some missing lightweight spinlock checks to the LWS path. Finally, make the spinlock checks dependent on DEBUG_KERNEL. Noticed-by: John David Anglin <dave.anglin@bell.net> Signed-off-by: Helge Deller <deller@gmx.de> Tested-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # v6.4+ Fixes: 15e64ef6520e ("parisc: Add lightweight spinlock checks")
2023-08-10  btrfs: set cache_block_group_error if we find an error  (Josef Bacik, 1 file, -1/+4)
We set cache_block_group_error if btrfs_cache_block_group() returns an error; this is because we could end up not finding space to allocate and mistakenly return -ENOSPC, which could then abort the transaction with the incorrect errno, and in the case of ENOSPC result in a WARN_ON() that will trip up tests like generic/475. However there's the case where multiple threads can be racing: one thread gets the proper error, and the other thread doesn't actually call btrfs_cache_block_group(), it instead sees ->cached == BTRFS_CACHE_ERROR. Again the result is the same, we fail to allocate our space and return -ENOSPC. Instead we need to set cache_block_group_error to -EIO in this case to make sure that if we do not make our allocation we get the appropriate error returned back to the caller. CC: stable@vger.kernel.org # 4.14+ Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
2023-08-10  btrfs: reject invalid reloc tree root keys with stack dump  (Qu Wenruo, 2 files, -1/+16)
[BUG]
Syzbot reported a crash where an ASSERT() got triggered inside prepare_to_merge(). That ASSERT() makes sure the reloc tree is properly pointed back by its subvolume tree.
[CAUSE]
After more debugging output, it turns out we had an invalid reloc tree:
  BTRFS error (device loop1): reloc tree mismatch, root 8 has no reloc root, expect reloc root key (-8, 132, 8) gen 17
Note the above root key is (TREE_RELOC_OBJECTID, ROOT_ITEM, QUOTA_TREE_OBJECTID), meaning it's a reloc tree for the quota tree. But reloc trees can only exist for subvolumes; for non-subvolume trees we just COW the involved tree block, and there is no need to create a reloc tree since those tree blocks won't be shared with other trees. Only subvolume trees can share tree blocks with other trees (thus they have the BTRFS_ROOT_SHAREABLE flag). Thus this new debug output proves my previous assumption that corrupted on-disk data can trigger that ASSERT().
[FIX]
Besides the dedicated fix and the graceful exit, also let the tree-checker check such root keys, to make sure reloc trees can only exist for subvolumes.
CC: stable@vger.kernel.org # 5.15+ Reported-by: syzbot+ae97a827ae1c3336bbb4@syzkaller.appspotmail.com Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2023-08-10  btrfs: exit gracefully if reloc roots don't match  (Qu Wenruo, 1 file, -8/+37)
[BUG]
Syzbot reported a crash where an ASSERT() got triggered inside prepare_to_merge().
[CAUSE]
The root cause of the triggered ASSERT() is that we can have a race between quota tree creation and relocation. This leads us to create a duplicated quota tree in the btrfs_read_fs_root() path, and since it's treated as an fs tree, it would have the ROOT_SHAREABLE flag, causing us to create a reloc tree for it. The bug itself is fixed by a dedicated patch, but this already taught us that the ASSERT() is not something straightforward for developers.
[ENHANCEMENT]
Instead of using an ASSERT(), let's handle it gracefully and output extra info about the mismatched reloc roots to help debugging. Also, with the above ASSERT() removed, we can trigger ASSERT(0)s inside merge_reloc_roots() later; replace those ASSERT(0)s with WARN_ON()s as well.
CC: stable@vger.kernel.org # 5.15+ Reported-by: syzbot+ae97a827ae1c3336bbb4@syzkaller.appspotmail.com Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2023-08-10  btrfs: avoid race between qgroup tree creation and relocation  (Qu Wenruo, 1 file, -0/+10)
[BUG]
Syzbot reported a weird ASSERT() triggered inside prepare_to_merge():
  assertion failed: root->reloc_root == reloc_root, in fs/btrfs/relocation.c:1919
  ------------[ cut here ]------------
  kernel BUG at fs/btrfs/relocation.c:1919!
  invalid opcode: 0000 [#1] PREEMPT SMP KASAN
  CPU: 0 PID: 9904 Comm: syz-executor.3 Not tainted 6.4.0-syzkaller-08881-g533925cb7604 #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
  RIP: 0010:prepare_to_merge+0xbb2/0xc40 fs/btrfs/relocation.c:1919
  Code: fe e9 f5 (...)
  RSP: 0018:ffffc9000325f760 EFLAGS: 00010246
  RAX: 000000000000004f RBX: ffff888075644030 RCX: 1481ccc522da5800
  RDX: ffffc90005c09000 RSI: 00000000000364ca RDI: 00000000000364cb
  RBP: ffffc9000325f870 R08: ffffffff816f33ac R09: 1ffff9200064bea0
  R10: dffffc0000000000 R11: fffff5200064bea1 R12: ffff888075644000
  R13: ffff88803b166000 R14: ffff88803b166560 R15: ffff88803b166558
  FS:  00007f4e305fd700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 000056080679c000 CR3: 00000000193ad000 CR4: 00000000003506f0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  Call Trace:
   <TASK>
   relocate_block_group+0xa5d/0xcd0 fs/btrfs/relocation.c:3749
   btrfs_relocate_block_group+0x7ab/0xd70 fs/btrfs/relocation.c:4087
   btrfs_relocate_chunk+0x12c/0x3b0 fs/btrfs/volumes.c:3283
   __btrfs_balance+0x1b06/0x2690 fs/btrfs/volumes.c:4018
   btrfs_balance+0xbdb/0x1120 fs/btrfs/volumes.c:4402
   btrfs_ioctl_balance+0x496/0x7c0 fs/btrfs/ioctl.c:3604
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:870 [inline]
   __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:856
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
  RIP: 0033:0x7f4e2f88c389
[CAUSE]
With extra debugging, the offending reloc_root is for the quota tree (rootid 8). Normally we should not use the reloc tree for the quota root at all, as reloc trees are only for subvolume trees. But there is a race between quota enabling and relocation; this happens after commit 85724171b302 ("btrfs: fix the btrfs_get_global_root return value"). Before that commit, for the quota and free space trees, we exit immediately if we cannot grab them from fs_info. But now we would try to read them from disk, just as if they were fs trees, which sets the ROOT_SHAREABLE flag in such a race:
  Thread A                      | Thread B
  ------------------------------+------------------------------
  btrfs_quota_enable()          |
  |                             | btrfs_get_root_ref()
  |                             | |- btrfs_get_global_root()
  |                             | |  Returned NULL
  |                             | |- btrfs_lookup_fs_root()
  |                             | |  Returned NULL
  |- btrfs_create_tree()        |
  |  Now quota root item is     |
  |  inserted                   | |- btrfs_read_tree_root()
                                | |  Got the newly inserted quota root
                                | |- btrfs_init_fs_root()
                                | |  Set ROOT_SHAREABLE flag
[FIX]
Get back to the old behavior by returning ERR_PTR(-ENOENT) if the target objectid is not a subvolume tree or data reloc tree.
Reported-and-tested-by: syzbot+ae97a827ae1c3336bbb4@syzkaller.appspotmail.com Fixes: 85724171b302 ("btrfs: fix the btrfs_get_global_root return value") Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2023-08-10  btrfs: properly clear end of the unreserved range in cow_file_range  (Christoph Hellwig, 1 file, -5/+5)
When the call to btrfs_reloc_clone_csums in cow_file_range returns an error, we jump to the out_unlock label with the extent_reserved variable set to false. The cleanup at the label will then call extent_clear_unlock_delalloc on the range from start to end. But we've already added cur_alloc_size to start before the jump, so there might be no range left from the newly incremented start to end. Move the check for 'start < end' so that it is also reached for the !extent_reserved case. CC: stable@vger.kernel.org # 6.1+ Fixes: a315e68f6e8b ("Btrfs: fix invalid attempt to free reserved space on failure to cow range") Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2023-08-10  btrfs: don't wait for writeback on clean pages in extent_write_cache_pages  (Christoph Hellwig, 1 file, -0/+6)
__extent_writepage could have started on more pages than the one it was called for. This happens regularly for zoned file systems, and in theory could happen for compressed I/O if the worker thread was executed very quickly. For such pages extent_write_cache_pages waits for writeback to complete before moving on to the next page, which is highly inefficient as it blocks the flusher thread. Port over the PageDirty check that was added to write_cache_pages in commit 515f4a037fb ("mm: write_cache_pages optimise page cleaning") to fix this. CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
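The check being ported comes from write_cache_pages(); its shape there is roughly the following (page-based sketch for brevity, with the locking elided; the btrfs copy is folio-based). Once the page is locked, a page that is no longer dirty was already written out by someone else, so there is nothing to wait for.

  #include <linux/pagemap.h>
  #include <linux/writeback.h>

  /* Sketch of the loop-body logic described above. */
  static bool ready_to_write(struct page *page, struct writeback_control *wbc)
  {
          if (!PageDirty(page))
                  return false;   /* already cleaned elsewhere: skip, don't wait */

          if (PageWriteback(page)) {
                  if (wbc->sync_mode == WB_SYNC_NONE)
                          return false;           /* opportunistic writeback: skip */
                  wait_on_page_writeback(page);   /* integrity writeback: wait */
          }
          return true;
  }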
2023-08-10  btrfs: don't stop integrity writeback too early  (Christoph Hellwig, 1 file, -3/+4)
extent_write_cache_pages stops writing pages as soon as nr_to_write hits zero. That is the right thing for opportunistic writeback, but incorrect for data integrity writeback, which needs to ensure that no dirty pages are left in the range. Thus only stop the writeback for WB_SYNC_NONE if nr_to_write hits 0. This is a port of the write_cache_pages changes in commit 05fe478dd04e ("mm: write_cache_pages integrity fix"). Note that I've only triggered the problem with other changes to the btrfs writeback code, but this condition seems worthwhile fixing anyway. CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> [ updated comment ] Signed-off-by: David Sterba <dsterba@suse.com>
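The corresponding guard in write_cache_pages(), which the text says is being mirrored, is roughly this (sketch): the nr_to_write budget only terminates opportunistic writeback, never an integrity sync.

  #include <linux/writeback.h>

  /* Sketch of the nr_to_write handling mirrored from write_cache_pages(). */
  static bool writeback_budget_exhausted(struct writeback_control *wbc)
  {
          /*
           * WB_SYNC_ALL (data integrity) must keep going until the whole
           * range is clean; only WB_SYNC_NONE (opportunistic writeback) may
           * stop once the nr_to_write budget runs out.
           */
          return --wbc->nr_to_write <= 0 && wbc->sync_mode == WB_SYNC_NONE;
  }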