path: root/drivers/pps
2019-04-14  fs: prevent page refcount overflow in pipe_buf_get  (Matthew Wilcox, 5 files, -15/+29)
Change pipe_buf_get() to return a bool indicating whether it succeeded in raising the refcount of the page (if the thing in the pipe is a page). This removes another mechanism for overflowing the page refcount. All callers converted to handle a failure. Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Matthew Wilcox <willy@infradead.org> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-04-14  mm: prevent get_user_pages() from overflowing page refcount  (Linus Torvalds, 2 files, -12/+49)
If the page refcount wraps around past zero, it will be freed while there are still four billion references to it. One of the possible avenues for an attacker to try to make this happen is by doing direct IO on a page multiple times. This patch makes get_user_pages() refuse to take a new page reference if there are already more than two billion references to the page. Reported-by: Jann Horn <jannh@google.com> Acked-by: Matthew Wilcox <willy@infradead.org> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-04-14  mm: add 'try_get_page()' helper function  (Linus Torvalds, 1 file, -0/+9)
This is the same as the traditional 'get_page()' function, but instead of unconditionally incrementing the reference count of the page, it only does so if the count was "safe". It returns whether the reference count was incremented (and is marked __must_check, since the caller obviously has to be aware of it). Also like 'get_page()', you can't use this function unless you already had a reference to the page. The intent is that you can use this exactly like get_page(), but in situations where you want to limit the maximum reference count. The code currently does an unconditional WARN_ON_ONCE() if we ever hit the reference count issues (either zero or negative), as a notification that the conditional non-increment actually happened. NOTE! The count access for the "safety" check is inherently racy, but that doesn't matter since the buffer we use is basically half the range of the reference count (ie we look at the sign of the count). Acked-by: Matthew Wilcox <willy@infradead.org> Cc: Jann Horn <jannh@google.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
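A minimal sketch of such a helper, following the description above (the exact upstream definition may differ slightly):

    static inline __must_check bool try_get_page(struct page *page)
    {
        page = compound_head(page);
        /* Refuse the reference if the count is zero or has gone negative. */
        if (WARN_ON_ONCE(page_ref_count(page) <= 0))
            return false;
        page_ref_inc(page);
        return true;
    }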
2019-04-14  mm: make page ref count overflow check tighter and more explicit  (Linus Torvalds, 1 file, -1/+5)
We have a VM_BUG_ON() to check that the page reference count doesn't underflow (or get close to overflow) by checking the sign of the count. That's all fine, but we actually want to allow people to use a "get page ref unless it's already very high" helper function, and we want that one to use the sign of the page ref (without triggering this VM_BUG_ON). Change the VM_BUG_ON to only check for small underflows (or _very_ close to overflowing), and ignore overflows which have strayed into negative territory. Acked-by: Matthew Wilcox <willy@infradead.org> Cc: Jann Horn <jannh@google.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
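A sketch of the tightened check: treating the count as unsigned makes both zero and values just past overflow fall into a small window, while large-but-negative counts (an overflow already in progress) no longer trip the assertion:

    /* sketch: zero or within 127 of wrapping */
    #define page_ref_zero_or_close_to_overflow(page) \
        ((unsigned int) page_ref_count(page) + 127u <= 127u)

    VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);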
2019-04-12  clk: imx: Fix PLL_1416X not rounding rates  (Leonard Crestez, 1 file, -1/+1)
The code which initializes "clk_init_data.ops" checks pll->rate_table before that field is ever assigned, so it always picks "clk_pll1416x_min_ops". This breaks dynamic rate rounding for features such as cpufreq. Fix by checking pll_clk->rate_table instead; here pll_clk refers to the constant initialization data coming from the per-SoC clk driver. Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com> Fixes: 8646d4dcc7fb ("clk: imx: Add PLLs driver for imx8mm soc") Signed-off-by: Stephen Boyd <sboyd@kernel.org>
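In other words, the ops selection should key off the per-SoC initialization data rather than the not-yet-populated runtime struct; roughly (ops names as referenced in the driver, structure simplified):

    /* pll->rate_table is still NULL here, so the "min" ops without
     * round_rate/set_rate were always chosen before the fix */
    if (pll_clk->rate_table)
        init.ops = &clk_pll1416x_ops;
    else
        init.ops = &clk_pll1416x_min_ops;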
2019-04-12  clk: mediatek: fix clk-gate flag setting  (Weiyi Lu, 1 file, -2/+1)
The CLK_SET_RATE_PARENT flag would be dropped. Merge the two flag assignments together to correct the error. Fixes: 5a1cc4c27ad2 ("clk: mediatek: Add flags to mtk_gate") Cc: <stable@vger.kernel.org> Signed-off-by: Weiyi Lu <weiyi.lu@mediatek.com> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2019-04-12  arm64: futex: Fix FUTEX_WAKE_OP atomic ops with non-zero result value  (Will Deacon, 1 file, -8/+8)
Rather embarrassingly, our futex() FUTEX_WAKE_OP implementation doesn't explicitly set the return value on the non-faulting path and instead leaves it holding the result of the underlying atomic operation. This means that any FUTEX_WAKE_OP atomic operation which computes a non-zero value will be reported as having failed. Regrettably, I wrote the buggy code back in 2011 and it was upstreamed as part of the initial arm64 support in 2012. The reasons we appear to get away with this are: 1. FUTEX_WAKE_OP is rarely used and therefore doesn't appear to get exercised by futex() test applications 2. If the result of the atomic operation is zero, the system call behaves correctly 3. Prior to version 2.25, the only operation used by GLIBC set the futex to zero, and therefore worked as expected. From 2.25 onwards, FUTEX_WAKE_OP is not used by GLIBC at all. Fix the implementation by ensuring that the return value is either 0 to indicate that the atomic operation completed successfully, or -EFAULT if we encountered a fault when accessing the user mapping. Cc: <stable@kernel.org> Fixes: 6170a97460db ("arm64: Atomic operations") Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-12  iommu/amd: Set exclusion range correctly  (Joerg Roedel, 1 file, -1/+1)
The exclusion range limit register needs to contain the base address of the last page that is part of the range, as bits 0-11 of this register are treated as 0xfff by the hardware for comparisons. So correctly set the exclusion range in the hardware to the last page which is _in_ the range. Fixes: b2026aa2dce44 ('x86, AMD IOMMU: add functions for programming IOMMU MMIO space') Signed-off-by: Joerg Roedel <jroedel@suse.de>
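A hedged sketch of the corrected limit computation (variable names here are illustrative, not the driver's):

    /* Bits 0-11 of the limit register compare as 0xfff, so program the
     * base address of the *last* page inside the range, not the first
     * page after it. */
    u64 start = range_start & PAGE_MASK;
    u64 limit = (range_start + range_length - 1) & PAGE_MASK;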
2019-04-12  clang-format: Update with the latest for_each macro list  (Miguel Ojeda, 1 file, -0/+24)
Re-run the shell fragment that generated the original list now that there are two dozen new entries after v5.1's merge window. Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2019-04-12  perf/core: Fix perf_event_disable_inatomic() race  (Peter Zijlstra, 2 files, -11/+45)
Thomas-Mich Richter reported he triggered a WARN()ing from event_function_local() on his s390. The problem boils down to:

    CPU-A                                  CPU-B

    perf_event_overflow()
      perf_event_disable_inatomic()
        @pending_disable = 1
        irq_work_queue();

    sched-out
      event_sched_out()
        @pending_disable = 0

                                           sched-in
                                           perf_event_overflow()
                                             perf_event_disable_inatomic()
                                               @pending_disable = 1;
                                               irq_work_queue(); // FAILS

    irq_work_run()
      perf_pending_event()
        if (@pending_disable)
          perf_event_disable_local(); // WHOOPS

The problem exists in generic code, but s390 is particularly sensitive because it doesn't implement arch_irq_work_raise(), nor does it call irq_work_run() from its PMU interrupt handler (nor would that be sufficient in this case, because s390 also generates perf_event_overflow() from pmu::stop). Add to that the fact that s390 is a virtual architecture and (virtual) CPU-A can stall long enough for the above race to happen, even if it would self-IPI. Adding an irq_work_sync() to event_sched_in() would work for all hardware PMUs that properly use irq_work_run() but fails for software PMUs. Instead encode the CPU number in @pending_disable, such that we can tell which CPU requested the disable. This then allows us to detect the above scenario and even redirect the IPI to make up for the failed queue. Reported-by: Thomas-Mich Richter <tmricht@linux.ibm.com> Tested-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Hendrik Brueckner <brueckner@linux.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
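Conceptually, the fix turns @pending_disable from a boolean into a CPU number; a simplified sketch of the idea (not the literal patch):

    /* requesting side: remember which CPU asked for the disable */
    event->pending_disable = smp_processor_id();
    irq_work_queue(&event->pending);

    /* in perf_pending_event(), simplified */
    if (event->pending_disable >= 0) {
        int cpu = event->pending_disable;

        event->pending_disable = -1;
        if (cpu == smp_processor_id())
            perf_event_disable_local(event);
        else
            irq_work_queue_on(&event->pending, cpu); /* redirect the IPI */
    }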
2019-04-12  block: fix the return errno for direct IO  (Jason Yan, 1 file, -4/+4)
If the last bio returned is not dio->bio, the status of that bio will not be assigned to dio->bio when it is an error. This makes the status of the whole IO wrong:

    ksoftirqd/21-117 [021] ..s. 4017.966090: 8,0 C N 4883648 [0]
    <idle>-0 [018] ..s. 4017.970888: 8,0 C WS 4924800 + 1024 [0]
    <idle>-0 [018] ..s. 4017.970909: 8,0 D WS 4935424 + 1024 [<idle>]
    <idle>-0 [018] ..s. 4017.970924: 8,0 D WS 4936448 + 321 [<idle>]
    ksoftirqd/21-117 [021] ..s. 4017.995033: 8,0 C R 4883648 + 336 [65475]
    ksoftirqd/21-117 [021] d.s. 4018.001988: myprobe1: (blkdev_bio_end_io+0x0/0x168) bi_status=7
    ksoftirqd/21-117 [021] d.s. 4018.001992: myprobe: (aio_complete_rw+0x0/0x148) x0=0xffff802f2595ad80 res=0x12a000 res2=0x0

We always have to assign bio->bi_status to dio->bio.bi_status because we only check dio->bio.bi_status when we return the whole IO to the upper layer. Fixes: 542ff7bf18c6 ("block: new direct I/O implementation") Cc: stable@vger.kernel.org Cc: Christoph Hellwig <hch@lst.de> Cc: Jens Axboe <axboe@kernel.dk> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
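The fix boils down to recording the first error into dio->bio.bi_status regardless of which child bio happens to complete last, along the lines of:

    /* in blkdev_bio_end_io(): propagate the first error we see */
    if (bio->bi_status && !dio->bio.bi_status)
        dio->bio.bi_status = bio->bi_status;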
2019-04-11  Revert "SUNRPC: Micro-optimise when the task is known not to be sleeping"  (Trond Myklebust, 2 files, -45/+8)
This reverts commit 009a82f6437490c262584d65a14094a818bcb747. The ability to optimise here relies on the compiler being able to optimise away tail calls to avoid stack overflows. Unfortunately, we are seeing reports of problems, so let's just revert. Reported-by: Daniel Mack <daniel@zonque.org> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2019-04-11  NFSv4.1 fix incorrect return value in copy_file_range  (Olga Kornievskaia, 2 files, -4/+3)
According to the NFSv4.2 spec, if the input and output file is the same file, the operation should fail with EINVAL. However, the Linux copy_file_range() system call has no such restriction. Therefore, in that case let's return EOPNOTSUPP and allow the VFS to fall back to doing do_splice_direct(). Also, when copy_file_range() is called on an NFSv4.0 or 4.1 mount (i.e., a server that doesn't support COPY functionality), we also need to return EOPNOTSUPP and fall back to a regular copy. Fixes xfstests generic/075, generic/091, generic/112, generic/263 for all NFSv4.x versions. Signed-off-by: Olga Kornievskaia <kolga@netapp.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
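A simplified sketch of the checks in the NFS ->copy_file_range() path; the capability test is shown with the NFS_CAP_COPY flag purely for illustration:

    /* same file on both ends: punt back to the VFS generic copy
     * instead of returning EINVAL */
    if (file_inode(file_in) == file_inode(file_out))
        return -EOPNOTSUPP;

    /* server without COPY support (e.g. NFSv4.0/4.1): also fall back */
    if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY))
        return -EOPNOTSUPP;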
2019-04-11  xprtrdma: Fix helper that drains the transport  (Chuck Lever, 1 file, -1/+1)
We want to drain only the RQ first. Otherwise the transport can deadlock on ->close if there are outstanding Send completions. Fixes: 6d2d0ee27c7a ("xprtrdma: Replace rpcrdma_receive_wq ... ") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2019-04-11  NFS: Fix handling of reply page vector  (Chuck Lever, 1 file, -2/+2)
NFSv4 GETACL and FS_LOCATIONS requests stopped working in v5.1-rc. These two need the extra padding to be added directly to the reply length. Reported-by: Olga Kornievskaia <aglo@umich.edu> Fixes: 02ef04e432ba ("NFS: Account for XDR pad of buf->pages") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Tested-by: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2019-04-11  NFS: Forbid setting AF_INET6 to "struct sockaddr_in"->sin_family.  (Tetsuo Handa, 1 file, -1/+2)
syzbot is reporting an uninitialized value at rpc_sockaddr2uaddr() [1]. This is because syzbot is setting AF_INET6 in "struct sockaddr_in"->sin_family (which is embedded into the user-visible "struct nfs_mount_data" structure) even though nfs23_validate_mount_data() cannot pass sizeof(struct sockaddr_in6) bytes of an AF_INET6 address to rpc_sockaddr2uaddr(). Since the "struct nfs_mount_data" structure is user-visible, we can't change it to use "struct sockaddr_storage". Therefore, assuming that everybody is using the AF_INET family when passing an address via "struct nfs_mount_data"->addr, reject the mount if its sin_family is not AF_INET. [1] https://syzkaller.appspot.com/bug?id=599993614e7cbbf66bc2656a919ab2a95fb5d75c Reported-by: syzbot <syzbot+047a11c361b872896a4f@syzkaller.appspotmail.com> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
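The guard amounts to a family check on the legacy mount data before converting the address; roughly (the error label is illustrative):

    /* data->addr is a struct sockaddr_in, so only AF_INET can
     * legitimately be stored there */
    struct sockaddr_in *sap = (struct sockaddr_in *)&data->addr;

    if (sap->sin_family != AF_INET)
        goto out_invalid_address;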
2019-04-11  dma-debug: only skip one stackframe entry  (Scott Wood, 1 file, -1/+1)
With skip set to 1, I get a traceback like this:

    [  106.867637] DMA-API: Mapped at:
    [  106.870784]  afu_dma_map_region+0x2cd/0x4f0 [dfl_afu]
    [  106.875839]  afu_ioctl+0x258/0x380 [dfl_afu]
    [  106.880108]  do_vfs_ioctl+0xa9/0x720
    [  106.883688]  ksys_ioctl+0x60/0x90
    [  106.887007]  __x64_sys_ioctl+0x16/0x20

With the previous value of 2, afu_dma_map_region was being omitted. I suspect that the code paths have simply changed since the value of 2 was chosen a decade ago, but it's also possible that it varies based on which mapping function was used, compiler inlining choices, etc. In any case, it's best to err on the side of skipping less. Signed-off-by: Scott Wood <swood@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-04-11  platform/x86: pmc_atom: Drop __initconst on dmi table  (Stephen Boyd, 1 file, -1/+1)
It's used by probe and that isn't an init function. Drop this so that we don't get a section mismatch. Reported-by: kbuild test robot <lkp@intel.com> Cc: David Müller <dave.mueller@gmx.ch> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Fixes: 7c2e07130090 ("clk: x86: Add system specific quirk to mark clocks as critical") Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2019-04-11  nvmet: fix discover log page when offsets are used  (Keith Busch, 4 files, -25/+58)
The nvme target hadn't been taking the Get Log Page offset parameter into consideration, and so has been returning corrupted log pages when offsets are used. Since many tools, including nvme-cli, split the log request to 4k, we've been breaking discovery log responses when more than 3 subsystems exist. Fix the returned data by internally generating the entire discovery log page and copying only the requested bytes into the user buffer. The command log page offset type has been modified to a native __le64 to make it easier to extract the value from a command. Signed-off-by: Keith Busch <keith.busch@intel.com> Tested-by: Minwoo Im <minwoo.im@samsung.com> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-04-11  nvme-fc: correct csn initialization and increments on error  (James Smart, 1 file, -5/+15)
This patch fixes a long-standing bug that initialized the FC-NVME cmnd iu CSN value to 1. Early FC-NVME specs had the connection starting with CSN=1. By the time the spec reached approval, the language had changed to state a connection should start with CSN=0. This patch corrects the initialization value for FC-NVME connections. Additionally, in reviewing the transport, the CSN value is assigned to the new IU early in the start routine. It's possible that a later dma map request may fail, causing the command to never be sent to the controller. Change the location of the assignment so that it is immediately prior to calling the lldd. Add a comment block to explain the impacts if the lldd were to additionally fail sending the command. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-04-11  mmc: sdhci-omap: Don't finish_mrq() on a command error during tuning  (Faiz Abbas, 1 file, -0/+38)
commit 5b0d62108b46 ("mmc: sdhci-omap: Add platform specific reset callback") skips data resets during tuning operation. Because of this, a data error or data finish interrupt might still arrive after a command error has been handled and the mrq ended. This ends up with a "mmc0: Got data interrupt 0x00000002 even though no data operation was in progress" error message. Fix this by adding a platform specific callback for sdhci_irq. Mark the mrq as a failure but wait for a data interrupt instead of calling finish_mrq(). Fixes: 5b0d62108b46 ("mmc: sdhci-omap: Add platform specific reset callback") Signed-off-by: Faiz Abbas <faiz_abbas@ti.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2019-04-11  gpu: host1x: Fix compile error when IOMMU API is not available  (Stefan Agner, 1 file, -1/+1)
In case the IOMMU API is not available, compiling host1x fails with the following error:

    In file included from drivers/gpu/host1x/hw/host1x06.c:27:
    drivers/gpu/host1x/hw/channel_hw.c: In function ‘host1x_channel_set_streamid’:
    drivers/gpu/host1x/hw/channel_hw.c:118:30: error: implicit declaration of function ‘dev_iommu_fwspec_get’; did you mean ‘iommu_fwspec_free’? [-Werror=implicit-function-declaration]
      struct iommu_fwspec *spec = dev_iommu_fwspec_get(channel->dev->parent);
                                  ^~~~~~~~~~~~~~~~~~~~
                                  iommu_fwspec_free

Fixes: de5469c21ff9 ("gpu: host1x: Program the channel stream ID") Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Thierry Reding <treding@nvidia.com>
2019-04-11  drm/i915/gvt: Roundup fb->height into tile's height at calculation fb->size  (Xiong Zhang, 1 file, -3/+6)
When the fb is tiled and fb->height isn't a multiple of the tile's height, the formula fb->size = fb->stride * fb->height yields a smaller size than the actual size, as the memory height of a tiled fb should be a multiple of the tile's height. Fixes: 7f1a93b1f1d1 ("drm/i915/gvt: Correct the calculation of plane size") Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com> Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
2019-04-11  clk: x86: Add system specific quirk to mark clocks as critical  (David Müller, 3 files, -3/+35)
Since commit 648e921888ad ("clk: x86: Stop marking clocks as CLK_IS_CRITICAL"), the pmc_plt_clocks of the Bay Trail SoC are unconditionally gated off. Unfortunately this will break systems where these clocks are used for external purposes beyond the kernel's knowledge. Fix it by implementing a system specific quirk to mark the necessary pmc_plt_clks as critical. Fixes: 648e921888ad ("clk: x86: Stop marking clocks as CLK_IS_CRITICAL") Signed-off-by: David Müller <dave.mueller@gmx.ch> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2019-04-11  block: do not leak memory in bio_copy_user_iov()  (Jérôme Glisse, 1 file, -1/+4)
When bio_add_pc_page() fails in bio_copy_user_iov() we should free the page we just allocated otherwise we are leaking it. Cc: linux-block@vger.kernel.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: stable@vger.kernel.org Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
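The fix is essentially a one-liner in the copy loop, roughly:

    if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
        if (!map_data)
            __free_page(page);  /* don't leak the page we just allocated */
        break;
    }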
2019-04-10  PCI: pciehp: Ignore Link State Changes after powering off a slot  (Sergey Miroshnichenko, 1 file, -0/+4)
During a safe hot remove, the OS powers off the slot, which may cause a Data Link Layer State Changed event. The slot has already been set to OFF_STATE, so that event results in re-enabling the device, making it impossible to safely remove it. Clear out the Presence Detect Changed and Data Link Layer State Changed events when the disabled slot has settled down. It is still possible to re-enable the device if it remains in the slot after pressing the Attention Button by pressing it again. Fixes the problem that Micah reported below: an NVMe drive power button may not actually turn off the drive. Link: https://bugzilla.kernel.org/show_bug.cgi?id=203237 Reported-by: Micah Parrish <micah.parrish@hpe.com> Tested-by: Micah Parrish <micah.parrish@hpe.com> Signed-off-by: Sergey Miroshnichenko <s.miroshnichenko@yadro.com> [bhelgaas: changelog, add bugzilla URL] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Lukas Wunner <lukas@wunner.de> Cc: stable@vger.kernel.org # v4.19+
2019-04-10  sparc64/pci_sun4v: fix ATU checks for large DMA masks  (Christoph Hellwig, 1 file, -9/+11)
Now that we allow drivers to set DMA masks larger than required, we need to be a little more careful in the sun4v PCI iommu driver about when to select ATU support - a larger DMA mask can be set even when the platform does not support ATU, so we always have to check whether it is available before using it. Add a little helper for that and use it in all the places where we make ATU usage decisions based on the DMA mask. Fixes: 24132a419c68 ("sparc64/pci_sun4v: allow large DMA masks") Reported-by: Meelis Roos <mroos@linux.ee> Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Meelis Roos <mroos@linux.ee> Acked-by: David S. Miller <davem@davemloft.net>
2019-04-10  lightnvm: pblk: fix crash in pblk_end_partial_read due to multipage bvecs  (Hans Holmberg, 1 file, -22/+28)
The introduction of multipage bio vectors broke pblk's partial read logic due to it not being prepared for multipage bio vectors. Use bio vector iterators instead of direct bio vector indexing. Fixes: 07173c3ec276 ("block: enable multipage bvecs") Reported-by: Klaus Jensen <klaus.jensen@cnexlabs.com> Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com> Updated description. Signed-off-by: Matias Bjørling <mb@lightnvm.io> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  IB/hfi1: Do not flush send queue in the TID RDMA second leg  (Kaike Wan, 1 file, -23/+8)
When a QP is put into error state, the send queue will be flushed. This mechanism is implemented in both the first and the second leg of the send engine. Since the second leg is only responsible for data transactions in the KDETH space for the TID RDMA WRITE request, it should not perform the flushing of the send queue. This patch removes the flushing function of the second leg, but still keeps the bailing out of the QP if it is put into error state. Fixes: 70dcb2e3dc6a ("IB/hfi1: Add the TID second leg send packet builder") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-04-10  ASoC: wcd9335: Fix missing regmap requirement  (Marc Gonzalez, 1 file, -0/+1)
wcd9335.c: undefined reference to 'devm_regmap_add_irq_chip' Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-10  drm/i915/dp: revert back to max link rate and lane count on eDP  (Jani Nikula, 1 file, -59/+10)
Commit 7769db588384 ("drm/i915/dp: optimize eDP 1.4+ link config fast and narrow") started to optimize the eDP 1.4+ link config, both per spec and as preparation for display stream compression support. Sadly, we again face panels that flat out fail with parameters they claim to support. Revert, and go back to the drawing board. v2: Actually revert to max params instead of just wide-and-slow. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109959 Fixes: 7769db588384 ("drm/i915/dp: optimize eDP 1.4+ link config fast and narrow") Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Manasi Navare <manasi.d.navare@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Matt Atwood <matthew.s.atwood@intel.com> Cc: "Lee, Shawn C" <shawn.c.lee@intel.com> Cc: Dave Airlie <airlied@gmail.com> Cc: intel-gfx@lists.freedesktop.org Cc: <stable@vger.kernel.org> # v5.0+ Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Manasi Navare <manasi.d.navare@intel.com> Tested-by: Albert Astals Cid <aacid@kde.org> # v5.0 backport Tested-by: Emanuele Panigati <ilpanich@gmail.com> # v5.0 backport Tested-by: Matteo Iervasi <matteoiervasi@gmail.com> # v5.0 backport Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190405075220.9815-1-jani.nikula@intel.com (cherry picked from commit f11cb1c19ad0563b3c1ea5eb16a6bac0e401f428) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-04-10  drm/i915/icl: Fix port disable sequence for mipi-dsi  (Vandita Kulkarni, 2 files, -4/+4)
Re-enable clock gating of DDI clocks. v2: Fix the default ddi clk state for mipi-dsi (Imre) Fixes: 1026bea00381 ("drm/i915/icl: Ungate DSI clocks") Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/1553513202-13863-2-git-send-email-vandita.kulkarni@intel.com (cherry picked from commit 942d1cf48eae3fcd7e973cfb708d5c4860f0c713) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-04-10  drm/i915/icl: Ungate ddi clocks before IO enable  (Vandita Kulkarni, 1 file, -0/+6)
IO enable sequencing needs ddi clocks enabled. These clocks will be gated at a later point in the enable sequence. v2: Fix the commit header (Uma) v3: Remove the redundant read (Ville) Fixes: 949fc52af19e ("drm/i915/icl: add pll mapping for DSI") Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/1553513202-13863-1-git-send-email-vandita.kulkarni@intel.com (cherry picked from commit c5b81a325263a891d5811aabe938c87e03db4c37) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-04-10  nvme: cancel request synchronously  (Ming Lei, 1 file, -1/+1)
nvme_cancel_request() is used in the error handler, and it is always safe to cancel the request synchronously, which avoids a possible race in which the request may be completed after the real hw queue has been destroyed. One such issue was reported by our customer on NVMe RDMA, in which a freed ib queue pair may be used in nvme_rdma_complete_rq(). Cc: Sagi Grimberg <sagi@grimberg.me> Cc: Bart Van Assche <bvanassche@acm.org> Cc: James Smart <james.smart@broadcom.com> Cc: linux-nvme@lists.infradead.org Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  blk-mq: introduce blk_mq_complete_request_sync()  (Ming Lei, 2 files, -0/+8)
In NVMe's error handler, the typical steps for tearing down hardware when recovering the controller are:

    1) stop the blk_mq hw queues
    2) stop the real hw queues
    3) cancel in-flight requests via blk_mq_tagset_busy_iter(tags, cancel_request, ...);
       cancel_request() marks the request as aborted and calls blk_mq_complete_request(req)
    4) destroy the real hw queues

However, there may be a race between #3 and #4, because blk_mq_complete_request() may run q->mq_ops->complete(rq) remotely and asynchronously, and ->complete(rq) may be run after #4. This patch introduces blk_mq_complete_request_sync() for fixing the above race. Cc: Sagi Grimberg <sagi@grimberg.me> Cc: Bart Van Assche <bvanassche@acm.org> Cc: James Smart <james.smart@broadcom.com> Cc: linux-nvme@lists.infradead.org Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
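A sketch of the synchronous variant: instead of possibly bouncing the completion to another CPU, run ->complete() directly in the caller's context (shape of the helper only, details may differ from the merged patch):

    void blk_mq_complete_request_sync(struct request *rq)
    {
        WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
        rq->q->mq_ops->complete(rq); /* completes before the caller returns */
    }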
2019-04-10  scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids  (Dongli Zhang, 1 file, -0/+1)
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-scsi, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the pci scenario, when the 'num_queues' specified by qemu is more than maxcpus, virtio-scsi would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back into MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-scsi by nr_cpu_ids. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  virtio-blk: limit number of hw queues by nr_cpu_ids  (Dongli Zhang, 1 file, -0/+2)
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the pci scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back into MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cpu_ids. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
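The change itself is small: after reading the device's queue count, clamp it before allocating the virtqueues, roughly:

    err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
                               struct virtio_blk_config, num_queues,
                               &num_vqs);
    if (err)
        num_vqs = 1;

    /* never ask for more queues than blk-mq will actually map */
    num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);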
2019-04-10  block, bfq: fix use after free in bfq_bfqq_expire  (Paolo Valente, 3 files, -11/+23)
The function bfq_bfqq_expire() invokes the function __bfq_bfqq_expire(), and the latter may free the in-service bfq-queue. If this happens, then no other instruction of bfq_bfqq_expire() must be executed, or a use-after-free will occur. Based on the assumption that __bfq_bfqq_expire() invokes bfq_put_queue() on the in-service bfq-queue exactly once, the queue is assumed to be freed if its refcounter is equal to one right before invoking __bfq_bfqq_expire(). But, since commit 9dee8b3b057e ("block, bfq: fix queue removal from weights tree") this assumption is false. __bfq_bfqq_expire() may also invoke bfq_weights_tree_remove() and, since commit 9dee8b3b057e ("block, bfq: fix queue removal from weights tree"), also the latter function may invoke bfq_put_queue(). So __bfq_bfqq_expire() may invoke bfq_put_queue() twice, and this is the actual case where the in-service queue may happen to be freed. To address this issue, this commit moves the check on the refcounter of the queue right around the last bfq_put_queue() that may be invoked on the queue. Fixes: 9dee8b3b057e ("block, bfq: fix queue removal from weights tree") Reported-by: Dmitrii Tcvetkov <demfloro@demfloro.ru> Reported-by: Douglas Anderson <dianders@chromium.org> Tested-by: Dmitrii Tcvetkov <demfloro@demfloro.ru> Tested-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  ALSA: hda: Fix racy display power access  (Takashi Iwai, 3 files, -2/+6)
snd_hdac_display_power() doesn't handle concurrent calls carefully enough, and it may lead to double get_power or put_power calls when a runtime PM and an async work get called in a racy way. This patch addresses it by reusing the bus->lock mutex that has been used for protecting the link state change in the ext bus code, so that it can protect against racy display state changes. The initialization of bus->lock was moved from snd_hdac_ext_bus_init() to snd_hdac_bus_init() as well accordingly. Testcase: igt/i915_pm_rpm/module-reload #glk-dsi Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Imre Deak <imre.deak@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-04-10  alarmtimer: Return correct remaining time  (Andrei Vagin, 1 file, -1/+1)
To calculate the remaining time, the current time has to be subtracted from the expiration time. In alarm_timer_remaining() the arguments of ktime_sub() are swapped. Fixes: d653d8457c76 ("alarmtimer: Implement remaining callback") Signed-off-by: Andrei Vagin <avagin@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190408041542.26338-1-avagin@gmail.com
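The fix is just swapping the ktime_sub() arguments in the remaining callback:

    /* remaining = expiry - now (the arguments were reversed before) */
    return ktime_sub(alarm->node.expires, now);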
2019-04-10  locking/lockdep: Zap lock classes even with lock debugging disabled  (Bart Van Assche, 1 file, -17/+12)
The following commit: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use") changed the behavior of lockdep_free_key_range() from unconditionally zapping lock classes into only zapping lock classes if debug_lock == true. Not zapping lock classes if debug_lock == false leaves dangling pointers in several lockdep data structures, e.g. lock_class::name in the all_lock_classes list. The shell command "cat /proc/lockdep" causes the kernel to iterate the all_lock_classes list. Hence the "unable to handle kernel paging request" crash that Shenghui encountered by running cat /proc/lockdep. Since the new behavior can cause cat /proc/lockdep to crash, restore the pre-v5.1 behavior. This patch avoids that cat /proc/lockdep triggers the following crash with debug_lock == false:

    BUG: unable to handle kernel paging request at fffffbfff40ca448
    RIP: 0010:__asan_load1+0x28/0x50
    Call Trace:
     string+0xac/0x180
     vsnprintf+0x23e/0x820
     seq_vprintf+0x82/0xc0
     seq_printf+0x92/0xb0
     print_name+0x34/0xb0
     l_show+0x184/0x200
     seq_read+0x59e/0x6c0
     proc_reg_read+0x11f/0x170
     __vfs_read+0x4d/0x90
     vfs_read+0xc5/0x1f0
     ksys_read+0xab/0x130
     __x64_sys_read+0x43/0x50
     do_syscall_64+0x71/0x210
     entry_SYSCALL_64_after_hwframe+0x49/0xbe

Reported-by: shenghui <shhuiw@foxmail.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use") # v5.1-rc1. Link: https://lkml.kernel.org/r/20190403233552.124673-1-bvanassche@acm.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  ASoC: pcm: fix error handling when try_module_get() fails.  (Ranjani Sridharan, 1 file, -3/+5)
Handle error before returning when try_module_get() fails to prevent inconsistent mutex lock/unlock. Fixes: 52034add7 (ASoC: pcm: update module refcount if module_get_upon_open is set) Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-10  apparmor: Restore Y/N in /sys for apparmor's "enabled"  (Kees Cook, 1 file, -1/+48)
Before commit c5459b829b71 ("LSM: Plumb visibility into optional "enabled" state"), /sys/module/apparmor/parameters/enabled would show "Y" or "N" since it was using the "bool" handler. After being changed to "int", this switched to "1" or "0", breaking the userspace AppArmor detection of dbus-broker. This restores the Y/N output while keeping the LSM infrastructure happy. Before: $ cat /sys/module/apparmor/parameters/enabled 1 After: $ cat /sys/module/apparmor/parameters/enabled Y Reported-by: David Rheinsberg <david.rheinsberg@gmail.com> Reviewed-by: David Rheinsberg <david.rheinsberg@gmail.com> Link: https://lkml.kernel.org/r/CADyDSO6k8vYb1eryT4g6+EHrLCvb68GAbHVWuULkYjcZcYNhhw@mail.gmail.com Fixes: c5459b829b71 ("LSM: Plumb visibility into optional "enabled" state") Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: John Johansen <john.johansen@canonical.com>
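The restored behaviour comes from a small kernel_param_ops wrapper that keeps storing an int for the LSM core but prints it through the bool handler; a simplified sketch (helper name approximate):

    static int param_get_aaintbool(char *buffer, const struct kernel_param *kp)
    {
        struct kernel_param kp_local = *kp;
        bool value = !!*((int *)kp->arg);

        /* reuse the bool getter so userspace sees Y/N again */
        kp_local.arg = &value;
        return param_get_bool(buffer, &kp_local);
    }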
2019-04-10  ASoC: stm32: sai: fix master clock management  (Olivier Moysan, 1 file, -17/+47)
When master clock is used, master clock rate is set exclusively. Parent clocks of master clock cannot be changed after a call to clk_set_rate_exclusive(). So the parent clock of SAI kernel clock must be set before. Ensure also that exclusive rate operations are balanced in STM32 SAI driver. Signed-off-by: Olivier Moysan <olivier.moysan@st.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-10  ASoC: Intel: kbl: fix wrong number of channels  (Tzung-Bi Shih, 1 file, -1/+1)
Fix wrong setting on number of channels. The context wants to set constraint to 2 channels instead of 4. Signed-off-by: Tzung-Bi Shih <tzungbi@google.com> Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-10  x86/perf/amd: Remove need to check "running" bit in NMI handler  (Lendacky, Thomas; 2 files, -12/+22)
Spurious interrupt support was added to perf in the following commit, almost a decade ago: 63e6be6d98e1 ("perf, x86: Catch spurious interrupts after disabling counters") The two previous patches (resolving the race condition when disabling a PMC and NMI latency mitigation) allow for the removal of this older spurious interrupt support. Currently in x86_pmu_stop(), the bit for the PMC in the active_mask bitmap is cleared before disabling the PMC, which sets up a race condition. This race condition was mitigated by introducing the running bitmap. That race condition can be eliminated by first disabling the PMC, waiting for PMC reset on overflow and then clearing the bit for the PMC in the active_mask bitmap. The NMI handler will not re-enable a disabled counter. If x86_pmu_stop() is called from the perf NMI handler, the NMI latency mitigation support will guard against any unhandled NMI messages. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> # 4.14.x- Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lkml.kernel.org/r/Message-ID: Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  powerpc/mm: Define MAX_PHYSMEM_BITS for all 64-bit configs  (Michael Ellerman, 1 file, -1/+1)
The recent commit 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") removed our definition of MAX_PHYSMEM_BITS when SPARSEMEM is disabled. This inadvertently broke some 64-bit FLATMEM using configs with eg: arch/powerpc/include/asm/book3s/64/mmu-hash.h:584:6: error: "MAX_PHYSMEM_BITS" is not defined, evaluates to 0 #if (MAX_PHYSMEM_BITS > MAX_EA_BITS_PER_CONTEXT) ^~~~~~~~~~~~~~~~ Fix it by making sure we define MAX_PHYSMEM_BITS for all 64-bit configs regardless of SPARSEMEM. Fixes: 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") Reported-by: Andreas Schwab <schwab@linux-m68k.org> Reported-by: Hugh Dickins <hughd@google.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-04-10  Bluetooth: btusb: request wake pin with NOAUTOEN  (Brian Norris, 1 file, -1/+1)
Badly-designed systems might have (for example) active-high wake pins that default to high (e.g., because of external pull ups) until they have an active firmware which starts driving it low. This can cause an interrupt storm in the time between request_irq() and disable_irq(). We don't support shared interrupts here, so let's just pre-configure the interrupt to avoid auto-enabling it. Fixes: fd913ef7ce61 ("Bluetooth: btusb: Add out-of-band wakeup support") Fixes: 5364a0b4f4be ("arm64: dts: rockchip: move QCA6174A wakeup pin into its USB node") Signed-off-by: Brian Norris <briannorris@chromium.org> Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
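The pattern is to mark the IRQ NOAUTOEN before requesting it, so it only becomes live once the driver explicitly enables it; roughly (handler and label shown as in the driver, simplified):

    /* keep the wake IRQ masked across request_irq() */
    irq_set_status_flags(irq, IRQ_NOAUTOEN);
    ret = devm_request_irq(&hdev->dev, irq, btusb_oob_wake_handler,
                           0, "OOB Wake-on-BT", data);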
2019-04-09  drm/mediatek: no change parent rate in round_rate() for MT2701 hdmi phy  (Wangyan Wang, 4 files, -16/+20)
This is the third step to make MT2701 HDMI stable. We should not change the rate of the parent clock when doing round_rate for the HDMI PHY clock; the parent clock of the HDMI PHY must stay as it is. We change it only when doing set_rate. Signed-off-by: Wangyan Wang <wangyan.wang@mediatek.com> Signed-off-by: CK Hu <ck.hu@mediatek.com>
2019-04-09  drm/mediatek: using new factor for tvdpll for MT2701 hdmi phy  (Wangyan Wang, 1 file, -5/+3)
This is the second step to make MT2701 HDMI stable. The factor depends on the divider of DPI in MT2701; therefore, we should update this factor to the correct new value. Test: search ok Signed-off-by: Wangyan Wang <wangyan.wang@mediatek.com> Signed-off-by: CK Hu <ck.hu@mediatek.com>