|
After commit 88f718e3fa4d67f3a8dbe79a2f97d722323e4051
"ARM: pxa: delete the custom GPIO header" a compilation
error was introduced in the PXA25x gadget driver.
An attempt to fix the problem was made in
commit b144e4ab1ef130e8bf30bcd3e529b7f35112c503
"usb: gadget: fix pxa25x compilation problems"
by explicitly stating the driver needs the <mach/hardware.h>
header, which solved the compilation for a few boards,
such as the pxa255-idp and its defconfig.
However, the Lubbock board has a special clause in
drivers/usb/gadget/pxa25x_udc.c that pulls in <mach/lubbock.h>, and
that include file has an implicit dependency on <mach/irqs.h> having
been included before <mach/lubbock.h>.
Before commit 88f718e3fa4d67f3a8dbe79a2f97d722323e4051
"ARM: pxa: delete the custom GPIO header" this implicit
dependency for the pxa25x_udc compile on the Lubbock was
satisfied by <linux/gpio.h> implicitly including
<mach/gpio.h>, which in turn included <mach/irqs.h>, in addition
to the earlier added <mach/hardware.h>.
Fix this by having the PXA25x <mach/lubbock.h> explicitly
include <mach/irqs.h>.
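For reference, the fix amounts to roughly the following (a sketch; the exact
placement inside the header is not shown):

    /* arch/arm/mach-pxa/include/mach/lubbock.h */
    #include <mach/irqs.h>    /* the board's IRQ defines build on values from here */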
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
Looks like the LCD panel on LDP has been broken for quite a while, and
recently got fixed by commit 0b2aa8bed3e1 (gpio: twl4030: Fix regression
for twl gpio output). However, there's still an issue left where the panel
backlight does not come on if the LCD drivers are built into the
kernel.
Fix the issue by registering the DPI LCD panel only after the twl4030
GPIO has probed.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
[tony@atomide.com: updated per Tomi's comments]
Signed-off-by: Tony Lindgren <tony@atomide.com>
|
|
Commit 7d7e1eb (ARM: OMAP2+: Prepare for irqs.h removal) and commit
ec2c082 (ARM: OMAP2+: Remove hardcoded IRQs and enable SPARSE_IRQ)
updated the way interrupts for OMAP2/3 devices are defined in the
HWMOD data structures to being an index plus a fixed offset (defined
by OMAP_INTC_START).
A couple of irqs in the OMAP2/3 hwmod data were completely misconfigured,
as they were missing this OMAP_INTC_START relative offset. Add this
offset back to fix the incorrect irq data for the following modules:
OMAP2 - GPMC, RNG
OMAP3 - GPMC, ISP MMU & IVA MMU
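For illustration, a corrected hwmod IRQ entry looks roughly like this (the IRQ
number used here is made up; the point is the added OMAP_INTC_START offset):

    static struct omap_hwmod_irq_info omap2_gpmc_irqs[] = {
        { .irq = 20 + OMAP_INTC_START },    /* was a bare number before the fix */
        { .irq = -1 },
    };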
Signed-off-by: Suman Anna <s-anna@ti.com>
Fixes: 7d7e1eba7e92 ("ARM: OMAP2+: Prepare for irqs.h removal")
Fixes: ec2c0825ca31 ("ARM: OMAP2+: Remove hardcoded IRQs and enable SPARSE_IRQ")
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
With commit 7dedd34 ("ARM: OMAP2+: hwmod: Fix a crash in _setup_reset() with
DEBUG_LL") we moved from parsing the cmdline to identify the uart used for
earlycon to using the requisite hwmod CONFIG_DEBUG_OMAPxUARTy flags.
On DRA7, though, we seem to be missing this flag, and at least on the DRA7 EVM,
where we use uart1 for the console, boot fails with DEBUG_LL enabled.
Reported-by: Lokesh Vutla <lokeshvutla@ti.com>
Tested-by: Lokesh Vutla <lokeshvutla@ti.com> # on a different base
Signed-off-by: Rajendra Nayak <rnayak@ti.com>
Fixes: 7dedd346941d ("ARM: OMAP2+: hwmod: Fix a crash in _setup_reset() with DEBUG_LL")
Signed-off-by: Paul Walmsley <paul@pwsan.com>
|
|
Commit 2171364d1a92 ("powerpc: Add HWCAP2 aux entry") introduced a new
AT_ auxv entry type AT_HWCAP2 but failed to update AT_VECTOR_SIZE_BASE
accordingly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 2171364d1a92 (powerpc: Add HWCAP2 aux entry)
Cc: stable@vger.kernel.org
Acked-by: Michael Neuling <michael@neuling.org>
Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
selinux_setprocattr() does ptrace_parent(p) under task_lock(p),
but task_struct->alloc_lock doesn't pin ->parent or ->ptrace. This
looks confusing and triggers the "suspicious RCU usage" warning,
because ptrace_parent() does rcu_dereference_check().
And in theory this is wrong: the preempt_disable() implied by
spin_lock() doesn't necessarily imply the rcu_read_lock() we need
to access ->parent.
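A sketch of the kind of fix this implies (not necessarily the exact final
code): take rcu_read_lock() around the ptrace_parent() call instead of
relying on task_lock():

    rcu_read_lock();
    tracer = ptrace_parent(p);
    if (tracer)
        tracer_sid = task_sid(tracer);    /* consume the result inside the RCU section */
    rcu_read_unlock();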
Reported-by: Evan McNabb <emcnabb@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Fix a broken networking check: return an error if the peer recv check
fails. Previously, if secmark was active and the packet recv check
succeeded, the peer recv error was ignored.
Signed-off-by: Chad Hanson <chanson@trustedcs.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Commit "drm/ttm: Don't move non-existing data" didn't take the
swapped-out corner case into account. This patch corrects that.
Fixes blank screen after attempted suspend / hibernate on vmwgfx.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Fix build error: qxl uses crc32 functions so it needs to select
CRC32.
Also use angle brackets around a kernel header file name.
drivers/built-in.o: In function `qxl_display_read_client_monitors_config':
(.text+0x19d754): undefined reference to `crc32_le'
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
Since commit 36bc08cc01709 ("fs/aio: Add support to aio ring pages
migration") the aio ring setup code has used a special per-ring backing
inode for the page allocations, rather than just using random anonymous
pages.
However, rather than remembering the pages as it allocated them, it
would allocate the pages, insert them into the file mapping (dirty, so
that they couldn't be free'd), and then forget about them. And then to
look them up again, it would mmap the mapping, and then use
"get_user_pages()" to get back an array of the pages we just created.
Now, not only is that incredibly inefficient, it also leaked all the
pages if the mmap failed (which could happen due to excessive number of
mappings, for example).
So clean it all up, making it much more straightforward. Also remove
some left-overs of the previous (broken) mm_populate() usage that was
removed in commit d6c355c7dabc ("aio: fix race in ring buffer page
lookup introduced by page migration support") but left the pointless and
now misleading MAP_POPULATE flag around.
Tested-and-acked-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch adds support for RAPL on Intel ValleyView based SoC
platforms, such as Baytrail.
Besides adding CPU ID, special energy unit encoding is handled
for ValleyView.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
kmemleak reported a memory leak as below.
unreferenced object 0xffff880118f14700 (size 32):
comm "swapper/0", pid 1, jiffies 4294877401 (age 123.283s)
hex dump (first 32 bytes):
00 01 10 00 00 00 ad de 00 02 20 00 00 00 ad de .......... .....
00 d4 d2 18 01 88 ff ff 01 00 00 00 00 04 00 00 ................
backtrace:
[<ffffffff814edb1e>] kmemleak_alloc+0x4e/0xb0
[<ffffffff811889dc>] kmem_cache_alloc_trace+0x1ec/0x260
[<ffffffff810aba66>] pm_vt_switch_required+0x76/0xb0
[<ffffffff812f39f5>] register_framebuffer+0x195/0x320
[<ffffffff8130af18>] efifb_probe+0x718/0x780
[<ffffffff81391495>] platform_drv_probe+0x45/0xb0
[<ffffffff8138f407>] driver_probe_device+0x87/0x3a0
[<ffffffff8138f7f3>] __driver_attach+0x93/0xa0
[<ffffffff8138d413>] bus_for_each_dev+0x63/0xa0
[<ffffffff8138ee5e>] driver_attach+0x1e/0x20
[<ffffffff8138ea40>] bus_add_driver+0x180/0x250
[<ffffffff8138fe74>] driver_register+0x64/0xf0
[<ffffffff813913ba>] __platform_driver_register+0x4a/0x50
[<ffffffff8191e028>] efifb_driver_init+0x12/0x14
[<ffffffff8100214a>] do_one_initcall+0xfa/0x1b0
[<ffffffff818e40e0>] kernel_init_freeable+0x17b/0x201
In pm_vt_switch_required(), the "entry" variable is allocated via kmalloc().
So, in pm_vt_switch_unregister(), we need to call kfree() when the object
is deleted from the list.
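A sketch of the corrected unregister path (written from memory, following the
usual list-walk pattern in kernel/power/console.c):

    void pm_vt_switch_unregister(struct device *dev)
    {
        struct pm_vt_switch *tmp;

        mutex_lock(&vt_switch_mutex);
        list_for_each_entry(tmp, &pm_vt_switch_list, head) {
            if (tmp->dev == dev) {
                list_del(&tmp->head);
                kfree(tmp);    /* previously missing, leaking the kmalloc()ed entry */
                break;
            }
        }
        mutex_unlock(&vt_switch_mutex);
    }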
Signed-off-by: Masami Ichikawa <masami256@gmail.com>
Reviewed-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
When configuring a default governor (via CONFIG_CPU_FREQ_DEFAULT_*) with the
intel_pstate driver, the desired default policy is not properly set. For
example, setting 'CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE' ends up with the
'powersave' policy being set.
Fix this by configuring the correct default policy if either 'powersave' or
'performance' is requested; otherwise, fall back to what the driver originally
set via its 'init' routine.
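Roughly, the fix amounts to honouring the Kconfig default for setpolicy-style
drivers such as intel_pstate (a sketch of the idea, not the exact cpufreq core
code):

    /* in the policy initialization path, for drivers implementing ->setpolicy */
    #if defined(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE)
        policy->policy = CPUFREQ_POLICY_PERFORMANCE;
    #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE)
        policy->policy = CPUFREQ_POLICY_POWERSAVE;
    #endif
    /* otherwise keep whatever the driver's ->init set up */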
Signed-off-by: Jason Baron <jbaron@akamai.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
There are cases where cpufreq_add_dev() may fail for some CPUs
during system resume. With the current code we will still have
sysfs cpufreq files for those CPUs and struct cpufreq_policy
would be already freed for them. Hence any operation on those
sysfs files would result in kernel warnings.
Example of problems resulting from resume errors (from Bjørn Mork):
WARNING: CPU: 0 PID: 6055 at fs/sysfs/file.c:343 sysfs_open_file+0x77/0x212()
missing sysfs attribute operations for kobject: (null)
Modules linked in: [stripped as irrelevant]
CPU: 0 PID: 6055 Comm: grep Tainted: G D 3.13.0-rc2 #153
Hardware name: LENOVO 2776LEG/2776LEG, BIOS 6EET55WW (3.15 ) 12/19/2011
0000000000000009 ffff8802327ebb78 ffffffff81380b0e 0000000000000006
ffff8802327ebbc8 ffff8802327ebbb8 ffffffff81038635 0000000000000000
ffffffff811823c7 ffff88021a19e688 ffff88021a19e688 ffff8802302f9310
Call Trace:
[<ffffffff81380b0e>] dump_stack+0x55/0x76
[<ffffffff81038635>] warn_slowpath_common+0x7c/0x96
[<ffffffff811823c7>] ? sysfs_open_file+0x77/0x212
[<ffffffff810386e3>] warn_slowpath_fmt+0x41/0x43
[<ffffffff81182dec>] ? sysfs_get_active+0x6b/0x82
[<ffffffff81182382>] ? sysfs_open_file+0x32/0x212
[<ffffffff811823c7>] sysfs_open_file+0x77/0x212
[<ffffffff81182350>] ? sysfs_schedule_callback+0x1ac/0x1ac
[<ffffffff81122562>] do_dentry_open+0x17c/0x257
[<ffffffff8112267e>] finish_open+0x41/0x4f
[<ffffffff81130225>] do_last+0x80c/0x9ba
[<ffffffff8112dbbd>] ? inode_permission+0x40/0x42
[<ffffffff81130606>] path_openat+0x233/0x4a1
[<ffffffff81130b7e>] do_filp_open+0x35/0x85
[<ffffffff8113b787>] ? __alloc_fd+0x172/0x184
[<ffffffff811232ea>] do_sys_open+0x6b/0xfa
[<ffffffff811233a7>] SyS_openat+0xf/0x11
[<ffffffff8138c812>] system_call_fastpath+0x16/0x1b
To fix this, remove those sysfs files or put the associated kobject
in case of such errors. Also, to make it simple, remove the cpufreq
sysfs links from all the CPUs (except for the policy->cpu) during
suspend, as that operation won't result in a loss of sysfs file
permissions and we can create those links during resume just fine.
Fixes: 5302c3fb2e62 ("cpufreq: Perform light-weight init/teardown during suspend/resume")
Reported-and-tested-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.12+ <stable@vger.kernel.org> # 3.12+
[rjw: Changelog]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The arbitrary restriction on page counts offered by the core
migrate_page_move_mapping() code results in rather suspicious looking
fiddling with page reference counts in the aio_migratepage() operation.
To fix this, make migrate_page_move_mapping() take an extra_count parameter
that allows aio to tell the code about its own reference count on the page
being migrated.
While cleaning up aio_migratepage(), make it validate that the old page
being passed in is actually what aio_migratepage() expects to prevent
misbehaviour in the case of races.
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
|
|
e34ecee2ae791df674dfb466ce40692ca6218e43 reworked the percpu reference
counting to correct a bug trinity found. Unfortunately, the change led
to kioctxes being leaked because there was no final reference count to
put. Add that reference count back in to fix things.
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Cc: stable@vger.kernel.org
|
|
When both the submit_queues param and the use_per_node_hctx param are used,
we limit the number of submit_queues to the number of online nodes.
If submit_queues is a multiple of nr_online_nodes, the mapping is trivial:
simply spread them evenly across the nodes. For example, 8 submit queues are
mapped as node0[0,1], node1[2,3], ...
If it is uneven, we are left with a remainder of submit_queues that must be
mapped. These are mapped toward the first node and onward, as sketched below.
E.g. 5 submit queues mapped onto 4 nodes are mapped as node0[0,1],
node1[2], ...
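The mapping can be expressed roughly like this (a sketch, assuming
0 <= index < submit_queues; the function name is illustrative, not the
driver's exact code):

    /* The first 'rem' nodes take one extra queue when submit_queues is not a
     * multiple of nr_online_nodes.  5 queues on 4 nodes -> node0[0,1],
     * node1[2], node2[3], node3[4]. */
    static int hctx_index_to_node(unsigned int index, unsigned int submit_queues,
                                  unsigned int nr_online_nodes)
    {
        unsigned int per_node = submit_queues / nr_online_nodes;
        unsigned int rem = submit_queues % nr_online_nodes;
        unsigned int boundary = rem * (per_node + 1);

        if (index < boundary)
            return index / (per_node + 1);
        return rem + (index - boundary) / per_node;
    }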
Signed-off-by: Matias Bjorling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The module's default is to instantiate itself with blk-mq and a
submit queue for each CPU node in the system.
To save resources, initialize with a single submit queue instead.
Signed-off-by: Matias Bjorling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Randy Dunlap reported a couple of grammar errors and unfortunate usages of
socket/node/core.
Signed-off-by: Matias Bjorling <m@bjorling.me>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Commit 1bf49dd4be0b ("./Makefile: export initial ramdisk compression
config option") started setting the INITRD_COMPRESS environment variable
depending on which decompression methods the kernel had available.
That is completely broken.
For example, we by default have CONFIG_RD_LZ4 enabled, and are able to
decompress such an initrd, but the user tools to *create* such an initrd
may not be available. So trying to tell dracut to generate an
lz4-compressed image just because we can decode such an image is
completely inappropriate.
Cc: J P <ppandit@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit 597d795a2a78 ('mm: do not allocate page->ptl dynamically, if
spinlock_t fits to long') restructures some allocators that are compiled
even if USE_SPLIT_PTLOCKS isn't set. This results in a compilation
failure:
mm/memory.c:4282:6: error: 'struct page' has no member named 'ptl'
mm/memory.c:4288:12: error: 'struct page' has no member named 'ptl'
Add in the missing ifdef.
Fixes: 597d795a2a78 ('mm: do not allocate page->ptl dynamically, if spinlock_t fits to long')
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Some pstore backing devices use on board flash as persistent
storage. These have a limited number of write cycles, so it
is a poor idea to use them for high-frequency operations.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In struct page we have enough space to fit a long-sized page->ptl there,
but we use a dynamically allocated page->ptl if sizeof(spinlock_t) is larger
than sizeof(int).
It hurts 64-bit architectures with CONFIG_GENERIC_LOCKBREAK, where
sizeof(spinlock_t) == 8, even though it easily fits into struct page.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy") meant
to bring aging fairness among zones in the system, but it was overzealous
and badly regressed basic workloads on NUMA systems.
Due to the way kswapd and page allocator interacts, we still want to
make sure that all zones in any given node are used equally for all
allocations to maximize memory utilization and prevent thrashing on the
highest zone in the node.
While the same principle applies to NUMA nodes - memory utilization is
obviously improved by spreading allocations throughout all nodes -
remote references can be costly and so many workloads prefer locality
over memory utilization. The original change assumed that
zone_reclaim_mode would be a good enough predictor for that, but it
turned out to be as indicative as a coin flip.
Revert the NUMA aspect of the fairness until we can find a proper way to
make it configurable and agree on a sane default.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@kernel.org> # 3.12
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
policy"
This reverts commit 73f038b863df. The NUMA behaviour of this patch is
less than ideal. An alternative approach is to interleave allocations
only within local zones, which is implemented in the next patch.
Cc: stable@vger.kernel.org
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Sasha Levin found a NULL pointer dereference that is due to a missing
page table lock, which in turn is due to the pmd entry in question being
a transparent hugepage entry.
The code - introduced in commit 1998cc048901 ("mm: make
madvise(MADV_WILLNEED) support swap file prefetch") - correctly checks
for this situation using pmd_none_or_trans_huge_or_clear_bad(), but it
turns out that that function doesn't work correctly.
pmd_none_or_trans_huge_or_clear_bad() expected that pmd_bad() would
trigger if the transparent hugepage bit was set, but it doesn't do that
if pmd_numa() is also set. Note that the NUMA bit only gets set on real
NUMA machines, so people trying to reproduce this on most normal
development systems would never actually trigger this.
Fix it by removing the very subtle (and subtly incorrect) expectation,
and instead just checking pmd_trans_huge() explicitly.
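In other words, the helper's check ends up looking something like this (a
sketch of the idea, not the verbatim asm-generic code):

    static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
    {
        pmd_t pmdval = pmd_read_atomic(pmd);

        /* Check for the huge pmd explicitly instead of expecting pmd_bad()
         * to fire on it (it does not when the NUMA bit is also set). */
        if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
            return 1;
        if (unlikely(pmd_bad(pmdval))) {
            pmd_clear_bad(pmd);
            return 1;
        }
        return 0;
    }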
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
[ Additionally remove the now stale test for pmd_trans_huge() inside the
pmd_bad() case - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch adds a check on the output buffer with access_ok(VERIFY_WRITE, ...)
to ensure the whole buffer is in userspace memory before using the
pointer in uverbs functions. If the buffer, or a subset of it, is not
valid, -EFAULT is returned to the caller.
This will also catch an invalid buffer before the final call to
copy_to_user(), which happens late in most uverbs functions.
Just like the check in the read(2) syscall, this is a sanity check to detect
invalid parameters provided by userspace. This particular check was added
in vfs_read() by Linus Torvalds for v2.6.12 with the following commit message:
https://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/?id=fd770e66c9a65b14ce114e171266cf6f393df502
Make read/write always do the full "access_ok()" tests.
The actual user copy will do them too, but only for the
range that ends up being actually copied. That hides
bugs when the range has been clamped by file size or other
issues.
Note: there's no need to check the input buffer since vfs_write() already does
access_ok(VERIFY_READ, ...) as part of the write() syscall.
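Conceptually the added check is just the following (a sketch; the real code
sits in the uverbs write path and uses the command's response pointer and
output length):

    if (!access_ok(VERIFY_WRITE,
                   (void __user *)(unsigned long)cmd.response,
                   out_len))
        return -EFAULT;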
Link: http://marc.info/?i=cover.1387273677.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Since ib_copy_from_udata() doesn't yet check the available input data
length before accessing userspace memory, an explicit check of this
length is required to prevent:
- reading past the user provided buffer,
- underflow when subtracting the expected command size from the input
length.
This will ensure the newly added flow steering uverbs don't try to
process truncated commands.
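The added guard is essentially (sketch):

    if (in_len < sizeof(cmd))
        return -EINVAL;    /* truncated command: don't read past the user buffer */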
Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
If the parsed flow_spec item count does not match the number of items
declared in the flow_attr command, or if not all bytes are used for
flow_spec items (e.g. trailing garbage), a log message is reported and
the function leaves through the error path. Unfortunately, the error
code is currently not set.
This patch sets the error code to -EINVAL in such cases, so that the error
is reported to userspace instead of failing silently.
Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
As noted by Daniel Vetter in his article "Botching up ioctls"[1]
"Check *all* unused fields and flags and all the padding for whether
it's 0, and reject the ioctl if that's not the case. Otherwise
your nice plan for future extensions is going right down the
gutters since someone *will* submit an ioctl struct with random
stack garbage in the yet unused parts. Which then bakes in the ABI
that those fields can never be used for anything else but garbage."
It's important to ensure that reserved fields are set to a known value,
so that it will be possible to use them later to extend the ABI.
The same reasoning applies to the comp_mask field present in newer uverbs
commands: per commit 22878dbc9173 ("IB/core: Better checking of
userspace values for receive flow steering"), unsupported values in
comp_mask are rejected.
[1] http://blog.ffwll.ch/2013/11/botching-up-ioctls.html
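The checks themselves are trivial; roughly (a sketch, the exact field names
depend on the specific uverbs command):

    if (cmd.comp_mask || cmd.reserved)
        return -EINVAL;    /* reject garbage so the bits stay usable for future extensions */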
Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Just like the check added to create_flow in 22878dbc9173 ("IB/core:
Better checking of userspace values for receive flow steering"),
comp_mask must be checked in destroy_flow too.
Since only an empty comp_mask is currently supported, any other value
must be rejected.
This check was silently added in a previous patch[1] that moved comp_mask
into the extended command header, as part of an earlier patchset[2] against
the create/destroy_flow uverbs. The idea of moving comp_mask to the header
was discarded for the final patchset[3].
Unfortunately, the check added to the destroy_flow uverb was not integrated
into the final patchset.
[1] http://marc.info/?i=40175eda10d670d098204da6aa4c327a0171ae5f.1381510045.git.ydroneaud@opteya.com
[2] http://marc.info/?i=cover.1381510045.git.ydroneaud@opteya.com
[3] http://marc.info/?i=cover.1383773832.git.ydroneaud@opteya.com
Cc: Matan Barak <matanb@mellanox.com>
Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
As noted by Daniel Vetter in his article "Botching up ioctls"[1]
"Check *all* unused fields and flags and all the padding for whether
it's 0, and reject the ioctl if that's not the case. Otherwise
your nice plan for future extensions is going right down the
gutters since someone *will* submit an ioctl struct with random
stack garbage in the yet unused parts. Which then bakes in the ABI
that those fields can never be used for anything else but garbage."
It's important to ensure that reserved fields are set to a known value,
so that it will be possible to use them later to extend the ABI.
The same reasoning applies to the comp_mask field present in newer uverbs
commands: per commit 22878dbc9173 ("IB/core: Better checking of
userspace values for receive flow steering"), unsupported values in
comp_mask are rejected.
[1] http://blog.ffwll.ch/2013/11/botching-up-ioctls.html
Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
Trying to use a ternary operator to choose between NULL (or 0) and the
real pointer value at each call site leads to an impossible choice between
a sparse error about a literal 0 used as a NULL pointer, and a gcc
warning about "pointer/integer type mismatch in conditional expression."
Rather than clutter the source with more casts, move the ternary
operator into a new INIT_UDATA_BUF_OR_NULL() macro, which makes it
easier to use and simplifies its callers.
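The macro might look something like this (a from-memory sketch of the idea,
not a verbatim copy of the header):

    #define INIT_UDATA_BUF_OR_NULL(udata, ibuf, obuf, ilen, olen)              \
        do {                                                                    \
            (udata)->inbuf  = (ilen) ? (const void __user *)(ibuf) : NULL;     \
            (udata)->outbuf = (olen) ? (void __user *)(obuf) : NULL;           \
            (udata)->inlen  = (ilen);                                           \
            (udata)->outlen = (olen);                                           \
        } while (0)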
Reported-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
|
|
The missing casts can cause the high 64-bits of the physical blocks to
be lost. Set up new macros which allow us to make sure the right
thing happens, even if at some point we end up supporting larger
logical block numbers.
Thanks to Emese Revfy and the PaX security team for reporting this
issue.
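The new masking macros are along these lines (illustrative; the point is that
the cast widens the mask before it is applied to a 64-bit physical block
number):

    /* Mask out the low bits to get the starting block of the cluster */
    #define EXT4_PBLK_CMASK(s, pblk) ((pblk) &                              \
                                      ~((ext4_fsblk_t) (s)->s_cluster_ratio - 1))
    #define EXT4_LBLK_CMASK(s, lblk) ((lblk) &                              \
                                      ~((ext4_lblk_t) (s)->s_cluster_ratio - 1))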
Reported-by: PaX Team <pageexec@freemail.hu>
Reported-by: Emese Revfy <re.emese@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
|
|
Fixes gfx corruption on certain TN/RL parts.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=60389
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
This patch fixes a possible scsi_host reference leak in qlt_lport_register(),
where a non-zero return from the passed (*callback) does not drop the
local reference via scsi_host_put() before returning.
This currently does not affect existing tcm_qla2xxx code, as the passed callback
will never fail, but fix this up regardless for future code.
Cc: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
lun->lun_ref is also initialized in core_tpg_post_addlun, so it doesn't
need to be done in core_tpg_setup_virtual_lun0.
(nab: Drop left-over percpu_ref_cancel_init in failure path)
Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
Commit 7ea6c6c1 ("Move cper.c from drivers/acpi/apei to
drivers/firmware/efi") results in CONFIG_EFI being enabled even
when the user doesn't want this. Since ACPI APEI used to build
fine without UEFI (and as far as I know also has no functional
dependency on it), at least in that case using a reverse dependency
is wrong (and a straight one isn't needed).
Whether the same is true for ACPI_EXTLOG I don't know - if there
is a functional dependency, it should depend on EFI rather than
selecting it. It certainly has (currently) no build dependency.
Adjust Kconfig and build logic so that the bad dependency gets
avoided.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/52AF1EBC020000780010DBF9@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Linux 3.10 changed the timing of how thread_info->flags is touched:
x86: Use generic idle loop
(7d1a941731fabf27e5fb6edbebb79fe856edb4e5)
This caused Intel NHM-EX and WSM-EX servers to experience a large number
of immediate MONITOR/MWAIT break wakeups, which caused cpuidle to demote
from deep C-states to shallow C-states, which caused these platforms
to experience a significant increase in idle power.
Note that this issue was already present before the commit above,
however, it wasn't seen often enough to be noticed in power measurements.
Here we extend an errata workaround from the Core2 EX "Dunnington"
to NHM-EX and WSM-EX, to prevent these immediate
returns from MWAIT, reducing idle power on these platforms.
While only acpi_idle ran on Dunnington, intel_idle
may also run on these two newer systems.
As of today, there are no other models that are known
to need this tweak.
Link: http://lkml.kernel.org/r/CAJvTdK=%2BaNN66mYpCGgbHGCHhYQAKx-vB0kJSWjVpsNb_hOAtQ@mail.gmail.com
Signed-off-by: Len Brown <len.brown@intel.com>
Link: http://lkml.kernel.org/r/baff264285f6e585df757d58b17788feabc68918.1387403066.git.len.brown@intel.com
Cc: <stable@vger.kernel.org> # 3.12.x, 3.11.x, 3.10.x
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
Freezable kthreads and workqueues are fundamentally problematic in
that they effectively introduce a big kernel lock widely used in the
kernel and have already been the culprit of several deadlock
scenarios. This is the latest occurrence.
During resume, libata rescans all the ports and revalidates all
pre-existing devices. If it determines that a device has gone
missing, the device is removed from the system which involves
invalidating block device and flushing bdi while holding driver core
layer locks. Unfortunately, this can race with the rest of device
resume. Because freezable kthreads and workqueues are thawed after
device resume is complete and block device removal depends on
freezable workqueues and kthreads (e.g. bdi_wq, jbd2) to make
progress, this can lead to deadlock - block device removal can't
proceed because kthreads are frozen and kthreads can't be thawed
because device resume is blocked behind block device removal.
839a8e8660b6 ("writeback: replace custom worker pool implementation
with unbound workqueue") made this particular deadlock scenario more
visible but the underlying problem has always been there - the
original forker task and jbd2 are freezable too. In fact, this is
highly likely just one of many possible deadlock scenarios given that
freezer behaves as a big kernel lock and we don't have any debug
mechanism around it.
I believe the right thing to do is getting rid of freezable kthreads
and workqueues. This is something fundamentally broken. For now,
implement a funny workaround in libata - just avoid doing block device
hot[un]plug while the system is frozen. Kernel engineering at its
finest. :(
v2: Add EXPORT_SYMBOL_GPL(pm_freezing) for cases where libata is built
as a module.
v3: Comment updated and polling interval changed to 10ms as suggested
by Rafael.
v4: Add #ifdef CONFIG_FREEZER around the hack as pm_freezing is not
defined when FREEZER is not configured thus breaking build.
Reported by kbuild test robot.
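The workaround boils down to something like this in the hotplug path (sketch):

    #ifdef CONFIG_FREEZER
        /* Don't tear down block devices while kthreads/workqueues are still
         * frozen; poll until resume has finished thawing everything. */
        while (pm_freezing)
            msleep(10);
    #endif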
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Tomaž Šolc <tomaz.solc@tablix.org>
Reviewed-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=62801
Link: http://lkml.kernel.org/r/20131213174932.GA27070@htj.dyndns.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: stable@vger.kernel.org
Cc: kbuild test robot <fengguang.wu@intel.com>
|
|
Commit 8f34a1da35ae ("arm64: ptrace: use HW_BREAKPOINT_EMPTY type for
disabled breakpoints") fixed an issue with GDB trying to zero breakpoint
control registers. The problem there is that the arch hw_breakpoint code
will attempt to create a (disabled) execute breakpoint of length 0.
This will fail validation and report unexpected failure to GDB. To avoid
this, we treated disabled breakpoints as HW_BREAKPOINT_EMPTY, but that
seems to have broken with recent kernels, causing watchpoints to be
treated as TYPE_INST in the core code and returning ENOSPC for any
further breakpoints.
This patch fixes the problem by prioritising the `enable' field of the
breakpoint: if it is cleared, we simply update the perf_event_attr to
indicate that the thing is disabled and don't bother changing either the
type or the length. This reinforces the behaviour that the breakpoint
control register is essentially read-only apart from the enable bit
when disabling a breakpoint.
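Schematically, the ptrace write path now does something like the following (a
sketch; decode_ctrl_to_attr() is a made-up name standing in for the arch
field-decoding helpers):

    if (!ctrl.enabled) {
        /* leave type/len alone; only mark the perf event disabled */
        attr.disabled = 1;
    } else {
        err = decode_ctrl_to_attr(ctrl, &attr);    /* hypothetical helper */
        if (err)
            return err;
        attr.disabled = 0;
    }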
Cc: <stable@vger.kernel.org>
Reported-by: Aaron Liu <liucy214@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Let the user know when the number of submission queues is being
ignored.
Signed-off-by: Matias Bjorling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Simplify the initialization logic of the three block-layers.
- The queue initialization is split into two parts. This allows reuse of
code when initializing the sq-, bio- and mq-based layers.
- Set submit_queues default value to 0 and always set it at init time.
- Simplify the init error code paths.
Signed-off-by: Matias Bjorling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|