There is a race in the MIPS fork code which allows the child to get a
stale copy of parent MSA/FPU/DSP state that is active in hardware
registers when the fork() is called. This is because copy_thread() saves
the live register state into the child context only if the hardware is
currently in use, apparently on the assumption that the hardware state
cannot have been saved and disabled since the initial duplication of the
task_struct. However preemption is certainly possible during this
window.
An example sequence of events is as follows:
1) The parent userland process puts important data into saved floating
point registers ($f20-$f31), which are then dirty compared to the
process' stored context.
2) The parent process calls fork() which does a clone system call.
3) In the kernel, do_fork() -> copy_process() -> dup_task_struct() ->
arch_dup_task_struct() (which uses the weakly defined default
implementation). This duplicates the parent process' task context,
which includes a stale version of its FP context from when it was
last saved, probably some time before (1).
4) At some point before copy_process() calls copy_thread(), such as when
duplicating the memory map, the process is descheduled. Perhaps it is
preempted asynchronously, or perhaps it sleeps while blocked on a
mutex. The dirty FP state in the FP registers is saved to the parent
process' context and the FPU is disabled.
5) When the process is rescheduled again it continues copying state
until it gets to copy_thread(), which checks whether the FPU is in
use, so that it can copy that dirty state to the child process' task
context. Because of the deschedule however the FPU is not in use, so
the child process' context is left with stale FP context from the
last time the parent saved it (some time before (1)).
6) When the new child process is scheduled it reads the important data
from the saved floating point register, and ends up doing a NULL
pointer dereference as a result of the stale data.
This use of saved floating point registers across function calls can be
triggered fairly easily by explicitly using inline asm with a current
(MIPS R2) compiler, but is far more likely to happen unintentionally
with a MIPS R6 compiler where the FP registers are more likely to get
used as scratch registers for storing non-fp data.
It is easily fixed, in the same way that other architectures do it, by
overriding the implementation of arch_dup_task_struct() to sync the
dirty hardware state to the parent process' task context *prior* to
duplicating it, rather than copying straight to the child process' task
context in copy_thread(). Note, the FPU hardware is not disabled so the
parent process may continue executing with the live register context,
but now the child process is guaranteed to have an identical copy of it
at that point.
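Roughly, the override ends up looking like this (a sketch only; the exact
save helpers and the MSA/DSP handling are assumptions based on the
existing MIPS context-save code):
    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
            /*
             * Flush any live FP/MSA/DSP state back into the parent's
             * task_struct before it is duplicated, without disabling the
             * hardware for the parent.
             */
            preempt_disable();
            if (is_msa_enabled())
                    save_msa(current);
            else if (is_fpu_owner())
                    _save_fp(current);
            save_dsp(current);
            preempt_enable();
            *dst = *src;
            return 0;
    }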
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reported-by: Matthew Fortune <matthew.fortune@imgtec.com>
Tested-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9075/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The following commits:
5890f70f15c52d (MIPS: Use dedicated exception handler if CPU supports RI/XI exceptions)
6575b1d4173eae (MIPS: kernel: cpu-probe: Detect unique RI/XI exceptions)
break the kernel for *all* existing MIPS CPUs that implement the
CP0_PageGrain[IEC] bit. They cause the TLB exception handlers to be
generated without the legacy execute-inhibit handling, but never set
the CP0_PageGrain[IEC] bit to activate the use of dedicated exception
vectors for execute-inhibit exceptions. The result is that upon
detection of an execute-inhibit violation, we loop forever in the TLB
exception handlers instead of sending SIGSEGV to the task.
If we are generating TLB exception handlers expecting separate
vectors, we must also enable the CP0_PageGrain[IEC] feature.
The bug was introduced in kernel version 3.17.
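The fix, in rough form, is something like the following in the TLB setup
path (PG_IEC and the cpu_has_rixiex feature test are assumptions taken
from the commits named above):
    /* If the handlers expect dedicated RI/XI vectors, activate them. */
    if (cpu_has_rixiex)
            write_c0_pagegrain(read_c0_pagegrain() | PG_IEC);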
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: <stable@vger.kernel.org>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/8880/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 842dfc11ea9a ("MIPS: Fix build with binutils 2.24.51+") in v3.18
enabled -msoft-float and sprinkled ".set hardfloat" where necessary to
use FP instructions. However it missed enable_restore_fp_context() which
since v3.17 does a ctc1 with inline assembly, causing the following
assembler errors on Mentor's 2014.05 toolchain:
{standard input}: Assembler messages:
{standard input}:2913: Error: opcode not supported on this processor: mips32r2 (mips32r2) `ctc1 $2,$31'
scripts/Makefile.build:257: recipe for target 'arch/mips/kernel/traps.o' failed
Fix that to use the new write_32bit_cp1_register() macro so that ".set
hardfloat" is automatically added when -msoft-float is in use.
Fixes: 842dfc11ea9a ("MIPS: Fix build with binutils 2.24.51+")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.18+, depends on "MIPS: mipsregs.h: Add write_32bit_cp1_register()"
Patchwork: https://patchwork.linux-mips.org/patch/9173/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add a write_32bit_cp1_register() macro to complement the
read_32bit_cp1_register() macro. This is to abstract whether .set
hardfloat needs to be used based on GAS_HAS_SET_HARDFLOAT.
The implementation of _read_32bit_cp1_register() .sets mips1 due to
failure of gas v2.19 to assemble cfc1 for Octeon (see commit
25c300030016 ("MIPS: Override assembler target architecture for
octeon.")). I haven't copied this over to _write_32bit_cp1_register() as
I'm uncertain whether it applies to ctc1 too, or whether anybody cares
about that version of binutils any longer.
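A sketch of what the write side could look like, mirroring the read
macro (exact whitespace and the STR() helper usage in mipsregs.h may
differ):
    #define _write_32bit_cp1_register(dest, val, gas_hardfloat)    \
    do {                                                            \
            __asm__ __volatile__(                                   \
            "       .set    push                            \n"     \
            "       "STR(gas_hardfloat)"                    \n"     \
            "       ctc1    %0, "STR(dest)"                 \n"     \
            "       .set    pop                             \n"     \
            : : "r" (val));                                         \
    } while (0)

    #ifdef GAS_HAS_SET_HARDFLOAT
    #define write_32bit_cp1_register(dest, val)                     \
            _write_32bit_cp1_register(dest, val, .set hardfloat)
    #else
    #define write_32bit_cp1_register(dest, val)                     \
            _write_32bit_cp1_register(dest, val, )
    #endif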
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9172/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
As printk() invocation can cause e.g. a TLB miss, printk() cannot be
called before the exception handlers have been properly initialized.
This can happen e.g. when netconsole has been loaded as a kernel module
and the TLB table has been cleared when a CPU was offline.
Call cpu_report() in start_secondary() only after the exception handlers
have been initialized to fix this.
Without the patch the kernel will randomly either lockup or crash
after a CPU is onlined and the console driver is a module.
Signed-off-by: Hemmo Nieminen <hemmo.nieminen@iki.fi>
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: stable@vger.kernel.org
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8953/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
octeon_cpu_disable() will unconditionally enable interrupts when called.
We can assume that the routine is always called with interrupts disabled,
so just delete the incorrect local_irq_disable/enable().
The patch fixes the following crash when offlining a CPU:
[ 93.818785] ------------[ cut here ]------------
[ 93.823421] WARNING: CPU: 1 PID: 10 at kernel/smp.c:231 flush_smp_call_function_queue+0x1c4/0x1d0()
[ 93.836215] Modules linked in:
[ 93.839287] CPU: 1 PID: 10 Comm: migration/1 Not tainted 3.19.0-rc4-octeon-los_b5f0 #1
[ 93.847212] Stack : 0000000000000001 ffffffff81b2cf90 0000000000000004 ffffffff81630000
0000000000000000 0000000000000000 0000000000000000 000000000000004a
0000000000000006 ffffffff8117e550 0000000000000000 0000000000000000
ffffffff81b30000 ffffffff81b26808 8000000032c77748 ffffffff81627e07
ffffffff81595ec8 ffffffff81b26808 000000000000000a 0000000000000001
0000000000000001 0000000000000003 0000000010008ce1 ffffffff815030c8
8000000032cbbb38 ffffffff8113d42c 0000000010008ce1 ffffffff8117f36c
8000000032c77300 8000000032cbba50 0000000000000001 ffffffff81503984
0000000000000000 0000000000000000 0000000000000000 0000000000000000
0000000000000000 ffffffff81121668 0000000000000000 0000000000000000
...
[ 93.912819] Call Trace:
[ 93.915273] [<ffffffff81121668>] show_stack+0x68/0x80
[ 93.920335] [<ffffffff81503984>] dump_stack+0x6c/0x90
[ 93.925395] [<ffffffff8113d58c>] warn_slowpath_common+0x94/0xd8
[ 93.931324] [<ffffffff811a402c>] flush_smp_call_function_queue+0x1c4/0x1d0
[ 93.938208] [<ffffffff811a4128>] hotplug_cfd+0xf0/0x108
[ 93.943444] [<ffffffff8115bacc>] notifier_call_chain+0x5c/0xb8
[ 93.949286] [<ffffffff8113d704>] cpu_notify+0x24/0x60
[ 93.954348] [<ffffffff81501738>] take_cpu_down+0x38/0x58
[ 93.959670] [<ffffffff811b343c>] multi_cpu_stop+0x154/0x180
[ 93.965250] [<ffffffff811b3768>] cpu_stopper_thread+0xd8/0x160
[ 93.971093] [<ffffffff8115ea4c>] smpboot_thread_fn+0x1ec/0x1f8
[ 93.976936] [<ffffffff8115ab04>] kthread+0xd4/0xf0
[ 93.981735] [<ffffffff8111c4f0>] ret_from_kernel_thread+0x14/0x1c
[ 93.987835]
[ 93.989326] ---[ end trace c9e3815ee655bda9 ]---
[ 93.993951] Kernel bug detected[#1]:
[ 93.997533] CPU: 1 PID: 10 Comm: migration/1 Tainted: G W 3.19.0-rc4-octeon-los_b5f0 #1
[ 94.006591] task: 8000000032c77300 ti: 8000000032cb8000 task.ti: 8000000032cb8000
[ 94.014081] $ 0 : 0000000000000000 0000000010000ce1 0000000000000001 ffffffff81620000
[ 94.022146] $ 4 : 8000000002c72ac0 0000000000000000 00000000000001a7 ffffffff813b06f0
[ 94.030210] $ 8 : ffffffff813b20d8 0000000000000000 0000000000000000 ffffffff81630000
[ 94.038275] $12 : 0000000000000087 0000000000000000 0000000000000086 0000000000000000
[ 94.046339] $16 : ffffffff81623168 0000000000000001 0000000000000000 0000000000000008
[ 94.054405] $20 : 0000000000000001 0000000000000001 0000000000000001 0000000000000003
[ 94.062470] $24 : 0000000000000038 ffffffff813b7f10
[ 94.070536] $28 : 8000000032cb8000 8000000032cbbc20 0000000010008ce1 ffffffff811bcaf4
[ 94.078601] Hi : 0000000000f188e8
[ 94.082179] Lo : d4fdf3b646c09d55
[ 94.085760] epc : ffffffff811bc9d0 irq_work_run_list+0x8/0xf8
[ 94.091686] Tainted: G W
[ 94.095613] ra : ffffffff811bcaf4 irq_work_run+0x34/0x60
[ 94.101192] Status: 10000ce3 KX SX UX KERNEL EXL IE
[ 94.106235] Cause : 40808034
[ 94.109119] PrId : 000d9301 (Cavium Octeon II)
[ 94.113653] Modules linked in:
[ 94.116721] Process migration/1 (pid: 10, threadinfo=8000000032cb8000, task=8000000032c77300, tls=0000000000000000)
[ 94.127168] Stack : 8000000002c74c80 ffffffff811a4128 0000000000000001 ffffffff81635720
fffffffffffffff2 ffffffff8115bacc 80000000320fbce0 80000000320fbca4
80000000320fbc80 0000000000000002 0000000000000004 ffffffff8113d704
80000000320fbce0 ffffffff81501738 0000000000000003 ffffffff811b343c
8000000002c72aa0 8000000002c72aa8 ffffffff8159cae8 ffffffff8159caa0
ffffffff81650000 80000000320fbbf0 80000000320fbc80 ffffffff811b32e8
0000000000000000 ffffffff811b3768 ffffffff81622b80 ffffffff815148a8
8000000032c77300 8000000002c73e80 ffffffff815148a8 8000000032c77300
ffffffff81622b80 ffffffff815148a8 8000000032c77300 ffffffff81503f48
ffffffff8115ea0c ffffffff81620000 0000000000000000 ffffffff81174d64
...
[ 94.192771] Call Trace:
[ 94.195222] [<ffffffff811bc9d0>] irq_work_run_list+0x8/0xf8
[ 94.200802] [<ffffffff811bcaf4>] irq_work_run+0x34/0x60
[ 94.206036] [<ffffffff811a4128>] hotplug_cfd+0xf0/0x108
[ 94.211269] [<ffffffff8115bacc>] notifier_call_chain+0x5c/0xb8
[ 94.217111] [<ffffffff8113d704>] cpu_notify+0x24/0x60
[ 94.222171] [<ffffffff81501738>] take_cpu_down+0x38/0x58
[ 94.227491] [<ffffffff811b343c>] multi_cpu_stop+0x154/0x180
[ 94.233072] [<ffffffff811b3768>] cpu_stopper_thread+0xd8/0x160
[ 94.238914] [<ffffffff8115ea4c>] smpboot_thread_fn+0x1ec/0x1f8
[ 94.244757] [<ffffffff8115ab04>] kthread+0xd4/0xf0
[ 94.249555] [<ffffffff8111c4f0>] ret_from_kernel_thread+0x14/0x1c
[ 94.255654]
[ 94.257146]
Code: a2423c40 40026000 30420001 <00020336> dc820000 10400037 00000000 0000010f 0000010f
[ 94.267183] ---[ end trace c9e3815ee655bdaa ]---
[ 94.271804] Fatal exception: panic in 5 seconds
Reported-by: Hemmo Nieminen <hemmo.nieminen@iki.fi>
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Acked-by: David Daney <david.daney@cavium.com>
Cc: stable@vger.kernel.org # v3.18+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8952/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
[...]
struct component {
^
In file included from ./arch/mips/include/asm/sn/klconfig.h:58:0,
from ./arch/mips/include/asm/sn/module.h:12,
from ./arch/mips/include/asm/sn/node.h:8,
from ./arch/mips/include/asm/mach-ip35/mmzone.h:4,
from ./arch/mips/include/asm/mmzone.h:9,
from ./arch/mips/include/asm/mach-ip35/topology.h:7,
from ./arch/mips/include/asm/topology.h:11,
from include/linux/topology.h:35,
from include/linux/gfp.h:8,
from include/linux/device.h:29,
from drivers/base/component.c:14:
./arch/mips/include/asm/fw/arc/hinv.h:122:16: note: originally defined here
typedef struct component {
^
make[2]: *** [drivers/base/component.o] Error 1
make[2]: Target `__build' not remade because of errors.
make[1]: *** [drivers/base] Error 2
make[1]: Target `__build' not remade because of errors.
Fix by using a nameless struct definition in the COMPONENT definition,
which is what the ARC spec uses anyway. While at it, do the same thing
for two other typedefs.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
If the irq_chip does not define .irq_disable, any call to disable_irq
will defer disabling the IRQ until it fires while marked as disabled.
This assumes that the handler function checks for this condition, which
handle_percpu_irq does not. In this case, calling disable_irq leads to
an IRQ storm, if the interrupt fires while disabled.
This optimization is only useful when disabling the IRQ is slow, which
is not true for the MIPS CPU IRQ.
Disable this optimization by implementing .irq_disable and .irq_enable.
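A sketch of the change, assuming the existing mask_mips_irq() and
unmask_mips_irq() helpers in arch/mips/kernel/irq_cpu.c are reused
directly:
    static struct irq_chip mips_cpu_irq_controller = {
            .name           = "MIPS",
            .irq_ack        = mask_mips_irq,
            .irq_mask       = mask_mips_irq,
            .irq_mask_ack   = mask_mips_irq,
            .irq_unmask     = unmask_mips_irq,
            .irq_eoi        = unmask_mips_irq,
            /* disable/enable take effect immediately instead of lazily */
            .irq_disable    = mask_mips_irq,
            .irq_enable     = unmask_mips_irq,
    };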
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8949/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 18743d2781d0 ("irqchip: mips-gic: Stop using per-platform mapping
tables") in v3.19-rc1 changed the routing of IPIs through the GIC to go
to the HW0 IRQ pin along with the rest of the GIC interrupts, rather
than to HW1 and HW2 pins.
This breaks SMP boot using the CMP or MT SMP implementations because HW0
doesn't get unmasked when secondary CPUs are initialised so the IPIs
will never interrupt secondary CPUs (nor any other interrupts routed
through the GIC).
Commit ff1e29ade4c6 ("MIPS: smp-cps: Enable all hardware interrupts on
secondary CPUs") fixed this in advance for the CPS SMP implementation by
unmasking all hardware interrupt lines for secondary CPUs, so let's do
the same for the CMP and MT implementations.
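For the CMP and MT secondary bring-up that amounts to roughly the
following (a sketch mirroring the CPS change):
    /* Unmask all hardware interrupt lines (IP0..IP7) on this CPU. */
    change_c0_status(ST0_IM, STATUSF_IP0 | STATUSF_IP1 | STATUSF_IP2 |
                             STATUSF_IP3 | STATUSF_IP4 | STATUSF_IP5 |
                             STATUSF_IP6 | STATUSF_IP7);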
Fixes: 18743d2781d0 ("irqchip: mips-gic: Stop using per-platform mapping tables")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9025/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When 32-bit MIPS userspace invokes a syscall indirectly via syscall(number,
arg1, ..., arg7), the kernel looks up the actual syscall based on the given
number, shifts the other arguments to the left, and jumps to the syscall.
If the syscall is interrupted by a signal and indicates it needs to be
restarted by the kernel (by returning ERESTARTNOINTR for example), the
syscall must be called directly, since the number is no longer the first
argument, and the other arguments are now staged for a direct call.
Before shifting the arguments, store the syscall number in pt_regs->regs[2].
This gets copied temporarily into pt_regs->regs[0] after the syscall returns.
If the syscall needs to be restarted, handle_signal()/do_signal() copies the
number back to pt_regs->regs[2], which ends up in $v0 once control returns to
userspace.
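The restart path this enables looks roughly like the following
(simplified from the MIPS signal code; the exact register bookkeeping is
an approximation):
    if (regs->regs[0]) {            /* syscall number was saved at entry */
            switch (regs->regs[2]) {
            case ERESTARTNOINTR:
                    regs->regs[2] = regs->regs[0];  /* real number back in $v0 */
                    regs->cp0_epc -= 4;             /* re-execute the syscall */
                    break;
            }
            regs->regs[0] = 0;      /* don't restart twice */
    }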
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8929/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 90cee759f08a ("MIPS: ELF: Set FP mode according to .MIPS.abiflags")
introduced checking of the .MIPS.abiflags ELF section but did so through
the native sized "elfhdr" and "elf_phdr" structures regardless whether the
ELF was actually 32-bit or 64-bit. This produces wrong results when trying
to use a 64-bit kernel to load o32 ELF files.
Change the uses of the generic elf structures to their 32-bit versions.
Since the code bails out on any 64-bit cases, this is OK until they are
implemented.
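A sketch of the type change in arch/mips/kernel/elf.c (the function
shape is assumed from the description):
    int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
                         bool is_interp, struct arch_elf_state *state)
    {
            /* o32 ELF files carry 32-bit headers, so use the elf32 types */
            struct elf32_hdr *ehdr = _ehdr;
            struct elf32_phdr *phdr = _phdr;

            /* ... the remaining checks read ehdr/phdr fields as before ... */
    }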
Fixes: 90cee759f08a ("MIPS: ELF: Set FP mode according to .MIPS.abiflags")
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Maciej W. Rozycki <macro@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/8932/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Sparse emits a bunch of warnings in mips-cm.h due to casting away of
__iomem by the addr_gcr_*() functions:
arch/mips/include/asm/mips-cm.h:134:1: warning: cast removes address space of expression
And subsequent passing of the return values to __raw_readl() and
__raw_writel() in the read_gcr_*() and write_gcr_*() functions:
arch/mips/include/asm/mips-cm.h:134:1: warning: incorrect type in argument 2 (different address spaces)
arch/mips/include/asm/mips-cm.h:134:1: expected void volatile [noderef] <asn:2>*mem
arch/mips/include/asm/mips-cm.h:134:1: got unsigned int [usertype] *
arch/mips/include/asm/mips-cm.h:134:1: warning: incorrect type in argument 1 (different address spaces)
arch/mips/include/asm/mips-cm.h:134:1: expected void const volatile [noderef] <asn:2>*mem
arch/mips/include/asm/mips-cm.h:134:1: got unsigned int [usertype] *
Fix by adding __iomem to the addr_gcr_*() return type and cast.
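The accessor shape with the annotation added is roughly (macro name and
layout approximate):
    #define BUILD_CM_R_(name, off)                                  \
    static inline u32 __iomem *addr_gcr_##name(void)                \
    {                                                               \
            return (u32 __iomem *)(mips_cm_base + (off));           \
    }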
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8874/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
[...]
HOSTCC scripts/kconfig/zconf.tab.o
HOSTLD scripts/kconfig/conf
arch/mips/Kconfig:2681:error: recursive dependency detected!
arch/mips/Kconfig:2681: symbol MIPS32_N32 depends on MIPS32_COMPAT
arch/mips/Kconfig:2658: symbol MIPS32_COMPAT is selected by MIPS32_N32
Introduced by d74473bdf7a4c1ef7ae2b75f585fe5649ac2dcea (MIPS: Compat: Fix
build error if CONFIG_MIPS32_COMPAT but no compat ABI.)
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
In that case none of the __NR_seccomp_*_32 symbols will be defined in
<asm/unistd.h>, so the attempt to use them in kernel/seccomp.c will fail
with:
kernel/seccomp.c:565:2: error: '__NR_seccomp_read_32' undeclared here (not in a function)
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
^
kernel/seccomp.c:565:24: error: '__NR_seccomp_write_32' undeclared here (not in a function)
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
^
kernel/seccomp.c:565:47: error: '__NR_seccomp_exit_32' undeclared here (not in a function)
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
^
kernel/seccomp.c:565:69: error: '__NR_seccomp_sigreturn_32' undeclared here (not in a function)
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
Solved by changing the compat ABIs in kconfig to select MIPS32_COMPAT
directly. This also means the user no longer has to select MIPS32_COMPAT
before being able to see the ABI options.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Fixes sparse warnings:
arch/mips/jz4740/irq.c:63:6: warning: symbol 'jz4740_irq_suspend' was not declared. Should it be static?
arch/mips/jz4740/irq.c:69:6: warning: symbol 'jz4740_irq_resume' was not declared. Should it be static?
Also, I've seen some elusive build errors on my automated build test
where JZ4740_IRQ_BASE and NR_IRQS are missing, but I can't reproduce
them manually for some reason. Anyway, mach-jz4740/irq.h should help us
avoid relying on some implicit include.
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8724/
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
In particular the use of the antiquated PIX PATA drivers was a nuisance,
since most userland has switched to the new /dev/sda drivers, as was the
lack of EXT4 support.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 4227a2d4efc9c84f35826dc4d1e6dc183f6c1c05 (MIPS: Support for hybrid
FPRs) changes the kernel to execute read_c0_config5() even on processors
that don't have a Config5 register. According to the arch spec the
behaviour of trying to read or write this register is UNDEFINED where this
register doesn't exist, that is, merely looking at this register is
already cruel because it might kill a kitten.
In the case of Qemu older than v2.2, Qemu has elected to implement this
UNDEFINED behaviour by taking an RI exception - which then fries the
kernel:
[...]
Freeing YAMON memory: 956k freed
Freeing unused kernel memory: 240K (80674000 - 806b0000)
Reserved instruction in kernel code[#1]:
CPU: 0 PID: 1 Comm: init Not tainted 3.18.0-rc6-00058-g4227a2d #26
task: 86047588 ti: 86048000 task.ti: 86048000
$ 0 : 00000000 77a638cc 00000000 00000000
[...]
For qemu v2.2.0 commit f31b035a9f10dc9b57f01c426110af845d453ce2
(target-mips: correctly handle access to unimplemented CP0 register)
changed the behaviour to returning zero on read and ignoring writes
which more matches how typical hardware implementations actually behave.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
|
|
Fix for BUG_ON(anon_vma->degree) splashes in unlink_anon_vmas() ("kernel
BUG at mm/rmap.c:399!") caused by commit 7a3ef208e662 ("mm: prevent
endless growth of anon_vma hierarchy").
Anon_vma_clone() is usually called with a copy of the source vma as the
destination argument. If the source vma has an anon_vma it should already
be in dst->anon_vma. A NULL dst->anon_vma is used as a sign that the
function is called from anon_vma_fork(). In that case anon_vma_clone()
finds an anon_vma to reuse.
Vma_adjust() calls it differently and this breaks anon_vma reusing
logic: anon_vma_clone() links vma to old anon_vma and updates degree
counters but vma_adjust() overrides vma->anon_vma right after that. As
a result final unlink_anon_vmas() decrements degree for wrong anon_vma.
This patch assigns ->anon_vma before calling anon_vma_clone().
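In sketch form, the ordering in vma_adjust() becomes:
    if (exporter && exporter->anon_vma && !importer->anon_vma) {
            /* assign first, so anon_vma_clone() sees the final anon_vma */
            importer->anon_vma = exporter->anon_vma;
            if (anon_vma_clone(importer, exporter))
                    return -ENOMEM;
    }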
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Reported-and-tested-by: Chris Clayton <chris2553@googlemail.com>
Reported-and-tested-by: Oded Gabbay <oded.gabbay@amd.com>
Reported-and-tested-by: Chih-Wei Huang <cwhuang@android-x86.org>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Daniel Forrest <dan.forrest@ssec.wisc.edu>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: stable@vger.kernel.org # to match back-porting of 7a3ef208e662
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit fee7e49d4514 ("mm: propagate error from stack expansion even for
guard page") made sure that we return the error properly for stack
growth conditions. It also theorized that counting the guard page
towards the stack limit might break something, but also said "Let's see
if anybody notices".
Somebody did notice. Apparently android-x86 sets the stack limit very
close to the limit indeed, and including the guard page in the rlimit
check causes the android 'zygote' process problems.
So this adds the (fairly trivial) code to make the stack rlimit check be
against the actual real stack size, rather than the size of the vma that
includes the guard page.
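In sketch form, the check in acct_stack_growth() becomes something like
(variable names are approximations):
    /* do not count the guard page against RLIMIT_STACK */
    actual_size = size;
    if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
            actual_size -= PAGE_SIZE;
    if (actual_size > rlim[RLIMIT_STACK].rlim_cur)
            return -ENOMEM;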
Reported-and-tested-by: Chih-Wei Huang <cwhuang@android-x86.org>
Cc: Jay Foad <jay.foad@gmail.com>
Cc: stable@kernel.org # to match back-porting of fee7e49d4514
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In v3.19-rc3 tree when CONFIG_ARM_LPAE and CONFIG_DEBUG_RODATA are enabled
image failed to compile with the following error:
arch/arm/mm/init.c:661:14: error: ‘PMD_SECT_RDONLY’ undeclared here (not in a function)
It seems that '80d6b0c ARM: mm: allow text and rodata sections to be read-only'
and 'ded9477 ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE'
commits crossed. 80d6b0c uses PMD_SECT_RDONLY macro but ded9477 renames it
and uses software bits L_PMD_SECT_RDONLY instead.
Fix is to use L_PMD_SECT_RDONLY instead of PMD_SECT_RDONLY, as ded9477
does in other places.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This is a static checker fix. We write some binary settings to the
sysfs file. One of the settings is the "->startup_profile". There
isn't any checking to make sure it fits into the
pyra->profile_settings[] array in the profile_activated() function.
I added a check to pyra_sysfs_write_settings() in both places because
I wasn't positive that the other callers were correct.
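The added validation is roughly (struct and field names as used by the
driver; treat this as a sketch):
    if (off != 0 || count != sizeof(struct pyra_settings))
            return -EINVAL;

    settings = (struct pyra_settings const *)buf;
    if (settings->startup_profile >= ARRAY_SIZE(pyra->profile_settings))
            return -EINVAL;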
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
Currently if DEBUG_MUTEXES is enabled, the mutex->owner field is only
cleared iff debug_locks is active. This exposes a race to other users of
the field where the mutex->owner may be still set to a stale value,
potentially upsetting mutex_spin_on_owner() among others.
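In sketch form, the unlock-side debug path becomes:
    void debug_mutex_unlock(struct mutex *lock)
    {
            if (likely(debug_locks)) {
                    /* the usual magic/owner sanity checks stay here */
            }

            /* clear the owner even when debug_locks has been turned off */
            mutex_clear_owner(lock);
    }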
References: https://bugs.freedesktop.org/show_bug.cgi?id=87955
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1420540175-30204-1-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
When alloc_fair_sched_group() in sched_create_group() fails,
free_sched_group() is called, and free_fair_sched_group() is called by
free_sched_group(). Since destroy_cfs_bandwidth() is called by
free_fair_sched_group() without calling init_cfs_bandwidth(),
RCU stall occurs at hrtimer_cancel():
INFO: rcu_sched self-detected stall on CPU { 1} (t=60000 jiffies g=13074 c=13073 q=0)
Task dump for CPU 1:
(fprintd) R running task 0 6249 1 0x00000088
...
Call Trace:
<IRQ> [<ffffffff81094988>] sched_show_task+0xa8/0x110
[<ffffffff81097acd>] dump_cpu_task+0x3d/0x50
[<ffffffff810c3a80>] rcu_dump_cpu_stacks+0x90/0xd0
[<ffffffff810c7751>] rcu_check_callbacks+0x491/0x700
[<ffffffff810cbf2b>] update_process_times+0x4b/0x80
[<ffffffff810db046>] tick_sched_handle.isra.20+0x36/0x50
[<ffffffff810db0a2>] tick_sched_timer+0x42/0x70
[<ffffffff810ccb19>] __run_hrtimer+0x69/0x1a0
[<ffffffff810db060>] ? tick_sched_handle.isra.20+0x50/0x50
[<ffffffff810ccedf>] hrtimer_interrupt+0xef/0x230
[<ffffffff810452cb>] local_apic_timer_interrupt+0x3b/0x70
[<ffffffff8164a465>] smp_apic_timer_interrupt+0x45/0x60
[<ffffffff816485bd>] apic_timer_interrupt+0x6d/0x80
<EOI> [<ffffffff810cc588>] ? lock_hrtimer_base.isra.23+0x18/0x50
[<ffffffff81193cf1>] ? __kmalloc+0x211/0x230
[<ffffffff810cc9d2>] hrtimer_try_to_cancel+0x22/0xd0
[<ffffffff81193cf1>] ? __kmalloc+0x211/0x230
[<ffffffff810ccaa2>] hrtimer_cancel+0x22/0x30
[<ffffffff810a3cb5>] free_fair_sched_group+0x25/0xd0
[<ffffffff8108df46>] free_sched_group+0x16/0x40
[<ffffffff810971bb>] sched_create_group+0x4b/0x80
[<ffffffff810aa383>] sched_autogroup_create_attach+0x43/0x1c0
[<ffffffff8107dc9c>] sys_setsid+0x7c/0x110
[<ffffffff81647729>] system_call_fastpath+0x12/0x17
Check whether init_cfs_bandwidth() was called before calling
destroy_cfs_bandwidth().
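A sketch of the check (the throttled_cfs_rq.next sentinel is an
assumption about what init_cfs_bandwidth() sets up):
    static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
    {
            /* init_cfs_bandwidth() was never called for this group */
            if (!cfs_b->throttled_cfs_rq.next)
                    return;

            hrtimer_cancel(&cfs_b->period_timer);
            hrtimer_cancel(&cfs_b->slack_timer);
    }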
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
[ Move the check into destroy_cfs_bandwidth() to aid compilability. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/201412252210.GCC30204.SOMVFFOtQJFLOH@I-love.SAKURA.ne.jp
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The dl_runtime_exceeded() function is supposed to check if
a SCHED_DEADLINE task must be throttled, by checking if its
current runtime is <= 0. However, it also checks if the
scheduling deadline has been missed (the current time is
larger than the current scheduling deadline), further
decreasing the runtime if this happens.
This "double accounting" is wrong:
- In case of partitioned scheduling (or single CPU), this
happens if task_tick_dl() has been called later than expected
(due to small HZ values). In this case, the current runtime is
also negative, and replenish_dl_entity() can take care of the
deadline miss by recharging the current runtime to a value smaller
than dl_runtime
- In case of global scheduling on multiple CPUs, scheduling
deadlines can be missed even if the task did not consume more
runtime than expected, hence penalizing the task is wrong
This patch fixes this problem by throttling a SCHED_DEADLINE task
only when its runtime becomes negative, without otherwise modifying the
runtime.
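After the change the check reduces to something like (the rq parameter
is kept only to match the existing signature):
    static int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
    {
            return (dl_se->runtime <= 0);
    }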
Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@gmail.com>
Cc: <stable@vger.kernel.org>
Cc: Dario Faggioli <raistlin@linux.it>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1418813432-20797-3-git-send-email-luca.abeni@unitn.it
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
According to global EDF, tasks should be migrated between runqueues
without checking if their scheduling deadlines and runtimes are valid.
However, SCHED_DEADLINE currently performs such a check:
a migration happens doing:
    deactivate_task(rq, next_task, 0);
    set_task_cpu(next_task, later_rq->cpu);
    activate_task(later_rq, next_task, 0);
which ends up calling dequeue_task_dl(), setting the new CPU, and then
calling enqueue_task_dl().
enqueue_task_dl() then calls enqueue_dl_entity(), which calls
update_dl_entity(), which can modify scheduling deadline and runtime,
breaking global EDF scheduling.
As a result, some of the properties of global EDF are not respected:
for example, a taskset {(30, 80), (40, 80), (120, 170)} scheduled on
two cores can have unbounded response times for the third task even
if 30/80+40/80+120/170 = 1.5809 < 2
This can be fixed by invoking update_dl_entity() only in case of
wakeup, or if this is a new SCHED_DEADLINE task.
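In sketch form, the condition in enqueue_dl_entity() becomes:
    if (dl_se->dl_new || flags & ENQUEUE_WAKEUP)
            update_dl_entity(dl_se, pi_se);
    else if (flags & ENQUEUE_REPLENISH)
            replenish_dl_entity(dl_se, pi_se);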
Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@gmail.com>
Cc: <stable@vger.kernel.org>
Cc: Dario Faggioli <raistlin@linux.it>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1418813432-20797-2-git-send-email-luca.abeni@unitn.it
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
In effective_load, we have (long w * unsigned long tg->shares) / long W;
when w is negative, it is cast to unsigned long and hence the product is
insanely large. Fix this by casting tg->shares to long.
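That is, the offending expression becomes roughly:
    /* cast tg->shares to long so a negative w stays negative */
    if (w)
            wl = (w * (long)tg->shares) / W;
    else
            wl = tg->shares;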
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141219002956.GA25405@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
As per e23738a7300a ("sched, inotify: Deal with nested sleeps").
fanotify_read is a wait loop with sleeps in. Wait loops rely on
task_struct::state and sleeps do too, since that's the only means of
actually sleeping. Therefore the nested sleeps destroy the wait loop
state and the wait loop breaks the sleep functions that assume
TASK_RUNNING (mutex_lock).
Fix this by using the new woken_wake_function and wait_woken() stuff,
which registers wakeups in wait and thereby allows shrinking the
task_struct::state changes to the actual sleep part.
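A sketch of the converted read loop (helper names as in
fs/notify/fanotify/fanotify_user.c; details elided):
    DEFINE_WAIT_FUNC(wait, woken_wake_function);

    add_wait_queue(&group->notification_waitq, &wait);
    while (1) {
            kevent = get_one_event(group, count);
            if (kevent)
                    break;                  /* copy it to userspace, etc. */
            if (signal_pending(current)) {
                    ret = -ERESTARTSYS;
                    break;
            }
            /* sleep without open-coding current->state changes */
            wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
    }
    remove_wait_queue(&group->notification_waitq, &wait);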
Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric Paris <eparis@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Paris <eparis@redhat.com>
Link: http://lkml.kernel.org/r/20141216152838.GZ3337@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
There was another report of a boot failure with a #GP fault in the
uncore SBOX initialization. The earlier work around was not enough
for this system.
The boot was failing while trying to initialize the third SBOX.
This patch detects parts with only two SBOXes and limits the number
of SBOX units to two there.
Stable material, as it affects boot problems on 3.18.
Tested-by: Andreas Oehler <andreas@oehler-net.de>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1420583675-9163-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Perf reports user regs for kernel-mode samples so that samples can
be backtraced through user code. The old code was very broken in
syscall context, resulting in useless backtraces.
The new code, in contrast, is still dangerously racy, but it should
at least work most of the time.
Tested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: chenggang.qcg@taobao.com
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/243560c26ff0f739978e2459e203f6515367634d.1420396372.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
On x86_64, at least, task_pt_regs may be only partially initialized
in many contexts, so x86_64 should not use it without extra care
from interrupt context, let alone NMI context.
This will allow x86_64 to override the logic and will supply some
scratch space to use to make a cleaner copy of user regs.
Tested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: chenggang.qcg@taobao.com
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Stephane reported that the PEBS fixup was broken by the recent commit to
the instruction decoder. The thing had an off-by-one which resulted in
not being able to decode the last instruction and always bail.
Reported-by: Stephane Eranian <eranian@google.com>
Fixes: 6ba48ff46f76 ("x86: Remove arbitrary instruction size limit in instruction decoder")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org # 3.18
Cc: <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Liang Kan <kan.liang@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20141216104614.GV3337@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Charles Shirron and Paul Cassella from Cray Inc have reported kswapd
stuck in a busy loop with nothing left to balance, but
kswapd_try_to_sleep() failing to sleep. Their analysis found the cause
to be a combination of several factors:
1. A process is waiting in throttle_direct_reclaim() on pgdat->pfmemalloc_wait
2. The process has been killed (by OOM in this case), but has not yet been
scheduled to remove itself from the waitqueue and die.
3. kswapd checks for throttled processes in prepare_kswapd_sleep():
    if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
            wake_up(&pgdat->pfmemalloc_wait);
            return false; // kswapd will not go to sleep
    }
However, for a process that was already killed, wake_up() does not remove
the process from the waitqueue, since try_to_wake_up() checks its state
first and returns false when the process is no longer waiting.
4. kswapd is running on the same CPU as the only CPU that the process is
allowed to run on (through cpus_allowed, or possibly single-cpu system).
5. CONFIG_PREEMPT_NONE=y kernel is used. If there's nothing to balance, kswapd
encounters no voluntary preemption points and repeatedly fails
prepare_kswapd_sleep(), blocking the process from running and removing
itself from the waitqueue, which would let kswapd sleep.
So, the source of the problem is that we prevent kswapd from going to
sleep until there are processes waiting on the pfmemalloc_wait queue,
and a process waiting on a queue is guaranteed to be removed from the
queue only when it gets scheduled. This was done to make sure that no
process is left sleeping on pfmemalloc_wait when kswapd itself goes to
sleep.
However, it isn't necessary to postpone kswapd sleep until the
pfmemalloc_wait queue actually empties. To prevent processes from being
left sleeping, it's actually enough to guarantee that all processes
waiting on pfmemalloc_wait queue have been woken up by the time we put
kswapd to sleep.
This patch therefore fixes this issue by substituting 'wake_up' with
'wake_up_all' and removing 'return false' in the code snippet from
prepare_kswapd_sleep() above. Note that if any process puts itself in
the queue after this waitqueue_active() check, or after the wake up
itself, it means that the process will also wake up kswapd - and since
we are under prepare_to_wait(), the wake up won't be missed. Also, we
update the comment in prepare_kswapd_sleep() to hopefully describe the
races it is preventing more clearly.
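After the change the snippet quoted in (3) reads, roughly:
    if (waitqueue_active(&pgdat->pfmemalloc_wait))
            wake_up_all(&pgdat->pfmemalloc_wait);
    /* no early "return false" any more: kswapd may still go to sleep */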
Fixes: 5515061d22f0 ("mm: throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org> [3.6+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
We are supposed to take one css reference per each memory page and per
each swap entry accounted to a memory cgroup. However, during task
charges migration we take a reference to the destination cgroup twice
per each swap entry: first in mem_cgroup_do_precharge()->try_charge()
and then in mem_cgroup_move_swap_account(), permanently leaking the
destination cgroup.
The hunk taking the second reference seems to be a leftover from the
pre-00501b531c472 ("mm: memcontrol: rewrite charge API") era. Remove it
to fix the leak.
Fixes: e8ea14cc6ead ("mm: memcontrol: take a css reference for each charged page")
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit 3e32cb2e0a12 ("mm: memcontrol: lockless page counters")
accidentally switched the soft limit default from infinity to zero,
which turns all memcgs with even a single page into soft limit excessors
and engages soft limit reclaim on all of them during global memory
pressure. This makes global reclaim generally more aggressive, but also
inverts the meaning of existing soft limit configurations where unset
soft limits are usually more generous than set ones.
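Presumably the fix restores the old default, along the lines of:
    /* an unset soft limit means "unlimited" again */
    memcg->soft_limit = PAGE_COUNTER_MAX;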
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
These are obsolete since commit e30825f1869a ("mm/debug-pagealloc:
prepare boottime configurable") was merged. So remove them.
[pebolle@tiscali.nl: find obsolete Kconfig options]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Paul Bolle <pebolle@tiscali.nl>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Dave Hansen <dave@sr71.net>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Jungsoo Son <jungsoo.son@lge.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Fix clashing values for O_PATH and FMODE_NONOTIFY on sparc. The
clashing O_PATH value was added in commit 5229645bdc35 ("vfs: add
nonconflicting values for O_PATH") but this can't be changed as it is
user-visible.
FMODE_NONOTIFY is only used internally in the kernel, but it is in the
same numbering space as the other O_* flags, as indicated by the comment
at the top of include/uapi/asm-generic/fcntl.h (and its use in
fs/notify/fanotify/fanotify_user.c). So renumber it to avoid the clash.
All of this has happened before (commit 12ed2e36c98a: "fanotify:
FMODE_NONOTIFY and __O_SYNC in sparc conflict"), and all of this will
happen again -- so update the uniqueness check in fcntl_init() to
include __FMODE_NONOTIFY.
Signed-off-by: David Drysdale <drysdale@google.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Jan Kara <jack@suse.cz>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Eric Paris <eparis@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
arch/blackfin/mach-bf533/boards/stamp.c:834:2: error: implicit declaration of function 'mdelay'
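Presumably the missing piece is just the usual include (inferred from
the error alone):
    #include <linux/delay.h>        /* for mdelay() */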
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In ocfs2_link(), the parent directory inode passed to function
ocfs2_lookup_ino_from_name() is wrong. Parameter dir is the parent of
new_dentry, not old_dentry. We should get old_dir from old_dentry and
look up old_dentry in old_dir, in case another node removes the old dentry.
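The corrected lookup is then, roughly (the output variable name is an
assumption):
    /* look the source name up in its own parent, not in new_dentry's dir */
    err = ocfs2_lookup_ino_from_name(old_dir, old_dentry->d_name.name,
                                     old_dentry->d_name.len, &old_de_ino);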
With this change, hard linking works again, when paths are relative with
at least one subdirectory. This is how the problem was reproducable:
# mkdir a
# mkdir b
# touch a/test
# ln a/test b/test
ln: failed to create hard link `b/test' => `a/test': No such file or directory
However when creating links in the same dir, it worked well.
Now the link gets created.
Fixes: 0e048316ff57 ("ocfs2: check existence of old dentry in ocfs2_link()")
Signed-off-by: joyce.xue <xuejiufei@huawei.com>
Reported-by: Szabo Aron - UBIT <aron@ubit.hu>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Tested-by: Aron Szabo <aron@ubit.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
My ISP finally gave up on the old mail address, so I am moving things
over to bitmath.org instead. Also change the status fields to better
reflect reality.
Signed-off-by: Henrik Rydberg <rydberg@bitmath.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Tejun, while reviewing the code, spotted the following race condition
between the dirtying and truncation of a page:
__set_page_dirty_nobuffers()       __delete_from_page_cache()
  if (TestSetPageDirty(page))
                                     page->mapping = NULL
                                     if (PageDirty())
                                       dec_zone_page_state(page, NR_FILE_DIRTY);
                                       dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
  if (page->mapping)
    account_page_dirtied(page)
      __inc_zone_page_state(page, NR_FILE_DIRTY);
      __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
which results in an imbalance of NR_FILE_DIRTY and BDI_RECLAIMABLE.
Dirtiers usually lock out truncation, either by holding the page lock
directly, or in case of zap_pte_range(), by pinning the mapcount with
the page table lock held. The notable exception to this rule, though,
is do_wp_page(), for which this race exists. However, do_wp_page()
already waits for a locked page to unlock before setting the dirty bit,
in order to prevent a race where clear_page_dirty() misses the page bit
in the presence of dirty ptes. Upgrade that wait to a fully locked
set_page_dirty() to also cover the situation explained above.
Afterwards, the code in set_page_dirty() dealing with a truncation race
is no longer needed. Remove it.
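In sketch form, the do_wp_page() path becomes (variable names
approximate):
    lock_page(dirty_page);          /* was: wait_on_page_locked(dirty_page) */
    set_page_dirty(dirty_page);     /* now fully locked against truncation */
    unlock_page(dirty_page);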
Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Constantly forking task causes unlimited grow of anon_vma chain. Each
next child allocates new level of anon_vmas and links vma to all
previous levels because pages might be inherited from any level.
This patch adds heuristic which decides to reuse existing anon_vma
instead of forking new one. It adds counter anon_vma->degree which
counts linked vmas and directly descending anon_vmas and reuses anon_vma
if counter is lower than two. As a result each anon_vma has either vma
or at least two descending anon_vmas. In such trees half of the nodes are
leaves with live vmas, thus the count of anon_vmas is no more than twice
the count of vmas.
This heuristic reuses anon_vmas as little as possible, because each reuse
adds false aliasing among vmas and the rmap walker then has to scan more
ptes when it searches where a page might be mapped.
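The reuse check added to anon_vma_clone() looks roughly like this:
    /* reuse an existing anon_vma if it is barely populated */
    if (!dst->anon_vma && anon_vma != src->anon_vma &&
        anon_vma->degree < 2)
            dst->anon_vma = anon_vma;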
Link: http://lkml.kernel.org/r/20120816024610.GA5350@evergreen.ssec.wisc.edu
Fixes: 5beb49305251 ("mm: change anon_vma linking to fix multi-process server scalability issue")
[akpm@linux-foundation.org: fix typo, per Rik]
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Reported-by: Daniel Forrest <dan.forrest@ssec.wisc.edu>
Tested-by: Michal Hocko <mhocko@suse.cz>
Tested-by: Jerome Marchand <jmarchan@redhat.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org> [2.6.34+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
wait_consider_task() checks EXIT_ZOMBIE after EXIT_DEAD/EXIT_TRACE and
both checks can fail if we race with an EXIT_ZOMBIE -> EXIT_DEAD/EXIT_TRACE
change in between and gcc reloads p->exit_state after
security_task_wait(). In this case ->notask_error will be wrongly
cleared and do_wait() can hang forever if it was the last eligible
child.
Many thanks to Arne who carefully investigated the problem.
Note: this bug is very old but it was pure theoretical until commit
b3ab03160dfa ("wait: completely ignore the EXIT_DEAD tasks"). Before
this commit "-O2" was probably enough to guarantee that compiler won't
read ->exit_state twice.
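A sketch of the fix in wait_consider_task():
    /* read ->exit_state once so both checks see the same value */
    int exit_state = ACCESS_ONCE(p->exit_state);

    if (exit_state == EXIT_DEAD)
            return;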
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Arne Goedeke <el@laramies.com>
Tested-by: Arne Goedeke <el@laramies.com>
Cc: <stable@vger.kernel.org> [3.15+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In dlm_process_recovery_data, ret is set to -ENOMEM only when
dlm_new_lock has failed, and in that case newlock is definitely NULL. So
the test of newlock is meaningless; remove it.
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Alex Chen <alex.chen@huawei.com>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The introduction of the uapi directories in v3.7-rc1 moved some of the
generated headers from arch/*/include/generated to the uapi directory,
keeping the #include directives intact.
This creates a problem when bisecting, because the unversioned files are
not cleaned automatically by git and the compiler might include stale
headers as a result. Instead of cleaning them in the Makefiles, promote
arch/*/include/generated/uapi in the search path. Under normal
circumstances, there is no overlap between this uapi subdirectory and
its parent, so the include choices remain the same. We keep
arch/*/include/generated/uapi in the USERINCLUDE variable so that it is
usable standalone.
Note that we cannot completely swap the order of the uapi and
kernel-only directories, since the headers in include/uapi/asm-generic
are meant to be wrapped by their include/asm-generic counterparts when
building kernel code.
Reported-by: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Reported-by: David Drysdale <dmd@lurklurk.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The only real issue is the one in auth_x.c and it came with
3.19-rc1 merge.
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
|
|
len is size_t, should be printed with %zu.
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
|
|
When perf report on the TUI shows a callchain it checks whether the
first node has siblings to determine whether it needs to print a
percentage value. But it missed the case where the first node is NULL.
So sometimes it segfaults like below:
$ perf top -g
perf: Segmentation fault
-------- backtrace --------
perf[0x4fcefb]
/usr/lib/libc.so.6(+0x33b20)[0x7f2a35839b20]
perf(rb_next+0x8)[0x47d3d8]
perf[0x4f6058]
perf[0x4f833b]
perf[0x4f8610]
perf[0x4f209e]
perf(ui_browser__run+0x3a)[0x4f2e6a]
perf[0x4f94ee]
perf(perf_evlist__tui_browse_hists+0x94)[0x4fbbf4]
perf[0x444d10]
/usr/lib/libpthread.so.0(+0x7314)[0x7f2a37070314]
/usr/lib/libc.so.6(clone+0x6d)[0x7f2a358ee5bd]
$ addr2line -e `which perf` 0x4f6058
/home/namhyung/project/linux/tools/perf/ui/browsers/hists.c:553
I don't know why the backtrace didn't print some symbols..
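The guard presumably ends up as something like:
    /* an empty callchain has no first node, let alone siblings */
    bool need_percent = node && rb_next(node);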
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Fixes: 4087d11cd945 ("perf hists browser: Print overhead percent value for first-level callchain")
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1419401076-21700-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|