| Commit message | Author | Age | Files | Lines |
|\
Pull ARM pcmcia updates from Russell King:
"A series of changes updating the PXA and SA11x0 PCMCIA code to use
devm_* APIs, and resolve some resource leaks in doing so. This
results in a few small cleanups which are included in this set.
FYI, the recommit of these today is to add Robert Jarzmik's
reviewed-by tags, which I'd forgotten to add from mid-July"
* 'pcmcia' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
pcmcia: soc_common: remove skt_dev_info's clk pointer
pcmcia: sa11xx_base.c: remove useless init/exit functions
pcmcia: sa1111: simplify clk handing in sa1111_pcmcia_add()
pcmcia: sa1111: update socket driver to use devm_clk_get() API
pcmcia: pxa2xx: convert memory allocation to devm_* API
pcmcia: pxa2xx: update socket driver to use devm_clk_get() API
pcmcia: sa11x0: convert memory allocation to devm_* API
pcmcia: sa11x0: fix missing clk_put() in sa11x0 socket drivers
|
We no longer need to store the clk pointer in struct skt_dev_info, as
the cleanup paths no longer need to remember it.
Reviewed-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
A library module is not required to have module init/exit functions.
Get rid of these unnecessary functions.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
clk_get(dev, NULL) will always refer to the same clock, so it's
pointless calling this multiple times for the same device. As we no
longer have to worry about the cleanup (via use of devm_clk_get()) we
can simplify sa1111_pcmcia_add() too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
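As an illustration of the devm pattern this series moves to, a minimal
sketch (the foo_* names are hypothetical, not the sa1111 code itself):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Hypothetical probe body, for illustration only. */
static int foo_probe(struct device *dev)
{
	struct clk *clk;

	/*
	 * Managed lookup: the reference is dropped automatically when the
	 * device is unbound, so no clk_put() in the error or remove paths.
	 */
	clk = devm_clk_get(dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* ... use the clock, e.g. clk_prepare_enable(clk) ... */
	return 0;
}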
|
Update the sa1111 socket driver to use the devm_clk_get() API so that
the cleanup paths are simplified.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Convert the pxa2xx socket driver memory allocation to use devm_kzalloc()
to simplify the cleanup path.
Reviewed-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
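The devm_kzalloc() conversion follows the same managed-resource idea; a
rough sketch with hypothetical names:

#include <linux/device.h>
#include <linux/slab.h>

struct foo_skt_info {
	int nskt;
	/* ... */
};

static int foo_probe(struct device *dev)
{
	struct foo_skt_info *sinfo;

	/* Freed automatically on probe failure or removal: no kfree() path. */
	sinfo = devm_kzalloc(dev, sizeof(*sinfo), GFP_KERNEL);
	if (!sinfo)
		return -ENOMEM;

	return 0;
}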
|
Update the pxa2xx socket driver to use the devm_clk_get() API so that
the cleanup paths are simplified.
Reviewed-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Convert the sa11x0 socket driver memory allocation to use devm_kzalloc()
to simplify the cleanup path.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Fix the lack of clk_put() in sa11xx_base.c's error cleanup paths by
converting the driver to the devm_* API.
Fixes: 86d88bfca475 ("ARM: 8247/2: pcmcia: sa1100: make use of device clock")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|\ \
Pull ARM development updates from Russell King:
"Included in this update:
- moving PSCI code from ARM64/ARM to drivers/
- removal of some architecture internals from global kernel view
- addition of software based "privileged no access" support using the
old domains register to turn off the ability for kernel
loads/stores to access userspace. Only the proper accessors will
be usable.
- addition of early fixmap support for early console
- re-addition (and reimplementation) of OMAP special interconnect
barrier
- removal of finish_arch_switch()
- only expose cpuX/online in sysfs if hotpluggable
- a number of code cleanups"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (41 commits)
ARM: software-based priviledged-no-access support
ARM: entry: provide uaccess assembly macro hooks
ARM: entry: get rid of multiple macro definitions
ARM: 8421/1: smp: Collapse arch_cpu_idle_dead() into cpu_die()
ARM: uaccess: provide uaccess_save_and_enable() and uaccess_restore()
ARM: mm: improve do_ldrd_abort macro
ARM: entry: ensure that IRQs are enabled when calling syscall_trace_exit()
ARM: entry: efficiency cleanups
ARM: entry: get rid of asm_trace_hardirqs_on_cond
ARM: uaccess: simplify user access assembly
ARM: domains: remove DOMAIN_TABLE
ARM: domains: keep vectors in separate domain
ARM: domains: get rid of manager mode for user domain
ARM: domains: move initial domain setting value to asm/domains.h
ARM: domains: provide domain_mask()
ARM: domains: switch to keeping domain value in register
ARM: 8419/1: dma-mapping: harmonize definition of DMA_ERROR_CODE
ARM: 8417/1: refactor bitops functions with BIT_MASK() and BIT_WORD()
ARM: 8416/1: Feroceon: use of_iomap() to map register base
ARM: 8415/1: early fixmap support for earlycon
...
| |\ \
Conflicts:
drivers/perf/arm_pmu.c
| | |\ \
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into devel-stable
|
Now that the common PSCI client code has been factored out to
drivers/firmware, and made safe for 32-bit use, move the 32-bit ARM code
over to it. This results in a moderate reduction of duplicated lines,
and will prevent further duplication as the PSCI client code is updated
for PSCI 1.0 and beyond.
The two legacy platform users of the PSCI invocation code are updated to
account for interface changes. In both cases the power state parameter
(which is constant) is now generated using macros, so that the
pack/unpack logic can be killed in preparation for PSCI 1.0 power state
changes.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
On some PAE systems (e.g. TI Keystone), memory is above the 32-bit
addressable limit, and the interconnect provides an aliased view of
parts of physical memory in the 32-bit addressable space. This alias
is strictly for boot time usage, and is not otherwise usable because
of coherency limitations.
In this case, virt_to_phys(secondary_startup) would return the
physical address of the secondary CPU boot entry point, but on such
systems, this would be above the 4GB limit.
A separate function, virt_to_idmap(), has been provided to return a
usable physical address for functions in the identity mapping, and
this must be used in preference to virt_to_phys() or __pa() to find
the physical entry point for functions in the identity mapping range.
For other systems, virt_to_idmap() and virt_to_phys() return identical
physical addresses.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
[Mark: apply rmk's suggested rewording]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
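A hedged illustration of the distinction, assuming the virt_to_idmap()
helper from asm/memory.h of this era (which accepts a kernel pointer and
returns an unsigned long physical address):

#include <asm/memory.h>		/* virt_to_idmap() */

extern void secondary_startup(void);

/*
 * Illustration only: hand firmware the identity-mapped alias of the
 * secondary entry point.  virt_to_phys()/__pa() could yield an address
 * above 4GB on Keystone-style PAE systems, which the boot alias avoids.
 */
static unsigned long foo_secondary_entry(void)
{
	return virt_to_idmap(&secondary_startup);
}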
|
Add myself and Lorenzo as maintainers of the PSCI client code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
A 32-bit OS cannot make calls with SMC64 IDs, while a 64-bit OS must
invoke some PSCI functions with SMC64 IDs.
This patch introduces and makes use of a new macro to choose the
appropriate IDs based on the register width of the OS, which will allow
32-bit callers to use the PSCI client code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
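The macro is, roughly, a token-pasting switch on the kernel's register
width; a sketch reconstructed from the description (the PSCI_0_2_FN*/FN64*
IDs come from uapi/linux/psci.h, the macro body is an approximation):

#include <uapi/linux/psci.h>

/*
 * Approximation: pick SMC64 function IDs on a 64-bit OS and SMC32 IDs on
 * a 32-bit OS, so the same call sites serve both.
 */
#ifdef CONFIG_64BIT
#define PSCI_FN_NATIVE(version, name)	PSCI_##version##_FN64_##name
#else
#define PSCI_FN_NATIVE(version, name)	PSCI_##version##_FN_##name
#endif

/*
 * e.g. PSCI_FN_NATIVE(0_2, CPU_ON) expands to PSCI_0_2_FN64_CPU_ON on
 * arm64 and to PSCI_0_2_FN_CPU_ON on 32-bit arm.
 */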
|
To enable sharing with arm, move the core PSCI framework code to
drivers/firmware. This results in a minor gain in lines of code, but
this will quickly be amortised by the removal of code currently
duplicated in arch/arm.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
To enable sharing of the arm_pmu code with arm64, this patch factors it
out to drivers/perf/. A new drivers/perf directory is added for
performance monitor drivers to live under.
MAINTAINERS is updated accordingly. Files added previously without a
corresponding MAINTAINERS update (perf_regs.c, perf_callchain.c, and
perf_event.h) are also added.
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: augmented Kconfig help slightly]
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
arch_find_n_match_cpu_physical_id parses the device tree to get the
device node for a given logical cpu index. However, since ARM PMUs get
probed after the CPU device nodes are stashed while registering the
cpus, we can use of_cpu_device_node_get to avoid another DT parse.
This patch replaces arch_find_n_match_cpu_physical_id with
of_cpu_device_node_get to reuse the stashed value directly instead.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
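Roughly, the change swaps a fresh device-tree walk for the node stashed at
CPU registration time; a sketch (the helper wrapper name is hypothetical,
header location per the tree):

#include <linux/of.h>
#include <linux/of_device.h>	/* of_cpu_device_node_get() */

/*
 * Sketch: fetch the DT node of logical CPU 'cpu' from the stashed cpu
 * device instead of re-parsing /cpus.
 */
static struct device_node *foo_cpu_node(int cpu)
{
	struct device_node *np = of_cpu_device_node_get(cpu);

	/* The node is returned with a reference held: of_node_put() it. */
	return np;
}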
|
On systems containing multiple, heterogeneous clusters we need a way to
associate a PMU "device" with the CPU(s) on which it exists. For PMUs
that signal overflow with SPIs, this relationship is determined via the
"interrupt-affinity" property, which contains a list of phandles to CPU
nodes for the PMU. For PMUs using PPIs, the per-cpu nature of the
interrupt isn't enough to determine the set of CPUs which actually
contain the device.
This patch allows the interrupt-affinity property to be specified on a
PMU node irrespective of the interrupt type. For PPIs, it identifies
the set of CPUs signalling the PPI in question.
Tested-by: Stephen Boyd <sboyd@codeaurora.org> # Krait PMU
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
For PPI based PMUs, we bail out early in of_pmu_irq_cfg() without
setting the PMU's supported_cpus bitmap. This causes the
smp_call_function_any() in armv7_probe_num_events() to fail. Set
the bitmap to be all CPUs so that we properly probe PMUs that use
PPIs.
Fixes: cc88116da0d1 ("arm: perf: treat PMUs as CPU affine")
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| |\ \ \ \ \ \ \
for-linus
|
Provide a software-based implementation of the privileged no access
support found in ARMv8.1.
Userspace pages are mapped using a different domain number from the
kernel and IO mappings. If we switch the user domain to "no access"
when we enter the kernel, we can prevent the kernel from touching
userspace.
However, the kernel needs to be able to access userspace via the
various user accessor functions. With the wrapping in the previous
patch, we can temporarily enable access when the kernel needs user
access, and re-disable it afterwards.
This allows us to trap non-intended accesses to userspace, eg, caused
by an inadvertent dereference of the LIST_POISON* values, which, with
appropriate user mappings setup, can be made to succeed. This in turn
can allow use-after-free bugs to be further exploited than would
otherwise be possible.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Provide hooks into the kernel entry and exit paths to permit control
of userspace visibility to the kernel. The intended use is:
- on entry to kernel from user, uaccess_disable will be called to
disable userspace visibility
- on exit from kernel to user, uaccess_enable will be called to
enable userspace visibility
- on entry from a kernel exception, uaccess_save_and_disable will be
called to save the current userspace visibility setting, and disable
access
- on exit from a kernel exception, uaccess_restore will be called to
restore the userspace visibility as it was before the exception
occurred.
These hooks allow us to keep userspace visibility disabled for the
vast majority of the kernel, except for localised regions where we
want to explicitly access userspace.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
The following structure is just asking for trouble:
#ifdef CONFIG_symbol
.macro foo
...
.endm
.macro bar
...
.endm
.macro baz
...
.endm
#else
.macro foo
...
.endm
.macro bar
...
.endm
#ifdef CONFIG_symbol2
.macro baz
...
.endm
#else
.macro baz
...
.endm
#endif
#endif
for example, one definition being updated while the other definitions miss out.
Where the contents of a macro need to be conditional, the hint is in
the first clause of this very sentence. "contents" "conditional". Not
multiple separate definitions, especially not when much of the macro
is the same between different configs.
This patch fixes this bad style, which had caused the Thumb2 code to
miss out on the uaccess updates.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Provide uaccess_save_and_enable() and uaccess_restore() to permit
control of userspace visibility to the kernel, and hook these into
the appropriate places in the kernel where we need to access
userspace.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
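A sketch of how a wrapped user accessor ends up looking with these helpers;
arm_copy_to_user() stands in for the underlying assembly copy routine and
its declaration here is an assumption:

#include <linux/uaccess.h>

extern unsigned long arm_copy_to_user(void __user *to, const void *from,
				      unsigned long n);	/* assumed raw copy */

/*
 * Open the uaccess window, perform the raw copy, then restore the
 * previous state so userspace goes back to being inaccessible.
 */
static inline unsigned long
foo_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	unsigned int ua_flags = uaccess_save_and_enable();

	n = arm_copy_to_user(to, from, n);
	uaccess_restore(ua_flags);
	return n;
}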
|
Improve the do_ldrd_abort macro code - firstly, it inefficiently checks
for the LDRD encoding by doing a multi-stage test of various bits. This
can be simplified by generating a mask, bitmasking the instruction and
then comparing the result.
Secondly, we want to be able to test the result rather than branching
to do_DataAbort, so remove the branch at the end and rename the macro
to 'teq_ldrd' to reflect its new usage. The teq_ldrd macro returns 'eq'
if the instruction was an LDRD.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
DOMAIN_TABLE is not used; in any case, it aliases to the kernel domain.
Remove this definition.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Keep the machine vectors in their own domain to prevent software-based
user access control from making the vector code inaccessible, and
thereby deadlocking the machine.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Since we switched to early trap initialisation in 94e5a85b3be0
("ARM: earlier initialization of vectors page") we haven't been writing
directly to the vectors page, and so there's no need for this domain
to be in manager mode. Switch it to client mode.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Provide a macro to generate the mask for a domain, rather than using
domain_val(, DOMAIN_MANAGER) which won't work when CPU_USE_DOMAINS
is turned off.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
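For reference, DACR allocates two bits per domain (0 = no access,
1 = client, 3 = manager), so the mask/value helpers look roughly like:

/* Sketch of the asm/domain.h-style helpers; two DACR bits per domain. */
#define domain_mask(dom)	(3 << (2 * (dom)))
#define domain_val(dom, type)	((type) << (2 * (dom)))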
| | | | | | |_|/
| | | | | |/| |
Rather than modifying both the domain access control register and our
per-thread copy, modify only the domain access control register, and
use the per-thread copy to save and restore the register over context
switches. We can also avoid the explicit initialisation of the
init thread_info structure.
This allows us to avoid needing to gain access to the thread information
at the uaccess control sites.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Restore the OMAP4 barrier behaviour using the new implementation which
allows multiplatform systems to hook into the mb() and wmb() ARM
implementations to perform any necessary additional barrier maintenance.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
This reverts commit 606da4826b89b044b51e3a84958b802204cfe4c7.
We actually need this code for proper behaviour of OMAP4, and it needs
fixing a different way other than just removing the code. Disabling
code which is necessary in the hopes of pursuing multiplatform kernels
is a stupid approach.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Add an extension to the heavy barrier code to allow a SoC specific
memory barrier function to be provided. This is needed for platforms
where the interconnect has weak ordering, and thus needs assistance
to ensure that memory writes are properly visible in the correct order
to other parts of the system.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
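Conceptually the extension is a SoC-installable hook that the heavy barrier
calls after the outer-cache sync; a sketch (arm_heavy_mb() is named in the
next entry's disassembly, the soc_mb hook name is an assumption):

#include <asm/outercache.h>

void (*soc_mb)(void);		/* assumed hook name, installed by SoC code */

void arm_heavy_mb(void)
{
#ifdef CONFIG_OUTER_CACHE_SYNC
	if (outer_cache.sync)
		outer_cache.sync();
#endif
	if (soc_mb)
		soc_mb();	/* platform-specific barrier maintenance */
}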
| | | | |/ / /
The existing memory barrier macro causes a significant amount of code
to be inserted inline at every call site. For example, in
gpio_set_irq_type(), we have this for mb():
c0344c08: f57ff04e dsb st
c0344c0c: e59f8190 ldr r8, [pc, #400] ; c0344da4 <gpio_set_irq_type+0x230>
c0344c10: e3590004 cmp r9, #4
c0344c14: e5983014 ldr r3, [r8, #20]
c0344c18: 0a000054 beq c0344d70 <gpio_set_irq_type+0x1fc>
c0344c1c: e3530000 cmp r3, #0
c0344c20: 0a000004 beq c0344c38 <gpio_set_irq_type+0xc4>
c0344c24: e50b2030 str r2, [fp, #-48] ; 0xffffffd0
c0344c28: e50bc034 str ip, [fp, #-52] ; 0xffffffcc
c0344c2c: e12fff33 blx r3
c0344c30: e51bc034 ldr ip, [fp, #-52] ; 0xffffffcc
c0344c34: e51b2030 ldr r2, [fp, #-48] ; 0xffffffd0
c0344c38: e5963004 ldr r3, [r6, #4]
Moving the outer_cache_sync() call out of line reduces the impact of
the barrier:
c0344968: f57ff04e dsb st
c034496c: e35a0004 cmp sl, #4
c0344970: e50b2030 str r2, [fp, #-48] ; 0xffffffd0
c0344974: 0a000044 beq c0344a8c <gpio_set_irq_type+0x1b8>
c0344978: ebf363dd bl c001d8f4 <arm_heavy_mb>
c034497c: e5953004 ldr r3, [r5, #4]
This should reduce the cache footprint of this code. Overall, this
results in a reduction of around 20K in the kernel size:
text data bss dec hex filename
10773970 667392 10369656 21811018 14ccf4a ../build/imx6/vmlinux-old
10754219 667392 10369656 21791267 14c8223 ../build/imx6/vmlinux-new
Another advantage to this approach is that we can finally resolve the
issue of SoCs which have their own memory barrier requirements within
multiplatform kernels (such as OMAP.) Here, the bus interconnects
need additional handling to ensure that writes become visible in the
correct order (eg, between dma_map() operations, writes to DMA
coherent memory, and MMIO accesses.)
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
The only caller of cpu_die() on ARM is arch_cpu_idle_dead(), so
let's simplify the code by renaming cpu_die() to
arch_cpu_idle_dead(). While we're here, drop the __ref annotation
because __cpuinit is gone nowadays.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
All architectures except arm that define DMA_ERROR_CODE cast it to
(dma_addr_t); as it is always compared against dma_addr_t on arm as well,
this can be harmonized.
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
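The harmonized arm definition then carries the cast itself, along these
lines (sketch from memory, not the verbatim patch):

#include <linux/types.h>

/* Error cookie typed as dma_addr_t, matching other architectures. */
#define DMA_ERROR_CODE	(~(dma_addr_t)0x0)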
|
Use BIT_MASK() and BIT_WORD() rather than hard-coding the size
of the "long" type.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
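For context, the two helpers from include/linux/bitops.h expand to the
following (reproduced from memory; check the header):

#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/*
 * A bitop can then address bit 'nr' of an unsigned long array as
 *   p[BIT_WORD(nr)] |= BIT_MASK(nr);
 * without hard-coding the width of "long".
 */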
|
The chain of of_address_to_resource() and ioremap() can be replaced
with of_iomap().
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
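The before/after shape of that conversion, roughly (function names are
illustrative):

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>

/* Before (sketch): resolve the resource, then map it by hand. */
static void __iomem *foo_map_old(struct device_node *np)
{
	struct resource res;

	if (of_address_to_resource(np, 0, &res))
		return NULL;
	return ioremap(res.start, resource_size(&res));
}

/* After: of_iomap() performs both steps. */
static void __iomem *foo_map_new(struct device_node *np)
{
	return of_iomap(np, 0);
}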
|
Add early fixmap support, initially to support permanent, fixed
mapping support for early console. A temporary, early pte is
created which is migrated to a permanent mapping in paging_init.
This is also needed since the attributes may change as the memory
types are initialized. The 3MiB range of fixmap spans two pte
tables, but currently only one pte is created for early fixmap
support.
Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since
the index for kmap does not start at zero anymore. This reverts
4221e2e6b316 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and
FIX_KMAP_END") to some extent.
Cc: Mark Salter <msalter@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Rather than directly accessing architecture-internal functions, provide
a "method"-centric wrapper for qcom_scm-32 to do what's necessary to
ensure that the secure monitor can see the data. This is called
"secure_flush_area" and ensures that the specified memory area is
coherent across the secure boundary.
Acked-by: Andy Gross <agross@codeaurora.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
This patch allows the use of CMA for DMA coherent memory allocation.
At the moment, if the input parameter "is_coherent" is set to true,
the allocation is not made using CMA, which I think is not the
desired behaviour.
The patch covers the allocation and free of memory for coherent
DMA.
Signed-off-by: Lorenzo Nava <lorenx4@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
The dmac_* functions are private to the ARM DMA API implementation, and
should not be used by drivers. In order to discourage their use, remove
their prototypes and macros from asm/*.h.
We have to leave dmac_flush_range() behind as Exynos and MSM IOMMU code
use these; once these sites are fixed, this can be moved also.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Fold finish_arch_switch() into switch_to(), in preparation for the
removal of the finish_arch_switch call from core sched code.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Writes to /sys/.../cpuX/online fail if we determine the platform
doesn't support hotplug for that CPU. Furthermore, if the cpu_die
op isn't specified the system hangs when we try to offline a CPU
and it comes right back online unexpectedly. Let's figure this
stuff out before we make the sysfs nodes so that the online file
doesn't even exist if it isn't (at least sometimes) possible to
hotplug the CPU.
Add a new 'cpu_can_disable' op and repoint all 'cpu_disable'
implementations at it because all implementers use the op to
indicate if a CPU can be hotplugged or not in a static fashion.
With PSCI we may need to add a 'cpu_disable' op so that the
secure OS can be migrated off the CPU we're trying to hotplug.
In this case, the 'cpu_can_disable' op will indicate that all
CPUs are hotpluggable by returning true, but the 'cpu_disable' op
will make a PSCI migration call and occasionally fail, denying
the hotplug of a CPU. This shouldn't be any worse than x86 where
we may indicate that all CPUs are hotpluggable but occasionally
we can't offline a CPU due to check_irq_vectors_for_cpu_disable()
failing to find a CPU to move vectors to.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion]
Tested-by: Simon Horman <horms@verge.net.au>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: <linux-sh@vger.kernel.org>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
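A sketch of what a static implementation of the new op looks like for a
platform whose CPUs are all hotpluggable (names and the exact op signature
are as recalled, so treat as approximate):

#include <asm/smp.h>

static bool foo_cpu_can_disable(unsigned int cpu)
{
	return true;	/* any CPU may be taken offline */
}

static struct smp_operations foo_smp_ops __initdata = {
	/* ... boot/secondary bringup ops ... */
	.cpu_can_disable	= foo_cpu_can_disable,
};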
|
We provide our own implementation of asm/mcs_spinlock.h, so there's no
need to ask for the (empty) generic version.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
For PPI based PMUs, we bail out early in of_pmu_irq_cfg() without
setting the PMU's supported_cpus bitmap. This causes the
smp_call_function_any() in armv7_probe_num_events() to fail. Set
the bitmap to be all CPUs so that we properly probe PMUs that use
PPIs.
Fixes: cc88116da0d1 ("arm: perf: treat PMUs as CPU affine")
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | |/ / /
"CoreLink Level 2 Cache Controller L2C-310", p. 2-15, section 2.3.2
Shareable attribute" states:
"The default behavior of the cache controller with respect to the
shareable attribute is to transform Normal Memory Non-cacheable
transactions into:
- cacheable no allocate for reads
- write through no write allocate for writes."
Depending on the system architecture, this may cause memory corruption
in the presence of bus mastering devices (e.g. OHCI). To avoid such
corruption, the default behavior can be disabled by setting the Shared
Override bit in the Auxiliary Control register.
Currently the Shared Override bit can be set only using C code:
- by calling l2x0_init() directly, which is deprecated,
- by setting/clearing the bit in the machine_desc.l2c_aux_val/mask
fields, but using values differing from 0/~0 is also deprecated.
Hence add support for an "arm,shared-override" device tree property for
the l2c device node. By specifying this property, affected systems can
indicate that non-cacheable transactions must not be transformed.
Then, it's up to the OS to decide. The current behavior is to set the
"shared attribute override enable" bit, as there may exist kernel linear
mappings and cacheable aliases for the DMA buffers, even if CMA is
enabled.
See also commit 1a8e41cd672f894b ("ARM: 6395/1: VExpress: Set bit 22 in
the PL310 (cache controller) AuxCtlr register"):
"Clearing bit 22 in the PL310 Auxiliary Control register (shared
attribute override enable) has the side effect of transforming
Normal Shared Non-cacheable reads into Cacheable no-allocate reads.
Coherent DMA buffers in Linux always have a Cacheable alias via the
kernel linear mapping and the processor can speculatively load
cache lines into the PL310 controller. With bit 22 cleared,
Non-cacheable reads would unexpectedly hit such cache lines leading
to buffer corruption."
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|