For a network file system we generally prefer large i/o, but if the
server returns invalid file system block/sector sizes in the cifs
(vers=1.0) QFSInfo response, set the block size to a reasonable
default minimum (4K).
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
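As an illustration, the sanity check amounts to something like the sketch
below; the helper name, field names and upper bound are made up rather than
the exact cifs.ko code:

#include <linux/log2.h>

#define CIFS_DEFAULT_BSIZE 4096	/* reasonable minimum when QFSInfo is bogus */

/* Sketch only: pick a usable block size from server-reported values. */
static unsigned int cifs_pick_bsize(unsigned int bytes_per_sector,
				    unsigned int sectors_per_block)
{
	unsigned int bsize = bytes_per_sector * sectors_per_block;

	/* Reject zero, absurdly large or non-power-of-two sizes */
	if (bsize < 512 || bsize > (1U << 20) || !is_power_of_2(bsize))
		return CIFS_DEFAULT_BSIZE;

	return bsize;
}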
Currently, "echo 0 > /proc/fs/cifs/Stats" resets all of the stats
except the session and share reconnect counts. Fix it to
reset those as well.
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
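For illustration, the reset path now clears the reconnect counters as well;
the sketch assumes the global atomic counter names used by cifs.ko
(tcpSesReconnectCount, tconInfoReconnectCount) but is not the literal hunk:

#include <linux/atomic.h>

extern atomic_t tcpSesReconnectCount;	/* session reconnects */
extern atomic_t tconInfoReconnectCount;	/* share (tcon) reconnects */

/* Sketch: called from the "echo 0 > /proc/fs/cifs/Stats" write handler
 * in addition to clearing the per-tcon operation counters. */
static void cifs_stats_clear_reconnects(void)
{
	atomic_set(&tcpSesReconnectCount, 0);
	atomic_set(&tconInfoReconnectCount, 0);
}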
When "backup intent" is requested on the mount (e.g. backupuid or
backupgid mount options), the corresponding flag was missing from
some of the new compounding operations as well (now that
open_query_close is gone).
Related to kernel bugzilla #200953
Reported-and-tested-by: <whh@rubrik.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
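Schematically, every compounded open now needs the same treatment the
single-op paths already had. backup_cred() and CREATE_OPEN_BACKUP_INTENT
are existing cifs.ko names, but the small helper wrapping them here is only
illustrative:

/* Sketch: honour backupuid=/backupgid= for compounded opens as well. */
static u32 cifs_create_options_for_open(struct cifs_sb_info *cifs_sb,
					u32 create_options)
{
	if (backup_cred(cifs_sb))	/* mount asked for backup intent */
		create_options |= CREATE_OPEN_BACKUP_INTENT;
	return create_options;
}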
Add a check so that we don't accidentally overflow the io vector arrays.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
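A minimal sketch of the sort of bounds check meant here, with made-up names
for the iovec array and its capacity:

#include <linux/uio.h>
#include <linux/errno.h>

/* Sketch: refuse to append once the caller's fixed-size array is full. */
static int smb_add_iov(struct kvec *iov, unsigned int *nvec,
		       unsigned int max_vecs, void *base, size_t len)
{
	if (*nvec >= max_vecs)
		return -EINVAL;		/* would overflow the io vector array */

	iov[*nvec].iov_base = base;
	iov[*nvec].iov_len = len;
	(*nvec)++;
	return 0;
}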
Get rid of smb2_open_op_close() as all operations are now migrated
to smb2_compound_op().
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
We never pass is_falloc==true here anyway and if we ever need to support
is_falloc in the future, SMB2_set_eof is such a trivial wrapper around
send_set_info() that we can/should just create a differently named wrapper
for that new functionality.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This cuts the number of network roundtrips significantly for some common syscalls.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This changes SMB2_OP_SET_EOF to use compounding in some situations.
This is part of the path based API to truncate a file.
Most of the time this will however not be invoked for SMB2 since
cifs_set_file_size() will as far as I can tell almost always just
open the file synchronously and switch to the handle based truncate
code path, thus bypassing the compounding we add here.
Rewriting cifs_set_file_size() to make that whole pile of code more
compounding-friendly, and also easier to read and understand, is a
different project though and not for this patch.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This and previous patches drop the number of roundtrips we need for rmdir()
from 6 to 2.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This is so that we can use these later for compounded set-info calls.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This, and previous patches, drops the number of roundtrips for unlink()
from five to two.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This, with the previous patch, changes mkdir() from needing 6 roundtrips to
just 3.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
This turns most open/query-info/close patterns in cifs.ko into compounds.
This changes stat from using 3 roundtrips to just a single one.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
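Roughly, the compounding path builds all of the requests up front and sends
them as one unit, so a stat() becomes open + query-info + close in a single
round trip. The sketch below uses hypothetical builder helpers; only the
overall shape matches smb2_compound_op():

/* Sketch of a three-PDU compound for stat(); the build_* and send_compound
 * helpers are made up, only the structure mirrors the real code. */
static int compound_stat(unsigned int xid, struct cifs_ses *ses,
			 const char *path)
{
	struct smb_rqst rqst[3] = {};
	int rc;

	rc = build_open_req(&rqst[0], path, FILE_READ_ATTRIBUTES);
	if (!rc)
		rc = build_query_info_req(&rqst[1]);	/* uses the prior open's fid */
	if (!rc)
		rc = build_close_req(&rqst[2]);
	if (!rc)
		/* all three PDUs leave the client in one compounded send */
		rc = send_compound(xid, ses, rqst, 3);
	return rc;
}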
When processing the mids for compounds we would only add credits based on
the last successful mid in the compound, which would leak credits and
eventually trigger a reconnect.
Fix this by splitting the mid processing into two loops instead of one:
the first loop just waits for all mids, and we then count how many
credits we were granted for the whole compound.
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
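The shape of the fix, with illustrative names: wait for every mid first,
then total the granted credits across the whole compound in a second pass
so nothing is lost when an individual request fails.

/* Sketch: two passes over the compound's mids (names are illustrative). */
int i, credits = 0;

/* Pass 1: wait for every response in the compound */
for (i = 0; i < num_rqst; i++)
	wait_for_response(server, midQ[i]);

/* Pass 2: only now add up the credits the server granted us, including
 * credits granted on mids whose own operation failed */
for (i = 0; i < num_rqst; i++)
	credits += midQ[i]->credits_granted;

add_credits(server, credits, optype);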
Add a tracepoint to catch potential cases where a pending operation
overlapping a reconnect could fail and incorrectly refund its credits,
causing the client to think it has more credits available than the
server thinks it does.
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Fixes gcc '-Wunused-but-set-variable' warning:
fs/cifs/ioctl.c: In function 'cifs_ioctl':
fs/cifs/ioctl.c:164:23: warning:
variable 'cifs_sb' set but not used [-Wunused-but-set-variable]
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Use kmemdup in smb311_posix_mkdir() rather than duplicating its
implementation.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
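The cleanup follows the usual kmemdup() pattern; the buffer names below are
generic, not the actual smb311_posix_mkdir() ones:

#include <linux/slab.h>
#include <linux/string.h>

/* Before: open-coded duplicate of kmemdup() */
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
memcpy(buf, src, len);

/* After: one call, same semantics */
buf = kmemdup(src, len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;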
Some servers (e.g. Azure) return "STATUS_NOT_IMPLEMENTED" rather
than "STATUS_INVALID_DEVICE_REQUEST" on query network interface
info at mount. This shouldn't cause us to log a warning message
automatically. Don't log this unless noisier cifsFYI is enabled.
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
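The change boils down to logging at the FYI level instead of the always-on
VFS level; cifs_dbg() with VFS/FYI is the existing cifs.ko mechanism, though
the message text shown here is only approximate:

/* Before: warned on every mount against servers such as Azure */
cifs_dbg(VFS, "error %d on ioctl to get interface list\n", rc);

/* After: only visible when the noisier cifsFYI debugging is enabled */
cifs_dbg(FYI, "error %d on ioctl to get interface list\n", rc);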
Merge branch 'parisc-4.20-1' of
git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc updates from Helge Deller:
"Lots of small fixes and enhancements, most noteably:
- Many TLB and cache flush optimizations (Dave)
- Fixed HPMC/crash handler on 64-bit kernel (Dave and myself)
- Added alternative infrastructre. The kernel now live-patches itself
for various situations, e.g. replace SMP code when running on one
CPU only or drop cache flushes when system has no cache installed.
- vmlinuz now contains a full copy of the compressed vmlinux file.
This simplifies debugging the currently booted kernel.
- Unused driver removal (Christoph)
- Reduced warnings of Dino PCI bridge when running in qemu
- Removed gcc version check (Masahiro)"
* 'parisc-4.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux: (23 commits)
parisc: Retrieve and display the PDC PAT capabilities
parisc: Optimze cache flush algorithms
parisc: Remove pte_inserted define
parisc: Add PDC PAT cell_info() and pd_get_pdc_revisions() functions
parisc: Drop two instructions from pte lookup code
parisc: Use zdep for shlw macro on PA1.1 and PA2.0
parisc: Add alternative coding infrastructure
parisc: Include compressed vmlinux file in vmlinuz boot kernel
extract-vmlinux: Check for uncompressed image as fallback
parisc: Fix address in HPMC IVA
parisc: Fix exported address of os_hpmc handler
parisc: Fix map_pages() to not overwrite existing pte entries
parisc: Purge TLB entries after updating page table entry and set page accessed flag in TLB handler
parisc: Release spinlocks using ordered store
parisc: Ratelimit dino stuck interrupt warnings
parisc: dino: Utilize DINO_MASK_IRQ() macro
parisc: Clean up crash header output
parisc: Add SYSTEM_INFO and REGISTER TOC PAT functions
parisc: Remove PTE load and fault check from L2_ptep macro
parisc: Reorder TLB flush timing calculation
...
Signed-off-by: Helge Deller <deller@gmx.de>
The attached patch implements three optimizations:
1) Loops in flush_user_dcache_range_asm, flush_kernel_dcache_range_asm,
purge_kernel_dcache_range_asm, flush_user_icache_range_asm, and
flush_kernel_icache_range_asm are unrolled to reduce branch overhead.
2) The static branch prediction for cmpb instructions in pacache.S has
been reviewed and the operand order adjusted where necessary.
3) For flush routines in cache.c, we purge rather than flush when we have
no context. The pdc instruction at level 0 is not required to write back
dirty lines to memory. This provides a performance improvement over the
fdc instruction if the feature is implemented.
Version 2 adds alternative patching.
The patch provides an average improvement of about 2%.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
The attached change removes the pte_inserted define from pgtable.h. As a
result, we always flush the TLB entry when the associated page table
entry is changed.
This change doesn't impact performance significantly, and it may catch
some cases where the TLB needs flushing but wasn't flushed.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Add wrappers for the PDC_PAT_CELL_GET_INFO and
PDC_PAT_PD_GET_PDC_INTERF_REV PAT PDC subfunctions.
Both provide access to the PAT capability bitfield, which can tell us
whether simultaneous PTLBs are allowed on the bus and whether firmware
will rendezvous all processors within PDCE_Check in case of an HPMC.
Signed-off-by: Helge Deller <deller@gmx.de>
Remove two instructions from the hot path. The temporary move to %r9 is
unnecessary, and the zero-initialization of pte happens twice.
Signed-off-by: Helge Deller <deller@gmx.de>
The zdep and depw,z mnemonics generate the same code. The assembler will
not accept the depw,z mnemonic when generating PA 1.x code, while the zdep
mnemonic is accepted for both PA 1.x and PA 2.0 code. This patch therefore
changes depw,z to zdep in the current shlw macro; the generated binary code
stays the same.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: John David Anglin <dave.anglin@bell.net>
This patch adds the necessary code to patch a running kernel at runtime
to improve performance.
The current implementation offers a few optimization variants:
- When running an SMP kernel on a single UP processor, unwanted assembler
statements like locking functions are overwritten with NOPs. When
multiple instructions are to be skipped, one branch instruction is used
instead of multiple nop instructions.
- In the UP case, some pdtlb and pitlb instructions are patched to
become pdtlb,l and pitlb,l, which flush only the CPU-local tlb
entries instead of broadcasting the flush to other CPUs in the system,
and thus may improve performance.
- fic and fdc instructions are skipped if no I- or D-caches are
installed. This should speed up qemu emulation and cacheless systems.
- If no cache coherence is needed for IO operations, the relevant fdc
and sync instructions in the sba and ccio drivers are replaced by
nops.
- On systems which share I- and D-TLBs and thus don't have a separate
instruction TLB, the pitlb instruction is replaced by a nop.
Live-patching is done early in the boot process, just after having run
the system inventory. No drivers are running and thus no external
interrupts should arrive. So the hope is that no TLB exceptions will
occur during the patching. If this turns out to be wrong we will
probably need to do the patching in real-mode.
Signed-off-by: Helge Deller <deller@gmx.de>
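In broad strokes, the patcher walks a table of (condition, address,
replacement) entries emitted next to the affected code and rewrites the
instructions in place at boot. The sketch below is generic; the structure
layout and helper names are made up rather than the actual parisc
implementation:

/* Sketch of an alternatives-table walk (all names illustrative). */
struct alt_entry {
	u32	*addr;		/* first instruction to patch */
	u32	len;		/* number of instructions covered */
	u32	cond;		/* e.g. "no SMP", "no D-cache", "no split TLB" */
	u32	replacement;	/* new instruction, or 0 to NOP/branch over */
};

static void __init apply_alternatives(struct alt_entry *start,
				      struct alt_entry *end)
{
	struct alt_entry *e;

	for (e = start; e < end; e++) {
		if (!condition_applies(e->cond))
			continue;
		if (e->replacement)
			*e->addr = e->replacement;
		else if (e->len > 1)
			*e->addr = branch_over(e->len);	/* skip a whole block */
		else
			*e->addr = INSN_NOP;
		flush_icache_range((unsigned long)e->addr,
				   (unsigned long)(e->addr + e->len));
	}
}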
Change the parisc vmlinuz boot code to include and process the real
compressed vmlinux.gz ELF file instead of a compressed memory dump.
This brings parisc in sync with how it's done on x86_64.
The benefit of this change is that, e.g. for debugging purposes, one can
then extract the vmlinux file out of the vmlinuz which was booted, which
wasn't possible before. This can be achieved with the existing
scripts/extract-vmlinux script, which just needs a small tweak to prefer
extracting a compressed file before trying the existing given binary.
The downside of this approach is that due to the extra round of
decompression/ELF processing we need more physical memory installed to
be able to boot a kernel.
Signed-off-by: Helge Deller <deller@gmx.de>
As on x86-64 and other architectures, the boot kernel on parisc (vmlinuz
and bzImage) contains a full compressed copy of the final kernel
executable (vmlinux.bin.gz), which one should be able to extract with
the extract-vmlinux script.
But on parisc extracting the kernel with extract-vmlinux fails.
Currently the script first checks if the given file is an ELF file
(which is true on parisc) and if so returns it. Thus on parisc we
unexpectedly get back the vmlinuz boot file instead of the uncompressed
vmlinux image.
This patch fixes this issue by reversing the logic. It now first tries
to find a compression signature in the given file, and only if that fails
does it check the file itself as a fallback.
Signed-off-by: Helge Deller <deller@gmx.de>
Helge noticed that the address of the os_hpmc handler was not being
correctly calculated in the hpmc macro. As a result, PDCE_CHECK would
fail to call os_hpmc:
<Cpu2> e800009802e00000 0000000000000000 CC_ERR_CHECK_HPMC
<Cpu2> 37000f7302e00000 8040004000000000 CC_ERR_CPU_CHECK_SUMMARY
<Cpu2> f600105e02e00000 fffffff0f0c00000 CC_MC_HPMC_MONARCH_SELECTED
<Cpu2> 140003b202e00000 000000000000000b CC_ERR_HPMC_STATE_ENTRY
<Cpu2> 5600100b02e00000 00000000000001a0 CC_MC_OS_HPMC_LEN_ERR
<Cpu2> 5600106402e00000 fffffff0f0438e70 CC_MC_BR_TO_OS_HPMC_FAILED
<Cpu2> e800009802e00000 0000000000000000 CC_ERR_CHECK_HPMC
<Cpu2> 37000f7302e00000 8040004000000000 CC_ERR_CPU_CHECK_SUMMARY
<Cpu2> 4000109f02e00000 0000000000000000 CC_MC_HPMC_INITIATED
<Cpu2> 4000101902e00000 0000000000000000 CC_MC_MULTIPLE_HPMCS
<Cpu2> 030010d502e00000 0000000000000000 CC_CPU_STOP
The address problem can be seen by dumping the fault vector:
0000000040159000 <fault_vector_20>:
40159000: 63 6f 77 73 stb r15,-2447(dp)
40159004: 20 63 61 6e ldil L%b747000,r3
40159008: 20 66 6c 79 ldil L%-1c3b3000,r3
...
40159020: 08 00 02 40 nop
40159024: 20 6e 60 02 ldil L%15d000,r3
40159028: 34 63 00 00 ldo 0(r3),r3
4015902c: e8 60 c0 02 bv,n r0(r3)
40159030: 08 00 02 40 nop
40159034: 00 00 00 00 break 0,0
40159038: c0 00 70 00 bb,*< r0,sar,40159840 <fault_vector_20+0x840>
4015903c: 00 00 00 00 break 0,0
Location 40159038 should contain the physical address of os_hpmc:
000000004015d000 <os_hpmc>:
4015d000: 08 1a 02 43 copy r26,r3
4015d004: 01 c0 08 a4 mfctl iva,r4
4015d008: 48 85 00 68 ldw 34(r4),r5
This patch moves the address setup into initialize_ivt to resolve the
above problem. I tested the change by dumping the HPMC entry after setup:
0000000040209020: 8000240
0000000040209024: 206a2004
0000000040209028: 34630ac0
000000004020902c: e860c002
0000000040209030: 8000240
0000000040209034: 1bdddce6
0000000040209038: 15d000
000000004020903c: 1a0
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Helge Deller <deller@gmx.de>
In the C-code we need to put the physical address of the hpmc handler in
the interrupt vector table (IVA) in order to get HPMCs working. Since
on parisc64 function pointers are indirect (in fact they are function
descriptors), we instead export the address as a variable and not as a
function.
This reverts a small part of commit f39cce654f9a ("parisc: Add
cfi_startproc and cfi_endproc to assembly code").
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: <stable@vger.kernel.org> [4.9+]
Fix a long-existing small nasty bug in the map_pages() implementation which
leads to overwriting already written pte entries with zero, *if* map_pages()
is called a second time with an end address which isn't aligned on a pmd
boundary.
This happens for example if we want to remap only the text segment read/write
in order to run alternative patching on the code. Exiting the loop when we
reach the end address fixes this.
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
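The essence of the fix is an explicit exit from the pte-filling loop once
end_paddr is reached, so a second call can no longer zero entries written
by the first one; schematically (heavily simplified from map_pages()):

/* Sketch: innermost loop of map_pages(), simplified */
for (tmp = 0; tmp < PTRS_PER_PTE; tmp++, pg_table++) {
	pte_t pte;

	if (address >= end_paddr)
		break;	/* stop instead of writing zero ptes past the end */

	pte = pfn_pte(address >> PAGE_SHIFT, pgprot);
	set_pte(pg_table, pte);
	address += PAGE_SIZE;
}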
Purge TLB entries after updating the page table entry and set the page
accessed flag in the TLB handler.
This patch may resolve some races in TLB handling. Hopefully, TLB
inserts are accesses and are protected by the spin lock.
If not, we may need to use IPI calls and do local purges on PA 2.0.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
This patch updates the spin unlock code to use an ordered store with
release semantics. All prior accesses are guaranteed to be performed
before an ordered store is performed.
Using an ordered store is significantly faster than using the sync
memory barrier.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
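Conceptually the unlock goes from a full sync barrier followed by a plain
store to a single store with release semantics. A generic C illustration
(the real change is in the parisc assembler, and the lock field name and
unlocked value here are arbitrary):

/* Before (sketch): full memory barrier, then a plain store */
mb();				/* "sync" on parisc */
lock->slock = 1;

/* After (sketch): one store with release ordering; all earlier loads and
 * stores are guaranteed visible before the lock reads as free */
smp_store_release(&lock->slock, 1);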
While playing with qemu with an emulated RTL8139cp NIC, I faced lots of
the following warnings:
Dino 0x00810000: stuck interrupt 2
This patch ratelimits this warning and reports back that the IRQ was
handled.
Signed-off-by: Helge Deller <deller@gmx.de>
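The usual pattern for this is the ratelimited printk variant plus telling
the IRQ core that the interrupt was handled; roughly (variable names
illustrative, not the exact dino.c handler):

/* Before: one log line per spurious interrupt, flooding the console */
printk(KERN_WARNING "Dino 0x%08lx: stuck interrupt %d\n", base_addr, ilr);

/* After: same message, rate limited, and the interrupt is acknowledged
 * as handled so it is not escalated further */
pr_warn_ratelimited("Dino 0x%08lx: stuck interrupt %d\n", base_addr, ilr);
return IRQ_HANDLED;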
Signed-off-by: Helge Deller <deller@gmx.de>
On kernel crash, this is the current output:
Kernel Fault: Code=26 (Data memory access rights trap) regs=(ptrval) (Addr=00000004)
Drop the address of regs, it's of no use for debugging, and show the
faulting address without parentheses.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
This change removes the PTE load and present check from the L2_ptep
macro. The load and check for kernel pages is now done in the tlb_lock
macro. This avoids a double load and check for user pages. The load
and check for user pages is now done inside the lock so the fault
handler can't be called while the entry is being updated. This version
uses an ordered store to release the lock when the page table entry
isn't present. It also corrects the check in the non-SMP case.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
On boot (mostly reboot), my c8000 sometimes crashes after it prints the
TLB flush threshold. The lockup is hard. The front LED flashes red and
the box must be unplugged to reset the error.
I noticed that when the crash occurs the TLB flush threshold is about
one quarter what it is on a successful boot. If I disabled the
calculation, the crash didn't occur. There also seemed to be a timing
dependency affecting the crash. I finally realized that the
flush_tlb_all() timing test runs just after the secondary CPUs are
started. There seems to be a problem with running flush_tlb_all() too
soon after the CPUs are started.
The timing for the range test always seemed okay. So, I reversed the
order of the two timing tests and I haven't had a crash at this point so
far.
I added a couple of information messages which I have left to help with
diagnosis if the problem should appear on another machine.
This version reduces the minimum TLB flush threshold to 16 KiB.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
This driver has never been wired up in the entire life of the Linux
git tree, and has severely bitrotted.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Helge Deller <deller@gmx.de>
Commit cafa0010cd51 ("Raise the minimum required gcc version to 4.6")
bumped the minimum GCC version to 4.6 for all architectures.
The version check in arch/parisc/Makefile is obsolete now.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 5b00ca0b8035 ("parisc: Restore possibility to execute 64-bit applications")
Signed-off-by: Helge Deller <deller@gmx.de>
Pull ARM updates from Russell King:
"The main item in this pull request are the Spectre variant 1.1 fixes
from Julien Thierry.
A few other patches to improve various areas, and removal of some
obsolete mcount bits and a redundant kbuild conditional"
* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 8802/1: Call syscall_trace_exit even when system call skipped
ARM: 8797/1: spectre-v1.1: harden __copy_to_user
ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization
ARM: 8795/1: spectre-v1.1: use put_user() for __put_user()
ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limit
ARM: 8793/1: signal: replace __put_user_error with __put_user
ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user()
ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state
ARM: 8790/1: signal: always use __copy_to_user to save iwmmxt context
ARM: 8789/1: signal: copy registers using __copy_to_user()
ARM: 8801/1: makefile: use ARMv3M mode for RiscPC
ARM: 8800/1: use choice for kernel unwinders
ARM: 8798/1: remove unnecessary KBUILD_SRC ifeq conditional
ARM: 8788/1: ftrace: remove old mcount support
ARM: 8786/1: Debug kernel copy by printing
Sanitize the user pointer given to __copy_to_user, both for the standard
version and the memcpy version of the user accessor.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Introduce C and asm helpers to sanitize user address, taking the
address range they target into account.
Use asm helper for existing sanitization in __copy_from_user().
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
When Spectre mitigation is required, __put_user() needs to include
check_uaccess. This is already the case for put_user(), so just make
__put_user() an alias of put_user().
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
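The change amounts to routing the "unchecked" accessor through the checked
one, i.e. roughly:

/* Sketch: with Spectre-v1.1 hardening the unchecked variant must perform
 * the same access_ok()/sanitization as put_user(), so simply alias it. */
#define __put_user(x, ptr)	put_user(x, ptr)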
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
A mispredicted conditional call to set_fs could result in the wrong
addr_limit being forwarded under speculation to a subsequent access_ok
check, potentially forming part of a spectre-v1 attack using uaccess
routines.
This patch prevents this forwarding from taking place by putting heavy
barriers in set_fs after writing the addr_limit.
Porting commit c2f0ad4fc089cff8 ("arm64: uaccess: Prevent speculative use
of the current addr_limit").
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
With Spectre-v1.1 mitigations, __put_user_error is pointless. In an attempt
to remove it, replace its references in frame setups with __put_user.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
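The conversion in the signal-frame setup code follows this pattern; the
register and field names below are shown only as an example of the shape,
not as the complete hunk:

/* Before: errors accumulated through __put_user_error()'s out parameter */
__put_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err);

/* After: plain __put_user(), OR-ing its return value into err */
err |= __put_user(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0);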
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Copy events to user space using __copy_to_user() rather than copying the
members individually with __put_user_error().
This has the benefit of disabling/enabling PAN once per event instead of
once per event member.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>