path: root/arch/arc/mm
Commit message (Author, Age, Files, Lines)
* Merge tag 'arc-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc (Linus Torvalds, 2019-12-05, 2 files, -58/+41)

    Pull ARC updates from Vineet Gupta:

     - Jump Label support for ARC
     - kmemleak enabled
     - arc mm backend TLB Miss / flush optimizations
     - nSIM platform switching to dwuart (vs. arcuart) and ensuing defconfig
       updates and cleanups
     - axs platform pll / video-mode updates

    * tag 'arc-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
      ARC: add kmemleak support
      ARC: [plat-axs10x]: remove hardcoded video mode from bootargs
      ARC: [plat-axs10x]: use pgu pll instead of fixed clock
      ARC: ARCv2: jump label: implement jump label patching
      ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3
      ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop
      ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available
      ARC: mm: TLB Miss optim: avoid re-reading ECR
      ARCv2: mm: TLB Miss optim: Use double world load/stores LDD/STD
      ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg
      ARC: nSIM_700: remove unused network options
      ARC: nSIM_700: switch to DW UART usage
      ARC: merge HAPS-HS with nSIM-HS configs
      ARC: HAPS: cleanup defconfigs from unused ETH drivers
      ARC: HAPS: add HIGHMEM memory zone to DTS
      ARC: HAPS: use same UART configuration everywhere
      ARC: HAPS: cleanup defconfigs from unused IO-related options
      ARC: regenerate nSIM and HAPS defconfigs
| * ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3 (Vineet Gupta, 2019-10-28, 1 file, -5/+0)

    For MMUv3 (and prior) the flush_tlb_{range,mm,page} APIs use the MMU
    TLBWrite cmd, which already nukes the entire uTLB, so there is NO need
    for the additional IVUTLB cmd from utlb_invalidate() - hence this patch.

    local_flush_tlb_all() is special since it uses the weaker TLBWriteNI cmd
    (previous commit) to shoot down the JTLB, hence we retain the explicit
    uTLB flush there.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop (Vineet Gupta, 2019-10-28, 1 file, -45/+29)

    The unconditional full TLB flush (on, say, ASID rollover) iterates over
    each entry and uses TLBWrite to zero it out. TLBWrite by design also
    invalidates the uTLBs, thus we end up invalidating them as many times as
    there are entries (512 or 1K).

    Optimize this by using the weaker TLBWriteNI cmd in the loop, which
    doesn't tinker with the uTLBs, and an explicit one-time IVUTLB outside
    the loop to invalidate them all at once.

    And given the optimization, the IVUTLB is now needed on MMUv4 too, where
    the uTLBs and JTLBs are otherwise kept coherent given the TLBInsertEntry /
    TLBDeleteEntry commands.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
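    Roughly what the reworked flush ends up doing - an abridged sketch of
    local_flush_tlb_all(); MMUv4 super-page handling is omitted and the
    struct/field names are approximate:

        noinline void local_flush_tlb_all(void)
        {
                struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;
                unsigned long flags;
                unsigned int entry;

                local_irq_save(flags);

                /* Load PD0 and PD1 with the template for a blank entry */
                write_aux_reg(ARC_REG_TLBPD1, 0);
                write_aux_reg(ARC_REG_TLBPD0, 0);

                for (entry = 0; entry < mmu->num_tlb; entry++) {
                        /* write this JTLB slot, but don't nuke the uTLBs each time */
                        write_aux_reg(ARC_REG_TLBINDEX, entry);
                        write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
                }

                /* one explicit uTLB invalidate, instead of 512/1K implicit ones */
                write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);

                local_irq_restore(flags);
        }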
| * ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available (Vineet Gupta, 2019-10-28, 1 file, -4/+0)

    TLBWriteNI was introduced in MMUv2 (to not invalidate uTLBs in the Fast
    Path TLB Refill Handler). To avoid #ifdef'ery, make it fall back to
    TLBWrite, which is available on all MMUs. This will also help with the
    next change.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
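    The fallback amounts to something like this in the MMU header, so callers
    can use TLBWriteNI unconditionally (command encodings shown are
    illustrative, not authoritative):

        /* MMU programming-model commands */
        #define TLBWrite        0x1
        #define TLBRead         0x2
        #if (CONFIG_ARC_MMU_VER >= 2)
        #define TLBWriteNI      0x5      /* write JTLB entry, do Not Invalidate uTLBs */
        #else
        #define TLBWriteNI      TLBWrite /* cmd doesn't exist on MMUv1: plain TLBWrite */
        #endif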
| * ARC: mm: TLB Miss optim: avoid re-reading ECR (Vineet Gupta, 2019-10-28, 1 file, -2/+0)

    For setting the PTE Dirty bit, reuse the prior test for ST miss. There is
    no need to reload ECR and test for the ST cause code, as the previous
    condition code is still valid (unclobbered).

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARCv2: mm: TLB Miss optim: Use double world load/stores LDD/STD (Vineet Gupta, 2019-10-28, 1 file, -0/+10)

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg (Vineet Gupta, 2019-10-28, 2 files, -2/+2)

    ARC700 exception (and intr) handling didn't have auto stack switching,
    thus it had to rely on stashing a reg temporarily (to free it up) at a
    known place in memory, allowing the low level stack switching to be coded
    up. This however was not re-entrant in SMP, which thus had to repurpose
    the per-cpu MMU SCRATCH DATA register otherwise used to "cache" the task
    pgd pointer (vs. reading it from the mm struct).

    The newer HS cores do have auto stack switching, thus even SMP builds can
    use the MMU SCRATCH reg as originally intended. This patch limits the
    restriction to ARC700 SMP builds only.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* | ARC: mm: remove __ARCH_USE_5LEVEL_HACK (Vineet Gupta, 2019-12-01, 2 files, -3/+11)

    Patch series "elide extraneous generated code for folded p4d/pud/pmd", v3.

    This series came out of a seemingly benign excursion into
    understanding/removing __ARCH_USE_5LEVEL_HACK from the ARC port, which
    showed some extraneous code being generated despite folded p4d/pud/pmd:

    | bloat-o-meter2 vmlinux-[AB]*
    | add/remove: 0/0 grow/shrink: 3/0 up/down: 130/0 (130)
    | function            old     new   delta
    | free_pgd_range      548     660    +112
    | p4d_clear_bad         2      20     +18

    The patches here address that:

    | bloat-o-meter2 vmlinux-[BF]*
    | add/remove: 0/2 grow/shrink: 0/1 up/down: 0/-386 (-386)
    | function            old     new   delta
    | pud_clear_bad        20       -     -20
    | p4d_clear_bad        20       -     -20
    | free_pgd_range      660     314    -346

    The code savings are not a whole lot, but still worthwhile IMHO.

    This patch (of 5):

    With the paging code made 5-level compliant, this is no longer needed.
    ARC has a software page walker with 2 lookup levels (pgd -> pte).

    This was expected to be a non-functional change but ended up with slight
    code bloat due to needless inclusions of the p*d_free_tlb() macros, which
    will be addressed in further patches.

    | bloat-o-meter2 vmlinux-[AB]*
    | add/remove: 0/0 grow/shrink: 2/0 up/down: 128/0 (128)
    | function            old     new   delta
    | free_pgd_range      546     656    +110
    | p4d_clear_bad         2      20     +18
    | Total: Before=4137148, After=4137276, chg 0.000000%

    Link: http://lkml.kernel.org/r/20191016162400.14796-2-vgupta@synopsys.com
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Nick Piggin <npiggin@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | dma-mapping: drop the dev argument to arch_sync_dma_for_* (Christoph Hellwig, 2019-11-20, 1 file, -4/+4)

    These are pure cache maintenance routines, so drop the unused struct
    device argument.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
* Merge tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds, 2019-09-19, 1 file, -6/+0)

    Pull dma-mapping updates from Christoph Hellwig:

     - add dma-mapping and block layer helpers to take care of IOMMU merging
       for mmc plus subsequent fixups (Yoshihiro Shimoda)
     - rework handling of the pgprot bits for remapping (me)
     - take care of the dma direct infrastructure for swiotlb-xen (me)
     - improve the dma noncoherent remapping infrastructure (me)
     - better defaults for ->mmap, ->get_sgtable and ->get_required_mask (me)
     - cleanup mmaping of coherent DMA allocations (me)
     - various misc cleanups (Andy Shevchenko, me)

    * tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping: (41 commits)
      mmc: renesas_sdhi_internal_dmac: Add MMC_CAP2_MERGE_CAPABLE
      mmc: queue: Fix bigger segments usage
      arm64: use asm-generic/dma-mapping.h
      swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page
      swiotlb-xen: simplify cache maintainance
      swiotlb-xen: use the same foreign page check everywhere
      swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable
      xen: remove the exports for xen_{create,destroy}_contiguous_region
      xen/arm: remove xen_dma_ops
      xen/arm: simplify dma_cache_maint
      xen/arm: use dev_is_dma_coherent
      xen/arm: consolidate page-coherent.h
      xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintainance
      arm: remove wrappers for the generic dma remap helpers
      dma-mapping: introduce a dma_common_find_pages helper
      dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
      vmalloc: lift the arm flag for coherent mappings to common code
      dma-mapping: provide a better default ->get_required_mask
      dma-mapping: remove the dma_declare_coherent_memory export
      remoteproc: don't allow modular build
      ...
| * dma-mapping: make dma_atomic_pool_init self-contained (Christoph Hellwig, 2019-08-29, 1 file, -6/+0)

    The memory allocated for the atomic pool needs to have the same mapping
    attributes that we use for remapping, so use pgprot_dmacoherent instead
    of open coding it. Also deduce a suitable zone to allocate the memory
    from based on the presence of the DMA zones.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
* | ARC: fix typo in setup_dma_ops log message (Eugeniy Paltsev, 2019-08-05, 1 file, -1/+1)

    Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* Merge tag 'arc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc (Linus Torvalds, 2019-07-17, 2 files, -108/+88)

    Pull ARC updates from Vineet Gupta:

     - long due rewrite of do_page_fault
     - refactoring of entry/exit code to utilize the double load/store
       instructions
     - hsdk platform updates

    * tag 'arc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
      ARC: [plat-hsdk]: Enable AXI DW DMAC in defconfig
      ARC: [plat-hsdk]: enable DW SPI controller
      ARC: hide unused function unw_hdr_alloc
      ARC: [haps] Add Virtio support
      ARCv2: entry: simplify return to Delay Slot via interrupt
      ARC: entry: EV_Trap expects r10 (vs. r9) to have exception cause
      ARCv2: entry: rewrite to enable use of double load/stores LDD/STD
      ARCv2: entry: avoid a branch
      ARCv2: entry: push out the Z flag unclobber from common EXCEPTION_PROLOGUE
      ARCv2: entry: comments about hardware auto-save on taken interrupts
      ARC: mm: do_page_fault refactor #8: release mmap_sem sooner
      ARC: mm: do_page_fault refactor #7: fold the various error handling
      ARC: mm: do_page_fault refactor #6: error handlers to use same pattern
      ARC: mm: do_page_fault refactor #5: scoot no_context to end
      ARC: mm: do_page_fault refactor #4: consolidate retry related logic
      ARC: mm: do_page_fault refactor #3: tidyup vma access permission code
      ARC: mm: do_page_fault refactor #2: remove short lived variable
      ARC: mm: do_page_fault refactor #1: remove label @good_area
| * ARCv2: entry: push out the Z flag unclobber from common EXCEPTION_PROLOGUE (Vineet Gupta, 2019-07-01, 1 file, -0/+11)

    Upon a taken interrupt/exception from User mode, HS hardware auto sets
    the Z flag. This helps shave a few instructions from EXCEPTION_PROLOGUE
    by eliding re-reading ERSTATUS and some bit fiddling.

    However the TLB Miss Exception handler can clobber the CPU flags and
    still end up in EXCEPTION_PROLOGUE in the slow path TLB handling case:

      EV_TLBMissD
        do_slow_path_pf
          EV_TLBProtV (aliased to call_do_page_fault)
            EXCEPTION_PROLOGUE

    As a result, EXCEPTION_PROLOGUE needs to "unclobber" the Z flag, which
    this patch changes. It is now pushed out to the TLB Miss Exception
    handler. The reasons being:

     - The flag restoration is only needed for slow-path TLB Miss Exception
       handling, but currently being in EXCEPTION_PROLOGUE it penalizes all
       exceptions such as ProtV and syscall Trap, where the Z flag is already
       as expected.
     - Pushing the unclobber out to where it was clobbered is much cleaner
       and also serves to document the fact.
     - Makes EXCEPTION_PROLOGUE similar to INTERRUPT_PROLOGUE, so it is
       easier to refactor the common parts, which is what this series aims
       to do.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #8: release mmap_sem sooner (Vineet Gupta, 2019-07-01, 1 file, -4/+3)

    In case of successful page fault handling, this patch releases mmap_sem
    before updating the perf stat event for major/minor faults. So even
    though the contention reduction is NOT super high, it is still an
    improvement.

    There's an additional code size improvement as we only have 2 up_read()
    calls now.

    Note to myself:
    --------------

    1. Given the way it is done, we are forced to move the @bad_area label
       earlier, causing the various "goto bad_area" cases to hit the perf
       stat code.

       - PERF_COUNT_SW_PAGE_FAULTS is NOW updated for access errors, which is
         what arm/arm64 seem to be doing as well (with slightly different code)
       - PERF_COUNT_SW_PAGE_FAULTS_{MAJ,MIN} must NOT be updated for the error
         case, which is guarded by now setting the @fault initial value to
         VM_FAULT_ERROR; this serves both cases, when handle_mm_fault()
         returns an error or is not called at all.

    2. arm/arm64 use two homebrew fault flags VM_FAULT_BAD{MAP,MAPACCESS},
       which I was inclined to add too, but they seem not needed for ARC:

       - given that we have everything in 1 function we can still use goto
       - we set up si_code at the right place (arm* do that at the end)
       - we init @fault already to an error value, which guards entry into
         the perf stats event update

    Cc: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #7: fold the various error handling (Vineet Gupta, 2019-07-01, 1 file, -34/+14)

    - single up_read() call vs. 4
    - so much easier on the eyes

    Technically it seems like the @bad_area label moved up, but even in the
    old regime it was a special case of delivering SIGSEGV unconditionally,
    which we now do as well, although with checks.

    Also note that @fault needs to be initialized since we can land in
    @bad_area (which reads it) without setting it up with the return value of
    handle_mm_fault() - failing to do so did bite us, although as a side
    effect of a different patch: see [1]

    [1]: http://lists.infradead.org/pipermail/linux-snps-arc/2019-May/005803.html

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #6: error handlers to use same pattern (Vineet Gupta, 2019-07-01, 1 file, -11/+10)

    - up_read
    - if !user_mode
    - whatever error handling

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
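    A condensed sketch of the shape each error handler converges on (labels
    and field names are approximate, not the literal arch/arc/mm/fault.c
    code):

        bad_area:
                up_read(&mm->mmap_sem);                 /* 1: drop mmap_sem          */

                if (!user_mode(regs))                   /* 2: fault from kernel mode */
                        goto no_context;                /*    -> fixup / die()       */

                tsk->thread.fault_address = address;    /* 3: the actual error path  */
                force_sig_fault(SIGSEGV, si_code, (void __user *)address);
                return;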
| * ARC: mm: do_page_fault refactor #5: scoot no_context to end (Vineet Gupta, 2019-07-01, 1 file, -14/+7)

    This is different than the rest of the signal handling stuff.

    No functional change.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #4: consolidate retry related logic (Vineet Gupta, 2019-07-01, 1 file, -29/+31)

    The stats update code can now elide the "retry" check and an additional
    level of indentation, since all retry handling is already done ahead of
    it.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #3: tidyup vma access permission code (Vineet Gupta, 2019-07-01, 1 file, -18/+21)

    The coding pattern of NOT initializing variables at declaration time, but
    rather near the code which makes use of them, makes it much easier to
    grok the overall logic, especially when the init is not simply 0 or 1.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #2: remove short lived variable (Vineet Gupta, 2019-07-01, 1 file, -6/+1)

    The compiler will do this anyway; still, no functional change.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
| * ARC: mm: do_page_fault refactor #1: remove label @good_area (Vineet Gupta, 2019-07-01, 1 file, -7/+5)

    Invert the condition for stack expansion.

    No functional change.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* | Merge tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds, 2019-07-13, 1 file, -61/+10)

    Pull dma-mapping updates from Christoph Hellwig:

     - move the USB special case that bounced DMA through a device bar into
       the USB code instead of handling it in the common DMA code (Laurentiu
       Tudor and Fredrik Noring)
     - don't dip into the global CMA pool for single page allocations
       (Nicolin Chen)
     - fix a crash when allocating memory for the atomic pool failed during
       boot (Florian Fainelli)
     - move support for MIPS-style uncached segments to the common code and
       use that for MIPS and nios2 (me)
     - make support for DMA_ATTR_NON_CONSISTENT and DMA_ATTR_NO_KERNEL_MAPPING
       generic (me)
     - convert nds32 to the generic remapping allocator (me)

    * tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping: (29 commits)
      dma-mapping: mark dma_alloc_need_uncached as __always_inline
      MIPS: only select ARCH_HAS_UNCACHED_SEGMENT for non-coherent platforms
      usb: host: Fix excessive alignment restriction for local memory allocations
      lib/genalloc.c: Add algorithm, align and zeroed family of DMA allocators
      nios2: use the generic uncached segment support in dma-direct
      nds32: use the generic remapping allocator for coherent DMA allocations
      arc: use the generic remapping allocator for coherent DMA allocations
      dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code
      dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code
      dma-mapping: add a dma_alloc_need_uncached helper
      openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support
      arc: remove the partial DMA_ATTR_NON_CONSISTENT support
      arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support
      ARM: dma-mapping: allow larger DMA mask than supported
      dma-mapping: truncate dma masks to what dma_addr_t can hold
      iommu/dma: Apply dma_{alloc,free}_contiguous functions
      dma-remap: Avoid de-referencing NULL atomic_pool
      MIPS: use the generic uncached segment support in dma-direct
      dma-direct: provide generic support for uncached kernel segments
      au1100fb: fix DMA API abuse
      ...
| * | arc: use the generic remapping allocator for coherent DMA allocations (Christoph Hellwig, 2019-06-25, 1 file, -52/+10)

    Replace the code that sets up uncached PTEs with the generic vmap based
    remapping code. It also provides an atomic pool for allocations from
    non-blocking context, which was not properly supported by the existing
    arc code.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Evgeniy Paltsev <paltsev@synopsys.com>
    Tested-by: Evgeniy Paltsev <paltsev@synopsys.com>
| * | arc: remove the partial DMA_ATTR_NON_CONSISTENT support (Christoph Hellwig, 2019-06-25, 1 file, -15/+6)

    The arc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but does
    not provide a cache_sync operation. This means any user of it will never
    be able to actually transfer cache ownership and will thus cause
    coherency bugs.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Evgeniy Paltsev <paltsev@synopsys.com>
    Tested-by: Evgeniy Paltsev <paltsev@synopsys.com>
* | | Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace (Linus Torvalds, 2019-07-09, 1 file, -2/+2)

    Pull force_sig() argument change from Eric Biederman:
      "A source of error over the years has been that force_sig has taken a
       task parameter when it is only safe to use force_sig with the current
       task.

       The force_sig function is built for delivering synchronous signals
       such as SIGSEGV, where the userspace application caused a synchronous
       fault (such as a page fault) and the kernel responded with a signal.

       Because the name force_sig does not make this clear, and because
       force_sig takes a task parameter, the function has been abused for
       sending other kinds of signals over the years. Slowly those have been
       fixed when the oopses have been tracked down.

       This set of changes fixes the remaining abusers of force_sig and
       carefully rips out the task parameter from force_sig and friends,
       making this kind of error almost impossible in the future"

    * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (27 commits)
      signal/x86: Move tsk inside of CONFIG_MEMORY_FAILURE in do_sigbus
      signal: Remove the signal number and task parameters from force_sig_info
      signal: Factor force_sig_info_to_task out of force_sig_info
      signal: Generate the siginfo in force_sig
      signal: Move the computation of force into send_signal and correct it.
      signal: Properly set TRACE_SIGNAL_LOSE_INFO in __send_signal
      signal: Remove the task parameter from force_sig_fault
      signal: Use force_sig_fault_to_task for the two calls that don't deliver to current
      signal: Explicitly call force_sig_fault on current
      signal/unicore32: Remove tsk parameter from __do_user_fault
      signal/arm: Remove tsk parameter from __do_user_fault
      signal/arm: Remove tsk parameter from ptrace_break
      signal/nds32: Remove tsk parameter from send_sigtrap
      signal/riscv: Remove tsk parameter from do_trap
      signal/sh: Remove tsk parameter from force_sig_info_fault
      signal/um: Remove task parameter from send_sigtrap
      signal/x86: Remove task parameter from send_sigtrap
      signal: Remove task parameter from force_sig_mceerr
      signal: Remove task parameter from force_sig
      signal: Remove task parameter from force_sigsegv
      ...
| * | signal: Remove the task parameter from force_sig_fault (Eric W. Biederman, 2019-05-29, 1 file, -2/+2)

    As synchronous exceptions really only make sense against the current task
    (otherwise how are you synchronous), remove the task parameter from
    force_sig_fault to make it explicit that that is what is going on.

    The two known exceptions that deliver a synchronous exception to a
    stopped ptraced task have already been changed to force_sig_fault_to_task.

    The callers have been changed with the following emacs regular expression
    (with obvious variations on the architectures that take more arguments)
    to avoid typos:

      force_sig_fault[(]\([^,]+\)[,]\([^,]+\)[,]\([^,]+\)[,]\W+current[)]
      -> force_sig_fault(\1,\2,\3)

    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
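    The mechanical effect on a caller, shown with a hypothetical fault-path
    call (the argument values are illustrative only):

        /* before: explicit task parameter, which had to be current anyway */
        force_sig_fault(SIGSEGV, SEGV_ACCERR, (void __user *)address, current);

        /* after: the parameter is gone; delivery is implicitly to current */
        force_sig_fault(SIGSEGV, SEGV_ACCERR, (void __user *)address);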
| * | signal: Explicitly call force_sig_fault on current (Eric W. Biederman, 2019-05-29, 1 file, -2/+2)

    Update the calls of force_sig_fault that pass in a variable that is set
    to current earlier to explicitly use current.

    This is to make the next change that removes the task parameter from
    force_sig_fault easier to verify.

    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 (Thomas Gleixner, 2019-06-19, 11 files, -45/+11)

    Based on 2 normalized pattern(s):

      this program is free software you can redistribute it and or modify it
      under the terms of the gnu general public license version 2 as
      published by the free software foundation

      this program is free software you can redistribute it and or modify it
      under the terms of the gnu general public license version 2 as
      published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Enrico Weigelt <info@metux.net>
    Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* | ARC: mm: SIGSEGV userspace trying to access kernel virtual memory (Eugeniy Paltsev, 2019-05-20, 1 file, -6/+3)

    As of today, if a userspace process tries to access a kernel virtual
    address (0x7000_0000 to 0x7fff_ffff) such that a legit kernel mapping
    already exists, that process hangs instead of being killed with SIGSEGV.

    Fix that by ensuring that do_page_fault() handles kernel vaddr only if
    in kernel mode.

    And given this, we can also simplify the code a bit. Now a vmalloc fault
    implies kernel mode, so its failure (for some reason) can reuse the
    @no_context label and we can remove @bad_area_nosemaphore.

    Reproducer user test for the original problem:

    ------------------------>8-----------------
    #include <stdlib.h>
    #include <stdint.h>

    int main(int argc, char *argv[])
    {
            volatile uint32_t temp;

            temp = *(uint32_t *)(0x70000000);
    }
    ------------------------>8-----------------

    Cc: <stable@vger.kernel.org>
    Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* | ARC: fix build warnings (Vineet Gupta, 2019-05-20, 1 file, -5/+8)

    | arch/arc/mm/tlb.c:914:2: warning: variable length array 'pd0' is used [-Wvla]
    | arch/arc/include/asm/cmpxchg.h:95:29: warning: value computed is not used [-Wunused-value]

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* init: provide a generic free_initmem implementation (Mike Rapoport, 2019-05-14, 1 file, -8/+0)

    Patch series "provide a generic free_initmem implementation", v2.

    Many architectures implement free_initmem() in exactly the same or a very
    similar way: they wrap the call to free_initmem_default() with a
    sometimes different 'poison' parameter.

    These patches switch those architectures to use a generic implementation
    that does free_initmem_default(POISON_FREE_INITMEM).

    This was inspired by Christoph's patches for free_initrd_mem [1] and I
    shamelessly copied changelog entries from his patches :)

    [1] https://lore.kernel.org/lkml/20190213174621.29297-1-hch@lst.de/

    This patch (of 2):

    For most architectures free_initmem is just a wrapper for the same
    free_initmem_default(-1) call. Provide that as a generic implementation
    marked __weak.

    Link: http://lkml.kernel.org/r/1550515285-17446-2-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Palmer Dabbelt <palmer@sifive.com>
    Cc: Richard Kuo <rkuo@codeaurora.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
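    The weak generic default described above is essentially (sketch, per the
    commit description; exact file placement not shown):

        /*
         * Generic fallback; architectures with special requirements keep
         * their own free_initmem() and simply override this weak symbol.
         */
        void __weak free_initmem(void)
        {
                free_initmem_default(-1);  /* -1: the poison value most arch copies passed */
        }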
* initramfs: provide a generic free_initrd_mem implementation (Christoph Hellwig, 2019-05-14, 1 file, -7/+0)

    For most architectures free_initrd_mem just expands to the same
    free_reserved_area call. Provide that as a generic implementation marked
    __weak.

    Link: http://lkml.kernel.org/r/20190213174621.29297-8-hch@lst.de
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com> [arm64]
    Cc: Steven Price <steven.price@arm.com>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ARC: PAE40: don't panic and instead turn off hw ioc (Vineet Gupta, 2019-04-02, 1 file, -15/+16)

    HSDK currently panics when built for HIGHMEM/ARC_HAS_PAE40 because ioc is
    enabled by default, which doesn't work for the 2 non-contiguous memory
    nodes. So get PAE working by disabling ioc instead.

    Tested with !PAE40 by forcing @ioc_enable=0 and running the glibc
    testsuite over ssh.

    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* treewide: add checks for the return value of memblock_alloc*() (Mike Rapoport, 2019-03-12, 1 file, -0/+4)

    Add a check for the return value of memblock_alloc*() functions and call
    panic() in case of error. The panic message repeats the one used by
    panicking memblock allocators, with adjustment of parameters to include
    only relevant ones.

    The replacement was mostly automated with semantic patches like the one
    below, with manual massaging of format strings:

      @@
      expression ptr, size, align;
      @@
      ptr = memblock_alloc(size, align);
      + if (!ptr)
      +         panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);

    [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
      Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
    [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
      Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
    [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
      Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
    [akpm@linux-foundation.org: fix xtensa printk warning]
    Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
    Reviewed-by: Guo Ren <ren_guo@c-sky.com> [c-sky]
    Acked-by: Paul Burton <paul.burton@mips.com> [MIPS]
    Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> [s390]
    Reviewed-by: Juergen Gross <jgross@suse.com> [Xen]
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa]
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Christophe Leroy <christophe.leroy@c-s.fr>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Dennis Zhou <dennis@kernel.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Rob Herring <robh+dt@kernel.org>
    Cc: Rob Herring <robh@kernel.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Stafford Horne <shorne@gmail.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ARC: mm: do_page_fault fixes #1: relinquish mmap_sem if signal arrives while handle_mm_fault (Vineet Gupta, 2019-01-18, 1 file, -4/+9)

    do_page_fault() forgot to relinquish mmap_sem if a signal came while
    handling handle_mm_fault() - due to, say, a ctrl+c or oom etc. This
    would later cause a deadlock by acquiring it twice.

    This came to light when running the libc testsuite tst-tls3-malloc test,
    but is likely also the cause for the prior seen LTP failures. Using
    lockdep clearly showed what the issue was.

    | # while true; do ./tst-tls3-malloc ; done
    | Didn't expect signal from child: got `Segmentation fault'
    | ^C
    | ============================================
    | WARNING: possible recursive locking detected
    | 4.17.0+ #25 Not tainted
    | --------------------------------------------
    | tst-tls3-malloc/510 is trying to acquire lock:
    | 606c7728 (&mm->mmap_sem){++++}, at: __might_fault+0x28/0x5c
    |
    | but task is already holding lock:
    | 606c7728 (&mm->mmap_sem){++++}, at: do_page_fault+0x9c/0x2a0
    |
    | other info that might help us debug this:
    |  Possible unsafe locking scenario:
    |
    |       CPU0
    |       ----
    |  lock(&mm->mmap_sem);
    |  lock(&mm->mmap_sem);
    |
    | *** DEADLOCK ***

    ------------------------------------------------------------
    What the change does is not obvious (note to myself)

    prior code was

    | do_page_fault
    |
    | down_read()          <-- lock taken
    | handle_mm_fault      <-- signal pending as this runs
    | if fatal_signal_pending
    |     if VM_FAULT_ERROR
    |         up_read
    |     if user_mode
    |         return       <-- lock still held, this was the BUG

    New code

    | do_page_fault
    |
    | down_read()          <-- lock taken
    | handle_mm_fault      <-- signal pending as this runs
    | if fatal_signal_pending
    |     if VM_FAULT_RETRY
    |         return       <-- not same case as above, but still OK since
    |                          core mm already relinq lock for FAULT_RETRY
    | ...
    |
    | < Now falls through for bug case above >
    |
    | up_read()            <-- lock relinquished

    Cc: stable@vger.kernel.org
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* ARC: adjust memblock_reserve of kernel memory (Eugeniy Paltsev, 2019-01-17, 1 file, -1/+2)

    In setup_arch_memory we reserve the memory area wherein the kernel is
    located. The current implementation may reserve more memory than is
    actually required in case CONFIG_LINUX_LINK_BASE is not equal to
    CONFIG_LINUX_RAM_BASE. This happens because we calculate the start of
    the reserved region relative to CONFIG_LINUX_RAM_BASE but the end of the
    region from the end of the kernel image, which is placed at
    CONFIG_LINUX_LINK_BASE.

    For example, in the case of the HSDK board we wasted 256MiB of physical
    memory:

    ------------------->8------------------------------
    Memory: 770416K/1048576K available (5496K kernel code, 240K rwdata,
    1064K rodata, 2200K init, 275K bss, 278160K reserved, 0K cma-reserved)
    ------------------->8------------------------------

    Fix that.

    Fixes: 9ed68785f7f2b ("ARC: mm: Decouple RAM base address from kernel link addr")
    Cc: stable@vger.kernel.org #4.14+
    Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
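    A rough before/after of the reservation being fixed (symbol usage is
    approximate and shown only to illustrate the over-reservation):

        /* before: start taken from the RAM base, end from the kernel image,
         * so (LINK_BASE - RAM_BASE) extra bytes get reserved when they differ */
        memblock_reserve(CONFIG_LINUX_RAM_BASE, __pa(_end) - CONFIG_LINUX_RAM_BASE);

        /* after: reserve exactly the kernel image, link base .. _end */
        memblock_reserve(CONFIG_LINUX_LINK_BASE, __pa(_end) - CONFIG_LINUX_LINK_BASE);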
* arch/arc/mm/fault.c: remove caller signal_pending_branch predictions (Davidlohr Bueso, 2019-01-04, 1 file, -1/+1)

    This is already done for us internally by the signal machinery.

    Link: http://lkml.kernel.org/r/20181116002713.8474-4-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux (Linus Torvalds, 2018-12-29, 1 file, -20/+5)

    Pull Devicetree updates from Rob Herring:
      "The biggest highlight here is the start of using json-schema for DT
       bindings. Being able to validate bindings has been discussed for years
       with little progress.

        - Initial support for DT bindings using json-schema language. This is
          the start of converting DT bindings from free-form text to a
          structured format.

        - Reworking of initrd address initialization. This moves to using the
          phys address instead of virt addr in the DT parsing code. This
          rework was motivated by CONFIG_DEV_BLK_INITRD causing unnecessary
          rebuilding of lots of files.

        - Fix stale phandle entries in phandle cache

        - DT overlay validation improvements. This exposed several memory
          leak bugs which have been fixed.

        - Use node name and device_type helper functions in DT code

        - Last remaining conversions to using %pOFn printk specifier instead
          of device_node.name directly

        - Create new common RTC binding doc and move all trivial RTC devices
          out of trivial-devices.txt.

        - New bindings for Freescale MAG3110 magnetometer, Cadence Sierra
          PHY, and Xen shared memory

        - Update dtc to upstream version v1.4.7-57-gf267e674d145"

    * tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (68 commits)
      of: __of_detach_node() - remove node from phandle cache
      of: of_node_get()/of_node_put() nodes held in phandle cache
      gpio-omap.txt: add reg and interrupts properties
      dt-bindings: mrvl,intc: fix a trivial typo
      dt-bindings: iio: magnetometer: add dt-bindings for freescale mag3110
      dt-bindings: Convert trivial-devices.txt to json-schema
      dt-bindings: arm: mrvl: amend Browstone compatible string
      dt-bindings: arm: Convert Tegra board/soc bindings to json-schema
      dt-bindings: arm: Convert ZTE board/soc bindings to json-schema
      dt-bindings: arm: Add missing Xilinx boards
      dt-bindings: arm: Convert Xilinx board/soc bindings to json-schema
      dt-bindings: arm: Convert VIA board/soc bindings to json-schema
      dt-bindings: arm: Convert ST STi board/soc bindings to json-schema
      dt-bindings: arm: Convert SPEAr board/soc bindings to json-schema
      dt-bindings: arm: Convert CSR SiRF board/soc bindings to json-schema
      dt-bindings: arm: Convert QCom board/soc bindings to json-schema
      dt-bindings: arm: Convert TI nspire board/soc bindings to json-schema
      dt-bindings: arm: Convert TI davinci board/soc bindings to json-schema
      dt-bindings: arm: Convert Calxeda board/soc bindings to json-schema
      dt-bindings: arm: Convert Altera board/soc bindings to json-schema
      ...
| * arch: Move initrd= parsing into do_mounts_initrd.c (Florian Fainelli, 2018-11-26, 1 file, -20/+5)

    ARC, ARM, ARM64 and Unicore32 are all capable of parsing the "initrd="
    command line parameter to allow specifying the physical address and size
    of an initrd. Move that parsing into init/do_mounts_initrd.c such that
    we no longer duplicate that logic.

    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Rob Herring <robh@kernel.org>
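    The parsing being centralized looks roughly like this (a sketch modeled
    on the per-arch copies being removed; names are approximate):

        static phys_addr_t phys_initrd_start __initdata;
        static unsigned long phys_initrd_size __initdata;

        /* initrd=<start>,<size> : physical address and size of an external initrd */
        static int __init early_initrd(char *p)
        {
                phys_addr_t start;
                unsigned long size;
                char *endp;

                start = memparse(p, &endp);
                if (*endp == ',') {
                        size = memparse(endp + 1, NULL);
                        phys_initrd_start = start;
                        phys_initrd_size = size;
                }
                return 0;
        }
        early_param("initrd", early_initrd);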
* | Merge tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds, 2018-12-28, 2 files, -2/+2)

    Pull DMA mapping updates from Christoph Hellwig:
      "A huge update this time, but a lot of that is just consolidating or
       removing code:

        - provide a common DMA_MAPPING_ERROR definition and avoid indirect
          calls for dma_map_* error checking

        - use direct calls for the DMA direct mapping case, avoiding huge
          retpoline overhead for high performance workloads

        - merge the swiotlb dma_map_ops into dma-direct

        - provide a generic remapping DMA consistent allocator for
          architectures that have devices that perform DMA that is not cache
          coherent. Based on the existing arm64 implementation and also used
          for csky now.

        - improve the dma-debug infrastructure, including dynamic allocation
          of entries (Robin Murphy)

        - default to providing chaining scatterlist everywhere, with opt-outs
          for the few architectures (alpha, parisc, most arm32 variants) that
          can't cope with it

        - misc sparc32 dma-related cleanups

        - remove the dma_mark_clean arch hook used by swiotlb on ia64 and
          replace it with the generic noncoherent infrastructure

        - fix the return type of dma_set_max_seg_size (Niklas Söderlund)

        - move the dummy dma ops for not DMA capable devices from arm64 to
          common code (Robin Murphy)

        - ensure dma_alloc_coherent returns zeroed memory to avoid kernel
          data leaks through userspace. We already did this for most common
          architectures, but this ensures we do it everywhere.
          dma_zalloc_coherent has been deprecated and can hopefully be
          removed after -rc1 with a coccinelle script"

    * tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits)
      dma-mapping: fix inverted logic in dma_supported
      dma-mapping: deprecate dma_zalloc_coherent
      dma-mapping: zero memory returned from dma_alloc_*
      sparc/iommu: fix ->map_sg return value
      sparc/io-unit: fix ->map_sg return value
      arm64: default to the direct mapping in get_arch_dma_ops
      PCI: Remove unused attr variable in pci_dma_configure
      ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
      dma-mapping: bypass indirect calls for dma-direct
      vmd: use the proper dma_* APIs instead of direct methods calls
      dma-direct: merge swiotlb_dma_ops into the dma_direct code
      dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
      dma-direct: improve addressability error reporting
      swiotlb: remove dma_mark_clean
      swiotlb: remove SWIOTLB_MAP_ERROR
      ACPI / scan: Refactor _CCA enforcement
      dma-mapping: factor out dummy DMA ops
      dma-mapping: always build the direct mapping code
      dma-mapping: move dma_cache_sync out of line
      dma-mapping: move various slow path functions out of line
      ...
| * | dma-mapping: zero memory returned from dma_alloc_* (Christoph Hellwig, 2018-12-20, 1 file, -1/+1)

    If we want to map memory from the DMA allocator to userspace it must be
    zeroed at allocation time to prevent stale data leaks. We already do this
    on most common architectures, but some architectures don't do this yet:
    fix them up, either by passing GFP_ZERO when we use the normal page
    allocator or doing a manual memset otherwise.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Sam Ravnborg <sam@ravnborg.org> [sparc]
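    The two flavours of fix mentioned above, sketched generically (the pool
    allocator call is illustrative, not any one architecture's code):

        /* allocations that go through the page allocator: ask for zeroed pages */
        struct page *page = alloc_pages(gfp | __GFP_ZERO, get_order(size));

        /* allocations carved out of a private pool: zero them by hand */
        unsigned long vaddr = gen_pool_alloc(atomic_pool, size);
        if (vaddr)
                memset((void *)vaddr, 0, size);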
| * | dma-mapping: bypass indirect calls for dma-direct (Christoph Hellwig, 2018-12-13, 1 file, -1/+1)

    Avoid expensive indirect calls in the fast path DMA mapping operations by
    directly calling the dma_direct_* ops if we are using the directly
    mapped DMA operations.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Tested-by: Tony Luck <tony.luck@intel.com>
* | ARC: mm: fix uninitialised signal code in do_page_fault (Eugeniy Paltsev, 2018-11-12, 1 file, -1/+1)

    Commit 15773ae938d8 ("signal/arc: Use force_sig_fault where appropriate")
    introduced undefined behaviour by leaving si_code uninitialized and
    leaking random kernel values to user space.

    Fixes: 15773ae938d8 ("signal/arc: Use force_sig_fault where appropriate")
    Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
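    The gist of the fix, sketched (the default shown is the mapping-error
    code; later code only overrides it for access-permission faults):

        int si_code = SEGV_MAPERR;   /* was left uninitialized before the fix */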
* | ARC: IOC: panic if kernel was started with previously enabled IOC (Eugeniy Paltsev, 2018-11-12, 1 file, -3/+17)

    If IOC was already enabled (due to the bootloader) it technically needs
    to be reconfigured with an aperture base/size corresponding to the Linux
    memory map, which will certainly be different than uboot's. But disabling
    and re-enabling IOC when DMA might be potentially active is tricky
    business. To avoid random memory issues later, just panic here and ask
    the user to upgrade the bootloader to one which doesn't enable IOC.

    This was actually seen as an issue on some of the HSDK boards with a
    version of uboot which enabled IOC. There were random issues later with
    starting of X or peripherals etc.

    Also while I'm at it, replace hardcoded bits in the ARC_REG_IO_COH_PARTIAL
    and ARC_REG_IO_COH_ENABLE registers by definitions.

    Inspired by: https://lkml.org/lkml/2018/1/19/557

    Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
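    The check described above, roughly (register names follow the commit
    text; the bit definition and panic wording are approximations):

        /*
         * If the bootloader already enabled IOC, reconfiguring it while DMA
         * may be in flight is unsafe - refuse to continue instead.
         */
        if (read_aux_reg(ARC_REG_IO_COH_ENABLE) & ARC_IO_COH_ENABLE_BIT)
                panic("IOC already enabled, please upgrade bootloader\n");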
* mm: remove include/linux/bootmem.h (Mike Rapoport, 2018-10-31, 2 files, -2/+1)

    Move the remaining definitions and declarations from
    include/linux/bootmem.h into include/linux/memblock.h and remove the
    redundant header.

    The includes were replaced with the semantic patch below and then
    semi-automated removal of duplicated '#include <linux/memblock.h>'.

      @@
      @@
      - #include <linux/bootmem.h>
      + #include <linux/memblock.h>

    [sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
      Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
    [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
      Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
    [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
      Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
    Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Ley Foon Tan <lftan@altera.com>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Palmer Dabbelt <palmer@sifive.com>
    Cc: Paul Burton <paul.burton@mips.com>
    Cc: Richard Kuo <rkuo@codeaurora.org>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Serge Semin <fancer.lancer@gmail.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* memblock: rename free_all_bootmem to memblock_free_all (Mike Rapoport, 2018-10-31, 1 file, -1/+1)

    The conversion is done using

      sed -i 's@free_all_bootmem@memblock_free_all@' \
          $(git grep -l free_all_bootmem)

    Link: http://lkml.kernel.org/r/1536927045-23536-26-git-send-email-rppt@linux.vnet.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Ley Foon Tan <lftan@altera.com>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Palmer Dabbelt <palmer@sifive.com>
    Cc: Paul Burton <paul.burton@mips.com>
    Cc: Richard Kuo <rkuo@codeaurora.org>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Serge Semin <fancer.lancer@gmail.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* memblock: replace alloc_bootmem_low_pages with memblock_alloc_low (Mike Rapoport, 2018-10-31, 1 file, -1/+1)

    The alloc_bootmem_low_pages() function allocates PAGE_SIZE aligned
    regions from low memory. memblock_alloc_low() with alignment set to
    PAGE_SIZE does exactly the same thing.

    The conversion is done using the following semantic patch:

      @@
      expression e;
      @@
      - alloc_bootmem_low_pages(e)
      + memblock_alloc_low(e, PAGE_SIZE)

    Link: http://lkml.kernel.org/r/1536927045-23536-19-git-send-email-rppt@linux.vnet.ibm.com
    Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greentime Hu <green.hu@gmail.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Ley Foon Tan <lftan@altera.com>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Palmer Dabbelt <palmer@sifive.com>
    Cc: Paul Burton <paul.burton@mips.com>
    Cc: Richard Kuo <rkuo@codeaurora.org>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Serge Semin <fancer.lancer@gmail.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace (Linus Torvalds, 2018-10-24, 1 file, -15/+5)

    Pull siginfo updates from Eric Biederman:
      "I have been slowly sorting out siginfo and this is the culmination of
       that work.

       The primary result is in several ways the signal infrastructure has
       been made less error prone. The code has been updated so that manually
       specifying SEND_SIG_FORCED is never necessary. The conversion to the
       new siginfo sending functions is now complete, which makes it
       difficult to send a signal without filling in the proper siginfo
       fields.

       At the tail end of the patchset comes the optimization of decreasing
       the size of struct siginfo in the kernel from 128 bytes to about 48
       bytes on 64bit. The fundamental observation that enables this is by
       definition none of the known ways to use struct siginfo uses the extra
       bytes.

       This comes at the cost of a small user space observable difference.
       For the rare case of siginfo being injected into the kernel only what
       can be copied into kernel_siginfo is delivered to the destination, the
       rest of the bytes are set to 0. For cases where the signal and the
       si_code are known this is safe, because we know those bytes are not
       used. For cases where the signal and si_code combination is unknown
       the bits that won't fit into struct kernel_siginfo are tested to
       verify they are zero, and the send fails if they are not.

       I made an extensive search through userspace code and I could not find
       anything that would break because of the above change. If it turns out
       I did break something it will take just the revert of a single change
       to restore kernel_siginfo to the same size as userspace siginfo.

       Testing did reveal dependencies on preferring the signo passed to
       sigqueueinfo over si->signo, so bit the bullet and added the
       complexity necessary to handle that case.

       Testing also revealed bad things can happen if a negative signal
       number is passed into the system calls. Something no sane application
       will do but something a malicious program or a fuzzer might do. So I
       have fixed the code that performs the bounds checks to ensure negative
       signal numbers are handled"

    * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (80 commits)
      signal: Guard against negative signal numbers in copy_siginfo_from_user32
      signal: Guard against negative signal numbers in copy_siginfo_from_user
      signal: In sigqueueinfo prefer sig not si_signo
      signal: Use a smaller struct siginfo in the kernel
      signal: Distinguish between kernel_siginfo and siginfo
      signal: Introduce copy_siginfo_from_user and use it's return value
      signal: Remove the need for __ARCH_SI_PREABLE_SIZE and SI_PAD_SIZE
      signal: Fail sigqueueinfo if si_signo != sig
      signal/sparc: Move EMT_TAGOVF into the generic siginfo.h
      signal/unicore32: Use force_sig_fault where appropriate
      signal/unicore32: Generate siginfo in ucs32_notify_die
      signal/unicore32: Use send_sig_fault where appropriate
      signal/arc: Use force_sig_fault where appropriate
      signal/arc: Push siginfo generation into unhandled_exception
      signal/ia64: Use force_sig_fault where appropriate
      signal/ia64: Use the force_sig(SIGSEGV,...) in ia64_rt_sigreturn
      signal/ia64: Use the generic force_sigsegv in setup_frame
      signal/arm/kvm: Use send_sig_mceerr
      signal/arm: Use send_sig_fault where appropriate
      signal/arm: Use force_sig_fault where appropriate
      ...
| * signal/arc: Use force_sig_fault where appropriate (Eric W. Biederman, 2018-09-27, 1 file, -15/+5)

    Acked-by: Vineet Gupta <vgupta@synopsys.com>
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>