| Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|
| mm: ptep_get() conversion | Ryan Roberts | 2023-06-20 | 47 | -228/+301 |
| mm: move ptep_get() and pmdp_get() helpers | Ryan Roberts | 2023-06-20 | 1 | -14/+14 |
| mm: ptdump should use ptep_get_lockless() | Ryan Roberts | 2023-06-20 | 1 | -1/+1 |
| sh: move the ARCH_DMA_MINALIGN definition to asm/cache.h | Catalin Marinas | 2023-06-20 | 2 | -6/+6 |
| microblaze: move the ARCH_{DMA,SLAB}_MINALIGN definitions to asm/cache.h | Catalin Marinas | 2023-06-20 | 2 | -5/+5 |
| powerpc: move the ARCH_DMA_MINALIGN definition to asm/cache.h | Catalin Marinas | 2023-06-20 | 2 | -4/+4 |
| arm64: enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64 | Catalin Marinas | 2023-06-20 | 2 | -1/+7 |
| mm: slab: reduce the kmalloc() minimum alignment if DMA bouncing possible | Catalin Marinas | 2023-06-20 | 1 | -0/+5 |
| iommu/dma: force bouncing if the size is not cacheline-aligned | Catalin Marinas | 2023-06-20 | 3 | -11/+81 |
| dma-mapping: force bouncing if the kmalloc() size is not cache-line-aligned | Catalin Marinas | 2023-06-20 | 3 | -1/+67 |
| dma-mapping: name SG DMA flag helpers consistently | Robin Murphy | 2023-06-20 | 4 | -10/+10 |
| scatterlist: add dedicated config for DMA flags | Robin Murphy | 2023-06-20 | 3 | -7/+10 |
| arm64: allow kmalloc() caches aligned to the smaller cache_line_size() | Catalin Marinas | 2023-06-20 | 1 | -0/+3 |
| iio: core: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -1/+1 |
| dm-crypt: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -1/+1 |
| drivers/spi: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -1/+1 |
| drivers/usb: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -4/+4 |
| drivers/gpu: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -3/+3 |
| drivers/base: use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN | Catalin Marinas | 2023-06-20 | 1 | -3/+3 |
| mm/slab: limit kmalloc() minimum alignment to dma_get_cache_alignment() | Catalin Marinas | 2023-06-20 | 1 | -3/+21 |
| mm/slab: simplify create_kmalloc_cache() args and make it static | Catalin Marinas | 2023-06-20 | 3 | -16/+9 |
| dma: allow dma_get_cache_alignment() to be overridden by the arch code | Catalin Marinas | 2023-06-20 | 1 | -0/+2 |
| mm/slab: decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN | Catalin Marinas | 2023-06-20 | 3 | -5/+18 |
| mm/hugetlb: fix pgtable lock on pmd sharing | Peter Xu | 2023-06-20 | 1 | -3/+2 |
| mm: remove set_compound_page_dtor() | Sidhartha Kumar | 2023-06-20 | 3 | -12/+2 |
| perf/core: allow pte_offset_map() to fail | Hugh Dickins | 2023-06-20 | 2 | -2/+8 |
| mm/swap: swap_vma_readahead() do the pte_offset_map() | Hugh Dickins | 2023-06-20 | 2 | -40/+24 |
| mm/pgtable: delete pmd_trans_unstable() and friends | Hugh Dickins | 2023-06-20 | 2 | -100/+7 |
| mm/memory: handle_pte_fault() use pte_offset_map_nolock() | Hugh Dickins | 2023-06-20 | 2 | -27/+17 |
| mm/memory: allow pte_offset_map[_lock]() to fail | Hugh Dickins | 2023-06-20 | 1 | -86/+81 |
| mm/khugepaged: allow pte_offset_map[_lock]() to fail | Hugh Dickins | 2023-06-20 | 1 | -23/+49 |
| mm/huge_memory: split huge pmd under one pte_offset_map() | Hugh Dickins | 2023-06-20 | 1 | -10/+18 |
| mm/gup: remove FOLL_SPLIT_PMD use of pmd_trans_unstable() | Hugh Dickins | 2023-06-20 | 1 | -15/+4 |
| mm/migrate_device: allow pte_offset_map_lock() to fail | Hugh Dickins | 2023-06-20 | 1 | -27/+4 |
| mm/mglru: allow pte_offset_map_nolock() to fail | Hugh Dickins | 2023-06-20 | 1 | -9/+7 |
| mm/swapoff: allow pte_offset_map[_lock]() to fail | Hugh Dickins | 2023-06-20 | 1 | -18/+20 |
| mm/madvise: clean up force_shm_swapin_readahead() | Hugh Dickins | 2023-06-20 | 1 | -11/+13 |
| mm/madvise: clean up pte_offset_map_lock() scans | Hugh Dickins | 2023-06-20 | 1 | -54/+68 |
| mm/mremap: retry if either pte_offset_map_*lock() fails | Hugh Dickins | 2023-06-20 | 2 | -10/+23 |
| mm/mprotect: delete pmd_none_or_clear_bad_unless_trans_huge() | Hugh Dickins | 2023-06-20 | 1 | -57/+17 |
| mm/various: give up if pte_offset_map[_lock]() fails | Hugh Dickins | 2023-06-20 | 5 | -13/+22 |
| mm/debug_vm_pgtable,page_table_check: warn pte map fails | Hugh Dickins | 2023-06-20 | 2 | -1/+10 |
| mm/userfaultfd: allow pte_offset_map_lock() to fail | Hugh Dickins | 2023-06-20 | 1 | -0/+8 |
| mm/userfaultfd: retry if pte_offset_map() fails | Hugh Dickins | 2023-06-20 | 1 | -5/+6 |
| mm/hmm: retry if pte_offset_map() fails | Hugh Dickins | 2023-06-20 | 1 | -0/+2 |
| mm/vmalloc: vmalloc_to_page() use pte_offset_kernel() | Hugh Dickins | 2023-06-20 | 1 | -2/+1 |
| mm/vmwgfx: simplify pmd & pud mapping dirty helpers | Hugh Dickins | 2023-06-20 | 1 | -25/+9 |
| mm/pagewalk: walk_pte_range() allow for pte_offset_map() | Hugh Dickins | 2023-06-20 | 1 | -10/+23 |
| mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails | Hugh Dickins | 2023-06-20 | 5 | -28/+36 |
| mm/page_vma_mapped: pte_offset_map_nolock() not pte_lockptr() | Hugh Dickins | 2023-06-20 | 1 | -6/+22 |