path: root/mm
Commit message  (Author, Date, Files changed, Lines -/+)

* mm/sparse: only sub-section aligned range would be populated  (Wei Yang, 2020-08-07, 1 file, -14/+6)
* mm/sparse: never partially remove memmap for early section  (Wei Yang, 2020-08-07, 1 file, -3/+7)
* mm/mremap: start addresses are properly aligned  (Wei Yang, 2020-08-07, 2 files, -6/+0)
* mm/mremap: calculate extent in one place  (Wei Yang, 2020-08-07, 1 file, -3/+3)
* mm/mremap: it is sure to have enough space when extent meets requirement  (Wei Yang, 2020-08-07, 2 files, -11/+6)
* mm: remove unnecessary wrapper function do_mmap_pgoff()  (Peter Collingbourne, 2020-08-07, 4 files, -14/+14)
* mm: mmap: merge vma after call_mmap() if possible  (Miaohe Lin, 2020-08-07, 1 file, -1/+21)
* mm/sparsemem: enable vmem_altmap support in vmemmap_alloc_block_buf()  (Anshuman Khandual, 2020-08-07, 1 file, -15/+13)
* mm/sparsemem: enable vmem_altmap support in vmemmap_populate_basepages()  (Anshuman Khandual, 2020-08-07, 1 file, -5/+11)
* mm: adjust vm_committed_as_batch according to vm overcommit policy  (Feng Tang, 2020-08-07, 2 files, -6/+57)
* mm/util.c: make vm_memory_committed() more accurate  (Feng Tang, 2020-08-07, 1 file, -1/+6)
* mm/mmap: optimize a branch judgment in ksys_mmap_pgoff()  (Zhen Lei, 2020-08-07, 1 file, -3/+4)
* mm: move p?d_alloc_track to separate header file  (Joerg Roedel, 2020-08-07, 3 files, -0/+54)
* mm: move lib/ioremap.c to mm/  (Mike Rapoport, 2020-08-07, 2 files, -1/+288)
* mm: remove unneeded includes of <asm/pgalloc.h>  (Mike Rapoport, 2020-08-07, 2 files, -1/+1)
* mm/memory.c: make remap_pfn_range() reject unaligned addr  (Alex Zhang, 2020-08-07, 1 file, -1/+4)
* mm: remove redundant check non_swap_entry()  (Ralph Campbell, 2020-08-07, 1 file, -1/+1)
* mm/page_counter.c: fix protection usage propagation  (Michal Koutný, 2020-08-07, 1 file, -3/+3)
* mm: memcontrol: don't count limit-setting reclaim as memory pressure  (Johannes Weiner, 2020-08-07, 2 files, -7/+10)
* mm: memcontrol: restore proper dirty throttling when memory.high changes  (Johannes Weiner, 2020-08-07, 1 file, -0/+2)
* memcg, oom: check memcg margin for parallel oom  (Yafang Shao, 2020-08-07, 1 file, -1/+7)
* mm, memcg: decouple e{low,min} state mutations from protection checks  (Chris Down, 2020-08-07, 2 files, -34/+11)
* mm, memcg: avoid stale protection values when cgroup is above protection  (Yafang Shao, 2020-08-07, 2 files, -1/+10)
* mm, memcg: unify reclaim retry limits with page allocator  (Chris Down, 2020-08-07, 1 file, -9/+6)
* mm, memcg: reclaim more aggressively before high allocator throttling  (Chris Down, 2020-08-07, 1 file, -5/+37)
* mm: memcontrol: avoid workload stalls when lowering memory.high  (Roman Gushchin, 2020-08-07, 1 file, -2/+2)
* mm: slab: rename (un)charge_slab_page() to (un)account_slab_page()  (Roman Gushchin, 2020-08-07, 3 files, -8/+8)
* mm: memcg/slab: remove unused argument by charge_slab_page()  (Roman Gushchin, 2020-08-07, 3 files, -4/+3)
* mm: memcontrol: account kernel stack per node  (Shakeel Butt, 2020-08-07, 3 files, -13/+13)
* mm: memcg/slab: use a single set of kmem_caches for all allocations  (Roman Gushchin, 2020-08-07, 5 files, -575/+78)
* mm: memcg/slab: remove redundant check in memcg_accumulate_slabinfo()  (Roman Gushchin, 2020-08-07, 1 file, -3/+0)
* mm: memcg/slab: deprecate slab_root_caches  (Roman Gushchin, 2020-08-07, 4 files, -48/+8)
* mm: memcg/slab: remove memcg_kmem_get_cache()  (Roman Gushchin, 2020-08-07, 3 files, -27/+11)
* mm: memcg/slab: simplify memcg cache creation  (Roman Gushchin, 2020-08-07, 3 files, -57/+15)
* mm: memcg/slab: use a single set of kmem_caches for all accounted allocations  (Roman Gushchin, 2020-08-07, 5 files, -690/+132)
* mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h  (Roman Gushchin, 2020-08-07, 1 file, -12/+0)
* mm: memcg/slab: deprecate memory.kmem.slabinfo  (Roman Gushchin, 2020-08-07, 2 files, -30/+4)
* mm: memcg/slab: charge individual slab objects instead of pages  (Roman Gushchin, 2020-08-07, 1 file, -96/+78)
* mm: memcg/slab: save obj_cgroup for non-root slab objects  (Roman Gushchin, 2020-08-07, 4 files, -20/+86)
* mm: memcg/slab: allocate obj_cgroups for non-root slab pages  (Roman Gushchin, 2020-08-07, 2 files, -3/+66)
* mm: memcg/slab: obj_cgroup API  (Roman Gushchin, 2020-08-07, 1 file, -1/+287)
* mm: memcontrol: decouple reference counting from page accounting  (Johannes Weiner, 2020-08-07, 2 files, -20/+21)
* mm: slub: implement SLUB version of obj_to_index()  (Roman Gushchin, 2020-08-07, 1 file, -10/+5)
* mm: memcg: convert vmstat slab counters to bytes  (Roman Gushchin, 2020-08-07, 9 files, -34/+35)
* mm: memcg: prepare for byte-sized vmstat items  (Roman Gushchin, 2020-08-07, 2 files, -8/+36)
* mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_sta...  (Roman Gushchin, 2020-08-07, 1 file, -19/+24)
* mm: kmem: make memcg_kmem_enabled() irreversible  (Roman Gushchin, 2020-08-07, 1 file, -6/+2)
* tmpfs: support 64-bit inums per-sb  (Chris Down, 2020-08-07, 1 file, -2/+63)
* tmpfs: per-superblock i_ino support  (Chris Down, 2020-08-07, 1 file, -5/+61)
* mm/page_io.c: use blk_io_schedule() for avoiding task hung in sync io  (Xianting Tian, 2020-08-07, 1 file, -1/+1)