path: root/mm/slab.c
Commit message | Author | Age | Files | Lines
* mm/slab.c: fix SLAB freelist randomization duplicate entries | John Sperbeck | 2017-01-11 | 1 | -4/+4
* Merge branch 'for-4.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq | Linus Torvalds | 2016-12-13 | 1 | -6/+1
|\
| * Merge branch 'for-4.9' into for-4.10 | Tejun Heo | 2016-10-19 | 1 | -6/+1
| |\
| | * slab, workqueue: remove keventd_up() usage | Tejun Heo | 2016-09-17 | 1 | -6/+1
* | | mm, slab: maintain total slab count instead of active count | David Rientjes | 2016-12-13 | 1 | -41/+29
* | | mm, slab: faster active and free stats | Greg Thelen | 2016-12-13 | 1 | -70/+47
* | | slub: move synchronize_sched out of slab_mutex on shrink | Vladimir Davydov | 2016-12-13 | 1 | -2/+2
* | | mm/slab: improve performance of gathering slabinfo stats | Aruna Ramakrishna | 2016-10-28 | 1 | -16/+27
* | | mm/slab: fix kmemcg cache creation delayed issue | Joonsoo Kim | 2016-10-28 | 1 | -1/+1
|/ /
* / slab: Convert to hotplug state machine | Sebastian Andrzej Siewior | 2016-09-06 | 1 | -63/+51
|/
* Merge tag 'usercopy-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/ke... | Linus Torvalds | 2016-08-08 | 1 | -0/+30
|\
| * mm: SLAB hardened usercopy support | Kees Cook | 2016-07-26 | 1 | -0/+30
* | treewide: replace obsolete _refok by __ref | Fabian Frederick | 2016-08-02 | 1 | -1/+1
* | mm/kasan: get rid of ->state in struct kasan_alloc_meta | Andrey Ryabinin | 2016-08-02 | 1 | -1/+3
* | mm/slab: use list_move instead of list_del/list_add | Wei Yongjun | 2016-07-27 | 1 | -2/+1
* | slab: do not panic on invalid gfp_mask | Michal Hocko | 2016-07-27 | 1 | -2/+4
* | slab: make GFP_SLAB_BUG_MASK information more human readable | Michal Hocko | 2016-07-27 | 1 | -1/+2
* | mm: reorganize SLAB freelist randomization | Thomas Garnier | 2016-07-27 | 1 | -60/+20
|/
* mm, kasan: don't call kasan_krealloc() from ksize(). | Alexander Potapenko | 2016-05-21 | 1 | -1/+1
* mm: kasan: initial memory quarantine implementation | Alexander Potapenko | 2016-05-21 | 1 | -2/+10
* include/linux/nodemask.h: create next_node_in() helper | Andrew Morton | 2016-05-20 | 1 | -10/+3
* mm: slab: remove ZONE_DMA_FLAG | Yang Shi | 2016-05-20 | 1 | -22/+1
* mm: SLAB freelist randomization | Thomas Garnier | 2016-05-20 | 1 | -2/+165
* mm/slab: lockless decision to grow cache | Joonsoo Kim | 2016-05-20 | 1 | -3/+18
* mm/slab: refill cpu cache through a new slab without holding a node lock | Joonsoo Kim | 2016-05-20 | 1 | -32/+36
* mm/slab: separate cache_grow() to two parts | Joonsoo Kim | 2016-05-20 | 1 | -22/+52
* mm/slab: make cache_grow() handle the page allocated on arbitrary node | Joonsoo Kim | 2016-05-20 | 1 | -39/+21
* mm/slab: racy access/modify the slab color | Joonsoo Kim | 2016-05-20 | 1 | -13/+13
* mm/slab: don't keep free slabs if free_objects exceeds free_limit | Joonsoo Kim | 2016-05-20 | 1 | -9/+14
* mm/slab: clean-up kmem_cache_node setup | Joonsoo Kim | 2016-05-20 | 1 | -100/+68
* mm/slab: factor out kmem_cache_node initialization code | Joonsoo Kim | 2016-05-20 | 1 | -29/+45
* mm/slab: drain the free slab as much as possible | Joonsoo Kim | 2016-05-20 | 1 | -9/+3
* mm/slab: remove BAD_ALIEN_MAGIC again | Joonsoo Kim | 2016-05-20 | 1 | -4/+2
* mm/slab: fix the theoretical race by holding proper lock | Joonsoo Kim | 2016-05-20 | 1 | -23/+45
* mm, kasan: add GFP flags to KASAN API | Alexander Potapenko | 2016-03-26 | 1 | -7/+8
* mm, kasan: SLAB support | Alexander Potapenko | 2016-03-26 | 1 | -6/+37
* mm: convert printk(KERN_<LEVEL> to pr_<level> | Joe Perches | 2016-03-17 | 1 | -27/+24
* mm: coalesce split strings | Joe Perches | 2016-03-17 | 1 | -18/+10
* mm: thp: set THP defrag by default to madvise and add a stall-free defrag option | Mel Gorman | 2016-03-17 | 1 | -4/+4
* mm: memcontrol: report slab usage in cgroup2 memory.stat | Vladimir Davydov | 2016-03-17 | 1 | -3/+5
* mm, sl[au]b: print gfp_flags as strings in slab_out_of_memory() | Vlastimil Babka | 2016-03-16 | 1 | -6/+4
* mm/slab: re-implement pfmemalloc support | Joonsoo Kim | 2016-03-16 | 1 | -168/+116
* mm/slab: avoid returning values by reference | Joonsoo Kim | 2016-03-16 | 1 | -5/+8
* mm/slab: introduce new slab management type, OBJFREELIST_SLAB | Joonsoo Kim | 2016-03-16 | 1 | -8/+86
* mm/slab: factor out debugging initialization in cache_init_objs() | Joonsoo Kim | 2016-03-16 | 1 | -6/+18
* mm/slab: factor out slab list fixup code | Joonsoo Kim | 2016-03-16 | 1 | -12/+13
* mm/slab: make criteria for off slab determination robust and simple | Joonsoo Kim | 2016-03-16 | 1 | -28/+17
* mm/slab: do not change cache size if debug pagealloc isn't possible | Joonsoo Kim | 2016-03-16 | 1 | -4/+11
* mm/slab: clean up cache type determination | Joonsoo Kim | 2016-03-16 | 1 | -34/+71
* mm/slab: align cache size first before determination of OFF_SLAB candidate | Joonsoo Kim | 2016-03-16 | 1 | -11/+15