path: root/mm/slub.c
Commit message  (Author, Date, Files changed, Lines removed/added)
* Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kern...  (Linus Torvalds, 2013-02-26, 1 file, -1/+1)
|\
| * taint: add explicit flag to show whether lock dep is still OK.  (Rusty Russell, 2013-01-21, 1 file, -1/+1)
* | mm: rename page struct field helpers  (Mel Gorman, 2013-02-24, 1 file, -1/+1)
|/
* slub: drop mutex before deleting sysfs entry  (Glauber Costa, 2012-12-19, 1 file, -1/+12)
* memcg: add comments clarifying aspects of cache attribute propagation  (Glauber Costa, 2012-12-19, 1 file, -4/+17)
* slub: slub-specific propagation changes  (Glauber Costa, 2012-12-19, 1 file, -1/+75)
* memcg: destroy memcg caches  (Glauber Costa, 2012-12-19, 1 file, -1/+6)
* sl[au]b: allocate objects from memcg cache  (Glauber Costa, 2012-12-19, 1 file, -3/+4)
* sl[au]b: always get the cache from its page in kmem_cache_free()  (Glauber Costa, 2012-12-19, 1 file, -12/+3)
* slab/slub: consider a memcg parameter in kmem_create_cache  (Glauber Costa, 2012-12-19, 1 file, -4/+15)
* Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi...  (Linus Torvalds, 2012-12-18, 1 file, -231/+70)
|\
| * mm/sl[aou]b: Common alignment code  (Christoph Lameter, 2012-12-11, 1 file, -37/+1)
| * slub: Use statically allocated kmem_cache boot structure for bootstrap  (Christoph Lameter, 2012-12-11, 1 file, -47/+20)
| * mm, sl[au]b: create common functions for boot slab creation  (Christoph Lameter, 2012-12-11, 1 file, -32/+5)
| * slub: Use correct cpu_slab on dead cpu  (Christoph Lameter, 2012-12-11, 1 file, -5/+7)
| * slab: Ignore internal flags in cache creation  (Glauber Costa, 2012-10-31, 1 file, -3/+0)
| * mm/sl[aou]b: Move common kmem_cache_size() to slab.h  (Ezequiel Garcia, 2012-10-31, 1 file, -9/+0)
| * slub: Commonize slab_cache field in struct page  (Glauber Costa, 2012-10-24, 1 file, -12/+12)
| * Merge branch 'slab/procfs' into slab/next  (Pekka Enberg, 2012-10-24, 1 file, -66/+11)
| |\
| | * sl[au]b: Process slabinfo_show in common code  (Glauber Costa, 2012-10-24, 1 file, -14/+10)
| | * mm/sl[au]b: Move print_slabinfo_header to slab_common.c  (Glauber Costa, 2012-10-24, 1 file, -10/+0)
| | * mm/sl[au]b: Move slabinfo processing to slab_common.c  (Glauber Costa, 2012-10-24, 1 file, -46/+5)
| * | slub: remove one code path and reduce lock contention in __slab_free()  (Joonsoo Kim, 2012-10-19, 1 file, -20/+14)
| |/
* / slub, hotplug: ignore unrelated node's hot-adding and hot-removing  (Lai Jiangshan, 2012-12-12, 1 file, -2/+2)
|/
* Merge branch 'slab/common-for-cgroups' into slab/for-linus  (Pekka Enberg, 2012-10-03, 1 file, -89/+56)
|\
| * slub: Zero initial memory segment for kmem_cache and kmem_cache_node  (Christoph Lameter, 2012-09-10, 1 file, -1/+1)
| * Revert "mm/sl[aou]b: Move sysfs_slab_add to common"  (Pekka Enberg, 2012-09-05, 1 file, -2/+17)
| * mm/sl[aou]b: Move kmem_cache refcounting to common code  (Christoph Lameter, 2012-09-05, 1 file, -1/+0)
| * mm/sl[aou]b: Shrink __kmem_cache_create() parameter lists  (Christoph Lameter, 2012-09-05, 1 file, -21/+18)
| * mm/sl[aou]b: Move kmem_cache allocations into common code  (Christoph Lameter, 2012-09-05, 1 file, -17/+7)
| * mm/sl[aou]b: Move sysfs_slab_add to common  (Christoph Lameter, 2012-09-05, 1 file, -13/+2)
| * mm/sl[aou]b: Do slab aliasing call from common code  (Christoph Lameter, 2012-09-05, 1 file, -4/+11)
| * mm/sl[aou]b: Move duping of slab name to slab_common.c  (Christoph Lameter, 2012-09-05, 1 file, -19/+2)
| * mm/sl[aou]b: Get rid of __kmem_cache_destroy  (Christoph Lameter, 2012-09-05, 1 file, -5/+5)
| * mm/sl[aou]b: Move freeing of kmem_cache structure to common code  (Christoph Lameter, 2012-09-05, 1 file, -2/+0)
| * mm/sl[aou]b: Use "kmem_cache" name for slab cache with kmem_cache struct  (Christoph Lameter, 2012-09-05, 1 file, -2/+0)
| * mm/sl[aou]b: Extract a common function for kmem_cache_destroy  (Christoph Lameter, 2012-09-05, 1 file, -25/+11)
| * mm/sl[aou]b: Move list_add() to slab_common.c  (Christoph Lameter, 2012-09-05, 1 file, -2/+0)
| * mm/slub: Use kmem_cache for the kmem_cache structure  (Christoph Lameter, 2012-09-05, 1 file, -4/+4)
| * mm/slub: Add debugging to verify correct cache use on kmem_cache_free()  (Christoph Lameter, 2012-09-05, 1 file, -0/+7)
* | Merge branch 'slab/next' into slab/for-linus  (Pekka Enberg, 2012-10-03, 1 file, -24/+39)
|\ \
| * | slub: init_kmem_cache_cpus() and put_cpu_partial() can be static  (Fengguang Wu, 2012-10-03, 1 file, -2/+2)
| * | mm, slub: Rename slab_alloc() -> slab_alloc_node() to match SLAB  (Ezequiel Garcia, 2012-09-25, 1 file, -9/+15)
| * | mm, sl[au]b: Taint kernel when we detect a corrupted slab  (Dave Jones, 2012-09-19, 1 file, -0/+2)
| |/
| * slub: reduce failure of this_cpu_cmpxchg in put_cpu_partial() after unfreezing  (Joonsoo Kim, 2012-08-16, 1 file, -0/+1)
| * slub: Take node lock during object free checks  (Christoph Lameter, 2012-08-16, 1 file, -12/+18)
| * slub: use free_page instead of put_page for freeing kmalloc allocation  (Glauber Costa, 2012-08-16, 1 file, -1/+1)
* | slub: consider pfmemalloc_match() in get_partial_node()  (Joonsoo Kim, 2012-09-18, 1 file, -5/+10)
|/
* mm: slub: optimise the SLUB fast path to avoid pfmemalloc checks  (Christoph Lameter, 2012-08-01, 1 file, -4/+3)
* mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages  (Mel Gorman, 2012-08-01, 1 file, -2/+27)
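The table above is cgit's rendering of a per-file history. A roughly equivalent view can be produced locally with plain git; this is a sketch assuming a Linux kernel checkout (cgit's "follow" mode corresponds to git's --follow, and the format placeholders substitute subject, author name, and date):

```shell
# Graph view of mm/slub.c history: subject, author, short date per commit.
git log --graph --follow --date=short \
    --pretty=format:'%s  (%an, %ad)' \
    -- mm/slub.c
```

Adding `--numstat` would also print the per-commit added/removed line counts shown in the table's last column.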