path: root/mm/slub.c
Commit message                                                             | Author             | Age        | Files | Lines
* SLUB: fix checkpatch warnings                                            | Ingo Molnar        | 2008-02-08 | 1     | -16/+21
* Use non atomic unlock                                                    | Nick Piggin        | 2008-02-08 | 1     | -1/+1
* SLUB: Support for performance statistics                                 | Christoph Lameter  | 2008-02-08 | 1     | -8/+119
* SLUB: Alternate fast paths using cmpxchg_local                           | Christoph Lameter  | 2008-02-08 | 1     | -5/+88
* SLUB: Use unique end pointer for each slab page.                         | Christoph Lameter  | 2008-02-08 | 1     | -23/+47
* SLUB: Deal with annoying gcc warning on kfree()                          | Christoph Lameter  | 2008-02-08 | 1     | -1/+2
* SLUB: Do not upset lockdep                                               | root               | 2008-02-04 | 1     | -0/+8
* SLUB: Fix coding style violations                                        | Pekka Enberg       | 2008-02-04 | 1     | -23/+23
* Add parameter to add_partial to avoid having two functions               | Christoph Lameter  | 2008-02-04 | 1     | -16/+15
* SLUB: rename defrag to remote_node_defrag_ratio                          | Christoph Lameter  | 2008-02-04 | 1     | -8/+9
* Move count_partial before kmem_cache_shrink                              | Christoph Lameter  | 2008-02-04 | 1     | -13/+13
* SLUB: Fix sysfs refcounting                                              | Christoph Lameter  | 2008-02-04 | 1     | -2/+13
* slub: fix shadowed variable sparse warnings                              | Harvey Harrison    | 2008-02-04 | 1     | -21/+18
* Kobject: convert mm/slub.c to use kobject_init/add_ng()                  | Greg Kroah-Hartman | 2008-01-25 | 1     | -5/+4
* kobject: convert kernel_kset to be a kobject                             | Greg Kroah-Hartman | 2008-01-25 | 1     | -2/+1
* kset: move /sys/slab to /sys/kernel/slab                                 | Greg Kroah-Hartman | 2008-01-25 | 1     | -1/+2
* kset: convert slub to use kset_create                                    | Greg Kroah-Hartman | 2008-01-25 | 1     | -8/+7
* kobject: remove struct kobj_type from struct kset                        | Greg Kroah-Hartman | 2008-01-25 | 1     | -2/+3
* Unify /proc/slabinfo configuration                                       | Linus Torvalds     | 2008-01-02 | 1     | -2/+9
* slub: provide /proc/slabinfo                                             | Pekka J Enberg     | 2008-01-01 | 1     | -13/+92
* SLUB: Improve hackbench speed                                            | Christoph Lameter  | 2007-12-22 | 1     | -2/+2
* SLUB: remove useless masking of GFP_ZERO                                 | Christoph Lameter  | 2007-12-18 | 1     | -3/+0
* Avoid double memclear() in SLOB/SLUB                                     | Linus Torvalds     | 2007-12-09 | 1     | -0/+3
* SLUB's ksize() fails for size > 2048                                     | Vegard Nossum      | 2007-12-05 | 1     | -1/+5
* SLUB: killed the unused "end" variable                                   | Denis Cheng        | 2007-11-12 | 1     | -2/+0
* SLUB: Fix memory leak by not reusing cpu_slab                            | Christoph Lameter  | 2007-11-05 | 1     | -19/+1
* missing atomic_read_long() in slub.c                                     | Al Viro            | 2007-10-29 | 1     | -1/+1
* memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic | Yasunori Goto    | 2007-10-22 | 1     | -0/+118
* Slab API: remove useless ctor parameter and reorder parameters           | Christoph Lameter  | 2007-10-17 | 1     | -6/+6
* SLUB: simplify IRQ off handling                                          | Christoph Lameter  | 2007-10-17 | 1     | -11/+7
* slub: list_locations() can use GFP_TEMPORARY                             | Andrew Morton      | 2007-10-16 | 1     | -1/+1
* SLUB: Optimize cacheline use for zeroing                                 | Christoph Lameter  | 2007-10-16 | 1     | -2/+12
* SLUB: Place kmem_cache_cpu structures in a NUMA aware way                | Christoph Lameter  | 2007-10-16 | 1     | -14/+154
* SLUB: Avoid touching page struct when freeing to per cpu slab            | Christoph Lameter  | 2007-10-16 | 1     | -5/+9
* SLUB: Move page->offset to kmem_cache_cpu->offset                        | Christoph Lameter  | 2007-10-16 | 1     | -41/+11
* SLUB: Do not use page->mapping                                           | Christoph Lameter  | 2007-10-16 | 1     | -2/+0
* SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 2007-10-16 | 1    | -74/+116
* Group short-lived and reclaimable kernel allocations                     | Mel Gorman         | 2007-10-16 | 1     | -0/+3
* Categorize GFP flags                                                     | Christoph Lameter  | 2007-10-16 | 1     | -2/+3
* Memoryless nodes: SLUB support                                           | Christoph Lameter  | 2007-10-16 | 1     | -8/+8
* Slab allocators: fail if ksize is called with a NULL parameter           | Christoph Lameter  | 2007-10-16 | 1     | -1/+2
* {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check           | Satyam Sharma      | 2007-10-16 | 1     | -4/+4
* SLUB: direct pass through of page size or higher kmalloc requests        | Christoph Lameter  | 2007-10-16 | 1     | -25/+38
* slub.c:early_kmem_cache_node_alloc() shouldn't be __init                 | Adrian Bunk        | 2007-10-16 | 1     | -2/+2
* SLUB: accurately compare debug flags during slab cache merge             | Christoph Lameter  | 2007-09-12 | 1     | -15/+23
* slub: do not fail if we cannot register a slab with sysfs                | Christoph Lameter  | 2007-08-31 | 1     | -2/+6
* SLUB: do not fail on broken memory configurations                        | Christoph Lameter  | 2007-08-23 | 1     | -1/+8
* SLUB: use atomic_long_read for atomic_long variables                     | Christoph Lameter  | 2007-08-23 | 1     | -3/+3
* SLUB: Fix dynamic dma kmalloc cache creation                             | Christoph Lameter  | 2007-08-10 | 1     | -14/+45
* SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink               | Christoph Lameter  | 2007-08-10 | 1     | -7/+2