| Commit message | Author | Date | Files | Lines |
* | slub: list_locations() can use GFP_TEMPORARY | Andrew Morton | 2007-10-16 | 1 | -1/+1 |
* | SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 2007-10-16 | 1 | -2/+12 |
* | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 2007-10-16 | 1 | -14/+154 |
* | SLUB: Avoid touching page struct when freeing to per cpu slab | Christoph Lameter | 2007-10-16 | 1 | -5/+9 |
* | SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 2007-10-16 | 1 | -41/+11 |
* | SLUB: Do not use page->mapping | Christoph Lameter | 2007-10-16 | 1 | -2/+0 |
* | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 2007-10-16 | 1 | -74/+116 |
* | Group short-lived and reclaimable kernel allocations | Mel Gorman | 2007-10-16 | 1 | -0/+3 |
* | Categorize GFP flags | Christoph Lameter | 2007-10-16 | 1 | -2/+3 |
* | Memoryless nodes: SLUB support | Christoph Lameter | 2007-10-16 | 1 | -8/+8 |
* | Slab allocators: fail if ksize is called with a NULL parameter | Christoph Lameter | 2007-10-16 | 1 | -1/+2 |
* | {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check | Satyam Sharma | 2007-10-16 | 1 | -4/+4 |
* | SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 2007-10-16 | 1 | -25/+38 |
* | slub.c:early_kmem_cache_node_alloc() shouldn't be __init | Adrian Bunk | 2007-10-16 | 1 | -2/+2 |
* | SLUB: accurately compare debug flags during slab cache merge | Christoph Lameter | 2007-09-12 | 1 | -15/+23 |
* | slub: do not fail if we cannot register a slab with sysfs | Christoph Lameter | 2007-08-31 | 1 | -2/+6 |
* | SLUB: do not fail on broken memory configurations | Christoph Lameter | 2007-08-23 | 1 | -1/+8 |
* | SLUB: use atomic_long_read for atomic_long variables | Christoph Lameter | 2007-08-23 | 1 | -3/+3 |
* | SLUB: Fix dynamic dma kmalloc cache creation | Christoph Lameter | 2007-08-10 | 1 | -14/+45 |
* | SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink | Christoph Lameter | 2007-08-10 | 1 | -7/+2 |
* | slub: fix bug in slub debug support | Peter Zijlstra | 2007-07-30 | 1 | -1/+1 |
* | slub: add lock debugging check | Peter Zijlstra | 2007-07-30 | 1 | -0/+1 |
* | mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 2007-07-20 | 1 | -3/+1 |
* | slub: fix ksize() for zero-sized pointers | Linus Torvalds | 2007-07-19 | 1 | -1/+1 |
* | SLUB: Fix CONFIG_SLUB_DEBUG use for CONFIG_NUMA | Christoph Lameter | 2007-07-17 | 1 | -0/+4 |
* | SLUB: Move sysfs operations outside of slub_lock | Christoph Lameter | 2007-07-17 | 1 | -13/+15 |
* | SLUB: Do not allocate object bit array on stack | Christoph Lameter | 2007-07-17 | 1 | -14/+25 |
* | Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 2007-07-17 | 1 | -11/+0 |
* | SLUB: Do not use length parameter in slab_alloc() | Christoph Lameter | 2007-07-17 | 1 | -11/+9 |
* | SLUB: Style fix up the loop to disable small slabs | Christoph Lameter | 2007-07-17 | 1 | -1/+1 |
* | mm/slub.c: make code static | Adrian Bunk | 2007-07-17 | 1 | -3/+3 |
* | SLUB: Simplify dma index -> size calculation | Christoph Lameter | 2007-07-17 | 1 | -9/+1 |
* | SLUB: faster more efficient slab determination for __kmalloc | Christoph Lameter | 2007-07-17 | 1 | -7/+64 |
* | SLUB: do proper locking during dma slab creation | Christoph Lameter | 2007-07-17 | 1 | -2/+9 |
* | SLUB: extract dma_kmalloc_cache from get_cache. | Christoph Lameter | 2007-07-17 | 1 | -30/+36 |
* | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 2007-07-17 | 1 | -6/+7 |
* | Slab allocators: support __GFP_ZERO in all allocators | Christoph Lameter | 2007-07-17 | 1 | -9/+15 |
* | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 2007-07-17 | 1 | -13/+16 |
* | Slab allocators: consolidate code for krealloc in mm/util.c | Christoph Lameter | 2007-07-17 | 1 | -37/+0 |
* | SLUB Debug: fix initial object debug state of NUMA bootstrap objects | Christoph Lameter | 2007-07-17 | 1 | -1/+2 |
* | SLUB: ensure that the number of objects per slab stays low for high orders | Christoph Lameter | 2007-07-17 | 1 | -2/+19 |
* | SLUB slab validation: Move tracking information alloc outside of lock | Christoph Lameter | 2007-07-17 | 1 | -10/+7 |
* | SLUB: use list_for_each_entry for loops over all slabs | Christoph Lameter | 2007-07-17 | 1 | -38/+13 |
* | SLUB: change error reporting format to follow lockdep loosely | Christoph Lameter | 2007-07-17 | 1 | -123/+154 |
* | SLUB: support slub_debug on by default | Christoph Lameter | 2007-07-16 | 1 | -28/+51 |
* | slub: remove useless EXPORT_SYMBOL | Christoph Lameter | 2007-07-06 | 1 | -1/+0 |
* | SLUB: Make lockdep happy by not calling add_partial with interrupts enabled d... | Christoph Lameter | 2007-07-03 | 1 | -2/+6 |
* | SLUB: fix behavior if the text output of list_locations overflows PAGE_SIZE | Christoph Lameter | 2007-06-24 | 1 | -2/+4 |
* | SLUB: minimum alignment fixes | Christoph Lameter | 2007-06-16 | 1 | -5/+15 |
* | SLUB slab validation: Alloc while interrupts are disabled must use GFP_ATOMIC | Christoph Lameter | 2007-06-16 | 1 | -1/+1 |