path: root/mm/slub.c
Commit message | Author | Age | Files | Lines
* ipc: define the slab_memory_callback priority as a constant | Nadia Derbey | 2008-04-29 | 1 | -1/+1
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/pen... | Linus Torvalds | 2008-04-28 | 1 | -194/+287
|\
| * slub: pack objects denser | Christoph Lameter | 2008-04-27 | 1 | -2/+2
| * slub: Calculate min_objects based on number of processors. | Christoph Lameter | 2008-04-27 | 1 | -1/+3
| * slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS | Christoph Lameter | 2008-04-27 | 1 | -21/+2
| * slub: Simplify any_slab_object checks | Christoph Lameter | 2008-04-27 | 1 | -9/+1
| * slub: Make the order configurable for each slab cache | Christoph Lameter | 2008-04-27 | 1 | -7/+22
| * slub: Drop fallback to page allocator method | Christoph Lameter | 2008-04-27 | 1 | -41/+2
| * slub: Fallback to minimal order during slab page allocation | Christoph Lameter | 2008-04-27 | 1 | -11/+28
| * slub: Update statistics handling for variable order slabs | Christoph Lameter | 2008-04-27 | 1 | -53/+97
| * slub: Add kmem_cache_order_objects struct | Christoph Lameter | 2008-04-27 | 1 | -25/+51
| * slub: for_each_object must be passed the number of objects in a slab | Christoph Lameter | 2008-04-27 | 1 | -6/+18
| * slub: Store max number of objects in the page struct. | Christoph Lameter | 2008-04-27 | 1 | -20/+34
| * slub: Dump list of objects not freed on kmem_cache_close() | Christoph Lameter | 2008-04-27 | 1 | -1/+31
| * slub: free_list() cleanup | Christoph Lameter | 2008-04-27 | 1 | -11/+7
| * slub: improve kmem_cache_destroy() error message | Pekka Enberg | 2008-04-27 | 1 | -2/+5
* | mm: move cache_line_size() to <linux/cache.h> | Pekka Enberg | 2008-04-28 | 1 | -5/+0
* | mm: have zonelist contains structs with both a zone pointer and zone_idx | Mel Gorman | 2008-04-28 | 1 | -1/+1
* | mm: use two zonelist that are filtered by GFP mask | Mel Gorman | 2008-04-28 | 1 | -3/+5
* | mm: introduce node_zonelist() for accessing the zonelist for a GFP mask | Mel Gorman | 2008-04-28 | 1 | -2/+1
|/
* slab_err: Pass parameters correctly to slab_bug | Christoph Lameter | 2008-04-23 | 1 | -2/+2
* slub: No need for per node slab counters if !SLUB_DEBUG | Christoph Lameter | 2008-04-14 | 1 | -11/+40
* slub: Move map/flag clearing to __free_slab | Christoph Lameter | 2008-04-14 | 1 | -2/+2
* slub: Fixes to per cpu stat output in sysfs | Christoph Lameter | 2008-04-14 | 1 | -1/+3
* slub: Deal with config variable dependencies | Christoph Lameter | 2008-04-14 | 1 | -15/+15
* slub: Reduce #ifdef ZONE_DMA by moving kmalloc_caches_dma near dma logic | Christoph Lameter | 2008-04-14 | 1 | -4/+1
* slub: Initialize per-cpu stats | Pekka Enberg | 2008-04-14 | 1 | -0/+3
* Fix undefined count_partial if !CONFIG_SLABINFO | Christoph Lameter | 2008-04-01 | 1 | -1/+1
* Revert "SLUB: remove useless masking of GFP_ZERO" | Linus Torvalds | 2008-03-28 | 1 | -0/+3
* count_partial() is not used if !SLUB_DEBUG and !CONFIG_SLABINFO | Christoph Lameter | 2008-03-26 | 1 | -0/+2
* slub page alloc fallback: Enable interrupts for GFP_WAIT. | Christoph Lameter | 2008-03-17 | 1 | -3/+9
* slub: Do not cross cacheline boundaries for very small objects | Nick Piggin | 2008-03-07 | 1 | -4/+7
* slub statistics: Fix check for DEACTIVATE_REMOTE_FREES | Christoph Lameter | 2008-03-07 | 1 | -1/+1
* slub: fix possible NULL pointer dereference | Cyrill Gorcunov | 2008-03-03 | 1 | -2/+4
* slub: Add kmalloc_large_node() to support kmalloc_node fallback | Christoph Lameter | 2008-03-03 | 1 | -2/+13
* slub: look up object from the freelist once | Pekka J Enberg | 2008-03-03 | 1 | -2/+0
* slub: Fix up comments | Christoph Lameter | 2008-03-03 | 1 | -21/+28
* slub: Rearrange #ifdef CONFIG_SLUB_DEBUG in calculate_sizes() | Christoph Lameter | 2008-03-03 | 1 | -7/+8
* slub: Remove BUG_ON() from ksize and omit checks for !SLUB_DEBUG | Christoph Lameter | 2008-03-03 | 1 | -4/+2
* slub: Use the objsize from the kmem_cache_cpu structure | Christoph Lameter | 2008-03-03 | 1 | -1/+1
* slub: Remove useless checks in alloc_debug_processing | Christoph Lameter | 2008-03-03 | 1 | -2/+2
* slub: Remove objsize check in kmem_cache_flags() | Christoph Lameter | 2008-03-03 | 1 | -23/+4
* slub: rename slab_objects to show_slab_objects | Christoph Lameter | 2008-03-03 | 1 | -5/+5
* Revert "unique end pointer" patch | Christoph Lameter | 2008-03-03 | 1 | -47/+23
* Revert "SLUB: Alternate fast paths using cmpxchg_local" | Linus Torvalds | 2008-02-19 | 1 | -86/+1
* slub: Support 4k kmallocs again to compensate for page allocator slowness | Christoph Lameter | 2008-02-15 | 1 | -9/+9
* slub: Fallback to kmalloc_large for failing higher order allocs | Christoph Lameter | 2008-02-15 | 1 | -5/+38
* slub: Determine gfpflags once and not every time a slab is allocated | Christoph Lameter | 2008-02-15 | 1 | -8/+11
* make slub.c:slab_address() static | Adrian Bunk | 2008-02-15 | 1 | -1/+1
* slub: kmalloc page allocator pass-through cleanup | Pekka Enberg | 2008-02-15 | 1 | -8/+6