path: root/mm/slub.c
Commit message | Author | Date | Files | Lines (-/+)
* mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled | Vlastimil Babka | 2022-01-06 | 1 | -2/+6
* mm/kasan: Convert to struct folio and struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -1/+1
* mm: Convert struct page to struct slab in functions used by other subsystems | Vlastimil Babka | 2022-01-06 | 1 | -1/+1
* mm/slub: Finish struct page to struct slab conversion | Vlastimil Babka | 2022-01-06 | 1 | -53/+52
* mm/slub: Convert most struct page to struct slab by spatch | Vlastimil Babka | 2022-01-06 | 1 | -436/+436
* mm/slub: Convert pfmemalloc_match() to take a struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -19/+6
* mm/slub: Convert __free_slab() to use struct slab | Vlastimil Babka | 2022-01-06 | 1 | -14/+13
* mm/slub: Convert alloc_slab_page() to return a struct slab | Vlastimil Babka | 2022-01-06 | 1 | -10/+16
* mm/slub: Convert print_page_info() to print_slab_info() | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -6/+7
* mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab | Vlastimil Babka | 2022-01-06 | 1 | -7/+11
* mm/slub: Convert kfree() to use a struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -13/+16
* mm/slub: Convert detached_freelist to use a struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -14/+17
* mm: Convert check_heap_object() to use struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -5/+5
* mm: Use struct slab in kmem_obj_info() | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -6/+7
* mm: Convert __ksize() to struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -7/+5
* mm: Convert [un]account_slab_page() to struct slab | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -2/+2
* mm: Split slab into its own type | Matthew Wilcox (Oracle) | 2022-01-06 | 1 | -4/+4
* mm/slub: Make object_err() static | Vlastimil Babka | 2022-01-06 | 1 | -15/+15
* mm/slub: fix endianness bug for alloc/free_traces attributes | Gerald Schaefer | 2021-12-11 | 1 | -6/+9
* mm: emit the "free" trace report before freeing memory in kmem_cache_free() | Yunfeng Ye | 2021-11-20 | 1 | -1/+1
* Merge branch 'akpm' (patches from Andrew) | Linus Torvalds | 2021-11-06 | 1 | -46/+63
|\
| * mm: remove HARDENED_USERCOPY_FALLBACK | Stephen Kitt | 2021-11-06 | 1 | -14/+0
| * mm, slub: use prefetchw instead of prefetch | Hyeonggon Yoo | 2021-11-06 | 1 | -1/+1
| * mm/slub: increase default cpu partial list sizes | Vlastimil Babka | 2021-11-06 | 1 | -4/+4
| * mm, slub: change percpu partial accounting from objects to pages | Vlastimil Babka | 2021-11-06 | 1 | -30/+59
| * slub: add back check for free nonslab objects | Kefeng Wang | 2021-11-06 | 1 | -1/+3
* | Merge tag 'printk-for-5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/... | Linus Torvalds | 2021-11-02 | 1 | -2/+2
|\ \
| |/
|/|
| * vsprintf: Make %pGp print the hex value | Matthew Wilcox (Oracle) | 2021-10-27 | 1 | -2/+2
* | mm, slub: fix incorrect memcg slab count for bulk free | Miaohe Lin | 2021-10-19 | 1 | -1/+3
* | mm, slub: fix potential use-after-free in slab_debugfs_fops | Miaohe Lin | 2021-10-19 | 1 | -2/+4
* | mm, slub: fix potential memoryleak in kmem_cache_open() | Miaohe Lin | 2021-10-19 | 1 | -1/+1
* | mm, slub: fix mismatch between reconstructed freelist depth and cnt | Miaohe Lin | 2021-10-19 | 1 | -2/+9
* | mm, slub: fix two bugs in slab_debug_trace_open() | Miaohe Lin | 2021-10-19 | 1 | -1/+7
|/
* mm, slub: convert kmem_cpu_slab protection to local_lock | Vlastimil Babka | 2021-09-04 | 1 | -35/+111
* mm, slub: use migrate_disable() on PREEMPT_RT | Vlastimil Babka | 2021-09-04 | 1 | -9/+30
* mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg | Vlastimil Babka | 2021-09-04 | 1 | -37/+44
* mm, slub: make slab_lock() disable irqs with PREEMPT_RT | Vlastimil Babka | 2021-09-04 | 1 | -17/+41
* mm: slub: make object_map_lock a raw_spinlock_t | Sebastian Andrzej Siewior | 2021-09-04 | 1 | -3/+3
* mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of ... | Sebastian Andrzej Siewior | 2021-09-04 | 1 | -16/+78
* mm, slab: split out the cpu offline variant of flush_slab() | Vlastimil Babka | 2021-09-04 | 1 | -2/+10
* mm, slub: don't disable irqs in slub_cpu_dead() | Vlastimil Babka | 2021-09-04 | 1 | -5/+1
* mm, slub: only disable irq with spin_lock in __unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -8/+4
* mm, slub: separate detaching of partial list in unfreeze_partials() from unfr... | Vlastimil Babka | 2021-09-04 | 1 | -22/+51
* mm, slub: detach whole partial list at once in unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -3/+7
* mm, slub: discard slabs in unfreeze_partials() without irqs disabled | Vlastimil Babka | 2021-09-04 | 1 | -1/+2
* mm, slub: move irq control into unfreeze_partials() | Vlastimil Babka | 2021-09-04 | 1 | -6/+7
* mm, slub: call deactivate_slab() without disabling irqs | Vlastimil Babka | 2021-09-04 | 1 | -5/+19
* mm, slub: make locking in deactivate_slab() irq-safe | Vlastimil Babka | 2021-09-04 | 1 | -4/+5
* mm, slub: move reset of c->page and freelist out of deactivate_slab() | Vlastimil Babka | 2021-09-04 | 1 | -13/+18
* mm, slub: stop disabling irqs around get_partial() | Vlastimil Babka | 2021-09-04 | 1 | -14/+8
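Most of the 2022-01-06 entries above belong to the series that carves slab metadata out of struct page into a dedicated struct slab type ("mm: Split slab into its own type" and the conversion commits that follow it). As a rough orientation only, here is a minimal, self-contained C sketch of that idea; the toy_page/toy_slab names and fields are invented for illustration and do not match the real definitions in mm/slab.h or mm/slub.c.

```c
/*
 * Toy sketch of the "split slab into its own type" idea: slab metadata
 * gets its own struct instead of being folded into struct page, so
 * allocator helpers take a slab pointer and the compiler rejects
 * accidental struct-page usage.  All names here are hypothetical.
 */
#include <stdio.h>

struct toy_page {                /* stand-in for struct page */
	unsigned long flags;
};

struct toy_slab {                /* stand-in for struct slab */
	struct toy_page page;        /* overlays the underlying page */
	void *freelist;              /* first free object */
	unsigned int inuse;          /* objects handed out */
	unsigned int objects;        /* total objects in this slab */
#ifdef TOY_CPU_PARTIAL           /* mirrors the CONFIG_SLUB_CPU_PARTIAL guard */
	struct toy_slab *next;       /* per-cpu partial list linkage */
#endif
};

/* Helpers now take the dedicated type instead of a bare page pointer. */
static void toy_account_slab(const struct toy_slab *slab)
{
	printf("slab with %u/%u objects in use\n", slab->inuse, slab->objects);
}

int main(void)
{
	struct toy_slab s = { .inuse = 3, .objects = 32 };

	toy_account_slab(&s);
	/* Passing a plain struct toy_page * here would no longer compile. */
	return 0;
}
```

As the conversion commits above suggest, the point of the split is type safety: functions that used to accept struct page now accept the slab-specific type, so misuse is caught at compile time rather than at runtime.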