Vlastimil Babka | 96db800f5d | mm: rename alloc_pages_exact_node() to __alloc_pages_node() | 10 years ago
Joonsoo Kim | 45eb00cd3a | mm/slub: don't wait for high-order page allocation | 10 years ago
Konstantin Khlebnikov | 80da026a8e | mm/slub: fix slab double-free in case of duplicate sysfs filename | 10 years ago
Thomas Gleixner | 588f8ba913 | mm/slub: move slab initialization into irq enabled region | 10 years ago
Jesper Dangaard Brouer | 3eed034d04 | slub: add support for kmem_cache_debug in bulk calls | 10 years ago
Jesper Dangaard Brouer | fbd02630c6 | slub: initial bulk free implementation | 10 years ago
Jesper Dangaard Brouer | ebe909e0fd | slub: improve bulk alloc strategy | 10 years ago
Jesper Dangaard Brouer | 994eb764ec | slub bulk alloc: extract objects from the per cpu slab | 10 years ago
Christoph Lameter | 484748f0b6 | slab: infrastructure for bulk object allocation and freeing | 10 years ago
Jesper Dangaard Brouer | 2ae44005b6 | slub: fix spelling succedd to succeed | 10 years ago
Michal Hocko | 2f064f3485 | mm: make page pfmemalloc check more robust | 10 years ago
Daniel Sanders | 34cc6990d4 | slab: correct size_index table before replacing the bootstrap kmem_cache_node | 10 years ago
Jason Low | 4db0c3c298 | mm: remove rest of ACCESS_ONCE() usages | 10 years ago
Joe Perches | 6f6528a163 | slub: use bool function return values of true/false not 1/0 | 10 years ago
Chris J Arges | 08303a73c6 | mm/slub.c: parse slub_debug O option in switch statement | 10 years ago
Mark Rutland | 859b7a0e89 | mm/slub: fix lockups on PREEMPT && !SMP kernels | 10 years ago
Andrey Ryabinin | 0316bec22e | mm: slub: add kernel address sanitizer support for slub allocator | 11 years ago
Andrey Ryabinin | a79316c617 | mm: slub: introduce metadata_access_enable()/metadata_access_disable() | 11 years ago
Andrey Ryabinin | 75c66def8d | mm: slub: share object_err function | 11 years ago
Tejun Heo | 5024c1d71b | slub: use %*pb[l] to print bitmaps including cpumasks and nodemasks | 11 years ago
Vladimir Davydov | d6e0b7fa11 | slub: make dead caches discard free slabs immediately | 11 years ago
Vladimir Davydov | ce3712d74d | slub: fix kmem_cache_shrink return value | 11 years ago
Vladimir Davydov | 832f37f5d5 | slub: never fail to shrink cache | 11 years ago
Vladimir Davydov | 426589f571 | slab: link memcg caches of the same kind into a list | 11 years ago
Vladimir Davydov | f7ce3190c4 | slab: embed memcg_cache_params to kmem_cache | 11 years ago
Kim Phillips | 94e4d712eb | mm/slub.c: fix typo in comment | 11 years ago
Joonsoo Kim | 9aabf810a6 | mm/slub: optimize alloc/free fastpath by removing preemption on/off | 11 years ago
Vladimir Davydov | dee2f8aaab | slub: fix cpuset check in get_any_partial | 11 years ago
Vladimir Davydov | 8135be5a80 | memcg: fix possible use-after-free in memcg_kmem_get_cache() | 11 years ago
Linus Torvalds | 2756d373a3 | Merge branch 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup | 11 years ago