/*
 * SLOB Allocator: Simple List Of Blocks
 *
 * Matt Mackall <mpm@selenic.com> 12/30/03
 *
 * NUMA support by Paul Mundt, 2007.
 *
 * How SLOB works:
 *
 * The core of SLOB is a traditional K&R style heap allocator, with
 * support for returning aligned objects. The granularity of this
 * allocator can be as little as 2 bytes; however, most architectures
 * typically require 4 bytes on 32-bit and 8 bytes on 64-bit.
 *
 * The slob heap is a set of linked lists of pages from alloc_pages(),
 * and within each page, there is a singly-linked list of free blocks
 * (slob_t). The heap is grown on demand. To reduce fragmentation,
 * heap pages are segregated into three lists: objects smaller than
 * 256 bytes, objects smaller than 1024 bytes, and all larger objects.
 *
 * Allocation from the heap involves first searching for a page with
 * sufficient free blocks (using a next-fit-like approach) followed by
 * a first-fit scan of the page. Deallocation inserts objects back
 * into the free list in address order, so this is effectively an
 * address-ordered first fit.
 *
 * Above this is an implementation of kmalloc/kfree. Blocks returned
 * from kmalloc are prepended with a 4-byte header with the kmalloc size.
 * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
 * alloc_pages() directly, allocating compound pages so the page order
 * does not have to be separately tracked.
 * These objects are detected in kfree() because PageSlab()
 * is false for them.
 *
 * SLAB is emulated on top of SLOB by simply calling constructors and
 * destructors for every SLAB allocation. Objects are returned with
 * 4-byte alignment unless the SLAB_HWCACHE_ALIGN flag is set, in which
 * case the low-level allocator will fragment blocks to create the proper
 * alignment. Again, objects of page-size or greater are allocated by
 * calling alloc_pages(). As SLAB objects know their size, no separate
 * size bookkeeping is necessary and there is essentially no allocation
 * space overhead, and compound pages aren't needed for multi-page
 * allocations.
 *
 * NUMA support in SLOB is fairly simplistic, pushing most of the real
 * logic down to the page allocator, and simply doing the node accounting
 * on the upper levels. If a node id is explicitly provided,
 * alloc_pages_exact_node() is called with that node id; otherwise the
 * allocation defaults to the current node, as per numa_node_id().
 *
 * Node-aware pages are still inserted into the global freelists, and
 * these are scanned by matching against the node id encoded in the
 * page flags. As a result, block allocations that can be satisfied from
 * the freelist are only satisfied from pages residing on the same node,
 * in order to prevent random node placement.
 */

#include <linux/kernel.h>
#include <linux/slab.h>

#include <linux/mm.h>
#include <linux/swap.h> /* struct reclaim_state */
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
#include <linux/kmemleak.h>

#include <trace/events/kmem.h>

#include <linux/atomic.h>

#include "slab.h"

/*
 * slob_block has a field 'units', which indicates size of block if +ve,
 * or offset of next block if -ve (in SLOB_UNITs).
 *
 * Free blocks of size 1 unit simply contain the offset of the next block.
 * Those with larger size contain their size in the first SLOB_UNIT of
 * memory, and the offset of the next free block in the second SLOB_UNIT.
 */
#if PAGE_SIZE <= (32767 * 2)
typedef s16 slobidx_t;
#else
typedef s32 slobidx_t;
#endif

struct slob_block {
	slobidx_t units;
};
typedef struct slob_block slob_t;
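
/*
 * Worked example of the encoding above (illustrative numbers only;
 * assumes PAGE_SIZE <= 65534, so slobidx_t is an s16 and slob_t is
 * one 2-byte unit):
 *
 *   - A free block of 5 units whose successor lives 40 units past the
 *     page base stores s[0].units = 5 and s[1].units = 40.
 *   - A 1-unit free block with the same successor has no room for two
 *     fields, so it stores just s[0].units = -40; the sign tells
 *     slob_next() how to decode it.
 */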

/*
 * All partially free slob pages go on these lists.
 */
#define SLOB_BREAK1 256
#define SLOB_BREAK2 1024
static LIST_HEAD(free_slob_small);
static LIST_HEAD(free_slob_medium);
static LIST_HEAD(free_slob_large);
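
/*
 * Routing sketch (derived from the break points above): a 200-byte
 * request is served from free_slob_small, a 512-byte one from
 * free_slob_medium, and a 2048-byte one from free_slob_large. The
 * comparisons in slob_alloc() are strict, so a request of exactly
 * SLOB_BREAK1 (256) bytes already falls into the medium list.
 */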

/*
 * slob_page_free: true for pages on one of the free lists above.
 */
static inline int slob_page_free(struct page *sp)
{
	return PageSlobFree(sp);
}

static void set_slob_page_free(struct page *sp, struct list_head *list)
{
	list_add(&sp->list, list);
	__SetPageSlobFree(sp);
}

static inline void clear_slob_page_free(struct page *sp)
{
	list_del(&sp->list);
	__ClearPageSlobFree(sp);
}

#define SLOB_UNIT sizeof(slob_t)
#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1)/SLOB_UNIT)
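
/*
 * Rounding example (assuming PAGE_SIZE <= 65534, so SLOB_UNIT is 2
 * bytes): SLOB_UNITS(101) = (101 + 1) / 2 = 51, i.e. a 101-byte
 * request is rounded up to 51 units (102 bytes) of a page.
 */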

/*
 * struct slob_rcu is inserted at the tail of allocated slob blocks, which
 * were created with a SLAB_DESTROY_BY_RCU slab. slob_rcu is used to free
 * the block using call_rcu.
 */
struct slob_rcu {
	struct rcu_head head;
	int size;
};
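
/*
 * Footer layout sketch (hypothetical cache of object size 64, on a
 * build where sizeof(struct slob_rcu) is 24): __kmem_cache_create()
 * grows c->size to 88, the user of the cache still sees bytes 0..63,
 * and kmem_cache_free() writes the slob_rcu footer at offset 64
 * before handing the block to call_rcu().
 */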

/*
 * slob_lock protects all slob allocator structures.
 */
static DEFINE_SPINLOCK(slob_lock);

/*
 * Encode the given size and next info into a free slob block s.
 */
static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t offset = next - base;

	if (size > 1) {
		s[0].units = size;
		s[1].units = offset;
	} else
		s[0].units = -offset;
}

/*
 * Return the size of a slob block.
 */
static slobidx_t slob_units(slob_t *s)
{
	if (s->units > 0)
		return s->units;
	return 1;
}

/*
 * Return the next free slob block pointer after this one.
 */
static slob_t *slob_next(slob_t *s)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t next;

	if (s[0].units < 0)
		next = -s[0].units;
	else
		next = s[1].units;
	return base + next;
}

/*
 * Returns true if s is the last free block in its page: the stored
 * next pointer of the final block points at the page boundary, so its
 * offset within the page is zero.
 */
static int slob_last(slob_t *s)
{
	return !((unsigned long)slob_next(s) & ~PAGE_MASK);
}

static void *slob_new_pages(gfp_t gfp, int order, int node)
{
	void *page;

#ifdef CONFIG_NUMA
	if (node != NUMA_NO_NODE)
		page = alloc_pages_exact_node(node, gfp, order);
	else
#endif
		page = alloc_pages(gfp, order);

	if (!page)
		return NULL;

	return page_address(page);
}

static void slob_free_pages(void *b, int order)
{
	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += 1 << order;
	free_pages((unsigned long)b, order);
}

/*
 * Allocate a slob block within a given page sp.
 */
static void *slob_page_alloc(struct page *sp, size_t size, int align)
{
	slob_t *prev, *cur, *aligned = NULL;
	int delta = 0, units = SLOB_UNITS(size);

	for (prev = NULL, cur = sp->freelist; ; prev = cur, cur = slob_next(cur)) {
		slobidx_t avail = slob_units(cur);

		if (align) {
			aligned = (slob_t *)ALIGN((unsigned long)cur, align);
			delta = aligned - cur;
		}
		if (avail >= units + delta) { /* room enough? */
			slob_t *next;

			if (delta) { /* need to fragment head to align? */
				next = slob_next(cur);
				set_slob(aligned, avail - delta, next);
				set_slob(cur, delta, aligned);
				prev = cur;
				cur = aligned;
				avail = slob_units(cur);
			}

			next = slob_next(cur);
			if (avail == units) { /* exact fit? unlink. */
				if (prev)
					set_slob(prev, slob_units(prev), next);
				else
					sp->freelist = next;
			} else { /* fragment */
				if (prev)
					set_slob(prev, slob_units(prev), cur + units);
				else
					sp->freelist = cur + units;
				set_slob(cur + units, avail - units, next);
			}

			sp->units -= units;
			if (!sp->units)
				clear_slob_page_free(sp);
			return cur;
		}
		if (slob_last(cur))
			return NULL;
	}
}
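
/*
 * Alignment sketch for slob_page_alloc() (hypothetical numbers): with
 * align = 8 and a candidate block starting 4 bytes past an 8-byte
 * boundary, "aligned" is rounded up by 4 bytes (delta = 2 units when
 * SLOB_UNIT is 2); if the block is big enough, its head is split off
 * as a 2-unit free fragment and the allocation proceeds at the
 * aligned address.
 */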

/*
 * slob_alloc: entry point into the slob allocator.
 */
static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
{
	struct page *sp;
	struct list_head *prev;
	struct list_head *slob_list;
	slob_t *b = NULL;
	unsigned long flags;

	if (size < SLOB_BREAK1)
		slob_list = &free_slob_small;
	else if (size < SLOB_BREAK2)
		slob_list = &free_slob_medium;
	else
		slob_list = &free_slob_large;

	spin_lock_irqsave(&slob_lock, flags);
	/* Iterate through each partially free page, try to find room */
	list_for_each_entry(sp, slob_list, list) {
#ifdef CONFIG_NUMA
		/*
		 * If there's a node specification, search for a partial
		 * page with a matching node id in the freelist.
		 */
		if (node != NUMA_NO_NODE && page_to_nid(sp) != node)
			continue;
#endif
		/* Enough room on this page? */
		if (sp->units < SLOB_UNITS(size))
			continue;

		/* Attempt to alloc */
		prev = sp->list.prev;
		b = slob_page_alloc(sp, size, align);
		if (!b)
			continue;

		/*
		 * Improve fragment distribution and reduce our average
		 * search time by starting our next search here. (see
		 * Knuth vol 1, sec 2.5, pg 449)
		 */
		if (prev != slob_list->prev &&
				slob_list->next != prev->next)
			list_move_tail(slob_list, prev->next);
		break;
	}
	spin_unlock_irqrestore(&slob_lock, flags);

	/* Not enough space: must allocate a new page */
	if (!b) {
		b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
		if (!b)
			return NULL;
		sp = virt_to_page(b);
		__SetPageSlab(sp);

		spin_lock_irqsave(&slob_lock, flags);
		sp->units = SLOB_UNITS(PAGE_SIZE);
		sp->freelist = b;
		INIT_LIST_HEAD(&sp->list);
		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
		set_slob_page_free(sp, slob_list);
		b = slob_page_alloc(sp, size, align);
		BUG_ON(!b);
		spin_unlock_irqrestore(&slob_lock, flags);
	}
	if (unlikely((gfp & __GFP_ZERO) && b))
		memset(b, 0, size);
	return b;
}

/*
 * slob_free: entry point into the slob allocator.
 */
static void slob_free(void *block, int size)
{
	struct page *sp;
	slob_t *prev, *next, *b = (slob_t *)block;
	slobidx_t units;
	unsigned long flags;
	struct list_head *slob_list;

	if (unlikely(ZERO_OR_NULL_PTR(block)))
		return;
	BUG_ON(!size);

	sp = virt_to_page(block);
	units = SLOB_UNITS(size);

	spin_lock_irqsave(&slob_lock, flags);

	if (sp->units + units == SLOB_UNITS(PAGE_SIZE)) {
		/* Go directly to page allocator. Do not pass slob allocator */
		if (slob_page_free(sp))
			clear_slob_page_free(sp);
		spin_unlock_irqrestore(&slob_lock, flags);
		__ClearPageSlab(sp);
		reset_page_mapcount(sp);
		slob_free_pages(b, 0);
		return;
	}

	if (!slob_page_free(sp)) {
		/* This slob page is about to become partially free. Easy! */
		sp->units = units;
		sp->freelist = b;
		set_slob(b, units,
			(void *)((unsigned long)(b +
					SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
		if (size < SLOB_BREAK1)
			slob_list = &free_slob_small;
		else if (size < SLOB_BREAK2)
			slob_list = &free_slob_medium;
		else
			slob_list = &free_slob_large;
		set_slob_page_free(sp, slob_list);
		goto out;
	}

	/*
	 * Otherwise the page is already partially free, so find reinsertion
	 * point.
	 */
	sp->units += units;

	if (b < (slob_t *)sp->freelist) {
		if (b + units == sp->freelist) {
			units += slob_units(sp->freelist);
			sp->freelist = slob_next(sp->freelist);
		}
		set_slob(b, units, sp->freelist);
		sp->freelist = b;
	} else {
		prev = sp->freelist;
		next = slob_next(prev);
		while (b > next) {
			prev = next;
			next = slob_next(prev);
		}

		if (!slob_last(prev) && b + units == next) {
			units += slob_units(next);
			set_slob(b, units, slob_next(next));
		} else
			set_slob(b, units, next);

		if (prev + slob_units(prev) == b) {
			units = slob_units(b) + slob_units(prev);
			set_slob(prev, units, slob_next(b));
		} else
			set_slob(prev, slob_units(prev), b);
	}
out:
	spin_unlock_irqrestore(&slob_lock, flags);
}

/*
 * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
 */

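/*
 * Small-object kmalloc layout, sketched under the common assumption
 * that the alignment computed below is 8 bytes: for kmalloc(100),
 * slob_alloc() hands back 108 bytes, the request size (100) is stored
 * in the unsigned int at the very start, and the caller receives the
 * pointer 8 bytes in. kfree() later steps back by the same alignment
 * to recover the stored size.
 */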
static __always_inline void *
__do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
{
	unsigned int *m;
	int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
	void *ret;

	gfp &= gfp_allowed_mask;

	lockdep_trace_alloc(gfp);

	if (size < PAGE_SIZE - align) {
		if (!size)
			return ZERO_SIZE_PTR;

		m = slob_alloc(size + align, gfp, align, node);

		if (!m)
			return NULL;
		*m = size;
		ret = (void *)m + align;

		trace_kmalloc_node(caller, ret,
				   size, size + align, gfp, node);
	} else {
		unsigned int order = get_order(size);

		if (likely(order))
			gfp |= __GFP_COMP;
		ret = slob_new_pages(gfp, order, node);

		trace_kmalloc_node(caller, ret,
				   size, PAGE_SIZE << order, gfp, node);
	}

	kmemleak_alloc(ret, size, 1, gfp);
	return ret;
}

void *__kmalloc_node(size_t size, gfp_t gfp, int node)
{
	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
}
EXPORT_SYMBOL(__kmalloc_node);

#ifdef CONFIG_TRACING
void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
{
	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
}

#ifdef CONFIG_NUMA
void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
					int node, unsigned long caller)
{
	return __do_kmalloc_node(size, gfp, node, caller);
}
#endif
#endif

void kfree(const void *block)
{
	struct page *sp;

	trace_kfree(_RET_IP_, block);

	if (unlikely(ZERO_OR_NULL_PTR(block)))
		return;
	kmemleak_free(block);

	sp = virt_to_page(block);
	if (PageSlab(sp)) {
		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
		unsigned int *m = (unsigned int *)(block - align);
		slob_free(m, *m + align);
	} else
		__free_pages(sp, compound_order(sp));
}
EXPORT_SYMBOL(kfree);

/*
 * Can't use ksize for kmem_cache_alloc memory; only kmalloc blocks
 * carry the prepended size header that ksize relies on.
 */
size_t ksize(const void *block)
{
	struct page *sp;
	int align;
	unsigned int *m;

	BUG_ON(!block);
	if (unlikely(block == ZERO_SIZE_PTR))
		return 0;

	sp = virt_to_page(block);
	if (unlikely(!PageSlab(sp)))
		return PAGE_SIZE << compound_order(sp);

	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
	m = (unsigned int *)(block - align);
	return SLOB_UNITS(*m) * SLOB_UNIT;
}
EXPORT_SYMBOL(ksize);

int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
{
	if (flags & SLAB_DESTROY_BY_RCU) {
		/* leave room for rcu footer at the end of object */
		c->size += sizeof(struct slob_rcu);
	}
	c->flags = flags;
	return 0;
}

void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
{
	void *b;

	flags &= gfp_allowed_mask;

	lockdep_trace_alloc(flags);

	if (c->size < PAGE_SIZE) {
		b = slob_alloc(c->size, flags, c->align, node);
		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
					    SLOB_UNITS(c->size) * SLOB_UNIT,
					    flags, node);
	} else {
		b = slob_new_pages(flags, get_order(c->size), node);
		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
					    PAGE_SIZE << get_order(c->size),
					    flags, node);
	}

	if (c->ctor)
		c->ctor(b);

	kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
	return b;
}
EXPORT_SYMBOL(kmem_cache_alloc_node);

static void __kmem_cache_free(void *b, int size)
{
	if (size < PAGE_SIZE)
		slob_free(b, size);
	else
		slob_free_pages(b, get_order(size));
}

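/*
 * kmem_rcu_free() undoes the footer placement done in kmem_cache_free():
 * the rcu_head sits at the start of the slob_rcu footer, which itself
 * lives (size - sizeof(struct slob_rcu)) bytes past the object start,
 * so subtracting that distance from the head pointer recovers the
 * object to free.
 */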
static void kmem_rcu_free(struct rcu_head *head)
{
	struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
	void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));

	__kmem_cache_free(b, slob_rcu->size);
}

void kmem_cache_free(struct kmem_cache *c, void *b)
{
	kmemleak_free_recursive(b, c->flags);
	if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
		struct slob_rcu *slob_rcu;
		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
		slob_rcu->size = c->size;
		call_rcu(&slob_rcu->head, kmem_rcu_free);
	} else {
		__kmem_cache_free(b, c->size);
	}

	trace_kmem_cache_free(_RET_IP_, b);
}
EXPORT_SYMBOL(kmem_cache_free);

int __kmem_cache_shutdown(struct kmem_cache *c)
{
	/* No way to check for remaining objects */
	return 0;
}

int kmem_cache_shrink(struct kmem_cache *d)
{
	return 0;
}
EXPORT_SYMBOL(kmem_cache_shrink);

struct kmem_cache kmem_cache_boot = {
	.name = "kmem_cache",
	.size = sizeof(struct kmem_cache),
	.flags = SLAB_PANIC,
	.align = ARCH_KMALLOC_MINALIGN,
};

void __init kmem_cache_init(void)
{
	kmem_cache = &kmem_cache_boot;
	slab_state = UP;
}

void __init kmem_cache_init_late(void)
{
	slab_state = FULL;
}