		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t. If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate. Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to reflect the initialized value if the initializer is used
before runtime.  If the initializer is used at runtime, a proper implicit or
explicit read memory barrier is needed before reading the value with
atomic_read from another thread.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to reflect either the value set by this operation
or the value set by another operation.  A proper implicit or explicit memory
barrier is needed before the value set with the operation is guaranteed to be
readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

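As an illustration only (not code from the kernel tree), here is one way a
caller might pair explicit barriers with atomic_set() and atomic_read() to
publish a value to another CPU; the variable and function names are invented
for this sketch:

	static atomic_t shared_state = ATOMIC_INIT(0);
	static int state_ready;

	/* Writer: store the value, then let readers know it is there. */
	void publish_state(int value)
	{
		atomic_set(&shared_state, value);
		smp_wmb();	/* order the value store before the flag store */
		state_ready = 1;
	}

	/* Reader: only look at the value once the flag is observed. */
	int consume_state(void)
	{
		if (!state_ready)
			return -1;
		smp_rmb();	/* order the flag load before the value load */
		return atomic_read(&shared_state);
	}
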
Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

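For example, a simple event counter that is only ever summed up for reporting
needs no ordering against other memory operations at all, so these barrier-free
routines are sufficient.  This is an illustrative sketch, not code from the
kernel tree:

	static atomic_t rx_packets = ATOMIC_INIT(0);

	/* Called from any number of CPUs; only the count itself matters,
	 * no ordering against surrounding memory operations is needed.
	 */
	void note_rx_packet(void)
	{
		atomic_inc(&rx_packets);
	}

	int read_rx_packets(void)
	{
		return atomic_read(&rx_packets);
	}
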
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

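A common use of the value-returning forms is handing out unique sequence
numbers: because the update and the read back are a single atomic step, every
caller observes a distinct value.  A hypothetical sketch:

	static atomic_t next_id = ATOMIC_INIT(0);

	/* Each caller gets a distinct, monotonically increasing id.  The
	 * implied full barriers also order the allocation against the
	 * caller's surrounding memory operations.
	 */
	int allocate_id(void)
	{
		return atomic_inc_return(&next_id);
	}
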
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

These routines require explicit memory barrier semantics around the
operation, as above.

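The classic use is reference counting, where whichever thread drops the count
to zero is the one that must free the object, much like the larger struct obj
example later in this document.  A minimal sketch (the names are invented):

	void obj_get(struct obj *obj)
	{
		atomic_inc(&obj->refcnt);
	}

	void obj_put(struct obj *obj)
	{
		/* Only the final reference holder sees a zero result here,
		 * so exactly one thread frees the object.
		 */
		if (atomic_dec_and_test(&obj->refcnt))
			kfree(obj);
	}
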
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

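One way atomic_xchg() can be used is to atomically claim a pending piece of
work so that exactly one thread acts on it.  The following is only a sketch
with made-up names:

	static atomic_t work_pending = ATOMIC_INIT(0);

	void signal_work(void)
	{
		atomic_set(&work_pending, 1);
	}

	void maybe_do_work(void)
	{
		/* Swap in 0; whichever caller reads back 1 owns the work. */
		if (atomic_xchg(&work_pending, 0))
			do_the_work();	/* hypothetical helper */
	}
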
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values. Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

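atomic_cmpxchg() is typically wrapped in a retry loop to build operations the
basic interface does not provide, such as a counter that is never allowed to
exceed some limit.  This is a sketch only; the helper name is invented:

	/* Add 1 to v, but never let the counter exceed "limit".
	 * Returns the counter value observed before any update.
	 */
	int atomic_inc_below(atomic_t *v, int limit)
	{
		int old, new, ret;

		ret = atomic_read(v);
		for (;;) {
			old = ret;
			if (old >= limit)
				break;
			new = old + 1;
			ret = atomic_cmpxchg(v, old, new);
			if (ret == old)
				break;
		}
		return ret;
	}
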
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero. If v is equal to u then it returns zero. This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation.

atomic_inc_not_zero(v) is equivalent to atomic_add_unless(v, 1, 0).

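atomic_inc_not_zero() is what a lookup path can use to take a new reference
only if the object has not already dropped its last one; the caller must not
touch the object when zero is returned.  A sketch (the helper is invented):

	struct obj *obj_tryget(struct obj *obj)
	{
		/* If refcnt already fell to zero the object is on its way
		 * to being freed, so refuse to resurrect it.
		 */
		if (!atomic_inc_not_zero(&obj->refcnt))
			return NULL;
		return obj;
	}
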

If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces is
defined which accomplishes this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

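Similarly, smp_mb__after_atomic_inc() can be used when a counter increment
must be visible before a later store.  A made-up fragment:

	/* Make sure the updated pending count is visible to other cpus
	 * before the consumer is told to go look at it.
	 */
	atomic_inc(&obj->pending);
	smp_mb__after_atomic_inc();
	obj->poke_consumer = 1;
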
A missing memory barrier in cases where it is required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

	static void obj_list_add(struct obj *obj)
	{
		obj->active = 1;
		list_add(&obj->list);
	}

	static void obj_list_del(struct obj *obj)
	{
		list_del(&obj->list);
		obj->active = 0;
	}

	static void obj_destroy(struct obj *obj)
	{
		BUG_ON(obj->active);
		kfree(obj);
	}

	struct obj *obj_list_peek(struct list_head *head)
	{
		if (!list_empty(head)) {
			struct obj *obj;

			obj = list_entry(head->next, struct obj, list);
			atomic_inc(&obj->refcnt);
			return obj;
		}
		return NULL;
	}

	void obj_poke(void)
	{
		struct obj *obj;

		spin_lock(&global_list_lock);
		obj = obj_list_peek(&global_list);
		spin_unlock(&global_list_lock);

		if (obj) {
			obj->ops->poke(obj);
			if (atomic_dec_and_test(&obj->refcnt))
				obj_destroy(obj);
		}
	}

	void obj_timeout(struct obj *obj)
	{
		spin_lock(&global_list_lock);
		obj_list_del(obj);
		spin_unlock(&global_list_lock);

		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}

(This is a simplification of the ARP queue management in the
generic neighbour discovery code of the networking layer.  Olaf Kirch
found a bug wrt. memory barriers in kfree_skb() that exposed
the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcount drops to 0 ...
					obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

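For instance, several CPUs may mark work types as pending in a single flags
word without any lock, as in this illustrative fragment (the bit numbers and
names are invented):

	#define OBJ_DIRTY	0	/* bit numbers, not masks */
	#define OBJ_URGENT	1

	static unsigned long obj_flags;

	void mark_dirty(void)
	{
		set_bit(OBJ_DIRTY, &obj_flags);
	}

	void mark_clean(void)
	{
		clear_bit(OBJ_DIRTY, &obj_flags);
	}
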
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they also provide acquire and release
semantics, respectively.  This means they can be used for bit_spin_trylock
and bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics. This can be useful if the lock itself is protecting
the other bits in the word.

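In other words, the pair can serve directly as a tiny lock embedded in a
flags word, roughly as in this sketch (bit 0 is chosen arbitrarily as the
lock bit, and obj->flags is assumed to be an unsigned long):

	#define OBJ_LOCK_BIT	0

	void obj_lock(struct obj *obj)
	{
		/* Acquire semantics: the critical section cannot leak
		 * above the moment the bit is won.
		 */
		while (test_and_set_bit_lock(OBJ_LOCK_BIT, &obj->flags))
			cpu_relax();
	}

	void obj_unlock(struct obj *obj)
	{
		/* Release semantics: stores done while holding the lock
		 * are visible before the bit clears.
		 */
		clear_bit_unlock(OBJ_LOCK_BIT, &obj->flags);
	}
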
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

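For example, when a spinlock already serializes every access to the flags
word, the cheaper variants are enough.  A fragment assuming such a lock and
the bit names from the earlier sketch:

	spin_lock(&obj->lock);
	/* obj->lock serializes all access to obj->flags, so the
	 * non-atomic variants are safe (and cheaper) here.
	 */
	__set_bit(OBJ_DIRTY, &obj->flags);
	__clear_bit(OBJ_URGENT, &obj->flags);
	spin_unlock(&obj->lock);
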
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it
   globally visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such
   that all previous memory operations are globally visible before
   the lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

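Reusing the struct obj example from earlier, a caller might look like the
following sketch (obj_release() is invented for illustration):

	void obj_release(struct obj *obj)
	{
		/* The list lock is taken only when the count hits zero,
		 * so no new reference can be handed out by obj_list_peek()
		 * before the object is unlinked and destroyed.
		 */
		if (_atomic_dec_and_lock(&obj->refcnt, &global_list_lock)) {
			obj_list_del(obj);
			spin_unlock(&global_list_lock);
			obj_destroy(obj);
		}
	}
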
It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

	void example_atomic_inc(long *counter)
	{
		long old, new, ret;

		while (1) {
			old = *counter;
			new = old + 1;

			ret = cas(counter, old, new);
			if (ret == old)
				break;
		}
	}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		long old, new, ret;
		int went_to_zero;

		went_to_zero = 0;
		while (1) {
			old = atomic_read(atomic);
			new = old - 1;
			if (new == 0) {
				went_to_zero = 1;
				spin_lock(lock);
			}
			ret = cas(atomic, old, new);
			if (ret == old)
				break;
			if (went_to_zero) {
				spin_unlock(lock);
				went_to_zero = 0;
			}
		}

		return went_to_zero;
	}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.