		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic
operations are guaranteed to correctly reflect the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to correctly reflect either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

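As an illustration only, here is a minimal sketch of pairing these calls
with explicit barriers; the "ready" flag, the "data" variable and the two
functions are hypothetical names invented for the example:

	static atomic_t ready = ATOMIC_INIT(0);
	static int data;

	void publish(void)		/* writer */
	{
		data = 42;
		smp_mb();	/* order the data store before the flag store */
		atomic_set(&ready, 1);
	}

	int consume(void)		/* reader */
	{
		if (!atomic_read(&ready))
			return -1;
		smp_mb();	/* order the flag read before the data read */
		return data;
	}
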
Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

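For instance (a sketch, with made-up names), a simple event counter needs
only the SMP-safe update and no ordering guarantees at all:

	static atomic_t nr_events = ATOMIC_INIT(0);

	static void note_event(void)
	{
		/* SMP safe increment; implies no memory barrier */
		atomic_inc(&nr_events);
	}
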
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

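A typical use, sketched here with hypothetical names, is handing out
unique, monotonically increasing identifiers, where the caller relies on
the returned value being distinct for every invocation:

	static atomic_t next_id = ATOMIC_INIT(0);

	static int alloc_id(void)
	{
		/* full barrier semantics before and after the increment */
		return atomic_inc_return(&next_id);
	}
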
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation, as
above.

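For example, a reference count "put" operation is commonly built on
atomic_dec_and_test(); struct obj and obj_destroy() below are assumed
helpers, sketched only to show the pattern:

	static void obj_put(struct obj *obj)
	{
		/* free the object only when the last reference is dropped */
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
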
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

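A sketch of one common pattern, using invented names: consuming a
pending-work flag exactly once, where only the caller that observes the
old value "1" does the work:

	static atomic_t pending = ATOMIC_INIT(0);

	static void mark_pending(void)
	{
		atomic_set(&pending, 1);
	}

	static int take_pending(void)
	{
		/* returns the previous value; at most one caller sees 1 */
		return atomic_xchg(&pending, 0);
	}
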
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

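The usual way to use atomic_cmpxchg is in a retry loop.  As a purely
illustrative sketch (atomic_max is not a real kernel interface), here is
how one might keep an atomic_t at the largest value observed:

	static void atomic_max(atomic_t *v, int new)
	{
		int old;

		do {
			old = atomic_read(v);
			if (old >= new)
				return;
			/* retry if another thread changed *v meanwhile */
		} while (atomic_cmpxchg(v, old, new) != old);
	}
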
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).

atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)

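A sketch of the classic use of atomic_inc_not_zero(), with a made-up
struct obj: take a new reference only while somebody else still holds one,
so a counter that has already hit zero is never resurrected:

	static struct obj *obj_get_live(struct obj *obj)
	{
		/* refuse to take a reference on an object whose count
		 * has already dropped to zero
		 */
		if (!atomic_inc_not_zero(&obj->refcnt))
			return NULL;
		return obj;
	}
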
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

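For instance, smp_mb__after_atomic_inc() could be used like the following
sketch (obj->nr_users and obj->wakeup_pending are hypothetical fields),
guaranteeing the increment is visible before the subsequent store:

	atomic_inc(&obj->nr_users);
	smp_mb__after_atomic_inc();
	obj->wakeup_pending = 1;
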
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

	static void obj_list_add(struct obj *obj, struct list_head *head)
	{
		obj->active = 1;
		list_add(&obj->list, head);
	}

	static void obj_list_del(struct obj *obj)
	{
		list_del(&obj->list);
		obj->active = 0;
	}

	static void obj_destroy(struct obj *obj)
	{
		BUG_ON(obj->active);
		kfree(obj);
	}

	struct obj *obj_list_peek(struct list_head *head)
	{
		if (!list_empty(head)) {
			struct obj *obj;

			obj = list_entry(head->next, struct obj, list);
			atomic_inc(&obj->refcnt);
			return obj;
		}
		return NULL;
	}

	void obj_poke(void)
	{
		struct obj *obj;

		spin_lock(&global_list_lock);
		obj = obj_list_peek(&global_list);
		spin_unlock(&global_list_lock);

		if (obj) {
			obj->ops->poke(obj);
			if (atomic_dec_and_test(&obj->refcnt))
				obj_destroy(obj);
		}
	}

	void obj_timeout(struct obj *obj)
	{
		spin_lock(&global_list_lock);
		obj_list_del(obj);
		spin_unlock(&global_list_lock);

		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}

(This is a simplification of the ARP queue management in the
generic neighbour discovery code of the networking.  Olaf Kirch
found a bug wrt. memory barriers in kfree_skb() that exposed
the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcount drops to 0 ...
					obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24 bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

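As a small sketch with invented names, a driver-style flag word might be
maintained like this; no ordering against surrounding memory operations is
implied:

	#define OBJ_FLAG_DIRTY	0

	struct obj {
		unsigned long flags;
	};

	static void mark_dirty(struct obj *obj)
	{
		set_bit(OBJ_FLAG_DIRTY, &obj->flags);
	}

	static void mark_clean(struct obj *obj)
	{
		clear_bit(OBJ_FLAG_DIRTY, &obj->flags);
	}
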
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up are the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

This returns a boolean indicating whether bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.

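A minimal sketch (using a made-up bit number) of a tiny bit-granular lock
built from these operations, similar in spirit to what bit_spin_lock() and
bit_spin_unlock() do:

	#define OBJ_LOCK_BIT	0

	static void obj_bit_lock(unsigned long *word)
	{
		/* acquire semantics: spin until the old bit value is 0 */
		while (test_and_set_bit_lock(OBJ_LOCK_BIT, word))
			cpu_relax();
	}

	static void obj_bit_unlock(unsigned long *word)
	{
		/* release semantics */
		clear_bit_unlock(OBJ_LOCK_BIT, word);
	}
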
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

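For example (a sketch with assumed names), the non-atomic variants are
safe when a spinlock already serializes every access to the bitmap:

	static DEFINE_SPINLOCK(map_lock);
	static unsigned long map[BITS_TO_LONGS(64)];

	static void map_mark(int nr)
	{
		spin_lock(&map_lock);
		__set_bit(nr, map);	/* cheap, non-atomic, lock held */
		spin_unlock(&map_lock);
	}
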
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero,
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

	void example_atomic_inc(long *counter)
	{
		long old, new, ret;

		while (1) {
			old = *counter;
			new = old + 1;

			ret = cas(counter, old, new);
			if (ret == old)
				break;
		}
	}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		long old, new, ret;
		int went_to_zero;

		went_to_zero = 0;
		while (1) {
			old = atomic_read(atomic);
			new = old - 1;
			if (new == 0) {
				went_to_zero = 1;
				spin_lock(lock);
			}
			ret = cas(atomic, old, new);
			if (ret == old)
				break;
			if (went_to_zero) {
				spin_unlock(lock);
				went_to_zero = 0;
			}
		}

		return went_to_zero;
	}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.