Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.	What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.	What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.	RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.	The RCU API, 2010 Edition    http://lwn.net/Articles/418853/


What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand
it, getting there can sometimes be a challenge.  Part of the problem is
that most of the past descriptions of RCU have been written with the
mistaken assumption that there is "one true way" to describe RCU.
Instead, the experience has been that different people must take
different paths to arrive at an understanding of RCU.  This document
provides several different paths, as follows:

1.	RCU OVERVIEW
2.	WHAT IS RCU'S CORE API?
3.	WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.	WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.	WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.	ANALOGY WITH READER-WRITER LOCKING
7.	FULL LIST OF RCU APIs
8.	ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt
any readers concurrently referencing those data items, the reclamation
phase must not start until readers no longer hold references to those
data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.	Remove pointers to a data structure, so that subsequent
	readers cannot gain a reference to it.

b.	Wait for all previous readers to complete their RCU read-side
	critical sections.

c.	At this point, there cannot be any readers who hold references
	to the data structure, so it now may safely be reclaimed
	(e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.	rcu_read_lock()
b.	rcu_read_unlock()
c.	synchronize_rcu() / call_rcu()
d.	rcu_assign_pointer()
e.	rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.
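
For example, here is a simplified sketch of how synchronize_rcu() can
be expressed in terms of call_rcu(), using a completion to let the
callback awaken the blocked updater.  This is quite close to what the
Linux kernel actually does, give or take some details:

	struct rcu_synchronize {
		struct rcu_head head;
		struct completion completion;
	};

	/* Runs after a grace period; awakens the waiting updater. */
	static void wakeme_after_rcu(struct rcu_head *head)
	{
		struct rcu_synchronize *rcu;

		rcu = container_of(head, struct rcu_synchronize, head);
		complete(&rcu->completion);
	}

	void synchronize_rcu(void)
	{
		struct rcu_synchronize rcu;

		init_completion(&rcu.completion);
		/* Queue a callback, then block until it has run. */
		call_rcu(&rcu.head, wakeme_after_rcu);
		wait_for_completion(&rcu.completion);
	}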

The five core RCU APIs are described below, and the other members of
the RCU API will be enumerated later.  See the kernel docbook
documentation for more info, or look directly at the function header
comments.

rcu_read_lock()

	void rcu_read_lock(void);

	Used by a reader to inform the reclaimer that the reader is
	entering an RCU read-side critical section.  It is illegal
	to block while in an RCU read-side critical section, though
	kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
	read-side critical sections.  Any RCU-protected data structure
	accessed during an RCU read-side critical section is guaranteed to
	remain unreclaimed for the full duration of that critical section.
	Reference counts may be used in conjunction with RCU to maintain
	longer-term references to data structures.
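
	For example, a reader might acquire such a longer-term
	reference as in the following sketch, in which the refcount
	field, the foo_put() helper, and do_something_with() are
	hypothetical additions to the gbl_foo example used later in
	this document:

		rcu_read_lock();
		p = rcu_dereference(gbl_foo);
		if (p && !atomic_inc_not_zero(&p->refcount))
			p = NULL;	/* structure is being torn down */
		rcu_read_unlock();
		if (p) {
			do_something_with(p);	/* may block: we hold a reference */
			foo_put(p);		/* hypothetical: drop the reference */
		}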

rcu_read_unlock()

	void rcu_read_unlock(void);

	Used by a reader to inform the reclaimer that the reader is
	exiting an RCU read-side critical section.  Note that RCU
	read-side critical sections may be nested and/or overlapping.

synchronize_rcu()

	void synchronize_rcu(void);

	Marks the end of updater code and the beginning of reclaimer
	code.  It does this by blocking until all pre-existing RCU
	read-side critical sections on all CPUs have completed.
	Note that synchronize_rcu() will -not- necessarily wait for
	any subsequent RCU read-side critical sections to complete.
	For example, consider the following sequence of events:

	    CPU 0                      CPU 1                     CPU 2
	-----------------        ------------------------   ----------------
	1.  rcu_read_lock()
	2.                       enters synchronize_rcu()
	3.                                                   rcu_read_lock()
	4.  rcu_read_unlock()
	5.                       exits synchronize_rcu()
	6.                                                   rcu_read_unlock()

	To reiterate, synchronize_rcu() waits only for ongoing RCU
	read-side critical sections to complete, not necessarily for
	any that begin after synchronize_rcu() is invoked.

	Of course, synchronize_rcu() does not necessarily return
	-immediately- after the last pre-existing RCU read-side critical
	section completes.  For one thing, there might well be scheduling
	delays.  For another thing, many RCU implementations process
	requests in batches in order to improve efficiencies, which can
	further delay synchronize_rcu().

	Since synchronize_rcu() is the API that must figure out when
	readers are done, its implementation is key to RCU.  For RCU
	to be useful in all but the most read-intensive situations,
	synchronize_rcu()'s overhead must also be quite small.

	The call_rcu() API is a callback form of synchronize_rcu(),
	and is described in more detail in a later section.  Instead of
	blocking, it registers a function and argument which are invoked
	after all ongoing RCU read-side critical sections have completed.
	This callback variant is particularly useful in situations where
	it is illegal to block or where update-side performance is
	critically important.

	However, the call_rcu() API should not be used lightly, as use
	of the synchronize_rcu() API generally results in simpler code.
	In addition, the synchronize_rcu() API has the nice property
	of automatically limiting update rate should grace periods
	be delayed.  This property results in system resilience in the
	face of denial-of-service attacks.  Code using call_rcu() should
	limit update rate in order to gain this same sort of resilience.
	See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

	typeof(p) rcu_assign_pointer(p, typeof(p) v);

	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
	would be cool to be able to declare a function in this manner.
	(Compiler experts will no doubt disagree.)

	The updater uses this function to assign a new value to an
	RCU-protected pointer, in order to safely communicate the change
	in value from the updater to the reader.  This function returns
	the new value, and also executes any memory-barrier instructions
	required for a given CPU architecture.

	Perhaps just as important, it serves to document (1) which
	pointers are protected by RCU and (2) the point at which a
	given structure becomes accessible to other CPUs.  That said,
	rcu_assign_pointer() is most frequently used indirectly, via
	the _rcu list-manipulation primitives such as list_add_rcu().

rcu_dereference()

	typeof(p) rcu_dereference(p);

	Like rcu_assign_pointer(), rcu_dereference() must be implemented
	as a macro.

	The reader uses rcu_dereference() to fetch an RCU-protected
	pointer, which returns a value that may then be safely
	dereferenced.  Note that rcu_dereference() does not actually
	dereference the pointer; instead, it protects the pointer for
	later dereferencing.  It also executes any needed memory-barrier
	instructions for a given CPU architecture.  Currently, only Alpha
	needs memory barriers within rcu_dereference() -- on other CPUs,
	it compiles to nothing, not even a compiler directive.

	Common coding practice uses rcu_dereference() to copy an
	RCU-protected pointer to a local variable, then dereferences
	this local variable, for example as follows:

		p = rcu_dereference(head.next);
		return p->data;

	However, in this case, one could just as easily combine these
	into one statement:

		return rcu_dereference(head.next)->data;

	If you are going to be fetching multiple fields from the
	RCU-protected structure, using the local variable is of
	course preferred.  Repeated rcu_dereference() calls look
	ugly and incur unnecessary overhead on Alpha CPUs.

	Note that the value returned by rcu_dereference() is valid
	only within the enclosing RCU read-side critical section.
	For example, the following is -not- legal:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		rcu_read_unlock();
		x = p->address;
		rcu_read_lock();
		y = p->data;
		rcu_read_unlock();

	Holding a reference from one RCU read-side critical section
	to another is just as illegal as holding a reference from
	one lock-based critical section to another!  Similarly,
	using a reference outside of the critical section in which
	it was acquired is just as illegal as doing so with normal
	locking.

	As with rcu_assign_pointer(), an important function of
	rcu_dereference() is to document which pointers are protected by
	RCU, in particular, flagging a pointer that is subject to changing
	at any time, including immediately after the rcu_dereference().
	And, again like rcu_assign_pointer(), rcu_dereference() is
	typically used indirectly, via the _rcu list-manipulation
	primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


    rcu_assign_pointer()
                                    +--------+
        +-------------------------->| reader |---------+
        |                           +--------+         |
        |                               |              |
        |                               |              | Protect:
        |                               |              | rcu_read_lock()
        |                               |              | rcu_read_unlock()
        |        rcu_dereference()      |              |
    +---------+                         |              |
    | updater |<------------------------+              |
    +---------+                                        V
        |                                    +-----------+
        +----------------------------------->| reclaimer |
                                             +-----------+
          Defer:
          synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

	Defer			Protect

a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
	call_rcu()		rcu_dereference()

b.	call_rcu_bh()		rcu_read_lock_bh() / rcu_read_unlock_bh()
				rcu_dereference_bh()

c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
				preempt_disable() / preempt_enable()
				local_irq_save() / local_irq_restore()
				hardirq enter / hardirq exit
				NMI enter / NMI exit
				rcu_dereference_sched()

These three mechanisms are used as follows:

a.	RCU applied to normal data structures.

b.	RCU applied to networking data structures that may be subjected
	to remote denial-of-service attacks.

c.	RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.
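
For example, a reader using mechanism (b) might look like the following
sketch, in which gbl_net_ptr and do_something_with() are hypothetical;
the corresponding updater would use call_rcu_bh() to defer reclamation:

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gbl_net_ptr);
	if (p)
		do_something_with(p);	/* must not block */
	rcu_read_unlock_bh();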


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

	struct foo {
		int a;
		char b;
		long c;
	};
	DEFINE_SPINLOCK(foo_mutex);

	struct foo *gbl_foo;

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses synchronize_rcu() to ensure that any readers that might
	 * have references to the old structure complete before freeing
	 * the old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = gbl_foo;
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		synchronize_rcu();
		kfree(old_fp);
	}

	/*
	 * Return the value of field "a" of the current gbl_foo
	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
	 * to ensure that the structure does not get deleted out
	 * from under us, and use rcu_dereference() to ensure that
	 * we see the initialized version of the structure (important
	 * for DEC Alpha and for people reading the code).
	 */
	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

So, to sum up:

o	Use rcu_read_lock() and rcu_read_unlock() to guard RCU
	read-side critical sections.

o	Within an RCU read-side critical section, use rcu_dereference()
	to dereference RCU-protected pointers.

o	Use some solid scheme (such as locks or semaphores) to
	keep concurrent updates from interfering with each other.

o	Use rcu_assign_pointer() to update an RCU-protected pointer.
	This primitive protects concurrent readers from the updater,
	-not- concurrent updates from each other!  You therefore still
	need to use locking (or something similar) to keep concurrent
	rcu_assign_pointer() primitives from interfering with each other.

o	Use synchronize_rcu() -after- removing a data element from an
	RCU-protected data structure, but -before- reclaiming/freeing
	the data element, in order to wait for the completion of all
	RCU read-side critical sections that might be referencing that
	data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

	void call_rcu(struct rcu_head *head,
		      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

	struct foo {
		int a;
		char b;
		long c;
		struct rcu_head rcu;
	};

The foo_update_a() function might then be written as follows:

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses call_rcu() to ensure that any readers that might have
	 * references to the old structure complete before freeing the
	 * old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = gbl_foo;
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		call_rcu(&old_fp->rcu, foo_reclaim);
	}

The foo_reclaim() function might appear as follows:

	void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		kfree(fp);
	}

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
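
At the time of this writing, the kernel defines container_of() along
the following lines:

	#define container_of(ptr, type, member) ({			\
		const typeof(((type *)0)->member) *__mptr = (ptr);	\
		(type *)((char *)__mptr - offsetof(type, member)); })

In foo_reclaim(), container_of() thus subtracts the offset of the rcu
field from the address passed to the callback, recovering a pointer to
the enclosing struct foo.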

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o	Use call_rcu() -after- removing a data element from an
	RCU-protected data structure in order to register a callback
	function that will be invoked after the completion of all RCU
	read-side critical sections that might be referencing that
	data item.

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

	http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

	static DEFINE_RWLOCK(rcu_gp_mutex);

	void rcu_read_lock(void)
	{
		read_lock(&rcu_gp_mutex);
	}

	void rcu_read_unlock(void)
	{
		read_unlock(&rcu_gp_mutex);
	}

	void synchronize_rcu(void)
	{
		write_lock(&rcu_gp_mutex);
		write_unlock(&rcu_gp_mutex);
	}

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much.  But here they are anyway.  And whatever you do, don't
forget about them when submitting patches making use of RCU!]

	#define rcu_assign_pointer(p, v) \
	({ \
		smp_wmb(); \
		(p) = (v); \
	})

	#define rcu_dereference(p) \
	({ \
		typeof(p) _________p1 = p; \
		smp_read_barrier_depends(); \
		(_________p1); \
	})


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  How could this deadlock be avoided?


5B.  "TOY" IMPLEMENTATION #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

	void rcu_read_lock(void) { }

	void rcu_read_unlock(void) { }

	void synchronize_rcu(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			run_on(cpu);
	}

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!
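
For reference, one possible sketch of run_on() in terms of the kernel's
in-kernel sched_setaffinity(), with error handling omitted in keeping
with the implementation's "toy" status:

	static void run_on(int cpu)
	{
		cpumask_var_t mask;

		if (!alloc_cpumask_var(&mask, GFP_KERNEL))
			return;	/* toy quality: ignore allocation failure */
		cpumask_clear(mask);
		cpumask_set_cpu(cpu, mask);
		sched_setaffinity(current->pid, mask);	/* migrate to "cpu" */
		free_cpumask_var(mask);
	}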

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

@@ -13,15 +14,15 @@
 	struct list_head *lp;
 	struct el *p;

-	read_lock();
-	list_for_each_entry(p, head, lp) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(p, head, lp) {
 		if (p->key == key) {
 			*result = p->data;
-			read_unlock();
+			rcu_read_unlock();
 			return 1;
 		}
 	}
-	read_unlock();
+	rcu_read_unlock();
 	return 0;
 }

@@ -29,15 +30,16 @@
 {
 	struct el *p;

-	write_lock(&listmutex);
+	spin_lock(&listmutex);
 	list_for_each_entry(p, head, lp) {
 		if (p->key == key) {
-			list_del(&p->list);
-			write_unlock(&listmutex);
+			list_del_rcu(&p->list);
+			spin_unlock(&listmutex);
+			synchronize_rcu();
 			kfree(p);
 			return 1;
 		}
 	}
-	write_unlock(&listmutex);
+	spin_unlock(&listmutex);
 	return 0;
 }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu(), that can be used in
place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

	list_for_each_entry_rcu
	hlist_for_each_entry_rcu
	hlist_nulls_for_each_entry_rcu

	list_for_each_continue_rcu	(to be deprecated in favor of new
					 list_for_each_entry_continue_rcu)

RCU pointer/list update:

	rcu_assign_pointer
	list_add_rcu
	list_add_tail_rcu
	list_del_rcu
	list_replace_rcu
	hlist_del_rcu
	hlist_add_after_rcu
	hlist_add_before_rcu
	hlist_add_head_rcu
	hlist_replace_rcu
	list_splice_init_rcu

RCU:	Critical sections	Grace period		Barrier

	rcu_read_lock		synchronize_net		rcu_barrier
	rcu_read_unlock		synchronize_rcu
	rcu_dereference		synchronize_rcu_expedited
				call_rcu


bh:	Critical sections	Grace period		Barrier

	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
	rcu_read_unlock_bh	synchronize_rcu_bh
	rcu_dereference_bh	synchronize_rcu_bh_expedited


sched:	Critical sections	Grace period		Barrier

	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
	rcu_read_unlock_sched	call_rcu_sched
	[preempt_disable]	synchronize_sched_expedited
	[and friends]
	rcu_dereference_sched


SRCU:	Critical sections	Grace period		Barrier

	srcu_read_lock		synchronize_srcu	N/A
	srcu_read_unlock	synchronize_srcu_expedited
	srcu_read_lock_raw
	srcu_read_unlock_raw
	srcu_dereference

SRCU:	Initialization/cleanup
	init_srcu_struct
	cleanup_srcu_struct

All:  lockdep-checked RCU-protected pointer access

	rcu_dereference_check
	rcu_dereference_protected
	rcu_access_pointer

See the comment headers in the source code (or the docbook generated
from them) for more information.
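
As a quick orientation to the SRCU family, here is a minimal usage
sketch; the srcu_struct named my_srcu and the gbl_srcu_ptr pointer
being protected are hypothetical:

	static struct srcu_struct my_srcu;	/* hypothetical SRCU domain */

	init_srcu_struct(&my_srcu);	/* once, at initialization */

	/* Reader -- may block, unlike a plain RCU reader: */
	idx = srcu_read_lock(&my_srcu);
	p = srcu_dereference(gbl_srcu_ptr, &my_srcu);
	/* ... use p, possibly blocking ... */
	srcu_read_unlock(&my_srcu, idx);

	/* Updater -- waits only for readers in this srcu_struct: */
	synchronize_srcu(&my_srcu);

	cleanup_srcu_struct(&my_srcu);	/* once, at cleanup */

Each srcu_struct defines its own grace-period domain, which is what
permits SRCU readers to block without stalling RCU grace periods in
the rest of the kernel.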

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.	Will readers need to block?  If so, you need SRCU.

b.	Is it necessary to start a read-side critical section in a
	hardirq handler or exception handler, and then to complete
	this read-side critical section in the task that was
	interrupted?  If so, you need SRCU's srcu_read_lock_raw() and
	srcu_read_unlock_raw() primitives.

c.	What about the -rt patchset?  If readers would need to block
	in a non-rt kernel, you need SRCU.  If readers would block
	in a -rt kernel, but not in a non-rt kernel, SRCU is not
	necessary.

d.	Do you need to treat NMI handlers, hardirq handlers,
	and code segments with preemption disabled (whether
	via preempt_disable(), local_irq_save(), local_bh_disable(),
	or some other mechanism) as if they were explicit RCU readers?
	If so, you need RCU-sched.

e.	Do you need RCU grace periods to complete even in the face
	of softirq monopolization of one or more of the CPUs?  For
	example, is your code subject to network-based denial-of-service
	attacks?  If so, you need RCU-bh.

f.	Is your workload too update-intensive for normal use of
	RCU, but inappropriate for other synchronization mechanisms?
	If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!

g.	Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  [Referring to the lock-based "toy" RCU
		algorithm.]

Answer:		Consider the following sequence of events:

		1.	CPU 0 acquires some unrelated lock, call it
			"problematic_lock", disabling irq via
			spin_lock_irqsave().

		2.	CPU 1 enters synchronize_rcu(), write-acquiring
			rcu_gp_mutex.

		3.	CPU 0 enters rcu_read_lock(), but must wait
			because CPU 1 holds rcu_gp_mutex.

		4.	CPU 1 is interrupted, and the irq handler
			attempts to acquire problematic_lock.

		The system is now deadlocked.

		One way to avoid this deadlock is to use an approach like
		that of CONFIG_PREEMPT_RT, where all normal spinlocks
		become blocking locks, and all irq handlers execute in
		the context of special tasks.  In this case, in step 4
		above, the irq handler would block, allowing CPU 1 to
		release rcu_gp_mutex, avoiding the deadlock.

		Even in the absence of deadlock, this RCU implementation
		allows latency to "bleed" from readers to other
		readers through synchronize_rcu().  To see this,
		consider task A in an RCU read-side critical section
		(thus read-holding rcu_gp_mutex), task B blocked
		attempting to write-acquire rcu_gp_mutex, and
		task C blocked in rcu_read_lock() attempting to
		read-acquire rcu_gp_mutex.  Task A's RCU read-side
		latency is holding up task C, albeit indirectly via
		task B.

		Realtime RCU implementations therefore use a counter-based
		approach where tasks in RCU read-side critical sections
		cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Answer:		Imagine a single-CPU system with a non-CONFIG_PREEMPT
		kernel where a routing table is used by process-context
		code, but can be updated by irq-context code (for example,
		by an "ICMP REDIRECT" packet).  The usual way of handling
		this would be to have the process-context code disable
		interrupts while searching the routing table.  Use of
		RCU allows such interrupt-disabling to be dispensed with.
		Thus, without RCU, you pay the cost of disabling interrupts,
		and with RCU you don't.
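
		To make the contrast concrete, consider the following
		sketch, in which route_lookup() and the route_table
		pointer are hypothetical.  Without RCU:

			local_irq_save(flags);
			route = route_lookup(route_table, dst);
			/* ... use route ... */
			local_irq_restore(flags);

		With RCU, where both rcu_read_lock() and
		rcu_read_unlock() compile to nothing in a
		non-CONFIG_PREEMPT kernel:

			rcu_read_lock();
			route = route_lookup(rcu_dereference(route_table), dst);
			/* ... use route ... */
			rcu_read_unlock();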

		One can argue that the overhead of RCU in this
		case is negative with respect to the single-CPU
		interrupt-disabling approach.  Others might argue that
		the overhead of RCU is merely zero, and that replacing
		the positive overhead of the interrupt-disabling scheme
		with the zero-overhead RCU scheme does not constitute
		negative overhead.

		In real life, of course, things are more complex.  But
		even the theoretical possibility of negative overhead for
		a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???

Answer:		Just as PREEMPT_RT permits preemption of spinlock
		critical sections, it permits preemption of RCU
		read-side critical sections.  It also permits
		spinlocks blocking while in RCU read-side critical
		sections.

		Why the apparent inconsistency?  Because it is
		possible to use priority boosting to keep the RCU
		grace periods short if need be (for example, if running
		short of memory).  In contrast, if blocking waiting
		for (say) network reception, there is no way to know
		what should be boosted.  Especially given that the
		process we need to boost might well be a human being
		who just went out for a pizza or something.  And although
		a computer-operated cattle prod might arouse serious
		interest, it might also provoke serious objections.
		Besides, how does the computer know what pizza parlor
		the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.