RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.
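
For example, a reader traversing an RCU-protected list might look like
the following minimal sketch, in which the list head "head", the ->list
member, and do_something_with() are illustrative assumptions rather than
part of the original example:

	rcu_read_lock();		/* Begin read-side critical section. */
	list_for_each_entry_rcu(p, &head, list)
		do_something_with(p);	/* Must not block here in classic RCU. */
	rcu_read_unlock();		/* End read-side critical section. */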

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.
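
For example, an updater replacing an element in an RCU-protected list
might follow a copy-then-publish sequence along the lines of the sketch
below, in which the ->value and ->list fields are assumptions made for
illustration:

	q = kmalloc(sizeof(*q), GFP_KERNEL);	/* Allocate a new version. */
	*q = *p;				/* Copy the old element ... */
	q->value = new_value;			/* ... then update the copy. */
	list_replace_rcu(&p->list, &q->list);	/* Publish the new version. */
	/* The old element p must remain until all pre-existing readers
	 * are done with it -- see below for how to wait for them. */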

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}
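
For this to work, the rcu_head must be embedded within the RCU-protected
structure itself. A hypothetical definition of the pstruct used above
might therefore look as follows, with the fields other than ->rcu being
assumptions made purely for illustration:

	struct pstruct {
		struct list_head list;	/* Links this element into the list. */
		int value;		/* Whatever payload the element carries. */
		struct rcu_head rcu;	/* Passed to call_rcu() at deletion time. */
	};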


Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive. This primitive is similar
to synchronize_rcu(), but instead of waiting solely for a grace
period to elapse, it also waits for all outstanding RCU callbacks to
complete. Pseudo-code using rcu_barrier() is as follows (a brief sketch
of this pattern for a hypothetical module appears after the list):

   1. Prevent any new RCU callbacks from being posted.
   2. Execute rcu_barrier().
   3. Allow the module to be unloaded.
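
For a hypothetical module in which everything that posts RCU callbacks
checks a stop_posting_callbacks flag, this pattern might reduce to the
following sketch; my_module_exit() and the flag name are invented here
purely for illustration:

	static void my_module_exit(void)
	{
		stop_posting_callbacks = 1;	/* 1. No new RCU callbacks. */
		rcu_barrier();			/* 2. Wait for pending callbacks. */
		/* 3. Module unload may now safely proceed. */
	}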

Quick Quiz #1: Why is there no srcu_barrier()?

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.
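
To see why such a flag can be needed, consider a callback that normally
re-posts itself. A hedged sketch of such a callback, using the fullstop
flag from the listing above but otherwise invented names (my_callback,
struct mystruct, do_periodic_work()), might look like this:

	static void my_callback(struct rcu_head *rhp)
	{
		struct mystruct *mp = container_of(rhp, struct mystruct, rcu);

		do_periodic_work(mp);
		if (!fullstop)
			call_rcu(&mp->rcu, my_callback);	/* Re-post. */
		else
			kfree(mp);	/* Unloading: stop the callback chain. */
	}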

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

Quick Quiz #2: Is there any other situation where rcu_barrier() might
	be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
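
A hedged sketch of such an exit path, with my_timer standing in for a
module-defined timer, might be:

	del_timer_sync(&my_timer);	/* Timer handler is no longer running,
					 * so it can post no further callbacks. */
	rcu_barrier();			/* Now wait for already-posted callbacks. */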

Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading.  Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading.  If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().
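
For example, the exit path of a hypothetical module that uses all three
of these call_rcu() variants might end with the following sketch, once
all posting of new callbacks has been stopped:

	rcu_barrier();		/* Wait for pending call_rcu() callbacks. */
	rcu_barrier_bh();	/* Wait for pending call_rcu_bh() callbacks. */
	rcu_barrier_sched();	/* Wait for pending call_rcu_sched() callbacks. */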


Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #3: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Why is there no srcu_barrier()?

Answer: Since there is no call_srcu(), there can be no outstanding SRCU
	callbacks. Therefore, there is no need to wait for them.

Quick Quiz #2: Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

Quick Quiz #3: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!
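
	In other words, the suggested change would wrap line 8 of
	rcu_barrier() in a read-side critical section, roughly as in
	the following sketch:

		rcu_read_lock();	/* Block grace periods (preemptible RCU) ... */
		on_each_cpu(rcu_barrier_func, NULL, 0, 1);
		rcu_read_unlock();	/* ... until every CPU has queued its callback. */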