completions - wait for completion handling
==========================================

This document was originally written based on 3.18.0 (linux-next)

Introduction:
-------------

If you have one or more threads of execution that must wait for some process
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread_barrier and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks. Any time you think of using yield() or some quirky
msleep(1) loop to allow something else to proceed, you probably want to
look into using one of the wait_for_completion*() calls instead. The
advantage of using completions is clear intent of the code, but also more
efficient code as both threads can continue until the result is actually
needed.

Completions are built on top of the generic event infrastructure in Linux,
with the event reduced to a simple flag (appropriately called "done") in
struct completion that tells the waiting threads of execution if they
can continue safely.

As completions are scheduling related, the code is found in
kernel/sched/completion.c - for details on completion design and
implementation see completions-design.txt


Usage:
------

There are three parts to using completions: the initialization of the
struct completion, the waiting part through a call to one of the variants of
wait_for_completion(), and the signaling side through a call to complete()
or complete_all(). Further there are some helper functions for checking the
state of completions.

To use completions one needs to include <linux/completion.h> and
create a variable of type struct completion. The structure used for
handling of completions is:

    struct completion {
        unsigned int done;
        wait_queue_head_t wait;
    };

providing the wait queue to place tasks on for waiting and the flag for
indicating the state of affairs.

Completions should be named to convey the intent of the waiter. A good
example is:

    wait_for_completion(&early_console_added);

    complete(&early_console_added);

Good naming (as always) helps code readability.


Initializing completions:
-------------------------

Initialization of dynamically allocated completions, often embedded in
other structures, is done with:

    init_completion(&done);

Initialization is accomplished by initializing the wait queue and setting
the default state to "not available", that is, "done" is set to 0.

The re-initialization function, reinit_completion(), simply resets the
done element to "not available", thus again to 0, without touching the
wait queue. Calling init_completion() twice on the same completion object is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case.

For static declaration and initialization, macros are available. These are:

    static DECLARE_COMPLETION(setup_done)

used for static declarations in file scope. Within functions the static
initialization should always use:

    DECLARE_COMPLETION_ONSTACK(setup_done)

suitable for automatic/local variables on the stack and will make lockdep
happy. Note also that one needs to make *sure* the completion passed to
work threads remains in-scope, and no references remain to on-stack data
when the initiating function returns.

Using on-stack completions for code that calls any of the _timeout or
_interruptible/_killable variants is not advisable as they will require
additional synchronization to prevent the on-stack completion object in
the timeout/signal cases from going out of scope. Consider using dynamically
allocated completions when intending to use the _interruptible/_killable
or _timeout variants of wait_for_completion().


Waiting for completions:
------------------------

For a thread of execution to wait for some concurrent work to finish, it
calls wait_for_completion() on the initialized completion structure.
A typical usage scenario is:

    struct completion setup_done;
    init_completion(&setup_done);
    initialize_work(...,&setup_done,...)

    /* run non-dependent code */          /* do setup */

    wait_for_completion(&setup_done);     complete(setup_done)

This does not imply any temporal order of wait_for_completion() and the
call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().

Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from hard-irq or irqs-off atomic contexts will result in
hard-to-detect spurious enabling of interrupts.

wait_for_completion():

    void wait_for_completion(struct completion *done)

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context,
interrupt context, with disabled IRQs, or with preemption disabled - see
also try_wait_for_completion() below for handling completion in
atomic/interrupt context.

As all variants of wait_for_completion() can (obviously) block for a long
time, you probably don't want to call this with held mutexes.


Variants available:
-------------------

The below variants all return status and this status should be checked in
most(/all) cases - in cases where the status is deliberately not checked you
probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem that occurs is to have unclean assignment of return types,
so care should be taken with assigning return-values to variables of proper
type. Checking for the specific meaning of return values also has been found
to be quite inaccurate e.g. constructs like
if (!wait_for_completion_interruptible_timeout(...)) would execute the same
code path for successful completion and for the interrupted case - which is
probably not what you want.

    int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE. If a signal was received
while waiting it will return -ERESTARTSYS; 0 otherwise.

    unsigned long wait_for_completion_timeout(struct completion *done,
        unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
(in jiffies). If timeout occurs it returns 0 else the remaining time in
jiffies (but at least 1). Timeouts are preferably calculated with
msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is
deliberately ignored a comment should probably explain why (e.g. see
drivers/mfd/wm8350-core.c wm8350_read_auxadc())

    long wait_for_completion_interruptible_timeout(
        struct completion *done, unsigned long timeout)

This function passes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out or the remaining time in
jiffies if completion occurred.

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted or
else 0 if completion was achieved. There is a _timeout variant as well:

    long wait_for_completion_killable(struct completion *done)
    long wait_for_completion_killable_timeout(struct completion *done,
        unsigned long timeout)

The _io variants wait_for_completion_io() behave the same as the non-_io
variants, except for accounting waiting time as waiting on IO, which has
an impact on how the task is accounted in scheduling stats.

    void wait_for_completion_io(struct completion *done)
    unsigned long wait_for_completion_io_timeout(struct completion *done,
        unsigned long timeout)


Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue.

    void complete(struct completion *done)

or calls complete_all() to signal all current and future waiters.

    void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done element of struct completion. Waiting threads are
woken up in the same order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done element. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in hard-irq/atomic context safely.

There can only be one thread calling complete() or complete_all() on a
particular struct completion at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
probably are a design bug.

Signaling completion from hard-irq context is fine as it will appropriately
lock with spin_lock_irqsave/spin_unlock_irqrestore and it will never sleep.


try_wait_for_completion()/completion_done():
--------------------------------------------

4988aaa6 NMG |
234 | The try_wait_for_completion() function will not put the thread on the wait |
235 | queue but rather returns false if it would need to enqueue (block) the thread, | |
7085f6c3 | 236 | else it consumes one posted completion and returns true. |
202799be | 237 | |
4988aaa6 | 238 | bool try_wait_for_completion(struct completion *done) |
202799be | 239 | |
7085f6c3 JC |
240 | Finally, to check the state of a completion without changing it in any way, |
241 | call completion_done(), which returns false if there are no posted | |
242 | completions that were not yet consumed by waiters (implying that there are | |
243 | waiters) and true otherwise; | |
202799be | 244 | |
4988aaa6 | 245 | bool completion_done(struct completion *done) |
202799be NMG |
246 | |
247 | Both try_wait_for_completion() and completion_done() are safe to be called in | |
4988aaa6 | 248 | hard-irq or atomic context. |