CPUSETS
-------

Copyright (C) 2004 BULL SA.
Written by Simon.Derr@bull.net

Portions Copyright (c) 2004 Silicon Graphics, Inc.
Modified by Paul Jackson <pj@sgi.com>

CONTENTS:
=========

1. Cpusets
  1.1 What are cpusets?
  1.2 Why are cpusets needed?
  1.3 How are cpusets implemented?
  1.4 How do I use cpusets?
2. Usage Examples and Syntax
  2.1 Basic Usage
  2.2 Adding/removing cpus
  2.3 Setting flags
  2.4 Attaching processes
3. Questions
4. Contact

1. Cpusets
==========

1.1 What are cpusets?
---------------------

Cpusets provide a mechanism for assigning a set of CPUs and Memory
Nodes to a set of tasks.

Cpusets constrain the CPU and Memory placement of tasks to only
the resources within a task's current cpuset.  They form a nested
hierarchy visible in a virtual file system.  These are the essential
hooks, beyond what is already present, required to manage dynamic
job placement on large systems.

Each task has a pointer to a cpuset.  Multiple tasks may reference
the same cpuset.  Requests by a task, using the sched_setaffinity(2)
system call to include CPUs in its CPU affinity mask, and using the
mbind(2) and set_mempolicy(2) system calls to include Memory Nodes
in its memory policy, are both filtered through that task's cpuset,
removing any CPUs or Memory Nodes not in that cpuset.  The
scheduler will not schedule a task on a CPU that is not allowed in
its cpus_allowed vector, and the kernel page allocator will not
allocate a page on a node that is not allowed in the requesting
task's mems_allowed vector.

If a cpuset is cpu or mem exclusive, no other cpuset, other than a direct
ancestor or descendant, may share any of the same CPUs or Memory Nodes.
A cpuset that is cpu exclusive has a sched domain associated with it.
The sched domain consists of all cpus in the current cpuset that are not
part of any exclusive child cpusets.
This ensures that the scheduler load balancing code only balances
against the cpus that are in the sched domain as defined above, and not
all of the cpus in the system.  This removes any overhead due to the
load balancing code trying to pull tasks outside of the cpu exclusive
cpuset, only to be prevented by the tasks' cpus_allowed mask.

A cpuset that is mem_exclusive restricts kernel allocations for
page, buffer and other data commonly shared by the kernel across
multiple users.  All cpusets, whether mem_exclusive or not, restrict
allocations of memory for user space.  This enables configuring a
system so that several independent jobs can share common kernel
data, such as file system pages, while isolating each job's user
allocation in its own cpuset.  To do this, construct a large
mem_exclusive cpuset to hold all the jobs, and construct child,
non-mem_exclusive cpusets for each individual job.  Only a small
amount of typical kernel memory, such as requests from interrupt
handlers, is allowed to be taken outside even a mem_exclusive cpuset.
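
A sketch of such a configuration, using the cpuset file system
described below (the names 'jobs', 'job1' and 'job2', and the CPU and
node numbers, are illustrative):

  mount -t cpuset none /dev/cpuset
  cd /dev/cpuset
  mkdir jobs                        # parent cpuset holding all the jobs
  /bin/echo 0-7 > jobs/cpus
  /bin/echo 0-1 > jobs/mems
  /bin/echo 1 > jobs/mem_exclusive  # kernel data shared within 'jobs'
  mkdir jobs/job1 jobs/job2         # non-mem_exclusive children
  /bin/echo 0-3 > jobs/job1/cpus
  /bin/echo 0   > jobs/job1/mems
  /bin/echo 4-7 > jobs/job2/cpus
  /bin/echo 1   > jobs/job2/mems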

User level code may create and destroy cpusets by name in the cpuset
virtual file system, manage the attributes and permissions of these
cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
specify and query to which cpuset a task is assigned, and list the
task pids assigned to a cpuset.


1.2 Why are cpusets needed?
---------------------------

The management of large computer systems, with many processors (CPUs),
complex memory cache hierarchies and multiple Memory Nodes having
non-uniform access times (NUMA), presents additional challenges for
the efficient scheduling and memory placement of processes.

Frequently, more modest sized systems can be operated with adequate
efficiency just by letting the operating system automatically share
the available CPU and Memory resources amongst the requesting tasks.

But larger systems, which benefit more from careful processor and
memory placement to reduce memory access times and contention,
and which typically represent a larger investment for the customer,
can benefit from explicitly placing jobs on properly sized subsets of
the system.

This can be especially valuable on:

 * Web Servers running multiple instances of the same web application,
 * Servers running different applications (for instance, a web server
   and a database), or
 * NUMA systems running large HPC applications with demanding
   performance characteristics.
 * Also, cpu_exclusive cpusets are useful for servers running orthogonal
   workloads, such as RT applications requiring low latency and HPC
   applications that are throughput sensitive.

These subsets, or "soft partitions", must be able to be dynamically
adjusted, as the job mix changes, without impacting other concurrently
executing jobs.

The kernel cpuset patch provides the minimum essential kernel
mechanisms required to efficiently implement such subsets.  It
leverages existing CPU and Memory Placement facilities in the Linux
kernel to avoid any additional impact on the critical scheduler or
memory allocator code.


1.3 How are cpusets implemented?
--------------------------------

Cpusets provide a Linux kernel (2.6.7 and above) mechanism to constrain
which CPUs and Memory Nodes are used by a process or set of processes.

The Linux kernel already has a pair of mechanisms to specify on which
CPUs a task may be scheduled (sched_setaffinity) and on which Memory
Nodes it may obtain memory (mbind, set_mempolicy).

Cpusets extend these two mechanisms as follows:

 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
   kernel.
 - Each task in the system is attached to a cpuset, via a pointer
   in the task structure to a reference counted cpuset structure.
 - Calls to sched_setaffinity are filtered to just those CPUs
   allowed in that task's cpuset.
 - Calls to mbind and set_mempolicy are filtered to just
   those Memory Nodes allowed in that task's cpuset.
 - The root cpuset contains all the system's CPUs and Memory
   Nodes.
 - For any cpuset, one can define child cpusets containing a subset
   of the parent's CPU and Memory Node resources.
 - The hierarchy of cpusets can be mounted at /dev/cpuset, for
   browsing and manipulation from user space.
 - A cpuset may be marked exclusive, which ensures that no other
   cpuset (except direct ancestors and descendants) may contain
   any overlapping CPUs or Memory Nodes.
   Also, a cpu_exclusive cpuset is associated with a sched
   domain.
 - You can list all the tasks (by pid) attached to any cpuset.

The implementation of cpusets requires a few, simple hooks
into the rest of the kernel, none in performance critical paths:

 - in init/main.c, to initialize the root cpuset at system boot.
 - in fork and exit, to attach and detach a task from its cpuset.
 - in sched_setaffinity, to mask the requested CPUs by what's
   allowed in that task's cpuset.
 - in sched.c migrate_all_tasks(), to keep migrating tasks within
   the CPUs allowed by their cpuset, if possible.
 - in sched.c, a new API, partition_sched_domains, for handling
   sched domain changes associated with cpu_exclusive cpusets,
   and related changes in both sched.c and arch/ia64/kernel/domain.c.
 - in the mbind and set_mempolicy system calls, to mask the requested
   Memory Nodes by what's allowed in that task's cpuset.
 - in page_alloc, to restrict memory to allowed nodes.
 - in vmscan.c, to restrict page reclaim to the current cpuset.

In addition, a new file system of type "cpuset" may be mounted,
typically at /dev/cpuset, to enable browsing and modifying the cpusets
presently known to the kernel.  No new system calls are added for
cpusets - all support for querying and modifying cpusets is via
this cpuset file system.

Each task under /proc has an added file named 'cpuset', displaying
the cpuset name, as the path relative to the root of the cpuset file
system.

The /proc/<pid>/status file for each task has two added lines,
displaying the task's cpus_allowed (on which CPUs it may be scheduled)
and mems_allowed (on which Memory Nodes it may obtain memory),
in the format seen in the following example:

  Cpus_allowed:   ffffffff,ffffffff,ffffffff,ffffffff
  Mems_allowed:   ffffffff,ffffffff
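
These fields can be read directly; for example, to check the masks of
the current shell (a minimal sketch):

  grep -E '^(Cpus|Mems)_allowed' /proc/self/status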

Each cpuset is represented by a directory in the cpuset file system
containing the following files describing that cpuset:

 - cpus: list of CPUs in that cpuset
 - mems: list of Memory Nodes in that cpuset
 - cpu_exclusive flag: is cpu placement exclusive?
 - mem_exclusive flag: is memory placement exclusive?
 - tasks: list of tasks (by pid) attached to that cpuset

New cpusets are created using the mkdir system call or shell
command.  The properties of a cpuset, such as its flags, allowed
CPUs and Memory Nodes, and attached tasks, are modified by writing
to the appropriate file in that cpuset's directory, as listed above.

The named hierarchical structure of nested cpusets allows partitioning
a large system into nested, dynamically changeable, "soft-partitions".

The attachment of each task to a cpuset, automatically inherited at
fork by any children of that task, allows organizing the work load
on a system into related sets of tasks such that each set is constrained
to using the CPUs and Memory Nodes of a particular cpuset.  A task
may be re-attached to any other cpuset, if allowed by the permissions
on the necessary cpuset file system directories.

Such management of a system "in the large" integrates smoothly with
the detailed placement done on individual tasks and memory regions
using the sched_setaffinity, mbind and set_mempolicy system calls.

The following rules apply to each cpuset:

 - Its CPUs and Memory Nodes must be a subset of its parent's.
 - It can only be marked exclusive if its parent is.
 - If its cpu or memory is exclusive, they may not overlap any sibling.

These rules, and the natural hierarchy of cpusets, enable efficient
enforcement of the exclusive guarantee, without having to scan all
cpusets every time any of them change, to ensure nothing overlaps an
exclusive cpuset.  Also, the use of a Linux virtual file system (vfs)
to represent the cpuset hierarchy provides for a familiar permission
and name space for cpusets, with a minimum of additional kernel code.

1.4 How do I use cpusets?
-------------------------

In order to minimize the impact of cpusets on critical kernel
code, such as the scheduler, and due to the fact that the kernel
does not support one task updating the memory placement of another
task directly, the impact on a task of changing its cpuset CPU
or Memory Node placement, or of changing to which cpuset a task
is attached, is subtle.

If a cpuset has its Memory Nodes modified, then for each task attached
to that cpuset, the next time that the kernel attempts to allocate
a page of memory for that task, the kernel will notice the change
in the task's cpuset, and update its per-task memory placement to
remain within the new cpuset's memory placement.  If the task was using
mempolicy MPOL_BIND, and the nodes to which it was bound overlap with
its new cpuset, then the task will continue to use whatever subset
of MPOL_BIND nodes are still allowed in the new cpuset.  If the task
was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
in the new cpuset, then the task will be essentially treated as if it
was MPOL_BIND bound to the new cpuset (even though its numa placement,
as queried by get_mempolicy(), doesn't change).  If a task is moved
from one cpuset to another, then the kernel will adjust the task's
memory placement, as above, the next time that the kernel attempts
to allocate a page of memory for that task.

If a cpuset has its CPUs modified, then each task using that
cpuset does _not_ change its behavior automatically.  In order to
minimize the impact on the critical scheduling code in the kernel,
tasks will continue to use their prior CPU placement until they
are rebound to their cpuset, by rewriting their pid to the 'tasks'
file of their cpuset.  If a task had been bound to some subset of its
cpuset using the sched_setaffinity() call, and if any of that subset
is still allowed in its new cpuset settings, then the task will be
restricted to the intersection of the CPUs it was allowed on before,
and its new cpuset CPU placement.  If, on the other hand, there is
no overlap between a task's prior placement and its new cpuset CPU
placement, then the task will be allowed to run on any CPU allowed
in its new cpuset.  If a task is moved from one cpuset to another,
its CPU placement is updated in the same way as if the task's pid had
been rewritten to the 'tasks' file of its current cpuset.
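
For example, after shrinking a cpuset's CPUs, each attached task can be
rebound so that it picks up the new placement (a minimal sketch, reusing
the cpuset 'Charlie' created in the example below; the pid list is read
once, before the loop starts rewriting the 'tasks' file):

  cd /dev/cpuset/Charlie
  /bin/echo 2 > cpus
  # Rebind each task so it notices the new CPU placement
  for pid in $(cat tasks); do
      /bin/echo $pid > tasks
  done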

In summary, the memory placement of a task whose cpuset is changed is
updated by the kernel, on the next allocation of a page for that task,
but the processor placement is not updated, until that task's pid is
rewritten to the 'tasks' file of its cpuset.  This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a task's processor placement.

There is an exception to the above.  If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
then the kernel will automatically update the cpus_allowed of all
tasks attached to that cpuset to allow all CPUs.  When memory
hotplug functionality for removing Memory Nodes is available, a
similar exception is expected to apply there as well.  In general,
the kernel prefers to violate cpuset placement, over starving a task
that has had all its allowed CPUs or Memory Nodes taken offline.  User
code should reconfigure cpusets to only refer to online CPUs and Memory
Nodes when using hotplug to add or remove such resources.

There is a second exception to the above.  GFP_ATOMIC requests are
kernel internal allocations that must be satisfied immediately.
The kernel may drop some request, and in rare cases even panic, if a
GFP_ATOMIC alloc fails.  If the request cannot be satisfied within
the current task's cpuset, then we relax the cpuset, and look for
memory anywhere we can find it.  It's better to violate the cpuset
than stress the kernel.

To start a new job that is to be contained within a cpuset, the steps are:

 1) mkdir /dev/cpuset
 2) mount -t cpuset none /dev/cpuset
 3) Create the new cpuset by doing mkdir's and write's (or echo's) in
    the /dev/cpuset virtual file system.
 4) Start a task that will be the "founding father" of the new job.
 5) Attach that task to the new cpuset by writing its pid to the
    /dev/cpuset tasks file for that cpuset.
 6) fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will set up a cpuset
named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
and then start a subshell 'sh' in that cpuset:

  mount -t cpuset none /dev/cpuset
  cd /dev/cpuset
  mkdir Charlie
  cd Charlie
  /bin/echo 2-3 > cpus
  /bin/echo 1 > mems
  /bin/echo $$ > tasks
  sh
  # The subshell 'sh' is now running in cpuset Charlie
  # The next line should display '/Charlie'
  cat /proc/self/cpuset

In the case that a change of cpuset includes wanting to move already
allocated memory pages, consider further the work of IWAMOTO
Toshihiro <iwamoto@valinux.co.jp> for page remapping and memory
hotremoval, which can be found at:

  http://people.valinux.co.jp/~iwamoto/mh.html

The integration of cpusets with such memory migration is not yet
available.

In the future, a C library interface to cpusets will likely be
available.  For now, the only way to query or modify cpusets is
via the cpuset file system, using the various cd, mkdir, echo, cat,
rmdir commands from the shell, or their equivalent from C.

The sched_setaffinity calls can also be done at the shell prompt using
SGI's runon or Robert Love's taskset.  The mbind and set_mempolicy
calls can be done at the shell prompt using the numactl command
(part of Andi Kleen's numa package).
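
For example, detailed placement within a cpuset might look like this (a
minimal sketch, assuming taskset and numactl are installed and that the
CPU and node numbers lie within the current cpuset; 'app' is an
illustrative command):

  taskset -c 2 ./app            # run 'app' pinned to CPU 2
  numactl --membind=1 ./app     # run 'app' with memory from node 1 only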

2. Usage Examples and Syntax
============================

2.1 Basic Usage
---------------

Creating, modifying and using cpusets can be done through the cpuset
virtual filesystem.

To mount it, type:
# mount -t cpuset none /dev/cpuset

Then under /dev/cpuset you can find a tree that corresponds to the
tree of the cpusets in the system.  For instance, /dev/cpuset
is the cpuset that holds the whole system.

If you want to create a new cpuset under /dev/cpuset:
# cd /dev/cpuset
# mkdir my_cpuset

Now you want to do something with this cpuset.
# cd my_cpuset

In this directory you can find several files:
# ls
cpus  cpu_exclusive  mems  mem_exclusive  tasks

Reading them will give you information about the state of this cpuset:
the CPUs and Memory Nodes it can use, the processes that are using
it, and its properties.  By writing to these files you can manipulate
the cpuset.

Set some flags:
# /bin/echo 1 > cpu_exclusive

Add some cpus:
# /bin/echo 0-7 > cpus

Now attach your shell to this cpuset:
# /bin/echo $$ > tasks

You can also create cpusets inside your cpuset by using mkdir in this
directory.
# mkdir my_sub_cs

To remove a cpuset, just use rmdir:
# rmdir my_sub_cs
This will fail if the cpuset is in use (has cpusets inside, or has
processes attached).

2.2 Adding/removing cpus
------------------------

This is the syntax to use when writing in the cpus or mems files
in cpuset directories:

# /bin/echo 1-4 > cpus		-> set cpus list to cpus 1,2,3,4
# /bin/echo 1,2,3,4 > cpus	-> set cpus list to cpus 1,2,3,4
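
Each write replaces the whole list, so CPUs are removed by writing back
a reduced list.  The same syntax applies to the mems file; for example
(node numbers are illustrative):

# /bin/echo 1-3 > cpus		-> remove cpu 4 from the list above
# /bin/echo 0,2 > mems		-> set mems list to nodes 0,2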

2.3 Setting flags
-----------------

The syntax is very simple:

# /bin/echo 1 > cpu_exclusive	-> set flag 'cpu_exclusive'
# /bin/echo 0 > cpu_exclusive	-> unset flag 'cpu_exclusive'

2.4 Attaching processes
-----------------------

# /bin/echo PID > tasks

Note that it is PID, not PIDs.  You can only attach ONE task at a time.
If you have several tasks to attach, you have to do it one after another:

# /bin/echo PID1 > tasks
# /bin/echo PID2 > tasks
	...
# /bin/echo PIDn > tasks
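
If the tasks to attach share a program name, a shell loop can attach
them one at a time (a minimal sketch; 'myjob' is an illustrative program
name, and pidof(8) is assumed to be available):

# for pid in $(pidof myjob); do /bin/echo $pid > tasks; done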


3. Questions
============

Q: What's up with this '/bin/echo'?
A: bash's builtin 'echo' command does not check its calls to write()
   for errors.  If you use it in the cpuset file system, you won't be
   able to tell whether a command succeeded or failed.

Q: When I attach processes, only the first of the line gets really attached!
A: We can only return one error code per call to write().  So you should also
   put only ONE pid.

4. Contact
==========

Web: http://www.bullopensource.org/cpuset