= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages, one that supports the
automatic promotion and demotion of page sizes and that doesn't have
the shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings but in the
future it can expand to cover the pagecache layer, starting with tmpfs.

Applications run faster because of two factors. The first factor is
almost completely irrelevant and not of significant interest, because
it also has the downside of requiring larger clear-page/copy-page
operations in page faults, which is a potentially negative effect. The
first factor consists in taking a single page fault for each 2M
virtual region touched by userland (so reducing the enter/exit kernel
frequency by a 512 times factor). This only matters the first time the
memory is accessed for the lifetime of a memory mapping. The second,
long lasting and much more important factor affects all subsequent
accesses to the memory for the whole runtime of the application. The
second factor consists of two components: 1) the TLB miss will run
faster (especially with virtualization using nested pagetables, but
almost always also on bare metal without virtualization) and 2) a
single TLB entry will be mapping a much larger amount of virtual
memory, in turn reducing the number of TLB misses. With virtualization
and nested pagetables, TLB entries of a larger size can be used only
if both KVM and the Linux guest are using hugepages, but a significant
speedup already happens if only one of the two is using hugepages,
just because the TLB miss runs faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a huge pmd mapping into a
  table of ptes and, if necessary, to splitting a transparent
  hugepage. Therefore these components can continue working on the
  regular pages or regular pte mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated on hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in the anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, an
application may end up allocating more memory resources. An
application may mmap a large region but only touch 1 byte of it; in
that case a 2M page might be allocated instead of a 4k page for no
good reason. This is why it's possible to disable hugepages
system-wide and to only have them inside MADV_HUGEPAGE madvise
regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.

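As a hedged illustration (not taken from this document), a userland
program might register such a region roughly like the following
sketch; the region size and mapping flags are assumptions made for
the example:

#include <sys/mman.h>

#define REGION_SIZE (512UL * 1024 * 1024)	/* arbitrary example size */

int main(void)
{
	void *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	/* ask the kernel to back this region with hugepages when possible */
	if (madvise(buf, REGION_SIZE, MADV_HUGEPAGE))
		return 1;
	/* ... use buf as the performance critical working set ... */
	munmap(buf, REGION_SIZE);
	return 0;
}
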
== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes) or only enabled inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources) or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

It's also possible to limit the defrag effort in the VM to generate
hugepages, when they're not immediately free, to madvise regions only,
or to never try to defrag memory and simply fall back to regular pages
unless hugepages are immediately available. Clearly if we spend CPU
time to defrag memory, we would expect to gain even more by the fact
we use hugepages later instead of regular pages. This isn't always
guaranteed, but it may be more likely in case the allocation is for a
MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

"always" means that an application requesting THP will stall on
allocation failure and directly reclaim pages and compact memory in an
effort to allocate a THP immediately. This may be desirable for
virtual machines that benefit heavily from THP use and are willing to
delay the VM start to utilise them.

"defer" means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future. It's the responsibility of khugepaged
to then install the THP pages later.

"madvise" will enter direct reclaim like "always" but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default behaviour.

"never" should be self-explanatory.

By default the kernel tries to use the huge zero page on read page
faults. It's possible to disable the huge zero page by writing 0 or
enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should
be worth invoking defrag at least in khugepaged. However it's also
possible to disable defrag in khugepaged by writing 0 or enable defrag
in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full scans performed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

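As a small, hedged userland sketch (not part of this document), the
two counters above can be read like any other sysfs file:

#include <stdio.h>

static long read_counter(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	printf("pages_collapsed: %ld\n", read_counter(
		"/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed"));
	printf("full_scans: %ld\n", read_counter(
		"/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans"));
	return 0;
}
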
max_ptes_none specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value means programs may end up using additional memory; a
lower value means less THP performance is gained. The CPU-time cost
of max_ptes_none itself is negligible, so it can be ignored in that
respect.

max_ptes_swap specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify what applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.

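A hedged sketch of that counting (it assumes the usual
"AnonHugePages: <size> kB" line format in smaps and takes the PID as
its first argument):

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	long kb, total_kb = 0;
	FILE *f;

	if (argc < 2)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
	f = fopen(path, "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "AnonHugePages: %ld kB", &kb) == 1)
			total_kb += kb;
	fclose(f);
	printf("AnonHugePages total: %ld kB\n", total_kb);
	return 0;
}
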
There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies to both the
	first time a page is faulted and for COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_split_page is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.
	This action implies splitting all PMDs the page is mapped with.

thp_split_page_failed is incremented if the kernel fails to split a
	huge page. This can happen if the page was pinned by somebody.

thp_deferred_split_page is incremented when a huge page is put onto the
	split queue. This happens when a huge page is partially unmapped
	and splitting it would free up some memory. Pages on the split
	queue are going to be split under memory pressure.

thp_split_pmd is incremented every time a PMD is split into a table of
	PTEs. This can happen, for instance, when an application calls
	mprotect() or munmap() on part of a huge page. It doesn't split
	the huge page, only the page table entry.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with another allocation. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to
	allocate a huge zero page and falls back to using small pages.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but failed.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle over the page structure of the tail
page (like for checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead. Taking a reference on any head/tail page
would prevent the page from being split by anyone.

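For instance, a hedged fragment of such a driver update might simply
look at the compound head before touching head-only metadata (the
helper name here is made up for the example):

#include <linux/mm.h>

/* hypothetical helper: ->mapping is only meaningful on the head page */
static bool page_has_mapping(struct page *page)
{
	return compound_head(page)->mapping != NULL;
}
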
NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).

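A hedged, kernel-side sketch of a walker that cannot deal with
compound pages (the function and its error handling are illustrative,
not from the kernel source; it assumes mmap_sem is already held and
that vma/addr come from the caller):

#include <linux/mm.h>
#include <linux/err.h>

static int touch_one_page(struct vm_area_struct *vma, unsigned long addr)
{
	/* FOLL_SPLIT asks follow_page to split any THP before returning it */
	struct page *page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);

	if (IS_ERR(page))
		return PTR_ERR(page);
	if (!page)
		return -ENOENT;
	/* ... operate on a regular 4k page here ... */
	put_page(page);
	return 0;
}
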
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.

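As a minimal sketch (assuming the common 2M hugepage size on x86-64),
an application could obtain a hugepage-aligned buffer like this:

#include <stdlib.h>
#include <string.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
	void *buf;

	/* a 2M alignment guarantees the buffer is hugepage aligned */
	if (posix_memalign(&buf, HPAGE_SIZE, 8 * HPAGE_SIZE))
		return 1;
	memset(buf, 0, 8 * HPAGE_SIZE);	/* touch it so it is faulted in */
	free(buf);
	return 0;
}
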
== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware about huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swapout the hugepage for example. split_huge_page() can
fail if the page is pinned and you must handle this correctly.

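A hedged, in-kernel sketch of the split_huge_page() fallback described
in the previous paragraph (the helper is hypothetical; split_huge_page()
wants the page locked and returns non-zero when it cannot split, e.g.
because of extra pins):

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/huge_mm.h>

/* hypothetical helper: turn a THP we can't handle into regular pages */
static int make_page_regular(struct page *page)
{
	int ret;

	page = compound_head(page);
	if (!PageTransHuge(page))
		return 0;
	lock_page(page);
	ret = split_huge_page(page);	/* 0 on success */
	unlock_page(page);
	return ret;
}
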
Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_pmd(vma, pmd, addr);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.

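A minimal sketch of that pattern (not taken from the kernel source; it
assumes mm and pmd were set up by a pagetable walk that already holds
mmap_sem, and the handler comments stand in for real code):

#include <linux/mm.h>
#include <linux/huge_mm.h>

static void walk_one_pmd(struct mm_struct *mm, pmd_t *pmd)
{
	if (pmd_trans_huge(*pmd)) {
		spinlock_t *ptl = pmd_lock(mm, pmd);

		if (pmd_trans_huge(*pmd)) {
			/* still huge: handle the huge pmd/hugepage natively */
			spin_unlock(ptl);
			return;
		}
		/* it was split under us: fall through to the pte path */
		spin_unlock(ptl);
	}
	/* old code path working on regular ptes goes here */
}
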
== Refcounts and transparent huge pages ==

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

  - get_page()/put_page() and GUP operate on the head page's ->_count.

  - ->_count in tail pages is always zero: get_page_unless_zero() never
    succeeds on tail pages.

  - map/unmap of the pages with a PTE entry increments/decrements
    ->_mapcount on the relevant sub-page of the compound page.

  - map/unmap of the whole compound page is accounted in
    compound_mapcount (stored in the first tail page).

PageDoubleMap() indicates that ->_mapcount in all subpages is offset up by one.
This additional reference is required to get race-free detection of unmap of
subpages when we have them mapped with both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage mapcount
tracking. The alternative is to alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.

We set PG_double_map when a PMD of the page got split for the first time,
but the page still has a PMD mapping. The additional references go away
with the last compound_mapcount.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries. But we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
request to split a pinned huge page: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the split_huge_page
caller must have a reference for the head page).

split_huge_page uses migration entries to stabilize page->_count and
page->_mapcount.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_count until atomic_add(). This prevents a
scanner from getting a reference to a tail page up to that point. After
the atomic_add() we don't care about the ->_count value. We already know
how many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind.
It's clear where the reference should go after the split: it will stay on
the head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
the pmd can be split at any point and the split never fails.

== Partial unmap and deferred_split_huge_page() ==

Unmapping part of a THP (with munmap() or another way) is not going to
free memory immediately. Instead, we detect that a subpage of the THP is
not in use in page_remove_rmap() and queue the THP for splitting if
memory pressure comes. Splitting will free up unused subpages.

Splitting the page right away is not an option due to the locking context
in the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during
exit(2) if a THP crosses a VMA boundary.

Function deferred_split_huge_page() is used to queue a page for splitting.
The splitting itself will happen when we get memory pressure via the
shrinker interface.