The x86 kvm shadow mmu
======================

The mmu (in arch/x86/kvm, files mmu.[ch] and paging_tmpl.h) is responsible
for presenting a standard x86 mmu to the guest, while translating guest
physical addresses to host physical addresses.

The mmu code attempts to satisfy the following requirements:

- correctness: the guest should not be able to determine that it is running
               on an emulated mmu except for timing (we attempt to comply
               with the specification, not emulate the characteristics of
               a particular implementation such as tlb size)
- security: the guest must not be able to touch host memory not assigned
            to it
- performance: minimize the performance penalty imposed by the mmu
- scaling: need to scale to large memory and large vcpu guests
- hardware: support the full range of x86 virtualization hardware
- integration: Linux memory management code must be in control of guest memory
               so that swapping, page migration, page merging, transparent
               hugepages, and similar features work without change
- dirty tracking: report writes to guest memory to enable live migration
                  and framebuffer-based displays
- footprint: keep the amount of pinned kernel memory low (most memory
             should be shrinkable)
- reliability: avoid multipage or GFP_ATOMIC allocations

Acronyms
========

pfn   host page frame number
hpa   host physical address
hva   host virtual address
gfn   guest frame number
gpa   guest physical address
gva   guest virtual address
ngpa  nested guest physical address
ngva  nested guest virtual address
pte   page table entry (used also to refer generically to paging structure
      entries)
gpte  guest pte (referring to gfns)
spte  shadow pte (referring to pfns)
tdp   two dimensional paging (vendor neutral term for NPT and EPT)

Virtual and real hardware supported
===================================

The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware
it exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages. Work is in progress to support
exposing NPT capable hardware on NPT capable hosts.

Translation
===========

The primary job of the mmu is to program the processor's mmu to translate
addresses for the guest. Different translations are required at different
times:

- when guest paging is disabled, we translate guest physical addresses to
  host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
  guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
  virtual addresses, to nested guest physical addresses, to guest physical
  addresses, to host physical addresses (ngva->ngpa->gpa->hpa)

The primary challenge is to encode between 1 and 3 translations into hardware
that supports only 1 (traditional) and 2 (tdp) translations. When the
number of required translations matches the hardware, the mmu operates in
direct mode; otherwise it operates in shadow mode (see below).

Memory
======

Guest memory (gpa) is part of the user address space of the process that is
using kvm. Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa.

These hvas may be backed using any method available to the host: anonymous
memory, file backed memory, and device memory. Memory might be paged by the
host at any time.

Events
======

The mmu is driven by events, some from the guest, some from the host.

Guest generated events:
- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations

Host generated events:
- changes in the gpa->hpa translation (either through gpa->hva changes or
  through hva->hpa changes)
- memory pressure (the shrinker)

Shadow pages
============

The principal data structure is the shadow page, 'struct kvm_mmu_page'. A
shadow page contains 512 sptes, which can be either leaf or nonleaf sptes. A
shadow page may contain a mix of leaf and nonleaf sptes.

A nonleaf spte allows the hardware mmu to reach the leaf pages and
is not related to a translation directly. It points to other shadow pages.

A leaf spte corresponds to either one or two translations encoded into
one paging structure entry. These are always the lowest level of the
translation stack, with optional higher level translations left to NPT/EPT.
Leaf ptes point at guest pages.

The following table shows translations encoded by leaf ptes, with higher-level
translations in parentheses:

 Non-nested guests:
  nonpaging:     gpa->hpa
  paging:        gva->gpa->hpa
  paging, tdp:   (gva->)gpa->hpa
 Nested guests:
  non-tdp:       ngva->gpa->hpa (*)
  tdp:           (ngva->)ngpa->gpa->hpa

(*) the guest hypervisor will encode the ngva->gpa translation into its page
    tables if npt is not present

Shadow pages contain the following information (a simplified structure sketch
appears after this list):
  role.level:
    The level in the shadow paging hierarchy that this shadow page belongs to.
    1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
  role.direct:
    If set, leaf sptes reachable from this page are for a linear range.
    Examples include real mode translation, large guest pages backed by small
    host pages, and gpa->hpa translations when NPT or EPT is active.
    The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
    by role.level (2MB for first level, 1GB for second level, 0.5TB for third
    level, 256TB for fourth level)
    If clear, this page corresponds to a guest page table denoted by the gfn
    field.
  role.quadrant:
    When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses 64-bit
    sptes. That means a guest page table contains more ptes than the host,
    so multiple shadow pages are needed to shadow one guest page.
    For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
    first or second 512-gpte block in the guest page table. For second-level
    page tables, each 32-bit gpte is converted to two 64-bit sptes
    (since each first-level guest page is shadowed by two first-level
    shadow pages) so role.quadrant takes values in the range 0..3. Each
    quadrant maps 1GB virtual address space.
  role.access:
    Inherited guest access permissions in the form uwx. Note execute
    permission is positive, not negative.
  role.invalid:
    The page is invalid and should not be used. It is a root page that is
    currently pinned (by a cpu hardware register pointing to it); once it is
    unpinned it will be destroyed.
  role.cr4_pae:
    Contains the value of cr4.pae for which the page is valid (e.g. whether
    32-bit or 64-bit gptes are in use).
  role.nxe:
    Contains the value of efer.nxe for which the page is valid.
  role.cr0_wp:
    Contains the value of cr0.wp for which the page is valid.
  role.smep_andnot_wp:
    Contains the value of cr4.smep && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  role.smap_andnot_wp:
    Contains the value of cr4.smap && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  gfn:
    Either the guest page table containing the translations shadowed by this
    page, or the base page frame for linear translations. See role.direct.
  spt:
    A pageful of 64-bit sptes containing the translations for this page.
    Accessed by both kvm and hardware.
    The page pointed to by spt will have its page->private pointing back
    at the shadow page structure.
    sptes in spt point either at guest pages, or at lower-level shadow pages.
    Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
    at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
    The spt array forms a DAG structure with the shadow page as a node, and
    guest pages as leaves.
  gfns:
    An array of 512 guest frame numbers, one for each present pte. Used to
    perform a reverse map from a pte to a gfn. When role.direct is set, any
    element of this array can be calculated from the gfn field when used; in
    that case, the array of gfns is not allocated. See role.direct and gfn.
  root_count:
    A counter keeping track of how many hardware registers (guest cr3 or
    pdptrs) are now pointing at the page. While this counter is nonzero, the
    page cannot be destroyed. See role.invalid.
  parent_ptes:
    The reverse mapping for the pte/ptes pointing at this page's spt. If
    parent_ptes bit 0 is zero, only one spte points at this page and
    parent_ptes points at this single spte; otherwise, multiple sptes point
    at this page and (parent_ptes & ~0x1) points at a data structure with a
    list of parent_ptes.
  unsync:
    If true, then the translations in this page may not match the guest's
    translation. This is equivalent to the state of the tlb when a pte is
    changed but before the tlb entry is flushed. Accordingly, unsync ptes
    are synchronized when the guest executes invlpg or flushes its tlb by
    other means. Valid for leaf pages.
  unsync_children:
    How many sptes in the page point at pages that are unsync (or have
    unsynchronized children).
  unsync_child_bitmap:
    A bitmap indicating which sptes in spt point (directly or indirectly) at
    pages that may be unsynchronized. Used to quickly locate all unsynchronized
    pages reachable from a given page.
  mmu_valid_gen:
    Generation number of the page. It is compared with kvm->arch.mmu_valid_gen
    during hash table lookup, and used to skip invalidated shadow pages (see
    "Zapping all pages" below.)
  clear_spte_count:
    Only present on 32-bit hosts, where a 64-bit spte cannot be written
    atomically. The reader uses this while running out of the MMU lock
    to detect in-progress updates and retry them until the writer has
    finished the write.
  write_flooding_count:
    A guest may write to a page table many times, causing a lot of
    emulations if the page needs to be write-protected (see "Synchronized
    and unsynchronized pages" below). Leaf pages can be unsynchronized
    so that they do not trigger frequent emulation, but this is not
    possible for non-leafs. This field counts the number of emulations
    since the last time the page table was actually used; if emulation
    is triggered too frequently on this page, KVM will unmap the page
    to avoid emulation in the future.
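
The layout below is a simplified, illustrative sketch of the structure
described by this list; the field names follow the text, but the widths,
types and the helper are approximations, not the kernel's actual
'struct kvm_mmu_page':

  /* illustrative sketch only -- not the kernel's definitions */
  #include <stdint.h>

  union mmu_page_role {
          uint32_t word;
          struct {
                  unsigned level:4;          /* 1=4k, 2=2M, 3=1G sptes, ... */
                  unsigned cr4_pae:1;        /* 64-bit gptes in use */
                  unsigned quadrant:2;       /* part of a 32-bit guest table */
                  unsigned direct:1;         /* linear range, no guest table */
                  unsigned access:3;         /* inherited guest uwx */
                  unsigned invalid:1;        /* pinned root, to be destroyed */
                  unsigned nxe:1;            /* efer.nxe the page was built for */
                  unsigned cr0_wp:1;         /* cr0.wp the page was built for */
                  unsigned smep_andnot_wp:1; /* cr4.smep && !cr0.wp */
                  unsigned smap_andnot_wp:1; /* cr4.smap && !cr0.wp */
          };
  };

  struct mmu_page {
          union mmu_page_role role;
          uint64_t gfn;              /* guest page table, or base of range */
          uint64_t *spt;             /* one page: 512 sptes, used by hw too */
          uint64_t *gfns;            /* gfn per spte; NULL when role.direct */
          int root_count;            /* hardware references (cr3/pdptrs) */
          unsigned long parent_ptes; /* one spte, or a list if bit 0 is set */
          int unsync;
          unsigned int unsync_children;
          unsigned long mmu_valid_gen;
          int clear_spte_count;      /* 32-bit hosts only in the real code */
          int write_flooding_count;
  };

  /* For role.direct pages, the gfn of spte i follows from the base gfn. */
  static inline uint64_t direct_gfn(const struct mmu_page *sp, int i)
  {
          /* each spte at level L covers 512^(L-1) frames */
          return sp->gfn + (uint64_t)i * (1ull << (9 * (sp->role.level - 1)));
  }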

Reverse map
===========

The mmu maintains a reverse mapping whereby all ptes mapping a page can be
reached given its gfn. This is used, for example, when swapping out a page.

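Conceptually the reverse map associates each gfn with the set of spte
locations that map it. The toy model below only illustrates that idea; the
kernel keeps its rmap heads in per-memslot arrays and uses a more compact
encoding:

  /* toy model of the idea only */
  #include <stdint.h>
  #include <stdlib.h>

  struct rmap_entry {
          uint64_t *sptep;          /* location of an spte mapping the gfn */
          struct rmap_entry *next;
  };

  struct rmap_head {                /* one head per gfn, per memory slot */
          struct rmap_entry *first;
  };

  static void rmap_add(struct rmap_head *head, uint64_t *sptep)
  {
          struct rmap_entry *e = malloc(sizeof(*e));
          if (!e)
                  return;
          e->sptep = sptep;
          e->next = head->first;
          head->first = e;
  }

  /* dropping all translations of a gfn is then a walk of this list */
  static void rmap_drop_all(struct rmap_head *head)
  {
          for (struct rmap_entry *e = head->first; e; e = e->next)
                  *e->sptep = 0;    /* clear the spte (simplified) */
  }
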
Synchronized and unsynchronized pages
=====================================

The guest uses two events to synchronize its tlb and page tables: tlb flushes
and page invalidations (invlpg).

A tlb flush means that we need to synchronize all sptes reachable from the
guest's cr3. This is expensive, so we keep all guest page tables write
protected, and synchronize sptes to gptes when a gpte is written.

A special case is when a guest page table is reachable from the current
guest cr3. In this case, the guest is obliged to issue an invlpg instruction
before using the translation. We take advantage of that by removing write
protection from the guest page, and allowing the guest to modify it freely.
We synchronize modified gptes when the guest invokes invlpg. This reduces
the amount of emulation we have to do when the guest modifies multiple gptes,
or when a guest page is no longer used as a page table and is used for
random guest data.

As a side effect we have to resynchronize all reachable unsynchronized shadow
pages on a tlb flush.


Reaction to events
==================

- guest page fault (or npt page fault, or ept violation)

This is the most complicated event. The cause of a page fault can be:

 - a true guest fault (the guest translation won't allow the access) (*)
 - access to a missing translation
 - access to a protected translation
   - when logging dirty pages, memory is write protected
   - synchronized shadow pages are write protected (*)
 - access to untranslatable memory (mmio)

 (*) not applicable in direct mode

Handling a page fault is performed as follows (a short sketch of the error
code bits consulted by the first steps appears after the list):

 - if the RSV bit of the error code is set, the page fault is caused by the
   guest accessing MMIO and cached MMIO information is available.
 - walk shadow page table
 - check for valid generation number in the spte (see "Fast invalidation of
   MMIO sptes" below)
 - cache the information to vcpu->arch.mmio_gva, vcpu->arch.access and
   vcpu->arch.mmio_gfn, and call the emulator
 - If both P bit and R/W bit of the error code are set, this could possibly
   be handled as a "fast page fault" (fixed without taking the MMU lock). See
   the description in Documentation/virtual/kvm/locking.txt.
 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)
   - if permissions are insufficient, reflect the fault back to the guest
 - determine the host page
   - if this is an mmio request, there is no host page; cache the info to
     vcpu->arch.mmio_gva, vcpu->arch.access and vcpu->arch.mmio_gfn
 - walk the shadow page table to find the spte for the translation,
   instantiating missing intermediate page tables as necessary
   - If this is an mmio request, cache the mmio info to the spte and set some
     reserved bit on the spte (see callers of kvm_mmu_set_mmio_spte_mask)
 - try to unsynchronize the page
   - if successful, we can let the guest continue and modify the gpte
 - emulate the instruction
   - if failed, unshadow the page and let the guest continue
 - update any translations that were modified by the instruction
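
The early steps above are driven by the x86 page fault error code. The
helpers below are a minimal sketch of those two checks using the
architectural error code bits; they are illustrative and are not the
kernel's fault path:

  #include <stdbool.h>
  #include <stdint.h>

  #define PFERR_P    (1u << 0)   /* a page was present */
  #define PFERR_W    (1u << 1)   /* the access was a write */
  #define PFERR_RSVD (1u << 3)   /* a reserved bit was set in a paging entry */

  /* MMIO sptes are built with reserved bits set, so RSV faults flag MMIO. */
  static bool is_cached_mmio_fault(uint32_t error_code)
  {
          return error_code & PFERR_RSVD;
  }

  /* present + write faults are candidates for the lockless fast page fault */
  static bool may_be_fast_page_fault(uint32_t error_code)
  {
          return (error_code & (PFERR_P | PFERR_W)) == (PFERR_P | PFERR_W);
  }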

invlpg handling:

  - walk the shadow page hierarchy and drop affected translations
  - try to reinstantiate the indicated translation in the hope that the
    guest will use it in the near future

Guest control register updates:

- mov to cr3
  - look up new shadow roots
  - synchronize newly reachable shadow pages

- mov to cr0/cr4/efer
  - set up mmu context for new paging mode
  - look up new shadow roots
  - synchronize newly reachable shadow pages

Host translation updates:

  - mmu notifier called with updated hva
  - look up affected sptes through reverse map
  - drop (or update) translations

Emulating cr0.wp
================

If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
works for the guest kernel, not guest userspace. When the guest
cr0.wp=1, this does not present a problem. However when the guest cr0.wp=0,
we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
semantics require allowing any guest kernel access plus user read access).

We handle this by mapping the permissions to two possible sptes, depending
on fault type:

- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
  disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
  write access)

(user write faults generate a #PF)

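The mapping above can be written out directly; the sketch below is
illustrative only and uses made-up types rather than KVM's spte encoding:

  #include <stdbool.h>

  struct spte_perms {
          bool user;          /* spte.u */
          bool writable;      /* spte.w */
  };

  /*
   * Guest runs with cr0.wp=0 and the gpte is user-readable but read-only
   * (gpte.u=1, gpte.w=0); pick spte permissions by fault type.
   */
  static struct spte_perms cr0_wp0_spte(bool kernel_write_fault)
  {
          if (kernel_write_fault)   /* kernel page: no user access */
                  return (struct spte_perms){ .user = false, .writable = true };
          /* read fault: full read access, no kernel write access */
          return (struct spte_perms){ .user = true, .writable = false };
  }
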
In the first case there are two additional complications:
- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
  the kernel may now execute it. We handle this by also setting spte.nx.
  If we get a user fetch or read fault, we'll change spte.u=1 and
  spte.nx=gpte.nx back.
- if CR4.SMAP is disabled: since the page has been changed into a kernel
  page, it cannot be reused when CR4.SMAP is enabled. We set
  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case. Note
  that we do not need to handle the case where CR4.SMAP is enabled, since
  KVM will directly inject a #PF into the guest when the permission check
  fails.

To prevent an spte that was converted into a kernel page with cr0.wp=0
from being written by the kernel after cr0.wp has changed to 1, we make
the value of cr0.wp part of the page role. This means that an spte created
with one value of cr0.wp cannot be used when cr0.wp has a different value -
it will simply be missed by the shadow page lookup code. A similar issue
exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
changing cr4.smep to 1. To avoid this, the value of !cr0.wp && cr4.smep
is also made a part of the page role.

Large pages
===========

The mmu supports all combinations of large and small guest and host pages.
Supported page sizes include 4k, 2M, 4M, and 1G. 4M pages are treated as
two separate 2M pages, on both guest and host, since the mmu always uses PAE
paging.

To instantiate a large spte, four constraints must be satisfied:

- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
  enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
  write-protected pages
- the guest page must be wholly contained by a single memory slot

To check the last two conditions, the mmu maintains a ->write_count set of
arrays for each memory slot and large page size. Every write protected page
causes its write_count to be incremented, thus preventing instantiation of
a large spte. The frames at the end of an unaligned memory slot have
artificially inflated ->write_counts so they can never be instantiated.

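A hedged sketch of the last two checks: every large frame covered by the
prospective spte must have a zero write_count in the slot's array for that
page size. The slot layout and indexing here are illustrative, not KVM's:

  #include <stdbool.h>
  #include <stdint.h>

  struct memslot {
          uint64_t base_gfn;
          uint64_t npages;
          int *write_count;   /* one counter per large frame in the slot */
  };

  /*
   * 'large_gfn' is the aligned base of the candidate large page and
   * 'frames' the number of 4k frames it covers (512 for 2M, etc.).
   */
  static bool large_spte_allowed(const struct memslot *slot,
                                 uint64_t large_gfn, uint64_t frames)
  {
          /* the guest page must be wholly contained by a single memory slot */
          if (large_gfn < slot->base_gfn ||
              large_gfn + frames > slot->base_gfn + slot->npages)
                  return false;

          /* no write-protected page may overlap the large frame */
          return slot->write_count[(large_gfn - slot->base_gfn) / frames] == 0;
  }
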
Zapping all pages (page generation count)
=========================================

For large memory guests, walking and zapping all pages is really slow
(because there are a lot of pages), and also blocks memory accesses of
all VCPUs because it needs to hold the MMU lock.

To make this more scalable, kvm maintains a global generation number
which is stored in kvm->arch.mmu_valid_gen. Every shadow page stores
the current global generation number into sp->mmu_valid_gen when it
is created. Pages with a mismatching generation number are "obsolete".

When KVM needs to zap all shadow pages' sptes, it simply increases the global
generation number and then reloads the root shadow pages on all vcpus. As the
VCPUs create new shadow page tables, the old pages are not used because of the
mismatching generation number.

KVM then walks through all pages and zaps obsolete pages. While the zap
operation needs to take the MMU lock, the lock can be released periodically
so that the VCPUs can make progress.

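A minimal sketch of the obsolescence test described above; the names mirror
the text and this is not the kernel code:

  struct kvm_arch_model    { unsigned long mmu_valid_gen; };
  struct shadow_page_model { unsigned long mmu_valid_gen; };

  /* bump the global generation: every existing shadow page becomes obsolete */
  static void invalidate_all_pages(struct kvm_arch_model *arch)
  {
          arch->mmu_valid_gen++;
  }

  /* obsolete pages are skipped on lookup and zapped lazily */
  static int is_obsolete(const struct kvm_arch_model *arch,
                         const struct shadow_page_model *sp)
  {
          return sp->mmu_valid_gen != arch->mmu_valid_gen;
  }
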
Fast invalidation of MMIO sptes
===============================

As mentioned in "Reaction to events" above, kvm will cache MMIO
information in leaf sptes. When a new memslot is added or an existing
memslot is changed, this information may become stale and needs to be
invalidated. This also needs to hold the MMU lock while walking all
shadow pages, and is made more scalable with a similar technique.

MMIO sptes have a few spare bits, which are used to store a
generation number. The global generation number is stored in
kvm_memslots(kvm)->generation, and increased whenever guest memory info
changes. This generation number is distinct from the one described in
the previous section.

When KVM finds an MMIO spte, it checks the generation number of the spte.
If the generation number of the spte does not equal the global generation
number, it will ignore the cached MMIO information and handle the page
fault through the slow path.

Since only 19 bits are used to store the generation number in an mmio spte,
all pages are zapped when there is an overflow.

Unfortunately, a single memory access might access kvm_memslots(kvm) multiple
times, the last one happening when the generation number is retrieved and
stored into the MMIO spte. Thus, the MMIO spte might be created based on
out-of-date information, but with an up-to-date generation number.

To avoid this, the generation number is incremented again after synchronize_srcu
returns; thus, the low bit of kvm_memslots(kvm)->generation is only 1 during a
memslot update, while some SRCU readers might be using the old copy. We do not
want to use an MMIO spte created with an odd generation number, and we can do
this without losing a bit in the MMIO spte. The low bit of the generation
is not stored in the MMIO spte, and is presumed zero when it is extracted out
of the spte. If KVM is unlucky and creates an MMIO spte while the low bit is 1,
the next access to the spte will always be a cache miss.

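Putting the last few paragraphs together, a simplified model of the
generation handling looks like the sketch below. The 19-bit width follows
the text; the actual bit positions inside the spte and the exact masking in
the kernel differ:

  #include <stdbool.h>
  #include <stdint.h>

  #define GEN_BITS 19
  #define GEN_MASK ((1u << GEN_BITS) - 1)

  /* the low bit of the memslots generation is never stored in the spte ... */
  static uint32_t mmio_spte_store_gen(uint32_t memslots_generation)
  {
          return memslots_generation & ~1u & GEN_MASK;
  }

  /*
   * ... so an spte created while the low bit was 1 (i.e. during a memslot
   * update) can never match a later generation and always misses, forcing
   * the slow path.
   */
  static bool mmio_spte_gen_valid(uint32_t stored_gen,
                                  uint32_t memslots_generation)
  {
          return stored_gen == (memslots_generation & GEN_MASK);
  }
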
Further reading
===============

- NPT presentation from KVM Forum 2008
  http://www.linux-kvm.org/wiki/images/c/c8/KvmForum2008%24kdf2008_21.pdf
