drivers/xen/Kconfig
menu "Xen driver support"
        depends on XEN

config XEN_BALLOON
        bool "Xen memory balloon driver"
        default y
        help
          The balloon driver allows the Xen domain to request more memory from
          the system to expand the domain's memory allocation, or alternatively
          return unneeded memory to the system.

config XEN_SELFBALLOONING
        bool "Dynamically self-balloon kernel memory to target"
        depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
        default n
        help
          Self-ballooning dynamically balloons available kernel memory driven
          by the current usage of anonymous memory ("committed AS") and
          controlled by various sysfs-settable parameters. Configuring
          FRONTSWAP is highly recommended; if it is not configured, self-
          ballooning is disabled by default but can be enabled with the
          'selfballooning' kernel boot parameter. If FRONTSWAP is configured,
          frontswap-selfshrinking is enabled by default but can be disabled
          with the 'noselfshrink' kernel boot parameter; and self-ballooning
          is enabled by default but can be disabled with the 'noselfballooning'
          kernel boot parameter. Note that systems without a sufficiently
          large swap device should not enable self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
        bool "Memory hotplug support for Xen balloon driver"
        default n
        depends on XEN_BALLOON && MEMORY_HOTPLUG
        help
          Memory hotplug support for the Xen balloon driver allows expanding
          the memory available to the system above the limit declared at
          system startup. It is very useful on critical systems which must
          run for long periods without rebooting.

          Memory can be hotplugged in the following steps:

          1) dom0: xl mem-max <domU> <maxmem>
             where <maxmem> is >= requested memory size,

          2) dom0: xl mem-set <domU> <memory>
             where <memory> is the requested memory size; alternatively,
             memory can be added by writing a proper value to
             /sys/devices/system/xen_memory/xen_memory0/target or
             /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

          3) domU: for i in /sys/devices/system/memory/memory*/state; do \
               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

          Memory can be onlined automatically on domU by adding the following
          line to the udev rules:

          SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

          In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
        bool "Scrub pages before returning them to system"
        depends on XEN_BALLOON
        default y
        help
          Scrub pages before returning them to the system for reuse by
          other domains. This makes sure that any confidential data
          is not accidentally visible to other domains. It is more
          secure, but slightly less efficient.
          If in doubt, say yes.

config XEN_DEV_EVTCHN
        tristate "Xen /dev/xen/evtchn device"
        default y
        help
          The evtchn driver allows a userspace process to trigger event
          channels and to receive notification of an event channel
          firing.
          If in doubt, say yes.

config XEN_BACKEND
        bool "Backend driver support"
        depends on XEN_DOM0
        default y
        help
          Support for backend device drivers that provide I/O services
          to other virtual machines.

config XENFS
        tristate "Xen filesystem"
        default y
        help
          The xen filesystem provides a way for domains to share
          information with each other and with the hypervisor.
          For example, by reading and writing the "xenbus" file, guests
          may pass arbitrary information to the initial domain.
          If in doubt, say yes.

config XEN_COMPAT_XENFS
        bool "Create compatibility mount point /proc/xen"
        depends on XENFS
        default y
        help
          The old xenstore userspace tools expect to find "xenbus"
          under /proc/xen, but "xenbus" is now found at the root of the
          xenfs filesystem. Selecting this causes the kernel to create
          the compatibility mount point /proc/xen if it is running on
          a xen platform.
          If in doubt, say yes.

config XEN_SYS_HYPERVISOR
        bool "Create xen entries under /sys/hypervisor"
        depends on SYSFS
        select SYS_HYPERVISOR
        default y
        help
          Create entries under /sys/hypervisor describing the Xen
          hypervisor environment. When running native or in another
          virtual environment, /sys/hypervisor will still be present,
          but will have no xen contents.

config XEN_XENBUS_FRONTEND
        tristate

config XEN_GNTDEV
        tristate "userspace grant access device driver"
        depends on XEN
        default m
        select MMU_NOTIFIER
        help
          Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
        tristate "User-space grant reference allocator driver"
        depends on XEN
        default m
        help
          Allows userspace processes to create pages with access granted
          to other domains. This can be used to implement frontend drivers
          or as part of an inter-domain shared memory channel.

config XEN_PLATFORM_PCI
        tristate "xen platform pci device driver"
        depends on XEN_PVHVM && PCI
        default m
        help
          Driver for the Xen PCI Platform device: it is responsible for
          initializing xenbus and grant_table when running in a Xen HVM
          domain. As a consequence this driver is required to run any Xen PV
          frontend on Xen HVM.

config SWIOTLB_XEN
        def_bool y
        depends on PCI
        select SWIOTLB

config XEN_TMEM
        bool
        default y if (CLEANCACHE || FRONTSWAP)
        help
          Shim to interface in-kernel Transcendent Memory hooks
          (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
        tristate "Xen PCI-device backend driver"
        depends on PCI && X86 && XEN
        depends on XEN_BACKEND
        default m
        help
          The PCI device backend driver allows the kernel to export arbitrary
          PCI devices to other guests. If you select this to be a module, you
          will need to make sure no other driver has bound to the device(s)
          you want to make visible to other guests.

          The parameter "passthrough" allows you to specify how you want the
          PCI devices to appear in the guest. You can choose the default (0),
          where the PCI topology starts at 00.00.0, or (1) for passthrough,
          if you want the PCI device topology to appear the same as in the
          host.

          The "hide" parameter (only applicable if the backend driver is
          compiled into the kernel) allows you to bind the PCI devices to
          this module instead of the default device drivers. The argument is
          the list of PCI BDFs:
          xen-pciback.hide=(03:00.0)(04:00.0)

          If in doubt, say m.

endmenu
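
The memory-hotplug steps listed in the XEN_BALLOON_MEMORY_HOTPLUG help above can be sketched as one script. This is only an illustrative sketch: DOMU and MAXMEM are placeholder values, and DRY_RUN (on by default here) prints the privileged dom0/domU commands instead of executing them.

```shell
#!/bin/sh
# Sketch of the memory-hotplug flow from the XEN_BALLOON_MEMORY_HOTPLUG
# help text. DOMU and MAXMEM are hypothetical placeholders; with DRY_RUN=1
# (the default here) the privileged commands are printed, not executed.
: "${DOMU:=guest}"      # placeholder domain name
: "${MAXMEM:=4096}"     # new maximum, in MiB
: "${DRY_RUN:=1}"
TARGET_KB=$((MAXMEM * 1024))

run() {
        if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# dom0: raise the memory cap, then request the new size (steps 1 and 2)
run xl mem-max "$DOMU" "$MAXMEM"
run xl mem-set "$DOMU" "$MAXMEM"

# domU alternative to step 2: write the target directly via sysfs
run sh -c "echo $TARGET_KB > /sys/devices/system/xen_memory/xen_memory0/target_kb"

# domU: online any memory sections the balloon driver added (step 3);
# the udev rule in the help text automates exactly this loop
for s in /sys/devices/system/memory/memory*/state; do
        [ -e "$s" ] || continue
        if [ "$(cat "$s")" = offline ]; then
                run sh -c "echo online > $s"
        fi
done
```

Running it with DRY_RUN unset would require root in dom0 (for the `xl` calls) or in the guest (for the sysfs writes).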