# drivers/md/Kconfig (deliverable/linux.git, at commit "md/raid5: activate raid6 rmw feature")
#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can add a
	  delay of several seconds to the boot time due to the
	  synchronisation steps it performs.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (otherwise only as much space as is available
	  on the smallest device will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

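As a rough illustration of what a "layout" means here, the default "near" layout places the n copies of each chunk on adjacent devices. The following is a simplified Python model (the function name and the simplifications are my own; md's real geometry handling covers more cases):

```python
def raid10_near(chunk, copies=2, disks=4):
    """Return the (row, device) locations of each copy of a logical
    chunk under a simplified RAID-10 'near' layout: copies sit on
    adjacent devices, wrapping to the next row when needed."""
    base = chunk * copies
    return [((base + i) // disks, (base + i) % disks) for i in range(copies)]

# With 4 disks and 2 copies the first row holds A A B B:
assert raid10_near(0) == [(0, 0), (0, 1)]   # chunk A, mirrored
assert raid10_near(1) == [(0, 2), (0, 3)]   # chunk B, mirrored
assert raid10_near(2) == [(1, 0), (1, 1)]   # chunk C starts row 1

# With an odd disk count the copies straddle rows, which is one
# reason RAID-10 can use disk counts plain RAID-1+0 nesting cannot.
assert raid10_near(2, disks=5) == [(0, 4), (1, 0)]
```

The "far" and "offset" layouts trade sequential-read speed against rebuild cost in different ways; this sketch only covers the "near" placement rule.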
config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

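The RAID-5 arithmetic in this help text can be sketched in a few lines of Python (an illustrative model only, not the kernel's implementation): the parity block is the byte-wise XOR of the data blocks, so any one missing block is recoverable as the XOR of the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks (the RAID-5 parity rule)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# N = 4 drives: three data blocks plus one parity block per stripe.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]
parity = xor_blocks(data)

# Lose any single drive; rebuild its block by XOR-ing the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]

# Usable capacity: C * (N - 1) for RAID-5, versus C * N raw.
C_mb, N = 1000, 4
print(C_mb * (N - 1))  # → 3000
```

Since parity is a pure XOR, a small write can be done read-modify-write style: XOR the old data and old parity with the new data to get the new parity, without touching the other drives.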
config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  with the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD (EXPERIMENTAL)"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster.

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	---help---
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	---help---
	  Some bio locking schemes used by other device-mapper targets,
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <http://code.google.com/p/cryptsetup/wiki/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned, etc. It supports writeback and writethrough modes.

config DM_CACHE_MQ
	tristate "MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hit
	  count to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes.

config DM_CACHE_CLEANER
	tristate "Cleaner Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A simple cache policy that writes back all data to the
	  origin. Used when decommissioning a dm-cache.

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes; also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

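The two "independent redundancy syndromes" above are conventionally called P and Q. The following Python sketch assumes the GF(2^8) field polynomial 0x11d and generator g = 2 used by Linux's raid6 library code; it is a toy model of the math, not the kernel implementation. It shows why Q covers a second failure: each data byte is weighted by a distinct power of g, so a lost byte can be divided back out even when the plain XOR parity P is also gone.

```python
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8), reduced by x^8+x^4+x^3+x^2+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, n):
    """Repeated gf_mul; a**0 == 1."""
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def pq(data):
    """P is the plain XOR parity; Q weights drive i's byte by g**i."""
    P = Q = 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, i), d)
    return P, Q

# One byte from each of four data drives.
data = [0x01, 0x10, 0x0F, 0xC3]
P, Q = pq(data)

# Suppose drive j and the P drive both fail: P alone could not help,
# but Q still determines the lost byte uniquely.
j = 2
q_rest = 0
for i, d in enumerate(data):
    if i != j:
        q_rest ^= gf_mul(gf_pow(2, i), d)
# Q ^ q_rest == g**j * data[j]; multiply by the inverse of g**j
# (a**254 == a**-1, since the multiplicative group has order 255).
recovered = gf_mul(Q ^ q_rest, gf_pow(gf_pow(2, j), 254))
assert recovered == data[j]
```

Recovering two lost *data* drives solves a 2x2 system over the same field with P and Q together; the one-unknown case above is the core step.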
config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on SCSI_DH || !SCSI_DH
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	---help---
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

endif # MD