Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device, and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.
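
For illustration, the shell fragment below applies this rule of thumb
(a minimal sketch: the use of blockdev(8) and the 128KB block size are
assumptions for the example, not part of the interface):

    # Size the metadata device: ~48 bytes per data block, 2MB minimum.
    data_dev_size=$(blockdev --getsize64 $data_dev)  # data device size, bytes
    data_block_size=131072                           # chosen block size, bytes (128KB)
    meta_size=$(( 48 * data_dev_size / data_block_size ))
    min_size=$(( 2 * 1024 * 1024 ))                  # round up to the 2MB floor
    [ "$meta_size" -lt "$min_size" ] && meta_size=$min_size
    echo "metadata device needs at least $meta_size bytes"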

The largest size supported is 16GB; if the device is larger,
a warning will be issued and the excess space will not be used.

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)
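
For example, after growing the underlying data device, a resize might
look like the following sketch (the new length of 41943040 sectors is
hypothetical; all other parameters must match the previous table):

    dmsetup suspend pool
    dmsetup reload pool --table "0 41943040 thin-pool $metadata_dev \
        $data_dev $data_block_size $low_water_mark"
    dmsetup resume pool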

Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB). $data_block_size cannot be changed after the
thin-pool is created. People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB). People doing lots of
snapshotting may want a smaller value such as 128 (64KB). If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 (~128MB) is suggested.
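
As a concrete sketch (all numbers hypothetical): a 10GB data device is
20971520 sectors; with the 512KB (1024-sector) block size suggested
for thin provisioning and a low water mark of 256 blocks (128MB), the
pool would be created as:

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev 1024 256"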

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered, which a userspace daemon should catch, allowing it
to extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event, so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.
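
As a sketch of such a daemon, the loop below uses 'dmsetup wait' to
block until the pool's event counter moves (the event-number handling
is simplified; a real daemon would initialise the counter from
'dmsetup info' to avoid missing or re-processing events):

    event=0
    while true; do
        dmsetup wait pool $event            # sleep until the next dm event
        event=$(( event + 1 ))
        echo "pool event: $(dmsetup status pool)"
        # ...extend the data device and reload the pool table here...
    done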

Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

  To create a new thinly-provisioned volume you must send a message to an
  active pool device, /dev/mapper/pool in this example.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

  Here '0' is an identifier for the volume, a 24-bit number. It's up
  to the caller to allocate and manage these identifiers. If the
  identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

  Thinly-provisioned volumes are activated using the 'thin' target:

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

  The last parameter is the identifier for the thinp device.
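
  From here the thin device can be used like any other block device,
  for example (filesystem choice hypothetical):

    mkfs.ext4 /dev/mapper/thin
    mount /dev/mapper/thin /mnt

  Pool blocks are only allocated as data is actually written to the
  device.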

Internal snapshots
------------------

i) Creating an internal snapshot.

  Snapshots are created with another message to the pool.

  N.B. If the origin device that you wish to snapshot is active, you
  must suspend it before creating the snapshot to avoid corruption.
  This is NOT enforced at the moment, so please be careful!

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

  Here '1' is the identifier for the volume, a 24-bit number. '0' is the
  identifier for the origin device.

ii) Using an internal snapshot.

  Once created, the user doesn't have to worry about any connection
  between the origin and the snapshot. Indeed the snapshot is no
  different from any other thinly-provisioned device and can be
  snapshotted itself via the same method. It's perfectly legal to
  have only one of them active, and there's no ordering requirement on
  activating or removing them both. (This differs from conventional
  device-mapper snapshots.)

  Activate it exactly the same way as any other thinly-provisioned volume:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume. Any read to an unprovisioned area of the
thin device will be passed through to the origin. Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device

  This is the same as creating a thin device.
  You don't mention the origin at this stage.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

  Append an extra parameter to the thin target specifying the origin:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

  N.B. All descendants (internal snapshots) of this snapshot require the
  same extra origin parameter.
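
  For example, to take an internal snapshot of the snap device above
  and activate it (the identifier '1' and the name 'snap2' are
  hypothetical), note that the origin parameter is still required:

    dmsetup suspend /dev/mapper/snap
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/snap
    dmsetup create snap2 --table "0 2097152 thin /dev/mapper/pool 1 /dev/image"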

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

  Optional feature arguments:

    skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

    ignore_discard: Disable discard support.

    no_discard_passdown: Don't pass discards down to the underlying
                         data device, but just remove the mapping.

    read_only: Don't allow any changes to be made to the pool
               metadata.

    error_if_no_space: Error IOs, instead of queueing, if no space.

  Data block size must be between 64KB (128 sectors) and 1GB
  (2097152 sectors) inclusive.
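
  For example, a pool that skips block zeroing and keeps discards
  internal to the pool might (hypothetically) be created with two
  feature arguments:

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark 2 skip_block_zeroing \
                 no_discard_passdown"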

ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    [no_]discard_passdown ro|rw

  transaction id:
    A 64-bit number used by userspace to help synchronise with metadata
    from volume managers.

  used data blocks / total data blocks:
    If the number of free blocks drops below the pool's low water mark a
    dm event will be sent to userspace. This event is edge-triggered and
    it will occur only once after each resume so volume manager writers
    should register for the event and then check the target's status.

  held metadata root:
    The location, in sectors, of the metadata root that has been
    'held' for userspace read access. '-' indicates there is no
    held root. This feature is not yet implemented so '-' is
    always returned.

  discard_passdown|no_discard_passdown:
    Whether or not discards are actually being passed down to the
    underlying device. Even if discard passdown is enabled when the
    table is loaded, it will be disabled if the underlying device
    doesn't support it.

  ro|rw:
    If the pool encounters certain types of device failures it will
    drop into a read-only metadata mode in which no changes to
    the pool metadata (like allocating new blocks) are permitted.

    In serious cases where even a read-only mode is deemed unsafe
    no further I/O will be permitted and the status will just
    contain the string 'Fail'. The userspace recovery tools
    should then be used.

  error_if_no_space|queue_if_no_space:
    If the pool runs out of data or metadata space, the pool will
    either queue or error the IO destined to the data device. The
    default is to queue the IO until more space is added.
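
  Putting the fields together, 'dmsetup status pool' on a healthy pool
  might print something like the line below (all values are purely
  illustrative; the first three fields are the start sector, length
  and target name from the table line):

    0 20971520 thin-pool 1 100/4096 950/20480 - discard_passdown rw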

iii) Messages

  create_thin <dev id>

    Create a new thinly-provisioned device.
    <dev id> is an arbitrary unique 24-bit identifier chosen by
    the caller.

  create_snap <dev id> <origin id>

    Create a new snapshot of another thinly-provisioned device.
    <dev id> is an arbitrary unique 24-bit identifier chosen by
    the caller.
    <origin id> is the identifier of the thinly-provisioned device
    of which the new device will be a snapshot.

  delete <dev id>

    Deletes a thin device. Irreversible.
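
    For example (identifier hypothetical):

      dmsetup message /dev/mapper/pool 0 "delete 1"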

  set_transaction_id <current id> <new id>

    Userland volume managers, such as LVM, need a way to
    synchronise their external metadata with the internal metadata of the
    pool target. The thin-pool target offers to store an
    arbitrary 64-bit transaction id and return it on the target's
    status line. To avoid races you must provide what you think
    the current transaction id is when you change it with this
    compare-and-swap message.
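
    For example, to move the transaction id from 0 to 1 (values
    hypothetical); the message fails if the current id is not 0:

      dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"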

  reserve_metadata_snap

    Reserve a copy of the data mapping btree for use by userland.
    This allows userland to inspect the mappings as they were when
    this message was executed. Use the pool's status command to
    get the root block associated with the metadata snapshot.

  release_metadata_snap

    Release a previously reserved copy of the data mapping btree.
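
    A typical inspection sequence is sketched below; the reserved root
    is reported in the <held metadata root> field of the pool's status:

      dmsetup message /dev/mapper/pool 0 "reserve_metadata_snap"
      dmsetup status pool        # note the held metadata root field
      dmsetup message /dev/mapper/pool 0 "release_metadata_snap"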

'thin' target
-------------

i) Constructor

    thin <pool dev> <dev id> [<external origin dev>]

  pool dev:
    the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

  dev id:
    the internal device identifier of the device to be
    activated.

  external origin dev:
    an optional block device outside the pool to be treated as a
    read-only snapshot origin: reads to unprovisioned areas of the
    thin target will be mapped to this device.

The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.
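
For example, growing a thin device is just a table reload with a
larger length (the new size of 4194304 sectors, i.e. 2GB, is
hypothetical):

    dmsetup suspend thin
    dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
    dmsetup resume thin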

If you wish to reduce the size of your thin device and potentially
regain some space, then send the 'trim' message to the pool.

ii) Status

    <nr mapped sectors> <highest mapped sector>

  If the pool has encountered device errors and failed, the status
  will just contain the string 'Fail'. The userspace recovery
  tools should then be used.