Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device, and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.
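
As a rough sketch of that calculation in shell (the blockdev call and
the variable names are assumptions; both sizes are in 512-byte sectors,
so the units cancel and the result is in bytes):

    # Sketch only: estimate a metadata device size from the formula above.
    # Assumes $data_dev and $data_block_size (in 512-byte sectors) are set.
    data_dev_size=$(blockdev --getsz $data_dev)   # data device size in sectors
    meta_bytes=$(( 48 * data_dev_size / data_block_size ))
    min_bytes=$(( 2 * 1024 * 1024 ))              # round up to the 2MB floor
    [ $meta_bytes -lt $min_bytes ] && meta_bytes=$min_bytes
    echo $meta_bytes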

The largest metadata device size supported is 16GB: if the device is
larger, a warning will be issued and the excess space will not be used.

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)
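
For instance, a minimal sketch of growing the pool after extending the
data device (the 41943040-sector length is hypothetical; 'reload' is a
synonym for dmsetup's 'load' command):

    # Sketch: resize the pool by reloading its table with a larger length.
    dmsetup suspend pool
    dmsetup reload pool --table "0 41943040 thin-pool $metadata_dev \
        $data_dev $data_block_size $low_water_mark"
    dmsetup resume pool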

Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB). $data_block_size cannot be changed after the
thin-pool is created. People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB). People doing lots of
snapshotting may want a smaller value such as 128 (64KB). If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 sectors (128MB) is suggested.

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch allowing it to
extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.
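
A minimal sketch of such a daemon loop, assuming the pool device is
named 'pool' (a real daemon would track event numbers and extend the
pool when free space runs low; here we just print the status):

    # Sketch: block until the pool raises a dm event, then inspect it.
    while dmsetup wait pool; do
        dmsetup status pool
    done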

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.

Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache. If power is lost you may lose some recent
writes. The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see error_if_no_space). If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation. Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed. It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted. Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. a filesystem), it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.
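
As a sketch of that offline repair flow, assuming the thin_check and
thin_repair tools from the thin-provisioning-tools project mentioned
above are available, and that $spare_metadata_dev is a hypothetical
spare device to write the repaired metadata to:

    # Sketch: take the pool offline, then check and repair its metadata.
    dmsetup remove pool
    thin_check $metadata_dev                  # report any inconsistencies
    thin_repair -i $metadata_dev -o $spare_metadata_dev
    # Recreate the pool on top of the repaired metadata device.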

Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

To create a new thinly-provisioned volume you must send a message to an
active pool device, /dev/mapper/pool in this example.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

Here '0' is an identifier for the volume, a 24-bit number. It's up
to the caller to allocate and manage these identifiers. If the
identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

Thinly-provisioned volumes are activated using the 'thin' target:

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

The last parameter is the identifier for the thinp device.

Internal snapshots
------------------

i) Creating an internal snapshot.

Snapshots are created with another message to the pool.

N.B. If the origin device that you wish to snapshot is active, you
must suspend it before creating the snapshot to avoid corruption.
This is NOT enforced at the moment, so please be careful!

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

Here '1' is the identifier for the volume, a 24-bit number. '0' is the
identifier for the origin device.

ii) Using an internal snapshot.

Once created, the user doesn't have to worry about any connection
between the origin and the snapshot. Indeed the snapshot is no
different from any other thinly-provisioned device and can be
snapshotted itself via the same method. It's perfectly legal to
have only one of them active, and there's no ordering requirement on
activating or removing them both. (This differs from conventional
device-mapper snapshots.)

Activate it exactly the same way as any other thinly-provisioned volume:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume. Any read to an unprovisioned area of the
thin device will be passed through to the origin. Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device

This is the same as creating a thin device.
You don't mention the origin at this stage.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

Append an extra parameter to the thin target specifying the origin:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

N.B. All descendants (internal snapshots) of this snapshot require the
same extra origin parameter.

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

Optional feature arguments:

    skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

    ignore_discard: Disable discard support.

    no_discard_passdown: Don't pass discards down to the underlying
                         data device, but just remove the mapping.

    read_only: Don't allow any changes to be made to the pool
               metadata.

    error_if_no_space: Error IOs, instead of queueing, if no space.
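
For example, a table requesting a single feature argument (sizes as in
the cookbook above):

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark 1 skip_block_zeroing"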

Data block size must be between 64KB (128 sectors) and 1GB
(2097152 sectors) inclusive.

ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    [no_]discard_passdown ro|rw
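
As an illustration only (every number below is made up), these fields
appear after the start/length/target prefix that dmsetup prints:

    # Hypothetical output; the values are purely illustrative.
    $ dmsetup status pool
    0 20971520 thin-pool 1 24/2048 800/5120 - discard_passdown rw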

    transaction id:
        A 64-bit number used by userspace to help synchronise with metadata
        from volume managers.

    used data blocks / total data blocks:
        If the number of free blocks drops below the pool's low water
        mark, a dm event will be sent to userspace. This event is
        edge-triggered and it will occur only once after each resume, so
        volume manager writers should register for the event and then
        check the target's status.

    held metadata root:
        The location, in blocks, of the metadata root that has been
        'held' for userspace read access. '-' indicates there is no
        held root.

    discard_passdown|no_discard_passdown:
        Whether or not discards are actually being passed down to the
        underlying device. Even if passdown is enabled when the table
        is loaded, it can get disabled if the underlying device doesn't
        support it.

    ro|rw|out_of_data_space:
        If the pool encounters certain types of device failures it will
        drop into a read-only metadata mode in which no changes to
        the pool metadata (like allocating new blocks) are permitted.

        In serious cases where even a read-only mode is deemed unsafe
        no further I/O will be permitted and the status will just
        contain the string 'Fail'. The userspace recovery tools
        should then be used.

    error_if_no_space|queue_if_no_space:
        If the pool runs out of data or metadata space, the pool will
        either queue or error the IO destined to the data device. The
        default is to queue the IO until more space is added or the
        'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool
        module parameter can be used to change this timeout; it
        defaults to 60 seconds but may be disabled using a value of 0.

    needs_check:
        A metadata operation has failed, resulting in the needs_check
        flag being set in the metadata's superblock. The metadata
        device must be deactivated and checked/repaired before the
        thin-pool can be made fully operational again. '-' indicates
        needs_check is not set.

iii) Messages

    create_thin <dev id>

        Create a new thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.

    create_snap <dev id> <origin id>

        Create a new snapshot of another thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.
        <origin id> is the identifier of the thinly-provisioned device
        of which the new device will be a snapshot.

    delete <dev id>

        Delete a thin device. Irreversible.
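
For example, to delete the cookbook's snapshot device (id 1) once it
has been deactivated:

    dmsetup remove snap
    dmsetup message /dev/mapper/pool 0 "delete 1"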

    set_transaction_id <current id> <new id>

        Userland volume managers, such as LVM, need a way to
        synchronise their external metadata with the internal metadata
        of the pool target. The thin-pool target offers to store an
        arbitrary 64-bit transaction id and return it on the target's
        status line. To avoid races you must provide what you think
        the current transaction id is when you change it with this
        compare-and-swap message.
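
For example, if the status line currently reports transaction id 0,
you could move to id 1 with:

    dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"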

    reserve_metadata_snap

        Reserve a copy of the data mapping btree for use by userland.
        This allows userland to inspect the mappings as they were when
        this message was executed. Use the pool's status command to
        get the root block associated with the metadata snapshot.

    release_metadata_snap

        Release a previously reserved copy of the data mapping btree.
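
A typical round trip, as described above (the reserved root appears in
the 'held metadata root' field of the pool's status line):

    dmsetup message /dev/mapper/pool 0 "reserve_metadata_snap"
    dmsetup status pool    # note the held metadata root field
    dmsetup message /dev/mapper/pool 0 "release_metadata_snap"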

'thin' target
-------------

i) Constructor

    thin <pool dev> <dev id> [<external origin dev>]

    pool dev:
        the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

    dev id:
        the internal device identifier of the device to be
        activated.

    external origin dev:
        an optional block device outside the pool to be treated as a
        read-only snapshot origin: reads to unprovisioned areas of the
        thin target will be mapped to this device.

The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.
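
For instance, a sketch of growing the cookbook's 'thin' device in place
(the 4194304-sector length is hypothetical):

    # Sketch: reload the thin target with a larger length; the extra
    # blocks are provisioned on demand as they are written.
    dmsetup suspend thin
    dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
    dmsetup resume thin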

ii) Status

    <nr mapped sectors> <highest mapped sector>
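
As an illustration only (the numbers are made up), after the
start/length/target prefix that dmsetup prints the two fields look like:

    # Hypothetical output; the values are purely illustrative.
    $ dmsetup status thin
    0 2097152 thin 2048 2047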

If the pool has encountered device errors and failed, the status
will just contain the string 'Fail'. The userspace recovery
tools should then be used.