.. SPDX-License-Identifier: GPL-2.0-only

dm-vdo
======

The dm-vdo (virtual data optimizer) device mapper target provides
block-level deduplication, compression, and thin provisioning. As a device
mapper target, it can add these features to the storage stack, compatible
with any file system. The vdo target does not protect against data
corruption, relying instead on integrity protection of the storage below
it. It is strongly recommended that lvm be used to manage vdo volumes. See
lvmvdo(7).

Userspace component
===================

Formatting a vdo volume requires the use of the 'vdoformat' tool, available
at:

https://github.com/dm-vdo/vdo/
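
For illustration, formatting a backing device before the target is first
started might look like the following. This is a minimal sketch: the device
path is hypothetical and the available options should be confirmed with
'vdoformat --help'.

::

	vdoformat --logical-size=1G /dev/dm-1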

In most cases, a vdo target will recover from a crash automatically the
next time it is started. In cases where it encountered an unrecoverable
error (either during normal operation or crash recovery) the target will
enter or come up in read-only mode. Because read-only mode is indicative of
data loss, a positive action must be taken to bring vdo out of read-only
mode. The 'vdoforcerebuild' tool, available from the same repo, is used to
prepare a read-only vdo to exit read-only mode. After running this tool,
the vdo target will rebuild its metadata the next time it is
started. Although some data may be lost, the rebuilt vdo's metadata will be
internally consistent and the target will be writable again.
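
As a sketch of that procedure (the device and table values are those used in
the examples below; consult 'vdoforcerebuild --help' for the exact
invocation), a read-only volume would be stopped, marked for rebuild, and
then restarted:

::

	dmsetup remove vdo0
	vdoforcerebuild /dev/dm-1
	dmsetup create vdo0 --table \
	"0 2097152 vdo V4 /dev/dm-1 262144 4096 32768 16380"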

The repo also contains additional userspace tools which can be used to
inspect a vdo target's on-disk metadata. Fortunately, these tools are
rarely needed except by dm-vdo developers.

Metadata requirements
=====================

Each vdo volume reserves 3 GB of space for metadata, or more depending on
its configuration. It is helpful to check that the space saved by
deduplication and compression is not cancelled out by the metadata
requirements. An estimation of the space saved for a specific dataset can
be computed with the vdo estimator tool, which is available at:

https://github.com/dm-vdo/vdoestimator/

Target interface
================

Table line
----------

::

	<offset> <logical device size> vdo V4 <storage device>
	<storage device size> <minimum I/O size> <block map cache size>
	<block map era length> [optional arguments]


Required parameters:

	offset:
		The offset, in sectors, at which the vdo volume's logical
		space begins.

	logical device size:
		The size of the device which the vdo volume will service,
		in sectors. Must match the current logical size of the vdo
		volume.

	storage device:
		The device holding the vdo volume's data and metadata.

	storage device size:
		The size of the device holding the vdo volume, as a number
		of 4096-byte blocks. Must match the current size of the vdo
		volume.

	minimum I/O size:
		The minimum I/O size for this vdo volume to accept, in
		bytes. Valid values are 512 or 4096. The recommended value
		is 4096.

	block map cache size:
		The size of the block map cache, as a number of 4096-byte
		blocks. The minimum and recommended value is 32768 blocks.
		If the logical thread count is non-zero, the cache size
		must be at least 4096 blocks per logical thread.

	block map era length:
		The speed with which the block map cache writes out
		modified block map pages. A smaller era length is likely to
		reduce the amount of time spent rebuilding, at the cost of
		increased block map writes during normal operation. The
		maximum and recommended value is 16380; the minimum value
		is 1.
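
As a worked example of these units (matching the 1 GB volume used in the
Examples section below), the offset and logical device size are given in
512-byte sectors, while the storage device size is given in 4096-byte
blocks:

::

	echo $(( 1024 * 1024 * 1024 / 512 ))    # logical size: 2097152 sectors
	echo $(( 1024 * 1024 * 1024 / 4096 ))   # storage size: 262144 blocks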

Optional parameters:
--------------------
Some or all of these parameters may be specified as <key> <value> pairs.

Thread related parameters:

Different categories of work are assigned to separate thread groups, and
the number of threads in each group can be configured separately.

If <hash>, <logical>, and <physical> are all set to 0, the work handled by
all three thread types will be handled by a single thread. If any of these
values are non-zero, all of them must be non-zero.

	ack:
		The number of threads used to complete bios. Since
		completing a bio calls an arbitrary completion function
		outside the vdo volume, threads of this type allow the vdo
		volume to continue processing requests even when bio
		completion is slow. The default is 1.

	bio:
		The number of threads used to issue bios to the underlying
		storage. Threads of this type allow the vdo volume to
		continue processing requests even when bio submission is
		slow. The default is 4.

	bioRotationInterval:
		The number of bios to enqueue on each bio thread before
		switching to the next thread. The value must be greater
		than 0 and not more than 1024; the default is 64.

	cpu:
		The number of threads used to do CPU-intensive work, such
		as hashing and compression. The default is 1.

	hash:
		The number of threads used to manage data comparisons for
		deduplication based on the hash value of data blocks. The
		default is 0.

	logical:
		The number of threads used to manage caching and locking
		based on the logical address of incoming bios. The default
		is 0; the maximum is 60.

	physical:
		The number of threads used to manage administration of the
		underlying storage device. At format time, a slab size for
		the vdo is chosen; the vdo storage device must be large
		enough to have at least 1 slab per physical thread. The
		default is 0; the maximum is 16.

Miscellaneous parameters:

	maxDiscard:
		The maximum size of discard bio accepted, in 4096-byte
		blocks. I/O requests to a vdo volume are normally split
		into 4096-byte blocks, and processed up to 2048 at a time.
		However, discard requests to a vdo volume can be
		automatically split to a larger size, up to <maxDiscard>
		4096-byte blocks in a single bio, and are limited to 1500
		at a time. Increasing this value may provide better overall
		performance, at the cost of increased latency for the
		individual discard requests. The default and minimum is 1;
		the maximum is UINT_MAX / 4096.

	deduplication:
		Whether deduplication is enabled. The default is 'on'; the
		acceptable values are 'on' and 'off'.

	compression:
		Whether compression is enabled. The default is 'off'; the
		acceptable values are 'on' and 'off'.
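
For instance, a table line for the 1 GB volume from the examples below, with
compression enabled and deduplication disabled via such <key> <value> pairs,
might look like:

::

	dmsetup create vdo0 --table \
	"0 2097152 vdo V4 /dev/dm-1 262144 4096 32768 16380 compression on deduplication off"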

Device modification
-------------------

A modified table may be loaded into a running, non-suspended vdo volume.
The modifications will take effect when the device is next resumed. The
modifiable parameters are <logical device size>, <physical device size>,
<maxDiscard>, <compression>, and <deduplication>.

If the logical device size or physical device size are changed, upon
successful resume vdo will store the new values and require them on future
startups. These two parameters may not be decreased. The logical device
size may not exceed 4 PB. The physical device size must increase by at
least 32832 4096-byte blocks if at all, and must not exceed the size of the
underlying storage device. Additionally, when formatting the vdo device, a
slab size is chosen: the physical device size may never increase above the
size which provides 8192 slabs, and each increase must be large enough to
add at least one new slab.

Examples:

Start a previously-formatted vdo volume with 1 GB logical space and 1 GB
physical space, storing to /dev/dm-1 which has more than 1 GB of space.

::

	dmsetup create vdo0 --table \
	"0 2097152 vdo V4 /dev/dm-1 262144 4096 32768 16380"

Grow the logical size to 4 GB.

::

	dmsetup reload vdo0 --table \
	"0 8388608 vdo V4 /dev/dm-1 262144 4096 32768 16380"
	dmsetup resume vdo0

Grow the physical size to 2 GB.

::

	dmsetup reload vdo0 --table \
	"0 8388608 vdo V4 /dev/dm-1 524288 4096 32768 16380"
	dmsetup resume vdo0

Grow the physical size by 1 GB more and increase the maximum discard size.

::

	dmsetup reload vdo0 --table \
	"0 10485760 vdo V4 /dev/dm-1 786432 4096 32768 16380 maxDiscard 8"
	dmsetup resume vdo0

Stop the vdo volume.

::

	dmsetup remove vdo0

Start the vdo volume again. Note that the logical and physical device sizes
must still match, but other parameters can change.

::

	dmsetup create vdo1 --table \
	"0 10485760 vdo V4 /dev/dm-1 786432 512 65550 5000 hash 1 logical 3 physical 2"

Messages
--------
All vdo devices accept messages in the form:

::

        dmsetup message <target-name> 0 <message-name> <message-parameters>

The messages are:

        stats:
		Outputs the current view of the vdo statistics. Mostly used
		by the vdostats userspace program to interpret the output
		buffer.

	config:
		Outputs useful vdo configuration information. Mostly used
		by users who want to recreate a similar VDO volume and
		want to know the creation configuration used.

	dump:
		Dumps many internal structures to the system log. This is
		not always safe to run, so it should only be used to debug
		a hung vdo. Optional parameters to specify structures to
		dump are:

			viopool: The pool of I/O requests for incoming bios
			pools: A synonym of 'viopool'
			vdo: Most of the structures managing on-disk data
			queues: Basic information about each vdo thread
			threads: A synonym of 'queues'
			default: Equivalent to 'queues vdo'
			all: All of the above.

        dump-on-shutdown:
		Perform a default dump next time vdo shuts down.
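
For example, to print the configuration of the volume named 'vdo0' from the
examples below, or to dump only basic information about its threads to the
system log:

::

	dmsetup message vdo0 0 config
	dmsetup message vdo0 0 dump threads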


Status
------

::

    <device> <operating mode> <in recovery> <index state>
    <compression state> <physical blocks used> <total physical blocks>

	device:
		The name of the vdo volume.

	operating mode:
		The current operating mode of the vdo volume; values may be
		'normal', 'recovering' (the volume has detected an issue
		with its metadata and is attempting to repair itself), and
		'read-only' (an error has occurred that forces the vdo
		volume to only support read operations and not writes).

	in recovery:
		Whether the vdo volume is currently in recovery mode;
		values may be 'recovering' or '-' which indicates not
		recovering.

	index state:
		The current state of the deduplication index in the vdo
		volume; values may be 'closed', 'closing', 'error',
		'offline', 'online', 'opening', and 'unknown'.

	compression state:
		The current state of compression in the vdo volume; values
		may be 'offline' and 'online'.

	physical blocks used:
		The number of physical blocks in use by the vdo volume.

	total physical blocks:
		The total number of physical blocks the vdo volume may use;
		the difference between this value and
		<physical blocks used> is the number of blocks the vdo
		volume has left before being full.
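
The status line is reported through the standard device mapper status
mechanism; for example, for the volume named 'vdo0' used in the examples
above:

::

	dmsetup status vdo0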

Memory Requirements
===================

A vdo target requires a fixed 38 MB of RAM along with the following amounts
that scale with the target:

- 1.15 MB of RAM for each 1 MB of configured block map cache size. The
  block map cache requires a minimum of 150 MB of RAM.
- 1.6 MB of RAM for each 1 TB of logical space.
- 268 MB of RAM for each 1 TB of physical storage managed by the volume.

The deduplication index requires additional memory which scales with the
size of the deduplication window. For dense indexes, the index requires 1
GB of RAM per 1 TB of window. For sparse indexes, the index requires 1 GB
of RAM per 10 TB of window. The index configuration is set when the target
is formatted and may not be modified.
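
As a rough worked example using these figures (rounded slightly), a volume
with 1 TB of logical space, 1 TB of physical storage, and the default
128 MB block map cache needs approximately:

::

	   38 MB                      fixed
	+ 147 MB  (1.15 x 128 MB)     block map cache
	+   2 MB  (1.6 MB x 1)        1 TB of logical space
	+ 268 MB  (268 MB x 1)        1 TB of physical storage
	-------
	~ 455 MB, plus 1 GB for a dense index with a 1 TB deduplication window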

Module Parameters
=================

The vdo driver has a numeric parameter 'log_level' which controls the
verbosity of logging from the driver. The default setting is 6
(LOGLEVEL_INFO and more severe messages).
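
For example, assuming the driver is built as a module named dm_vdo (confirm
with lsmod), the level could be raised to 7 (debug) at load time; whether it
can also be changed at runtime through sysfs depends on how the parameter is
exposed on a given system:

::

	modprobe dm_vdo log_level=7
	# If the parameter is writable at runtime:
	echo 7 > /sys/module/dm_vdo/parameters/log_level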

Run-time Usage
==============

When using dm-vdo, it is important to be aware of the ways in which its
behavior differs from other storage targets.

- There is no guarantee that over-writes of existing blocks will succeed.
  Because the underlying storage may be multiply referenced, over-writing
  an existing block generally requires a vdo to have a free block
  available.

- When blocks are no longer in use, sending a discard request for those
  blocks lets the vdo release references for those blocks. If the vdo is
  thinly provisioned, discarding unused blocks is essential to prevent the
  target from running out of space. However, due to the sharing of
  duplicate blocks, no discard request for any given logical block is
  guaranteed to reclaim space.

- Assuming the underlying storage properly implements flush requests, vdo
  is resilient against crashes. However, unflushed writes may or may not
  persist after a crash.

- Each write to a vdo target entails a significant amount of processing.
  However, much of the work is parallelizable. Therefore, vdo targets
  achieve better throughput at higher I/O depths, and can support up to
  2048 requests in parallel.

Tuning
======

The vdo device has many options, and it can be difficult to make optimal
choices without perfect knowledge of the workload. Additionally, most
configuration options must be set when a vdo target is started and cannot
be changed without shutting it down completely. Ideally, tuning with
simulated workloads should be performed before deploying vdo in production
environments.

The most important value to adjust is the block map cache size. In order to
service a request for any logical address, a vdo must load the portion of
the block map which holds the relevant mapping. These mappings are cached.
Performance will suffer when the working set does not fit in the cache. By
default, a vdo allocates 128 MB of metadata cache in RAM to support
efficient access to 100 GB of logical space at a time. It should be scaled
up proportionally for larger working sets.
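
For example, scaling that default proportionally, a working set of roughly
1 TB of logical space calls for about ten times as much cache, i.e. 1280 MB,
which corresponds to a <block map cache size> table parameter of 327680
4096-byte blocks (at a RAM cost of roughly 1.15 MB per MB of cache, as
described under Memory Requirements):

::

	echo $(( 10 * 128 * 1024 * 1024 / 4096 ))   # 327680 blocks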

The logical and physical thread counts should also be adjusted. A logical
thread controls a disjoint section of the block map, so additional logical
threads increase parallelism and can increase throughput. Physical threads
control a disjoint section of the data blocks, so additional physical
threads can also increase throughput. However, excess threads can waste
resources and increase contention.

Bio submission threads control the parallelism involved in sending I/O to
the underlying storage; fewer threads mean there is more opportunity to
reorder I/O requests for performance benefit, but also that each I/O
request has to wait longer before being submitted.

Bio acknowledgment threads are used for finishing I/O requests. This is
done on dedicated threads since the amount of work required to execute a
bio's callback cannot be controlled by the vdo itself. Usually one thread
is sufficient, but additional threads may be beneficial, particularly when
bios have CPU-heavy callbacks.

CPU threads are used for hashing and for compression; in workloads with
compression enabled, more threads may result in higher throughput.

Hash threads are used to sort active requests by hash and determine whether
they should deduplicate; the most CPU-intensive action performed by these
threads is the comparison of 4096-byte data blocks. In most cases, a single
hash thread is sufficient.