			Dynamic DMA mapping Guide
			=========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>
This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses.  This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU.  This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space.  Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than
the bus-specific DMA API (e.g. pci_dma_*).

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver.  This file will obtain for you the definition of
the dma_addr_t type (which can hold any valid DMA address for the
platform); that type should be used everywhere you hold a DMA (bus)
address returned from the DMA mapping functions.
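
For instance, a driver would typically keep the DMA handle next to the
CPU pointer it belongs to.  A minimal sketch (the structure and field
names here are purely illustrative):

	struct my_buffer {
		void		*cpu_addr;	/* for CPU accesses */
		dma_addr_t	dma_addr;	/* handed to the device */
		size_t		len;
	};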

			What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
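
As a rough illustration of what that involves, a single page backing a
vmalloc() area can be resolved with vmalloc_to_page() and then mapped
with dma_map_page(), described later in this document.  This is only a
sketch; vmalloc_ptr, len, and dev are assumed, and error handling plus
the loop over all pages of the area are omitted:

	struct page *pg = vmalloc_to_page(vmalloc_ptr);
	dma_addr_t handle;

	/* Map the underlying page; the offset of vmalloc_ptr
	 * within that page is preserved.
	 */
	handle = dma_map_page(dev, pg, offset_in_page(vmalloc_ptr),
			      len, DMA_TO_DEVICE);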

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

			DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to dma_set_mask_and_coherent():

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

	The query for streaming mappings is performed via a call to
	dma_set_mask():

		int dma_set_mask(struct device *dev, u64 mask);

	The query for consistent allocations is performed via a call
	to dma_set_coherent_mask():

		int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of your PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

The coherent mask can always be set to a mask the same as, or smaller
than, the streaming mask.  However, for the rare case that a device
driver only uses consistent allocations, one would have to check the
return value from dma_set_coherent_mask().
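
For that consistent-only case, the check might look like this (a
sketch in the style of the examples above):

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
		printk(KERN_WARNING
		       "mydev: No suitable consistent DMA available.\n");
		goto ignore_this_device;
	}
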
Finally, if your device can only drive the low 24-bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		printk(KERN_WARNING
		       "mydev: 24-bit DMA addressing not available.\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
		       card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
		       card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

			Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may normal memory.  Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges (such as by reading a
             register's value after writing it; see the sketch after
             this list).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.

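On the write-buffer flushing mentioned above: the usual idiom is to
read back any register on the device after posting the writes.  A
sketch (the mmio pointer and register names here are hypothetical):

	/* Post the descriptor updates... */
	writel(DESC_KICK, mmio + MYDEV_KICK_REG);
	/* ...then force them out by reading back a device register. */
	readl(mmio + MYDEV_KICK_REG);
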

		 Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
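
For example, allocating a descriptor ring and checking for failure
might look like this (a sketch; RING_BYTES and the error label are
illustrative):

	void *ring_cpu;
	dma_addr_t ring_dma;

	ring_cpu = dma_alloc_coherent(dev, RING_BYTES, &ring_dma,
				      GFP_KERNEL);
	if (!ring_cpu)
		goto err_no_ring;	/* allocation failed */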

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for dma_alloc_coherent directly instead).

Allocate memory from a dma pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
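
Putting the pool calls together, a driver managing small per-command
blocks might do something like this (a sketch; the sizes, names, and
error label are illustrative):

	struct dma_pool *cmd_pool;
	void *cmd;
	dma_addr_t cmd_dma;

	/* 64-byte blocks, 16-byte aligned, no boundary restriction */
	cmd_pool = dma_pool_create("mydev_cmd", dev, 64, 16, 0);
	if (!cmd_pool)
		goto err_no_pool;

	cmd = dma_pool_alloc(cmd_pool, GFP_KERNEL, &cmd_dma);
	...
	dma_pool_free(cmd_pool, cmd, cmd_dma);
	dma_pool_destroy(cmd_pool);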

			DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE
	DMA_FROM_DEVICE
	DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
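
For instance, a network driver's transmit path would map the packet
payload with DMA_TO_DEVICE, using the dma_map_single() interface
described in the next section (a sketch; dev and skb are assumed):

	/* Data travels from memory to the device. */
	mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);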

		  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Not all DMA implementations support the
dma_mapping_error() interface, but it is good practice to call it
anyway: it invokes the generic mapping error check, so your code will
work correctly on all DMA implementations without depending on the
specifics of the underlying one.  Using the returned address without
checking for errors could result in failures ranging from panics to
silent data corruption.  A couple of examples of incorrect ways to
check for errors that make assumptions about the underlying DMA
implementation follow; these apply to dma_map_page() as well.

Incorrect example 1:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
		goto map_error;
	}

Incorrect example 2:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_handle == DMA_ERROR_CODE) {
		goto map_error;
	}

You should call dma_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

Using cpu pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single.  These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error as outlined under the dma_map_single() discussion.

You should call dma_unmap_page when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.
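
For completeness, the scatterlist itself is typically initialized with
sg_init_table() and sg_set_buf() before mapping.  A sketch (NENTS and
the buffer/length arrays are illustrative):

	struct scatterlist sglist[NENTS];

	sg_init_table(sglist, NENTS);
	for (i = 0; i < NENTS; i++)
		sg_set_buf(&sglist[i], buffers[i], lengths[i]);

	count = dma_map_sg(dev, sglist, NENTS, direction);
	if (count == 0)
		goto map_error_handling;	/* dma_map_sg failed */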

Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per each BUS so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}, and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}.  If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here.  It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt.  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
calls (dma_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

			Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0

- checking the returned dma_addr_t of dma_map_single and dma_map_page
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmap pages that are already mapped, when a mapping error occurs in
  the middle of a multiple page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1:
	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
	    mapping error is detected in the middle)

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.
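
A sketch of how that looks in a transmit hook (the private structure
and everything elided by "..." are hypothetical; dev_kfree_skb_any()
is used as the context-safe freeing variant):

	static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
					    struct net_device *ndev)
	{
		struct mydev_priv *mp = netdev_priv(ndev);
		dma_addr_t mapping;

		mapping = dma_map_single(mp->dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(mp->dev, mapping)) {
			/* Drop the packet but report success. */
			dev_kfree_skb_any(skb);
			return NETDEV_TX_OK;
		}

		...

		return NETDEV_TX_OK;
	}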

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.

		Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len} to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

			Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent an architecture-specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
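
   As an illustration of the idea (a sketch, not the literal file
   contents), such an architecture header might define:

	/* arch/<arch>/include/asm/cache.h */
	#define L1_CACHE_SHIFT		6	/* 64-byte cache lines */
	#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES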

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a
   library to support the DMA API with multiple types of IOMMUs.  Lots
   of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it.  Choose one to see how it can be used.  If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

			   Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>