DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

NOTE: For DMA Engine usage in async_tx please see:
Documentation/crypto/async-tx-api.txt


Below is a guide for device driver writers on how to use the slave-DMA
API of the DMA Engine.  This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context:
client drivers typically need a channel from a particular DMA
controller only, and in some cases even a specific channel is desired.
To request a channel, the dma_request_channel() API is used.

Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.

When the optional 'filter_fn' parameter is NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask.

Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'.  'filter_fn' is expected to
return 'true' when the desired DMA channel is found.

A channel allocated via this interface is exclusive to the caller,
until dma_release_channel() is called.

2. Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA
driver.  Most of the generic information which a slave DMA can use
is in struct dma_slave_config.  This allows the clients to specify
DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
for the peripheral.

If some DMA controllers have more parameters to be sent, they
should embed struct dma_slave_config in their controller-specific
structure.  That gives the client the flexibility to pass more
parameters, if required.

Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
			struct dma_slave_config *config)

Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members.  Please note
that the 'direction' member will be going away as it duplicates the
direction given in the prepare call.
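
As a rough sketch, a client configuring a channel for memory-to-
peripheral transfers might fill the structure like this.  The FIFO
address, the burst length and 'my_dev' are made-up placeholders for
some hypothetical peripheral, not values from any real driver:

```c
	struct dma_slave_config cfg;
	int ret;

	memset(&cfg, 0, sizeof(cfg));
	cfg.direction = DMA_TO_DEVICE;
	cfg.dst_addr = my_dev->fifo_phys;	/* hypothetical FIFO address */
	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	cfg.dst_maxburst = 8;			/* placeholder burst length */

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		/* the channel could not accept this configuration */
```

For a peripheral-to-memory channel the src_* members would be filled
instead, with direction set to DMA_FROM_DEVICE.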

3. Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the
DMA engine are:

slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
		  operation is explicitly stopped.

A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.

Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device.  So,
normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted.  Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission, so it is important that these two operations are closely
paired.

Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function, as holding them
	across the callback may cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

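Putting the preparation, callback setup and submission together, a
peripheral driver typically contains a sequence along these lines.
This is a sketch: 'my_callback' and 'my_data' are hypothetical client
names, and error handling is abbreviated:

```c
	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);
	if (!desc)
		/* unmap the scatterlist and bail out */

	/* void my_callback(void *param) — the client's completion hook */
	desc->callback = my_callback;
	desc->callback_param = my_data;

	/* keep this close to the prep call; see the spinlock note above */
	cookie = dmaengine_submit(desc);
```
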
4. Submit the transaction

Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.

Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this
document.

dmaengine_submit() will not start the DMA operation, it merely adds
it to the pending queue.  For this, see step 5, dma_async_issue_pending.

5. Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the
issue_pending API.  If the channel is idle, the first transaction in
the queue is started and subsequent ones are queued up.

On completion of each DMA operation, the next in queue is started and
a tasklet is triggered.  The tasklet will then call the client driver's
completion callback routine for notification, if set.

Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

Resume a previously paused DMA channel.  It is invalid to resume a
channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
	dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

This can be used to check the status of the channel.  Please see
the documentation in include/linux/dmaengine.h for a more complete
description of this API.

This can be used in conjunction with dma_async_is_complete() and
the cookie returned from 'descriptor->submit()' to check for
completion of a specific DMA transaction.

Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel.  It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all) the channel before
	using this API.
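
Under the caveat in the note above, a status check for one specific
transaction might be sketched as follows.  This assumes the channel
supports pause/resume, and 'cookie' is the value dmaengine_submit()
returned for the transaction of interest:

```c
	enum dma_status status;
	dma_cookie_t last, used;

	dmaengine_pause(chan);	/* quiesce first, per the note above */
	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (status == DMA_SUCCESS)
		/* the transaction identified by 'cookie' has completed */
	dmaengine_resume(chan);
```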