USART Driver Library - Data Transfer Models

The Universal Synchronous/Asynchronous Receiver/Transmitter (USART) driver provides three data transfer models:

Byte-by-Byte Model
File I/O Type Read/Write Transfer Model
Buffer Queue Transfer Model

Byte-by-Byte Model

The Byte-by-Byte Model allows the application to transfer data through the USART driver one byte at a time. With this model, the driver reads one byte from the receive first in, first out (FIFO) buffer or writes one byte to the transmit FIFO. The application must check that data has been received before reading it. Similarly, it must check that the transmit FIFO is not full before writing to it. The Byte-by-Byte data transfer model places the responsibility of maintaining the USART peripheral on the application. The driver cannot support the other data transfer models if this data transfer model is enabled. The Byte-by-Byte data transfer model is suitable for simple data transfer applications.

The DRV_USART_ReadByte and DRV_USART_WriteByte functions represent the Byte-by-Byte Data Transfer Model.
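The FIFO checks described above might look as follows. This is a minimal sketch, assuming a Harmony project in which the driver has already been initialized and opened; the status-query functions DRV_USART_ReceiverBufferIsEmpty and DRV_USART_TransmitBufferIsFull are the usual Harmony names, but verify them against your driver version.

```c
/* Sketch only: assumes the Harmony USART driver is initialized and
 * usartHandle is a valid handle returned by DRV_USART_Open. */
#include "driver/usart/drv_usart.h"

void App_EchoByte(DRV_HANDLE usartHandle)
{
    /* Read only if the receive FIFO has data. */
    if (!DRV_USART_ReceiverBufferIsEmpty(usartHandle))
    {
        uint8_t byte = DRV_USART_ReadByte(usartHandle);

        /* Write only if the transmit FIFO has room. */
        if (!DRV_USART_TransmitBufferIsFull(usartHandle))
        {
            DRV_USART_WriteByte(usartHandle, byte);
        }
    }
}
```

Note that both checks are the application's responsibility in this model; the driver performs no buffering of its own.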

File I/O Type Read/Write Transfer Model

This data transfer model is similar to the file read/write model in a UNIX operating system. The application calls the USART driver read/write routines to transfer data through the USART. Unlike the Byte-by-Byte data transfer model, the read/write model can process a block of data. Depending on the mode (blocking or non-blocking) in which the client opened the driver, the driver will either block until all of the data is transferred (used only with an RTOS) or return immediately with the number of bytes transferred. The application does not have to check the FIFO status while using this model. It can instead use the return status (number of bytes transferred) to maintain its logic and throttle the data transfer to the USART driver. The read/write model can be used with the non-DMA Buffer Queue model. It cannot be used with the Byte-by-Byte Model or the DMA-enabled Buffer Queue Model in the same application.

The driver can support the non-DMA Buffer Queue Data Transfer Model along with the File I/O Type Read/Write Data Transfer Model. The Byte-by-Byte Model and the DMA Buffer Queue Model cannot be enabled if the File I/O Type Read/Write Data Transfer Model is enabled.

The DRV_USART_Read and DRV_USART_Write functions represent the File I/O Type read/write data transfer model. The functional behavior of these APIs depends on the mode in which the client opened the driver. If the client opened the driver in blocking mode (used only with an RTOS), the DRV_USART_Read and DRV_USART_Write functions will not return until the requested number of bytes have been read or written. If the client opened the driver in non-blocking mode, these functions will return immediately with the amount of data that could be read or written.
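The throttling pattern described above, using the return count in non-blocking mode, might look like this. This is a hedged sketch, assuming a Harmony project where the client opened the driver in non-blocking mode; App_SendMessage is a hypothetical application function.

```c
/* Sketch only: assumes usartHandle was opened in non-blocking mode. */
#include "driver/usart/drv_usart.h"

void App_SendMessage(DRV_HANDLE usartHandle)
{
    static const uint8_t message[] = "hello";
    static size_t sent = 0;

    /* In non-blocking mode the call returns immediately with the number
     * of bytes actually accepted; use it to throttle the transfer. */
    size_t count = DRV_USART_Write(usartHandle,
                                   (void *)&message[sent],
                                   sizeof(message) - 1 - sent);
    sent += count;

    if (sent == sizeof(message) - 1)
    {
        sent = 0; /* Message complete; ready for the next one. */
    }
}
```

Calling this function repeatedly from the application's task routine drains the message without ever checking FIFO status directly.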

Buffer Queue Transfer Model

The Buffer Queue Data Transfer Model allows clients to queue data transfers for processing. This data transfer model is always non-blocking. The USART driver returns a buffer handle for a queued request. The clients can track the completion of a buffer through events and the API. If the USART driver is busy processing a data transfer, other data transfer requests are queued. This allows the clients to optimize their application logic and increase throughput. To optimize memory usage, the USART driver implements a shared buffer object pool concept to add a data transfer request to the queue. The following figure shows a conceptual representation of the Buffer Queue Model.

[Figure: Buffer Queue Model (buffer_queue.png)]

As shown in the previous figure, each USART driver hardware instance has a read/write queue. The application must configure the sizes of these read/write queues. The USART driver additionally employs a global pool of buffer queue objects. This pool is common to all USART Driver hardware instances and its size is defined by the DRV_USART_QUEUE_DEPTH_COMBINED configuration macro.

When a client places a request to add a data transfer, the driver performs the following actions:

  1. It checks if a buffer object is free in the global pool; if not, the driver rejects the request.
  2. It then checks if the hardware instance specific queue is full; if not, the buffer object from the global pool is added to the hardware instance specific queue. If the queue is full, the driver rejects the request.

The API is the same for the DMA and non-DMA Buffer Queue models. The driver can support the non-DMA Buffer Queue Data Transfer Model along with the File I/O Type Read/Write Data Transfer Model. The Byte-by-Byte Model cannot be enabled if the Buffer Queue Data Transfer Model is enabled.

The USART Driver DMA feature is only available while using the Buffer Queue Model. If enabled, the USART Driver uses the DMA module channels to transfer data directly from application memory to USART transmit or receive registers. This reduces CPU resource consumption and improves system performance.

The DRV_USART_BufferAddRead and DRV_USART_BufferAddWrite functions represent the Buffer Queue Data Transfer Model. These functions are always non-blocking. The Buffer Queue Data Transfer Model employs queuing of read/write requests. Each driver instance contains a read/write queue, whose sizes are determined by the queueSizeRead and queueSizeWrite members of the DRV_USART_INIT data structure. The driver provides buffer events (DRV_USART_BUFFER_EVENT) that indicate completion of the buffer requests.

When the driver is configured for Interrupt mode operation, the buffer event handler executes in an interrupt context. Calling computationally intensive or hardware polling routines within the event handlers is not recommended.

When the driver adds a request to the queue, it returns a buffer handle. This handle allows the client to track the request as it progresses through the queue. The buffer handle expires when the event associated with the buffer completes.
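Putting the pieces together, a queued write with event tracking might look like the sketch below. This assumes a Harmony project; DRV_USART_BufferEventHandlerSet, DRV_USART_BUFFER_EVENT_COMPLETE, and DRV_USART_BUFFER_HANDLE_INVALID follow the usual Harmony naming, but verify them against your driver version.

```c
/* Sketch only: assumes the Harmony USART driver is initialized and
 * usartHandle is a valid open handle. */
#include "driver/usart/drv_usart.h"

static volatile bool writeDone = false;

/* In Interrupt mode this handler runs in an interrupt context:
 * keep it short and defer heavy work to the main loop. */
static void App_BufferEventHandler(DRV_USART_BUFFER_EVENT event,
                                   DRV_USART_BUFFER_HANDLE bufferHandle,
                                   uintptr_t context)
{
    if (event == DRV_USART_BUFFER_EVENT_COMPLETE)
    {
        writeDone = true; /* bufferHandle expires after this event */
    }
}

void App_QueueWrite(DRV_HANDLE usartHandle)
{
    static uint8_t message[] = "hello";
    DRV_USART_BUFFER_HANDLE bufferHandle;

    DRV_USART_BufferEventHandlerSet(usartHandle,
                                    App_BufferEventHandler, 0);

    /* Non-blocking: returns immediately with a handle to track. */
    DRV_USART_BufferAddWrite(usartHandle, &bufferHandle,
                             message, sizeof(message) - 1);

    if (bufferHandle == DRV_USART_BUFFER_HANDLE_INVALID)
    {
        /* Rejected: global pool empty or instance queue full. */
    }
}
```

The same pattern applies to DRV_USART_BufferAddRead; only the direction of the transfer differs.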


© 2016 Microchip Technology, Inc.