DMA Architecture

DMA hardware is integral to most ARM SoC designs. It is extensively used for driving peripheral controllers related to flash memory, LCD displays, USB, SD cards etc. Here we elaborate the functioning of a typical ARM SoC DMA controller and briefly delve into its internal mechanisms. The intent is to explain the underlying architecture of a typical DMA module and then tie those elementary structural features to ARM’s PrimeCell® DMA Controller, which might initially come across as a more sophisticated piece of hardware.

Direct memory access enables the processor to offload memory transfer operations, so that the system can dedicate its valuable computational resources to more appropriate work. In that sense a DMA controller is like a processor module capable of executing only load-store operations, whereas a full blown pipelined processor core possesses many other computational features and should rarely be tied up with rudimentary data transfers.

Ultimately, DMA is about transferring data across hardware modules. Depending on the use case, the DMA source and destination vary, and depending on the hardware, so do the specifics of the bus protocol employed for load-store. But just like a processor core, a DMA controller core must be compatible with the bus protocol features related to burst mode, flow control etc.; otherwise it simply won’t be able to move data around.

 

Fig 1 : DMA v/s Processor

 

So a DMA controller should simply be interpreted as a rudimentary logic block meant to implement data transfers. Similar to a processor core, this IP block also plugs into the larger data bus matrix involving SRAM and other peripherals. While a processor executes object code from instruction memory, a DMA controller executes its operations by loading transfer configurations from a RAM segment. Such a configuration is usually termed a descriptor. This structure essentially describes all the details of the transfer: source/destination, burst size, transfer length etc. The descriptor table has to be written to RAM by the DMA driver software running on the main processor, and its memory location is usually communicated to the DMA controller via a shared register.
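
As a concrete illustration, such a descriptor might look roughly like the C structure below. The field names, widths and layout are purely illustrative assumptions; the actual format is dictated by the specific controller's datasheet.

    #include <stdint.h>

    /* Hypothetical transfer descriptor -- field names and layout are illustrative,
       the real format is defined by the particular DMA controller's datasheet. */
    struct dma_descriptor {
        uint32_t src_addr;    /* physical source address                    */
        uint32_t dst_addr;    /* physical destination address               */
        uint32_t length;      /* total number of bytes to transfer          */
        uint32_t burst_size;  /* bus burst size used for each transfer beat */
        uint32_t next_desc;   /* address of the next descriptor, 0 if last  */
        uint32_t control;     /* channel enable, interrupt enable etc.      */
    };

The driver would populate such a structure in RAM and then program its physical address into the shared controller register mentioned above.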

Fig 2 : Descriptor

A DMA controller is essentially a co-processor assisting the main core software with complex use cases involving heavy data transfers. For example, populating an LCD image bitmap or transferring contents to an SD card are DMA intensive features. Such a controller usually supports multiple DMA channels enabling simultaneous transfers; in this particular case, a controller with two or more channels should be able to drive an LCD and an SD card at the same time.

An interrupt signal is usually employed to synchronize these DMA transfers with the main processor operations. Once the requested data transfer completes, the DMA hardware asserts an interrupt, which is then acknowledged and serviced by the processor.
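
A Linux-flavored sketch of such a completion handler is shown below. The register offsets and the my_dma structure are assumptions made for the example; only the interrupt and completion primitives are standard kernel calls.

    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/completion.h>

    #define DMA_INT_STATUS  0x20    /* hypothetical register offsets */
    #define DMA_INT_CLEAR   0x24

    struct my_dma {                 /* hypothetical driver state */
        void __iomem *regs;
        struct completion xfer_done;
    };

    static irqreturn_t dma_done_isr(int irq, void *dev)
    {
        struct my_dma *d = dev;
        u32 status = readl(d->regs + DMA_INT_STATUS);  /* which channel finished */

        writel(status, d->regs + DMA_INT_CLEAR);       /* acknowledge the interrupt */
        complete(&d->xfer_done);                       /* wake the waiting driver thread */
        return IRQ_HANDLED;
    }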

Fig 3 : DMA Interrupt

 

Usually, the contents of a transfer descriptor mirror the DMA controller register structure. These memory resident descriptor contents are blindly loaded into the actual hardware registers by the DMA controller itself. For example, a descriptor would include entries for the source and destination addresses which correspond directly to the controller’s source and destination address registers. Such a DMA controller block also contains a separate core logic which reads these registers and then implements the actual memory transfers, as illustrated in Fig 4 below.

Fig 4 : Generic DMA architecture

The transfer descriptors written to memory by the processor are loaded into the controller registers by the DMA hardware, and are eventually parsed and executed by its core load/store transfer logic. Naturally, any errors within these memory resident descriptors are also propagated to the core transfer logic via these registers. Essentially, the register set communicates all the relevant transfer configuration details to the DMA load/store engine: source/destination, burst size, transfer length etc. The actual circuit implementing the load-store loop is already present within the DMA core.

Every hardware module has a primary interfacing mechanism: simple peripherals use registers, while complex processors consume binary object code. A typical DMA controller interface, as elaborated above, involves a descriptor table reflecting its register set and transfer capabilities. PrimeCell is a DMA IP block from ARM, but unlike a typical DMA controller, its primary interfacing mechanism is neither a register set nor descriptor tables. It is an instruction set, very much resembling that of a simple processor.

Fig 5 : PrimeCell

PrimeCell integrates a rudimentary processor core which understands a limited instruction set, mostly meant for implementing load-store operations and synchronization mechanisms related to the data bus and processor interrupts. So, instead of a descriptor table, we have object code describing the transfer: essentially a binary program implementing memcpy, but written with the assembly instructions understood by the PrimeCell DMA processor. Naturally, this program also specifies the source/destination addresses, burst size etc. It is an actual binary object code.
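
For a flavour of what that object code expresses, a simple copy program for this engine would, schematically, be a short sequence along the lines of the PL330-style mnemonics below. Treat this as a hedged pseudo-assembly sketch: operand syntax is simplified, the angle-bracket placeholders are not real operands, and the authoritative instruction set is in ARM's PL330 documentation.

    DMAMOV CCR, <burst size, beat width, ...>   ; channel control configuration
    DMAMOV SAR, <source address>
    DMAMOV DAR, <destination address>
    DMALP  <number of bursts>                   ; open a loop
        DMALD                                   ; load one burst from SAR into the FIFO
        DMAST                                   ; store one burst from the FIFO to DAR
    DMALPEND                                    ; close the loop
    DMASEV <event>                              ; raise an event/interrupt to flag completion
    DMAEND                                      ; halt this channel thread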

Fig 6 : PrimeCell Architecture

This DMA processor core is also multi-threaded, with one transfer channel associated with each thread. The threads share the same scheduling priority, and the system implements a round-robin scheme. The internal FIFO is used as a buffer during DMA transfers and is shared across all the channels. Each channel is allowed access to the whole FIFO memory, but of course, at any instant, FIFO utilization across all the running channels can never exceed its total size. Such a flexible shared-memory design adds to software complexity, but it can also lead to a more productive hardware resource allocation pattern.

Finally, the PrimeCell IP block also includes an instruction cache to optimize object code access, along with provisions for instruction abort handling and watchdog exceptions to avoid lockups. In other words, the module almost resembles a full blown multi-tasking processor. Do note that such transfer related or hardware related exception triggers are present in a typical DMA controller as well. So with the PrimeCell IP, a software engineer is simply exposed to machinery which already exists in other typical DMA hardware but is usually hidden behind the veil of register set abstractions. As always, nothing comes without a trade-off; the more complicated aspect of managing this module is definitely the object code generation, which makes a crude compiler out of an otherwise relatively simple DMA driver.


Linux Storage Cache


Every system use case involves a combination of software modules talking to each other. But an optimal system is more than a set of efficient modules. The overall architectural arrangement of these modules and their data/control interfaces also impacts the performance. In that sense, several useful design methodologies can be observed by examining the storage subsystem within Linux.

Fig 1 : Application to VFS

Various abstraction layers have their own purpose. For example, VFS exposes a uniform interface to user applications, while it is the file system which actually implements the structural representation of files on the device. Eventually, how each storage sector is accessed depends on the block driver’s interactions with the hardware. Productivity depends on how quickly the system can service user requests; in other words, response time is critical. The logic implemented in each of the involved modules could be efficient in isolation, but if they are merely stacked up vertically the result will be only as good as a primitive processor with no internal caches or pipelined execution.

*********************************

Inode-Dentry Cache

Bookkeeping information associated with each file is represented by two kernel structures — inodes and dentries. In simple terms, these constructs hold the information necessary to identify and locate the storage blocks associated with a file. A multitasking environment might have numerous processes accessing the same volume or even the same file. Caching this information at higher levels reduces contention on the corresponding lower layers. If repeated file accesses are picked from the VFS cache, the file system is free to quickly service new requests. Consider a use case where multiple processes look up files in different dedicated directories; without a cache, the contention on the file system scales with the number of processes, but with a good cache mechanism only the brand new requests end up reaching the file system module. The idea is quite like a fully associative hierarchical hardware cache.
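
The flow below is a deliberately simplified sketch of that idea, not the actual kernel lookup path: the dentry cache is consulted first, and only a miss falls through to the file system's own lookup callback. d_lookup() and d_alloc() are real dcache helpers; the wrapper function and the abbreviated error handling are purely illustrative.

    #include <linux/dcache.h>
    #include <linux/fs.h>
    #include <linux/err.h>

    /* Simplified sketch of name resolution for one path component. */
    static struct dentry *resolve_component(struct dentry *parent, struct qstr *name)
    {
        struct dentry *found = d_lookup(parent, name);  /* fast path: dcache hit */

        if (found)
            return found;               /* no file system module involved at all */

        found = d_alloc(parent, name);  /* slow path: allocate a fresh dentry */
        if (!found)
            return ERR_PTR(-ENOMEM);

        /* ask the file system to locate the inode and fill in the dentry */
        return d_inode(parent)->i_op->lookup(d_inode(parent), found, 0);
    }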

Fig 2 – Inode/Dentry Cache

*********************************

Data Cache

Inodes and dentries are accessed by the kernel to eventually locate the file contents, so the above cache is only part of the solution; the data cache handles the rest (see Fig 3 below). Here too, frequently accessed file data should be cached so that only new writes and new reads end up contending for the file system module’s attention. Note that inside the kernel the pages associated with a file are arranged as a radix tree indexed by file offsets, while the dentry cache is managed through a hash table. Each such data cache is also associated with its corresponding inode.
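
A sketch of how a read conceptually consults this data cache is shown below; find_get_page() and read_mapping_page() are standard kernel helpers, while the wrapper function itself is just for illustration.

    #include <linux/pagemap.h>

    /* Illustrative wrapper: return the page cache page for a given file offset,
       hitting the radix tree first and reaching the file system only on a miss. */
    static struct page *get_file_page(struct address_space *mapping, pgoff_t index)
    {
        struct page *page = find_get_page(mapping, index);  /* radix-tree lookup */

        if (page)
            return page;    /* cache hit: no file system or block I/O involved */

        /* cache miss: allocate a page, insert it into the cache and let the
           file system's readpage callback fill it from storage */
        return read_mapping_page(mapping, index, NULL);
    }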

The cache mechanism also determines program execution speed, since it is closely related to how efficiently the binary sections of an executable are managed within RAM. For example, the read-ahead heuristics determine how often the system endures an executable page miss. Another useful illustration of the utility of a cache is the case where a program accesses files from the same volume its executable is paging from; without a cache, the contention on the file system module would be tremendous.

Fig 3 – Full Cache

*********************************

Cache Management

Cache management involves its own complexities. How these buffers are filled, flushed, invalidated or marked dirty depends on the file system module. The Linux kernel merely invokes the file system’s registered call-backs and expects the associated cache buffers to be handled accordingly. For example, during a file overwrite the file system should mark the overwritten cached page as dirty, otherwise the flush may never get invoked for that page.

We can also state that the Linux VFS abstracts the pure logic of how data is represented on the storage device from how it is viewed by the kernel on a running system. In this manner the role of the file system is limited to its core functionality of mapping file accesses to actual storage blocks, plus the transformation between its internal, file system specific representation of files and the generic VFS inode, dentry and data cache constructs. Eventually, how a file is created, deleted or modified and how the associated data cache is managed depend solely on the file system module.

For example, creating a new file involves invoking the file system call-backs registered via the kernel structure “inode_operations”. The lookup call-back registered there is employed to avoid file name duplication, and the obvious “create” call is needed to allocate the file’s inode. An fopen ends up invoking “file_operations” call-backs, and similarly a data read/write involves the “address_space_operations”.
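
A hedged sketch of how a file system wires in these callbacks is shown below. The myfs_* functions are hypothetical and their implementations are omitted; the three operation tables and the generic_* helpers are the standard kernel interfaces of the era this article describes.

    #include <linux/fs.h>

    /* Hypothetical file system "myfs" registering its callbacks. */
    static const struct inode_operations myfs_dir_iops = {
        .lookup = myfs_lookup,        /* resolve a name inside a directory */
        .create = myfs_create,        /* allocate an inode for a new file  */
        .unlink = myfs_unlink,        /* remove a file                     */
    };

    static const struct file_operations myfs_file_fops = {
        .open       = myfs_open,
        .llseek     = generic_file_llseek,
        .read_iter  = generic_file_read_iter,   /* serviced from the page cache */
        .write_iter = generic_file_write_iter,  /* dirties page cache pages     */
    };

    static const struct address_space_operations myfs_aops = {
        .readpage    = myfs_readpage,      /* fill a page cache page from storage */
        .writepage   = myfs_writepage,     /* write one dirty page back           */
        .write_begin = myfs_write_begin,   /* prepare a page for a buffered write */
        .write_end   = myfs_write_end,     /* mark the written page dirty         */
    };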

How the registered file system call-backs manage the inode, dentry and page cache buffers depends on their own internal logic. So there is a kind of loop here: the VFS calls the file system call-backs, and those in turn call other kernel helper functions to handle specific cache buffer operations like clean, flush, write-back etc. When the file system call-backs finally return control to the VFS, the associated caches should already be configured and ready for use.

Fig 4 – Inode/Page Cache maps

*********************************

Load Store Analogy

A generic Linux file system can be described as a module which directly services the kernel cache by implementing functions for creating, invalidating or flushing various cache entries, and which therefore only indirectly caters to the POSIX file operations. The methodology is in a way analogous to how a RISC load-store design differs from an 8051 like CISC design: separating slow memory accesses from register operations is the core attribute of a load-store machine. Here the Linux kernel ensures that operations on the cache can happen in parallel with an ongoing load or store involving the file system and driver modules; the obvious trade-offs are complexity and memory consumption.

*********************************

Pipelines

Caches are effective when there are read-ahead mechanisms and background flusher tasks. A pipelined architecture involves as much parallel execution as possible: cache memory is filled in advance by reader threads while dirty entries are constantly flushed in the background by worker threads. The resulting sector I/O requests are asynchronously queued and processed by the block driver task, which eventually DMAs them to the interfaced device. After writes, the corresponding cache entries are marked clean, and similarly after reads the entries are marked up-to-date. The pipeline stages can be seen in Fig 5 below.

Fig 5 – Kernel Storage Pipeline

System organization evolves over time as various use cases are analyzed for performance bottlenecks and the arrangement itself is revisited to remove the identified issues. Mature generic software architectures adapt effectively to all the intended use cases while remaining within various resource constraints. For a general purpose operating system like Linux, the infrastructure for cache memory utilization is a good illustration of how various abstract software components can seamlessly coordinate to serve widely varying user requests.

Linux File System Write


The Linux file write methodology is specific to the particular file system. For example, JFFS2 syncs data within the user process context itself (inside its write_end call), while file systems like UBIFS, LogFS, ext and FAT tend to create dirty pages and let the background flusher task manage the sync.

These two methodologies come with their own advantages, but the second method leads to the Linux kernel exhibiting certain quirks which can drastically impact write throughput measurements.

Fig : Linux File Write

The above diagram gives an overview of the design; there are in fact two main attributes to this architecture:

COPY : User space data is copied into a kernel page associated with the file and marked as dirty via the “write_begin” and “write_end” file system call-backs, which are invoked for every “write” system call made by the user process. So while they are indeed kernel mode functions, they execute on behalf of the user space process, which in turn simply waits on the system call. The sequence is illustrated below:

    write_begin()   /* Informs FS about the number of bytes being written   */
    mem_copy()      /* Copies data to the kernel page cache                 */
    write_end()     /* Copied kernel page marked as dirty and FS internal
                       meta information is updated                          */

&

Write-Back : The Linux kernel spawns a flusher task to write these dirty pages to storage. For that, this newly spawned task typically invokes the file system specific “writepage” or “writepages” call-back, as sketched below.
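
As a sketch, the writepages callback of such a file system often just delegates the dirty-page iteration to a generic helper. write_cache_pages() is a real kernel helper; myfs_writepages and the per-page writer myfs_write_one_page are hypothetical names used here for illustration.

    #include <linux/fs.h>
    #include <linux/writeback.h>
    #include <linux/pagemap.h>

    /* The flusher task ends up here via the file system's registered
       ->writepages callback. write_cache_pages() walks the dirty pages of the
       mapping and invokes the per-page writer for each of them. */
    static int myfs_writepages(struct address_space *mapping,
                               struct writeback_control *wbc)
    {
        return write_cache_pages(mapping, wbc, myfs_write_one_page, mapping);
    }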

Some Nitty-Gritty Details

The call-backs mentioned above are specific to the file system and are registered during its initialization with the kernel. You might have noticed that the above design almost resembles a classic producer-consumer scenario, but there are certain nuances:

Lower Threshold: The flusher task (the consumer) is spawned only when a certain percentage of the page cache becomes dirty. So there is indeed a lower threshold (configured via /proc/sys/vm/dirty_background_ratio) for spawning this task. Note that it can also be woken by an internal time-out, so even if the dirty pages are few in number, they are not kept in volatile memory for long.

Higher Threshold: Once a certain higher percentage of the page cache is dirtied by a user process (the producer), that process is forced to sleep to avoid the kernel running out of memory. The user process is not released from this involuntary sleep until the flusher task catches up and brings the dirty page level back within acceptable limits. This is how the kernel pushes back on a process with a massive capacity to write data. The higher threshold can be specified by writing to /proc/sys/vm/dirty_ratio.

So between the above mentioned lower and upper thresholds the system has two concurrent tasks: one dirtying pages with new contents and another cleaning them up with write-backs. If too many dirty pages are being created, Linux may also spawn multiple flusher tasks. It seems this design should scale well on SMP architectures with multiple storage disk paths.

We can bring some more clarity with a few code level details. The critical aspect of the above mechanism is implemented in balance_dirty_pages (page-writeback.c), which gets invoked after the “write_end” call inside the kernel.

The function audits the number of dirty pages in the system and then decides whether the user process should be stalled.

static void balance_dirty_pages(struct address_space *mapping,
                                unsigned long write_chunk)
{
        int pause = 1;

        ...

        for (;;) {
                ...
                /* Code checks the dirty page status and breaks out of the
                   loop only if the dirty page ratio has gone down */
                ...

                io_schedule_timeout(pause);

                /*
                 * Increase the delay for each loop, up to our previous
                 * default of taking a 100ms nap.
                 */
                pause <<= 1;
                if (pause > HZ / 10)
                        pause = HZ / 10;
        }
}

The argument ‘pause’ to io_schedule_timeout specifies the sleep time in jiffies, while the big for (;;) loop repeatedly checks the dirty page ratio after each sleep cycle; if there is no ample reduction, the sleep time is doubled until it reaches 100 ms, which is the maximum single nap the ‘producer’ process can be forced to take in one iteration. This algorithm is executed during file writes to ensure that no rogue process runs amok consuming the full page cache memory.

The Gist

Considerable significance can be attributed to the above mentioned upper and lower thresholds; if they are not fine tuned for the system’s specific use cases, RAM utilization will be far from optimal. The right values depend on the RAM size plus how many processes will concurrently initiate file system writes, how big those writes will be, the sizes of the files being written, the typical intervals between successive file writes and so on.

A file system mounted with syncs off is meant to use the page cache for buffering. Optimal write speeds mandate that the upper threshold be large enough to buffer the full file while allowing seamless concurrent write-backs; otherwise the kernel response time for user process writes will be erratic at best.

Another critical aspect is that writepage should have minimal interference with write_begin and write_end; only then can the flusher task’s write-backs proceed with the least friction against the user process constantly invoking the latter two functions. If they share some resource, the user process and the flusher task will end up waiting on each other, and the expected quick response of an asynchronous buffered page cache write will not materialize.

FUSE File System Performance on Embedded Linux

We ported and benchmarked a flash file system to Linux running on an ARM board. The port was done via FUSE, a user space file system mechanism where the file system module itself runs as a process inside Linux. File I/O calls from other processes are routed to the FUSE process via inter process communication, which is enabled by a low level FUSE driver running in the kernel.

http://en.wikipedia.org/wiki/Filesystem_in_Userspace

Fig : FUSE Architecture

The above diagram provides an overview of the FUSE architecture. The ported file system was proprietary and was not meant to be open sourced; from that perspective a file system as a user space library made a lot of sense.
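
To make the arrangement concrete, the skeleton below shows roughly what the user space side looks like with the libfuse version 2 API; every operation on the mounted volume is routed by the kernel FUSE driver to callbacks like the getattr handler here, incurring the context switches measured below. The myfs_* names are our own, the rest is the standard libfuse interface.

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <sys/stat.h>
    #include <string.h>
    #include <errno.h>

    /* Minimal user space file system: exposes just an empty root directory. */
    static int myfs_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode  = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        return -ENOENT;          /* everything else does not exist */
    }

    static const struct fuse_operations myfs_ops = {
        .getattr = myfs_getattr,
    };

    int main(int argc, char *argv[])
    {
        /* fuse_main() mounts the volume and loops, servicing requests that the
           kernel FUSE driver forwards from other processes via /dev/fuse. */
        return fuse_main(argc, argv, &myfs_ops, NULL);
    }

Building it typically needs the fuse development package (for example, linking with the flags reported by pkg-config for fuse).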

The primary bottleneck with FUSE is its performance. The control path timing for a 2K byte file read use case is elaborated below. Note that 2K corresponds to the NAND page size.

1. User space app to kernel FUSE driver switch – 15 µs

2. Kernel space FUSE driver to user space FUSE library process: context switch – 1 to 15 ms

3. Switch back into kernel mode for flash device driver access – the NAND MTD driver overhead, excluding the device delay, is in the order of µs

4. Kernel back to the FUSE library with the data read from flash – 350 µs (NAND dependent) + 15 µs + 15 µs (kernel to user mode switch and back)

5. From the FUSE library back to the FUSE kernel driver: another process context switch – 1 to 15 ms

6. Finally from the FUSE kernel driver to the application with the data – 15 µs

As you can see, the two process context switches take time in the order of milliseconds, which kills the whole idea. If performance is crucial, profile the context switch overhead of the operating system before attempting a FUSE port. A loadable kernel module approach would seem to be the better alternative.