Input–output memory management unit
In computing, an input–output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct-memory-access–capable (DMA-capable) I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, the IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses. Some units also provide memory protection from faulty or malicious devices.
An example IOMMU is the graphics address remapping table (GART) used by AGP and PCI Express graphics cards on Intel Architecture and AMD computers.
On the x86 architecture, before the functions of the northbridge and southbridge were split between the CPU and the Platform Controller Hub (PCH), I/O virtualization was performed not by the CPU but by the chipset.[1][2]
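The translation itself is conceptually a paged lookup, just as in a CPU MMU. The following C sketch models it with a single flat I/O page table for one device; it is a simplified illustration only, since real IOMMUs (such as AMD-Vi, Intel VT-d or the ARM SMMU) use multi-level tables selected per device or per protection domain, and all names in the sketch are invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define IOVA_PAGES 1024                 /* toy limit: 4 MiB of I/O space */

/* One entry per device-visible (I/O virtual) page. */
struct io_pte {
    uint64_t pfn;        /* physical page frame number */
    bool     present;    /* mapping exists */
    bool     writable;   /* device may write through this mapping */
};

static struct io_pte io_page_table[IOVA_PAGES];

/* Translate a device-visible address (IOVA) to a physical address,
 * enforcing the protection a real IOMMU applies in hardware: accesses
 * to unmapped or write-protected pages fault instead of reaching memory. */
static bool iommu_translate(uint64_t iova, bool is_write, uint64_t *pa)
{
    uint64_t vpn = iova >> PAGE_SHIFT;

    if (vpn >= IOVA_PAGES || !io_page_table[vpn].present)
        return false;                              /* translation fault */
    if (is_write && !io_page_table[vpn].writable)
        return false;                              /* protection fault */

    *pa = (io_page_table[vpn].pfn << PAGE_SHIFT) | (iova & (PAGE_SIZE - 1));
    return true;
}
```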
Advantages
The advantages of having an IOMMU, compared to direct physical addressing of the memory, include:
- Large regions of memory can be allocated without the need to be contiguous in physical memory – the IOMMU maps contiguous virtual addresses to the underlying fragmented physical addresses. Thus, the use of vectored I/O (scatter-gather lists) can sometimes be avoided.
- Devices that do not support memory addresses long enough to address the entire physical memory can still address the entire memory through the IOMMU, avoiding overheads associated with copying buffers to and from the peripheral's addressable memory space.
- For example, x86 computers with the Physical Address Extension (PAE) feature can address more than 4 gigabytes of memory, but an ordinary 32-bit PCI device cannot address memory above the 4 GiB boundary and therefore cannot access it directly. Without an IOMMU, the operating system has to resort to time-consuming bounce buffers (also known as double buffers[3]); with an IOMMU, the device's 32-bit addresses can be remapped onto high physical memory (a driver-level sketch follows this list).
- Memory is protected from malicious devices attempting DMA attacks and from faulty devices attempting errant memory transfers, because a device cannot read or write memory that has not been explicitly allocated (mapped) for it. The protection relies on the fact that the OS running on the CPU exclusively controls both the MMU and the IOMMU; the devices are physically unable to circumvent or corrupt the configured memory management tables.
- In virtualization, guest operating systems can use hardware that is not specifically made for virtualization. Higher-performance hardware such as graphics cards uses DMA to access memory directly; in a virtual environment all memory addresses are re-mapped by the virtual machine software, which causes DMA devices to fail. The IOMMU handles this re-mapping, allowing the native device drivers to be used in a guest operating system.
- In some architectures, the IOMMU also performs hardware interrupt re-mapping, in a manner similar to standard memory address re-mapping.
- Peripheral memory paging can be supported by an IOMMU. A peripheral using the PCI-SIG PCIe Address Translation Services (ATS) Page Request Interface (PRI) extension can detect and signal the need for memory manager services.
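The driver-level sketch referred to above is a minimal, hedged example of Linux's generic DMA-mapping API, which is backed by the IOMMU when one is present. It is not a complete driver: example_map_buffer is a hypothetical helper name, the buffer is assumed to come from kmalloc(), and error handling is kept to the minimum.

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical helper: map one kmalloc()'d buffer for device-to-memory DMA.
 * With an IOMMU, the dma_addr_t returned by dma_map_single() is an I/O
 * virtual address that the IOMMU translates to the buffer's physical pages,
 * so even a 32-bit-only device can reach memory above 4 GiB without the
 * kernel copying through a bounce buffer. */
static int example_map_buffer(struct device *dev, void *buf, size_t len,
                              dma_addr_t *handle)
{
    int ret;

    /* Declare that the device can only emit 32-bit DMA addresses. */
    ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
    if (ret)
        return ret;

    /* Map the buffer for device writes; the device is then programmed
     * with *handle instead of the buffer's physical address. */
    *handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, *handle))
        return -ENOMEM;

    return 0;
}
```

Scatter-gather mappings work analogously through dma_map_sg(), where the IOMMU can present a fragmented set of physical pages to the device as a contiguous I/O virtual range, as described in the first point of the list above.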
For system architectures in which port I/O is a distinct address space from the memory address space, an IOMMU is not used when the CPU communicates with devices via I/O ports. In system architectures in which port I/O accesses are mapped into the memory address space, an IOMMU can translate those accesses.
Disadvantages
The disadvantages of having an IOMMU, compared to direct physical addressing of the memory, include:[4]
- Some degradation of performance from translation and management overhead (e.g., page table walks).
- Consumption of physical memory for the added I/O page (translation) tables. This can be mitigated if the tables can be shared with the processor.
Virtualization
When an operating system is running inside a virtual machine, including systems that use paravirtualization such as Xen, it does not usually know the host-physical addresses of the memory it accesses. This makes providing direct access to the computer hardware difficult: if the guest OS tried to instruct the hardware to perform a direct memory access (DMA) using guest-physical addresses, it would likely corrupt the memory, as the hardware does not know about the mapping between guest-physical and host-physical addresses for the given virtual machine. The corruption can be avoided if the hypervisor or host OS intervenes in the I/O operation to apply the translations, but this intervention delays the I/O operation.
An IOMMU can solve this problem by re-mapping the addresses accessed by the hardware according to the same (or a compatible) translation table that is used to map guest-physical addresses to host-physical addresses.[5]
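On Linux, this kind of mapping is exposed to user space (and hence to hypervisors such as QEMU) through the VFIO interface. The following hedged sketch maps 16 MiB of a process's anonymous memory at the I/O virtual address 0x40000000, so that a passed-through device can DMA into it as if that were a guest-physical address; the IOMMU group number, the IOVA and the size are illustrative assumptions, and error handling is omitted for brevity.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* illustrative group number */

    /* The group is usable only if every device in it is bound to VFIO. */
    struct vfio_group_status status = { .argsz = sizeof(status) };
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;

    /* Attach the group to the container and select the Type1 IOMMU model. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* Back the "guest" memory with anonymous pages of this process. */
    void *mem = mmap(NULL, 16 << 20, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Program the IOMMU: device accesses to IOVA 0x40000000..+16 MiB are
     * translated to the physical pages backing 'mem'. */
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)mem,
        .iova  = 0x40000000,
        .size  = 16 << 20,
    };
    return ioctl(container, VFIO_IOMMU_MAP_DMA, &map) ? 1 : 0;
}
```

A hypervisor performs essentially the same mapping for every page of guest RAM, so an unmodified driver in the guest can program the device with guest-physical addresses directly.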
Published specifications
- AMD has published a specification for IOMMU technology.[6][7]
- Intel has published a specification for IOMMU technology as Virtualization Technology for Directed I/O, abbreviated VT-d.[8]
- Information about the Sun IOMMU has been published in the Device Virtual Memory Access (DVMA) section of the Solaris Developer Connection.[9]
- The IBM Translation Control Entry (TCE) has been described in a document entitled Logical Partition Security in the IBM eServer pSeries 690.[10]
- The PCI-SIG has relevant work under the terms I/O Virtualization (IOV)[11] and Address Translation Services (ATS).
- ARM defines its version of the IOMMU as the System Memory Management Unit (SMMU)[12] to complement its virtualization architecture.[13]
See also
- Heterogeneous System Architecture (HSA)
- List of IOMMU-supporting hardware
- Memory-mapped I/O
- Memory protection
References
- ↑ "Intel platform hardware support for I/O virtualization". intel.com. 2006-08-10. Archived from the original on 2007-01-20. Retrieved 2014-06-07.
- ↑ "Desktop Boards: Compatibility with Intel Virtualization Technology (Intel VT)". intel.com. 2014-02-14. Retrieved 2014-06-07.
- ↑ "Physical Address Extension — PAE Memory and Windows". Microsoft Windows Hardware Development Central. 2005. Retrieved 2008-04-07.
- ↑ Muli Ben-Yehuda; Jimi Xenidis; Michal Ostrowski (2007-06-27). "Price of Safety: Evaluating IOMMU Performance" (PDF). Proceedings of the Linux Symposium 2007. Ottawa, Ontario, Canada: IBM Research. Retrieved 2013-02-28.
- ↑ "Xen FAQ: In DomU, how can I use 3D graphics". Retrieved 2006-12-12.
- ↑ "AMD I/O Virtualization Technology (IOMMU) Specification Revision 2.0" (PDF). amd.com. 2011-03-24. Retrieved 2014-01-11.
- ↑ "AMD I/O Virtualization Technology (IOMMU) Specification Revision 2.62" (PDF). amd.com. 2015-03-02. Retrieved 2016-01-05.
- ↑ "Intel Virtualization Technology for Directed I/O (VT-d) Architecture Specification" (PDF). Retrieved 2016-02-17.
- ↑ "DVMA Resources and IOMMU Translations". Retrieved 2007-04-30.
- ↑ "Logical Partition Security in the IBM eServer pSeries 690". Retrieved 2007-04-30.
- ↑ "I/O Virtualization specifications". Retrieved 2007-05-01.
- ↑ "ARM SMMU". Retrieved 2013-05-13.
- ↑ "ARM Virtualization Extensions". Retrieved 2013-05-13.
External links
- Bottomley, James (2004-05-01). "Using DMA". Linux Journal. Specialized Systems Consultants.
- Mastering the DMA and IOMMU APIs, Embedded Linux Conference 2014, San Jose, by Laurent Pinchart