The market for multi-GPU Machine Learning (ML) training and inference is rapidly evolving, driving advanced technologies such as low-latency, high-throughput PCIe® switches and high-performance NVMe™ Flash controllers. The increased use of accelerators for deep learning, Artificial Intelligence (AI) and ML is enabling radical advances in image classification, speech recognition, autonomous driving, bioinformatics and video analytics. The result is a growing need for a high-bandwidth, low-latency PCIe interconnect infrastructure utilizing NVMe storage to enable parallel computing.
High-performance fabric connectivity and composability for multi-host GPU and NVMe SSD systems are critical to ensure dynamic allocation of GPU resources to match workload requirements and maximize system efficiency. Switchtec™ PAX Advanced Fabric PCIe switches feature dynamic partitioning and multi-host SR-IOV sharing, enabling real-time “composition” or dynamic allocation of GPU resources to a specific host or set of hosts using standard host drivers.
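The "composition" idea above can be sketched as a small resource model: GPU endpoints behind the fabric switch sit in a free pool and are bound to, or released from, host partitions at run time. This is a minimal conceptual sketch only; the class, method and host names are hypothetical and do not represent the actual Switchtec PAX fabric interface.

```python
# Conceptual model of dynamic PCIe fabric composition. All names here
# (Fabric, bind, release, "hostA", "gpu0", ...) are hypothetical and
# illustrative only -- they are not the Switchtec PAX API.
from dataclasses import dataclass, field


@dataclass
class Fabric:
    gpus: set                                        # free GPU endpoint IDs
    partitions: dict = field(default_factory=dict)   # host -> bound GPU IDs

    def bind(self, host, gpu):
        """Move a free GPU endpoint into the given host's partition."""
        if gpu not in self.gpus:
            raise ValueError(f"GPU {gpu} is not free")
        self.gpus.remove(gpu)
        self.partitions.setdefault(host, set()).add(gpu)

    def release(self, host, gpu):
        """Return a GPU from a host's partition to the free pool."""
        self.partitions[host].discard(gpu)
        self.gpus.add(gpu)


fabric = Fabric(gpus={"gpu0", "gpu1", "gpu2", "gpu3"})
fabric.bind("hostA", "gpu0")
fabric.bind("hostA", "gpu1")
fabric.bind("hostB", "gpu2")
fabric.release("hostA", "gpu1")   # workload shifted: free the GPU...
fabric.bind("hostB", "gpu1")      # ...and recompose it under another host
```

In a real PAX deployment this reassignment happens in the switch fabric, so each host's standard PCIe enumeration and drivers see only the endpoints currently bound to its partition.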
Advanced fabric PCIe switch solutions for ML appliances deliver a scalable, low-latency and cost-effective multi-host interconnect for a network of GPUs, NVMe SSDs and other PCIe endpoints. Another important consideration is the availability of a fabric Application Programming Interface (API), which can simplify system management, greatly reducing time to market and development cost for multi-host systems.
Flashtec NVMe controllers support the standard NVMe host interface in a variety of form factors at a wide range of capacity points, and are optimized for high-performance random read/write operations. They perform all Flash management operations on chip while consuming minimal host processing and memory resources.
Our broad portfolio of high-reliability PCIe switches offers the industry’s highest-density and lowest-power solutions for data center, storage, communications, defense, industrial and a wide range of other applications.