Meeting the Bandwidth Demands of Next-Gen HPC & AI System Architectures

In artificial intelligence (AI), increasingly complex algorithms, larger datasets, and compute-intensive workloads drive an insatiable demand for compute, memory, and storage, as well as for higher-bandwidth, lower-latency communication between these components. Conversational AI, recommender systems, and computer vision are becoming increasingly prevalent, and new system architectures are being explored to enable AI models with hundreds of trillions of parameters and to power supercomputers beyond exascale. These advances bring new challenges as standard electrical interconnects approach the point of diminishing returns imposed by their physical limitations. Performance and efficiency losses from memory capacity limits, network bottlenecks, and stranded resources are amplified at scale, creating a need for new system architectures that can handle ever-growing, performance-intensive workloads.