Compute requirements for deep learning (DL) and high-performance computing (HPC) are growing at an exponential rate. Many of the most important and valuable workloads in this domain demand more flexible and performant compute than traditional processors can provide. Moreover, a growing population of converged AI and HPC workloads is not well addressed by legacy machines.

In this presentation, Dr. Andy Hock, VP of Product at Cerebras Systems, describes the Cerebras CS-2 system and its application to large-scale HPC and AI workloads. The solution supports not only the largest models of today but also extends seamlessly to giant models with more than 100 trillion parameters, a scale that is impractical to implement on today's hardware.

Our wafer-scale accelerator technology features unprecedented compute density, memory bandwidth, and communication bandwidth, and is uniquely built for sparse linear algebra. Our co-designed software execution model and hardware cluster technology lets users quickly and easily run enormous workloads that would require thousands of petaflops and take days or weeks to execute on warehouse-sized clusters of legacy, general-purpose processors. We will also review our new software development kit (SDK), which enables lower-level kernel programming for new or custom HPC and AI application development.

Cerebras Systems builds the ultimate accelerator for AI and HPC workloads. With this kind of horsepower, the possibilities are endless.

Learn more about the Cerebras SDK: https://youtu.be/ZXJzS_LHxcQ
Cerebras @ SC21: https://cerebras.net/blog/sc21
Learn more about Cerebras: https://cerebras.net

#AI #HPC #artificialintelligence #NLP
