Natural language processing has taken the machine learning community by storm over the past few years. Transformer architectures such as BERT and its many variants have seen widespread adoption as preferred models for sequence data across a variety of domains.

The PyTorch machine learning framework has likewise gained immense popularity within the machine learning community due to its intuitive design and ease of use.

Naturally, this means there is huge demand for running transformer models such as BERT using PyTorch. We at Cerebras are constantly expanding our PyTorch support to provide a simple way to port existing PyTorch models to run at high performance on Cerebras systems with just a few extra lines of code.

In this video, Cerebras engineer Cindy Orozco Bohorquez walks through the simple process of compiling and running your PyTorch neural network code on the Cerebras CS-2 system.

Learn more:
https://cerebras.net/blog/supporting-pytorch-on-the-cerebras-wafer-scale-engine/
https://cerebras.net/blog/getting-started-with-pytorch-bert-models-on-the-cerebras-cs-2-system/
https://www.cerebras.net/product-software/
https://docs.cerebras.net/en/latest/

#AI #deeplearning #machinelearning #NLP #naturallanguageprocessing #pytorch
