• What sparsity is and why it is important for training neural networks (see the brief sketch after this list)
• The latest progress in applying sparsity to large language models
• Our latest paper, “SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models”
https://arxiv.org/abs/2303.10464
• Why Cerebras hardware is uniquely suited to training large sparse models
• Future directions in sparsity research
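For readers new to the first topic, here is a minimal sketch of unstructured weight sparsity via magnitude pruning, written in NumPy. This is an illustration of the general concept only; the pruning criterion, the 90% sparsity level, and the function name are assumptions for demonstration, not the method from the SPDF paper.

```python
# Illustrative sketch only (not from the talk): unstructured weight sparsity
# via magnitude pruning. The 90% sparsity level is a hypothetical choice.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so ~`sparsity` fraction are zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest |weight|
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))          # a dense weight matrix
W_sparse = magnitude_prune(W, 0.90)      # keep only the largest ~10% of weights
print(f"sparsity: {1 - np.count_nonzero(W_sparse) / W_sparse.size:.2%}")
```

The payoff is that zeroed weights contribute nothing to the forward or backward pass, so hardware that can skip zeros (a capability discussed in the talk in the context of Cerebras systems) performs proportionally less compute.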
Speakers:
Vithu Thangarasa (@vithursant19) – Senior ML Research Scientist, Cerebras
James Wang (@draecomino) – Senior Product Marketing Manager, Cerebras