Large language models (LLMs) are routinely pre-trained on billions of tokens, only for the process to be restarted from scratch once new data becomes available. A much cheaper and more efficient solution would be to enable the continual pre-training of these models, i.e., updating pre-trained models with new data instead of re-training them from scratch. However, the distribution shift induced by novel data typically results in degraded performance on past data.

This talk discusses a vision for methods that efficiently update pre-trained models with new knowledge while preventing the forgetting of past knowledge. Taking a step towards efficient continual pre-training, we examine the effect of different learning-rate warm-up strategies and of replaying past data when continuing to pre-train models on new data and new languages.
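To make the two ingredients concrete, the sketch below shows one plausible form of a continual pre-training loop that combines learning-rate re-warming with replay of past data. It is an illustration only, not the talk's actual setup: the model, datasets, objective, and hyperparameters (warm-up length, replay fraction) are placeholders chosen for a self-contained example.

```python
# Illustrative sketch: continual pre-training with LR re-warming and replay.
# All names and hyperparameters below are assumptions for demonstration.
import math
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Stand-in "model" and data; a real run would use a pre-trained LLM and token streams.
model = nn.Linear(16, 16)
new_data = DataLoader(TensorDataset(torch.randn(512, 16)), batch_size=32, shuffle=True)
old_data = DataLoader(TensorDataset(torch.randn(512, 16)), batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
total_steps = len(new_data)
warmup_steps = max(1, int(0.1 * total_steps))  # re-warm the LR back up to its peak


def lr_lambda(step: int) -> float:
    # Linear re-warm-up followed by cosine decay: one of several possible warm-up strategies.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

replay_fraction = 0.25  # fraction of batches drawn from past data (assumed value)
old_iter = iter(old_data)

for step, (new_batch,) in enumerate(new_data):
    # With some probability, train on a replayed batch of past data to mitigate forgetting.
    if torch.rand(1).item() < replay_fraction:
        try:
            (batch,) = next(old_iter)
        except StopIteration:
            old_iter = iter(old_data)
            (batch,) = next(old_iter)
    else:
        batch = new_batch

    loss = ((model(batch) - batch) ** 2).mean()  # placeholder objective, not an LM loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The two knobs studied in this setting are the shape and length of the re-warm-up schedule and the replay fraction, i.e., how much old data is mixed back in while adapting to the new distribution.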
