We’re excited to host Victoria Lin, author of the recent RA-DIT (Retrieval-Augmented Dual Instruction Tuning) paper, for a talk on how to combine fine-tuning and RAG.

Paper Abstract:

Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build. Existing approaches either require expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance. We introduce Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third option by retrofitting any LLM with retrieval capabilities. Our approach operates in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use retrieved information, while (2) the other updates the retriever to return more relevant results, as preferred by the LM.
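
For a quick picture of the method before the talk, here is a rough sketch of the two fine-tuning signals the abstract describes. It assumes a HuggingFace-style causal LM and a dual-encoder retriever that returns one relevance score per candidate chunk; the function names, arguments, and tensor shapes are illustrative, not taken from the paper's code.

```python
# Illustrative sketch of RA-DIT's two fine-tuning signals (not the authors' code).
import torch
import torch.nn.functional as F

def lm_ft_loss(lm, chunk_ids, prompt_ids, answer_ids):
    """Step 1 (LM fine-tuning): next-token loss on the answer given
    [retrieved chunk; prompt], so the LM learns to use retrieved context.
    Assumes a HuggingFace-style model that accepts input_ids/labels."""
    input_ids = torch.cat([chunk_ids, prompt_ids, answer_ids], dim=-1)
    labels = input_ids.clone()
    # Supervise only the answer tokens; mask out chunk + prompt positions.
    labels[:, : chunk_ids.size(-1) + prompt_ids.size(-1)] = -100
    return lm(input_ids=input_ids, labels=labels).loss

def retriever_ft_loss(retriever_scores, lm_answer_logprobs, tau=1.0):
    """Step 2 (retriever fine-tuning): align the retriever's distribution over
    k candidate chunks with the LM's preference, i.e. how likely the gold
    answer is when each chunk is prepended.
    retriever_scores, lm_answer_logprobs: [batch, k]."""
    log_p_ret = F.log_softmax(retriever_scores / tau, dim=-1)
    p_lm = F.softmax(lm_answer_logprobs / tau, dim=-1)
    # KL(p_lm || p_ret): pushes retrieval scores toward chunks the LM prefers.
    return F.kl_div(log_p_ret, p_lm, reduction="batchmean")
```

In the second step, the LM's answer likelihood under each candidate chunk acts as a soft relevance label, which is how the retriever learns to return results "as preferred by the LM."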

Victoria will present the paper in the first half, and in the second half we’ll open up Q&A on fine-tuning + RAG more generally!
