With the release of OpenAI’s new finetuning endpoints, it’s worth asking a general question: how do I best use finetuning with my LLM app, especially if I’m building RAG systems (search, chatbots, and more)?

We’re excited to host Jo Bergum, distinguished engineer at Vespa, and Shishir Patil, PhD student at Berkeley and author of Gorilla, to discuss this.

We’ll talk about how to plug different aspects of finetuning into your RAG system to optimize it:

Timeline:
0:00: Opening presentation from Tenzin/Ali on their winning hackathon project at the Augment hackathon
8:00: Shishir (Berkeley) on finetuning + Gorilla
16:00: Jo (Vespa) on LLMs + synthetic queries
24:00: Panel with Shishir/Jo on RAG + finetuning
