In this video we walk through benchmarking RAG (retrieval-augmented generation) methods over the LangChain documentation. We use the new `langchain-benchmarks` Python package together with a public LangSmith dataset to run the benchmarks.
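
As a reference for the setup shown in the linked notebook, here is a minimal sketch of getting started: it uses the `registry` and `clone_public_dataset` helpers from `langchain-benchmarks` to copy the public LangChain Docs Q&A dataset into your own LangSmith workspace. This assumes you have installed `langchain-benchmarks` and set a `LANGCHAIN_API_KEY` for LangSmith; the task name follows the notebook.

```python
# Minimal sketch: browse the benchmark registry and clone the public
# LangChain Docs Q&A dataset into your LangSmith workspace.
# Assumes: pip install langchain-benchmarks, and LANGCHAIN_API_KEY is set.
from langchain_benchmarks import clone_public_dataset, registry

# The registry lists the built-in benchmark tasks (retrieval, agents, ...).
print(registry)

# Select the retrieval task over the LangChain documentation.
task = registry["LangChain Docs Q&A"]

# Copy the public dataset into your own workspace so you can run
# evaluations against it with your RAG chain of choice.
clone_public_dataset(task.dataset_id, dataset_name=task.name)
```

From there, the notebook evaluates different RAG configurations against the cloned dataset and logs the results to LangSmith for comparison.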

Key Links:
– Notebook used: https://langchain-ai.github.io/langchain-benchmarks/notebooks/retrieval/langchain_docs_qa.html
– Blog post on LangChain Benchmarks: https://blog.langchain.dev/public-langsmith-benchmarks/
– Public LangSmith Dataset: https://smith.langchain.com/public/452ccafc-18e1-4314-885b-edd735f17b9d/d?tab=0
