LLMs are great at reasoning and taking actions.

But previous frameworks for agentic reasoning (e.g. ReAct) were primarily focused on sequential reasoning, leading to higher latency and cost, and even poorer performance due to the lack of long-term planning.

LLMCompiler is a new framework by Kim et al. that introduces a compiler for multi-function calling. Given a task, the framework plans out a DAG of function calls. This planning both enables long-term reasoning (which boosts performance) and identifies which steps can be massively parallelized.
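To make the idea concrete, here is a minimal sketch of DAG-style execution: tasks whose dependencies are satisfied run concurrently rather than one at a time. The dependency graph, task names, and `run_task` stand-in are all hypothetical; in LLMCompiler the planner LLM emits the graph from the user's query.

```python
import asyncio

# Hypothetical toy DAG: each task maps to the tasks it depends on.
# In LLMCompiler, a planner LLM would produce a graph like this.
DAG = {
    "search_a": [],
    "search_b": [],
    "combine": ["search_a", "search_b"],
}

async def run_task(name: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for a tool / LLM call
    return f"result:{name}"

async def execute(dag: dict[str, list[str]]) -> dict[str, str]:
    results: dict[str, str] = {}
    done: set[str] = set()
    while len(done) < len(dag):
        # Every task whose dependencies are all finished is "ready";
        # ready tasks are dispatched concurrently with asyncio.gather.
        ready = [t for t, deps in dag.items()
                 if t not in done and all(d in done for d in deps)]
        outputs = await asyncio.gather(*(run_task(t) for t in ready))
        for t, out in zip(ready, outputs):
            results[t] = out
            done.add(t)
    return results

results = asyncio.run(execute(DAG))
```

Here `search_a` and `search_b` run in parallel in the first round, and `combine` runs once both have finished, which is the latency win over a strictly sequential ReAct-style loop.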

We’re excited to host paper co-authors Sehoon Kim and Amir Gholami to present this paper and discuss the future of agents.

LLMCompiler paper: https://arxiv.org/pdf/2312.04511.pdf

LlamaPack: https://llamahub.ai/l/llama_packs-agents-llm_compiler?from=llama_packs

Notebook: https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/agents/llm_compiler/llm_compiler.ipynb

Timeline:
00:00-34:30 – LLMCompiler Presentation
34:30-37:50 – Short LlamaIndex + LLMCompiler demo
37:50 – Q&A
