Using LLMs for question answering and understanding over tabular data is challenging. Naive approaches (dumping the table as text into the prompt, or text-to-SQL) often don’t work well.

In this webinar we’re excited to feature the authors of two papers on advanced techniques for tabular data understanding with LLMs:

1. Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding, with Zilong Wang: https://arxiv.org/abs/2401.04398v1

2. Rethinking Tabular Data Understanding with Large Language Models, with Tianyang Liu: https://arxiv.org/abs/2312.16702

Both papers pinpoint the shortcomings of naive approaches and propose novel techniques for reasoning about tabular data robustly. Come check it out!
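To give a flavor of the second paper’s approach, here is a minimal sketch of mix-self-consistency, assuming two LLM-backed solvers: a textual pathway (direct chain-of-thought over the table) and a symbolic pathway (e.g. generated Pandas/SQL code). Both solver functions are hypothetical stand-ins, not the paper’s actual code; answers are sampled from each pathway several times and then aggregated by majority vote.

from collections import Counter
from typing import Callable, List

def mix_self_consistency(
    question: str,
    textual_solver: Callable[[str], str],   # hypothetical LLM-backed textual reasoner
    symbolic_solver: Callable[[str], str],  # hypothetical LLM-backed Pandas/SQL reasoner
    n_samples: int = 5,
) -> str:
    answers: List[str] = []
    # Self-consistency: sample each reasoning pathway multiple times.
    for _ in range(n_samples):
        answers.append(textual_solver(question))
        answers.append(symbolic_solver(question))
    # Mix the pathways by taking a majority vote over all sampled answers.
    return Counter(answers).most_common(1)[0][0]

The key idea is that textual and symbolic reasoning tend to fail in different ways, so voting across both pathways is more robust than relying on either alone.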

We’ve also implemented these as LlamaPack templates (a usage sketch follows the list):

1. Chain-of-Table LlamaPack: https://llamahub.ai/l/llama_packs-tables-chain_of_table?from=all

2. Mix-Self-Consistency LlamaPack: https://llamahub.ai/l/llama_packs-tables-mix_self_consistency?from=all
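As a rough sketch of trying the Chain-of-Table pack, assuming the pack name "ChainOfTablePack" and the constructor/run() signatures below (based on typical LlamaPack usage; check the pack’s README on LlamaHub for the exact interface):

import pandas as pd
from llama_index.core.llama_pack import download_llama_pack

# Download the pack from LlamaHub; "ChainOfTablePack" is the assumed class name.
ChainOfTablePack = download_llama_pack("ChainOfTablePack", "./chain_of_table_pack")

# A toy table to reason over.
df = pd.DataFrame({"year": [2020, 2021, 2022], "revenue": [1.2, 1.8, 2.4]})

# Constructor and run() signature are assumptions based on typical pack usage.
pack = ChainOfTablePack(table=df)
print(pack.run("Which year had the highest revenue?"))

Under the hood, the pack follows the paper’s recipe: the LLM repeatedly picks an atomic table operation (select rows, add a column, sort, etc.), applies it, and continues until the evolved table supports a direct answer.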

Timeline:
00:00-22:45 – Chain-of-Table
22:45-24:23 – Short Break (skip this part)
24:23-45:00 – Rethinking Tabular Data Understanding + Mix-Self-Consistency
45:00 – Q&A
