Cohere For AI Fireside Chats bring together leading researchers and rising stars in the field of machine learning to discuss their research and learning journeys. Research is inherently a human endeavour, and this discussion series provides insights from beginning to breakthrough.


This Fireside Chat features Jacob Hilton, Researcher at the Alignment Research Center. Beyza Ermis, Research Scientist at Cohere For AI, sits down with Jacob for a discussion on AI truthfulness, falsehoods, and hallucinations.


00:01 – Welcome and Introductions
01:10 – Could you share how your non-traditional path led you to ML?
04:00 – Tell us about your Deep Learning curriculum
06:00 – Could you tell us how you started your journey at OpenAI and about your time there?
07:38 – What was your involvement with ChatGPT?
08:37 – What are your thoughts on the public response to ChatGPT?
11:37 – What are your thoughts on the truthfulness of ChatGPT?
12:19 – What are your thoughts on the maturity of the tech right now in terms of commercial applications?
13:25 – How did you get interested in the truthfulness of LMs?
16:10 – Tell us more about the benchmark dataset you created
20:00 – LLM Hallucinations
21:35 – Why do you think hallucinations are so common?
23:25 – What are some ongoing challenges for truthfulness of LMs?
27:33 – Do you have any ideas on what metrics we can use to measure and evaluate truthfulness of models?
29:22 – How optimistic are you about progress on reducing hallucinations?
35:35 – What was your motivation in moving from OpenAI to the Alignment Research Center?
36:50 – Tell us about the Alignment Research Center, your team, and its goals
39:21 – Audience Q&A
