99 videos
69085 views
1500 subscribers
The AI Conference was established to embrace the potent capability of AI to revolutionize every facet of human existence, from daily living to broad societal structures.
Guided by the core tenets of transparency, diversity, and open-source promotion, we aim to provide a unique platform to exchange ideas, foster innovation, and challenge existing norms in the AI domain. Our mission is to mitigate the inherent biases often found in AI models, push for increased openness, and champion the democratic accessibility of AI technologies.
Uploads
- Speakers 2024
Yu Chen, Snowflake: Empowering Sales with LLM-Powered Data Insights and Workflow Automation
07.07.2025
Yu Chen, Senior Data Scientist, Sales Data Science Team, Snowflake
Empowering Sales with LLM-Powered Data Insights and Workflow Automation
Snowflake leverages Large Language Models (LLMs) to empower the sales team with efficient search and execution capabilities for customer-related metrics and business performance data.
By using data-to-text and task execution chatbots, salespeople can easily understand and interact with their data.
These chatbots enable users to search complex datasets, receive clear summaries, and execute tasks efficiently, all within a single platform.
This integration ensures that sales professionals have quick access to vital information and can manage their workflows seamlessly.
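The data-to-text workflow described above can be sketched as a minimal routing-and-summarization pipeline. This is an illustrative toy, not Snowflake's implementation: the metric names, router, and summarizer below are hypothetical stand-ins for what would be LLM-backed components in practice.

```python
# Hypothetical sketch of a data-to-text workflow: route a sales question to a
# metric lookup, then render the result as a plain-language summary.
# All names and values here are illustrative, not Snowflake's API or data.

METRICS = {
    "pipeline_value": 1_250_000,
    "deals_closed": 42,
}

def route_query(question: str) -> str:
    """Naive keyword routing; a production system would use an LLM here."""
    if "pipeline" in question.lower():
        return "pipeline_value"
    return "deals_closed"

def summarize(metric: str) -> str:
    """Render a metric as a short natural-language summary."""
    value = METRICS[metric]
    if metric == "pipeline_value":
        return f"Current pipeline value is ${value:,}."
    return f"{value} deals closed this quarter."

def answer(question: str) -> str:
    return summarize(route_query(question))
```

In a real deployment the router and summarizer would both be LLM calls, but the shape of the pipeline — question in, routed lookup, text summary out — stays the same.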
Subscribe to our channel for the latest news and announcements. 🚀
🎟️ Tickets: https://aiconference.com/
🛎️ Remember to hit the bell icon to stay notified!
Follow The AI Conference
Instagram: https://www.instagram.com/aiconfere...
Facebook: https://www.facebook.com/AIconferen...
LinkedIn: https://www.linkedin.com/company/th...
Twitter: https://twitter.com/aiconference
© The AI Conference 2025
Video Recorded at The AI Conference. Copyright, The AI Conference, All Rights Reserved
Aurimas Griciūnas, Neptune.AI: Observability in LLMOps pipeline - different levels of scale
25.06.2025
Aurimas Griciūnas, Chief Product Officer, Neptune.AI
Observability in LLMOps pipeline - different levels of scale
The term LLMOps has been around for just under 2 years. In the meantime, we have gone from using fine-tuned foundation models to running complex Agentic AI systems in production. Many breakthroughs are yet to be made and hardware limitations to be overcome.
In this talk, I will walk you through the LLMOps pipeline as if we were building an AI system from scratch, from building foundation models and fine-tuning them to observing complex Agentic AI systems in production. I will highlight the different levels of scalability required of observability infrastructure and tooling at each step, and why you might want to invest in it.
Prashanth Rao, Kùzu Inc: Unpacking Graph RAG: An overview of history, terminologies and examples
23.06.2025
Prashanth Rao, AI Engineer, Kùzu Inc
Unpacking Graph RAG: An overview of history, terminologies and examples
This talk provides an overview of the current state of Graph RAG (Retrieval-Augmented Generation) and its components. Graph RAG has generated considerable buzz recently, but at its core it is an extension of RAG that combines the power of graphs with vector search to enhance information access and response relevance. The talk offers a pragmatic perspective, uncovering the various stages involved in building and using Graph RAG and highlighting examples from recent studies that show tangible improvements when graph retrieval is combined with vector search.
You will also learn about some ongoing open source projects in Graph RAG that can help you get started with building, experimenting, and engaging with the rapidly growing community of practitioners who are interested in this space.
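The core retrieval pattern — vector search followed by graph-neighborhood expansion — can be sketched in a few lines. The embeddings and knowledge-graph edges below are hand-made placeholders, not any real Graph RAG framework or Kùzu's API.

```python
# Toy sketch of Graph RAG retrieval: rank documents by vector similarity,
# then expand each top hit's graph neighborhood to pull in linked context.
import math

# Placeholder 2-d embeddings for three documents.
DOCS = {
    "d1": [1.0, 0.0],
    "d2": [0.9, 0.1],
    "d3": [0.0, 1.0],
}
# Placeholder knowledge-graph edges between documents.
GRAPH = {"d1": ["d3"], "d2": [], "d3": ["d1"]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def graph_rag_retrieve(query_vec, k=1):
    """Vector search for the top-k docs, then graph expansion of each hit."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    hits = ranked[:k]
    expanded = list(hits)
    for h in hits:
        for neighbor in GRAPH[h]:
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded
```

The payoff is visible even in this toy: a query close to `d1` also surfaces `d3`, which vector similarity alone would have missed, because the graph records that the two are linked.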
Amit Jain, CEO & Co-Founder, Luma AI: Effects of Scaling in World Model Training
30.06.2025
Amit Jain, CEO & Co-Founder, Luma AI
Effects of Scaling in World Model Training
The bitter lesson teaches us that simple techniques at scale are the most effective. This has been proven with LLMs and has similarly held true in video models, where scale has unlocked truly emergent behaviors in 3D and is showing early glimpses of the future of multimodal intelligence.
We’ll discuss the approach to, and findings from, training Dream Machine, Luma’s SOTA video generation model.
Kanika Narang, Meta: On-Device LLMs: Power of Conversational AI for Mobile and Wearable Devices
09.07.2025
Kanika Narang, Research Scientist, Meta
On-Device LLMs: Unleashing the Power of Conversational AI for Mobile and Wearable Devices
Large Language Models (LLMs) have captured the attention of the tech world with their remarkable common-sense reasoning and generalizability. However, their large size and reliance on server round-trips make them resource-intensive and slow, which is problematic for mobile and wearable devices like smart glasses and smartwatches. Moreover, on-device computing can address privacy concerns by keeping sensitive data, such as text messages or photos, on the device itself. To tackle these challenges, we’ve developed more compact language models, ranging from 0.5B to 1.4B parameters. These models are designed to run on-device, delivering competitive performance on grounded conversational tasks while managing latency and memory usage effectively.
In this presentation, I’ll delve into our work on creating a versatile, on-device LLM, a distilled version of the LLAMA model tailored for conversational reasoning tasks. I’ll discuss our pretraining framework and our approach to fine-tuning, which uses LLM-synthesized, task-specific data in a fresh dialogue format. Finally, I’ll share our strategy for scaling our text-based conversational model to a multimodal model, enabling generative experiences such as composing text replies, document summarization, image captioning, and visual question answering on wearables and mobile devices.
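Distilling a large teacher into a compact student commonly uses a KL divergence between temperature-softened teacher and student distributions. The sketch below illustrates that generic objective over toy logits; it is not Meta's training code, and the temperature value is an arbitrary example.

```python
# Generic knowledge-distillation loss: the student is trained to match the
# teacher's temperature-softened output distribution. Toy logits only;
# no actual LLAMA weights or Meta training code involved.
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and grows as their distributions diverge, which is what drives the 0.5B-1.4B student toward teacher-like behavior during training.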
Ivan Lee, CEO, datasaur.ai: From Concept to Reality: Mastering LLMs from POC to Production
02.07.2025
Ivan Lee, CEO, datasaur.ai
From Concept to Reality: Mastering LLMs from POC to Production
As businesses move from LLM experimentation to real-world deployments in 2024, they face a critical balancing act between quality, speed, and cost.
Drawing from a decade of NLP experience with Fortune 100 companies, this talk navigates the complexities of moving from POC to production-ready LLM solutions, covering:
- Strategic LLM applications transforming industries from legal to healthcare and eCommerce
- Best practices for minimizing costs and risks such as hallucination issues
- Scaling for growing demands
- Techniques for future-proofing deployments as next-gen models like Llama 3 and GPT-5 emerge
- The crucial role of regression testing in safeguarding existing applications
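Regression testing for LLM applications often takes the form of a pinned "golden set" of prompts with property checks that re-run on every model or prompt change. The harness below is an illustrative sketch, not datasaur.ai's tooling; `model` is a stub to be swapped for a real client.

```python
# Illustrative regression harness for an LLM application: a golden set of
# prompt/property pairs is re-run after every model or prompt change, and any
# prompt whose output no longer satisfies its check is reported as a failure.

def model(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real client in practice."""
    if "refund" in prompt:
        return "Refunds are processed within 5 business days."
    return "I can help with that."

# Each entry pairs a prompt with a property its output must satisfy.
GOLDEN_SET = [
    ("How do I get a refund?", lambda out: "refund" in out.lower()),
    ("Hello!", lambda out: len(out) > 0),
]

def run_regression(model_fn):
    """Return the prompts whose outputs fail their checks."""
    failures = []
    for prompt, check in GOLDEN_SET:
        if not check(model_fn(prompt)):
            failures.append(prompt)
    return failures
```

Property checks (does the answer mention refunds? is it non-empty?) are deliberately looser than exact-string matches, since LLM outputs vary between runs while still needing to preserve the behaviors existing users depend on.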
Subhabrata Mukherjee: Building the first Safety-focused Conversational AI for Healthcare
23.06.2025
Subhabrata Mukherjee, Hippocratic AI
Building the first Safety-focused Conversational AI for Healthcare
Healthcare is facing a massive, global staffing crisis. The World Health Organization predicts a 10 million healthcare worker shortage by 2030, but stretched systems and underserved patients are already feeling the effects. Generative AI is the key to closing this staffing gap and ensuring more people can receive a level of care that has never existed before. With generative AI and large language models (LLMs) working alongside healthcare workers of all kinds, we are looking to improve access, equity, and outcomes.
Hippocratic AI is building the industry’s first safety-focused LLM for healthcare, specifically focused on non-diagnostic, patient-facing, real-time healthcare conversation. We present Polaris, an agentic trillion-parameter constellation system with collaborative specialization and safety guardrails for human-like conversation. We will talk about the architecture, specialized training and alignment, and the first comprehensive clinician evaluation of an LLM system for healthcare with thousands of U.S. licensed nurses and hundreds of U.S. licensed physicians. We will also share industry use-cases to showcase the best and safest ways to deploy this technology, and explore what AI- and LLM-powered healthcare could look like in the future.