Responsible AI: Interviews with Sam Charrington

Towards a Systems-Level Approach to Fair ML with Sarah M. Brown - #456

26.02.2021
Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island. In our conversation with Sarah, whose research focuses on fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum, a fairness forensics tool that surfaces bias and supports regular auditing of data, as well as her ongoing collaboration with a social psychologist exploring how people perceive ethics and fairness. Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions those tools are making. The complete show notes for this episode can be found at https://twimlai.com/go/456.

A Future of Work for the Invisible Workers in A.I. with Saiph Savage - #447

01.02.2021
Today we’re joined by Saiph Savage, a visiting professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM. We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation, we gain a better understanding of the “invisible workers,” the people doing the labeling work behind machine learning and AI systems, and the issues that arise with these jobs, including lack of economic empowerment and emotional trauma. We discuss ways to empower these workers and to push the companies employing them to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the Global South. The complete show notes for this episode can be found at https://twimlai.com/go/447

AI for Social Good: Why “Good” isn’t Enough with Ben Green - #368

23.04.2020
Today we’re joined by Ben Green, PhD candidate at Harvard, affiliate at the Berkman Klein Center for Internet & Society at Harvard, and research fellow at the AI Now Institute at NYU. Ben’s research focuses on the social and policy impacts of data science, particularly algorithmic fairness, municipal governments, and the criminal justice system. In our conversation, we discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning projects, papers, and research: a grounded definition of what “good” actually means, and a “theory of change.” We also talk through how he thinks about the unintended consequences associated with the application of technology to social good, and his theory of the relationship between technology and social impact. The complete show notes for this episode can be found at twimlai.com/talk/368.

Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark - #534

08.11.2021
Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “Physiognomic Artificial Intelligence,” in which he critiques studies that attempt to use faces, facial expressions, and other features to make determinations about people, a practice fundamental to facial recognition and one that Luke believes is inherently racist. Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) that Facebook will be shutting down its facial recognition system, and why it’s not necessarily the game-changing announcement it seemed on its… face. The complete show notes for this episode can be found at https://twimlai.com/go/534.

AI and the Responsible Data Economy with Dawn Song - #403

24.08.2020
Today we’re joined by Dawn Song, Professor of Computer Science at UC Berkeley. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs. In our conversation, we explore their goal of building a “platform for a responsible data economy,” which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data and enable businesses to better utilize data in a privacy-preserving and responsible way. We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis as a path towards AGI, and her work on privatizing coronavirus contact tracing data. The complete show notes for this episode can be found at twimlai.com/go/403.
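Differential privacy, one of the techniques mentioned above, can be illustrated with a tiny sketch. This is a generic example of the Laplace mechanism for a counting query, not anything from Oasis Labs; the records, predicate, and epsilon value are made up for illustration.

```python
import numpy as np

def dp_count(values, predicate, epsilon=0.5, rng=None):
    """Release a count with Laplace noise calibrated to the count's sensitivity (1)."""
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(v) for v in values)
    # Smaller epsilon means more noise and a stronger privacy guarantee.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 57, 23, 38]              # toy "private" records
print(dp_count(ages, lambda a: a >= 40))     # noisy count of records with age >= 40
```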

The Measure and Mismeasure of Fairness with Sharad Goel - #363

06.04.2020
Today we’re joined by Sharad Goel, an assistant professor in the Management Science & Engineering department at Stanford. Sharad, who also holds appointments in the computer science, sociology, and law departments, has spent recent years focused on applying machine learning to better understand and improve public policy. In our conversation, we dive into Sharad’s non-traditional path to academia, which includes extensive work on discriminatory policing practices like stop-and-frisk, leading up to his work on the Stanford Open Policing Project, which uses data from over 200 million traffic stops nationwide to “help researchers, journalists, and policymakers investigate and improve interactions between police and the public.” Finally, we discuss Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning,” which identifies three formal definitions of fairness in algorithms, examines the statistical limitations of each, and details how mathematical formalizations of fairness can be introduced into algorithms. Check out the complete show notes for this episode at twimlai.com/talk/363.
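For readers unfamiliar with how such formal fairness definitions are typically operationalized, here is a minimal sketch (not taken from the paper) that computes the per-group quantities behind three commonly discussed criteria: demographic parity, equalized odds, and predictive parity. The data and group encoding are hypothetical.

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Per-group metrics behind common formal fairness criteria.
    y_true, y_pred are 0/1 arrays; group is a 0/1 array marking group membership."""
    report = {}
    for g in (0, 1):
        m = group == g
        tp = np.sum((y_pred == 1) & (y_true == 1) & m)
        fp = np.sum((y_pred == 1) & (y_true == 0) & m)
        fn = np.sum((y_pred == 0) & (y_true == 1) & m)
        tn = np.sum((y_pred == 0) & (y_true == 0) & m)
        report[g] = {
            "selection_rate": float(y_pred[m].mean()),  # demographic parity compares these
            "tpr": tp / max(tp + fn, 1),                # equalized odds compares TPR...
            "fpr": fp / max(fp + tn, 1),                # ...and FPR across groups
            "ppv": tp / max(tp + fp, 1),                # predictive parity at the decision threshold
        }
    return report

# Toy usage with synthetic predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
print(fairness_report(y_true, y_pred, group))
```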

A Social Scientist’s Perspective on AI with Eric Rice - #511

19.08.2021
Today we’re joined by Eric Rice, associate professor at USC and co-director of the USC Center for Artificial Intelligence in Society. Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how a social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model. We explore a few projects he’s worked on, including HIV prevention among the homeless youth population in LA, a project he spearheaded with former guest Milind Tambe, as well as a project focused on using ML techniques to identify people in need of housing resources and ensure they get the best interventions possible. If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s TWIMLfest on Why AI Innovation and Social Impact Go Hand in Hand. The complete show notes for this episode can be found at https://twimlai.com/go/511.

Building Public Interest Technology with Meredith Broussard - 552

13.01.2022
Today we’re joined by Meredith Broussard, an associate professor at NYU & research director at the NYU Alliance for Public Interest Technology. Meredith was a keynote speaker at the recent NeurIPS conference, and we had the pleasure of speaking with her to discuss her talk from the event, and her upcoming book, tentatively titled More Than A Glitch: What Everyone Needs To Know About Making Technology Anti-Racist, Accessible, And Otherwise Useful To All. In our conversation, we explore Meredith’s work in the field of public interest technology, and her view of the relationship between technology and artificial intelligence. Meredith and Sam talk through real-world scenarios where an emphasis on monitoring bias and responsibility would positively impact outcomes, and how this type of monitoring parallels the infrastructure that many organizations are already building out. Finally, we talk through the main takeaways from Meredith’s NeurIPS talk, and how practitioners can get involved in the work of building and deploying public interest technology. The complete show notes for this episode can be found at https://twimlai.com/go/552.

Understanding AI’s Impact on Social Disparities with Vinodkumar Prabhakaran - 617

20.02.2023
How does bias creep its way into our AI systems? Is it the data, or the architect? Today we’re joined by Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research. In our conversation with Vinod, we discuss his two main areas of research: using ML, specifically NLP, to explore social disparities, and understanding how those same disparities are captured and propagated within machine learning tools. We explore a few specific projects, the first of which uses NLP to analyze interactions between police officers and community members, determining factors like level of respect or politeness and how they play out across a spectrum of community members. We also discuss his work on understanding how bias creeps into the pipeline of building ML models, whether from the data or from the person building the model. Finally, for those working with human annotators, Vinod shares his thoughts on how to incorporate principles of fairness to help build more robust models. Papers referenced in this episode include “On Releasing Annotator-Level Labels and Information in Datasets” and “Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations.” For the complete resource list, head over to https://twimlai.com/go/617.
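As an illustration of the annotator-level question raised in those papers, here is a small hypothetical sketch (not the authors' code) contrasting a majority-vote label with the full annotator-level label distribution, so disagreement on subjective items stays visible rather than being collapsed away.

```python
from collections import Counter

# Hypothetical annotations: item -> labels assigned by individual annotators.
annotations = {
    "comment_1": ["toxic", "toxic", "not_toxic"],
    "comment_2": ["toxic", "not_toxic", "not_toxic", "not_toxic"],
}

for item, labels in annotations.items():
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    distribution = {label: n / len(labels) for label, n in counts.items()}
    agreement = majority_count / len(labels)
    # The majority vote hides how contested the label is; the distribution keeps it.
    print(f"{item}: majority={majority_label}, agreement={agreement:.2f}, distribution={distribution}")
```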

How External Auditing is Changing the Facial Recognition Landscape with Deb Raji - #388

03.07.2020
Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute at New York University. Over the past week or two, there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition technology from Amazon, IBM, and Microsoft. There was also the release of PULSE, a controversial computer vision model that ultimately sparked a Twitter firestorm involving Yann LeCun and AI ethics researchers, including Timnit Gebru. The controversy echoed into the broader AI community, eventually leading to LeCun’s departure from Twitter. In our conversation with Deb, we dig into both of these stories in depth, discussing the origins of Deb’s work on the Gender Shades project, how subsequent work put a spotlight on the potential harms of facial recognition technology, and who holds responsibility for dealing with underlying bias issues in datasets. The complete show notes for this episode can be found at https://twimlai.com/talk/388.

Service Cards and ML Governance with Michael Kearns - 610

02.01.2023
Will model cards be useful at Amazon scale? Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the Department of Computer and Information Science at UPenn, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance and his role at Amazon. We then discuss the announcement of AWS AI Service Cards, Amazon’s take on “model cards” at a holistic, system level as opposed to an individual model level. We walk through the information represented on the cards, and explore the decision-making process around specific information being omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, what some of the current issues are around this topic, and what research he has seen (and hopes to see) addressing issues of “fairness” in large language models. Michael is also co-author of The Ethical Algorithm: The Science of Socially Aware Algorithm Design.
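To make the "system-level card" idea concrete, here is a purely illustrative sketch of the kind of record such a card might capture. The fields are guesses for illustration and do not reflect the actual AWS AI Service Card schema.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceCard:
    # Hypothetical fields; a real card would follow the publisher's own schema.
    service_name: str
    intended_use_cases: list[str]
    out_of_scope_uses: list[str]
    design_considerations: list[str] = field(default_factory=list)     # e.g. fairness, robustness notes
    deployment_best_practices: list[str] = field(default_factory=list)

card = ServiceCard(
    service_name="example-face-matching-service",  # made-up service name
    intended_use_cases=["identity verification with human review"],
    out_of_scope_uses=["surveillance without consent"],
    design_considerations=["evaluate match accuracy across demographic groups"],
)
print(card)
```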

AI Regulation and Automated Decisioning with Peter van der Putten - 699

26.08.2024
Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of applying academic fairness metrics in real-world AI applications. We dig into the key ethical principles behind the Act, its broad definition of AI, and how it categorizes various AI risks. We also discuss the practical challenges of implementing fairness and bias metrics in real-world scenarios, and the importance of a risk-based approach in regulating AI systems. Finally, we cover how the EU AI Act might influence global practices, similar to the GDPR's effect on data privacy, and explore strategies for closing bias gaps in real-world automated decision-making. Listen or watch the full episode at https://twimlai.com/go/699.

Responsible AI in the Generative Era with Michael Kearns - 662

22.12.2023
Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets with protections like differential privacy, offering a new approach to secure data handling in machine learning. For a complete list of links and references, head over to https://twimlai.com/go/662.

Privacy vs Fairness in Computer Vision with Alice Xiang - 637

10.07.2023
Today we’re joined by Alice Xiang, Lead Research Scientist at Sony AI and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns around unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests various solutions to address these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV and how emerging generative AI technologies exacerbate the problem; the importance of operationalizing ethical data collection, including appropriate consent, representation, diversity, and compensation; and the need for interdisciplinary collaboration in AI ethics amid growing interest in AI regulation, including the EU AI Act and regulatory activity in the US. Papers discussed include “Being ‘Seen’ vs. ‘Mis-Seen’: Tensions between Privacy and Fairness in Computer Vision” and “Reconciling Legal and Technical Approaches to Algorithmic Bias.” For this episode’s reference list, head over to https://twimlai.com/go/637.

AI for High-Stakes Decision Making with Hima Lakkaraju - #387

29.06.2020
Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and the Department of Computer Science. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of perturbation-based explainability techniques such as LIME and SHAP, how attacks on these explanation methods can be carried out, and what those attacks look like. We also discuss people’s tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin’s argument that we shouldn’t use black-box algorithms, and much more. For the complete show notes, visit twimlai.com/talk/387. For our continuing CVPR coverage, visit twimlai.com/cvpr20.
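For context on what a perturbation-based explanation does, here is a minimal LIME-style sketch (not the lime or shap libraries, and not code from the talk): perturb an input, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature attributions. The example black box and kernel width are made up.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box, x, n_samples=500, scale=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))          # perturbations around x
    y = black_box(Z)                                                      # black-box predictions
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))    # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)               # local linear surrogate
    return surrogate.coef_                                                # local feature attributions

# Example black box: a model's probability for class 1 (purely illustrative).
black_box = lambda Z: 1 / (1 + np.exp(-(Z[:, 0] - 0.5 * Z[:, 1])))
print(local_explanation(black_box, np.array([1.0, 2.0])))
```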

Machine Learning as a Software Engineering Enterprise with Charles Isbell - #441

05.01.2021
As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing. Charles gave an invited talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Master’s program in CS, which now has over 11k students enrolled, and the importance of making education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move beyond thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored. We also touch on the fallout from Timnit Gebru being “resignated,” the importance of having diverse voices and different perspectives “in the room,” and what the future holds for machine learning as a discipline. The complete show notes for this episode can be found at https://twimlai.com/go/441.

Assessing the Risks of Open AI Models with Sayash Kapoor - 675

11.03.2024
Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper, “On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of non-consensual intimate imagery (NCII) created using open diffusion models.

2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury

08.06.2020
Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like:
• Why is now such a critical inflection point in the application of responsible AI?
• How should engineers and practitioners think about AI ethics and responsible AI?
• Why is AI ethics inherently personal, and how can you define your own personal approach?
• Is the implementation of AI governance necessarily authoritarian?
• How do we balance idealism and pragmatism in the application of AI ethics?
We also cover practical topics like how and where you should implement responsible AI in your organization, and building the teams and processes capable of taking on critical ethics and governance questions.

#TWIMLfest: Coded Bias Screening Q&A

25.10.2020
This panel discussion explores the societal implications of the biases embedded within AI algorithms. The conversation covers examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved. Panelists, including director Shalini Kantayya, Meredith Broussard, and Deb Raji, each share insight into their experience working on and researching bias in AI systems and the sometimes oppressive and dehumanizing impact they can have on people in the real world. Register for the screening at https://twimlai.com/twimlfest/sessi...

Pushing Back on AI Hype with Alex Hanna - 649

02.10.2023
Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, the incentives driving people towards rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how the institute has crafted its research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s work supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the paper “Do Datasets Have Politics?,” which codes and qualitatively analyzes computer vision datasets to uncover the politics inherent in dataset creation and the challenges involved in building them. For a complete list of links and references, head over to https://twimlai.com/go/649.

How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird - 691

01.07.2024
Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents along with her thoughts on the rapidly evolving GenAI landscape. Listen or watch the full episode at https://twimlai.com/go/691.

Responsible Data Science in the Fight Against COVID-19 (Coronavirus)

23.04.2020
"We are at a critical point in the global response to COVID-19 – we need everyone to get involved in this massive effort to keep the world safe." - WHO Director-General Dr. Tedros Adhanom Ghebreyesus Since the beginning of the coronavirus pandemic, we’ve seen an outpouring of interest on the part of data scientists and AI practitioners wanting to make a contribution. At the same time, some of the resulting efforts have been criticized for promoting the spread of misinformation or being disconnected from the applicable domain knowledge. In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. The resources mentioned in this conversation can be found at twimlai.com/rdscovid.

Mitigating Discrimination and Bias with AI Fairness 360 - Democast #5

06.05.2020
The Democast is BACK! This month, we had the pleasure of chatting with Karthi Natesan Ramamurthy, a research staff member at the IBM TJ Watson Research Center and one of the architects of today’s demo topic, AI Fairness 360. We had the opportunity to get an early look at the toolkit leading up to, and during, TWIMLcon: AI Platforms last year, where Trisha Mahoney presented on the topic. You can find our conversation with Trisha here, and for her presentation, you can purchase the TWIMLcon video pass here. In our conversation with Karthi, we explore some of the ins and outs of the toolkit, including:
• The decision to open-source the toolkit
• The various “bias mitigation” algorithms included in the toolkit
• “Fairness” metrics
• Use cases for AI Fairness 360
• The paper behind the toolkit: AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
For the resources mentioned in this video, visit https://twimlai.com/democast-5-miti...
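For a rough sense of what using the toolkit looks like, here is a sketch of the typical AI Fairness 360 flow: measure a dataset-level fairness metric, apply a pre-processing mitigation, then re-measure. The class and method names follow the toolkit's documentation as best I recall, and the example assumes the toolkit and the UCI Adult data files are installed, so treat it as illustrative rather than a verified recipe.

```python
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

priv = [{"sex": 1}]      # privileged group definition, per the toolkit's convention
unpriv = [{"sex": 0}]    # unprivileged group definition

data = AdultDataset()    # assumes the Adult dataset files have been downloaded per the docs
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv, privileged_groups=priv)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Reweighing is one of the pre-processing bias mitigation algorithms in the toolkit.
reweighed = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv).fit_transform(data)
metric_after = BinaryLabelDatasetMetric(reweighed, unprivileged_groups=unpriv, privileged_groups=priv)
print("Disparate impact after reweighing:", metric_after.disparate_impact())
```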

Trends in Fairness and AI Ethics with Timnit Gebru - #336

06.01.2020
Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss the diversification of NeurIPS, with groups like Black in AI, WiML, and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more. We want to hear from you! Send us your thoughts on the year that was 2019 via Twitter @samcharrington or @twimlai. The complete show notes for this episode can be found at twimlai.com/talk/336. Check out the rest of the series at twimlai.com/rewind19!

Model Explainability Forum

12.08.2020
The use of machine learning in business, government, and other settings that require users to understand the model’s predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to the development of a thriving field of model explainability research and practice. In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest will share their unique perspective and contributions to thinking about model explainability in a practical way. Join us as we explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more.

AI’s Legal and Ethical Implications with Sandra Wachter - 521

23.09.2021
Today we’re joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford. Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability.” In our conversation, we explore algorithmic accountability in three segments: explainability and transparency; data protection; and bias, fairness, and discrimination. We discuss how the thinking around black boxes changes when regulation and law are applied, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test she helped develop to assess bias in models, which was recently adopted by Amazon. The complete show notes for this episode can be found at https://twimlai.com/go/521.
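For a feel of what a conditional demographic disparity check involves, here is a hedged sketch of one common formulation: the disadvantaged group's share among rejected versus accepted applicants, averaged over strata of a legitimate conditioning attribute. The data are invented, and this is not Amazon's implementation.

```python
import pandas as pd

# Invented decisions: demographic group, a legitimate stratifying attribute, outcome.
df = pd.DataFrame({
    "group":    ["a", "a", "b", "b", "a", "b", "a", "b"],
    "stratum":  ["x", "x", "x", "x", "y", "y", "y", "y"],   # e.g. role applied for
    "accepted": [1, 0, 0, 0, 1, 1, 0, 0],
})

def demographic_disparity(sub, disadvantaged="b"):
    """Share of the disadvantaged group among rejections minus its share among acceptances."""
    rejected = sub[sub["accepted"] == 0]
    accepted = sub[sub["accepted"] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    return (rejected["group"] == disadvantaged).mean() - (accepted["group"] == disadvantaged).mean()

# Weight each stratum's disparity by its size and average across strata.
cdd = sum(len(sub) * demographic_disparity(sub) for _, sub in df.groupby("stratum")) / len(df)
print("Conditional demographic disparity:", round(cdd, 3))
```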

Fairness and Robustness in Federated Learning with Virginia Smith - #504

26.07.2021
Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques involved. We dig into her ICML paper, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means here in contrast to AI ethics, the particulars of the failure modes, the relationship between the models and the objectives being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, covering how the proposed method makes data heterogeneity beneficial, how that heterogeneity is characterized, and some applications of FL in an unsupervised setting.
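The personalization idea in Ditto can be sketched in a few lines: each client k trains a personal model v_k on its own loss while a proximal term pulls it toward the shared global model w. This is a paraphrase of the objective as I understand it, not the authors' code; the toy quadratic loss and hyperparameters are made up.

```python
import numpy as np

def ditto_personalization_step(v_k, w_global, grad_local_loss, lam=0.1, lr=0.1):
    """One gradient step on F_k(v_k) + (lam / 2) * ||v_k - w_global||^2."""
    grad = grad_local_loss(v_k) + lam * (v_k - w_global)
    return v_k - lr * grad

# Toy client: quadratic local loss F_k(v) = 0.5 * ||v - c_k||^2 with optimum c_k.
c_k = np.array([2.0, -1.0])
grad_local_loss = lambda v: v - c_k
v_k = np.zeros(2)                        # personal model
w_global = np.array([0.5, 0.5])          # global model, held fixed in this sketch
for _ in range(500):
    v_k = ditto_personalization_step(v_k, w_global, grad_local_loss)

# With lam > 0 the personal model settles between the client optimum and the global model.
print(v_k)  # analytically (c_k + lam * w_global) / (1 + lam)
```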

Algorithmic Injustices and Relational Ethics with Abeba Birhane - #348

13.02.2020
Today we’re joined by Abeba Birhane, PhD student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics. We caught up with Abeba, whose paper received the Best Paper award at the most recent Black in AI Workshop at NeurIPS, to go in-depth on the paper and the thought process around AI ethics. In our conversation, we discuss the “harm of categorization” and how the thinking around these categorizations should be approached, how ML generally fails to account for the ethics of various scenarios and how relational ethics could address this, her most recent paper, “Robot Rights? Let’s Talk about Human Welfare Instead,” and much more. Check out our complete write-up and resource page at twimlai.com/talk/348.