Around nearly every corner of the tech space, you can hear the words “Artificial Intelligence (AI).” As AI adoption grows across sectors, including education, business, and the court system, questions about how to train and deploy these technologies on unbiased data are on many experts’ minds.

While innovation in this space creates new possibilities across sectors, this emerging technology also raises serious ethical concerns about bias, particularly for marginalized communities.

With more companies competing to launch the next chatbot, it is critical to step back and consider how AI trained on biased data could continue to harm women, people of color, LGBTQ+ people, the disabled community, and other marginalized groups. As the technology sector enters an AI “arms race,” diversity, inclusion, and the public interest must be at the center of its development.

Join the Internet Law & Policy Foundry and artificial intelligence experts on July 28, 2023, at 1 p.m. ET to learn more about the types of bias present in AI, how and why that bias arises, and what civil society, technologists, and policymakers can do to address it in this emerging technology.

Our exciting panel features:

– Jiahao Chen — Founder and CEO, Responsible Artificial Intelligence, LLC
– Amber Ezzell — Policy Counsel, Future of Privacy Forum
– Juhi Koré — Digital Projects, United Nations Development Programme Chief Digital Office

The Internet Law & Policy Foundry (“The Foundry”) is a professional development organization for early to mid-career law and policy professionals passionate about disruptive innovation. Fellows are responsible for the planning, execution, and substance of all Foundry initiatives.
