Michael Madaio of Microsoft Research, NYC, gave a talk titled “Human-Centered Approaches to Supporting Fairness in AI”. AI-industry practitioners are increasingly asked to adhere to principles for fair and responsible AI. The problem is that current responsible-AI efforts concentrate on the training and testing of AI algorithms, neglecting the rest of the AI design lifecycle — what happens during design, prototyping, and post-deployment. That is why Microsoft developed a comprehensive set of checklists, one for each of its AI design phases: envision, define, prototype, build, launch, and evolve. Across different deployments within the company, they found that fairness is deeply contextual (e.g., the same video analytics system might have very different fairness requirements in the US than in Europe). As such, one-size-fits-all checklists do not work. The next logical step is to figure out how to tailor checklists to each context. Yet even as checklists become more customized, another challenge remains: how to embed responsible AI in formal corporate processes. As Debra Meyerson puts it, that could be achieved by “tempered radicals” – employees who “slowly but surely create corporate change by pushing organizations through persistent small steps.”