The Cohere For AI Regional Asia group hosts Nedjma Ousidhoum (Postdoctoral Research Associate, University of Cambridge) to present “What is Needed Vs. What is Built in NLP: Toxic Language Detection and Automated Fact-checking Models as Use Cases”

There has been rising interest in automating toxic language detection and fact-checking to support experts in online moderation and fact verification. However, in the absence of clear definitions of crucial terms and of experts’ needs, one may ask how we can achieve the impact we aim for and build robust tools for toxic language detection and automated fact-checking. In this presentation, Nedjma will share insights and lessons learned from past and ongoing work on toxic language detection and automated fact-checking. Nedjma will discuss (1) the construction of multilingual resources for toxic language detection and related problems (e.g., choice of labels, selection bias) and (2) work on fact-checking. Finally, Nedjma will share mistakes made along the way and how they are being addressed in ongoing projects (e.g., the AfriHate project, work on fact-checking).
