AI4Dignity – Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation

Location

Sasquatch Room 124 C

Start Date

22 April 2023, 10:30 AM

End Date

22 April 2023, 11:45 AM

Publication Date

2023

Disciplines

Arts and Humanities | Law | Social and Behavioral Sciences

Description

In our research we draw on Udupa and Pohjonen's (2017) anthropological concept of 'extreme speech' rather than the more commonly used term 'hate speech'. The extreme speech approach refers to expressions that challenge and stretch the boundaries of legitimate speech along the twin axes of truth/falsity and civility/incivility. In developing this concept, the authors depart from assumptions that civility, politeness, or abuse are universal features of communication with little cultural variation. Instead, they point to the situatedness of online speech forms in different cultural and political contexts (Pohjonen and Udupa 2017) and emphasize practice, that is, what people do in relation to media, in order to avoid predetermining the effects of online extreme speech as vilifying, polarizing, or lethal (Couldry 2010).

Online extreme speech has emerged as a significant challenge for democratic societies worldwide. Deploying AI to detect, slow down, and remove extreme content is expected to bring scalability, reduce costs, and lessen human discretion and emotional labor. Yet AI tools for extreme speech detection that are globally applicable, inclusive, and still resource-efficient are lacking. The two key challenges in using AI for content moderation are that no catch-all algorithm works across different cultural contexts, and that hate groups have managed to escape keyword-based machine detection through misspellings, satire, and coded language.
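To illustrate the second challenge, the following minimal Python sketch (hypothetical, not from the project; the blocklisted term is a placeholder, not drawn from our data) shows how an exact-match keyword filter is trivially evaded by misspellings and coded spellings:

```python
# Naive keyword-based moderation: matches exact tokens only.
BLOCKLIST = {"slurword"}  # placeholder term, not from the dataset

def naive_keyword_filter(text: str) -> bool:
    """Flag a text if any blocklisted keyword appears as an exact token."""
    return any(token in BLOCKLIST for token in text.lower().split())

print(naive_keyword_filter("a slurword here"))   # True: exact match is caught
print(naive_keyword_filter("a s1urword here"))   # False: misspelling evades
print(naive_keyword_filter("a slur-word here"))  # False: coded variant evades
```

This brittleness is one reason the project pairs machine detection with human collaborators who recognize context-dependent and coded expressions.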

Our proof-of-concept project 'AI4Dignity', funded by the European Research Council and hosted at LMU Munich with PI Prof. Dr. Sahana Udupa, proposes ways to develop context-sensitive frameworks for AI-assisted content moderation that are centered on human collaboration. Our approach aims to evolve responsible practices and focuses on the use of AI for describing and detecting problematic content online. We do so by building a process model for bottom-up coding: we partner with fact-checkers from four countries (India, Brazil, Kenya, and Germany) as critical interlocutors in the fight against digital hate and develop context-sensitive responses to extreme speech (Udupa et al. 2021; Udupa et al. 2022; Maronikolakis et al. 2022). To this end, the fact-checkers collected and labelled extreme speech passages (derogatory, exclusionary, or dangerous speech) and marked the target groups for each instance (e.g., caste, ethnicity, gender, language group, national origin, religious affiliation, sexual orientation), allowing for a fine-grained analysis of the different extreme speech discourses in our dataset.
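As a rough illustration of this labeling scheme (a sketch under my own assumptions; the field names are illustrative, not the project's published data format), each annotated passage can be represented as a record pairing a speech-type label with one or more target-group tags:

```python
# Illustrative record for one annotated passage, as described above.
# Field names and validation are assumptions, not the project's schema.
from dataclasses import dataclass, field

SPEECH_TYPES = {"derogatory", "exclusionary", "dangerous"}
TARGET_GROUPS = {
    "caste", "ethnicity", "gender", "language group",
    "national origin", "religious affiliation", "sexual orientation",
}

@dataclass
class AnnotatedPassage:
    text: str                 # the collected extreme speech passage
    country: str              # India, Brazil, Kenya, or Germany
    speech_type: str          # one of SPEECH_TYPES
    target_groups: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Enforce the closed label sets sketched above.
        assert self.speech_type in SPEECH_TYPES
        assert all(g in TARGET_GROUPS for g in self.target_groups)
```

Structuring each passage this way is what enables the fine-grained, per-target-group analysis mentioned above.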

In my presentation I will explain the AI4Dignity project and its steps, focusing on the concept of 'extreme speech' and on a brief description and analysis of the collected passages. Alongside other topics such as anti-immigrant extreme speech, we found online misogyny to be prominent in our dataset.

Session Title

Promotion of Hate and Violence Through Social Media and Mainstream Sources

Type

Panel
