Haters gonna hate? Online haters just regular users, according to study

A study published in Scientific Reports reveals that online hate speech is not the prerogative of “pure haters”. On the contrary, offensive and even violent language is often used by regular users who unleash hateful comments in certain contexts. The study was conducted by researchers at Ca’ Foscari University of Venice, together with Agcom and the Jožef Stefan Institute of Ljubljana, who analysed one million comments on Covid-19-related YouTube videos.

To monitor the frequency of hate speech across such a large corpus of data, the team, coordinated by Ca’ Foscari researcher Fabiana Zollo, devised a machine learning model that classified each comment as appropriate, inappropriate, offensive or violent, depending on the language it employed.
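As an illustration of this kind of four-way text classification, here is a minimal Python sketch using TF-IDF features and logistic regression. It is not the study’s actual model: the paper’s features, algorithm and training data are not described here, and the example comments and labels below are invented.

```python
# Minimal sketch of a four-class comment classifier, in the spirit of the
# approach described above. NOT the study's actual model: the training
# data, features and algorithm here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples; a real system would need thousands of
# human-annotated comments per class.
train_comments = [
    "Thanks for the clear explanation, very helpful video.",
    "This is clickbait garbage, what a waste of time.",
    "Only an idiot would believe this nonsense.",
    "People like you deserve to be hurt.",
]
train_labels = ["appropriate", "inappropriate", "offensive", "violent"]

# TF-IDF over word unigrams and bigrams feeding a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(train_comments, train_labels)

# Assign a new comment to one of the four categories.
print(model.predict(["What a great initiative, well done!"])[0])
```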

“Hate speech is one of the most challenging issues of the web,” says Matteo Cinelli, first author of the study and post-doc researcher at Ca’ Foscari. “It encourages violence towards specific social groups, to the extent that social networking platforms and governments are looking for solutions to this problem.”

The study revealed that only 32% of the comments classified as “violent” were actually removed or made unavailable by the platform’s moderation team or by the author. It also offers data that can be useful in developing strategies to understand and curb this phenomenon.

Among the 345,000 authors of the comments that were analysed in the study, the researchers could not identify any real “keyboard warriors” — i.e., users who are only interested in spreading hate online. On the contrary, the phenomenon of hate speech seems to involve regular users who are occasionally triggered to use toxic language. 

“Apparently, the use of offensive and violent language is triggered by external factors,” says Fabiana Zollo. “The analysis of these factors is certainly crucial if we want to identify the most effective tools to curb the phenomenon of online hate speech.”

Researchers analysed one million comments on “reliable” and “questionable” YouTube channels, with the latter being more likely to spread misinformation. They found that 1% of such comments could be classified as hate speech.

On average, users who comment on reliable channels use more toxic, offensive and violent language than users who comment on questionable channels. On the other hand, the study also found that language tends to degenerate when users comment in a “bubble” that is unfamiliar to them and contrary to their views.

“Conversations become increasingly toxic as they grow longer,” Cinelli adds. “This result is in keeping with Godwin’s Law, the well-known adage coined by Mike Godwin in the 1990s, which suggests that there is a correlation between the toxicity of an online discussion and its length.”
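The length-toxicity relationship Cinelli describes can be checked with a simple rank correlation. The snippet below is purely illustrative: the thread data, toxicity scores and field names are invented, and it does not reproduce the study’s actual methodology.

```python
# Illustrative check of the length-toxicity relationship described above,
# assuming a hypothetical dataset of comment threads in which each comment
# carries a toxicity score in [0, 1]. All data and field names are made up.
from statistics import mean
from scipy.stats import spearmanr

threads = [
    {"length": 3,  "toxicity_scores": [0.05, 0.10, 0.08]},
    {"length": 12, "toxicity_scores": [0.10, 0.20, 0.35, 0.30] * 3},
    {"length": 40, "toxicity_scores": [0.25, 0.45, 0.50, 0.60] * 10},
]

lengths = [t["length"] for t in threads]
mean_toxicity = [mean(t["toxicity_scores"]) for t in threads]

# A positive Spearman rank correlation would be consistent with longer
# conversations tending to become more toxic.
rho, p_value = spearmanr(lengths, mean_toxicity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```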

The study was conducted within the framework of IMSyPP (Innovative Monitoring Systems and Prevention Policies of Online Hate Speech), a two-year European project that started in March 2020. The project aims to support a data-driven approach to hate speech regulation, prevention and awareness-raising.

Author: Enrico Costa / Translator: Joangela Ceccon