Illustration: Pablo Blasberg
The algorithm that facilitates the circulation of hate speech is becoming a boomerang for Facebook and Instagram, but complaints indicate that Mark Zuckerberg’s company has decided not to modify it because the mechanism generates interactions, even though it attacks vulnerable groups and contradicts the stated goal of creating a “meaningful and healthy” network, Ezequiel Ipar, director of the Laboratory for Studies on Democracy and Authoritarianism (LEDA), said on Saturday.
Texts promoting hate speech apparently proliferated with the modification of Facebook’s algorithm, the true manager of interaction among its users.
The issue gained force when Frances Haugen, a former employee of the company, called for regulation of the social media giant, which she accused before United States legislators of financing its profits at the expense of users’ safety.
Haugen revealed that Facebook knows its sites are potentially harmful to the mental health of young people. “Almost no one knows what happens inside Facebook. They hide information from the public, their shareholders and governments,” she added.
From Argentina, at the LEDA of the National University of San Martín (Unsam), together with researchers from Conicet, Ipar and his team are investigating these networks and the consequences of applying the algorithm designed by Zuckerberg’s company.
Haugen said Facebook “can account for and recognize between 3 and 5 percent of the hate speech in the posts of its 3 billion users. But the controls appear to work in only 50 languages, and Facebook has users in 5,500. That is, 5,450 live in a digital Far West,” Ipar said.
“Hardly anyone knows what happens inside Facebook. They hide information from the public, their shareholders and governments.”
Frances Haugen, former Facebook employee
As an example of hate speech and the lack of network controls, in October 2018 The New York Times (NYT) reported that the Burmese military had used Facebook to justify an ethnic cleansing against the Rohingya, a Muslim minority, arguing that Islam posed a threat to Buddhism.
Ipar also recalled “cases of incitement to inter-ethnic violence in Ethiopia”.
In November 2019, the BBC reproduced the testimony of the Ethiopian athlete Haile Gebrselassie, who denounced Facebook as responsible for the killing of 78 people due to “the distribution of false news,” Ipar explained.
“They began to look for where the hate speech had been generated, and they discovered that it was through posts on Facebook,” the researcher told Télam.
“It was not radio or television, nor a dictator speaking from an official medium; it was from the capillarity of Facebook that the feeling of fear was generated,” he added.
But in the way Facebook tried to solve these problems, “it ended up exacerbating them,” he said of the company’s reformulation of the algorithm.
“It was not radio or television, nor a dictator speaking from an official medium; it was from the capillarity of Facebook that the feeling of fear was generated”
Ezequiel Ipar – Photo: Eliana Obregón
Here are the central passages from Télam’s interview with Ipar.
Télam: What is the goal of the algorithm?
Ezequiel Ipar: To connect people and to know what a user may be interested in, what they will focus their attention on. For this reason, Facebook is said to dominate individuals’ attention. That is what the algorithm seeks: to capture your attention and to connect users with one another.
T.: How do you connect this function of the algorithm with the spread of hate speech?
EI: When interactions between individuals began to decline, in 2017-2018, Facebook redesigned its algorithm so that, they said, users would have “a healthier and more meaningful experience.” What it would privilege would no longer be advertising, but whatever your friends found interesting, to generate more “likes,” more shared content and more comment traffic. That is how they detected that speech inciting hatred began to stand out in the interactions. An interaction tied to the narcissism of hate deepened, but they did not take this into account because the algorithms quantify users’ interactions and reactions, not the quality or content of the responses.
The algorithm facilitates the circulation of messages, and hate speech produced more interactions on the network. Between the healthiest and the most significant, the algorithm promotes the significant in order to generate more traffic. So it is very likely that Facebook’s algorithm is fueling hate campaigns.
“They themselves are suggesting that for Facebook to be ‘meaningful and healthy’ now requires some kind of evaluation and supervision, knowing what’s going on online, and external regulation.”
T.: The algorithm evaluates the intensity of circulation of messages, regardless of their content. It is a mathematical fact without human connotation. Why doesn’t Facebook modify its algorithm?
EI: Because they verified that what most energizes the circulation of messages is precisely messages with the connotations of hate speech.
T.: What do they do with that data, knowing that they end up facilitating these types of messages?
EI: We are right at that moment. Haugen realized that Facebook knew what it was mobilizing with its algorithm, and also that it was willing to do nothing to modify it. Now there is a curious response from Facebook, because they are beginning to suggest the need for some state regulation, having understood that there is something about the digital space itself, which they created, that is inherently uncontrolled and can begin to affect the platform and their business. I think they also realized the risk that a good percentage of their users will withdraw from the networks because of this circulation of hate speech and violence. They themselves are suggesting that for Facebook to be “meaningful and healthy” now requires some kind of evaluation and supervision, knowing what is going on online, and external regulation.
“I imagine they refuse to change it because they believe that violence and hate speech are a social problem that is expressed on the social network”
T.: But they are not allowing that window of observation.
EI: Not yet, and that is the conflict we are in now: to what extent, out of this crisis, Facebook will make the information available to it public, and to what extent it is willing to intervene in the network as its own studies suggest. These documents show that there are professionals at Facebook who already have alternatives for curbing the circulation of hate speech, but they are structural changes that could put part of the business at stake. One of their teams studied violence, harassment and hate speech in the algorithm, and suggested removing the possibility of sharing malicious content: not deleting such posts, but making them impossible to share. It seems they tried it and it worked, but they decided not to adopt it. Reportedly, this internal document reached the company’s highest authorities, and the response was that it was a very good solution but that they would keep it as an emergency alternative. I imagine they refuse to change it because they believe that violence and hate speech are a social problem that is expressed on the social network, not a problem of the social network itself.
“I imagine they refuse to change it because they believe that violence and hate speech are a social problem that is expressed on the social network.” Photo: Eliana Obregón
Millennials and people over 75, the main propagators of hate speech in Argentina
The Millennial generation and the Silent generation, as those over 75 are known, are the ones that recirculate the greatest amount of hate speech on social networks, according to research carried out by the LEDA and released this Saturday.
“We understand hate speech as any type of speech delivered in the public sphere that seeks to promote, incite or legitimize discrimination, dehumanization or violence against a person or a group of people based on their membership in a religious, ethnic, national, political, racial, gender or any other social identity group,” Ezequiel Ipar said.
And he added that they are speeches that “frequently generate a cultural climate of intolerance and hatred and, in certain contexts, can provoke aggressive, segregationist or genocidal practices in civil society.”
At LEDA, they investigated indicators of hate speech (DDO) in the Argentine digital sphere through a survey of 3,140 cases among people over 16 years of age, conducted between November 27, 2020 and February 3, 2021.
Among the most significant data of the research, it was highlighted that:
• The DDO index was constructed from three circulating discourses: one racist, with very strong segregationist connotations; another, critical of ideological positions, that discriminates against the LGBTIQ+ collective; and a third, dehumanizing toward foreigners.
• Regarding the age variable, they found it “striking” that Millennials (24 to 40 years old) register the highest degree of agreement and willingness to emit or replicate hate speech, at 31.1 percent, with 51 percent disapproving. Generation X (41 to 55 years) approves in 25.5 percent of the cases consulted and disapproves in 55.3 percent. The lowest approval and highest disapproval figures belong to the Baby Boomer generation (56 to 74 years), at 19.6 and 64.3 percent.
• By educational level, the DDO index revealed that the population with incomplete high school is the one that most approves of or uses hate speech, at 30.1 percent, while the group that most rejects it, at 68.2 percent, is the population with complete postgraduate education.
• By occupation, employers and business owners are the group that most promotes hate speech, at 33.4 percent, while those who most disapprove, 61 percent of those interviewed, are the self-employed.