In January 2018, Facebook announced an algorithm update intended to increase the visibility of “meaningful social interactions” among friends and family, following a similar change made in 2016. The aim was to revive participation in light of the decline registered over the previous year, which had raised fears that users would stop using the application. As a consequence, traffic referred to media outlets plummeted, while the production and spread of divisive, misleading and harmful content soared, something the platform acknowledged internally.
That is what The Wall Street Journal reports, based on internal documentation in which Facebook employees reflected that the changes made the social network worse and generated negative externalities for society on several fronts. On the one hand, they suddenly boosted toxic posts tied to political positions, which pushed some parties to make their messaging more extreme and negative in order to exploit that window of reach; on the other, the circulation of false or inaccurate information skyrocketed, coinciding with a pronounced drop in the visibility of content produced by media outlets, which had to start paying to promote their content.
The documents held by Rupert Murdoch’s financial newspaper indicate that Mark Zuckerberg was alerted to these problems and rejected the proposed solutions to address them. He stuck to his theory of reducing passive consumption on Facebook, which quickly undercut the pivot to video that more than a few outlets had made shortly before to take advantage of the momentum the platform was offering.
What was supposed to improve levels of interaction on the platform through closer conversations instead opened a window of opportunity for bad actors.
The root of the problem, according to the internal files, was the very formula that determined whether any given piece of content was amplified. By design, it promoted the most controversial posts, since they were the ones that generated the most interactions. In practice, this meant that anything provoking unhealthy reactions had a greater chance of appearing in front of more people, which improved internal metrics at the expense of the many users who felt that the quality of what Facebook showed them had declined.
That is the same complaint that Jonah Peretti, founder of BuzzFeed, raised in an email to a Facebook executive a few months after the changes. In it, he noted that he had studied his own numbers and those of other outlets and concluded that the algorithm did not actually reward content that promoted meaningful social interactions, but rather disproportionately divisive posts. In that context, his journalists felt forced to choose between producing bad content and accepting poor performance, a dilemma that shaped newsroom activity until recently.
These controversies are partly responsible for the series of tweaks Facebook has made to its algorithm since 2018, which have reduced the visibility of media outlets and political posts. The reach of the latter is now in question in Spain, since the social network is trying to cut it back to offer a less contentious experience, as it has been doing in the US.
Precisely one of the negative milestones reflected in the internal information surfaced by The Wall Street Journal concerns the conversation about Catalonia in the months following the failed declaration of independence in October 2017. Facebook researchers found that all parties raised their tone on the platform to get their messages to more people. This is consistent with the 43% increase in insults and threats on public pages dedicated to politics and social affairs in Spain over the same period, according to an investigation by the digital risk protection company Constella Intelligence.