For Facebook, reducing the reach of misleading posts has been one of the measures it has promoted the most in this regard, but also one of those that has generated the most problems, especially on matters of public controversy such as elections or health issues related to COVID-19.
In this regard, the social network's leaders have said that its Artificial Intelligence systems improve every year at detecting inappropriate content. Last August, Mark Zuckerberg even announced that political content would be reduced in the main feed to restore the platform's lighter tone.
However, it is important to remember that content moderation on the platform depends not only on technological systems but also on human staff, and last year the Wall Street Journal revealed that the company dispensed with part of the team responsible for detecting violent content.
According to the newspaper, Facebook reduced the time human reviewers spent on hate speech complaints and made adjustments that lowered the number of complaints filed. This also gave the impression that the AI had been successful in enforcing the company's rules.
While this latest case involved no malicious intent and was a genuine mistake, Sahar Massachi, a former member of Facebook's Civic Integrity team, said such incidents demonstrate why more transparency is needed from social media platforms about their algorithms, something that Frances Haugen, the whistleblower who released the Facebook Papers, also demanded last year.