He also emphasized that the team in charge of moderating and reviewing content on the platform has grown to 40,000 people.
However, The Wall Street Journal alleged that Facebook was aware of serious issues such as false COVID-19 information and negative emotional effects on users, but delayed the resolution for fear of weakening engagement with its platforms. Facebook executive Nick Clegg issued a rebuttal over the weekend, accusing the newspaper of “deliberate mischaracterizations of what we are trying to do.”
“In the past, we didn’t address security and safety challenges early enough in the product development process,” the company said. “But we have fundamentally changed that approach. Today, we incorporate teams that focus specifically on safety and security issues directly into product development teams, allowing us to address these issues during the product development process, not after it.”
The new figures indicate that the company has quadrupled its staff in this area since 2017, when it employed 10,000 people for this purpose and promised to double that number within a year.
Although the company claims to have spent several billion dollars, it does not specify how the investment has been allocated, noting only that in addition to expanding the moderation team, it has invested in improving its artificial intelligence.
“Our AI systems have improved to keep people safer on our platform, for example by proactively removing content that violates our hate speech policies. We now remove 15 times more content of this type on Facebook and Instagram than when we started reporting it in 2017,” said the company, which added that thanks to these filters, 3 billion fake accounts were blocked between January and June 2021.