    Facebook exposed inappropriate content for six months by mistake

    By Stanley Bowers | April 2, 2022

    For Facebook, reducing the reach of misleading posts has been one of the measures the company has promoted most heavily, but also one of those that has generated the most problems, especially around publicly controversial issues such as elections or health topics like COVID-19.

    In this sense, the social network's leadership has said that its artificial intelligence systems get better every year at detecting inappropriate content. Last August, Mark Zuckerberg even announced that political content would be reduced in the main feed to restore a happier tone to the social network.

    However, it is worth remembering that content moderation on the platform depends not only on technological systems but also on human staff, and last year the Wall Street Journal revealed that the company cut part of the team dedicated to detecting violent content.

    According to the newspaper, Facebook reduced the time human reviewers spent on hate-speech complaints and made adjustments that lowered the total number of complaints. This also created the appearance that the AI had been successful in enforcing company rules.

    While this latest case involved no malicious intent and was genuinely a mistake, Sahar Massachi, a former member of Facebook's Civic Integrity team, said such incidents demonstrate why more transparency is needed around social media platforms and their algorithms, something that Frances Haugen, the whistleblower who released the Facebook Papers, also demanded last year.

    © 2023 Bullfrag. Designed by Bullfrag.