The Google AI system in charge of reviewing the images that users send through the Android messaging application, whose objective is to detect and report media containing child sexual abuse material (CSAM), has blocked the account of a father after he took a photo of his son’s groin infection to send to his doctor, as the parent himself reported to The New York Times.
The father, named Mark, sent the image at the request of a nurse in February 2021, a time when some health centers in the United States still held telemedicine visits even though pandemic restrictions had eased. After assessing the image and holding a video call with the adult, the nurse was able to prescribe antibiotics to treat the infection, which was located in the child’s genital area.
Google, however, sent a notification to Mark two days after the photo was taken. In it, the company warned that it had detected “harmful content” which, moreover, was “a serious violation of Google’s policies and could be illegal”. The father’s accounts were blocked immediately after the notice, and Google automatically forwarded the report to the National Center for Missing & Exploited Children (NCMEC) for further investigation.
Mark told The New York Times that the blocking of his account by Google caused him to lose access to all services and platforms linked to his user account, including email, contacts, and images, and even his phone number, since he used Google Fi, the company’s MVNO.
Google’s system for detecting images of child sexual abuse does not convince users and experts
Mark was also investigated by the San Francisco Police in December 2021. The case, however, was closed after the police, reviewing the evidence presented, concluded that no crime had been committed.
Mark’s case, however, is further proof of the drawbacks of the child abuse detection systems that companies like Google and Apple are using or plan to use in the future. The system works through artificial intelligence that scans images sent via message, looking for matches against the hash database of the National Center for Missing & Exploited Children. If a match is found, a human reviewer examines the images to categorize them as CSAM, lock the account, and initiate an investigation. Citizens and experts, however, believe that this method is an attack on the privacy of users.
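The hash-matching step described above can be illustrated with a minimal sketch. This is an assumption-laden simplification: real systems such as Google’s use proprietary perceptual-hashing technology rather than a plain cryptographic hash, and the database contents shown here are purely illustrative.

```python
import hashlib

# Hypothetical database of fingerprints of known abusive images.
# (Illustrative value only: this is the SHA-256 of an empty byte string.)
known_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_for_human_review(image_bytes: bytes) -> bool:
    """Return True if the image matches a known hash and should go to a human reviewer."""
    return image_fingerprint(image_bytes) in known_hashes
```

Note that an exact cryptographic hash, as sketched here, only matches byte-identical files; production systems use perceptual hashes precisely so that resized or re-encoded copies still match, which is also what makes false positives like Mark’s case possible.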
Google, however, has assured The Verge that the team of experts in charge of reviewing the images also consults with pediatricians to “identify instances where users may be seeking medical advice”. Mark’s image, showing his son’s infection, was nevertheless reported directly to NCMEC.