Google AI flags parents' accounts for potential abuse over nude photos of their sick children

A concerned father says that after using his Android smartphone to take photos of his child's groin infection, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts, filed a report with the National Center for Missing and Exploited Children (NCMEC), and prompted a police investigation, highlighting the complications of trying to tell the difference between potential abuse and an innocent photo once it becomes part of a user's digital library, whether on their personal device or in cloud storage.

Concerns about the consequences of blurring the lines between what should be considered private were raised last year when Apple announced its child safety plan. As part of the plan, Apple would scan images locally on Apple devices before they were uploaded to iCloud and then match them against the NCMEC's hashed database of known CSAM. If enough matches were found, a human moderator would review the content and lock the user's account if it contained CSAM.
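The plan was described only at a high level, so the following is a minimal sketch of the general idea rather than Apple's actual implementation: compute a fingerprint for each image, compare it against a set of known hashes, and escalate an account for human review only once a match threshold is crossed. The function names, the threshold value, and the use of a cryptographic hash in place of Apple's perceptual NeuralHash are all assumptions made for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical threshold: how many matches are needed before a human moderator
# reviews the account. Apple's real system uses a perceptual hash (NeuralHash)
# and an undisclosed threshold; a cryptographic hash is used here only to keep
# the sketch self-contained, and it only matches byte-identical files.
MATCH_THRESHOLD = 3


def fingerprint(image_path: Path) -> str:
    """Return a fingerprint for an image file (stand-in for a perceptual hash)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def count_matches(image_paths: list[Path], known_hashes: set[str]) -> int:
    """Count how many of a user's images match the database of known hashes."""
    return sum(1 for path in image_paths if fingerprint(path) in known_hashes)


def needs_human_review(image_paths: list[Path], known_hashes: set[str]) -> bool:
    """Escalate for human review only once the match count crosses the threshold."""
    return count_matches(image_paths, known_hashes) >= MATCH_THRESHOLD
```

The threshold-before-review step reflects the key design choice described in the plan: a single hash match is not supposed to trigger action against an account on its own.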

The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, criticized Apple's plan, saying it could "open a backdoor to your private life" and that it "represents a decrease in privacy for all iCloud Photos users, not an improvement."

Apple eventually put the stored image scanning part on hold, but with the launch of iOS 15.2, it went ahead with an optional feature for child accounts included in a family sharing plan. If a parent opts in, then on the child's account, the Messages app "analyzes image attachments and determines if a photo contains nudity, while maintaining end-to-end encryption of the messages." If it detects nudity, it blurs the image, displays a warning for the child, and presents them with resources intended to help with safety online.

The main incident highlighted by The New York Times took place in February 2021, when some doctors' offices were still closed due to the COVID-19 pandemic. As noted by the Times, Mark (whose last name was not disclosed) noticed swelling in his child's genital area and, at the request of a nurse, sent photos of the issue ahead of a video consultation. The doctor ended up prescribing antibiotics that cured the infection.

According to the Times, Mark received a notification from Google two days after taking the photos, saying his accounts had been locked due to "harmful content" that was "a severe violation of Google's policies and might be illegal."

Like many internet companies, including Facebook, Twitter, and Reddit, Google uses hash matching with Microsoft's PhotoDNA to scan uploaded images for matches with known CSAM. In 2012, that scanning led to the arrest of a man who was a registered sex offender and had used Gmail to send images of a young girl.

In 2018, Google announced the launch of its Content Safety API AI toolkit, which can "proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible." Google uses the tool for its own services and, along with CSAI Match, a video-targeting hash matching solution developed by YouTube engineers, also offers it for others to use.
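Google has not published how the Content Safety API prioritizes imagery, so the following is only a hedged sketch of how a classifier-driven triage queue could work in principle; the class names, the score cutoff, and the scoring interface are hypothetical and are not Google's API.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class ReviewItem:
    # Store the negated score so the heap pops the highest-scoring image first.
    priority: float
    image_id: str = field(compare=False)


class ReviewQueue:
    """Hypothetical triage queue: images that a classifier scores above a cutoff
    are queued for human review, most confident predictions first."""

    def __init__(self, cutoff: float = 0.9):
        self.cutoff = cutoff
        self._heap: list[ReviewItem] = []

    def submit(self, image_id: str, score: float) -> None:
        # `score` stands in for the output of a trained classifier (0.0 to 1.0);
        # only images above the cutoff are ever sent to a human reviewer.
        if score >= self.cutoff:
            heapq.heappush(self._heap, ReviewItem(priority=-score, image_id=image_id))

    def next_for_review(self) -> str | None:
        # Pop the image the classifier was most confident about, if any remain.
        return heapq.heappop(self._heap).image_id if self._heap else None
```

In this kind of design, the model's score is only a prioritization signal; as the announcement describes, confirmation still depends on human review.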

From Google's "Fighting abuse on our own platforms and services" page:

We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a "hash," or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.

A Google spokesperson told the Times that Google only scans users' personal images when a user takes "affirmative action," which can apparently include backing up their pictures to Google Photos. When Google flags exploitative images, the Times notes, federal law requires Google to report the potential offender to the CyberTipline at the NCMEC. In 2021, Google reported 621,583 cases of CSAM to the NCMEC's CyberTipline, while the NCMEC alerted the authorities to 4,260 potential victims, a list that the Times says includes Mark's son.

Mark lost access to his emails, contacts, photos, and even his phone number, as he used Google Fi's mobile service, the Times reports. Mark immediately tried to appeal Google's decision, but Google denied his request. The San Francisco Police Department, where Mark lives, opened an investigation into him in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately found that the incident "did not meet the elements of a crime and that no crime occurred," the Times notes.

"Child sexual abuse material (CSAM) is abhorrent and we are committed to preventing the spread of it on our platforms," Google spokesperson Krista Muldoon said in an emailed statement to The Verge. "We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we're able to identify instances where users may be seeking medical advice."

While protecting children from abuse is undeniably important, critics argue that the practice of scanning a user's photos unreasonably encroaches on their privacy. Jon Callas, a director of technology projects at the EFF, called Google's practices "intrusive" in a statement to the Times. "This is precisely the nightmare that we are all concerned about," Callas told the Times. "They're going to scan my family album, and then I'm going to get into trouble."
