Wednesday, 29 December 2021

Google’s Child Abuse Detection Tools Can Also Identify Illegal Drawings of Children

(Photo by Alexander Pohl/NurPhoto via Getty Images)

Apple was hit with a wave of criticism earlier this year when it announced plans to scan iPhones to stop the distribution of Child Sexual Abuse Material (CSAM). Critics fretted that Apple’s hash-checking system could be co-opted by governments to spy on law-abiding iPhone users. In response to the backlash, Apple might end up making changes to that program, but Google has its own way of spotting CSAM, and it might be even more intrusive for those who use all of Google’s cloud services. 

The specifics on Google’s CSAM scanning come by way of a warrant issued in early 2020 and spotted by Forbes. According to the filing, Google detected CSAM in Google Drive, its cloud storage platform. And here’s where things get a little weird: the warrant stemming from this report targeted digital artwork, not a photo or video depicting child abuse.

Apple’s system, under its “Expanded Protections for Children” banner, uses hashes of known child abuse materials, scanning iDevices for matching hashes. This should prevent false positives, and it doesn’t require Apple to look at any of the files on your phone. The issue cited most often with this approach is that Apple is still scanning your personal files, and it could become a privacy nightmare if someone managed to substitute different hashes. Apple says this isn’t possible, though.
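The core of a hash-matching system like the one described above is simple: compute a fingerprint of each file and check it against a database of fingerprints of known material. A minimal sketch in Python is below. Note the hash set and filenames here are purely illustrative, and real systems use perceptual hashes (Apple's NeuralHash, Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas the plain SHA-256 used here only matches byte-identical files.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known flagged files.
# (This example value is just the SHA-256 of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_match(path: Path) -> bool:
    """Flag a file only if its hash appears in the known-hash set."""
    return file_hash(path) in KNOWN_HASHES
```

Because matching is an exact set lookup, a file that isn't already in the database can never be flagged, which is why this approach avoids false positives but also can't catch new material.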

Google, as it turns out, does something similar. It uses a technique initially developed for YouTube to look for hashes of known CSAM, but it also uses machine learning to identify new images of child abuse. It’s not clear how Google spotted the problematic files in 2020, but the unidentified individual is described as an artist. That suggests he is the one who created the drawings at issue, and Google’s systems identified them as CSAM.
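The two-stage approach described above, which is a known-hash lookup backed by a classifier for never-before-seen images, can be sketched as follows. Everything here (the function names, the threshold, and the classifier interface) is a hypothetical illustration, not Google's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ScanResult:
    flagged: bool
    reason: Optional[str]  # "known-hash", "classifier", or None

def scan_image(
    image_hash: str,
    known_hashes: set,
    classify: Callable[[str], float],  # stand-in ML model: returns a 0..1 score
    threshold: float = 0.98,           # high bar to limit false positives
) -> ScanResult:
    # Stage 1: exact lookup against previously identified material.
    if image_hash in known_hashes:
        return ScanResult(True, "known-hash")
    # Stage 2: classifier score for new, unmatched images.
    if classify(image_hash) >= threshold:
        return ScanResult(True, "classifier")
    return ScanResult(False, None)
```

The second stage is what distinguishes this design from a pure hash-matching system: it can surface original material that appears in no database, which is presumably how a never-before-seen drawing could be flagged, and also why it carries more false-positive risk.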

Lots of servers at Google's Douglas County data center. Blue LEDs mean the servers are healthy, apparently.

After Google’s system spotted the drawings, it sent the data to the National Center for Missing and Exploited Children, and from there it went to the DHS Homeland Security Investigations unit. Investigators filed the warrant in order to get access to the user’s data. The artist has not been identified, as no charges were ever brought. However, US law holds that drawings depicting child abuse can still be illegal if they lack “serious literary, artistic, political, or scientific value.” That’s hard to prove — even agreeing on a definition of “art” can be a challenge. This may explain why no charges were brought in this case.

While Google’s use of AI is more aggressive than Apple’s, it’s also seemingly restricted to cloud services like Gmail and Drive. So, Google isn’t set up to scan Android phones for hashes the way Apple is on the iPhone, but Google’s approach can sweep up original artwork that may or may not be illegal, depending on who you ask. Regardless of what counts as “art,” Google isn’t doing this arbitrarily: there is an undeniable problem with CSAM on all cloud services. Google says it reported 3.4 million pieces of potentially illegal material in 2021, up from 2.9 million the year before.
