Technology Surveillance May Lead to Criminal Charges – Even When You’re Perfectly Innocent

A recent report in the New York Times shows how seemingly good intentions can lead to bad results. In response to concerns about child abuse, technology companies like Google have created software that flags photos that might in any way suggest a child is being abused. The problem is that many photographs shared over the Internet are sent for perfectly legitimate reasons, such as helping parents communicate with their doctors about a child’s illness or medical disorder.

The New York Times (NYT) story profiled two parents whose Internet accounts were flagged for child abuse concerns even though they were legitimately and protectively helping their children. The flags led to notices being sent to the police to investigate whether a child was being abused, and to the closing of the parents’ online accounts. In an age when many transactions, such as paying bills, are handled online, the closing of an online account can create havoc for the account holder. The notices to the police can damage reputations and, if not properly investigated, lead to criminal charges.

What compounds the problem of closed accounts and criminal investigations is what happens after an investigation determines that a parent was truly acting in the child’s best interests. The correct response would be to reopen the accounts and repair any damage to the parent’s reputation. Without further oversight, however, many companies and police departments fail to take these corrective actions.

An illustrative case involving excessive technology oversight

In the NYT article, a father took photos of his son’s penis after noticing it looked swollen and was causing the toddler pain. In February 2021, his wife scheduled an appointment with their health care provider. Because of the pandemic, and because the wife had called on a Saturday, the nurse asked her to send the photos in advance so the physician could review them. She sent several high-quality photos of the son’s groin from her iPhone. One photo showed the father’s hand, included to help show the swelling. The parents never imagined that, in transferring the photos, they would expose them to capture and review by tech companies. A similar scenario happened to another parent in Texas.

The physician, with the help of the photos, diagnosed the son’s condition and prescribed antibiotics, which cleared up the problem. What the father didn’t know was that computer algorithms designed to “snare people exchanging sexual abuse material” were about to cause him legal complications and emotional trauma.

By way of background, tech companies routinely scan the data they hold to track the spread of sexual abuse imagery. While seemingly well-intended, that scanning can “cast innocent behavior in a sinister light in at least two cases The Times has unearthed.”

The concern, according to Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, is that this one parent’s case is likely typical of tens, hundreds, or even thousands of others.

In the case described by the NYT, the police determined that no wrongful conduct was involved. Unfortunately, Google, the company that had flagged the photos, was not so cooperative.

The father, in his 40s, had done more than just create a Google account. His appointments, his camera’s photos and videos, and his phone service all ran on Google technology, including the Google cloud and a Google Fi phone plan. Just two days after taking the photos, the father was notified that his account had been disabled because of harmful content that might be illegal. The list of concerns provided through a “learn more” link included many reasons content might be considered harmful, including “sexual abuse & exploitation.” That’s when the father realized Google might have mistaken his photos for child pornography.

Ironically, the father had worked on similar technology. Based on that experience, he “assumed” he would be able to reach a human being to correct the mistake. Not only was his account not reinstated, but the closing of the account cost him his emails, contact information for friends and former colleagues, and records and photos of his son’s first years of life. Without the phone account, he was locked out of other Internet accounts as well; in effect, he lost his digital life.

Google had done more than close his accounts. A review team had notified the local police department.

The background behind technology surveillance software

According to the NYT story, in 2021 Google filed more than 600,000 reports of child abuse material and disabled more than 270,000 accounts. A leader in developing this type of detection technology, Google has made its tools available to Facebook and other companies. Humans are supposed to review any photos flagged by the artificial intelligence (AI) software. Federal law requires that reports of “exploitative material” be sent to the National Center for Missing and Exploited Children (NCMEC) through its CyberTipline.
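To make the mechanics more concrete, the sketch below shows, in broad strokes, how one common detection technique works: matching an uploaded image’s perceptual hash against a list of known abusive images, with anything it flags routed to a human review queue rather than acted on automatically. This is a simplified illustration only, not Google’s actual system (which, as the NYT notes, can also use AI to identify previously unseen images); the hash list, the threshold value, and the function names are all assumptions made for the example.

```python
# Hypothetical sketch of hash-based image flagging with human review.
# This is NOT Google's actual system; the hash list, threshold, and names
# (KNOWN_HASHES, review_queue) are illustrative assumptions.
# Requires the third-party Pillow and ImageHash packages.
from PIL import Image
import imagehash

# Assumed database of perceptual hashes of known abusive images,
# e.g. loaded from a vetted hash list maintained by a clearinghouse.
KNOWN_HASHES: set[imagehash.ImageHash] = set()

# Assumed value: how close (in Hamming distance) two hashes must be to count as a match.
HAMMING_THRESHOLD = 5


def image_matches_known_hash(path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)


def process_upload(path: str, review_queue: list[str]) -> None:
    """Queue a flagged upload for human review rather than acting on it automatically."""
    if image_matches_known_hash(path):
        review_queue.append(path)
```

Even in this toy version, the issue the NYT story raises is visible: the software only decides what gets queued for review. Everything that happens after a flag, including whether an innocent account holder can reach a human being to reconsider, is a matter of process and policy rather than technology.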

The NYT mentions that “Apple announced plans last year to scan iCloud Photos for known sexually abusive depictions of children, but the rollout was delayed indefinitely after resistance from privacy groups.”

In the father’s case, the police who investigated Google’s alert couldn’t contact him because his Google Fi phone number was no longer active. It took until December 2021 for him to be notified that there was an investigation and that search warrants had been served on Google and his Internet service provider. The search itself had taken place back in February.

After receiving the notice in December, the parent spoke with the police investigator, who determined that there had been no child abuse or exploitation. When the father asked if the police could tell Google he was innocent, the investigator said that the father had to speak with Google himself. When he did, and included the police report confirming his innocence, he got nowhere. Instead, Google permanently deleted his account. The father considered filing a lawsuit but decided against it after speaking with a lawyer. “I decided it was probably not worth $7,000,” he said.

The concerns about AI surveillance software

Kate Klonick, a St. John’s University law professor, said that “false positives, where people are erroneously flagged, are inevitable given the billions of images being scanned.” She believes companies should have a “robust process” for clearing and reinstating innocent people who are flagged by mistake.

These false flags are dangerous. People can be wrongly charged with a criminal offense and have to pay to defend themselves. Parents could lose custody rights. The solutions involve better technology, prompt investigations, proper and timely notice to the people being flagged, restoration of accounts and other privileges once innocence is determined, and other remedies. One lesson of the New York Times story is that parents should generally avoid taking and sending sensitive pictures of their child, even when a doctor recommends the photos.

At Soroka & Associates, our criminal defense lawyers handle sexual abuse and other types of sexual criminal charges. We understand the trauma of just being charged with a sex crime, let alone the consequences of a conviction. We work aggressively to have criminal charges dismissed, obtain acquittals, and negotiate plea bargains. To defend yourself from any type of criminal charge, call us at 614-358-6525 or use our contact form to schedule a free consultation.