An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.
According to a press release from the Evansville Police Department, this was a clear “misuse” of Clearview AI’s controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.
To help identify suspects, police can scan what Clearview AI describes on its website as “the world’s largest facial recognition network.” The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.
But these scans must always be linked to an investigation, and Evansville Police Chief Philip Smith said that, instead, the disgraced cop repeatedly disguised his personal searches by “utilizing an actual case number associated with an actual incident” to evade detection.
Smith’s department discovered the officer’s unauthorized use during an audit performed before renewing its Clearview AI subscription in March. That audit showed “an anomaly of very high usage of the software by an officer whose work output was not indicative of the number of inquiry searches that they had.”
Another clue to the officer’s abuse of the tool was the type of image being searched: face scans conducted during investigations are “usually live or CCTV images,” shots taken in the wild, Smith said. The officer who resigned, however, was mainly searching social media images, which was a red flag.
An investigation quickly “made clear that this officer was using Clearview AI” for “personal purposes,” Smith said, declining to name the officer or confirm whether the targets of these searches were notified.
As a result, Smith recommended that the department terminate the officer. However, the officer resigned “before the Police Merit Commission could make a final determination on the matter,” Smith said.
Easily dodging Clearview AI’s built-in compliance features
Clearview AI touts its face image network as a public safety resource, promising to help law enforcement make arrests sooner while committing to “ethical and responsible” use of the tech.
On its website, the company says that it understands that “law enforcement agencies need built-in compliance features for increased oversight, accountability, and transparency within their jurisdictions, such as advanced admin tools, as well as user-friendly dashboards, reporting, and metrics tools.”
To “help deter and detect improper searches,” its website says, a case number and crime type are required for every search, and “every agency is required to have an assigned administrator that can see an in-depth overview of their organization’s search history.”
It seems that neither of those safeguards stopped the Indiana cop from repeatedly scanning social media images for undisclosed personal reasons; he apparently satisfied the case number and crime type requirement with real but unrelated case details and went unnoticed by his agency’s administrator. The incident could have broader implications in the US, where police have used Clearview AI’s technology to conduct nearly 1 million searches, CEO Hoan Ton-That told the BBC last year.
In 2022, Ars reported that Clearview AI had told investors it had ambitions to collect more than 100 billion face images, ensuring that “almost everyone in the world will be identifiable.” As privacy concerns mounted, the controversial tech became hotly debated: Facebook moved to stop the company from scraping faces on its platform, and the ACLU won a settlement that banned Clearview AI from contracting with most businesses. But the US government retained access to the tech, as did “hundreds of police forces across the US,” Ton-That told the BBC.
Most law enforcement agencies are hesitant to discuss their Clearview AI tactics in detail, the BBC reported, so it’s often unclear who has access and why. But the Miami Police Department confirmed to the BBC that “it uses this software for every type of crime.”
Now, at least one Indiana police department has confirmed that an officer can sneakily abuse the tech and conduct unapproved face scans with apparent ease.
According to Kashmir Hill—the journalist who exposed Clearview AI’s tech—the disgraced cop was following in the footsteps of “billionaires, Silicon Valley investors, and a few high-wattage celebrities” who got early access to Clearview AI tech in 2020 and considered it a “superpower on their phone, allowing them to put a name to a face and dig up online photos of someone that the person might not even realize were online.”
Advocates have warned that stronger privacy laws are needed to stop law enforcement from abusing Clearview AI’s network, which Hill described as “a Shazam for people.”
Smith said the officer disregarded department guidelines by conducting the improper face scans.
“To ensure that the software is used for its intended purposes, we have put in place internal operational guidelines and adhere to the Clearview AI terms of service,” Smith said. “Both have language that clearly states that this is a tool for official use and is not to be used for personal reasons.”