For the past three months, a small encrypted group chat of Latin American officials who investigate online child-exploitation cases has been lighting up with reports of raids, arrests, and rescued minors in half a dozen countries.
The successes are the result of a recent trial in which the American company Clearview AI gave its facial-recognition tool to a group of Latin American law-enforcement officials, investigators, and prosecutors. During a five-day operation in Ecuador in early March, participants from 10 countries, including Argentina, Brazil, Colombia, the Dominican Republic, El Salvador, and Peru, were given access to Clearview's technology, which allows them to upload images and run them through a database of billions of public photos scraped from the Internet.
“Normally it takes at least several days for a child to be identified, and sometimes there are victims that have not been identified for years,” says Guillermo Galarza Abizaid, the vice president in charge of partnerships and law enforcement at the Virginia-based nonprofit International Centre for Missing and Exploited Children (ICMEC), which organized the event. “Using Clearview, it’s within seconds.”
The group used the facial-recognition tool to analyze a total of 2,198 images and 995 videos, hundreds of them from cold cases. In just three days, they identified 29 offenders and 110 victims, ranging from newborns to 17-year-olds; investigators then worked to confirm those identifications. As of June 13, at least 51 victims had been rescued as a result of the effort, according to ICMEC and government officials interviewed by TIME. “Clearview was a vital resource because of its ability to search and compare faces in its vast database of images pulled from social networks,” says Captain Diego Rafael Calispa of the Directorate of Children, Adolescence and Family of the Ecuadorian Police.
The operation, which was called Guardianes Digitales por la Niñez, or Digital Guardians for Children, provides a rare glimpse into how Clearview AI has quietly expanded into Latin America and the Caribbean in recent years. Clearview now acknowledges that it is operating in at least five countries in the region: Brazil, Colombia, Chile, the Dominican Republic, and Trinidad and Tobago. (The company declined to confirm details in court documents that stated the company also operates in Mexico and Panama.) Clearview has built a dedicated team selling its services to Latin America, and has made its tools available in Spanish and Portuguese. “Here in Latin America we see a lot of adoption and demand,” CEO Hoan Ton-That tells TIME.
The expansion into Latin America comes at a time when Clearview, which is used by the FBI, the Department of Homeland Security (DHS), and hundreds of police departments across the U.S., faces a range of lawsuits and fines for allegedly violating biometric and other privacy laws. Clearview is largely prohibited from selling access to its database to U.S. private companies under a 2022 court settlement. On June 13, the company also agreed to an unusual settlement in a class-action lawsuit that could potentially give any American whose photos are in Clearview’s database a stake in the company. Several countries in Europe have deemed Clearview illegal, imposed steep financial penalties on the company for breaching privacy laws, and attempted to ban it from collecting the faces of their citizens.
Clearview’s reception in Latin America and the Caribbean has been far more enthusiastic, and the company sees the region as a promising new market with a looser approach to personal data and privacy regulations. “When we had requests from Latin America and South America, it made more sense for us regulatorily,” says Ton-That. “It’s much easier since these tools are allowed.”
Yet the growing adoption of facial-recognition tools by law enforcement agencies in Latin America and the Caribbean has alarmed privacy and digital-rights advocates. They warn that Clearview’s technology puts most of the world in a “perpetual police lineup,” risking wrongful arrests from false positives and racial profiling, and creating the potential for governments to weaponize it against political opponents, all while the company continues to train its tools using the personal data of millions of people without their consent.
“While countries in other regions are making an effort to curb biometric surveillance, Latin American governments seem to be going in the opposite direction,” says Ángela Alarcón, who works on the Latin America and Caribbean program at Access Now, a nonprofit organization that advocates for digital rights. Companies like Clearview “take advantage of legislative deficiencies in data protection [and] the lack of authorities with sufficient technical and legal tools to exercise control against abuses and violations,” she says.
Clearview says it carefully vets the countries it decides to operate in on a case-by-case basis, including their human-rights record and any legal restrictions, and also consults with DHS and the State Department. But as more countries look to adopt its tools, the expansion is likely to ignite a debate over how to balance the benefits of a potentially intrusive technology against privacy rights in a region where officials say Clearview “could be a game changer” in dealing with cross-border organized crime, drug cartels, and human trafficking.
To Galarza Abizaid, who has spent 25 years training officers around the globe on how to investigate online child sexual exploitation, the calculation is clear. “We’re dealing with vulnerable children, and we’re in a race against time here,” he tells TIME. “And I’m sure people are willing to give up a little privacy if a child is going to be recovered.”
Facial-recognition technology has been used by police across Central and South America for more than a decade. Several countries, including Brazil, Mexico, and Colombia, have experimented with it to monitor crowds at soccer games, subway systems, and markets, and even to track school attendance. But Clearview’s tools allow civil and military police and other law-enforcement agencies to run photos through a growing trove of faces harvested from the Internet to find a match, which often links back to social-media profiles that expose a person’s identity, location, and family members. This database now contains more than 50 billion images, according to the company, and has been searched more than 2 million times.
In the five countries in Latin America and the Caribbean where Clearview confirms it is operating, officials have used facial-recognition technology to quickly identify the people behind a broad range of crimes, from credit-card fraud to bomb threats and homicides, as well as to locate kidnapped or missing persons. After a death threat was made against the leader of an unnamed Caribbean country, the facial-recognition tech identified the person behind it in less than 24 hours, the company tells TIME. On some small Caribbean islands with limited resources, Clearview has shortened the process of identifying suspected criminals from a month to an hour, according to local officials.
One of its most effective uses has been identifying suspects and victims of crimes that are documented online. Even three months after the five-day Clearview trial, officials who participated in the ICMEC operation continue to make arrests. Reached by TIME on June 12, Marino Abreu Tejeda, an investigator in the Dominican Republic’s Attorney General’s office, was just returning from a raid to rescue four more minors identified by Clearview. “We already put some suspects in preventative detention,” he says.
Outside of the U.S., Clearview has been used most extensively in Ukraine, where it made initial inroads in a similar fashion. At first, the company provided its tools for free to a wartime government under siege from Russia’s invasion and eager to find ways to fight back. Within months, 1,500 officials across 18 Ukrainian government agencies were using it to identify Russian soldiers, suspected Ukrainian collaborators, and abducted children who were transported to Russia. In interviews in Kyiv in October, Ukrainian officials touted the use of this “secret weapon” to prosecute war crimes and detect infiltrators at checkpoints. Its widespread adoption during the war has alarmed human-rights groups and privacy advocates who say that the country’s outdated privacy laws could fail to curtail the potential surveillance of citizens without proper justification. The company is now being paid for its services in Ukraine.
Critics say this playbook is effective. “Latin American countries are easy prey for the abusive practices of Clearview AI,” says Juan Espindola, a researcher at the National Autonomous University of Mexico (UNAM) who has studied Clearview’s application in war and crisis situations. “The weakness of many law enforcement agencies in Latin America creates a huge pressure for quick fixes,” he says. “AI and facial recognition are one such fix.”
While the company is just getting started in the region, in the U.S. it has come under fire for misidentifying suspects, especially people of color, and for abuse by law enforcement. On June 13, a police officer in Indiana resigned after he was found to have used the powerful facial-recognition tool to conduct searches for “personal purposes.”
Ton-That acknowledges that events like the ICMEC operation serve as “partly a marketing thing,” although he says they also show why the technology is particularly well suited to support law enforcement in regions like Latin America.
For their part, organizations like ICMEC are eager to help expand Clearview into other parts of the world. Galarza Abizaid says his organization is seeking to partner with the company for a similar event in Kenya that would allow African officials across the continent to sample Clearview’s tools for possible use in thousands of current and cold cases. “Their technology is working, and everything is coming out right now,” Galarza Abizaid says. “It feels like we’ve opened Pandora’s Box.”