LONDON (AP) – A British court ruled Wednesday that a police force’s use of automated facial recognition technology is lawful, dealing a blow to an activist concerned about its implications for privacy.

Existing laws adequately cover the South Wales police force’s deployment of the technology in a trial, two judges said, in what’s believed to be the world’s first legal case on how a law enforcement agency uses the new technology.

The decision comes amid a broader global debate about the rising use of facial recognition technology. Recent advances in artificial intelligence make it easier for police to automatically scan faces and instantly match them to “watchlists” of suspects, missing people and persons of interest, but the technology also raises concerns about mass surveillance.

“The algorithms of the law must keep pace with new and emerging technologies,” Judges Charles Haddon-Cave and Jonathan Swift said.

Ed Bridges, a Cardiff resident and human rights campaigner who filed the judicial review, said South Wales police scanned his face twice as it tested the technology – once while he was Christmas shopping in 2017 and again when he was at a peaceful protest against a defense expo in 2018.

“This sinister technology undermines our privacy and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance,” he said in a statement released by Liberty, a rights group that worked on his case.

His legal team argued that he suffered “distress” and his privacy and data protection rights were violated when South Wales police processed an image taken of him in public.

But the judges said that the police force’s use of the technology was in line with British human rights and data privacy legislation. They said that all images and biometric data of anyone who wasn’t a match on the “watchlist” of suspects were deleted immediately.

The judges noted, however, that the British legal framework should be subject to “periodic review.”

South Wales is the lead U.K. police force for conducting tests and trials of automatic facial recognition. It has deployed cameras mounted on police vans nearly 60 times since May 2017, capturing 500,000 faces at rugby games, Ed Sheeran concerts, protests, yacht races and other events.

London’s Metropolitan Police ended its own trial of the technology earlier this year.

Facial recognition has been gaining attention in other cities around the world, as authorities struggle with how to regulate the technology amid a growing backlash over worries about its intrusion into daily life and algorithms that can discriminate against darker-skinned people. San Francisco this year banned police and other city departments from using it, becoming the first U.S. city to do so, followed by Oakland and Somerville, Massachusetts. In Hong Kong, protesters cut down a “smart lamppost” last month over worries it contained facial recognition cameras used for surveillance by Chinese authorities – fears that the city’s government said were unfounded.

Britain’s privacy watchdog, which has been investigating the police trials, said it would take the court’s ruling into consideration as it draws up guidelines for future use of the systems.

“This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police,” the Information Commissioner’s Office said.

The decision won’t be the end of the debate – Bridges plans to appeal the ruling.

Fiona Barton, a barrister who worked on behalf of the South Wales police, said the ruling “should not be taken as a green light” to use automatic facial recognition “in any and all circumstances” because the judgment turned on the specific facts of the case.