WORLD / EUROPE
Controversial technology
Pushback against AI policing in Europe heats up over rights concerns
Published: Oct 25, 2021 06:13 PM
Amnesty International activists hold a protest against the ongoing migrant crisis with a boat filled with mannequins wearing life vests outside the Maritime Museum on January 25, 2016 in Amsterdam, the Netherlands during an informal meeting of EU Justice and Home Affairs ministers at the Maritime Museum. Photo: AFP

Damien Sardjoe was 14 when the Amsterdam police put him on the city's Top 600 criminals list, which sets people thought to be at risk of committing "high-impact crimes" such as robbery, assault and murder on a regime of care and punishment.

That was when his life began to fall apart.

Sardjoe had previously been arrested for two street robberies - one of which was violent. But his inclusion on the list meant police would raid his home whenever a crime happened in his area, he said, while officers routinely stopped him on the street asking for ID.

"I felt very spied on," Sardjoe, now a 20-year-old youth worker, told the Thomson Reuters Foundation in a video-conferencing interview. "I didn't feel comfortable walking on the street."

Sardjoe's older brother was placed on another automated list - the Top 400 children at risk of criminal behavior - before he had ever committed a crime, and then went on to become involved in stealing scooters, he added.

Amid warnings from rights groups that artificial intelligence (AI) technologies reinforce prejudice in policing, the debate over systems like the Top 600 list kicked up a notch in October, when MEPs voted for a report proposing strict regulation of predictive policing.

Officials said the non-binding report, which also calls for a ban on the use of mass biometric surveillance, should become Parliament's position in upcoming negotiations on a new AI law.

The use of automated risk modeling and profiling systems to predict future criminal activity has already been banned in cities like Santa Cruz and New Orleans amid accusations that they reinforce racist policing patterns.

"They treat everyone as suspects to some extent," said Sarah Chander, senior policy advisor at the European Digital Rights network (EDRi).

"But [they] will be disproportionately used against racialized people ... who are perceived to be potential migrants, terrorists, poor and working-class people, in poor, working-class areas."

The Netherlands police declined to comment on the Top 600 and 400 schemes, referring inquiries to Amsterdam's city council, which in turn said they were the responsibility of the police.

Policing by algorithm

Europe's law enforcement and criminal justice authorities are increasingly using technologies like predictive policing to profile people and assess their likelihood to commit crimes, according to Fair Trials, a partly EU-funded civil rights group.

One much-criticized Dutch project, which ran between January 2019 and October 2020, aimed to counter crimes like shoplifting in the southeastern city of Roermond.

The Sensing Project used remote sensors in and around the city to detect the make, color and route of cars carrying people suspected of what police call "mobile banditry."

Merel Koning, senior policy officer at human rights group Amnesty International, said the system mainly targeted people from east European countries and specifically Roma, referring to members of Europe's largest ethnic minority.

But the focus was not in line with internal police crime figures for previous years, Amnesty says.

Dutch police spokeswoman Mireille Beentjes said the project's scope went beyond pickpockets and was not predictive as the data used "always [had] a human check."

"We know these kinds of criminals often come from eastern Europe," she said in an email. "However, an eastern European license by itself was never enough to draw our special attention. More features were needed for that."

The program ended because the police did not have enough capacity to follow up project data, according to Dutch police.

In Denmark, the POL-INTEL project, based on the Gotham system designed by US data analytics firm Palantir and operational since 2017, uses a mapping system to build a so-called heat map identifying areas with higher crime rates.

The data appears to include citizenship information, such as whether a person in the system is "a non-Western Dane," according to Matt Mahmoudi, an affiliated lecturer and researcher on digital society at the University of Cambridge.

Magnus Andresen, a senior Danish National Police officer, confirmed that POL-INTEL contains nationality and citizenship data, but would not comment on why.

The police do not have any statistics on the system's effectiveness in combating terrorism or crime, Andresen said.

But he added that it is being used to support most of the force's operational decisions, such as stop-and-search operations, through a "finder function" that quickly locates data on people, places, objects and events.

Courtney Bowman, Palantir's director of privacy, said decisions on the data gathered by the Gotham system - which has also been used by the European police agency Europol and the Hesse state police in Germany - were "always determinations made by customers."

A call for clarity

Pushback against institutions and companies linked to "predictive policing" has gone so far that digital experts say even the US firm which pioneered the tech, formerly called PredPol - short for predictive policing - now distances itself from the term.

The company's system uses algorithms to analyze police records and identify crime-ridden areas to proactively determine when and where officers patrol.

"However, what we do isn't 'predictive,' what we do is create location-based risk assessments based on historical crime patterns. This is why we changed our name from PredPol to Geolitica earlier this year," said CEO Brian MacDonald.

But police use of AI technology is still "extremely controversial," said Tom McNeil, assistant police and crime commissioner for the West Midlands Police in Britain, which is working with about eight types of automated modeling system.

Reuters