EU Parliament votes to ban predictive policing and criminal prediction systems in AI Act
Fair Trials: Today, members of the European Parliament voted for a landmark ban on predictive policing and criminal prediction systems in the EU AI Act. The ban will be the first of its kind in Europe.
MEPs in the IMCO and LIBE committees, the two committees in charge of the flagship AI Act, voted to finalise the text of the Act ahead of the vote by the whole Parliament in June.
Fair Trials has been calling for a ban on these systems since 2021, on the basis that they reinforce and reproduce structural discrimination, and infringe upon fundamental rights.
Today, the European Parliament has taken an important step towards protecting people from harmful AI systems in law enforcement and criminal justice settings.
The road to a ban
Fair Trials first called for a ban in 2021 and has since built a coalition of more than 50 human rights, legal and other civil society organisations across Europe, including European Digital Rights (EDRi), Access Now, Amnesty Tech and the Council of Bars and Law Societies of Europe. Following Fair Trials’ campaigning, many MEPs also publicly supported a ban. Co-rapporteur of the AI Act, Dragos Tudorache, said: “Predictive policing goes against the presumption of innocence… We do not want it in Europe.”
‘Predictive policing’: Discrimination, surveillance and infringements on rights
Numerous predictive AI systems are currently used by law enforcement and criminal justice authorities across Europe. These systems can exacerbate, and have exacerbated, existing structural discrimination, resulting in Black people, Roma and other minoritised ethnic people being disproportionately surveilled, stopped and searched, arrested, detained and imprisoned.
Fair Trials’ research found that attempts to ‘predict’ criminal behaviour with AI and automated decision-making systems: infringe fundamental rights, including the right to a fair trial, the presumption of innocence and the right to privacy; legitimise and exacerbate racial and ethnic profiling and discrimination; and result in the repeated targeting, surveillance and over-policing of minoritised ethnic and working-class communities. These discriminatory practices are so fundamental and ingrained that all such systems will reinforce these outcomes. This is an unacceptable risk, and such systems must be banned.
Prohibition text and other amendments
The full text of the prohibition is below, within Article 5 of the Act, which comprises a list of ‘prohibited practices’: “(da) the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons;”
Fair Trials also supported several other amendments to the Act, which it was pleased to see pass, including a ban on remote biometric identification systems, such as facial recognition surveillance, and measures to ensure transparency and accountability for other AI systems.
Next step: the plenary vote
Today’s vote is a landmark result, but the fight is not yet over. The AI Act will be subject to a plenary vote of the whole European Parliament in June.