Blog Post

Germany says no to predictive policing – the rule of law challenges posed by algorithms

by Gary Brady

Gary Brady, Utrecht University (Netherlands)

Technologically-Powered State Functions?

As technology becomes further integrated with the law, so too does it become integrated with public administration and state bodies. While there are many immediate upsides to this, such as lower administrative burdens and shorter waiting periods leading to more efficient systems, there are just as many issues to be found in its application if left unchecked. Recently, on 16 February 2023, the German Federal Constitutional Court delivered a landmark ruling, declaring the use of policing surveillance software unconstitutional on the grounds of potential discrimination as well as on the grounds of the right to informational self-determination, a German concept relating to privacy. This ruling has once again raised the issue of algorithmic application and the implications it has for policing and public administration, in relation not only to fundamental rights but also to the rule of law in general.

What challenges do policing algorithms create?

The algorithm in question was employed by the state of Hesse and provides optimised mapping of crime hotspot areas based not only on crime models but, concerningly, also on socio-economic census data. This is used to create profiles of criminality before crimes even take place. Furthermore, it must be noted that many of these algorithms, such as the Hesse and Hamburg policing systems, are developed by private companies that may well be based outside of Germany, or even outside Europe for that matter. This raises very important issues for any administration of the rule of law on the grounds of accountability and fairness. Indeed, one of the largest criticisms levelled against policing algorithms and predictive policing is that of fairness and data bias, which serve to reinforce institutional unfairness in policing.

This is not a new phenomenon. In 2019, the Netherlands became the first nation to deploy predictive policing on a nationwide level. The Dutch Criminaliteits Anticipatie Systeem (CAS) predicts future crime on a heat map built by analysing crime data and identifying trends, producing a crime score as well as predicting the times at which crime is most likely to occur. The score is based on numerous factors such as income, residency status and, until 2017, parental ethnicity divided into “western” and “non-western” categories. It goes without saying that the mere existence of that last factor is of notable concern. Furthermore, little is known about the true inner functioning of these systems, something that should not be the case when these systems can determine the very liberty of an individual.
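To make the concern concrete, the sketch below is a purely hypothetical toy model; it is not the actual CAS or Hesse software, whose internals are not public, and every feature name and weight is invented for illustration. It shows how a per-cell "crime score" assembled from historical incident counts and socio-economic census features lets a sensitive attribute shift the heat map even where the recorded crime history is identical.

```python
# Hypothetical sketch only: NOT the CAS or Hesse system, whose internals
# are not public. It illustrates how a grid-cell "crime score" built from
# historical counts plus socio-economic census features lets sensitive
# attributes drive the prediction.
from dataclasses import dataclass

@dataclass
class GridCell:
    past_incidents: int        # recorded crimes in this map cell
    avg_income: float          # census-derived average income, in euros
    non_western_share: float   # the kind of feature CAS used until 2017

# Illustrative weights; a real system would learn these from data.
WEIGHTS = {"past_incidents": 0.6, "low_income": 0.3, "non_western_share": 0.1}

def crime_score(cell: GridCell) -> float:
    """Return a 0-1 'risk' score for one heat-map cell."""
    low_income = max(0.0, 1.0 - cell.avg_income / 50_000)
    raw = (WEIGHTS["past_incidents"] * min(cell.past_incidents / 20, 1.0)
           + WEIGHTS["low_income"] * low_income
           + WEIGHTS["non_western_share"] * cell.non_western_share)
    return min(raw, 1.0)

# Two cells with identical crime history but different demographics:
a = GridCell(past_incidents=5, avg_income=45_000, non_western_share=0.05)
b = GridCell(past_incidents=5, avg_income=25_000, non_western_share=0.60)
print(f"cell A: {crime_score(a):.2f}  cell B: {crime_score(b):.2f}")
```

Even in this toy setting, two cells with the same five recorded incidents receive markedly different scores purely because of the demographic features, which is precisely the profiling concern raised above.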

What are the issues with algorithmic discrimination?

These contemporary policing examples point to more general issues with applying algorithms to matters concerning the individual. One of the most important fundamental rights of the individual is that of non-discrimination, found in both Article 21 of the Charter of Fundamental Rights of the European Union and Article 14 of the European Convention on Human Rights. These are positive rights in the sense that the state should ensure the individual is not subjected to discriminatory practices, as well as negative rights in the sense that the state should itself refrain from discriminatory behaviour. Algorithms are based on the learning set of those who designed them: biased input produces biased output, and the use of algorithmic profiling, especially for the allocation of resources, may be inherently discriminatory given that large-scale data processing and profiling carries many implications for the decision subject. Respect for fundamental rights is a cornerstone of good statehood, providing effective protection of the individual's self-determination, and a rule of law system presupposes equal treatment to allow citizens to express this. It is not just about equal treatment; it concerns the core and integral moral and ethical features of a state that is supposed to uphold the public good.
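The "biased input produces biased output" dynamic can be illustrated with a deliberately simplified simulation; the districts, figures and allocation rule below are invented and do not describe any deployed system. Two districts have the same underlying crime rate, but one starts with more recorded crime because it was historically over-policed, and a hotspot-driven patrol allocation then reproduces that skew indefinitely.

```python
# Hypothetical sketch of "bias in, bias out", not any deployed system:
# both districts have the SAME underlying crime rate, but district_A starts
# with more recorded crime because it was historically over-policed.
recorded = {"district_A": 60, "district_B": 40}  # biased historical record
DETECTIONS_PER_YEAR = 50  # what a year of patrols uncovers, identical in both districts

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)    # model output: patrol the "hotspot"
    recorded[hotspot] += DETECTIONS_PER_YEAR     # patrols generate new records there
    print(year, recorded)
```

The under-recorded district is never patrolled again, so the system never receives the data that would correct its initial skew; the disparity in the record grows even though the underlying behaviour of the two districts is identical.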

Why these systems need transparency and accountability

A frequent criticism levelled against predictive policing algorithms, and algorithmic application at large, is a lack of transparency. As a result, individuals subject to these algorithms know very little about how they are viewed by the authorities using them. Accountability and transparency allow for better administration: it is natural for an individual subject to a decision to want to know the reasoning behind it and what went into it. What does accountability mean for an algorithm? While it is true that the use of algorithms offers unparalleled convenience and efficiency, and proponents of policing algorithms claim that understanding the causes of crime will lead to increased prevention and deterrence, the algorithm itself cannot be blamed when it produces a harmful outcome, since it is created by human actors. Accountability should ultimately be focused on those actors, since they are responsible for the norms that facilitate algorithmic processing of data. Of particular concern is where, as seen in the German policing example, private companies not based in Europe are responsible for this. The system used in Hesse and Hamburg was developed by Palantir and has also been deployed in Denmark. There are clear issues with having potentially discriminatory systems developed by those not beholden to European social norms and fundamental rights: how is someone subject to these systems in Hesse supposed to understand why they have been targeted when they must make inquiries not only of state and municipal police but also of an algorithm manufacturer based in Denver? Issues with accountability feed into issues with transparency. While algorithms and the authorities that use them can offer a measure of transparency by publishing the model, the data and the metadata involved, this will be of little use to those without the technical know-how: how are they to make sense of machine learning models? Transparency may not in itself be the silver bullet for accountability in algorithms, as there are still other facets that need to be addressed.

What about remedies?

The General Data Protection Regulation provides a general framework in Articles 13, 14 and 15 for rights to an explanation of automated decisions or, at the very least, to meaningful information about the logic involved. These rights sit within a broader right to an effective remedy under Article 47 of the Charter and Article 13 of the ECHR. Another primary issue of algorithmic application for decision-making is that artificial systems, on top of the problems of bias and insufficient accountability measures, do not possess any explanatory powers of their own. These issues bear directly on the right to an effective remedy for any person whose rights have been violated; moreover, in applying this right, the principle of equality of arms must be adhered to, meaning every party must have a reasonable opportunity to present their case on terms that do not create an imbalance of power. This raises specific challenges with algorithms in cases where one party holds the technical expertise. It might seem that this imbalance can be rectified by means of third-party technical experts, but in the case of a private entity such as Palantir, exposure of such information in seeking an effective remedy may be difficult if the technical details are protected under a non-disclosure agreement or amount to a trade secret. The issue, then, is that while an individual may be discriminated against, they may not be able to adequately appeal a breach of their rights, or even their incarceration, if the algorithm is left unchecked without any oversight mechanism, making its application inconsistent with the requirements of the Charter and the ECHR.
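What "meaningful information about the logic involved" could look like in practice is suggested by the hypothetical sketch below: for a simple linear scoring model, each feature's contribution to one person's score can be itemised for the decision subject. This is only one possible form of explanation, with invented feature names and weights, and real deployed systems may be far less amenable to such a breakdown.

```python
# Illustrative sketch only: one simple form that an explanation of the
# "logic involved" could take for a linear scoring model. The features
# and weights are invented for illustration.
WEIGHTS = {"prior_contacts": 0.50, "neighbourhood_score": 0.35, "age_band": 0.15}

def explain(subject: dict) -> dict:
    """Return each feature's contribution to this subject's risk score."""
    return {feature: WEIGHTS[feature] * value for feature, value in subject.items()}

subject = {"prior_contacts": 0.2, "neighbourhood_score": 0.9, "age_band": 0.6}
contributions = explain(subject)
print(f"total score: {sum(contributions.values()):.3f}")
for feature, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:.3f}")
```

Here the decision subject could see that the neighbourhood feature, rather than their own conduct, drives most of the score, which is the kind of insight the equality-of-arms principle presupposes; without access to even this much, an effective remedy is hard to mount.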

Conclusion

While stakeholders are quick to point to the many efficiencies offered by algorithms, the unseen harms are often left ignored. The judgement of the German Federal Constitutional Court echoes similar judgements across Europe as more services turn to algorithms and predictive measures. The question now turns to reconciling these concerns so that the operation of these algorithms, and the benefits they create, can be utilised while ensuring that the rights of the individual are respected. French courts have previously suggested greater transparency, anonymisation of certain details and human oversight as conditions for their fair usage, which could offer a way forward. In relation to policing, the European AI Act does include partial bans on inherently discriminatory predictive policing systems, which is a notable development in the eyes of civil society groups. Nonetheless, algorithmic bias and concerns about accountability, transparency and explainability will seemingly remain a spectre haunting algorithmic application if these systems are left unchecked. Should these concerns be accounted for in a manner that allows the decision subject to truly understand the criteria used, the rule of law and algorithms can successfully coexist.