Edward Snowden's 2013 revelations shook the world and our perception of privacy, exposing the shocking extent of the U.S. National Security Agency's (NSA) mass surveillance operations. His disclosures opened our eyes to how deeply governments could infiltrate our personal lives through digital means, often bypassing any oversight under the pretext of national security. In this context, we must ask ourselves: are we willing to trade our fundamental rights and freedoms for the illusion of safety?
Since then, while technological advancements have proven valuable for law enforcement and national security, they have also raised critical concerns, especially their mass surveillance capabilities, which enable unprecedented potential harm to individuals' rights and guarantees.
More recently (February 2025), the UK government demanded that Apple create a backdoor in its encrypted cloud storage service, clashing with the company's declared commitment to user privacy as one of its "core values". The order came as a "technical capability notice" issued by the Home Office under the Investigatory Powers Act 2016, which requires companies to assist law enforcement in providing evidence. It targeted Apple's Advanced Data Protection (ADP) service, which uses end-to-end encryption to ensure that only account holders can access their data. According to The Washington Post, the order was a "blanket" request affecting any Apple user worldwide. Apple's firm stance against creating encryption backdoors is highlighted by its assertion that privacy is a fundamental human right, stating explicitly that it would "never build a backdoor" and would rather withdraw "critical safety features" from the UK market.
I decided to write this article when, while reading the previous article on this blog, a particular point highlighted by Dr. Srabonty Das Gupta caught my attention. At some point, she comments on how AI systems' ability to process vast amounts of personal data, track individuals, and infer sensitive details raises serious concerns about mass surveillance and data misuse. In particular, she pointed to the EU AI Act's attempts to mitigate these risks by placing "strict" restrictions on the use of AI in biometric surveillance, for example, through a general prohibition on real-time biometric identification in public spaces under Article 5. However, under the Regulation, this prohibition does not apply in cases deemed necessary for national security or criminal investigations. For Dr. Das Gupta, this creates legal ambiguity and may lead to the risk of government overreach, disproportionately affecting marginalised groups. As you will soon understand, I could not agree more.
As soon as I read this part of her text, it reminded me of the legal battle around the Digital Rights Ireland case (C-293/12 and C-594/12), which highlighted the tension between national security measures and the protection of fundamental rights. In the context of AI-driven surveillance, this debate remains highly relevant, perhaps more than ever.
Diving Deeper into the Bulk Data Retention Discussion
The 2006 Data Retention Directive (Directive 2006/24/EC) was introduced as a response to the growing threat of transnational terrorism, exemplified by the 2004 Madrid and 2005 London attacks. The Directive aimed to standardize data retention across EU Member States, requiring telecommunications providers to retain metadata, such as phone numbers, location, time and duration of phone calls, and IP addresses, for six months to two years. Law enforcement agencies could access this data to investigate and prosecute serious crimes. Framed under Article 114 of the Treaty on the Functioning of the European Union (TFEU), it sought to harmonize Member States’ data retention practices to ensure consistent security measures and cross-border cooperation.
Despite its intent, the Directive faced significant backlash for disproportionately interfering with fundamental rights. The extensive metadata retention, irrespective of individual suspicion, was criticized for turning the entire population into potential suspects. Legal scholars argued that the Directive failed to meet the proportionality and necessity standards required under EU law, leaving it vulnerable to challenge.
In 2014, these criticisms culminated in the Digital Rights Ireland case (C-293/12 and C-594/12), in which the Court of Justice of the European Union (CJEU) invalidated the Directive, deeming it incompatible with Articles 7 and 8 of the EU Charter of Fundamental Rights, which guarantee respect for private life and protection of personal data. This ruling marked a turning point in EU data retention law, establishing strict limits on state surveillance. It emphasized that national security measures must be necessary, proportionate, and accompanied by robust safeguards, such as objective criteria governing authorities' access to and use of personal data, together with judicial oversight.
The Digital Rights Ireland ruling has since been reinforced by subsequent CJEU decisions, including Tele2 Sverige (C-203/15) and La Quadrature du Net (C-511/18). These cases further clarified that indiscriminate data retention, even under national frameworks, is incompatible with EU law. Keep this in mind; it will matter shortly.
The Spanish Precedent
Last semester, while studying at the University of León, I was surprised to learn that the Spanish Supreme Court upheld the country's 2007 Data Retention Law (Ley 25/2007). Although the law was designed to align with the original Directive, it has remained unchanged even after the Directive's invalidation.
The Court emphasized that the requirement of judicial authorization to access the retained data provides sufficient safeguards, despite accusations that the law ignores the proportionality and necessity principles fundamental to EU jurisprudence.
The Spanish approach sets a concerning precedent, potentially weakening the primacy of EU law and risking a fragmented legal environment across Europe. As noted by David Fennelly, such fragmentation erodes trust in democratic institutions and may legitimize future, probably even more invasive, mass surveillance practices across the EU.
Big Data And Other Emerging Technologies
To put it into perspective, the debate we are analysing began nearly two decades ago, when Digital Rights Ireland first challenged data retention practices in 2006. It is essential to highlight that, back then, mobile devices were on the rise, and the discussion concerned "only" the metadata of an individual's telecommunications.
However, technology's rapid evolution since then has exponentially amplified the risks of such relativization of fundamental rights (as seen in the Spanish scenario). Today, near-ubiquitous Internet of Things (IoT) devices drastically expand data collection, storage, and analysis capabilities. This is what we call big data, a term that commonly encompasses "the growing technological ability to collect, process and extract new and predictive knowledge from the great volume, velocity, and variety of data".
There is also a significant rise in artificial intelligence development today, primarily driven by machine learning, in which models are trained on large datasets to improve their performance. These algorithms automate choices by emulating human decision-making, using mathematical operations to calculate, process, and evaluate data and, ultimately, to perform automated reasoning and make decisions.
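To make the privacy risk concrete, here is a minimal, purely illustrative sketch in Python (assuming NumPy and scikit-learn are available) of the kind of inference this article worries about: a simple model trained on synthetic, telecom-style metadata learns to predict a sensitive attribute that was never collected directly. Every feature, label, and number below is hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-person metadata, all synthetic: calls per day, average
# call duration, distinct cell towers visited, and share of night activity.
rng = np.random.default_rng(42)
n = 1_000
X = np.column_stack([
    rng.poisson(10, n),        # calls per day
    rng.exponential(3.0, n),   # average call duration (minutes)
    rng.integers(1, 20, n),    # distinct cell towers visited
    rng.uniform(0, 1, n),      # fraction of activity after midnight
])

# A synthetic "sensitive attribute" correlated with movement patterns and
# night-time activity, standing in for anything a profiler might try to
# infer (e.g., attendance at a particular kind of gathering).
y = ((X[:, 2] > 12) & (X[:, 3] > 0.5)).astype(int)

# Train a basic classifier on the metadata and measure how well it infers
# the attribute for people it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Inference accuracy from metadata alone: {model.score(X_test, y_test):.0%}")

The point is not the model's sophistication but how little it takes: a handful of metadata fields, retained in bulk, already provide enough raw material for automated profiling at population scale.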
Given all this, we must start looking at the bulk data retention discussion in a broader way. This is no longer just a debate about telecommunications metadata. What is really at stake is the risk of setting a precedent whereby governments can limit fundamental rights in the name of national security. As technology evolves, ever more invasive surveillance methods may be introduced and justified using the same reasoning. From smart devices that track our every move to AI systems that profile people automatically, the tools now available go far beyond what was possible when this debate began.
Potential Harm to Rights and Freedoms
This discussion is especially troubling when considered alongside newer frameworks like the EU AI Act. For instance, Article 5 of the AI Act restricts real-time biometric identification in public spaces except for "national security or criminal investigations." But is this limitation enough?
As Spain's Supreme Court demonstrated, such exceptions can easily be misused, enabling mass surveillance. In other words, an EU Member State may invoke national security or criminal investigation grounds to push for real-time biometric identification in contexts where it would generally be prohibited.
In fact, on Tuesday (March 18, 2025), Hungary enacted controversial legislation banning LGBTQ+ Pride events and authorizing the police to use facial recognition technology to identify those attending them. The bill imposes fines of up to €500 on individuals who engage in or participate in events that go against the country's contentious "child protection" legislation, which prohibits the "depiction or promotion" of homosexuality to minors under the age of 18.
At the time of writing, I could not find the text of this new legislation to confirm whether the facial recognition system it mentions relies on real-time biometric identification, but I would venture that it does. If so, it remains unclear to me whether the country is fitting the measure within the national security/law enforcement exemption or simply ignoring the new AI Act provision.
In any case, this situation serves as a clear example of how national political measures can quickly devolve into invasive surveillance practices, significantly threatening individual rights and freedoms.
A Fragile Balance Between Rights and Security
From Snowden's revelations to the rise of AI surveillance, the debate around privacy and national security has become increasingly urgent. What started as a conversation about metadata retention in the early 2000s has evolved into a complex and pressing discussion of how emerging technologies, such as big data, facial recognition, and artificial intelligence, can amplify the risk of undermining fundamental rights.
The Digital Rights Ireland case was a milestone that drew a clear line against indiscriminate surveillance in the EU. Yet national-level responses, like the Spanish Supreme Court's decision or Hungary's recent legislation authorizing facial recognition against LGBTQ+ Pride events, show how fragile and uneven these protections remain.
As surveillance becomes more advanced and less visible, the risk of normalizing overreach grows. Whether it is backdoors into encrypted services, bulk metadata retention, or real-time biometric tracking, the question remains: how much of our rights and freedoms are we willing to give up in the name of safety?
To prevent today’s exceptions from becoming tomorrow’s norm, we must critically examine and challenge every measure that limits our rights. The defence of privacy, autonomy, and democratic values must adapt as fast as the technology that threatens them.
Renato Lima Lopes de Freitas is a Brazilian-qualified lawyer and a Certified Information Privacy Professional (CIPP/E) with 5 years of experience in privacy and data protection consulting across different industries and jurisdictions. Before moving into tech law, he volunteered for 5 years at AIESEC, where he became Local Vice-President of AIESEC in Recife (2016), National Vice-President of AIESEC in Poland (2017-18) and National Vice-President of AIESEC in Romania (2018-19). Currently pursuing a European Master's in Law, Data, and Artificial Intelligence (EMILDAI), he specializes in the intersection of law and technology and is interested in Privacy Management and Artificial Intelligence Governance.
