Introduction
The European Union Artificial Intelligence Act (EU AI Act) represents one of the most significant regulatory frameworks in the global governance of artificial intelligence (AI). Its primary objective is to ensure that AI systems deployed within the European Union align with fundamental rights, democratic values, and ethical principles. At its core, the Act seeks to balance technological innovation with human rights protection, reflecting the EU’s long-standing commitment to safeguarding human dignity and individual freedoms.
An important element of this commitment is the Act’s alignment with the European Convention on Human Rights (ECHR). This legally binding treaty establishes the foundation for human rights protection across Europe. Since its adoption in 1950, the ECHR has played a defining role in shaping legal and ethical standards, particularly concerning the right to privacy, freedom of expression, protection against discrimination, and the right to a fair trial. As AI technologies increasingly influence decision-making in sensitive domains such as law enforcement, healthcare, employment, and content moderation, the AI Act must adhere to the principles enshrined in the ECHR. [1] However, while the EU AI Act explicitly recognizes fundamental rights as a guiding principle, its practical application presents challenges. The interplay between AI regulation and human rights law raises important questions regarding transparency, accountability, and legal safeguards in AI-driven systems. As the EU AI Act is enforced, ensuring strong compliance with ECHR standards will require ongoing scrutiny, refinement, and robust enforcement mechanisms.
The Right to Privacy and Data Protection
One of the most contentious aspects of AI regulation is its impact on privacy rights, as protected under Article 8 of the ECHR. AI technologies, particularly those used for biometric identification, predictive policing, and large-scale data processing, pose significant risks to individual privacy. The capacity of AI systems to examine extensive amounts of personal data, track individuals’ movements, and infer sensitive personal details heightens concerns about mass surveillance and data misuse.
The EU AI Act attempts to mitigate these risks by placing strict restrictions on the use of AI in biometric surveillance. Under Article 5 of the Act, real-time biometric identification systems in public spaces are generally prohibited, except in cases deemed necessary for national security or criminal investigations. While this limitation aligns with privacy safeguards under the ECHR, it also presents legal ambiguity. The exceptions granted for law enforcement purposes leave room for potential overreach, raising the risk of government-led surveillance programs that may disproportionately impact marginalized groups.
Furthermore, the AI Act requires that AI systems processing personal data comply with the General Data Protection Regulation (GDPR), reinforcing the principles of data minimization, purpose limitation, and user consent. However, the real challenge lies in ensuring effective enforcement. AI systems operating in complex environments, such as predictive policing or healthcare diagnostics, often process data in ways that are difficult to audit. The opacity of AI decision-making mechanisms, commonly referred to as the “black box” problem, complicates the ability of individuals to assert their privacy rights.
To fully adhere to the ECHR’s privacy protections, the AI Act must incorporate more robust oversight mechanisms, including independent review boards that assess whether AI-driven surveillance practices are strictly necessary and proportionate. Without such safeguards, the potential for privacy violations and unchecked data exploitation remains a significant concern.
AI and the Protection Against Discrimination
The principle of non-discrimination under Article 14 of the ECHR is a cornerstone of European human rights law. AI systems, particularly those used in hiring, law enforcement, and credit scoring, have been found to exacerbate existing biases rather than eliminate them. The risk of AI-driven discrimination arises from biased training data, flawed algorithmic design, and a lack of accountability in decision-making processes.
The EU AI Act addresses these concerns by imposing strict requirements on high-risk AI systems that affect fundamental rights. Companies deploying AI in sensitive domains must conduct bias testing, provide transparency reports, and ensure human oversight in AI-driven decision-making. In theory, these provisions were introduced to prevent AI models from producing biased outcomes, such as racial profiling in policing or gender bias in hiring algorithms.
However, despite these regulatory safeguards, the effectiveness of the AI Act in fully eliminating algorithmic discrimination remains uncertain. One fundamental challenge is that bias in AI is often deeply embedded in historical data. Even if an AI system is designed to be “neutral,” it may still replicate past patterns of discrimination due to the historical inequalities reflected in training datasets.
Moreover, while the AI Act mandates transparency and explainability, it does not fully address the issue of algorithmic accountability. If an individual is denied a job or a loan due to an AI-driven decision, what legal recourse do they have? The Act requires companies to provide explanations for AI decisions, but such explanations are often technical and difficult for the average person to understand. This creates an access to justice issue, as victims of algorithmic bias may struggle to challenge unfair AI decisions effectively.
For the AI Act to genuinely align with the ECHR’s commitment to equality and non-discrimination, it must go beyond passive transparency measures and incorporate stronger legal remedies. This includes establishing clear pathways for individuals to contest AI-generated decisions and ensuring that independent regulatory bodies have the authority to intervene in cases of systemic AI bias.
AI, Freedom of Expression, and Fair Trial Rights
Another important dimension of AI regulation concerns freedom of expression (Article 10 ECHR) and the right to a fair trial (Article 6 ECHR). AI plays a growing role in content moderation on digital platforms, automated sentencing in courts, and predictive analytics in law enforcement. While AI can improve efficiency, it also poses risks of undermining due process and legal fairness and of suppressing lawful speech.
A significant area of concern is the use of AI in automated content moderation. Many social media platforms rely on AI algorithms to detect and remove harmful content, such as hate speech and misinformation. While combating online abuse is essential, AI-driven censorship mechanisms often lack nuance, leading to the unfair suppression of legitimate political speech, activism, and dissent. The AI Act attempts to address this by requiring platforms to disclose when content removal decisions are made by AI, thereby providing users with the ability to challenge content takedowns.
Similarly, the use of AI in judicial and law enforcement settings raises questions about due process and legal fairness. AI-powered predictive policing tools, for example, are often based on historical crime data that may reflect racial and socioeconomic biases. If courts increasingly rely on AI-generated risk assessments for sentencing decisions, there is a danger that individual rights to a fair trial may be undermined. The AI Act stipulates that AI should never replace human judicial decision-making, but ensuring meaningful human oversight in practice remains a challenge.
To align fully with the ECHR’s guarantees of free expression and fair trial rights, the EU AI Act must enhance transparency in AI-driven legal and content moderation decisions. Further, judicial review mechanisms should be strengthened to allow individuals affected by AI-generated decisions to challenge them effectively.
Conclusion
The EU AI Act represents a significant step forward in regulating AI technologies in a manner that respects human rights and democratic values. However, its commitment to the European Convention on Human Rights is not without challenges. While the Act establishes important safeguards for privacy, non-discrimination, and transparency, its practical enforcement will determine whether it truly safeguards fundamental rights or simply sets aspirational guidelines.
For the AI Act to fully align with ECHR principles, independent oversight mechanisms, stronger legal remedies, and clear enforcement strategies must be implemented. Without these, the risks of algorithmic bias, privacy violations, and erosion of due process could persist despite well-intentioned regulations. The future of AI governance in the EU will depend on ongoing legal scrutiny, public accountability, and a commitment to upholding the values of human dignity and fairness in an AI-driven world.
Dr. Srabonty Das Gupta is a postdoctoral researcher in human rights law based in Leon, Spain. Before doing her PhD at the University of Leon on forced migration and international human rights, she worked as a corporate legal counsel in Bangladesh.