Blog Post

From 10 Commandments to 10 OECD Principles on Artificial Intelligence

by Victor Lopez Juarez

The Organisation for Economic Co-operation and Development (OECD) is a global policy forum that promotes policies for economic and social well-being worldwide. In 2019, the OECD adopted the Recommendation on Artificial Intelligence (AI), which consists of 10 principles promoting the responsible stewardship of trustworthy AI with a human-centered approach. The principles address a broad range of AI-related issues and establish flexible, durable, and implementable standards for a rapidly changing field.



In 2018, the OECD Committee on Digital Economy Policy (CDEP) convened a group of experts, the AI Group of Experts (AIGO), to draft the principles. AIGO brought together 50 experts from a range of domains, including the technical community, government, academia, industry, and civil society. The group met in Paris, Boston (at the Massachusetts Institute of Technology), and Dubai, and its multidisciplinary expertise significantly shaped the Principles. After review by the OECD Council, the principles were adopted at the OECD Ministerial level in May 2019.

Upon adoption in 2019, 42 countries adhered to the principles: the OECD's 36 member countries along with six non-member states.


AI can improve people’s lives, drive global economic growth and productivity, and help tackle major global challenges. However, it also poses challenges for humanity, including inequality, job displacement, ethical and privacy concerns, and security risks. The OECD created these Principles to promote the responsible advancement of AI. The Principles aim to foster trust in the adoption of AI while preserving fundamental rights, democracy, economy, and innovation.

The 10 Principles

The Principles are divided into two substantial sections. The first is addressed to organizations and individuals that deploy or operate AI systems, or that otherwise play an active role in the AI system lifecycle. The second is addressed to governments and policymakers.

The 10 Principles are complementary and should be considered as a whole:

1. Five AI values-based Principles:

  • Principle 1.1: Inclusive growth, sustainable development, and well-being.
  • Principle 1.2: Human-centered values and fairness.
  • Principle 1.3: Transparency and explainability.
  • Principle 1.4: Robustness, security, and safety.
  • Principle 1.5: Accountability.

2. Five recommendations to Governments:

  • Principle 2.1: Investing in AI research and development.
  • Principle 2.2: Fostering a digital ecosystem for AI.
  • Principle 2.3: Shaping an enabling policy environment for AI.
  • Principle 2.4: Building human capacity and preparing for labor market transformation.
  • Principle 2.5: International cooperation for trustworthy AI.

The Principles include a provision for the development of metrics to measure AI research, development, and deployment.

The State of Implementation of the OECD AI Principles, Four Years On

In October 2023, the OECD released its report on the State of Implementation of the OECD AI Principles. This comprehensive report surveys national AI strategies around the world that implement the Principles. It examines the regulatory frameworks countries are adopting to ensure the trustworthiness of AI systems, along with current examples of policies implementing each of the ten Principles.


AI's potential is now on full display, and the technology has reached the mainstream public. New AI tools are being released constantly, bringing opportunities and risks in equal measure. The 10 OECD AI Principles are a valuable guide for addressing those risks and capitalizing on those opportunities.

Countries are adopting AI-specific regulatory approaches and policies based on these Principles: by 2023, over 930 related policy initiatives across 70 jurisdictions had been reported to the OECD's AI policy hub. This underscores the influence and importance of the framework.

As the development and deployment of AI continue at pace, the Principles will provide a stable framework for the responsible implementation of trustworthy AI.

Victor Lopez Juarez is a lawyer and a technology enthusiast currently enrolled in the European Master in Law, Data, and Artificial Intelligence (EMILDAI) program. His primary research focus is on Artificial Intelligence and Data Protection.