Blog Post

How can AI improve corporate criminal compliance?

by Nicolò Di Paco

Nicolò Di Paco is a PhD candidate in Law at the University of Pisa and a visiting researcher at the DCU Law and Tech Research Cluster.

At present, companies face increasing difficulties in fulfilling their self-regulatory tasks. First of all, corporations nowadays deal with a broad, non-systematic, and chaotic legal framework. Secondly, the prospect of “integrated compliance” – that is, a compliance system that brings together all the necessary control systems, such as criminal compliance, General Data Protection Regulation (GDPR) compliance, anti-corruption compliance, and so on – implies a large amount of information that needs to be systematized. Finally, companies deal with a substantial quantity of data that cannot be managed through traditional methods of archiving.

Faced with these normative and practical conundrums, the inadequacy of the traditional ways of collecting and comparing data and information (so-called “paper-based compliance”) to meet organizational duties becomes tangible. The need to put in place an adequate and economically sustainable crime-prevention system could push corporations towards a technological breakthrough: the use of Artificial Intelligence (AI), and other digital technologies, in the making of the compliance program.

To describe the employment of AI and other digital technologies in the compliance structure, the term “digital criminal compliance” could be used. A premise is necessary: in the absence of a generally accepted definition of AI, reference can be made to the definition developed by the Independent High-Level Expert Group on Artificial Intelligence, set up by the European Commission, which – in the Ethics Guidelines for Trustworthy AI – states that “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions”.

AI could be used both in the design (predisposition) phase and in the implementation (actualization) phase of the compliance program. As to the former, digital technologies could be deployed in all the stages that make up the creation of the compliance program. The legal-tech application could map the risks: in other words, it could identify the sensitive areas by surveying all the relevant data, including the judicial history of the corporation (see the sketch below). Indeed, by “studying” the cases in which crimes were committed inside the company, the tool could pinpoint the organizational breach that made the offense possible. Moreover, AI could be used in the gap-analysis phase, in order to evaluate the suitability of the control system to prevent offenses. Furthermore, one could imagine the creation of “digitised” protocols, in which the AI checks decision-making procedures against the law. Yet it is in the actualization stage that the use of AI could prove most important and beneficial.
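To make the idea of automated risk mapping more concrete, the following is a minimal, purely hypothetical sketch in Python of how past incident records might be aggregated into a simple risk ranking of business areas. The data structure, field names, and scoring rule are illustrative assumptions and do not refer to any existing compliance tool.

```python
from collections import Counter

# Hypothetical records of past incidents inside the company
# (business area, offense, and whether an internal control failed).
past_incidents = [
    {"area": "procurement", "offense": "corruption", "control_failed": True},
    {"area": "procurement", "offense": "fraud", "control_failed": True},
    {"area": "finance", "offense": "false accounting", "control_failed": False},
    {"area": "hr", "offense": "undeclared work", "control_failed": True},
]

def map_risks(incidents):
    """Rank business areas by a naive risk score:
    one point per incident, one extra point if a control failed."""
    scores = Counter()
    for inc in incidents:
        scores[inc["area"]] += 1
        if inc["control_failed"]:
            scores[inc["area"]] += 1
    return scores.most_common()

# Areas with the highest scores are the "sensitive areas"
# the compliance program should address first.
for area, score in map_risks(past_incidents):
    print(area, score)
```

A real tool would of course draw on far richer data (judicial history, internal audits, sector benchmarks), but the logic is the same: past organizational breaches point to the areas where protocols are most needed.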

The possible modalities of intervention are many. First of all, AI could carry out continuous, real-time monitoring – through the analysis of large amounts of data drawn from emails, PDFs, internal reports, access logs to telematic areas, social networks, etc. – ensuring efficient prevention of illegal and near-illegal conduct. In this way, the corporation would genuinely be able to intervene before a crime occurs.
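Purely by way of illustration, such a continuous-monitoring layer could rest on a standard anomaly-detection model. The sketch below is a simplified, assumption-laden example: it uses scikit-learn’s IsolationForest on synthetic “activity” features invented for the example (such as after-hours logins or the volume of attachments sent) and merely flags unusual records for human review; it does not describe any real monitoring product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic activity features per employee-day, e.g.
# [after-hours logins, e-mails sent externally, MB of attachments].
normal_activity = rng.normal(loc=[2, 10, 5], scale=[1, 3, 2], size=(500, 3))
unusual_activity = rng.normal(loc=[9, 40, 60], scale=[1, 5, 10], size=(5, 3))
activity = np.vstack([normal_activity, unusual_activity])

# Train an unsupervised anomaly detector on the observed activity.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(activity)

# -1 marks records the model considers anomalous; these would be
# escalated to the compliance function for human review, not acted
# upon automatically.
labels = detector.predict(activity)
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} records flagged for review:", flagged)
```

The design point worth noting is that the model only escalates records to the compliance function; whether the conduct is actually suspicious remains a human judgment, which also matters for the Art. 22 GDPR concerns discussed below.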

AI could also be employed in updating the compliance program. Here the possible uses are twofold: on one side, AI could automatically reform the compliance structure, changing the protocols itself; on the other side, the digital instrument could flag normative or corporate changes to the supervisory board and the board of directors, so that they could then proceed to revise the organizational model. Furthermore, the digital criminal compliance tool could be used for the documentation and archiving of the prevention activities carried out by the company, for evidential purposes regarding the adequacy and efficiency of the organizational model. Finally, intelligent tools might be useful in managing and screening whistleblowing reports, ensuring the balance between the confidentiality of the whistleblower and the need to verify the reported misconduct.
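As a final illustration, the screening of whistleblowing reports could be sketched as follows: the reporter’s identity is pseudonymized before the report is routed by a simple keyword triage, so that the investigating function sees the allegation but not the name. Only Python’s standard library is assumed; the categories, keywords, and routing rules are invented for the example.

```python
import hashlib
import secrets

# Keywords used to route a report to the competent function
# (purely illustrative categories).
TRIAGE_RULES = {
    "corruption": ["bribe", "kickback", "public official"],
    "accounting": ["invoice", "false accounting", "off the books"],
    "safety": ["injury", "accident", "protective equipment"],
}

def screen_report(reporter_name: str, text: str, salt: bytes) -> dict:
    """Pseudonymize the reporter and classify the report.

    The investigating function receives only the pseudonym and the
    category, preserving confidentiality while allowing verification."""
    pseudonym = hashlib.sha256(salt + reporter_name.encode()).hexdigest()[:12]
    lowered = text.lower()
    categories = [
        cat for cat, keywords in TRIAGE_RULES.items()
        if any(k in lowered for k in keywords)
    ]
    return {"pseudonym": pseudonym, "categories": categories or ["unclassified"]}

salt = secrets.token_bytes(16)  # kept by the whistleblowing officer only
print(screen_report("Jane Doe", "A supplier offered a kickback on the invoice.", salt))
```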

In sum, the benefits that digital criminal compliance could bring to corporations are numerous. On one side, the automation of risk assessment, with continuous monitoring, could save time and resources, and consequently large sums of money. On the other side, the computational and analytical capabilities of the software could ensure the detection of anomalies, illegal behavioral patterns, and organizational breaches hidden from human eyes.

To be sure, many perils and legal issues arise from the use of AI in the compliance structure. At the outset, there is the continuous monitoring of employees and the control of their behavior, their emails, PDFs, reports, etc., which heavily affects their labor rights and their privacy. Then there is the matter of data processing and data protection: when recording and processing data, the AI – i.e. the corporation which uses it – must act in conformity with the provisions of the GDPR and is required to guarantee the rights provided in favour of the data subject, including the “right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (Art. 22 GDPR). Additionally, it seems necessary to examine whether the corporation could be held liable for “organizational fault” if a crime is committed owing to bad organization resulting from an AI mistake. Lastly, one must ask: could the judiciary use digital technologies to evaluate the compliance program’s suitability? In this regard, the risk of a fideistic, a priori acceptance of AI findings must be stressed, as well as the need for a technical and adversarial procedure concerning the digital outcome. Moreover, it is essential to avoid any kind of AI bias, concerning, for example, the territorial location of the corporation.

In conclusion, the adoption of digital technologies could represent a fundamental and necessary step towards confronting the obstacles that companies encounter in their crime-prevention activities; nevertheless, the perils that may arise must not be ignored and should be addressed, in light of the fundamental interests involved. Certainly, the proposed AI Act could constitute a productive starting point for confronting the issues that digital criminal compliance presents.