AI Act: A User’s Guide in Three Easy Steps

Following the release of the Corrigendum to the AI Act on 19 April 2024, it seems reasonable to suggest the “next steps” for businesses, SMEs or otherwise. Note that these suggestions reference the numbering of the final version of the AI Act, rather than that of earlier versions.

Step 1: Do I use artificial intelligence as per the definition of the AI Act?

There are many definitions of AI. A popular witticism says that AI is whatever algorithms we do not understand, yet. Indeed, one could argue that many subfields of AI, such as voice recognition, optical character recognition, or AI opponents in games, have been commoditized to the extent that they are no longer perceived as AI. Nevertheless, the AI Act adopts a much more general definition, drawing upon the OECD definition.

Article (12) of the AI Act sets out a test with a number of criteria, which together cover a wide variety of algorithms and their hardware implementations. Notably, the criteria include:

  • Algorithmic: algorithms process inputs into outputs, possibly with side effects, that is, changes of state that it is convenient not to regard as outputs. The AI Act clearly envisions that a wide variety of outputs are covered: “outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments”.
  • Machine-based: the algorithm needs to be carried out automatically, rather than by a person. 
  • Well-defined: the AI system is required to “achieve certain objectives”. While this refers to empirical risk minimization in many applications such as classification or regression, it would surely also cover sorting algorithms, for instance.
  • Data-driven: the AI system adjusts “models or algorithms” based on “inputs or data”. A simple deterministic algorithm such as bubble sort does not learn from the data and is clearly not covered by the definition. Simple deterministic algorithms learning from data, such as sample sort and linear regression, may be deemed to perform “narrow procedural task[s]” within the meaning of Article (53), but this is likely to be the subject of much litigation. Any use of neural networks, as in much of generative AI, is certainly covered; see the sketch after this list.
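
To illustrate the distinction the definition draws, consider a minimal sketch in Python. Bubble sort applies the same fixed rule regardless of any past data, whereas ordinary least squares infers the parameters of a model from the data it is given. The function names and the toy data below are ours, chosen purely for illustration.

    # Contrast between a fixed deterministic procedure (bubble sort) and a
    # data-driven procedure that adjusts a model based on its inputs (least squares).
    import numpy as np

    def bubble_sort(values):
        """Deterministic: the same comparison rule is applied, whatever the data."""
        items = list(values)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    def fit_linear_model(x, y):
        """Data-driven: the slope and intercept are inferred from the inputs."""
        design = np.column_stack([x, np.ones_like(x)])
        coefficients, *_ = np.linalg.lstsq(design, y, rcond=None)
        return coefficients  # (slope, intercept) learned from the data

    print(bubble_sort([3, 1, 2]))      # always [1, 2, 3]
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.1, 0.9, 2.1, 2.9])
    print(fit_linear_model(x, y))      # approximately [0.96, 0.06]

Under the definition, the second routine adjusts a model based on its inputs or data, whereas the first does not.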

Step 2: How can I avoid regulation under the AI Act?

The AI Act offers several “get out of jail free” cards in the form of exceptions and exclusions. The following stand out:

  • so-called “material influence exception” of Article (53). If the AI system does not “materially influence the outcome of decision-making”, human or automated, then it may be relieved of the requirements placed on high-risk systems. This exception has been introduced only very recently and may be the subject of some litigation before its meaning becomes completely clear.
  • so-called “no humans involved” exception of Article (53). If the AI system is not “intended to improve the result of a previously completed human activity”, then it may be relieved of the requirements placed on high-risk systems. Again, this exception has been introduced only very recently.
  • so-called “research exception” of Article (2) point 8 suggests that the provisions do not apply “to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service”, but that “testing in real world conditions shall not be covered by that exclusion”. One can hence develop a system freely and undergo the compliance assessment only prior to testing the system in real-world conditions.
  • so-called “open-source exception” of Article (89) suggests that providers “making accessible to the public tools, services, processes, or AI components other than general-purpose AI models” that are open-source need not comply with many of the requirements of the AI Act. Further exceptions for open-source general-purpose models are established in Articles (102)-(104).
  • so-called “national-security exception” of Article (2) point 3 suggests that both developers and users of systems “exclusively for military, defence or national security purposes” are excluded from the provisions of the AI Act. It is plausible that there may be litigation as to whether monitoring internet traffic, for instance, falls within the national-security exception. 
  • so-called “law-enforcement exception” of Articles (33)-(35) and Article (73), which is limited to near-real-time biometric identification and to the “search for certain victims of crime and missing persons”, a “terrorist attack”, and the “suspects of the criminal offences […] punishable by a […] sentence […] for a maximum period of at least four years”, or to the protection of critical infrastructure or national borders.
  • so-called “national-interest exception” of Article (46) suggests that member states can ask for a “derogation from conformity assessment procedure”. High-risk AI systems can be excepted from the obligations on the grounds of “public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets”, for a finite duration, which is not bounded from above. It is plausible that, for long-term derogations from the AI Act, the European Commission may put substantial pressure on the member state. A similar exception is introduced in Article (130), which addresses the need to protect the “health and safety of persons, the protection of the environment and climate change and for society as a whole,” including the “protection of key industrial and infrastructural assets”.
  • so-called “financial-services exception” of Article (58). While the AI Act specifically suggests that credit risk rating of individuals (the “evaluation of credit score or creditworthiness” of “natural persons”) is a high-risk application, it excepts “detecting fraud in the offering of financial services” and “prudential purposes” in terms of systemic-risk calculations.

An implied exception is, essentially, for very rich corporations. If you can afford to pay the fines of Article (99):

  • Engaging in prohibited practices: up to 35 million EUR or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher;
  • Non-compliance in the development or use of high-risk systems: up to 15 million EUR or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

it is feasible to engage in any conduct you like and then request a review of the fines by the Court of Justice of the European Union. Notice that the General Data Protection Regulation (GDPR) has also instituted high fines, but their use has been cautious at best. For example, the GDPR Enforcement Tracker (https://www.enforcementtracker.com/) suggests that the highest fine to date has amounted to 1.2 billion EUR, which has been less than 1 % of the total worldwide annual turnover of Meta Platforms, the operator of Facebook, in 2023.
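
To put the caps into perspective, here is a minimal sketch of the arithmetic of Article (99); the turnover figure below is a placeholder chosen for illustration, not a quotation of any company’s accounts.

    # Fine caps sketched from Article (99): the higher of a fixed amount and a
    # percentage of total worldwide annual turnover for the preceding financial year.

    def fine_cap(turnover_eur: float, fixed_eur: float, percentage: float) -> float:
        """Return the maximum administrative fine: whichever of the two bounds is higher."""
        return max(fixed_eur, percentage * turnover_eur)

    turnover = 120e9  # hypothetical turnover of a large platform, in EUR

    prohibited = fine_cap(turnover, 35e6, 0.07)  # prohibited practices
    high_risk = fine_cap(turnover, 15e6, 0.03)   # non-compliance around high-risk systems

    print(f"Prohibited practices cap: {prohibited / 1e9:.1f} billion EUR")
    print(f"High-risk non-compliance cap: {high_risk / 1e9:.1f} billion EUR")
    # With a 120 billion EUR turnover, the caps are 8.4 and 3.6 billion EUR,
    # an order of magnitude above the largest GDPR fine to date (1.2 billion EUR).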

On the other end of the spectrum, Articles (109) and (146) suggest that “compliance with those obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups”, but it is not clear what this would entail. Article (143) establishes priority access to regulatory sandboxes for start-ups and Article (145) suggests that further support will be available for start-ups.

Another implied exception is, essentially, for corporations with little revenue from the European Union. You can stop offering your products and services to users in the European Union, although implementing this fully can be fraught with legal challenges.

Step 3: If I cannot avoid the AI Act, what shall I do?

Once you know that you operate an AI system covered by the regulation, you need to follow the rules set out by the AI Act (“harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems”). Their precise nature will depend on the risk associated with the use of the AI system.

While there are very many risks associated with the use of AI systems, cf. the AI Risk Atlas (https://www.ibm.com/docs/en/watsonx-as-a-service?topic=ai-risk-atlas), the European Union essentially distinguishes five risk levels. For systems with a clearly defined purpose, there are three risk levels dependent on the purpose. For general-purpose systems, applicable in many use cases, there are two additional levels introduced in the December 2023 version. In decreasing order of regulatory burden, these are:

  • Prohibited systems specified by Article (28), focussing on social-scoring systems, Article (29), focussing on manipulative techniques, Article (30), focussed on biometric categorisation to infer an individual’s political opinions, religious beliefs, race, sex life or sexual orientation, Articles (31) and (42), focussed on profiling, Article (32), focussed on remote biometric identification in public spaces, and Article (44), focussing on emotion recognition.
  • Systemically important general-purpose systems of Articles (51) and (110)-(115). The rules are listed in Section 2 of Annex XI. Article (128) suggests that when “the intended purpose of the system changes, that AI system should be considered to be a new AI system which should undergo a new conformity assessment”, although Article (109) restricts the requirements in cases of fine-tuning. 
  • General-purpose systems not deemed systemically important, of Articles (97)-(102). The rules are listed in Section 1 of Annex XI.
  • High-risk systems, which within the scope of Article (48) and possibly delegated acts could harm the fundamental rights of citizens. Article (50) suggests that any product that “undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation” is deemed high-risk. See below for an extensive list.
  • Low-risk systems. These come only with guidelines. 
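
The five tiers above can be summarised, very roughly, as a triage checklist. The following is a deliberately simplified sketch: the enumeration and the yes/no questions are ours and are no substitute for a legal assessment against the individual articles cited above.

    # A deliberately simplified triage over the five tiers described above.
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited practice"
        GPAI_SYSTEMIC = "general-purpose, systemically important"
        GPAI = "general-purpose, not systemically important"
        HIGH_RISK = "high-risk"
        LOW_RISK = "low-risk (guidelines only)"

    def triage(prohibited_practice: bool, general_purpose: bool,
               systemically_important: bool, high_risk_purpose: bool) -> RiskTier:
        """Map coarse yes/no answers onto the five tiers, in decreasing order of burden."""
        if prohibited_practice:
            return RiskTier.PROHIBITED
        if general_purpose:
            return RiskTier.GPAI_SYSTEMIC if systemically_important else RiskTier.GPAI
        return RiskTier.HIGH_RISK if high_risk_purpose else RiskTier.LOW_RISK

    # Example: a purpose-built CV-screening tool (human resources, hence a high-risk purpose).
    print(triage(prohibited_practice=False, general_purpose=False,
                 systemically_important=False, high_risk_purpose=True))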

Specifically, high-risk systems include AI systems:

  • Article (54): handling “biometric data”;
  • Article (55): for the “management and operation of critical infrastructure”, which includes telecommunications, transport, energy systems, or water distribution;
  • Article (56): in “education”;
  • Article (57): in human resources; 
  • Article (58): in provision of “essential public assistance benefits and services” including “healthcare services, social security benefits, social services” and credit risk assessment in retail banking; 
  • Article (59): used by law enforcement authorities; 
  • Article (60): in “migration, asylum and border control management”;
  • Article (61): in “administration of justice and democratic processes”;
  • Article (62): affecting voting behaviour in an “election or referendum”.

For high-risk systems, one will want to obtain the “CE marking” and display it digitally or physically, as befits the product or service. In order to obtain the CE marking, the “conformity assessment” requires:

  • An “authorised representative” for the purposes of communications with the newly-established AI office and national regulators. See Article (82). 
  • Registration in a newly created database of high-risk AI systems. See Article (131). 
  • A risk-management system and a quality-management system. Article (65) provides only high-level guidance, but a number of model risk-management systems in financial services could serve as a blueprint. Article (81) elaborates upon quality management. One option is to deploy ISO/IEC 42001:2023 (https://www.iso.org/standard/81230.html) or AI risk management systems (such as https://credo.ai/).
  • Collection of protected attributes in order to evaluate bias, as per Articles (70) and (138)-(141). Within so-called “regulatory sandboxes”, the protected attributes can be collected even when banned by other regulations and national laws. Ivo Jeník has an extensive monograph, “How to build a regulatory sandbox”.
  • Monitoring the quality and relevance of the data sets used. The requirement is established in Article (66) and data governance is considered in Article (67). Compliance with standards such as ISO 8000 (Data quality) is a great start. Open-source toolkits such as AI Fairness 360 (https://ai-fairness-360.org/) make it possible to evaluate bias across a variety of applications; see the first sketch after this list. See also ISO/IEC TR 24027:2021. There are a number of requirements listed in subsequent Articles (67)-(70), including consideration of situations “where data outputs influence inputs for future operations (feedback loops).” Further requirements for general-purpose systems are set out in Articles (105)-(108).
  • Provision of technical documentation, record-keeping, and transparency. Articles (71), (72), and (132)-(134) establish some of the details. Notably, humans should be made aware whenever they are “interacting with an AI system” or with contents generated by AI.
  • Human oversight. The requirement is established in Article (66) and elaborated in Article (73). Notably, humans need to “oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle”.
  • Robustness and accuracy. The requirement is established in Article (66) and elaborated upon in Articles (74)-(75). This is perhaps the most challenging requirement when it comes to neural networks and general-purpose AI systems. There are only provisional guidelines for the most common applications and methods, such as ISO/IEC TS 4213 (Assessment of machine learning classification performance) for classification and ISO/IEC TR 24029-1 (Assessment of the robustness of neural networks) for neural networks; see the second sketch after this list.
  • Focus on cybersecurity. The requirement is established in Article (66) and elaborated in Articles (76)-(78). This should cover traditional cybersecurity concerns as well as data-poisoning and adversarial attacks on neural networks. Compliance with NIS2 and standards such as ISO/IEC 27001:2022 and the NIST CSF is a great start. Open-source toolkits such as the Adversarial Robustness Toolbox (https://github.com/Trusted-AI/adversarial-robustness-toolbox) aid with the AI-specific attacks.
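
As a first sketch of what the bias evaluation above might look like in practice, the following computes two common fairness diagnostics, the statistical parity difference and the disparate impact ratio, using pandas only. The column names and the toy data are placeholders; toolkits such as AI Fairness 360 provide these metrics, and many more, out of the box.

    # A minimal bias check on model outcomes, using pandas only.
    import pandas as pd

    outcomes = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],  # protected attribute
        "approved": [1,   1,   0,   1,   1,   0,   0,   0],    # model decision
    })

    rates = outcomes.groupby("group")["approved"].mean()
    statistical_parity_difference = rates["B"] - rates["A"]
    disparate_impact = rates["B"] / rates["A"]

    print(f"Approval rates:\n{rates}")
    print(f"Statistical parity difference: {statistical_parity_difference:.2f}")
    print(f"Disparate impact ratio: {disparate_impact:.2f}")  # values far from 1 warrant review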
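
As a second sketch, robustness can be probed crudely by comparing accuracy on clean inputs with accuracy on perturbed inputs. The noise model and the tolerance below are ours and purely illustrative; a thorough evaluation would rely on dedicated tooling such as the Adversarial Robustness Toolbox for genuinely adversarial perturbations.

    # A crude robustness probe: accuracy on clean inputs versus noise-perturbed inputs.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    rng = np.random.default_rng(0)
    X_noisy = X_test + rng.normal(scale=0.3, size=X_test.shape)  # random perturbation

    clean_accuracy = model.score(X_test, y_test)
    noisy_accuracy = model.score(X_noisy, y_test)
    print(f"Clean accuracy:     {clean_accuracy:.3f}")
    print(f"Perturbed accuracy: {noisy_accuracy:.3f}")
    if clean_accuracy - noisy_accuracy > 0.05:  # illustrative tolerance, not a legal threshold
        print("Accuracy degrades noticeably under perturbation; investigate further.")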

Notice, however, that Article (125) suggests that the “conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility”, rather than requiring a third-party assessment. While one may benefit from the services of consultants, doing so does not relieve the company of its responsibility.

It seems likely that the longer a “provider” or “deployer” of an AI system waits, the more resources will be available. At the same time, one needs to be compliant by the end of 2024 (for prohibited uses) or by mid-2026 (for high-risk systems), or risk fines subsequently.