AI Act on the horizon: Companies should start preparing

Before Christmas, the so-called trilogue negotiations between the European Parliament, the Council of the European Union and the European Commission culminated in a preliminary agreement on a regulation known as the Artificial Intelligence Act (“Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence”). A few more formal steps are still needed, but the regulation should be in force before the European Parliament elections in June 2024. Its provisions will then apply after transitional periods of 6 to 36 months, depending on the risks involved.

What are the biggest changes we can expect from the new regulation?

The AI Act brings regulation to all algorithms that learn from data. It distinguishes between systems developed for one specific use case and systems that are generally applicable. Where the specific use case is known, the AI Act applies a risk-based approach and distinguishes between several levels of risk.

Banning real-time facial recognition and social scoring

The AI Act will ban the use of algorithms in certain applications that are deemed too risky. These include social credit scoring systems, biometric identification systems in public spaces, and the estimation of emotional states in the workplace. The only exceptions will be for defence and home affairs. Even these uses must be backed by a specific investigation or legal proceedings in the case of retrospective analysis, or by specific approval, for example in the case of a terrorist attack, for real-time systems.

Rules for high-risk systems in a nutshell

For systems developed for one specific use case, the Act defines high-risk applications in which validation of the algorithms used (a “conformity assessment”) will be required, similar to what is already common in, for example, financial services. The assessment should focus on data quality, documentation, transparency, robustness and the accuracy of the statistical methods used, certain aspects of cyber security, and the scope for human oversight of the system. In this context, it is worth noting that in financial services, model validation costs an order of magnitude more than model development itself. High-risk applications involve the deployment of algorithms:

  • in critical infrastructure operations (including electricity, district heating, water, digital infrastructure, transport, banking, healthcare and integrated emergency response),
  • in the judiciary, policing, or asylum or visa decisions,
  • in education and training,
  • in human resources (HR) management automation, and
  • in recommendation systems used by social media platforms.

Many other high-risk applications, for example in the automotive and aerospace industries, in toys (under the General Product Safety Directive) or in financial services, are already regulated at the sectoral level. In these sectors, both the sectoral rules and those of the AI Act will apply.

Public-sector operators of systems in high-risk applications will have to describe the impact on fundamental human rights in the form of a “fundamental rights impact assessment”, similar to the way they provide privacy statements under the General Data Protection Regulation (GDPR) and statements under the Digital Services Act (DSA). This assessment is expected to be prepared in cooperation between the operator, the users of the system and possibly other affected parties.

Rules for language models such as ChatGPT

Unlike the Czech Presidency’s proposal and the European Parliament’s position of June 2023, the December agreement brings specific rules for the regulation of general-purpose artificial intelligence systems, which include large language models such as GPT-4. Such systems raise several complications, but the key ones seem to be:

  • Their multi-purpose nature. It is not known in advance in which applications they will be used, and therefore the risks associated with their use cannot be estimated in advance.
  • The range of data on which they are trained. A related difficulty is assessing whether the data have always been used in accordance with their licence terms.
  • The sheer scale of the systems, which makes them hard to describe. For large systems, it is not enough to prepare a small set of tests and declare them sufficient.

For all general-purpose AI systems, the AI Act will require a continuously updated description of the training process. Not only will the process need to respect copyright, but details of the data on which the training was performed will also need to be published. Data providers will be able to request that their data be removed from the training process (“opt-out”), which may be problematic given the cost of training such a system. For example, updating the GPT-4 system with new data, or retraining it after some data has been removed, is said to cost tens of millions of dollars, although more efficient specialised algorithms will certainly be developed.

Further regulation will apply to very large or very widely used general-purpose systems, that is, those whose training requires more than 10²⁵ operations (e.g. GPT-4) or which have more than ten thousand paying customers in the European Union (e.g. GPT-3); a rough sense of that threshold is given below. These systems will be required to have:

  • a risk management and incident reporting system,
  • security testing (“red teaming”), either in-house or outsourced, and
  • an account of the electricity consumed during the training process.
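For a sense of scale of the 10²⁵ threshold: a common rule of thumb, offered here as an illustration rather than anything prescribed by the Act, estimates training compute as roughly 6 operations per model parameter per training token. Under that assumption, a model with 100 billion parameters trained on 15 trillion tokens would need about 6 × 10¹¹ × 1.5 × 10¹³ ≈ 9 × 10²⁴ operations, just under the threshold; the training of GPT-4 is commonly estimated to have exceeded it.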

General-purpose AI systems used, for example, in financial services may be regulated even more strictly. Operators of individual systems will have to register them with a new European Union body, the “AI Office”. Conversely, there will be certain exemptions for so-called open-source systems, the details of which are still under discussion.

Big European countries support their national champions

In November, the governments of France, Germany and Italy opposed this regulation of general-purpose systems. France’s involvement is said to have been motivated by a desire to cultivate Mistral AI as a national champion and rival to OpenAI. However, Mistral AI itself, which made its name with the open-source Mixtral 8x7B model, ultimately supported the regulation. Similarly, Germany’s involvement was apparently closely tied to Aleph Alpha, which aims to develop AI for governments and large companies and promotes the concept of “data sovereignty”; the proposal eventually gained support there as well. Each of these companies raised around half a billion euros from investors this year, putting them in the category of “unicorns”, technology companies valued at more than a billion dollars. For the citizens of the Czech Republic, it may be a small consolation that some of the related research at the European level is coordinated by scientists from the AI Center at FEE CTU.

The clock is ticking

Most companies are expected to need to map which of their systems will be regulated by the Act, add clauses requiring compliance with the Act to contracts with the suppliers of such systems, set up processes to monitor compliance and manage risk, and quite possibly work with suppliers to modify the systems in question. As the approval timetable shows, it is high time to get started.


This article is part of a commentary series on the AI Act in Hospodářské noviny by Jakub Mareček, AIC FEE CTU.