AutoFair explains the AI Act

On Wednesday, June 14th, 2023, the European Parliament approved, by a vote of 499 to 28, its proposal for a regulation known as the Artificial Intelligence Act ("Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence"). Negotiations are now underway to reconcile the European Parliament's proposal with the proposal approved by the Council of the European Union on December 6, 2022, within the framework of the Czech Presidency. Quite possibly, the regulation will be passed before the elections to the European Parliament in the spring of 2024. Although the wording of the regulation is not yet final, it is already becoming clear what it will mean.

The Artificial Intelligence Act will define high-risk applications, in which algorithms will require additional validation. High-risk applications include uses in critical infrastructure (including dispatching in electric power systems or in the integrated rescue system), in the judiciary, in the automation of human resources management (HR), and in policing (e.g., biometric identification in public spaces and predictive policing). A number of other high-risk applications, for example in the automotive and aerospace industries or in financial services, are already regulated sectorally; there, both the sectoral rules and the rules of the Artificial Intelligence Act will apply.

In general, the Artificial Intelligence Act is the beginning of a much broader effort by the European Commission to set rules for the use of algorithms. The Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) will regulate responsibility for their operation, shifting the burden of proof in case of failure to the supplier. The Regulation on harmonised rules on fair access to and use of data (the Data Act) will further extend the restrictions on profiling in force since 2016 under the General Data Protection Regulation. The Regulation on a Single Market for Digital Services (Digital Services Act) and the Regulation on contestable and fair markets in the digital sector (Digital Markets Act) have already tightened the rules for operating platforms, including social networks. The Directive on measures for a high common level of cybersecurity (NIS2) addresses, among other things, who may work on algorithms for critical infrastructure and in what environment. More regulations and directives will surely follow.

More specifically, for businesses, the AI Act will immediately mean long calls with consultants who promise hundred-point plans and thousand-page manuals, and with lawyers who supply templates for several important contracts. In particular, a contract known as the Innovation agreement will allow the collection of protected attributes (e.g., ethnicity) that are not yet collected, for the purpose of validating the impact of algorithms on the subgroups defined by those protected attributes. In the best case, the implementation of the regulation will improve the processes of deploying algorithms in high-risk applications so that, in addition to developers and IT staff, experts in statistics (for validation and risk estimation) and lawyers (for assessing compliance with the rules) are also involved.
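To make the idea of subgroup validation concrete, the following is a minimal sketch (the function names, data, and metric choice are illustrative, not prescribed by the regulation) of one common check: comparing a model's rate of favourable decisions across groups defined by a protected attribute.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per protected group.

    decisions: list of 0/1 model outputs (1 = favourable decision)
    groups:    list of group labels (the protected attribute)
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative data: eight hiring decisions across two groups
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A validation process would compute such metrics on held-out data and flag gaps above an agreed threshold; which metric and threshold are appropriate depends on the application and remains a matter for the standardization work the regulation anticipates.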

For experts in related fields, this is a non-trivial challenge. Computer scientists will have to learn to think about bias in algorithms and data, and about their long-term effects. This includes, on the one hand, statistical reasoning about the detection of systematic error (bias) and, on the other hand, thinking about the social impacts of systems, which has not yet been widely articulated in computer science. Lawyers will have to learn about machine learning and statistics. Statisticians and European standardization organizations will work on definitions of expected values and risk measures of harm that computer scientists and statisticians could use. Such interdisciplinarity can be beneficial, if not always appealing, to a number of professionals.
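The statistical reasoning mentioned above can be illustrated with a textbook tool (the choice of test and the counts below are illustrative assumptions, not part of the regulation): a two-proportion z-test asking whether a model's error rates on two groups differ by more than sampling noise would explain.

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """Two-proportion z-test for a difference in error rates.

    err1, err2: number of erroneous decisions in each group
    n1, n2:     number of decisions in each group
    Returns the z statistic; |z| > 1.96 is significant at roughly
    the 5% level under the usual normal approximation.
    """
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)                 # pooled error rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts: 40/400 errors in group A vs. 80/400 in group B
z = two_proportion_z(40, 400, 80, 400)
significant = abs(z) > 1.96
```

With these counts the error rates are 10% vs. 20%, the gap is far larger than sampling noise, and the test flags it; in practice one would still have to ask whether a statistically significant gap is also a legally or socially relevant one.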

For citizens, in each member state an existing authority (e.g., the Office for Personal Data Protection) will be appointed, or a new authority established, to oversee algorithms in high-risk applications. Individuals and interest groups will be able to turn to it in cases of suspected violations of the Artificial Intelligence Act. Independently, it is possible to use Safety Gate (formerly known as RAPEX, https://ec.europa.eu/safety-gate-alerts/screen/webReport), which accepts submissions in all official EU languages.