{"id":232,"date":"2024-04-28T07:31:02","date_gmt":"2024-04-28T07:31:02","guid":{"rendered":"https:\/\/humancompatible.org\/?p=232"},"modified":"2024-06-12T11:46:56","modified_gmt":"2024-06-12T11:46:56","slug":"ai-act-a-users-guide-in-three-easy-steps","status":"publish","type":"post","link":"https:\/\/humancompatible.org\/index.php\/2024\/04\/28\/ai-act-a-users-guide-in-three-easy-steps\/","title":{"rendered":"AI Act: A User&#8217;s Guide in Three Easy Steps"},"content":{"rendered":"\n<p>Following the release of the Corrigendum to the AI Act on April 19th, 2024, it seems feasible to suggest the \u201cnext steps\u201d for businesses, SME or otherwise. Notice that these suggestions reference the articles in the numbering of the final version of the AI Act, rather than those of the earlier versions.&nbsp;<\/p>\n\n\n\n<p><strong>Step 1: Do I use artificial intelligence as per the definition of the AI Act?<\/strong><\/p>\n\n\n\n<p>There are many definitions of AI. A popular witticism says that AI comprises the algorithms that we do not understand yet. Indeed, one could argue that many subfields of AI, such as voice recognition, optical character recognition, or AI opponents in games, have been commoditized to the extent that they are no longer perceived as AI. Nevertheless, the AI Act embodies a much more general definition, drawing upon the OECD definition.<\/p>\n\n\n\n<p>Article (12) of the AI Act suggests a test with a number of criteria, which nonetheless covers a wide variety of algorithms and their hardware implementations. Notably, the criteria feature:&nbsp;<\/p>\n\n\n\n<ul><li>Algorithmic: algorithms process inputs into outputs, possibly having some side effects in terms of changing a state that is convenient not to see as an output. 
The AI Act clearly envisions that a wide variety of outputs are covered: \u201coutputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments\u201d.<\/li><li>Machine-based: the algorithm needs to be carried out automatically, rather than by a person.&nbsp;<\/li><li>Well-defined: the AI system is required to \u201cachieve certain objectives\u201d. While this refers to empirical risk minimization in many applications such as classification or regression, it would surely also cover sorting algorithms, for instance.&nbsp;<\/li><li>Data-driven: the AI system adjusts \u201cmodels or algorithms\u201d based on \u201cinputs or data\u201d. A simple deterministic algorithm such as bubble sort does not learn from the data and is clearly not covered by the definition. Simple deterministic algorithms learning from data, such as sample sort and linear regression, may be deemed to perform \u201cnarrow procedural task[s]\u201d of Article (53), but this is likely to be the subject of much litigation. Any uses of neural networks, as in much of generative AI, are certainly covered.<\/li><\/ul>\n\n\n\n<p><strong>Step 2: How do I avoid regulation under the AI Act?<\/strong><\/p>\n\n\n\n<p>The AI Act offers a number of \u201cget out of jail free\u201d cards known as exceptions and exclusions. Several stand out:<\/p>\n\n\n\n<ul><li>so-called \u201cmaterial influence exception\u201d of Article (53). If the AI system does not \u201cmaterially influence the outcome of decision-making\u201d, human or automated, then it may be relieved of the requirements placed on high-risk systems. This exception was introduced only very recently and may be the subject of some litigation before its meaning becomes completely clear.&nbsp;&nbsp;<\/li><li>so-called \u201cno humans involved\u201d exception of Article (53). 
If the AI system is not \u201cintended to improve the result of a previously completed human activity\u201d, then it may be relieved of the requirements placed on high-risk systems. Again, this exception was introduced only very recently.&nbsp;&nbsp;<\/li><li>so-called \u201cresearch exception\u201d of Article (2) point 8 suggests that the provisions do not apply \u201cto any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service\u201d, but that \u201ctesting in real world conditions shall not be covered by that exclusion\u201d. One can hence develop a system freely and undergo conformity assessment only prior to testing it in real-world conditions.&nbsp;<\/li><li>so-called \u201copen-source exception\u201d of Article (89) suggests that providers \u201cmaking accessible to the public tools, services, processes, or AI components other than general-purpose AI models\u201d that are open-source need not comply with many of the requirements of the AI Act. Further exceptions for open-source general-purpose models are established in Articles (102)-(104).<\/li><li>so-called \u201cnational-security exception\u201d of Article (2) point 3 suggests that both developers and users of systems \u201cexclusively for military, defence or national security purposes\u201d are excluded from the provisions of the AI Act. 
It is plausible that there may be litigation as to whether monitoring internet traffic, for instance, falls within the national-security exception.&nbsp;<\/li><li>so-called \u201claw-enforcement exception\u201d of Articles (33)-(35) and Article (73), which is limited to near-real-time biometric identification and to the \u201csearch for certain victims of crime and missing persons\u201d, \u201cterrorist attack\u201d, and the \u201csuspects of the criminal offences [&#8230;] punishable by a [&#8230;] sentence [&#8230;] for a maximum period of at least four years\u201d, or to the protection of critical infrastructure or national borders.<\/li><li>so-called \u201cnational-interest exception\u201d of Article (46) suggests that member states can ask for a \u201cderogation from conformity assessment procedure\u201d. High-risk AI systems can be excepted from the obligations on the grounds of \u201cpublic security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets\u201d, for a finite duration, which is not bounded from above. It is plausible that for long-term derogations from the AI Act, the European Commission may put substantial pressure on the member state. A similar exception is introduced in Article (130), which addresses the need to protect \u201chealth and safety of persons, the protection of the environment and climate change and for society as a whole,\u201d including the \u201cprotection of key industrial and infrastructural assets\u201d.<\/li><li>so-called \u201cfinancial-services exception\u201d of Article (58). 
While the AI Act specifically suggests that credit risk rating of individuals (\u201cevaluation of credit score or creditworthiness\u201d of \u201cnatural persons\u201d) is considered a high-risk application, it excepts \u201cdetecting fraud in the offering of financial services\u201d and \u201cprudential purposes\u201d in terms of systemic-risk calculations.&nbsp;<\/li><\/ul>\n\n\n\n<p>An implied exception is, essentially, for very rich corporations. If you can afford to pay the fines of Article (99):<\/p>\n\n\n\n<ul><li>Engaging in prohibited practices: 35 million EUR or 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher,<\/li><li>Misbehaving in the development or use of high-risk systems: 15 million EUR or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher,<\/li><\/ul>\n\n\n\n<p>it is feasible to engage in any conduct you like, and then request the review of the fines by the Court of Justice of the European Union. Notice that the General Data Protection Regulation (GDPR) has also instituted high fines, but their use has been cautious at best. For example, the GDPR Enforcement Tracker (<a href=\"https:\/\/www.enforcementtracker.com\/\">https:\/\/www.enforcementtracker.com\/<\/a>) suggests that the highest fine has amounted to 1.2 billion EUR, which has been less than 1 % of the total worldwide annual turnover of Meta Inc., the operator of Facebook, in 2023.&nbsp;<\/p>\n\n\n\n<p>At the other end of the spectrum, Articles (109) and (146) suggest that \u201ccompliance with those obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups\u201d, but it is not clear what this would entail. 
Article (143) establishes priority access to regulatory sandboxes for start-ups, and Article (145) suggests that further support will be available for start-ups.&nbsp;<\/p>\n\n\n\n<p>Another implied exception is, essentially, for corporations with little revenue from the European Union. You can stop offering your products and services to users in the European Union. Implementing this fully can be fraught with legal challenges.<\/p>\n\n\n\n<p><strong>Step 3: If I cannot avoid the AI Act, what shall I do?<\/strong><\/p>\n\n\n\n<p>Once you know that you operate an AI system that is covered by the regulation, you need to follow the rules set out by the AI Act (\u201charmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems\u201d). Their precise nature will be based on the risk associated with the use of the AI system.&nbsp;<\/p>\n\n\n\n<p>While there are very many risks associated with the use of AI systems, cf. the AI Risk Atlas (<a href=\"https:\/\/www.ibm.com\/docs\/en\/watsonx-as-a-service?topic=ai-risk-atlas\">https:\/\/www.ibm.com\/docs\/en\/watsonx-as-a-service?topic=ai-risk-atlas<\/a>), the European Union essentially distinguishes <strong>five risk levels<\/strong>. For systems with a clearly defined purpose, there are three risk levels dependent on the purpose. For general-purpose systems, applicable in many use cases, there are two additional levels introduced in the December 2023 version. 
In decreasing order of regulation, these are:<\/p>\n\n\n\n<ul><li>Prohibited systems specified by Article (28), focussing on social-scoring systems, Article (29), focussing on manipulative techniques, Article (30), focussing on biometric categorisation to infer an individual\u2019s political opinions, religious beliefs, race, sex life or sexual orientation, Articles (31) and (42), focussing on profiling, Article (32), focussing on remote biometric identification in public spaces, and Article (44), focussing on emotion recognition.&nbsp;<\/li><li>Systemically important general-purpose systems of Articles (51) and (110)-(115). The rules are listed in Section 2 of Annex XI. Article (128) suggests that when \u201cthe intended purpose of the system changes, that AI system should be considered to be a new AI system which should undergo a new conformity assessment\u201d, although Article (109) restricts the requirements in cases of fine-tuning.&nbsp;<\/li><li>General-purpose systems not deemed systemically important of Articles (97)-(102). The rules are listed in Section 1 of Annex XI.<\/li><li>High-risk systems, which within the scope of Article (48) and possibly delegated acts could harm fundamental rights of citizens. Article (50) suggests that any product that \u201cundergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation\u201d is deemed high-risk. See below for an extensive list.<\/li><li>Low-risk systems. 
These come only with guidelines.&nbsp;<\/li><\/ul>\n\n\n\n<p>Specifically, high-risk systems include AI systems:<\/p>\n\n\n\n<ul><li>Article (54): handling \u201cbiometric data\u201d;<\/li><li>Article (55): for the \u201cmanagement and operation of critical infrastructure\u201d, which includes telecommunications, transport, energy systems, or water distribution;<\/li><li>Article (56): in \u201ceducation\u201d;<\/li><li>Article (57): in human resources;&nbsp;<\/li><li>Article (58): in provision of \u201cessential public assistance benefits and services\u201d including \u201chealthcare services, social security benefits, social services\u201d and credit risk assessment in retail banking;&nbsp;<\/li><li>Article (59): used by law enforcement authorities;&nbsp;<\/li><li>Article (60): in \u201cmigration, asylum and border control management\u201d;<\/li><li>Article (61): in \u201cadministration of justice and democratic processes\u201d;<\/li><li>Article (62): affecting voting behaviour in an \u201celection or referendum\u201d.<\/li><\/ul>\n\n\n\n<p>For high-risk systems, one will want to obtain the \u201cCE marking\u201d and display it digitally or physically, as befits the product or service. In order to obtain the \u201cCE marking\u201d, \u201c<strong>conformity assessment<\/strong>\u201d requires:<\/p>\n\n\n\n<ul><li>An \u201cauthorised representative\u201d for the purposes of communications with the newly established AI Office and national regulators. See Article (82).&nbsp;<\/li><li>Registration in a newly created database of high-risk AI systems. See Article (131).&nbsp;<\/li><li>A risk-management system and a quality-management system. Article (65) provides only high-level guidance, but a number of model risk-management systems in financial services could serve as a blueprint. Article (81) elaborates upon the quality management. 
One option is to deploy ISO\/IEC 42001:2023 (<a href=\"https:\/\/www.iso.org\/standard\/81230.html\">https:\/\/www.iso.org\/standard\/81230.html<\/a>) or AI risk-management systems (such as https:\/\/credo.ai\/).<\/li><li>Collection of protected attributes in order to evaluate bias, as per Articles (70) and (138)&#8211;(141). Within so-called \u201cregulatory sandboxes\u201d, the protected attributes can be collected even when banned by other regulations and national laws. Ivo Jen\u00edk has an extensive monograph \u201cHow to build a regulatory sandbox\u201d.&nbsp;<\/li><li>Monitoring quality and relevance of data sets used. The requirement is established in Article (66) and data governance is considered in Article (67). Compliance with standards such as ISO 8000 (Data quality) is a great start. Open-source toolkits such as AI Fairness 360 (<a href=\"https:\/\/ai-fairness-360.org\/\">https:\/\/ai-fairness-360.org\/<\/a>) make it possible to evaluate bias across a variety of applications. See also ISO\/IEC TR 24027:2021. There are a number of requirements listed in the subsequent Articles (67)-(70), including consideration of situations \u201cwhere data outputs influence inputs for future operations (feedback loops).\u201d Further requirements for general-purpose systems are set out in Articles (105)-(108).<\/li><li>Provision of technical documentation, record-keeping, and transparency. Articles (71), (72), and (132)&#8211;(134) establish some of the details. Notably, humans should be made aware whenever they are \u201cinteracting with an AI system\u201d or presented with content generated by AI.&nbsp;<\/li><li>Human oversight. The requirement is established in Article (66) and elaborated in Article (73). Notably, humans need to \u201coversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system\u2019s lifecycle\u201d.<\/li><li>Robustness and accuracy. 
The requirement is established in Article (66) and elaborated upon in Articles (74)-(75). This is perhaps the most challenging requirement when it comes to neural networks and general-purpose AI systems. There are only provisional guidelines for the most common applications and methods, such as ISO\/IEC TS 4213 (Assessment of machine learning classification performance) for classification and ISO\/IEC TR 24029-1 (Assessment of the robustness of neural networks) for neural networks.<\/li><li>Focus on cybersecurity. The requirement is established in Article (66) and elaborated in Articles (76)-(78). This should include traditional cybersecurity concerns as well as data-poisoning attacks and adversarial attacks on neural networks. Compliance with NIS2 and standards such as ISO\/IEC 27001:2022 and the NIST CSF is a great start. Open-source toolkits such as the Adversarial Robustness Toolbox (<a href=\"https:\/\/github.com\/Trusted-AI\/adversarial-robustness-toolbox\">https:\/\/github.com\/Trusted-AI\/adversarial-robustness-toolbox<\/a>) aid with the AI-specific attacks.&nbsp;<\/li><\/ul>\n\n\n\n<p>Notice, however, that Article (125) suggests that the \u201cconformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility\u201d, rather than requiring a third-party assessment. While one may benefit from the services of consultants, this does not relieve the company of its responsibility.&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<p>It seems feasible that the longer a \u201cprovider\u201d or \u201cdeployer\u201d of an AI system waits, the more resources there will be available. 
At the same time, one needs to be compliant by the end of the year 2024 (for prohibited uses) or by mid-2026 (for high-risk systems), or risk fines subsequently.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-file\"><a href=\"https:\/\/humancompatible.org\/wp-content\/uploads\/2024\/06\/D3_1.pdf\">For more, see Deliverable 3.1.<\/a><a href=\"https:\/\/humancompatible.org\/wp-content\/uploads\/2024\/06\/D3_1.pdf\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Following the release of the Corrigendum to the AI Act on April 19th, 2024, it seems feasible to suggest the \u201cnext steps\u201d for businesses, SME or otherwise. Notice that these [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/posts\/232"}],"collection":[{"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/comments?post=232"}],"version-history":[{"count":3,"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/posts\/232\/revisions"}],"predecessor-version":[{"id":252,"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/posts\/232\/revisions\/252"}],"wp:attachment":[{"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/media?parent=232"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/humancompatible.org\/index.php\/wp-json\/wp\/v2\/categories?post=232"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/humancompatible.org\/index.php\
/wp-json\/wp\/v2\/tags?post=232"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}