Challenges in the regulation of AI
There has been considerable effort aimed at regulating AI systems. In the European Union, DG Connect has spearheaded the AI Act, but its implementation is yet to be […]
Use Case 1: workable.com is the world’s leading hiring platform, where companies find, evaluate, and hire better candidates, faster. Clearly, individual and group fairness among candidates is crucial for their continued custom.
Use Case 2: IBM Watson Advertising helps scale advertising campaigns with AI and machine learning while addressing unwanted bias. Such bias has the potential to harm consumers, who may miss out on an economic opportunity or feel targeted based on stereotypes, and to harm brands, which may see poor campaign performance.
Use Case 3: dateio.eu is a fintech running a card-linked marketing platform delivering targeted cashback offers to banks’ clients. We will also work on credit risk decisions under the guidance of experts from Nationwide Building Society and BNP Paribas.
AutoFair, the Horizon Europe project on human-compatible AI with guarantees, seeks to address the need for trusted AI and user-in-the-loop tools and systems in a range of industry applications through:
Comprehensive and flexible certification of fairness. At one end, we can consider risk-averse a priori guarantees on certain bias measures, imposed as hard constraints in the training process. At the other end, we can consider post hoc, comprehensible but thorough presentation of all of the tradeoffs involved in the design of an AI pipeline and their effect on industrial and bias outcomes.
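As a minimal illustration of what "a bias measure" can mean here, the sketch below computes the demographic parity difference for a binary classifier: the gap in positive-decision rates between two groups. The function name and the toy data are illustrative, not part of any AutoFair deliverable; in the constrained-training setting, one would require such a measure to stay below a chosen bound.

```python
# Hypothetical sketch: demographic parity difference, one example of a
# bias measure that could be bounded as a hard constraint during training.
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # toy binary decisions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute (0/1)
print(demographic_parity_difference(preds, groups))  # rates 3/4 vs 1/4 -> 0.5
```

A value of 0 would indicate equal acceptance rates across the two groups; the a priori guarantee would cap this quantity during model fitting.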
User-in-the-loop: continuous iterative engagement among AI systems, their developers, and their users. We seek both to inform users thoroughly about the possible algorithmic choices and their expected effects, and to learn users’ preferences regarding different fairness measures, subsequently guiding decision making so as to bring together the benefits of automation in a human-compatible manner.
Toolkits for the automatic identification of various types of bias and their joint compensation, by automatically optimizing multiple, potentially conflicting objectives (fairness/accuracy/runtime/resources), visualising the tradeoffs, and making it possible to communicate them to industrial users, government agencies, NGOs, or members of the public, where appropriate.
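One simple way such a tradeoff can be made visible is to sweep a decision threshold over classifier scores and report accuracy alongside a fairness gap at each setting. The sketch below does exactly that on toy data; the function name, data, and the choice of demographic parity as the fairness measure are illustrative assumptions, not the project's toolkit API.

```python
# Hypothetical sketch: enumerating the accuracy/fairness tradeoff by
# sweeping a decision threshold over toy scores. All names are illustrative.
def tradeoff_curve(scores, labels, group, thresholds):
    """Return (threshold, accuracy, demographic parity gap) triples."""
    points = []
    for t in thresholds:
        preds = [1 if s >= t else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)

        def rate(g):
            members = [p for p, a in zip(preds, group) if a == g]
            return sum(members) / len(members)

        gap = abs(rate(0) - rate(1))  # demographic parity gap
        points.append((t, acc, gap))
    return points

scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1]
labels = [1, 1, 0, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
for t, acc, gap in tradeoff_curve(scores, labels, groups, [0.25, 0.5, 0.75]):
    print(f"threshold={t:.2f}  accuracy={acc:.2f}  parity gap={gap:.2f}")
```

Plotting the resulting points gives a simple frontier a practitioner can inspect: some thresholds trade a little accuracy for a much smaller parity gap, which is precisely the kind of choice a user-in-the-loop tool should surface rather than hide.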
Jakub Marecek delivers a full-day tutorial T1 (Fairness in the sharing economy and stochastic models for MAS) at the 23rd International Conference on Autonomous Agents and Multi-Agent Systems in Auckland, […]
Following the release of the Corrigendum to the AI Act on April 19th, 2024, it seems feasible to suggest the “next steps” for businesses, SME or otherwise. Notice that these […]