Navigating Uncertainty in the European Regulation of AI. Understanding when your AI System is High-Risk, and why it matters!
Prof. Dr. Andrea Bertolini, Scuola Superiore Sant’Anna, Dr. Federica Fedorczyk, Scuola Superiore Sant’Anna, Dr. Marta Mariolina Mollicone, Scuola Superiore Sant’Anna, Guilherme Migliora, Scuola Superiore Sant’Anna
Questions to be answered
In this workshop, we will demonstrate the challenges posed by the AI Act, in particular the complexity and lack of clarity of its definitions, with a focus on high-risk AI systems. Reading Article 6 of the AI Act, for instance, does not immediately clarify how to classify a system as high-risk. Yet correctly qualifying an AI system is crucial, as it determines the specific legal obligations that developers, providers, and deployers must meet to operate lawfully in the European market. These actors currently face significant uncertainty in identifying the legal regime applicable to their systems and, consequently, the obligations they must fulfil to avoid liability and legal sanctions.
We will therefore explain the actual content of Article 6 and illustrate the intricacies of its formulation, an explanation that is essential for engineers and lawyers alike.
Description
This workshop will provide attendees with an in-depth exploration of the regulatory framework established by the AI Act (AIA), offering both an overview and a critical perspective on its implications for the regulation of AI systems.
- The regulatory framework: an overview of the AI Act
  - A general introduction to the AIA, highlighting its structure and objectives;
  - Examination of how the AIA represents a shift in the regulation of AI systems, placing it in the broader context of European regulatory practices.
- Focus on Article 6 and the definition of High-Risk AI Systems (h-AIS); a simplified sketch of the resulting classification logic follows this list
  - Critical Overview of Annex III:
    - Detailed analysis of the high-risk categories outlined in Annex III, identifying key criteria and addressing critical challenges and ambiguities in their application;
  - Article 6, §1 and Applicable European Product Safety Regulations:
    - Explanation of how Article 6, §1 connects the identification of high-risk AI systems to existing European product safety regulations;
    - Discussion of the extreme complexity of the regulatory framework referenced by Article 6, §1, and its implications for stakeholders;
    - Examination of the role of third-party conformity assessments.
  - Case Study: Unintended Consequences of Article 6, §1 AIA
    - A concrete example illustrating the potential unintended consequences that could arise from the application of Article 6, §1;
    - Analysis of how these challenges might impact developers, providers, and deployers of AI systems intending to operate in the European market.
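To make the two classification routes of Article 6 more concrete for a technical audience, the following minimal Python sketch renders the decision logic in deliberately simplified form: the Annex I route of Article 6, §1 and the Annex III route of Article 6, §2, subject to the derogation of Article 6, §3. The class, the attribute names, and the reduction of each condition to a boolean flag are our own illustrative assumptions, not part of the AI Act; determining whether any of these conditions is actually satisfied requires precisely the legal analysis the workshop addresses.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Illustrative, simplified attributes of an AI system for classification.

    Whether each condition actually holds in a real case is a matter of
    legal analysis, not a boolean flag.
    """
    # Article 6, §1 route (Annex I product safety legislation)
    safety_component_of_annex_i_product: bool
    third_party_conformity_assessment_required: bool
    # Article 6, §2 route (Annex III use cases)
    listed_in_annex_iii: bool
    # Article 6, §3 derogation, collapsed here into a single flag
    poses_significant_risk_of_harm: bool
    performs_profiling_of_natural_persons: bool


def is_high_risk(system: AISystem) -> bool:
    """Simplified rendering of the two classification routes of Article 6 AIA."""
    # Route 1 - Article 6, §1: the system is a safety component of (or is
    # itself) a product covered by Annex I legislation AND that product must
    # undergo a third-party conformity assessment.
    if (system.safety_component_of_annex_i_product
            and system.third_party_conformity_assessment_required):
        return True

    # Route 2 - Article 6, §2: the use case is listed in Annex III,
    # subject to the derogation of Article 6, §3.
    if system.listed_in_annex_iii:
        if system.performs_profiling_of_natural_persons:
            # Profiling of natural persons is always high-risk (Art. 6, §3).
            return True
        return system.poses_significant_risk_of_harm

    return False


# Hypothetical example: a CV-screening tool used in recruitment (an Annex III
# use case) that profiles applicants would be classified as high-risk.
cv_screener = AISystem(
    safety_component_of_annex_i_product=False,
    third_party_conformity_assessment_required=False,
    listed_in_annex_iii=True,
    poses_significant_risk_of_harm=True,
    performs_profiling_of_natural_persons=True,
)
print(is_high_risk(cv_screener))  # True
```

The sketch deliberately leaves out, among other things, the provider's duty under Article 6, §4 to document its assessment when relying on the derogation; the workshop discusses why such conditions cannot, in practice, be reduced to a mechanical check.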
Organisation of the WS
A panel will present a critical analysis of the European regulation. Each speaker will talk for 10–15 minutes, and time will be reserved for Q&A.
The WS will be split into two halves:
- In the first half there will be an in-depth presentation of the AI Act provisions under consideration and their inconsistencies;
- In the second half there will be a roundtable and Q&A.
Intended outcome
After the workshop, participants will have a clear and comprehensive understanding of Article 6 of the AI Act. They will understand the complexities and practical implications of the references to Annex III and Annex I, including the specific criteria outlined therein and how to determine whether their products qualify as high-risk AI systems. They will also grasp the critical role that third-party conformity assessments play in determining whether a system falls into the high-risk category.
Attendees will also leave with a deeper awareness of the broader implications of these provisions for those developing, providing, or deploying AI systems intended for the European market. This includes understanding the legal obligations they must comply with to ensure their products meet regulatory requirements and can be lawfully operated in the EU. By the end of the session, participants will be better equipped to navigate the regulatory landscape for high-risk AI systems and to confidently address the challenges posed by the AI Act.
Further information
Andrea Bertolini
Federica Fedorczyk
Marta Mariolina Mollicone
Guilherme Migliora
Stefano Aterno
Lena Lörcher
Organisers
LSE – Legal Subtopic group