The Regulation establishes a tiered risk classification system (Chapter II), which differentiates AI systems according to their potential danger to individuals and society, with particular implications for privacy protection, transparency, and civil liability. AI systems are divided into four risk levels: unacceptable, high, limited, and minimal. For each level, the Regulation imposes specific requirements that providers must meet to balance innovation with the protection of fundamental rights. For example, high-risk systems are subject to a rigorous conformity assessment covering cybersecurity management, human oversight, and accurate technical documentation throughout the system's lifecycle.
Among the key principles of the Regulation is responsible data governance: the data used to train AI models must meet high standards of quality, representativeness, non-discrimination, reliability, and confidentiality. Equally central is the principle of proportionality, which requires that the use of AI be calibrated so as not to unreasonably sacrifice individual rights. For example, AI systems that pose high risks to personal dignity and safety, such as real-time biometric surveillance, are strictly regulated or prohibited outright.
The course also examines the specific responsibilities of AI system providers and deployers, whom the Regulation identifies as the main actors accountable for the compliance of systems placed on the European market. Providers must implement an iterative risk management system and adhere to strict rules on the transparency and traceability of AI models. Deployers, in turn, are responsible for the operational oversight of the system, the management of the data it generates, and the timely reporting of incidents or malfunctions to the competent authorities.
Further in-depth analysis is dedicated to the recommendations of the European data protection authorities, in particular the European Data Protection Board (EDPB), which has produced guidelines to harmonize the requirements of the AI Act with the General Data Protection Regulation (GDPR).
The course includes an analysis of relevant judgments of the Court of Justice of the European Union (CJEU) that outline legal principles also applicable to AI systems. These include judgments on the right to be forgotten, profiling, and decision-making transparency, which lay the foundations for case law aimed at protecting fundamental rights in the use of AI. Through guided discussions, participants will examine cases inspired by these judgments, developing critical skills in interpreting legal principles and their application.
Finally, the course addresses sanctions and enforcement measures, examining the AI Act's penalty regime for non-compliance with its provisions. Participants will reflect on the deterrent role of sanctions and come to understand that compliance is not merely a formal obligation but an essential component of building an AI ecosystem consistent with the ethical and legal principles of the European Union.
Mirjana Pejic Bach
English