In short, this study day will cover these topics:
- AI Act basics
- AI Act and technical standards
- AI Act and liability
- AI Act and cybersecurity, with a focus on medical devices
- AI Act and fundamental rights
- AI Act and regulatory sandboxes
AI Act basics (T. Gils and W. Ooms)
The EU AI Act introduces new obligations for providers and deployers of AI systems. In this introductory presentation, we will discuss the scope of the AI Act, the different qualifications of AI systems and the related obligations or requirements. We also provide a look ahead at key deadlines, the status of standards and conformity assessments, and other responsibilities along the AI value chain. This session will be the basis for the more specific topics discussed later in this course.
Thomas Gils and Wannes Ooms are researchers at KU Leuven’s Centre for IT and IP Law (CiTiP) and work at the Knowledge Centre Data and Society. In that role, they inform Belgian and Flemish stakeholders (policymakers, industry, public institutions, and the general public) about the legal aspects of data-driven and AI applications. Before joining CiTiP, Thomas worked at an international law firm as a member of its Belgian IP and Technology team. Wannes previously worked as an in-house legal counsel in the semiconductor industry.
AI Act and technical standards (K. Vranckaert)
Proving compliance with the AI Act will, in large part, depend on compliance with harmonized technical standards. In this workshop, Koen Vranckaert will explain how standards interact with the AI Act and how to navigate the standards landscape to make proving compliance easier.
Koen Vranckaert (°1991) is a legal researcher at CiTiP specializing in the legal aspects of the safety and cybersecurity of digital technologies, in a broad sense. Before joining CiTiP, Koen practiced as an attorney-at-law in Leuven and Brussels.
AI Act and liability (J. De Bruyne)
Artificial intelligence (AI) is becoming increasingly important in our daily lives, and so is academic research on its impact on various legal domains. One field that has attracted much attention is extra-contractual or tort liability, because AI systems will inevitably cause damage; accidents involving autonomous vehicles are an obvious example. In this session, Jan De Bruyne will discuss some of the major and general challenges that arise in this context. He will illustrate the continued importance of national law and focus on procedural elements, including disclosure requirements and rebuttable presumptions. He will also show how existing tort law concepts are being challenged by the characteristics of AI and provide an overview of the regulatory answers.
Jan De Bruyne is a professor of IT law at the KU Leuven Centre for IT & IP Law (CiTiP). He has published extensively on AI and liability and is the Principal Investigator (PI) of several projects related to AI and data.
AI Act and cybersecurity, with a focus on medical devices (E. Kamenjasevic)
Artificial intelligence is increasingly present in the healthcare sector, driving its continual transformation. As part of a healthcare system, and due to their nature, AI medical devices may be exposed to cyberattacks, thereby impacting patient safety. Beyond potentially fatal health consequences, cyberattacks on AI medical devices could also have indirect effects, such as diminishing patients’ trust in the security of the healthcare system and fostering hesitancy towards using those devices. The recently adopted AI Act will affect the cybersecurity of AI medical devices in several ways. This workshop will outline how this materialises, together with the new challenges and opportunities the AI Act brings for the healthcare sector.
Erik Kamenjasevic is a doctoral researcher in the law and ethics of human enhancement technologies. He has also been researching the cybersecurity of medical devices in the EU and the USA.
AI Act and fundamental rights (A. Palumbo)
An important part of the new obligations of the AI Act relates to the protection of fundamental rights. Developers and deployers will need to assess the impact of AI systems and models on fundamental rights, mitigating any risks where feasible. In this workshop, Andrea Palumbo will explain how these obligations can be interpreted, highlighting the challenges of integrating fundamental rights considerations in risk management processes. Possible solutions to facilitate compliance with these obligations will also be discussed.
Andrea Palumbo is a legal researcher at CiTiP, where he specializes in the protection of fundamental rights in the digital age. Before joining CiTiP, Andrea worked for Italian and international law firms.
AI Act and regulatory sandboxes (A. Papageorgiou)
Aiming to foster innovation while ensuring regulatory supervision, regulatory sandboxes hold great potential for establishing a solid, forward-looking regulatory system. In the current context of the growing proliferation of AI products, the AI Act provides an ambitious framework for the establishment of regulatory sandboxes. This workshop will interactively discuss the opportunities and challenges arising from the implementation of AI regulatory sandboxes, along with possible mitigation strategies.
Alexandra Papageorgiou is a legal researcher at CiTiP, where she focuses on the intersection of data and AI technologies. Prior to joining CiTiP, Alexandra worked for Greek law firms and consultancies.