NAIL Research Seminar #15
11 December 2023, by Internetredaktion
Professor Christoph Kumpan and Professor Georg Ringe cordially invite you to the NAIL Research Seminar on Monday, 11 December 2023, starting at 18:00 (CET). Professor Carsten Gerner-Beuerle (University College London) will give a presentation titled "AI Regulations' Measurement Problem: Assessing the Risks of Automated Decision-Making Systems to Health, Safety, Fundamental Rights, and the Rule of Law".
The lecture will be followed by a discussion of the topic. The event will be held in English and will take place in person at Bucerius Law School, Room 1.11, Jungiusstraße 6, 20355 Hamburg. You can also participate online by registering via email (nail"AT"ile-hamburg.de).
After the lecture and discussion, guests are invited to end the evening in a relaxed atmosphere with snacks and refreshments.
Proposed frameworks for the regulation of artificial intelligence are adamant that AI systems should not pose unacceptable risks to important public interests, such as the health, safety, and fundamental rights of individuals. Additionally, large language models and other foundation models have become a key concern of lawmakers. For example, the European Parliament has proposed amendments to the draft EU AI Act that would require developers of foundation models to “demonstrate through appropriate design, testing and analysis the identification, the reduction and mitigation of reasonably foreseeable risks to … democracy and the rule of law”. However, it is unclear how such aspirational statements about values and fundamental rights can be translated into concrete, actionable rules. Any attempt to develop workable rules for the governance of AI systems faces two connected challenges. First, it is important to gain clarity on the meaning of the inherently context-dependent, abstract concepts increasingly used in proposed AI regulations. Second, the predictions of automated decision-making systems are never entirely error-free. The question, then, is how such abstract risks can be identified and measured, and what level of residual risk is “acceptable” and in line with legal risk mitigation requirements.

This research project probes both problems. It examines the contested interpretations of relevant legal concepts, including the rule of law and human dignity, explores how definitions may differ depending on the machine learning model and use case, and identifies the need to resolve value tensions that often arise when diverging stakeholder interests are involved. Moreover, it builds on approaches in computer science to certify model performance across a range of metrics and discusses whether these methods can be leveraged to satisfy legal requirements, such as those set out in the proposed EU AI Act.
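To make the certification idea concrete: a minimal sketch, assuming a classifier evaluated on independent test samples, is to compute a statistical upper bound on the model's true error rate and compare it against a risk threshold. The Python sketch below uses Hoeffding's inequality for this; the function name, the sample counts, and the 5% "acceptable risk" threshold are illustrative assumptions, not requirements drawn from the proposed EU AI Act or the speaker's project.

```python
import math


def error_rate_upper_bound(errors: int, n: int, delta: float = 0.05) -> float:
    """Upper confidence bound on a model's true error rate.

    Given `errors` mistakes on `n` i.i.d. test samples, Hoeffding's
    inequality yields a bound that holds with probability at least 1 - delta:
        true_rate <= observed_rate + sqrt(ln(1 / delta) / (2 * n))
    """
    observed_rate = errors / n
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return observed_rate + margin


# Hypothetical numbers: 8 errors on 2,000 test cases, 95% confidence.
bound = error_rate_upper_bound(errors=8, n=2000, delta=0.05)

# Placeholder threshold; what counts as "acceptable" is precisely the
# open legal question the project addresses.
ACCEPTABLE_RISK = 0.05

print(f"Upper bound on true error rate: {bound:.4f}")
print("within threshold" if bound <= ACCEPTABLE_RISK else "exceeds threshold")
```

In practice, certification methods use tighter binomial bounds and metrics well beyond raw error rate, and the harder question, as the abstract notes, is not computing the bound but deciding what threshold the law should treat as acceptable.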
More information about the NAIL project is available on our institutional website. Please subscribe to our mailing list (nail"AT"ile-hamburg.de) to receive notifications about future events.