AI systems, it is often said, are poised to play an ever greater role in our lives and will increasingly apply rules and make decisions as autonomous agents. What is often forgotten is that AI systems still cannot be equipped with the kind of normative reasoning that would enable them to take into account the purposes, interests or values that lie behind the rules they apply and the decisions they make. Unlike humans, to whom this kind of thinking is second nature, AI systems are still unable to handle rules intelligently and steer a sensible course between two extremes. If they ignore the purposes, interests or values behind rules, they risk succumbing to blind rule following; if they do take them into account but misunderstand either those purposes, interests and values or the means needed to achieve them, they are in danger of engaging in misguided rule revision.
Aims of Project
With our project we are trying to change that. We aim to develop new methods for norm-based reasoning, methods that are not only computable but also intelligent, enabling AI systems, and the autonomous agents that use them, to comply not only with precise legal rules, but also with the purposes for which these rules were made and the interests or values for which they continue to be upheld. We are doing so because we are convinced that unless we manage to provide our autonomous agents with such methods, they cannot and will not be trusted to act in our stead.
To make autonomous agents trustworthy, their AI systems need to be capable of reasoning with rules in an intelligent way. To enable them to do that, we must provide them with:
- methods to apply strict rules;
- methods to handle conflicts of rules;
- methods to deal with explicit exceptions to rules;
- methods to deal with implicit or open exceptions to rules;
- methods to take into account the purposes, values or interests behind rules
- to either limit their application through teleological reduction
- or extend their application to new cases by analogy;
- methods to live up to higher standards than rules require (supererogation);
- methods to violate rules in case of necessity (state of emergency);
- methods to decide on the use of methods; that is,
- between strict application and making exceptions,
- teleological reduction and analogical application,
- and supererogation or rule violation;
- methods to match rules with facts to trigger legal effects.
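To make the first few items on this list concrete, here is a purely illustrative toy sketch of applying strict rules, honouring explicit exceptions, and resolving conflicts by priority. All names (`Rule`, `applicable`, `conclude`, the park example) are hypothetical and chosen for exposition; the project's actual formalisms for defeasible, norm-based reasoning would be far richer than this.

```python
# Toy sketch: rule application with explicit exceptions and a crude
# priority-based conflict method. Illustrative only, not the project's formalism.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Facts = Dict[str, bool]

@dataclass
class Rule:
    name: str
    condition: Callable[[Facts], bool]   # when the rule is triggered
    conclusion: str                      # legal effect it produces
    priority: int = 0                    # higher priority wins a conflict
    exceptions: List[Callable[[Facts], bool]] = field(default_factory=list)

def applicable(rule: Rule, facts: Facts) -> bool:
    """A rule applies if its condition holds and no explicit exception does."""
    return rule.condition(facts) and not any(e(facts) for e in rule.exceptions)

def conclude(rules: List[Rule], facts: Facts) -> List[str]:
    """Apply all applicable rules; when two rules conclude "X" and "not X",
    keep the conclusion of the higher-priority rule."""
    winners: Dict[str, Rule] = {}
    for r in rules:
        if not applicable(r, facts):
            continue
        topic = r.conclusion.removeprefix("not ")  # "X" and "not X" conflict
        prev = winners.get(topic)
        if prev is None or r.priority > prev.priority:
            winners[topic] = r
    return [r.conclusion for r in winners.values()]

# Example: "vehicles are banned from the park", with an explicit
# exception for ambulances.
rules = [
    Rule("ban", lambda f: f.get("vehicle", False), "banned",
         exceptions=[lambda f: f.get("ambulance", False)]),
]
print(conclude(rules, {"vehicle": True}))                     # ['banned']
print(conclude(rules, {"vehicle": True, "ambulance": True}))  # []
```

Even this toy version shows why the later items on the list are hard: implicit exceptions, teleological reduction and analogy cannot be enumerated in advance as `exceptions` entries, which is precisely where reasoning about the purposes behind rules comes in.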
All these methods need to be feasible, explicable and verifiable. First, autonomous agents must be able to execute the methods in real time. Secondly, they should be able to explain their behaviour by reference to the rules and to say why they are or are not following them. And thirdly, it should be possible to check their behaviour in advance, to evaluate their benevolence as well as their compliance with the rules, before they act and can cause harm.
In order to develop such new methods for norm-based reasoning, we use the law as our starting point. Drawing on the literature on legal reasoning, we set out to elicit reasoning patterns from an analysis of different legal systems, historical and contemporary, from both the civil and the common law tradition. Based on these findings from legal analysis, we use logic and good old-fashioned symbolic artificial intelligence to choose appropriate formal representations for rules and to build suitable inference mechanisms for reasoning with them. And, to go the whole nine yards, we plan to use sub-symbolic, connectionist artificial intelligence to match the rules with facts.
Impact of Research
Methods for norm-based reasoning that are both computable and intelligent will not only pave the way for more advanced legal tech applications, but also contribute to the development of more trustworthy AI systems in general. In other words, the basic research undertaken in this project may have an impact not only on the field of law, but also on the fields of logic, computer science and artificial intelligence. In the end, its results might be used not only to create more sophisticated advertising and contracting bots, but also better self-driving cars.