Seminar by Prof. Francesca Toni (Imperial College)
- Date: 16 Mar 2022
- Time: 3.00pm-4.00pm
- Category Seminar
Abstract:
It is widely acknowledged that transparency of automated decision making
is crucial for deployability of intelligent systems, and explaining the reasons
why some outputs are computed is a way to achieve this transparency.
The form that explanations should take, however, is much less clear.
In this talk I will explore two classes of explanations, which I call 'lean'
and 'mechanistic': the former focuses on the inputs contributing to the decisions
given as output; the latter instead reflects the internal functioning of the
automated decision-making system fed with the inputs and computing those outputs.
I will show how both classes of explanations can be supported by forms of
computational argumentation, and will describe forms of argumentative XAI
in several settings, including multi-attribute decision making and machine learning.
Short bio:
Francesca Toni is Professor in Computational Logic and Royal Academy of
Engineering/JP Morgan Research Chair on Argumentation-based Interactive
Explainable AI at the Department of Computing, Imperial College London, UK,
and the founder and leader of the CLArg (Computational Logic and
Argumentation) research group and of the Faculty of Engineering XAI Research
Centre. Her research interests lie within the broad area of Knowledge
Representation and Reasoning in AI and Explainable AI, and in particular
include Argumentation, Argument Mining, Logic-Based Multi-Agent Systems,
Non-monotonic/Default/Defeasible Reasoning, and Machine Learning. She has
recently been awarded an ERC Advanced grant on Argumentation-based Deep
Interactive eXplanations (ADIX). She is a EurAI Fellow, serves on the editorial
boards of the Argument and Computation journal and the AI journal, and sits on
the Board of Advisors for KR Inc. and for Theory and Practice of Logic Programming.