Program Overview

Please note that this program is still subject to change.

The SSA sessions will be held in room 2.26, on the second floor of the Abacws Building on Cardiff University's Cathays Campus. The OHAAI sessions will be held in room 3.02 on the third floor.

Refreshments will be served on the ground floor of the building.

The registration desk and arrival refreshments will be available on the ground floor of the Abacws Building from 9:00 every day.

8 September
9:30-11:00   Martin Caminada: Introduction to Formal Argumentation (1/2)
11:00-11:20  coffee break
11:20-12:50  Martin Caminada: Introduction to Formal Argumentation (2/2)
12:50-14:10  lunch break
14:10-15:40  Richard Booth: Aggregating Opinions in Abstract Argumentation (1/2)
15:40-16:00  coffee break
16:00-17:30  Richard Booth: Aggregating Opinions in Abstract Argumentation (2/2)

9 September
9:30-11:00   Antonio Rago: An Overview of Argumentative XAI
11:00-11:20  coffee break
11:20-12:50  Martin Caminada: Discussion Games for Formal Argumentation
12:50-14:10  lunch break
14:10-15:40  Hiroyuki Kido: Bayesian Statistics in Logic and Argumentation (1/2)
15:40-16:00  coffee break
16:00-17:30  Hiroyuki Kido: Bayesian Statistics in Logic and Argumentation (2/2)

10 September
9:30-11:00   Markus Ulbricht: Theoretical Tools towards Argumentative Explanation (1/2)
11:00-11:20  coffee break
11:20-12:50  Markus Ulbricht: Theoretical Tools towards Argumentative Explanation (2/2)
12:50-14:10  lunch break
14:10-15:40  Annemarie Borg: Explaining argumentation-based conclusions (1/2)
15:40-16:00  coffee break
16:00-17:30  Annemarie Borg: Explaining argumentation-based conclusions (2/2)

11 September
9:30-11:00   Federico Castagna: Delivering Argument-Based Explanations via Chatbots (1/2)
11:00-11:20  coffee break
11:20-12:50  Federico Castagna: Delivering Argument-Based Explanations via Chatbots (2/2)
12:50-14:10  lunch break
14:10-15:40  OHAAI session
15:40-16:00  coffee break
16:00-17:30  OHAAI session

18:00  social event

Speakers and Abstracts

Richard Booth

Richard Booth is a senior lecturer at the School of Computer Science and Informatics at Cardiff University, which he joined in 2015. His research interests are in knowledge representation and logic-based approaches to artificial intelligence, specifically argumentation, belief revision, reasoning about expertise and trust, and computational social choice. Before joining Cardiff, he worked as a post-doc or lecturer at, among others, the University of Luxembourg, Mahasarakham University (Thailand) and Leipzig University.

Aggregating Opinions in Abstract Argumentation

In this tutorial we look at problems that arise in abstract argumentation settings in which the agents participating in a discussion do not necessarily agree on the acceptance status of the arguments involved, or, possibly, on whether a given argument really attacks another one. Given a set of such individual viewpoints, which arguments should be considered as accepted by the group? In other words, how should we aggregate these individual opinions to arrive at a group labelling, or a group extension, that appropriately reflects the opinion of the group as a whole? In the last few years, a thread of research has emerged in abstract argumentation that examines this problem, often making use of tools developed in social choice for preference aggregation and voting theory. In this tutorial we will survey this recent work, covering the different aggregation rules that have been proposed, as well as different methodologies such as axiomatic analysis and dialogue-based approaches.
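
As a toy illustration of the aggregation problem (a sketch of my own, not material from the tutorial), the function below applies argument-wise majority voting to individual labellings. The strict-majority tie rule is an arbitrary choice, and, notably, the result of such voting need not itself be a legal labelling of the underlying framework; this is precisely the kind of issue the axiomatic analysis studies.

```python
from collections import Counter

def majority_labelling(labellings, tie_label="undec"):
    """Argument-wise majority vote over individual labellings.

    Each labelling maps argument -> 'in' | 'out' | 'undec'. The result
    need not be a legal labelling of the underlying framework.
    """
    group = {}
    for a in labellings[0]:
        votes = Counter(lab[a] for lab in labellings)
        label, count = votes.most_common(1)[0]
        # require a strict majority; otherwise fall back to 'undec'
        group[a] = label if count > len(labellings) / 2 else tie_label
    return group

# Three agents disagree on the status of arguments A and B
agents = [
    {"A": "in",  "B": "out"},
    {"A": "in",  "B": "undec"},
    {"A": "out", "B": "out"},
]
print(majority_labelling(agents))  # {'A': 'in', 'B': 'out'}
```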

Annemarie Borg

Annemarie Borg is an assistant professor at the Information and Computing Sciences department at Utrecht University and a member of the National Police Lab AI. She is interested in the development of formal argumentation for the real world: there is a vast body of theoretical results on formal argumentation, but often a gap remains between these results and the requirements of real-life applications. In her work, Annemarie focuses on dynamic aspects of argumentation and the use of argumentation in the field of explainable artificial intelligence.

Explaining argumentation-based conclusions

In recent years a wealth of research on argumentative explainable artificial intelligence has been published. In this course we will take a look at explanations for formal (abstract and structured) argumentation – the question of whether and why a certain argument or claim can be accepted (or not) under various extension-based semantics. To this end a flexible framework, which can act as the basis for many different types of explanations, will be introduced. In addition to discussing several types of explanations, it will be shown how this explanation framework can be applied to argumentation-based applications at the Netherlands Police.
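
As a rough illustration of one ingredient such explanations can build on (a sketch under assumptions of my own, not the framework from the course), the function below collects the accepted arguments that directly or indirectly defend a given argument under grounded semantics:

```python
def defenders(a, attacks, grounded):
    """Accepted arguments that (transitively) defend `a`.

    `attacks` is a set of (attacker, target) pairs and `grounded` is the
    grounded extension; one simple notion of an acceptance explanation.
    """
    def attackers(x):
        return {b for (b, t) in attacks if t == x}

    explanation, frontier = set(), {a}
    while frontier:
        x = frontier.pop()
        for b in attackers(x):                  # each attack on x ...
            for c in attackers(b) & grounded:   # ... is countered by an accepted c
                if c not in explanation:
                    explanation.add(c)
                    frontier.add(c)
    return explanation

# C defends A against B, so {C} explains the acceptance of A
attacks = {("C", "B"), ("B", "A")}
print(defenders("A", attacks, grounded={"A", "C"}))  # {'C'}
```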

Martin Caminada

Martin Caminada is an established researcher in the field of formal argumentation. He has contributed to the definition of several argumentation semantics (in particular semi-stable and eager semantics), to labelling-based definitions of argumentation semantics, and to the connection between argumentation semantics and formal discussion. His work on rationality postulates is widely accepted as providing some of the key properties that formalisms for instantiated argumentation should aim to satisfy. Martin's publication record includes AAAI, IJCAI, AAMAS, ECAI, AIJ and JAIR. He has supervised several PhD students, has given a Master's course on argumentation, and has presented tutorials at venues such as EASSS, ESSLLI and IJCAI.

Introduction to Formal Argumentation

This talk will provide an introduction to some of the key concepts of formal argumentation, in both its abstract and instantiated form. As for abstract argumentation, we provide an overview of some of the common argumentation semantics, in both their labelling and extension form. As for instantiated argumentation, we study the question of how to use formal argumentation to define meaningful forms of logical inference, and examine which particular choices have been made in the various formalisms for instantiated argumentation. We round off with some of the open challenges in formal argumentation, and with some of the pitfalls one has to be aware of when doing research in this field.
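
As a minimal worked example of extension-based semantics (my own sketch, not material from the talk), the grounded extension of a finite framework can be computed by iterating the characteristic function F(S) = {a | S defends a} from the empty set:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension as the least fixed point of the defence function."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    S = set()
    while True:
        # a is defended by S if every attacker of a is itself attacked by S
        defended = {a for a in arguments
                    if all(attackers[b] & S for b in attackers[a])}
        if defended == S:
            return S
        S = defended

# B attacks A, C attacks B: the unattacked C defends A against B
args = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "B")}
print(sorted(grounded_extension(args, attacks)))  # ['A', 'C']
```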

Discussion Games for Formal Argumentation

In this talk, we examine how different argumentation semantics correspond to different types of formal discussion. That is, we provide formal discussion protocols such that the ability to win a discussion for a particular argument corresponds to the argument being justified according to the associated argumentation semantics. Our overall aim is to bridge the gap between human discussion and formal argumentation. In particular, we show that grounded semantics corresponds to persuasion-style discussion, whereas preferred semantics corresponds to Socratic-style discussion. Apart from the theoretical background, we also provide a brief overview of some of the implementations of these types of discussion protocols.
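
A minimal sketch of the flavour of such games (heavily simplified; the protocols discussed in the talk are richer): the proponent must answer every attack by the opponent with a counter-attacker it has not used before, while the opponent may repeat itself. For finite frameworks, winning this game is standardly taken to coincide with membership in the grounded extension.

```python
def proponent_wins(arg, attacks, used=None):
    """Does the proponent have a winning strategy for `arg`?

    The opponent plays any attacker of the last proponent argument; the
    proponent must reply with a not-yet-used counter-attacker.
    """
    def attackers(x):
        return {b for (b, t) in attacks if t == x}

    used = (used or set()) | {arg}  # arguments the proponent has committed to
    return all(
        any(proponent_wins(c, attacks, used) for c in attackers(b) - used)
        for b in attackers(arg)
    )

attacks = {("B", "A"), ("C", "B")}
print(proponent_wins("A", attacks))  # True: A is justified under grounded semantics
print(proponent_wins("B", attacks))  # False: the opponent wins by playing C
```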

Federico Castagna

Hello! I am Federico Castagna, a Postdoctoral Research Associate in Explainable AI at the University of Lincoln. My PhD studies in Informatics at King's College London were preceded by a Bachelor's degree in Philosophy and a Master's degree in Philosophical Sciences at the University of Milan. My research interests include Computational Argumentation, Proof Theories and Explainable AI. I am currently involved in the CONSULT project (https://consultproject.co.uk), where I am working to improve users' trust in the system by enhancing the AI's explainability using argumentation.

Talking with a bot: delivering argument-based explanations via chatbots

Artificial Intelligence constitutes a powerful means of assisting people in making well-informed decisions. However, this comes at a price: it is precisely the urge to overcome ethical issues involving AI-based systems, along with their users' distrust, that has driven the recent interest in the Explainable AI (XAI) research field. The idea is that the trustworthiness of AI systems can be improved by building more transparent and interpretable tools capable of explaining what the system has done while disclosing salient information along the way. Several studies advocate an account of explanations that is primarily argumentative and point toward specific templates that may be employed to clarify the rationale behind AI systems' decisions. Such templates can then be instantiated and fed to a chatbot that delivers the requested information by interacting with the interested user. The talk will be structured in three parts. Starting from a general introduction to XAI and its well-established relation with argumentation theory, I will then proceed, in the second part, with an analysis of the most relevant of Douglas Walton's argument schemes. This will include how they can be adapted to provide suitable explanation templates, along with an overall presentation of the formal dialogue protocols employed to convey such (instantiated) templates. In the final part of the talk, I will focus on argumentation-based chatbots and how these can deliver exhaustive explanations by prompting the interacting user to ask for information until satisfied. Chatbots are designed to converse with humans, which makes them media intrinsically suited to providing explanations.
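
As an illustrative sketch (the scheme text, names and structure here are my own simplifications, not the templates from the talk), a Walton-style argument scheme can be encoded as an explanation template whose critical questions become the follow-up prompts a chatbot offers the user:

```python
from dataclasses import dataclass

@dataclass
class ArgumentScheme:
    """A Walton-style scheme used as an explanation template."""
    name: str
    premises: list
    conclusion: str
    critical_questions: list

    def instantiate(self, **bindings):
        def fill(s):
            return s.format(**bindings)
        return {
            "explanation": [fill(p) for p in self.premises] + [fill(self.conclusion)],
            "follow_ups": [fill(q) for q in self.critical_questions],
        }

expert_opinion = ArgumentScheme(
    name="Argument from Expert Opinion",
    premises=["{expert} is an expert on {domain}.",
              "{expert} asserts that {claim}."],
    conclusion="Therefore, plausibly, {claim}.",
    critical_questions=["How credible is {expert} as an expert?",
                        "Is '{claim}' consistent with what other experts assert?"],
)

turn = expert_opinion.instantiate(expert="Dr. Lee", domain="cardiology",
                                  claim="the dosage should be reduced")
print("\n".join(turn["explanation"]))      # the instantiated explanation
print("You may ask:", *turn["follow_ups"], sep="\n- ")
```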

Hiroyuki Kido

Hiroyuki Kido joined Cardiff University in October 2019 as a lecturer at the School of Computer Science and Informatics. He is currently interested in neuroscientific approaches to unifying logic and machine learning. After obtaining his PhD from Tokyo Institute of Technology in 2011, he worked at the National Institute of Advanced Industrial Science and Technology in Japan (2011-2013), the University of Tokyo (2013-2016) and Sun Yat-sen University (2016-2019).

Bayesian Statistics in Formal Logic and Formal Argumentation

Thanks to the big data and computational power available today, Bayesian statistics plays an important role in various fields such as neuroscience, cognitive science, and artificial intelligence. This tutorial focuses on inverse problems in formal logic and abstract argumentation. Given noisy data about the truth values of logical formulae, the inverse logic problem aims to find models satisfying those truth values. Given noisy data about the acceptability of arguments, the inverse argumentation problem aims to find attack relations justifying that acceptability. We look at how Bayesian approaches generalise and solve these basic problems. We also discuss how the Bayesian approaches can serve as models of several types of reasoning, such as data-based logical reasoning, paraconsistent reasoning, counterfactual reasoning, nonmonotonic reasoning, predictive reasoning, statistical reasoning, and perceptual reasoning.
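
A toy version of the inverse logic problem (my own sketch; the tutorial's formulation is far more general): assuming a uniform prior over models and observations that flip the formula's true value independently with a fixed noise probability, Bayes' rule gives a posterior over assignments:

```python
from itertools import product

def posterior_over_models(variables, formula, observations, noise=0.1):
    """Posterior over truth assignments given noisy observations of
    `formula`'s truth value (uniform prior, i.i.d. flip noise)."""
    models = [dict(zip(variables, bits))
              for bits in product([False, True], repeat=len(variables))]
    weights = {}
    for m in models:
        truth = formula(m)
        likelihood = 1.0
        for obs in observations:
            likelihood *= (1 - noise) if obs == truth else noise
        weights[tuple(m.items())] = likelihood
    z = sum(weights.values())
    return {m: w / z for m, w in weights.items()}

# Three observations all report "p and q" as true
post = posterior_over_models(["p", "q"], lambda m: m["p"] and m["q"],
                             [True, True, True])
for model, prob in sorted(post.items(), key=lambda kv: -kv[1]):
    print(dict(model), round(prob, 3))  # the model p=q=True dominates
```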

Antonio Rago

Antonio is a Research Assistant in the Department of Computing at Imperial College London, where he also completed a PhD titled “Gradual Evaluation in Argumentation Frameworks: Methods, Properties and Applications” under the supervision of Prof. Francesca Toni. His main research interests lie in the deployment of argumentative technologies in applications, particularly in explainable AI, e.g. for explaining the outputs of neural networks, recommender systems or Bayesian classifiers, but also in other settings such as e-democracy, engineering, medicine and judgemental forecasting.

An Overview of Argumentative XAI

Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among the various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this talk, I will overview XAI approaches built using argumentative methods, leveraging the wide array of reasoning abstractions and explanation delivery methods permitted by computational argumentation. I will survey the literature, focusing on the different types of explanation (intrinsic and post-hoc), the different models with which argumentation-based explanations are deployed, the different forms of delivery, and the different argumentation frameworks they use. I will also lay out a roadmap for future work.

Markus Ulbricht

I am a Postdoctoral Research Associate at the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) at Leipzig University. My main research interests are theoretical aspects of formal argumentation. After my PhD studies, I continued working in Leipzig and contributed to several argumentation-related projects. In 2021, I visited TU Vienna for six months. At ScaDS.AI, my main focus is to investigate the potential of argumentation theory regarding its contributions to explainability.

Theoretical Tools Towards Argumentative Explanations in Abstract Argumentation Frameworks

Explainability is one of the hot topics in current AI research. Since abstract argumentation frameworks (AFs) facilitate an intuitive and user-friendly representation, investigating the role of AFs in explainable AI is a promising endeavor. Indeed, a considerable amount of research has been conducted in this context in recent years. Researchers have investigated how AFs can produce explanations for certain decisions in application scenarios (i.e. using AFs as a tool), but also how to better explain the behavior of AF semantics (i.e. focusing on AFs themselves). In this course, we will focus on the latter aspect and compare several approaches to explaining the (non-)acceptance of arguments in a given AF. The underlying notions range from general ideas which in principle apply to any non-monotonic logic, to specialized concepts which are tailored to understanding the behavior of AF semantics.
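
As a small deletion-style illustration (one simple notion of a non-acceptance explanation, not necessarily the definitions compared in the course), the minimal sets of arguments whose removal makes a target argument grounded-accepted can be found by brute force:

```python
from itertools import combinations

def grounded(args, attacks):
    """Grounded extension via the least fixed point of the defence function."""
    att = {a: {b for (b, t) in attacks if t == a} for a in args}
    S = set()
    while True:
        nxt = {a for a in args if all(att[b] & S for b in att[a])}
        if nxt == S:
            return S
        S = nxt

def why_not_accepted(a, args, attacks):
    """Smallest sets of other arguments whose removal makes `a` accepted."""
    for size in range(len(args)):
        hits = [set(s) for s in combinations(args - {a}, size)
                if a in grounded(args - set(s),
                                 {(b, t) for (b, t) in attacks
                                  if b not in s and t not in s})]
        if hits:
            return hits
    return []

# A is attacked by B and C, neither of which is counter-attacked
args = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "A")}
print(why_not_accepted("A", args, attacks))  # [{'B', 'C'}]
```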