The digital transition opens new perspectives for researchers interested in cultural processes. An increasing part of the material and immaterial heritage of Western culture is accessible via digital representations such as digital editions of manuscripts, multispectral images of paintings, or 3D models of archeological findings. Digital representations have the obvious advantage of permitting simultaneous remote access. This is of interest both to researchers and to the general public. Via web interfaces, scholars and enthusiasts can search for miniatures in the digital edition of the Codex Manesse or interact with the music scores of the Digitale Interaktive Mozart Edition. Additional effort is needed to include more cultural creations in the digital transition. Beyond that, the sheer number of creations already digitally accessible – almost 60 million on the Europeana aggregation platform – raises new challenges for humanities scholars. The task of analyzing and linking the many pieces of information becomes more important and difficult than ever. AI methods provide solutions to some of the challenges involved. The Semantic Web technology stack, for instance, permits knowledge-based algorithms to assist scholars in the task of linking large cultural data sets. Another issue is the vagueness and uncertainty omnipresent in the historic study of cultural processes. AI research has devised a number of methods able to deal with these phenomena. It is important, however, to realize that humanities scholars have specific requirements. Different from many AI applications in engineering, the goal is generally not to resolve all ambiguities. This calls for new approaches to classification, for instance. The workshop gathers AI researchers and interested humanities scholars. We encourage submissions that report on work in progress or present a synthesis of emerging research trends.
With the current scientific discourse on explainable AI (XAI), algorithmic transparency, interpretability, accountability, and finally explainability of algorithmic models and decisions, the workshop targets a prominent and timely topic. Accordingly, explainable and interpretable machine learning tackles this theme from the modeling and learning perspective, i.e., it targets interpretable methods and models that are able to explain themselves and their output, respectively. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, trends, and challenges in this area.
Information for real-life AI applications is usually pervaded by uncertainty and subject to change, and thus demands non-classical reasoning approaches. At the same time, psychological findings indicate that human reasoning cannot be completely described by classical logical systems. Sources of these phenomena are incomplete knowledge, incorrect beliefs, and inconsistencies. A wide range of reasoning mechanisms has to be considered, such as analogical or defeasible reasoning, possibly in combination with machine learning methods. The field of knowledge representation and reasoning offers a rich palette of methods for uncertain reasoning, both to describe human reasoning and to model AI approaches. The aim of this series of workshops is to address recent challenges and to present novel approaches to uncertain reasoning and belief change in their broad senses, and in particular to provide a forum for research work linking different paradigms of reasoning. We put a special focus on papers from both fields that provide a basis for connecting formal-logical models of knowledge representation and cognitive models of reasoning and learning, addressing formal as well as experimental or heuristic issues.
The WLP workshop provides a forum for exchanging ideas on declarative logic programming, non-monotonic reasoning, and knowledge representation, and facilitates interactions between research in theoretical foundations and research in the design and implementation of logic-based programming systems. Contributions are welcome on all theoretical, experimental, and application aspects of logic and constraint logic programming.
Advances in AI and the increasing application-readiness of AI techniques create the need to establish means for designing and developing AI software that is dependable. We call a system dependable if it meets functional safety requirements and if its performance is intuitive across all situations. For example, a dependable service robot will not hurt its owners and, if it can go shopping for tomatoes, it should also be able to recognize and carefully handle a tomato in all other contexts, including new ones. We expect that dependability of AI systems will be crucial to the development of trust in AI and to the acceptability of its use in important socio-technical applications. We therefore expect the proposed workshop to be of interest to any AI subfield, and in particular to those researchers working on applications. Moreover, the proposed workshop will be of interest to researchers from related communities in software engineering and formal methods. In particular, methods based on deep neural networks often lack dependability, possibly caused by unintended biases in the training data or counter-intuitive generalisation leading to sudden failures. While these shortcomings are currently addressed in fields such as explainable machine learning or explainable AI, the problem of achieving dependability goes beyond the advancement of individual methods. We require techniques to compose complex systems out of interdependent components, which may act asynchronously and adapt their behaviour over time. With this workshop we wish to create a forum for researchers from AI and software engineering to discuss methods for specifying dependability and achieving it.
Abstract argumentation graphs model arguments and the relationships between them. Quantitative bipolar graphs are one popular instance with many recent applications. In this setting, the relationships are usually attacks and supports, and the credibility of arguments is evaluated by numerical values like probabilities or more general strength values. This structure makes it possible to model decision problems very naturally. A decision can be based on pro and contra arguments, which, in turn, may attack or support each other. In many cases, the final decision can be explained very intuitively from the argumentation graph by going backwards through attackers and supporters and their final strength values. In this tutorial, we will focus on two approaches to quantitative abstract argumentation: epistemic probabilistic argumentation, as proposed by the workgroups of Hunter and Thimm, and gradual argumentation, as proposed by the workgroups of Baroni and Toni. In both frameworks, many interesting reasoning problems can be solved in polynomial time, and the results are easily interpretable and explainable from the graph structure. We will discuss the basic formal models and look at some modelling examples and recent applications.
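To make the idea of strength propagation in a quantitative bipolar graph concrete, the following sketch iterates a gradual strength update in the spirit of Potyka's quadratic energy model. The graph, base scores, and influence function are illustrative choices, not a specific published benchmark.

```python
# Minimal sketch of a gradual semantics for a quantitative bipolar
# argumentation graph (in the spirit of the quadratic energy model).

def h(x):
    """Bounded influence function mapping aggregated energy to [0, 1)."""
    x = max(x, 0.0)
    return x * x / (1.0 + x * x)

def gradual_strengths(base, attackers, supporters, iterations=200):
    """Iterate the strength update until (approximate) convergence."""
    s = dict(base)
    for _ in range(iterations):
        new = {}
        for a in base:
            # Aggregated energy: supporters push up, attackers push down.
            energy = sum(s[b] for b in supporters.get(a, [])) \
                   - sum(s[b] for b in attackers.get(a, []))
            # Move the base score towards 1 (positive energy) or 0 (negative).
            new[a] = base[a] - base[a] * h(-energy) + (1 - base[a]) * h(energy)
        s = new
    return s

# A decision argument "d" with one pro argument "p" and one contra
# argument "c", where "c" is itself attacked by "p".
base = {"d": 0.5, "p": 0.8, "c": 0.6}
attackers = {"d": ["c"], "c": ["p"]}
supporters = {"d": ["p"]}
strengths = gradual_strengths(base, attackers, supporters)
print({a: round(v, 3) for a, v in strengths.items()})
```

Reading the result backwards through the graph gives exactly the kind of explanation mentioned above: the contra argument is weakened by its attacker, so the decision argument ends up above its base score.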
Changepoints are abrupt changes in the statistical properties of a signal. Changepoint detection algorithms can be used to automatically segment a signal (or several, in the multivariate case). These segments can later be used as input for other analysis methods, such as anomaly detection, clustering, pattern recognition, etc. Changepoint detection is performed in many domains, for example medical diagnosis, engineering systems, speech recognition, or sensor data monitoring, to mention just a few. The starting point of changepoint detection techniques is usually placed with the work of Page in the 1950s (see also the classical work of Basseville and Nikiforov (1993)). Over time, changepoint techniques have adopted methods not only from statistics and signal processing but also from artificial intelligence (see recent reviews such as Truong et al. (2018) and Aminikhanghahi and Cook (2017)). Understanding the mathematical principles of changepoint detection algorithms provides useful tools for data analysis and artificial intelligence. This tutorial is divided into two main parts. In the first part, we present the principles of changepoint detection. In the second part, we focus on current implementations of changepoint detection available in R and Python.
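The core principle can be illustrated with a dependency-free sketch: fit a piecewise-constant model on either side of every candidate split and pick the split with the smallest total squared error. This least-squares cost is the same kind of criterion used by the implementations covered in the tutorial (e.g. the ruptures package in Python); the toy signal and change location below are invented.

```python
# Illustrative single-changepoint detector: split a 1-D signal at the
# index that minimizes the total squared error of a piecewise-constant fit.

def sse(segment):
    """Sum of squared deviations of a segment from its mean."""
    mean = sum(segment) / len(segment)
    return sum((x - mean) ** 2 for x in segment)

def best_split(signal):
    """Return the index t minimizing SSE(signal[:t]) + SSE(signal[t:])."""
    costs = {t: sse(signal[:t]) + sse(signal[t:])
             for t in range(1, len(signal))}
    return min(costs, key=costs.get)

# Toy signal with a mean shift at index 5.
signal = [0.0, 0.1, -0.1, 0.0, 0.1, 3.0, 3.1, 2.9, 3.0, 3.1]
print(best_split(signal))  # → 5
```

Detecting multiple changepoints amounts to applying this cost within a search strategy (binary segmentation, dynamic programming, PELT, etc.), which is what the library implementations provide.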
From a process point of view, the CRoss-Industry Standard Process for Data Mining (CRISP-DM) describes six major steps for any data analysis project: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment (Shearer 2000). Having gained knowledge about the problem to solve (i.e., business understanding), the required data need to be identified and semantically understood. This requires domain knowledge as well as data engineering and data analysis knowledge. In this phase, it is essential to assess the quality of the data (garbage in, garbage out). Data understanding is the starting point for data integration and preparation. For the integration, many technologies are available; however, none of them is a silver bullet. The analysis goal and the planned analysis approach (e.g., classification, regression, clustering, etc.) influence the technology stack, as does the existing infrastructure. The analysis also poses concrete requirements on the data (type, quality, quantity). The task of data preparation is to extract the required data from their sources through transformation, cleaning, filtering, missing value treatment, etc., and to prepare them for the analysis. The explorative character of data analyses as well as the strong influence of data quality on the results require that data preparation be performed repeatedly. In addition, due to the exploratory nature of an analysis, it often becomes clear only in the course of the project which data are actually important, how the data need to be prepared, and which data lead to better results. Consequently, data scientists spend a lot of time on data preparation alone (up to 70% of the project effort). In this hands-on tutorial, participants will learn, based on concrete examples, how to kick-start a data analysis project (i.e., data integration and data preparation).
We will show, based on experience from many projects, where the pitfalls lie and how to safely navigate around them. This tutorial consists of a theoretical part giving an overview of relevant data preparation tasks, as well as a hands-on part, which focuses on issues frequent in practice: missing values and outliers. Based on different examples implemented in Jupyter notebooks (both in Python and R), we will show how to detect and handle data quality deficits. The goal of the hands-on examples is to illustrate common data preparation methods presented in the theoretical part of the tutorial and to show which software packages and libraries are currently used and helpful for data preparation.
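In the style of the hands-on part, the following sketch shows the two data preparation steps just mentioned: mean imputation of missing values and outlier flagging with the 1.5-IQR rule. The column values are invented; in the tutorial itself these steps are carried out with data frame libraries in Python and R.

```python
# Two common data preparation steps on a toy column: imputation of
# missing values, then rule-based outlier flagging.
from statistics import mean, quantiles

raw = [4.1, None, 3.9, 4.3, None, 4.0, 12.5]  # None marks a missing value

# 1. Impute missing values with the mean of the observed ones.
observed = [x for x in raw if x is not None]
filled = [x if x is not None else round(mean(observed), 2) for x in raw]

# 2. Flag outliers with the 1.5-IQR rule on the imputed column.
q1, _, q3 = quantiles(filled, n=4)
iqr = q3 - q1
outliers = [x for x in filled if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(filled)
print(outliers)  # the value 12.5 stands out
```

Note how the two steps interact: the outlier 12.5 inflates the imputation mean, which is exactly the kind of caveat the tutorial discusses and a reason why preparation is performed repeatedly.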
The field of artificial intelligence and knowledge representation (KR) has produced a wide variety of formalisms, notions, languages, and formats during the past decades. Each approach has been motivated and designed with specific applications in mind. Nowadays, in the century of Industry 4.0, the Internet of Things, and smart devices, we are interested in ways to connect the various approaches and allow them to distribute and exchange their knowledge and beliefs in a uniform way. This raises the problem that these sophisticated knowledge representation approaches cannot understand each other's points of view, and their positions on semantics are not necessarily compatible either. These two problems between different KR formalisms have been tackled by the concept of Multi-Context Systems, which allow methods to transfer information under a strong and generalised notion of semantics. Recent advances in the representation of streams have allowed one to utilise the ideas of Multi-Context Systems and expand them to provide reasoning based on streams. Modern languages, such as LARS, extend logic programming formalisms like ASP with sliding windows and temporal modalities that allow one to encode monitoring, configuration, control, and many other problems occurring in the domains listed above. Available distributed high-performance reasoning systems are closely related to the ideas of Multi-Context Systems and can be used to efficiently process data streams with low latency. The goal of this tutorial is to provide a sophisticated and formally sound overview of the last decade of advances in the field of Multi-Context Stream Reasoning. The aim is to educate participants on how Multi-Context Systems evolved into the current state of the art and how they differ in terms of applicability and feasibility. In addition, it should provide insight into the motivation and philosophy behind the development of the different Multi-Context Systems.
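The sliding windows and temporal modalities of LARS-style languages can be illustrated procedurally: restrict a timestamped stream to a time-based window and then ask whether an atom holds at some time point in it (the "diamond" modality). The stream contents and window width below are invented for illustration.

```python
# Toy illustration of a time-based sliding window over a stream of
# (timestamp, atom) pairs, and a LARS-style "sometime in the window" query.

def window(stream, now, w):
    """Atoms observed in the time-based window (now - w, now]."""
    return {atom for (t, atom) in stream if now - w < t <= now}

def sometime(stream, now, w, atom):
    """Diamond modality: does `atom` hold at some point in the window?"""
    return atom in window(stream, now, w)

# A small traffic-monitoring stream: congestion reported at times 3 and 9.
stream = [(1, "free"), (3, "jam"), (5, "free"), (9, "jam")]
print(sometime(stream, now=10, w=3, atom="jam"))  # time 9 lies in (7, 10]
print(sometime(stream, now=7, w=3, atom="jam"))   # time 3 lies outside (4, 7]
```

In an actual Multi-Context Stream Reasoning system, such window operators are evaluated declaratively inside rules and the contexts exchange the resulting beliefs, rather than being queried imperatively as here.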
Machine-readable commonsense knowledge (CSK) is fundamental for automated reasoning about the general world, and relevant for downstream applications such as question answering and dialogue. In this tutorial, we focus on the construction and consolidation of large repositories of commonsense knowledge. After briefly surveying crowdsourcing approaches to commonsense knowledge compilation, in the main parts of this tutorial we investigate (i) automated text extraction of CSK and the relevant choices of extraction methodology and corpora, and (ii) knowledge consolidation techniques that aim to canonicalize, clean, or enrich the initial extraction results. We end the tutorial with an outlook on application scenarios and the promises of deep pretrained language models.
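As a minimal impression of what automated text extraction of CSK involves, the sketch below matches Hearst-style lexical patterns against plain text to produce (subject, relation, object) triples. The sentences, patterns, and relation names are toy examples; real extraction pipelines use far richer patterns, large corpora, and the consolidation steps discussed above.

```python
# Toy pattern-based commonsense extraction: lexical patterns over text
# yield (subject, relation, object) triples.
import re

PATTERNS = [
    (re.compile(r"(\w+)s are capable of (\w+)"), "CapableOf"),
    (re.compile(r"(\w+)s are used for (\w+)"), "UsedFor"),
]

def extract(text):
    """Collect all triples matched by any pattern in the text."""
    triples = []
    for pattern, relation in PATTERNS:
        for subj, obj in pattern.findall(text.lower()):
            triples.append((subj, relation, obj))
    return triples

corpus = "Elephants are capable of swimming. Hammers are used for nailing."
print(extract(corpus))
```

Such raw extractions are noisy and redundant, which is precisely why the consolidation techniques (canonicalization, cleaning, enrichment) form the second main part of the tutorial.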
Bayesian methods can estimate model uncertainty and uncertainty regarding the input in Neural Networks, making them more robust and precise. Deep Neural Networks produce state-of-the-art results in various fields like natural language and image processing, solving tasks such as speech recognition, object detection, or object recognition. In contrast to classic Neural Networks, the model parameters of Bayesian Neural Networks (BNNs) are not defined by point estimates but by probability distributions. Therefore, BNNs are well suited to tackling the problem of outlier detection, with which classic Neural Networks struggle. Thus, they can detect misclassified out-of-distribution input examples and counteract adversarial attacks. This is especially important for safety-critical applications in fields like medicine or autonomous driving. This tutorial aims to give an introduction to and motivation for Neural Networks and uncertainty measurement and then dives deeper into comparing Bayesian Deep Learning approaches.
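The core idea of parameters as distributions can be shown in miniature: a single linear "neuron" whose weight is a Gaussian rather than a point estimate. Sampling weights yields a predictive mean together with an uncertainty estimate that grows for inputs far from the familiar region. All numbers below are invented; a real BNN would obtain the weight posterior via, e.g., variational inference.

```python
# Toy Monte-Carlo prediction with a distributional weight instead of a
# point estimate: uncertainty comes for free from the sampled outputs.
import random
from statistics import mean, stdev

random.seed(0)

# Assumed Gaussian posterior over the single weight: N(w_mu, w_sigma^2).
w_mu, w_sigma = 2.0, 0.3

def predict(x, samples=2000):
    """Predictive mean and standard deviation for input x via sampling."""
    outputs = [random.gauss(w_mu, w_sigma) * x for _ in range(samples)]
    return mean(outputs), stdev(outputs)

m_near, s_near = predict(1.0)   # input close to the training region
m_far, s_far = predict(10.0)    # out-of-distribution input
print(round(s_near, 2), round(s_far, 2))  # far input => larger uncertainty
```

A classic network with the point weight 2.0 would return equally confident answers for both inputs; the sampled standard deviation is what allows a BNN-style model to flag the second prediction as unreliable.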
We argue that artificial intelligence systems need to be endowed with a higher level of intelligence that we call "argumentative intelligence", which allows systems to reason beyond facts by taking into account more complex semantic relationships and to verbalize those relationships to a user in order to enhance the transparency, explainability, and controllability of AI systems. Arguments play an important role in creating transparency and explaining suggestions, and thus support human decision making in interaction with "white-box" systems. In this tutorial we outline the importance of argumentation for artificial intelligence and consider three important fields within computational argumentation: argumentation mining, argumentation retrieval, and argumentation synthesis. Argumentation mining is concerned with understanding arguments expressed in text. Argument retrieval is concerned with supporting humans in retrieving the most relevant arguments for a given topic. Argument synthesis is concerned with how to support human decision making by machine-generated arguments.