Learning about Language and Action for Robots
To operate effectively, and to collaborate with humans, robots need to know much about the world: the kinds of objects in it, their properties, the spatial relationships between them, and the actions that can be performed on them, as well as how language is used to describe these things. I will present a cognitively plausible novel framework capable of incrementally learning a language used to give commands to a robot in a tabletop environment, and of grounding linguistic concepts in the induced visual semantics of the observed scenes. The system also induces a set of probabilistic grammar rules governing the previously unknown language. I also plan to talk about new work in which we show how robots can improve their manipulation planning in cluttered environments by learning from human demonstrations in virtual worlds.
Tony Cohn is Professor of Automated Reasoning at the University of Leeds. His earlier work focussed on knowledge representation and reasoning with a particular focus on qualitative spatio-temporal representation. His research has broadened to encompass Cognitive Vision, Robotics, Grounding Language in Vision, and Decision Support Systems. He is the recipient of the 2015 IJCAI Donald E Walker Distinguished Service Award and the 2012 AAAI Distinguished Service Award. He is a Fellow of the Royal Academy of Engineering, EurAI, AAAI, and AISB. He is co-Editor-in-Chief of the journal Spatial Cognition and Computation and has been Chairman/President of AISB, EurAI, KR inc, and the IJCAI Board of Trustees.
From Model-free to Model-based AI: Representation Learning for Planning
One of the main obstacles for developing flexible AI systems is the split between data-based learners and model-based solvers. Solvers such as classical planners are very flexible and can deal with a variety of problem instances and goals but require first-order symbolic models. Data-based learners, on the other hand, are robust but do not produce such representations. In the talk, I look at recent work in my lab aimed at bridging this gap by learning first-order symbolic representations for planning and for generalized planning directly from non-symbolic data.
Hector Geffner is an ICREA Research Professor at the Universitat Pompeu Fabra (UPF) in Barcelona, Spain. He was born and grew up in Buenos Aires and obtained a PhD in Computer Science at UCLA under the supervision of Judea Pearl. He then worked at the IBM T.J. Watson Research Center in NY, USA, and at the Universidad Simon Bolivar in Caracas. Hector is a Fellow of AAAI and EurAI, a board member of the European Association for AI, and a former Associate Editor of AI and JAIR. He is the recipient of an Advanced ERC grant to do research on symbolic representation learning for planning.
10^120 and Beyond: Scalable AI Search Algorithms as a Foundation for Powerful Industrial Optimization
Search algorithms are among the core techniques of Artificial Intelligence. Continuous research over seven decades has led to breakthroughs that are at the heart of contemporary learners and solvers, where search helps to train neural networks, play games, or solve complex combinatorial optimization problems. In this talk, I review the state of the art in search algorithms. I discuss the potential and selected open research problems, and illustrate them with examples from practical applications where search is used to solve industrial planning and scheduling problems.
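To make the kind of search algorithm the talk surveys concrete, here is a minimal best-first (A*) search sketch. The toy graph and the trivial heuristic are illustrative assumptions, not material from the talk itself.

```python
# Minimal A* best-first search (assumption: illustrative toy example).
import heapq

def astar(graph, h, start, goal):
    """Return the cheapest path cost from start to goal, or None."""
    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best = {start: 0}                  # cheapest known cost to each node
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 2)]}
cost = astar(graph, lambda n: 0, "A", "D")  # h = 0 reduces A* to uniform-cost search
```

With an informative, admissible heuristic instead of the zero heuristic, the same code expands far fewer nodes, which is what makes such algorithms scale to large industrial state spaces.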
Jana Koehler holds the chair of artificial intelligence at Saarland University and is head of the new research department for Algorithmic Business and Production at the German Research Center for Artificial Intelligence (DFKI). Her research, teaching, and consulting activities focus on the use of AI algorithms to improve the flexibility, efficiency, and resilience of industrial processes and decisions. Prior affiliations include Lucerne University of Applied Sciences and Arts, IBM Research Zurich, Schindler Elevators R&D, the University of Freiburg, and others.
Semantic Relational Learning
Relational Learning (RL) and Relational Data Mining (RDM) address the task of inducing models or patterns from multi-relational data. One of the established approaches to RDM is propositionalization, characterized by transforming a relational database into a single-table representation. The talk provides an overview of propositionalization algorithms, including a particular approach named wordification, all of which have been made publicly available through the web-based ClowdFlows data mining platform. The focus of this talk is on recent advances in Semantic Relational Learning and Semantic Data Mining (SDM), characterized by exploiting relational background knowledge in the form of domain ontologies in the process of model and pattern construction. The open source SDM approaches, available through the ClowdFlows platform, enable software reuse and experiment replication. The talk concludes by presenting recent developments that speed up SDM through data mining and network analysis approaches.
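The core idea of propositionalization can be sketched in a few lines: a one-to-many relation is summarized by aggregate features, yielding a single table that standard learners can consume. The toy customer/order schema and the chosen aggregates below are illustrative assumptions, far simpler than the actual ClowdFlows algorithms.

```python
# Minimal propositionalization sketch (assumption: illustrative toy schema).
# Two relational tables are flattened into one row per customer by
# aggregating the one-to-many "orders" relation.

customers = [
    {"id": 1, "segment": "retail"},
    {"id": 2, "segment": "business"},
]
orders = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 25.0},
    {"customer_id": 2, "amount": 300.0},
]

def propositionalize(customers, orders):
    """Build a single-table representation with aggregate features."""
    table = []
    for c in customers:
        own = [o["amount"] for o in orders if o["customer_id"] == c["id"]]
        table.append({
            "id": c["id"],
            "segment": c["segment"],
            "n_orders": len(own),                    # aggregate: count
            "total_amount": sum(own),                # aggregate: sum
            "max_amount": max(own) if own else 0.0,  # aggregate: max
        })
    return table

flat = propositionalize(customers, orders)
```

Once flattened, any propositional learner (decision trees, rule induction, etc.) can be applied to the resulting table, which is exactly what makes propositionalization a bridge between relational and standard data mining.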
Nada Lavrač is Research Councillor at Department of Knowledge Technologies at Jožef Stefan Institute, Ljubljana, Slovenia. She is Professor of Computer Science at the University of Nova Gorica and Jožef Stefan International Postgraduate School in Ljubljana, where she acts as Head of ICT programme. Her research interests are in Knowledge Technologies, with particular interests in machine learning, data mining, text mining, knowledge management and computational creativity. Her special interest is in supervised descriptive rule induction, where the research goal is to automatically induce rules from class labeled data, stored either in simple tabular format or in complex relational databases. Areas of her applied research include data mining applications in medicine, healthcare, and bioinformatics. She is (co-)author of several books, including Foundations of Rule Learning, Springer 2012.
Open and Closed Book Machine Reading
One of the fundamental problems in AI is to enable computational agents to access human knowledge expressed in natural language. For decades this meant teaching machines to “read text” by turning it into highly structured knowledge bases ready for downstream use. In this talk I will discuss two recent alternative paradigms at seemingly opposite ends of a spectrum. In retrieve-and-read approaches, text is left as is, and is retrieved and comprehended on an as-needed basis, much in the way open book exams work. In recent closed book approaches, all knowledge is stored in the parameters of massive language models, without the need for retaining text corpora at all. I will contrast the strengths and weaknesses of both approaches, and show how we overcome some of them in our most recent work, partly by finding synergies between the two.
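The "retrieve" step of retrieve-and-read can be illustrated with a deliberately simple word-overlap ranker: given a question, fetch the passage most likely to contain the answer, then hand it to a reader model. The corpus and scoring function below are illustrative assumptions, far cruder than the neural retrievers used in practice.

```python
# Minimal sketch of the retrieval step in open-book (retrieve-and-read) QA
# (assumption: toy corpus and naive word-overlap scoring for illustration).

corpus = [
    "Edinburgh is the capital of Scotland.",
    "The Eiffel Tower is in Paris.",
    "Paris is the capital of France.",
]

def retrieve(question, corpus, k=1):
    """Rank passages by word overlap with the question; return the top k."""
    q = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: -len(q & set(d.lower().rstrip(".").split())),
    )
    return scored[:k]

docs = retrieve("what is the capital of france", corpus)
```

A closed-book system, by contrast, would skip this step entirely and generate the answer directly from model parameters, with no corpus available at inference time.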
Sebastian is a researcher at Facebook AI Research (FAIR) and a Professor at University College London. He did his PhD in Edinburgh and postdocs at the University of Tokyo and UMass Amherst. Sebastian is working on how to use Machine Learning and Natural Language Processing to teach machines how to acquire, manipulate, use and reason with knowledge.
The Beauty of Imperfection: From Gut Feeling to Transfer Learning to Self-Supervision
The success of machine learning, and in particular of representation learning and deep nets, has had a significant impact not only in the academic world, but has also shown its added value in the corporate world and its industrial applications: from visual quality inspection, dynamic price determination, autonomous parameter optimization, improved adaptivity of robotic controls, and AI-optimized supply chains, to predictive maintenance in production. Algorithms make our lives more efficient, increasingly support our (often gut-feeling-driven) decision-making process, and in part take it over. The strength, and at the same time the challenge, of modern ML arises when it hits the beautiful real world of imperfection and uncertainty, that is, when it must cope with new situations and unknown unknowns. As the quality of performance depends strongly on the size, balance, and purity of the input data, which are rarely given in sufficient quantity and quality, new approaches towards self-supervision and trustworthiness are of importance. This talk motivates some research challenges within real-world applications of industrial AI and presents recent research work from the lab on robustness, explainability, and continuous learning for industrial applications.
Ulli is curious about the foundations of computational intelligence, specifically methodologies that bridge the areas of connectionist and symbolic learning applied to real-world AI applications. He is passionate about enabling an open and interdisciplinary research and innovation culture with meaningful impact to enhance people's lives. Over the last years, Ulli has pushed and led various AI initiatives within Siemens, such as the Core Technology Initiative on Deep Learning and Artificial Intelligence in 2015 and various hackathons and bootcamps on AI, and in 2017 he founded, together with a passionate team, the Siemens Artificial Intelligence Lab in Munich.