We were pleased to host an excellent line-up of invited speakers:

Hans Briegel, University of Innsbruck

Hans Briegel’s research focuses on fundamental aspects of quantum theory and its applications in computer science and other branches of science. His work with R. Raussendorf on the concept of the one-way quantum computer introduced a new paradigm for building a quantum computer and led to a new understanding of entanglement. One of his main current interests is understanding the ultimate power of machines to compute and to simulate Nature; to this end, he explores models for quantum information processing in both physical and biological systems.

Title: Projective Simulation for Learning and Agency

Abstract: I will first present the model of projective simulation (PS) for a learning agent, whose interaction with its environment is governed by a simulation-based projection. The PS agent uses random walks in its episodic and compositional memory (ECM) to project itself into future situations before taking real action. The PS model can solve basic tasks in reinforcement learning, and it also allows for the implementation of advanced concepts such as generalization and meta-learning. Notably, projective simulation can be quantized, allowing for a quantum mechanical speed-up in the agent’s deliberation process. I will then discuss recent applications of the PS model in robotics and in the philosophy of action, as well as the question of to what extent learning agents can help us find new quantum experiments.
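The core deliberation-and-reward loop described above can be illustrated with a minimal sketch. This is only the basic two-layer scheme (percept clips connected to action clips by weighted edges); the percept and action names and the toy task are invented for illustration, and the full PS model adds features such as damping, edge glow, clip composition, and the quantum walk mentioned in the abstract.

```python
import random

class PSAgent:
    """Minimal two-layer projective simulation (PS) sketch: percept clips
    connect to action clips via weighted edges (h-values); deliberation is
    a one-step random walk over those edges, and a reward strengthens the
    edge that was just traversed."""

    def __init__(self, percepts, actions):
        self.actions = list(actions)
        # all h-values start at 1, giving uniform hopping probabilities
        self.h = {(p, a): 1.0 for p in percepts for a in self.actions}
        self.last_edge = None

    def deliberate(self, percept):
        # random-walk step: hop to an action clip with probability
        # proportional to the h-value of the connecting edge
        weights = [self.h[(percept, a)] for a in self.actions]
        action = random.choices(self.actions, weights=weights)[0]
        self.last_edge = (percept, action)
        return action

    def learn(self, reward):
        # strengthen the edge used in the last deliberation
        self.h[self.last_edge] += reward

# toy task (invented): the rewarded action is to echo the percept's direction
random.seed(1)
agent = PSAgent(["left", "right"], ["go_left", "go_right"])
for _ in range(300):
    percept = random.choice(["left", "right"])
    action = agent.deliberate(percept)
    agent.learn(1.0 if action == "go_" + percept else 0.0)
```

After training, the h-values of the rewarded edges dominate, so the random walk increasingly projects the agent onto the correct action before it acts.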

Peter Gärdenfors, University of Lund

Peter Gärdenfors is a member of the Royal Swedish Academy of Letters, History and Antiquities and a recipient of the Gad Rausing Prize. Internationally, he is one of Sweden's most notable philosophers. His previous research focused on philosophy of science, decision theory, belief revision, and nonmonotonic reasoning. His main current research interests are concept formation (using conceptual spaces based on geometrical and topological models), cognitive semantics, models of knowledge and information, and the evolution of cognition.

Title: The Role of Domains in the Representation of Word Meanings (video)

Abstract: I first present some of the main ideas concerning the semantics of word classes from my book Geometry of Meaning. In particular, I discuss the hypothesis that the meanings of adjectives, verbs, and prepositions (unlike nouns) can be represented in a single domain (the single-domain hypothesis). I also present some preliminary ideas on how nouns can be classified according to which domains are included in their corresponding semantic categories. On this basis, I will discuss the relevance of domains for computational approaches to natural language processing.
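The idea of a single domain with a geometric structure can be made concrete with a small sketch. In conceptual-spaces terms, a color adjective corresponds to a convex region of the color domain, and nearest-prototype categorization carves the domain into the Voronoi cells of the prototypes; the prototype coordinates below are invented for illustration.

```python
import math

# Illustrative prototypes in a single color domain (hue, saturation,
# brightness); the coordinates are invented for this sketch.
PROTOTYPES = {
    "red":    (0.00, 0.9, 0.5),
    "yellow": (0.17, 0.9, 0.6),
    "blue":   (0.60, 0.8, 0.4),
}

def categorize(point):
    """Nearest-prototype categorization: the Voronoi cells of the
    prototypes partition the domain into one convex region per term."""
    dist = lambda p, q: math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(PROTOTYPES, key=lambda term: dist(point, PROTOTYPES[term]))
```

A point near the "red" prototype is classified as red; the single-domain hypothesis is the claim that a word class like color adjectives needs only this one domain, whereas a noun like "apple" combines several domains (color, shape, taste, and so on).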

Dominic Widdows, Grab Technologies

Dominic Widdows is a mathematician and computational linguist, and a world expert in quantum informatics and semantic vector spaces. His book "Geometry and Meaning" is considered one of the most comprehensive and insightful accounts of the latter topic. His work often stands at the intersection of mathematics, quantum physics, and NLP, approaching language-related problems from a unique perspective.

Title: Semantic Spaces: Successes and Goals (video)

Abstract: This talk will review some of the successes of semantic spaces over the past decade, and outline some of the major needs for the future. From early bag-of-words models for information retrieval, semantic spaces have been applied to many semantic modelling and reasoning challenges. This talk will discuss some perhaps less-familiar applications in medical informatics, including literature-based discovery, ontology-based reasoning, drug repurposing, and sequence alignment. In the process we’ll examine the (arguably) quantum-like nature of these models, looking particularly at the use of “entangled” superpositions of product states for searching a model, and the logical and probabilistic operations that such geometric models support. This story demonstrates that semantic spaces can represent propositions, analogies, and gradable quantities, as well as the traditional use of points for words and documents.

This leads to a more general question: “What else can semantic spaces represent, and how can this help science?” One potential opportunity is that semantic spaces can be used to represent goals and objectives, something that has been markedly lacking in formal models for language. “Drawing words from a distribution of topics in accordance with grammatical structure” does not describe why people write documents, and a quick glance at a few news articles shows that part of the reason is “to convince the reader of something”. The second part of the talk will explore the suggestion that goals and objectives can be expressed in the same model as words and documents, potentially leading to a way of understanding rhetoric and persuasion in terms of semantic spaces.
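The starting point mentioned above, bag-of-words models for information retrieval, can be sketched in a few lines: each document becomes a word-count vector, a query is the superposition (vector sum) of its word vectors, and documents are ranked by cosine similarity. The documents and query below are invented, and this shows only the classic retrieval case, not the quantum-like operations the talk goes on to discuss.

```python
import math
from collections import Counter

def vec(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# a tiny invented collection
docs = [
    "quantum entanglement and superposition in physics",
    "drug repurposing from the medical literature",
    "vector models for information retrieval",
]

# a query is a superposition (vector sum) of its word vectors
query = vec("quantum superposition")
ranked = sorted(docs, key=lambda d: cosine(query, vec(d)), reverse=True)
```

In this geometric picture, the further operations the abstract mentions (logical connectives, probabilistic weighting, product-state superpositions) are built on top of the same vector representation.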