QPL 2016, and took place on 11 June 2016, at the University of Strathclyde (Glasgow, Scotland).
Since their introduction in the early 1970s, vector space models of meaning have evolved into a well-established area of research in Natural Language Processing (NLP). Their probabilistic nature and their ability to exploit the abundance of large-scale resources such as the Web make them one of the most useful tools for modeling what we broadly call meaning in language, and arguably the most successful (Turney and Pantel, 2010). The angular distance between vectors has been widely used in NLP as a proxy for the degree of similarity in meaning.
Another field in which vector space models play an important role is physics, and especially quantum theory. Though seemingly unrelated to language, intriguing connections have recently been uncovered. The categorical model of Coecke et al. (2010), inspired by quantum protocols, has provided a convincing account, both theoretical and practical, of compositionality in vector space models of NLP. The resulting setting has systematically extended vector models from words to sentences, making it possible to reason about sentence meaning with the same tools as for word meaning. Frobenius algebras have enabled reasoning about the meanings of functional words such as relative pronouns (Sadrzadeh et al., 2013), and have been used for modeling aspects of language such as intonation (Kartsaklis and Sadrzadeh, 2015). The CPM construction over the underlying category has provided a setting where the traditional notion of a word vector is replaced with that of a density matrix, allowing for a more fine-grained model (Piedeleu et al., 2015; Balkir, 2014). Throughout these developments, the diagrammatic calculus of categorical quantum mechanics has simplified the associated computations, allowing for a depiction of the flow of meaning within sentences using methods similar to those used for quantum protocols such as teleportation.
The link between physics and natural language semantics via vector space models has not been restricted to the aspirations and tools provided by categorical quantum mechanics. Density matrices have been used for modelling grammar alongside meaning (Blacoe et al., 2013), whereas Sordoni and Nie (2014) exploit similar means for information retrieval. Methods from quantum logic have been applied to model logical words in natural language (Widdows, 2003), to reason about the human mental lexicon in cognitive processes (Bruza et al., 2009), and to vectors of queries and documents in information retrieval (Van Rijsbergen, 2004).
There is also a long-standing history of vector space models in cognitive science. Theories of categorization such as those developed by Ashby and Gott (1988); Nosofsky (1986); Rosch and Mervis (1975) utilise notions of angular distance between vectors. Hampton (1987); Smith and Osherson (1984); Tversky (1977) encode meanings as feature vectors, and more recently Gärdenfors (2004) has developed a model of concepts in which conceptual spaces provide geometric structures, and information is represented by points, vectors and regions in vector spaces. The conceptual spaces model has been applied to language evolution (Steels et al., 2005), scientific theory change (Gärdenfors and Zenker, 2013), and models of musical creativity (Forth et al., 2010), amongst others, and has the potential to augment NLP models of meaning with representations that have been learned through interaction with the external world.
Exploiting the common ground provided by the concept of a vector space, the workshop brought together researchers working at the intersection of NLP, cognitive science, and physics, offering them an appropriate forum for presenting their uniquely motivated work and ideas. The interplay between these three disciplines fostered theoretically motivated approaches to understanding how the meanings of words interact with each other in sentences and discourse, how diagrammatic reasoning depicts and simplifies this interaction, how language models are determined by input from the world, and how word and sentence meanings interact logically. Topics of interest included (but were not restricted to):