In case you are looking for something interesting to read, I can share what I have been thinking about lately. In Thought vectors and the dimensionality of the space of concepts (a post from last week) I discussed the space of concepts (primitives) used in human language or, equivalently, in human thought. Several lines of reasoning lead to the conclusion that this space has only ~1000 dimensions and that it has some of the qualities of an actual vector space. Indeed, one can speak of some primitives being closer to or further from others, which yields a notion of distance, and one can rescale a vector to increase or decrease the intensity of its meaning. See examples in the earlier post:
You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. ... it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers.

The earlier post focused on breakthroughs in language translation which utilize these properties, but the more significant aspect (to me) is that we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
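To make the geometric picture concrete, here is a minimal sketch of what "distance" and "rescaling intensity" mean for word vectors. The 4-dimensional toy vectors below are made up purely for illustration; real embeddings (word2vec, GloVe, and the like) have hundreds to roughly a thousand dimensions, but the arithmetic is the same.

```python
# Toy illustration of vector-space properties of word/concept embeddings:
# distance (via cosine similarity) and rescaling to change "intensity".
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings: "cat" and "dog" point in similar directions,
# while "meme" overlaps only partly, so it sits further away.
cat  = np.array([0.9, 0.1, 0.3, 0.0])
dog  = np.array([0.8, 0.2, 0.1, 0.0])
meme = np.array([0.1, 0.0, 0.9, 0.4])

print(cosine_similarity(cat, dog))   # high similarity: nearby concepts
print(cosine_similarity(cat, meme))  # lower similarity: related but more distant

# Rescaling a vector changes the intensity of the meaning without changing
# its direction; cosine similarity is unaffected by the scale factor.
very_cat = 2.0 * cat
print(cosine_similarity(cat, very_cat))  # 1.0: same direction, larger magnitude
```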
Below are two relevant papers, both by Google researchers. The first, from just this month, reports remarkable "reading comprehension" capability built on paragraph vectors. The second, from 2014, introduces the paragraph-vector method itself.
Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors
Radu Soricut, Nan Ding
https://arxiv.org/abs/1612.04342
(Submitted on 13 Dec 2016)
We present a dual contribution to the task of machine reading-comprehension: a technique for creating large-sized machine-comprehension (MC) datasets using paragraph-vector models; and a novel, hybrid neural-network architecture that combines the representation power of recurrent neural networks with the discriminative power of fully-connected multi-layered networks. We use the MC-dataset generation technique to build a dataset of around 2 million examples, for which we empirically determine the high-ceiling of human performance (around 91% accuracy), as well as the performance of a variety of computer models. Among all the models we have experimented with, our hybrid neural-network architecture achieves the highest performance (83.2% accuracy). The remaining gap to the human-performance ceiling provides enough room for future model improvements.
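For readers curious what "combining a recurrent network with fully-connected layers" might look like in code, here is a rough sketch in Keras. This is not the authors' architecture: the LSTM encoder, the layer sizes, and the binary "score this candidate answer" framing are all illustrative assumptions on my part.

```python
# Hypothetical hybrid model: a recurrent encoder over the token sequence,
# followed by fully-connected layers that score a candidate answer.
import tensorflow as tf

VOCAB_SIZE = 50_000  # assumed vocabulary size
EMBED_DIM = 300      # assumed embedding dimensionality
SEQ_LEN = 200        # assumed (padded) input length in tokens

inputs = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
# Recurrent part: summarizes the question+passage token sequence.
x = tf.keras.layers.LSTM(256)(x)
# Fully-connected part: discriminates correct from incorrect candidates.
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```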
Distributed Representations of Sentences and Documents
Quoc V. Le, Tomas Mikolov
https://arxiv.org/abs/1405.4053
(Submitted on 16 May 2014 (v1), last revised 22 May 2014 (this version, v2))
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
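The Paragraph Vector algorithm described in this abstract is available in off-the-shelf libraries. The sketch below uses gensim's Doc2Vec implementation of it on a toy corpus; the documents and parameter values are made up for illustration.

```python
# Train paragraph (document) vectors with gensim's Doc2Vec, which implements
# the Le & Mikolov Paragraph Vector algorithm, then infer a vector for new text.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tiny made-up corpus; in practice you would use many thousands of documents.
docs = [
    TaggedDocument(words=["the", "cat", "sat", "on", "the", "mat"], tags=["doc0"]),
    TaggedDocument(words=["dogs", "chase", "cats", "in", "the", "yard"], tags=["doc1"]),
    TaggedDocument(words=["paris", "is", "the", "capital", "of", "france"], tags=["doc2"]),
]

# Each document gets a dense vector trained to predict the words it contains.
model = Doc2Vec(docs, vector_size=100, window=5, min_count=1, epochs=40)

# Infer a fixed-length vector for an unseen piece of text and find the
# nearest training documents in the learned vector space.
new_vec = model.infer_vector(["a", "cat", "on", "a", "mat"])
print(model.dv.most_similar([new_vec]))
```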