{
"paper_id": "D18-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:47:53.840807Z"
},
"title": "Associative Multichannel Autoencoder for Multimodal Word Representation",
"authors": [
{
"first": "Shaonan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs. Inspired by the re-constructive and associative nature of human memory, we propose a novel associative multichannel autoencoder (AMA). Our model first learns the associations between textual and perceptual modalities, so as to predict the missing perceptual information of concepts. Then the textual and predicted perceptual representations are fused through reconstructing their original and associated embeddings. Using a gating mechanism our model assigns different weights to each modality according to the different concepts. Results on six benchmark concept similarity tests show that the proposed method significantly outperforms strong unimodal baselines and state-of-the-art multimodal models.",
"pdf_parse": {
"paper_id": "D18-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs. Inspired by the re-constructive and associative nature of human memory, we propose a novel associative multichannel autoencoder (AMA). Our model first learns the associations between textual and perceptual modalities, so as to predict the missing perceptual information of concepts. Then the textual and predicted perceptual representations are fused through reconstructing their original and associated embeddings. Using a gating mechanism our model assigns different weights to each modality according to the different concepts. Results on six benchmark concept similarity tests show that the proposed method significantly outperforms strong unimodal baselines and state-of-the-art multimodal models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Representing the meaning of a word is a prerequisite to solve many linguistic and non-linguistic problems, such as retrieving words with the same meaning, finding the most relevant images or sounds of a word and so on. In recent years we have seen a surge of interest in building computational models that represent word meanings from patterns of word co-occurrence in corpora (Turney and Pantel, 2010; Mikolov et al., 2013; Pennington et al., 2014; Clark, 2015; Wang et al., 2018b) . However, word meaning is also tied to the physical world. Many behavioral studies suggest that human semantic representation is grounded in the external environment and sensorimotor experience (Landau et al., 1998; Barsalou, 2008) . This has led to the development of multimodal representation models that utilize both textual and perceptual information (e.g., images, sounds).",
"cite_spans": [
{
"start": 377,
"end": 402,
"text": "(Turney and Pantel, 2010;",
"ref_id": "BIBREF38"
},
{
"start": 403,
"end": 424,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 425,
"end": 449,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 450,
"end": 462,
"text": "Clark, 2015;",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 482,
"text": "Wang et al., 2018b)",
"ref_id": "BIBREF42"
},
{
"start": 678,
"end": 699,
"text": "(Landau et al., 1998;",
"ref_id": "BIBREF23"
},
{
"start": 700,
"end": 715,
"text": "Barsalou, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As evidenced by a range of evaluations (Andrews et al., 2009; Bruni et al., 2014; Silberer et al., 2016) , multimodal models can learn better semantic word representations (a.k.a. embeddings) than text-based models. However, most existing models still have a number of drawbacks. First, they ignore the associations between modalities, and thus lack the ability of information transferring between modalities. Consequently they cannot handle words without perceptual information. Second, they integrate textual and perceptual representations with simple concatenation, which is insufficient to effectively fuse information from various modalities. Third, they typically treat the representations from different modalities equally. This is inconsistent with many psychological findings that information from different modalities contributes differently to the meaning of words (Paivio, 1990; Anderson et al., 2017) .",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Andrews et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 62,
"end": 81,
"text": "Bruni et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 82,
"end": 104,
"text": "Silberer et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 876,
"end": 890,
"text": "(Paivio, 1990;",
"ref_id": "BIBREF27"
},
{
"start": 891,
"end": 913,
"text": "Anderson et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we introduce the associative multichannel autoencoder (AMA), a novel multimodal word representation model that addresses all the above issues. Our model is built upon the stacked autoencoder (Bengio et al., 2007) to learn semantic representations by integrating textual and perceptual inputs. Inspired by the re-constructive and associative nature of human memory, we propose two associative memory modules as extensions. One is to learn associations between modalities (e.g., associations between textual and visual features), so as to reconstruct corresponding perceptual information of concepts. The other is to learn associations between related concepts, by reconstructing embeddings of both target words and their associated words. Furthermore, we propose a gating mechanism to learn the importance weights of different modalities to each word.",
"cite_spans": [
{
"start": 205,
"end": 226,
"text": "(Bengio et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, our main contributions in this work are two-fold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a novel associative multichannel autoencoder for multimodal word representation, which is capable of utilizing associations between different modalities and related concepts, and assigning different importance weights to each modality according to different words. Results on six standard benchmarks demonstrate that our methods outperform strong unimodal baselines and state-ofthe-art multimodal models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our model successfully integrates cognitive insights of the re-constructive and associative nature of semantic memory in humans, suggesting that rich information contained in human cognitive processing can be used to enhance NLP models. Furthermore, our results shed light on the fundamental questions of how to learn semantic representations, such as the plausibility of reconstructing perceptual information, associating related concepts and grounding word symbols to external environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Background and Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A large body of research evidences that human semantic memory is inherently re-constructive and associative (Collins and Loftus, 1975; Anderson and Bower, 2014) . That is, memories are not exact static copies of reality, but are rather reconstructed from their stimuli and associated concepts each time they are retrieved. For example, when we see a dog, not only the concept itself, but also the corresponding perceptual information and associated words will be jointly activated and reconstructed. Moreover, various theories state that the different sources of information contribute differently to the semantic representation of a concept (Wang et al., 2010; Ralph et al., 2017) . For instance, Dual Coding Theory (Hiscock, 1974) posits that concrete words are represented in the brain in terms of a perceptual and linguistic code, whereas abstract words are encoded only in the linguistic modality. In these respects, our method employs a retrieval and representation process analogous to that of humans, in which the retrieval of perceptual information and associated words is triggered and mediated by a linguistic input. The learned cross-modality mapping and reconstruction of associated words are inspired by the human mental model of associations between different modalities and related concepts. Moreover, word meaning is tied to both linguistic and physical environment, and relies differently on each modality in-puts (Wang et al., 2018a) . These are also captured by our multimodal representation model.",
"cite_spans": [
{
"start": 108,
"end": 134,
"text": "(Collins and Loftus, 1975;",
"ref_id": "BIBREF11"
},
{
"start": 135,
"end": 160,
"text": "Anderson and Bower, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 642,
"end": 661,
"text": "(Wang et al., 2010;",
"ref_id": "BIBREF40"
},
{
"start": 662,
"end": 681,
"text": "Ralph et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 717,
"end": 732,
"text": "(Hiscock, 1974)",
"ref_id": "BIBREF19"
},
{
"start": 1432,
"end": 1452,
"text": "(Wang et al., 2018a)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cognitive Grounding",
"sec_num": "2.1"
},
{
"text": "The existing multimodal representation models can be generally classified into two groups: 1) Jointly training models build multimodal representations with raw inputs of textual and perceptual resources. 2) Separate training models independently learn textual and perceptual representations and integrate them afterwards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Models",
"sec_num": "2.2"
},
{
"text": "A class of models extends Latent Dirichlet Allocation (Blei et al., 2003) to jointly learn topic distributions from words and perceptual units (Andrews et al., 2009; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013) . Recently introduced work is an extension of the Skip-gram model (Mikolov et al., 2013) . For instance, propose a corpus fusion method that inserts the perceptual features of concepts in the training corpus, which is then used to train the Skip-gram model. Lazaridou et al. (2015) propose MMSkip model, which injects visual information in the process of learning textual representations by adding a max-margin objective to minimize the distance between textual and visual vectors. Kiela and Clark (2015) adopt the MMSkip to learn multimodal vectors with auditory perceptual inputs.",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF7"
},
{
"start": 143,
"end": 165,
"text": "(Andrews et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 166,
"end": 192,
"text": "Silberer and Lapata, 2012;",
"ref_id": "BIBREF34"
},
{
"start": 193,
"end": 227,
"text": "Roller and Schulte im Walde, 2013)",
"ref_id": "BIBREF31"
},
{
"start": 294,
"end": 316,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 486,
"end": 509,
"text": "Lazaridou et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 710,
"end": 732,
"text": "Kiela and Clark (2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly training models",
"sec_num": "2.2.1"
},
{
"text": "These methods can implicitly propagate perceptual information to word representations and at the same time learn multimodal representations. However, they utilize raw text corpus in which words having perceptual information account for a small portion. This weakens the effect of introducing perceptual information and consequently leads to the slight improvement of textual vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly training models",
"sec_num": "2.2.1"
},
{
"text": "The simplest approach is concatenation which fuses textual and visual vectors by concatenating them. It has been proven to be effective in learning multimodal representations (Bruni et al., 2014; Collell et al., 2017) . Variations of this method employ transformation and dimension reduction on the concatenation result, including application of singular value decomposition (SVD) (Bruni et al., 2014) or canonical correlation analysis (CCA) . There is also work using deep learning methods to project different modality inputs into a common space, including restricted Boltzman machines (Ngiam et al., 2011; Srivastava and Salakhutdinov, 2012) , autoencoders (Silberer and Lapata, 2014; Silberer et al., 2016) , and recursive neural networks (Socher et al., 2013) . However, the above methods can only generate multimodal vectors of those words that have perceptual information, thus reducing multimodal vocabulary drastically. An empirically superior model addresses this problem by predicting missing perceptual information firstly. This includes who utilize the ridge regression method to learn a mapping matrix from textual modality to visual modality, and Collell et al. (2017) who employ a feedforward neural network to learn the mapping relation between textual vectors and visual vectors. Applying the mapping function on textual representations, they obtain the predicted visual vectors for all words in textual vocabulary. Then they calculate multimodal representations by concatenating textual and predicted visual vectors. However, the above methods learn separate mapping functions and fusion models, which are somewhat inelegant. In this paper we employ a neural-network mapping function to integrate these two processes into a unified multimodal models.",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Bruni et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 196,
"end": 217,
"text": "Collell et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 381,
"end": 401,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 588,
"end": 608,
"text": "(Ngiam et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 609,
"end": 644,
"text": "Srivastava and Salakhutdinov, 2012)",
"ref_id": "BIBREF37"
},
{
"start": 660,
"end": 687,
"text": "(Silberer and Lapata, 2014;",
"ref_id": "BIBREF35"
},
{
"start": 688,
"end": 710,
"text": "Silberer et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 743,
"end": 764,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF36"
},
{
"start": 1162,
"end": 1183,
"text": "Collell et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Separate training models",
"sec_num": "2.2.2"
},
{
"text": "According to this classification, our method falls into the second group. However, existing models ignore either the associative relations among modalities, associative relations among relative words, or the different contributions of each modality. This paper aims to integrate more perceptual information and the human-like associative memory into a unified multimodal model to learn better word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Separate training models",
"sec_num": "2.2.2"
},
{
"text": "We first provide a brief description of the basic multichannel autoencoder for learning multimodal word representations (Figure 1 ). Then we extend the model with two associative memory modules and a gating mechanism (Figure 2 ) in the next sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 129,
"text": "(Figure 1",
"ref_id": null
},
{
"start": 217,
"end": 226,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Associative Multichannel Autoencoder",
"sec_num": "3"
},
{
"text": "An autoencoder is an unsupervised neural network which is trained to reconstruct a given input from its latent representation (Bengio, 2009) . In this work, we propose a variant of autoencoder called multichannel autoencoder, which maps multimodal inputs into a common space. ...",
"cite_spans": [
{
"start": 126,
"end": 140,
"text": "(Bengio, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Mutichannel Autoencoder",
"sec_num": "3.1"
},
{
"text": "Figure 1: Architecture of the multichannel autoencoder with inputs of textual, visual and auditory sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Our model extends the unimodal and bimodal autoencoder (Ngiam et al., 2011; Silberer and Lapata, 2014) to induce semantic representations integrating textual, visual and auditory information. As shown in Figure 1 , our model first transforms input textual vector x t , visual vector x v and auditory vector x a to hidden representations:",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Ngiam et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 76,
"end": 102,
"text": "Silberer and Lapata, 2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = g(W t x t + b t ) h v = g(W v x v + b v ) h a = g(W a x a + b a ).",
"eq_num": "(1)"
}
],
"section": ".",
"sec_num": null
},
{
"text": "Then the hidden representations are concatenated together and mapped to a common space:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h m = g(W m [h t ; h v ; h a ] + b m ).",
"eq_num": "(2)"
}
],
"section": ".",
"sec_num": null
},
{
"text": "The model is trained to reconstruct the hidden representations of the three modalities from the multimodal representation h m :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[\u0125 t ;\u0125 v ;\u0125 a ] = g(W m h m + bm),",
"eq_num": "(3)"
}
],
"section": ".",
"sec_num": null
},
{
"text": "and finally to reconstruct the original embeddings of textual, visual and auditory inputs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x t = g(W t\u0125 t + bt) x v = g(W v\u0125 v + bv) x a = g(W a\u0125 a + b\u00e2),",
"eq_num": "(4)"
}
],
"section": ".",
"sec_num": null
},
{
"text": "where x̂_t, x̂_v, x̂_a are the reconstructions of the input vectors x_t, x_v, x_a, and ĥ_t, ĥ_v, ĥ_a are the reconstructions of the hidden representations h_t, h_v, h_a. The learning parameters {W_t, W_v, W_a, Ŵ_t, Ŵ_v, Ŵ_a, W_m, Ŵ_m} are weight matrices and {b_t, b_v, b_a, b_m, b̂_t, b̂_v, b̂_a, b̂_m} are bias vectors, [·;·] denotes vector concatenation, and g denotes the non-linear function, for which we use tanh(·).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Training a single-layer autoencoder corresponds to optimizing the learning parameters to minimize the overall loss between inputs and their reconstructions. Following (Vincent et al., 2010) , we use squared loss:",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Vincent et al., 2010)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 1 n i=1 (||x i t \u2212x i t || 2 + ||x i v \u2212x i v || 2 + ||x i a \u2212x i a || 2 ),",
"eq_num": "(5)"
}
],
"section": ".",
"sec_num": null
},
{
"text": "where i denotes the i th word, and the model parameters are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "\u03b8 1 = {W t , W v , W a , W m , W t , W v , W a , W m , b t , b v , b a , b m , bt, bv, b\u00e2, bm}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Autoencoders can be stacked to create deep networks. To enhance the quality of semantic representations, we employ a stacked multichannel autoencoder, which is composed of multiple hidden layers that are stacked together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
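To make the architecture in equations (1)-(5) concrete, the following is a minimal PyTorch sketch of the basic multichannel autoencoder. It is an illustration under assumed layer sizes and variable names, not the authors' released implementation (linked in Section 4.2); it follows the description above: per-modality tanh encoders, fusion into a common multimodal vector, and decoding back to the original inputs with a summed squared-error loss.

```python
import torch
import torch.nn as nn

class MultichannelAutoencoder(nn.Module):
    """Sketch of the basic multichannel autoencoder (Section 3.1); sizes are illustrative."""
    def __init__(self, dim_t=300, dim_v=128, dim_a=128, dim_h=250, dim_m=300):
        super().__init__()
        # Modality-specific encoders, equation (1)
        self.enc_t, self.enc_v, self.enc_a = nn.Linear(dim_t, dim_h), nn.Linear(dim_v, dim_h), nn.Linear(dim_a, dim_h)
        # Fusion into the common multimodal space, equation (2)
        self.fuse = nn.Linear(3 * dim_h, dim_m)
        # Decoding back to the hidden representations, equation (3)
        self.defuse = nn.Linear(dim_m, 3 * dim_h)
        # Modality-specific decoders, equation (4)
        self.dec_t, self.dec_v, self.dec_a = nn.Linear(dim_h, dim_t), nn.Linear(dim_h, dim_v), nn.Linear(dim_h, dim_a)

    def forward(self, x_t, x_v, x_a):
        g = torch.tanh
        h_t, h_v, h_a = g(self.enc_t(x_t)), g(self.enc_v(x_v)), g(self.enc_a(x_a))
        h_m = g(self.fuse(torch.cat([h_t, h_v, h_a], dim=-1)))   # multimodal representation
        ht_hat, hv_hat, ha_hat = g(self.defuse(h_m)).chunk(3, dim=-1)
        return h_m, g(self.dec_t(ht_hat)), g(self.dec_v(hv_hat)), g(self.dec_a(ha_hat))

def reconstruction_loss(model, x_t, x_v, x_a):
    """Summed squared error over the three modalities, equation (5)."""
    _, xt_hat, xv_hat, xa_hat = model(x_t, x_v, x_a)
    return ((x_t - xt_hat) ** 2).sum() + ((x_v - xv_hat) ** 2).sum() + ((x_a - xa_hat) ** 2).sum()
```

Stacking, as noted above, simply repeats the encoder and decoder layers.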
{
"text": "In reality, the words that have corresponding images or sounds are only a small subset of the textual vocabulary. To obtain the perceptual vectors for each word, we need associations between modalities (i.e., text-to-vision and text-to-audition mapping functions), that transform the textual vectors into visual and auditory ones. Previous methods learn separate mapping functions and fusion models, which are somewhat inelegant. Here we employ a neural-network mapping function to incorporate this modality association module into multimodal models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
{
"text": "Take the text-to-vision mapping as an example. Suppose that T ∈ R^(m_t × n_t) is the textual representation containing m_t words, and V ∈ R^(m_v × n_v) is the visual representation containing m_v (m_v ≪ m_t) words, where n_t and n_v are the dimensions of the textual and visual representations respectively. The textual and visual representations of the i-th concept are denoted as T_i and V_i respectively. Our goal is to learn a mapping function f : g(W_p T + b_p) from the textual to the visual space such that the prediction f(T_i) is similar to the actual visual vector V_i. The m_v words that have both textual and visual vectors are used to learn the mapping function. To train the model, we employ a squared loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 2 mv i=1 ||f (T i ) \u2212 V i || 2 ,",
"eq_num": "(6)"
}
],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
{
"text": "where the training parameters are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
{
"text": "\u03b8 2 = {W p , b p }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
{
"text": "We adopt the same method to learn the text-toaudition mapping function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Modality Associations",
"sec_num": "3.2"
},
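As a sketch of the text-to-vision mapping of equation (6), the snippet below trains a single nonlinear layer f(T) = g(W_p T + b_p) with a squared loss on the words that have both textual and visual vectors, then applies it to the full textual vocabulary. Variable names and the SGD settings are illustrative assumptions, not the paper's exact configuration; the text-to-audition mapping is obtained in the same way.

```python
import torch
import torch.nn as nn

def train_mapping(T_seen, V_seen, epochs=100, lr=1e-3):
    """Learn f: text -> vision on the m_v words that have visual vectors, equation (6).
    T_seen: (m_v, n_t) textual vectors; V_seen: (m_v, n_v) visual vectors."""
    f = nn.Sequential(nn.Linear(T_seen.size(1), V_seen.size(1)), nn.Tanh())  # f(T) = g(W_p T + b_p)
    opt = torch.optim.SGD(f.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        ((f(T_seen) - V_seen) ** 2).sum().backward()  # squared loss
        opt.step()
    return f

# Usage sketch: predicted visual vectors for every word in the textual vocabulary.
# f_vis = train_mapping(T_seen, V_seen); V_pred = f_vis(T_all)
```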
{
"text": "Word associations are a proxy for an aspect of human semantic memory that is not sufficiently captured by the usual training objectives of multimodal models. Therefore we assume that incorporating the objective of word associations helps to learn better semantic representations. To achieve this, we propose to reconstruct the vector of associated word from the corresponding multimodal semantic representation. Specifically, in the decoding process we change the equation 3to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[\u0125 t ,\u0125 v ,\u0125 a ,\u0125 asc ] = g(W m h m + bm),",
"eq_num": "(7)"
}
],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "and equation 4to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x̂_t = g(Ŵ_t ĥ_t + b̂_t), x̂_v = g(Ŵ_v ĥ_v + b̂_v), x̂_a = g(Ŵ_a ĥ_a + b̂_a), x̂_asc = g(W_asc ĥ_asc + b_asc).",
"eq_num": "(8)"
}
],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "To train the model, we add an additional objective function, which is the mean square error between the embeddings of the associated word y and their re-constructive embeddingsx asc :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 3 n i=1 ||y i \u2212x i asc || 2 ,",
"eq_num": "(9)"
}
],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "where y i and x i are the embeddings of a pair of associated words. Here, y is the concatenation of three unimodal vectors [y t ; y v ; y a ]. The parameters of word association module are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "\u03b8 3 = {W t , W v , W a , W m ,\u0174 m , W asc , b t , b v , b a , b m , bm, b asc }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
{
"text": "This additional criterion drives the learning towards a semantic representation capable of reconstructing its associated representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Word Associations",
"sec_num": "3.3"
},
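The word-association extension of equations (7)-(9) amounts to one more decoding target: the multimodal vector h_m must also reconstruct the concatenated embedding y = [y_t; y_v; y_a] of an associated word. The sketch below illustrates that extra squared-error term; for brevity it uses a separate decoding head rather than enlarging the shared decoder of equation (7), and all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

# Extra decoding path for the associated word (h_m -> h_hat_asc -> x_hat_asc);
# 300 is the assumed multimodal size and 556 = 300 + 128 + 128 the concatenated target size.
asc_head = nn.Sequential(nn.Linear(300, 250), nn.Tanh(),
                         nn.Linear(250, 556), nn.Tanh())

def association_loss(h_m, y_t, y_v, y_a):
    """Squared error between the reconstruction and the associated word's
    concatenated embedding y = [y_t; y_v; y_a], equation (9)."""
    y = torch.cat([y_t, y_v, y_a], dim=-1)
    return ((y - asc_head(h_m)) ** 2).sum()
```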
{
"text": "Considering that the meaning of each word has different dependencies on textual and perceptual information, we propose the sample-specific gate to assign different weights to each modality according to different words. The weight parameters are calculated by the following feed-forward neural networks: Finally, we compute element-wise multiplication of the textual, visual and auditory representations with their corresponding gates:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating a Gating Mechanism",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g t = g(W gt x t + b gt ) g v = g(W gv x v + b gv ) g a = g(W ga x a + b ga ),",
"eq_num": "(10)"
}
],
"section": "Integrating a Gating Mechanism",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x gt = x t g t x gv = x v g v x ga = x a g a .",
"eq_num": "(11)"
}
],
"section": "Integrating a Gating Mechanism",
"sec_num": "3.4"
},
{
"text": "The x gt , x gv and x ga can be seen as the weighted textual, visual and auditory representations. The parameters of our gating mechanism is trained together with that of the proposed model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating a Gating Mechanism",
"sec_num": "3.4"
},
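A sketch of the sample-specific gates in equations (10)-(11): each modality's input produces its own gate, which rescales that input element-wise before it enters the autoencoder. The per-dimension ("vector") variant is shown; a scalar ("value") gate would output a single weight per word. Using tanh for g follows the notation above, and the class and variable names are ours.

```python
import torch
import torch.nn as nn

class VectorGate(nn.Module):
    """Sample-specific vector gate for one modality, equations (10)-(11).
    A value gate would use an output dimension of 1 instead of dim."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)            # W_g, b_g

    def forward(self, x):
        gate = torch.tanh(self.proj(x))            # g_* = g(W_g x + b_g)
        return x * gate                            # x_g = x ⊙ g_* (element-wise)

# Usage sketch: weighted modality inputs fed to the multichannel autoencoder.
# gate_t, gate_v, gate_a = VectorGate(300), VectorGate(128), VectorGate(128)
# x_gt, x_gv, x_ga = gate_t(x_t), gate_v(x_v), gate_a(x_a)
```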
{
"text": "To train the AMA model, we use overall objective function of equation (5) + (6) + (9). In the training phase, model inputs are textual vectors, the corresponding visual vectors, auditory vectors, and association words (Figure 2 ). In the testing phase, we only need textual inputs to generate multimodal word representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 227,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.5"
},
{
"text": "Textual vectors. We use 300-dimensional GloVe vectors 1 which are trained on the Common Crawl corpus consisting of 840B tokens and a vocabulary of 2.2M words 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Visual vectors. Our source of visual vectors are collected from ImageNet (Russakovsky et al., 2015) which covers a total of 21,841 WordNet synsets (Fellbaum, 1998) that have 14,197,122 images. For our experiments, we delete words with fewer than 50 images or words not in the Glove vectors, and sample at most 100 images for each word. To generate a visual vector for each word, we use the forward pass of a pre-trained VGGnet model 3 and extract the hidden representation of the last layer as the feature vector. Then we use averaged feature vectors of the multiple images corresponding to the same word. Finally, we get 8,048 visual vectors of 128 dimensions.",
"cite_spans": [
{
"start": 73,
"end": 99,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 147,
"end": 163,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
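The per-word visual (and auditory) vector described above is simply the mean of the per-image (per-audio) CNN features; a one-line sketch, where the feature array is a hypothetical stand-in for the extracted VGG features:

```python
import numpy as np

def word_perceptual_vector(features):
    """Average the CNN feature vectors of the images (or audio clips) of one word.
    features: (n_items, 128) array of per-item features (assumed shape)."""
    return np.asarray(features).mean(axis=0)
```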
{
"text": "Auditory vectors. For auditory data, we gather audio files from Freesound 4 , in which we select words with more than 10 audio files and sample at most 50 sounds for one word. To extract auditory features, we use the VGG-net model which is pretrained on Audioset 5 . The final auditory vectors are averaged feature vectors of multiple audios of the same word, which contains 9,988 words of 128 dimensions 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Word associations. We use the word association data collected by (De Deyne et al., 2016) , in which each word pair is generated by at least one subject 7 . This dataset includes mostly words with similar meaning (e.g., occasionally & sometimes, adored & loved, supervisor & boss) and related words (e.g., eruption & volcano, cortex & brain, umbrella & rain) . We calculate the association score for each word pair (cue word + target word) as: the number of person who generated the word pair divided by the total number of people who were presented with the cue word. For training, we select pairs of associated words above a threshold of 0.15 and delete those that are not in the Glove vocabulary, which results in 7,674 word association data sets 8 . For the development set, we randomly sample 5,000 word association collections together with their association scores.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(De Deyne et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 305,
"end": 357,
"text": "eruption & volcano, cortex & brain, umbrella & rain)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
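The association score defined above (the number of respondents who produced the target for a cue, divided by the number of respondents shown that cue) can be computed directly; cue_counts and pair_counts below are hypothetical dictionaries standing in for the De Deyne et al. (2016) data.

```python
def association_scores(pair_counts, cue_counts, threshold=0.15):
    """pair_counts: {(cue, target): people who produced the pair};
    cue_counts: {cue: people who were shown the cue}.
    Returns the pairs whose association score exceeds the threshold."""
    scores = {(cue, tgt): n / cue_counts[cue] for (cue, tgt), n in pair_counts.items()}
    return {pair: s for pair, s in scores.items() if s > threshold}

# e.g. association_scores({("umbrella", "rain"): 40}, {"umbrella": 100}) -> {("umbrella", "rain"): 0.4}
```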
{
"text": "Our models are implemented with PyTorch (Paszke et al., 2017) , optimized with Adam (Kingma and Ba, 2014) . We set the initial learning rate to 0.05, and batch size to 64. We tune the number of layers over 1, 2, 3, the size of multimodal vectors over 100, 200, 300, and the size of each layer in textual channel over 300, 250, 200, 150, 100 and in visual/auditory channel over 128, 120, 90, 60 . We train the model for 500 epochs and select the best parameters on the development set. All models are trained for 3 times and the average results are reported in Table 1 .",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 84,
"end": 105,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 317,
"end": 393,
"text": "300, 250, 200, 150, 100 and in visual/auditory channel over 128, 120, 90, 60",
"ref_id": null
}
],
"ref_spans": [
{
"start": 560,
"end": 567,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2"
},
{
"text": "To test the effect of each module, we separately train the following models: multichannel autoencoder with modality association (AMA-M), with modality and word associations (AMA-MW), with modality and word associations plus value/vector gate (AMA-MW-Gval/vec).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2"
},
{
"text": "For AMA-M model, we initialize the text-tovision and text-to-audition mapping functions with pre-trained mapping matrices, which are parameters of one-layer feed-forward neural networks. The network uses input of the textual vectors, output of visual or auditory vectors, and is trained with SGD for 100 epochs. We initialize the network biases as zeros and network weights with He-initialisation (He et al., 2015) . The best parameters of AMA-M model are 2 hidden layers, with textual channel size of 300, 250 and 150, visual/auditory channel size of 128, 90, 60. For AMA-MW model, we use the best AMA-M model parameters as initialization, and train the model with word association data. The optimal parameter of association channel size is 300, 350, 556 (or 428 for bimodal inputs). For AMA-MW-Gval and AMA-MW-Gvec, we adopt the same training strategy as AMA-MW model. The code for training and evaluation can be found at: https://github.com/wangshaonan/ Associative-multichannel-autoencoder.",
"cite_spans": [
{
"start": 379,
"end": 414,
"text": "He-initialisation (He et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2"
},
{
"text": "We test the baseline and proposed models on six standard evaluation benchmarks, covering two different tasks: (i) Semantic relatedness: Men-3000 (Bruni et al., 2014) and Wordrel-252 (Agirre et al., 2009) ; (ii) Semantic similarity: Simlex-999 , Semsim-7576 (Silberer and Lapata, 2014) , Wordsim-203 and Simverb-3500 (Gerz et al., 2016) . All test sets contain a list of word pairs along with their subject ratings.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 203,
"text": "Wordrel-252 (Agirre et al., 2009)",
"ref_id": null
},
{
"start": 257,
"end": 284,
"text": "(Silberer and Lapata, 2014)",
"ref_id": "BIBREF35"
},
{
"start": 316,
"end": 335,
"text": "(Gerz et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks",
"sec_num": "5.1"
},
{
"text": "We employ Spearman's correlation method to evaluate the performance of our models. This method calculates the correlation coefficients between model predictions and subject ratings, in which the model prediction is the cosine similarity between semantic representations of two words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Tasks",
"sec_num": "5.1"
},
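Concretely, the evaluation computes the cosine similarity of the two word vectors in each pair and correlates those predictions with the subject ratings using Spearman's rho; a minimal sketch with SciPy (the embeddings dictionary is a hypothetical placeholder):

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(embeddings, word_pairs, human_ratings):
    """embeddings: {word: vector}; word_pairs: [(w1, w2), ...]; human_ratings aligned with word_pairs."""
    cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    predictions = [cos(embeddings[w1], embeddings[w2]) for w1, w2 in word_pairs]
    rho, _ = spearmanr(predictions, human_ratings)
    return rho
```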
{
"text": "Most of existing multimodal models only utilize textual and visual modalities. For fair comparison, we re-implement several representative systems with our own textual and visual vectors. The Concatenation (CONC) model (Kiela and Bottou, 2014) is simple concatenation of normalized textual and visual vectors. The Mapping (Collell et al., 2017) and Ridge models first learn a mapping matrix from textual to visual modality using feed-forward neural network and ridge regression respectively. After applying the mapping function on the textual vectors, they obtain the predicted visual vectors for all words in textual vocabulary. Then they concatenate the normalized textual and predicted visual vectors to get multimodal word representations. The SVD (Bruni et al., 2014) and CCA models first concatenate normalized textual and visual vectors, and then conduct SVD or CCA transformations on the concatenated vectors.",
"cite_spans": [
{
"start": 219,
"end": 243,
"text": "(Kiela and Bottou, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 322,
"end": 344,
"text": "(Collell et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 752,
"end": 772,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Multimodal Models",
"sec_num": "5.2"
},
{
"text": "For multimodal models with textual, visual and Table 1 : Spearman's correlations between model predictions and human ratings on six evaluation datasets. Here T, V, A denote textual, visual and auditory. TV denotes bimodal inputs of textual and visual. TVA denotes trimodal inputs of textual, visual and auditory. The bold scores are the best results per column in bimodal models and trimodal models respectively. For each test, ALL corresponds to the whole testing set, V/A to those word pairs for which we have textual&visual vectors in bimodal models or textual&visual&auditory in trimodal models, and ZS (zero-shot) denotes word pairs for which we have only textual vectors. The #inst. denotes the number of word pairs. auditory inputs, we implement CONC and Ridge as baseline models. The trimodal CONC model simply concatenates normalized textual, visual and auditory vectors. The trimodal Ridge model first learns text-to-vision and text-to-audition mapping matrices with ridge regression method. Then it applies the mapping functions on the textual vectors to get the predicted visual and auditory vectors. Finally, the normalized textual, predictedvisual and predicted-auditory vectors are concatenated to get the multimodal representations. All above baseline models are implemented with Sklearn 9 . Same as the proposed AMA model, 9 http://scikit-learn.org/ the hyper-parameters of baseline models are tuned on the development set using Spearman's correlation method. In Ridge model, the optimal regularization parameter is 0.6. The Mapping model is trained with SGD for maximum 100 epochs with early stopping, and the optimal learning rate is 0.001. The output dimension of SVD and CCA models are 300.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Multimodal Models",
"sec_num": "5.2"
},
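For reference, a sketch of the trimodal Ridge baseline with scikit-learn: ridge regressions predict visual and auditory vectors from the textual ones, and the L2-normalized textual and predicted perceptual vectors are concatenated. Array names are ours, and alpha=0.6 is the optimal regularization parameter reported above.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import normalize

def ridge_trimodal(T_all, T_vis, V_seen, T_aud, A_seen, alpha=0.6):
    """T_all: textual vectors for the whole vocabulary; (T_vis, V_seen) and (T_aud, A_seen)
    are the textual/perceptual pairs for words that have visual or auditory vectors."""
    vis = Ridge(alpha=alpha).fit(T_vis, V_seen)       # text -> vision mapping
    aud = Ridge(alpha=alpha).fit(T_aud, A_seen)       # text -> audition mapping
    parts = [normalize(T_all), normalize(vis.predict(T_all)), normalize(aud.predict(T_all))]
    return np.concatenate(parts, axis=1)              # multimodal representations
```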
{
"text": "MEN SIMLEX SEMSIM SIMVERB WORDSIMM WORDREL ALL V/A ZS ALL V/A ZS ALL V/A ZS ALLL V/A ZS ALL V/A ZS ALL V/A ZS Kiela & Bottou 2014 - 0.72 - - - - - - - - - - - - - - - Silberer & Lapata 2014 - - - - - - 0.70 - - - - - - - - - - - Lazaridou",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Multimodal Models",
"sec_num": "5.2"
},
{
"text": "As shown in Table 1 , we divide all models into six groups: (1) existing multimodal models (with textual and visual inputs) in which results are reprinted from Collell et al. (2017) . (2) Unimodal models with textual, (predicted) visual or (pre-dicted) auditory inputs. (3) Our re-implementation of baseline bimodal models with textual and visual inputs (TV). (4) Our AMA models with textual and visual inputs. (5) Our implementation of trimodal baseline models with textual, visual and auditory inputs (TVA). (6) Our AMA model with textual, visual and auditory inputs. Overall performance Our AMA models (in group 4 and 6) clearly outperform their baseline unimodal and multimodal models (in group 2, 3 and 5). We use Wilcoxon signed-rank test to check if significant difference exists between two models. Results show that our multimodal models perform significantly better (p < 0.05) than all baseline models.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "Collell et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
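The significance check mentioned above is a Wilcoxon signed-rank test over paired evaluation scores of two models; a small sketch with SciPy (the score lists are hypothetical placeholders):

```python
from scipy.stats import wilcoxon

def significantly_better(scores_a, scores_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test on two models' evaluation scores."""
    _, p_value = wilcoxon(scores_a, scores_b)
    return p_value < alpha
```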
{
"text": "As shown clearly, our bimodal and trimodal AMA models achieve better performance than baselines in both V/A (visual or auditory, the testing data that have associated visual or auditory vectors) and ZS (zero-shot, the testing data that do not have associated visual or auditory vectors) region. In other words, our models outperform baseline models on words with or without perceptual information. The good results in ZS region also indicate that our models have good generalization capacity. Unimodal baselines As shown in group 2, the Glove vectors are much better than CNNvisual and CNN-auditory vectors, in which CNNauditory has the worst performance on capturing concept similarities. Comparing with visual and auditory vectors, the predicted visual and auditory vectors achieve much better performance. This indicates that the predicted vectors contain richer information than purely perceptual representations and are more useful for building semantic representations. Multimodal baselines For bimodal models (group 3), the CONC model that combines Glove and visual vectors performs worse than Glove on four out of six datasets, suggesting that simple concatenation might be suboptimal. The Mapping and Ridge models, which combine Glove and predicted visual vectors, improve over Glove on five out of six datasets in ALL regions. This reinforces the conclusion that the predicted visual vectors are more useful in building multimodal models. The SVD model gets similar results as Ridge model. The CCA model maps different modality inputs into a common space, achieving better results on some datasets and worse results on the others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
{
"text": "The improvement on three benchmark tests shows the potential of mapping multimodal inputs into a common space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
{
"text": "The above results can also be observed in the trimodal CONC and Ridge models (group 5). Overall, the trimodal models, which utilize additional auditory inputs, get slightly worse performance than bimodal models. This is partly caused by the fusion method of concatenation. Note that our proposed AMA models are more effective with trimodal inputs as shown in group 6. Our multimodal models With either bimodal or trimodal inputs, the proposed AMA-M model outperforms all baseline models by a large margin. Specifically our AMA-M model achieves an relative improvement of 4.1% on average (4.5% with trimodal inputs) over the state-of-the-art Ridge model. This illustrates that our AMA models can productively combine textual and perceptual representations. Moreover, our AMA-MW model, which employs word associations, achieves an average improvement of 1.5% (2.7% with trimodal inputs) over the AMA-M model. That is to say, the representation ability of multimodal models can be clearly improved by learning associative relations between words. Furthermore, the AMA-MW-Gval model improves the AMA-MW model by 1.3% (0.3% with trimodal inputs) on average, illustrating that the gating mechanism (especially the value gate) helps to learn better semantic representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
{
"text": "In addition, we explore the effect of word association data size. We find that the decrease of association data has no discernible effect on model performance: when using 100%, 80%, 60%, 40%, 20% of the data, the average results are 0.6479, 0.6409, 0.6361, 0.6430, 0.6458 in bimodal model. The same trend is observed in trimodal models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.3"
},
{
"text": "We have proposed a cognitively-inspired multimodal model -associative multichannel autoencoder -which utilizes the associations between modalities and related words to learn multimodal word representations. Performance improvement on six benchmark tests shows that our models can efficiently fuse different modality inputs and build better semantic representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Ultimately, the present paper sheds light on the fundamental questions of how to learn word meanings, such as the plausibility of reconstructing per-ceptual information, associating related concepts and grounding word symbols to external environment. We believe that one of the promising future directions is to learn from how humans learn and store semantic word representations to build a more effective computational model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "http://nlp.stanford.edu/projects/ glove 2 We have tried skip-gram vectors and get the same conclusions.3 http://www.vlfeat.org/matconvnet/ 4 http://www.freesound.org/ 5 https://research.google.com/audioset 6 We build auditory vectors with the released code at: https://github.com/tensorflow/models/ tree/master/research/audioset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset can be found at: https:// simondedeyne.me/data.8 We have done experiments with Synonyms (which are extracted from WordNet and PPDB corpora), and the results are not as good as using word associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work descried in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002103 and also supported by the Natural Science Foundation of China under Grant No. 61333018. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In NAACL, pages 19-27.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2017,
"venue": "TACL",
"volume": "5",
"issue": "",
"pages": "17--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually grounded and tex- tual semantic models differentially decode brain ac- tivity associated with concrete and abstract nouns. TACL, 5:17-30.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Human associative memory",
"authors": [
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Gordon H",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bower",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R Anderson and Gordon H Bower. 2014. Human associative memory. Psychology press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Integrating experiential and distributional data to learn semantic representations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological review",
"volume": "116",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Andrews, Gabriella Vigliocco, and David Vin- son. 2009. Integrating experiential and distribu- tional data to learn semantic representations. Psy- chological review, 116(3):463.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Grounded cognition",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2008,
"venue": "Annu. Rev. Psychol",
"volume": "59",
"issue": "",
"pages": "617--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W Barsalou. 2008. Grounded cognition. Annu. Rev. Psychol., 59:617-645.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning deep architectures for ai",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Learning",
"volume": "2",
"issue": "",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and trends in Machine Learning, 2(1):1-127.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Greedy layer-wise training of deep networks",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Lamblin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Popovici",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "153--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in neural informa- tion processing systems, pages 153-160.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine Learning research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3(Jan):993-1022.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res.(JAIR), 49(2014):1-47.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Vector space models of lexical meaning. Handbook of Contemporary Semantic Theory, The",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "493--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark. 2015. Vector space models of lexi- cal meaning. Handbook of Contemporary Semantic Theory, The, pages 493-522.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Imagined visual representations as multimodal embeddings",
"authors": [
{
"first": "Guillem",
"middle": [],
"last": "Collell",
"suffix": ""
},
{
"first": "Teddy",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillem Collell, Teddy Zhang, and Marie-Francine Moens. 2017. Imagined visual representations as multimodal embeddings. In AAAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A spreading-activation theory of semantic processing",
"authors": [
{
"first": "Allan",
"middle": [
"M"
],
"last": "Collins",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"F"
],
"last": "Loftus",
"suffix": ""
}
],
"year": 1975,
"venue": "Psychological Review",
"volume": "82",
"issue": "",
"pages": "407--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan M. Collins and Elizabeth F. Loftus. 1975. A spreading-activation theory of semantic processing. Psychological Review, 82:407-428.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Predicting human similarity judgments with distributional models: The value of word associations",
"authors": [
{
"first": "Amy",
"middle": [],
"last": "Simon De Deyne",
"suffix": ""
},
{
"first": "Daniel J",
"middle": [],
"last": "Perfors",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Navarro",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "1861--1870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon De Deyne, Amy Perfors, and Daniel J Navarro. 2016. Predicting human similarity judgments with distributional models: The value of word associa- tions. COLING, pages 1861-1870.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simverb-3500: A largescale evaluation set of verb similarity",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.00869"
]
},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A large- scale evaluation set of verb similarity. arXiv preprint arXiv:1608.00869.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "1026--1034",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpass- ing human-level performance on imagenet classifi- cation. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning abstract concept embeddings from multi-modal data: Since you probably can't see what i mean",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "255--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill and Anna Korhonen. 2014. Learning ab- stract concept embeddings from multi-modal data: Since you probably can't see what i mean. In EMNLP, pages 255-265.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multi-modal models for concrete and abstract concept meaning",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "285--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Multi-modal models for concrete and abstract con- cept meaning. Transactions of the Association for Computational Linguistics, 2:285-296.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Imagery and verbal processes",
"authors": [
{
"first": "Merrill",
"middle": [],
"last": "Hiscock",
"suffix": ""
}
],
"year": 1974,
"venue": "Psyccritiques",
"volume": "19",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Merrill Hiscock. 1974. Imagery and verbal processes. Psyccritiques, 19(6):487.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and L\u00e9on Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In EMNLP, pages 36-45.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-and cross-modal semantics beyond vision: Grounding in auditory perception",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2461--2470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Stephen Clark. 2015. Multi-and cross-modal semantics beyond vision: Grounding in auditory perception. In EMNLP, pages 2461-2470.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Object perception and object naming in early development",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Landau",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1998,
"venue": "Trends in cognitive sciences",
"volume": "2",
"issue": "1",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Landau, Linda Smith, and Susan Jones. 1998. Object perception and object naming in early devel- opment. Trends in cognitive sciences, 2(1):19-24.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combining language and vision with a multimodal skip-gram model",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nghia The",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "153--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2015. Combining language and vision with a multimodal skip-gram model. ACL, pages 153-163.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multimodal deep learning",
"authors": [
{
"first": "Jiquan",
"middle": [],
"last": "Ngiam",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Mingyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Juhan",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "689--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. 2011. Mul- timodal deep learning. In ICML, pages 689-696.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Mental representations: A dual coding approach",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Paivio",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Paivio. 1990. Mental representations: A dual coding approach. Oxford University Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Soumith Chintala, and Gregory Chanan",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, and Gre- gory Chanan. 2017. Pytorch.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The neural and computational bases of semantic cognition",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Matthew A Lambon Ralph",
"suffix": ""
},
{
"first": "Karalyn",
"middle": [],
"last": "Jefferies",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"T"
],
"last": "Patterson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rogers",
"suffix": ""
}
],
"year": 2017,
"venue": "Nature Reviews Neuroscience",
"volume": "18",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew A Lambon Ralph, Elizabeth Jefferies, Kara- lyn Patterson, and Timothy T Rogers. 2017. The neural and computational bases of semantic cogni- tion. Nature Reviews Neuroscience, 18(1):42.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A multimodal lda model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1146--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal lda model integrating textual, cogni- tive and visual modalities. In EMNLP, pages 1146- 1157.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computer Vision",
"volume": "115",
"issue": "3",
"pages": "211--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition chal- lenge. International Journal of Computer Vision, 115(3):211-252.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Visually grounded meaning representations. IEEE transactions on pattern analysis and machine intelligence",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Ferrari",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2016. Visually grounded meaning representations. IEEE transactions on pattern analysis and machine intelligence.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Grounded models of semantic representation",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1423--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In EMNLP, pages 1423-1433.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning grounded meaning representations with autoencoders",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "721--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2014. Learn- ing grounded meaning representations with autoen- coders. In ACL, pages 721-732.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Zero-shot learning through cross-modal transfer",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Milind",
"middle": [],
"last": "Ganjoo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "935--943",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Milind Ganjoo, Christopher D Man- ning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neu- ral information processing systems, pages 935-943.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Multimodal learning with deep boltzmann machines",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ruslan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2222--2230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava and Ruslan R Salakhutdinov. 2012. Multimodal learning with deep boltzmann ma- chines. In Advances in neural information process- ing systems, pages 2222-2230.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37:141-188.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Lajoie",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "3371--3408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Conder",
"suffix": ""
},
{
"first": "David",
"middle": [
"N"
],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [
"V"
],
"last": "Shinkareva",
"suffix": ""
}
],
"year": 2010,
"venue": "Human brain mapping",
"volume": "31",
"issue": "10",
"pages": "1459--1468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Wang, Julie A. Conder, David N. Blitzer, and Svet- lana. V. Shinkareva. 2010. Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Human brain mapping, 31(10):1459-1468.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Investigating inner properties of multimodal representation and semantic compositionality with brain-based componential semantics",
"authors": [
{
"first": "Shaonan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "5964--5972",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaonan Wang, Jiajun Zhang, Nan Lin, and Chengqing Zong. 2018a. Investigating inner properties of mul- timodal representation and semantic composition- ality with brain-based componential semantics. In AAAI, pages 5964-5972.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning multimodal word representation via dynamic fusion methods",
"authors": [
{
"first": "Shaonan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "5973--5980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2018b. Learning multimodal word representation via dynamic fusion methods. In AAAI, pages 5973- 5980.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Architecture of the proposed associative multichannel autoencoder."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "0.757 0.656 0.372 0.458 0.347 0.702 0.700 0.709 0.212 0.194 0.211 0.596 0.621 0.557 0.412 0.604 00.555 0.597 0.270 0.251 0.296 0.547 0.531 0.559 0.157 0.074 0.227 0.515 0.496 0.544 0.388 0.400 00.815 0.782 0.408 0.407 0.410 0.769 0.771 0.709 0.282 0.358 0.272 0.781 0.696 0.768 0.650 0.751 0.594 Ridge (TV) 0.806 0.816 0.786 0.418 0.405 0.429 0.764 0.766 0.756 0.287 0.329 0.285 0.786 0.689 0.771 0.660 0.765 0"
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "bt, bv, b\u00e2, b m , bm} are bias vectors. Here [\u2022 ;"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Association word</td></tr><tr><td>word2vec Multimodal representations ... ... ... ... ... ...</td><td>... ... ... ... ... ...</td><td>sound2vec ... ... ... ... ... ...</td><td>... ... ... representations Multimodal ... ... ...</td><td>... ... ...</td><td>... ... ... ...</td><td>... ... ...</td><td>... ... ...</td><td>Gate ... ... ...</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Text-image mapping</td><td/><td/></tr><tr><td>There's nothing that In case you cheers you need, we've up quite as collected the fast as a cute cutest small dog doing dog breeds something to lift your mood. peculiar.</td><td/><td/><td>In case you need, we've collected the cutest small dog breeds to lift your mood. There's nothing that cheers you up quite as fast as a cute dog doing something peculiar.</td><td colspan=\"3\">dog Text-sound mapping</td><td colspan=\"2\">Sample-specific gate ...</td></tr><tr><td/><td>dog</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "The set of visual representations along with their corresponding textual representations image2vec ..."
}
}
}
}