{
"paper_id": "K19-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:46.758902Z"
},
"title": "Compositional Generalization in Image Captioning",
"authors": [
{
"first": "Mitja",
"middle": [],
"last": "Nikolaus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of T\u00fcbingen",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mostafa",
"middle": [],
"last": "Abdou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Matthew",
"middle": [],
"last": "Lamm",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Rahul",
"middle": [],
"last": "Aralikatte",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts. We study the problem of compositional generalization, which measures how well a model composes unseen combinations of concepts when describing images. Stateof-the-art image captioning models show poor generalization performance on this task. We propose a multi-task model to address the poor performance, that combines caption generation and image-sentence ranking, and uses a decoding mechanism that re-ranks the captions according their similarity to the image. This model is substantially better at generalizing to unseen combinations of concepts compared to state-of-the-art captioning models.",
"pdf_parse": {
"paper_id": "K19-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts. We study the problem of compositional generalization, which measures how well a model composes unseen combinations of concepts when describing images. Stateof-the-art image captioning models show poor generalization performance on this task. We propose a multi-task model to address the poor performance, that combines caption generation and image-sentence ranking, and uses a decoding mechanism that re-ranks the captions according their similarity to the image. This model is substantially better at generalizing to unseen combinations of concepts compared to state-of-the-art captioning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When describing scenes, humans are able to almost arbitrarily combine concepts, producing novel combinations that they have not previously observed (Matthei, 1982; Piantadosi and Aslin, 2016) . Imagine encountering a purple-colored dog in your town, for instance. Given that you understand the concepts PURPLE and DOG, you are able to compose them together to describe the dog in front of you, despite never having seen one before.",
"cite_spans": [
{
"start": 148,
"end": 163,
"text": "(Matthei, 1982;",
"ref_id": "BIBREF41"
},
{
"start": 164,
"end": 191,
"text": "Piantadosi and Aslin, 2016)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Image captioning models attempt to automatically describe scenes in natural language (Bernardi et al., 2016) . Most recent approaches generate captions using a recurrent neural network, where the image is represented by features extracted from a Convolutional Neural Network (CNN). Although state-of-the-art models show good performance on challenge datasets, as measured by text-similarity metrics, their performance * The work was carried out during a visit to the University of Copenhagen.",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "(Bernardi et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "White things Dogs Figure 1 : We evaluate whether image captioning models are able to compositionally generalize to unseen combinations of adjectives, nouns, and verbs by forcing paradigmatic gaps in the training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Evaluation",
"sec_num": null
},
{
"text": "as measured by human judges is low when compared to human-written captions (Vinyals et al., 2017, Section 5.3.2) . It is widely believed that systematic compositionality is a key property of human language that is essential for making generalizations from limited data (Montague, 1974; Partee, 1984; . In this work, we investigate to what extent image captioning models are capable of compositional language understanding. We explore whether these models can compositionally generalize to unseen adjective-noun and nounverb composition pairs, in which the constituents of the pair are observed during training but the combination is not, thus introducing a paradigmatic gap in the training data, as illustrated in Figure 1 . We define new training and evaluation splits of the COCO dataset (Chen et al., 2015) by holding out the data associated with the compositional pairs from the training set. These splits are used to evaluate how well models generalize to describing images that depict the held out pairings.",
"cite_spans": [
{
"start": 75,
"end": 112,
"text": "(Vinyals et al., 2017, Section 5.3.2)",
"ref_id": null
},
{
"start": 269,
"end": 285,
"text": "(Montague, 1974;",
"ref_id": "BIBREF46"
},
{
"start": 286,
"end": 299,
"text": "Partee, 1984;",
"ref_id": null
},
{
"start": 714,
"end": 722,
"text": "Figure 1",
"ref_id": null
},
{
"start": 790,
"end": 809,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Evaluation",
"sec_num": null
},
{
"text": "We find that state-of-the-art captioning models, such as Show, Attend and Tell , and Bottom-Up and Top-Down Attention (Ander-son et al., 2018) , have poor compositional generalization performance. We also observe that the inability to generalize of these models is primarily due to the language generation component, which relies too heavily on the distributional characteristics of the dataset and assigns low probabilities to unseen combinations of concepts in the evaluation data. This supports the findings from concurrent work (Holtzman et al., 2019) which studies the challenges in decoding from language models trained with a maximum likelihood objective.",
"cite_spans": [
{
"start": 118,
"end": 142,
"text": "(Ander-son et al., 2018)",
"ref_id": null
},
{
"start": 532,
"end": 555,
"text": "(Holtzman et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Evaluation",
"sec_num": null
},
{
"text": "To address the generalization problem, we propose a multi-task model that jointly learns image captioning and image-sentence ranking. For caption generation, our model benefits from an additional step, where the set of captions generated by the model can be re-ranked using the jointlytrained image-sentence ranking component. We find that the ranking component is less affected by the likelihood of n-gram sequences in the training data, and that it is able to assign a higher ranking to more informative captions which contain unseen combinations of concepts. These findings are reflected by improved compositional generalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Evaluation",
"sec_num": null
},
{
"text": "The source code is publicly available on GitHub. 1 2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Evaluation",
"sec_num": null
},
{
"text": "Image Caption Generation models are usually end-to-end differentiable encoder-decoder models trained with a maximum likelihood objective. Given an image encoding that is extracted from a convolutional neural network (CNN), an RNNbased decoder generates a sequence of words that form the corresponding caption (Vinyals et al., 2015, inter-alia) . This approach has been improved by applying top-down and bottom-up attention mechanisms . These models show increasingly good performance on benchmark datasets, e.g. COCO, and in some cases reportedly surpass human-level performance as measured by n-gram based evaluation metrics (Bernardi et al., 2016) . However, recent work has revealed several caveats. Firstly, when using human judgments for evaluation, the automatically generated captions are still considered worse in most cases Vinyals et al., 2017) . Furthermore, when evaluating out-of-domain images or images with unseen concepts, it has been shown that the generated captions are often of poor quality (Mao et al., 2015; Vinyals et al., 2017) . Attempts have been made to address the latter issue by leveraging unpaired text data or pre-trained language models (Hendricks et al., 2016; Agrawal et al., 2018) .",
"cite_spans": [
{
"start": 309,
"end": 343,
"text": "(Vinyals et al., 2015, inter-alia)",
"ref_id": null
},
{
"start": 626,
"end": 649,
"text": "(Bernardi et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 833,
"end": 854,
"text": "Vinyals et al., 2017)",
"ref_id": "BIBREF62"
},
{
"start": 1011,
"end": 1029,
"text": "(Mao et al., 2015;",
"ref_id": "BIBREF40"
},
{
"start": 1030,
"end": 1051,
"text": "Vinyals et al., 2017)",
"ref_id": "BIBREF62"
},
{
"start": 1170,
"end": 1194,
"text": "(Hendricks et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 1195,
"end": 1216,
"text": "Agrawal et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation and Retrieval",
"sec_num": "2.1"
},
{
"text": "Image-Sentence Ranking is closely related to image captioning. Here, the problem of language generation is circumvented and models are instead trained to rank a set of captions given an image, and vice-versa (Hodosh et al., 2013) . A common approach is to learn a visual-semantic embedding for the captions and images, and to rank the images or captions based on similarity in the joint embedding space. State-of-the-art models extract image features from CNNs and use gated RNNs to represent captions, both of which are projected into a joint space using a linear transformation (Frome et al., 2013; Karpathy and Fei-Fei, 2015; Vendrov et al., 2016; Faghri et al., 2018) .",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Hodosh et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 580,
"end": 600,
"text": "(Frome et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 601,
"end": 628,
"text": "Karpathy and Fei-Fei, 2015;",
"ref_id": "BIBREF31"
},
{
"start": 629,
"end": 650,
"text": "Vendrov et al., 2016;",
"ref_id": "BIBREF59"
},
{
"start": 651,
"end": 671,
"text": "Faghri et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation and Retrieval",
"sec_num": "2.1"
},
{
"text": "Investigations of compositionality in vector space models date back to early debates in the cognitive science (Fodor and Pylyshyn, 1988; Fodor and Lepore, 2002) and connectionist literature (Mc-Clelland et al., 1986; Smolensky, 1988) regarding the ability of connectionist systems to compose simple constituents into complex structures. In the NLP literature, numerous approaches that (loosely) follow the linguistic principle of compositionality 2 have been proposed (Mitchell and Lapata, 2008 ; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011) . More recently, it has become standard to employ representations which are learned using neural network architectures. The extent to which these models behave compositionally is an open topic of research (Lake and Baroni, 2017; Dasgupta et al., 2018; Ettinger et al., 2018; McCoy et al., 2018) that closely relates to the focus of the present paper.",
"cite_spans": [
{
"start": 110,
"end": 136,
"text": "(Fodor and Pylyshyn, 1988;",
"ref_id": "BIBREF23"
},
{
"start": 137,
"end": 160,
"text": "Fodor and Lepore, 2002)",
"ref_id": "BIBREF22"
},
{
"start": 190,
"end": 216,
"text": "(Mc-Clelland et al., 1986;",
"ref_id": null
},
{
"start": 217,
"end": 233,
"text": "Smolensky, 1988)",
"ref_id": "BIBREF56"
},
{
"start": 468,
"end": 494,
"text": "(Mitchell and Lapata, 2008",
"ref_id": "BIBREF45"
},
{
"start": 497,
"end": 525,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF6"
},
{
"start": 526,
"end": 559,
"text": "Grefenstette and Sadrzadeh, 2011)",
"ref_id": "BIBREF25"
},
{
"start": 775,
"end": 788,
"text": "Baroni, 2017;",
"ref_id": "BIBREF35"
},
{
"start": 789,
"end": 811,
"text": "Dasgupta et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 812,
"end": 834,
"text": "Ettinger et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 835,
"end": 854,
"text": "McCoy et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Models of Language",
"sec_num": "2.2"
},
{
"text": "Compositional generalization in image captioning has received limited attention in the literature. In Atzmon et al. (2016) , the captions in the COCO dataset are replaced by subject-relationobject triplets, circumventing the problem of language generation, and replacing it with structured triplet prediction. Other work explores generalization to unseen combinations of visual concepts as a classification task (Misra et al., 2017; Kato et al., 2018) . Lu et al. (2018) is more closely related to our work; they evaluate captioning models on describing images with unseen noun-noun pairs.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "Atzmon et al. (2016)",
"ref_id": "BIBREF4"
},
{
"start": 412,
"end": 432,
"text": "(Misra et al., 2017;",
"ref_id": "BIBREF44"
},
{
"start": 433,
"end": 451,
"text": "Kato et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 454,
"end": 470,
"text": "Lu et al. (2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Models of Language",
"sec_num": "2.2"
},
{
"text": "In this paper, we study compositional generalization in image captioning with combinations of multiple classes of nouns, adjectives, and verbs. 3 We find that state-of-the-art models fail to generalize to unseen combinations, and present a multitask model that improves generalization by combining image captioning and image-sentence ranking (Faghri et al., 2018) . In contrast to other models that use a re-ranking step 4 , our model is trained jointly on both tasks and does not use any additional features or external resources. The ranking model is only used to optimize the global semantics of the generated captions with respect to the image.",
"cite_spans": [
{
"start": 144,
"end": 145,
"text": "3",
"ref_id": null
},
{
"start": 342,
"end": 363,
"text": "(Faghri et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Models of Language",
"sec_num": "2.2"
},
{
"text": "In this section we define the compositional captioning task, which is designed to evaluate how well a model generalizes to captioning images that should be described using previously unseen combinations of concepts, when the individual concepts have been observed in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "We assume a dataset of captioned images D, in which N images are described by K captions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "D := {\ufffdi 1 , s 1 1 , ..., s 1 K \ufffd, ..., \ufffdi N , s N 1 , ..., s N K \ufffd}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "We also assume the existence of a concept pair {c i , c j } that represents the concepts of interest in the evaluation. In order to evaluate the compositional generalization of a model for that concept pair, we first define a training set by identifying and removing instances where the captions of an image contain the pair of concepts, creating a paradigmatic gap in the original training set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "D train := {\ufffdi n , s n k \ufffd} s.t. \u2200 N n=1 \ufffd k : c i \u2208 s n k \u2227 c j \u2208 s n k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "Note that the concepts c i and c j can still be independently observed in the captions of an image of this set, but not together in the same caption. We also define validation and evaluation sets D val and D eval that only contain instances where at least one of the captions of an image contains the pair of concepts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "D val/eval := {\ufffdi n , s n k \ufffd} s.t. \u2200 N n=1 \u2203 k : c i \u2208 s n k \u2227 c j \u2208 s n k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "A model is trained on the D train training set until it converges, as measured on the D val validation set. The compositional generalization of the model is measured by the proportion of evaluation set captions which successfully combined a held out pair of concepts {c i , c j } in D eval .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
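{
"text": "As an illustration (not part of the original paper, and not the authors' released code), the construction of D_train and D_val/eval for a single held-out concept pair can be sketched in Python as follows, assuming a hypothetical captions mapping from image ids to tokenized captions and hand-defined synonym sets for the two concepts:\n\ndef contains_pair(tokens, syns_i, syns_j):\n    # a single caption mentions both concepts (any synonym of each)\n    words = set(tokens)\n    return bool(words & syns_i) and bool(words & syns_j)\n\ndef split_for_pair(captions, syns_i, syns_j):\n    train, heldout = {}, {}\n    for image_id, caps in captions.items():\n        if any(contains_pair(c, syns_i, syns_j) for c in caps):\n            heldout[image_id] = caps  # goes to D_val / D_eval\n        else:\n            train[image_id] = caps  # the pair never co-occurs in a caption\n    return train, heldout\n\n# toy example; real synonym sets and lemmatization are described in Section 3.2\ntrain, heldout = split_for_pair(\n    captions={'img1': [['a', 'white', 'dog'], ['a', 'dog']]},\n    syns_i={'white'},\n    syns_j={'dog', 'dogs', 'puppy'},\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},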
{
"text": "We select pairs of concepts that are likely to be represented in an image recognition model. In particular, we identify adjectives, nouns, and verbs in the English COCO captions dataset (Chen et al., 2015) that are suitable for testing compositional generalization. We define concepts as sets of synonyms for each word, to account for the variation in how the concept can be expressed in a caption. For each noun, we use the synonyms defined in Lu et al. (2018) . For the verbs and adjectives, we use manually defined synonyms (see Appendix D). From these concepts, we select adjective-noun and noun-verb pairs for the evaluation. To identify concept pair candidates, we use StanfordNLP (Qi et al., 2018) to label and lemmatize the nouns, adjectives, and verbs in the captions, and to check if the adjective or verb is connected to the respective noun in the dependency parse.",
"cite_spans": [
{
"start": 186,
"end": 205,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 445,
"end": 461,
"text": "Lu et al. (2018)",
"ref_id": "BIBREF39"
},
{
"start": 687,
"end": 704,
"text": "(Qi et al., 2018)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of Concept Pairs",
"sec_num": "3.2"
},
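{
"text": "A hedged sketch of this dependency-based check, written here with the stanza library (the successor to the StanfordNLP toolkit of Qi et al., 2018) rather than the exact pipeline used in the paper; the synonym sets are assumed to be given:\n\nimport stanza\n\n# stanza.download('en')  # required once before the first run\nnlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse')\n\ndef caption_contains_pair(caption, noun_syns, modifier_syns):\n    # the pair only counts if the adjective/verb and the noun are directly\n    # connected in the dependency parse (in either direction)\n    doc = nlp(caption)\n    for sent in doc.sentences:\n        for word in sent.words:\n            if word.head == 0:\n                continue  # skip the root\n            head = sent.words[word.head - 1]\n            if word.lemma in modifier_syns and head.lemma in noun_syns:\n                return True\n            if word.lemma in noun_syns and head.lemma in modifier_syns:\n                return True\n    return False\n\nprint(caption_contains_pair('a small white dog is eating a cake', {'dog'}, {'eat'}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of Concept Pairs",
"sec_num": "3.2"
},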
{
"text": "We consider the 80 COCO object categories (Lin et al., 2014) and additionally divide the \"person\" category into \"man\", \"woman\" and \"child\". It has been shown that models can detect and classify these categories with high confidence (He et al., 2016) . We further group the nouns under consideration into animate and inanimate objects. We use the following nouns in the evaluation: woman, man, dog, cat, horse, bird, child, bus, plane, truck, table.",
"cite_spans": [
{
"start": 42,
"end": 60,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 232,
"end": 249,
"text": "(He et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nouns:",
"sec_num": null
},
{
"text": "Adjectives: We analyze the distribution of the adjectives in the dataset (see Figure 4 in Appendix A). The captions most frequently contain descriptions of the color, size, age, texture or quantity of objects in the images. We consider the color and size adjectives in this evaluation. It has been shown that CNNs can accurately classify the color of objects (Anderson et al., 2016) ; and we assume that CNNs can encode the size of objects because they can predict bounding boxes, even for small objects (Bai et al., 2018) . In the evaluation, we use the following adjectives: big, small, black, red, brown, white, blue.",
"cite_spans": [
{
"start": 359,
"end": 382,
"text": "(Anderson et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 522,
"text": "(Bai et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Nouns:",
"sec_num": null
},
{
"text": "Verbs: Sadeghi and Farhadi (2011) show that it is possible to automatically describe the interaction of objects or the activities of objects in images. We select verbs that describe simple and well-defined actions and group them into transitive and intransitive verbs. We use the following verbs in the pairs: eat, lie, ride, fly, hold, stand.",
"cite_spans": [
{
"start": 7,
"end": 33,
"text": "Sadeghi and Farhadi (2011)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nouns:",
"sec_num": null
},
{
"text": "Pairs and Datasets: We define a total of 24 concept pairs for the evaluation, as shown in Table 1 . The training and evaluation data is extracted from the COCO dataset, which contains K=5 reference captions for N =123,287 images. In the compositional captioning evaluation, we define the training datasets D train and validation datasets D val as subsets of the original COCO training data, and the evaluation datasets D eval as subsets of the COCO validation set, both given the concept pairs. To ensure that there is enough evaluation data, we only use concept pairs for which there are more than 100 instances in the validation set. Occurrence statistics for the considered concept pairs can be found in Appendix B.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Nouns:",
"sec_num": null
},
{
"text": "The performance of a model is measured on the D eval datasets. For each concept pair evaluation set consisting of M images, we dependency parse the set of M \u00d7 K generated captions {\ufffds 1 1 , ..., s 1 K \ufffd, ..., \ufffds M 1 , ..., s M K \ufffd} to determine whether the captions contain the expected concept pair, and whether the adjective or verb is a dependent of the noun. 5 We denote the set of captions for which these conditions hold true as C.",
"cite_spans": [
{
"start": 363,
"end": 364,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.3"
},
{
"text": "There is low inter-annotator agreement in the human reference captions on the usage of the concepts in the target pairs. 6 Therefore, one should not expect a model to generate a single caption with the concepts in a pair. However, a model can generate a larger set of K captions using beam search or diverse decoding strategies. Given K captions, the recall of the concept pairs in an evaluation dataset is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Recall@K = |{\ufffds m k \ufffd | \u2203k : s m k \u2208 C}| M",
"eq_num": "(1)"
}
],
"section": "Evaluation Metric",
"sec_num": "3.3"
},
{
"text": "Recall@K is an appropriate metric because the reference captions were produced by annotators who did not need to produce any specific word when describing an image. In addition, the set of captions C is determined with respect to the same synonym sets of the concepts that were used to construct the datasets, and so credit is given for semantically equivalent outputs. More exhaustive approaches to determine semantic equivalence for this metric are left for future work. Training and Evaluation: The models are trained on the D train datasets, in which groups of concept pairs are held out-see Appendix C for more information. Hyperparameters are set as described in the respective papers. When a model has converged on the D val validation split (as measured in BLEU score), we generate K captions for each image in D eval using beam search. Then, we calculate the Recall@K metric (Eqn. 1, K=5) for each concept pair in the evaluation split, as well as the average over all recall scores to report the compositional generalization performance of a model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.3"
},
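{
"text": "For concreteness, Recall@K (Eqn. 1) can be computed as in the following sketch, where generated maps each evaluation image to its K decoded captions and in_C is a predicate such as the dependency-based check sketched in Section 3.2 (both names are illustrative, not from the released code):\n\ndef recall_at_k(generated, in_C):\n    # fraction of images for which at least one of the K captions is in C\n    hits = sum(1 for caps in generated.values() if any(in_C(c) for c in caps))\n    return hits / max(len(generated), 1)\n\n# toy example with K = 2 captions per image\ngenerated = {\n    'img1': ['a white dog on a beach', 'a dog running on a beach'],\n    'img2': ['a cat on a couch', 'a black cat on a couch'],\n}\nprint(recall_at_k(generated, lambda c: 'white dog' in c))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.3"
},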
{
"text": "We also evaluate the compositional generalization of a BUTD model trained on the full COCO 6 We calculate the inter-annotator agreement for the target pairs between the 5 reference captions for every image in the COCO dataset: on average, only 1.57 / 5 captions contain the respective adjective-noun or noun-verb concept pair, if it is present in any. We ascribe this lack of agreement to the open nature of the annotation task: there were no restrictions given for what should be included in an image caption. training dataset (FULL). In this setting, the model is trained on compositions of the type we seek to evaluate in this task, and thus does not need to generalize to new compositions.",
"cite_spans": [
{
"start": 91,
"end": 92,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.3"
},
{
"text": "The word embeddings of image captioning models are usually learned from scratch, without pretraining 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Language Representations:",
"sec_num": null
},
{
"text": "Pretrained word embeddings (e.g. GloVe (Pennington et al., 2014) ) or language models (e.g. Devlin et al. (2019) ) contain distributional information obtained from large-scale textual resources, which may improve generalization performance. However, we do use them for this task because the resulting model may not have the expected paradigmatic gaps.",
"cite_spans": [
{
"start": 39,
"end": 64,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF49"
},
{
"start": 92,
"end": 112,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Language Representations:",
"sec_num": null
},
{
"text": "Image Captioning: The models mostly fail to generate captions that contain the held out pairs. The average Recall@5 for SAT and BUTD are 3.0 and 6.5, respectively. A qualitative analysis of the generated captions shows that the models usually describe the depicted objects correctly, but, in the case of held out adjective-noun pairs, the models either avoid using adjectives, or use adjectives that describe a different property of the object in question, e.g. white and green airplane instead of small plane in Figure 3 . In the case of held out noun-verb pairs, the models either replace the target verb with a less descriptive phrase, e.g. a man sitting with a plate of food instead of a man is eating in Figure 3 , or completely omit the verb, reducing the caption to a simple noun phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Figure 3",
"ref_id": null
},
{
"start": 709,
"end": 717,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "In the FULL setting, average Recall@5 reaches 33.3. We assume that this score is a conservative estimate due to the low average inter-annotator agreement (see Footnote 6). The model is less likely to describe an image using the target pair if the pair is only present in one of the reference captions, as the feature is likely not salient (e.g. the car in the image has multiple colors, and the target color is only covering one part of the car). In fact, if we calculate the average recall for images where at least 2 / 3 / 4 / 5 of the reference captions contain the target concept pair, Recall@5 increases to 46.5 / 58.3 / 64.9 / 75.2. This shows that the BUTD model is more likely to generate a caption with the expected concept pair when more human annotators agree that it is a salient pair of concepts in an image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Image-Sentence Ranking: In a related experiment, we evaluate the generalization performance of the VSE++ image-sentence ranking model on the compositional captioning task (Faghri et al., 2018) . We use an adapted version of the evaluation metric because the ranking model does not generate tokens. 8 The average Recall@5 with the adapted metric for the ranking model is 46.3. The respective FULL performance for this model is 47.0, indicating that the model performs well whether it has seen examples of the evaluation concept pair at training time or not. In other words, the model achieves better compositional generalization than the captioning models.",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "(Faghri et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 298,
"end": 299,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "In the previous section, we found that state-of-theart captioning models fail to generalize to unseen combinations of concepts, however, an imagesentence ranking model does generalize. We propose a multi-task model that is trained for image captioning and image-sentence ranking with shared parameters between the different tasks. The captioning component can use the ranking component to re-rank complete candidate captions in the beam. This ensures that the generated captions are as informative and accurate as possible, given the constraints of satisfying both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model",
"sec_num": "5"
},
{
"text": "Following , the model is a two-layer LSTM (Hochreiter and Schmidhuber, 1997) , where the first layer encodes the sequence of words, and the second layer integrates visual features from the bottom-up and top-down attention mechanism, and generates the output sequence. The parameters of the ranking component \u03b8 2 are mostly a subset of the parameters of the generation component \u03b8 1 . We name the model Bottom-Up and Top-down attention with Ranking (BUTR). Figure 2 shows a high-level overview of the model architecture.",
"cite_spans": [
{
"start": 42,
"end": 76,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 456,
"end": 464,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Joint Model",
"sec_num": "5"
},
{
"text": "To perform the image-sentence ranking task, we project the images and captions into a joint visualsemantic embedding space R J . We introduce a a dog language encoding LSTM with a hidden layer dimension of L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h l t = LSTM(W 1 o t , h l t\u22121 )",
"eq_num": "(2)"
}
],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "where o t \u2208 R V is a one-hot encoding of the input word at timestep t, W 1 \u2208 R E\u00d7V is a word embedding matrix for a vocabulary of size V and h l t\u22121 the state of the LSTM at the previous timestep. At training time, the input words are the words of the target caption at each timestep.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "The final hidden state of the language encoding LSTM h l t=T is projected into the joint embedding space as s * \u2208 R J using W 2 \u2208 R J\u00d7L :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "s * = W 2 h l t=T (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "The images are represented using the bottom-up features proposed by . For each image, we extract a set of R mean-pooled convolutional features v r \u2208 R I , one for each proposed image region r. We introduce W 3 \u2208 R J\u00d7I , which projects the image features of a single region into the joint embedding space:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v e r = W 3 v r",
"eq_num": "(4)"
}
],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "To form a single representation v * of the image from the set of embedded image region features v e r , we apply a weighting mechanism. We generate a normalized weighting of region features \u03b2 \u2208 R R using W 4 \u2208 R 1\u00d7J . \u03b2 r denotes the weight for a specific region r. Then we sum the weighted region features to generate v * \u2208 R J :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 \ufffd r = W 4 v e r",
"eq_num": "(5)"
}
],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 = softmax(\u03b2 \ufffd )",
"eq_num": "(6)"
}
],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v * = R \ufffd r=1 \u03b2 r v e r",
"eq_num": "(7)"
}
],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
{
"text": "We define the similarity between an image and a caption as the cosine similarity cos(v * , s * ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},
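{
"text": "A condensed PyTorch sketch of this ranking branch (Eqns. 2-7); batch-first tensors are assumed, padding/masking is omitted, and the dimension defaults are placeholders, so this is an illustration of the equations rather than the released implementation:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass RankingBranch(nn.Module):\n    def __init__(self, V, E=300, L=512, I=2048, J=1024):\n        super().__init__()\n        self.embed = nn.Embedding(V, E)                 # W_1\n        self.encoder = nn.LSTM(E, L, batch_first=True)  # language encoding LSTM\n        self.W2 = nn.Linear(L, J, bias=False)           # caption -> joint space (Eqn. 3)\n        self.W3 = nn.Linear(I, J, bias=False)           # region features -> joint space (Eqn. 4)\n        self.W4 = nn.Linear(J, 1, bias=False)           # region weighting (Eqn. 5)\n\n    def forward(self, words, regions):\n        # words: (B, T) token ids; regions: (B, R, I) bottom-up features\n        _, (h_T, _) = self.encoder(self.embed(words))\n        s_star = self.W2(h_T[-1])                            # (B, J)\n        v_e = self.W3(regions)                               # (B, R, J)\n        beta = F.softmax(self.W4(v_e).squeeze(-1), dim=1)    # (B, R), Eqn. 6\n        v_star = (beta.unsqueeze(-1) * v_e).sum(dim=1)       # (B, J), Eqn. 7\n        return F.cosine_similarity(v_star, s_star, dim=-1)   # image-caption similarity\n\nscores = RankingBranch(V=10000)(torch.randint(0, 10000, (2, 7)), torch.randn(2, 36, 2048))\nprint(scores.shape)  # torch.Size([2])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-Sentence Ranking",
"sec_num": "5.1"
},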
{
"text": "For caption generation, we introduce a separate language generation LSTM that is stacked on top of the language encoding LSTM. At each timestep t, we first calculate a weighted representation of the input image features. We calculate a normalized attention weight \u03b1 t \u2208 R R (one \u03b1 r,t for each region) using the language encoding and the image region features. Then, we create a single weighted image feature vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 \ufffd r,t = W 5 tanh(W 6 v e r + W 7 h l t ) (8) \u03b1 t = softmax(\u03b1 \ufffd r,t ) (9) v t = R \ufffd r=1 \u03b1 r,t v e r",
"eq_num": "(10)"
}
],
"section": "Caption Generation",
"sec_num": "5.2"
},
{
"text": "where W 5 \u2208 R H , W 6 \u2208 R H\u00d7J and W 7 \u2208 R H\u00d7L . H indicates the hidden layer dimension of the attention module. These weighted image featuresv t , the output of the language encoding LSTM h l t (Eqn. 2) and the previous state of the language generation LSTM h g t\u22121 are input to the language generation LSTM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h g t = LSTM([v t , h l t ], h g t\u22121 )",
"eq_num": "(11)"
}
],
"section": "Caption Generation",
"sec_num": "5.2"
},
{
"text": "The hidden layer dimension of the LSTM is G. The output probability distribution over the vocabulary is calculated using W 8 \u2208 R V \u00d7G :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t |w <t ) = softmax(W 8 h g t )",
"eq_num": "(12)"
}
],
"section": "Caption Generation",
"sec_num": "5.2"
},
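{
"text": "The visual attention of Eqns. 8-10 can be sketched in PyTorch as below (shapes and dimension defaults are assumed for illustration; the projected region features v^e and the language-encoder state h^l come from the components described above):\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass VisualAttention(nn.Module):\n    def __init__(self, J=1024, L=512, H=512):\n        super().__init__()\n        self.W5 = nn.Linear(H, 1, bias=False)\n        self.W6 = nn.Linear(J, H, bias=False)\n        self.W7 = nn.Linear(L, H, bias=False)\n\n    def forward(self, v_e, h_l):\n        # v_e: (B, R, J) projected region features; h_l: (B, L) encoder state\n        scores = self.W5(torch.tanh(self.W6(v_e) + self.W7(h_l).unsqueeze(1)))  # Eqn. 8\n        alpha = F.softmax(scores.squeeze(-1), dim=1)                            # Eqn. 9\n        v_hat = (alpha.unsqueeze(-1) * v_e).sum(dim=1)                          # Eqn. 10\n        return v_hat, alpha\n\nv_hat, alpha = VisualAttention()(torch.randn(2, 36, 1024), torch.randn(2, 512))\nprint(v_hat.shape, alpha.shape)  # torch.Size([2, 1024]) torch.Size([2, 36])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Generation",
"sec_num": "5.2"
},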
{
"text": "The model is jointly trained on two objectives. The caption generation component is trained with a cross-entropy loss, given a target ground-truth sentence s consisting of the words w 1 , . . . , w T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L gen (\u03b8 1 ) = \u2212 T \ufffd t=1 log p(w t |w <t ; i)",
"eq_num": "(13)"
}
],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "The image-caption ranking component is trained using a hinge loss with emphasis on hard negatives (Faghri et al., 2018) :",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Faghri et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "L rank (\u03b8 2 ) = max s \ufffd [\u03b1 + cos(i, s \ufffd ) \u2212 cos(i, s)] + + max i \ufffd [\u03b1 + cos(i \ufffd , s) \u2212 cos(i, s)] + (14) where [x] + \u2261 max(x, 0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
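{
"text": "A sketch of this max-of-hinges loss (Eqn. 14), computed over a batch in which row i of the image embeddings matches row i of the caption embeddings; this follows the general VSE++ recipe and is not taken from the released code:\n\nimport torch\nimport torch.nn.functional as F\n\ndef hard_negative_ranking_loss(img, cap, margin=0.2):\n    # img, cap: (B, J) embeddings of matched image-caption pairs\n    img = F.normalize(img, dim=-1)\n    cap = F.normalize(cap, dim=-1)\n    sim = img @ cap.t()                  # cosine similarities, (B, B)\n    pos = sim.diag().unsqueeze(1)        # cos(i, s) for the true pairs, (B, 1)\n    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)\n    cost_cap = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)      # negatives s'\n    cost_img = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)  # negatives i'\n    # keep only the hardest negative in each direction, then average\n    return (cost_cap.max(dim=1).values + cost_img.max(dim=0).values).mean()\n\nprint(hard_negative_ranking_loss(torch.randn(8, 1024), torch.randn(8, 1024)).item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},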
{
"text": "These two loss terms can take very different magnitudes during training, and thus can not be simply added. We use GradNorm to learn loss weighting parameters w gen and w rank with an additional optimizer during training. These parameters dynamically rescale the gradients so that no task becomes too dominant. The overall training objective is formulated as the weighted sum of the single-task losses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
{
"text": "L(\u03b8 1 , \u03b8 2 ) = w gen L gen (\u03b8 1 ) + w rank L rank (\u03b8 2 ) (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},
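{
"text": "A minimal sketch of the combined objective in Eqn. 15; here w_gen and w_rank are learnable scalars with their own optimizer, and the full GradNorm update that rescales them from per-task gradient norms is deliberately omitted:\n\nimport torch\n\nw_gen = torch.ones(1, requires_grad=True)\nw_rank = torch.ones(1, requires_grad=True)\nweight_optimizer = torch.optim.Adam([w_gen, w_rank], lr=1e-3)\n\ndef combined_loss(loss_gen, loss_rank):\n    # weighted sum of the single-task losses (Eqn. 15)\n    return w_gen * loss_gen + w_rank * loss_rank\n\ntotal = combined_loss(torch.tensor(2.3), torch.tensor(0.4))\nprint(total.item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.3"
},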
{
"text": "The model generates B captions for each image using beam search decoding. At each timestep, the tokens generated so far for each item on the beam are input back into the language encoder (Eqn. 3). The output of the language encoder is concatenated with the image representation (Eqn. 7) and the previous hidden state of the generation LSTM, and input to the generation LSTM (Eqn. 11) to predict the next token (Eqn. 12).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5.4"
},
{
"text": "The jointly-trained image-sentence ranking component can be used to re-rank the generated captions comparing the image embedding with a language encoder embedding of the captions (Eqn. 4). We expect the ranking model will produce a better ranking of the B captions than only beam search by considering their relevance and informativity with respect to the image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5.4"
},
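{
"text": "The decoding procedure can be summarised by the following sketch; model.beam_search, model.encode_image and model.encode_caption are hypothetical interfaces standing in for the components described above, not functions of any released API:\n\nimport torch\nimport torch.nn.functional as F\n\ndef generate_with_reranking(model, image_regions, beam_size=5):\n    # 1) generate B candidate captions with beam search\n    candidates = model.beam_search(image_regions, beam_size)  # list of token id lists\n    # 2) re-rank them with the jointly trained ranking branch\n    v_star = model.encode_image(image_regions)                # (J,)\n    scores = []\n    for caption in candidates:\n        s_star = model.encode_caption(torch.tensor(caption))  # (J,)\n        scores.append(F.cosine_similarity(v_star, s_star, dim=0).item())\n    order = sorted(range(len(candidates)), key=lambda i: -scores[i])\n    return [candidates[i] for i in order]  # most image-relevant caption first",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5.4"
},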
{
"text": "We follow the experimental protocol defined in Section 4 to evaluate the joint model. See Appendix E for training details and hyperparameters. 13.8 26.0 1.4 0.8 20.3 16.9 FULL 42.7 38.7 5.9 33.3 39.6 39.5 the same image features and a decoder architecture as the BUTD model. Thus, when using the standard beam search decoding method, BUTR does not improve over BUTD. However, when using the improved decoding mechanism with re-ranking BUTR + RR, Recall@5 increases to 13.2. We also observe an improvement in METEOR and SPICE, and a drop in BLEU and CIDEr compared to the other models. We note that BLEU has the weakest correlations (Elliott and Keller, 2014) , and SPICE and METEOR have the strongest correlations with human judgments (Kilickaya et al., 2017 ).",
"cite_spans": [
{
"start": 632,
"end": 658,
"text": "(Elliott and Keller, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 735,
"end": 758,
"text": "(Kilickaya et al., 2017",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The Recall@5 scores for different categories of held out pairs is presented in in Table 3 , and Figure 3 presents examples of images and the generated captions from different models. We observe that all models are generally best at describing colors, especially of inanimate objects; they nearly never correctly describe held out size modifiers; and for held out noun-verb pairs, performance is slightly better for transitive verbs. Figure 3 : Selected examples of the captions generated by SAT, BUTD, and BUTR for six different concept pairs. The bold words in a caption indicate compositional success.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 96,
"end": 105,
"text": "Figure 3",
"ref_id": null
},
{
"start": 434,
"end": 442,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Describing colors: The color-noun pairings studied in this work have the best generalization performance. We find that all models are better at generalizing to describing inanimate objects instead of animate objects, as shown in the detailed results in Table 3 . One explanation for this could be that the colors of inanimate objects tend to have a higher variance in chromaticity when compared to the colors of animate objects (Rosenthal et al., 2018) , making them easier to distinguish.",
"cite_spans": [
{
"start": 428,
"end": 452,
"text": "(Rosenthal et al., 2018)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "Describing sizes: The generalization performance for size modifiers is consistently low for all models. The CNN image encoders are generally able to predict the sizes of object bounding boxes in an image. However, this does not necessarily relate to the actual sizes of the objects, given that this depends on their distance from the camera. To support this claim, we perform a correlation analysis in Appendix F showing that the bounding box sizes of objects in the COCO dataset do not relate to the described sizes in the respective captions. Nevertheless, size modification is challenging from a linguistic perspective because it requires reference to an object's comparison class (Cresswell, 1977; Bierwisch, 1989) . A large mouse is so with respect to the class of mice, not with respect to the broader class of animals. To successfully learn size modification, a model needs to represent such comparison classes.",
"cite_spans": [
{
"start": 684,
"end": 701,
"text": "(Cresswell, 1977;",
"ref_id": "BIBREF11"
},
{
"start": 702,
"end": 718,
"text": "Bierwisch, 1989)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "We hypothesize that recall is reasonable in the FULL setting because it exploits biases in the dataset, e.g. that trucks are often described as BIG. In that case, the model is not actually learning the meaning of BIG, but simple co-occurrence statistics for adjectives with nouns in the dataset.",
"cite_spans": [
{
"start": 144,
"end": 148,
"text": "BIG.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "Describing actions: In these experiments, the models were better at generalizing to transitive verbs than intransitive verbs. This may be because images depicting transitive events (e.g. eating) often contain additional arguments (e.g. cake); thus they offer richer contextual cues than images with intransitive events. The analysis in Appendix G provides some support for this hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "Diversity in Generated Captions: A crucial difference between human-written and modelgenerated captions is that the latter are less diverse (Devlin et al., 2015; Dai et al., 2017) . Given that BUTR+RR improves compositional generalization, we explore whether the diversity of the captions is also improved. Van Miltenburg et al. (2018) proposes a suite of metrics to measure the diversity of the captions generated by a model. We apply these metrics to the captions generated by BUTR+RR and BUTD and compare the scores to the best models evaluated in Van Miltenburg et al. (2018) .",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(Devlin et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 162,
"end": 179,
"text": "Dai et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 555,
"end": 579,
"text": "Miltenburg et al. (2018)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
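{
"text": "As a rough illustration of two of these measures (whitespace tokenization assumed; this is an approximation of the definitions in Van Miltenburg et al., 2018, not their implementation), the percentage of novel captions and a mean segmented type-token ratio can be computed as:\n\ndef percent_novel(generated, training):\n    # share of generated captions that do not appear verbatim in the training captions\n    train_set = set(training)\n    novel = sum(1 for c in generated if c not in train_set)\n    return 100.0 * novel / max(len(generated), 1)\n\ndef mean_segmented_ttr(captions, segment_len=1000):\n    # average type-token ratio over fixed-length token segments\n    tokens = [t for c in captions for t in c.split()]\n    segments = [tokens[i:i + segment_len] for i in range(0, len(tokens), segment_len)]\n    ratios = [len(set(s)) / len(s) for s in segments if s]\n    return sum(ratios) / max(len(ratios), 1)\n\ngen = ['a white dog runs on a beach', 'a man is eating a cake']\nprint(percent_novel(gen, ['a man is eating a cake']))  # 50.0\nprint(round(mean_segmented_ttr(gen), 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},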
{
"text": "The results are presented in Table 4 . BUTR+RR shows the best performance as measured by most of the diversity metrics. BUTR+RR produces the highest percentage of novel captions (%Novel), which is important for compositional generalization. It generates sentences with a high average sentence length (ASL) -performing similarly to Liu et al. (2017) -but with a larger standard deviation, suggesting a greater variety in the captions. The total number of word types (Types) and cover- age (Cov) are higher for Shetty et al. (2017) , which is trained with a generative adversarial objective in order to generate more diverse captions. However, these types are more equally distributed in the captions generated by BUTR+RR, as shown by the higher mean segmented type-token ratio (TTR 1 ) and bigram type-token ratio (TTR 2 ). The increased diversity of the captions may explain the lower BLEU score of BUTR+RR compared to BUTD. Recall that BLEU measures weighted n-gram precision, hence it awards less credit for captions that are lexically or syntactically different than the references. Thus, BLEU score may decrease if a model generates diverse captions. We note that METEOR, which incorporates non-lexical matching components in its scoring function, is higher for BUTR+RR than BUTD.",
"cite_spans": [
{
"start": 331,
"end": 348,
"text": "Liu et al. (2017)",
"ref_id": "BIBREF38"
},
{
"start": 509,
"end": 529,
"text": "Shetty et al. (2017)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "Decoding strategies: The failure of the captioning models to generalize can be partially ascribed to the effects of maximum likelihood decoding. Holtzman et al. (2019) find that maximum likelihood decoding leads to unnaturally flat and high per-token probability text. We find that even with grounding from the images, the captioning models do not assign a high probability to the sequences containing compositions that were not observed during training. BUTR is jointly trained with a ranking component, which is used to re-rank the generated captions, thereby ensuring that at the sentence-level, the captions are relevant for the image. It can thus be viewed as an improved decoding strategy such as those proposed in Vijayakumar et al. ",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Holtzman et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "7"
},
{
"text": "Image captioning models are usually evaluated without explicitly considering their ability to generalize to unseen concepts. In this paper, we ar-gued that models should be capable of compositional generalization, i.e. the ability to produce captions that include combinations of unseen concepts. We evaluated the ability of models to generalize to unseen adjective-noun and noun-verb pairs and found that two state-of-the-art models did not generalize in this evaluation, but that an image-sentence ranking model did. Given these findings, we presented a multi-task model that combines captioning and image-sentence ranking, and uses the ranking component to re-rank the captions generated by the captioning component. This model substantially improved generalization performance without sacrificing performance on established text-similarity metrics, while generating more diverse captions. We hope that this work will encourage researchers to design models that better reflect human-like language production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Future work includes extending the evaluation to other concept pairs and other concept classes, analysing the circumstances in which the re-ranking step improves compositional generalization, exploring the utility of jointly trained discriminative re-rankers into other NLP tasks, developing models that generalize to size modifier adjectives, and devising approaches to improve the handling of semantically equivalent outputs for the proposed evaluation metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://github.com/mitjanikolaus/ compositional-image-captioning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The meanings for complex expressions are derived from the meanings of their parts via specific composition functions.(Partee, 1984)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is different from the \"robust image captioning\" task(Lu et al., 2018) because we are testing for the composition of nouns with adjectives or verbs, and not the co-occurrence of different nouns in an image.4 Fang et al. (2015) use a discriminative model that has access to sentence-level features and a multimodal similarity model in order to capture global semantics. uses a conditional variational auto-encoder to generate a set of diverse captions and a consensus-based method for re-ranking the candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This means that a model gains no credit for predicting the concept pairs without them attaching to their expected target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Exceptions: You et al. (2016);Anderson et al. (2017)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For each image in the evaluation set, we construct a test set that consists of the 5 correct captions and the captions of 1,000 randomly selected images from the COCO validation set. We ensure that all captions in the test set contain exactly one of the constituent concept pairs, but not both (except for the 5 correct captions). We construct a ranking of the captions in this test set with respect to the image, and use the top-K ranked captions to calculate the concept pair recall (Eqn. 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Emiel van Miltenburg and\u00c1kos K\u00e1d\u00e1r for their extensive feedback on the work, and the reviewers, Ana Valeria Gonzales, Daniel Hershcovich, Heather Lent, and Mareike Hartmann for their comments. We also thank the participants of the Lorentz Center workshop on Compositionality in Brains and Machines for suggesting the phrase \"paradigmatic gap\". MN was supported by the Erasmus+ Traineeship program. RA and MA are funded by a Google Focused Research Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "nocaps: novel object captioning at scale",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.08658"
]
},
"num": null,
"urls": [],
"raw_text": "Harsh Agrawal, Karan Desai, Xinlei Chen, Rishabh Jain, Dhruv Batra, Devi Parikh, Stefan Lee, and Pe- ter Anderson. 2018. nocaps: novel object captioning at scale. arXiv preprint arXiv:1812.08658.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "382--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propo- sitional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Guided open vocabulary image captioning with constrained beam search",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2017,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "936--945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary im- age captioning with constrained beam search. In EMNLP, pages 936-945. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to generalize to new compositions in image understanding",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Atzmon",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Vahid",
"middle": [],
"last": "Kezami",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Chechik",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, and Gal Chechik. 2016. Learning to generalize to new compositions in image under- standing. CoRR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sod-mtgan: Small object detection via multi-task generative adversarial network",
"authors": [
{
"first": "Yancheng",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mingli",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Ghanem",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "206--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yancheng Bai, Yongqiang Zhang, Mingli Ding, and Bernard Ghanem. 2018. Sod-mtgan: Small object detection via multi-task generative adversarial net- work. In Proceedings of the European Conference on Computer Vision (ECCV), pages 206-221.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic description generation from images: A survey of models, datasets, and evaluation measures",
"authors": [
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ruket",
"middle": [],
"last": "Cakici",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Erkut",
"middle": [],
"last": "Erdem",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "409--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from im- ages: A survey of models, datasets, and evalua- tion measures. Journal of Artificial Intelligence Re- search, 55:409-442.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The semantics of gradation",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Bierwisch",
"suffix": ""
}
],
"year": 1989,
"venue": "Dimensional adjectives",
"volume": "",
"issue": "",
"pages": "71--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Bierwisch. 1989. The semantics of gradation. In Dimensional adjectives, ed. Manfred Bierwisch and Ewald Lang, pages 71-261. Berlin: Springer- Verlag.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Microsoft coco captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.00325"
]
},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Badrinarayanan",
"suffix": ""
},
{
"first": "Chen-Yu",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Rabinovich",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "793--802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2018. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the 35th In- ternational Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018, pages 793-802.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The semantics of degree",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Cresswell",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "261--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. J. Cresswell. 1977. The semantics of degree. In Montague grammar, ed. Barbara Partee, pages 261- 292. New York: Academic Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards diverse and natural image descriptions via a conditional gan",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Dahua",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2970--2979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. 2017. Towards diverse and natural image descrip- tions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, pages 2970-2979.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluating compositionality in sentence embeddings",
"authors": [
{
"first": "Ishita",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Demi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stuhlm\u00fcller",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Noah D",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.04302"
]
},
"num": null,
"urls": [],
"raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embed- dings. arXiv preprint arXiv:1802.04302.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ninth workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models for image captioning: The quirks and what works",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Mar- garet Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 100-105.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comparing automatic evaluation measures for image description",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "452--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image descrip- tion. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), volume 2, pages 452-457.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Assessing composition in sentence vector representations",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.03992"
]
},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. arXiv preprint arXiv:1809.03992.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "VSE++: improving visual-semantic embeddings with hard negatives",
"authors": [
{
"first": "Fartash",
"middle": [],
"last": "Faghri",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Fleet",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2018,
"venue": "British Machine Vision Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fartash Faghri, David J. Fleet, Jamie Kiros, and Sanja Fidler. 2018. VSE++: improving visual-semantic embeddings with hard negatives. In British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018, page 12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04833"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Forrest",
"middle": [],
"last": "Iandola",
"suffix": ""
},
{
"first": "Rupesh",
"middle": [
"K"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1473--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll\u00e1r, Jianfeng Gao, Xi- aodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1473-1482.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The compositionality papers",
"authors": [
{
"first": "Jerry",
"middle": [
"A"
],
"last": "Fodor",
"suffix": ""
},
{
"first": "Ernest",
"middle": [],
"last": "Lepore",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry A Fodor and Ernest Lepore. 2002. The composi- tionality papers. Oxford University Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Connectionism and cognitive architecture: A critical analysis",
"authors": [
{
"first": "Jerry",
"middle": [
"A"
],
"last": "Fodor",
"suffix": ""
},
{
"first": "Zenon",
"middle": [
"W"
],
"last": "Pylyshyn",
"suffix": ""
}
],
"year": 1988,
"venue": "Cognition",
"volume": "28",
"issue": "1-2",
"pages": "3--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry A Fodor and Zenon W Pylyshyn. 1988. Connec- tionism and cognitive architecture: A critical analy- sis. Cognition, 28(1-2):3-71.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Devise: A deep visual-semantic embedding model",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Frome",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2121--2129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. De- vise: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121-2129.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical composi- tional distributional model of meaning. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing, pages 1394-1404. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep compositional captioning: Describing novel object categories without paired training data",
"authors": [
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Anne Hendricks, Subhashini Venugopalan, Mar- cus Rohrbach, Raymond Mooney, Kate Saenko, and Trevor Darrell. 2016. Deep compositional cap- tioning: Describing novel object categories without paired training data. In Proceedings of the IEEE conference on computer vision and pattern recog- nition, pages 1-10.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research, 47:853-899.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degen- eration. CoRR, abs/1904.09751.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep visualsemantic alignments for generating image descriptions",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Compositional learning for human object interaction",
"authors": [
{
"first": "Keizo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "234--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keizo Kato, Yin Li, and Abhinav Gupta. 2018. Com- positional learning for human object interaction. In Proceedings of the European Conference on Com- puter Vision (ECCV), pages 234-251.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Re-evaluating automatic metrics for image captioning",
"authors": [
{
"first": "Mert",
"middle": [],
"last": "Kilickaya",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Ikizler-Cinbis",
"suffix": ""
},
{
"first": "Erkut",
"middle": [],
"last": "Erdem",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, pages 199-209.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks",
"authors": [
{
"first": "Brenden",
"middle": [
"M"
],
"last": "Lake",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.00350"
]
},
"num": null,
"urls": [],
"raw_text": "Brenden M Lake and Marco Baroni. 2017. General- ization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv preprint arXiv:1711.00350.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Building machines that learn and think like people",
"authors": [
{
"first": "Brenden",
"middle": [
"M"
],
"last": "Lake",
"suffix": ""
},
{
"first": "Tomer",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"J"
],
"last": "Gershman",
"suffix": ""
}
],
"year": 2017,
"venue": "Behavioral and brain sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenden M Lake, Tomer D Ullman, Joshua B Tenen- baum, and Samuel J Gershman. 2017. Building ma- chines that learn and think like people. Behavioral and brain sciences, 40.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European confer- ence on computer vision, pages 740-755. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Mat: A multimodal attentive translator for image captioning",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fuchun",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Changhu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Yuille",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.05658"
]
},
"num": null,
"urls": [],
"raw_text": "Chang Liu, Fuchun Sun, Changhu Wang, Feng Wang, and Alan Yuille. 2017. Mat: A multimodal atten- tive translator for image captioning. arXiv preprint arXiv:1702.05658.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural baby talk",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7219--7228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018. Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 7219-7228.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Learning like a child: Fast novel visual concept learning from sentence descriptions of images",
"authors": [
{
"first": "Junhua",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"L"
],
"last": "Yuille",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2533--2541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhua Mao, Xu Wei, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L Yuille. 2015. Learning like a child: Fast novel visual concept learning from sen- tence descriptions of images. In Proceedings of the IEEE international conference on computer vision, pages 2533-2541.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The acquisition of prenominal modifier sequences",
"authors": [
{
"first": "Edward",
"middle": [
"H"
],
"last": "Matthei",
"suffix": ""
}
],
"year": 1982,
"venue": "Cognition",
"volume": "11",
"issue": "3",
"pages": "301--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward H Matthei. 1982. The acquisition of prenomi- nal modifier sequences. Cognition, 11(3):301-332.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Parallel distributed processing",
"authors": [
{
"first": "James",
"middle": [
"L"
],
"last": "McClelland",
"suffix": ""
},
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "PDP Research Group",
"suffix": ""
}
],
"year": 1986,
"venue": "Explorations in the Microstructure of Cognition",
"volume": "2",
"issue": "",
"pages": "216--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James L McClelland, David E Rumelhart, PDP Re- search Group, et al. 1986. Parallel distributed pro- cessing. Explorations in the Microstructure of Cog- nition, 2:216-271.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Rnns implicitly implement tensor product representations",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Dunbar",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.08718"
]
},
"num": null,
"urls": [],
"raw_text": "R Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. 2018. Rnns implicitly imple- ment tensor product representations. arXiv preprint arXiv:1812.08718.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "From red wine to red tomato: Composition with context",
"authors": [
{
"first": "Ishan",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Martial",
"middle": [],
"last": "Hebert",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1792--1801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishan Misra, Abhinav Gupta, and Martial Hebert. 2017. From red wine to red tomato: Composition with context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1792-1801.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. proceedings of ACL-08: HLT, pages 236-244.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Formal philosophy",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Montague. 1974. Formal philosophy, ed. r. thomason. New Haven.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In ACL, pages 311- 318. ACL.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Compositional reasoning in early childhood",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Aslin",
"suffix": ""
}
],
"year": 2016,
"venue": "PloS one",
"volume": "11",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Piantadosi and Richard Aslin. 2016. Compo- sitional reasoning in early childhood. PloS one, 11(9):e0147734.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Universal dependency parsing from scratch",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christo- pher D. Manning. 2018. Universal dependency pars- ing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160-170, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Color statistics of objects, and color tuning of object cortex in macaque monkey",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Sivalogeswaran",
"middle": [],
"last": "Ratnasingam",
"suffix": ""
},
{
"first": "Theodros",
"middle": [],
"last": "Haile",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Eastman",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Fuller-Deets",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of vision",
"volume": "18",
"issue": "11",
"pages": "1--1",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Rosenthal, Sivalogeswaran Ratnasingam, Theodros Haile, Serena Eastman, Josh Fuller-Deets, and Bevil R Conway. 2018. Color statistics of ob- jects, and color tuning of object cortex in macaque monkey. Journal of vision, 18(11):1-1.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Recognition using visual phrases",
"authors": [
{
"first": "Mohammad Amin",
"middle": [],
"last": "Sadeghi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2011,
"venue": "CVPR 2011",
"volume": "",
"issue": "",
"pages": "1745--1752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Amin Sadeghi and Ali Farhadi. 2011. Recognition using visual phrases. In CVPR 2011, pages 1745-1752. IEEE.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Speaking the same language: Matching machine to human captions by adversarial training",
"authors": [
{
"first": "Rakshith",
"middle": [],
"last": "Shetty",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Fritz",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "4135--4144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hen- dricks, Mario Fritz, and Bernt Schiele. 2017. Speak- ing the same language: Matching machine to human captions by adversarial training. In Proceedings of the IEEE International Conference on Computer Vi- sion, pages 4135-4144.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "On the proper treatment of connectionism",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1988,
"venue": "Behavioral and brain sciences",
"volume": "11",
"issue": "1",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Smolensky. 1988. On the proper treatment of connectionism. Behavioral and brain sciences, 11(1):1-23.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Measuring the diversity of automatic image descriptions",
"authors": [
{
"first": "Emiel",
"middle": [],
"last": "Van Miltenburg",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1730--1741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emiel Van Miltenburg, Desmond Elliott, and Piek Vossen. 2018. Measuring the diversity of automatic image descriptions. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 1730-1741.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR, pages 4566-4575. IEEE Computer Society.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Order-embeddings of images and language",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2016,
"venue": "4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Pro- ceedings.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Diverse beam search for improved description of complex scenes",
"authors": [
{
"first": "Ashwin",
"middle": [
"K"
],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Ramprasaath",
"middle": [
"R"
],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Crandall",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ram- prasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Thirty-Second AAAI Conference on Artificial In- telligence.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 3156-3164.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Show and tell: Lessons learned from the 2015 mscoco image captioning challenge",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "39",
"issue": "",
"pages": "652--663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2017. Show and tell: Lessons learned from the 2015 mscoco image captioning challenge. IEEE transactions on pattern analysis and machine intelligence, 39(4):652-663.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space",
"authors": [
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Schwing",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Lazebnik",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5756--5766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liwei Wang, Alexander Schwing, and Svetlana Lazeb- nik. 2017. Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space. In Advances in Neural In- formation Processing Systems, pages 5756-5766.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "37",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048-2057, Lille, France. PMLR.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Image captioning with semantic attention",
"authors": [
{
"first": "Quanzeng",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "4651--4659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4651-4659.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "State-of-the-Art Performance 4.1 Experimental Protocol Models: We evaluate two image captioning models on the compositional generalization task: Show, Attend and Tell (SAT; Xu et al., 2015) and Bottom-up and Top-down Attention (BUTD; Anderson et al., 2018). For SAT, we use ResNet-152(He et al., 2016) as an improved image encoder.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "An overview of BUTR, which jointly learns image-sentence ranking and image captioning.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "(2018); Fan et al. (2018); Radford et al. (2019); Holtzman et al. (2019).",
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "The 24 concept pairs used to construct the training D train and eval D eval datasets.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "shows the compositional generalization performance, as well as the common image captioning metric scores for all models. BUTR uses",
"content": "<table><tr><td>Model</td><td>R</td><td>M</td><td>S</td><td>C</td><td>B</td></tr><tr><td>SAT</td><td colspan=\"5\">3.0 23.2 16.6 80.4 27.5</td></tr><tr><td>BUTD BUTR</td><td colspan=\"5\">6.5 25.8 19.1 98.1 32.6 6.5 25.7 19.0 97.0 32.0</td></tr><tr><td colspan=\"6\">BUTR + RR 13.2 26.4 20.4 92.7 28.8</td></tr><tr><td>FULL</td><td colspan=\"5\">33.3 27.4 20.9 105.3 36.6</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"content": "<table><tr><td colspan=\"7\">: Average results for Recall@5 (R; Eqn. 1), METEOR (M; Denkowski and Lavie, 2014), SPICE (S; Anderson et al., 2016) , CIDEr (C; Vedantam et al., 2015), BLEU (B; Papineni et al., 2002). RR stands for re-ranking after decoding.</td></tr><tr><td/><td>Color</td><td/><td>Size</td><td/><td>Verb</td></tr><tr><td/><td>A</td><td>I</td><td>A</td><td>I</td><td>T</td><td>I</td></tr><tr><td>SAT</td><td colspan=\"3\">3.7 10.5 0</td><td>0</td><td>1.6</td><td>2.2</td></tr><tr><td>BUTD</td><td colspan=\"3\">5.4 10.9 0.5</td><td>0</td><td colspan=\"2\">11.6 10.3</td></tr><tr><td>BUTR</td><td colspan=\"4\">6.4 16.2 0.3 0.2</td><td>7.0</td><td>8.6</td></tr><tr><td>+ RR</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "Detailed Recall@5 scores for different categories of held out pairs. The scores are averaged over the set of scores for pairs from the respective category.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "TTR 2 %Novel Cov Loc 5",
"content": "<table><tr><td>Model Types TTR 1 Liu et al. (2017) ASL 598 0.17 10.3 \u00b1 1.32 953 0.21 Vinyals et al. (2017) 10.1 \u00b1 1.28 Shetty et al. (2017) 9.4 \u00b1 1.31 2611 0.24 BUTD 1162 0.22 9.0 \u00b1 1.01 BUTR+RR 0.26 10.2 \u00b1 1.76 1882 Validation data 11.3 \u00b1 2.61 9200 0.32</td><td>0.38 0.43 0.54 0.49 0.59 0.72</td><td>50.1 90.5 80.5 56.4 93.6 95.3</td><td>0.05 0.70 0.07 0.69 0.20 0.71 0.09 0.78 0.14 0.80 --</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF7": {
"text": "Scores for diversity metrics as defined by Van Miltenburg et al. (2018) for different models.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}