{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:28.698811Z"
},
"title": "Probing Cross-Modal Representations in Multi-Step Relational Reasoning",
"authors": [
{
"first": "Iuliia",
"middle": [],
"last": "Parfenova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate the representations learned by vision and language models in tasks that require relational reasoning. Focusing on the problem of assessing the relative size of objects in abstract visual contexts, we analyse both one-step and two-step reasoning. For the latter, we construct a new dataset of threeimage scenes and define a task that requires reasoning at the level of the individual images and across images in a scene. We probe the learned model representations using diagnostic classifiers. Our experiments show that pretrained multimodal transformer-based architectures can perform higher-level relational reasoning, and are able to learn representations for novel tasks and data that are very different from what was seen in pretraining.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate the representations learned by vision and language models in tasks that require relational reasoning. Focusing on the problem of assessing the relative size of objects in abstract visual contexts, we analyse both one-step and two-step reasoning. For the latter, we construct a new dataset of threeimage scenes and define a task that requires reasoning at the level of the individual images and across images in a scene. We probe the learned model representations using diagnostic classifiers. Our experiments show that pretrained multimodal transformer-based architectures can perform higher-level relational reasoning, and are able to learn representations for novel tasks and data that are very different from what was seen in pretraining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Intelligence is classically described as \"the ability to see the similarities among dissimilar things and the dissimilarities among similar things\" (Thomas Acquinas, 1225-1274, reported by Ruiz, 2011) . Developing systems that can reason over objects and their relations is indeed a long-standing goal of artificial intelligence research, as argued by Johnson et al. (2017) . In recent years, huge progress toward this goal has been made in the language and vision community. Starting from Malinowski and Fritz (2014) and Antol et al. (2015) , a wealth of studies have focused on language-driven visual reasoning, namely the problem of reasoning about an image given some linguistic input.",
"cite_spans": [
{
"start": 189,
"end": 200,
"text": "Ruiz, 2011)",
"ref_id": "BIBREF26"
},
{
"start": 352,
"end": 373,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 490,
"end": 517,
"text": "Malinowski and Fritz (2014)",
"ref_id": "BIBREF21"
},
{
"start": 522,
"end": 541,
"text": "Antol et al. (2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generally speaking, there are two main types of problems in visual reasoning datasets (see Santoro et al., 2017) : non-relational, requiring models to focus only on a given object (e.g., answering the question \"What material is the cube made of?\"), and relational, requiring models to pay attention to several or even all the objects in the image (e.g., indi-cating whether the statement \"There are four cubes that are red\" is true or false). Relational problems call for higher-level abilities, such as counting or directly comparing objects, both of which involve recognising the (dis)similarities among things.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "Santoro et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on an important but understudied, relational reasoning task: assessing the relative size of objects in visual contexts, that is, determining whether an object counts as 'big' or 'small' in an image. We define a multi-step relational reasoning problem formulated as a sentence verification task. We construct a dataset of three-image scenes where a given target object, e.g., a blue triangle, is present in each image: two images have target objects with the same contextually-defined size and one image stands out in this regard. The task requires verifying whether a simple natural language statement standing for a first-order logical form describes a scene, e.g., \"There is exactly one blue triangle that is small in its image in this scene\" (Figure 1 ). Such multi-step relational reasoning is at play in many real-life situations: e.g., the same exact pan may count as 'big' in all contexts except a restaurant kitchen.",
"cite_spans": [],
"ref_spans": [
{
"start": 769,
"end": 778,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We experiment with two types of models to solve this task: a modular neural network (Hu et al., 2017) and LXMERT, a pre-trained multimodal transformer (Tan and Bansal, 2019) . We probe the learned representations of LXMERT to assess whether, and to what extent, it has learned the underlying structure of the data. By means of two experiments with probing classifiers (Alain and Bengio, 2017; Hupkes et al., 2018; Belinkov and Glass, 2019) , we first verify that it is able to perform the task at the image level (i.e., to compute the relative size of the target object at the image level); then, we test its ability to reason at the multi-image level and detect the image that stands out.",
"cite_spans": [
{
"start": 84,
"end": 101,
"text": "(Hu et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 151,
"end": 173,
"text": "(Tan and Bansal, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 368,
"end": 392,
"text": "(Alain and Bengio, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 393,
"end": 413,
"text": "Hupkes et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 414,
"end": 439,
"text": "Belinkov and Glass, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experiments show that LXMERT is able to solve the multi-step relational reasoning task there is exactly one blue triangle that is small in its image in this scene there are exactly two blue triangles that are small in their images in this scene there are exactly two blue triangles that are big in their images in this scene F F T T there is exactly one blue triangle that is big in its image in this scene Figure 1 : One sample scene from our dataset and the four statements it can be paired with, including corresponding truth values assigned as explained in Section 4.1. For clarity, the odd-one-out image (holding the odd size) is framed in red. Best viewed in color.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 419,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "with an accuracy of 88.8%, and that the majority of errors occur when the relative size of the target object is difficult to determine. Our analyses show that (i) in most cases, different attention heads in LXMERT specialise to localising the smallest and biggest objects in the images, (ii) that the cross-modal representations learned appear encode a threshold function that controls whether an object is 'big' or 'small' in an image, and (iii) that a simple diagnostic classifier successfully identifies the instance that stands out in a three-image scene. Taken together, these findings lend further support to the advanced reasoning abilities of pretrained transformer-based architectures, showing that they can perform higher-level relational reasoning and are able to deal with novel tasks and novel data, including synthetic data not available during pre-training. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We investigate multi-step relational reasoning by formulating the problem as a visually grounded sentence verification task (see Figure 1 ). Given a pair scene, statement consisting of a visual scene and a statement about such scene, the task consists in classifying the statement as either true or false. In our setup, a scene consists of 3 images: img 1 , img 2 , img 3 , each including an instance of the target object (e.g., a blue triangle) together with other geometrical shapes of the same type (e.g., triangles of other colours). A statement paired with a scene is of the following form: \"there is exactly one blue triangle that is small in its image in this scene\" or \"there are exactly two blue triangles that are big in their im-ages in this scene\". As we will explain in detail in Sec. 4.1, the dataset is created such that the target object counts as either 'big' or 'small' in only one of the three images in a scene. Arguably, solving the task requires the following two steps of relational reasoning: (1) identifying whether the target object counts as either 'big' or 'small' in each image, and (2) counting how many images include a big/small target. However, in our setup there is no direct supervision for any of these steps. In other words, the training data does not indicate which images contain an object that counts as big/small nor explicitly how many images contain a big/small target.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "3 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "To evaluate reasoning abilities of multimodal models, several datasets of synthetic scenes and questions, such as CLEVR (Johnson et al., 2017) , ShapeWorld (Kuhnle and Copestake, 2017) , and MALeViC (Pezzelle and Fern\u00e1ndez, 2019) have been proposed in recent years. Our work directly builds on them, and particularly on approaches adopting a multi-image setting, such as NLVR (Suhr et al., 2017) and NLVR2 (which, however, contains pairs of natural scenes; Suhr et al., 2019) . In NLVR, in particular, a crowdsourced statement is coupled with a synthetic scene including 3 independent images, and models must verify whether the statement is true or false with respect to the entire visual input. This involves handling phenomena such as counting, negation or comparisons, that require perform relational reasoning over the entire scene, e.g.: There is a black item in every box, There is a tower with yellow base, etc. However, most scene, statement pairs do not challenge models to do the same at the level of the single image (or box), where a low-level understanding of the object(s) of interest (shape, color, etc.) often suffices. Our approach is novel since it requires two steps of relational reasoning: at the level of both the single image and the multi-image context.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 156,
"end": 184,
"text": "(Kuhnle and Copestake, 2017)",
"ref_id": "BIBREF20"
},
{
"start": 199,
"end": 229,
"text": "(Pezzelle and Fern\u00e1ndez, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 376,
"end": 395,
"text": "(Suhr et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 457,
"end": 475,
"text": "Suhr et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Reasoning",
"sec_num": "3.1"
},
{
"text": "Our approach is also related to other work in language and vision involving multiple images. One is the spot-the-difference task: in Jhamtani and Berg-Kirkpatrick (2018) , models are fed with pairs of video-surveillance images that only differ in one detail, and asked to generate text which de-scribes such difference. The same task-with different real-scene datasets-is explored by Forbes et al. (2019) and Su et al. (2017) ; others experiment with pairs of similar images drawn from CLEVR (Johnson et al., 2017) or similar synthetic 3D datasets (Park et al., 2019; Qiu et al., 2020) . This task is akin to ours since it requires a higherlevel reasoning step: systems must reason over the two independent representations to describe what is different. However, in practice, it does not always require semantic understanding (Jhamtani and Berg-Kirkpatrick, 2018); when it does, the changes often involve one object's fixed attribute (color, shape, material, etc.) rather than a contextually-defined property whose applicability depends on the other objects in the image. 2 A similar, partially overlapping task is discriminative captioning: systems are fed with a set of similar images and asked to provide a description that unequivocally refers to a target one. Many approaches have been proposed focusing on synthetic Achlioptas et al., 2019) or natural scenes (Vedantam et al., 2017; Cohn-Gordon et al., 2018; Vered et al., 2019) , very often embedding pragmatic components based on the Rational Speech Acts framework (RSA; Goodman and Frank, 2016). Also in this case, however, differences among images mainly involve intrinsic attributes of the objects rather than relational properties defined at the level of the image.",
"cite_spans": [
{
"start": 133,
"end": 169,
"text": "Jhamtani and Berg-Kirkpatrick (2018)",
"ref_id": "BIBREF17"
},
{
"start": 384,
"end": 404,
"text": "Forbes et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 409,
"end": 425,
"text": "Su et al. (2017)",
"ref_id": "BIBREF30"
},
{
"start": 492,
"end": 514,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 548,
"end": 567,
"text": "(Park et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 568,
"end": 585,
"text": "Qiu et al., 2020)",
"ref_id": null
},
{
"start": 1072,
"end": 1073,
"text": "2",
"ref_id": null
},
{
"start": 1322,
"end": 1346,
"text": "Achlioptas et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1365,
"end": 1388,
"text": "(Vedantam et al., 2017;",
"ref_id": "BIBREF34"
},
{
"start": 1389,
"end": 1414,
"text": "Cohn-Gordon et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 1415,
"end": 1434,
"text": "Vered et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Image Approaches",
"sec_num": "3.2"
},
{
"text": "Our dataset is based on the POS1 dataset from MALeViC (Pezzelle and Fern\u00e1ndez, 2019) , in which images contain 4 to 9 same-shape objects, e.g., squares. Each object is labeled with a groundtruth relative size, indicating whether the object counts as either big or small in that particular context. The label is determined by the following threshold function motivated by cognitive science studies on how humans interpret relative gradable adjectives (Schmidt et al., 2009) :",
"cite_spans": [
{
"start": 54,
"end": 84,
"text": "(Pezzelle and Fern\u00e1ndez, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 450,
"end": 472,
"text": "(Schmidt et al., 2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3POS1 Dataset",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T = Max \u2212 k(Max \u2212 Min)",
"eq_num": "(1)"
}
],
"section": "3POS1 Dataset",
"sec_num": "4.1"
},
{
"text": "where Max and Min represent the areas of the biggest and smallest objects in the image, and k is a positive value < 0.5. 3 Thus, an object with a certain area can count as big in one image and as small in another one. In total, the POS1 dataset contains 20K image, statement datapoints (16K train, 2K val, 2K test), where statements are about the size of a target object based on its unique color: e.g., \"the blue triangle is a small triangle\". The dataset for the present experiments, which we name 3POS1, is constructed as follows: For each image in each split of POS1, we randomly sample two images from that split with the same target object (e.g., a blue triangle) but the opposite ground-truth size (e.g., big). We obtain 20K sets of three images where one size is prevalent, i.e., present in two images, and one is odd, i.e., held by only one image. 4 The sizes big and small are the prevalent ones in 10K cases each, thus the dataset is balanced. Then, for each three-image scene, we generate four logic-based templated statements, two of which are true and two false for the given scene. 5 The only variation in the statements is the target object. The four types of statement are (alongside examples with respect to Figure 1 ): (i) one shape, color small:",
"cite_spans": [
{
"start": 857,
"end": 858,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1226,
"end": 1234,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "3POS1 Dataset",
"sec_num": "4.1"
},
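The size rule in Eqn. 1 can be made concrete with a minimal sketch, assuming only what the text states: the threshold is computed from the areas of the biggest and smallest objects in an image, with k sampled around the best-predictive value 0.29. The function name and the spread of the k distribution below are illustrative, not taken from the MALeViC code.

```python
import random

def size_label(target_area, areas, k=None):
    """Toy version of the threshold rule T = Max - k * (Max - Min) (Eqn. 1):
    an object counts as 'big' when its area exceeds T, computed over all
    object areas in the same image."""
    if k is None:
        # the paper fixes only the mean (0.29); the spread here is illustrative
        k = random.gauss(0.29, 0.05)
    max_a, min_a = max(areas), min(areas)
    threshold = max_a - k * (max_a - min_a)
    return "big" if target_area > threshold else "small"

# The same 40-pixel-area object can flip label depending on its visual context:
print(size_label(40, [10, 20, 40, 45], k=0.29))  # -> 'big'   (T = 34.85)
print(size_label(40, [40, 60, 90, 95], k=0.29))  # -> 'small' (T = 79.05)
```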
{
"text": "\"There is exactly one blue triangle that is small in its image in this scene\" \u2192 True (ii) one shape, color big: \"There is exactly one blue triangle that is big in its image in this scene\" \u2192 False (iii) two shapes, color small: \"There are exactly two blue triangles that are small in their images in this scene\" \u2192 False (iv) two shapes, color big: \"There are exactly two blue triangles that are big in their images in this scene\" \u2192 True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3POS1 Dataset",
"sec_num": "4.1"
},
{
"text": "To tackle the visually grounded sentence verification task, we use two models that achieve state of the art results on the NLVR (Suhr et al., 2017) and NLVR2 (Suhr et al., 2019) tasks, respectively: N2NMN (Hu et al., 2017) and LXMERT (Tan and Bansal, 2019) . The End-to-End Module Network (N2NMN), belongs to the family of modular networks, which treat a sentence as a collection of predefined subproblems (e.g., counting, localization, conjunction, etc.), each handled by a dedicated module. Compared to its direct predecessor NMN , in particular, N2NMN does not require any external supervision (e.g., a parser) to process the sentence into its components. The latter, Learning Cross-Modality Encoder Representations from Transformers (LXMERT), is a transformer-based multimodal architecture pretrained on several language-and-vision tasks; as such, it is claimed to be universal, that is, capable of solving virtually any visual reasoning problem. LXMERT uses BERT (Devlin et al., 2019) to encode the language input; as for the image, it considers the sequence of N salient regions output by Faster R- CNN (Ren et al., 2015) .",
"cite_spans": [
{
"start": 128,
"end": 147,
"text": "(Suhr et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 158,
"end": 177,
"text": "(Suhr et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 205,
"end": 222,
"text": "(Hu et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 234,
"end": 256,
"text": "(Tan and Bansal, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 968,
"end": 989,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1105,
"end": 1127,
"text": "CNN (Ren et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.2"
},
{
"text": "To assess the suitability of these models for the 3POS1 task, we first evaluate them on the original POS1 task where statements are evaluated against a single image. For N2NMN, we use a public implementation, 6 specifically, the code developed for training and an evaluating the model on the CLEVR dataset (Johnson et al., 2017) . For LXMERT, we use a snapshot pre-trained on several multi-modal tasks, 7 that we fine-tune using the training set of POS1. The ceiling performance for this task is 97% accuracy (using a fixed interpretation of the threshold parameter k = 0.29). LXMERT achieves 93.4% accuracy, which outperforms both N2NMN (78.1%) and the models tested by Pezzelle and Fern\u00e1ndez (2019) . This shows the overall advantage of transformer-based architectures over competing methods, in line with previous findings (Devlin et al., 2019) . Moreover, it indicates the capability of LXMERT-which is pre-trained on natural images and language-to deal with synthetic data after fine-tuning (crucially, when not fine-tuned it yields an accuracy of 50%, i.e., random). Based on its performance, we focus on LXMERT in the main experiments and analyses in this paper.",
"cite_spans": [
{
"start": 306,
"end": 328,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 671,
"end": 700,
"text": "Pezzelle and Fern\u00e1ndez (2019)",
"ref_id": "BIBREF23"
},
{
"start": 826,
"end": 847,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.2"
},
{
"text": "We fine-tune LXMERT on the 3POS1 dataset by adapting the method applied by Suhr et al. (2019) for the two-image scenes of NLVR2 to our three-image scenes. More concretely, each datum in 3POS1 is composed of 3 images 6 https://github.com/ronghanghu/n2nmn. 7 Downloaded from http://nlp1.cs.unc.edu/ data/model_LXRT.pth img 0 , img 1 , img 2 , a statement stat, and a ground truth label True or False. Recall, that the visually grounded sentence verification task is to predict a label (True or False), given a representation of the images and the statement. An overview of how this is achieved with LXMERT is shown in Figure 2 . First, visual features are extracted separately for each image with Faster R-CNN (Ren et al., 2015). Then cross-modal representations x i are extracted from the [CLS] from the LXMERT encoder for each image in a scene:",
"cite_spans": [
{
"start": 75,
"end": 93,
"text": "Suhr et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 616,
"end": 624,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "x 0 = lxmert encoder(img 0 , stat) x 1 = lxmert encoder(img 1 , stat) x 2 = lxmert encoder(img 2 , stat) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "For label prediction, we train a classifier on the concatenation of the three image-statement representations (Eqn. 3), followed by a linear layer with learned parameters W and a bias vector b (Eqn. 4), followed by layer normalization (Ba et al., 2016) and a GeLU activation (Hendrycks and Gimpel, 2016) (Eqn. 5), and finally, a sigmoid activation function over a linear layer with learned parameters test accuracy statement type true false one shape, color big 0.868 0.876 two shapes, color big 0.880 0.908 one shape, color small 0.872 0.900 two shapes, color small 0.876 0.924 overall 0.888 ",
"cite_spans": [
{
"start": 235,
"end": 252,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = [x 0 ; x 1 ; x 2 ] (3) z = Wc + b (4) z 1 = LayerNorm(GeLU(z)) (5) y = \u03c3(W 1 z 1 + b 1 )",
"eq_num": "(6)"
}
],
"section": "Experimental Setup",
"sec_num": "4.3"
},
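A minimal PyTorch sketch of the classification head in Eqns. 3-6, assuming 768-dimensional [CLS] features per image-statement pair; the class and argument names are illustrative and do not come from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneVerificationHead(nn.Module):
    """Concatenate the three image-statement [CLS] vectors, apply a linear
    layer, GeLU + LayerNorm, and a final sigmoid unit (Eqns. 3-6)."""
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.fc = nn.Linear(3 * hidden_dim, hidden_dim)   # Eqn. 4: z = Wc + b
        self.norm = nn.LayerNorm(hidden_dim)              # Eqn. 5
        self.out = nn.Linear(hidden_dim, 1)               # Eqn. 6

    def forward(self, x0, x1, x2):
        c = torch.cat([x0, x1, x2], dim=-1)               # Eqn. 3
        z1 = self.norm(F.gelu(self.fc(c)))                # Eqn. 5
        return torch.sigmoid(self.out(z1)).squeeze(-1)    # probability of True

# Usage with dummy [CLS] features (a batch of 2 scenes):
head = SceneVerificationHead()
x0, x1, x2 = (torch.randn(2, 768) for _ in range(3))
print(head(x0, x1, x2).shape)  # torch.Size([2])
```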
{
"text": "The LXMERT encoder and the classifier are finetuned for 12 epochs to prevent overfitting with a batch size 64. The learning rate of the Adam optimizer (Kingma and Ba, 2014) is 5e-5. The finetuning is performed for 5 random seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "Overall, LXMERT achieves a very high accuracy on the task, averaged across 5 runs: 0.8909\u00b10.004 in validation set, 0.8864 \u00b1 0.005 in test set. Moreover, its performance turns out to be fairly stable across various statement types, with the best model run's accuracy (see Table 1 ) ranging from 0.868 (one shape, color big, true) to 0.924 (two shapes, color small, f alse). Interestingly, for all four statement types, the model experiences a slight advantage with false over true statements, even though the dataset was carefully balanced. Taken together, these results indicate that the model, which is pre-trained on natural images, can deal with the synthetic scenes in our dataset after finetuning. This is in line with the claim that off-theshelf transformer-based models can be applied to a wide range of different learning problems and data. At the same time, the model yields random accuracy when not fine-tuned, which reveals that our new dataset is challenging and involves a type of reasoning not captured during pre-training.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In Pezzelle and Fern\u00e1ndez (2019) , models were shown to make more errors when the area of the queried object is closer to the threshold (see Eq. 1). 0.0387 0.3077 0.1961 there is exactly one green circle that is small in its image in this scene F Figure 3 : A sample from the test split of 3POS1, for which LXMERT predicts the incorrect label (True, instead of False). The numbers above the images are the distances of the target object (green circle) from the image-specific threshold. Here, the target object in the leftmost image is very close to that image's threshold value, so it is challenging for the model to detect whether it is big or small. The odd-one-out image is framed in red. Best viewed in color.",
"cite_spans": [
{
"start": 3,
"end": 32,
"text": "Pezzelle and Fern\u00e1ndez (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We check if this is the case also for LXMERT on our 3POS1 task. To do so, we consider the cases where the model gives a wrong prediction. Among the 3 images in a scene, we take the one with the lowest distance from the threshold. We then check whether the model makes more errors when such distance is lower, i.e., when there is at least one image in the scene with a borderline size. As reported in Table 2 , this is indeed the case: 75% of incorrect predictions involve cases where (at least) in one image the target object is close to the threshold (< 0.1) (see Figure 3 , where the leftmost image is borderline). In contrast, only around 3% of the errors involve clear-cut cases, i.e., images where the target object's distance from threshold is \u2265 0.2. As observed by Pezzelle and Fern\u00e1ndez (2019) , this may suggest that the model is genuinely learning to compute the threshold function based on the areas of the relevant objects in the scene. Further support for this is given by the performance of the model on the 15 cases in the test set where the target object has the same area in the three-image scene. These cases could be expected to act as a confound for the model, 9 but LXMERT succeeds in 14/15 cases. Consistently with the error pattern reported above, the missed case contains low-distance objects (the lowest distance is equal to 0.1). In the next section, we more extensively explore this issue.",
"cite_spans": [
{
"start": 772,
"end": 801,
"text": "Pezzelle and Fern\u00e1ndez (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 565,
"end": 573,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our results show that LXMERT achieves a high level of accuracy on our visually-grounded sentence verification task on the three-image 3POS1 dataset. In this section, we investigate how the model may be solving the task. Specifically, we explore what visual information the model attends to within each image and whether the representations learned by the model encode information about the context-dependent threshold that determines what counts as big or small in a given image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis at the Individual Image Level",
"sec_num": "6"
},
{
"text": "Recall that the ground truth labels in our dataset are assigned based on the function in Eqn. 1, which was shown to fit well with human judgements about relative gradable adjectives (Schmidt et al., 2009) . This function computes a threshold value taking into account the biggest and smallest objects in the context of an image. Thus, a possible strategy adopted by the model at the level of individual images could be to identify the target object and reason about the context by focusing on the biggest and smallest objects. We test this hypothesis by checking whether the model pays particular attention to these object types (target, biggest, smallest) or whether its attention is rather uniformly distributed over all regions detected by Faster R- CNN (Ren et al., 2015) . To compute which objects are the most attended, we use the Intersection over Union (IoU) metric (Russakovsky et al., 2015) . We take the attention weights provided by the [CLS] token representation, extracted from the final layer of the best fine-tuned model with frozen parameters. We then use IoU Precision @ K to find the percentage of the labels correctly predicted by the model using the following steps:",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Schmidt et al., 2009)",
"ref_id": "BIBREF29"
},
{
"start": 753,
"end": 775,
"text": "CNN (Ren et al., 2015)",
"ref_id": null
},
{
"start": 874,
"end": 900,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Attention over Key Object Types",
"sec_num": "6.1"
},
{
"text": "1. Extract top-K object proposals: For each correctly predicted label, separately for each of the three images in a scene, we take the object proposals of the image regions detected by Faster R-CNN with K-highest scores in the [CLS] token. We perform the procedure for each attention head of the representation, extracted from the cross-modality encoder for the corresponding visual-language input. We ignore the object proposals related to the background areas of the image, which we identify based on the labels provided by Faster R-CNN. 10",
"cite_spans": [
{
"start": 227,
"end": 232,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Attention over Key Object Types",
"sec_num": "6.1"
},
{
"text": "2. Extract ground-truth bounding boxes: We take the ground-truth bounding boxes of the biggest/the smallest/target objects from all three images in the input scene. 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Attention over Key Object Types",
"sec_num": "6.1"
},
{
"text": "3. Calculate Pairwise IoU: We calculate the pairwise IoU between the top-K object proposals and the ground truth bounding boxes, obtained in Steps 1 and 2. We take the highest IoU value calculated for all these pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Attention over Key Object Types",
"sec_num": "6.1"
},
{
"text": "The IoU precision @ K is the percentage of all the IoU values obtained in Step 3 that are > 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate IoU Precision@K:",
"sec_num": "4."
},
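A small self-contained sketch of the IoU Precision @ K computation described in Steps 1-4, assuming boxes are given as (x1, y1, x2, y2) corner coordinates; the helper names are hypothetical.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def iou_precision_at_k(top_k_proposals, gt_boxes, thresh=0.5):
    """Steps 3-4: for each image, take the highest pairwise IoU between its
    top-K attended proposals and the ground-truth boxes of the object type of
    interest, then report the fraction of images whose best IoU exceeds 0.5."""
    hits = 0
    for proposals, gts in zip(top_k_proposals, gt_boxes):
        best = max((iou(p, g) for p in proposals for g in gts), default=0.0)
        hits += best > thresh
    return hits / len(top_k_proposals)

# Toy usage: one image, K = 1 proposal, one ground-truth box.
print(iou_precision_at_k([[(0, 0, 10, 10)]], [[(1, 1, 9, 9)]]))  # -> 1.0
```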
{
"text": "We also compute a random baseline for all three categories with the same steps, except in Step 1 we randomly select K objects from the 36 detected by Faster R-CNN, instead of using the ones with the highest attention scores. We use the smallest possible value for K = 1, as the most illustrative case in which the metric only 10 The attributes predicted for the regions corresponding to the black background in our scenes could be black or dark.",
"cite_spans": [
{
"start": 326,
"end": 328,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate IoU Precision@K:",
"sec_num": "4."
},
{
"text": "11 We calculate the coordinates of the boxes using objects position and radius provided in the annotation of the POS1 dataset by Pezzelle and Fern\u00e1ndez (2019) .",
"cite_spans": [
{
"start": 129,
"end": 158,
"text": "Pezzelle and Fern\u00e1ndez (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate IoU Precision@K:",
"sec_num": "4."
},
{
"text": "there are exactly two green circles that are big in their images in this scene T Figure 5 : Example of object proposals most attended to by the 9th head of the last layer of the cross-modality encoder. In each image, the model attends to all of the objects except the biggest ones. Simultaneously, in the leftmost image, it also focuses on the green circle, which is the target object in this scene.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculate IoU Precision@K:",
"sec_num": "4."
},
{
"text": "looks at the single object in each image to which the model attends the most. Figure 4 shows the results of the IoU Precision @ K for the 12 attention heads in LXMERT. In particular, Figure 4a shows that many of the attention heads attend to the target object that is queried directly in the input sentence. Figures 4b and 4c demonstrate that the model also looks at the surrounding visual context, which is needed to perform relational reasoning. A comparison of behaviour across the Figures reveals that different attention heads appear to specialise on different object types: attention head 9 learns to attend to the smallest objects while it pays no attention to the biggest objects and less than random attention to the target objects. We also highlight the observed behaviour of attention head 11, which is the only head that reliably attends to the biggest objects. Figure 5 shows an example of the objects attended to by attention head 9 in one sample scene. Here, we can see that the model is primarily attending to the smallest objects in the scene.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 183,
"end": 192,
"text": "Figure 4a",
"ref_id": "FIGREF1"
},
{
"start": 308,
"end": 325,
"text": "Figures 4b and 4c",
"ref_id": "FIGREF1"
},
{
"start": 874,
"end": 882,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculate IoU Precision@K:",
"sec_num": "4."
},
{
"text": "The analysis above showed that the model, besides the target object, also pays attention to key contextual information, particularly to the smallest and biggest objects in an image. These objects are critical to compute the threshold to determine if a target object is big or small relative to the context of an image. To test whether the representations learned by the model implicitly encode information about the context-dependent threshold, we use a diagnostic classifier (Alain and Bengio, 2017; Hupkes et al., 2018; Belinkov and Glass, 2019) . Probing or diagnostic tests are useful tools to better understand the inner workings of deep models. Given a hypothesis about information that may be encoded by a trained model, a probe checks whether such information is accessible by a relatively simple classifier.",
"cite_spans": [
{
"start": 476,
"end": 500,
"text": "(Alain and Bengio, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 501,
"end": 521,
"text": "Hupkes et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 522,
"end": 547,
"text": "Belinkov and Glass, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Knowledge of the Threshold",
"sec_num": "6.2"
},
{
"text": "Concretely, in this experiment we use a linear regression classifier 12 to predict the threshold values for each of the three images in a scene given the cross-modality features learned by the LXMERT encoder (x 0 , x 1 , x 2 in Eqn. 2). The classifier uses the same train/val/test splits of the 3POS1 dataset. The predicted and actual values are displayed in Figure 6 , which shows that a simple linear classifier can predict the threshold values for each image in a scene remarkably accurately (mean squared error on the test set is 6.64e \u2212 05). This confirms that the cross-modality representations learned by the model are representing the threshold in each image. 85.70 2.45 3.10 8.75 Table 3 : Confusion matrix with % of scenes in the test set that are (in)correctly classified by the full LXMERT model for the original sentence verification task and by the linear SVM for the scene configuration task.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 367,
"text": "Figure 6",
"ref_id": "FIGREF2"
},
{
"start": 689,
"end": 696,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implicit Knowledge of the Threshold",
"sec_num": "6.2"
},
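A minimal scikit-learn sketch of this linear regression probe; the arrays below are random placeholders standing in for the cross-modality [CLS] features (Eqn. 2) and the per-image threshold values (Eqn. 1).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Placeholders: in the experiment these are LXMERT [CLS] features and thresholds.
X_train, X_test = rng.normal(size=(1000, 768)), rng.normal(size=(200, 768))
y_train, y_test = rng.uniform(size=1000), rng.uniform(size=200)

probe = LinearRegression().fit(X_train, y_train)
print("probe MSE:", mean_squared_error(y_test, probe.predict(X_test)))
```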
{
"text": "We first investigate whether the representations learned by the model encode the configuration of the scene, that is, whether they are effective to distinguish between scenes where 1 target object counts as small and 2 as big (hence, 1small2big), and vice versa (1big2small). In principle, this counting step is necessary to solve the sentenceverification task (see Sec. 2), and this probe determines whether the model is reasoning at the level of the scene or exploiting other strategies, such as capturing random correlations in the data. We use an SVM classifier with linear kernel (Boser et al., 1992) 14 to probe the representations learned by the model, and find that they are indeed useful for predicting the configurations. Accuracy on the test set is 88.15%, which is well above chance level (50%). As reported in Table 3 , in the large majority of cases (85.7%) a correct prediction in the sentence verification task corresponds to a correct assessment by the diagnostic classifier. This confirms that LXMERT learns representations that encode the configuration of the scene.",
"cite_spans": [
{
"start": 585,
"end": 605,
"text": "(Boser et al., 1992)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 823,
"end": 830,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scene Configuration Classification",
"sec_num": "7.1"
},
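A sketch of the scene-configuration probe along the same lines, with a linear SVM over the concatenated scene representation (Eqn. 3); the arrays and the 0/1 encoding of the 1small2big vs. 1big2small labels are again placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Placeholders for the concatenated representations [x0; x1; x2] and labels.
C_train, C_test = rng.normal(size=(1000, 3 * 768)), rng.normal(size=(200, 3 * 768))
y_train, y_test = rng.integers(0, 2, 1000), rng.integers(0, 2, 200)

svm_probe = LinearSVC().fit(C_train, y_train)
print("configuration accuracy:", svm_probe.score(C_test, y_test))
```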
{
"text": "Our results so far show that the model is able to perform the multi-step sentence verification task with high accuracy and that the representations encode information about different configurations of scenes. However, there is yet no guarantee that the model is able to identify the odd-one-out image (i.e., the image that is not prevalent; see Sec. 4.1). We test this by means of another diagnostic classifier: given a scene representation, the task is to predict the position of the odd-one-out image (hence, OOO), namely image 0, 1, or 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Odd-One-Out Image Identification",
"sec_num": "7.2"
},
{
"text": "We initially experiment with the same type of diagnostic classifier used in the previous analysis: an train valid test OOO 0.8767 0.8771 0.8659 control 0.3385 0.3386 0.3359 Table 4 : Accuracy of the MLP diagnostic classifier on the train/val/test splits of the data on both the OOO and the control setting. Chance level is 0.33 for all splits.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Odd-One-Out Image Identification",
"sec_num": "7.2"
},
{
"text": "SVM with a linear kernel. However, this linear classifier was only able to accurately classify the position of odd-one-out images associated with imagescene instances labelled True, suggesting that the prediction of the position of the odd-one-out cannot be solved by a linear classifier. Therefore, we use a non-linear MLP and also report the results of a control task, where the labels are randomly assigned to the instances (Hewitt and Liang, 2019) . The MLP is a two-layer neural network with 128 units in each layer followed by a ReLU activation function, and finally a learned projection into 3 output units, followed by a softmax normalisation. We train the MLP with a cross-entropy objective function for four epochs using the Adam optimiser with the default learning rate. Table 4 reports the results of the non-linear diagnostic classifier in both the OOO and control settings. As can be seen, while the MLP does not exceed chance level in the control setting, in the OOO it achieves a striking 87.67% accuracy, a similar performance as the one reported in Sec. 7.1. On the one hand, this indicates that the model cannot fit the data when the assigned labels are not related to the actual OOO image positions. On the other hand, these results show that the representations learned by LXMERT do encode information regarding the odd-one-out object in the scene.",
"cite_spans": [
{
"start": 427,
"end": 451,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 782,
"end": 789,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Odd-One-Out Image Identification",
"sec_num": "7.2"
},
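A sketch of the non-linear MLP probe as described in the text (two 128-unit layers with ReLU, a 3-way output trained with cross-entropy and Adam at the default learning rate for four epochs); the input dimensionality and the dummy batch are assumptions.

```python
import torch
import torch.nn as nn

class OOOProbe(nn.Module):
    """Two hidden layers of 128 units with ReLU, projected to 3 logits, one
    per possible position of the odd-one-out image; the softmax is folded
    into the cross-entropy loss below."""
    def __init__(self, input_dim=3 * 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, c):
        return self.net(c)

probe = OOOProbe()
optimiser = torch.optim.Adam(probe.parameters())   # default learning rate
loss_fn = nn.CrossEntropyLoss()

# One dummy training step (the paper trains for four epochs over 3POS1).
c, target = torch.randn(8, 3 * 768), torch.randint(0, 3, (8,))
optimiser.zero_grad()
loss = loss_fn(probe(c), target)
loss.backward()
optimiser.step()
print("loss:", loss.item())
```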
{
"text": "Taken together, these analyses demonstrate that LXMERT reasons over the multi-image scene to perform the sentence-verification task. In particular, it is able to compute the contextually-defined size of the objects in the scene and perform higherlevel reasoning over these representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Odd-One-Out Image Identification",
"sec_num": "7.2"
},
{
"text": "We performed an in-depth analysis of the representations learned by the pretrained multimodal transformer LXMERT when performing relational reasoning. We proposed a multimodal reasoning task that requires multi-step relational reasoning and showed that LXMERT can perform the task with high accuracy. Our analysis reveals that the majority of the errors arise from target objects with contextually-defined sizes close to the threshold, and that LXMERT solves the task by (i) encoding information regarding the size of objects and by (ii) reasoning over that size. Most of its errors concern borderline cases for which the first, image-level reasoning step was shown to be challenging. Overall, our results show that transformer-based architectures pretrained on natural images can generalise to synthetic datasets. We leave to future work an extensive exploration of the extent to which our findings apply to similar tasks and models, for example other vision and langauge transformers (Bugliarello et al., 2021) , as well as to natural multimodal data.",
"cite_spans": [
{
"start": 986,
"end": 1012,
"text": "(Bugliarello et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "LTS, single GPU Tesla V100-SXM2, and NVIDIA driver 455.38, CUDA 10.1, and 24GB RAM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "For N2NMN, we used a computer cluster with Debian 10, a single GPU GeForce 1080Ti, 11GB GDDR5X, NVIDIA driver 450.80.02, CUDA 11.0, 260GB RAM, and Python 3.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We performed a parameter search to determine the best values for training N2NMN 15 on the training split of the POS1 dataset 16 for 3000 iterations of batch size 64 for each combination. We experimented with the following parameters: encoder dropout (0, 0.5, 0.8), decoder dropout (0, 0.5, 0.8), weight decay (5e-5, 5e-4), baseline decay (0.8, 0.99), lambda entropy (0.1, 0.01, 0.001). Their best values (corresponding to the best validation accuracy) are shown in Table 5 . We trained the final model using these parameters for 14,000 iterations with batch size 64. The training took approximately 4 hours. ",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 472,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "B Hyperparameters and training for N2NMN",
"sec_num": null
},
{
"text": "For the fine-tuning of LXMERT, the pre-trained model with standard hyperparameters was used 17 , with only the learning rate changed from 1e-5 to 5e-5, since even with these out-of-the-box parameters, it was able to achieve high performance on the given task. We fine-tuned this model with the POS1 training split using early stopping after 12 epochs, with the parameter number of epochs of BertADAM optimizer set to 150, learning rate 1e-5, and batch size 32 (the only difference in the used hyperparameters during the fine-tuning with 3POS1 was in the batch size 64). We validated the model after each epoch, then the best model was selected, which showed the highest validation ac-15 https://github.com/ronghanghu/n2nmn 16 https://github.com/sandropezzelle/ malevic 17 https://github.com/airsplay/lxmert. git curacy during the 12 epochs, and further evaluated on the test split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Hyperparameters and fine-tuning for LXMERT",
"sec_num": null
},
{
"text": "The running time of each fine-tuning epoch for the POS1 dataset was 3 minutes, while each epoch of fine-tuning with 3POS1 took around 6 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Hyperparameters and fine-tuning for LXMERT",
"sec_num": null
},
{
"text": "The code to generate the data, and to train and evaluate the models, is available at https://github.com/ jig-san/multi-step-size-reasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One notable exception is position(Park et al., 2019; Qiu et al., 2020), which can involve spatial relations of objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To account for gradable adjectives' vagueness, for each image k was randomly sampled from the normal distribution centered on 0.29, the best-predictive value in Schmidt et al.(2009). SeePezzelle and Fern\u00e1ndez (2019) for further details.4 On average, each target image appears 2 times as a distractor in the dataset (min: 0, max: 10). The position of the odd-one-out image in the scene is assigned randomly.5 The odd-one-out is the same for all statements; seeFig. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is identical to the approach followed byTan and Bansal (2019) to finetune LXMERT for NLVR2 classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The target objects have exactly the same area in pixels but each target object has its own context-defined size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Analysis at the Multi-Image LevelIn the previous section, we analysed the model representations at the level of the independent images. Here, we probe the representations with respect to the entire three-image scene. First, we investigate whether the representations encode information on the overall configuration of the scene (Sec. 7.1). Second, we probe their effectiveness in identifying the odd-one-out image in the scene (Sec. 7.2). In both analyses, we use diagnostic classifiers, 13 that take as input the concatenation of the three imagestatement cross-modal representations (Eqn. 3).12 Least squares linear regression from the sklearn.13 Trained on the same splits as the main experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implemented in linear support vector machine classification (LinearSVC) from the sklearn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Elia Bruni for providing feedback on a preliminary version of this work and Dieuwke Hupkes for her advice on probing methods. The work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Shapeglot: Learning language for shape differentiation",
"authors": [
{
"first": "Panos",
"middle": [],
"last": "Achlioptas",
"suffix": ""
},
{
"first": "Judy",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Hawkins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Leonidas",
"middle": [
"J"
],
"last": "Guibas",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "8938--8947",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Panos Achlioptas, Judy Fan, Robert Hawkins, Noah Goodman, and Leonidas J Guibas. 2019. Shapeglot: Learning language for shape differentiation. In Pro- ceedings of the IEEE International Conference on Computer Vision, pages 8938-8947.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Understanding intermediate layers using linear classifier probes",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Alain",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR) -Workshop Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Alain and Yoshua Bengio. 2017. Under- standing intermediate layers using linear classifier probes. In International Conference on Learning Representations (ICLR) -Workshop Track.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reasoning about pragmatics with neural listeners and speakers",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1173--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural module networks",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 39-48.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "VQA: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2425--2433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question an- swering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Analysis methods in neural language processing: A survey",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "49--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A training algorithm for optimal margin classifiers",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [
"M"
],
"last": "Boser",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [
"N"
],
"last": "Guyon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the fifth annual workshop on Computational learning theory",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. 1992. A training algorithm for optimal mar- gin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pages 144-152.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs",
"authors": [
{
"first": "Emanuele",
"middle": [],
"last": "Bugliarello",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, and Desmond Elliott. 2021. Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs. Transactions of the Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pragmatically informative image captioning with character-level inference",
"authors": [
{
"first": "Reuben",
"middle": [],
"last": "Cohn-Gordon",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "439--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reuben Cohn-Gordon, Noah Goodman, and Christo- pher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 439-443.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural naturalist: Generating fine-grained image comparisons",
"authors": [
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Kaeser-Chen",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "708--717",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, and Serge Belongie. 2019. Neural natural- ist: Generating fine-grained image comparisons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 708- 717.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pragmatic language interpretation as probabilistic inference",
"authors": [
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Frank",
"suffix": ""
}
],
"year": 2016,
"venue": "Trends in cognitive sciences",
"volume": "20",
"issue": "11",
"pages": "818--829",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah D Goodman and Michael C Frank. 2016. Prag- matic language interpretation as probabilistic infer- ence. Trends in cognitive sciences, 20(11):818-829.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Gaussian error linear units (gelus)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08415"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaus- sian error linear units (gelus). arXiv preprint arXiv:1606.08415.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Designing and interpreting probes with control tasks",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2733--2743",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1275"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to reason: End-to-end module networks for visual question answering",
"authors": [
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "804--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 804-813.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to describe differences between pairs of similar images",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Jhamtani",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4024--4034",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harsh Jhamtani and Taylor Berg-Kirkpatrick. 2018. Learning to describe differences between pairs of similar images. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 4024-4034.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Bharath",
"middle": [],
"last": "Hariharan",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "2901--2910",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for com- positional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 2901- 2910.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Shape-World: A new test methodology for multimodal language understanding",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Kuhnle",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04517"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Kuhnle and Ann Copestake. 2017. Shape- World: A new test methodology for multi- modal language understanding. arXiv preprint arXiv:1704.04517.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A multiworld approach to question answering about realworld scenes based on uncertain input",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Fritz",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1682--1690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mateusz Malinowski and Mario Fritz. 2014. A multi- world approach to question answering about real- world scenes based on uncertain input. In Advances in neural information processing systems, pages 1682-1690.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Robust change captioning",
"authors": [
{
"first": "Dong Huk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "4624--4633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Huk Park, Trevor Darrell, and Anna Rohrbach. 2019. Robust change captioning. In Proceedings of the IEEE International Conference on Computer Vision, pages 4624-4633.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Is the red square big? MALeViC: Modeling adjectives leveraging visual contexts",
"authors": [
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2858--2869",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandro Pezzelle and Raquel Fern\u00e1ndez. 2019. Is the red square big? MALeViC: Modeling adjectives leveraging visual contexts. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2858-2869.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Kenji Iwata, and Hirokatsu Kataoka. 2020. 3d-aware scene change captioning from multiview images",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Satoh",
"suffix": ""
},
{
"first": "Ryota",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": null,
"venue": "IEEE Robotics and Automation Letters",
"volume": "5",
"issue": "3",
"pages": "4743--4750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Qiu, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. 2020. 3d-aware scene change captioning from multiview images. IEEE Robotics and Automation Letters, 5(3):4743-4750.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Faster r-cnn: Towards real-time object detection with region proposal networks",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "Shaoqing Ren",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time ob- ject detection with region proposal networks. In Advances in neural information processing systems, pages 91-99.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Building and solving odd-oneout classification problems: A systematic approach",
"authors": [
{
"first": "Philippe",
"middle": [
"E"
],
"last": "Ruiz",
"suffix": ""
}
],
"year": 2011,
"venue": "Intelligence",
"volume": "39",
"issue": "5",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe E Ruiz. 2011. Building and solving odd-one- out classification problems: A systematic approach. Intelligence, 39(5):342-350.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computer Vision",
"volume": "115",
"issue": "3",
"pages": "211--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition chal- lenge. International Journal of Computer Vision, 115(3):211-252.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A simple neural network module for relational reasoning",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Raposo",
"suffix": ""
},
{
"first": "David",
"middle": [
"G"
],
"last": "Barrett",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Battaglia",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4967--4976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, David Raposo, David G Barrett, Ma- teusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural net- work module for relational reasoning. In Advances in neural information processing systems, pages 4967-4976.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How tall is tall? Compositionality, statistics, and gradable adjectives",
"authors": [
{
"first": "Lauren",
"middle": [
"A"
],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Barner",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 31st annual conference of the cognitive science society",
"volume": "",
"issue": "",
"pages": "2759--2764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren A Schmidt, Noah D Goodman, David Barner, and Joshua B Tenenbaum. 2009. How tall is tall? Compositionality, statistics, and gradable adjectives. In Proceedings of the 31st annual conference of the cognitive science society, pages 2759-2764. Cite- seer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Reasoning about finegrained attribute phrases using reference games",
"authors": [
{
"first": "Jong-Chyi",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chenyun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Huaizu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Subhransu",
"middle": [],
"last": "Maji",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "418--427",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jong-Chyi Su, Chenyun Wu, Huaizu Jiang, and Subhransu Maji. 2017. Reasoning about fine- grained attribute phrases using reference games. In Proceedings of the IEEE International Conference on Computer Vision, pages 418-427.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A corpus of natural language for visual reasoning",
"authors": [
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "217--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual rea- soning. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 217-223.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A corpus for reasoning about natural language grounded in photographs",
"authors": [
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ally",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Iris",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huajun",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6418--6428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in pho- tographs. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 6418-6428.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "LXMERT: Learning cross-modality encoder representations from transformers",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5103--5114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from trans- formers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5103-5114.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Context-aware captions from context-agnostic supervision",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Chechik",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "251--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 251-260.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Joint optimization for cooperative image captioning",
"authors": [
{
"first": "Gilad",
"middle": [],
"last": "Vered",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Oren",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Atzmon",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Chechik",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "8898--8907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gilad Vered, Gal Oren, Yuval Atzmon, and Gal Chechik. 2019. Joint optimization for cooperative image captioning. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 8898-8907.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "one green square that is big in its image in this scene Overview of our visually-grounded sentence verification model. Given a three-image scene and a statement, LXMERT encodes each imagestatement pair separately, from which a single cross-modal representation is extracted from the special [CLS] token (shown in yellow). These [CLS] representations are concatenated and propagated through a non-linear classifier to predict whether the statement accurately describes the scene.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Intersection over Union Precision at K=1, per attention head (in the x-axis), for the target object in an image (a), the smallest object in an image (b), and the largest object in an image (c).",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Comparison of threshold values predicted by the linear regression model (blue dots) with the actual threshold for each of the 6000 test images (orange dots). Here, the real target values are sorted in ascending order, and the predicted values are sorted with respect to the corresponding targets' indices. The thresholds are normalized by the area of the one image, with the square root transformation. Best viewed in color.",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "LXMERT results on the test set of 3POS1 by the best model's run, split by statement type.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Analysis of LXMERT's errors with respect to target object's distance from the threshold. Threshold distance refers to the lowest value in the visual scene.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "Best parameters for N2NMN model, found with a grid search.",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}