{
"paper_id": "D15-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:25:55.465780Z"
},
"title": "Combining Geometric, Textual and Visual Features for Predicting Prepositions in Image Descriptions",
"authors": [
{
"first": "Arnau",
"middle": [],
"last": "Ramisa",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Josiah",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Ying",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "LIRIS",
"institution": "",
"location": {
"settlement": "\u00c9cole Centrale de Lyon",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dellandrea",
"suffix": "",
"affiliation": {
"laboratory": "LIRIS",
"institution": "",
"location": {
"settlement": "\u00c9cole Centrale de Lyon",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Francesc",
"middle": [],
"last": "Moreno-Noguer",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate the role that geometric, textual and visual features play in the task of predicting a preposition that links two visual entities depicted in an image. The task is an important part of the subsequent process of generating image descriptions. We explore the prediction of prepositions for a pair of entities, both in the case when the labels of such entities are known and unknown. In all situations we found clear evidence that all three features contribute to the prediction task.",
"pdf_parse": {
"paper_id": "D15-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate the role that geometric, textual and visual features play in the task of predicting a preposition that links two visual entities depicted in an image. The task is an important part of the subsequent process of generating image descriptions. We explore the prediction of prepositions for a pair of entities, both in the case when the labels of such entities are known and unknown. In all situations we found clear evidence that all three features contribute to the prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, there has been an increased interest in the task of automatic generation of natural language image descriptions at sentence level, compared to earlier work that annotates images with a laundry list of terms (Duygulu et al., 2002) . The task is important in that such detailed annotations are more informative and discriminative compared to isolated textual labels, and are essential for improved text and image retrieval.",
"cite_spans": [
{
"start": 224,
"end": 246,
"text": "(Duygulu et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most standard approach to generating such descriptions involves first detecting instances of pre-defined concepts in the image, and then reasoning about these concepts to generate image descriptions e.g. (Kulkarni et al., 2011; Yang et al., 2011) . Our work is also based on this paradigm. However, we assume that object instances have already been pre-detected by visual recognisers, and concentrate on a specific subtask of description generation. More specifically, given two visual entity instances where one could potentially act as a modifier to the other, we address the problem of identifying the appropriate preposition to connect these two entities (Figure 1 ). The inferred prepositional relations will subsequently act as an *A. Ramisa and J. Wang contributed equally to this work. The main contribution of this paper is therefore to learn to predict the most suitable preposition given its context, and to learn this jointly from images and their descriptions. In particular, we concentrate on learning from (i) geometric relations between two visual entities from image annotations; (ii) textual features from textual descriptions; (iii) visual features from images. Previous work exists (Yang et al., 2011) that uses text corpora to 'guess' the prepositions given the context without considering the appropriate spatial relations between the entities in the image, signifying a gap between visual content and its corresponding description. For example, although person on horse might commonly occur in text corpora, a particular image might actually depict a person standing beside a horse. On the other hand, work that does consider the image content for generating prepositions (Kulkarni et al., 2011; Elliott and Keller, 2013) map geometric relations to a limited set of prepositions using manually defined rules, not as humans would naturally use them with a richer vocabulary. We would like to have the best of both worlds, by considering image content as well as textual information to select the preposition best used to express the relation between two entities. Our hypothesis is that the combination of geometric, textual and visual features can help with the task of predicting the most appropriate preposition, since incorporating geometric and visual information should help generate a relation that is consistent with the image content, whilst incorporating textual information should help generate a description that is consistent with natural language.",
"cite_spans": [
{
"start": 208,
"end": 231,
"text": "(Kulkarni et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 232,
"end": 250,
"text": "Yang et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 1206,
"end": 1225,
"text": "(Yang et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 1699,
"end": 1722,
"text": "(Kulkarni et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 1723,
"end": 1748,
"text": "Elliott and Keller, 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 663,
"end": 672,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Natural Language Processing Community has significant interest in different aspects of prepositions. The Prepositions Project (Litkowski and Hargraves, 2005) analysed and produced a lexicon of English prepositions and their senses, and subsequently used them in the Word Sense Disambiguation of Prepositions task in SemEval-2007 (Litkowski and Hargraves, 2007) .",
"cite_spans": [
{
"start": 130,
"end": 161,
"text": "(Litkowski and Hargraves, 2005)",
"ref_id": "BIBREF13"
},
{
"start": 320,
"end": 347,
"text": "SemEval-2007 (Litkowski and",
"ref_id": null
},
{
"start": 348,
"end": 364,
"text": "Hargraves, 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In SemEval-2012, Kordjamshidi et al. (2012) introduce the more fine-grained task of spatial role labelling to detect and classify spatial relations expressed by triples (trajector, landmark, spatial indicator). In the latest edition of SemEval-2015, the SpaceEval task (Pustejovsky et al., 2015) introduce further tasks of identifying spatial and motion signals, as well as spatial configurations/orientation and motion relation.",
"cite_spans": [
{
"start": 17,
"end": 43,
"text": "Kordjamshidi et al. (2012)",
"ref_id": "BIBREF8"
},
{
"start": 269,
"end": 295,
"text": "(Pustejovsky et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In work that links prepositions more strongly to image content, Gupta and Davis (2008) model prepositions implicitly to disambiguate image regions, rather than for predicting prepositions. Their work also require manual annotation of prepositional relations. In image description generation work, Kulkarni et al. (2011) manually map spatial relations to pre-defined prepositions, whilst Yang et al. (2011) predict prepositions from largescale text corpora solely based on the complement term, with the prepositions constrained to describing scenes (on the street). Elliott and Keller (2013) define a list of eight spatial relations and their corresponding prepositional term for sentence generation. Although they also present alternative models that use text corpora for descriptions that are more human-like, they are limited to verbs and do not cover prepositions. Le et al. (2014) exam-ine prepositions modifying human actions (verbs), and conclude that these relate to positional information to a certain extent. Other related work include training classifiers for prepositions with spatial relation features to improve image segmentation and detection (Fidler et al., 2013) ; this work is however limited to four prepositions.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "Gupta and Davis (2008)",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 319,
"text": "Kulkarni et al. (2011)",
"ref_id": "BIBREF10"
},
{
"start": 387,
"end": 405,
"text": "Yang et al. (2011)",
"ref_id": "BIBREF19"
},
{
"start": 868,
"end": 884,
"text": "Le et al. (2014)",
"ref_id": "BIBREF11"
},
{
"start": 1158,
"end": 1179,
"text": "(Fidler et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We formally define the task of predicting prepositions as follows: Let P be the set of possible prepositions. Let L be the set of possible landmark entities acting as the complement of a preposition, and let T be the set of possible trajector entities modified by the prepositional phrase comprising a preposition and its landmark 1 . For example, for the phrase person on bicycle, on would be the preposition, bicycle the landmark, and person the trajector. For this paper, we constrain trajector and landmark to be entities that are visually identifiable in an image since we are interested in discovering the role of visual features and geometric configurations between two entities in the preposition prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "D = {d 1 , d 2 , ..., d N } be the set of N ob- servations, where each d i for i = 1, 2..., N is rep- resented by d i = (x i , y i , r i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": ", where x i and y i are the feature representations for the trajector and the landmark entities respectively, and r i the relative geometric feature between the two visual entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
{
"text": "Given d i , the objective of the preposition prediction task is to produce a ranked list of prepositions (p 1 , p 2 , ...p |P | ) according to how likely they are to express the appropriate spatial relation between the given trajector and landmark entities that are either known (Section 6.1) or only represented by visual features (Section 6.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3"
},
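The ranking above can be summarised by a single prediction rule. The following is a sketch in LaTeX notation; the score function s is our shorthand for the classifier outputs used in Section 6, not a formula given in the paper:

```latex
% Sketch: prepositions are ranked by a learned score s; the top-ranked
% prediction for observation d_i = (x_i, y_i, r_i) is
\hat{p}_i \;=\; \operatorname*{arg\,max}_{p \in P} \; s\!\left(p \mid x_i, y_i, r_i\right)
```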
{
"text": "We base the preposition prediction task on two large-scale image datasets with human authored descriptions, namely MSCOCO (Lin et al., 2014) and Flickr30k (Young et al., 2014; Plummer et al., 2015) . To extract instances of triples (trajector, preposition, landmark) from image descriptions, we used the Neural Network, transition-based dependency parser of Chen and Manning (2014) as implemented in Stanford CoreNLP . Dependencies signifying prepositional Bounding Box feature (number of dimensions)",
"cite_spans": [
{
"start": 122,
"end": 140,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 155,
"end": 175,
"text": "(Young et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 176,
"end": 197,
"text": "Plummer et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 358,
"end": 381,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Vector (x, y) from centroid of trajector to centroid of landmark, normalised by the size of the bounding box enclosing both objects (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Area of trajector bounding box relative to landmark (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Aspect ratio of each bounding box 2\u2022 Area of each bounding box w.r.t. enclosing box 2\u2022 Intersection over union of the bounding boxes (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Euclidean distance between the trajector and landmark bounding boxes, normalised by the image size (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Area of each bounding box w.r.t. the whole image (2) We consider two variants of trajector and landmark terms in our experiments: (i) using the provided high level categories as terms (80 for MSCOCO and 8 for Flickr30k); (ii) using the terms occurring in the sentence directly, which constitute a bigger and more realistic challenge. For Flickr30k, the descriptive phrases may cause data sparseness (the furry, black and white dog). Thus, we extracted the lemmatised head word of each phrase, using a 'semantic head' variant of the head finding rules of Collins (2003) in Stanford CoreNLP. Entities from the same coreference chain are denoted with a common head noun chosen by majority vote among the group, with ties broken by the most frequent head noun in the corpus, and further ties broken at random.",
"cite_spans": [
{
"start": 556,
"end": 570,
"text": "Collins (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
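As a rough illustration of this extraction pipeline, the sketch below pulls (trajector, preposition, landmark) triples with lemmatised head words from a parsed sentence. The paper uses the Chen and Manning (2014) parser in Stanford CoreNLP with Collins-style head rules; spaCy is used here purely as a convenient stand-in, so the dependency labels and the example output are only indicative:

```python
# Sketch: extract (trajector, preposition, landmark) triples from a
# description. spaCy stands in for the Stanford CoreNLP pipeline used
# in the paper; its English models label prepositions as "prep" and
# their complements as "pobj".
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(sentence):
    """Return (trajector, preposition, landmark) triples, lemmatised."""
    triples = []
    for tok in nlp(sentence):
        # A preposition whose head is a noun, e.g. "person on bicycle".
        if tok.dep_ == "prep" and tok.head.pos_ in ("NOUN", "PROPN"):
            for child in tok.children:
                if child.dep_ == "pobj":  # the landmark (complement)
                    triples.append((tok.head.lemma_, tok.lower_, child.lemma_))
    return triples

print(extract_triples("A man on a horse rides past the furry black and white dog."))
# e.g. [('man', 'on', 'horse')] -- head words, not full descriptive phrases
```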
{
"text": "Geometric Features: Geometric features between a trajector and a landmark entity are derived from bounding box annotations. We defined an 11-dimensional vector of bounding box features, covering geometric relations such as distance, orientation, relative bounding box sizes and overlaps between bounding boxes (Table 1) . We chose to use continuous features as we felt these may be more powerful and expressive compared to discrete, binned features. Despite some of these features being correlated, we left it to the classifier to determine the most useful features for discrimination without having to withhold any unnecessarily.",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 319,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
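A sketch of the 11-dimensional geometric feature vector of Table 1 follows. Boxes are assumed to be given as (x, y, w, h) in pixels; the exact normalisation choices (e.g. measuring the inter-box distance between centroids and normalising by the image diagonal) are our assumptions where the paper leaves details unspecified:

```python
# Sketch of the 11-dimensional geometric feature vector of Table 1.
import numpy as np

def geometric_features(traj, land, img_w, img_h):
    tx, ty, tw, th = traj
    lx, ly, lw, lh = land

    # Box enclosing both objects.
    ex0, ey0 = min(tx, lx), min(ty, ly)
    ex1, ey1 = max(tx + tw, lx + lw), max(ty + th, ly + lh)
    ew, eh = ex1 - ex0, ey1 - ey0

    # Vector from trajector centroid to landmark centroid, normalised
    # by the size of the enclosing box (2 dims).
    dx = ((lx + lw / 2.0) - (tx + tw / 2.0)) / ew
    dy = ((ly + lh / 2.0) - (ty + th / 2.0)) / eh

    # Area of trajector box relative to landmark box (1 dim).
    rel_area = (tw * th) / (lw * lh)

    # Aspect ratio of each box (2 dims).
    ar_t, ar_l = tw / th, lw / lh

    # Area of each box w.r.t. the enclosing box (2 dims).
    at_enc, al_enc = (tw * th) / (ew * eh), (lw * lh) / (ew * eh)

    # Intersection over union (1 dim).
    ix = max(0.0, min(tx + tw, lx + lw) - max(tx, lx))
    iy = max(0.0, min(ty + th, ly + lh) - max(ty, ly))
    inter = ix * iy
    iou = inter / (tw * th + lw * lh - inter)

    # Distance between the boxes, normalised by image size (1 dim);
    # centroid distance over the image diagonal is our assumption.
    dist = np.hypot(dx * ew, dy * eh) / np.hypot(img_w, img_h)

    # Area of each box w.r.t. the whole image (2 dims).
    at_img, al_img = (tw * th) / (img_w * img_h), (lw * lh) / (img_w * img_h)

    return np.array([dx, dy, rel_area, ar_t, ar_l, at_enc, al_enc,
                     iou, dist, at_img, al_img])
```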
{
"text": "Textual features: We consider two textual features to encode the trajector and landmark terms w t i and w l i . The first feature is a one-hot indicator vector x I i and y I i for the trajector and landmark respectively, where x I i,t = 1 if index t corresponds to the trajector term w t i and 0 elsewhere (and similarly for landmark). As data sparseness may be an issue, we also explore an alternative textual feature which encodes the terms as word2vec embeddings (Mikolov et al., 2013) . This encodes each term as a vector such that semantically related terms are close in the vector space. This allows information to be transferred across semantically related terms during training (e.g. information from person on boat can help predict the preposition that mediates man and boat).",
"cite_spans": [
{
"start": 466,
"end": 488,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
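The two textual encodings can be sketched as follows. The indicator vectors follow the description above directly; the embedding variant assumes pretrained word2vec vectors loaded with gensim, and the model path below is hypothetical:

```python
# Sketch of the two textual encodings for trajector/landmark terms.
import numpy as np
from gensim.models import KeyedVectors

vocab = sorted({"boat", "horse", "man", "person"})   # toy term vocabulary
index = {term: i for i, term in enumerate(vocab)}

def one_hot(term):
    v = np.zeros(len(vocab))
    v[index[term]] = 1.0
    return v

# Hypothetical path to pretrained word2vec vectors.
w2v = KeyedVectors.load_word2vec_format("word2vec-vectors.bin", binary=True)

def embed(term):
    # Semantically related terms (person/man) lie close in this space.
    return w2v[term]

# Concatenated textual feature for a (trajector, landmark) pair:
x_ind = np.concatenate([one_hot("man"), one_hot("boat")])   # IND variant
x_w2v = np.concatenate([embed("man"), embed("boat")])       # W2V variant
```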
{
"text": "Image Features: While it is ideal to have vision systems produce a firm decision about the visual entity instance detected in an image, in reality it may be beneficial to defer the decision by allowing several possible interpretations of the instance being detected. In such cases, we will not have a single concept label for the entity, but instead a high-level visual representation. For this scenario, we extracted visual representations from the final layer of a Convolutional Neural Network trained on ImageNet (Krizhevsky et al., 2012) , and used them as representations for entity instances in place of textual features.",
"cite_spans": [
{
"start": 516,
"end": 541,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
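A sketch of this feature extraction, cropping an entity's bounding box and taking the activations of the last fully connected layer before the classifier. The paper uses the network of Krizhevsky et al. (2012) trained on ImageNet; torchvision's pretrained AlexNet stands in here, so the 4096-dimensional layer choice is an assumption:

```python
# Sketch: a high-level visual representation for a cropped entity from a
# CNN pretrained on ImageNet (torchvision AlexNet as a stand-in).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
# Keep everything up to the penultimate fully connected layer (4096-d).
feature_extractor = torch.nn.Sequential(
    model.features, model.avgpool, torch.nn.Flatten(),
    *list(model.classifier.children())[:-1])

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

def entity_feature(image_path, box):
    """box = (left, upper, right, lower): the entity's bounding box."""
    crop = Image.open(image_path).convert("RGB").crop(box)
    with torch.no_grad():
        return feature_extractor(preprocess(crop).unsqueeze(0)).squeeze(0)
```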
{
"text": "Here we highlight interesting findings from experiments performed for the task of predicting prepositions for two different scenarios (Sections 6.1 and 6.2). Detailed results can be found in the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Prediction",
"sec_num": "6"
},
{
"text": "Evaluation metrics. As there may be more than one 'correct' preposition for a given context (person on horse and person atop horse), we propose the mean rank of the correct preposition as the main evaluation metric, as it accommodates Table 2 : Top: Mean rank of the correct preposition (lower is better). Bottom: Accuracy with different feature configurations. All results are with the original trajector/landmark terms from descriptions. IND stands for Indicator Vectors, W2V for Word2Vec, and GF for Geometric Features. As baseline we rank the prepositions by their relative frequencies in the training dataset. Figure 2 : Normalised confusion matrices on the balanced test subsets for the two datasets (left: MSCOCO, right: Flickr30k), using geometric features and word2vec with the original terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": null
},
{
"start": 615,
"end": 623,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preposition Prediction",
"sec_num": "6"
},
{
"text": "multiple possible prepositions that may be equally valid. For completeness we also report classification accuracy results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Prediction",
"sec_num": "6"
},
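A sketch of the mean-rank metric, assuming an array of per-preposition classifier scores; the frequency baseline corresponds to giving every instance the same score row (the training-set preposition frequencies):

```python
# Sketch of the mean-rank metric: the average 1-based position of the
# correct preposition when prepositions are sorted by descending score.
import numpy as np

def mean_rank(scores, gold):
    """scores: (N, |P|) per-preposition scores; gold: (N,) correct indices."""
    order = np.argsort(-scores, axis=1)                     # best first
    ranks = np.argmax(order == gold[:, None], axis=1) + 1   # 1-based
    return ranks.mean()

scores = np.array([[0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7]])
gold = np.array([2, 0])
print(mean_rank(scores, gold))   # (2 + 3) / 2 = 2.5
```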
{
"text": "Baseline. As baseline, we rank the prepositions by their relative frequencies in the training dataset. We found this to be a sufficiently strong baseline, as ubiquitous prepositions such as with and in tend to occur frequently in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Prediction",
"sec_num": "6"
},
{
"text": "In this section, we focus on predicting the best preposition given the geometric and textual features of the trajector and landmark entities. This simulates the scenario of a vision detector providing a firm decision on the concept label for the detected entities. We use a multi-class logistic regression classifier (Fan et al., 2008) , and concatenate multiple features into a single vector. We compare high-level categories and terms from descriptions as trajector/landmark labels. Prepositions are ranked in descending order of the classifier output scores.",
"cite_spans": [
{
"start": 317,
"end": 335,
"text": "(Fan et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking with known entity labels",
"sec_num": "6.1"
},
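A minimal sketch of this ranking classifier, with scikit-learn standing in for the LIBLINEAR package of Fan et al. (2008); the feature sizes and five-class label set are toy stand-ins, and class_weight="balanced" anticipates the reweighting used for the balanced test set described below:

```python
# Minimal sketch of the ranking classifier: multi-class logistic
# regression over concatenated features, with prepositions ranked by
# predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11 + 600))   # geometric (11) + two w2v terms (toy)
y = rng.integers(0, 5, size=200)       # toy preposition class labels

# class_weight="balanced" reweights training samples by class frequency.
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)

scores = clf.predict_proba(X)            # shape (N, |P|)
ranking = np.argsort(-scores, axis=1)    # prepositions, best-scored first
```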
{
"text": "We found a few prepositions (e.g. with) dominating the datasets. Thus, we also evaluated our models on a balanced subset where each preposition is limited to a maximum of 50 random test samples. The training samples are weighted according to their class frequency in order to train non-biased classifiers to predict this balanced test set. The results on both the original and balanced Table 2 , the system performed significantly better than the baseline in most cases. In general, geometric features perform better than the baseline, and when combined with text features further improve the results. In a per-preposition analysis, the geometric features show up to 14% improvement in the mean rank for Flickr30k.",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking with known entity labels",
"sec_num": "6.1"
},
{
"text": "In feature ablation tests on MSCOCO (balanced), we found the y component of the trajector to landmark vector to be important to most prepositions, especially for under, above and on. Other important geometric features include the final two features in Table 1 (Euclidean distance and area).",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Ranking with known entity labels",
"sec_num": "6.1"
},
{
"text": "The benefit of the word2vec text feature is clear when moving from high-level categories to original terms from descriptions, where it consistently improves the mean rank (up to 25%). In contrast, the indicator vectors resulted in a less significant improvement, if not worse performance, when using the sparse original terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking with known entity labels",
"sec_num": "6.1"
},
{
"text": "We also evaluated the relative importance of the trajector and the landmark, by withholding either from the textual feature vector. We found that the landmark plays a larger role in preposition prediction as omitting the trajector produces 10%-30% better results than omitting the landmark. Figure 2 shows the confusion matrices of the best-performing systems. Note that many mistakes arise from prepositions that are often equally valid (e.g. predicting near instead of next to).",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking with known entity labels",
"sec_num": "6.1"
},
{
"text": "Here, we investigate the task of jointly predicting prepositions with the entity labels given geometric and visual features (without the trajector and landmark labels). This simulates the scenario of a vision detector output. For this structured prediction task, we use a 3-node chain CRF model 2 , with the centre node representing the preposition and the two end nodes representing the trajector and landmark. We use image features for the entity nodes, and geometric features for the preposition node (Section 5). Due to computational constraints only high-level category labels are used, but as seen in Section 6.1, this may actually be hurting the performance. Table 3 shows the results of the structured model used to predict the most likely (trajector, preposition, landmark) combination. To facilitate comparison with Section 6.1, column Prep (known labels) shows the results with the trajector and landmark labels as known conditions and fixed to the correct values, thus only needing to predict the preposition. The model achieved excellent performance considering the added difficulty of the task.",
"cite_spans": [],
"ref_spans": [
{
"start": 666,
"end": 673,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ranking with unknown entity labels",
"sec_num": "6.2"
},
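A sketch of MAP inference in this 3-node chain, with toy potentials standing in for the learned ones (the paper trains the model with the UGM toolbox; see footnote 2). In the paper, unary scores for the two entity nodes come from image features and the preposition node from geometric features; the chain structure makes exact maximisation cheap:

```python
# Sketch of exact MAP inference in the 3-node chain
# trajector -- preposition -- landmark, with toy random potentials.
import numpy as np

nT, nP, nL = 4, 6, 4                       # toy label-set sizes
rng = np.random.default_rng(1)
u_t, u_p, u_l = rng.normal(size=nT), rng.normal(size=nP), rng.normal(size=nL)
w_tp = rng.normal(size=(nT, nP))           # trajector-preposition pairwise
w_pl = rng.normal(size=(nP, nL))           # preposition-landmark pairwise

# score(t, p, l) = u_t[t] + u_p[p] + u_l[l] + w_tp[t, p] + w_pl[p, l].
# Because the graph is a chain, maximise over t and l independently
# for each preposition p, then maximise over p.
best_t = np.argmax(u_t[:, None] + w_tp, axis=0)          # best t per p
best_l = np.argmax(w_pl + u_l[None, :], axis=1)          # best l per p
p_scores = (u_p + u_t[best_t] + w_tp[best_t, np.arange(nP)]
            + u_l[best_l] + w_pl[np.arange(nP), best_l])

p_star = int(np.argmax(p_scores))
t_star, l_star = int(best_t[p_star]), int(best_l[p_star])
print(t_star, p_star, l_star)   # MAP (trajector, preposition, landmark)
```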
{
"text": "We explored the role of geometric, textual and visual features in learning to predict a preposition given two bounding box instances in an image, and found clear evidence that all three features play a part in the task. Our system performs well even with uncertainties surrounding the entity labels. Future work could include nonprepositional terms like verbs, having prepositions modify verbs, adding word2vec embeddings to the structured prediction model, and providing stronger features -whether textual, visual or geometric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "The terminologies trajector and landmark are adopted from spatial role labelling(Kordjamshidi et al., 2011)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the toolbox by Mark Schmidt: http://www. cs.ubc.ca/\u02dcschmidtm/Software/UGM.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the ERA-Net CHIST-ERA D2K VisualSense project (Spanish MINECO PCIN-2013-047, UK EPSRC EP/K019082/1 and French ANR Grant ANR-12-CHRI-0002-04) and the Spanish MINECO RobInstruct project TIN2014-58178-R. Ying Lu was also supported by the China Scholarship Council.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 740-750, Doha, Qatar, Octo- ber. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Head-driven statistical models for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Lin- guistics, 29(4):589-637, December.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary",
"authors": [
{
"first": "Pinar",
"middle": [],
"last": "Duygulu",
"suffix": ""
},
{
"first": "Kobus",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Nando De Freitas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "97--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinar Duygulu, Kobus Barnard, Nando de Freitas, and David A. Forsyth. 2002. Object recognition as ma- chine translation: Learning a lexicon for a fixed im- age vocabulary. In Proceedings of the European Conference on Computer Vision, pages 97-112.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Image description using visual dependency representations",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1292--1302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Frank Keller. 2013. Image de- scription using visual dependency representations. In Proceedings of the 2013 Conference on Em- pirical Methods in Natural Language Processing, pages 1292-1302, Seattle, Washington, USA, Oc- tober. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LIBLINEAR: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A sentence is worth a thousand pixels",
"authors": [
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanja Fidler, Abhishek Sharma, and Raquel Urtasun. 2013. A sentence is worth a thousand pixels. In Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Larry",
"middle": [
"S"
],
"last": "Davis",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "16--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Gupta and Larry S. Davis. 2008. Beyond nouns: Exploiting prepositions and comparative ad- jectives for learning visual classifiers. In Proceed- ings of the European Conference on Computer Vi- sion, pages 16-29.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Spatial role labeling: Towards extraction of spatial relations from natural language",
"authors": [
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Martijn Van Otterlo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "8",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parisa Kordjamshidi, Martijn van Otterlo, and Marie- Francine Moens. 2011. Spatial role labeling: To- wards extraction of spatial relations from natural language. ACM Transactions on Speech and Lan- guage Processing, 8(3):article 4, 36 p.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2012 task 3: Spatial role labeling",
"authors": [
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parisa Kordjamshidi, Steven Bethard, and Marie- Francine Moens. 2012. Semeval-2012 task 3: Spa- tial role labeling. In *SEM 2012: The First Joint Conference on Lexical and Computational Seman- tics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Eval- uation (SemEval 2012), pages 365-373, Montr\u00e9al, Canada, 7-8 June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097-1105.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Baby talk: Understanding and generating image descriptions",
"authors": [
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Visruth",
"middle": [],
"last": "Premraj",
"suffix": ""
},
{
"first": "Sagnik",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Siming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2011. Baby talk: Understanding and generat- ing image descriptions. In Proceedings of the IEEE Conference on Computer Vision & Pattern Recogni- tion.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "TUHOI: Trento Universal Human Object Interaction dataset",
"authors": [
{
"first": "Dieu-Thu",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Uijlings",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
}
],
"year": 2014,
"venue": "Dublin City University and the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieu-Thu Le, Jasper Uijlings, and Raffaella Bernardi. 2014. TUHOI: Trento Universal Human Object In- teraction dataset. In Proceedings of the Third Work- shop on Vision and Language, pages 17-24, Dublin, Ireland, August. Dublin City University and the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Microsoft COCO: common objects in context. CoRR",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. CoRR, abs/1405.0312.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The preposition project",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Litkowski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL-SIGSEM Workshop on The Linguistic Dimensions of Prepositions and Their Use in Computational Linguistic Formalisms and Applications",
"volume": "",
"issue": "",
"pages": "171--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski and Orin Hargraves. 2005. The preposition project. In ACL-SIGSEM Workshop on The Linguistic Dimensions of Prepositions and Their Use in Computational Linguistic Formalisms and Applications, pages 171-179.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2007 task 06: Word-sense disambiguation of prepositions",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Litkowski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "24--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski and Orin Hargraves. 2007. Semeval-2007 task 06: Word-sense disambigua- tion of prepositions. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 24-29, Prague, Czech Re- public, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Plummer",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Cervantes",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Caicedo",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Lazebnik",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Plummer, Liwei Wang, Chris Cervantes, Juan Caicedo, Julia Hockenmaier, and Svetlana Lazeb- nik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image- to-sentence models. CoRR, abs/1505.04870.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2015 task 8: Spaceeval. In Proceedings of the 9th International Workshop on Semantic Evaluation",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Levine",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Dworman",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Yocum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "884--894",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Parisa Kordjamshidi, Marie- Francine Moens, Aaron Levine, Seth Dworman, and Zachary Yocum. 2015. Semeval-2015 task 8: Spaceeval. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 884-894, Denver, Colorado, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Corpus-guided sentence generation of natural images",
"authors": [
{
"first": "Yezhou",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ching",
"middle": [],
"last": "Teo",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Yiannis",
"middle": [],
"last": "Aloimonos",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "444--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yezhou Yang, Ching Teo, Hal Daum\u00e9 III, and Yiannis Aloimonos. 2011. Corpus-guided sentence genera- tion of natural images. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 444-454. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "67--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for se- mantic inference over event descriptions. Transac- tions of the Association for Computational Linguis- tics, 2:67-78, February.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Given a subject boy and an object sled and their location in the image, what would the best preposition be to connect the two entities? important intermediate representation towards the eventual goal of generating image descriptions."
},
"TABREF0": {
"html": null,
"text": "Geometric features derived from bounding boxes.",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF3": {
"html": null,
"text": "Accuracy (acc) and mean rank (rank, with max rank in parenthesis) for each variable of the CRF model, trained using the high-level concept labels. Columns under Prep (known labels) refer to the results of predicting prepositions with the trajector and landmark labels fixed to the correct values.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>test sets are compared.</td></tr><tr><td>As shown in</td></tr></table>"
}
}
}
}