|
{ |
|
"paper_id": "N16-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:36:54.063908Z" |
|
}, |
|
"title": "Unsupervised Visual Sense Disambiguation for Verbs using Multimodal Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Spandana", |
|
"middle": [], |
|
"last": "Gella", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description, and text illustration. We introduce VerSe, a new dataset that augments existing multimodal datasets (COCO and TUHOI) with sense labels. We propose an unsupervised algorithm based on Lesk which performs visual sense disambiguation using textual, visual, or multimodal embeddings. We find that textual embeddings perform well when goldstandard textual annotations (object labels and image descriptions) are available, while multimodal embeddings perform well on unannotated images. We also verify our findings by using the textual and multimodal embeddings as features in a supervised setting and analyse the performance of visual sense disambiguation task. VerSe is made publicly available and can be downloaded at: https://github. com/spandanagella/verse.", |
|
"pdf_parse": { |
|
"paper_id": "N16-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description, and text illustration. We introduce VerSe, a new dataset that augments existing multimodal datasets (COCO and TUHOI) with sense labels. We propose an unsupervised algorithm based on Lesk which performs visual sense disambiguation using textual, visual, or multimodal embeddings. We find that textual embeddings perform well when goldstandard textual annotations (object labels and image descriptions) are available, while multimodal embeddings perform well on unannotated images. We also verify our findings by using the textual and multimodal embeddings as features in a supervised setting and analyse the performance of visual sense disambiguation task. VerSe is made publicly available and can be downloaded at: https://github. com/spandanagella/verse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word sense disambiguation (WSD) is a widely studied task in natural language processing: given a word and its context, assign the correct sense of the word based on a pre-defined sense inventory (Kilgarrif, 1998) . WSD is useful for a range of NLP tasks, including information retrieval, information extraction, machine translation, content analysis, and lexicography (see Navigli (2009) for an overview). Standard WSD disambiguates words based on their textual context; however, in a multimodal setting (e.g., newspaper articles with photographs), visual context is also available and can be used for disambiguation. Based on this observation, we introduce a new task, visual sense disambiguation (VSD) for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one depicted in the image. While VSD approaches for nouns exist, VSD for verbs is a novel, more challenging task, and related in interesting ways to action recognition in computer vision. As an example consider the verb play, which can have the senses participate in sport, play on an instrument, and be engaged in playful activity, depending on its visual context, see Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 212, |
|
"text": "(Kilgarrif, 1998)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 387, |
|
"text": "Navigli (2009)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1160, |
|
"end": 1168, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We expect visual sense disambiguation to be useful for multimodal tasks such as image retrieval. As an example consider the output of Google Image Search for the query sit: it recognizes that the verb has multiple senses and tries to cluster relevant images. However, the result does not capture the polysemy of the verb well, and would clearly benefit from VSD (see Figure 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 375, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Visual sense disambiguation has previously been attempted for nouns (e.g., apple can mean fruit or computer), which is a substantially easier task that can be solved with the help of an object detector Figure 2 : Google Image Search trying to disambiguate sit. All clusters pertain to the sit down sense, other senses (baby sit, convene) are not included. (Barnard et al., 2003; Loeff et al., 2006; Saenko and Darrell, 2008; Chen et al., 2015) . VSD for nouns is helped by resources such as ImageNet (Deng et al., 2009) , a large image database containing 1.4 million images for 21,841 noun synsets and organized according to the WordNet hierarchy. However, we are not aware of any previous work on VSD for verbs, and no ImageNet for verbs exists. Not only image retrieval would benefit from VSD for verbs, but also other multimodal tasks that have recently received a lot of interest, such as automatic image description and visual question answering (Karpathy and Li, 2015; Fang et al., 2015; Antol et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 378, |
|
"text": "(Barnard et al., 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 398, |
|
"text": "Loeff et al., 2006;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 424, |
|
"text": "Saenko and Darrell, 2008;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 443, |
|
"text": "Chen et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 519, |
|
"text": "(Deng et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 952, |
|
"end": 975, |
|
"text": "(Karpathy and Li, 2015;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 994, |
|
"text": "Fang et al., 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1014, |
|
"text": "Antol et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 210, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we explore the new task of visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. We present VerSe, a new dataset that augments existing multimodal datasets (COCO and TUHOI) with sense labels. VerSe contains 3518 images, each annotated with one of 90 verbs, and the OntoNotes sense realized in the image. We propose an algorithm based on the Lesk WSD algorithm in order to perform unsupervised visual sense disambiguation on our dataset. We focus in particular on how to best represent word senses for visual disambiguation, and explore the use of textual, visual, and multimodal embeddings. Textual embeddings for a given image can be constructed over object labels or image descriptions, which are available as gold-standard in the COCO and TUHOI datasets, or can be computed automatically using object detectors and image description models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our results show that textual embeddings perform best when gold-standard textual annotations are available, while multimodal embeddings perform best when automatically generated object labels are used. Interestingly, we find that automatically generated image descriptions result in inferior performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Verbs Acts Images Sen Des PPMI (Yao and Fei-Fei, 2010) 2 24 4800 N N Stanford 40 Actions (Yao et al., 2011) 33 40 9532 N N PASCAL 2012 (Everingham et al., 2015) 9 11 4588 N N 89 Actions (Le et al., 2013) 36 89 2038 N N TUHOI (Le et al., 2014) -2974 10805 N N COCO-a (Ronchi and Perona, 2015) 140 162 10000 N Y HICO (Chao et al., 2015) 111 600 47774 Y N VerSe (our dataset) 90 163 3518 Y Y ", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 54, |
|
"text": "(Yao and Fei-Fei, 2010)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 107, |
|
"text": "(Yao et al., 2011)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 135, |
|
"end": 160, |
|
"text": "(Everingham et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 203, |
|
"text": "(Le et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 242, |
|
"text": "(Le et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 291, |
|
"text": "(Ronchi and Perona, 2015)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 334, |
|
"text": "(Chao et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There is an extensive literature on word sense disambiguation for nouns, verbs, adjectives and adverbs. Most of these approaches rely on lexical databases or sense inventories such as WordNet (Miller et al., 1990) or OntoNotes (Hovy et al., 2006) . Unsupervised WSD approaches often rely on distributional representations, computed over the target word and its context (Lin, 1997; McCarthy et al., 2004; Brody and Lapata, 2008) . Most supervised approaches use sense annotated corpora to extract linguistic features of the target word (context words, POS tags, collocation features), which are then fed into a classifier to disambiguate test data (Zhong and Ng, 2010) . Recently, features based on sense-specific semantic vectors learned using large corpora and a sense inventory such as WordNet have been shown to achieve state-of-the-art results for supervised WSD (Rothe and Schutze, 2015; Jauhar et al., 2015) . As mentioned in the introduction, all existing work on visual sense disambiguation has used nouns, starting with Barnard et al. (2003) . Sense discrimination for web images was introduced by Loeff et al. (2006) , who used spectral clustering over multimodal features from the images and web text. Saenko and Darrell (2008) used sense definitions in a dictionary to learn a latent LDA space overs senses, which they then used to construct sensespecific classifiers by exploiting the text surrounding an image.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 213, |
|
"text": "(Miller et al., 1990)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 246, |
|
"text": "(Hovy et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 380, |
|
"text": "(Lin, 1997;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 403, |
|
"text": "McCarthy et al., 2004;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 427, |
|
"text": "Brody and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 667, |
|
"text": "(Zhong and Ng, 2010)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 892, |
|
"text": "(Rothe and Schutze, 2015;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 893, |
|
"end": 913, |
|
"text": "Jauhar et al., 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1029, |
|
"end": 1050, |
|
"text": "Barnard et al. (2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1120, |
|
"end": 1126, |
|
"text": "(2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1213, |
|
"end": 1238, |
|
"text": "Saenko and Darrell (2008)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most of the datasets relevant for verb sense disambiguation were created by the computer vision community for the task of human action recognition (see Table 1 for an overview). These datasets are annotated with a limited number of actions, where an action is conceptualized as verb-object pair: ride horse, ride bicycle, play tennis, play guitar, etc. Verb sense ambiguity is ignored in almost all action recognition datasets, which misses important generalizations: for instance, the actions ride horse and ride bicycle represent the same sense of ride and thus share visual, textual, and conceptual features, while this is not the case for play tennis and play guitar. This is the issue we address by creating a dataset with explicit sense labels.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 159, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Datasets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "VerSe is built on top of two existing datasets, TUHOI and COCO. The Trento Universal Human-Object Interaction (TUHOI) dataset contains 10,805 images covering 2974 actions. Action (humanobject interaction) categories were annotated using crowdsourcing: each image was labeled by multiple annotators with a description in the form of a verb or a verb-object pair. The main drawback of TUHOI is that 1576 out of 2974 action categories occur only once, limiting its usefulness for VSD. The Microsoft Common Objects in Context (COCO) dataset is very popular in the language/vision community, as it consists of over 120k images with extensive annotation, including labels for 91 object categories and five descriptions per image. COCO contains no explicit action annotation, but verbs and verb phrases can be extracted from the descriptions. (But note that not all the COCO images depict actions.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Datasets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The recently created Humans Interacting with Common Objects (HICO) dataset is conceptually similar to VerSe. It consists of 47774 images annotated with 111 verbs and 600 human-object interaction categories. Unlike other existing datasets, HICO uses sense-based distinctions: actions are denoted by sense-object pairs, rather than by verb-object pairs. HICO doesn't aim for complete coverage, but restricts itself to the top three WordNet senses of a verb. The dataset would be suitable for performing visual sense disambiguation, but has so far not been used in this way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Datasets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We want to build an unsupervised visual sense disambiguation system, i.e., a system that takes an image and a verb and returns the correct sense of the verb. As discussed in Section 2.1, most exist- ing datasets are not suitable for this task, as they do not include word sense annotation. We therefore develop our own dataset with gold-standard sense annotation. The Verb Sense (VerSe) dataset is based on COCO and TUHOI and covers 90 verbs and around 3500 images. VerSe serves two main purposes: (1) to show the feasibility of annotating images with verb senses (rather than verbs or actions); (2) to function as test bed for evaluating automatic visual sense disambiguation methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VerSe Dataset and Annotation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Verb Selection Action recognition datasets often use a limited number of verbs (see Table 1 ). We addressed this issue by using images that come with descriptions, which in the case of action images typically contain verbs. The COCO dataset includes images in the form of sentences, the TUHOI dataset is annotated with verbs or prepositional verb phrases for a given object (e.g., sit on chair), which we use in lieu of descriptions. We extracted all verbs from all the descriptions in the two datasets and then selected those verbs that have more than one sense in the OntoNotes dictionary, which resulted in 148 verbs in total (94 from COCO and 133 from TUHOI).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 91, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "VerSe Dataset and Annotation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Depictability Annotation A verb can have multiple senses, but not all of them may be depictable, e.g., senses describing cognitive and perception processes. Consider two senses of touch: make physical contact is depictable, whereas affect emotionally describes a cognitive process and is not depictable. We therefore need to annotate the synsets of a verb as depictable or non-depictable. Amazon Mechanical Turk (AMT) workers were presented with the definitions of all the synsets of a verb, along with ex- amples, as given by OntoNotes. An example for this annotation is shown in Figure 3 . We used OntoNotes instead of WordNet, as WordNet senses are very fine-grained and potentially make depictability and sense annotation (see below) harder. Granularity issues with WordNet for text-based WSD are well documented (Navigli, 2009) . OntoNotes lists a total of 921 senses for our 148 target verbs. For each synset, three AMT workers selected all depictable senses. The majority label was used as the gold standard for subsequent experiments. This resulted in a 504 depictable senses. Inter-annotator agreement (ITA) as measured by Fleiss' Kappa was 0.645.", |
|
"cite_spans": [ |
|
{ |
|
"start": 817, |
|
"end": 832, |
|
"text": "(Navigli, 2009)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 581, |
|
"end": 589, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "VerSe Dataset and Annotation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We then annotated a subset of the images in COCO and TUHOI with verb senses. For every image we assigned the verb that occurs most frequently in the descriptions for that image (for TUHOI, the descriptions are verb-object pairs, see above). However, many verbs are represented by only a few images, while a few verbs are represented by a large number of images. The datasets therefore show a Zipfian distribution of linguistic units, which is expected and has been observed previously for COCO (Ronchi and Perona, 2015). For sense annotation, we selected only verbs for which either COCO or TUHOI contained five or more images, resulting in a set of 90 verbs (out of the total 148). All images for these verbs were included, giving us a dataset of 3518 images: 2340 images for 82 verbs from COCO and 1188 images for 61 verbs from TUHOI (some verbs occur in both datasets).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense Annotation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These image-verb pairs formed the basis for sense annotation. AMT workers were presented with the image and all the depictable OntoNotes senses of the associated verb. The workers had to chose the sense of the verb that was instantiated in the image (or \"none of the above\", in the case of irrelevant images). Annotators were given sense definitions and examples, as for the depictability annotation (see Figure 3 ). For every image-verb pair, five annotators performed the sense annotation task. A total of 157 annotators participated, reaching an inter-annotator agreement of 0.659 (Fleiss' Kappa). Out of 3528 images, we discarded 18 images annotated with \"none of the above\", resulting in a set of 3510 images covering 90 verbs and 163 senses. We present statistics of our dataset in Table 2 ; we group the verbs into motion verbs and non-motion verb using Levin (1993) classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 861, |
|
"end": 873, |
|
"text": "Levin (1993)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 405, |
|
"end": 413, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 795, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sense Annotation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For our disambiguation task, we assume we have a set of images I, and a set of polysemous verbs V and each image i \u2208 I is paired with a verb v \u2208 V . For example, Figure 1 shows different images paired with the verb play. Every verb v \u2208 V , has a set of senses S(v), described in a dictionary D. Now given an image i paired with a verb v, our task is to predict the correct sense\u015d \u2208 S(v), i.e., the sense that is depicted by the associated image. Formulated as a scoring task, disambiguation consists of finding the maximum over a suitable scoring function \u03a6:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 170, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "s = arg max s\u2208S (v) \u03a6(s, i, v, D) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For example, in Figure 1 , the correct sense for the first image is participate in sport, for the second one it is play on an instrument, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 24, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The Lesk (1986) algorithm is a well known knowledge-based approach to WSD which relies on the calculation of the word overlap between the sense definition and the context in which a word occurs. It is therefore an unsupervised approach, i.e., it does not require sense-annotated training data, but instead exploits resources such as dictionaries or ontologies to infer the sense of a word in context. Lesk uses the following scoring function to disambiguate the sense of a verb v:", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 15, |
|
"text": "Lesk (1986)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03a6(s, v, D) = |context(v) \u2229 definition(s, D)| (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Here, context(v) the set of words that occur close the target word v and definition(s, D) is the set of words in the definition of sense s in the dictionary D.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Lesk's approach is very sensitive to the exact wording of definitions and results are known to change dramatically for different sets of definitions (Navigli, 2009 short and do not provide sufficient vocabulary or context. We propose a new variant of the Lesk algorithm to disambiguate the verb sense that is depicted in an image. In particular, we explore the effectiveness of textual, visual and multimodal representations in conjunction with Lesk. An overview of our methodology is given in Figure 4 . For a given image i labeled with verb v (here play), we create a representation (the vector i), which can be text-based (using the object labels and descriptions for i), visual, or multimodal. Similarly, we create text-based, visual, and multimodal representations (the vector s) for every sense s of a verb. Based on the representations i and s (detailed below), we can then score senses as: 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 163, |
|
"text": "(Navigli, 2009", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 502, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03a6(s, v, i, D) = i \u2022 s (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
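To make the scoring in Equation 3 concrete, here is a minimal sketch (not the paper's code) of the embedding-based Lesk variant: the predicted sense is the one whose representation has the highest dot product with the image representation. The vector source (textual, visual, or multimodal) is interchangeable; all names and the toy data below are illustrative.

```python
import numpy as np

def normalize(v):
    """L2-normalize a vector; the dot product of normalized vectors equals cosine similarity."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def disambiguate(image_vec, sense_vecs):
    """Return the sense id whose representation has the highest dot product with the image.

    image_vec: 1-D numpy array representing the image (textual, visual, or multimodal).
    sense_vecs: dict mapping sense id -> 1-D numpy array for that sense.
    """
    i = normalize(image_vec)
    scores = {sense: float(np.dot(i, normalize(s))) for sense, s in sense_vecs.items()}
    return max(scores, key=scores.get)

# Hypothetical usage: two senses of "play" and one image vector close to the first sense.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    senses = {"play_sport": rng.normal(size=300), "play_instrument": rng.normal(size=300)}
    image = senses["play_sport"] + 0.1 * rng.normal(size=300)
    print(disambiguate(image, senses))  # expected: "play_sport"
```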
|
{ |
|
"text": "Note that this approach is unsupervised: it requires no sense annotated training data; we will use the sense annotations in our VerSe dataset only for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visual Sense Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each candidate verb sense, we create a textbased sense representation s t and a visual sense representation s c .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense Representations", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We create a vector s t for every sense s \u2208 S(v) of a verb v from its definition and the example usages provided in 1 Taking the dot product of two normalized vectors is equivalent to using cosine as similarity measure. We experimented with other similarity measures, but cosine performed best. the OntoNotes dictionary D. We apply word2vec (Mikolov et al., 2013) , a widely used model of word embeddings, to obtain a vector for every content word in the definition and examples of the sense. We then take the average of these vectors to compute an overall representation of the verb sense. For our experiments we used the pre-trained 300 dimensional vectors available with the word2vec package (trained on part of Google News dataset, about 100 billion words).", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 116, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 362, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
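A rough sketch of how the text-based sense representation s_t could be computed, assuming pre-trained word2vec vectors are available (e.g., via gensim); the stopword list, helper names, and sense entry are illustrative, not the paper's code.

```python
import numpy as np

STOPWORDS = {"a", "an", "the", "in", "on", "of", "to", "and", "or", "with"}

def embed_text(words, word_vectors, dim=300):
    """Average the embeddings of content words; return a zero vector if none are in the vocabulary."""
    vecs = [word_vectors[w] for w in words if w not in STOPWORDS and w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def sense_text_vector(definition, examples, word_vectors):
    """Text-based sense representation s_t: average the word vectors of the content words
    in the sense definition and its example sentences."""
    tokens = definition.lower().split()
    for ex in examples:
        tokens.extend(ex.lower().split())
    return embed_text(tokens, word_vectors)

# Hypothetical usage with gensim's pre-trained Google News vectors:
# from gensim.models import KeyedVectors
# wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
# s_t = sense_text_vector("participate in sport", ["the children played tennis"], wv)
```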
|
{ |
|
"text": "Visual Sense Representation Sense dictionaries typically provide sense definitions and example sentences, but no visual examples or images. For nouns, this is remedied by ImageNet (Deng et al., 2009) , which provides a large number of example images for a subset of the senses in the WordNet noun hierarchy. However, no comparable resource is available for verbs (see Section 2.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 199, |
|
"text": "(Deng et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to obtain visual sense representation s c , we therefore collected sense-specific images for the verbs in our dataset. For each verb sense s, three trained annotators were presented with the definition and examples from OntoNotes, and had to formulate a query Q (s) that would retrieve images depicting the verb sense when submitted to a search engine.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For every query q we retrieved images I (q) using Bing image search (for examples, see Figure 5 ). We used the top 50 images returned by Bing for every query.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 95, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Once we have images for every sense, we can turn these images into feature representations us-ing a convolutional neural network (CNN). Specifically, we used the VGG 16-layer architecture (VG-GNet) trained on 1.2M images of the 1000 class ILSVRC 2012 object classification dataset, a subset of ImageNet (Simonyan and Zisserman, 2014 ). This CNN model has a top-5 classification error of 7.4% on ILSVRC 2012. We use the publicly available reference model implemented using CAFFE (Jia et al., 2014) to extract the output of the fc7 layer, i.e., a 4096 dimensional vector c i , for every image i. We perform mean pooling over all the images extracted using all the queries of a sense to generate a single visual sense representation s c (shown in Equation 4):", |
|
"cite_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 332, |
|
"text": "(Simonyan and Zisserman, 2014", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 496, |
|
"text": "(Jia et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s c = 1 n \u2211 q j \u2208Q (s) \u2211 i\u2208I (q j ) c i (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where n is the total number of images retrieved per sense s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text-based Sense Representation", |
|
"sec_num": null |
|
}, |
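A minimal sketch of the mean pooling in Equation 4, assuming fc7 features have already been extracted for the images retrieved by each sense query; the feature extraction itself (VGG-16/Caffe in the paper) is omitted, and the toy vectors below are random stand-ins.

```python
import numpy as np

def visual_sense_vector(fc7_features_per_query):
    """Visual sense representation s_c (Equation 4): mean-pool the CNN features of all
    images retrieved for all queries of a sense.

    fc7_features_per_query: list of lists; one inner list of 4096-d numpy arrays per query.
    """
    all_feats = [f for query_feats in fc7_features_per_query for f in query_feats]
    return np.mean(all_feats, axis=0)

# Hypothetical usage: two queries for one sense, with dummy 4096-d fc7 vectors.
rng = np.random.default_rng(1)
query_a = [rng.normal(size=4096) for _ in range(3)]
query_b = [rng.normal(size=4096) for _ in range(2)]
s_c = visual_sense_vector([query_a, query_b])
print(s_c.shape)  # (4096,)
```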
|
{ |
|
"text": "We first explore the possibility of representing the image indirectly, viz., through text associated with it in the form of object labels or image descriptions (as shown in Figure 4 ). We experiment with two different forms of textual annotation: GOLD annotation, where object labels and descriptions are provided by human annotators, and predicted (PRED) annotation, where state-of-the-art object recognition and image description generation systems are applied to the image.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 181, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Image Representations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Object Labels (O) GOLD object annotations are provided with the two datasets we use. Each image sampled from COCO is annotated with one or more of 91 object categories. Each image from TUHOI is annotated with one more of 189 object categories. PRED object annotations were generated using the same VGG-16-layer CNN object recognition model that was used to compute visual sense representations. Only object labels with object detection threshold of t > 0.2 were used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Image Representations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To obtain GOLD image descriptions, we used the used human-generated descriptions that come with COCO. For TUHOI images, we generated descriptions of the form subject-verbobject, where the subject is always person, and the verb-object pairs are the action labels that come with TUHOI. To obtain PRED descriptions, we generated three descriptions for every image using the stateof-the-art image description system of Vinyals et al. (2015) . 2 We can now create a textual representation i t of the image i. Again, we used word2vec to obtain word embeddings, but applied these to the object labels and to the words in the image descriptions. An overall representation of the image is then computed by averaging these vectors over all labels, all content words in the description, or both.", |
|
"cite_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 436, |
|
"text": "Vinyals et al. (2015)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 440, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
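For illustration, a sketch of assembling the textual image representation i_t from object labels and description words by embedding averaging; the toy embedding table below is a stand-in for word2vec, and all names are hypothetical.

```python
import numpy as np

def image_text_vector(object_labels, descriptions, word_vectors, dim=300):
    """Textual image representation i_t: average the word vectors of the object labels
    and of the words in the image descriptions (either source may be empty)."""
    tokens = [label.lower() for label in object_labels]
    for desc in descriptions:
        tokens.extend(desc.lower().split())
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Hypothetical usage with a toy embedding table standing in for word2vec.
rng = np.random.default_rng(2)
toy_wv = {w: rng.normal(size=300) for w in ["person", "guitar", "a", "man", "playing"]}
i_t = image_text_vector(["person", "guitar"], ["a man playing a guitar"], toy_wv)
print(i_t.shape)  # (300,)
```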
|
{ |
|
"text": "Creating a visual representation i c of an image i is straightforward: we extract the fc7 layer of the VGG-16 network when applied to the image and use the resulting vector as our image representation (same setup as in Section 4.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Apart from experimenting with separate textual and visual representations of images, it also makes sense to combine the two modalities into a multimodal representation. The simplest approach is a concatenation model which appends textual and visual features. More complex multimodal vectors can be created using methods such as Canonical Correlation Analysis (CCA) and Deep Canonical Correlation Analysis (DCCA) (Hardoon et al., 2004; Andrew et al., 2013; . CCA allows us to find a latent space in which the linear projections of text and image vectors are maximally correlated (Gong et al., 2014; Hodosh et al., 2015) . DCCA can be seen as non-linear version of CCA and has been successfully applied to image description task (Yan and Mikolajczyk, 2015) , outperforming previous approaches, including kernel-based CCA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 434, |
|
"text": "(Hardoon et al., 2004;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 455, |
|
"text": "Andrew et al., 2013;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 597, |
|
"text": "(Gong et al., 2014;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 618, |
|
"text": "Hodosh et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 754, |
|
"text": "(Yan and Mikolajczyk, 2015)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use both CCA and DCCA to map the vectors i t and i c (which have different dimensions) into a joint latent space of n dimensions. We represent the projected vectors of textual and visual features for image i as i t and i c and combine them to obtain multimodal representation i m as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "i m = \u03bb t i t + \u03bb c i c", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We experimented with a number of parameter settings for \u03bb t and \u03bb c for textual and visual models respectively. We use the same model to combine the multimodal representation for sense s as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s m = \u03bb t s t + \u03bb c s c", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use these vectors (i t , s t ), (i c , s c ) and (i m , s m ) as described in Equation 3 to perform sense disambiguation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Descriptions (C)", |
|
"sec_num": null |
|
}, |
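A sketch of the multimodal combination, under the assumption that scikit-learn's linear CCA is an acceptable stand-in for the CCA/DCCA models trained in the paper; the dimensionalities, weights, and random training data below are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca(text_matrix, image_matrix, n_components=32):
    """Fit a linear CCA on paired text/image feature matrices (rows are aligned examples)."""
    cca = CCA(n_components=n_components)
    cca.fit(text_matrix, image_matrix)
    return cca

def multimodal_vector(cca, text_vec, image_vec, lambda_t=0.5, lambda_c=0.5):
    """Project text and image vectors into the shared CCA space and interpolate them,
    mirroring Equations 5 and 6: m = lambda_t * t' + lambda_c * c'."""
    t_proj, c_proj = cca.transform(text_vec.reshape(1, -1), image_vec.reshape(1, -1))
    return lambda_t * t_proj[0] + lambda_c * c_proj[0]

# Hypothetical usage with random stand-ins for word2vec (300-d) and CNN (256-d, reduced
# from 4096 to keep the toy example fast) features over 500 paired examples.
rng = np.random.default_rng(3)
T, V = rng.normal(size=(500, 300)), rng.normal(size=(500, 256))
cca = fit_cca(T, V, n_components=32)
i_m = multimodal_vector(cca, T[0], V[0])
```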
|
{ |
|
"text": "To train the CCA and DCCA models, we use the text representations learned from image descriptions of COCO and Flickr30k dataset as one view and the VGG-16 features from the respective images as the second view. We divide the data into train, test and development samples (using a 80/10/10 split). We observed that the correlation scores for DCCA model were better than for the CCA model. We use the trained models to generate the projected representations of text and visual features for the images in VerSe. Once the textual and visual features are projected, we then merge them to get the multimodal representation. We experimented with different ways of combining visual and textual features projected using CCA or DCCA: (1) weighted interpolation of textual and visual features (see Equations 5 and 6), and (2) concatenating the vectors of textual and visual features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To evaluate our proposed method, we compare against the first sense heuristic, which defaults to the sense listed first in the dictionary (where senses are typically ordered by frequency). This is a strong baseline which is known to outperform more complex models in traditional text-based WSD. In VerSe we observe skewness in the distribution of the senses and the first sense heuristic is as strong as over text. Also the most frequent sense heuristic, which assigns the most frequently annotated sense for a given verb in VerSe, shows very strong performance. It is supervised (as it requires sense annotated data to obtain the frequencies), so it should be regarded as an upper limit on the performance of the unsupervised methods we propose (also, in text-based WSD, the most frequent sense heuristic is considered an upper limit, Navigli (2009) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 836, |
|
"end": 850, |
|
"text": "Navigli (2009)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Setup", |
|
"sec_num": "5.1" |
|
}, |
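For reference, a minimal sketch (not the paper's code) of the two comparison points described above, the first sense heuristic and the most frequent sense heuristic; the data structures are hypothetical.

```python
from collections import Counter

def first_sense_baseline(verb, sense_inventory):
    """First sense heuristic: pick the sense listed first in the dictionary for the verb.

    sense_inventory: dict mapping verb -> list of sense ids in dictionary order."""
    return sense_inventory[verb][0]

def most_frequent_sense(verb, annotations):
    """Most frequent sense heuristic (supervised upper bound): the sense most often
    annotated for the verb in the dataset.

    annotations: iterable of (verb, sense) pairs from the sense-annotated data."""
    counts = Counter(sense for v, sense in annotations if v == verb)
    return counts.most_common(1)[0][0]
```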
|
{ |
|
"text": "In Table 3 , we summarize the results of the goldstandard (GOLD) and predicted (PRED) settings for motion and non-motion verbs across representations. In the GOLD setting we find that for both types of verbs, textual representations based on im-age descriptions (C) outperform visual representations (CNN features). The text-based results compare favorably to the original Lesk (as described in Equation 2), which performs at 30.7 for motion verbs and 36.2 for non-motion verbs in the GOLD setting. This improvement is clearly due to the use of word2vec embeddings. 3 Note that CNN-based visual features alone performed better than goldstandard object labels alone in the case of motion verbs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "We also observed that adding visual features to textual features improves performance in some cases: multimodal features perform better than textual features alone both for object labels (CNN+O) and for image descriptions (CNN+C). However, adding CNN features to textual features based on object labels and descriptions together (CNN+O+C) resulted in a small decrease in performance. Furthermore, we note that CCA models outperform simple vector concatenation in case of GOLD setting for motion verbs, and overall DCCA performed considerably worse than concatenation. Note that for CCA and DCCA we report the best performing scores achieved using weighted interpolation of textual and visual features with weights \u03bb t = 0.5 and \u03bb c = 0.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "When comparing to our baseline and upper limit, we find that the all the GOLD models which use descriptions-based representations (except DCCA) outperform to the first sense heuristic for motionverbs (accuracy 70.8), whereas they performed below the first sense heuristic in case of non-motion verbs (accuracy 80.6). As expected, both motion and non-motion verbs performed significantly below the most frequent sense heuristic (accuracy 86.2 and 90.7 respectively), which we argued provides an upper limit for unsupervised approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "We now turn the PRED configuration, i.e., to results obtained using object labels and image descriptions predicted by state-of-the-art automatic systems. This is arguably the more realistic scenario, as it only requires images as input, rather than assuming human-generated object labels and image descriptions (though object detection and image description systems are required instead). In the PRED setting, we find that textual features based on ob- Table 4 : Accuracy scores for motion verbs for both supervised and unsupervised approaches using different types of sense and image representation features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 460, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "ject labels (O) outperform both first sense heuristic and textual features based on image descriptions (C) in the case of motion verbs. Combining textual and visual features via concatenation improves performance for both motion and non-motion verbs. The overall best performance of 72.6 for predicted features is obtained by combining CNN features and embeddings based on object labels and outperforms first sense heuristic in case of motion verbs (accuracy 70.8). In the PRED setting for both classes of verbs the simpler concatenation model performed better than the more complex CCA and DCCA models. Note that for CCA and DCCA we report the best performing scores achieved using weighted interpolation of textual and visual features with weights \u03bb t = 0.3 and \u03bb c = 0.7. Overall, our findings are consistent with the intuition that motion verbs are easier to disambiguate than non-motion verbs, as they are Table 5 : Accuracy scores for non-motion verbs for both supervised and unsupervised approaches using different types of sense and image representation features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 911, |
|
"end": 918, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "more depictable and more likely to involve objects. Note that this is also reflected in the higher interannotator agreement for motion verbs (see Table 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "Along with the unsupervised experiments we investigated the performance of textual and visual representations of images in a simplest supervised setting. We trained logistic regression classifiers for sense prediction by dividing the images in VerSe dataset into train and test splits. To train the classifiers we selected all the verbs which has atleast 20 images annotated and has at least two senses in VerSe. This resulted in 19 motion verbs and 19 non-motion verbs. Similar to our unsupervised experiments we explore multimodal features by using both textual and visual features for classification (similar to concatenation in unsupervised experiments). Table 6 : Images that were assigned an incorrect sense in the PRED setting.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 666, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supervised Experiments and Results", |
|
"sec_num": "5.2" |
|
}, |
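A minimal sketch of the supervised setting described above, using scikit-learn's logistic regression on concatenated textual and visual features; the split, feature dimensions, and dummy data are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_sense_classifier(text_feats, visual_feats, sense_labels):
    """Per-verb supervised VSD sketch: train a logistic regression classifier on the
    concatenation of textual (e.g., averaged word2vec) and visual (e.g., fc7) features."""
    X = np.hstack([text_feats, visual_feats])           # multimodal = concatenation
    X_tr, X_te, y_tr, y_te = train_test_split(X, sense_labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)                   # accuracy on the held-out split

# Hypothetical usage with dummy features for one verb with two senses.
rng = np.random.default_rng(4)
text = rng.normal(size=(40, 300))
visual = rng.normal(size=(40, 512))
labels = np.array([0, 1] * 20)
clf, acc = train_sense_classifier(text, visual, labels)
```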
|
{ |
|
"text": "In Table 4 we report accuracy scores for 19 motion verbs using a supervised logistic regression classifier and for comparison we also report the scores of our proposed unsupervised algorithm for both GOLD and PRED setting. Similarly in Table 5 we report the accuracy scores for 19 non-motion verbs. We observe that all supervised classifiers for both motion and non-motion verbs performing better than first sense baseline. Similar to our findings using an unsupervised approach we find that in most cases multimodal features obtained using concatenating textual and visual features has outperformed textual or visual features alone especially in the PRED setting which is arguably the more realistic scenario. We observe that the features from PRED image descriptions showed better results for nonmotion verbs for both supervised and unsupervised approaches whereas PRED object features showed better results for motion verbs. We also observe that supervised classifiers outperform most frequent sense for motion verbs and for non-motion verbs our scores match with most frequent sense heuristic.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supervised Experiments and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In order to understand the cases where the proposed unsupervised algorithm failed, we analyzed the images that were disambiguated incorrectly. For the PRED setting, we observed that using predicted image descriptions yielded lower scores compared to predicted object labels. The main reason for this is that the image description system often generates irrelevant descriptions or descriptions not related to the action depicted, whereas the object labels predicted by the CNN model tend to be relevant. This highlights that current image description systems still have clear limitations, despite the high evaluation scores reported in the literature (Vinyals et al., 2015; Fang et al., 2015) . Examples are shown in Table 6 : in all cases human generated descriptions and object labels that are relevant for disambiguation, which explains the higher scores in the GOLD setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 650, |
|
"end": 672, |
|
"text": "(Vinyals et al., 2015;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 691, |
|
"text": "Fang et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 716, |
|
"end": 723, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We have introduced the new task of visual verb sense disambiguation: given an image and a verb, identify the verb sense depicted in the image. We developed the new VerSe dataset for this task, based on the existing COCO and TUHOI datasets. We proposed an unsupervised visual sense disambiguation model based on the Lesk algorithm and demonstrated that both textual and visual information associated with an image can contribute to sense disambiguation. In an in-depth analysis of various image representations we showed that object labels and visual features extracted using state-of-the-art convolutional neural networks result in good disambiguation performance, while automatically generated image descriptions are less useful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "make physical contact with, possibly with the effect of physically manipulating. They touched their fingertips together and smiled 2 affect someone emotionally The president's speech touched a chord with voters. 2 be or come in contact without control They sat so close that their arms touched. 2 make reference to, involve oneself with They had wide-ranging discussions that touched on the situation in the Balkans. 2 Achieve a value or quality Nothing can touch cotton for durability. 2 Tinge; repair or improve the appearance of He touched on the paintings, trying to get the colors right.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used Karpathy's implementation, publicly available at https://github.com/karpathy/neuraltalk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also experimented with Glove vectors(Pennington et al., 2014) but observed that word2vec representations consistently achieved better results that Glove vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Deep canonical correlation analysis", |
|
"authors": [ |
|
{ |
|
"first": "Galen", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 30th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1247--1255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Galen Andrew, Raman Arora, Jeff A. Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 1247-1255.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "VQA: visual question answering", |
|
"authors": [ |
|
{ |
|
"first": "Stanislaw", |
|
"middle": [], |
|
"last": "Antol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aishwarya", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiasen", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2425--2433", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425-2433.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Word sense disambiguation with pictures", |
|
"authors": [ |
|
{ |
|
"first": "Kobus", |
|
"middle": [], |
|
"last": "Barnard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Forsyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the HLT-NAACL 2003 workshop on Learning word meaning from non-linguistic data", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kobus Barnard, Matthew Johnson, and David Forsyth. 2003. Word sense disambiguation with pictures. In Proceedings of the HLT-NAACL 2003 workshop on Learning word meaning from non-linguistic data- Volume 6, pages 1-5. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Good neighbors make good senses: Exploiting distributional similarity for unsupervised wsd", |
|
"authors": [ |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Brody", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel Brody and Mirella Lapata. 2008. Good neigh- bors make good senses: Exploiting distributional sim- ilarity for unsupervised wsd. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 65-72. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "HICO: A benchmark for recognizing human-object interactions in images", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Wei", |
|
"middle": [], |
|
"last": "Chao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yugeng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaxuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1017--1025", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. 2015. HICO: A benchmark for recog- nizing human-object interactions in images. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 1017-1025.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Sense discovery via co-clustering on images and text", |
|
"authors": [ |
|
{ |
|
"first": "Xinlei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhinav", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5298--5306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinlei Chen, Alan Ritter, Abhinav Gupta, and Tom M. Mitchell. 2015. Sense discovery via co-clustering on images and text. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 5298-5306.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "ImageNet: A large-scale hierarchical image database", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. ImageNet: A large-scale hi- erarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248-255.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Everingham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M", |
|
"Ali" |
|
], |
|
"last": "Eslami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Van Gool", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"K", |
|
"I" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Winn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zisserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "111", |
|
"issue": "", |
|
"pages": "98--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and An- drew Zisserman. 2015. The Pascal visual object classes challenge: A retrospective. International Jour- nal of Computer Vision, 111(1):98-136.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "From captions to visual concepts and back", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Forrest", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Iandola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rupesh", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Platt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1473--1482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Fang, Saurabh Gupta, Forrest N. Iandola, Ru- pesh K. Srivastava, Li Deng, Piotr Doll\u00e1r, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In IEEE Conference on Computer Vision and Pattern Recogni- tion, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1473-1482.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Improving image-sentence embeddings using large weakly annotated photo collections", |
|
"authors": [ |
|
{ |
|
"first": "Yunchao", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micah", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Lazebnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computer Vision -ECCV 2014 -13th European Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hock- enmaier, and Svetlana Lazebnik. 2014. Improving image-sentence embeddings using large weakly anno- tated photo collections. In Computer Vision -ECCV 2014 -13th European Conference, Zurich, Switzer- land, September 6-12, 2014, Proceedings, Part IV, pages 529-545.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Canonical correlation analysis: An overview with application to learning methods", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Hardoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00e1ndor", |
|
"middle": [], |
|
"last": "Szedm\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Shawe-Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Neural Computation", |
|
"volume": "16", |
|
"issue": "12", |
|
"pages": "2639--2664", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David R. Hardoon, S\u00e1ndor Szedm\u00e1k, and John Shawe- Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neu- ral Computation, 16(12):2639-2664.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Framing image description as a ranking task: Data, models and evaluation metrics (extended abstract)", |
|
"authors": [ |
|
{ |
|
"first": "Micah", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJ-CAI 2015", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4188--4192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2015. Framing image description as a ranking task: Data, models and evaluation metrics (extended ab- stract). In Proceedings of the Twenty-Fourth Interna- tional Joint Conference on Artificial Intelligence, IJ- CAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 4188-4192.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ontonotes: The 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard H. Hovy, Mitchell P. Marcus, Martha Palmer, Lance A. Ramshaw, and Ralph M. Weischedel. 2006. Ontonotes: The 90% solution. In Human Language Technology Conference of the North American Chap- ter of the Association of Computational Linguistics, Proceedings, June 4-9, 2006, New York, New York, USA, pages 57-60.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Ontologically grounded multi-sense representation learning for semantic vector space models", |
|
"authors": [ |
|
{ |
|
"first": "Sujay", |
|
"middle": [ |
|
"Kumar" |
|
], |
|
"last": "Jauhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "683--693", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujay Kumar Jauhar, Chris Dyer, and Eduard H. Hovy. 2015. Ontologically grounded multi-sense represen- tation learning for semantic vector space models. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 -June 5, 2015, pages 683-693.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Caffe: Convolutional architecture for fast feature embedding", |
|
"authors": [ |
|
{ |
|
"first": "Yangqing", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Shelhamer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Karayev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the ACM International Conference on Multimedia, MM '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "675--678", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Con- volutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, MM '14, Orlando, FL, USA, November 03 -07, 2014, pages 675-678.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deep visualsemantic alignments for generating image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3128--3137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy and Fei-Fei Li. 2015. Deep visual- semantic alignments for generating image descrip- tions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3128-3137.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Senseval: An exercise in evaluating word sense disambiguation programs", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarrif", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proc. of the first international conference on language resources and evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "581--588", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Kilgarrif. 1998. Senseval: An exercise in evalu- ating word sense disambiguation programs. In Proc. of the first international conference on language re- sources and evaluation, pages 581-588.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Exploiting language models to recognize unseen actions", |
|
"authors": [ |
|
{ |
|
"first": "Dieu", |
|
"middle": [ |
|
"Thu" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jasper", |
|
"middle": [], |
|
"last": "Uijlings", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 3rd ACM conference on International conference on multimedia retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "231--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dieu Thu Le, Raffaella Bernardi, and Jasper Uijlings. 2013. Exploiting language models to recognize un- seen actions. In Proceedings of the 3rd ACM con- ference on International conference on multimedia re- trieval, pages 231-238. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Proceedings of the Third Workshop on Vision and Language, chapter TUHOI: Trento Universal Human Object Interaction Dataset", |
|
"authors": [ |
|
{ |
|
"first": "Dieu-Thu", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jasper", |
|
"middle": [], |
|
"last": "Uijlings", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dieu-Thu Le, Jasper Uijlings, and Raffaella Bernardi, 2014. Proceedings of the Third Workshop on Vision and Language, chapter TUHOI: Trento Universal Hu- man Object Interaction Dataset, pages 17-24. Dublin City University and the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Lesk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Proceedings of the 5th Annual International Conference on Systems Documentation, SIGDOC 1986", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Docu- mentation, SIGDOC 1986, Toronto, Ontario, Canada, 1986, pages 24-26.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "English verb classes and alternations: A preliminary investigation", |
|
"authors": [ |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago Press.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Using syntactic dependency as local context to resolve word sense ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "64--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceed- ings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computa- tional Linguistics, pages 64-71. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Discriminating image senses by clustering with multimodal features", |
|
"authors": [ |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Loeff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecilia", |
|
"middle": [], |
|
"last": "Ovesdotter Alm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Forsyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "547--554", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolas Loeff, Cecilia Ovesdotter Alm, and David A. Forsyth. 2006. Discriminating image senses by clus- tering with multimodal features. In ACL 2006, 21st International Conference on Computational Linguis- tics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Confer- ence, Sydney, Australia, 17-21 July 2006, pages 547- 554. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Finding predominant word senses in untagged text", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Koeling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Weeds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "279--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguis- tics, pages 279-286. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Introduction to wordnet: An on-line lexical database", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Beckwith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "235--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J Miller. 1990. Introduction to wordnet: An on-line lexical database. International Journal of Lexicography, 3(4):235-244.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Word sense disambiguation: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM Computing Surveys (CSUR)", |
|
"volume": "41", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Describing common human visual actions in images", |
|
"authors": [ |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Ruggero Ronchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the British Machine Vision Conference (BMVC 2015)", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "52--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matteo Ruggero Ronchi and Pietro Perona. 2015. De- scribing common human visual actions in images. In Proceedings of the British Machine Vision Confer- ence (BMVC 2015), pages 52.1-52.12. BMVA Press, September.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes", |
|
"authors": [ |
|
{ |
|
"first": "Sascha", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1793--1803", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sascha Rothe and Hinrich Schutze. 2015. Autoex- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federa- tion of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1793-1803.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Unsupervised learning of visual sense models for polysemous words", |
|
"authors": [ |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1393--1400", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kate Saenko and Trevor Darrell. 2008. Unsupervised learning of visual sense models for polysemous words. In Advances in Neural Information Processing Sys- tems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Sys- tems, Vancouver, British Columbia, Canada, Decem- ber 8-11, 2008, pages 1393-1400.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Very deep convolutional networks for large-scale image recognition", |
|
"authors": [ |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Simonyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zisserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Show and tell: A neural image caption generator", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Toshev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3156--3164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Du- mitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156-3164.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "On deep multi-view representation learning", |
|
"authors": [ |
|
{ |
|
"first": "Weiran", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1083--1092", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiran Wang, Raman Arora, Karen Livescu, and Jeff A. Bilmes. 2015. On deep multi-view representation learning. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1083-1092.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Deep correlation for matching images and text", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krystian", |
|
"middle": [], |
|
"last": "Mikolajczyk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3441--3450", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Yan and Krystian Mikolajczyk. 2015. Deep corre- lation for matching images and text. In IEEE Con- ference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3441-3450.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Grouplet: A structured image representation for recognizing human and object interactions", |
|
"authors": [ |
|
{ |
|
"first": "Bangpeng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bangpeng Yao and Li Fei-Fei. 2010. Grouplet: A struc- tured image representation for recognizing human and object interactions. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 9-16. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Human action recognition by learning bases of action attributes and parts", |
|
"authors": [ |
|
{ |
|
"first": "Bangpeng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoye", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Khosla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [ |
|
"Lai" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [], |
|
"last": "Guibas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computer Vision (ICCV), 2011 IEEE International Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1331--1338", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas Guibas, and Li Fei-Fei. 2011. Hu- man action recognition by learning bases of action at- tributes and parts. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1331-1338. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "It makes sense: A wide-coverage word sense disambiguation system for free text", |
|
"authors": [ |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL 2010, Proceedings of the 48th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL 2010, Proceedings of the 48th", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, Sys- tem Demonstrations, pages 78-83.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Visual sense ambiguity: three of the senses of the verb play.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Example item for depictability and sense annotation: synset definitions and examples (in blue) for the verb touch.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Schematic overview of the visual sense disambiguation model.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"text": "Extracting visual sense representation for the verb play.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>: Comparison of VerSe with existing action</td></tr><tr><td>recognition datasets. Acts (actions) are verb-object</td></tr><tr><td>pairs; Sen indicates whether sense ambiguity is ex-</td></tr><tr><td>plicitly handled; Des indicates whether image de-</td></tr><tr><td>scriptions are included.</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Verb type Examples Verbs Images Senses Depct ITA Motion run, walk, jump, etc. 39 1812 10.76 5.79 0.680 Non-motion sit, stand, lay, etc. 51 1698 8.27 4.86 0.636", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Overview of VerSe dataset divided into motion and non-motion verbs; Depct: depictable senses; ITA: inter-annotator agreement.", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td/><td/><td/><td>Image Representations</td></tr><tr><td/><td colspan=\"2\">O: person, tennis</td><td>objects</td></tr><tr><td/><td colspan=\"2\">racket, sports ball</td><td/></tr><tr><td/><td>C: A woman is</td><td/><td>captions</td></tr><tr><td/><td>playing tennis.</td><td/><td/></tr><tr><td/><td/><td/><td>CNN-fc7</td></tr><tr><td/><td colspan=\"2\">play Sense Inventory: D</td><td/></tr><tr><td>s 1</td><td>s 2</td><td>s 3</td><td/></tr><tr><td>engage in competition</td><td>perform or transmit</td><td>playful engage in a</td><td>Lesk Algorithm</td></tr><tr><td>or sport</td><td>music</td><td>activity</td><td/></tr><tr><td/><td/><td/><td>\u03a6</td></tr><tr><td/><td/><td/><td>s 1</td></tr><tr><td/><td colspan=\"2\">Sense Representations</td><td/></tr></table>", |
|
"html": null, |
|
"text": "). Also, sense definitions are often very", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td colspan=\"5\">Motion verbs (19), FS: 60.0, MFS: 76.1</td></tr><tr><td>Features</td><td colspan=\"2\">GOLD</td><td colspan=\"2\">PRED</td></tr><tr><td/><td>Sup</td><td>Unsup</td><td>Sup</td><td>Unsup</td></tr><tr><td>O</td><td colspan=\"2\">82.3 35.3</td><td colspan=\"2\">80.0 43.8</td></tr><tr><td>C</td><td colspan=\"2\">78.4 53.8</td><td colspan=\"2\">69.2 41.5</td></tr><tr><td>O+C</td><td colspan=\"2\">80.0 55.3</td><td colspan=\"2\">70.7 45.3</td></tr><tr><td>CNN</td><td colspan=\"2\">82.3 58.4</td><td colspan=\"2\">82.3 58.4</td></tr><tr><td>CNN+O</td><td colspan=\"2\">83.0 48.4</td><td colspan=\"2\">83.0 60.0</td></tr><tr><td>CNN+C</td><td colspan=\"2\">82.3 66.9</td><td colspan=\"2\">82.3 53.0</td></tr><tr><td>CNN+O+C</td><td colspan=\"2\">83.0 58.4</td><td colspan=\"2\">83.0 55.3</td></tr></table>", |
|
"html": null, |
|
"text": "Accuracy scores for motion and non-motion verbs using for different types of sense and image representations (O: object labels, C: image descriptions, CNN: image features, FS: first sense heuristic, MFS: most frequent sense heuristic). Configurations that performed better than FS in bold.", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |