{
"paper_id": "S12-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:50.299510Z"
},
"title": "Unsupervised Disambiguation of Image Captions",
"authors": [
{
"first": "Wesley",
"middle": [],
"last": "May",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Sven",
"middle": [],
"last": "Dickinson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Given a set of images with related captions, our goal is to show how visual features can improve the accuracy of unsupervised word sense disambiguation when the textual context is very small, as this sort of data is common in news and social media. We extend previous work in unsupervised text-only disambiguation with methods that integrate text and images. We construct a corpus by using Amazon Mechanical Turk to caption sensetagged images gathered from ImageNet. Using a Yarowsky-inspired algorithm, we show that gains can be made over text-only disambiguation, as well as multimodal approaches such as Latent Dirichlet Allocation.",
"pdf_parse": {
"paper_id": "S12-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Given a set of images with related captions, our goal is to show how visual features can improve the accuracy of unsupervised word sense disambiguation when the textual context is very small, as this sort of data is common in news and social media. We extend previous work in unsupervised text-only disambiguation with methods that integrate text and images. We construct a corpus by using Amazon Mechanical Turk to caption sensetagged images gathered from ImageNet. Using a Yarowsky-inspired algorithm, we show that gains can be made over text-only disambiguation, as well as multimodal approaches such as Latent Dirichlet Allocation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We examine the problem of performing unsupervised word sense disambiguation (WSD) in situations with little text, but where additional information is available in the form of an image. Such situations include captioned newswire photos, and pictures in social media where the textual context is often no larger than a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised WSD has been shown to work very well when the target word is embedded in a large",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We thank NSERC and U. Toronto for financial support. Fidler and Dickinson were sponsored by the Army Research Laboratory and this research was accomplished in part under Cooperative Agreement Number W911NF-10-2-0060. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Army Research Laboratory or the U.S. Government. quantity of text (Yarowsky, 1995) . However, if the only available text is \"The crane was so massive it blocked the sun\" (see Fig. 1) , then text-only disambiguation becomes much more difficult; a human could do little more than guess. But if an image is available, the intended sense is much clearer. We develop an unsupervised WSD algorithm based on Yarowsky's that uses words in a short caption along with \"visual words\" from the captioned image to choose the best of two possible senses of an ambiguous keyword describing the content of the image.",
"cite_spans": [
{
"start": 464,
"end": 480,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Fig. 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language-vision integration is a quickly developing field, and a number of researchers have explored the possibility of combining text and visual features in various multimodal tasks. Leong and Mihalcea (2011) explored semantic relatedness between words and images to better exploit multimodal content. Jamieson et al. (2009) and Feng and Lapata (2010) combined text and vision to perform effective image annotation. Barnard and colleagues (2003; 2005) showed that supervised WSD by could be improved with visual features. Here we show that unsupervised WSD can similarly be improved. Loeff, Alm and Forsyth (2006) and Saenko and Darrell (2008) combined visual and textual information to solve a related task, image sense disambiguation, in an unsupervised fashion. In Loeff et al.'s work, little gain was realized when visual features were added to a great deal of text. We show that these features have more utility with small textual contexts, and that, when little text is available, our method is more suitable than Saenko and Darrell's.",
"cite_spans": [
{
"start": 303,
"end": 325,
"text": "Jamieson et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 330,
"end": 352,
"text": "Feng and Lapata (2010)",
"ref_id": "BIBREF6"
},
{
"start": 417,
"end": 446,
"text": "Barnard and colleagues (2003;",
"ref_id": null
},
{
"start": 447,
"end": 452,
"text": "2005)",
"ref_id": "BIBREF0"
},
{
"start": 585,
"end": 614,
"text": "Loeff, Alm and Forsyth (2006)",
"ref_id": "BIBREF9"
},
{
"start": 619,
"end": 644,
"text": "Saenko and Darrell (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We model our algorithm after Yarowsky's (1995) algorithm for unsupervised WSD: Given a set of documents that contain a certain ambiguous word, the goal is to label each instance of that word as some particular sense. A seed set of collocations that strongly indicate one of the senses is initially used to label a subset of the data. Yarowsky then finds new collocations in the labelled data that are strongly associated with one of the current labels and applies these to unlabelled data. This process repeats iteratively, building a decision list of collocations that indicate a particular sense with a certain confidence.",
"cite_spans": [
{
"start": 29,
"end": 46,
"text": "Yarowsky's (1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},
{
"text": "In our algorithm (Algorithm 1), we have a document collection D of images relevant to an ambiguous keyword k with senses s 1 and s 2 (though the algorithm is extensible to more than two senses). Such a collection might result from an internet image search using an ambiguous word such as \"mouse\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},
{
"text": "Each D i is an image-caption pair repsented as a bag-of-words that includes both lexical words from the caption, and \"visual words\" from the image. A visual word is simply an abstract representation that describes a small portion of an image, such that similar portions in other images are represented by the same visual word (see Section 3.2 for details). Our seed sets consist of the words in the definitions of s 1 and s 2 from WordNet (Fellbaum, 1998) . Any document whose caption contains more words from one sense definition than the other is initially labelled with that sense. We then iterate between two steps that (i) find additional words associated with s 1 or s 2 in currently labelled data, and (ii) relabel all data using the word sense associations discovered so far.",
"cite_spans": [
{
"start": 439,
"end": 455,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},
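{
"text": "To make the seeding step concrete, the following minimal Python sketch labels a document by its overlap with the two sense definitions. It is an illustration rather than the authors' implementation; the example seed sets and documents are invented, and in the full system the definition words come from WordNet and the documents also contain visual words.

def seed_label(documents, seed1, seed2):
    # Return an initial label (1, 2, or None) for each bag-of-words document,
    # based on its overlap with the two sense-definition seed sets.
    labels = []
    for doc in documents:
        overlap1 = len(doc & seed1)
        overlap2 = len(doc & seed2)
        if overlap1 > overlap2:
            labels.append(1)
        elif overlap2 > overlap1:
            labels.append(2)
        else:
            labels.append(None)  # tie or no seed words: leave unlabelled
    return labels

# Invented example for the keyword 'crane' (paraphrased definitions, not WordNet text):
seed_bird = {'large', 'wading', 'bird', 'long', 'neck'}
seed_machine = {'lifting', 'machine', 'hoist', 'heavy', 'load'}
docs = [{'crane', 'bird', 'water', 'neck'}, {'crane', 'construction', 'lifting', 'steel'}]
print(seed_label(docs, seed_bird, seed_machine))  # -> [1, 2]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},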
{
"text": "We let V be the entire vocabulary of words across all documents. We run experiements both with and without visual words, but when we use visual words, they are included in V . In the first step, we compute a confidence C i for each word V i . This confidence is a log-ratio of the probability of seeing V i in documents labelled as s 1 as opposed to documents labelled as s 2 . That is, a positive C i indicates greater association with s 1 , and vice versa. In the second step we find, for each document D j , the word V i \u2208 D j with the highest magnitude of C i . If the magnitude of C i is above a labelling threshold \u03c4 c , then we label this document as s 1 or s 2 depending on the sign of C i . Note that all old labels are discarded before this step, so labelled documents may become unlabelled, or even differently labelled, as the algorithm progresses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},
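{
"text": "The two update steps described above can be sketched in Python as follows. The add-alpha smoothing that keeps the log-ratio finite is an assumption (the paper does not state how zero counts are handled), and the function and variable names are illustrative.

import math
from collections import Counter

def word_confidences(documents, labels, vocab, alpha=1.0):
    # C_i = log( P(V_i | L_1) / P(V_i | L_2) ), estimated from the currently
    # labelled documents with add-alpha smoothing (assumed, to avoid log(0)).
    counts1, counts2 = Counter(), Counter()
    for doc, lab in zip(documents, labels):
        if lab == 1:
            counts1.update(doc)
        elif lab == 2:
            counts2.update(doc)
    total1 = sum(counts1.values()) + alpha * len(vocab)
    total2 = sum(counts2.values()) + alpha * len(vocab)
    return {w: math.log((counts1[w] + alpha) / total1) - math.log((counts2[w] + alpha) / total2)
            for w in vocab}

def relabel(documents, conf, tau):
    # Discard the old labels and relabel each document by the sign of the
    # highest-magnitude confidence among its words, if it exceeds the threshold tau.
    labels = []
    for doc in documents:
        words = [w for w in doc if w in conf]
        if not words:
            labels.append(None)
            continue
        m = max(words, key=lambda w: abs(conf[w]))
        if conf[m] > tau:
            labels.append(1)
        elif conf[m] < -tau:
            labels.append(2)
        else:
            labels.append(None)
    return labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "2"
},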
{
"text": "D: set of documents D_1 ... D_d
V: set of lexical and visual words V_1 ... V_v in D
C_i: log-confidence V_i is sense 1 vs. sense 2
S_1 and S_2: bag of dictionary words for each sense
L_1 and L_2: documents labelled as sense 1 or 2

for all D_i do    {initial labelling using the seed sets}
    if |D_i \u2229 S_1| > |D_i \u2229 S_2| then L_1 \u2190 L_1 \u222a {D_i}
    else if |D_i \u2229 S_1| < |D_i \u2229 S_2| then L_2 \u2190 L_2 \u222a {D_i}
    end if
end for
repeat
    for all i \u2208 1..v do    {update word confidences}
        C_i \u2190 log [ P(V_i|L_1) / P(V_i|L_2) ]
    end for
    L_1 \u2190 \u2205, L_2 \u2190 \u2205    {update document labels}
    for all D_i do
        m \u2190 arg max_{j \u2208 1..v, V_j \u2208 D_i} |C_j|    {find the word with highest confidence}
        if C_m > \u03c4_c then L_1 \u2190 L_1 \u222a {D_i}
        else if C_m < \u2212\u03c4_c then L_2 \u2190 L_2 \u222a {D_i}
        end if
    end for
until no change to L_1 or L_2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Proposed Algorithm",
"sec_num": null
},
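{
"text": "For concreteness, the pseudocode above can be driven by the Python loop below, reusing seed_label, word_confidences, and relabel from the two sketches earlier in this section. The threshold tau and the iteration cap are illustrative settings; the paper reports neither.

def disambiguate(documents, seed1, seed2, tau=1.0, max_iters=100):
    # Iterate the two update steps until the labelling stops changing.
    # max_iters is a safety cap added for this sketch; Algorithm 1 simply
    # repeats until convergence.
    vocab = set().union(*documents)
    labels = seed_label(documents, seed1, seed2)
    for _ in range(max_iters):
        conf = word_confidences(documents, labels, vocab)
        new_labels = relabel(documents, conf, tau)
        if new_labels == labels:
            break
        labels = new_labels
    return labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Proposed Algorithm",
"sec_num": null
},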
{
"text": "We require a collection of images with associated captions. We also require sense annotations for the keyword for each image to use for evaluation. Barnard and Johnson (2005) developed the \"Music is an important means of expression for many teens.\" \"Keeping your office supplies organized is easy, with the right tools.\" \"The internet has opened up the world to people of all nationalities.\"",
"cite_spans": [
{
"start": 148,
"end": 174,
"text": "Barnard and Johnson (2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Proposed Algorithm",
"sec_num": null
},
{
"text": "\"When there is no cheese I will take over the world.\" Figure 2 : Example image-caption pairs from our dataset, for \"band\" (top) and \"mouse\" (bottom).",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm 1 Proposed Algorithm",
"sec_num": null
},
{
"text": "ImCor dataset by associating images from the Corel database with text from the SemCor corpus (Miller et al., 1993) . Loeff et al. (2006) and Saenko and Darrell (2008) used Yahoo!'s image search to gather images with their associated web pages. While these datasets contain images paired with text, the textual contexts are much larger than typical captions.",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF11"
},
{
"start": 117,
"end": 136,
"text": "Loeff et al. (2006)",
"ref_id": "BIBREF9"
},
{
"start": 141,
"end": 166,
"text": "Saenko and Darrell (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Proposed Algorithm",
"sec_num": null
},
{
"text": "To develop a large set of sense-annotated imagecaption pairs with a focus on caption-sized text, we turned to ImageNet (Deng et al., 2009) . ImageNet is a database of images that are each associated with a synset from WordNet. Hundreds of images are available for each of a number of senses of a wide variety of common nouns. To gather captions, we used Amazon Mechanical Turk to collect five sentences for each image. We chose two word senses for each of 20 polysemous nouns and for each sense we collected captions for 50 representative images. For each image we gathered five captions, for a total of 10,000 captions. As we have five captions for each image, we split our data into five sets. Each set has the same images, but each image is paired with a different caption in each set.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Captioning Images",
"sec_num": "3.1"
},
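{
"text": "A sketch of how the five evaluation sets can be formed is shown below; the record layout (a dictionary with an image identifier, its gold sense, and its five captions) is hypothetical and used only for illustration.

def make_caption_sets(records, n_sets=5):
    # records: [{'image_id': ..., 'sense': ..., 'captions': [c1, ..., c5]}, ...]
    # Set i pairs every image with its i-th caption, so all five sets share the
    # same images but differ in the caption text.
    return [[{'image_id': r['image_id'], 'sense': r['sense'], 'caption': r['captions'][i]}
             for r in records]
            for i in range(n_sets)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Captioning Images",
"sec_num": "3.1"
},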
{
"text": "We specified to the Turkers that the sentences should be relevant to, but should not talk directly about, the image, as in \"In this picture there is a blue fish\", as such captions are very unnatural. True captions generally offer orthogonal information that is not readily apparent from the image. The keyword for each image (as specified by ImageNet) was not presented to the Turkers, so the captions do not necessarily contain it. Knowledge of the keyword is presumed to be available to the algorithm in the form of an image tag, or filename, or the like. We found that forcing a certain word to be included in the caption also led to sentences that described the picture very directly. Sentences were required to be a least ten words long, and have acceptable grammar and spelling. We remove stop words from the captions and lemmatize the remaining words. See Figure 2 for some examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 863,
"end": 871,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Captioning Images",
"sec_num": "3.1"
},
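{
"text": "The preprocessing just described (stop-word removal and lemmatization) could be done as in the sketch below. NLTK is an assumption made for illustration (the paper does not name its tools), and the snippet requires NLTK's 'stopwords' and 'wordnet' data packages.

import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

STOP = set(stopwords.words('english'))
LEMMATIZER = WordNetLemmatizer()

def caption_to_bag(caption):
    # Lowercase, keep alphabetic tokens, drop stop words, lemmatize, and return
    # the bag (here, a set) of content words used by the disambiguation algorithm.
    tokens = re.findall(r'[a-z]+', caption.lower())
    return {LEMMATIZER.lemmatize(t) for t in tokens if t not in STOP}

# caption_to_bag('The crane was so massive it blocked the sun.')
# -> {'crane', 'massive', 'blocked', 'sun'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Captioning Images",
"sec_num": "3.1"
},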
{
"text": "We compute visual words for each image with Ima-geNet's feature extractor. This extractor lays down a grid of overlapping squares onto the image and computes a SIFT descriptor (Lowe, 2004) for each square. Each descriptor is a vector that encodes the edge orientation information in a given square. The descriptors are computed at three scales: 1x, 0.5x and 0.25x the original side lengths. These vectors are clustered with k-means into 1000 clusters, and the labels of these clusters (arbitrary integers from 1 to 1000) serve as our visual words.",
"cite_spans": [
{
"start": 176,
"end": 188,
"text": "(Lowe, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Visual Words",
"sec_num": "3.2"
},
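{
"text": "The quantization step can be sketched as follows. The sketch assumes the 128-dimensional SIFT descriptors have already been extracted on the dense grid (for example with the ImageNet feature extractor or OpenCV); only the k-means vocabulary and the assignment of descriptors to visual words are shown, and the token format is illustrative.

import numpy as np
from sklearn.cluster import KMeans

def build_visual_vocabulary(descriptor_lists, n_words=1000, seed=0):
    # Stack the descriptors of all images and cluster them; each of the
    # n_words cluster indices serves as one visual word.
    all_descriptors = np.vstack(descriptor_lists)
    return KMeans(n_clusters=n_words, n_init=1, random_state=seed).fit(all_descriptors)

def image_to_visual_words(kmeans, descriptors):
    # Map one image's descriptors to visual-word tokens such as 'vis_417'.
    return ['vis_%d' % c for c in kmeans.predict(descriptors)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Visual Words",
"sec_num": "3.2"
},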
{
"text": "It is common for each image to have a \"vocabulary\" of over 300 distinct visual words, many of which only occur once. To denoise the visual data, we use only those visual words which account for at least 1% of the total visual words for that image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Visual Words",
"sec_num": "3.2"
},
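{
"text": "The 1% filter can be implemented as in this short sketch (the token format follows the previous sketch and is illustrative):

from collections import Counter

def denoise_visual_words(visual_words, min_fraction=0.01):
    # Keep only visual words that account for at least min_fraction of the
    # image's visual words; the rest are treated as noise.
    if not visual_words:
        return []
    counts = Counter(visual_words)
    total = float(sum(counts.values()))
    return [w for w in visual_words if counts[w] / total >= min_fraction]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Visual Words",
"sec_num": "3.2"
},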
{
"text": "To show that the addition of visual features improves the accuracy of sense disambiguation for imagecaption pairs, we run our algorithm both with and without the visual features. We also compare our results to three different baseline methods: K-means (K-M), Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , and an unsupervised WSD algorithm (PBP) explained below. We use accuracy to measure performance as it is commonly used by the WSD community (See Table 1 ).",
"cite_spans": [
{
"start": 293,
"end": 312,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 460,
"end": 467,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "For K-means, we set k = 2 as we have two senses, and represent each document with a V -dimensional vector, where the ith element is the proportion of word V i in the document. We run K-means both with and without visual features. For LDA, we use the dictionary sense model from Saenko and Darrell (2008) . A topic model is learned where the relatedness of a topic to a sense is based on the probabilities of that topic generating the seed words from its dictionary definitions. Analogously to k-means, we learn a model for text alone, and a model for text augmented with visual information.",
"cite_spans": [
{
"start": 278,
"end": 303,
"text": "Saenko and Darrell (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
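{
"text": "A minimal sketch of the K-means baseline is given below. How clusters are mapped to senses is not specified in the paper; the sketch scores both possible mappings and keeps the better one, which is an assumption made for illustration.

import numpy as np
from sklearn.cluster import KMeans

def proportion_vectors(documents, vocab):
    # Row d holds the proportion of each vocabulary word in document d.
    index = {w: i for i, w in enumerate(sorted(vocab))}
    X = np.zeros((len(documents), len(vocab)))
    for d, doc in enumerate(documents):
        for w in doc:
            if w in index:
                X[d, index[w]] += 1.0
        if X[d].sum() > 0:
            X[d] /= X[d].sum()
    return X

def kmeans_baseline_accuracy(documents, vocab, gold_senses, seed=0):
    # gold_senses: 0/1 gold labels, used only for evaluation.
    X = proportion_vectors(documents, vocab)
    predicted = KMeans(n_clusters=2, random_state=seed).fit_predict(X)
    accuracy = np.mean(predicted == np.asarray(gold_senses))
    return max(accuracy, 1.0 - accuracy)  # best of the two cluster-to-sense mappings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},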
{
"text": "For unsupervised WSD (applied to text only), we use WordNet::SenseRelate::TargetWord, hereafter PBP (Patwardhan et al., 2007) , the highest scoring unsupervised lexical sample word sense disambiguation algorithm at SemEval07 (Pradhan et al., 2007) . PBP treats the nearby words around the target word as a bag, and uses the WordNet hierarchy to assign a similarity score between the possible senses of words in the context, and possible senses of the target word. As our captions are fairly short, we use the entire caption as context.",
"cite_spans": [
{
"start": 100,
"end": 125,
"text": "(Patwardhan et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 225,
"end": 247,
"text": "(Pradhan et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
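{
"text": "PBP itself is an existing package; the sketch below is only a simplified stand-in in the same spirit, scoring each candidate noun sense of the target word by its WordNet similarity to the senses of the caption words. NLTK's WordNet interface and Wu-Palmer similarity are illustrative choices, not a description of PBP's internals.

from nltk.corpus import wordnet as wn

def wordnet_relatedness_disambiguate(target, context_words):
    # Score each candidate sense of the target by summing, over context words,
    # the best similarity to any of that word's noun senses; return the winner.
    best_sense, best_score = None, float('-inf')
    for sense in wn.synsets(target, pos=wn.NOUN):
        score = 0.0
        for word in context_words:
            similarities = [sense.wup_similarity(s) or 0.0
                            for s in wn.synsets(word, pos=wn.NOUN)]
            if similarities:
                score += max(similarities)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# e.g. wordnet_relatedness_disambiguate('crane', ['bird', 'water', 'neck'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},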
{
"text": "The most important result is the gain in accuracy after adding visual features. While the average gain across all words is slight, it is significant at p < 0.02 (using a paired t-test). For 12 of the 20 words, the visual features improve performance, and in 6 of those, the improvement is 5-11%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
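{
"text": "The significance claim corresponds to a paired t-test over per-word accuracies, as in the sketch below; the four example rows are taken from Table 1 ('Ours text' vs. 'Ours w/vis') purely to show the call, whereas the reported p < 0.02 is computed over all 20 words.

from scipy.stats import ttest_rel

# Per-word accuracies for the first four words of Table 1 (band, bank, bass, chip).
acc_text_only = [0.80, 0.77, 0.94, 0.90]
acc_with_vision = [0.82, 0.78, 0.94, 0.90]

t_statistic, p_value = ttest_rel(acc_with_vision, acc_text_only)
print('t = %.3f, p = %.3f' % (t_statistic, p_value))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},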
{
"text": "For some words there is no significant improvement in accuracy, or even a slight decrease. With words like \"bass\" or \"chip\" there is little room to improve upon the text-only result. For words like \"plant\" or \"press\" it seems the text-only result is not strong enough to help bootstrap the visual features in any useful way. In other cases where little improvement is seen, the problem may lie with high intra-class variation, as our visual words are not very robust features, or with a lack of orthogonality between the lexical and visual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Our algorithm also performs significantly better than the baseline measurements. K-means performs surprisingly well compared to the other baselines, but seems unable to make much sense of the visual information present. Saenko and Darrell's (2008) LDA model makes substansial gains by using visual features, but does not perform as well on this task. We suspect that a strict adherence to the seed words may be to blame: while both this LDA model and our algorithm use the same seed definitions initially, our algorithm is free to change its mind about the usefulness of the words in the definitions as it progresses, whereas the LDA model has no such capacity. Indeed, words that are intuitively nondiscriminative, such as \"carry\", \"lack\", or \"late\", are not uncommon in the definitions we use.",
"cite_spans": [
{
"start": 220,
"end": 247,
"text": "Saenko and Darrell's (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "We present an approach to unsupervised WSD that works jointly with the visual and textual domains. We showed that this multimodal approach makes gains over text-only disambiguation, and outperforms previous approaches for WSD (both text-only, and multimodal), when textual contexts are limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "This project is still in progress, and there are many avenues for further study. We do not currently exploit collocations between lexical and visual information. Also, the bag-of-SIFT visual features that we use, while effective, have little semantic content. More structured representations over segmented image regions offer greater potential for encoding semantic content (Duygulu et al., 2002) .",
"cite_spans": [
{
"start": 375,
"end": 397,
"text": "(Duygulu et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word sense disambiguation with pictures",
"authors": [
{
"first": "Kobus",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Artificial Intelligence",
"volume": "167",
"issue": "",
"pages": "13--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kobus Barnard and Matthew Johnson. 2005. Word sense disambiguation with pictures. In Artificial In- telligence, volume 167, pages 13-130.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word sense disambiguation with pictures",
"authors": [
{
"first": "Kobus",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2003,
"venue": "Workshop on Learning Word Meaning from Non-Linguistic Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kobus Barnard, Matthew Johnson, and David Forsyth. 2003. Word sense disambiguation with pictures. In Workshop on Learning Word Meaning from Non- Linguistic Data, Edmonton, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "JMLR",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. In JMLR, volume 3, pages 993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierar- chical image database. In IEEE Conference on Com- puter Vision and Pattern Recognition.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary",
"authors": [
{
"first": "Pinar",
"middle": [],
"last": "Duygulu",
"suffix": ""
},
{
"first": "Kobus",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nando De Freitas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2002,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinar Duygulu, Kobus Barnard, Nando de Freitas, and David Forsyth. 2002. Object recognition as machine translation: Learning a lexicon for a fixed image vo- cabulary. In European Conference on Computer Vi- sion, Copenhagen, Denmark.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wordnet: An electronic lexical database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "Bradford Books",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. Wordnet: An electronic lex- ical database. In Bradford Books.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Topic models for image annotation and text illustration",
"authors": [
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Annual Conference of the North American Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "831--839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansong Feng and Mirella Lapata. 2010. Topic models for image annotation and text illustration. In Annual Conference of the North American Chapter of the ACL, pages 831-839, Los Angeles, California.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using language to learn structured appearance models for image annotation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Jamieson",
"suffix": ""
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "32",
"issue": "1",
"pages": "148--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Jamieson, Afsaneh Fazly, Suzanne Stevenson, Sven Dickinson, and Sven Wachsmuth. 2009. Using language to learn structured appearance models for im- age annotation. IEEE Transactions on Pattern Analy- sis and Machine Intelligence, 32(1):148-164.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measuring the semantic relatedness between words and images",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Chee Wee Leong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Semantic Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chee Wee Leong and Rada Mihalcea. 2011. Measuring the semantic relatedness between words and images. In International Conference on Semantic Computing, Oxford, UK.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discriminating image senses by clustering with multimodal features",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Loeff",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "547--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Loeff, Cecilia Ovesdotter Alm, and David Forsyth. 2006. Discriminating image senses by clus- tering with multimodal features. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 547-554, Sydney, Australia.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distinctive image features from scale-invariant keypoints",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2004,
"venue": "International Journal of Computer Vision",
"volume": "60",
"issue": "2",
"pages": "91--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Lowe. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A semantic concordance",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 3rd DARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "303--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Miller, Claudia Leacock, Randee Tengi, and Ross Bunker. 1993. A semantic concordance. In Proceed- ings of the 3rd DARPA Workshop on Human Language Technology, pages 303-308.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "UMND1: Unsupervised word sense disambiguation using contextual semantic relatedness",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007",
"volume": "",
"issue": "",
"pages": "390--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan, Satanjeev Banerjee, and Ted Ped- ersen. 2007. UMND1: Unsupervised word sense disambiguation using contextual semantic relatedness. In Proceedings of SemEval-2007, pages 390-393, Prague, Czech Republic.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "English lexical sample, SRL and all words",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007",
"volume": "17",
"issue": "",
"pages": "87--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. Task 17: English lexical sam- ple, SRL and all words. In Proceedings of SemEval- 2007, pages 87-92, Prague, Czech Republic.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised learning of visual sense models for polysemous words",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate Saenko and Trevor Darrell. 2008. Unsupervised learning of visual sense models for polysemous words. In Proceedings of Neural Information Processing Sys- tems, Vancouver, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Proceed- ings of the 33rd Annual Meeting of the ACL, pages 189-196, Cambridge, Massachusetts.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "\"The crane was so massive it blocked the sun.\" Which sense of crane? With images the answer is clear.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Ours Ours K-M K-M LDA LDA PBP</td></tr><tr><td/><td colspan=\"2\">text w/vis text w/vis text w/vis text</td></tr><tr><td>band</td><td>.80 .82 .66 .65</td><td>.64 .56 .73</td></tr><tr><td>bank</td><td>.77 .78 .71 .59</td><td>.52 .67 .62</td></tr><tr><td>bass</td><td>.94 .94 .90 .88</td><td>.61 .62 .49</td></tr><tr><td>chip</td><td>.90 .90 .73 .58</td><td>.57 .66 .75</td></tr><tr><td>clip</td><td>.70 .79 .65 .58</td><td>.48 .53 .65</td></tr><tr><td>club</td><td>.80 .84 .80 .81</td><td>.61 .73 .63</td></tr><tr><td>court</td><td>.79 .79 .61 .53</td><td>.62 .82 .57</td></tr><tr><td colspan=\"2\">crane .62 .67 .76 .76</td><td>.52 .54 .66</td></tr><tr><td colspan=\"2\">game .78 .78 .60 .66</td><td>.60 .66 .70</td></tr><tr><td>hood</td><td>.74 .73 .73 .70</td><td>.51 .45 .55</td></tr><tr><td>jack</td><td>.76 .74 .62 .53</td><td>.58 .66 .47</td></tr><tr><td>key</td><td>.81 .92 .79 .54</td><td>.57 .70 .50</td></tr><tr><td>mold</td><td>.67 .68 .59 .67</td><td>.57 .66 .54</td></tr><tr><td colspan=\"2\">mouse .84 .84 .71 .62</td><td>.62 .69 .68</td></tr><tr><td>plant</td><td>.54 .54 .56 .53</td><td>.52 .50 .72</td></tr><tr><td>press</td><td>.60 .59 .60 .54</td><td>.58 .62 .48</td></tr><tr><td>seal</td><td>.70 .80 .61 .67</td><td>.55 .53 .62</td></tr><tr><td colspan=\"2\">speaker .70 .69 .57 .53</td><td>.55 .62 .63</td></tr><tr><td colspan=\"2\">squash .89 .95 .84 .92</td><td>.55 .67 .79</td></tr><tr><td>track</td><td>.78 .85 .71 .66</td><td>.51 .54 .69</td></tr><tr><td>avg.</td><td>.76 .78 .69 .65</td><td>.56 .63 .62</td></tr></table>",
"html": null,
"text": "Results (Average accuracy across all five sets of data). Bold indicates best performance for that word."
}
}
}
}