{
"paper_id": "K15-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:34.199304Z"
},
"title": "Linking Entities Across Images and Text",
"authors": [
{
"first": "Rebecka",
"middle": [],
"last": "Weegar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "DSV Stockholm University Kalle\u00c5str\u00f6m Dept. of Mathematics Lund University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lund University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a set of methods to link entities across images and text. As a corpus, we used a data set of images, where each image is commented by a short caption and where the regions in the images are manually segmented and labeled with a category. We extracted the entity mentions from the captions and we computed a semantic similarity between the mentions and the region labels. We also measured the statistical associations between these mentions and the labels and we combined them with the semantic similarity to produce mappings in the form of pairs consisting of a region label and a caption entity. In a second step, we used the syntactic relationships between the mentions and the spatial relationships between the regions to rerank the lists of candidate mappings. To evaluate our methods, we annotated a test set of 200 images, where we manually linked the image regions to their corresponding mentions in the captions. Eventually, we could match objects in pictures to their correct mentions for nearly 89 percent of the segments, when such a matching exists.",
"pdf_parse": {
"paper_id": "K15-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a set of methods to link entities across images and text. As a corpus, we used a data set of images, where each image is commented by a short caption and where the regions in the images are manually segmented and labeled with a category. We extracted the entity mentions from the captions and we computed a semantic similarity between the mentions and the region labels. We also measured the statistical associations between these mentions and the labels and we combined them with the semantic similarity to produce mappings in the form of pairs consisting of a region label and a caption entity. In a second step, we used the syntactic relationships between the mentions and the spatial relationships between the regions to rerank the lists of candidate mappings. To evaluate our methods, we annotated a test set of 200 images, where we manually linked the image regions to their corresponding mentions in the captions. Eventually, we could match objects in pictures to their correct mentions for nearly 89 percent of the segments, when such a matching exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Linking an object in an image to a mention of that object in an accompanying text is a challenging task, which we can imagine useful in a number of settings. It could, for instance, improve image retrieval by complementing the geometric relationships extracted from the images with textual descriptions from the text. A successful mapping would also make it possible to translate knowledge and information across image and text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe methods to link mentions of entities in captions to labeled image seg-ments and we investigate how the syntactic structure of a caption can be used to better understand the contents of an image. We do not address the closely related task of object recognition in the images. This latter task can be seen as a complement to entity linking across text and images. See Russakovsky et al. (2015) for a description of progress and results to date in object detection and classification in images.",
"cite_spans": [
{
"start": 393,
"end": 418,
"text": "Russakovsky et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 An Example Figure 1 shows an example of an image from the Segmented and Annotated IAPR TC-12 data set (Escalantea et al., 2010) . It has four regions labeled cloud, grass, hill, and river, and the caption: a flat landscape with a dry meadow in the foreground, a lagoon behind it and many clouds in the sky containing mentions of five entities that we identify with the words meadow, landscape, lagoon, cloud, and sky. A correct association of the mentions in the caption to the image regions would Figure 1 : Image from the Segmented and Annotated IAPR TC-12 data set with the caption: a flat landscape with a dry meadow in the foreground, a lagoon behind it and many clouds in the sky map clouds to the region labeled cloud, meadow to grass, and lagoon to river.",
"cite_spans": [
{
"start": 104,
"end": 129,
"text": "(Escalantea et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 1",
"ref_id": null
},
{
"start": 500,
"end": 508,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This image, together with its caption, illustrates a couple of issues: The objects or regions labelled or visible in an image are not always mentioned in the caption, and for most of the images in the data set, more entities are mentioned in the captions than there are regions in the images. In addition, for a same entity, the words used to mention it are usually different from the words used as labels (the categories), as in the case of grass and meadow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Related work includes the automatic generation of image captions that describes relevant objects in an image and their relationships. Kulkarni et al. (2011) assign each detected image object a visual attribute and a spatial relationship to the other objects in the image. The spatial relationships are translated into selected prepositions in the resulting captions. Elliott and Keller (2013) used manually segmented and labeled images and introduced visual dependency representations (VDRs) that describe spatial relationships between the image objects. The captions are generated using templates. Both Kulkarni et al. (2011) and Elliott and Keller (2013) used the BLEU-score and human evaluators to assess grammatically the generated captions and on how well they describe the image.",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "Kulkarni et al. (2011)",
"ref_id": "BIBREF11"
},
{
"start": 367,
"end": 392,
"text": "Elliott and Keller (2013)",
"ref_id": "BIBREF2"
},
{
"start": 604,
"end": 626,
"text": "Kulkarni et al. (2011)",
"ref_id": "BIBREF11"
},
{
"start": 631,
"end": 656,
"text": "Elliott and Keller (2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "3"
},
{
"text": "Although much work has been done to link complete images to a whole text, there are only a few papers on the association of elements inside a text and an image. Naim et al. (2014) analyzed parallel sets of videos and written texts, where the videos show laboratory experiments. Written instructions are used to describe how to conduct these experiments. The paper describes models for matching objects detected in the video with mentions of those objects in the instructions. The authors mainly focus on objects that get touched by a hand in the video. For manually annotated videos, Naim et al. (2014) could match objects to nouns nearly 50% of the time. Karpathy et al. (2014) proposed a system for retrieving related images and sentences. They used neural networks and they show that the results are improved if image objects and sentence fragments are included in the model. Sentence fragments are extracted from dependency graphs, where each edge in the graphs corresponds to a fragment.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "Naim et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 584,
"end": 602,
"text": "Naim et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 656,
"end": 678,
"text": "Karpathy et al. (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "3"
},
{
"text": "We used the Segmented and Annotated IAPR TC-12 Benchmark data set (Escalantea et al., 2010) that consists of about 20,000 photographs with a wide variety of themes. Each image has a short caption that describes its content, most often consisting of one to three sentences separated by semicolons. The images are manually segmented into regions with, on average, about 5 segments in each image.",
"cite_spans": [
{
"start": 66,
"end": 91,
"text": "(Escalantea et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "Each region is labelled with one out of 275 predefined image labels. The labels are arranged in a hierarchy, where all the nodes are available as labels and where object is the top node. The labels humans, animals, man-made, landscape/nature, food, and other form the next level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "An image caption describes a set of entities, the caption entities CE, where each entity CE i is referred to by a set of mentions M . To detect them, we applied the Stanford CoreNLP pipeline (Toutanova et al., 2003 ) that consists of a partof-speech tagger, lemmatizer, named entity recognizer (Finkel et al., 2005) , dependency parser, and coreference solver. We considered each noun in a caption as an entity candidate. If an entity CE i had only one mention M j , we identified it by the head noun of its mention. We represented the entities mentioned more than once by the head noun of their most representative mention. We applied the entity extraction to all the captions in the data set, and we found 3,742 different nouns or noun compounds to represent the entities.",
"cite_spans": [
{
"start": 191,
"end": 214,
"text": "(Toutanova et al., 2003",
"ref_id": "BIBREF19"
},
{
"start": 294,
"end": 315,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entities and Mentions",
"sec_num": "4.2"
},
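{
"text": "A minimal sketch (ours, not the authors' code) of this extraction step, with spaCy standing in for the Stanford CoreNLP pipeline; the model name and the noun filter are assumptions: every noun in the caption becomes an entity candidate identified by its lemma.\n\nimport spacy\n\n# Small English pipeline: tagger, lemmatizer, dependency parser, NER.\nnlp = spacy.load('en_core_web_sm')\n\ndef caption_entities(caption):\n    # Treat each noun in the caption as an entity candidate,\n    # identified by the lemma of the noun.\n    doc = nlp(caption)\n    return [tok.lemma_ for tok in doc if tok.pos_ in ('NOUN', 'PROPN')]\n\nprint(caption_entities('a flat landscape with a dry meadow in the foreground'))\n# expected: ['landscape', 'meadow', 'foreground']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entities and Mentions",
"sec_num": "4.2"
},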
{
"text": "In addition to the caption entities, each image has a set of labeled segments (or regions) corresponding to the image entities, IE. The Cartesian product of these two sets results in pairs P generating all the possible mappings of caption entities to image labels. We considered a pair (IE i , CE j ) a correct mapping, if the image label IE i and the caption entity CE j referred to the same entity. We represented a pair by the region label and the identifier of the caption entity, i.e. the head noun of the entity mention. In Fig. 1 , the correct pairs are (grass, meadow), (river, lagoon), and (cloud, clouds).",
"cite_spans": [],
"ref_spans": [
{
"start": 530,
"end": 536,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entities and Mentions",
"sec_num": "4.2"
},
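{
"text": "As a sketch (ours) of the pair-generation step, the Cartesian product of the image labels and the caption-entity identifiers of Fig. 1 yields the candidate mappings.\n\nfrom itertools import product\n\nimage_labels = ['cloud', 'grass', 'hill', 'river']               # IE\nentity_ids = ['landscape', 'meadow', 'lagoon', 'cloud', 'sky']   # CE\n\n# Every (image label, entity identifier) pair is a candidate mapping.\ncandidate_pairs = list(product(image_labels, entity_ids))\nprint(len(candidate_pairs))  # 20 candidates for this image",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entities and Mentions",
"sec_num": "4.2"
},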
{
"text": "As the Segmented and Annotated IAPR TC-12 data set does not provide information on links between the image regions and the mentions, we annotated a set of 200 randomly selected images from the data set to evaluate the automatic linking accuracy. We assigned the image regions to entities in the captions and we excluded these images from the training set. The annotation does not always produce a 1:1 mapping of caption entities to regions. In many cases, objects are grouped or divided into parts differently in the captions and in the segmentation. We created a set of guidelines to handle these mappings in a consistent way. Table 1 shows the sizes of the different image sets and the fraction of image regions that have a corresponding entity mention in the caption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building a Test Set",
"sec_num": "4.3"
},
{
"text": "Files ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Set",
"sec_num": null
},
{
"text": "To identify the links between the regions of an image and the entity identifiers in its caption, we first generated all the possible pairs. We then ranked these pairs using a semantic distance derived from WordNet (Miller, 1995) , statistical association metrics, and finally, a combination of both techniques.",
"cite_spans": [
{
"start": 214,
"end": 228,
"text": "(Miller, 1995)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Entity Pairs",
"sec_num": "5"
},
{
"text": "The image labels are generic English words that are semantically similar to those used in the captions. In Fig. 1 , cloud and clouds are used both as label and in the caption, but the region labeled grass is described as a meadow and the region labeled river, as a lagoon. We used the WordNet Similarity for Java library, (WS4J), (Shima, 2014) to compute the semantic similarity of the region labels and the entity identifiers. WS4J comes with a number of metrics that approximate similarity as distances between WordNet synsets: PATH, WUP (Wu and Palmer, 1994) , RES, (Resnik, 1995) , JCN (Jiang and Conrath, 1997), HSO (Hirst and St-Onge, 1998) , LIN (Lin, 1998) , LCH (Leacock and Chodorow, 1998) , and LESK (Banerjee and Banerjee, 2002) .",
"cite_spans": [
{
"start": 330,
"end": 343,
"text": "(Shima, 2014)",
"ref_id": "BIBREF18"
},
{
"start": 540,
"end": 561,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF21"
},
{
"start": 569,
"end": 583,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF16"
},
{
"start": 621,
"end": 646,
"text": "(Hirst and St-Onge, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 653,
"end": 664,
"text": "(Lin, 1998)",
"ref_id": "BIBREF13"
},
{
"start": 671,
"end": 699,
"text": "(Leacock and Chodorow, 1998)",
"ref_id": "BIBREF12"
},
{
"start": 711,
"end": 740,
"text": "(Banerjee and Banerjee, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 107,
"end": 113,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Distance",
"sec_num": "5.1"
},
{
"text": "We manually lemmatized and simplified the image labels and the entity mentions so that they are compatible with WordNet entries. It resulted in a smaller set of labels: 250 instead of the 275 original labels. We also simplified the named entities from the captions. When a person or location was not present in WordNet, we used its named entity type as identifier. In some cases, it was not possible to find an entity identifier in WordNet, mostly due to misspellings in the caption, like buldings, or buidling, or because of POS-tagging errors. We chose to identify these entities with the word entity. The normalization reduced the 3,742 entity identifiers to 2,216 unique ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Distance",
"sec_num": "5.1"
},
{
"text": "Finally, we computed a 250 \u00d7 2216 matrix containing the similarity scores for each (image label, entity identifier) pair for each of the WS4J semantic similarity metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Distance",
"sec_num": "5.1"
},
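{
"text": "A sketch (ours) of how such a matrix can be filled, with NLTK's WordNet interface and the PATH metric standing in for WS4J; taking the first noun synset of each word is a simplifying assumption.\n\nimport numpy as np\nfrom nltk.corpus import wordnet as wn\n\nlabels = ['cloud', 'grass', 'hill', 'river']         # 250 labels in the paper\nidentifiers = ['meadow', 'lagoon', 'sky', 'cloud']   # 2,216 identifiers in the paper\n\ndef path_sim(w1, w2):\n    # PATH similarity between the first noun synsets; 0.0 if a word is unknown.\n    s1, s2 = wn.synsets(w1, pos=wn.NOUN), wn.synsets(w2, pos=wn.NOUN)\n    return (s1[0].path_similarity(s2[0]) or 0.0) if s1 and s2 else 0.0\n\n# One row per image label, one column per entity identifier.\nsim = np.array([[path_sim(l, e) for e in identifiers] for l in labels])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Distance",
"sec_num": "5.1"
},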
{
"text": "We used three functions to reflect the statistical association between an image label and an entity identifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},
{
"text": "\u2022 Co-occurrence counts, i.e. the frequencies of the region labels and entity identifiers that occur together in the pictures of the training set;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},
{
"text": "\u2022 Pointwise mutual information (P M I) (Fano, 1961) that compares the joint probability of the occurrence of a (image label, entity identifier) pair to the independent probability of the region label and the caption entity occurring by themselves; and finally",
"cite_spans": [
{
"start": 39,
"end": 51,
"text": "(Fano, 1961)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},
{
"text": "\u2022 The simplified Student's t-score as described in Church and Mercer (1993) .",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "Church and Mercer (1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},
{
"text": "As with the semantic similarity scores, we used matrices to hold the scores for all the (image label, entity identifier) pairs for the three association metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},
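{
"text": "All three scores derive from counts over the training images; a sketch (ours) under the usual definitions, with PMI following Fano (1961) and the simplified t-score following Church and Mercer (1993).\n\nimport math\n\ndef association_scores(pair_count, label_count, entity_count, n_images):\n    # Maximum-likelihood estimates of the probabilities.\n    p_pair = pair_count / n_images\n    p_label = label_count / n_images\n    p_entity = entity_count / n_images\n    # Pointwise mutual information: log2 of P(l, e) / (P(l) P(e)).\n    pmi = math.log2(p_pair / (p_label * p_entity))\n    # Simplified Student's t-score.\n    t = (p_pair - p_label * p_entity) / math.sqrt(p_pair / n_images)\n    return pair_count, pmi, t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Associations",
"sec_num": "5.2"
},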
{
"text": "To associate the region labels of an image to the entities in its caption, we mapped the label L i to the caption entity E j that had the highest score with respect to L i . We did this for the three association scores and the eight semantic metrics. Note that a region label is not systematically paired with the same caption entity, since each caption contains different sets of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mapping Algorithm",
"sec_num": "5.3"
},
{
"text": "Background and foreground are two of the most frequent words in the captions and they were frequently assigned to image regions. Since they rarely represent entities, but merely tell where the entities are located, we included them in a list of stop words, as well as middle, left, right, and front that we removed from the identifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mapping Algorithm",
"sec_num": "5.3"
},
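{
"text": "A sketch (ours) of the resulting mapping step: stop words are filtered out and each region label is assigned the remaining caption entity with the highest score, where score(label, entity) is a lookup into any of the matrices above.\n\nSTOP_WORDS = {'background', 'foreground', 'middle', 'left', 'right', 'front'}\n\ndef map_regions(region_labels, caption_entities, score):\n    # Assign each region label its highest-scoring caption entity.\n    candidates = [e for e in caption_entities if e not in STOP_WORDS]\n    return {label: max(candidates, key=lambda e: score(label, e))\n            for label in region_labels}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mapping Algorithm",
"sec_num": "5.3"
},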
{
"text": "We applied the linking algorithm to the annotated set. We formed the Cartesian product of the image labels and the entity identifiers and, for each image region, we ranked the caption entities using the individual scoring functions. This results in an ordered list of entity candidates for each region. Table 2 shows the average ranks of the correct candidate for each of the scoring functions and the total number of correct candidates at different ranks.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Mapping Algorithm",
"sec_num": "5.3"
},
{
"text": "The algorithm in Sect. 5.3 determines the relationship holding between a pair of entities, where one element in the pair comes from the image and the other from the caption. The entities on each side are considered in isolation. We extended their description with relationships inside the image and the caption. Weegar et al. (2014) showed that pairs of entities in a text that were linked by the prepositions on, at, with, or in, often corresponded to pairs of segments that were close to each other. We further investigated the idea that spatial relationships in the image relate to syntactical relationships in the captions and we implemented it in the form of a reranker.",
"cite_spans": [
{
"start": 312,
"end": 332,
"text": "Weegar et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "6"
},
{
"text": "For each label-identifier pair, we included the relationship between the image segment in the pair and the closest segment in the image. As in Weegar et al. 2014, we defined the closeness as the Euclidean distance between the gravity centers of the bounding boxes of the segments. We also added the relationship between the caption entity in the label-identifier pair and the entity mentions which were the closest in the caption. We parsed the captions and we measured the distance as the number of edges between the two entities in the dependency graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "6"
},
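{
"text": "A sketch (ours) of the closeness measure, with bounding boxes given as (x_min, y_min, x_max, y_max) tuples.\n\nimport math\n\ndef center(box):\n    # Gravity center of a bounding box (x_min, y_min, x_max, y_max).\n    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)\n\ndef segment_distance(box_a, box_b):\n    # Euclidean distance between the gravity centers of two segments.\n    (xa, ya), (xb, yb) = center(box_a), center(box_b)\n    return math.hypot(xa - xb, ya - yb)\n\ndef closest_segment(box, other_boxes):\n    # other_boxes: {label: bounding box}; returns the label of the nearest segment.\n    return min(other_boxes, key=lambda lbl: segment_distance(box, other_boxes[lbl]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "6"
},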
{
"text": "The Segmented and Annotated IAPR TC-12 data set comes with annotations for three different types of spatial relationships holding between the segment pairs in each image: Topological, horizontal, and vertical (Hern\u00e1ndez-Gracidas and Su-car, 2007) . The possible values are adjacent or disjoint for the topological category, beside or horizontally aligned for the horizontal one, and finally above, below, or vertically aligned for the vertical one.",
"cite_spans": [
{
"start": 209,
"end": 246,
"text": "(Hern\u00e1ndez-Gracidas and Su-car, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spatial Features",
"sec_num": "6.1"
},
{
"text": "The syntactic features are all based on the structure of the sentences' dependency graphs. We followed the graph from the caption-entity in the pair to extract its closest ancestors and descendants. We only considered children to the right of the candidate. We also included all the prepositions between the entity and these ancestor and descendant. Figure 2 shows the dependency graph of the sentence a flat landscape with a dry meadow in the foreground. The descendants of the landscape entity are meadow and foreground linked respectively by the prepositions with and in. Its ancestor is the root node and the distance between landscape and meadow is 2. The syntactic features we extract for the entities in this sentence arranged in the order ancestor, distance to ancestor, preposition, descendant, distance to descendant, and preposition are for landscape, (root, 1, null, meadow, 2, with) and (root, 1, null, foreground, 2, in), for meadow, (landscape, 2, with, null, -, null), and for foreground, (landscape, 2, in, null, -, null). We discard foreground as it is part of the stop words.",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "6.2"
},
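{
"text": "A rough sketch (ours) of the descendant part of this extraction on a spaCy dependency parse; the prep/pobj traversal is an assumption about how the prepositional links surface in the graph.\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef noun_descendants(token):\n    # Closest noun descendants to the right and the prepositions linking them.\n    out = []\n    for prep in (c for c in token.rights if c.dep_ == 'prep'):\n        for obj in (c for c in prep.children if c.dep_ == 'pobj'):\n            out.append((obj.lemma_, prep.lemma_))\n    return out\n\ndoc = nlp('a flat landscape with a dry meadow in the foreground')\nlandscape = next(t for t in doc if t.lemma_ == 'landscape')\nprint(noun_descendants(landscape))\n# with a parse like Figure 2: [('meadow', 'with'), ('foreground', 'in')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "6.2"
},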
{
"text": "The single features consist of the label, entity identifier, and score of the pair. To take interaction into account, we also paired features characterizing properties across image and text. The list of these features is (Table 3): 1. The label of the image region and the identifier of the caption entity. In Fig 2, we create grass meadow from (grass, meadow).",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 231,
"text": "(Table 3):",
"ref_id": null
},
{
"start": 307,
"end": 316,
"text": "In Fig 2,",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pairing Features",
"sec_num": "6.3"
},
{
"text": "2. The label of the closest image segment to the ancestor of the caption entity. The closest Anc ClosestSeg: Closest segment label with the ancestor of the caption entity Desc ClosestSeg: Closest segment label with the descendant of the caption entity AncDist: Distance between the ancestor and the caption entity, and distance between segments DescDist: Distance between the descendant and the caption entity, and distance between the segments TopoRel DescPreps: Topological relationship between segments and the prepositions linking the caption entity with its descendant TopoRel AncPreps: Topological relationship between the segments and the prepositions linking the caption entity with its ancestor XRel DescPreps: Horizontal relationship between segments and the prepositions linking the caption entity with its descendant XRel AncPreps: Horizontal relationship between segments and the prepositions linking the caption entity with its ancestor YRel DescPreps: Vertical relationship between segments and the prepositions linking the caption entity with its descendant YRel AncPreps: Vertical relationship between segments and the prepositions linking the caption entity with its ancestor SegmentDist: Distance (in pixels) between the gravity center of the bounding boxes framing the two closest segments Table 3 : The reranking features using the current segment and its closest segment in the image segment of the grass segment is river and the ancestor of meadow is landscape. This gives the paired feature meadow landscape. The labels of the segments closest to the current segment and the descendant of meadow are also paired.",
"cite_spans": [],
"ref_spans": [
{
"start": 1310,
"end": 1317,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pairing Features",
"sec_num": "6.3"
},
{
"text": "the image divided into seven intervals with the distance between the caption entities. We measured the distance in pixels since all the images have the same pixel dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The distance between the segment pairs in",
"sec_num": "3."
},
{
"text": "4. The spatial relationships of the closest segments with the prepositions found between their corresponding caption entities. The segments grass and river in the image are adjacent and horizontally aligned and grass is located below the segment labeled river. Each of the spatial features is paired with the prepositions for both the ancestor and the de-scendant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The distance between the segment pairs in",
"sec_num": "3."
},
{
"text": "We trained the reranking models from the pairs of labeled segments and caption entities, where the correct mappings formed the positive examples and the rest, the negative ones. In Fig. 1 , the mapping (grass, meadow) is marked as correct for the region labeled grass, while the mappings (grass, lagoon) and (grass, cloud) are marked as incorrect. We used the manually annotated images (200 images, Table 1 ) as training data, a leave-oneout cross-validation, and L2-regularized logistic regression from LIBLINEAR (Fan et al., 2008) . We applied a cutoff of 3 for the list of candidates in the reranking and we multiplied the original score of the label-identifier pairs with the reranking probability. Table 4 : An example of an assignment before (upper part) and after (lower part) reranking. The caption entities are ranked according to the number of co-occurrences with the label. We obtain the new score for a label-identifier pair by multiplying the original score by the output of the reranker for this pair four regions in Fig. 1 . The column Entity 1 shows that the scoring function maps the caption entity sky to all of the regions. We created a reranker's feature vector for each of the 8 label-identifier pairs. Table 5 shows two of them corresponding to the pairs (grass, sky) and (grass, meadow). The pair (grass, meadow) is a correct mapping, but it has a lower co-occurrence score than the incorrect pair (grass, sky).",
"cite_spans": [
{
"start": 504,
"end": 532,
"text": "LIBLINEAR (Fan et al., 2008)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 181,
"end": 187,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 399,
"end": 406,
"text": "Table 1",
"ref_id": null
},
{
"start": 703,
"end": 710,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1031,
"end": 1037,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 1224,
"end": 1231,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "The distance between the segment pairs in",
"sec_num": "3."
},
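{
"text": "A condensed sketch (ours) of this training and rescoring loop, with scikit-learn's L2-regularized logistic regression standing in for LIBLINEAR; the two feature dictionaries are toy stand-ins for the vectors of Table 5.\n\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\n# One feature dict per (label, identifier) candidate; y = 1 for correct mappings.\nX_dicts = [{'label': 'grass', 'entity': 'sky', 'score': 1489},\n           {'label': 'grass', 'entity': 'meadow', 'score': 887}]\ny = [0, 1]\n\nvec = DictVectorizer()\nclf = LogisticRegression(penalty='l2', solver='liblinear')\nclf.fit(vec.fit_transform(X_dicts), y)\n\ndef rerank(candidates):\n    # candidates: list of (feature dict, original score); the reranker's\n    # probability for the positive class rescales each original score.\n    probs = clf.predict_proba(vec.transform([f for f, _ in candidates]))[:, 1]\n    return [s * p for (_, s), p in zip(candidates, probs)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking Example",
"sec_num": "6.4"
},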
{
"text": "In the cross-validation evaluation, we applied the classifier to these vectors and we obtained the reranking scores of 0.0244 for (grass, sky) and 0.79 for (grass, meadow) resulting in the respective final scores of 36 and 699. Table 4 , lower part, shows the new rankings, where the highest scores correspond to the associations: (cloud, cloud), (grass, meadow), (hill, landscape), and (river, cloud), which are all correct except the last one.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Reranking Example",
"sec_num": "6.4"
},
{
"text": "We evaluated the three scoring functions: Cooccurrence, mutual information, and t-score, and the semantic similarity functions. Each labeled segment in the annotated set was assigned the caption-entity that gave the highest scoring labelidentifier pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual Scoring Functions",
"sec_num": "7.1"
},
{
"text": "To confront the lack of annotated data we also investigated a self-training method. We used the statistical associations we derived from the training set and we applied the mapping procedure in Sect. 5.3 to this set. We repeated this procedure Table 5 : Feature vectors for the pairs (grass, meadow) and (grass, sky). The ancestor distance 2 a means that there are two edges in the dependency graph between the words meadow and landscape, and a represents the smallest of the distance intervals, meaning that the two segments grass and river are less than 50 pixels apart with the three statistical scoring functions. We counted all the mappings we obtained between the region labels and the caption identifiers and we used these counts to create three new scoring functions denoted with a sign. Table 6 shows the performance comparison between the different functions. The second column shows how many correct mappings were found by each function. The fourth column shows the improved score when the stop words were removed. The removal of the stop words as entity candidates improved the co-occurrence and tscore scoring functions considerably, but provided only marginal improvement for the scoring functions based on semantic similarity and pointwise mutual information. The percentage of correct mappings is based on the 730 regions that have a matching caption entity in the annotated test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 5",
"ref_id": null
},
{
"start": 796,
"end": 803,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Individual Scoring Functions",
"sec_num": "7.1"
},
{
"text": "The semantic similarity functions -PATH, HSO, JCN, LCH, LESK, LIN, RES and WUPoutperform the statistical one and the self-trained versions of the statistical scoring functions yield better results than the original ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual Scoring Functions",
"sec_num": "7.1"
},
{
"text": "We applied an ensemble voting procedure with the individual scoring functions, where each function was given a number of votes to place on its preferred label-identifier pair. We counted the votes and the entity that received the majority of the votes was selected as the mapping for the current label. Table 7 : Results of ensemble voting on the annotated set",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Individual Scoring Functions",
"sec_num": "7.1"
},
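{
"text": "A sketch (ours) of the voting scheme, written so that it also covers the weighted variant evaluated later: each scoring function places its votes on its preferred entity for the region, and the entity with the most votes wins.\n\nfrom collections import Counter\n\ndef ensemble_vote(label, entities, scoring_functions, weights=None):\n    weights = weights or [1] * len(scoring_functions)\n    votes = Counter()\n    for score, w in zip(scoring_functions, weights):\n        # Each function votes for its top-ranked entity for this label.\n        votes[max(entities, key=lambda e: score(label, e))] += w\n    # The entity with the most votes becomes the mapping for this label.\n    return votes.most_common(1)[0][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual Scoring Functions",
"sec_num": "7.1"
},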
{
"text": "We reranked all the scoring functions using the methods described in Sect. 6. We used the three label-identifier pairs with the highest score for each segment and function to build the model and we also reranked the top three label-identifier pairs for each of the assignments. Table 8 shows the results we obtained with the reranker compared to the original scoring functions. The reranking pro- Table 8 : The performance of the reranked scoring functions compared to the original scoring functions Figure 3 shows the comparison between the original scoring functions, the scoring functions without stop words, and the reranked versions. There is a total of 928 segments, where 730 have a matching entity in the caption.",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 8",
"ref_id": null
},
{
"start": 397,
"end": 404,
"text": "Table 8",
"ref_id": null
},
{
"start": 500,
"end": 508,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reranking",
"sec_num": "7.2"
},
{
"text": "We applied an ensemble voting with the reranked functions (Table 9) . Reranking yields a significant improvement for the statistical scoring functions. When they get one vote each in the ensemble voting, the results increase from 52% correct mappings to 75%. When used in an ensemble with the semantic similarity scoring functions, the results improve further.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 67,
"text": "(Table 9)",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Reranking",
"sec_num": "7.2"
},
{
"text": "Number We also evaluated ensemble voting with different numbers of votes for the different functions. We tested all the permutations of integer weights in the interval {0,3} on the development set. Table 10 shows the best result for both the original assignments and the reranked assignments on the test set. The reranked assignments gave the best results, 88.76% correct mappings, and this is also the best result we have been able to reach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring function",
"sec_num": null
},
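{
"text": "A sketch (ours) of that weight search: enumerate the integer weight vectors and keep the one that maximizes accuracy on the development set. The evaluate() helper is hypothetical, and exhaustive enumeration is only practical for a small number of functions.\n\nfrom itertools import product\n\ndef best_weights(functions, evaluate, max_weight=3):\n    # evaluate(functions, weights) -> fraction of correct mappings (hypothetical).\n    return max(product(range(max_weight + 1), repeat=len(functions)),\n               key=lambda weights: evaluate(functions, weights))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring function",
"sec_num": null
},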
{
"text": "The extraction of relations across text and image is a new area for research. We showed in this paper that we could use semantic and statistical functions to link the entities in an image to mentions of the same entities in captions describing this image. We also showed that using the syntactic structure of the caption and the spatial structure of the image improves linking accuracy. Eventually, we managed to map correctly nearly 89% of the image segments in our data set, counting only segments that have a matching entity in the caption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "The semantic similarity functions form the most accurate mapping tool, when using functions in isolation. The statistical functions improve sig- nificantly their results when they are used in an ensemble. This shows that it is preferable to use multiple scoring functions, as their different properties contribute to the final score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Including the syntactic structures of the captions and pairing them with the spatial structures of the images is also useful when mapping entities to segments. By training a model on such features and using this model to rerank the assignments, the ordering of entities in the assignments is improved with a better precision for all the scoring functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Although we used images manually annotated with segments and labels, we believe the methods we described here can be applied on automatically segmented and labeled images. Using image recognition would then certainly introduce incorrectly classified image regions and thus probably decrease the linking scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "This research was supported by Vetenskapsr\u00e5det under grant 621-2010-4800, and the Det digitaliserade samh\u00e4llet and eSSENCE programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An adapted Lesk algorithm for word sense disambiguation using Wordnet",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Satanjeev Banerjee. 2002. An adapted Lesk algorithm for word sense disambigua- tion using Wordnet. In Proceedings of the Third In- ternational Conference on Intelligent Text Process- ing and Computational Linguistics, pages 136-145.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the special issue on computational linguistics using large corpora",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Church and Robert Mercer. 1993. Introduc- tion to the special issue on computational linguis- tics using large corpora. Computational Linguistics, 19(1):1-24.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Image description using visual dependency representations",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1292--1302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Frank Keller. 2013. Image de- scription using visual dependency representations. In Proceedings of the 2013 Conference on Em- pirical Methods in Natural Language Processing, pages 1292-1302, Seattle, Washington, USA, Oc- tober. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The segmented and annotated IAPR TC-12 benchmark",
"authors": [
{
"first": "Hugo",
"middle": [
"Jair"
],
"last": "Escalante",
"suffix": ""
},
{
"first": "Carlos",
"middle": [
"A"
],
"last": "Hern\u00e1ndez",
"suffix": ""
},
{
"first": "Jesus",
"middle": [
"A"
],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "L\u00f3pez-L\u00f3pez",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Montes",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"F"
],
"last": "Morales",
"suffix": ""
},
{
"first": "L",
"middle": [
"Enrique"
],
"last": "Sucar",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Villase\u00f1or",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Grubinger",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Vision and Image Understanding",
"volume": "114",
"issue": "",
"pages": "419--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Jair Escalantea, Carlos A. Hern\u00e1ndeza, Je- sus A. Gonzaleza, A. L\u00f3pez-L\u00f3peza, Manuel Mon- tesa, Eduardo F. Moralesa, L. Enrique Sucara, Luis Villase\u00f1ora, and Michael Grubinger. 2010. The segmented and annotated IAPR TC-12 bench- mark. Computer Vision and Image Understanding, 114:419-428.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LIBLINEAR: A library for large linear classification",
"authors": [
{
"first": "Rong-En",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transmission of Information: A Statistical Theory of Communications",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Fano",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Fano. 1961. Transmission of Information: A Statistical Theory of Communications. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meet- ing of the ACL, pages 363-370, Ann Arbor.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Markov random fields and spatial information to improve automatic image annotation",
"authors": [
{
"first": "Carlos Arturo Hern\u00e1ndez-Gracidas",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"Enrique"
],
"last": "Sucar",
"suffix": ""
}
],
"year": 2007,
"venue": "Lecture Notes in Computer Science",
"volume": "4872",
"issue": "",
"pages": "879--892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Arturo Hern\u00e1ndez-Gracidas and Luis Enrique Sucar. 2007. Markov random fields and spatial in- formation to improve automatic image annotation. In Domingo Mery and Luis Rueda, editors, PSIVT, volume 4872 of Lecture Notes in Computer Science, pages 879-892. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lexical chains as representations of context for the detection and correction of malapropisms",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "St-Onge",
"suffix": ""
}
],
"year": 1998,
"venue": "WordNet: An Electronic Lexical Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Hirst and David St-Onge. 1998. Lexical chains as representations of context for the detec- tion and correction of malapropisms. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "Jay",
"middle": [
"J"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "David",
"middle": [
"W"
],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical tax- onomy. CoRR, cmp-lg/9709008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep fragment embeddings for bidirectional image sentence mapping",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy, Armand Joulin, and Li Fei-Fei. 2014. Deep fragment embeddings for bidirectional image sentence mapping. CoRR, abs/1406.5679.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Baby talk: Understanding and generating image descriptions",
"authors": [
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Visruth",
"middle": [],
"last": "Premraj",
"suffix": ""
},
{
"first": "Sagnik",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Siming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 24th CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Sim- ing Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. 2011. Baby talk: Understanding and generat- ing image descriptions. In Proceedings of the 24th CVPR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Combining local context and wordnet similarity for word sense identification",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 1998,
"venue": "Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and wordnet similarity for word sense identification. In Christiane Fellbaum, edi- tor, WordNet: An Electronic Lexical Database. MIT press, Cambridge, MA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. An information-theoretic defini- tion of similarity. In Proceedings of the 15th In- ternational Conference on Machine Learning, pages 296-304. Morgan Kaufmann.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41, November.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised alignment of natural language instructions with video segments",
"authors": [
{
"first": "Iftekhar",
"middle": [],
"last": "Naim",
"suffix": ""
},
{
"first": "Young",
"middle": [
"Chol"
],
"last": "Song",
"suffix": ""
},
{
"first": "Qiguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the National Conference on Artificial Intelligence (AAAI-14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2014. Unsu- pervised alignment of natural language instructions with video segments. In Proceedings of the National Conference on Artificial Intelligence (AAAI-14).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Pro- ceedings of the 14th International Joint Conference on Artificial Intelligence, pages 448-453.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ImageNet Large Scale Visual Recognition Challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. Interna- tional Journal of Computer Vision (IJCV).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WordNet Similarity for Java",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Shima",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Shima. 2014. WordNet Similarity for Java, February.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the HLT-NAACL",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the HLT- NAACL, pages 252-259, Edmonton.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Visual entity linking: A preliminary study",
"authors": [
{
"first": "Rebecka",
"middle": [],
"last": "Weegar",
"suffix": ""
},
{
"first": "Linus",
"middle": [],
"last": "Hammarlund",
"suffix": ""
},
{
"first": "Agnes",
"middle": [],
"last": "Tegen",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Oskarsson",
"suffix": ""
},
{
"first": "Kalle\u00e5str\u00f6m",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the AAAI 2014 Workshop on Cognitive Computing for Augmented Human Intelligence",
"volume": "",
"issue": "",
"pages": "46--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecka Weegar, Linus Hammarlund, Agnes Tegen, Magnus Oskarsson, Kalle\u00c5str\u00f6m, and Pierre Nugues. 2014. Visual entity linking: A preliminary study. In Proceedings of the AAAI 2014 Workshop on Cognitive Computing for Augmented Human In- telligence, pages 46-49, Qu\u00e9bec, July 27.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, ACL '94",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs seman- tics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Com- putational Linguistics, ACL '94, pages 133-138, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Dependency graph of the sentence a flat landscape with a dry meadow in the foreground",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "A comparison of the number of correctly assigned labels when using the different scoring functions. The leftmost bars show the results of the original functions, the middle bars show the performance when the stop words are removed, and the rightmost ones show the performance of the reranked functions cedure improves the performance of all the scoring functions, especially the statistical ones, where the maximal improvement reaches 58%.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"text": "Average rank of the correct candidate obtained by each scoring function on the 200 annotated images of the test set, and number of correct candidates that are ranked first, first or second, etc. The ceiling is 730",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Label: Simplified segment label</td><td>Entity: Identifier for the caption en-</td><td>Label Entity: Label and entity</td></tr><tr><td/><td>tity</td><td>features combined</td></tr><tr><td>Score: Score given by the current</td><td/><td/></tr><tr><td>scoring function</td><td/><td/></tr></table>",
"html": null
},
"TABREF3": {
"text": "upper part, shows the two top candidates obtained from the co-occurrence scores for the",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">Label Entity 1</td><td>Score Entity 2</td><td>Score</td></tr><tr><td colspan=\"2\">cloud sky</td><td>2207 cloud</td><td>1096</td></tr><tr><td colspan=\"2\">grass sky</td><td>1489 meadow</td><td>887</td></tr><tr><td>hill</td><td>sky</td><td>861 cloud</td><td>327</td></tr><tr><td>river</td><td>sky</td><td>655 cloud</td><td>250</td></tr><tr><td colspan=\"2\">cloud cloud</td><td>769 sky</td><td>422</td></tr><tr><td colspan=\"2\">grass meadow</td><td>699 landscape</td><td>176</td></tr><tr><td>hill</td><td>landscape</td><td>113 cloud</td><td>28</td></tr><tr><td>river</td><td>cloud</td><td>37 meadow</td><td>10</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>shows the results, where</td></tr></table>",
"html": null
},
"TABREF6": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">: Comparison of the individual scoring</td></tr><tr><td colspan=\"4\">functions. This test is performed on the annotated</td></tr><tr><td colspan=\"4\">set of 200 images, with 730 possible correct map-</td></tr><tr><td>pings</td><td/><td/><td/></tr><tr><td colspan=\"4\">we reached a maximum 79.45% correct mappings</td></tr><tr><td colspan=\"4\">when all the functions were used together with one</td></tr><tr><td>vote each.</td><td/><td/><td/></tr><tr><td colspan=\"4\">Scoring function Number of votes</td></tr><tr><td>co-oc.</td><td>1</td><td>0</td><td>1</td></tr><tr><td>PMI</td><td>1</td><td>0</td><td>1</td></tr><tr><td>t-score</td><td>1</td><td>0</td><td>1</td></tr><tr><td>co-oc.</td><td>1</td><td>0</td><td>1</td></tr><tr><td>PMI</td><td>1</td><td>0</td><td>1</td></tr><tr><td>t-score</td><td>1</td><td>0</td><td>1</td></tr><tr><td>PATH</td><td>0</td><td>1</td><td>1</td></tr><tr><td>HSO</td><td>0</td><td>1</td><td>1</td></tr><tr><td>JCN</td><td>0</td><td>1</td><td>1</td></tr><tr><td>LCH</td><td>0</td><td>1</td><td>1</td></tr><tr><td>LESK</td><td>0</td><td>1</td><td>1</td></tr><tr><td>LIN</td><td>0</td><td>1</td><td>1</td></tr><tr><td>RES</td><td>0</td><td>1</td><td>1</td></tr><tr><td>WUP</td><td>0</td><td>1</td><td>1</td></tr><tr><td>number correct</td><td colspan=\"3\">382 569 580</td></tr><tr><td>percent correct</td><td>52</td><td>78</td><td>79</td></tr></table>",
"html": null
},
"TABREF9": {
"text": "Results of ensemble voting with reranked assignments segments",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF11": {
"text": "Results of weighted ensemble voting.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}