{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:45.196758Z"
},
"title": "ViLBERTScore: Evaluating Image Caption Using Vision-and-Language BERT",
"authors": [
{
"first": "Hwanhee",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {
"settlement": "Seoul",
"country": "Korea"
}
},
"email": ""
},
{
"first": "Seunghyun",
"middle": [],
"last": "Yoon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {
"settlement": "Seoul",
"country": "Korea"
}
},
"email": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": "",
"affiliation": {
"laboratory": "Adobe Research",
"institution": "",
"location": {
"settlement": "San Jose",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Doo",
"middle": [
"Soon"
],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "Adobe Research",
"institution": "",
"location": {
"settlement": "San Jose",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": "",
"affiliation": {
"laboratory": "Adobe Research",
"institution": "",
"location": {
"settlement": "San Jose",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Kyomin",
"middle": [],
"last": "Jung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {
"settlement": "Seoul",
"country": "Korea"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose an evaluation metric for image captioning systems using both image and text information. Unlike the previous methods that rely on textual representations in evaluating the caption, our approach uses visiolinguistic representations. The proposed method generates image-conditioned embeddings for each token using ViLBERT from both generated and reference texts. Then, these contextual embeddings from each of the two sentence-pair are compared to compute the similarity score. Experimental results on three benchmark datasets show that our method correlates significantly better with human judgments than all existing metrics.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose an evaluation metric for image captioning systems using both image and text information. Unlike the previous methods that rely on textual representations in evaluating the caption, our approach uses visiolinguistic representations. The proposed method generates image-conditioned embeddings for each token using ViLBERT from both generated and reference texts. Then, these contextual embeddings from each of the two sentence-pair are compared to compute the similarity score. Experimental results on three benchmark datasets show that our method correlates significantly better with human judgments than all existing metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Image captioning is a task that aims to generate a text that describes a given image. While there have been many advances for caption generation algorithms (Vinyals et al., 2015; Anderson et al., 2018) and target datasets (Fang et al., 2015; Sharma et al., 2018) , few studies have focused on assessing the quality of the generated captions with consideration to the image.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Vinyals et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 179,
"end": 201,
"text": "Anderson et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 222,
"end": 241,
"text": "(Fang et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 242,
"end": 262,
"text": "Sharma et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the previous studies on evaluating image captioning tasks rely on n-gram similarity metrics such as BLEU (Papineni et al., 2002) or CIDEr (Vedantam et al., 2015) . These approaches bear limitations in dealing with the text's diverse nature, similarly found in other text generation tasks (e.g., abstractive summarization and dialog) (Kryscinski et al., 2019; Liu et al., 2016) . To alleviate the issues in the n-gram based approaches, researchers proposed word embedding-based techniques (Kusner et al., 2015; Zhao et al., 2019; Lo, 2019; Clark et al., 2019) . These techniques shows robust performance and achieve higher correlation with human judgment than that of other previous metrics in many text \u2026 Embed <IMG>",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
},
{
"start": 146,
"end": 169,
"text": "(Vedantam et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 341,
"end": 366,
"text": "(Kryscinski et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 367,
"end": 384,
"text": "Liu et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 496,
"end": 517,
"text": "(Kusner et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 518,
"end": 536,
"text": "Zhao et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 537,
"end": 546,
"text": "Lo, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 547,
"end": 566,
"text": "Clark et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transformer Transformer generation tasks, including image captioning. Especially, BERTScore shows that using contextualized embedding is effective for evaluating the text. As BERTScore does not utilize image content, it is still undiscovered how to effectively utilize the image content in the process of evaluating the captions. To further reflect image context while utilizing the advantages of BERTScore, we propose ViL-BERTScore 1 by employing the ViLBERT (Lu et al., 2019) , which is a task-agnostic pre-trained visiolinguistic representation. ViLBERTScore computes cosine similarity between token embeddings for reference and candidate sentences similar to BERTScore. However, different from BERTScore, the token embedding is computed with the consideration of image contexts.",
"cite_spans": [
{
"start": 460,
"end": 477,
"text": "(Lu et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embed",
"sec_num": null
},
{
"text": "We evaluate our proposed method on three benchmark datasets (i.e., Composite, Flickr8k, and PASCAL-50S). Extensive experiments show that ViLBERTScore achieves a significantly higher correlation with human judgments than previous metrics. This result demonstrates that the use of contextualized embedding from vision and language is Figure 2 : Overall computation of ViLBERTScore. Given the image I, reference caption x and candidate captionx, we compute contextual embeddings with ViLBERT for x andx respectively. Then, we extract the text embeddings H X V and HX V for each output embedding. Finally, we compute the pairwise cosine similarity between H X V and HX V as in .",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 340,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Embed",
"sec_num": null
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embed",
"sec_num": null
},
{
"text": "We provide a summary of the widely used metrics for evaluating image captions such as n-gram similarity metrics, embedding based metrics, and other task-specific metrics for captioning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption Evaluation",
"sec_num": "2.1"
},
{
"text": "The most widely used metrics for evaluating the quality of text generation tasks are n-gram similarity metrics that compute the exact number of n-gram matches between reference and generated text. One example of these metrics is BLEU (Papineni et al., 2002) that computes the precision of overlap n-gram between reference and candidate. ROUGE (Lin, 2004 ) is a set of commonly used metrics for text summarization.",
"cite_spans": [
{
"start": 234,
"end": 257,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
},
{
"start": 343,
"end": 353,
"text": "(Lin, 2004",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Similarity Metrics",
"sec_num": null
},
{
"text": "In particular, ROUGE-N, the longest common subsequence based metric, is the most frequently used variants of ROUGE. CIDEr (Vedantam et al., 2015) , which is proposed for evaluating image captions, computes the tf-idf weighted n-gram similarity between reference and candidate.",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Vedantam et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Similarity Metrics",
"sec_num": null
},
{
"text": "Embedding Based Metrics The n-gram similarity metrics possess critical limitations; they cannot count the synonym matches of the ngram, even though the synonyms are widely found in the generated text. To overcome this weakness, embedding based metrics such as Word Mover Distance(WMD) (Kusner et al., 2015) and BERTScore are proposed.",
"cite_spans": [
{
"start": 285,
"end": 306,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Similarity Metrics",
"sec_num": null
},
{
"text": "WMD computes minimum transportation distance among tokens using pre-trained word embeddings (i.e., GloVe (Pennington et al., 2014) ). On the other hand, BERTScore computes cosine similarity among tokens using contextual embeddings from BERT (Devlin et al., 2019) .",
"cite_spans": [
{
"start": 105,
"end": 130,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 241,
"end": 262,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Similarity Metrics",
"sec_num": null
},
{
"text": "Captioning Specific Metrics After CIDEr is introduced, several metrics for image captioning are proposed. SPICE (Anderson et al., 2016) uses scene graph and LEIC (Cui et al., 2018 ) uses the trainable model to evaluate the captions. VIFI-DEL (Madhyastha et al., 2019) is an extension of Wasserstein distance that utilizes the information from detected objects in the image. TIGEr (Jiang et al., 2019) uses the output of the visual grounding task. BERT-TBR (Yi et al., 2020) focuses on the variance of the captions and combine multiple reference captions to get improved BERTScore.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Anderson et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 162,
"end": 179,
"text": "(Cui et al., 2018",
"ref_id": "BIBREF5"
},
{
"start": 380,
"end": 400,
"text": "(Jiang et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 456,
"end": 473,
"text": "(Yi et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Similarity Metrics",
"sec_num": null
},
{
"text": "To compute contextual representations from the visually-grounded text, researchers proposed a transformer-based model. One such example is ViLBERT (Lu et al., 2019 ), which is a task-agnostic pre-trained representation for vision and language. As shown in Fig. 1 , ViLBERT employs two streams of transformer (Vaswani et al., 2017 )-based architecture; one of each part processes visual and textual inputs, respectively. Specifically, the image and grounded-text inputs are fed into separate embedding layers; followed by two co-attentional transformer block that allows interaction between the two modalities. ViLBERT is pre-trained with two training objectives, masked multi-modal modeling, and multi-modal alignment. Lu et al. (2019) show that fine-tuning this pre-trained ViLBERT to visionand-language related downstream tasks (e.g., visual question answering (Antol et al., 2015) ) significantly outperforms previous approaches. Recently, Lu et al. (2020) investigate and reveal that training the ViLBERT with multi-task learning objectives provides further performance improvement for most of the vision and language tasks.",
"cite_spans": [
{
"start": 147,
"end": 163,
"text": "(Lu et al., 2019",
"ref_id": "BIBREF17"
},
{
"start": 308,
"end": 329,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF24"
},
{
"start": 719,
"end": 735,
"text": "Lu et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 863,
"end": 883,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 943,
"end": 959,
"text": "Lu et al. (2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 256,
"end": 262,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "ViLBERT",
"sec_num": "2.2"
},
{
"text": "We propose ViLBERTScore, a metric that utilizes visually-grounded representations for each token. The overall flow of our proposed ViLBERTScore is described in Fig. 2 . Similar to BERTScore, we first compute contextual embeddings of both reference caption X = (x 1 , ..., x n ) and candidate captionX = (x 1 , ...,x m ). Since we use ViLBERT, we compute the embeddings for each caption conditioning with the target image I. For the target image, we extract N region-level features",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 166,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "V = (v 1 , ..., v N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "using pre-trained object detection model (see 4.2 for detailed information). Then, we feed each pair of image and caption embeddings (X, V ), (X, V ) to pre-trained ViLBERT and compute the contextual embeddings (H V X , H X V ) and (H VX , HX V ). Note that H V and H X are image and text embeddings, respectively. Among these embeddings, we only utilize the text embeddings, H X V = (h w0 , ..., h wT ) and HX V = (\u0125 w0 , ...,\u0125 wT ), and compute cosine similarity among the pair of tokens from the candidate and reference caption. Finally, the greedy matching process is exercised to the pair of tokens mentioned above for finding the most similar tokenmatch between two sentences. We can formulate ViLBERTScore as follows. (Lu et al., 2020) , which is fine-tuned on 12 downstream tasks. Scores with \u2020 are cited from (Yi et al., 2020) . each candidate caption and image pair. The images in this dataset are from Flickr8k (Hodosh et al., 2013 ), Flickr30k (Plummer et al., 2017 , and COCO captions (Lin et al., 2014) . The human judgments scores range from 1 to 5, depending on the relevance between candidate caption and image.",
"cite_spans": [
{
"start": 725,
"end": 742,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 818,
"end": 835,
"text": "(Yi et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 922,
"end": 942,
"text": "(Hodosh et al., 2013",
"ref_id": "BIBREF9"
},
{
"start": 943,
"end": 977,
"text": "), Flickr30k (Plummer et al., 2017",
"ref_id": null
},
{
"start": 998,
"end": 1016,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ViLBERTScore P = \u03a3 m i=1 max\u0125 wj \u2208HX V h T wi\u0125 wj m",
"eq_num": "(1)"
}
],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ViLBERTScore R = \u03a3 n i=1 max h wj \u2208H X V\u0125 T wi h wj n (2) ViLBERTScore F = 2 \u2022 ViLBERTScore P \u2022 ViLBERTScore R ViLBERTScore P + ViLBERTScore R",
"eq_num": "(3)"
}
],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "Flickr8k Flickr8k dataset is composed of 8,092 images with five corresponding human-generated captions. This dataset also provides three expert annotations for each image and candidate caption on 5,822 images. The score ranges from 1 to 4, depending on how well the caption and image match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "PASCAL-50S PASCAL-50S (Vedantam et al., 2015) Figure 3 : Kendall Correlation between human judgments across different layers. C and F are the results for Composite and Flickr8k datasets, respectively. Note that ViLBERTScore* uses the fine-tuned ViLBERT model from (Lu et al., 2020) . tions generated by humans for each image. Different from other datasets, this dataset provides 4,000 caption triplet <A, B, C> composed of 50 reference captions(A) and two candidate captions(B, C) for the given image. There are human annotated answers to which is more similar to \"A\", \"B\" or \"C\". Candidate captions are human-written or modelgenerated.",
"cite_spans": [
{
"start": 22,
"end": 45,
"text": "(Vedantam et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 264,
"end": 281,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "ViLBERTScore",
"sec_num": "3"
},
{
"text": "We use two versions of ViLBERT, one from the pre-trained ViLBERT model from (Lu et al., 2019) and the other version from (Lu et al., 2020) that are fine-tuned on 12 downstream tasks. We set N = 100 boxes for each image using image detectron model (He et al., 2017) to compute contextual embedding as in (Lu et al., 2019) . We use the textual representations in the 6-th layer, the last coattention layer, of ViLBERT for the main results in Table 1 and Table 2 . For the dataset containing multiple reference captions, we average the score over the pairs of candidate caption and reference captions.",
"cite_spans": [
{
"start": 76,
"end": 93,
"text": "(Lu et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 121,
"end": 138,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 247,
"end": 264,
"text": "(He et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 303,
"end": 320,
"text": "(Lu et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 440,
"end": 459,
"text": "Table 1 and Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "We compute Kendall's correlation coefficient with human judgments for the Composite dataset and Flickr8k dataset. For the PASCAL-50S dataset, we compute the number of matches between human judgments for each candidate caption pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": null
},
{
"text": "Performance Comparison We present the correlation scores for the baseline metrics and our proposed ViLBERTScore for Composite dataset and Flickr8k dataset in Table 1 . ViLBERTScore shows a higher correlation than all the existing metrics. For the PASCAL-50S dataset, Table 2 shows that ViLBERTScore R is the best metric at comparing captions among all of the metrics. Interestingly, we observe that the performance of ViLBERTScore P is lower than that of ViLBERTScore R for the PASCAL-50S dataset. This is consistent behavior with the results of . We speculate that the main objects in the image are the most critical words the human judgments as in .",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 267,
"end": 274,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": null
},
{
"text": "We further explore the performance of ViL-BERTScore with different base model. We choose another ViLBERT model that is fine-tuned on 12 vision-and-language related tasks (see ViL-BERTScore* in Table 1 and 2). This model shows better results than ViLBERTScore. We explain that some of the tasks such as image retrieval or visual entailment (Xie et al., 2019) are related to caption evaluation.",
"cite_spans": [
{
"start": 339,
"end": 357,
"text": "(Xie et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": null
},
{
"text": "Correlation Across Layers The co-attentional block in ViLBERT is composed of six layers. To verify the effectiveness of each layer in computing the contextualized embedding of the data, we compute ViLBERTScore using the outputs of different layer. As shown in Fig. 3 , the outputs of a higher layer show a better correlation with human judgments than the lower layer except for the last layer. This observation reveals that blending information among the modalities is essential in computing better contextual representations. We explain that the correlation drops in the last layer because the last layer has task specific property.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 266,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Methods",
"sec_num": null
},
{
"text": "In this paper, we propose ViLBERTScore, a metric for image captioning task by using pre-trained visio-linguistic representations. Different from the BERTScore, ViLBERTScore utilizes image conditional embeddings for each token which is critical in evaluating vision-language combined task. Empirical results on Composite, Flickr8k, and PASCAL-50S datasets show that the proposed ViL-BERTScore correlates better with human judgments than all of the previous metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/hwanheelee1993/ViLBERTScore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We also gratefully acknowledge support from Adobe Inc. in the form of a generous gift to Seoul National University. K. Jung is with ASRI, Seoul National University, Korea. This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10073144).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "From images to sentences through scene description graphs using commonsense reasoning and knowledge",
"authors": [
{
"first": "Somak",
"middle": [],
"last": "Aditya",
"suffix": ""
},
{
"first": "Yezhou",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Fermuller",
"suffix": ""
},
{
"first": "Yiannis",
"middle": [],
"last": "Aloimonos",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.03292"
]
},
"num": null,
"urls": [],
"raw_text": "Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, and Yiannis Aloimonos. 2015. From images to sentences through scene description graphs using commonsense reasoning and knowl- edge. arXiv preprint arXiv:1511.03292.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "382--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propo- sitional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 6077-6086.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Vqa: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2425--2433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentence mover's similarity: Automatic evaluation for multi-sentence texts",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2748--2760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Clark, Asli Celikyilmaz, and Noah A Smith. 2019. Sentence mover's similarity: Automatic eval- uation for multi-sentence texts. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2748-2760.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to evaluate image captioning",
"authors": [
{
"first": "Yin",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Guandao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Veit",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "5804--5812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin Cui, Guandao Yang, Andreas Veit, Xun Huang, and Serge Belongie. 2018. Learning to evaluate image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5804-5812.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Hao Fang",
"suffix": ""
},
{
"first": "Forrest",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iandola",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rupesh",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1473--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll\u00e1r, Jianfeng Gao, Xi- aodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1473-1482.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mask r-cnn",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Georgia",
"middle": [],
"last": "Gkioxari",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2961--2969",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross Girshick. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research, 47:853-899.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tiger: Text-to-image grounding for image caption evaluation",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Qiuyuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pengchuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Diesner",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2141--2152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Jiang, Qiuyuan Huang, Lei Zhang, Xin Wang, Pengchuan Zhang, Zhe Gan, Jana Diesner, and Jian- feng Gao. 2019. Tiger: Text-to-image grounding for image caption evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2141-2152.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural text summarization: A critical evaluation",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Kryscinski",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mc-Cann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "540--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 540- 551.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From word embeddings to document distances",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to doc- ument distances. In International conference on ma- chine learning, pages 957-966.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European confer- ence on computer vision, pages 740-755. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2122--2132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Yisi-a unified semantic mt quality evaluation and estimation metric for languages with different levels of available resources",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo. 2019. Yisi-a unified semantic mt quality evaluation and estimation metric for languages with different levels of available resources. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 2: Shared Task Papers, Day 1), pages 507-513.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguis- tic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "12-in-1: Multi-task vision and language representation learning",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "10437--10446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, pages 10437- 10446.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Vifidel: Evaluating the visual fidelity of image descriptions",
"authors": [
{
"first": "Josiah",
"middle": [],
"last": "Pranava Swaroop Madhyastha",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6539--6550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranava Swaroop Madhyastha, Josiah Wang, and Lu- cia Specia. 2019. Vifidel: Evaluating the visual fi- delity of image descriptions. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 6539-6550.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bryan",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Plummer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"C"
],
"last": "Cervantes",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Caicedo",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lazebnik",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCV",
"volume": "123",
"issue": "1",
"pages": "74--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan A. Plummer, Liwei Wang, Christopher M. Cer- vantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2017. Flickr30k entities: Col- lecting region-to-phrase correspondences for richer image-to-sentence models. IJCV, 123(1):74-93.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning",
"authors": [
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2556--2565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for au- tomatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2556-2565.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4566-4575.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 3156-3164.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Visual entailment: A novel task for fine-grained image understanding",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Farley",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "Asim",
"middle": [],
"last": "Kadav",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.06706"
]
},
"num": null,
"urls": [],
"raw_text": "Ning Xie, Farley Lai, Derek Doran, and Asim Ka- dav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving image captioning evaluation by considering inter references variance",
"authors": [
{
"first": "Yanzhi",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Hangyu",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jinglu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "985--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanzhi Yi, Hangyu Deng, and Jinglu Hu. 2020. Im- proving image captioning evaluation by considering inter references variance. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 985-994.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Christian",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "563--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The overall architecture of ViLBERT. ViL-BERT consists of a self-attention based embedding layer and co-attention layer for each image and text information."
},
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table",
"text": "Kendall Correlation between human judgments and various metrics. Note that ViLBERTScore* uses the ViLBERT model from"
},
"TABREF4": {
"content": "<table><tr><td>: Result for PASCAL-50S dataset. The paired</td></tr><tr><td>ways HC, HI, HM and MM respectively mean human-</td></tr><tr><td>correct, human-incorrect, human-model and model-</td></tr><tr><td>model. We use five reference captions among 50 ref-</td></tr><tr><td>erence captions for each caption pair.</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ""
}
}
}
}