{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:31.013546Z"
},
"title": "Leveraging Partial Dependency Trees to Control Image Captions",
"authors": [
{
"first": "Wenjie",
"middle": [],
"last": "Zhong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Controlling the generation of image captions attracts lots of attention recently. In this paper, we propose a framework leveraging partial syntactic dependency trees as control signals to make image captions include specified words and their syntactic structures. To achieve this purpose, we propose a Syntactic Dependency Structure Aware Model (SDSAM), which explicitly learns to generate the syntactic structures of image captions to include given partial dependency trees. In addition, we come up with a metric to evaluate how many specified words and their syntactic dependencies are included in generated captions. We carry out experiments on two standard datasets: Microsoft COCO and Flickr30k. Empirical results show that image captions generated by our model are effectively controlled in terms of specified words and their syntactic structures. The code is available on GitHub 1 .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Controlling the generation of image captions attracts lots of attention recently. In this paper, we propose a framework leveraging partial syntactic dependency trees as control signals to make image captions include specified words and their syntactic structures. To achieve this purpose, we propose a Syntactic Dependency Structure Aware Model (SDSAM), which explicitly learns to generate the syntactic structures of image captions to include given partial dependency trees. In addition, we come up with a metric to evaluate how many specified words and their syntactic dependencies are included in generated captions. We carry out experiments on two standard datasets: Microsoft COCO and Flickr30k. Empirical results show that image captions generated by our model are effectively controlled in terms of specified words and their syntactic structures. The code is available on GitHub 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Controllable image captioning emerges as a popular research topic in recent years. Existing works attempt to enhance models' controllability and captions' diversity by controlling the attributes of image captions such as style (Mathews et al., 2016) , sentiments (Gan et al., 2017) , contents (Dai et al., 2018; Cornia et al., 2019; Zhong et al., 2020) and part-of-speech (Deshpande et al., 2019) . However, some important attributes of image captions like words and syntactic structures, are ignored in previous works. For example, for the image in the Figure 2 , the work (Cornia et al., 2019 ) specifies a set of objects like 'dog, man, frisebee' as a control signal, but there still exist lots of possibilities of composing them into different captions, such as 'a dog and a man play frisebee on grass' and 'a dog playing with a man catches frisebee', since both words and syntactic structures are not determined yet. To address this challenging issue, we propose a framework, which employs partial dependency trees as control signals. As shown in Figure 1 , a partial dependency tree, a sub-tree of a syntactic dependency tree, contains words and their syntactic structures, and thus we can utilize it to specify control information about words and their syntactic structures.",
"cite_spans": [
{
"start": 227,
"end": 249,
"text": "(Mathews et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 263,
"end": 281,
"text": "(Gan et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 293,
"end": 311,
"text": "(Dai et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 312,
"end": 332,
"text": "Cornia et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 333,
"end": 352,
"text": "Zhong et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 372,
"end": 396,
"text": "(Deshpande et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 574,
"end": 594,
"text": "(Cornia et al., 2019",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 554,
"end": 562,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1052,
"end": 1060,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, we develop a pipeline model called syntactic dependency structure-aware model (SD-SAM) which first derives a full syntactic dependency tree and then flatten it into a caption. The motivation behind this pipeline model is that we assume explicitly generating syntactic dependency trees as intermediate representations can better help the model learn how to apply the specified syntactic information to the captions and the intermediate representations can give users an intuitive impression on which part of the captions' syntactic structures is controlled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we propose a syntactic dependencybased evaluation metric which evaluates whether the generated captions have been controlled in terms of syntactic structures. Our metric is computed based on the overlap of syntactic dependencies which is different from existing metrics like BLEU (Papineni et al., 2002) , METEOR (Denkowski and Lavie, 2014), ROUGE (Lin, 2004 ), CIDEr (Vedantam et al., 2015 and SPICE (Anderson et al., 2018) which rely on the overlap of ngrams or semantic graphs. Empirical results show that image captions generated by our model are effectively controlled in terms of specified words and their syntactic structures. Figure 2 : Model architecture: our model generates captions in two steps: (1) generating syntactic dependency tree using syntactic dependency tree generator. (2) flatting it into a caption using caption generator.",
"cite_spans": [
{
"start": 289,
"end": 312,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 357,
"end": 367,
"text": "(Lin, 2004",
"ref_id": "BIBREF11"
},
{
"start": 368,
"end": 399,
"text": "), CIDEr (Vedantam et al., 2015",
"ref_id": null
},
{
"start": 410,
"end": 433,
"text": "(Anderson et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 643,
"end": 651,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task presented in this paper is defined as generating a caption sentence (i.e. word sequence)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "y = w 1 , \u2022 \u2022 \u2022 , w |y|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "given an image I and a partial dependency tree P as input, so that the dependency tree T y of y includes P as far as possible. The syntactic dependency tree of a sentence, as shown in Figure 1 , refers to a tree structure to represent syntactic relations between words. A syntactic dependency tree T x of a sentence",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "x = w 1 , \u2022 \u2022 \u2022 , w |x| is defined as a set of depen- dencies, {D 1 , D 2 , \u2022 \u2022 \u2022 , D |Tx| }, where |T x | denotes the number of dependencies in T x . Each depen- dency D k is expressed in the form of w i e i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "\u2212 \u2212 \u2192 w j , where w i and w j are the head word and the dependent word of D k , and e i,j is the dependency label. We denote child nodes of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "w i as C(w i ); i.e. C(w i ) = {w j |w i e i,j \u2212 \u2212 \u2192 w j \u2208 T x }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
{
"text": "A partial dependency tree P here refers to a sub-tree of the syntactic dependency tree of some sentence.That is, P \u2286 T x for some sentence x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Definition",
"sec_num": "2"
},
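To make the definitions above concrete, here is a small illustrative sketch (not taken from the paper's code) of representing a dependency tree T_x as a set of labelled dependencies w_i -e_{i,j}-> w_j and checking that a partial dependency tree P is included in it; the example words and labels are only indicative.

```python
# Illustrative sketch: a dependency tree as a set of labelled head->dependent triples.
from typing import NamedTuple, Set

class Dependency(NamedTuple):
    head: str        # head word w_i
    label: str       # dependency label e_{i,j}
    dependent: str   # dependent word w_j

# Full tree T_x of "a dog plays frisbee with a man" (labels are indicative only).
T_x: Set[Dependency] = {
    Dependency("plays", "nsubj", "dog"),
    Dependency("dog", "det", "a"),
    Dependency("plays", "dobj", "frisbee"),
    Dependency("plays", "prep", "with"),
    Dependency("with", "pobj", "man"),
    Dependency("man", "det", "a"),
}

# A partial dependency tree P is simply a sub-tree (here: subset) of some T_x.
P: Set[Dependency] = {
    Dependency("plays", "nsubj", "dog"),
    Dependency("plays", "dobj", "frisbee"),
}

def children(tree: Set[Dependency], head: str) -> Set[str]:
    """C(w_i): the dependent words attached to `head` in the tree."""
    return {d.dependent for d in tree if d.head == head}

assert P <= T_x                    # P is included in T_x
print(children(T_x, "plays"))      # {'dog', 'frisbee', 'with'}
```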
{
"text": "The syntactic dependency structure-aware model(SDSAM) shown in Figure 2 generates image captions in two steps: (1) the syntactic dependency tree generator on the left part derives a full syntactic dependency tree from the image and the partial dependency tree. (2) the caption generator on the right part will flatten the syntactic dependency tree into a caption.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Dependency Structure Aware Model",
"sec_num": "3"
},
{
"text": "The syntactic dependency tree generator encodes the input image with a CNN network implemented with Resent152 into image features and encodes the partial dependency tree with a syntactic dependency tree encoder implemented with Tree-LSTM (Tai et al., 2015) into partial dependency tree features.",
"cite_spans": [
{
"start": 238,
"end": 256,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
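The paper does not give implementation details for the encoders. The sketch below covers only the image side, assuming a torchvision ResNet-152 backbone with its classification head removed and a linear projection (our own addition, with illustrative sizes) to the decoder's hidden size.

```python
# Minimal sketch of the image encoder; assumes a recent torchvision and illustrative sizes.
import torch
import torch.nn as nn
from torchvision import models

class ImageEncoder(nn.Module):
    def __init__(self, hidden_size: int = 512):
        super().__init__()
        backbone = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
        # drop the final classification layer, keep the 2048-d pooled features
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Linear(2048, hidden_size)  # projection to the decoder size (assumed)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(images).flatten(1)   # (B, 2048)
        return self.proj(feats)               # (B, hidden_size)

# usage: ImageEncoder()(torch.randn(2, 3, 224, 224)).shape == torch.Size([2, 512])
```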
{
"text": "After combining the image features and the partial dependency tree features, the syntactic dependency tree generator derives the full syntactic dependency tree using the syntactic dependency tree decoder from the combined features s. The syntactic dependency tree decoder consists of two attention modules, Attn in and Attn out , and two interweaved GRU networks (Cho et al., 2014) , GRU v and GRU h . The decoding process is carried out from the root node to leaf nodes in a top-down manner. For a node w i , its child nodes are decoded one by one from left-to-right. Each child node is predicted based on the information of its parent node and its left sibling node generated in previous steps. At the mean while, the attention modules highlight the words to be generated for the current child node. Assuming we decode the child w j of node w i , the hidden state of node w i and node w j are denoted as h i and h j respectively. The left sibling of node w j is denoted as w j\u22121 and its hidden state as h j\u22121 . For each input image, we detect a set of keywords c = {r 1 , \u2022 \u2022 \u2022 , r |c| } following the method proposed in (You et al., 2016) , and encode c into a matrix C \u2208 R Ew\u00d7|c| , where E w is the size of word embedding.",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 1123,
"end": 1141,
"text": "(You et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "h 0 = U (s) s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = GRU v (h i , w i ) (2) c in = Attn in (w i , C)",
"eq_num": "(3)"
}
],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = GRU h (h i , [h j\u22121 ; w j\u22121 ; c in ]) (4) c out = Attn out (h j , C)",
"eq_num": "(5)"
}
],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w j \u223c Softmax(U (w) h j + V (w) c out ) (6) e i,j \u223c Softmax(U (e) h j + V (e)h i )",
"eq_num": "(7)"
}
],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Attn(q, C) = C\u03b1 (8) \u03b1 = Softmax(A T v) (9) A = tanh(U (\u03b1) (q \u2022 1 T ) + V (\u03b1) C)",
"eq_num": "(10)"
}
],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "In the above formulas,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "U (s) \u2208 R H\u00d7Es , U (w) \u2208 R Vw\u00d7H , U (e) \u2208 R Ve\u00d7H ,U (\u03b1) \u2208 R Ea\u00d7Eq , V (w) \u2208 R Vw\u00d7H , V",
"eq_num": "("
}
],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
{
"text": "e) \u2208 R Ve\u00d7H and V (\u03b1) \u2208 R Ea\u00d7Ew are parameters for reshaping features. Here E s , E a and E q are the size of the input feature s, the attention feature A and the query q respectively. V w and V e are the vocabulary size for the node and edge respectively and H is the size of hidden states. In equation 10, v \u2208 R Ea\u00d71 is a parameter and 1 \u2208 R |c|\u00d71 is a vector with all elements being one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
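As an illustration of Equations (1)-(10), the following is a minimal sketch of one decoding step, assuming GRUCell-style recurrent cells and an additive attention over the keyword matrix C. All module names, dimensions and the exact wiring are our assumptions, not the authors' released code.

```python
# Minimal sketch of one decoding step (Eqs. 1-10); names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

H, E_w, E_a, WORD_VOCAB, EDGE_VOCAB = 512, 512, 256, 10000, 50

class Attention(nn.Module):
    """Attn(q, C) = C softmax(A^T v), with A = tanh(U (q 1^T) + V C)  (Eqs. 8-10)."""
    def __init__(self, query_size: int):
        super().__init__()
        self.U = nn.Linear(query_size, E_a, bias=False)
        self.V = nn.Linear(E_w, E_a, bias=False)
        self.v = nn.Parameter(torch.randn(E_a))

    def forward(self, q: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
        # q: (B, query_size); C: (B, |c|, E_w) -- keyword embeddings
        A = torch.tanh(self.U(q).unsqueeze(1) + self.V(C))   # (B, |c|, E_a)
        alpha = F.softmax(A @ self.v, dim=1)                  # (B, |c|)
        return (C * alpha.unsqueeze(-1)).sum(dim=1)           # (B, E_w)

class TreeDecoderStep(nn.Module):
    """One step of predicting the child w_j (and edge e_{i,j}) of node w_i."""
    def __init__(self):
        super().__init__()
        self.gru_v = nn.GRUCell(E_w, H)            # Eq. 2
        self.gru_h = nn.GRUCell(H + 2 * E_w, H)    # Eq. 4
        self.attn_in = Attention(E_w)              # Eq. 3
        self.attn_out = Attention(H)               # Eq. 5
        self.U_word = nn.Linear(H, WORD_VOCAB)     # Eq. 6
        self.V_word = nn.Linear(E_w, WORD_VOCAB)
        self.U_edge = nn.Linear(H, EDGE_VOCAB)     # Eq. 7
        self.V_edge = nn.Linear(H, EDGE_VOCAB)

    def forward(self, h_i, w_i_emb, h_prev, w_prev_emb, C):
        h_i = self.gru_v(w_i_emb, h_i)                                      # Eq. 2
        c_in = self.attn_in(w_i_emb, C)                                     # Eq. 3
        h_j = self.gru_h(torch.cat([h_prev, w_prev_emb, c_in], -1), h_i)    # Eq. 4
        c_out = self.attn_out(h_j, C)                                       # Eq. 5
        word_logits = self.U_word(h_j) + self.V_word(c_out)                 # Eq. 6
        edge_logits = self.U_edge(h_j) + self.V_edge(h_i)                   # Eq. 7
        return h_i, h_j, word_logits, edge_logits
```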
{
"text": "The Caption Generator The caption generator takes the syntactic dependency tree generated in the first step as input and encodes it with the syntactic dependency tree encoder into syntactic dependency tree features. The caption generator combines it with image features extracted in the first step and use the combined features to initialize the LSTM decoder (Hochreiter and Schmidhuber, 1997) to generate the caption.",
"cite_spans": [
{
"start": 359,
"end": 393,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Syntactic Dependency Tree Generator",
"sec_num": null
},
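A minimal sketch of this second stage is given below, assuming the tree features and image features are fused by concatenation plus a linear projection before initializing the LSTM decoder; the fusion scheme and all sizes are our assumptions, not the paper's code.

```python
# Sketch of the caption generator; fusion scheme and sizes are assumptions.
import torch
import torch.nn as nn

class CaptionGenerator(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 512, emb: int = 512):
        super().__init__()
        self.fuse = nn.Linear(2 * hidden, hidden)        # combine tree + image features
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tree_feat, image_feat, caption_ids):
        h0 = torch.tanh(self.fuse(torch.cat([tree_feat, image_feat], dim=-1)))  # (B, hidden)
        state = (h0.unsqueeze(0), torch.zeros_like(h0).unsqueeze(0))  # initial (h, c)
        outputs, _ = self.lstm(self.embed(caption_ids), state)        # teacher forcing
        return self.out(outputs)                                      # (B, T, vocab_size)
```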
{
"text": "Preparing Datasets with Partial Dependency Trees For evaluation, we apply two methods to create partial dependency trees for on Microsoft COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014) . The first method extracts partial dependency trees from reference captions. We parsing reference captions to syntactic dependency trees using Spacy 2 and then randomly sample subsets from each syntactic dependency tree. Sampled partial dependency trees are then paired with corresponding reference captions. The dataset created by this procedure is denoted as test gold in Section 5.",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 177,
"end": 197,
"text": "(Young et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
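A rough sketch of this first procedure, assuming spaCy's standard dependency parse and a simple random-growth strategy for sampling a connected sub-tree; the exact sampling scheme is not specified in the paper.

```python
# Sketch: parse a reference caption with spaCy and sample a connected dependency sub-tree.
import random
import spacy

nlp = spacy.load("en_core_web_sm")   # any English pipeline with a parser works

def sample_partial_tree(caption: str, size: int = 3):
    doc = nlp(caption)
    tokens = list(doc)
    chosen = {random.choice(tokens)}
    # grow a connected fragment by repeatedly adding a neighbour (head or child)
    while len(chosen) < min(size, len(tokens)):
        tok = random.choice(list(chosen))
        neighbours = [t for t in [tok.head, *tok.children] if t not in chosen]
        if neighbours:                       # if this token has none, retry with another
            chosen.add(random.choice(neighbours))
    # keep the dependencies whose head and dependent both lie inside the fragment
    return {(t.head.text, t.dep_, t.text) for t in chosen
            if t.head in chosen and t.head is not t}

print(sample_partial_tree("a dog plays frisbee with a man on the grass"))
```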
{
"text": "The other method creates partial dependency trees from images in two steps: (1) we first train a syntactic dependency classifier to predict syntactic dependencies for an input image. (2) Predicted syntactic dependencies are combined to form a syntactic dependency graph for the input image, from which partial dependency trees are sampled. The dataset created by this procedure is denoted as test pred in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
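The paper gives no details of this classifier; the sketch below assumes a simple multi-label classifier over a fixed vocabulary of frequent dependency triples, which is our own simplification of the described pipeline.

```python
# Hypothetical sketch: score candidate dependency triples for an image, keep the likely
# ones as a graph, then sample partial trees from that graph (sampling as for test_gold).
import torch
import torch.nn as nn

class DependencyClassifier(nn.Module):
    def __init__(self, image_feat_size: int = 512, num_triples: int = 5000):
        super().__init__()
        self.scorer = nn.Linear(image_feat_size, num_triples)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # independent probability for each candidate (head, label, dependent) triple
        return torch.sigmoid(self.scorer(image_features))

def predicted_graph(probs: torch.Tensor, id2triple: dict, threshold: float = 0.5):
    """Collect the predicted triples into an edge set (the dependency graph)."""
    return {id2triple[i] for i, p in enumerate(probs.tolist()) if p > threshold}
```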
{
"text": "2 https://spacy.io For training, following the first method, we directly sample a partial dependency tree from one of the reference captions for each image and the paired reference caption is used as a training target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Evaluation Metric The evaluation metrics for image captioning fall into two categories: (1)Quality: evaluating the relevance to human annotations with metrics including BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014); ROUGE (Lin, 2004) , and CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2018) . (2)Control-ability: evaluating whether generated image captions are successfully controlled by partial dependency trees. We devise a new metric called Dependency Based Evaluation Metric (DBEM) for this purpose. Assuming that a partial dependency tree P = {D 1 , \u2022 \u2022 \u2022 , D |P | } is input, DBEM calculates how many syntactic dependencies specified in the partial dependency tree are included in the dependency tree T y of generated caption y. The DBEM score for the evaluation dataset is given as an average of this score for each input. Formally,",
"cite_spans": [
{
"start": 174,
"end": 197,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 244,
"end": 255,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 302,
"end": 325,
"text": "(Anderson et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DBEM (P, T y ) = D\u2208P 1(D, T y ) |P | ,",
"eq_num": "(11)"
}
],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1(D, T ) = 1 if D \u2208 T 0 if D / \u2208 T.",
"eq_num": "(12)"
}
],
"section": "Experiment",
"sec_num": "4"
},
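Equations (11)-(12) translate directly into code; in the sketch below, dependencies are represented as (head word, label, dependent word) triples.

```python
# DBEM: fraction of dependencies in the partial tree P that appear in the caption's tree T_y.
def dbem(P: set, T_y: set) -> float:
    if not P:
        return 0.0
    return sum(1 for D in P if D in T_y) / len(P)   # Eqs. (11)-(12)

def corpus_dbem(pairs) -> float:
    """Average DBEM over an evaluation set of (P, T_y) pairs."""
    scores = [dbem(P, T_y) for P, T_y in pairs]
    return sum(scores) / len(scores) if scores else 0.0

# example:
# dbem({("plays", "nsubj", "dog")},
#      {("plays", "nsubj", "dog"), ("plays", "dobj", "frisbee")})  -> 1.0
```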
{
"text": "Experiment Setting The training of our model is split into two stages including training the syntactic dependency tree generator and training the caption generator. We set the size of hidden states to be 512, the word embedding size to be 512, and the dependency label embedding size to be 300. We train our model using the Adam optimizer (Kingma and Ba, 2015) with a learning rate 5e \u22124 for the first stage and 1e \u22124 for the second stage. Two models, including our SDSAM model and the NIC model (Vinyals et al., 2015) partial dependency trees are sampled from reference captions. This table shows that both NIC and SDSAM achieve significant improvements on evaluation scores when more control signals are input. This indicates that generated captions become closer to reference captions. These improvements are expectable since control signals contain information of reference captions. This result attests that partial dependency trees carry information useful for generating specific sentences. When both models are given the same control signals, SDSAM has comparable performance to NIC in n-gram based metrics (i.e. BLEU-4, METEOR, ROUGE and CIDEr), while achieving a significantly better performance on SPICE, which is a semantic relation based metric. This result reveals an interesting phenomenon that explicitly learning the syntactic structures of captions can improve performance on the semantic relation based metric.",
"cite_spans": [
{
"start": 496,
"end": 518,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "(2) Results on test pred : We show the evaluation results on test pred in Table 2 , whose partial dependency trees are generated from images. For NIC and SDSAM, evaluation scores mostly remain the same level, but slight improvements are observed in SPICE. This result reveals that partial dependency trees generated from images do not have a significant impact on the quality of image captions, while giving partial dependency trees as control signals do not harm caption quality. For the same control signals, SDSAM has a better performance on SPICE in most cases, which follows the results on test gold .",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Controllability DBEM scores on test gold and test pred are shown in Table 3 . The table shows that the DBEM scores of both models are very low when no control is given. This reveals that only a small proportion of syntactic dependencies in partial dependency trees appear in reference captions by chance, indicating that additional input to control syntactic structures is meaningful. When the models are given words as control signals, the DBEM scores are significantly increased, meaning that both models can infer syntactic structures from words even without explicit syntactic structure information. However, it is also clear that nearly half of the specified dependencies are missing in generated captions. These observations suggest that words provide useful information as control signals, but are insufficient to specify syntactic structures completely. When partial dependency trees are input, the DBEM scores further improve significantly. It means that most syntactic dependencies specified in partial dependency trees are included in generated captions. This result demonstrates that syntactic structure information plays an important role in precisely controlling image captions.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "When the models are given no control signals, SDSAM has better DBEM scores than NIC. This is possibly because SDSAM explicitly learns to generate syntactic dependency trees, and can bet- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "In Figure 3 , we show an example of the output from our model on test pred . Our syntactic dependency classifier first predicts a syntactic dependency graph from the input image. Once the syntactic dependency graph is constructed, we sample three partial dependency trees with different node numbers as shown in the figure. Finally, our SDSAM model infers the captions from the input image and the partial dependency trees. From this example, it is obvious that all words and syntactic structures specified in partial dependency trees also appear in the generated captions. Furthermore, the three generated captions are considerably different from each other, demonstrating that giving partial dependency trees as control signals can improve captions' diversity.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "6"
},
{
"text": "We presented a framework for controlling image captions in terms of words and syntactic structures by giving partial dependency trees as control signals. We develop a syntactic dependency structure aware model to explicitly learn the syntactic structures in control signals. Empirical results show that image captions generated by our model are effectively controlled in terms of specified words and their syntactic structures. Furthermore, the results indicate that explicitly learning to generate the syntactic dependency trees of captions enhances the model's controllability. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {
"DOI": [
"10.1109/CVPR.2018.00636"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In 2018 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 6077-6086. IEEE Computer Society.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Microsoft COCO captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2015. Microsoft COCO cap- tions: Data collection and evaluation server. CoRR, abs/1504.00325.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/d14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, pages 1724-1734. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Show, control and tell: A framework for generating controllable and grounded captions",
"authors": [
{
"first": "Marcella",
"middle": [],
"last": "Cornia",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Baraldi",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Cucchiara",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "8307--8316",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.00850"
]
},
"num": null,
"urls": [],
"raw_text": "Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2019. Show, control and tell: A framework for gen- erating controllable and grounded captions. In IEEE Conference on Computer Vision and Pattern Recog- nition, CVPR 2019, Long Beach, CA, USA, June 16- 20, 2019, pages 8307-8316. IEEE Computer Soci- ety.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A neural compositional paradigm for image captioning",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Dahua",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "656--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Dai, Sanja Fidler, and Dahua Lin. 2018. A neu- ral compositional paradigm for image captioning. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Pro- cessing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, pages 656-666.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {
"DOI": [
"10.3115/v1/w14-3348"
]
},
"num": null,
"urls": [],
"raw_text": "Michael J. Denkowski and Alon Lavie. 2014. Me- teor universal: Language specific translation eval- uation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Trans- lation, WMT@ACL 2014, June 26-27, 2014, Balti- more, Maryland, USA, pages 376-380. The Associ- ation for Computer Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fast, diverse and accurate image captioning guided by part-of-speech",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Deshpande",
"suffix": ""
},
{
"first": "Jyoti",
"middle": [],
"last": "Aneja",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Schwing",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "10695--10704",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.01095"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexan- der G. Schwing, and David A. Forsyth. 2019. Fast, diverse and accurate image captioning guided by part-of-speech. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 10695- 10704. IEEE Computer Society.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Stylenet: Generating attractive visual captions with styles",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "955--964",
"other_ids": {
"DOI": [
"10.1109/CVPR.2017.108"
]
},
"num": null,
"urls": [],
"raw_text": "Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In 2017 IEEE Confer- ence on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 955-964. IEEE Computer Society.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.90"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Senticap: Generating image descriptions with sentiments",
"authors": [
{
"first": "Alexander",
"middle": [
"Patrick"
],
"last": "Mathews",
"suffix": ""
},
{
"first": "Lexing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xuming",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3574--3580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Patrick Mathews, Lexing Xie, and Xuming He. 2016. Senticap: Generating image descriptions with sentiments. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Febru- ary 12-17, 2016, Phoenix, Arizona, USA, pages 3574-3580. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1556--1566",
"other_ids": {
"DOI": [
"10.3115/v1/p15-1150"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Nat- ural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Pa- pers, pages 1556-1566. The Association for Com- puter Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Ramakrishna Vedantam",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {
"DOI": [
"10.1109/CVPR.2015.7299087"
]
},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4566- 4575. IEEE Computer Society.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {
"DOI": [
"10.1109/CVPR.2015.7298935"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156- 3164. IEEE Computer Society.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Image captioning with semantic attention",
"authors": [
{
"first": "Quanzeng",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016",
"volume": "",
"issue": "",
"pages": "4651--4659",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.503"
]
},
"num": null,
"urls": [],
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In 2016 IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 4651- 4659. IEEE Computer Society.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2014,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "2",
"issue": "",
"pages": "67--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hock- enmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic in- ference over event descriptions. Trans. Assoc. Com- put. Linguistics, 2:67-78.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Comprehensive image captioning via scene graph decomposition",
"authors": [
{
"first": "Yiwu",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Vision -ECCV 2020 -16th European Conference",
"volume": "",
"issue": "",
"pages": "211--229",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58568-6_13"
]
},
"num": null,
"urls": [],
"raw_text": "Yiwu Zhong, Liwei Wang, Jianshu Chen, Dong Yu, and Yin Li. 2020. Comprehensive image captioning via scene graph decomposition. In Computer Vision - ECCV 2020 -16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV, vol- ume 12359 of Lecture Notes in Computer Science, pages 211-229. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "1 https://github.com/ZVengin/DepControl_ALVR a dog plays frisbee with a man on An example of syntactic dependency tree(left) and partial dependency tree (right)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Case study: This figure shows an example generated during inference phase.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table",
"text": "Evaluation of quality on test pred . Each generated caption is evaluated against all reference captions of its corresponding image."
},
"TABREF4": {
"content": "<table><tr><td>: Evaluation of controllability (DBEM scores)</td></tr><tr><td>ter generate high-frequency syntactic dependencies</td></tr><tr><td>that also frequently appear in partial dependency</td></tr><tr><td>trees. When the models are given words and/or</td></tr><tr><td>syntactic dependencies as control signals, SDSAM</td></tr><tr><td>achieves higher DBEM scores than NIC. This re-</td></tr><tr><td>sult demonstrates that explicitly learning to gener-</td></tr><tr><td>ate syntactic dependency trees as an intermediate</td></tr><tr><td>representation contributes to better controlling of</td></tr><tr><td>image captions.</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ""
}
}
}
}