|
{ |
|
"paper_id": "D18-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:44:27.262247Z" |
|
}, |
|
"title": "simNet: Stepwise Image-Topic Merging Network for Generating Detailed and Comprehensive Image Captions", |
|
"authors": [ |
|
{ |
|
"first": "Fenglin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Beijing University of Posts and Telecommunications", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yuanxin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Beijing University of Posts and Telecommunications", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Houfeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances. 1", |
|
"pdf_parse": { |
|
"paper_id": "D18-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Image captioning attracts considerable attention in both natural language processing and computer vision. The task aims to generate a description in natural language grounded on the input image. It is a very challenging yet interesting task. On the one hand, it has to identify the objects in the image, associate the objects, and express them in a fluent sentence, each of which is a difficult subtask. On the other hand, it combines two important fields in artificial intelligence, namely, natural language processing and computer vision. More importantly, it has a wide range of applications, including text-based image retrieval, helping visually impaired people see (Wu et al., 2017) , humanrobot interaction (Das et al., 2017) , etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 671, |
|
"end": 688, |
|
"text": "(Wu et al., 2017)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 732, |
|
"text": "(Das et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Models based on the encoder-decoder framework have shown success in image captioning. According to the pivot representation, they can be * Equal Contributions 1 The code is available at https://github.com/ lancopku/simNet Figure 1 : Examples of using different attention mechanisms. Soft-Attention (Xu et al., 2015) is based on visual attention. The generated caption is detailed in that it knows the visual attributes well (e.g. open). However, it omits many objects (e.g. mouse and dog). ATT-FCN (You et al., 2016) is based on semantic attention. The generated caption is more comprehensive in that it includes more objects. However, it is bad at associating details with the objects (e.g. missing open and mislocating dog). simNet is our proposal that effectively merges the two kinds of attention and generates a detailed and comprehensive caption.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 160, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 315, |
|
"text": "(Xu et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 516, |
|
"text": "(You et al., 2016)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 230, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "roughly categorized into models based on visual information (Vinyals et al., 2015; Mao et al., 2014; Li, 2015, 2017) , and models based on conceptual information You et al., 2016; Wu et al., 2016) . The later explicitly provides the visual words (e.g. dog, sit, red) to the decoder instead of the image features, and is more effective in image captioning according to the evaluation on benchmark datasets. However, the models based on conceptual information have a major drawback that it is hard for the model to associate the details with the specific objects in the image, because the visual words are inherently unordered in semantics. Figure 1 Figure 2 : Illustration of the main idea. The visual information captured by CNN and the conceptual information in the extracted topics are first condensed by attention mechanisms respectively. The merging gate then adaptively adjusts the weight between the visual information and the conceptual information for generating the caption.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 82, |
|
"text": "(Vinyals et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 100, |
|
"text": "Mao et al., 2014;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 116, |
|
"text": "Li, 2015, 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 179, |
|
"text": "You et al., 2016;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 196, |
|
"text": "Wu et al., 2016)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 639, |
|
"end": 647, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 656, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "for the position of the dog. In contrast, models based on the visual information often are accurate in details but have difficulty in describing the image comprehensively and tend to only describe a subregion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we get the best of both worlds and integrate visual attention and semantic attention for generating captions that are both detailed and comprehensive. We propose a Stepwise Image-Topic Merging Network as the decoder to guide the information flow between the image and the extracted topics. At each time step, the decoder first extracts focal information from the image. Then, it decides which topics are most probable for the time step. Finally, it attends differently to the visual information and the conceptual information to generate the output word. Hence, the model can efficiently merge the two kinds of information, leading to outstanding results in image captioning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Overall, the main contributions of this work are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a novel approach that can effectively merge the information in the image and the topics to generate cohesive captions that are both detailed and comprehensive. We refine and combine two previous competing attention mechanisms, namely visual attention and semantic attention, with an importancebased merging gate that effectively combines and balances the two kinds of information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The proposed approach outperforms the state-of-the-art methods substantially on two benchmark datasets, Flickr30k and COCO, in terms of SPICE, which correlates the best with human judgments. Systematic analysis shows that the merging gate contributes the most to the overall improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A large number of systems have been proposed for image captioning. Neural models based on the encoder-decoder framework have been attracting increased attention in the last few years in several multi-discipline tasks, such as neural image/video captioning (NIC) and visual question answering (VQA) (Vinyals et al., 2015; Karpathy and Li, 2015; Venugopalan et al., 2015; Zhao et al., 2016; . State-of-theart neural approaches (Anderson et al., 2018; Liu et al., 2018; Lu et al., 2018) incorporate the attention mechanism in machine translation (Bahdanau et al., 2014) to generate grounded image captions. Based on what they attend to, the models can be categorized into visual attention models and semantic attention models. Visual attention models pay attention to the image features generated by CNNs. CNNs are typically pre-trained on the image recognition task to extract general visual signals (Xu et al., 2015; Lu et al., 2017) . The visual attention is expected to find the most relevant image regions in generating the caption. Most recently, image features based on predicted bounding boxes are used (Anderson et al., 2018; Lu et al., 2018) . The advantages are that the attention no longer needs to find the relevant generic regions by itself but instead find relevant bounding boxes that are object orientated and can serve as semantic guides. However, the drawback is that predicting bounding boxes is difficult, which requires large datasets (Krishna et al., 2017) and complex models (Ren et al., 2015 (Ren et al., , 2017a .", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 320, |
|
"text": "(Vinyals et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 343, |
|
"text": "Karpathy and Li, 2015;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 369, |
|
"text": "Venugopalan et al., 2015;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 388, |
|
"text": "Zhao et al., 2016;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 448, |
|
"text": "(Anderson et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 466, |
|
"text": "Liu et al., 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 483, |
|
"text": "Lu et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 566, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 898, |
|
"end": 915, |
|
"text": "(Xu et al., 2015;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 916, |
|
"end": 932, |
|
"text": "Lu et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1108, |
|
"end": 1131, |
|
"text": "(Anderson et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1132, |
|
"end": 1148, |
|
"text": "Lu et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1454, |
|
"end": 1476, |
|
"text": "(Krishna et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1496, |
|
"end": 1513, |
|
"text": "(Ren et al., 2015", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1514, |
|
"end": 1534, |
|
"text": "(Ren et al., , 2017a", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Semantic attention models pay attention to a predicted set of semantic concepts You et al., 2016; Wu et al., 2016) . The semantic concepts are the most frequent words in the captions, and the extractor can be trained using various methods but typically is only trained on the given image captioning dataset. This kind ", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 97, |
|
"text": "You et al., 2016;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 114, |
|
"text": "Wu et al., 2016)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "V V T \u0303 FC (a) The overall framework. \u2212 1 \u2212 \u2212 LSTM (b)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The data flow in the proposed simNet. Figure 3 : Illustration of the proposed approach. In the right plot, we use \u03c6, \u03c8, \u03c7 to denote input attention, output attention, and topic attention, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 46, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "of approach can be seen as the extension of the earlier template-based slotting-filling approaches (Farhadi et al., 2010; Kulkarni et al., 2013) . However, few work studies how to combine the two kinds of attention models to take advantage of both of them. On the one hand, due to the limited number of visual features, it is hard to provide comprehensive information to the decoder. On the other hand, the extracted semantic concepts are unordered, making it hard for the decoder to portray the details of the objects correctly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Farhadi et al., 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 144, |
|
"text": "Kulkarni et al., 2013)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This work focuses on combining the visual attention and the semantic attention efficiently to address their drawbacks and make use of their merits. The visual attention is designed to focus on the attributes and the relationships of the objects, while the semantic attention only includes words that are objects so that the extracted topics could be more accurate. The combination is controlled by the importance-based merging mechanism that decides at each time step which kind of information should be relied on. The goal is to generate image captions that are both detailed and comprehensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our proposed model consists of an image encoder, a topic extractor, and a stepwise merging decoder. Figure 3 shows a sketch. We first briefly introduce the image encoder and the topic extractor. Then, we introduce the proposed stepwise image-topic merging decoder in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 108, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For an input image, the image encoder expresses the image as a series of visual feature vectors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Image Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "V = {v 1 , v 2 , . . . , v k }, v i \u2208 R g .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Image Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each feature corresponds to a different perspective of the image. The visual features serve as descriptive guides of the objects in the image for the decoder. We use a ResNet152 , which is commonly used in image captioning, to generate the visual features. The output of the last convolutional layer is used as the visual information:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Image Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "V = W V,I CNN(I)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Image Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where I is the input image, and ,I shrinks the last dimension of the output. 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 34, |
|
"text": ",I", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Image Encoder", |
|
"sec_num": "3.1" |
|
}, |
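
{

"text": "Editor's sketch (not part of the original paper): a minimal PyTorch-style rendering of the image encoder in Eq. (1). The torchvision ResNet-152, the g = 512 projection, and the 7 x 7 = 49 feature locations follow Section 4.2; the class and variable names are assumed.\n\nimport torch.nn as nn\nimport torchvision.models as models\n\nclass ImageEncoder(nn.Module):\n    # Produces V = {v_1, ..., v_k}, v_i in R^g (Eq. 1) from the last convolutional layer.\n    def __init__(self, g=512):\n        super().__init__()\n        resnet = models.resnet152(pretrained=True)\n        # drop the average-pooling and classification layers, keep the convolutional stack\n        self.backbone = nn.Sequential(*list(resnet.children())[:-2])\n        self.proj = nn.Linear(2048, g)  # plays the role of W_{V,I}: shrinks the last dimension\n\n    def forward(self, images):                  # images: (batch, 3, 224, 224)\n        fmap = self.backbone(images)            # (batch, 2048, 7, 7)\n        fmap = fmap.flatten(2).transpose(1, 2)  # (batch, k=49, 2048)\n        return self.proj(fmap)                  # V: (batch, k, g)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Image Encoder",

"sec_num": "3.1"

},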
|
|
{ |
|
"text": "Typically, identifying an object requires a combination of visual features, and considering the limited capacity of the visual features, it is hard for the conventional decoder to describe the objects in the image comprehensively. An advance in image captioning is to provide the decoder with the semantic concepts in the image directly so that the decoder is equipped with an overall perspective of the image. The semantic concepts can be objects (e.g. person, car), attributes (e.g. off, electric), and relationships (e.g. using, sitting). We only use the words that are objects in this work, the reason of which is explained later. We call such words topics. The topic extractor concludes a list of candidate topic embeddings T = {w 1 , w 2 , . . . , w m }, w i \u2208 R e from the image, where e is the dimension of the topic word embeddings. Following common practice You et al., 2016) , we adopt the weakly-supervised approach of Multiple Instance Learning (Zhang et al., 2006) to build a topic extractor. Due to limited space, please refer to Fang et al. (2015) for detailed explanation. Different from existing work that uses all the most frequent words in the captions as valid semantic concepts or visual words, we only include the object words (nouns) in the topic word list. Existing work relies on attribute words and rela-tionship words to provide visual information to the decoder. However, it not only complicates the extracting procedure but also contributes little to the generation. For an image containing many objects, the decoder is likely to combine the attributes with the objects arbitrarily, as such words are specific to certain objects but are provided to the decoder unordered. In contrast, our model has visual information as additional input and we expect that the decoder should refer to the image for such kind of information instead of the extracted concepts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 868, |
|
"end": 885, |
|
"text": "You et al., 2016)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 958, |
|
"end": 978, |
|
"text": "(Zhang et al., 2006)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Extractor", |
|
"sec_num": "3.2" |
|
}, |
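
{

"text": "Editor's sketch (not part of the original paper): how the extracted topics could feed the decoder, treating the pre-trained Multiple Instance Learning extractor of Fang et al. (2015) as a black box that returns a score per concept word. The top m = 5 object words and the embedding table shared with the caption words follow Sections 3.2 and 4.2; the function and argument names are hypothetical.\n\nimport torch\nimport torch.nn as nn\n\ndef select_topics(concept_scores, object_word_ids, word_embedding, m=5):\n    # concept_scores: (vocab_size,) word scores from the pre-trained extractor\n    # object_word_ids: (num_object_words,) vocabulary ids of the manually chosen object words (nouns)\n    object_scores = concept_scores[object_word_ids]   # restrict the candidates to object words only\n    top = torch.topk(object_scores, k=m).indices\n    topic_ids = object_word_ids[top]\n    return word_embedding(topic_ids)                  # T = {w_1, ..., w_m}, w_i in R^e\n\n# example with made-up sizes: a 10,132-word vocabulary, 568 object words, e = 256\n# emb = nn.Embedding(10132, 256)\n# T = select_topics(torch.rand(10132), torch.randint(0, 10132, (568,)), emb)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Extractor",

"sec_num": "3.2"

},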
|
{ |
|
"text": "The essential component of the decoder is the proposed stepwise image-topic merging network. The decoder is based on an LSTM (Hochreiter and Schmidhuber, 1997) . At each time step, it combines the textual caption, the attentive visual information, and the attentive conceptual information as the context for generating an output word. The goal is achieved by three modules, the visual attention, the topic attention, and the merging gate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 159, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Visual Attention as Output The visual attention attends to attracting parts of the image based on the state of the LSTM decoder. In existing work (Xu et al., 2015) , only the previous hidden state h t\u22121 \u2208 R d of the LSTM is used in computation of the visual attention:", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 163, |
|
"text": "(Xu et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Z t = tanh(W Z,V V \u2295 W Z,h h t\u22121 ) (2) \u03b1 t = softmax(Z t w \u03b1,Z )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "W Z,V \u2208 R k\u00d7g , W Z,h \u2208 R k\u00d7d , w \u03b1,Z \u2208 R k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "are the learnable parameters. We denote the matrix-vector addition as \u2295, which is calculated by adding the vector to each column of the matrix. \u03b1 t \u2208 R k is the attentive weights of V and the attentive visual input z t \u2208 R g is calculated as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z t = V \u03b1 t", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The visual input z t and the embedding of the previous output word y t\u22121 are the input of the LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t = LSTM( z t y t\u22121 , h t\u22121 )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "However, there is a noticeable drawback that the previous output word y t\u22121 , which is a much stronger indicator than the previous hidden state h t\u22121 , is not used in the attention. As z t is used as the input, we call it input attention. To overcome that drawback, we add another attention that incorporates the current hidden state h t , which is based on the last generated word y t\u22121 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Z t = tanh( W Z,V V \u2295 W Z,h h t ) (6) \u03b1 t = softmax( Z t w \u03b1,Z ) (7) z t = V \u03b1 t (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The procedure resembles the input attention, and we call it output attention. It is worth mentioning that the output attention is essentially the same with the spatial visual attention proposed by Lu et al. (2017) . However, they did not see it from the input-output point of view nor combine it with the input attention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 213, |
|
"text": "Lu et al. (2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The attentive visual output is further transformed to r t = tanh(W s,z z t ), W s,z \u2208 R e\u00d7g , which is of the same dimension as the topic word embedding to simplify the following procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
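
{

"text": "Editor's sketch (not part of the original paper): the input and output visual attention of Eqs. (2)-(8) in PyTorch, with the matrix-vector addition implemented by broadcasting over the k image regions. The same module is applied twice per step, once with h_{t-1} (input attention) and once with h_t (output attention); the dimensions follow the paper (g = 512, d = 512, k = 49), everything else is assumed.\n\nimport torch\nimport torch.nn as nn\n\nclass VisualAttention(nn.Module):\n    def __init__(self, g=512, d=512, k=49):\n        super().__init__()\n        self.w_v = nn.Linear(g, k, bias=False)   # W_{Z,V}\n        self.w_h = nn.Linear(d, k, bias=False)   # W_{Z,h}\n        self.w_a = nn.Linear(k, 1, bias=False)   # w_{alpha,Z}\n\n    def forward(self, V, h):                     # V: (batch, k, g), h: (batch, d)\n        Z = torch.tanh(self.w_v(V) + self.w_h(h).unsqueeze(1))    # (batch, k, k), Eq. (2)/(6)\n        alpha = torch.softmax(self.w_a(Z).squeeze(-1), dim=-1)    # (batch, k),    Eq. (3)/(7)\n        return torch.bmm(alpha.unsqueeze(1), V).squeeze(1)        # z_t = V alpha_t, Eq. (4)/(8)\n\n# one decoding step (input attention before the LSTM cell, output attention after it):\n#   z_in = input_attn(V, h_prev)                                                     # Eq. (4)\n#   h_t, c_mem = lstm_cell(torch.cat([z_in, y_prev_emb], dim=-1), (h_prev, c_mem))   # Eq. (5)\n#   r_t = torch.tanh(w_sz(output_attn(V, h_t)))                                      # Eqs. (6)-(8) plus the R^e projection",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Stepwise Image-Topic Merging Decoder",

"sec_num": "3.3"

},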
|
{ |
|
"text": "Topic Attention In an image caption, different parts concern different topics. In the existing work (You et al., 2016) , the conceptual information is attended based on the previous output word:", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 118, |
|
"text": "(You et al., 2016)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b2 t = softmax(T T U y t\u22121 )", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "U \u2208 R e\u00d7e , \u03b2 t \u2208 R m .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The profound issue is that this approach neglects the visual information. It should be beneficial to provide the attentive visual information when selecting topics. The hidden state of the LSTM contains both the information of previous words and the attentive input visual information. Therefore, the model attends to the topics based on the hidden state of the LSTM:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Q t = tanh(W Q,T T \u2295 W Q,h h t )", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b2 t = softmax(Q t w \u03b2,Q )", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where W Q,T \u2208 R m\u00d7e , W Q,h \u2208 R m\u00d7d , w \u03b2,Q \u2208 R m are the parameters to be learned. \u03b2 t \u2208 R m is the weight of the topics, from which the attentive conceptual output q t \u2208 R e is calculated:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q t = T \u03b2 t", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The topic attention q t and the hidden state h t are combined as the contextual information s t :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s t = tanh(W s,q q t + W s,h h t )", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where W s,q \u2208 R e\u00d7e , W s,h \u2208 R e\u00d7d are learnable parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stepwise Image-Topic Merging Decoder", |
|
"sec_num": "3.3" |
|
}, |
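
{

"text": "Editor's sketch (not part of the original paper): the topic attention and the contextual information of Eqs. (10)-(13) in PyTorch. The dimensions follow the paper (e = 256, d = 512, m = 5); the module and parameter names are assumed.\n\nimport torch\nimport torch.nn as nn\n\nclass TopicAttention(nn.Module):\n    def __init__(self, e=256, d=512, m=5):\n        super().__init__()\n        self.w_qt = nn.Linear(e, m, bias=False)   # W_{Q,T}\n        self.w_qh = nn.Linear(d, m, bias=False)   # W_{Q,h}\n        self.w_b = nn.Linear(m, 1, bias=False)    # w_{beta,Q}\n        self.w_sq = nn.Linear(e, e, bias=False)   # W_{s,q}\n        self.w_sh = nn.Linear(d, e, bias=False)   # W_{s,h}\n\n    def forward(self, T, h):                      # T: (batch, m, e), h: (batch, d)\n        Q = torch.tanh(self.w_qt(T) + self.w_qh(h).unsqueeze(1))   # (batch, m, m), Eq. (10)\n        beta = torch.softmax(self.w_b(Q).squeeze(-1), dim=-1)      # (batch, m),    Eq. (11)\n        q = torch.bmm(beta.unsqueeze(1), T).squeeze(1)             # (batch, e),    Eq. (12)\n        s = torch.tanh(self.w_sq(q) + self.w_sh(h))                # (batch, e),    Eq. (13)\n        return s, q, beta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Stepwise Image-Topic Merging Decoder",

"sec_num": "3.3"

},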
|
{ |
|
"text": "We have prepared both the visual information r t and the contextual information s t . It is not reasonable to treat the two kinds of information equally when the decoder generates different types of words. For example, when generating descriptive words (e.g., behind, red), r t should matter more than s t . However, when generating object words (e.g., people, table), s t is more important. We introduce a novel score-based merging mechanism to make the model adaptively learn to adjust the balance:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03b3 t = \u03c3(S(s t ) \u2212 S(r t )) (14) c t = \u03b3 t s t + (1 \u2212 \u03b3 t )r t (15)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03c3 is the sigmoid function, \u03b3 t \u2208 [0, 1] indicates how important the topic attention is compared to the visual attention, and S is the scoring function. The scoring function needs to evaluate the importance of the topic attention. Noticing that Eq. 10and Eq. (11) have a similar purpose, we define S similarly:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "S(s t ) = tanh(W S,h h t + W S,s s t ) \u2022 w S (16) S(r t ) = tanh(W S,h h t + W S,r r t ) \u2022 w S (17)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u2022 denotes dot product of vectors, W S,s \u2208 R m\u00d7e , W S,r \u2208 R m\u00d7e are the parameters to be learned, and W S,h , w s share the weights of W Q,h , w \u03b2,Q from Eq. (10) and Eq. 11, respectively. Finally, the output word is generated by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
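
{

"text": "Editor's sketch (not part of the original paper): the score-based merging gate of Eqs. (14)-(17) in PyTorch. In the paper W_{S,h} and w_S are tied to the topic attention's W_{Q,h} and w_{beta,Q}; they are kept as standalone parameters here so the sketch is self-contained, and all names are assumed.\n\nimport torch\nimport torch.nn as nn\n\nclass MergingGate(nn.Module):\n    def __init__(self, e=256, d=512, m=5):\n        super().__init__()\n        self.w_sh = nn.Linear(d, m, bias=False)   # W_{S,h}\n        self.w_ss = nn.Linear(e, m, bias=False)   # W_{S,s}\n        self.w_sr = nn.Linear(e, m, bias=False)   # W_{S,r}\n        self.w_s = nn.Linear(m, 1, bias=False)    # w_S\n\n    def score(self, h, x, w_x):                   # Eqs. (16)-(17)\n        return self.w_s(torch.tanh(self.w_sh(h) + w_x(x))).squeeze(-1)\n\n    def forward(self, s_t, r_t, h_t):             # s_t, r_t: (batch, e), h_t: (batch, d)\n        gamma = torch.sigmoid(self.score(h_t, s_t, self.w_ss)\n                              - self.score(h_t, r_t, self.w_sr))   # (batch,), Eq. (14)\n        gamma = gamma.unsqueeze(-1)\n        return gamma * s_t + (1.0 - gamma) * r_t                   # c_t, Eq. (15)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Merging Gate",

"sec_num": null

},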
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y t \u223c p t = softmax(W p,c c t )", |
|
"eq_num": "(18)" |
|
} |
|
], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where each value of p t \u2208 R |D| is a probability indicating how likely the corresponding word in vocabulary D is the current output word. The whole model is trained using maximum log likelihood and the loss function is the cross entropy loss. In all, our proposed approach encourages the model to take advantage of all the available information. The adaptive merging mechanism makes the model weigh the information elaborately.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Gate", |
|
"sec_num": null |
|
}, |
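
{

"text": "Editor's sketch (not part of the original paper): the output layer of Eq. (18) and the cross-entropy training objective, with the softmax folded into the loss. The sizes (e = 256, a 10,132-word COCO vocabulary) follow Section 4.2; the dummy batch and the variable names are made up.\n\nimport torch\nimport torch.nn as nn\n\ne, vocab_size = 256, 10132\noutput_layer = nn.Linear(e, vocab_size)        # W_{p,c}\ncriterion = nn.CrossEntropyLoss()\n\nc_t = torch.randn(8, e)                        # a dummy batch of merged contexts\ntargets = torch.randint(0, vocab_size, (8,))   # gold next-word ids\nloss = criterion(output_layer(c_t), targets)   # maximum-likelihood (cross-entropy) objective",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Merging Gate",

"sec_num": null

},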
|
{ |
|
"text": "We describe the datasets and the metrics used for evaluation, followed by the training details and the evaluation of the proposed approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "There are several datasets containing images and their captions. We report results on the popular Microsoft COCO dataset and the Flickr30k (Young et al., 2014) dataset. They contain 123,287 images and 31,000 images, respectively, and each image is annotated with 5 sentences. We report results using the widely-used publicly-available splits in the work of Karpathy and Li (2015) . There are 5,000 images each in the validation set and the test set for COCO, 1,000 images for Flickr30k.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 159, |
|
"text": "(Young et al., 2014)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 379, |
|
"text": "Karpathy and Li (2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We report results using the COCO captioning evaluation toolkit ) that reports the widely-used automatic evaluation metrics SPICE, CIDEr, BLEU, METEOR, and ROUGE. SPICE (Anderson et al., 2016) , which is based on scene graph matching, and CIDEr , which is based on n-gram matching, are specifically proposed for evaluating image captioning systems. They both incorporate the consensus of a set of references for an example. BLEU (Papineni et al., 2002) and METOR (Banerjee and Lavie, 2005) are originally proposed for machine translation evaluation. ROUGE (Lin and Hovy, 2003; Lin, 2004) is designed for automatic evaluation of extractive text summarization. In the related studies, it is concluded that SPICE correlates the best with human judgments with a remarkable margin over the other metrics, and is expert in judging detailedness, where the other metrics show negative correlations, surprisingly; CIDEr and METEOR follows with no particular precedence, followed by ROUGE-L, and BLEU-4, in that order (Anderson et al., 2016; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 191, |
|
"text": "(Anderson et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 451, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 575, |
|
"text": "(Lin and Hovy, 2003;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 586, |
|
"text": "Lin, 2004)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1007, |
|
"end": 1030, |
|
"text": "(Anderson et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Following common practice, the CNN used is the ResNet152 model pre-trained on ImageNet. 3 There are 2048 7 \u00d7 7 feature maps, and we project them into 512 feature maps, i.e. g is 512. The word embedding size e is 256 and the hidden size d of the LSTM is 512. We only keep caption words that occur at least 5 times in the training set, resulting in 10,132 words for COCO and 7,544 for Flickr30k. We use the topic extractor pre-trained by Fang et al. (2015) for 1,000 concepts on COCO. We only use 568 manuallyannotated object words as topics. For an image, only the top 5 topics are selected, which means m is 5. The same topic extractor is used for Flickr30k, as COCO provides adequate generality. The caption words and the topic words share the same embeddings. In training, we first train the model without visual attention (freezing the CNN parameters) for 20 epochs with the batch size of 80. The learning rate for the LSTM is 0.0004. Then, we switch to jointly train the full model with a learning rate of 0.00001, which exponentially decays with the number of epochs so that it is halved every 50 epochs. We also use momen- tum of 0.8 and weight decay of 0.999. We use Adam (Kingma and Ba, 2014) for parameter optimization. For fair comparison, we adopt early stop based on CIDEr within maximum 50 epochs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Settings", |
|
"sec_num": "4.2" |
|
}, |
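
{

"text": "Editor's sketch (not part of the original paper): the two-stage optimization schedule described above, written with a stand-in model. The stage-1 and stage-2 learning rates, Adam, and the halving of the learning rate every 50 epochs come from the paper; the scheduler choice and all names are assumed.\n\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Linear(4, 4)     # placeholder for the full simNet model\n\n# stage 1: CNN frozen, 20 epochs, batch size 80\nstage1_opt = torch.optim.Adam(model.parameters(), lr=4e-4)\n\n# stage 2: joint training, learning rate halved every 50 epochs\nstage2_opt = torch.optim.Adam(model.parameters(), lr=1e-5)\nscheduler = torch.optim.lr_scheduler.ExponentialLR(stage2_opt, gamma=0.5 ** (1 / 50))\n# call scheduler.step() once per epoch during stage 2; early stopping is based on CIDEr",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Settings",

"sec_num": "4.2"

},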
|
{ |
|
"text": "We compare our approach with various representative systems on Flickr30k and COCO, including the recently proposed NBT that is the state-of-theart on the two datasets in comparable settings. Table 1 shows the result on Flickr30k. As we can see, our model outperforms the comparable systems in terms of all of the metrics except BLEU-4. Moreover, our model overpasses the state-of-theart with a comfortable margin in terms of SPICE, which is shown to correlate the best with human judgments (Anderson et al., 2016) . Table 2 shows the results on COCO. Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4. Most encouragingly, our model is also competitive with Up-Down (Ander-son et al., 2018), which uses much larger dataset, Visual Genome (Krishna et al., 2017) , with dense annotations to train the object detector, and directly optimizes CIDEr. Especially, our model outperforms the state-of-the-art substantially in SPICE and METEOR. Breakdown of SPICE Fscores over various subcategories (see Table 3 ) shows that our model is in dominant lead in almost all subcategories. It proves the effectiveness of our approach and indicates that our model is quite data efficient.", |
|
"cite_spans": [ |
|
{ |
|
"start": 490, |
|
"end": 513, |
|
"text": "(Anderson et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 840, |
|
"text": "(Krishna et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 516, |
|
"end": 523, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1075, |
|
"end": 1082, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For the methods that directly optimize CIDEr, it is intuitive that CIDEr can improve significantly. The similar improvement of BLEU-4 is evidence that optimizing CIDEr leads to more ngram matching. However, it comes to our notice that the improvements of SPICE, METEOR, and ROUGE-L are far less significant, which suggests there may be a gaming situation where the n-gram matching is wrongfully exploited by the model in reinforcement learning. As shown by , it is most reasonable to jointly optimize all the metrics at the same time. We also evaluate the proposed model on the COCO evaluation server, the results of which are shown in Appendix A.1, due to limited space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this section, we analyze the contribution of each component in the proposed approach, and give examples to show the strength and the potential improvements of the model. The analysis is conducted on the test set of COCO.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Topic Extraction The motivation of using objects as topics is that they are easier to identify so that the generation suffers less from erroneous predictions. This can be proved by the F-score of the identified topics in the test set, which is shown in Table 4 . Using top-5 object words is at least as good as using top-10 all words. However, using top-10 all words introduces more erroneous visual words to the generation. As shown in Ta- Figure 4 : Average merging gate values according to word types. As we can see, object words (noun) dominate the high value range, while attribute and relation words are assigned lower values, indicating the merging gate learns to efficiently combine the information. ble 5, when extracting all words, providing more words to the model indeed increases the captioning performance. However, even when top-20 all words are used, the performance is still far behind using only top-5 object words and seems to reach the performance ceiling. It proves that for semantic attention, it is also important to limit the absolute number of incorrect visual words instead of merely the precision or the recall. It is also interesting to check whether using other kind of words can reach the same effect. Unfortunately, in our experiments, only using verbs or adjectives as semantic concepts works poorly.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 260, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 449, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To examine the contributions of the submodules in our model, we conduct a series of experiments. The results are summarized in Table 3 . To help with the understanding of the differences, we also report the breakdown of SPICE F-scores.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Visual Attention Our input attention achieves similar results to previous work (Xu et al., 2015) if not better. Using only the output attention is much more effective than using only the input attention, with substantial improvements in all metrics, showing the impact of information gap caused by delayed input in attention. Combining the input attention and the output attention can further improve the results, especially in color and size descriptions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 96, |
|
"text": "(Xu et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Topic Attention As expected, compared with visual attention, the topic attention is better at identifying objects but worse at identifying attributes. We also apply the merging gate to the topic attention, but it now merges q t and h t instead of s t and r t . With the merging gate, the model can balance the information in caption text and extracted topics, resulting in better overall scores. While it overpasses the conventional visual attention, it lags behind the output attention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Merging Gate Combing the visual attention and the topic attention directly indeed results in a huge boost in performance, which confirms our motivation. However, directly combining them also causes lower scores in attributes, color, count, and size, showing that the advantages are not fully made use of. The most dramatic improvements come from applying the merging gate to the combined attention, showing that the proposed balance mechanism can adaptively combine the two kinds of information and is essential to the overall performance. The average merging gate value summarized in Figure 4 suggests the same.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 585, |
|
"end": 593, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We give some examples in the left plot of Figure 5 to illustrate the differences between the models more intuitively. From the examples, it is clear that the proposed simNet generates the best captions in that more objects are described and many informative and detailed attributes are included, such as the quantity and the color.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Visualization Figure 6 shows the visualization of the topic attention and the visual attention with running examples. As we can see, the topic attention is active when generating a phrase containing the related topic. For example, bathroom is always most attended when generating a bathroom. The merging gate learns to direct the information flow efficiently. When generating words such as on and a, it gives lower weight to the topic attention and prefers the visual attention. As to the visual attention, the output attention is much more focused than the input attention. As we hypothesized, the conventional input attention lacks the information of the last generated word and does not know what to look for exactly. For example, when generating bathroom, the input attention does not know the previous generated word is a, and it loses its focus, while the output attention is relatively more concentrated. Moreover, the merging gate learns to overcome the erroneous topics, as shown in the second example. When generating chair, the topic attention is focused on a wrong object bed, while the visual attention attends correctly to the chair, and especially the output attention attends to the armrest. The merging gate effectively remedies the misleading information from the topic attention and outputs a lower weight, resulting in the model correctly generating the word chair.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 22, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Error Analysis We conduct error analysis using the proposed (full) model on the test set to provide insights on how the model may be improved. We find 123 out of 1000 generated captions that are not satisfactory. There are mainly three types of errors, i.e. distance (32, 26%), movement (22, 18%), and object (60, 49%), with 9 (7%) other errors. Distance error takes place when there is a lot of objects and the model cannot grasp the foreground and the background relationship. Movement error means that the model fails to describe whether the objects are moving. Those two kinds of errors are hard to eliminate, as they are fundamental problems of computer vision waiting to be resolved. Object error happens when there are incorrect extracted topics, and the merging gate regards the topic as grounded in the image. In the given example, the incorrect topic is garden. The tricky part is that the topic is seemingly correct according to the image features or otherwise the proposed model will choose other topics. A more powerful topic extractor may help with the problem but it is unlikely to be completely avoided.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We propose the stepwise image-topic merging network to sequentially and adaptively merge the visual and the conceptual information for improved image captioning. To our knowledge, we are the first to combine the visual and the semantic attention to achieve substantial improvements. We introduce the stepwise merging mechanism to efficiently guide the two kinds of information when generating the caption. The experimental results demonstrate the effectiveness of the proposed approach, which substantially outperforms the stateof-the-art image captioning methods in terms of SPICE on COCO and Flickr30k datasets. Quantitative and qualitative analysis show that the generated captions are both detailed and comprehensive in comparison with the existing methods. c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 HardAtt (Xu et al., 2015) 0.705 0.881 0.528 0.779 0.383 0.658 0.277 0.537 0.241 0.322 0.516 0.654 0.865 0.893 ATT-FCN (You et al., 2016) 0.731 0.900 0.565 0.815 0.424 0.709 0.316 0.599 0.250 0.335 0.535 0.682 0.943 0.958 SCA-CNN 0.712 0.894 0.542 0.802 0.404 0.691 0.302 0.579 0.244 0.331 0.524 0.674 0.912 0.921 LSTM-A (Yao et al., 2017) 0.739 0.919 0.575 0.842 0.436 0.740 0.330 0.632 0.256 0.350 0.542 0.700 0.984 1.003 SCN-LSTM (Gan et al., 2017) 0.740 0.917 0.575 0.839 0.436 0.739 0.331 0.631 0.257 0.348 0.543 0.696 1.003 1.013 AdaAtt (Lu et al., Table 6 : Performance on the online COCO evaluation server. The SPICE metric is unavailable for our model, thus not reported. c5 means evaluating against 5 references, and c40 means evaluating against 40 references. The symbol * denotes directly optimizing CIDEr. The symbol \u2020 denotes model ensemble. The symbol \u2021 denotes using extra data for training, thus not directly comparable. Our submission does not use the three aforementioned techniques. Nonetheless, our model is second only to Up-Down and surpasses almost all the other models in published work, especially when 40 references are considered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 832, |
|
"end": 849, |
|
"text": "(Xu et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 960, |
|
"text": "(You et al., 2016)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 1144, |
|
"end": 1162, |
|
"text": "(Yao et al., 2017)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 1256, |
|
"end": 1274, |
|
"text": "(Gan et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1366, |
|
"end": 1377, |
|
"text": "(Lu et al.,", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 762, |
|
"end": 823, |
|
"text": "c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 1378, |
|
"end": 1385, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A.1 Results on COCO Evaluation Server Table 6 shows the performance on the online COCO evaluation server 4 . We put it in the appendix because the results are incomplete and the SPICE metric is not available for our submission, which correlates the best with human evaluation. The SPICE metrics are only available at the leaderboard on the COCO dataset website 5 , which, unfortunately, has not been updated for more than a year. Our submission does not directly optimize CIDEr, use model ensemble, or use extra training data. The three techniques typically result in orthogonal improvements (Lu et al., 2017; Rennie et al., 2017; Anderson et al., 2018) . Moreover, the SPICE results are missing, in which the proposed model has the most advantage. Nonetheless, our model is second only to Up-Down (Anderson et al., 2018) and surpasses almost all the other models in published work, especially when 40 references are considered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 592, |
|
"end": 609, |
|
"text": "(Lu et al., 2017;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 630, |
|
"text": "Rennie et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 653, |
|
"text": "Anderson et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 821, |
|
"text": "(Anderson et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Supplementary Material", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For conciseness, all the bias terms of linear transformations in this paper are omitted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the pre-trained model from torchvision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://competitions.codalab.org/ competitions/3221 5 http://cocodataset.org/ #captions-leaderboard", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by National Natural Science Foundation of China (No. 61673028). We thank all the anonymous reviewers for their constructive comments and suggestions. Xu Sun is the corresponding author of this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "SPICE: semantic propositional image caption evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Basura", |
|
"middle": [], |
|
"last": "Fernando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Gould", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Computer Vision -ECCV 2016 -14th European Conference, Amsterdam", |
|
"volume": "9909", |
|
"issue": "", |
|
"pages": "382--398", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: semantic proposi- tional image caption evaluation. In Computer Vision -ECCV 2016 -14th European Conference, Amster- dam, The Netherlands, October 11-14, 2016, Pro- ceedings, Part V, volume 9909 of Lecture Notes in Computer Science, pages 382-398. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Bottom-up and top-down attention for image captioning and VQA", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Buehler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Damien", |
|
"middle": [], |
|
"last": "Teney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Gould", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and VQA. In 2018 IEEE Confer- ence on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments", |
|
"authors": [ |
|
{ |
|
"first": "Satanjeev", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization@ACL 2005, Ann Arbor, Michigan, June 29, 2005, pages 65-72. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Temporal-difference learning with sampling baseline for image captioning", |
|
"authors": [ |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guiguang", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sicheng", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jungong", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hui Chen, Guiguang Ding, Sicheng Zhao, and Jungong Han. 2018. Temporal-difference learning with sam- pling baseline for image captioning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, Febru- ary 2-7, 2018. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "SCA-CNN: spatial and channel-wise attention in convolutional networks for image captioning", |
|
"authors": [ |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanwang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liqiang", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Shao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tat-Seng", |
|
"middle": [], |
|
"last": "Chua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6298--6306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. SCA- CNN: spatial and channel-wise attention in convolu- tional networks for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recog- nition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6298-6306. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Microsoft COCO captions: Data collection and evaluation server", |
|
"authors": [ |
|
{ |
|
"first": "Xinlei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakrishna", |
|
"middle": [], |
|
"last": "Vedantam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2015. Microsoft COCO cap- tions: Data collection and evaluation server. CoRR, abs/1504.00325.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Mind's eye: A recurrent visual representation for image caption generation", |
|
"authors": [ |
|
{ |
|
"first": "Xinlei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2422--2431", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinlei Chen and C. Lawrence Zitnick. 2015. Mind's eye: A recurrent visual representation for image cap- tion generation. In 2015 IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 2422- 2431. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Visual dialog", |
|
"authors": [ |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satwik", |
|
"middle": [], |
|
"last": "Kottur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khushi", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avi", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deshraj", |
|
"middle": [], |
|
"last": "Yadav", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Jos\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Moura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1080--1089", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 M. F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In 2017 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1080-1089. IEEE Com- puter Society.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "From captions to visual concepts and back", |
|
"authors": [ |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Hao Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Forrest", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rupesh", |
|
"middle": [ |
|
"Kumar" |
|
], |
|
"last": "Iandola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Platt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1473--1482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Fang, Saurabh Gupta, Forrest N. Iandola, Ru- pesh Kumar Srivastava, Li Deng, Piotr Doll\u00e1r, Jian- feng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1473-1482. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Every picture tells a story: Generating sentences from images", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seyyed", |
|
"middle": [], |
|
"last": "Mohammad Mohsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [ |
|
"Amin" |
|
], |
|
"last": "Hejrati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Sadeghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyrus", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Rashtchian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Forsyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computer Vision -ECCV 2010, 11th European Conference on Computer Vision", |
|
"volume": "6314", |
|
"issue": "", |
|
"pages": "15--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Farhadi, Seyyed Mohammad Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David A. Forsyth. 2010. Every picture tells a story: Gener- ating sentences from images. In Computer Vision -ECCV 2010, 11th European Conference on Com- puter Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV, volume 6314 of Lecture Notes in Computer Science, pages 15-29. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Semantic compositional networks for visual captioning", |
|
"authors": [ |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chuang", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunchen", |
|
"middle": [], |
|
"last": "Pu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1141--1150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. 2017. Semantic compositional networks for visual captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1141-1150. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deep residual learning for image recognition", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaoqing", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "770--778", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Deep visualsemantic alignments for generating image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3128--3137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy and Fei-Fei Li. 2015. Deep visual- semantic alignments for generating image descrip- tions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3128-3137. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deep visualsemantic alignments for generating image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
|
"volume": "39", |
|
"issue": "4", |
|
"pages": "664--676", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy and Fei-Fei Li. 2017. Deep visual- semantic alignments for generating image descrip- tions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):664-676.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", |
|
"authors": [ |
|
{ |
|
"first": "Ranjay", |
|
"middle": [], |
|
"last": "Krishna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuke", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Groth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Hata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Kravitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannis", |
|
"middle": [], |
|
"last": "Kalantidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Shamma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Bernstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Journal of Computer Vision", |
|
"volume": "123", |
|
"issue": "1", |
|
"pages": "32--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Fei-Fei Li. 2017. Vi- sual genome: Connecting language and vision us- ing crowdsourced dense image annotations. Inter- national Journal of Computer Vision, 123(1):32-73.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "BabyTalk: Understanding and generating simple image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Visruth", |
|
"middle": [], |
|
"last": "Premraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sagnik", |
|
"middle": [], |
|
"last": "Dhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Transactions on Pattern Analysis Machine Intelligence", |
|
"volume": "35", |
|
"issue": "12", |
|
"pages": "2891--2903", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2013. BabyTalk: Under- standing and generating simple image descriptions. IEEE Transactions on Pattern Analysis Machine In- telligence, 35(12):2891-2903.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "ROUGE: a package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. ROUGE: a package for auto- matic evaluation of summaries. In Text Summa- rization Branches Out: Proceedings of the ACL-04", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin and Eduard H. Hovy. 2003. Auto- matic evaluation of summaries using n-gram co- occurrence statistics. In Human Language Technol- ogy Conference of the North American Chapter of the Association for Computational Linguistics, HLT- NAACL 2003, Edmonton, Canada, May 27 -June 1, 2003. The Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Deconvolution-based global decoding for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Junyang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinsong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3260--3271", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyang Lin, Xu Sun, Xuancheng Ren, Shuming Ma, Jinsong Su, and Qi Su. 2018. Deconvolution-based global decoding for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3260-3271. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Improved image captioning via policy gradient optimization of spider", |
|
"authors": [ |
|
{ |
|
"first": "Siqi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenhai", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ning", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "873--881", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image caption- ing via policy gradient optimization of spider. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 873-881. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Show, tell and discriminate: Image captioning by self-retrieval with partially labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Xihui", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongsheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Shao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dapeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaogang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xihui Liu, Hongsheng Li, Jing Shao, Dapeng Chen, and Xiaogang Wang. 2018. Show, tell and discrim- inate: Image captioning by self-retrieval with par- tially labeled data. CoRR, abs/1803.08314.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Knowing when to look: Adaptive attention via a visual sentinel for image captioning", |
|
"authors": [ |
|
{ |
|
"first": "Jiasen", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3242--3250", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing when to look: Adaptive at- tention via a visual sentinel for image captioning. In 2017 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3242-3250. IEEE Com- puter Society.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Neural baby talk", |
|
"authors": [ |
|
{ |
|
"first": "Jiasen", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018. Neural baby talk. In 2018 IEEE Con- ference on Computer Vision and Pattern Recogni- tion, CVPR 2018, Salt Lake City, UT, USA, June 18- 22, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A hierarchical end-to-end model for jointly improving text summarization and sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junyang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4251--4257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuming Ma, Xu Sun, Junyang Lin, and Xuancheng Ren. 2018. A hierarchical end-to-end model for jointly improving text summarization and sentiment classification. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelli- gence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4251-4257. ijcai.org.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Deep captioning with multimodal recurrent neural networks (m-RNN)", |
|
"authors": [ |
|
{ |
|
"first": "Junhua", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Yuille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. 2014. Deep captioning with multi- modal recurrent neural networks (m-RNN). CoRR, abs/1412.6632.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, July 6-12, 2002, Philadel- phia, PA, USA., pages 311-318. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Faster R-CNN: towards real-time object detection with region proposal networks", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "Shaoqing Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Pro- cessing Systems 2015, December 7-12, 2015, Mon- treal, Quebec, Canada, pages 91-99.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Faster R-CNN: towards real-time object detection with region proposal networks", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "Shaoqing Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "1137--1149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2017a. Faster R-CNN: towards real-time ob- ject detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine In- telligence, 39(6):1137-1149.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Deep reinforcement learningbased image captioning with embedding reward", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Zhou Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ning", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xutao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1151--1159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Ren, Xiaoyu Wang, Ning Zhang, Xutao Lv, and Li-Jia Li. 2017b. Deep reinforcement learning- based image captioning with embedding reward. In 2017 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1151-1159. IEEE Com- puter Society.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Self-critical sequence training for image captioning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Steven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Etienne", |
|
"middle": [], |
|
"last": "Rennie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Youssef", |
|
"middle": [], |
|
"last": "Marcheret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jarret", |
|
"middle": [], |
|
"last": "Mroueh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vaibhava", |
|
"middle": [], |
|
"last": "Ross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1179--1195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1179-1195. IEEE Computer So- ciety.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "CIDEr: consensus-based image description evaluation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Ramakrishna Vedantam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4566--4575", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: consensus-based im- age description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4566-4575. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Sequence to sequence -video to text", |
|
"authors": [ |
|
{ |
|
"first": "Subhashini", |
|
"middle": [], |
|
"last": "Venugopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4534--4542", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence -video to text. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, De- cember 7-13, 2015, pages 4534-4542. IEEE Com- puter Society.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Show and tell: A neural image caption generator", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Toshev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3156--3164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In 2015 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156-3164. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Skeleton key: Image captioning by skeleton-attribute decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Yufei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaohui", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrison", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cottrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7378--7387", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yufei Wang, Zhe Lin, Xiaohui Shen, Scott Cohen, and Garrison W. Cottrell. 2017. Skeleton key: Image captioning by skeleton-attribute decomposition. In 2017 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 7378-7387. IEEE Com- puter Society.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "What value do explicit high level concepts have in vision to language problems", |
|
"authors": [ |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunhua", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingqiao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Dick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hengel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--212", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony R. Dick, and Anton van den Hengel. 2016. What value do ex- plicit high level concepts have in vision to language problems? In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 203-212. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Automatic alt-text: Computergenerated image descriptions for blind users on a social network service", |
|
"authors": [ |
|
{ |
|
"first": "Shaomei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wieland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omid", |
|
"middle": [], |
|
"last": "Farivar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Schiller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1180--1192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaomei Wu, Jeffrey Wieland, Omid Farivar, and Julie Schiller. 2017. Automatic alt-text: Computer- generated image descriptions for blind users on a so- cial network service. In Proceedings of the 2017 ACM Conference on Computer Supported Coopera- tive Work and Social Computing, CSCW 2017, Port- land, OR, USA, February 25 -March 1, 2017, pages 1180-1192. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Houfeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "979--988", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xu- ancheng Ren, Houfeng Wang, and Wenjie Li. 2018a. Unpaired sentiment-to-sentiment translation: A cy- cled reinforcement learning approach. In Proceed- ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2018, Mel- bourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 979-988. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "A skeleton-based model for promoting coherence among sentences in narrative story generation", |
|
"authors": [ |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingjing Xu, Yi Zhang, Qi Zeng, Xuancheng Ren, Xi- aoyan Cai, and Xu Sun. 2018b. A skeleton-based model for promoting coherence among sentences in narrative story generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, Brussels, Bel- gium, October 31-November 4, 2018. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Show, attend and tell: Neural image caption generation with visual attention", |
|
"authors": [ |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhudinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "2048--2057", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048-2057, Lille, France. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Boosting image captioning with attributes", |
|
"authors": [ |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingwei", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yehao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaofan", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Mei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4904--4912", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei. 2017. Boosting image captioning with at- tributes. In IEEE International Conference on Com- puter Vision, ICCV 2017, Venice, Italy, October 22- 29, 2017, pages 4904-4912. IEEE Computer Soci- ety.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Image captioning with semantic attention", |
|
"authors": [ |
|
{ |
|
"first": "Quanzeng", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hailin", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaowen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiebo", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4651--4659", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In 2016 IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 4651- 4659. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micah", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "67--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for se- mantic inference over event descriptions. Transac- tions of the Association for Computational Linguis- tics, 2:67-78.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Multiple instance boosting for object detection", |
|
"authors": [ |
|
{ |
|
"first": "Cha", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Platt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Viola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Advances in Neural Information Processing Systems 18 [Neural Information Processing Systems, NIPS 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1417--1424", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cha Zhang, John C. Platt, and Paul A. Viola. 2006. Multiple instance boosting for object detection. In Y. Weiss, B. Sch\u00f6lkopf, and J. C. Platt, editors, Advances in Neural Information Processing Sys- tems 18 [Neural Information Processing Systems, NIPS 2005, December 5-8, 2005, Vancouver, British Columbia, Canada], pages 1417-1424. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Visual translation embedding network for visual relation detection", |
|
"authors": [ |
|
{ |
|
"first": "Hanwang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zawlin", |
|
"middle": [], |
|
"last": "Kyaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shih-Fu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tat-Seng", |
|
"middle": [], |
|
"last": "Chua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3107--3115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, and Tat-Seng Chua. 2017. Visual translation embedding network for visual relation detection. In 2017 IEEE Conference on Computer Vision and Pattern Recog- nition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3107-3115. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Partial multi-modal sparse coding via adaptive similarity structure regularization", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanqing", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaofei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yueting", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 ACM Conference on Multimedia Conference, MM 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Zhao, Hanqing Lu, Deng Cai, Xiaofei He, and Yueting Zhuang. 2016. Partial multi-modal sparse coding via adaptive similarity structure regulariza- tion. In Proceedings of the 2016 ACM Conference on Multimedia Conference, MM 2016, Amsterdam, The Netherlands, October 15-19, 2016, pages 152- 156. ACM.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Soft-Attention: a open laptop computer sitting on top of a table ATT-FCN: a dog sitting on a desk with a laptop computer and mouse simNet: a open laptop computer and mouse sitting on a table with a dog nearby" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Examples of the generated captions. The left plot compares simNet with visual attention and topic attention. Visual attention is good at portraying the relations but is less specific in objects. Topic attention includes more objects but lacks details, such as material, color, and number. The proposed model achieves a very good balance. The right plot shows the error analysis of the proposed simNet." |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Visualization. Please view in color. Here, we give two running examples. The upper part of each example shows the attention weights of each of 5 extracted topics. Deeper color means larger in value. The middle part shows the value of the merging gate that determines the importance of the topic attention. The lower part shows the visualization of visual attention. The attended region is covered with color. The blue shade indicates the output attention. The red shade indicates the input attention." |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Topic Bag</td></tr><tr><td>Merging Gate</td></tr><tr><td>a</td></tr><tr><td>herd</td></tr><tr><td>of</td></tr><tr><td>cows</td></tr><tr><td>are</td></tr><tr><td>standing</td></tr><tr><td>on</td></tr><tr><td>a</td></tr><tr><td>lush</td></tr><tr><td>green</td></tr><tr><td>grass</td></tr><tr><td>eld</td></tr><tr><td>CNN (ResNet152)</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "shows an example. For semantic attention, although open is provided as a visual word, due to the insufficient use of visual information, the model gets confused about what objects open should be associated with and thus discards open in the caption. The model may even associate the details incorrectly, which is the case (cows) (\u00a1eld) (sheep) (water) (grass)" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>COCO</td><td colspan=\"5\">SPICE CIDEr METEOR ROUGE-L BLEU-4</td></tr><tr><td>HardAtt (Xu et al., 2015)</td><td>-</td><td>-</td><td>0.230</td><td>-</td><td>0.250</td></tr><tr><td>ATT-FCN (You et al., 2016)</td><td>-</td><td>-</td><td>0.243</td><td>-</td><td>0.304</td></tr><tr><td>SCA-CNN (Chen et al., 2017)</td><td>-</td><td>0.952</td><td>0.250</td><td>0.531</td><td>0.311</td></tr><tr><td>LSTM-A (Yao et al., 2017)</td><td>0.186</td><td>1.002</td><td>0.254</td><td>0.540</td><td>0.326</td></tr><tr><td>SCN-LSTM (Gan et al., 2017)</td><td>-</td><td>1.012</td><td>0.257</td><td>-</td><td>0.330</td></tr><tr><td>Skeleton (Wang et al., 2017)</td><td>-</td><td>1.069</td><td>0.268</td><td>0.552</td><td>0.336</td></tr><tr><td>AdaAtt (Lu et al., 2017)</td><td>0.195</td><td>1.085</td><td>0.266</td><td>0.549</td><td>0.332</td></tr><tr><td>NBT (Lu et al., 2018)</td><td>0.201</td><td>1.072</td><td>0.271</td><td>-</td><td>0.347</td></tr><tr><td>DRL (Ren et al., 2017b) *</td><td>-</td><td>0.937</td><td>0.251</td><td>0.525</td><td>0.304</td></tr><tr><td>TD-M-ATT (Chen et al., 2018) *</td><td>-</td><td>1.116</td><td>0.268</td><td>0.555</td><td>0.336</td></tr><tr><td>SCST (Rennie et al., 2017) *</td><td>-</td><td>1.140</td><td>0.267</td><td>0.557</td><td>0.342</td></tr><tr><td>SR-PL (Liu et al., 2018) * \u2020</td><td>0.210</td><td>1.171</td><td>0.274</td><td>0.570</td><td>0.358</td></tr><tr><td>Up-Down (Anderson et al., 2018) * \u2020</td><td>0.214</td><td>1.201</td><td>0.277</td><td>0.569</td><td>0.363</td></tr><tr><td>simNet</td><td>0.220</td><td>1.135</td><td>0.283</td><td>0.564</td><td>0.332</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Performance on the Flickr30k Karpathy test split. The symbol * denotes directly optimizing CIDEr. The symbol \u2020 denotes using extra data for training, thus not directly comparable. Nonetheless, our model supersedes all existing models in SPICE, which correlates the best with human judgments." |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>Method</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Topics (m=5)</td><td>49.95</td><td colspan=\"2\">38.91 42.48</td></tr><tr><td>All words (m=5)</td><td>84.01</td><td colspan=\"2\">17.99 29.49</td></tr><tr><td>All words (m=10)</td><td>70.90</td><td colspan=\"2\">30.18 42.05</td></tr><tr><td>All words (m=20)</td><td>52.51</td><td colspan=\"2\">44.53 47.80</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Results of incremental analysis. For a better understanding of the differences, we further list the breakdown of SPICE F-scores. Objects indicates comprehensiveness, and the others indicate detailedness. Additionally, we report the performance of the current state-of-the-art Up-Down for further comparison, which uses extra denseannotated data for pre-training and directly optimizes CIDEr." |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td>Method</td><td>S</td><td>C</td><td>M</td><td>R</td><td>B</td></tr><tr><td>Topics (m=5)</td><td colspan=\"5\">0.220 1.135 0.283 0.564 0.332</td></tr><tr><td colspan=\"6\">All words (m=5) 0.197 1.047 0.264 0.550 0.314</td></tr><tr><td colspan=\"6\">All words (m=10) 0.201 1.076 0.256 0.528 0.293</td></tr><tr><td colspan=\"6\">All words (m=20) 0.209 1.117 0.276 0.561 0.329</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Performance of visual word extraction." |
|
}, |
|
"TABREF8": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Effect of using different visual words." |
|
} |
|
} |
|
} |
|
} |