{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:47.019392Z"
},
"title": "Unsupervised Keyword Extraction for Full-Sentence VQA",
"authors": [
{
"first": "Kohei",
"middle": [],
"last": "Uehara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Visual recognition is one of the most actively researched fields; this research is expected to be applied to real-world systems such as robots. Since innumerable object classes exist in the real world, training all of them in advance is impossible. Thus, to train image recognition models, it is important for real-world intelligent systems to actively acquire information. One promising approach to acquire information on the fly is learning by asking, i.e., generating questions to humans about unknown objects, and consequently learning new knowledge from the human response (Misra et al., 2018; Uehara et al., 2018; Shen et al., 2019) . This implies that if we can build a Visual Question Answering (VQA) system (Antol et al., 2015) that functions in the real Figure 1 : Example of the proposed task -keyword extraction from full-sentence VQA. Given an image, the question, and the full-sentence answer, the keyword extraction model extracts a keyword from the full-sentence answer. In this example, the word \"candles\" is the most important part, answering the question \"What is in front of the animal that looks white?\". Therefore, \"candles\" is considered as the keyword of the answer.",
"cite_spans": [
{
"start": 578,
"end": 598,
"text": "(Misra et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 599,
"end": 619,
"text": "Uehara et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 620,
"end": 638,
"text": "Shen et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 716,
"end": 736,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 764,
"end": 772,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "VQA is a well-known vision and language task which aims to develop a system that can answer a question about an image. One typical dataset used in VQA is the VQA v2 dataset (Goyal et al., 2017) . The answers in the VQA v2 dataset are essentially single words. This is because the annotators are instructed to keep the answer as short as possible when constructing the dataset.",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The ultimate goal of the present work is to gain knowledge through VQA that can be easily transferred to other tasks, such as object class recognition and object detection. Therefore, the knowledge (VQA answers) should be represented by a single word, such as a class label. However, in real-world dialog, answers are rarely ex-pressed by single words; rather, they are often expressed as complete sentences. In fact, in VisDial v1.0 (Das et al., 2017) , a dataset of natural conversations about images that does not have a word limit for answers, the average length of answers is 6.5 words. This is significantly longer than the average length of the answers in the VQA v2 dataset (1.2 words).",
"cite_spans": [
{
"start": 434,
"end": 452,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To bridge the gap between existing VQA research and real-world VQA, a challenging problem must be solved: identifying the word in the sentence that corresponds to the answer to the question. It must also be considered that fullsentence answers provided by humans are likely to follow a variety of sentence structures. Thus, the traditional approaches, such as rule-based approaches based on Part-of-Speech tagging or shallow parsing, require a great deal of work on defining rules in order to extract the keywords. Our key challenge is to propose a novel keyword extraction method that leverages information from images and questions as clues, without the heavy work of annotating keywords or defining the rules. This work handles the task of extracting a keyword when a full-sentence answer is obtained from VQA (Full-sentence VQA). The simplest approach to this task is to construct a dataset containing full-sentence answers and keyword annotations, and then train a model based on this dataset in a supervised manner. However, the cost of constructing a VQA dataset with full-sentence answers and keyword annotations is very high. If a keyword extraction model can be trained on a dataset without keyword annotations, we can eliminate the high cost of collecting keyword annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose an unsupervised keyword extraction model using a full-sentence VQA dataset which contains no keyword annotations. Here, the principle is based on the intuition that the keyword is the most informative word in the full-sentence answer, and contains the information that is not included in the question (i.e., the concise answer). Essentially, the full-sentence answer can be decomposed into two types of words: (1) the keyword information that is not included in the question, and (2) the information that is already included in the question. For example, in the answer \"The egg shaped ghost candles are in front of the bear.\" to the question \"What is in front of the animal that looks white?\", the word \"candles\" is the keyword, while the remaining part \"The egg shaped ghost something is in front of the bear\" is either information already included in the question or additional information about the keyword. In this case, words like \"egg,\" \"ghost,\" and \"bear\" are also not in the question, making it difficult to find the keyword via naive methods, e.g., rule-based keyword extraction. Our proposed model utilizes image features and question features to calculate the importance score for each word in the fullsentence answer. Therefore, based on the contents of the image and the question, the model can accurately estimate which words in the full-sentence answer are important. To the best of our knowledge, this is the first attempt at extracting a keyword from full-sentence VQA in an unsupervised manner. The main contributions of this work are as follows: (1) We propose a novel task of extracting keywords from full-sentence VQA with no keyword annotations. 2We designed a novel, unsupervised keyword extraction model by decomposing the full-sentence answer. 3We conducted experiments on two VQA datasets, and provided both qualitative and quantitative results that show the effectiveness of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised keyword extraction methods can be broadly classified into two categories: graphbased methods and statistical methods. Graph-based methods construct graphs from target documents by using co-occurrence between words (Mihalcea and Tarau, 2004; Wan and Xiao) . These methods are only applicable to documents with a certain length, as they require the words in the document to co-occur multiple times. The target document in this work is a full-sentence answer of VQA, whose average length is about 10 words. Therefore, graph-based methods are not suitable here.",
"cite_spans": [
{
"start": 227,
"end": 253,
"text": "(Mihalcea and Tarau, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 254,
"end": 267,
"text": "Wan and Xiao)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Keyword Extraction for Text",
"sec_num": "2.1"
},
{
"text": "Statistical methods rely on statistics obtained from a document. The most basic statistical method is TF-IDF (Ramos, 2003) , which calculates the term frequency and inverse document frequency and scores each word in the target document. Recent work such as EmbedRank (Bennani-Smires et al., 2018) have utilized word embeddings for the unsupervised keyword extraction. EmbedRank calculates the cosine similarity",
"cite_spans": [
{
"start": 109,
"end": 122,
"text": "(Ramos, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 267,
"end": 296,
"text": "(Bennani-Smires et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Keyword Extraction for Text",
"sec_num": "2.1"
},
{
"text": "Full-sentence answer Figure 2 : Illustration of the key concept. In this example, the word \"candles\" is the keyword for the full-sentence answer, \"The egg shaped ghost candles are in front of the bear.\" We consider the keyword extraction task as the decomposition of the full-sentence answer into answer information and question information. Therefore, if the keyword (i.e., the most informative word in the full-sentence answer) can be accurately extracted, the original fullsentence answer can be reconstructed from it. Additionally, the question can be reconstructed from the decomposed question information in the full-sentence answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question",
"sec_num": null
},
{
"text": "between the candidate word (or phrase) embeddings and the sentence embeddings to retrieve the most representative word of the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question",
"sec_num": null
},
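{
"text": "As a rough illustration of how such embedding-based baselines score candidates, the following is a minimal sketch (not taken from any of the cited implementations; the function name, the use of NumPy, and the choice of a mean word vector as the sentence embedding are assumptions). Each candidate word in the answer is ranked by the cosine similarity between its embedding and the sentence embedding.

import numpy as np

def embedrank_style_scores(candidate_vecs, sentence_vec):
    # candidate_vecs: dict mapping word -> embedding vector (np.ndarray of shape (d,))
    # sentence_vec:   embedding of the whole answer sentence, e.g. the mean of its word vectors
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    scores = {w: cosine(vec, sentence_vec) for w, vec in candidate_vecs.items()}
    # The highest-scoring candidate is taken as the keyword.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Keyword Extraction for Text",
"sec_num": "2.1"
},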
{
"text": "VQA is a well-known task that involves learning from image-related questions and answers. The most popular VQA dataset is VQA v2 (Goyal et al., 2017) , and much research has used this dataset for performance evaluations. In VQA v2, the average number of words in an answer is only 1.2, and the variety of answers is relatively limited.",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Question Answering",
"sec_num": "2.2"
},
{
"text": "As stated in Section 1, in natural question answering by humans, the answers will be expressed as a sentence rather than a single word. Some datasets that have both full-sentence answers and keyword annotations exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Question Answering",
"sec_num": "2.2"
},
{
"text": "FSVQA (Shin et al., 2016 ) is a VQA dataset with answers in the form of full sentences. In it, full-sentence answers are automatically generated by applying the numerous rule-based natural language processing patterns to the questions and single-word answers in the VQA v1 dataset (Antol et al., 2015) .",
"cite_spans": [
{
"start": 6,
"end": 24,
"text": "(Shin et al., 2016",
"ref_id": "BIBREF24"
},
{
"start": 281,
"end": 301,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Question Answering",
"sec_num": "2.2"
},
{
"text": "The recently proposed dataset, named GQA (Hudson and Manning, 2019), also contains automatically generated full-sentence answers. This dataset is constructed on the Visual Genome (Krishna et al., 2017) , which has rich and complex annotations about images, including dense captions, questions, and scene graphs. The questions and answers (both single-word and full-sentence) in the GQA dataset are created from scene graph annotations of the images.",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Question Answering",
"sec_num": "2.2"
},
{
"text": "The full-sentence answers in both datasets described above are annotated automatically, i.e., not by humans. Therefore, neither dataset has both full-sentence answers and manually annotated keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Question Answering",
"sec_num": "2.2"
},
{
"text": "The attention mechanism is a technique originally proposed in machine translation (Bahdanau et al., 2015) , aimed at focusing on the most important part of the input sequences for a task. Since the method proposed herein utilizes an attention mechanism to calculate the importance score of the word in the full-sentence answer, some prior works on attention mechanisms are discussed.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.3"
},
{
"text": "In general, an attention mechanism essentially learns the mapping between a query and key-value pairs. Transformer (Vaswani et al., 2017) is one of the most popular attention mechanisms for machine translation. It enables machine translation without using recurrent neural networks, using a self-attention mechanism and feed-forward networks instead.",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.3"
},
{
"text": "Another study uses an attention mechanism for weakly supervised keyword extraction (Wu et al., 2018) . They first trained a model for document classification and extracted the word to which the model pays \"attention\" to perform the classification. This system requires additional annotations of document class labels to train the model, whereas we aim to extract keywords without any additional annotations.",
"cite_spans": [
{
"start": 83,
"end": 100,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.3"
},
{
"text": "This section describes the proposed method in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "First, the principal concept of the model is shown in Figure 2 . To extract the keyword, we intend to obtain two features from the full-sentence answer, each representing the keyword information and the information derived from the question, respectively. To ensure that these two features discriminatively include keyword information and question information, we intend to reconstruct the original questions and answers from the question features and keyword features, respectively. Thus, if we successfully extract the keyword and the question information from the full-sentence answer, we can reconstruct original full-sentence answer and the question. Essentially, given an image, its corresponding question, and full-sentence answer, our proposed model extracts the keyword of the answer by decomposing the keyword information and the question information in the answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "An overview of the model is shown in Figure 3 . To realize decomposition-based keyword extraction, we designed a model which consists of the encoder E, the attention scoring modules S a and S q , and the decoder modules D all , D a , and D q .",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "An image I and the corresponding question Q and full-sentence answer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A = {w (a) 1 , w (a) 2 , ..., w",
"eq_num": "(a)"
}
],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "n } are considered as the model input. Here, w Given I and Q, E extracts image and question features and integrates them into joint features f j , i.e., E(I, Q) = f j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "Next, S a and S q use f j and A as input and output the weight vectors a k = {a^(k)_1, a^(k)_2, ..., a^(k)_n} and a q = {a^(q)_1, a^(q)_2, ..., a^(q)_n} for each word in A. We denote a_i \u2208 (0, 1) as the weight score of the i-th word in A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "Then, we consider the keyword vector f k as the embedding vector of the word with the highest weight score in a k . Meanwhile, the question information vector f q is considered as the weighted sum of the embedding vectors of A corresponding to the weight score a q .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "Following this, D all uses LSTM to reconstruct the original full-sentence answer using f q and f k . f q and f k are intended to represent the question information and the keyword vector of the fullsentence answer, respectively. However, D all only ensures that both features have the information of the full-sentence answer. To separate them, we designed the additional decoders, D a and D q . The former reconstructs the BoW features of the answer using f k , while the latter reconstructs those of the question using f q with auxiliary vectors. The objective of this operation is to make f k and f q representative features for the full-sentence answer and the question, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "The entire model is trained to minimize the disparity between the reconstructed sentences A recon and the original full-sentence answers, as well as that between the BoW features of the full-sentence answers and the questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "The module E encodes the image I and the question Q and obtains the image feature f I , the question feature f Q , and the joint feature f j . To generate f I , we use the image feature extracted from a deep CNN, which is pre-trained on a large-scale image recognition dataset. For f Q , each word token was converted into a word embeddings and averaged. Following this, l 2 normalization was performed on both features. Finally, those features were concatenated to the joint feature",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "f j \u2208 R d j , i.e., E(I, Q) = f j = [f I ; f Q ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "where d j is the dimension of the joint feature and [; ] indicates concatenation. Note that we did not update the model parameters of E during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
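{
"text": "A minimal sketch of this encoder, assuming a PyTorch implementation (the paper does not tie the description to a specific framework; function and variable names here are illustrative, not the authors' code). The CNN feature extraction is represented by a pre-computed, globally pooled image feature, and the GloVe lookup by a generic embedding table; only the averaging, l2 normalization, and concatenation follow the description above.

import torch
import torch.nn.functional as F

def encode(image_feature, question_token_ids, word_embedding):
    # image_feature:      (2048,) globally pooled CNN feature (placeholder for the ResNet-152 output)
    # question_token_ids: (n_q,) LongTensor of question word indices
    # word_embedding:     nn.Embedding holding frozen GloVe-style vectors of dimension 300
    f_i = F.normalize(image_feature, dim=0)          # l2-normalized image feature f_I
    q_emb = word_embedding(question_token_ids)       # (n_q, 300) word embeddings
    f_q = F.normalize(q_emb.mean(dim=0), dim=0)      # averaged and l2-normalized question feature f_Q
    f_j = torch.cat([f_i, f_q], dim=0)               # joint feature f_j = [f_I ; f_Q], d_j = 2048 + 300
    return f_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},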
{
"text": "This module takes f j as input and weights each words in the full-sentence answer. We used two of these modules, S a and S q . S a and S q compute the weights based on the importance of a word for the full-sentence answer and that for the question, respectively. S a and S q have a nearly identical structure. Therefore, the details of S a are presented first, following which the difference between S a and S q is described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "The weight scoring in these modules is based on the attention mechanism used in Transformer (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "First, each word in the full-sentence answer was encoded, and the full-sentence answer vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "f A = {w (a) 1 , w (a) 2 , . . . , w (a) n } \u2208 R de\u00d7n was created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "Here, w (a) i denotes the embedding vector of the i-th word, n is the length of the full-sentence answer, and d e is the dimension of the word embedding vector. To represent the word order, positional encoding was applied to f A . Specifically, before feeding f A into scoring modules, we add positional embedding vectors to f A , similar to those introduced in BERT (Devlin et al., 2019) .",
"cite_spans": [
{
"start": 367,
"end": 388,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "We describe our attention mechanism as a mapping between Query and Key-Value pairs. First, we calculate Query vector Q \u2208 R h , Key vector K \u2208 R h\u00d7n , and Value vector V \u2208 R h\u00d7n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "Q = FFN q (f j ) (1) K = FFN k (f A ) (2) V = FFN v (f A ) = {v (a) 1 , v (a) 2 , . . . , v (a) n } (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "where FFN_q, FFN_k, and FFN_v are single-layer feed-forward neural networks. Then, the attention weight vector a k = {a^(k)_1, a^(k)_2, ..., a^(k)_n} \u2208 R^n, where a^(k)_i is the weighted score of the i-th word, is computed as the product of Q and K, as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a k = K T Q",
"eq_num": "(4)"
}
],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "Then, the word with the highest weighted score is chosen as the keyword of the full-sentence answer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i (k) = argmax i (a (k) i ) (5) f k = v (a) i (k)",
"eq_num": "(6)"
}
],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "However, the argmax operation is nondifferentiable. Therefore, we use an approximation of this operation by softmax with temperature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f k = V softmax( a k \u03c4 )",
"eq_num": "(7)"
}
],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "where \u03c4 is a temperature parameter, and as \u03c4 approaches 0, the output of the softmax function becomes a one-hot distribution. S q has the same structure as S a up to the point of computing the attention weight vector a q . For the keyword vector, we have the intention to focus on the specific word in the full-sentence answer. Therefore, we use the softmax with temperature. However, for the question vector, there is no need to focus on one word. Therefore, the question vector is calculated as the weighted sum of the attention score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f q = V softmax(a q )",
"eq_num": "(8)"
}
],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
{
"text": "Then, we applied single-layer feed-forward neural network, followed by layer normalization (Ba et al., 2016) to the output of this module f k , f q .",
"cite_spans": [
{
"start": 91,
"end": 108,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},
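{
"text": "To make the scoring module concrete, the following is a minimal PyTorch sketch of S a / S q covering Eqns. (1)-(8) (the framework, class name, and dimensions are assumptions, not the authors' code). Passing a temperature corresponds to S a (Eqn. 7), which approximates the argmax selection of the keyword; omitting it corresponds to S q (Eqn. 8). The final linear layer and layer normalization follow the sentence above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionScoring(nn.Module):
    def __init__(self, d_joint, d_embed, d_hidden):
        super().__init__()
        self.ffn_q = nn.Linear(d_joint, d_hidden)    # FFN_q in Eqn. (1)
        self.ffn_k = nn.Linear(d_embed, d_hidden)    # FFN_k in Eqn. (2)
        self.ffn_v = nn.Linear(d_embed, d_hidden)    # FFN_v in Eqn. (3)
        self.out = nn.Linear(d_hidden, d_hidden)     # output feed-forward layer
        self.norm = nn.LayerNorm(d_hidden)           # layer normalization (Ba et al., 2016)

    def forward(self, f_j, f_a, tau=None):
        # f_j: (d_joint,) joint image-question feature; f_a: (n, d_embed) answer word embeddings
        q = self.ffn_q(f_j)                          # Query, (d_hidden,)
        k = self.ffn_k(f_a)                          # Key,   (n, d_hidden)
        v = self.ffn_v(f_a)                          # Value, (n, d_hidden)
        scores = k @ q                               # attention weights a = K^T Q, Eqn. (4)
        if tau is not None:
            weights = F.softmax(scores / tau, dim=0) # temperature softmax for S_a, Eqn. (7)
        else:
            weights = F.softmax(scores, dim=0)       # plain softmax for S_q, Eqn. (8)
        feature = weights @ v                        # weighted sum of value vectors
        return self.norm(self.out(feature)), scores

As the temperature is annealed towards zero during training (Section 3.6), the weights produced for S a approach a one-hot distribution, so the returned feature approaches the value vector of the single highest-scoring word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Scoring Module",
"sec_num": "3.3"
},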
{
"text": "Entire Decoder In the entire decoder D all , the full-sentence is reconstructed from the output of the attention scoring modules f k and f q , i.e., A recon = D all (f k , f q ), where A recon denotes the reconstructed full-sentence answer. We use an LSTM as the sentence generator. As the input to the LSTM at each step (x t ), f k and f q are concatenated to the output of the previous step as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x 0 = W x 0 [f k ; f q ] (9) x t = W x [f k ; f q ;\u015d t\u22121 ]",
"eq_num": "(10)"
}
],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "where\u015d t\u22121 is the output of the LSTM at the t \u2212 1 step, and W x 0 and W x are the learned parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "The objective of D all is defined by the crossentropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "L all = \u2212 n t=1 log(p(\u015d t = s (ans) t | s (ans) 1:t\u22121 )) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "where s (ans) is the ground-truth full-sentence answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "Further, word dropout (Bowman et al., 2016), a method of masking input words with a specific probability, is applied. This forces the decoder to generate sentences based on the f k and f q rather than relying on the previous word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
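{
"text": "The following is a minimal PyTorch sketch of D all (Eqns. 9-11) with word dropout; the framework, class name, and the use of teacher forcing (feeding the ground-truth previous word, randomly masked, instead of the sampled output) are assumptions made for this illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AnswerDecoder(nn.Module):
    def __init__(self, d_feat, d_embed, d_hidden, vocab_size, word_dropout=0.25, unk_id=0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)
        self.w_x0 = nn.Linear(2 * d_feat, d_embed)            # W_{x_0} in Eqn. (9)
        self.w_x = nn.Linear(2 * d_feat + d_embed, d_embed)   # W_x in Eqn. (10)
        self.lstm = nn.LSTMCell(d_embed, d_hidden)
        self.out = nn.Linear(d_hidden, vocab_size)
        self.word_dropout = word_dropout
        self.unk_id = unk_id

    def forward(self, f_k, f_q, target_ids):
        # f_k, f_q: (d_feat,) keyword and question features; target_ids: (n,) ground-truth answer tokens
        feats = torch.cat([f_k, f_q], dim=0)
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        x = self.w_x0(feats).unsqueeze(0)                      # x_0, Eqn. (9)
        loss = 0.0
        for w_t in target_ids:
            h, c = self.lstm(x, (h, c))
            logits = self.out(h)
            loss = loss + F.cross_entropy(logits, w_t.view(1)) # cross-entropy term of Eqn. (11)
            prev = w_t
            if self.training and torch.rand(1).item() < self.word_dropout:
                prev = torch.tensor(self.unk_id)               # word dropout: mask the previous word
            s_prev = self.embed(prev.view(1))                  # embedding of the previous word
            x = self.w_x(torch.cat([feats.unsqueeze(0), s_prev], dim=1))  # x_t, Eqn. (10)
        return loss / len(target_ids)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},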
{
"text": "Discriminative Decoders D all attempts to reconstruct the full-sentence answer from f k and f q . Thus, D all allows the feature vectors to contain the answer information. However, the keyword and question information are intended to be represented by f k and f q , respectively. Therefore, we designed the discriminative decoders, D a and D q , to generate f k and f q , respectively, thus capturing the desired information separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "D a and D q reconstruct the full-sentence answer and the question, respectively. This reconstruction is performed with the target of the BoW features of the sentence, rather than the sentence itself. This is because we intend to focus on the content of the sentence and not its sequential information. Sentence reconstruction was also considered as an alternative, but this is difficult to train using LSTM. The BoW feature b \u2208 R ns is represented as a vector whose i-th elements is N i /L s , where n s is the vocabulary size, N i is the number of occurrences of the i-th word, and L s is the number of the words in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "The input to these discriminative decoders consists not only of feature vectors, but also auxiliary vectors, the additional features that assist in reconstruction. Specifically, the auxiliary vector for D a is the average of the word embedding vectors in the question, f Q , and, for D q , the auxiliary vector is the image feature f I .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "We build the decoder as the following fullyconnected layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "y a = W A [f k ; f Q ] + B A (12) y q = W Q [f q ; f I ] + B Q (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "The loss function for the discriminative decoder is the cross-entropy loss between the ground-truth BoW features and the predicted BoW features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L a = \u2212 na i=1 b a [i] log(softmax(y a [i]))",
"eq_num": "(14)"
}
],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L q = \u2212 nq i=1 b q [i] log(softmax(y q [i]))",
"eq_num": "(15)"
}
],
"section": "Decoder",
"sec_num": "3.4"
},
{
"text": "where b denotes the ground-truth of the BoW features, and n a and n q are the vocabulary sizes of the answer and the question, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},
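{
"text": "A short sketch of the BoW target and the discriminative decoder loss (Eqns. 12-15), again assuming PyTorch; the helper names and shape conventions are illustrative only.

import torch
import torch.nn.functional as F

def bow_target(token_ids, vocab_size):
    # BoW feature b: the i-th element is N_i / L_s (count of word i divided by sentence length)
    counts = torch.bincount(token_ids, minlength=vocab_size).float()
    return counts / token_ids.numel()

def bow_decoder_loss(decoder, feature, aux, target_ids, vocab_size):
    # decoder: an nn.Linear mapping the concatenated [feature ; aux] to vocab-sized logits
    #          (it plays the role of W_A/B_A or W_Q/B_Q in Eqns. 12-13)
    y = decoder(torch.cat([feature, aux], dim=0))   # y_a or y_q
    b = bow_target(target_ids, vocab_size)          # ground-truth BoW feature
    return -(b * F.log_softmax(y, dim=0)).sum()     # cross-entropy of Eqns. (14)-(15)

For D a the feature is f k and the auxiliary vector is f Q ; for D q the feature is f q and the auxiliary vector is f I .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.4"
},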
{
"text": "Finally, the overall objective function for the proposed model is written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Full Objectives",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u03bb all L all + \u03bb a L a + \u03bb q L q ,",
"eq_num": "(16)"
}
],
"section": "Full Objectives",
"sec_num": "3.5"
},
{
"text": "where \u03bb all , \u03bb a , and \u03bb q are hyper-parameters that balance each loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Full Objectives",
"sec_num": "3.5"
},
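{
"text": "As a trivial sketch of Eqn. (16), the three losses are simply combined as a weighted sum; the default weights below are placeholders, since the paper does not report the values of \u03bb all , \u03bb a , and \u03bb q .

def total_loss(loss_all, loss_a, loss_q, lambda_all=1.0, lambda_a=1.0, lambda_q=1.0):
    # Weighted combination of the reconstruction losses, Eqn. (16)
    return lambda_all * loss_all + lambda_a * loss_a + lambda_q * loss_q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Full Objectives",
"sec_num": "3.5"
},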
{
"text": "In the encoder E, image features of size 2048 \u00d7 14 \u00d7 14 were extracted from the pool-5 layer of the ResNet152 (He et al., 2016) . These were pre-trained on ImageNet, and global pooling was applied to obtain 2048dimensional features. To encode the question words, we used 300-dimensional GloVe embeddings (Pennington et al., 2014) . These were pretrained on the Wikipedia / Gigaword corpus 1 . To convert each word in the full-sentence answer into f A , the embedding matrix in the attention scoring module was initialized with the pretrained GloVe embeddings. The temperature parameter \u03c4 is gradually annealed using the schedule \u03c4 i = max(\u03c4 0 e \u2212ri , \u03c4 min ), where i is the overall training iteration, and other parameters are set as \u03c4 0 = 0.5, r = 3.0\u00d710 \u22125 , \u03c4 min = 0.1. The LSTM in the D all has a hidden state of 1024 dimensions. The word dropout rate was set to 0.25.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "(He et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 304,
"end": 329,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.6"
},
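{
"text": "The annealing schedule can be written directly from the constants above; this small helper is a sketch (the function name is ours, not the authors').

import math

def temperature(iteration, tau_0=0.5, r=3.0e-5, tau_min=0.1):
    # tau_i = max(tau_0 * exp(-r * i), tau_min), Section 3.6
    return max(tau_0 * math.exp(-r * iteration), tau_min)

# temperature(0) == 0.5, and the value decays towards 0.1 as training proceeds,
# so the softmax of Eqn. (7) gradually sharpens towards a one-hot selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.6"
},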
{
"text": "We used the Adam (Kingma and Ba, 2015) optimizer to train the model, which has an initial learning rate of 1.0 \u00d7 10 \u22123 . 4 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.6"
},
{
"text": "We conducted experiments on two datasets: GQA and FSVQA. In Table 1 , we present the basic statistics of both datasets. GQA GQA (Hudson and Manning, 2019) contains 22M questions and answers. The questions and answers are automatically generated from image scene graphs, and the answers include both the single-word answers and the full-sentence answers. The questions and answers in GQA have unbalanced answer distributions. Therefore, we used a balanced version of this dataset, which is down-sampled from the original dataset and contains 1.7M questions. As pre-processing, we removed the periods, commas, and question marks. FSVQA FSVQA (Shin et al., 2016) contains 370K questions and full-sentence answers. This dataset was built by applying rule-based processing to the VQA v1 dataset (Antol et al., 2015) , and captions in the MSCOCO dataset (Lin et al., 2014) , to obtain the full-sentence answers. There are ten annotations (i.e., single-word answers) per question in the VQA v1 dataset. Of these, the annotations with the highest frequency is chosen to create full-sentence answers. If all the frequencies are equal, an annotation is chosen at random. Since the authors do not provide the mapping between single-word answers and fullsentence answers, we considered the annotations with the highest frequency as the single-word answers matching the full-sentence answers. Questions for which the highest frequency annotation cannot be determined were filtered out. Following this process, we obtained 139,038 questions for the training set, and 68,265 questions for the validation set.",
"cite_spans": [
{
"start": 640,
"end": 659,
"text": "(Shin et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 790,
"end": 810,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 848,
"end": 866,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
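{
"text": "The FSVQA keyword selection described above amounts to a simple majority vote over the ten annotations per question, with ties filtered out; a sketch of this preprocessing step follows (the function name and input format are assumptions).

from collections import Counter

def select_keyword(annotations):
    # annotations: the list of ten single-word answers for one VQA v1 question
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # highest-frequency annotation cannot be determined -> filter the question out
    return counts[0][0]  # the majority answer, used as the keyword for the full-sentence answer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},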
{
"text": "The model performance was determined based on the keyword accuracy and the Mean Rank. Mean Rank is the average rank of the correct keyword when sorting each word in order of the importance score. Mean Rank is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Mean Rank = 1 N i rank i .",
"eq_num": "(17)"
}
],
"section": "Settings",
"sec_num": "4.2"
},
{
"text": "Here, rank_i is the rank of the keyword when the words in the i-th answer sentence are arranged in order of importance (TF-IDF score or attention score, i.e., a^(k)_i in Eqn. 5), and N is the total number of samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
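{
"text": "A sketch of how the two metrics can be computed from per-word importance scores (plain Python; the data layout is an assumption made for this illustration).

def keyword_rank(scores, words, keyword):
    # rank (1-indexed) of the ground-truth keyword when words are sorted by descending score
    order = [w for _, w in sorted(zip(scores, words), reverse=True)]
    return order.index(keyword) + 1

def accuracy_and_mean_rank(samples):
    # samples: list of (scores, words, keyword) tuples, with scores as plain floats
    ranks = [keyword_rank(s, w, k) for s, w, k in samples]
    accuracy = sum(1 for r in ranks if r == 1) / len(ranks)
    mean_rank = sum(ranks) / len(ranks)   # Eqn. (17)
    return accuracy, mean_rank",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},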
{
"text": "We ran experiments with the various existing unsupervised keyword extraction methods for the comparison: (1) TF-IDF (Ramos, 2003) , (2) YAKE (Campos et al., 2020) , and (3) Em-bedRank (Bennani-Smires et al., 2018) . Since YAKE removes the words with less than three characters as preprocessing, the Mean Rank cannot be calculated under the same conditions as other methods. Therefore, the Mean Rank of YAKE is not shown. We also conducted an ablation study to show the importance of D a and D q . In addition, we changed the reconstruction method from BoW estimation to the original sentence generation using LSTM.",
"cite_spans": [
{
"start": 116,
"end": 129,
"text": "(Ramos, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 141,
"end": 162,
"text": "(Campos et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 184,
"end": 213,
"text": "(Bennani-Smires et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
{
"text": "The experimental results are shown in Table 2 . Also, we provide the accuracy per question types in Appendix A for further analysis. The proposed model, which used BoW estimation in D a and D q , achieves superior performance on almost all metrics and datasets except for the Mean Rank of FSVQA. As can be seen in the results of the ablation study, this superior performance is achieved even without D a and D q , which demonstrates the effectiveness of the proposed reconstruction-based method. When using LSTM in D a and D q , the accuracy and mean rank worsens as compared to those of the proposed model, which reconstructs the BoW in those modules. This is considered to be because sentence reconstruction with LSTM requires management of the sequential information of the sentence, which is more complex than BoW estimation. Since we intended to focus on the contents of the sentence, the BoW is more suitable for these modules. We provide some examples in Figure 4 . The examples on the left and right are from GQA and FSVQA, respectively. Since the statistical methods such as TF-IDF tend to choose rarer words as keywords, they are likely to fail if the keyword is a common word (Figure 4 (a) , (c)). On the other hand, the model proposed herein can accurately extract keywords even in such cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 2",
"ref_id": null
},
{
"start": 962,
"end": 970,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1187,
"end": 1200,
"text": "(Figure 4 (a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "In this paper, we proposed the novel task of unsupervised keyword extraction from full-sentence VQA. A novel model was designed to handle this task based on information decomposition of fullsentence answers and the reconstruction of questions and answers. Both qualitative and quantitative experiments show that our model successfully extracts the keyword of the full-sentence answer with no keyword supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In future work, the extracted keywords will be utilized in other tasks, such as VQA, object classification, or object detection. This work could also be combined with recent works on VQG (Uehara et al., 2018; Shen et al., 2019) . In these works, the system generates questions to acquire information from humans. However, they assume that the answers are obtained as single words, which will pose a problem when applying it to the real-world question answering. By combining these studies with our research, an intelligent system can ask humans about unseen objects and learn new knowledge from the answer, even if the answer consists of more than a single word.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Uehara et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 209,
"end": 227,
"text": "Shen et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgement This work was partially supported by JST CREST Grant Number JP-MJCR1403, and partially supported by JSPS KAKENHI Grant Number JP19H01115 and JP20H05556. We would like to thank Yang Li, Sho Maeoki, Sho Inayoshi, and Antonio Tejerode-Pablos for helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF2": {
"ref_id": "b2",
"title": "&$*'(* %+#*5#4%*'4*%+#*&9-2#@ A+#*%1-7*&$*9-6#*'4* 8&./#1@ A+#*8-55*&$*,#+&(6*%+#*.-%* 8+&%#@ A+#*,'7*&$*1&6&",
"authors": [
{
"first": ">'?*",
"middle": [],
"last": "%+#*",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": ">'?*%+#*,-./0-./*&$*'(* %+#*5#4%*'4*%+#*&9-2#@ A+#*%1-7*&$*9-6#*'4* 8&./#1@ A+#*8-55*&$*,#+&(6*%+#*.-%* 8+&%#@ A+#*,'7*&$*1&6&(2*'(* $/-%#,'-16@",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "8&./#1 8+&%# $/-%#",
"authors": [
{
"first": "",
"middle": [],
"last": "Ba*c#",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "78",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BA*C#78'16 (' 8&./#1 8+&%# $/-%#,'-16",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Vqa: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In ICCV.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simple unsupervised keyphrase extraction using sentence embeddings",
"authors": [
{
"first": "Kamil",
"middle": [],
"last": "Bennani-Smires",
"suffix": ""
},
{
"first": "Claudiu",
"middle": [],
"last": "Musat",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Hossmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Baeriswyl",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2018,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamil Bennani-Smires, Claudiu Musat, Andreea Hoss- mann, Michael Baeriswyl, and Martin Jaggi. 2018. Simple unsupervised keyphrase extraction using sentence embeddings. In CoNLL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vilnis",
"suffix": ""
}
],
"year": 2016,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Yake! keyword extraction from single documents using multiple local features",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Vtor",
"middle": [],
"last": "Mangaravite",
"suffix": ""
},
{
"first": "Arian",
"middle": [],
"last": "Pasquali",
"suffix": ""
},
{
"first": "Alpio",
"middle": [],
"last": "Jorge",
"suffix": ""
},
{
"first": "Clia",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Jatowt",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Campos, Vtor Mangaravite, Arian Pasquali, Alpio Jorge, Clia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Information Sciences.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Visual dialog",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Khushi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Jose",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jose M. F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In CVPR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In CVPR.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In CVPR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Drew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Hudson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drew A. Hudson and Christopher D. Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations",
"authors": [
{
"first": "Ranjay",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Yuke",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Groth",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Hata",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Kravitz",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Kalantidis",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Shamma",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vi- sion using crowdsourced dense image annotations. IJCV.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TextRank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In EMNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning by asking questions",
"authors": [
{
"first": "Ishan",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "Martial",
"middle": [],
"last": "Hebert",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
}
],
"year": 2018,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishan Misra, Ross Girshick, Rob Fergus, Martial Hebert, Abhinav Gupta, and Laurens van der Maaten. 2018. Learning by asking questions. In CVPR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using TF-IDF to Determine Word Relevance in Document Queries",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Ramos",
"suffix": ""
}
],
"year": 2003,
"venue": "the first instructional conference on machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Ramos. 2003. Using TF-IDF to Determine Word Relevance in Document Queries. In the first instruc- tional conference on machine learning.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning to caption images through a lifetime by asking questions",
"authors": [
{
"first": "Tingke",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Amlan",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2019,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tingke Shen, Amlan Kar, and Sanja Fidler. 2019. Learning to caption images through a lifetime by asking questions. In ICCV.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The color of the cat is gray: 1 million fullsentences visual question answering (fsvqa)",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Yoshitaka",
"middle": [],
"last": "Ushiku",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.06657"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Shin, Yoshitaka Ushiku, and Tatsuya Harada. 2016. The color of the cat is gray: 1 million full- sentences visual question answering (fsvqa). arXiv preprint arXiv:1609.06657.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Visual question generation for class acquisition of unknown objects",
"authors": [
{
"first": "Kohei",
"middle": [],
"last": "Uehara",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Tejero-De-Pablos",
"suffix": ""
},
{
"first": "Yoshitaka",
"middle": [],
"last": "Ushiku",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": ""
}
],
"year": 2018,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kohei Uehara, Antonio Tejero-De-Pablos, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Visual question generation for class acquisition of unknown objects. In ECCV.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Single document keyphrase extraction using neighborhood knowledge",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianguo Xiao. Single document keyphrase extraction using neighborhood knowl- edge.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A visual attention-based keyword extraction for document classification",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhikang",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Yike",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2018,
"venue": "Multimedia Tools and Applications",
"volume": "77",
"issue": "",
"pages": "25355--25367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wu, Zhikang Du, and Yike Guo. 2018. A vi- sual attention-based keyword extraction for docu- ment classification. Multimedia Tools and Applica- tions, 77(19):25355-25367.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "i-th word in the full-sentence answer.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Overall pipeline of the model. First, the Encoder Module extracts the image features f I and the question features f Q and integrates them into a joint feature f j . Then, the Attention Scoring Modules S a and S q compute the attention weight and calculate the weighted sum of the word-embedding vectors of the full-sentence answer. The output of S a i.e., f k , is the keyword-aware feature of the full-sentence answer, and the output of S q i.e., f q , is the question-aware feature. D all reconstructs the full-sentence answer from both f k and f q . D a estimates the Bag-of-Words(BoW) feature of the full-sentence answer from f k and f Q . Additionally, D q estimates the BoW feature of the question from f q and f I .",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Examples of the keyword extraction results in the GQA dataset (a, b) and the FSVQA dataset (c, d).",
"type_str": "figure"
}
}
}
}