{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:18.080885Z"
},
"title": "NHK STRL at WNUT-2020 Task 2: GATs with Syntactic Dependencies as Edges and CTC-based Loss for Text Classification",
"authors": [
{
"first": "Yuki",
"middle": [],
"last": "Yasuda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science and Technology Research Laboratories",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Taichi",
"middle": [],
"last": "Ishiwatari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science and Technology Research Laboratories",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Taro",
"middle": [],
"last": "Miyazaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science and Technology Research Laboratories",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jun",
"middle": [],
"last": "Goto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science and Technology Research Laboratories",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The outbreak of COVID-19 has greatly impacted our daily lives. In these circumstances, it is important to grasp the latest information to avoid causing too much fear and panic. To help grasp new information, extracting information from social networking sites is one of the effective ways. In this paper, we describe a method to identify whether a tweet related to COVID-19 is informative or not, which can help to grasp new information. The key features of our method are its use of graph attention networks to encode syntactic dependencies and word positions in the sentence, and a loss function based on connectionist temporal classification that can learn a label for each token without reference data for each token. Experimental results show that the proposed method achieved an F1 score of 0.9175, outperforming baseline methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The outbreak of COVID-19 has greatly impacted our daily lives. In these circumstances, it is important to grasp the latest information to avoid causing too much fear and panic. To help grasp new information, extracting information from social networking sites is one of the effective ways. In this paper, we describe a method to identify whether a tweet related to COVID-19 is informative or not, which can help to grasp new information. The key features of our method are its use of graph attention networks to encode syntactic dependencies and word positions in the sentence, and a loss function based on connectionist temporal classification that can learn a label for each token without reference data for each token. Experimental results show that the proposed method achieved an F1 score of 0.9175, outperforming baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The outbreak of COVID-19 that has occurred since the end of 2019 has greatly impacted our daily lives. In these circumstances, it is important for everyone to understand the situation and grasp the latest information to avoid causing too much fear and panic. Nowadays, social networking sites (SNSs) such as Twitter and Facebook are important information sources because users post information regarding their personal events-including that related to COVID-19-in real time. For this reason, many monitoring systems for have been developed such as The Johns Hopkins Coronavirus Dashboard 1 and the COVID-19 Health System Response Monitor 2 . Many systems use SNSs as resources, but largely depend on manual work such as using cloud sourcing to extract informative posts from massive numbers of uninformative ones. Generally, SNSs contain too much information on miscellaneous topics, so extracting important information is difficult. Therefore, we attempted to develop a method to extract important information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our method first embeds each token in the input sentence using BERT (Devlin et al., 2019) . Then, the vectors are fed into graph attention networks (GATs) (Veli\u010dkovi\u0107 et al., 2018) to encode tokento-token relations. Finally, our method classifies each vector into labels using feed-forward neural networks (FFNNs). In the training process, we use a loss function based on connectionist temporal classification (CTC) (Graves et al., 2006) . Experimental results show that our method using GATs and the CTC-based loss function achieved an F1 score of 0.9175, outperforming baseline methods.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 155,
"end": 180,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 416,
"end": 437,
"text": "(Graves et al., 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows: 1We propose a GAT-based network to embed syntactic dependencies and positional features of tokens in an input sentence. (2) We also propose a loss function, which enables to train labels for each token. (3) We confirmed the effectiveness of our proposed methods using the identification of informative COVID-19 English Tweets shared task dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tweets Shared Task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Informative COVID-19",
"sec_num": "2"
},
{
"text": "The identification of informative COVID-19 English Tweets 3 is a shared task held at W-NUT (Workshop on Noisy User-generated Text) 2020 (Nguyen et al., 2020b) . The purpose of the task is to identify whether English tweets related to COVID-19 are informative or not. The dataset for the task contains 7,000 tweets for training, 1,000 for validating, and 2,000 for testing. Each tweet in the data, excluding those in the testing data are labelled informative or uninformative. The target metric of the task is the F1 score for informative tweets. ",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Nguyen et al., 2020b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Informative COVID-19",
"sec_num": "2"
},
{
"text": "1 2 3 4 5 6 2-layer FFNN Figure 1 : Overview of our method. Our method first embeds each token in an input sentence using BERT. Also, syntactic dependencies are obtained using a dependency parser. Then, our method embeds syntactic features using GATs, by using a graph that has nodes of token-embedding vectors and edges of syntactic dependencies and selfloops. Positional features are also added to the graph. The output vectors of the GATs are concatenated with BERT output vectors, and then fed into 2-layer FFNNs, which classifies each vector into labels. If one or more vectors are labelled as informative, the output class is informative. Note that the arrows in the the dependency parser example connect the head word to the dependent word as to follow a convention. On the other hand, arrows in the GAT example connect the dependent word to the head word, as used in our proposed method.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic dependency",
"sec_num": null
},
{
"text": "The overview of our method is illustrated in Figure 1 . The key features of our method are embedding syntactic dependencies and positional features using GATs (Veli\u010dkovi\u0107 et al., 2018) , and calculating loss in the training process using a loss function based on CTC (Graves et al., 2006) . We use masked-token estimation as multi-task learning to help improve the generalization capability. We use word-dropout (Sennrich et al., 2016) before BERT, and the \"dropped\" tokens are used as masked words to be estimated in the training process as a multi-task.",
"cite_spans": [
{
"start": 160,
"end": 185,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 268,
"end": 289,
"text": "(Graves et al., 2006)",
"ref_id": "BIBREF1"
},
{
"start": 413,
"end": 436,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 45,
"end": 54,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
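{
"text": "Below is a minimal sketch, assuming a PyTorch setting, of the word dropout and masked-token estimation described above: tokens dropped before BERT are replaced with the mask token, and their original ids become targets for an auxiliary MLM-style loss. The function name and the exact masking scheme are illustrative assumptions, not the paper's implementation.\n\nimport torch\n\ndef word_dropout(input_ids, mask_token_id, drop_rate=0.2):\n    # Returns (masked input ids, MLM targets with -100 at unmasked positions).\n    drop = torch.rand(input_ids.shape) < drop_rate\n    masked = input_ids.clone()\n    masked[drop] = mask_token_id\n    targets = torch.full_like(input_ids, -100)  # -100 is ignored by CrossEntropyLoss\n    targets[drop] = input_ids[drop]\n    return masked, targets\n\nids = torch.tensor([101, 8667, 1362, 1291, 102])  # example token ids\nmasked_ids, mlm_targets = word_dropout(ids, mask_token_id=103)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},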
{
"text": "The BERT model, which we use for token embedding, uses position encoding to consider the position of tokens in the model, but its ability to capture global information including syntactic features is limited (Lu et al., 2020) . Therefore, we use GATs with syntactic dependencies as edges of the graph, which enables our method to handle syntactic dependency explicitly. This is inspired from the work of Huang and Carley (2019) . We use all of the universal dependency (McDonald et al., 2013) as a directional edge regardless of dependency type 4 . The tokenizer used in BERT often separates a single word into many tokens. We connect edges from all tokens of a word to all tokens of the head word. For example, if there is a relation between the two words COVID-19 and tweet, and the former word is divided into two tokens COVID and ##-19, our method connects the two edges, COVID to tweet and ##-19 to tweet.",
"cite_spans": [
{
"start": 208,
"end": 225,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 404,
"end": 427,
"text": "Huang and Carley (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GATs for encoding token-to-token relations",
"sec_num": "3.1"
},
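{
"text": "The following is a minimal sketch, assuming spaCy and the HuggingFace fast BERT tokenizer, of how such edges could be built: every subword token of a dependent word is connected to every subword token of its head word, with edges pointing from dependent to head as in Figure 1. The helper name dependency_edges and the word-to-subword alignment via word_ids() are illustrative assumptions rather than the exact implementation.\n\nimport spacy\nfrom transformers import BertTokenizerFast\n\nnlp = spacy.load(\"en_core_web_sm\")\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\n\ndef dependency_edges(sentence):\n    doc = nlp(sentence)\n    words = [t.text for t in doc]\n    enc = tokenizer(words, is_split_into_words=True, add_special_tokens=False)\n    # Map each word index to the indices of its subword tokens.\n    subword_spans = {}\n    for tok_idx, word_idx in enumerate(enc.word_ids()):\n        subword_spans.setdefault(word_idx, []).append(tok_idx)\n    src, dst = [], []\n    for token in doc:\n        if token.head is token:  # skip the root, which has no head\n            continue\n        for s in subword_spans.get(token.i, []):  # subword tokens of the dependent\n            for d in subword_spans.get(token.head.i, []):  # subword tokens of the head\n                src.append(s)\n                dst.append(d)\n    return src, dst\n\nprint(dependency_edges(\"1200 new cases in the UK\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GATs for encoding token-to-token relations",
"sec_num": "3.1"
},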
{
"text": "The GAT is based on multi-head attention (Vaswani et al., 2017) among neighbor nodes, with all the connected nodes used as the keys and values of the attention calculation. In many cases, the number of incoming edges for a node is only zero or one if syntactic dependencies are used as edges. Nodes that have no incoming edges cannot update the vector in the GATs. Also, for nodes that have only one incoming edge, the attention weight in the multi-head attention is 1.0, which leads to poor results. To overcome this problem, a self-loop for each node was proposed (Huang and Carley, 2019; Xu and Yang, 2019) . Following that, we use a self-loop for each node in the GATs.",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 566,
"end": 590,
"text": "(Huang and Carley, 2019;",
"ref_id": "BIBREF4"
},
{
"start": 591,
"end": 609,
"text": "Xu and Yang, 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GATs for encoding token-to-token relations",
"sec_num": "3.1"
},
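{
"text": "A minimal sketch of the graph construction with self-loops, assuming the Deep Graph Library used in our experiments; the GATConv layer, edge list, feature sizes, and number of heads shown here are illustrative, not the exact configuration.\n\nimport torch\nimport dgl\nfrom dgl.nn import GATConv\n\nsrc, dst = [0, 1, 3, 4, 5], [2, 2, 2, 5, 2]  # dependent -> head edges (toy example)\ng = dgl.graph((torch.tensor(src), torch.tensor(dst)), num_nodes=6)\ng = dgl.add_self_loop(g)  # one self-loop per node, so every node can update its vector\n\nh = torch.randn(6, 768)  # token vectors, e.g., from BERT\ngat = GATConv(in_feats=768, out_feats=96, num_heads=8)\nout = gat(g, h)  # shape: (6, 8, 96)\nout = out.reshape(6, -1)  # concatenate heads -> (6, 768)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GATs for encoding token-to-token relations",
"sec_num": "3.1"
},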
{
"text": "Many edges are concentrated on the root word of a sentence, so the GATs treats all nodes equally. On the other hand, nearby and distant words are generally more and less related to the root word, respectively. To simulate this, we use positional encoding to our GATs. We use the relative distance between tokens as a parameter, then embed them along with the attention coefficient between nodes For fixed, we use the following representation as a positional embedding between the i-th and j-th tokens of the sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positional features",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P E ij fixed = L \u2212 (i \u2212 j) ,",
"eq_num": "(1)"
}
],
"section": "Positional features",
"sec_num": null
},
{
"text": "where L is the number of tokens in the sentence. For learned, we use a 1-layer FFNN with an input of P E ij fixed as a positional embedding as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positional features",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P E ij learned = W P E P E ij fixed + b P E ,",
"eq_num": "(2)"
}
],
"section": "Positional features",
"sec_num": null
},
{
"text": "where W P E \u2208 R |1\u00d71| and b P E \u2208 1 |d| are a learnable weight and bias, respectively. The positional features are then broadcasted into P E ij \u2208 |1 \u00d7 d| where d is the dimension of a GAT layer, and added after calculating the multi-head attention along the edges in the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positional features",
"sec_num": null
},
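{
"text": "A minimal sketch of the fixed and learned positional embeddings in Equations (1) and (2), including the broadcast to the GAT dimension d; the function names and tensor shapes are illustrative assumptions.\n\nimport torch\nimport torch.nn as nn\n\nL, d = 6, 96  # sentence length, dimension of a GAT layer\n\ndef pe_fixed(i, j, L):\n    return float(L - (i - j))  # Eq. (1)\n\nw_pe = nn.Linear(1, 1)  # 1-layer FFNN holding W_PE and b_PE\n\ndef pe_learned(i, j, L):\n    x = torch.tensor([[pe_fixed(i, j, L)]])\n    return w_pe(x)  # Eq. (2)\n\n# Broadcast the positional feature to the GAT dimension d; it is then added\n# after the multi-head attention along the edges of the graph.\npe_ij = pe_learned(2, 4, L).expand(1, d)  # shape (1, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positional features",
"sec_num": null
},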
{
"text": "Most tweets that were labelled as informative contain not only informative phrases but also uninformative parts. To consider this, we propose a new loss function-CTC for Text Classification (CTCTC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTC for Text Classification (CTCTC)",
"sec_num": "3.2"
},
{
"text": "Let us consider the input sequence of probabilities x \u2208 R |T |\u00d7|L| where |T | denotes the length of the sequence and |L| denotes the number of labels to classify. Note that L includes blank, which is a special symbol for CTC labelled for the data in which no labels are aligned. The probability p ctc (y|x) for input x and reference data y \u2208 1 \u2264|T | is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The basis of CTC",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p ctc (y|x) = \u03c0\u2208B \u22121 (y) p(\u03c0|x) ,",
"eq_num": "(3)"
}
],
"section": "The basis of CTC",
"sec_num": null
},
{
"text": "where B \u22121 is the inverse of the many-to-one map B of all possible labellings from the input to reference data. In generating B, blanks are inserted between each label in y, i.e., for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The basis of CTC",
"sec_num": null
},
{
"text": "y = {y 1 , y 2 , \u2022 \u2022 \u2022 , y |y| }, a modified reference y = {blank, y 1 , blank, y 2 , \u2022 \u2022 \u2022 , y |y| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The basis of CTC",
"sec_num": null
},
{
"text": "blank} is used to generate B. In Figure 2 , B is equal to the set of the paths of black arrows that finally reach one of the two dots in the blue box. Then, p ctc represents the probability of the sum of all probabilities of paths that pass all labels with the given order as reference data, which is shown as the sum of two probabiliuties in the blue box in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 359,
"end": 367,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The basis of CTC",
"sec_num": null
},
{
"text": "We use a CTC-based loss function that is utilized for text classification. Our loss function accepts the reference data\u0233, which is a single label for an \"informative\" or \"uninformative\" sentence in the task, and assign a label or blank for all tokens in the sentence. It works by handling the uninformative parts in informative tweets as blank automatically. Calculating CTCTC is almost the same as CTC, differing only in the construction of the many-toone map. First, CTCTC arranges a sufficient number of the given reference label\u0233 and blank, i.e., y = {blank,\u0233, blank,\u0233, \u2022 \u2022 \u2022 ,\u0233, blank}. Then,B is generated, which is the set of all possible labellings from the input x to modified reference data\u0233 regardless of the number of passed labels in\u0233 . In Figure 2 ,B is equal to the set of the paths of black arrows that finally reach one of the dots in the red box. To calculate a CTCTC loss,B is used instead of B in Equation (3). As a result, the probability p ctctc represents the probability of at least one token in the input sequence being aligned to the label y, which is illustrated as the sum of all dots in the red box in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 753,
"end": 761,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1131,
"end": 1139,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "CTCTC loss",
"sec_num": null
},
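{
"text": "A minimal sketch of the CTCTC probability described above, assuming CTC's usual per-timestep independence: summing the probabilities of every path whose symbols are either the reference label or blank, and subtracting the all-blank path, corresponds to the red box in Figure 2. This closed form is an illustrative reading of the text; the actual implementation may instead use the standard CTC forward algorithm with the modified map.\n\nimport torch\n\ndef ctctc_probability(log_probs, ybar, blank):\n    # log_probs: (T, |L|) log-softmax outputs; ybar, blank: label indices.\n    p = log_probs.exp()\n    p_any = (p[:, ybar] + p[:, blank]).prod()  # all paths over {ybar, blank}\n    p_all_blank = p[:, blank].prod()  # the single all-blank path\n    return p_any - p_all_blank\n\nT, num_labels = 6, 3  # 6 tokens; informative, uninformative, and blank\nlog_probs = torch.randn(T, num_labels).log_softmax(dim=-1)\nloss = -torch.log(ctctc_probability(log_probs, ybar=0, blank=2) + 1e-12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTCTC loss",
"sec_num": null
},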
{
"text": "CTCTC tends to align most tokens to blank, and only one token to the reference label. This is because the probability for blank is learned for every sentence in the training data regardless of its label, so the probability tends to be high for all data. To avoid the probabilities of all data being learned as blank, we prepare three types of smoothing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "Label smoothing We use label smoothing (Szegedy et al., 2016) , which is a regularization technique to avoid overfitting and overconfidence. This replaces the one-hot reference label l with the smoothed label l (k) as follows:",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l (k) = (1 \u2212 )\u03b4 k,l + K ,",
"eq_num": "(4)"
}
],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "where \u03b4 k,l is the Dirac delta function, which equals 1 when k = l and 0 otherwise, K is the set of labels to classify, and is the smoothing rate. The label-wise smoothing is illustrated as a green box in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
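{
"text": "A minimal sketch of the label smoothing in Equation (4); the smoothing rate of 0.2 follows the experimental settings, while the label indices and tensor layout are illustrative.\n\nimport torch\n\ndef smooth_labels(label_index, num_labels, eps=0.2):\n    one_hot = torch.zeros(num_labels)\n    one_hot[label_index] = 1.0\n    return (1.0 - eps) * one_hot + eps / num_labels  # Eq. (4)\n\nprint(smooth_labels(label_index=0, num_labels=3))  # tensor([0.8667, 0.0667, 0.0667])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},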
{
"text": "Token smoothing This is almost the same as label smoothing but differs in the direction of the smoothing-token-wise. It works on the basis that words close together often have similar meanings. We set the max width to 5 to consider this smoothing in the experiments. The token-wise smoothing is illustrated as a yellow box in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "Leaking To enable learning the probability for labels instead of blank, we use the one-direction smoothing named \"leaking.\" This is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i,blank = (1 \u2212 )p i,blank + p i,\u0233 ,",
"eq_num": "(5)"
}
],
"section": "Smoothing for CTCTC",
"sec_num": null
},
{
"text": "where is the smoothing rate, p i,blank and p i,\u0233 are the probabilities for blank and the reference label y of i-th data of the input sequence, respectively. This is calculated only for the probability of blank, and is illustrated as an orange arrow in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 260,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},
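{
"text": "A minimal sketch of the leaking operation in Equation (5): part of the reference label's probability is mixed into the blank probability at every position, which lets the reference label receive gradient even at blank-aligned tokens. The indices and the smoothing rate of 0.2 mirror the experimental settings; the function name is an illustrative assumption.\n\nimport torch\n\ndef leak_blank(probs, ybar, blank, eps=0.2):\n    # probs: (T, |L|) per-token probabilities; returns a leaked copy.\n    leaked = probs.clone()\n    leaked[:, blank] = (1.0 - eps) * probs[:, blank] + eps * probs[:, ybar]  # Eq. (5)\n    return leaked\n\nprobs = torch.softmax(torch.randn(6, 3), dim=-1)\nprint(leak_blank(probs, ybar=0, blank=2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing for CTCTC",
"sec_num": null
},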
{
"text": "Our experiments were based on the identification of informative COVID-19 English Tweets dataset mentioned in Section 2. We conducted two experiments on the basis of the validation and testing data, respectively. For the validation data-based experiment, we used training data contains 7,000 tweets and validating data contains 1,000 tweets for training and testing, respectively. For the testing data-based experiment, we used 8,000 tweets mixed from the training and validating data for 4fold cross validation. Then, an ensemble of the best model of each fold data were used for testing data. We added the output scores of each model for the model ensemble. The models were implemented in PyTorch (Paszke et al., 2019) , with Transformers (Wolf et al., 2019) and Deep Graph Library (Wang et al., 2019) , and learned with the RAdam optimizer (Liu et al., 2020) with a learning rate of 0.0001. We used BERT-base, uncased (Devlin et al., 2019) as a pretrained model, with fine-tuning and a learning rate of 0.00002. We used spaCy (Honnibal and Montani, 2017) for dependency parsing.",
"cite_spans": [
{
"start": 698,
"end": 719,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 740,
"end": 759,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 783,
"end": 802,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 842,
"end": 860,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 920,
"end": 941,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1028,
"end": 1056,
"text": "(Honnibal and Montani, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "4.1"
},
{
"text": "The following hyperparameters were used: number of GATs layers was 2; a mini-batch size of 16; L2 regularization coefficient of 0.1; dropout rate of 0.1; word dropout rate of 0.2; 50 training iterations, with early stopping on the validating data on the basis of the F1 score for the informative class; and smoothing ratio for the three smoothing methods of CTCTC of 0.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "4.1"
},
{
"text": "We prepared baseline methods as shown in Figure 3 . To confirm the effectiveness of the GATs, a baseline method of \"no GATs\" that does not use GATs but the output vectors of BERT is directly fed into the FFNNs. Also, to confirm the effectiveness of CTCTC, a baseline of \"no CTCTC\" that does not use CTCTC but cross entropy loss is used. Table 1 shows the results for the validation databased experiment. The rows in which Use GATs and Use CTCTC are not checked indicate the baselines shown in Section 4.2. F1 score shows the F1 score for the informative class with the mean and standard deviation of five-time trials of the same settings. Our methods using both GATs and CTCTC (# 9 and 10) achieved the top-2 results in the table. Table 2 shows the results on the test data, which are the official results of the shared task and we ranked 21st out of 55 participants 5 . The table also shows the results of the top-3 teams in the shared task.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 50,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 338,
"end": 345,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 732,
"end": 739,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Baseline methods",
"sec_num": "4.2"
},
{
"text": "The results for the methods using GATs with CTCTC (#9 and 10) are better than the others. This is because our CTCTC uses vectors of each token so the performance depends on the quality of the vector of each token. Our GATs work to improve the quality of the vector of each token by using token-to-token relations. Therefore, we believe our GATs and CTCTC work well in combination. On the other hand, GATs without CTCTC cannot make the best use of the improved vectors because they are mixed up vectors of tokens into one vector using max-pooling, so some of the details of the vectors are lost. Also, in using CTCTC without GATs, we observed that the output vectors of each token in the sentence are almost the same. This means that token-level information is lost, so accuracy may be lower for methods using CTCTC in these cases. By using GATs with CTCTC, we can avoid losing the information, which leads to good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.4"
},
{
"text": "There are a number of methods that use GATs with a pre-trained language model. Lu et al. (2020) use 5 https://competitions.codalab.org/ competitions/25845#results a network on a vocabulary graph, which is based on word co-occurrence information, and Huang and Carley (2019) and Xu and Yang (2019) use syntactic features as a graph. Also, there are several methods that use positional encoding into GATs (Ingraham et al., 2019; Ishiwatari et al., 2020) . Our method uses GATs to consider syntactic features with positional features in combination, which is distinguishable from conventional methods.",
"cite_spans": [
{
"start": 250,
"end": 273,
"text": "Huang and Carley (2019)",
"ref_id": "BIBREF4"
},
{
"start": 278,
"end": 296,
"text": "Xu and Yang (2019)",
"ref_id": "BIBREF21"
},
{
"start": 403,
"end": 426,
"text": "(Ingraham et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 427,
"end": 451,
"text": "Ishiwatari et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The CTC loss function is widely used for long data sequence with not-one-to-one-aligned reference data such as speech recognition (Graves et al., 2013; Kim et al., 2017) , but to the best of our knowledge, no method that uses CTC for text classification tasks exists.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Graves et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 152,
"end": 169,
"text": "Kim et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we proposed a GATs-based model that embeds token-to-token relations, and a loss function that can learn classes for each tokens. We conducted evaluations using the identification of informative COVID-19 English Tweets dataset, and confirmed that our proposed methods are effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "To determine whether CTCTC can work for other tasks especially for the classification into large amount of classes and to exploit pre-trained models other than BERT, especially for tweetspecific models such as BERTweet (Nguyen et al., 2020a) and CT-BERT (M\u00fcller et al., 2020) , are subjects of as our future work.",
"cite_spans": [
{
"start": 254,
"end": 275,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://coronavirus.jhu.edu/map.html 2 https://www.covid19healthsystem.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://noisy-text.github.io/2020/ covid19tweet-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We attempted to use each type separately with the GAT, but the results were worse regardless of dependency types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the identification of informative COVID-19 English Tweets shared task organizers for providing the dataset and opportunity for discussion. We also thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Faustino",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "369--376",
"other_ids": {
"DOI": [
"10.1145/1143844.1143891"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks. In Pro- ceedings of the 23rd international conference on Ma- chine learning, pages 369-376.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Abdel-Rahman",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE international conference on acoustics, speech and signal processing",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2013.6638947"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In 2013 IEEE international conference on acoustics, speech and signal process- ing, pages 6645-6649. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Syntaxaware aspect level sentiment classification with graph attention networks",
"authors": [
{
"first": "Binxuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"M"
],
"last": "Carley",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5472--5480",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1549"
]
},
"num": null,
"urls": [],
"raw_text": "Binxuan Huang and Kathleen M Carley. 2019. Syntax- aware aspect level sentiment classification with graph attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5472-5480.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generative models for graph-based protein design",
"authors": [
{
"first": "John",
"middle": [],
"last": "Ingraham",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "15820--15831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola. 2019. Generative models for graph-based protein design. In Advances in Neu- ral Information Processing Systems, pages 15820- 15831.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Relation-aware graph attention networks with relational position encodings for emotion recognition in conversations",
"authors": [
{
"first": "Taichi",
"middle": [],
"last": "Ishiwatari",
"suffix": ""
},
{
"first": "Yuki",
"middle": [],
"last": "Yasuda",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Goto",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taichi Ishiwatari, Yuki Yasuda, Taro Miyazaki, and Jun Goto. 2020. Relation-aware graph attention net- works with relational position encodings for emo- tion recognition in conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP 2020).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Joint CTC-attention based end-to-end speech recognition using multi-task learning",
"authors": [
{
"first": "Suyoun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE international conference on acoustics, speech and signal processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4835--4839",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2017.7953075"
]
},
"num": null,
"urls": [],
"raw_text": "Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint CTC-attention based end-to-end speech recog- nition using multi-task learning. In 2017 IEEE inter- national conference on acoustics, speech and signal processing (ICASSP), pages 4835-4839. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On the variance of the adaptive learning rate and beyond",
"authors": [
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Eighth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In Proceedings of the Eighth Inter- national Conference on Learning Representations (ICLR 2020).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "VGCN-BERT: Augmenting BERT with graph embedding for text classification",
"authors": [
{
"first": "Zhibin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Pan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2020,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "369--382",
"other_ids": {
"DOI": [
"10.1007/978-3-030-45439-5_25"
]
},
"num": null,
"urls": [],
"raw_text": "Zhibin Lu, Pan Du, and Jian-Yun Nie. 2020. VGCN- BERT: Augmenting BERT with graph embedding for text classification. In European Conference on Information Retrieval, pages 369-382. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Os- car T\u00e4ckstr\u00f6m, et al. 2013. Universal dependency annotation for multilingual parsing. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 92-97.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Salath\u00e9",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Per",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kummervold",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.07503"
]
},
"num": null,
"urls": [],
"raw_text": "Martin M\u00fcller, Marcel Salath\u00e9, and Per E Kummervold. 2020. Covid-twitter-bert: A natural language pro- cessing model to analyse covid-19 content on twitter. arXiv preprint arXiv:2005.07503.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERTweet: A pre-trained language model for English Tweets",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Vu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020a. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Mai",
"middle": [
"Hoang"
],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Linh",
"middle": [
"The"
],
"last": "Dao",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Doan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 6th Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, Afshin Rahimi, Mai Hoang Dao, Linh The Nguyen, and Long Doan. 2020b. WNUT-2020 Task 2: Identification of Infor- mative COVID-19 English Tweets. In Proceedings of the 6th Workshop on Noisy User-generated Text.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Py-Torch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- Torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Edinburgh neural machine translation systems for WMT 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "371--376",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2323"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for WMT 16. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 371-376, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.308"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 2818-2826.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Graph Attention Networks. International Conference on Learning Representations",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Veli\u010dkovi\u0107",
"suffix": ""
},
{
"first": "Guillem",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "Arantxa",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Li\u00f2",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deep graph library: Towards efficient and scalable deep learning on graphs",
"authors": [
{
"first": "Minjie",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lingfan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Gai",
"suffix": ""
},
{
"first": "Zihao",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Mufei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinjing",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.01315"
]
},
"num": null,
"urls": [],
"raw_text": "Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, et al. 2019. Deep graph library: Towards efficient and scalable deep learning on graphs. arXiv preprint arXiv:1909.01315.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "HuggingFace's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Look again at the syntax: Relational graph convolutional network for gendered ambiguous pronoun resolution",
"authors": [
{
"first": "Yinchuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2019,
"venue": "GeBNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3814"
]
},
"num": null,
"urls": [],
"raw_text": "Yinchuan Xu and Junlin Yang. 2019. Look again at the syntax: Relational graph convolutional network for gendered ambiguous pronoun resolution. GeBNLP 2019, page 96.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "CTC is calculated as the sum of two probabilities in the blue box, while CTCTC is the sum of all probabilities except for the all-blank path as shown in the red box. Green and yellow boxes and orange arrow show the direction of label smoothing, token smoothing, and leaking, respectively. when calculating the multi-head attention on the basis of the work fromIngraham et al. (2019) andIshiwatari et al. (2020).Following the work of Ishiwatari et al.(2020), we compared two types of positional embedding in our experiments, fixed and learned.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Overview of baseline methods. Green elements show the process of our proposed method. The blue arrow and orange elements show the process of a baseline method that does not use GATs and does not use CTCTC, respectively.",
"num": null
},
"TABREF1": {
"text": "Experimental results on validation data-based experiments.",
"content": "<table><tr><td/><td colspan=\"2\">GATs parameters</td><td/><td colspan=\"2\">CTCTC parameters</td><td>F1 score</td></tr><tr><td>#</td><td>Use GATs</td><td>Positional feature</td><td>Use CTCTC</td><td>Label smoothing</td><td>Token smoothing</td><td>Leaking</td></tr><tr><td>1</td><td/><td/><td/><td/><td/><td>0.9154 \u00b1 0.0041</td></tr><tr><td>2</td><td/><td/><td/><td/><td/><td>0.9134 \u00b1 0.0015</td></tr><tr><td>3</td><td/><td>Fixed</td><td/><td/><td/><td>0.9151 \u00b1 0.0026</td></tr><tr><td>4</td><td/><td>Learned</td><td/><td/><td/><td>0.9151 \u00b1 0.0009</td></tr><tr><td>5</td><td/><td/><td/><td/><td/><td>0.0000 \u00b1 0.0000</td></tr><tr><td>6</td><td/><td/><td/><td/><td/><td>0.9128 \u00b1 0.0026</td></tr><tr><td>7</td><td/><td/><td/><td/><td/><td>0.9133 \u00b1 0.0052</td></tr><tr><td>8</td><td/><td/><td/><td/><td/><td>0.9153 \u00b1 0.0024</td></tr><tr><td>9</td><td/><td>Fixed</td><td/><td/><td/><td>0.9172 \u00b1 0.0027</td></tr><tr><td>10</td><td/><td>Learned</td><td/><td/><td/><td>0.9175 \u00b1 0.0044</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Results on the test data.",
"content": "<table><tr><td>Team / Method</td><td>F1 score</td></tr><tr><td>Ours (#9 in Table 1)</td><td>0.8898</td></tr><tr><td>Ours (#10 in Table 1)</td><td>0.8885</td></tr><tr><td>NutCracker</td><td>0.9096</td></tr><tr><td>NLP North</td><td>0.9096</td></tr><tr><td>UIT-HSE</td><td>0.9094</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}