{
"paper_id": "S18-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:44:08.134628Z"
},
"title": "Zewen at SemEval-2018 Task 1: An Ensemble Model for Affect Prediction in Tweets",
"authors": [
{
"first": "Zewen",
"middle": [],
"last": "Chi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Heyan",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Jiangui",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Ran",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a method for Affect in Tweets, which is the task to automatically determine the intensity of emotions and intensity of sentiment of tweets. The term affect refers to emotion-related categories such as anger, fear, etc. Intensity of emotions need to be quantified into a real valued score in [0, 1]. We propose an ensemble system including four different deep learning methods which are CNN, Bidirectional LSTM (BLSTM), LSTM-CNN and a CNN-based Attention model (CA). Our system gets an average Pearson correlation score of 0.682 in the subtask EI-reg and an average Pearson correlation score of 0.784 in subtask V-reg, which ranks 19th among 48 systems in EI-reg and 17th among 38 systems in V-reg.",
"pdf_parse": {
"paper_id": "S18-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a method for Affect in Tweets, which is the task to automatically determine the intensity of emotions and intensity of sentiment of tweets. The term affect refers to emotion-related categories such as anger, fear, etc. Intensity of emotions need to be quantified into a real valued score in [0, 1]. We propose an ensemble system including four different deep learning methods which are CNN, Bidirectional LSTM (BLSTM), LSTM-CNN and a CNN-based Attention model (CA). Our system gets an average Pearson correlation score of 0.682 in the subtask EI-reg and an average Pearson correlation score of 0.784 in subtask V-reg, which ranks 19th among 48 systems in EI-reg and 17th among 38 systems in V-reg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Affect determination is a significant part of nature language processing. Especially, affect in tweets becomes a focus in recent years. Sentiment Analysis in Twitter, which is a task of SemEval, was firstly proposed in 2013 and not replaced until 2018. In SemEval 2018, the task Affect in Tweets (AIT) (Mohammad et al., 2018) was proposed and the objective is to automatically determine the intensity of emotions (E) and intensity of sentiment (aka valence V) of tweets. In this paper, we focus on two subtasks:",
"cite_spans": [
{
"start": 302,
"end": 325,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 EI-reg (emotion intensity regression) -Given a tweet and an emotion E, determine the intensity of E that best represents the mental state of the tweeter -a real-valued score between 0 (least E) and 1 (most E)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 V-reg (sentiment intensity regression) -Given a tweet, determine the intensity of sentiment or valence (V) that best represents the mental state of the tweeter -a real-valued score between 0 (most negative) and 1 (most positive)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Before 2016, most systems use Support Vector Machine (SVM), Naive Bayes, maximum entropy and linear regression (Nakov et al., 2013; Rosenthal et al., 2014 Rosenthal et al., , 2015 . In SemEval 2014, deep learning methods started to appear and a team using them won the second place. Since 2015, more and more teams who were rank at the top used deep learning methods and now deep learning methods including CNN and LSTM networks become really popular (Nakov et al., 2016; Rosenthal et al., 2017) .",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Nakov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 132,
"end": 154,
"text": "Rosenthal et al., 2014",
"ref_id": "BIBREF21"
},
{
"start": 155,
"end": 179,
"text": "Rosenthal et al., , 2015",
"ref_id": "BIBREF20"
},
{
"start": 451,
"end": 471,
"text": "(Nakov et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 472,
"end": 495,
"text": "Rosenthal et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system described in this paper is an ensemble of four different DNN methods including CNN, Bidirectional LSTM (Bi-LSTM), LSTM-CNN and a CNN-based Attention model (CA). In these methods, words in tweets are firstly mapped to word vectors. After intensity scores are calculated by these models, we use a logistic regression and finally give the scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 describes the four various methods and the ensemble method used in our system. Section 3 and Section 4 give the implementation and training details of our system for subtask EI-reg and V-reg. Section 5 states the results and discussion in the evaluation period. Finally, Section 6 makes a conclusion on this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by Kim's work on sentence classification (Kim, 2014) , the architecture of the CNN model used in our system is almost identical to his model. As it is shown in Figure 1 , tweets are first fed into the embedding layer, which converts words into word vectors. Then the tweet is mapped into a matrix M of size n \u00d7 d. In order to reduce the number of parameters in the neural network, we just use the single channel non-static model, which sets pre-trained word vectors in the embedding layer and can be modified in the training period. In the convolution layer, convolution operations are applied on the submatrixes of M. The convolution operation here is defined as:",
"cite_spans": [
{
"start": 50,
"end": 61,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "CNN",
"sec_num": "2.1"
},
{
"text": "c k = f k ( i j \u03c9 ij x [i:i+h\u22121] + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "2.1"
},
{
"text": "where b \u2208 R is a bias term and f is a nonlinear function such as ReLU (Jarrett et al., 2009) , which is used in our approach. Filters are applied with different size of windows and in each window of size h, feature matrix c \u2208 R (n\u2212h+1)\u00d7m is produced corresponding to the filters:",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "(Jarrett et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "2.1"
},
{
"text": "c = [c 1 , c 2 , ..., c k , ..., c m ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "2.1"
},
{
"text": "where m is the number of filters and c k \u2208 R n\u2212h+1 represents the features extracted from a word sequence. In the pooling layer, we apply a max-over-time pooling operation (Collobert et al., 2011) over feature matrix and take the maximum in each column to preserve the most important features. These maximums are concatenated and then fed into a fully-connected network (L1, L2). L2 is followed by a single sigmoid neuron node to generate the prediction of the affect on the interval [0, 1].",
"cite_spans": [
{
"start": 172,
"end": 196,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "2.1"
},
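A minimal PyTorch sketch of the CNN regressor described above. The filter widths [2, 3, 4], 256 filters per width, and dropout rate 0.2 come from the hyper-parameter tables; the class name and the sizes of the two fully-connected layers are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CNNRegressor(nn.Module):
    """Embedding -> multi-width convolutions -> max-over-time pooling -> FC (L1, L2) -> sigmoid."""

    def __init__(self, vocab_size, emb_dim=300, filter_sizes=(2, 3, 4),
                 num_filters=256, hidden_sizes=(128, 64), dropout=0.2, pretrained=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:                     # non-static: fine-tuned during training
            self.embedding.weight.data.copy_(pretrained)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, kernel_size=h) for h in filter_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Sequential(
            nn.Linear(num_filters * len(filter_sizes), hidden_sizes[0]), nn.ReLU(),
            nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(),
            nn.Linear(hidden_sizes[1], 1), nn.Sigmoid(),
        )

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(features).squeeze(1)            # intensity score in [0, 1]
```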
{
"text": "The LSTM architecture used in our system is a kind of modern Recurrent Neural Networks (RNN). Comparing to CNN, the way RNN work is more similar to that how humans read sentences. A word vector sequence x, which is converted from a tweet, will be fed to the RNN in order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
{
"text": "h t = \u03c3(W hx x t + W hh h t\u22121 + b h ) y t = sof tmax(W yh h t + b y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
{
"text": "At time t, the RNN takes the input from the cur- rent word x t and also from the previous hidden state h t\u22121 to calculate the hidden state h t and the output\u0177 t , which means\u0177 t at time t is in the influence of all previous input words x 1 , ..., x t\u22121 . However, this regular RNN suffers from the exploding and vanishing gradient problem when using the backpropagation algorithm (Hochreiter, 1998) , which makes RNN hard to train. Therefore, we use the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) to overcome this problem. Each ordinary node of hidden layer in LSTMs is replaced by a memory cell and the following equations describe the LSTM:",
"cite_spans": [
{
"start": 380,
"end": 398,
"text": "(Hochreiter, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 492,
"end": 526,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
{
"text": "g t = \u03c6(W gx x t + W gh h t\u22121 + b g ) i t = \u03c3(W ix x t + W ih h t\u22121 + b i ) f t = \u03c3(W f x x t + W f h h t\u22121 + b f ) o t = \u03c3(W ox x t + W oh h t\u22121 + b o ) s t = g t i t + s t\u22121 f t h t = \u03c6(s t ) o t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
{
"text": "The vector h t is the value of hidden layer of LSTM at time t, g t is the input node, i t is the input gate, f t is the forget gate, o t is the output gate and s t is the internal state where is pointwise multiplication. According to Zaremba and Sutskever (2014) , the function \u03c6 used here is the tanh function.",
"cite_spans": [
{
"start": 234,
"end": 262,
"text": "Zaremba and Sutskever (2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
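For concreteness, the gate equations above can be written out directly. This is a sketch of a single LSTM step; the weight/bias dictionary layout and shapes are assumptions for illustration, and in practice a built-in implementation such as torch.nn.LSTM would be used instead.

```python
import torch

def lstm_step(x_t, h_prev, s_prev, W, b):
    """One LSTM step following the equations above.
    W is a dict of weight matrices (W['gx'], W['gh'], ...) and b a dict of bias vectors."""
    g_t = torch.tanh(x_t @ W['gx'] + h_prev @ W['gh'] + b['g'])      # input node
    i_t = torch.sigmoid(x_t @ W['ix'] + h_prev @ W['ih'] + b['i'])   # input gate
    f_t = torch.sigmoid(x_t @ W['fx'] + h_prev @ W['fh'] + b['f'])   # forget gate
    o_t = torch.sigmoid(x_t @ W['ox'] + h_prev @ W['oh'] + b['o'])   # output gate
    s_t = g_t * i_t + s_prev * f_t                                   # internal state
    h_t = torch.tanh(s_t) * o_t                                      # hidden state
    return h_t, s_t
```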
{
"text": "For every point in a given sequence, Graves et al. (2005) shows that a bidirectional LSTM can preserve more sequential information about all sequential points before and after it. As the Figure 2 shows, we concatenate the hidden states of two separate LSTMs after they process the word sequence in opposite direction and get the concatenated state h \u2208 R 2m , which is fed to fully connected layers and finally give the result with a single sigmoid neuron node.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "Graves et al. (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 187,
"end": 196,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Bidirectional LSTM",
"sec_num": "2.2"
},
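A sketch of the bidirectional LSTM regressor under the same conventions. The hidden size of 300 per direction and dropout rate 0.5 come from the hyper-parameter tables; the fully-connected layer size is an illustrative assumption.

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Embedding -> BiLSTM -> concatenated last forward/backward states -> FC -> sigmoid."""

    def __init__(self, vocab_size, emb_dim=300, hidden=300, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Sequential(
            nn.Linear(2 * hidden, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, token_ids):
        x = self.embedding(token_ids)              # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(x)                 # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=1)     # h^f_n and h^b_n concatenated
        return self.fc(self.dropout(h)).squeeze(1)
```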
{
"text": "The architecture of LSTM-CNN is a combination of previous two model. Instead of feeding the out- put of LSTM to the fully connected layers, the output of LSTM h t at each time t are regarded as the input of CNN and Figure 3 shows the architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "LSTM-CNN",
"sec_num": "2.3"
},
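A corresponding LSTM-CNN sketch, where the per-time-step LSTM outputs h_t are fed to the convolution and pooling layers instead of to the fully connected layers. The filter width 3 and 200 filters come from the hyper-parameter tables; the final FC size is an assumption.

```python
import torch
import torch.nn as nn

class LSTMCNNRegressor(nn.Module):
    """Embedding -> LSTM -> CNN over the LSTM outputs -> max-over-time pooling -> FC -> sigmoid."""

    def __init__(self, vocab_size, emb_dim=300, hidden=300,
                 num_filters=200, kernel_size=3, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.conv = nn.Conv1d(hidden, num_filters, kernel_size)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Sequential(nn.Linear(num_filters, 64), nn.ReLU(),
                                nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, token_ids):
        x = self.embedding(token_ids)                            # (batch, seq_len, emb_dim)
        outputs, _ = self.lstm(x)                                # h_t for every time step
        feats = torch.relu(self.conv(outputs.transpose(1, 2)))   # (batch, filters, L')
        pooled = feats.max(dim=2).values                         # max-over-time pooling
        return self.fc(self.dropout(pooled)).squeeze(1)
```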
{
"text": "Since attention mechanism has achieved significant improvements in many NLP tasks, including machine translation (Bahdanau et al., 2014) , caption generation (Xu et al., 2015 ) and text summarization (Rush et al., 2015), it becomes an integral part of compelling sequence modeling and transduction models in various tasks. Motivated by Du's work on sentence classification (Du et al., 2017) , the architecture of our CNNbased attention model resembles his model. We first use a CNN-based network to model the attention signal in sentences. The convolution operation here is same as that described in Section 2.1. The attention signal of original text is represented by the output of convolutional filter. In order to reduce the noise, multiple filters with same size of windows are applied. After that, we get the corresponding attention similarity: So far, we have obtained attention signal c t and the corresponding hidden state vector of RNN h t . The representation of the whole sentence can be computed by",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 158,
"end": 174,
"text": "(Xu et al., 2015",
"ref_id": "BIBREF25"
},
{
"start": 373,
"end": 390,
"text": "(Du et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A CNN-based Attention Model (CA)",
"sec_num": "2.4"
},
{
"text": "s = 1 T T \u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CNN-based Attention Model (CA)",
"sec_num": "2.4"
},
{
"text": "t=0 c t h t And then s \u2208 R d is fed into a fully-connected network (L1, L2). L2 is followed by a single sigmoid neuron node to generate the prediction of the affect on the interval [0, 1]. The architecture of this model is shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "A CNN-based Attention Model (CA)",
"sec_num": "2.4"
},
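A sketch of how the averaged attention signal and the sentence representation s = (1/T) \u2211_t c_t h_t can be computed from the filter outputs and the LSTM hidden states. The tensor shapes and function name are assumptions for illustration.

```python
import torch

def sentence_representation(conv_similarities, hidden_states):
    """conv_similarities: (batch, m, T) attention similarities from m filters of one width.
    hidden_states: (batch, T, d) LSTM hidden states h_t.
    Returns s: (batch, d), the attention-weighted sentence representation."""
    c = conv_similarities.mean(dim=1)            # average along the filter axis -> (batch, T)
    weighted = c.unsqueeze(2) * hidden_states    # c_t * h_t at every time step
    return weighted.mean(dim=1)                  # s = (1/T) * sum_t c_t h_t
```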
{
"text": "According to the results of SemEval-2017 task 4, the use of ensembles stood out clearly. Therefore, we use a mix of deep learning methods to make our system obtain better predictive performance. Inspired by the boosting algorithms, we use a logistic regression to improve the accuracy of these four methods and the architecture is shown in Figure 5 . In order to make the model simple, it only takes the output of the four methods as input rather than training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 348,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "2.5"
},
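A minimal sketch of the ensemble step, assuming the "logistic regression" is a single sigmoid unit trained on the four model outputs (as described above, it sees only the predictions, not the original training data). The class name is an assumption.

```python
import torch
import torch.nn as nn

class EnsembleRegressor(nn.Module):
    """Logistic-regression-style combiner over the four model predictions."""

    def __init__(self, num_models=4):
        super().__init__()
        self.linear = nn.Linear(num_models, 1)

    def forward(self, preds):   # preds: (batch, 4) scores from CNN / BLSTM / LSTM-CNN / CA
        return torch.sigmoid(self.linear(preds)).squeeze(1)
```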
{
"text": "We implemented our system with PyTorch (Paszke et al., 2017) in Python 3.",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
{
"text": "Preprocessing: For making tweets string clean, we apply a preprocessing procedure on the input tweets which removes the abbreviations like 's, 've and make them lowercased. GloVe (Pennington et al., 2014) trained by Common Crawl.",
"cite_spans": [
{
"start": 179,
"end": 204,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
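A sketch of the rough preprocessing step described above; the exact list of contraction suffixes removed is an assumption, since the paper only names 's and 've as examples.

```python
import re

def preprocess(tweet):
    """Lowercase the tweet and strip contraction suffixes such as 's and 've."""
    tweet = tweet.lower()
    # drop common contraction endings; the full list used by the authors is not specified
    tweet = re.sub(r"'(s|ve|re|ll|d|m)\b", "", tweet)
    return tweet

print(preprocess("I've been waiting, it's AMAZING"))  # -> "i been waiting, it amazing"
```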
{
"text": "Model Hyper-parameters: Table 1 and Table 2 show the hyper-parameters we use in our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 43,
"text": "Table 1 and Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
{
"text": "For fully connected layers, no more than two fully-connected layers are used in the four methods and all fully-connected layers are followed by ReLU. Before the outputs of pooling layers and LSTMs are fed to the fully connected layers, a dropout is applied and the details are described in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
{
"text": "The dataset used in our system is provided by the AIT task and no external datasets are used in training period. For the subtask EI-reg and subtask Vreg, they are trained with the same model hyperparameters which are listed in Table 1 and Table 2 . Also, the four methods use the same word embeddings, which is a pre-trained 300-dimensional word vectors with common crawl by GloVe algorithm. For different emotions, we train the models for 10 epochs respectively. The network parameters are learned by minimizing the Mean Absolute Error (MAE) between the gold labels and predictions and the four methods used in our system are trained separately. We optimize the loss function by back-propagating algorithm via Minibatch Gradient descent with batch size of 8 for the 4 deep learning models and full batch learning for the ensemble model, as well as the Adam opti-mization algorithm (Kingma and Ba, 2014) for all models with initial leaning rate of 0.001 and 0.01 for the four deep learning models and the ensemble model, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 247,
"text": "Table 1 and Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "4"
},
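A sketch of the training setup under these settings (MAE loss, Adam with learning rate 0.001, batch size 8, 10 epochs). The dataset object yielding (token_ids, gold) pairs and the function name are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10, batch_size=8, lr=1e-3):
    """Train one of the four deep learning models with MAE loss and Adam."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()                      # Mean Absolute Error
    for epoch in range(epochs):
        for token_ids, gold in loader:           # gold: intensity scores in [0, 1]
            optimizer.zero_grad()
            pred = model(token_ids)
            loss = criterion(pred, gold)
            loss.backward()
            optimizer.step()
    return model
```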
{
"text": "We compare the results of the four methods used in our system, the ensemble system, the SVM Unigrams Baseline provided from the AIT task and the best-performing system -SeerNet in Table 3 . The metric for evaluating performance is Pearson Correlation.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "Its remarkable that, comparing to the individual models, our ensemble model has an improvement of at least 2% on EI-reg subtask and 1.1% on Vreg subtask. However, it's obvious that there is a gap between our models and the best-performance system. The rough preprocessing method of our system is one of the reason for the low score. Because of some words in tweets are misspelled or in a special format like 'yaaaaay!', some of the information is lost in this process. So we added an experiment on the V-reg task to study the effect of preprocessing method. We replace the text preprocessing method with the ekphrasis 1 for the tokenization, word normalization, word segmentation (for splitting hashtags) and spell correction and the keep the other parameters unchanged. As it is shown in Table 4 , the four methods as well as the ensemble model all get an improvement on the results. Actually, some expressions like dates, urls, hashtags and emoticons are converted into the special tokens like <date> , <url>, <hashtag> and <joy>, but these tokens are not in the dictionary of pre-trained word vectors, which means the information of these tokens is still wasted in the embedding process.",
"cite_spans": [],
"ref_spans": [
{
"start": 789,
"end": 796,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "There is much room for the improvement of our method:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "1. In our system, a single pre-trained word embedding is used, which lack experimental evidence. For future work, combining more kinds of word embeddings should be taken into consideration. Table 4 : Results of different text preprocessing method on V-reg task when the other parameters are kept unchanged.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "3. For the input features, we only use the word vectors. We are supposed to experiment with more features like lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "4. In our system, we just use a simple logistic regression but achieve an impressive result on the two subtasks. There is an interesting idea that we can do more work on finding a better ensemble model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result and Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we propose a model on the sub-task EI-reg and V-reg of SemEval-2018 Task 1: Affect on Tweets. The submitted system is an ensemble model based on CNN, Bidirectional LSTM (BLSTM), LSTM-CNN and a CNN-based Attention model (CA). All methods are described in detail to make our work replicable. For future work, it would be significant to make an improvement on preprocessing of tweets, doing more experiment on word embeddings and feature selection, model validation and ensemble method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": ". We adjust the hyper-parameters by doing evaluation on dev dataset. For future work, we can apply a more advanced strategy like Cross Validation.1 github.com/cbaziotis/ekphrasis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Pelekis",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Doulkeridis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "747--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017), pages 747-754, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bb twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms",
"authors": [
{
"first": "",
"middle": [],
"last": "Mathieu Cliche",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.06125"
]
},
"num": null,
"urls": [],
"raw_text": "Mathieu Cliche. 2017. Bb twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms. arXiv preprint arXiv:1704.06125.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A convolutional attention model for text classification",
"authors": [
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "National CCF Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "183--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiachen Du, Lin Gui, Ruifeng Xu, and Yulan He. 2017. A convolutional attention model for text classifica- tion. In National CCF Conference on Natural Lan- guage Processing and Chinese Computing, pages 183-195. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Prayas at emoint 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Devang",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Prayas",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Kaushal Kumar",
"middle": [],
"last": "Shukla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Goel, Devang Kulshreshtha, Prayas Jain, and Kaushal Kumar Shukla. 2017. Prayas at emoint 2017: An ensemble of deep neural architectures for emotion intensity prediction in tweets. In Pro- ceedings of the 8th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 58-65.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5-6",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural Net- works, 18(5-6):602-610.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Senti17 at semeval-2017 task 4: Ten convolutional neural network voters for tweet polarity classification",
"authors": [
{
"first": "Hussam",
"middle": [],
"last": "Hamdan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.02023"
]
},
"num": null,
"urls": [],
"raw_text": "Hussam Hamdan. 2017. Senti17 at semeval-2017 task 4: Ten convolutional neural network voters for tweet polarity classification. arXiv preprint arXiv:1705.02023.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 1998,
"venue": "International Journal of Uncertainty",
"volume": "6",
"issue": "02",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter. 1998. The vanishing gradient prob- lem during learning recurrent neural nets and prob- lem solutions. International Journal of Uncer- tainty, Fuzziness and Knowledge-Based Systems, 6(02):107-116.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What is the best multi-stage architecture for object recognition?",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Jarrett",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Vision",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. 2009. What is the best multi-stage architecture for object recognition? In Computer Vision,",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "IEEE 12th International Conference on",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2146--2153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE 12th International Conference on, pages 2146-2153. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semeval-2013 task 2: Sentiment analysis in twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Zornitsa Kozareva, Alan Ritter, Sara Rosenthal, and Veselin Stoyanov Theresa Wilson. 2013. Semeval-2013 task 2: Sentiment analysis in twitter. volume 2.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semeval-2016 task 4: Sentiment analysis in twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. Semeval- 2016 task 4: Sentiment analysis in twitter. In Pro- ceedings of the 10th International Workshop on Se- mantic Evaluation (SemEval-2016), pages 1-18.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semeval-2017 task 4: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "502--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semeval-2015 task 10: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "451--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 task 10: Sentiment analysis in twitter. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 451-463.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semeval-2014 task 9: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. Semeval-2014 task 9: Sen- timent analysis in twitter. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 73-80, Dublin, Ireland. As- sociation for Computational Linguistics and Dublin City University.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lia at semeval-2017 task 4: An ensemble of neural networks for sentiment classification",
"authors": [
{
"first": "Mickael",
"middle": [],
"last": "Rouvier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "760--765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mickael Rouvier. 2017. Lia at semeval-2017 task 4: An ensemble of neural networks for sentiment clas- sification. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 760-765.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Alexander",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1509.00685"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for ab- stractive sentence summarization. arXiv preprint arXiv:1509.00685.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In International Conference on Machine Learning, pages 2048-2057.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recurrent neural network regularization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.2329"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The architecture of our CNN model"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The architecture of our bidirectional LSTM model, where h f n and h b n represent the last hidden state of the forward and backward LSTM respectively."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The architecture of our LSTM-CNN model."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The architecture of CNN-based Attention Model (CA)"
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The architecture of the ensemble model [c 1 , c 2 , ..., c k , ..., c m ]. Then we obtain the attention signal of each element which represents the importance of the corresponding word by averaging the attention similarities along the filter-axis: An RNN with LSTM units is used to encode the sentence. According to the equation in Section 2.2, the hidden state h t \u2208 R d (where d is the dimension of the RNN) at time t is h t = \u03c6(s t ) o t ."
},
"TABREF1": {
"text": "Fully connected layers hyper-parameters, the numbers represent the size of outputs of liner layers.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Methods</td><td>p</td><td>CNN</td><td>LSTM</td></tr><tr><td>CNN</td><td colspan=\"3\">0.2 [2, 3, 4], 256 Nil</td></tr><tr><td>BLSTM</td><td colspan=\"2\">0.5 Nil</td><td>300</td></tr><tr><td colspan=\"3\">LSTM-CNN 0.5 [3], 200</td><td>300</td></tr><tr><td>CA</td><td colspan=\"2\">0.5 [3], 50</td><td>150</td></tr></table>",
"num": null
},
"TABREF2": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Network hyper-parameters for the filters of CNN and hidden size of LSTM, and p is the dropout rate. For example, [2, 3, 4], 256 means the filter height is set to 2, 3 and 4, and the number of filters is set to 256 for different sizes of filters.</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "Results on Subtask EI-reg and V-reg.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Methods CNN BLSTM LSTM-CNN 0.759 Rough method ekphrasis 0.773 0.788 0.731 0.733 0.767 CA 0.761 0.773 Ensemble 0.784 0.793</td></tr></table>",
"num": null
}
}
}
}