{
"paper_id": "S16-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:26:49.955181Z"
},
"title": "INESC-ID at SemEval-2016 Task 4-A: Reducing the Problem of Out-of-Embedding Words",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ramon",
"middle": [
"F"
],
"last": "Astudillo",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "M\u00e1rio",
"middle": [
"J"
],
"last": "Silva",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the INESC-ID system for the 2016 edition of SemEval Twitter Sentiment Analysis shared task (subtask 4-A). The system was based on the Non-Linear Sub-space Embedding (NLSE) model developed for last year's competition. This model trains a projection of pre-trained embeddings into a small subspace using the supervised data available. Despite its simplicity, the system attained performances comparable to the best systems of last edition with no need for feature engineering. One limitation of this model was the assumption that a pre-trained embedding was available for every word. In this paper, we investigated different strategies to overcome this limitation by exploiting character-level embeddings and learning representations for out-ofembedding vocabulary words. The resulting approach outperforms our previous model by a relatively small margin, while still attaining strong results and a consistent good performance across all the evaluation datasets.",
"pdf_parse": {
"paper_id": "S16-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the INESC-ID system for the 2016 edition of SemEval Twitter Sentiment Analysis shared task (subtask 4-A). The system was based on the Non-Linear Sub-space Embedding (NLSE) model developed for last year's competition. This model trains a projection of pre-trained embeddings into a small subspace using the supervised data available. Despite its simplicity, the system attained performances comparable to the best systems of last edition with no need for feature engineering. One limitation of this model was the assumption that a pre-trained embedding was available for every word. In this paper, we investigated different strategies to overcome this limitation by exploiting character-level embeddings and learning representations for out-ofembedding vocabulary words. The resulting approach outperforms our previous model by a relatively small margin, while still attaining strong results and a consistent good performance across all the evaluation datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-trained word embeddings provide a simple means to attain semi-supervised learning in Natural Language Processing (NLP) tasks (Collobert et al., 2011) . They can be trained with large amounts of unsupervised data and be fine tuned as the initial building block of a semi-supervised system. However, in domains with a significant number of typos, use of slang and abbreviations, such as social media, the high number of singletons leads to a poor fine tuning of the embeddings. In previous work, we addressed this by learning a projection of the embeddings into a small sub-space (Astudillo et al., 2015b) . This allowed us to attain representations also for Out-Of-Vocabulary (OOV) words, provided that embeddings for those words are available. However, even if the embeddings are estimated from large amounts of unlabeled text, in noisy domains, such as Twitter, a significant number of words will not be seen and therefore will not have an embedding. We refer to those words as the Outof-Embedding Vocabulary (OOEV).",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 582,
"end": 607,
"text": "(Astudillo et al., 2015b)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the problem of obtaining good representations for OOEV words. We experimented with character to word models (C2W) and investigated different strategies for initializing and updating OOEVs from the available training data. The best results were attained by using the labeled data to perform small updates to these representations in the first few epochs of training. The resulting system outperforms that of the previous evaluation, although by a small margin. It ranks fourth in the 2016 evaluation with a consistently high performance in all years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we briefly review the approach introduced in the 2015 evaluation (Astudillo et al., 2015a) . For a particular regression or classification task, only a subset of all the latent aspects captured by the word embeddings will be useful. Therefore, instead of updating the embeddings directly with the available labeled data, we estimate a projection of these embeddings into a low dimensional sub-space. This simple method brings two fundamental advan-tages. Firstly, the lower dimensional embeddings require fewer parameters fitting the complexity of the target task and the available training data. Secondly, the learned projection can be used to adapt the representations for all words with an embedding, even if they do not occur in the labeled dataset.",
"cite_spans": [
{
"start": 82,
"end": 107,
"text": "(Astudillo et al., 2015a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "Assuming we are given a matrix of pre-trained embeddings, where each column represents a word from a vocabulary V, let such matrix be denoted by E \u2208 R e\u00d7|V| , where e is the number of latent dimensions. We define the adapted embedding matrix as the factorization S \u2022 E, where S \u2208 R s\u00d7e , with s e. The parameters of matrix S are estimated using the labeled dataset, while E is kept fixed. In other words, we determine the optimal projection of the embedding matrix E into a sub-space of dimension s. In what follows, we will refer to this approach as Non-Linear Sub-space Embedding (NLSE) model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "The NLSE can be interpreted as a simple feedforward neural network model (Rumelhart et al., 1985) with one single hidden layer utilizing the embedding sub-space approach. Let",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Rumelhart et al., 1985)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "m = [w 1 \u2022 \u2022 \u2022 w n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "(1) denote a message of n words. Each column w \u2208 {0, 1} v\u00d71 of m represents a word in one-hot form, that is, a vector of zeros of the size of the vocabulary v with a 1 on the i-th entry of the vector. Let y denote a categorical random variable over K classes. The NLSE model, estimates the probability of each possible category y = k \u2208 K given a message m as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y = k|m; \u03b8) \u221d exp (Y k \u2022 h \u2022 1)",
"eq_num": "(2)"
}
],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "with parameters \u03b8 = {S, Y}. Here, h \u2208 [0, 1] e\u00d7n are the activations of the hidden layer for each word, given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h = \u03c3 (S \u2022 E \u2022 m)",
"eq_num": "(3)"
}
],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "where \u03c3() is a sigmoid function acting on each element of the matrix. The matrix Y \u2208 R 3\u00d7s maps the embedding sub-space to the classification space and 1 \u2208 1 n\u00d71 is a matrix of ones that sums the scores for all words together, prior to normalization. This is equivalent to a bag-of-words assumption. Finally, the model computes a probability distribution over the K classes, using the softmax function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "3 Out-of-Embedding Vocabulary Words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "Despite the fact that word embeddings are typically estimated from very large amounts of unlabeled data, it is often the case that a number of words appearing on the training or test sets are not present on the unlabeled corpus. These words will not be represented in E. This problem is even more significant in social media environments like Twitter, where there is a significant lexical variation and where novel words, expressions and slang can be introduced over time. In Table 1 , we show the percentage of OOV and OOEV words on each Twitter dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "The n\u00e4ive way of dealing with this issue, is to simply set the embeddings of unknown words to zero, essentially ignoring them. As will see later, a better approach is to treat these words as model parameters and use the training signal to learn a better representation for them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLSE Model Overview",
"sec_num": "2"
},
{
"text": "One natural way of avoiding OOEV in neural network models, is to learn character-level embeddings and define a composition function to combine them into word representations, thus obtaining representations for any given word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.1"
},
{
"text": "We experimented using C2W, a simple compositional model for learning word representations, from character embeddings. Given a word w, the C2W model generates a set of character n-grams {c 1 , . . . , c m }, and projects each n-gram c i into a vector e c i \u2208 R d , where d is the number of latent dimensions. The individual character representations are then combined to obtain a fixed-size representation for word w as e w = e c 1 \u2295 . . . \u2295 e cm , where \u2295 denotes pointwise sum. These word representations can be used as the input to a standard neural language model where the parameters are estimated from unlabeled data by learning to predict words within a context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.1"
},
{
"text": "Unfortunately the C2W embeddings performed very poorly in our model. Therefore, to have embeddings for all the words we employed an approach similar to (Mikolov et al., 2013) . We learn a mapping between the embedding spaces induced by C2W and , allowing us compute an approximate SSG embedding for all the words. To this end, we first obtained C, the set of words present in the two embeddings spaces. Then, we learned a linear map T by solving for the following objective:",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping C2W to SSG Embeddings",
"sec_num": "3.2"
},
{
"text": "T \u2190 argmin T w\u2208C ||T \u2022 c w \u2212 s w || 2 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping C2W to SSG Embeddings",
"sec_num": "3.2"
},
{
"text": "where, c w denotes the C2W embedding for word w and s w denotes the SSG embedding for word w. This mapping, was then used to compute a SSG embeddings for each OOEV as s w = T \u2022 c w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping C2W to SSG Embeddings",
"sec_num": "3.2"
},
{
"text": "Given the small amount of supervised data, directly updating the embeddings with the SemEval train set leads to very poor results. It is however possible to update only the OOEV words present in the training set simultaneously to the computation of the subspace (Astudillo et al., 2015a) . To obtain positive results with this approach, it was also necessary to reduce the effect of training by lowering the learning rate to 0.001 and updating the embeddings only in the first two iterations.",
"cite_spans": [
{
"start": 262,
"end": 287,
"text": "(Astudillo et al., 2015a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Partial Update of Embeddings during Training",
"sec_num": "3.3"
},
{
"text": "4 Main Improvements over the 2015 NLSE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial Update of Embeddings during Training",
"sec_num": "3.3"
},
{
"text": "One complication with Twitter-based evaluations is the need of the participant to retrieve the tweets themselves, since some of the tweets may no longer be available. The INESC-ID system presented in 2015 employed a train set of 8604 tweets, considerably smaller than the original dataset (with 11328 tweets). For this edition, it was possible to get ahold of the full dataset, as utilized by Severyn and Moschitti (2015) . For reproducibility and comparison purposes our systems this year were developed with this dataset.",
"cite_spans": [
{
"start": 393,
"end": 421,
"text": "Severyn and Moschitti (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Partial Update of Embeddings during Training",
"sec_num": "3.3"
},
{
"text": "The system presented in 2015 was very simple both in its structure and the number hyperparameters. Furthermore, tunning and selection of candidate systems was also performed without automatic grid-search. It was therefore expected that our previous setup would outright produce better results by training on a larger dataset. Disappointingly, this was not the case. In fact, the NLSE optimized for the 2015 competition seemed to be sitting on a local optimum that was difficult to come out from. To overcome this problem, we introduced two modifications in the training procedure 1 . The NLSE is trained by minimizing the negative log-likelihood. This cost function is sub-optimal taking into account the evaluation metric, as it weights equally positive, negative and neutral predictions. A simple improvement over this cost is an asymmetric weighting that penalizes the predictions of neutral tweets. This was incorporated as a multiplicative factor on the loglikelihood of values 4/3, 4/3 and 1/3 for the positive, negative and neutral classes, respectively. To reduce the risk of getting trapped into a local minimum, the train data was shuffled before each training epoch. The asymmetric cost and randomization led to a slower, less consistent convergence. For this reason the number of iterations was increased from 8 to 12. The learning rate was changed from 0.01 to 0.005. Table 2 shows the effect of the improvements on the submitted system.",
"cite_spans": [],
"ref_spans": [
{
"start": 1381,
"end": 1388,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Partial Update of Embeddings during Training",
"sec_num": "3.3"
},
{
"text": "After introducing these two improvements, we investigated different methods to address the problem of OOEV as described in the previous sec-tion. Namely those exploiting C2W embeddings, mapping C2W embeddings to SSG embeddings and training the embeddings for OOEVs. The results of these strategies are displayed in Table 3 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Partial Update of Embeddings during Training",
"sec_num": "3.3"
},
{
"text": "As mentioned in the previous section, the system submitted is an improvement over our 2015 system (Astudillo et al., 2015b) . It therefore shares the same training characteristics as the previous model. The 52 million tweets used by Owoputi et al. (2013) and the tokenizer described in the same work were used to train the word embeddings Structured Skip-Gram (SSG). For this submission, the C2W embeddings were also trained using a publicly available toolkit 2 . For the annotated SemEval training data, the messages were previously pre-processed as follows: lower-casing, replacing Twitter user mentions and URLs with special tokens and reducing any character repetition to at most 3 characters. Following Astudillo et al. (2015a), we used embeddings with 600 dimensions and set the sub-space size to 10 dimensions.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Astudillo et al., 2015b)",
"ref_id": "BIBREF1"
},
{
"start": 233,
"end": 254,
"text": "Owoputi et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Submitted System",
"sec_num": "5"
},
{
"text": "To train the model, the development set was split into 80% for parameter learning and 20% for model evaluation and selection, maintaining the original relative class proportions in each set. The weights were all randomly initialized uniformly with ranges of [\u22120.001, 0.001], [\u22120.1, 0.1] and [\u22120.7, 0.7] for the OOEVs, subspace and classification layers respectively. The training procedure entailed minimizing the negative log-likelihood over the training data with respect to the parameters, using standard Stochastic Gradient Descent (Rumelhart et al., 1985) with a fixed learning rate of 0.005 and minibatch of size 1, i.e., updating the weights after each message was processed. We reshuffled the training 2 https://github.com/wlin12/wang2vec examples after each training epoch and performed model selection by early stopping after 12 iterations. The candidate for submission was manually selected by observing the performance across 2013, 2014 and 2015 datasets. Priority was given to models that presented a consistent high performance in all the datasets. In retrospect, this was most probably a suboptimal decision judging from the evaluation results. Table 4 displays the performance for the top 5 systems in SemEval 2016 task 4-B (Nakov et al., 2016 ). The NLSE system (labeled INESC-ID) ranks forth with a stable performance across all years. The results are particularly strong for 2013 with a difference of 0.017 points over the next best performing system on the top five. This is consistent with the divide noticed during system selection between performance in 2013 and 2015. High-performing systems in 2014, and particularly in 2013, do not appear to be equally performing in recent years.",
"cite_spans": [
{
"start": 536,
"end": 560,
"text": "(Rumelhart et al., 1985)",
"ref_id": "BIBREF7"
},
{
"start": 1240,
"end": 1259,
"text": "(Nakov et al., 2016",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1160,
"end": 1167,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "The Submitted System",
"sec_num": "5"
},
{
"text": "We presented the INESC-ID system for the SemEval 2016 task 4-A, built on top of the successful Non-Linear Subspace Embedding model. We found that training with a larger dataset required a more careful procedure to avoid overfitting. Reproducing the best results obtained in SemEval 2015 required shuffling the data before each training epoch and adapting the cost function to better reflect the evaluation metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "To address the problem of out-of-embedding words, we tried to introduce character-level embeddings in our model but found these to be detrimental. We obtained better results by learning embeddings for these words during the training. Even though the performance gains were not very pronounced, our system still attained very strong results across all the evaluation datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "After paper revision the model in https://github. com/ramon-astudillo/NLSE will be updated to reflect the new system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia (FCT), through contracts UID/CEC/50021/2013, EXCL/EEI-ESS/0257/2012 (DataStorm), grant number SFRH/BPD/68428/2010 and Ph.D. scholarship SFRH/BD/89020/2012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning word representations from scarce and noisy data with embedding subspaces",
"authors": [
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1074--1084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ram\u00f3n Astudillo, Silvio Amir, Wang Ling, Mario Silva, and Isabel Trancoso. 2015a. Learning word repre- sentations from scarce and noisy data with embedding subspaces. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1074-1084, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Inesc-id: Sentiment analysis without hand-coded features or liguistic resources using embedding subspaces",
"authors": [
{
"first": "Ramon",
"middle": [
"F"
],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramon F. Astudillo, Silvio Amir, Wang Ling, Bruno Martins, M\u00e1rio Silva, and Isabel Trancoso. 2015b. Inesc-id: Sentiment analysis without hand-coded fea- tures or liguistic resources using embedding sub- spaces. In Proceedings of the 9th International Work- shop on Semantic Evaluation, SemEval '2015, Denver, Colorado, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two/too simple adaptations of word2vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan Black, and Isabel Tran- coso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Con- ference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2016 task 4: Sentiment analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval 2016).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved partof-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part- of-speech tagging for online conversational text with word clusters. In In Proceedings of NAACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning internal representations by error propagation",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "David E Rumelhart",
"suffix": ""
},
{
"first": "Ronald J Williams ; Dtic",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Document",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. Technical report, DTIC Document.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unitn: Training deep convolutional neural network for twitter sentiment classification",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Unitn: Training deep convolutional neural network for twitter sentiment classification. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval '2015, Denver, Colorado, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>2013</td><td>2014</td><td>2015</td><td>2016</td></tr></table>",
"type_str": "table",
"text": "OOV 70.9% 37.9% 39.3% 65.1% OOEV 15.0% 11.2% 11.5% 22% OOV & OOEV 14.8% 11.0% 11.3% 21.8%",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td>System</td><td>2013 2014 2015</td></tr><tr><td colspan=\"2\">2015 hyperparameters 0.618 0.646 0.591 0.706 0.702 0.669 +lower neutral cost 0.723 0.721 0.649 +shuffle per epoch +update OOEVs 2 iter 0.725 0.729 0.657</td></tr><tr><td>Best SemEval 2015</td><td>0.722 0.727 0.652</td></tr></table>",
"type_str": "table",
"text": "Out Of Vocabulary (OOV) and Ouf Of Embedding Vocabulary (OOEV) statistics for the different SemEval Task4-B datasets. Embeddings reported are the Structured Skipgram embeddings used in the experiments.",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"text": "Effect of the improvements on the NLSE model.",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "Comparision of strategies to address the problem of OOEV",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>System</td><td>2013</td><td>2014</td><td>2015</td><td>2016</td><td>Avg</td></tr><tr><td>INESC-ID aueb</td><td>0.687 7 0.723 2 0.666 8</td><td>0.706 7 0.727 3 0.708 6</td><td>0.651 4 0.657 3 0.623 7</td><td>0.617 3 0.610 4 0.605 5</td><td>0.665 4 0.679 3 0.651 5</td></tr></table>",
"type_str": "table",
"text": "SwissCheese 0.700 5 0.716 5 0.671 1 0.633 1 0.680 2 SENSEI-LIF 0.706 4 0.744 2 0.662 2 0.630 2 0.686 1 unimelb",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"text": "Official test-set results for the top five systems in SemEval 2016 Task 4-B. Subscript number indicates position in general ranking.",
"html": null,
"num": null
}
}
}
}