{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:56.505223Z"
},
"title": "BERT implementation for detecting adverse drug effects mentions in Russian",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Gusev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow"
}
},
"email": "[email protected]"
},
{
"first": "Anna",
"middle": [],
"last": "Kuznetsova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow"
}
},
"email": "[email protected]"
},
{
"first": "Anna",
"middle": [],
"last": "Polyanskaya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow"
}
},
"email": "[email protected]"
},
{
"first": "Egor",
"middle": [],
"last": "Yatsishin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a system developed for the Social Media Mining for Health (SMM4H) 2020 shared task. Our team participated in the second subtask for Russian language creating a system to detect adverse drug reaction (ADR) presence in a text. For our submission, we exploited an ensemble model architecture, combining BERT's extension for Russian language, Logistic Regression and domain-specific preprocessing pipeline. Our system was ranked first among others, achieving F-score of 0.51. We have made our code publicly available 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a system developed for the Social Media Mining for Health (SMM4H) 2020 shared task. Our team participated in the second subtask for Russian language creating a system to detect adverse drug reaction (ADR) presence in a text. For our submission, we exploited an ensemble model architecture, combining BERT's extension for Russian language, Logistic Regression and domain-specific preprocessing pipeline. Our system was ranked first among others, achieving F-score of 0.51. We have made our code publicly available 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we focus on the problem of discovering the presence of adverse drug reaction (ADR) concepts in twitter posts as part of the The Social Media Mining for Health Applications (SMM4H) Shared Task (Klein et al., 2020) . The paper is based on the participation of our team in the Russian language segment of the second task: ADR presence classification. Organizers of SMM4H 2020 Task 2 provided datasets of Russian tweets with binary annotation indicating the presence or absence of ADRs in each post. The aim of the task was to develop a system to classify the tweets according to the presence of ADRs. Texts were given in a raw form, so they contained misspellings, slang, emojis, hashtags, usernames and were quite noisy. This year is the first time for a distinct set of Russian tweets to be included in the task. We tested and compared several different approaches for solving such type of classification task, including classical Machine Learning approaches and neural networks, and also different preprocessing pipelines.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Klein et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main dataset consists of the training set (7,612 tweets), validation set (1,522 tweets) and test set (1,903 tweets). The dataset is highly imbalanced with only 666 tweets labeled as mentioning ADR (hardly 9%). We used several techniques to overcome this problem, one of them being an attempt to create some additional data. We used RuDReC corpus 2 and manually labeled about 1,800 drug reviews (627 positive and 1195 negative).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "It would be wrong to assume that these drug reviews are completely identical to the tweets from the main set in terms of linguistic features, so we did a simple analysis, which gave us several insights. First of all, the main lexicon of this two types of texts is quite similar, with the only two substantial differences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "1. Reviews tend to mention the cost or, to be more specific, the expensiveness of a drug much more often than tweets do, leading to the higher distribution of words like expensive and overpriced;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "2. In general, reviewers' language is significantly more grammatically correct and uses less slang and word shortenings. The results of using this extended dataset are described in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "Our approach for text preprocessing is to some extent based on the one used in (Ellendorff et al., 2019) . The following changes were made using Python programming language:",
"cite_spans": [
{
"start": 79,
"end": 104,
"text": "(Ellendorff et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Tokenization using CrazyTokenizer from RedditScore package 3 based on spaCy tokenizer for Russian 4 ; \u2022 Lowercasing, except all-caps words such as \u0410\u0414 \"antidepressants\";",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 All \u0451 replaced with e; \u2022 Urls replaced with \u042e\u0420\u041b \"url\" placeholder;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Usernames replaced with \u042e\u0417\u0415\u0420\u041d\u0415\u0419\u041c \"username\" placeholder;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Hashtags replaced with \u0425\u0415\u0428\u0422\u0415\u0413 \"hashtag\" placeholder;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Numbers replaced with NUM placeholder;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Measures such as \u043a\u0433 \"kg\" and \u043c\u043b \"ml\" replaced with MEASURE placeholder;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Emojis replaced with POS EMOJI for ones with positive meanings, NEG EMOJI for ones with negative meanings and NEUTRAL EMOJI for ones related to health issues as they could be semantically important; \u2022 3 or more repetitive letters normalized to 1;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Line breaks deleted;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "\u2022 Stopwords deleted. We used NLTK stopword list for Russian extended with some slang words as \u043a\u0430\u0440\u043e\u0447 \"well\", \u0442\u0438\u043f\u0430 \"like\", \u043f\u0440\u043e\u0441\u0442 \"just\", etc.; Then we applied the following procedures, creating individual datasets for various combinations of them:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "1. Leaving or deleting punctuation as tokens; 2. Lemmatization using PyMorphy2 (Korobov, 2015) ; 3. Stemming using Snowball Stemmer for Russian (Porter, 2001 ) via NLTK (Bird et al., 2009) ; We chose PyMorphy2 over Mystem (Segalovich, 2003) for several reasons, first of them being the time, required to process the data on our OS (Windows). We also prefered pymorphy's lemmatization of unknown words (mostly, names of drugs and medical terms).",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(Korobov, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 144,
"end": 157,
"text": "(Porter, 2001",
"ref_id": "BIBREF9"
},
{
"start": 169,
"end": 188,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 222,
"end": 240,
"text": "(Segalovich, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
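{
"text": "To make the pipeline concrete, here is a minimal sketch of the placeholder-replacement step (an illustration written for this description, not the authors' published code; the function name and the exact regular expressions are assumptions):\n\nimport re\n\ndef preprocess(text):\n    # Replace URLs, usernames, hashtags and numbers with placeholders\n    text = re.sub(r\"https?://\\S+\", \"\u042e\u0420\u041b\", text)\n    text = re.sub(r\"@\\w+\", \"\u042e\u0417\u0415\u0420\u041d\u0415\u0419\u041c\", text)\n    text = re.sub(r\"#\\w+\", \"\u0425\u0415\u0428\u0422\u0415\u0413\", text)\n    text = re.sub(r\"\\d+\", \"NUM\", text)\n    # Normalize three or more repeated letters to one\n    text = re.sub(r\"(.)\\1{2,}\", r\"\\1\", text)\n    # Replace \u0451 with \u0435 and delete line breaks\n    return text.replace(\"\u0451\", \"\u0435\").replace(\"\\n\", \" \")\n\nLemmatization and stemming (procedures 2 and 3) can be sketched with PyMorphy2 and NLTK in the same spirit:\n\nimport pymorphy2\nfrom nltk.stem.snowball import SnowballStemmer\n\nmorph = pymorphy2.MorphAnalyzer()\nstemmer = SnowballStemmer(\"russian\")\n# tokens is an assumed list of tokens produced by the tokenizer\nlemmas = [morph.parse(tok)[0].normal_form for tok in tokens]\nstems = [stemmer.stem(tok) for tok in tokens]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},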
{
"text": "At first, classical Machine Learning models were trained to classify posts on original data without preprocessing. The data was represented as TF-IDF vectors. We decided to proceed with Support Vector Machine with simple linear kernel (LinearSVM), Logistic Regression model (LogReg) and Gradient Boosting Machine (GBM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning",
"sec_num": "3.1"
},
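{
"text": "A minimal sketch of this baseline (illustrative only; train_texts, train_labels, val_texts and val_labels are assumed variables standing in for the SMM4H splits):\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics import f1_score\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import LinearSVC\n\n# TF-IDF vectors feeding a linear-kernel SVM; LinearSVC can be swapped\n# for LogisticRegression or a gradient boosting classifier\nmodel = make_pipeline(TfidfVectorizer(), LinearSVC())\nmodel.fit(train_texts, train_labels)\nprint(f1_score(val_labels, model.predict(val_texts)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning",
"sec_num": "3.1"
},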
{
"text": "For this approach, we explored the implementation of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) : its extensions for Russian language -RuBert (Kuratov and Arkhipov, 2019) and Conversational RuBERT from DeepPavlov framework (Burtsev et al., 2018) . Ru-BERT had the following characteristics: cased, 12-layer, 768-hidden, 12-heads, 180M parameters, and was fine-tuned with initialization from multilingual BERT on the Russian part of Wikipedia and news data. Conversational RuBERT had the same characteristics and was in turn fine-tuned with RuBERT on OpenSubtitles (Lison and Tiedemann, 2016) . Due to the imbalance of classes, as mentioned above -9% positive to 91% negative, we used following models:",
"cite_spans": [
{
"start": 116,
"end": 137,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 184,
"end": 212,
"text": "(Kuratov and Arkhipov, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 265,
"end": 287,
"text": "(Burtsev et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 606,
"end": 633,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning",
"sec_num": "3.2"
},
{
"text": "Base model Additions For Undersampling approach we split negative class into N equal-sized folds, and combined each split with positive samples, giving that a share of negative samples is almost 0.5. Then we trained N models and stacked their probabilistic predictions for constructing an ensemble architecture. At first, we applied the majority voting method in order to get final answers. Beside that, Logistic Regression model was also used as the ensemble combiner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name",
"sec_num": null
},
{
"text": "RuBERT RuBERT - Conv Conversational RuBERT - ConvUnder Conversational RuBERT Undersampling ConvLogReg Conversational RuBERT Undersampling + LogReg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name",
"sec_num": null
},
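{
"text": "A minimal sketch of the undersampling-and-stacking scheme (illustrative; train_model is a hypothetical helper standing in for Conversational RuBERT fine-tuning, and in practice the Logistic Regression combiner would be fit on held-out predictions):\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef balanced_splits(neg_texts, pos_texts, n_splits=6):\n    # Split the negative class into N equal-sized folds; each fold plus\n    # all positive samples gives a roughly balanced training set\n    folds = np.array_split(np.array(neg_texts, dtype=object), n_splits)\n    return [(list(f) + list(pos_texts), [0] * len(f) + [1] * len(pos_texts))\n            for f in folds]\n\nmodels = [train_model(x, y) for x, y in balanced_splits(neg_texts, pos_texts)]\n# Stack the N probabilistic predictions column-wise\nprobs = np.column_stack([m.predict_proba(val_texts)[:, 1] for m in models])\n# Option 1: majority voting over thresholded predictions\nvotes = (probs > 0.5).sum(axis=1) > len(models) / 2\n# Option 2: Logistic Regression as the ensemble combiner\ncombiner = LogisticRegression().fit(probs, val_labels)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name",
"sec_num": null
},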
{
"text": "We compared classical ML models with basic BERT models on the original data without any preprocessing. The results reached by classical ML models, which can be found in Table 2 , were not competitive, thus we didn't proceed with this approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Validation F-score LinearSVM 0.28 LogReg 0.07 GBM 0.29 Table 2 : Classical ML models' results on original data BERT models showed an increase in F-score, with Conversational RuBERT being slightly ahead of RuBERT. After deciding to stick with the Conversational RuBERT model, we made cross validation in search of the optimal parameters. Further experiments were conducted with Conversational RuBERT model with batch size equal to 32, dropout probability for non-Bert layers 0.4 and learning rate 10 \u22125 . All results of testing can be seen in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": null
},
{
"start": 542,
"end": 549,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
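{
"text": "One way to reproduce this configuration (a sketch using the HuggingFace transformers library and its public DeepPavlov checkpoint, which is our assumption; the authors worked in the DeepPavlov framework itself):\n\nfrom transformers import (AutoConfig, AutoModelForSequenceClassification,\n                          AutoTokenizer, TrainingArguments)\n\nname = \"DeepPavlov/rubert-base-cased-conversational\"\ntokenizer = AutoTokenizer.from_pretrained(name)\n# Two output classes; 0.4 dropout on the classification head\nconfig = AutoConfig.from_pretrained(name, num_labels=2, classifier_dropout=0.4)\nmodel = AutoModelForSequenceClassification.from_pretrained(name, config=config)\nargs = TrainingArguments(output_dir=\"out\", per_device_train_batch_size=32,\n                         learning_rate=1e-5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},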
{
"text": "Considering using variants of dataset with additional texts, results were not promising, as there were no visible improvements and the F-score even decreased by 0.05. We believe that the reason for such behaviour lies in the dissimilarities of two types of texts being presumably more significant than we described in Subection 2.1. Implementation of the ensemble architecture proved to be successful and brought up an increase in F-score by 0.05 compared to non-ensemble models when using the ConvLo-gReg model. It should be noted here that we used a relatively small number of training epochs for models with undersampling, due to the high risk of overfitting. Further experiments were conducted in order to evaluate which pipeline of data preprocessing suits this task the most. Judging by the results shown in Table 4 , simple preprocessing without lemmatization, stemming or punctuation deletion works the best, while any other type of preprocessing leads to a decrease in F-score. We discuss the reasons for that in Subsection 5. Table 4 : F-scores of simple models trained on main data with different preprocessing applied both to training and validation sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 814,
"end": 821,
"text": "Table 4",
"ref_id": null
},
{
"start": 1036,
"end": 1043,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Given the results above, we settled on using ConvLogReg with 6 splits and small number of epochs. We also submitted one Conv model for comparison. Scores for the final models can be found in Table 5 : Metrics for three final submissions on validation and test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In this work, we have explored an application of Bidirectional Encoder Representations from Transformers (BERT) to the task of text classification in Russian. We have empirically evaluated different versions of tuned RuBERT and preprocessing pipelines against F-score for the \"positive\" class and experiments have shown that logistic regression trained on the result of a six Conversational RuBERT models ensemble trained on the undersampled data with light preprocessing and tokenized punctuation outperforms every other model, providing a new baseline for ADR presence classification in Russian with F-score 0.51, precision 0.45 and recall 0.60 on the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our research showed that stemming, lemmatization and punctuation removal when working with Russian language only decreases the final scores. Russian is different from English in such a way that the word order in Russian is free to some extent and syntactic relations are mostly encoded by morphology and punctuation. Knowing that BERT is capable of capturing hierarchy-sensitive and syntactic dependencies (Goldberg, 2019) , it becomes obvious, that when dependencies indicators are blurred or removed the results worsen. In addition, some punctuation has its own semantics. For example, \"(\" means \"sad\" and \"!!!!\" can mean high importance. We don't see any premises to remove such kind of data from the dataset. Another point of discussion is the presence of mistakes in the datasets. In the training data we have found some debatable annotations and some which are erroneous for sure. These mistakes could possibly affect the performance of our model.",
"cite_spans": [
{
"start": 406,
"end": 422,
"text": "(Goldberg, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.1"
},
{
"text": "One potential objective for future improvement is to implement rule-based approach to the task, as mixed systems are known to perform better (Ray and Chakrabarti, 2019) . We have already made some advancements on this path, but there is still a lot of research to perform. Also, we hope to continue enhancing the results by extending the dataset from Russian social networks and RuDReC corpus.",
"cite_spans": [
{
"start": 141,
"end": 168,
"text": "(Ray and Chakrabarti, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "5.2"
},
{
"text": "https://github.com/crazyfrogspb/RedditScore 4 https://github.com/aatimofeev/spacy_russian_tokenizer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. 01.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Vadim Polulyakh, Leonid Pugachev, Alexey Sorokin, Maria Vikhreva, and Marat Zaynutdinov",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Burtsev",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Seliverstov",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Airapetyan",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
},
{
"first": "Dilyara",
"middle": [],
"last": "Baymurzina",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bushkov",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Gureenkova",
"suffix": ""
},
{
"first": "Taras",
"middle": [],
"last": "Khakhulin",
"suffix": ""
},
{
"first": "Yurii",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Litinsky",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Burtsev, Alexander Seliverstov, Rafael Airapetyan, Mikhail Arkhipov, Dilyara Baymurzina, Nikolay Bushkov, Olga Gureenkova, Taras Khakhulin, Yurii Kuratov, Denis Kuznetsov, Alexey Litinsky, Varvara Lo- gacheva, Alexey Lymar, Valentin Malykh, Maxim Petrov, Vadim Polulyakh, Leonid Pugachev, Alexey Sorokin, Maria Vikhreva, and Marat Zaynutdinov. 2018. Deeppavlov: Open-source library for dialogue systems. 07.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Approaching SMM4H with merged models and multi-task learning",
"authors": [
{
"first": "Tilia",
"middle": [],
"last": "Ellendorff",
"suffix": ""
},
{
"first": "Lenz",
"middle": [],
"last": "Furrer",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Colic",
"suffix": ""
},
{
"first": "No\u00ebmi",
"middle": [],
"last": "Aepli",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Rinaldi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "58--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tilia Ellendorff, Lenz Furrer, Nicola Colic, No\u00ebmi Aepli, and Fabio Rinaldi. 2019. Approaching SMM4H with merged models and multi-task learning. In Proceedings of the Fourth Social Media Mining for Health Applica- tions (#SMM4H) Workshop & Shared Task, pages 58-61, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Assessing bert's syntactic abilities. CoRR",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing bert's syntactic abilities. CoRR, abs/1901.05287.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of the fifth social media mining for health applications (#smm4h) shared tasks at coling 2020",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Alimova",
"middle": [],
"last": "Ilseyar",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"O"
],
"last": "Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Alimova Ilseyar, Ivan Flores, Arjun Magge, Zulfat Miftahutdinov, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth social media mining for health applications (#smm4h) shared tasks at coling 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morphological analyzer and generator for russian and ukrainian languages",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Korobov",
"suffix": ""
}
],
"year": 2015,
"venue": "Analysis of Images",
"volume": "542",
"issue": "",
"pages": "320--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Korobov. 2015. Morphological analyzer and generator for russian and ukrainian languages. In Mikhail Yu. Khachay, Natalia Konstantinova, Alexander Panchenko, Dmitry I. Ignatov, and Valeri G. Labunets, editors, Analysis of Images, Social Networks and Texts, volume 542 of Communications in Computer and Infor- mation Science, pages 320-332. Springer International Publishing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptation of deep bidirectional multilingual transformers for russian language",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language. CoRR, abs/1905.07213.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Snowball: A language for stemming algorithms",
"authors": [
{
"first": "Martin",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F. Porter. 2001. Snowball: A language for stemming algorithms. Published online, October. Accessed 11.03.2008, 15.00h.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A mixed approach of deep learning method and rule-based method to improve aspect level sentiment analysis",
"authors": [
{
"first": "Paramita",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Amlan",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
}
],
"year": 2019,
"venue": "Applied Computing and Informatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paramita Ray and Amlan Chakrabarti. 2019. A mixed approach of deep learning method and rule-based method to improve aspect level sentiment analysis. Applied Computing and Informatics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A fast morphological algorithm with unknown word guessing induced by a dictionary for a web search engine",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Segalovich",
"suffix": ""
}
],
"year": 2003,
"venue": "MLMTA",
"volume": "",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Segalovich. 2003. A fast morphological algorithm with unknown word guessing induced by a dictionary for a web search engine. In Hamid R. Arabnia and Elena B. Kozerenko, editors, MLMTA, pages 273-280. CSREA Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The russian drug reaction corpus and neural models for drug reactions and effectiveness detection in user reviews",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Sakhovskiy",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Nikolenko",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Tutubalina, Ilseyar Alimova, Zulfat Miftahutdinov, Andrey Sakhovskiy, Valentin Malykh, and Sergey Nikolenko. 2020. The russian drug reaction corpus and neural models for drug reactions and effectiveness detection in user reviews. Bioinformatics, 07. btaa675.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "name",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Model architectures.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"text": "",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>model</td><td/><td/><td>scores Validation</td><td>Test</td></tr><tr><td>name</td><td colspan=\"3\">n epochs precision recall</td><td>F-score</td></tr><tr><td>Conv</td><td>6</td><td>0.39</td><td colspan=\"2\">0.52 0.45 0.48</td></tr><tr><td>ConvLogReg</td><td>2 4</td><td>0.51 0.45</td><td colspan=\"2\">0.45 0.51 0.51 0.58 0.48 0.50</td></tr></table>",
"type_str": "table"
}
}
}
}