{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:21.210454Z"
},
"title": "KU NLP@LT-EDI-EACL2021: A Multilingual Hope Speech Detection for Equality, Diversity, and Inclusion using Context Aware Embeddings",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Hope speech detection is a new task for finding and highlighting positive comments or supporting content from user-generated social media comments. For this task, we have used a Shared Task multilingual dataset on Hope Speech Detection for Equality, Diversity, and Inclusion (HopeEDI) for three languages English, code-switched Tamil and Malayalam. In this paper, we present deep learning techniques using context-aware string embeddings for word representations and Recurrent Neural Network (RNN) and pooled document embeddings for text representation. We have evaluated and compared the three models for each language with different approaches. Our proposed methodology works fine and achieved higher performance than baselines. The highest weighted average F-scores of 0.93, 0.58, and 0.84 are obtained on the task organisers' final evaluation test set. The proposed models are outperforming the baselines by 3%, 2% and 11% in absolute terms for English, Tamil and Malayalam respectively.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Hope speech detection is a new task for finding and highlighting positive comments or supporting content from user-generated social media comments. For this task, we have used a Shared Task multilingual dataset on Hope Speech Detection for Equality, Diversity, and Inclusion (HopeEDI) for three languages English, code-switched Tamil and Malayalam. In this paper, we present deep learning techniques using context-aware string embeddings for word representations and Recurrent Neural Network (RNN) and pooled document embeddings for text representation. We have evaluated and compared the three models for each language with different approaches. Our proposed methodology works fine and achieved higher performance than baselines. The highest weighted average F-scores of 0.93, 0.58, and 0.84 are obtained on the task organisers' final evaluation test set. The proposed models are outperforming the baselines by 3%, 2% and 11% in absolute terms for English, Tamil and Malayalam respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, social media became an integral part of human life and people started spending more time on these platforms. But people are mindful of social media behaviour and putting less personal information in the public domain Mahesan, 2019, 2020a,b) . Now social media behaviors changed quite dramatically and we are living not just in a pandemic, but also in an \"infodemic\", where fake news is becoming more common (Lima et al., 2020) . Conversations on the internet are often a reflection of the conversations that one makes offline.",
"cite_spans": [
{
"start": 234,
"end": 257,
"text": "Mahesan, 2019, 2020a,b)",
"ref_id": null
},
{
"start": 424,
"end": 443,
"text": "(Lima et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several AI techniques are adopted to analyse the online comments in social media, which are intensified on the detection of negative com-ments such as hate speech detection, offensive language identification and abusive language detection (Chakravarthi, 2020a) . Hate speech is widely used in media, internet and public discourse which seen or read in the media expressing disapproval, hatred, or aggression towards minorities, could lead to violence and form negative impact on minorities. Hope speech detection is a new task related to Hate speech detection for finding and highlighting positive comments or supporting content, rather than just filtering hostile content . For this task, YouTube comments/posts that offer support, reassurance, suggestions, inspiration and insight are recognized as hope speech. In bilingual and multilingual communities linguistic code-switching occurs in social groups (Chakravarthi et al., 2018 (Chakravarthi et al., , 2019 Chakravarthi, 2020b) . It is incredibly important in many social groups, When an individual uses a group's dialect or accent, the audience is more receptive to the content (Jose et al., 2020; Priyadharshini et al., 2020) . Recently, many researchers are focused on high resource languages using monolingual corpora but less attention is given to code-switching languages especially under resourced languages like Indian languages Mandl et al., 2020) . In our work, we used Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) not only in English but also code-switched Tamil and Malayalam (Chakravarthi, 2020a).",
"cite_spans": [
{
"start": 239,
"end": 260,
"text": "(Chakravarthi, 2020a)",
"ref_id": "BIBREF4"
},
{
"start": 906,
"end": 932,
"text": "(Chakravarthi et al., 2018",
"ref_id": "BIBREF6"
},
{
"start": 933,
"end": 961,
"text": "(Chakravarthi et al., , 2019",
"ref_id": "BIBREF7"
},
{
"start": 962,
"end": 982,
"text": "Chakravarthi, 2020b)",
"ref_id": "BIBREF5"
},
{
"start": 1134,
"end": 1153,
"text": "(Jose et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 1154,
"end": 1182,
"text": "Priyadharshini et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 1392,
"end": 1411,
"text": "Mandl et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The authors (Puranik et al., 2021; Ghanghor et al., 2021) introduced a novel task for detecting hostility diffusing content from comments in social media, dubbed hope-speech detection. The authors analysed and studied the importance of automatic identification of user-generated hope speech web content that diffuse tension and violence among people in an international crisis. Finally, the obtained results are very promising and automatic recognition of hope speech may also find applications in many other contexts. But they restricted the definition of hope into diffuse tension and violence not considering the other perspectives of hope. Chakravarthi (2020a) constructed a multilingual Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from YouTube for English and two lowresource languages, Malayalam and Tamil. The authors considered much more perspectives support, reassurance, suggestions, inspiration and insight of the hope and EDI. To facilitate future research on encouraging positivity, the authors make this dataset publicly available and created several baselines to benchmark the proposed dataset.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Puranik et al., 2021;",
"ref_id": "BIBREF20"
},
{
"start": 35,
"end": 57,
"text": "Ghanghor et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section describes the dataset used for our experiments and technical description of the proposed methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Materials and Methods",
"sec_num": "3"
},
{
"text": "For our work, we used the Shared Task dataset on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI 2021-EACL 2021 (Chakravarthi and Muralidaran, 2021). The dataset contains YouTube comments from English, codeswitched Tamil and Malayalam. This is considered a multilingual resource to allow cross-lingual studies and approaches. The corpus consists of a total of 59,354 comments from YouTube videos, where 28,451 comments are in English, 20,198 comments are in Tamil, and the remaining 10,705 comments are in Malayalam. The dataset was manually annotated with three different labels: Hope Speech, Not-Hope Speech and Other languages, where Other languages refer to comments that were not in the intended language. The Figure 1 shows, distribution of three classes in shared task dataset. For our work, we have used the training, validation and test set of the shared task as depicted in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 738,
"end": 746,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 907,
"end": 914,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Hope speech detection is a form of text classification, which classify sentences or documents into specified categories. Most current state of art approaches to text classification rely on a technique called text embedding. The embeddings of words in a sentence is used to make a vector representation of the sentence. The sentence embeddings can be achieved in many ways. It could be done by convolutional neural networks (CNN) (Kim, 2014) , by averaging word vectors (Iyyer et al., 2015) or by using Recurrent Neural Networks(RNNs) (Dai and Le, 2015).",
"cite_spans": [
{
"start": 429,
"end": 440,
"text": "(Kim, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 469,
"end": 489,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "In our proposed method we have used Deep contextual and fixed (non-contextual) embeddings to derive word representations. Then, puts them into an RNN or does a pooling operation on overall word embeddings to obtain a text representation. Finally, a softmax layer accepts the text representations to get the actual class label (Akbik et al., 2019b) . We have implemented three models for each language using contextualized string embeddings (flair) in a FLAIR framework (Akbik et al., 2019a) .",
"cite_spans": [
{
"start": 326,
"end": 347,
"text": "(Akbik et al., 2019b)",
"ref_id": "BIBREF1"
},
{
"start": 469,
"end": 490,
"text": "(Akbik et al., 2019a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "Contextualized String Embedding (Flair) : (Akbik et al., 2018) proposed a novel word embedding from internal states of a trained character language model and termed as contextual string embeddings. The two primary factors powering contextual string embeddings (Flair) are, words are trained as characters and embeddings are contextualised by their surrounding text. Hence, it is treated as character language model(charLM) and it perhaps the biggest benefit of using over a wordbased language model when a word has not been seen in the training data. The Flair embeddings are obtained by training two LSTM -based language models, forward and backward. The final word embedding is the concatenation of two specific hidden states from two language models. Let, t 1 , t 2 ...., t n be the character indices. Furthermore, let h f t , h b t be the hidden states at character position t for forward and backward LM respectively. The Flair embedding w k for the word w k is defined as given in Equation 1.",
"cite_spans": [
{
"start": 42,
"end": 62,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w k = h f t k+1 \u22121 ; h b t k \u22121",
"eq_num": "(1)"
}
],
"section": "Methods",
"sec_num": "3.2"
},
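Equation 1 can be illustrated with a minimal sketch: the embedding of word k concatenates the forward LM state just before the next word starts with the backward LM state just before the word's own first character. The helper below is hypothetical and uses toy vectors in place of trained character-LM hidden states.

```python
# Sketch of Equation 1: a word's Flair embedding concatenates two
# character-LM hidden states. Toy states stand in for a trained LM.

def flair_word_embedding(h_fwd, h_bwd, t):
    """h_fwd, h_bwd: per-character hidden states (lists of vectors);
    t: 1-indexed start positions t_1..t_n of each word, plus a
    sentinel t_{n+1} so the last word is handled uniformly.
    Returns one concatenated embedding per word (Equation 1)."""
    words = []
    n = len(t) - 1  # number of words; t[n] is the sentinel
    for k in range(n):
        fwd = h_fwd[t[k + 1] - 1]  # forward state after the word's last char
        bwd = h_bwd[t[k] - 1]      # backward state before its first char
        words.append(fwd + bwd)    # concatenation of the two states
    return words

# "hi you": chars h,i,_,y,o,u -> word starts t = [1, 4], sentinel 7;
# pad index 0 with a dummy state so 1-indexed positions line up.
H = [[0.0, 0.0]] + [[float(i), float(-i)] for i in range(1, 7)]
emb = flair_word_embedding(H, H, [1, 4, 7])
print(len(emb), len(emb[0]))  # 2 words, each 2+2 = 4 dimensions
```

Because the states are computed character by character, the same construction yields an embedding even for words never seen during training, which is the charLM advantage noted above.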
{
"text": "Stacked Embedding : In this method we combine different embeddings, such as word2vec, Glove, FastText along with embeddings generated from Flair language models (Akbik et al., 2019a) . ",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Akbik et al., 2019a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
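Stacking amounts to per-word vector concatenation, so the dimensionalities of the component embeddings add up. A minimal sketch, with toy lookup tables standing in for word2vec, GloVe and Flair outputs:

```python
# Stacked embedding: concatenate the vectors that several embedding
# models assign to the same word. Toy dicts stand in for real models.

def stack(embedding_maps, word):
    """Concatenate the vectors each embedding assigns to `word`."""
    out = []
    for emb in embedding_maps:
        out.extend(emb[word])
    return out

word2vec = {"hope": [0.1, 0.2]}        # toy 2-dim static embedding
glove    = {"hope": [0.3]}             # toy 1-dim static embedding
flair    = {"hope": [0.4, 0.5, 0.6]}   # toy 3-dim contextual embedding

v = stack([word2vec, glove, flair], "hope")
print(v)  # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6] -- dimensions add: 2+1+3 = 6
```

In the real system the Flair component varies with context while the static components do not; combining the two is what the stacked configurations in Section 4 exploit.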
{
"text": "We have utilized the FLAIR framework (Akbik et al., 2019a) for all our experiments with GPU (12 GB) provided by Google Colab. We have trained three models for each language. The first model employed a pretrained flair LM for word representation and RNN for text representation (FLAIR+RNN). The second model combines pretrained flair LM and pretrained word embeddings (PWE) for word representaion and RNN is used for text representaion (FLAIR+PWE+RNN). The third model apply the same word representation and pooled document embedding for text representation (FLAIR+PWE+Pooled). The Gated Recurrent Unit (GRU) is used for document RNN embedding and max pool operation is used for pooled document embedding. The Table 2 shows the hyperparameters settings for all our experiments. For all three languages, We have adopted pretrianed flair embeddings from FLAIR framework (enforward, en-backward, ml-forward, ml-backward, ta-forward and ta-backward) (Akbik et al., 2019a (Kakwani et al., 2020) .",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Akbik et al., 2019a)",
"ref_id": "BIBREF0"
},
{
"start": 945,
"end": 965,
"text": "(Akbik et al., 2019a",
"ref_id": "BIBREF0"
},
{
"start": 966,
"end": 988,
"text": "(Kakwani et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 709,
"end": 716,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
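The max-pooled document embedding used in the third model reduces a variable-length sequence of word vectors to a single fixed-size vector: each output dimension is the maximum of that dimension across all words. A minimal sketch with toy vectors:

```python
# Max-pool document embedding: element-wise maximum over word vectors,
# giving one fixed-size vector regardless of sentence length.

def max_pool(word_vectors):
    """Element-wise max over word vectors -> one document vector."""
    dims = len(word_vectors[0])
    return [max(v[d] for v in word_vectors) for d in range(dims)]

doc = [[0.2, -1.0, 3.0],   # toy embeddings for a 3-word comment
       [0.9,  0.5, -2.0],
       [-0.3, 2.0, 0.0]]
print(max_pool(doc))  # [0.9, 2.0, 3.0] -- length fixed at 3 dims
```

The GRU-based document embedding instead processes the word vectors sequentially and uses the final hidden state; max pooling is order-insensitive but often competitive, as the results for English and Malayalam suggest.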
{
"text": "The performance of a text classification model is usually noted as F1-score (harmonic mean of precision and recall) since accuracy can often be misleading in an imbalanced class distribution (Akosa, 2017) . For this task, the dataset having imbalanced class distribution as in Figure 1 . Due to the imbalance problem, we measured our system performance in terms of weighted averaged Precision, weighted averaged Recall and weighted averaged F-Score across all the three classes. Weighted averaged calculations use average of the supportweighted mean per label. ",
"cite_spans": [
{
"start": 191,
"end": 204,
"text": "(Akosa, 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 277,
"end": 285,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
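The support-weighted averaging described above can be sketched as follows: per-label F1 scores are combined using each label's support (its number of true instances) as the weight, mirroring the behaviour of scikit-learn's average='weighted' option. The label names here are illustrative.

```python
# Weighted-average F1: per-label F1 weighted by label support,
# so majority classes dominate the score under class imbalance.
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-label F1 scores."""
    support = Counter(y_true)
    total = 0.0
    for label, n_true in support.items():
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        n_pred = sum(p == label for p in y_pred)
        prec = tp / n_pred if n_pred else 0.0
        rec = tp / n_true
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += n_true * f1  # weight per-label F1 by its support
    return total / len(y_true)

y_true = ["hope", "hope", "not-hope", "not-hope", "not-hope", "other"]
y_pred = ["hope", "not-hope", "not-hope", "not-hope", "not-hope", "other"]
print(round(weighted_f1(y_true, y_pred), 3))
```

Note how one misclassified minority-class instance moves the weighted score only modestly, which is why per-class figures (as in Figures 2 and 3) are still needed alongside it.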
{
"text": "This section presents the results and analysis of our experiments, which we have explained in the previous sections. As well as, we present the baseline results obtained by the authors (Chakravarthi, 2020a), to compare the performance with our proposed models. The final evaluation results obtained for participants' submissions by task organisers are also presented in overview paper (Chakravarthi and Muralidaran, 2021). The best-performing classifiers on HopeEDI datasets are considered as baseline models for this task (See Table 3 ). The Table 4 depict the precision, recall and F-score results of trained models for three languages described in section 4.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 543,
"end": 551,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "As shown, all our proposed models for Hope EDI detection task in three languages work well and show significant improvement than the bestperformed baseline models. The results indicate that a Hope EDI detection classifier with good precision and recall can be constructed using deep learning approaches. It may be due to the representational power of pre-trained word embeddings or language models to capture semantic and lexical structure. It has virtually replaced the feature engineering part of supervised machine learning classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "For English and Malayalam, our model with stacked word embeddings and pooled document embedding achieved the highest performance on the HopeEDI dataset with a weighted average F-Score of 0.93 and 0.84 respectively. But in Tamil, stacked PWE with RNN document embedding performed better than other models with weighted average F-Score of 0.58. Furthermore, it can be seen that all Malayalam language models (0.79,0.83,0.84) achieved greater improvement than baseline models(0.73). Other two languages exhibit slight improvement in proposed models than baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "In most cases, it can be observed from the Table 4 that the flair embedding with PWE gives higher performance than the standalone embeddings. This indicates that combining contextual embedding with non-contextual embeddings achieves noticeably better outcomes. This finding is in line with the findings of (Akbik et al., 2018) . The Figures 2 and 3 represents the class level precision, recall and f-scores of best-performed baseline models and best performed proposed models for all three languages. It can be noticed from these figures that the class \"not-english\" have no significant performance in both models due to the imbalance in shared task dataset (Chakravarthi, 2020a).",
"cite_spans": [
{
"start": 306,
"end": 326,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 333,
"end": 348,
"text": "Figures 2 and 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "Also, we can observe that the \"Non-hopespeech\" class for English shows similar perfor- mance in both models and \"Hope-speech\" class hold a remarkable improvement over baseline models. In Tamil and Malayalam the majority class \"Non-hope-speech\" possesses approximately similar F-scores for baseline and proposed the best model. Another notable and interesting observation from Figure 2 and 3 is that the two minority classes in Malayalam dataset, \"Hope-speech\" and \"not-malayalam\" shows comparable performance approximately 30% above F-score than the baseline model. Also, \"not-Tamil\" class shows approximately 25% above F-score than the baseline model. These higher scores indicate that deep learning techniques can classify the correct class regardless of label distribution better than traditional machine learning technique. This may because our deep learning techniques rely on pre-trained embeddings and language model, which is a contextualized word representation allowing a word to be associated with multiple word vectors, whereas the classical techniques rely on merely manually selected features.",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 384,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "We have presented a deep learning technique with contextual aware embeddings for Hope speech de-tection task in three languages: English, Malayalam and Tamil (code-switched). We have used contextual string embedding (flair) and pre-trained word embeddings (PWE) for word representations and Recurrent Neural Network (RNN) and pooled document embedding with max pool operation for text representations. All the three models were evaluated for each language and compared the baseline models using the dataset given by Hope Speech Detection for Equality, Diversity, and Inclusion (HopeEDI) Shared Task organizers. All model performances are measured using weighted average F-score due to the imbalanced class distribution in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have obtained the highest F-scores 0.93, 0.58 and 0.84 for three languages English, Tamil and Malayalam respectively, which significantly improved performance over baselines (0.90, 0.56, and 0.73). For the minority class 'not-language', the proposed best model improved 35% and 25% performance than baselines for Malayalam and Tamil, respectively. Based on these observations, we conclude that the deep learning models with contextual string embeddings are well suited for HopeEDI detection task with an imbalanced dataset. We could also achieve good performance results with moder-ate resources (one GPU and a small corpus), even without optimizing hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The performance can be improved further by fine-tuning hyperparameters and pre-trained contextual embeddings, incorporating different attention mechanisms and increasing the size of the training dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "(a) Malayalam (b) Tamil",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "FLAIR: An easy-to-use framework for state-of-theart NLP",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Rasul",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schweter",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4010"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019a. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pooled contextualized embeddings for named entity recognition",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "724--728",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1078"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019b. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 724-728, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Predictive accuracy : A misleading performance measure for highly imbalanced data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Akosa",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Akosa. 2017. Predictive accuracy : A misleading performance measure for highly imbalanced data.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion",
"authors": [
{
"first": "Chakravarthi",
"middle": [],
"last": "Bharathi Raja",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
"volume": "",
"issue": "",
"pages": "41--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi. 2020a. HopeEDI: A mul- tilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Me- dia, pages 41-53, Barcelona, Spain (Online). Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Leveraging orthographic information to improve machine translation of under-resourced languages",
"authors": [
{
"first": "Chakravarthi",
"middle": [],
"last": "Bharathi Raja",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi. 2020b. Leveraging ortho- graphic information to improve machine translation of under-resourced languages. Ph.D. thesis, NUI Galway.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving wordnets for underresourced languages using machine translation",
"authors": [
{
"first": "Mihael",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Arcan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Mihael Arcan, and John P. McCrae. 2018. Improving wordnets for under- resourced languages using machine translation. In Proceedings of the 9th Global Wordnet Conference, pages 77-86, Nanyang Technological University (NTU), Singapore. Global Wordnet Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet gloss translation for underresourced languages using multilingual neural machine translation",
"authors": [
{
"first": "Mihael",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Arcan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Multilingualism at the Intersection of Knowledge Bases and Machine Translation",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Mihael Arcan, and John P. McCrae. 2019. WordNet gloss translation for under- resourced languages using multilingual neural ma- chine translation. In Proceedings of the Second Workshop on Multilingualism at the Intersection of Knowledge Bases and Machine Translation, pages 1-7, Dublin, Ireland. European Association for Ma- chine Translation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of the track on HASOC-Offensive Language Identification-DravidianCodeMix",
"authors": [
{
"first": "Anand",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Philip Mccrae",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Premjith",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Soman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
}
],
"year": 2020,
"venue": "Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, M Anand Kumar, John Philip McCrae, Premjith B, Soman KP, and Thomas Mandl. 2020a. Overview of the track on HASOC-Offensive Language Identification- DravidianCodeMix. In Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2020). CEUR Workshop Proceedings. In: CEUR-WS. org, Hyderabad, India.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion",
"authors": [
{
"first": "Vigneshwaran",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muralidaran",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Mural- idaran. 2021. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclu- sion. In Proceedings of the First Workshop on Lan- guage Technology for Equality, Diversity and Inclu- sion. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the Track on Sentiment Analysis for Dravidian Languages in Code-Mixed Text",
"authors": [
{
"first": "Ruba",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Vigneshwaran",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Muralidaran",
"suffix": ""
},
{
"first": "Navya",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Sherly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2020,
"venue": "In Forum for Information Retrieval Evaluation",
"volume": "2020",
"issue": "",
"pages": "21--24",
"other_ids": {
"DOI": [
"10.1145/3441501.3441515"
]
},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Vigneshwaran Muralidaran, Shardul Suryawanshi, Navya Jose, Elizabeth Sherly, and John P. McCrae. 2020b. Overview of the Track on Sentiment Analy- sis for Dravidian Languages in Code-Mixed Text. In Forum for Information Retrieval Evaluation, FIRE 2020, page 21-24, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semisupervised sequence learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "IIITK@LT-EDI-EACL2021: Hope Speech Detection for Equality, Diversity, and Inclusion in Tamil, Malayalam and English",
"authors": [
{
"first": "Nikhil Kumar",
"middle": [],
"last": "Ghanghor",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Ponnusamy",
"suffix": ""
},
{
"first": "Prasanna Kumar",
"middle": [],
"last": "Kumaresan",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Kumar Ghanghor, Rahul Ponnusamy, Prasanna Kumar Kumaresan, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITK@LT-EDI-EACL2021: Hope Speech Detection for Equality, Diversity, and Inclusion in Tamil, Malayalam and English. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, Online.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1681--1691",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Survey of Current Datasets for Code-Switching Research",
"authors": [
{
"first": "Navya",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Sherly",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)",
"volume": "",
"issue": "",
"pages": "136--141",
"other_ids": {
"DOI": [
"10.1109/ICACCS48705.2020.9074205"
]
},
"num": null,
"urls": [],
"raw_text": "Navya Jose, Bharathi Raja Chakravarthi, Shardul Suryawanshi, Elizabeth Sherly, and John P. McCrae. 2020. A Survey of Current Datasets for Code-Switching Research. In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pages 136-141.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages",
"authors": [
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Golla",
"suffix": ""
},
{
"first": "Gokul",
"middle": [],
"last": "N.C.",
"suffix": ""
},
{
"first": "Avik",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Pratyush",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4948--4961",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.445"
]
},
"num": null,
"urls": [],
"raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Social media: friend or foe in the COVID-19 pandemic?",
"authors": [
{
"first": "Diego",
"middle": [
"Laurentino"
],
"last": "Lima",
"suffix": ""
},
{
"first": "Maria Antonieta Albanez",
"middle": [
"A"
],
"last": "de Medeiros Lopes",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"Maria"
],
"last": "Brito",
"suffix": ""
}
],
"year": 2020,
"venue": "Clinics",
"volume": "75",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Laurentino Lima, Maria Antonieta Albanez A. de Medeiros Lopes, and Ana Maria Brito. 2020. Social media: friend or foe in the COVID-19 pandemic? Clinics, 75.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Sandip",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Kumar M",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2020,
"venue": "Forum for Information Retrieval Evaluation",
"volume": "2020",
"issue": "",
"pages": "29--32",
"other_ids": {
"DOI": [
"10.1145/3441501.3441517"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding",
"authors": [
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Vegupatti",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)",
"volume": "",
"issue": "",
"pages": "68--72",
"other_ids": {
"DOI": [
"10.1109/ICACCS48705.2020.9074379"
]
},
"num": null,
"urls": [],
"raw_text": "Ruba Priyadharshini, Bharathi Raja Chakravarthi, Mani Vegupatti, and John P. McCrae. 2020. Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding. In 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pages 68-72.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always hope in Transformers",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Puranik",
"suffix": ""
},
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Puranik, Adeep Hande, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always hope in Transformers. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation",
"authors": [
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Sinnathamby",
"middle": [],
"last": "Mahesan",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 14th Conference on Industrial and Information Systems (ICIIS)",
"volume": "",
"issue": "",
"pages": "320--325",
"other_ids": {
"DOI": [
"10.1109/ICIIS47346.2019.9063341"
]
},
"num": null,
"urls": [],
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2019. Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 320-325.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts",
"authors": [
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Sinnathamby",
"middle": [],
"last": "Mahesan",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 Moratuwa Engineering Research Conference (MERCon)",
"volume": "",
"issue": "",
"pages": "272--276",
"other_ids": {
"DOI": [
"10.1109/MERCon50084.2020.9185369"
]
},
"num": null,
"urls": [],
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020a. Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts. In 2020 Moratuwa Engineering Research Conference (MERCon), pages 272-276.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word embedding-based Part of Speech tagging in Tamil texts",
"authors": [
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Sinnathamby",
"middle": [],
"last": "Mahesan",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)",
"volume": "",
"issue": "",
"pages": "478--482",
"other_ids": {
"DOI": [
"10.1109/ICIIS51140.2020.9342640"
]
},
"num": null,
"urls": [],
"raw_text": "Sajeetha Thavareesan and Sinnathamby Mahesan. 2020b. Word embedding-based Part of Speech tagging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pages 478-482.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Distribution of three classes over the Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-EACL 2021 Shared Task dataset for English, Malayalam and Tamil languages. Document Pool Embeddings: a simple document embedding that applies a pooling operation (mean, max or min) over the list of token embeddings in a document. The default operation is mean, which gives the mean of all word embeddings in the sentence (Akbik et al., 2019a).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Class level precision, recall and F-scores of baseline models (Chakravarthi, 2020a) for three languages. Class level precision, recall and F-scores of the best-performing models from our experiments for three languages.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"text": "Train-Development-Test Split",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>: Hyper Parameters</td></tr><tr><td>Malayalam and Tamil languages used the PWE from IndicFT, FastText-based word embeddings (11 languages), which are 300-dimensional word embeddings for each language trained on IndicCorp, a recently published large monolingual sentence-level corpus for 11 Indian languages</td></tr></table>",
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "Baseline results from (Chakravarthi, 2020a).",
"html": null,
"content": "<table><tr><td colspan=\"2\">Language Model</td><td/><td>Weighted avg</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>FLAIR+RNN</td><td colspan=\"2\">0.91 0.92 0.91</td></tr><tr><td>English</td><td>FLAIR+PWE+RNN</td><td colspan=\"2\">0.92 0.93 0.91</td></tr><tr><td/><td colspan=\"3\">FLAIR+PWE+Pooled 0.92 0.93 0.93</td></tr><tr><td/><td>FLAIR+RNN</td><td colspan=\"2\">0.58 0.58 0.56</td></tr><tr><td>Tamil</td><td>FLAIR+PWE+RNN</td><td colspan=\"2\">0.60 0.59 0.58</td></tr><tr><td/><td colspan=\"3\">FLAIR+PWE+Pooled 0.62 0.60 0.56</td></tr><tr><td/><td>FLAIR+RNN</td><td colspan=\"2\">0.78 0.82 0.79</td></tr><tr><td colspan=\"2\">Malayalam FLAIR+PWE+RNN</td><td colspan=\"2\">0.82 0.86 0.83</td></tr><tr><td/><td colspan=\"3\">FLAIR+PWE+Pooled 0.84 0.85 0.84</td></tr></table>",
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "Our models results from proposed methodology.",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}