{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:44:02.196179Z"
},
"title": "Scalable Cross-lingual Treebank Synthesis for Improved Production Dependency Parsers",
"authors": [
{
"first": "Yousef",
"middle": [],
"last": "El-Kurdi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sarioglu",
"middle": [],
"last": "Kayi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hans",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present scalable Universal Dependency (UD) treebank synthesis techniques that exploit advances in language representation modeling which leverage vast amounts of unlabeled generalpurpose multilingual text. We introduce a data augmentation technique that uses synthetic treebanks to improve production-grade parsers. The synthetic treebanks are generated using a state-of-the-art biaffine parser adapted with pretrained Transformer models, such as Multilingual BERT (M-BERT). The new parser improves LAS by up to two points on seven languages. The production models' LAS performance improves as the augmented treebanks scale in size, surpassing performance of production models trained on originally annotated UD treebanks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present scalable Universal Dependency (UD) treebank synthesis techniques that exploit advances in language representation modeling which leverage vast amounts of unlabeled generalpurpose multilingual text. We introduce a data augmentation technique that uses synthetic treebanks to improve production-grade parsers. The synthetic treebanks are generated using a state-of-the-art biaffine parser adapted with pretrained Transformer models, such as Multilingual BERT (M-BERT). The new parser improves LAS by up to two points on seven languages. The production models' LAS performance improves as the augmented treebanks scale in size, surpassing performance of production models trained on originally annotated UD treebanks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsers are important components in many NLP systems, such as language understanding, semantic role labeling and relation extraction (Marcheggiani and Titov, 2017; . Universal Dependencies (UD) (Nivre et al., 2020; Zeman et al., 2018) are becoming a widely accepted standard among many NLP practitioners for definition of syntactic structures and treebanks. However, production parsers require custom tokenization policies and Part of Speech (PoS) tagging, mostly dictated by supported downstream applications. In addition, parsers in production environments require fine balancing of demands for model accuracy, service performance, response time and constraints on hardware resources, making the design of an industrial-grade parser a challenge. Hereby, we introduce data augmentation techniques to improve production parsers without violating their architectural constraints.",
"cite_spans": [
{
"start": 144,
"end": 174,
"text": "(Marcheggiani and Titov, 2017;",
"ref_id": "BIBREF9"
},
{
"start": 205,
"end": 225,
"text": "(Nivre et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 226,
"end": 245,
"text": "Zeman et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since their early inception, advances in language representation modeling lead to major improvements in many NLP tasks (Wang et al., 2018; Moon et al., 2019) . Representations trained on various language modeling objectives, ranging from context free embeddings (Pennington et al., 2014; Mikolov et al., 2013) , to deep context aware representations (Peters et al., 2018; Le and Mikolov, 2014; Devlin et al., 2018) , were trained on massive amounts of unlabeled multilingual text, greatly enabling transfer learning opportunities for NLP tasks. Particularly, models such as BERT (Devlin et al., 2018) , ALBERT (Lan et al., 2020) , RoBERTa (Liu et al., 2019) and XLM (Lample and Conneau, 2019 ) employ a masked language modeling objective (Taylor, 1953 ) on a bidirectional self-attention encoder (Vaswani et al., 2017) enabling such models to utilize both left and right context for each word representation. Pretrained multilingual BERT (M-BERT) was used for dependency parsing in (Kondratyuk and Straka, 2019) aiming to create a single multilingual model. This work, in contrast, shows that parsing performance for a particular language can considerably be improved when adapting the biaffine-attention parser with a selected set of pretrained Transformer models while training on multilingual subsets of selected language family treebanks. We then use this novel parser to project synthetic treebanks, which are used in a teacher-student technique to improve the accuracy of a fast production parser.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 139,
"end": 157,
"text": "Moon et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 262,
"end": 287,
"text": "(Pennington et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 288,
"end": 309,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 350,
"end": 371,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 372,
"end": 393,
"text": "Le and Mikolov, 2014;",
"ref_id": "BIBREF7"
},
{
"start": 394,
"end": 414,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 579,
"end": 600,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 610,
"end": 628,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 639,
"end": 657,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 666,
"end": 691,
"text": "(Lample and Conneau, 2019",
"ref_id": "BIBREF5"
},
{
"start": 738,
"end": 751,
"text": "(Taylor, 1953",
"ref_id": "BIBREF18"
},
{
"start": 796,
"end": 818,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 982,
"end": 1011,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach can generally be described as a form of model compression which was introduced by (Bucilu et al., 2006) , and later reformulated and generalized as neural network knowledge distillation by (Hinton et al., 2015) . However, instead of using a large number of ensemble for a teacher model, we use a deep neural network parser augmented with a large transformer-based pretrained model creating a new parser that advances the current state of the art. Since the pretrained transformer model can be trained on large amounts of unlabeled monolingual and multilingual data of various domains and languages, the teacher model gains improved generalization performance that is facilitated by both cross-lingual and cross-domain transfer learning. Our student model is a non-neural net based model that is designed to be fast and efficient in production environments.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Bucilu et al., 2006)",
"ref_id": "BIBREF0"
},
{
"start": 202,
"end": 223,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conventionally, parsers are trained on human annotated treebanks, which can be both costly and limited in quantity. Certain languages may not have enough annotation resources, have very small amount of data, or data that carries non-commercial licenses. In other cases, the data may be available in a specific topical domain resulting in models that perform poorly on unrelated domains. In addition, annotation errors can be common in some treebanks. To address these data challenges, we use cross-lingual transfer learning and pretrained deep contextualized representations to create a novel parser that helps generate synthetic data. We describe a production parser trained on these data, whose performance increases as the synthetic data size surpasses that of human annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of our system is to produce synthetically labeled treebanks in order to significantly improve the accuracy of a production parser. The synthetic data will be generated using a different parser that is higher in quality. We create a new parser using two key components: the deep biaffine-attention parser and a pretrained Transformer model. Not only does such a setup improve the parsing accuracy, as shown in Section 3, the incorporation of Transformer models facilitates greater degree of generalization and domain adaptation. In the sections below, we detail the training data augmentation process as well as the new parser architecture. Figure 1 shows the architecture of the Transformer enhanced Biaffine-Attention Parser (TBAP). The Transformer provides contextualized word representations for each input sentence to the BiLSTM layer of the biaffine parser. First, a tokenized input sentence is passed through the Transformer. The Transformer further breaks word tokens into subword tokens. This is done in order to significantly reduce the size of the fixed vocabulary representation in the output prediction layer (Sennrich et al., 2016) overcom- ing the open-vocabulary problem. We then take the sum of the last four encoder layers of the Transformer as the output representation, which is comprised of the contextualized subword representations of the input sentence. Afterwords, two operations are performed on the Transformer's output. Subword token representations (also referred to as WordPiece tokens for BERT) are merged back into word-based representations. Merging the subword representations can either be done by averaging, maxpooling, or simply taking the first subword of each word. A forward Fully Connected (FC) layer is then applied to the merged subword representations, resulting in a sequence of representations aligned for each word of the tokenized input sequence.",
"cite_spans": [
{
"start": 1129,
"end": 1152,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 648,
"end": 656,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
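The following sketch illustrates the input stage just described: contextualized subword states from a pretrained Transformer are summed over the last four encoder layers, merged back into one vector per word (averaging is shown), and projected with an FC layer before being fed to the biaffine parser's BiLSTM. It is a minimal sketch using the Hugging Face Transformers library; the checkpoint name, the projection size of 400, and the helper name encode_words are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Pretrained multilingual encoder (M-BERT); hidden states of all layers are returned.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                    output_hidden_states=True)
# FC projection applied to the merged word representations (size 400 is illustrative).
fc = torch.nn.Linear(encoder.config.hidden_size, 400)

def encode_words(words):
    # Tokenize an already word-tokenized sentence; each word may split into subwords.
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    # Sum of the last four encoder layers, as described above.
    hidden = torch.stack(out.hidden_states[-4:]).sum(dim=0).squeeze(0)

    # Merge subword vectors back into word vectors by averaging.
    word_ids = enc.word_ids()  # maps each subword position to its word index (None for specials)
    merged = []
    for i in range(len(words)):
        positions = [p for p, w in enumerate(word_ids) if w == i]
        merged.append(hidden[positions].mean(dim=0))
    merged = torch.stack(merged)   # (num_words, hidden_size), aligned with the input words
    return fc(merged)              # (num_words, 400), input to the parser's BiLSTM

print(encode_words(["The", "parser", "reads", "a", "sentence", "."]).shape)
```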
{
"text": "The TBAP is trained on available treebanks. This is a process where the Transformer itself is fine-tuned by allowing backpropagation to flow through it during training. Alternatively, freezing the Transformer layers while training can help in speeding up the training process with some drop in performance. Figure 2 outlines the stages of multilingual treebank generation. Initially, the TBAP is trained with available treebanks. Depending on the type of Transformer model used, two training approaches can be followed, monolingual and multilingual. Monolingual training can be applied when monolingual Transformer models are used. Pretrained monolingual Transformer models are available for certain languages, such as English, German, French, Chinese, Japanese as well as others. Performance can particularly be improved for these languages due to both the abundance and specialization of their monolingual data. Multilingual Transformer models, such as Multilingual-BERT (M-BERT) are trained on more than 100 languages. When using M-BERT, both monolingual as well as multilingual treebanks can be used to train the parser. Low resource languages can particularly benefit from cross-lingual transfer learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 315,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Transformer Enhanced Biaffine-Attention Parser",
"sec_num": "2.1"
},
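As a minimal sketch of the two training regimes described above, the snippet below toggles requires_grad on the encoder parameters to switch between fine-tuning the Transformer end-to-end and freezing it for faster training; the small linear head, helper name, and learning rate are illustrative assumptions standing in for the full parser.

```python
import torch
from transformers import AutoModel

encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
parser_head = torch.nn.Linear(encoder.config.hidden_size, 400)  # stand-in for the parser on top

def set_transformer_trainable(model, fine_tune):
    # fine_tune=True  -> backpropagation flows through the Transformer (slower, higher LAS)
    # fine_tune=False -> Transformer layers are frozen (faster training, some drop in LAS)
    for param in model.parameters():
        param.requires_grad = fine_tune

set_transformer_trainable(encoder, fine_tune=False)

# Only parameters that still require gradients are handed to the optimizer.
trainable = [p for p in list(encoder.parameters()) + list(parser_head.parameters())
             if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-5)
print(sum(p.numel() for p in trainable), "trainable parameters")
```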
{
"text": "Our production parser should meet rigid criteria regarding runtime speed; thus, we choose the arc-eager algorithm (Nivre, 2004) trained with features similar to those used by Chen and Manning (2014) . To maintain UD compatability for existing downstream tasks, the tokenization and PoS tagging should not be modified even if they do not completely follow the definitions from UD. As shown in Figure 2 , the dependency parser takes the tokenizer and PoS tagger's results as input in order to produce UD-based syntactic structures. ",
"cite_spans": [
{
"start": 114,
"end": 127,
"text": "(Nivre, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 175,
"end": 198,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Fast Production Parser",
"sec_num": "2.3"
},
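For illustration, the sketch below implements the arc-eager transition system (Nivre, 2004) on which the production parser is based. A real parser predicts each transition with a classifier trained over features in the spirit of Chen and Manning (2014); that classifier is omitted here, and the function and action names are illustrative.

```python
def arc_eager_parse(words, transitions):
    """words: list of sentence tokens; internally, token i+1 refers to words[i] and 0 is ROOT.
    transitions: sequence of (action, label) pairs, e.g. [("SH", None), ("LA", "nsubj")]."""
    stack, buffer = [0], list(range(1, len(words) + 1))
    heads = {}                              # dependent index -> (head index, label)

    for action, label in transitions:
        if action == "SH":                  # SHIFT: move the front of the buffer onto the stack
            stack.append(buffer.pop(0))
        elif action == "LA":                # LEFT-ARC: buffer front becomes head of stack top
            dependent = stack.pop()
            heads[dependent] = (buffer[0], label)
        elif action == "RA":                # RIGHT-ARC: stack top becomes head of buffer front
            dependent = buffer.pop(0)
            heads[dependent] = (stack[-1], label)
            stack.append(dependent)
        elif action == "RE":                # REDUCE: pop a stack token that already has a head
            assert stack[-1] in heads
            stack.pop()
    return heads

# "She reads": "She" is nsubj of "reads", and "reads" attaches to ROOT.
print(arc_eager_parse(["She", "reads"],
                      [("SH", None), ("LA", "nsubj"), ("RA", "root")]))
# {1: (2, 'nsubj'), 2: (0, 'root')}
```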
{
"text": "In this section, we show results demonstrating the improved performance of the new TBAP architecture on seven languages. We also show the effectiveness of the treebank synthesis technique when used in the augmented training of a production parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The TBAP is implemented by combining two key components, a pretrained Transformer model and the Biaffine-attention parser. The interface to the pretrained Transformer models was obtained from the Hugging Face's Transformers library (Wolf et al., 2019) . The implementation of the biaffine-attention parser was obtained from the open-source StanfordNLP (SNLP) library . The FC and the subwords merge layers were added between the Transformer and the biaffine parser. We have adapted the dependency parser component to be connected to the pretrained transformers and left other components of the SNLP pipeline unchanged. In fact, since the synthetic data is being preprocessed by the production parser's tokenizer and tagger, we only needed to adapt the UD parser and disable the other modules in the pipeline. Other modifications were performed on the UD parser to make it more suitable for our task such as changing the internal dimensions of the embeddings layers, adjusting the vocabulary data structures to make them suitable for multilingual training, and controlling which UD features can be used when training the UD parser. The PyTorch 1 library is used to implement the TBAP code. We use the standard UD treebanks v2.6 in our evaluations of the TBAP models. The UD v2.6 designated devset of each treebank is used as a tune-set for early stopping criterion during training. The UD v2.6 testset of each treebank is used for Labeled Attachment Score (LAS) results in the tables below. All models generated from UDs are for evaluation purposes. In most cases, we re-trained the SNLP (unmodified) parser in order to obtain improved baseline scores over the existing pretrained models. Table 1 shows the LAS results of TBAP with various Tranformer models compared with the baseline SNLP parser. Since we only modified the dependency parser, we compute scores based on gold sentences, tokens and tags. Table 1 shows that TBAP with any of the used Transformer models improves LAS over the baseline parser. Also the best results are observed when using a Transformer model trained monolingually on the corresponding language. This can be attributed to the larger amount of monolingual text used to train the monolingual Transformer. Also in monolingual language models, the subword splitting models are improved, which results in less splitting and consequently improved contextual representations. For English, BERT large outperforms the base one. Table 2 shows LAS results for training on different language UDs using M-BERT TBAP. M-BERT TBAP consistently outperforms the baseline SNLP parser.",
"cite_spans": [
{
"start": 232,
"end": 251,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1688,
"end": 1695,
"text": "Table 1",
"ref_id": null
},
{
"start": 1903,
"end": 1910,
"text": "Table 1",
"ref_id": null
},
{
"start": 2448,
"end": 2455,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Transformer Enhanced Biaffine-Attention Parser (TBAP)",
"sec_num": "3.1"
},
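For reference, the Labeled Attachment Score (LAS) reported in these tables counts a word as correct only when both its predicted head and its dependency label match the gold annotation. The small sketch below computes it over illustrative data structures; it is a simplification of the official CoNLL evaluation script, not the scoring code used in the paper.

```python
def las(gold, predicted):
    """gold, predicted: one (head_index, dependency_label) pair per word, in order."""
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    return 100.0 * correct / len(gold)

# Three-word sentence: the third word gets the wrong head, so 2/3 are correct.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (3, "obj")]
print(f"LAS = {las(gold, pred):.2f}")  # LAS = 66.67
```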
{
"text": "In the typical case where the synthetic data will be used to train a different production parser, it will be first preprocessed by the production parser; that is sentence segmented, tokenized, and PoS tagged by the Figure 3 : LAS against the size of synthetic training corpora. The filled symbols (e.g. ,",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer Enhanced Biaffine-Attention Parser (TBAP)",
"sec_num": "3.1"
},
{
"text": ") denote the results with the corresponding UD corpora. Table 4 : Results of the production parser for seven languages on UD 2.6 testsets. Comparing training data by number of sentences and LAS (F1) for both the original UD and the larger synthetic corpora. production parser's own pipeline. This preprocessing is required so that the parser's output is compatible with other downstream NLP tasks. This means that the preprocessing will not necessarily be consistent with the treebank from its corresponding language. In order to improve the robustness of the synthetic data under different preprocessing requirements, the M-BERT TBAP must be trained without relying on such predicted tags. Table 3 shows the effect of removing the tags while training the M-BERT TBAP for synthetic data generation. As expected the overall LAS consequently drops; however, the no-tags model's scores shows less of an impact for the preprocessed testset.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 4",
"ref_id": null
},
{
"start": 691,
"end": 698,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Transformer Enhanced Biaffine-Attention Parser (TBAP)",
"sec_num": "3.1"
},
{
"text": "We retrained the production parser using the synthetic data generated by the methods above. Table 4 shows the results of seven language parsers, evaluated on the testsets of the UD corpus (v2.6) of the corresponding language. Parsers trained with the larger synthetic data showed higher LAS than those trained with the smaller manually created UD corpus data. Figure 3 shows LAS against the size of training corpora. All languages show similar trends between parsing accuracy and corpus size; larger synthetic corpora compensate for the smaller size of the UD corpora, except for English, French and German in which the synthetic data performs nearly equally with the same size of the original UD training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 4",
"ref_id": null
},
{
"start": 360,
"end": 368,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Augmented Training of the Production Parser",
"sec_num": "3.2"
},
{
"text": "We presented a data augmentation approach for UD parsing that improves fast production parsers accuracy and overcomes critical treebank limitations. A new Transformer enhanced biaffine parser is used to generate scalable synthetic data. We showed that utilizing deep contextualized representations pretrained on massive multilingual corpora can be used to considerably improve parsing accuracy. In the future, we plan to extending our method to generate synthetic data for additional languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "https://pytorch.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Model compression",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Bucilu",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535-541.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740-750.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirec- tional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "75 languages, 1 model: Parsing universal dependencies universally",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2779--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-lingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. CoRR, abs/1901.07291.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. ArXiv, abs/1909.11942.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning",
"volume": "32",
"issue": "",
"pages": "22--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1188-1196, Bejing, China, 22-24 Jun. PMLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Encoding sentences with graph convolutional networks for semantic role labeling",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1506--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards lingua franca named entity recognition with bert",
"authors": [
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taesun Moon, Parul Awasthy, Jian Ni, and Radu Florian. 2019. Towards lingua franca named entity recognition with bert. ArXiv, abs/1912.01389.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France, May. European Language Resources Association.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incrementality in deterministic dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the workshop on incremental parsing: Bringing engineering and cognition together",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the workshop on incremental parsing: Bringing engineering and cognition together, pages 50-57.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word repre- sentation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Universal dependency parsing from scratch",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D. Manning. 2018. Universal dependency parsing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160-170, Brussels, Belgium, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "cloze procedure\": a new tool for measuring readability. Journalism & Mass Communication Quarterly",
"authors": [
{
"first": "Wilson",
"middle": [
"L"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "30",
"issue": "",
"pages": "415--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson L. Taylor. 1953. \"cloze procedure\": a new tool for measuring readability. Journalism & Mass Communi- cation Quarterly, 30:415-433.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brus- sels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Huggingface's transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the- art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Filip Ginter, Jan Haji\u010d, Joakim Nivre, Martin Popel, and Milan Straka. 2018. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, Brussels, Belgium.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Graph convolution over pruned dependency trees improves relation extraction",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2205--2215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2205-2215, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Transformer enhanced biaffine-attention Parser (TBAP).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "The data augumentation process for training a production parser.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "LAS comparison SNLP and TBAP.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>UD</td><td colspan=\"2\">Tags No-Tags</td></tr><tr><td>fr gsd</td><td>72.30</td><td>84.90</td></tr><tr><td colspan=\"2\">pt bosque 62.59</td><td>75.47</td></tr></table>"
},
"TABREF2": {
"text": "TBAB LAS for unmatched tags.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}