{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:10.723152Z"
},
"title": "Integrating Automated Segmentation and Glossing into Documentary and Descriptive Linguistics",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Moeller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Boulder",
"location": {}
},
"email": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Boulder",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Any attempt to integrate NLP systems to the study of endangered languages must take into consideration traditional approaches by both NLP and linguistics. This paper tests different strategies and workflows for morpheme segmentation and glossing that may affect the potential to integrate machine learning. Two experiments train Transformer models on documentary corpora from five under-documented languages. In one experiment, a model learns segmentation and glossing as a joint step and another model learns the tasks into two sequential steps. We find the sequential approach yields somewhat better results. In a second experiment, one model is trained on surface segmented data, where strings of texts have been simply divided at morpheme boundaries. Another model is trained on canonically segmented data, the approach preferred by linguists, where abstract, underlying forms are represented. We find no clear advantage to either segmentation strategy and note that the difference between them disappears as training data increases. On average the models achieve more than a 0.5 F 1-score, with the best models scoring 0.6 or above. An analysis of errors leads us to conclude consistency during manual segmentation and glossing may facilitate higher scores from automatic evaluation but in reality the scores may be lowered when evaluated against original data because instances of annotator error in the original data are \"corrected\" by the model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Any attempt to integrate NLP systems to the study of endangered languages must take into consideration traditional approaches by both NLP and linguistics. This paper tests different strategies and workflows for morpheme segmentation and glossing that may affect the potential to integrate machine learning. Two experiments train Transformer models on documentary corpora from five under-documented languages. In one experiment, a model learns segmentation and glossing as a joint step and another model learns the tasks into two sequential steps. We find the sequential approach yields somewhat better results. In a second experiment, one model is trained on surface segmented data, where strings of texts have been simply divided at morpheme boundaries. Another model is trained on canonically segmented data, the approach preferred by linguists, where abstract, underlying forms are represented. We find no clear advantage to either segmentation strategy and note that the difference between them disappears as training data increases. On average the models achieve more than a 0.5 F 1-score, with the best models scoring 0.6 or above. An analysis of errors leads us to conclude consistency during manual segmentation and glossing may facilitate higher scores from automatic evaluation but in reality the scores may be lowered when evaluated against original data because instances of annotator error in the original data are \"corrected\" by the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper examines the direct effects of variations in research design when integrating machine learning into the morphological analysis and annotation of endangered languages. Morpheme segmentation and glossing are traditionally the first tasks undertaken by linguists after documenting a language's sound system. Both tasks pro-vide essential linguistic information. Segmenting words into morphemes clarifies relationships between various word forms and can reduce confusion caused by data sparsity in NLP models. Glosses make implicit linguistic structures explicit and accessible for analysis and they can be leveraged to improve NLP models in low-resource settings, such as for machine translation (Shearing et al., 2018; Zhou et al., 2020) . Therefore, automating these tasks with NLP systems and integrating those systems into the documentary and descriptive workflow is important to both linguistics and NLP.",
"cite_spans": [
{
"start": 704,
"end": 727,
"text": "(Shearing et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 728,
"end": 746,
"text": "Zhou et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When we bring together two disciplinary fields for mutual benefit, different expectations or accepted conventions are also brought together. The issues that this paper addresses stem from conventional methods in natural language processing (NLP) and linguistic analysis. The methods are based or have led to differing expectations which raise potentially conflicting issues. For example, it is generally expected in NLP that textual data will be orthographic representations and that the goal is to process that form, whereas linguists may prefer to work with phonetic representations and see their goal to process underlying linguistic forms. These differences can make the interdisciplinary collaboration unnecessarily slow or confusing. When the differences affect overall research design, it is easy to simply choose one or the other convention without testing which choice might actually benefit the task at hand or be more efficient for longterm goals. This paper studies and compares the short-term affect of two pairs of differing expectations which have arisen during the authors' research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first study asks whether morpheme segmentation and glossing should be done jointly or sequentially. In other words, are NLP systems more accurate when trained to do these two tasks sepa-rately or when trained to do them jointly? Instead of arbitrarily choosing one or the other method, we could test whether one approach gives more accurate results than the other. If the sequential approach is more accurate, then linguists might consider adjusting their workflow in order to gain optimal benefit from the NLP system, but if the joint task approach performs better, then perhaps NLP would benefit by adjusting experiments to match the linguists' expectations about data annotation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This second issue we investigate is how morpheme segmentation strategies affect NLP performance. Linguistic theory assumes the existence of underlying morpheme forms and generally the goal to discover these forms determines research design. The morphemes are often represented in their theoretical, underlying forms, which also allows orthographic changes triggered by surrounding phones to be ignored. This contrast with surface segmentation which simply inserts morpheme breaks in the orthographic representation. The two segmentation strategies are compared in (1) where the first two surface letters of each word in (1a) are represented by identical canonical segments in (1b). Since NLP almost always deals with orthographic representations, its systems are trained to perform surface segmentation almost exclusively. In practice, both strategies are encountered during language documentation and description, the initial strategy depending in part on software tools. For example, the older, but still popular, Toolbox 1 allows surface segmentation whereas ELAN (Auer et al., 2010) supports both but as separate tasks, while FLEx (Baines, 2018) requires surface segmentation but facilitates simultaneous canonical segmentation. It might seem reasonable that linguists who want to integrate automated assistance would adjust their strategy to match NLP expectations. But without testing, are we sure that NLP systems perform better at surface or at canonical segmentation?",
"cite_spans": [
{
"start": 1067,
"end": 1086,
"text": "(Auer et al., 2010)",
"ref_id": "BIBREF2"
},
{
"start": 1135,
"end": 1149,
"text": "(Baines, 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. il-legal in-capable im-mature b. in-legal in-capable in-mature c. NEG-legal NEG-capable NEG-mature This paper describes experiments that test results of Transformer models (Vaswani et al., 2017) trained on segmented and glossed data and then 1 https://software.sil.org/toolbox/ compare those results between a joint and sequential approach to segmentation and glossing and between a surface and canonical strategy to segmentation. After a review of related literature in \u00a7 2, \u00a7 3 introduces the data used by the models that are described in \u00a7 4. The experiments are described in \u00a7 5 and results are presented in \u00a7 6 and analyzed in \u00a7 7.",
"cite_spans": [
{
"start": 73,
"end": 105,
"text": "NEG-legal NEG-capable NEG-mature",
"ref_id": null
},
{
"start": 179,
"end": 201,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many NLP models have been applied to segmentation and glossing of low-resource languages. Automatic morpheme segmentation was introduced by Harris (1970) and much segmentation research since then has implemented this in an unsupervised fashion (Goldsmith, 2001; Creutz and Lagus, 2002; Poon et al., 2009) . This is probably motivated by the difficulty of finding high quality amounts of segmented data that is needed for supervised learning. A recent supervised segmentation experiment (Ansari et al., 2019) had to first manually segment a Persian corpus before being able to conduct the experiment. NLP experiments with low-resource languages often treat segmentation and glossing as separate tasks. Their approach seems to assume that the two tasks are performed sequentially and that it is reasonable to expect morpheme segments to be available before glosses. However, our field experiences indicates that it is not uncommon for linguists to segment a morpheme and gloss it immediately.",
"cite_spans": [
{
"start": 244,
"end": 261,
"text": "(Goldsmith, 2001;",
"ref_id": "BIBREF10"
},
{
"start": 262,
"end": 285,
"text": "Creutz and Lagus, 2002;",
"ref_id": "BIBREF6"
},
{
"start": 286,
"end": 304,
"text": "Poon et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 486,
"end": 507,
"text": "(Ansari et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Glossing-only experiments make the assumption that the data is already segmented into morphemes or that it does not need to be segmented. McMillan-Major (2020) trained conditional random field (CRF) systems to produce a gloss line for several high-resource languages and three lowresource languages. The systems incorporated predictions made directly from the segmented line and predictions made with information from the free translation line that was enriched with IN-TENT (Georgi, 2016) . The low-resource language data came from field projects, as does the data in this paper. Both McMillan-Major and Samardzic et al. (2015) used information from other lines of interlinearized texts such as translation and partof-speech tags, whereas our work assumes the texts have not yet been annotated with any other information.",
"cite_spans": [
{
"start": 475,
"end": 489,
"text": "(Georgi, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 586,
"end": 628,
"text": "McMillan-Major and Samardzic et al. (2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In general, joint learning is characterized by training on different types of information and is based on the intuition that one type of linguistic knowledge (e.g. syntax) can improve results in another domain (e.g. morphology) (Goldsmith et al., 2017) . Joint learning of segmentation and glossing, or labeled segmentation, is less common but has been successful in low-resource languages (Cotterell et al., 2015; Moeller and Hulden, 2018) , usually with non-neural models. The authors' previous work on Lezgi (Moeller and Hulden, 2018) used the same corpus as the current work and compared sequential vs. joint models as well as feature-based vs. deep learning models. The reported F 1 -scores were nearly .9. However, a direct comparison between the two studies cannot be made because the previous work only segmented and glossed affixes while the current work includes root and affixes. The segmentation strategies that an NLP project implements may depend on available data or the type of learning model employed. Unsupervised learning of morphology naturally leans towards surface segmentation. Supervised models depend on annotated data provided by linguists and preprocessed to reduce inconsistencies. Moeller and Hulden (2018) trained a joint system with good results on canonical morphemes in language with little allomorphy or morphophonological processes.",
"cite_spans": [
{
"start": 228,
"end": 252,
"text": "(Goldsmith et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 414,
"text": "(Cotterell et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 415,
"end": 440,
"text": "Moeller and Hulden, 2018)",
"ref_id": "BIBREF17"
},
{
"start": 511,
"end": 537,
"text": "(Moeller and Hulden, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In languages with more complicated morphophonology and allomorphyincluding null morphemes that must be \"segmented\" and glossed, or circumfixation-the effect of canonical segementation may be unclear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The selected data represent a range of documentary and descriptive projects that manually interlinearized several texts. Each project's unique priorities and workflow resulted in different amounts of data and percentages of segmented and glossed tokens, as shown in Table 1 . We selected only projects that interlinearized with FLEx, since the software always includes both surface and canonical segmentations. Less effort was made to represent various typological features, geographic areas, or language families. The corpora were shared in the form of backup flextext XML files. Alas [btz] (Alas-Kluet, Batak Alas, Batak Alas-Kluet) is an Austronesian language spoken by 200,000 people on the Indonesian island of Sumatra (Eberhard et al., 2020) . The selected corpus is from the Alas dialect and features reduplication, infixation, and circumfixation.",
"cite_spans": [
{
"start": 724,
"end": 747,
"text": "(Eberhard et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Lamkang [lmk] is a Northern Kuki-Chin (Tibeto-Burman) language with an estimated 4 to 10 thousand speakers, primarily in Manipur, India but also in Burma (Thounaojam and Chelliah, 2007 Naess and Boerger, 2008) . The data is stored at SIL Language & Culture Archives. 6 Gold standard data was assembled by filtering out tokens that were not completely segmented and glossed as far as could be determined automatically by assuring that the surface, canonical, and gloss lines aligned with each other. Morpheme boundary markers such as hyphens and equal signs were preserved to distinguish clitics from bound morphemes and to indicate relative ordering of morphemes (i.e. pre-/suf/infixing); angle brackets (\u2329 \u232a) were used to denote circumfixes.",
"cite_spans": [
{
"start": 154,
"end": 184,
"text": "(Thounaojam and Chelliah, 2007",
"ref_id": "BIBREF25"
},
{
"start": 185,
"end": 209,
"text": "Naess and Boerger, 2008)",
"ref_id": "BIBREF19"
},
{
"start": 267,
"end": 268,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
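The gold-standard filtering described above can be illustrated with a short script. A minimal sketch, assuming each token is exported from FLEx as three parallel lists (surface morphs, canonical morphemes, glosses); the token representation and function names are illustrative, not the project's actual code.

from typing import List, Tuple

# A token as three parallel lists: surface morphs, canonical morphemes, glosses.
Token = Tuple[List[str], List[str], List[str]]

def is_fully_annotated(token: Token) -> bool:
    """Keep a token only if the surface, canonical, and gloss lines align one-to-one."""
    surface, canonical, gloss = token
    same_length = len(surface) == len(canonical) == len(gloss)
    nothing_empty = all(m.strip() for m in surface + canonical + gloss)
    return same_length and nothing_empty

def build_gold_standard(tokens: List[Token]) -> List[Token]:
    return [t for t in tokens if is_fully_annotated(t)]

# Hypothetical English token "taxes" (cf. example (2) later in the paper):
tokens = [
    (["tax", "-es"], ["tax", "-es"], ["levy", "PL"]),  # fully segmented and glossed: kept
    (["tax", "-es"], ["tax", "-es"], ["levy"]),        # gloss line incomplete: dropped
]
print(len(build_gold_standard(tokens)))  # 1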
{
"text": "All tasks are treated as a problem of converting an input sequence of characters x = (x 1 , . . . , x n ) to an output sequence of labels y = (y 1 , . . . , y n ). The output sequence of labels indicate the (canonical or surface) morpheme and/or the morpheme's gloss. Since Conditional Random Fields (CRF) (Lafferty et al., 2001) , the state-of-art non-neural sequence labeling model, has not performed as well as neural models on low-resource sequenceto-sequence tasks since about 2016 (Liu and Mao, 2016) , we selected the Transformer (Vaswani et al., 2017) as our model. The Transformer is a supervised deep learning system that has achieved promising results for NLP in low-resource languages (Abbott and Martinus, 2018; Martinus and Abbott, 2019) . It is a stateless encoderdecoder model that uses additional attention layers to boost speed and performance. We used the Fairseq (Ott et al., 2019) implementation with the modifications and code described by Wu et al. (2020) which have been successful in lowresource character-level morphological tasks. 7",
"cite_spans": [
{
"start": 306,
"end": 329,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF13"
},
{
"start": 487,
"end": 506,
"text": "(Liu and Mao, 2016)",
"ref_id": "BIBREF14"
},
{
"start": 537,
"end": 559,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 697,
"end": 724,
"text": "(Abbott and Martinus, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 725,
"end": 751,
"text": "Martinus and Abbott, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 883,
"end": 901,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 962,
"end": 978,
"text": "Wu et al. (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
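As a concrete illustration of the character-to-label setup, the sketch below writes source and target sides as plain parallel text files, one word per line: space-separated characters on the source side and one label per morpheme on the target side. The file names and helper function are assumptions; the experiments themselves used the Wu et al. (2020) code base.

import pathlib

def write_parallel_files(pairs, prefix, out_dir="data"):
    """Write a character-level source file and a label-level target file in the
    plain parallel-text format used by fairseq-style sequence-to-sequence tools."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / f"{prefix}.src", "w", encoding="utf-8") as src_f, \
         open(out / f"{prefix}.tgt", "w", encoding="utf-8") as tgt_f:
        for word, labels in pairs:
            src_f.write(" ".join(word) + "\n")    # 'taxes' -> 't a x e s'
            tgt_f.write(" ".join(labels) + "\n")  # e.g. 'tax#levy -es#PL'

write_parallel_files([("taxes", ["tax#levy", "-es#PL"])], prefix="train")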
{
"text": "The experiments assume access to field data that has only been segmented and glossed. Therefore, no other information was leveraged from the in-terlinearized glossed texts or elsewhere. The data was arranged so as to accommodate both joint and sequential learning. That is, after withholding ten percent of the corpus as a test set, the remaining data was split into two equal training sets. It is assumed that segments and glosses exist for the first part which can be used for training in the sequential system, but not for the second part. Ten percent of each part was used as a development set. For easier comparison, the joint model was trained on only one part, the same part used for training the segmentation step in the sequential system. One additional experiment was run with the joint model that trained on both parts together, minus the held-out data. For each experiment, a ten-fold cross validation was run.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation and Glossing Experiments",
"sec_num": "5"
},
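A minimal sketch of the data split described above, assuming a simple shuffled list of tokens; the seed and function names are illustrative: ten percent is held out for test, the remainder is divided into two equal parts, and ten percent of each part becomes a development set.

import random

def split_corpus(tokens, seed=0):
    """Hold out 10% for test, split the rest into two equal parts,
    and reserve 10% of each part as a development set."""
    rng = random.Random(seed)
    shuffled = tokens[:]
    rng.shuffle(shuffled)
    n_test = len(shuffled) // 10
    test, rest = shuffled[:n_test], shuffled[n_test:]
    half = len(rest) // 2
    parts = []
    for part in (rest[:half], rest[half:]):
        n_dev = len(part) // 10
        parts.append({"dev": part[:n_dev], "train": part[n_dev:]})
    return {"test": test, "part1": parts[0], "part2": parts[1]}

splits = split_corpus(list(range(1000)))
print(len(splits["test"]), len(splits["part1"]["train"]), len(splits["part1"]["dev"]))  # 100 405 45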
{
"text": "The first experiment tested whether joint or sequential segmentation and glossing is a better approach to interlinearization when integrating automated assistance. Joint segmentation assumes that segmented data without glosses is unlikely because identifying a morpheme usually means there has already been an identification of the morpheme's meaning. Joint segmentation requires the model to learn the morpheme boundary and gloss simultaneously for each segment. The sequential system-glossing after segmenting the whole text-assumes that segmentation is easier to do by hand or that unsupervised segmentation tools such as Morfessor (Smit et al., 2014) are available for low-resource languages. For joint learning, the input is a character-level representation of a word, shown in (2a). Each character is treated as as separate symbol by the model. The output is a sequence of labels, one label per morpheme, shown in (2b). The label combines the morpheme's shape and gloss. The combination allows the system to perform segmentation and glossing simultaneously.",
"cite_spans": [
{
"start": 635,
"end": 654,
"text": "(Smit et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "(2) a. IN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "t a x e s b. OUT: tax#levy -es#PL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "The sequential system trains two models: one model learns morpheme segments and the other learns to gloss the predicted morphemes. In the sequential system the first equal part of the data was used for the segmentation step and its output was the training input for the glossing step. The output to the first model is a sequence of segments only, shown in (3b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "(3) a. IN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "t a x e s b. OUT: tax -es",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "The output of the segmentation model is used as input to the second model, as shown in (4a.) The glossing model then outputs the predicted glosses, shown in (4b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "(4) a. IN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
{
"text": "tax -es b. OUT: levy PL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint versus Sequential System",
"sec_num": "5.1"
},
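The three training formats in examples (2)-(4) can be derived from a single segmented and glossed token. A minimal sketch; the '#' joint-label delimiter follows the examples above, and the function names are illustrative.

def joint_example(word, morphs, glosses):
    """Example (2): characters in, one morph#gloss label per morpheme out."""
    return " ".join(word), " ".join(f"{m}#{g}" for m, g in zip(morphs, glosses))

def segmentation_example(word, morphs):
    """Example (3): characters in, morph segments out (first sequential step)."""
    return " ".join(word), " ".join(morphs)

def glossing_example(morphs, glosses):
    """Example (4): predicted segments in, glosses out (second sequential step)."""
    return " ".join(morphs), " ".join(glosses)

print(joint_example("taxes", ["tax", "-es"], ["levy", "PL"]))  # ('t a x e s', 'tax#levy -es#PL')
print(segmentation_example("taxes", ["tax", "-es"]))           # ('t a x e s', 'tax -es')
print(glossing_example(["tax", "-es"], ["levy", "PL"]))        # ('tax -es', 'levy PL')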
{
"text": "The second experiment compares the Transformer's performance when trained on different segmentation strategies. Both systems described above are trained on both strategies. Canonical segmentation gives more information about a language's underlying morphological structure. At the same time, it reduces the number of unique labels in languages that reflect allomorphy and morphophonological processes in the orthography. On the other hand, surface segmentation does not require computational models to learn allomorphy or morphophonology (Goldsmith et al., 2017) and does not provide a thorough analysis of the language's morphology by annotators. It simply divide strings of text into segments known as \"morphs\" (Virpioja et al., 2011) without regard to potential relationships between the segments. The intention of this study is not to provide a direct comparison, since technically the corpora of surface and canonical segments are different datasets. The study assumes that if one strategy was conducted first, then the other type of segmentation might be more easily learned from it. For example, if a corpus could be surface segmented very quickly with very high accuracy based on initial hypotheses of morpheme shapes, then having the predicted surface segments for the whole corpus might make the discovery of canonical, underlying morphemes easier and faster for linguists, as well as matching a common expectation in NLP.",
"cite_spans": [
{
"start": 538,
"end": 562,
"text": "(Goldsmith et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 713,
"end": 736,
"text": "(Virpioja et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Strategy",
"sec_num": "5.2"
},
{
"text": "The difference in the methodology of the two strategies is their outputs. Their input does not change and it is the same as the models described in section 5.1. The output for surface segmentation is shown (5a), and the corresponding output for canonical segmentation is in (5b). In addition to the alternation between surface morphs and underlying morpheme representations, the data was handled slightly differently for the two strategies. The most obvious difference is the handling of circumfixes. Surface representation only preserves the ordering of morphs and does not require knowledge of morpheme types, so the two parts of each circumfix were treated as two different prefix and suffix morphs. Canonical segmentation represents the circumfix as a single morpheme that repeats before and after the stem. These changes are shown in (6). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Strategy",
"sec_num": "5.2"
},
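A minimal illustration of the two target formats, reusing the prefix of example (1) and the Alas circumfix discussed in Section 7; the surface renderings and hyphen conventions are assumptions, following the format of example (3).

# Target strings for the word "illegal" under the two strategies (cf. example (1)):
surface_target = "il- legal"    # morphs as written in the orthography
canonical_target = "in- legal"  # underlying morpheme; allomorphy abstracted away

# Circumfix handling: canonical segmentation repeats one circumfix morpheme before and
# after the stem, marked with angle brackets (the Alas gold form cited in Section 7);
# surface segmentation instead lists two independent prefix and suffix morphs
# (the surface shapes below are a guess, not taken from the paper).
canonical_circumfix = "n⟨⟩ken-nindekh -n⟨⟩ken"
surface_circumfix = "n-nindekh -ken"

print(surface_target, "|", canonical_target)
print(surface_circumfix, "|", canonical_circumfix)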
{
"text": "Performance was evaluated by a cross-validation on ten training and development sets that were randomly split from the part of the data used for each experiment. The system predictions were automatically evaluated against the gold standard. Scores were calculated as a micro-average on all labels, independent of word accuracy. Since the system may predict more or fewer labels for a word, both precision and recall are calculated. Table 2 compares the average F 1 -scores across a 10fold validation. For joint learning, the scores indicate morphemes that were correctly segmented and glossed. For the sequential system, the score is a weighted average of the scores from both the segmentation and glossing models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
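A minimal sketch of the micro-averaged evaluation as described above (one plausible reading, not the authors' released script): a predicted label counts as correct when it matches the gold label at the same position, precision divides by the number of predicted labels, and recall by the number of gold labels.

def micro_prf(gold_seqs, pred_seqs):
    """Micro-averaged precision, recall, and F1 over per-word label sequences."""
    tp = sum(sum(g == p for g, p in zip(gold, pred))
             for gold, pred in zip(gold_seqs, pred_seqs))
    n_pred = sum(len(p) for p in pred_seqs)
    n_gold = sum(len(g) for g in gold_seqs)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [["tax#levy", "-es#PL"]]
pred = [["tax#levy", "-es#PL", "-s#PL"]]  # one spurious extra label
print(micro_prf(gold, pred))              # (0.666..., 1.0, 0.8)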
{
"text": "Overall, sequential learning does better than joint learning, but the differences are not great. The maximum improvement is less than 0.06 points on Lezgi [lez] . The best models achieved over 0.60 F 1 on all but the smallest corpus. Lamkang [lmk] , which has the largest number of tokens by far, achieved over 0.70 average F 1 score. The performance on the Nat\u00fcgu data is the only case where the sequential system is not consistently an improvement over the joint system. When considering word-level accuracy, Nat\u00fcgu joint learning outperformed sequential learning on canonical segmentation. Interestingly, it also has the smallest change in the number of unique labels between surface and canonical segmentation (an increase of 14 labels, compared to next lowest of 46). With so few languages, it is difficult to say whether the relative number of unique labels affect the relative performance when trained on surface vs. canonical segmentation. More corpora should be included for this question to be explored further.",
"cite_spans": [
{
"start": 155,
"end": 160,
"text": "[lez]",
"ref_id": null
},
{
"start": 242,
"end": 247,
"text": "[lmk]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint vs Sequential Results",
"sec_num": "6.1"
},
{
"text": "When half of the total data is used, the comparison of surface and canonical segmentation paints a less clear picture. The differences when going from surface to canonical segmentation are shown in Table 3 . The general trend when comparing segmentation strategies is that languages with a higher ratio of unique labels to total tokens do better with canonical segmentation. The differences are quite small for Alas [btz], Lezgi, and Nat\u00fcgu [ntu] . The biggest differences are found in Lamkang and Manipuri [mni], but their improvement goes in opposite directions. Surface segmentation gives higher scores for Lamkang data while Manipuri has higher scores with canonical. Interestingly, these two languages have the largest difference of the number of unique labels between surface and canonically segmented data. In Lamkang and Manipuri training data, the average number of unique joint labels increased by over 500 and 400, respectively, and in the segmentation step of the sequential system the number of segments increased by over 350. In the other languages the largest average increase of labels is 88 but usually the differences are less than 15. Since Lamkang and Manipuri belong to the same family, it is possible that significant differ- Table 4 : Results the joint model with surface and canonical segmentation strategies when using half the training data compared to all training data.",
"cite_spans": [
{
"start": 441,
"end": 446,
"text": "[ntu]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 1248,
"end": 1255,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Surface vs. Canonical Results",
"sec_num": "6.2"
},
{
"text": "ences in segmentation strategies are due to characteristics of their familial morphological structure, but it could be due to other factors such as idiosyncratic choices in the orthographic representation. The differences in the results in both joint and sequential systems are shown in Table 3 . The effect of the segmentation strategy is roughly the same in both systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 294,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Surface vs. Canonical Results",
"sec_num": "6.2"
},
{
"text": "The segmentation strategies were also compared using all available data in the joint system. Table 4 shows the how doubling the training data affects the performance. Doubling the training data always improves F 1 -scores by about .2 to .4 points. While the difference between the two strategies becomes less noticeable when the data is increased, canonical segmentation tends to outperform surface segmentation, but in all languages the difference between the strategies becomes almost negligible (less than .15 points).",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Surface vs. Canonical Results",
"sec_num": "6.2"
},
{
"text": "A closer look at the results reveals interesting patterns. One significant factor in system performance is sparsity of data. Unsurprisingly, most errors occur on rarer forms. Another factor is the amount of inconsistencies or errors in the manually annotated data. Annotation quality can amplify data sparsity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Allomorphy and isomorphy (same character sequence, different meaning) caused repeated errors during the glossing step and joint learning, where it becomes quite obvious that the model must deal with multiple options. For example, the Lezgi suffix -di 8 has five possible glosses as shown by the joint labels in (7). These morphological phenomena are a moot issue during the segmentation step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "(7) -\u0434\u0438#ENT -\u0434\u0438#DIR -\u0434\u0438#ERG -\u0434\u0438#OBL -\u0434\u0438#SBST Sometimes multiple glosses are not due to morphological structure, but because the same morph(eme) was given different glosses. For example, interchanging 'be' and 'is' and 'COP' for copular verbs or alternating between lexical glosses (e.g. 'you') and grammatical glosses (e.g. '2SG.ERG'). Sometimes different glosses appear because the item can be translated by different English words depending on the context. For example, one Lezgi word can be, and is, translated as 'be' in some context or 'happen' in others. If alternative labels such as bahaye#danger and bahaye#dangerous are equally frequent, the model must choose randomly. Such inconsistency is to be expected from manual work and could be reduced with more automated assistance from machine learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Another pattern of errors is caused by tokens that were only partially segmented (and therefore, not correctly glossed). We knew that many such tokens were included in the gold standard data but there was no reliable way to eliminate them automatically. It is unclear how many exist in each corpus, although Alas and Nat\u00fcgu seem to have the least. Manipuri [mni] and Lezgi seem to have most incomplete segmentation. This became clear for Manipuri during another project when a language expert was asked to the correct the glosses for several inflected words. It appears that, in the data set, the annotators had been focused only on segmented and glossing certain morphemes on each word, leaving other affixes on the word unsegmented. The Lezgi data was annotated by a non-linguist who was trained to use FLEx and did not fully grasp Lezgi's unique morphology or simply did not finish segmenting all words.",
"cite_spans": [
{
"start": 357,
"end": 362,
"text": "[mni]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Many quality issues unpredictably increase the number of possible labels and amplify data sparsity. An example is repeated mispelling of glosses (e.g. apperance-appereanceappearance, fourty-forty). Other misspellings originate in transcription. In the Lezgi test data, over 50 misspelled or incorrectly segmented strings were found in the first 200 hundred unique segments, although a few spelling changes are representation of dialectical variations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "The results from the Alas corpus were quite good when compared to the much larger corpora. However, the errors are less predictable and more random. It seems likely that the small data set increased the noise to signal ratio and obscured general patterns. One noticeable confusion was caused by the canonical representation of circumfixes. This is shown in (8) where the model predicted a prefix n-. This prefix is a correct surface allomorph of the circumfix at that position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "(8) a. GOLD: n\u2329\u232aken-nindekh -n\u2329\u232aken OUTPUT: n-nindekh -n\u2329\u232aken Nevertheless, error analysis shows that the models deal with data sparsity quite well. Even incorrect segments often have very similar character sequences to the correct choice, particularly when the difference is due to a change in the root vowel (e.g. dakhi \u223c dikhi). One of the most interesting errors, indicating the model's strong ability to learn patterns even in the face of data sparsity, occurred in Lezgi. The transcribed oral speech has a few dozen codeswitched Russian words. The test data include one or two examples, and in one case the model substituted one codeswitched word with another codeswitched word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "Many errors noted during error analysis were not actually errors. Since the annotation was originally done by hand, sometimes by multiple annotators, the glosses varied due to misspellings or synonomous glossing choices (e.g. 'BE.PST' vs. 'was'). There was a clear pattern in all datasets for one of the variants to be predicted rather than a random, unrelated label. These cases would not be considered errors by human annotators but were evaluated automatically as errors in the test data. For instance, one Lezgi demonstrative pronoun was sometimes glossed as 'these' and sometimes as 'this -ABS.PL'. In at least once case, the second (and more linguistically precise) analysis was predicted. Unfortunately, because we did not have access to language experts for every corpora, we were not able to normalize our scores based on this knowledge; however, in the future it may be useful to consider that the performance of models trained on field data may, for all practical purposes, be better than the initial scores indicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In other cases, the labels in the test data were evaluated as errors, but closer examination revealed that the original human annotation were incorrect in that particular instance and the predicted label was actually the best fit to the data. So, an human error had been \"corrected\". Word instances that had been incorrectly segmented by the human annotators were sometimes correctly segmented by the model, although again these examples were evaluated as incorrect because they did not match the gold standard data. For Lezgi, these examples of \"correction\" by the model were more frequent in the sequential system, and may explain why biggest improvement by the sequential system over the joint system is found in the Lezgi data, which we know had many incorrect or incomplete segmentations. Again, due to the lack of language experts, we are unable to say whether this holds true for all corpora but this should be explored deeper in future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "This paper is aimed at smoothing the road to more interdisciplinary work with NLP and linguistics by articulating and examining the results of different research designs. Different research designs arise from different expectations or conventions in the two fields. Although they do not present barriers to mutually beneficial research, different expectations, such as in segmentation strategies, and different workflows, such as joint or separate segmentation/glossing, should not be dismissed when they arise. This paper tests the possible effects of these two differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "The small difference between surface and canonical segmentation for three of the five languages suggests either strategy is a useful approach with minimal data, although this changes when data is increased in the joint model. Even though surface segmentation increases the num-ber of labels in a dataset, this appears to be balanced by the by the abstract character of canonical morphemes, most noticeably by circumfixes. The fact that the difference almost disappears when the data size is doubled indicates that the question of segmentation strategy can be eliminated by simply annotating more data with whatever strategy suits the project at hand. However, larger differences on Lamkang and Manipuri corpora indicate that the reasons why segmentation strategies does sometimes differ in performance on the same corpsu should be explored more across other Tibeto-Burman languages. Testing the differences in related languages might indicate whether certain linguistics features influence the results of different segmentation strategy when integrating NLP systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "The consistent improvement of the sequential system over joint learning may be a reason to consider separating segmentation and glossing tasks in order to leverage the higher accuracy of segmentations , and a more completely segmented corpus, when glossing the corpus. The strenght of the sequential system might be applied when a corpus cannot be completely segmented and glossed due to budget or time constraints. Instead, a strategy would be to prioritize segmenting and benefit from computational assistance when glossing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "Finally, these studies could serve as a foundation towards more efficient use of computational methods in linguistic analysis and annotation. This paper shows, for example, that the glossing-only model performs well even on inaccurate segmentation predictions and can even \"correct\" manual segmentation errors. The study presented here assumes that the model's segmentation is not corrected by the language experts before training the glossing model. If a humanin-the-loop workflow was introduced to first correct segmentations, then the glossing-only model could improve even more. Such methodological considerations should be tested to see to what extent linguistic analysis and annotation of endangered language might benefit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "Finally, as McMillan-Major (2020) noted in glossing research, consistency of the annotations has a strong effect on system performance. This is most clearly seen in Lezgi which is known to be particularly noisy. Random strange characters were found at morpheme boundaries (e.g. * instead of -). The human annotators fre-quently segmented one pair of characters whenever it occurred because it matched a frequent suffix. Allomorphs were frequently glossed as if they were different morphemes, undoing the benefit of canonical segmentation. Finally, its unique casestacking caused confusion both to the human annotator and to the system results, in particular because one morpheme with several semanticallymotivated allomorphs is (incorrectly) glossed one way when it stands as a single case marker and glossed another way when it precedes additional case markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "So what would happen if linguists emphasized quality over quantity? We can answer this question by comparing Lezgi to Alas. According to the accounts of the linguists involved, and evidenced by our experimental results, the Alas data was annotated much more consistently and meticulously. With a corpus one third the size of the Lezgi corpus, the Alas model performs almost equally well. It is possible but seems unlikely that this is due to differing morphological structure. Unlike Lezgiwhich is overwhelmingly suffixing and has fairly limited morphophonological changes-Alas features prefixing, suffixing, circumfixing, and infixing with various morphophonological processes. The main difficulty for the Alas systems was the sparsity of stems, compared to oft-repeated affixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "Interestingly, Alas showed the least marked preference between sequential and joint learning. This may indicate that higher consistency may eliminate the need to consider any change to segmentation/glossing workflow, but it should be investigated with further experiments focused on differences in annotation quality. Preferably these experiments would conducted on closely related languages to reduce effects due to different typology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "When considering low-resource settings, consistency for machine learning seems more important than data size, strategy, or workflow. Ruthless consistency is not something linguists have had reason to put high value on and it is not something to be expected by manual annotation, Consistency can be provided by machine learning integration, but ironically, supervised machine learning needs high consistency in annotated data before it can perform accurately enough to assist human annotators by increasing their speed or accuracy. Our best estimate of the accuracy threshold for practi-cal integration of machine learning into annotation is 60% (Felt, 2012) . This threshold on F 1 -scores was soundly passed by Lamkang because it over 18k manually annotated tokens for training but it was barely reached by the corpora with 4.5k-5.5k tokens. However, the meticulously annotated Alas corpus came close to this threshold with only 1.5k training tokens. If linguists wish to successfully integrate machine learning into the documentation and description of underdocumented and endangered languages, then they must adopt from NLP an emphasis on highly consistent annotation.",
"cite_spans": [
{
"start": 645,
"end": 657,
"text": "(Felt, 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "8"
},
{
"text": "Rights holders gave informed consent to use the data for this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://digital.library.unt.edu/ explore/collections/SAALT 4 The Lezgi is currently being deposited at the SIL Language and Cultures Archives.5 https://digital.library.unt.edu/ explore/collections/MDR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.sil.org/resources/search/ language/ntu 7 Code available here: github.com/shijie-wu/ neural-transducer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In running text, Lezgi text is transliterated from the Cyrillic orthography for the reader's convenience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards neural machine translation for African languages",
"authors": [
{
"first": "Jade",
"middle": [
"Z"
],
"last": "Abbott",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Martinus",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.05467"
]
},
"num": null,
"urls": [],
"raw_text": "Jade Z. Abbott and Laura Martinus. 2018. Towards neural machine translation for African languages. arXiv:1811.05467 [cs, stat].",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Supervised Morphological Segmentation Using Rich Annotated Lexicon",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Mahmoudi",
"suffix": ""
},
{
"first": "Hamid",
"middle": [],
"last": "Haghdoost",
"suffix": ""
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Vidra",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "52--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ebrahim Ansari, Zden\u011bk \u017dabokrtsk\u00fd, Mohammad Mahmoudi, Hamid Haghdoost, and Jon\u00e1\u0161 Vidra. 2019. Supervised Morphological Segmentation Us- ing Rich Annotated Lexicon. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 52-61, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "ELAN as flexible annotation framework for sound and image processing detectors",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Russel",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Sloetjes",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Wittenburg",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Schreer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Masnieri",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Tsch\u00f6pel",
"suffix": ""
}
],
"year": 2010,
"venue": "European Language Resources Association LREC 2010: Proceedings of the 7th International Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "890--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Auer, Albert Russel, Han Sloetjes, Peter Witten- burg, Oliver Schreer, S. Masnieri, Daniel Schneider, and Sebastian Tsch\u00f6pel. 2010. ELAN as flexible an- notation framework for sound and image processing detectors. In European Language Resources Asso- ciation LREC 2010: Proceedings of the 7th Interna- tional Language Resources and Evaluation, pages 890-893. European Language Resources Associa- tion.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An overview of FieldWorks and related programs for collaborative lexicography and publishing online or as a mobile app",
"authors": [
{
"first": "David",
"middle": [],
"last": "Baines",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the XVIII Euralex International Congress",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Baines. 2018. An overview of FieldWorks and related programs for collaborative lexicography and publishing online or as a mobile app. In Proceed- ings of the XVIII Euralex International Congress, Ljubliana, Slovenia. Ljubliana University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Grammar of Meithei",
"authors": [
{
"first": "Chelliah",
"middle": [],
"last": "Shobhana Lakshmi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shobhana Lakshmi Chelliah. 1997. A Grammar of Meithei, volume 17 of Mouton Grammar Library. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Labeled morphological segmentation with semi-markov models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "164--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Thomas M\u00fcller, Alexander M. Fraser, and Hinrich Sch\u00fctze. 2015. Labeled morphological segmentation with semi-markov models. In CoNLL, pages 164-174.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised discovery of morphemes",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 workshop on Morphological and phonological learning",
"volume": "6",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 workshop on Morphological and phonolog- ical learning-Volume 6, pages 21-30. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ethnologue: Languages of the World, twenty-third edition",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Eberhard",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"F"
],
"last": "Simons",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"D"
],
"last": "Fennig",
"suffix": ""
}
],
"year": 2020,
"venue": "SIL International",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Eberhard, Gary F. Simons, and Charles D. Fennig, editors. 2020. Ethnologue: Languages of the World, twenty-third edition. SIL International, Dallas, Texas.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving the Effectiveness of Machine-Assisted Annotation",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Felt",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Felt. 2012. Improving the Effectiveness of Machine-Assisted Annotation. Phd thesis, Brigham Young University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "From Aari to Zulu: Massively Multilingual Creation of Language Tools using Interlinear Glossed Text",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Georgi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Georgi. 2016. From Aari to Zulu: Massively Multilingual Creation of Language Tools using In- terlinear Glossed Text. PhD, University of Wash- ington.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised learning of the morphology of a natural language",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational linguistics",
"volume": "27",
"issue": "2",
"pages": "153--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational linguistics, 27(2):153-198.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Computational learning of morphology. Annual Review",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
},
{
"first": "Jackson",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aris",
"middle": [],
"last": "Xanthos",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Goldsmith, Jackson Lee, and Aris Xanthos. 2017. Computational learning of morphology. Annual Re- view, 3.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From phoneme to morpheme",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1970,
"venue": "Papers in Structural and Transformational Linguistics",
"volume": "",
"issue": "",
"pages": "32--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1970. From phoneme to morpheme. In Papers in Structural and Transformational Lin- guistics, pages 32-67. Springer Netherlands, Dor- drecht.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando C N",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando C N Pereira. 2001. Conditional Random Fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning, pages 282- 289.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Morphological reinflection with conditional random fields and unsupervised features",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lingshuang Jack",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "36--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling Liu and Lingshuang Jack Mao. 2016. Morpho- logical reinflection with conditional random fields and unsupervised features. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 36-40.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A focus on neural machine translation for African languages",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Martinus",
"suffix": ""
},
{
"first": "Jade",
"middle": [
"Z"
],
"last": "Abbott",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Martinus and Jade Z. Abbott. 2019. A focus on neural machine translation for African languages. ArXiv, abs/1906.05685.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automating gloss generation in interlinear glossed text",
"authors": [
{
"first": "Angelina",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angelina McMillan-Major. 2020. Automating gloss generation in interlinear glossed text. volume 3.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic glossing in a low-resource setting for language documentation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Moeller",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages",
"volume": "",
"issue": "",
"pages": "84--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Moeller and Mans Hulden. 2018. Automatic glossing in a low-resource setting for language docu- mentation. In Proceedings of the Workshop on Com- putational Modeling of Polysynthetic Languages, pages 84-93. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Atlas of the World's Languages in Danger",
"authors": [],
"year": 2010,
"venue": "UNESCO Publishing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Moseley, editor. 2010. Atlas of the World's Languages in Danger, third edition. UNESCO Pub- lishing, Paris.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Reefs-santa Cruz as Oceanic: Evidence from the verb complex",
"authors": [
{
"first": "\u00c5shild",
"middle": [],
"last": "Naess",
"suffix": ""
},
{
"first": "Brenda",
"middle": [
"H"
],
"last": "Boerger",
"suffix": ""
}
],
"year": 2008,
"venue": "Oceanic Linguistics",
"volume": "47",
"issue": "",
"pages": "185--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c5shild Naess and Brenda H. Boerger. 2008. Reefs-santa Cruz as Oceanic: Evidence from the verb complex. Oceanic Linguistics, 47:185- 212.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised morphological segmentation with log-linear models",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "209--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Language Technologies: The 2009 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 209-217. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic interlinear glossing as two-level sequence classification",
"authors": [
{
"first": "Tanja",
"middle": [],
"last": "Samardzic",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schikowski",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Stoll",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanja Samardzic, Robert Schikowski, and Sabine Stoll. 2015. Automatic interlinear glossing as two-level sequence classification. In Proceed- ings of the 9th SIGHUM Workshop on Lan- guage Technology for Cultural Heritage, Social Sci- ences, and Humanities (LaTeCH), Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving low resource machine translation using morphological glosses",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Shearing",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Huda",
"middle": [],
"last": "Khayrallah",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Shearing, Christo Kirov, Huda Khayrallah, and David Yarowsky. 2018. Improving low resource ma- chine translation using morphological glosses. In Proceedings of the 13th Conference of the Associ- ation for Machine Translation in the Americas (Vol- ume 1: Research Papers). Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Morfessor 2.0: Toolkit for statistical morphological segmentation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "21--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Smit, Sami Virpioja, Stig-Arne Gr\u00f6nroos, and Mikko Kurimo. 2014. Morfessor 2.0: Toolkit for statistical morphological segmentation. In Proceed- ings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Compu- tational Linguistics, pages 21-24, Gothenburg, Swe- den. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Lamkang language: Grammatical sketch, texts and lexicon. Linguistics of the Tibeto-Burman Area",
"authors": [
{
"first": "Harimohon",
"middle": [],
"last": "Thounaojam",
"suffix": ""
},
{
"first": "Shobhana",
"middle": [
"L"
],
"last": "Chelliah",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "30",
"issue": "",
"pages": "1--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harimohon Thounaojam and Shobhana L. Chelliah. 2007. The Lamkang language: Grammatical sketch, texts and lexicon. Linguistics of the Tibeto-Burman Area, 30(1):1-212.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, NIPS'17, pages 6000-6010, Long Beach, Cal- ifornia, USA. Curran Associates Inc.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Empirical comparison of evaluation methods for unsupervised learning of morphology",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Ville",
"middle": [],
"last": "Turunen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "Oskar",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2011,
"venue": "TAL",
"volume": "52",
"issue": "2",
"pages": "45--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Ville Turunen, Sebastian Spiegler, Os- kar Kohonen, and Mikko Kurimo. 2011. Empirical comparison of evaluation methods for unsupervised learning of morphology. TAL, 52(2):45-90.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Applying the transformer to character-level transduction",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.10213"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu, Ryan Cotterell, and Mans Hulden. 2020. Applying the transformer to character-level trans- duction. arXiv:2005.10213 [cs.CL].",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Using interlinear glosses as pivot in low-resource multilingual machine translation. arXiv: Computation and Language",
"authors": [
{
"first": "Zhong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lori",
"middle": [
"S"
],
"last": "Levin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhong Zhou, Lori S. Levin, David Mortensen, and Alex Waibel. 2020. Using interlinear glosses as pivot in low-resource multilingual machine transla- tion. arXiv: Computation and Language.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(6) a. SURFACE: ke-STEM -en b. CANON.: ke\u2329\u232aen-STEM -ke\u2329\u232aen",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "The approximate total token considers multiple word expressions (when parsed as such) as single tokens. The percentage and total number of tokens that are both segmented (canonical and surface) and glossed are shown.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Nat\u00fcgu [ntu] belongs to the Reefs-Santa Cruz group in the Austronesian family spoken by about 4,000 people in the Temotu Province of the Solomon Islands. It has mainly agglutinative morphology with complex verb structures (\u00c5shild",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Canonical Joint Seq Joint Seq Alas .4280 .4565 .5166 .5291 Lamkang .7091 .7391 .5414 .5785 Lezgi .5489 .6062 .4993 .5371 Manipuri .4719 .5067 .6401 .6675 Nat\u00fcgu .5423 .5263 .6083 .6335 Average .5400 .5670 .5011 .5895",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "F 1 -scores of Transformer joint and sequential models on both segmentation strategies. Scores are an average across a 10-fold cross-validation. The bottom row shows the average score across all languages.",
"num": null,
"content": "<table><tr><td colspan=\"2\">(5) a. SURFACE: tax#levy -es#PL</td></tr><tr><td>b. CANON.:</td><td>tax#levy -s#PL</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "Average F 1 differences between surface and canonical segmentation strategies. Positive scores mean surface segmentation outperformed canonical segmentation.",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Surface Canon</td></tr><tr><td>Alas</td><td>.4280</td><td>.5166</td></tr><tr><td>Alas all</td><td>.6771</td><td>.6902</td></tr><tr><td>Lamkang</td><td>.7091</td><td>.5414</td></tr><tr><td>Lamkang all</td><td>.8547</td><td>.8573</td></tr><tr><td>Lezgi</td><td>.5489</td><td>.4993</td></tr><tr><td>Lezgi all</td><td>.7834</td><td>.7735</td></tr><tr><td>Manipuri</td><td>.4719</td><td>.6401</td></tr><tr><td>Manipuri all</td><td>.8693</td><td>.8903</td></tr><tr><td>Nat\u00fcgu</td><td>.5423</td><td>.6083</td></tr><tr><td>Nat\u00fcgu all</td><td>.8965</td><td>.8995</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}