|
{ |
|
"paper_id": "D13-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:41:59.962180Z" |
|
}, |
|
"title": "The Effects of Syntactic Features in Automatic Prediction of Morphology", |
|
"authors": [ |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Seeker", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Stuttgart", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Stuttgart", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Morphology and syntax interact considerably in many languages and language processing should pay attention to these interdependencies. We analyze the effect of syntactic features when used in automatic morphology prediction on four typologically different languages. We show that predicting morphology for languages with highly ambiguous word forms profits from taking the syntactic context of words into account and results in state-ofthe-art models.", |
|
"pdf_parse": { |
|
"paper_id": "D13-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Morphology and syntax interact considerably in many languages and language processing should pay attention to these interdependencies. We analyze the effect of syntactic features when used in automatic morphology prediction on four typologically different languages. We show that predicting morphology for languages with highly ambiguous word forms profits from taking the syntactic context of words into account and results in state-ofthe-art models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper, we investigate the interplay between syntax and morphology with respect to the task of assigning morphological descriptions (or tags) to each token of a sentence. Specifically, we examine the effect of syntactic information when it is integrated into the feature model of a morphological tagger. We test the effect of syntactic features on four languages -Czech, German, Hungarian, and Spanish -and find that syntactic features improve our tagger considerably for Czech and German, but not for Hungarian and Spanish. Our analysis of constructions that show morpho-syntactic agreement suggests that syntactic features are important if the language shows frequent word form syncretisms 1 that can be disambiguated by the syntactic context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The meaning of a sentence is structurally encoded by morphological and syntactic means. 2 Different languages, however, use them to a different extent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Languages like English encode grammatical information (like the subject vs object status of an argument) via word order, whereas languages like Czech or Hungarian use different word forms. Automatic analysis of languages with rich morphology needs to pay attention to the interaction between morphology and syntax in order to arrive at suitable computational models. Linguistic theory (e. g., Bresnan (2001) , Mel\u010duk (2009) ) suggests many interactions between morphology and syntax. For example, languages with a case system use different forms of the same word to mark different syntactic (or semantic) relations (Blake, 2001 ). In many languages, two words that participate in a syntactic relation show covariance in some or all of their morphological features (so-called agreement, Corbett (2006) ). 3 Automatic annotation of morphology assigns morphological descriptions (e. g., nominativesingular-masculine) to word forms. It is usually modeled as a sequence model, often in combination with part-of-speech tagging and lemmatization (Collins, 2002; Haji\u010d, 2004; Smith et al., 2005; Chrupa\u0142a et al., 2008, and others) . Sequence models achieve high accuracy and coverage but since they only use linear context they only approximate some of the underlying hierarchical relationships. As an example for these hierarchical relationships, Figure 1 shows a German noun phrase taken from the German TiGer corpus (Brants et al., 2002) . The two bold-faced words are the determiner and the head noun of the phrase, and they agree in their gender, number, and case values. The word Regionen (regions) is four-way ambiguous for its case value, which is reduced to a two-way ambiguity between nominative and accusative by the determiner. Further disambiguation would require information about the syntactic role of the noun phrase in a sentence. There are 11 tokens between these two words, which would require a context window of at least 13 to capture the agreement relation within a sequence model. Syntactically, however, as indicated by the dependency tree, the determiner and the head are linked directly. The interdependency between morphology and syntax in the example thus manifests itself in the morphological disambiguation of a highly syncretic word form because of its government or agreement relation to its respective syntactic head/dependents. Of course, the sequence model is most of the time a reasonable approximation, because the majority of noun phrases in the TiGer corpus are not as long as the example in Figure 1 . 4 Furthermore, not all languages show this kind of relationship between morphological forms and syntactic relation as demonstrated for German. But taking advantage of the morphosyntactic dependencies in a language can give us better models that may even be capable of handling the more difficult or rare cases. We therefore advocate that models for predicting morphology should be designed with the typological characteristics of a language and its morphosyntactic properties in mind, and should, where appropriate, integrate syntactic information in order to better model the morphosyntactic interdependencies of the language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 407, |
|
"text": "Bresnan (2001)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 423, |
|
"text": "Mel\u010duk (2009)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 627, |
|
"text": "(Blake, 2001", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 800, |
|
"text": "(so-called agreement, Corbett (2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 804, |
|
"end": 805, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1054, |
|
"text": "(Collins, 2002;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1055, |
|
"end": 1067, |
|
"text": "Haji\u010d, 2004;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1068, |
|
"end": 1087, |
|
"text": "Smith et al., 2005;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1088, |
|
"end": 1122, |
|
"text": "Chrupa\u0142a et al., 2008, and others)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1411, |
|
"end": 1432, |
|
"text": "(Brants et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2534, |
|
"end": 2535, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1340, |
|
"end": 1348, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 2523, |
|
"end": 2531, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the remainder of the paper, we show empirically that taking syntactic information into account produces state-of-the-art models for languages with a high interdependency between morphology and syntax. We use a simple setup, where we combine a morphological tagger and a dependency parser in a bootstrapping architecture in order to analyze the effect of syntactic information on the performance of the morphological tagger (Section 2). Using syntactic features in morphology prediction requires a syntactically annotated corpus for training a statistical parser, which may not be available for languages with few resources. We show in Section 3 that only very little syntactically annotated data is required to achieve the improvements. We furthermore expect that the improved morphological information also improves parsing performance and present a preliminary experiment in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we present a series of experiments that investigate the effect of syntactic information on the prediction of morphological features. We start by describing our data sets and the system that we used for the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We test our hypotheses on four different languages: Czech, German, Hungarian, and Spanish.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Spanish, a Romance language, and German, a Germanic language, constitute inflecting languages that show verbal and nominal morphology, but not as sophisticated as Czech and Hungarian. As we will see in the experiments, it is relatively easy to predict the morphological information annotated in the Spanish data set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Czech and Hungarian represent languages with very rich morphological systems both in verbal and nominal morphological paradigms. They differ significantly in the way in which morphological information is encoded in word forms. Czech, a Slavic language, is an inflecting language, where one suffix may signal several different morphological categories simultaneously (e. g., number, gender, case). In contrast, Hungarian, a Finno-Ugric language, is of the agglutinating type, where each morphological category is marked by its own morpheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Both German and Czech show various form syncretisms in their inflection paradigms. Form syncretisms emerge when the same word form is ambiguous between several different morphological descriptions, and they are a major challenge to automatic morphological analysis. Spanish shows syncretism in the verbal inflection paradigms. In Hungarian, form syncretisms are much less frequent. The case paradigm of Hungarian only shows one form syncretism between dative and genitive case (out of about 18 case suffixes).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "All languages show agreement between subject and verb, and within the noun phrase. The word order in Czech and Hungarian is very variable whereas it is more restrictive in Spanish and German.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As our data, we use the CoNLL 2009 Shared Task data sets for Czech and Spanish. For German, we use the dependency conversion of the TiGer treebank by Seeker and Kuhn (2012) , splitting it into 40k/5k/5k sentences for training/development/test. For Hungarian, we use the Szeged Dependency Treebank (Vincze et al., 2010) , with the split of Farkas et al. (2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 172, |
|
"text": "Seeker and Kuhn (2012)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 318, |
|
"text": "(Vincze et al., 2010)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 359, |
|
"text": "Farkas et al. (2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages and Data Sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "To test our hypotheses, we implemented a tagger that assigns full morphological descriptions to each token in a sentence. The system was inspired by the morphological tagger included in mate-tools. 5 Like the tagger provided with mate-tools, it is a classifier that tags each token using the surrounding tokens in its feature model. Models are trained using passiveaggressive online training (Crammer et al., 2003) . The system makes two passes over each sentence: The first pass provides predicted tags that are used as features during the second pass. We also adopted the idea of a tag filter, which deterministically assigns tags for words that always occur with the same tag in the training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 199, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 414, |
|
"text": "(Crammer et al., 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2.2" |
|
}, |
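To make the two-pass scheme and the tag filter concrete, here is a minimal Python sketch; the function and variable names are hypothetical, and `classify` stands in for the trained passive-aggressive classifier rather than reproducing the authors' implementation.

```python
from collections import defaultdict

def build_tag_filter(training_sentences):
    """Deterministic tag filter: map every word form that always occurs
    with the same tag in the training data to that single tag."""
    tags_seen = defaultdict(set)
    for sentence in training_sentences:
        for form, tag in sentence:
            tags_seen[form].add(tag)
    return {form: next(iter(tags))
            for form, tags in tags_seen.items() if len(tags) == 1}

def tag_sentence(tokens, classify, tag_filter):
    """Two passes over one sentence: the tags predicted in the first pass
    are fed back as (dynamic) features in the second pass.

    classify(tokens, i, context_tags) is a stand-in for the trained
    classifier and returns a morphological tag for token i."""
    no_context = [None] * len(tokens)
    first_pass = [tag_filter.get(tok) or classify(tokens, i, no_context)
                  for i, tok in enumerate(tokens)]
    return [tag_filter.get(tok) or classify(tokens, i, first_pass)
            for i, tok in enumerate(tokens)]
```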
|
{ |
|
"text": "For all matters of syntactic annotation in this paper, we use the graph-based dependency parser by Bohnet (2010) , also included in mate-tools. All data sets are annotated with gold syntactic information, which is used to train the parsing models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 112, |
|
"text": "Bohnet (2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For our experiments, we use a bootstrapping approach: the parser uses the output of the morphology in its feature set, and the morphological tagger we want to analyze uses the output of the parser as syntactic features. Since it is best to keep the training setting as similar as possible to the test setting, we use 10-fold jackknifing to annotate our training data with predicted morphology or syntax respectively. Jackknifing differs from cross-validation only in its purpose. Cross-validation is used for evaluating data, jackknifing is used to annotate data. The data set is split into n parts, and n-1 parts are used to train a model for annotating the n th part. This is then rotated n times such that each part is annotated by the automatic tool without training it on its own test data. Jackknifing is important for creating a realistic training scenario that provides automatic preprocessing. For annotating development and test sets, models are trained on the jackknifed training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2.2" |
|
}, |
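A minimal sketch of the jackknifing procedure described above (helper names are hypothetical; `train` and `annotate` stand in for the actual tagger or parser):

```python
def jackknife_annotate(sentences, n_folds, train, annotate):
    """Annotate every sentence with automatic predictions such that no
    sentence is processed by a model that saw it during training.

    train(data) returns a model; annotate(model, sentence) returns an
    annotated copy of the sentence."""
    fold_size = -(-len(sentences) // n_folds)  # ceiling division
    folds = [sentences[i:i + fold_size]
             for i in range(0, len(sentences), fold_size)]
    annotated = []
    for held_out_index, held_out in enumerate(folds):
        train_data = [s for j, fold in enumerate(folds)
                      if j != held_out_index for s in fold]
        model = train(train_data)
        annotated.extend(annotate(model, s) for s in held_out)
    return annotated
```

Development and test sets would then be annotated with a model trained on the jackknifed training set, as described above.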
|
{ |
|
"text": "In the first experiment, we use the system described in Section 2.2 to predict morphological information on all four languages. We start with describing the general setup and the feature set, and continue with a discussion of the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The experimental setup is as follows: the German and Spanish data sets are annotated with lemma and part-of-speech information using 10-fold jackknifing. The annotation is done with mate-tools' lemmatizer and pos-tagger. For Czech and Hungarian, we keep the annotation provided with the data sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Note that our experimental setup does not include lemmas or part-of-speech tags as part of the prediction of the morphology but annotates them in a pre-processing step. It is not necessary to separate partof-speech and lemma from the prediction of morphology and, in fact, many systems perform these steps simultaneously (e. g. Spoustov\u00e1 et al. (2009) ). Doing morphology prediction as a separate step allows us to use lemma and part-of-speech information in the feature set . Table 1 : Baseline feature set. form means word form, lemma is lemma, pos is part-of-speech, s1/p1 stand for suffix and prefix of length 1 (characters), tag is the morphological tag predicted by the system, 1b/1a means 1 token before/after the current token, and + marks feature conjunctions. number marks if the form contains a digit.", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 351, |
|
"text": "Spoustov\u00e1 et al. (2009)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 484, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
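As an illustration of the notation in Table 1, a sketch of how the static part of such a feature set could be extracted for one token; the dictionary layout of the tokens is an assumption for this example, not the authors' data structure.

```python
def static_features(sentence, i):
    """Static baseline features for token i, loosely following Table 1:
    form, lemma, pos, prefix/suffix of length 1 (p1/s1), the neighbouring
    tokens (1b/1a), a feature conjunction (+), and a digit flag (number)."""
    boundary = {"form": "<s>", "lemma": "<s>", "pos": "<s>"}
    tok = sentence[i]
    before = sentence[i - 1] if i > 0 else boundary
    after = sentence[i + 1] if i + 1 < len(sentence) else boundary
    form = tok["form"]
    return {
        "form": form,
        "lemma": tok["lemma"],
        "pos": tok["pos"],
        "p1": form[:1],                       # prefix of length 1
        "s1": form[-1:],                      # suffix of length 1
        "form_1b": before["form"],            # token before
        "form_1a": after["form"],             # token after
        "pos+form": tok["pos"] + "+" + form,  # feature conjunction
        "number": any(c.isdigit() for c in form),
    }
```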
|
{ |
|
"text": "After preprocessing the data, our baseline system is trained using the feature set shown in Table 1. The baseline system does not make use of any syntactic information but predicts morphological information based solely on tokens and their linear context. The features are divided into static features, which can be computed on the input, and dynamic features, which are computed also on previous output of the system (cf. two passes in Section 2.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The feature sets in Table 1 were developed specifically for our experiments and are the result of an automatic forward/backward feature selection process. The purpose of the feature selection was to arrive at a baseline system that performs well without any syntactic information. With such an optimized baseline system, we can measure the contribution of syntactic features more reliably.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 27, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The last-verb/next-verb and pos+case features are variants of the features proposed in Votrubec (2006) . They extract information about the first verb within the last 10/the next 30 tokens in the sentence. The case feature extracts the case value from previously assigned morphological tags. Note that the verb features are approximating syntactic information by making the assumption that the closest verbs are likely to be syntactic heads for many words. After training the baseline models, we use them to annotate the whole data set with morphological information (using 10-fold jackknifing for the training portions). We then use 10-fold jackknifing again to annotate the data sets with the dependency parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 102, |
|
"text": "Votrubec (2006)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
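A sketch of the verb-context idea, with the window sizes taken from the description above; `is_verb` is a hypothetical part-of-speech check, and the token layout is assumed for illustration.

```python
def verb_context_features(sentence, i, is_verb):
    """First verb among the previous 10 and the next 30 tokens, in the
    spirit of the last-verb/next-verb features."""
    last_verb = next((sentence[j] for j in range(i - 1, max(i - 11, -1), -1)
                      if is_verb(sentence[j])), None)
    next_verb = next((sentence[j] for j in range(i + 1, min(i + 31, len(sentence)))
                      if is_verb(sentence[j])), None)
    return {"last-verb": last_verb["form"] if last_verb else "<none>",
            "next-verb": next_verb["form"] if next_verb else "<none>"}
```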
|
{ |
|
"text": "At this point, all our data sets are annotated with predicted morphology from our baseline system and with syntactic information from the parser, which uses the morphological information from our baseline system in its feature set. We can now retrain our morphological tagger using features that are derived from the dependency trees provided by the parser. Note that this is not a stacking architecture, since the second system does not use the predicted morphology output from the baseline system. The loop simply ensures that we get the best possible syntactic features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We extract two kinds of syntactic features: features of the syntactic head of the current token, and features of the left-most daughter of the current token. We also experimented with other types, e. g. the right-most daughter, but these features did not improve the model. This is likely due to the way these languages encode morphological information and may be different for other languages. From the head and the left-most daughter, we construct features about form, lemma, affixes, and tags. Table 2 lists the syntactic features that we use in the model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 497, |
|
"end": 504, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
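A minimal sketch of the two kinds of syntactic features (head and left-most daughter); the `heads` array and token layout are assumptions for illustration, not the authors' code.

```python
def syntactic_features(sentence, heads, i):
    """Features from the syntactic head and the left-most daughter of token i.
    heads[j] is the head index of token j, or -1 for the artificial root;
    tokens are dicts with 'form', 'lemma' and a (predicted) 'tag'."""
    feats = {}
    h = heads[i]
    if h >= 0:
        head = sentence[h]
        feats["head_form"] = head["form"]
        feats["head_lemma"] = head["lemma"]
        feats["head_tag"] = head["tag"]
    daughters = [j for j, head_index in enumerate(heads) if head_index == i]
    if daughters:
        left_most = sentence[min(daughters)]
        feats["ldaughter_form"] = left_most["form"]
        feats["ldaughter_tag"] = left_most["tag"]
    return feats
```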
|
{ |
|
"text": "With the syntactic features available due to the parsing step, we train new models with the full system. For each language, we run four experiments. The first two are baseline experiments, where we use the off-the-shelf morphological tagger morfette (Chrupa\u0142a et al., 2008) and our own baseline system, both of which do not use any syntactic features. In the third experiment, we evaluate our full system using the syntactic features provided by the dependency parser. As an oracle experiment, we also report results on the full system when using the gold standard syntax from the treebank. and out-of-vocabulary tokens only (oov). Out-ofvocabulary tokens do not occur in the training data. We find trends along several axes: Generally, the syntactic features work well for Czech and German, whereas for Hungarian and Spanish, they do not yield any significant improvement. The improvements for German and Czech are between 0.5 (Czech) and 1.0 (German) percentage points absolute in token accuracy, and between 0.2 (Czech test set) and 2.5 (German dev set) percentage points absolute in accuracy of unknown words. There are no obvious differences between the development and the test set in any of the languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 273, |
|
"text": "(Chrupa\u0142a et al., 2008)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Compared to the morfette baseline, we find our systems to be either superior or equal to morfette in terms of token accuracy. Regarding accuracy on unknown words, morfette outperforms our systems for Hungarian, but is outperformed on Czech and German. For Spanish, all systems yield similar results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Looking at the oracle experiment, we see that for all languages, the system can learn something from syntax. For Czech and German, this is clearly the case, for Hungarian and Spanish, the differences are small but visible. There are pronounced differences between the predicted and the gold syntax experiments in Czech and German. Clearly, the parser makes mistakes that propagate through to the prediction of the morphology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Effects of Syntactic Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The current state-of-the-art in predicting morphological features makes use of morphological lexicons (e.g. Haji\u010d (2000) , Hakkani-T\u00fcr et al. (2002) , Haji\u010d (2004) ). Lexicons define the possible morphological descriptions of a word and a statistical model selects the most probable one among them. In the following experiment, we test whether the contribution of syntactic features is similar or different to the contribution of morphological lexicons.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 120, |
|
"text": "Haji\u010d (2000)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 148, |
|
"text": "Hakkani-T\u00fcr et al. (2002)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 163, |
|
"text": "Haji\u010d (2004)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntax vs Lexicon", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Lexicons encode important knowledge that is difficult to pick up in a purely statistical system, e. g. the gender of nouns, which often cannot be deduced from the word form (Corbett, 1991). 7 We extend our system from the previous experiment to include information from a morphological dictionaries. For Czech, we use the morphological analyzer distributed with the Prague Dependency Treebank 2 (Haji\u010d et al., 2006) . For German, we use DMor (Schiller, 1994) . For Hungarian, we use (Tr\u00f3n et al., 2006) , and for Spanish, we use the morphological analyzer included in Freeling (Carreras et al., 2004) . The output of the analyzers is given to the system as features that simply record the presence of a particular morphological analysis for the current word. The system can thus use the output of any tool regardless of its annotation scheme, especially if the annotation scheme of the treebank is different from the one of the morphological analyzer. Table 4 presents the results of experiments where we add the output of the morphological analyzers to our system. Again, we run experiments with and without syntactic features. For Czech, we also show results from featurama 8 with the feature set developed by Votrubec (2006) . For German, we show results for RFTagger (Schmid and Laws, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 191, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 415, |
|
"text": "(Haji\u010d et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 458, |
|
"text": "(Schiller, 1994)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 502, |
|
"text": "(Tr\u00f3n et al., 2006)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 600, |
|
"text": "(Carreras et al., 2004)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1212, |
|
"end": 1227, |
|
"text": "Votrubec (2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1271, |
|
"end": 1294, |
|
"text": "(Schmid and Laws, 2008)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 952, |
|
"end": 959, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntax vs Lexicon", |
|
"sec_num": "2.4" |
|
}, |
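A sketch of how analyzer output can enter the model as simple presence features, independent of the analyzer's tag set; the analyzer interface and feature names here are hypothetical.

```python
def lexicon_features(analyses):
    """Turn the set of morphological analyses an external analyzer proposes
    for a word into presence features; the tagger never needs to map the
    analyzer's tag set onto the treebank's annotation scheme."""
    feats = {"lex_known": bool(analyses), "lex_num_analyses": len(analyses)}
    for analysis in analyses:
        feats["lex_has=" + analysis] = True  # e.g. "lex_has=Case=Nom|Num=Sg"
    return feats

# Example: an analyzer proposing two case readings for a syncretic form.
print(lexicon_features({"Case=Nom|Num=Pl", "Case=Acc|Num=Pl"}))
```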
|
{ |
|
"text": "As expected, the information from the morphological lexicon improves the overall performance considerably compared to the results in Table 3 , especially on unknown tokens. This shows that even with the considerable amounts of training data available nowadays, rule-based morphological analyzers are important resources for morphological description (cf. Haji\u010d (2000) ). The contribution of syntactic features in German and Czech is almost the same as in the previous experiment, indicating that the syntactic features contribute information that is orthogonal to that of the morphological lexicon. The lexicon provides lexical knowledge about a word form, while the syntactic features provide the syntactic context that is needed in German and Czech to decide on the right morphological tag.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 367, |
|
"text": "Haji\u010d (2000)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 140, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntax vs Lexicon", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "From the previous experiments, we conclude that syntactic features help in the prediction of morphology for Czech and German, but not for Hungarian and Spanish. To further investigate the difference between Czech and German on the one hand, and Hungarian and Spanish on the other, we take a closer look at the output of the tagger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Differences", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "We find an interesting difference between the two pairs of languages, namely the performance with respect to agreement. Agreement is a phenomenon where morphology and syntax strongly interact. Morphological features co-vary between two items in the sentence, but the relation between these items can occur at various linguistic levels (Corbett, 2006) . If the syntactic information helps with predicting morphological information, we expect this to be particularly helpful with getting agreement right. All languages show agreement to some extent. Specifically, all languages show agreement in number (and person) between the subject and the verb of a clause. Czech, German, and Spanish show agreement in number, gender, and case (not Spanish) within a noun phrase. Hungarian shows case agreement within the noun phrase only rarely, e.g. for attributively used demonstrative pronouns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 350, |
|
"text": "(Corbett, 2006)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Differences", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "In order to test the effect on agreement, we measure the accuracy on tokens that are in an agreement relation with their syntactic head. We counted subject verb agreement as well as agreement with respect to number, gender, and case (where applicable) between a noun and its dependent adjective and determiner. Table 5 displays the counts from the devel-opment sets of each language. We compare the baseline system that does not use any syntactic information with the output of the morphological tagger that uses the gold syntax. We use the gold syntax rather than the predicted one in order to eliminate any influence from parsing errors. As can be seen from the results, the level of agreement relations in Czech and German improves when using syntactic information, whereas in Spanish and Hungarian, only very tiny changes occur. Table 5 : Agreement counts in morphological annotation compared between the baseline system and the oracle system using gold syntax.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 318, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 833, |
|
"end": 840, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Differences", |
|
"sec_num": "2.5" |
|
}, |
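A sketch of the agreement evaluation described above, under an assumed token layout (gold and predicted feature maps plus a flag for tokens that stand in an agreement relation with their head); this is illustrative, not the authors' evaluation script.

```python
def agreement_accuracy(sentences, features=("number", "gender", "case")):
    """Share of agreeing tokens whose predicted values match the gold values
    on the morphological features that participate in the agreement."""
    correct = total = 0
    for sentence in sentences:
        for tok in sentence:
            if not tok["in_agreement"]:
                continue
            total += 1
            relevant = [f for f in features if f in tok["gold"]]
            if all(tok["pred"].get(f) == tok["gold"][f] for f in relevant):
                correct += 1
    return correct / total if total else 0.0
```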
|
{ |
|
"text": "For Czech and German, these results sugguest that syntactic information helps with agreement. We believe that the reasons why it does not help for Hungarian and Spanish are the following: for Spanish, we see that also the baseline model achieves very high accuracies (cf. Table 3 ) and also high rates of correct agreement. It seems that for Spanish, syntactic context is simply not necessary to make the correct prediction. For Hungarian, the reason lies within the inflectional paradigms of the language, which do not show any form syncretism, meaning that word forms in Hungarian are usually not ambiguous within one morphological category (e.g. case). Making a morphological tag prediction, however, is difficult only if the word form itself is ambiguous between several morphological tags. In this case, using the agreement relation between the word and its syntactic head can help the system making the proper prediction. This is the situation that we find in Czech and German, where form syncretism is pervasive in the inflectional paradigms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 279, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Differences", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "In Section 2.4 we compared the performance of our system on Czech to another system, featurama (see Table 4 ). Featurama outperforms our baseline system by a percentage point in token accuracy (and even more for unknown tokens). Syntactic information closes that gap to a large extent but only using gold syntax gets our system on a par with featurama.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 107, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Features in Czech", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "The question then arises whether the syntactic features actually contribute something new to the task, or whether the same effect could also be achieved with linear context features alone as in featurama. In order to test this we run an additional experiment, where we add some of the syntax features to the feature set of featurama. Specifically, we add the static features from Table 2 that do not use lemma or part-of-speech information. Due to the way featurama works, we cannot use features from the morphological tags (the dynamic features).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Features in Czech", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "The results in Table 6 show that also featurama profits from syntactic features, which corroborates the findings from the previous experiments. We also note again that better syntax would improve results even more. Table 6 : Syntactic features for featurama (Czech). * mark statistically significantly better models compared to featurama (sentence-based t-test with \u03b1 = 0.05).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 22, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 222, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Features in Czech", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "Syntactic features require syntactically annotated corpora. Without a treebank to train the parser, the morphology cannot profit from syntactic features. 9 This may be problematic for languages for which there is no treebank, because creating a treebank is expensive. Fortunately, it turns out that very small amounts of syntactically annotated data are enough to provide a parsing quality that is sufficient for the morphological tagger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to test what amount of training data is needed, we train several parsing models on increasing amounts of syntactically annotated data. For example, the first experiment uses the first 1,000 sentences of the treebank. We perform 5-fold jackknifing with the parser on these sentences to annotate them with syntax. Then we train one parsing model on these 1,000 sentences and use it to annotate the rest of the training data as well as the development and the test set. This gives us the full data set annotated with syntax that was learned from the first 1,000 sentences of the treebank. The morphological tagger is then trained on the full training set and applied to development and test set. Figure 2 shows the dependency between the amount of training data given to the parser and the quality of the morphological tagger using syntactic features provided by this parser. The left-most point corresponds to a model that does not use syntactic information. For both languages, German and Czech, we find that already 1,000 sentences are enough training data for the parser to provide useful syntactic information to the morphological tagger. After 5,000 sentences, both curves flatten out and stay on the same level. We conclude that using syntactic features for morphological prediction is viable even if there is only small amounts of syntactic data available to train the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 702, |
|
"end": 710, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
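The scaling experiment can be summarized in a short sketch; every helper here is a placeholder for the parser, the jackknifing step, and the tagger, so this only illustrates the pipeline, not the actual tooling.

```python
def syntax_learning_curve(treebank, sizes, train_parser, jackknife_parse,
                          train_tagger, evaluate_tagger, dev_set):
    """For each size n: jackknife-parse the first n sentences, train a parser
    on them, parse the remaining data, then train and evaluate the
    morphological tagger on the syntactically annotated corpus."""
    results = {}
    for n in sizes:                        # e.g. [1000, 2000, 5000, ...]
        seed = jackknife_parse(treebank[:n], n_folds=5)
        parser = train_parser(seed)
        full_training_set = seed + [parser(s) for s in treebank[n:]]
        tagger = train_tagger(full_training_set)
        results[n] = evaluate_tagger(tagger, [parser(s) for s in dev_set])
    return results
```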
|
{ |
|
"text": "As a related experiment, we also test if we can get the same effect with a very simple and thus much faster parser. We use the brute-force algorithm described in Covington (2001) , which selects for each token in the sentence another token as the head. It does not have any tree requirements, so it is not even guaranteed to yield a cycle-free tree structure. In Table 7, we compare the simple parser with the mateparser, both trained on the first 5,000 sentences of the treebank. Evaluation is done in terms of labeled (LAS) and unlabeled attachment score (UAS As expected, the simple parser performs much worse in terms of syntactic quality. Table 8 shows the performance of the morphological tagger when using the output of both parsers as syntactic features. For Czech, both parsers seem to supply similar information to the morphological tagger, while for German, using the full parser is clearly better. In both cases, the morphological tagger outperforms the models that do not use syntactic information (cf. Table 3 ). The performance on unknown words is however much worse for both languages. We conclude that even with a simple parser and little training data, the morphology can make use of syntactic information to some extent. Table 8 : Simple parser vs full parser -morphological quality. The parsing models were trained on the first 5,000 sentences of the training data, the morphological tagger was trained on the full training set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 178, |
|
"text": "described in Covington (2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 651, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1023, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1240, |
|
"end": 1247, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
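For the simple parser, a sketch of the brute-force head selection as described above; the scoring function is assumed to be learned separately, and this is not the original implementation.

```python
def brute_force_heads(sentence, score):
    """Pick, for every token, the highest-scoring head among all other tokens
    plus an artificial root (index -1). Nothing enforces a tree, so the
    result may contain cycles, exactly as noted above.
    score(sentence, head_index, dep_index) stands in for a trained model."""
    heads = []
    for dep in range(len(sentence)):
        candidates = [-1] + [h for h in range(len(sentence)) if h != dep]
        heads.append(max(candidates, key=lambda h: score(sentence, h, dep)))
    return heads
```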
|
{ |
|
"text": "4 Does Better Morphology lead to Better Parses?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the previous sections, we show that syntactic information improves a model for predicting morphology for Czech and German, where syntax and morphology interact considerably. A natural question then is whether the improvement also occurs in the other direction, namely whether the improved morphology also leads to better parsing models. In the previous experiments, we run a 10-fold jackknifing process to annotate the training data with morphological information using no syntactic features and afterwards use jackknifing with the parser to annotate syntax. The syntax is subsequently used as features for our predicted-syntax experiments. We can apply the same process once more with the morphology prediction in order to annotate the training data with morphological information that is predicted using the syntactic features. A parser trained on this data will then use the improved morphology as features. If the improved morphology has an impact on the parser, the quality of the second parsing model should then be superior to the first parsing model, which uses the morphology predicted without syntactic information. Note that for the following experiments, neither morphology model uses the morphological lexicon. Table 9 presents the evaluation of the two parsing models (one using morphology without syntactic features, the other one using the improved morphology). The results show no improvement in parsing performance when using the improved morphology. Looking closer at the output, we find differences be- Table 9 : Impact of the improved morphology on the quality of the dependency parser for Czech and German.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1227, |
|
"end": 1234, |
|
"text": "Table 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1526, |
|
"end": 1533, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "tween the two parsing models with respect to grammatical functions that are morphologically marked. For example, in German, performance on subjects and accusative objects improves while performance for dative objects and genitives decreases. This suggests different strengths in the two parsing models. However, the question how to make use of the improved morphology in parsing clearly needs more research in the future. A promising avenue may be the approach by Hohensee and Bender (2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 490, |
|
"text": "Hohensee and Bender (2012)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How Much Syntax is Needed?", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Morphological taggers have been developed for many languages. The most common approach is the combination of a morphological lexicon with a statistical disambiguation model (Hakkani-T\u00fcr et al., 2002; Haji\u010d, 2004; Smith et al., 2005; Spoustov\u00e1 et al., 2009; Zsibrita et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 199, |
|
"text": "(Hakkani-T\u00fcr et al., 2002;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 212, |
|
"text": "Haji\u010d, 2004;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 232, |
|
"text": "Smith et al., 2005;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 256, |
|
"text": "Spoustov\u00e1 et al., 2009;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 279, |
|
"text": "Zsibrita et al., 2013)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our work has been inspired by Versley et al. (2010) , who annotate a treebank with morphological information after the syntax had been annotated already. The system used a finite-state morphology to propose a set of candidate tags for each word, which is then further restricted using hand-crafted rules over the already available syntax tree. Lee et al. (2011) pursue the idea of jointly predicting syntax and morphology, out of the motivation that joint models should model the problem more faithfully. They demonstrate that both sides can use information from each other. However, their model is computationally quite demanding and its overall performance falls far behind the standard pipeline approach where both tasks are done in sequence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 51, |
|
"text": "Versley et al. (2010)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 361, |
|
"text": "Lee et al. (2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The problem of modeling the interaction between morphology and syntax has recently attracted some attention in the SPMRL workshops . Modeling morphosyntactic relations explicitly has been shown to improve statistical parsing models (Tsarfaty and Sima'an, 2010; Goldberg and Elhadad, 2010; Seeker and Kuhn, 2013) , but the codependency between morphology and syntax makes it a difficult problem, and linguistic intuition is often contradicted by the empirical findings. For example, Marton et al. (2013) show that case information is the most helpful morphological feature for parsing Arabic, but only if it is given as gold information, whereas using case information from an automatic system may even harm the performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 260, |
|
"text": "(Tsarfaty and Sima'an, 2010;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 288, |
|
"text": "Goldberg and Elhadad, 2010;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 311, |
|
"text": "Seeker and Kuhn, 2013)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 502, |
|
"text": "Marton et al. (2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Morphologically rich languages pose different challenges for automatic systems. In this paper, we work with European languages, where the problem of predicting morphology can be reduced to a tagging problem. In languages like Arabic, Hebrew, or Turkish, widespread ambiguity in segmentation of single words into meaningful morphemes adds an additional complexity. Given a good segmentation tool that takes care of this, our approach is applicable to these languages as well. For Hebrew, this problem has also been addressed by jointly modeling segmentation, morphological prediction, and syntax (Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008; Goldberg and Elhadad, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 606, |
|
"end": 618, |
|
"text": "Smith, 2007;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 647, |
|
"text": "Goldberg and Tsarfaty, 2008;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 675, |
|
"text": "Goldberg and Elhadad, 2013)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we have demonstrated that using syntactic information for predicting morphological information is helpful if the language shows form syncretism in combination with morphosyntactic phenomena like agreement. A model that uses syntactic information is superior to a sequence model because it leverages the syntactic dependencies that may hold between morphologically dependent words as suggested by linguistic theory. We also showed that only small amounts of training data for a statistical parser would be needed to improve the morphological tagger. Making use of the improved morphology in the dependency parser is not straight-forward and requires more investigation in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Modeling the interaction between morphology and syntax is important for building successful parsing pipelines for languages with free word order and rich morphology. Moreover, our experiments show that paying attention to the individual properties of a language can help us explain and predict the behavior of automatic tools. Thus, the term \"morphologically rich language\" should be viewed as a broad term that covers many different languages, whose differences among each other may be as important as the difference with languages with a less rich morphology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Syncretism describes the situation where a word form is ambiguous between several different morphological descriptions within its inflection paradigm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "And also by prosodic means, which we will not discuss since text-based tools rarely have access to this information.3 For example, in English, the subject of a sentence and the finite verb agree with respect to their number and person feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We find 57,551 noun phrases with less than three tokens between determiner and noun and 4,670 with three or more.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A collection of language independent, data-driven analysis tools for lemmatization, pos-tagging, morphological analysis, and dependency parsing: http://code.google.com/p/mate-tools", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lemma and part-of-speech prediction may also profit from syntactic information, see e.g.Prins (2004) orBohnet and Nivre (2012).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lexicons are also often used to speed up processing considerably by restricting the search space of the statistical model. 8 http://sourceforge.net/projects/featurama/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Which is of course only a problem for statistical parsers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LAS: correct edges with correct labels all edges , UAS: correct edges all edges", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Jan Haji\u010d and Jan\u0160t\u011bp\u00e1nek for their kind help with the Czech morphology and featurama. We would also like to thank Thomas M\u00fcller for sharing resources and thoughts with us, and Anders Bj\u00f6rkelund for commenting on earlier versions of this paper. This work was funded by the Deutsche Forschungsgemeinschaft (DFG) via SFB 732 \"Incremental Specification in Context\", project D8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Case", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Barry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blake", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barry J. Blake. 2001. Case. Cambridge University Press, Cambridge, New York, 2nd edition.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1455--1465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bohnet and Joakim Nivre. 2012. A Transition- Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 1455- 1465, Jeju, South Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Very high accuracy and fast dependency parsing is not a contradiction", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bohnet. 2010. Very high accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89-97, Beijing, China. International Committee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The TIGER treebank", |
|
"authors": [ |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefanie", |
|
"middle": [], |
|
"last": "Dipper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvia", |
|
"middle": [], |
|
"last": "Hansen-Shirra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Lezius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 1st Workshop on Treebanks and Linguistic Theories", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen-Shirra, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories, pages 24-41, Sozopol, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Lexical-Functional Syntax", |
|
"authors": [ |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bresnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joan Bresnan. 2001. Lexical-Functional Syntax. Black- well Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Freeling: An open-source suite of language analyzers", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Chao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llus", |
|
"middle": [], |
|
"last": "Padr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muntsa", |
|
"middle": [], |
|
"last": "Padr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "239--242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Carreras, Isaac Chao, Llus Padr, and Muntsa Padr. 2004. Freeling: An open-source suite of language analyzers. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04), pages 239-242. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning morphology with morfette", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgiana", |
|
"middle": [], |
|
"last": "Dinu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2362--2367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Chrupa\u0142a, Georgiana Dinu, and Josef van Genabith. 2008. Learning morphology with mor- fette. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), pages 2362-2367, Marrakech, Morocco. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Joint morphological and syntactic disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Shay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "208--217", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shay B. Cohen and Noah A. Smith. 2007. Joint morpho- logical and syntactic disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 208-217, Prague, Czech Republic. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natu- ral Language Processing, pages 1-8. Association for Computational Linguistics, July.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Gender. Cambridge Textbooks in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Greville", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Corbett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greville G. Corbett. 1991. Gender. Cambridge Text- books in Linguistics. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Agreement. Cambridge Textbooks in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Greville", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Corbett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greville G. Corbett. 2006. Agreement. Cambridge Text- books in Linguistics. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A fundamental algorithm for dependency parsing (with corrections). In Proceedings of the 39th", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Covington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Annual ACM Southeast Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael A. Covington. 2001. A fundamental algorithm for dependency parsing (with corrections). In Pro- ceedings of the 39th Annual ACM Southeast Confer- ence, Athens, Gorgia. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Online passive-aggressive algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Shalev-Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 16th Annual Conference on Neural Information Processing Systems", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "1217--1224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. 2003. Online passive-aggressive algo- rithms. In Proceedings of the 16th Annual Conference on Neural Information Processing Systems, volume 7, pages 1217-1224, Cambridge, Massachusetts, USA. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Dependency parsing of hungarian: Baseline results and challenges", |
|
"authors": [ |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--65", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, and Helmut Schmid. 2012. Dependency parsing of hungarian: Baseline re- sults and challenges. In Proceedings of the 13th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 55-65, Avignon, France. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Easy first dependency parsing of modern Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"Elhadad" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg and Michael Elhadad. 2010. Easy first dependency parsing of modern Hebrew. In Proceed- ings of the NAACL HLT 2010 First Workshop on Sta- tistical Parsing of Morphologically-Rich Languages, pages 103-107, Los Angeles, California, USA. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Word segmentation, unknown-word resolution, and morphological agreement in a hebrew parsing system", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "121--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg and Michael Elhadad. 2013. Word seg- mentation, unknown-word resolution, and morpholog- ical agreement in a hebrew parsing system. Computa- tional Linguistics, 39(1):121-160.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A single generative model for joint morphological segmentation and syntactic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "371--379", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg and Reut Tsarfaty. 2008. A single gener- ative model for joint morphological segmentation and syntactic parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguis- tics, pages 371-379, Columbus, Ohio. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Morphological Tagging: Data vs. Dictionaries", |
|
"authors": [], |
|
"year": 2000, |
|
"venue": "Proceedings of the 6th ANLP Conference / 1st NAACL Meeting", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "94--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d. 2000. Morphological Tagging: Data vs. Dic- tionaries. In Proceedings of the 6th ANLP Conference / 1st NAACL Meeting, pages 94-101, Seattle, Wash- ington. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Disambiguation of Rich Inflection (Computational Morphology of Czech). Nakladatelstv\u00ed Karolinum", |
|
"authors": [], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d. 2004. Disambiguation of Rich Inflection (Computational Morphology of Czech). Nakladatel- stv\u00ed Karolinum, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The CoNLL-2009 shared task: Syntactic and Semantic dependencies in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Ant\u00f2nia" |
|
], |
|
"last": "Mart\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Step\u00e1nek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Stran\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning: Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan Step\u00e1nek, Pavel Stran\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and Semantic dependen- cies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Language Learning: Shared Task, pages 1-18, Boulder, Col- orado, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Statistical morphological disambiguation for agglutinative languages", |
|
"authors": [ |
|
{ |
|
"first": "Dilek", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00f6khan", |
|
"middle": [], |
|
"last": "T\u00fcr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computers and the Humanities", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "381--410", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dilek Z. Hakkani-T\u00fcr, Kemal Oflazer, and G\u00f6khan T\u00fcr. 2002. Statistical morphological disambiguation for agglutinative languages. Computers and the Humani- ties, 36(4):381-410.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Getting more from morphology in multilingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Hohensee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "315--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Hohensee and Emily M. Bender. 2012. Getting more from morphology in multilingual dependency parsing. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 315-326, Montr\u00e9al, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A discriminative model for joint morphological disambiguation and dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "885--894", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lee, Jason Naradowsky, and David A. Smith. 2011. A discriminative model for joint morphological disam- biguation and dependency parsing. In Proceedings of the 49th annual meeting of the Association for Compu- tational Linguistics, pages 885-894, Portland, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Dependency parsing of modern standard arabic with lexical and inflectional features", |
|
"authors": [ |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Marton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "1", |
|
"pages": "161--194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuval Marton, Nizar Habash, and Owen Rambow. 2013. Dependency parsing of modern standard arabic with lexical and inflectional features. Computational Lin- guistics, 39(1):161-194.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Dependency in linguistic description", |
|
"authors": [ |
|
{ |
|
"first": "Igor", |
|
"middle": [], |
|
"last": "Mel\u010duk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Igor Mel\u010duk. 2009. Dependency in linguistic descrip- tion.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Beyond N in N-gram tagging", |
|
"authors": [ |
|
{ |
|
"first": "Robbert", |
|
"middle": [], |
|
"last": "Prins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the ACL 2004 Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robbert Prins. 2004. Beyond N in N-gram tagging. In Leonoor Van Der Beek, Dmitriy Genzel, and Daniel Midgley, editors, Proceedings of the ACL 2004 Student Research Workshop, pages 61-66, Barcelona, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Dmor -user's guide", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Schiller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Schiller. 1994. Dmor -user's guide. Technical report, University of Stuttgart.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Estimation of conditional probabilities with decision trees and an application to fine-grained POS tagging", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Laws", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "777--784", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid and Florian Laws. 2008. Estimation of conditional probabilities with decision trees and an application to fine-grained POS tagging. In Proceed- ings of the 22nd International Conference on Compu- tational Linguistics, pages 777-784, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Making Ellipses Explicit in Dependency Conversion for a German Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Seeker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3132--3139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wolfgang Seeker and Jonas Kuhn. 2012. Making El- lipses Explicit in Dependency Conversion for a Ger- man Treebank. In Proceedings of the 8th Interna- tional Conference on Language Resources and Eval- uation, pages 3132-3139, Istanbul, Turkey. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Morphological and syntactic case in statistical dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Seeker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "1", |
|
"pages": "23--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wolfgang Seeker and Jonas Kuhn. 2013. Morphologi- cal and syntactic case in statistical dependency pars- ing. Computational Linguistics, 39(1):23-55.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Context-based morphological disambiguation with random fields", |
|
"authors": [ |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tromble", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "475--482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noah A. Smith, David A. Smith, and Roy W. Tromble. 2005. Context-based morphological disambiguation with random fields. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 475-482, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Semi-supervised training for the averaged perceptron POS tagger", |
|
"authors": [ |
|
{ |
|
"first": "Drahom\u00edra \"Johanka\"", |
|
"middle": [], |
|
"last": "Spoustov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Raab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miroslav", |
|
"middle": [], |
|
"last": "Spousta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "763--771", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Drahom\u00edra \"Johanka\" Spoustov\u00e1, Jan Haji\u010d, Jan Raab, and Miroslav Spousta. 2009. Semi-supervised train- ing for the averaged perceptron POS tagger. In Pro- ceedings of the 12th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 763-771, Athens, Greece. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Morphdb.hu: Hungarian lexical database and morphological grammar", |
|
"authors": [ |
|
{ |
|
"first": "Viktor", |
|
"middle": [], |
|
"last": "Tr\u00f3n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P\u00e9ter", |
|
"middle": [], |
|
"last": "Hal\u00e1csy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P\u00e9ter", |
|
"middle": [], |
|
"last": "Rebrus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e1s", |
|
"middle": [], |
|
"last": "Rung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P\u00e9ter", |
|
"middle": [], |
|
"last": "Vajda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eszter", |
|
"middle": [], |
|
"last": "Simon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1670--1673", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viktor Tr\u00f3n, P\u00e9ter Hal\u00e1csy, P\u00e9ter Rebrus, Andr\u00e1s Rung, P\u00e9ter Vajda, and Eszter Simon. 2006. Morphdb.hu: Hungarian lexical database and morphological gram- mar. In Proceedings of the 5th International Confer- ence on Language Resources and Evaluation, pages 1670-1673, Genoa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Modeling morphosyntactic agreement in constituency-based parsing of Modern Hebrew", |
|
"authors": [ |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khalil", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reut Tsarfaty and Khalil Sima'an. 2010. Modeling mor- phosyntactic agreement in constituency-based parsing of Modern Hebrew. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 40-48, Los Angeles, California, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Statistical parsing of morphologically rich languages (SPMRL): what, how and whither", |
|
"authors": [ |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Rehbein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lamia", |
|
"middle": [], |
|
"last": "Tounsi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reut Tsarfaty, Djam\u00e9 Seddah, Yoav Goldberg, Sandra K\u00fcbler, Marie Candito, Jennifer Foster, Yannick Vers- ley, Ines Rehbein, and Lamia Tounsi. 2010. Statistical parsing of morphologically rich languages (SPMRL): what, how and whither. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 1-12, Los Angeles, California, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A syntax-first approach to high-quality morphological analysis and lemma disambiguation for the tba-d/z treebank", |
|
"authors": [ |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathrin", |
|
"middle": [], |
|
"last": "Beck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erhard", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heike", |
|
"middle": [], |
|
"last": "Telljohann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "9th Conference on Treebanks and Linguistic Theories (TLT9)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yannick Versley, Kathrin Beck, Erhard Hinrichs, and Heike Telljohann. 2010. A syntax-first approach to high-quality morphological analysis and lemma dis- ambiguation for the tba-d/z treebank. In 9th Confer- ence on Treebanks and Linguistic Theories (TLT9), pages 233-244.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Zolt\u00e1n Alexin, and J\u00e1nos Csirik", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D\u00f3ra", |
|
"middle": [], |
|
"last": "Szauter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Attila", |
|
"middle": [], |
|
"last": "Alm\u00e1si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zolt\u00e1n", |
|
"middle": [], |
|
"last": "Alexin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Csirik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 7th Conference on International Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1855--1862", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze, D\u00f3ra Szauter, Attila Alm\u00e1si, Gy\u00f6rgy M\u00f3ra, Zolt\u00e1n Alexin, and J\u00e1nos Csirik. 2010. Hungar- ian Dependency Treebank. In Proceedings of the 7th Conference on International Language Resources and Evaluation, pages 1855-1862, Valletta, Malta. Euro- pean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Morphological tagging based on averaged perceptron", |
|
"authors": [], |
|
"year": 2006, |
|
"venue": "WDS'06 Proceedings of Contributed Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "191--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Votrubec. 2006. Morphological tagging based on averaged perceptron. In WDS'06 Proceedings of Con- tributed Papers, pages 191-195, Praha, Czechia. Mat- fyzpress, Charles University.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "magyarlanc 2.0: szintaktikai elemz\u00e9s\u00e9s felgyors\u00edtott sz\u00f3faji egy\u00e9rtelms\u00edt\u00e9s", |
|
"authors": [ |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Zsibrita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IX. Magyar Sz\u00e1m\u00edt\u00f3g\u00e9pes Nyelv\u00e9szeti Konferencia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "368--374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00e1nos Zsibrita, Veronika Vincze, and Rich\u00e1rd Farkas. 2013. magyarlanc 2.0: szintaktikai elemz\u00e9s\u00e9s felgy- ors\u00edtott sz\u00f3faji egy\u00e9rtelms\u00edt\u00e9s. In Attila Tan\u00e1cs and Veronika Vincze, editors, IX. Magyar Sz\u00e1m\u00edt\u00f3g\u00e9pes Nyelv\u00e9szeti Konferencia, pages 368-374, Szeged, Hungary.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "die wirtschaftlich am weitesten entwickelten , modernen und zum Teil katholisch gepr\u00e4gten Regionen nom/acc.pl.fem nom/acc.pl.that are economically most developed, modern, and partly catholic'", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Example of a German noun phrase. First and last word agree in number, gender, and case value.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Dependency between amount of training data for syntactic parser and quality of morphological prediction.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Syntactic features. h and ld mark features from the head and the left-most daughter, dir is a binary feature marking the direction of the head with respect to the current token.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "The effect of syntactic features when predicting morphological information. * mark statistically significantly better models compared to our baseline (sentencebased t-test with \u03b1 = 0.05).", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td/><td colspan=\"2\">dev set</td><td colspan=\"2\">test set</td></tr><tr><td/><td>all</td><td>oov</td><td>all</td><td>oov</td></tr><tr><td/><td/><td>Czech</td><td/></tr><tr><td colspan=\"5\">featurama 94.75 our baseline 97.27 92.61 97.03 91.28</td></tr><tr><td>pred syntax</td><td colspan=\"4\">97.38 92.39 97.19 91.50</td></tr><tr><td colspan=\"5\">gold syntax *97.63 92.79 *97.45 91.92</td></tr><tr><td/><td/><td>Spanish</td><td/></tr><tr><td colspan=\"5\">our baseline 98.23 92.46 98.02 93.15</td></tr><tr><td>pred syntax</td><td colspan=\"4\">98.24 92.30 98.07 93.03</td></tr><tr><td>gold syntax</td><td colspan=\"4\">98.40 92.82 *98.22 93.64</td></tr></table>", |
|
"num": null, |
|
"text": "presents all results in terms of accuracy on all tokens (all) 84.12 94.78 84.23 our baseline 93.80 80.47 93.57 80.53 pred syntax *94.40 81.51 *94.24 81.61 gold syntax *94.80 82.45 *94.64 82.80 German RFTagger 90.63 72.11 89.04 70.80 our baseline 92.59 80.73 91.48 78.83 pred syntax *93.70 82.71 *92.51 80.20 gold syntax *94.28 *84.12 *93.32 *82.35 Hungarian", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Simple parser vs full parser -syntactic quality. Trained on first 5,000 sentences of the training set.", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |