{
"paper_id": "E06-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:34:49.374961Z"
},
"title": "Statistical Dependency Parsing of Turkish",
"authors": [
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryi\u01e7it",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Istanbul Technical University Istanbul",
"location": {
"postCode": "34469",
"country": "Turkey"
}
},
"email": ""
},
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanci University Istanbul",
"location": {
"postCode": "34956",
"country": "Turkey"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents results from the first statistical dependency parser for Turkish. Turkish is a free-constituent order language with complex agglutinative inflectional and derivational morphology and presents interesting challenges for statistical parsing, as in general, dependency relations are between \"portions\" of words-called inflectional groups. We have explored statistical models that use different representational units for parsing. We have used the Turkish Dependency Treebank to train and test our parser but have limited this initial exploration to that subset of the treebank sentences with only left-to-right non-crossing dependency links. Our results indicate that the best accuracy in terms of the dependency relations between inflectional groups is obtained when we use inflectional groups as units in parsing, and when contexts around the dependent are employed.",
"pdf_parse": {
"paper_id": "E06-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents results from the first statistical dependency parser for Turkish. Turkish is a free-constituent order language with complex agglutinative inflectional and derivational morphology and presents interesting challenges for statistical parsing, as in general, dependency relations are between \"portions\" of words-called inflectional groups. We have explored statistical models that use different representational units for parsing. We have used the Turkish Dependency Treebank to train and test our parser but have limited this initial exploration to that subset of the treebank sentences with only left-to-right non-crossing dependency links. Our results indicate that the best accuracy in terms of the dependency relations between inflectional groups is obtained when we use inflectional groups as units in parsing, and when contexts around the dependent are employed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The availability of treebanks of various sorts have fostered the development of statistical parsers trained with the structural data in these treebanks. With the emergence of the important role of word-to-word relations in parsing (Charniak, 2000; Collins, 1996) , dependency grammars have gained a certain popularity; e.g., Yamada and Matsumoto (2003) for English, Kudo and Matsumoto (2000; 2002) , Sekine et al. (2000) for Japanese, Chung and Rim (2004) for Korean, Nivre et al. (2004) for Swedish, Nivre and Nilsson (2005) for Czech, among others.",
"cite_spans": [
{
"start": 231,
"end": 247,
"text": "(Charniak, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 248,
"end": 262,
"text": "Collins, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 325,
"end": 352,
"text": "Yamada and Matsumoto (2003)",
"ref_id": "BIBREF15"
},
{
"start": 366,
"end": 391,
"text": "Kudo and Matsumoto (2000;",
"ref_id": "BIBREF6"
},
{
"start": 392,
"end": 397,
"text": "2002)",
"ref_id": "BIBREF7"
},
{
"start": 400,
"end": 420,
"text": "Sekine et al. (2000)",
"ref_id": "BIBREF14"
},
{
"start": 435,
"end": 455,
"text": "Chung and Rim (2004)",
"ref_id": "BIBREF1"
},
{
"start": 468,
"end": 487,
"text": "Nivre et al. (2004)",
"ref_id": "BIBREF9"
},
{
"start": 501,
"end": 525,
"text": "Nivre and Nilsson (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency grammars represent the structure of the sentences by positing binary dependency relations between words. For instance, shows the dependency graph of a Turkish and an English sentence where dependency labels are shown annotating the arcs which extend from dependents to heads.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parsers employing CFG-backbones have been found to be less effective for free-constituentorder languages where constituents can easily change their position in the sentence without modifying the general meaning of the sentence. Collins et al. (1999) applied the parser of Collins (1997) developed for English, to Czech, and found that the performance was substantially lower when compared to the results for English.",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "Collins et al. (1999)",
"ref_id": "BIBREF2"
},
{
"start": 272,
"end": 286,
"text": "Collins (1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Turkish is an agglutinative language where a sequence of inflectional and derivational morphemes get affixed to a root (Oflazer, 1994) . At the syntax level, the unmarked constituent order is SOV, but constituent order may vary freely as demanded by the discourse context. Essentially all constituent orders are possible, especially at the main sentence level, with very minimal formal constraints.",
"cite_spans": [
{
"start": 119,
"end": 134,
"text": "(Oflazer, 1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "In written text however, the unmarked order is dominant at both the main sentence and embedded clause level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "Turkish morphotactics is quite complicated: a given word form may involve multiple derivations and the number of word forms one can generate from a nominal or verbal root is theoretically infinite. Derivations in Turkish are very productive, and the syntactic relations that a word is in-volved in as a dependent or head element, are determined by the inflectional properties of the one or more (possibly intermediate) derived forms. In this work, we assume that a Turkish word is represented as a sequence of inflectional groups (IGs hereafter), separated by\u02c6DBs, denoting derivation boundaries, in the following general form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "root+IG1 +\u02c6DB+IG2 +\u02c6DB+\u2022 \u2022 \u2022 +\u02c6DB+IGn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "Here each IG i denotes relevant inflectional features including the part-of-speech for the root and for any of the derived forms. For instance, the derived modifier sa\u01e7lamla\u015ft\u0131rd\u0131\u01e7\u0131m\u0131zdaki 1 would be represented as: 2 sa\u01e7lam(strong)+Adj +\u02c6DB+Verb+Become +\u02c6DB+Verb+Caus+Pos +\u02c6DB+Noun+PastPart+A3sg+P3sg+Loc +\u02c6DB+Adj+Rel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
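{
"text": "As a concrete illustration of this representation, the following minimal Python sketch (our own, not from the paper; the function name split_igs is hypothetical, the derivation boundary is rendered as ^DB, and the example root is ASCII-folded) splits such an analysis string into its root and the feature lists of its IGs:",
"code": [
"# A minimal sketch: split an analysis of the form root+IG1+^DB+IG2+...+^DB+IGn",
"# into the root and a list of IGs, each IG being its list of features.",
"def split_igs(analysis: str):",
"    chunks = analysis.split('^DB')        # derivation boundaries separate IGs",
"    first = chunks[0].split('+')          # the first chunk starts with the root",
"    root, ig1 = first[0], first[1:]",
"    rest = [c.lstrip('+').split('+') for c in chunks[1:]]",
"    return root, [ig1] + rest",
"",
"root, igs = split_igs('saglam+Adj'",
"                      '^DB+Verb+Become'",
"                      '^DB+Verb+Caus+Pos'",
"                      '^DB+Noun+PastPart+A3sg+P3sg+Loc'",
"                      '^DB+Adj+Rel')",
"print(root)                        # saglam",
"print(len(igs), igs[0], igs[3])    # 5 ['Adj'] ['Noun', 'PastPart', 'A3sg', 'P3sg', 'Loc']"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},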
{
"text": "The five IGs in this are the feature sequences separated by the\u02c6DB marker. The first IG shows the part-of-speech for the root which is its only inflectional feature. The second IG indicates a derivation into a verb whose semantics is \"to become\" the preceding adjective. The third IG indicates that a causative verb with positive polarity is derived from the previous verb. The fourth IG indicates the derivation of a nominal form, a past participle, with +Noun as the part-of-speech and +PastPart, as the minor part-of-speech, with some additional inflectional features. Finally, the fifth IG indicates a derivation into a relativizer adjective. A sentence would then be represented as a sequence of the IGs making up the words. When a word is considered as a sequence of IGs, linguistically, the last IG of a word determines its role as a dependent, so, syntactic relation links only emanate from the last IG of a (dependent) word, and land on one of the IGs of a (head) word on the right (with minor exceptions), as exemplified in Figure 2 . And again with minor exceptions, the dependency links between the IGs, when drawn above the IG sequence, do not cross. 3 Figure 3 from Oflazer (2003) shows a dependency tree for a Turkish sentence laid on top of the words segmented along IG boundaries.",
"cite_spans": [
{
"start": 1164,
"end": 1165,
"text": "3",
"ref_id": null
},
{
"start": 1180,
"end": 1194,
"text": "Oflazer (2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1034,
"end": 1042,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1166,
"end": 1174,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "With this view in mind, the dependency relations that are to be extracted by a parser should be relations between certain inflectional groups and 1 Literally, \"(the thing existing) at the time we caused (something) to become strong\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "2 The morphological features other than the obvious partof-speech features are: +Become: become verb, +Caus: causative verb, +PastPart: Derived past participle, +P3sg: 3sg possessive agreement, +A3sg: 3sg numberperson agreement, +Loc: Locative case, +Pos: Positive Polarity, +Rel: Relativizing Modifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "3 Only 2.5% of the dependencies in the Turkish treebank actually cross another dependency link. Since only the wordfinal inflectional groups have out-going dependency links to a head, there will be IGs which do not have any outgoing links (e.g., the first IG of the word b\u00fcy\u00fcmesi in Figure 3 ). We assume that such IGs are implicitly linked to the next IG, but neither represent nor extract such relationships with the parser, as it is the task of the morphological analyzer to extract those. Thus the parsing models that we will present in subsequent sections all aim to extract these surface relations between the relevant IGs, and in line with this, we will employ performance measures based on IGs and their relationships, and not on orthographic words.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "We use a model of sentence structure as depicted in Figure 4 . In this figure, the top part represents the words in a sentence. After morphological analysis and morphological disambiguation, each word is represented with (the sequence of) its inflectional groups, shown in the middle of the figure. The inflectional groups are then reindexed so that they are the \"units\" for the purposes of parsing. The inflectional groups marked with * are those from which a dependency link will emanate from, to a head-word to the right. Please note that the number of such marked inflectional groups is the same as the number of words in the sentence, and all of such IGs, (except one corresponding to the distinguished head of the sentence which will not have any links), will have outgoing dependency links.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
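{
"text": "The reindexing in Figure 4 is easy to make concrete. A minimal Python sketch (our own illustration; flatten_igs and the tuple layout are hypothetical names) flattens the words, each given as a list of IGs, into one globally indexed IG sequence and stars the word-final IGs:",
"code": [
"# A minimal sketch of the Figure 4 reindexing: words (lists of IGs) are",
"# flattened into one global IG sequence, and the last IG of every word,",
"# the only IG that can act as a dependent, is marked with '*'.",
"def flatten_igs(words):",
"    units = []    # tuples: (global_index, word_index, ig, is_word_final)",
"    for w_idx, igs in enumerate(words):",
"        for k, ig in enumerate(igs):",
"            units.append((len(units) + 1, w_idx, ig, k == len(igs) - 1))",
"    return units",
"",
"# 'evdeki' has two IGs (Noun+Loc ^DB Adj); the other words have one each.",
"words = [[['Det']], [['Adj']], [['Noun', 'A3sg', 'Pnon', 'Loc'], ['Adj']]]",
"for idx, w_idx, ig, final in flatten_igs(words):",
"    print(idx, w_idx, '+'.join(ig) + ('*' if final else ''))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},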
{
"text": "In the rest of this paper, we first give a very brief overview a general model of statistical dependency parsing and then introduce three models for dependency parsing of Turkish. We then present our results for these models and for some additional experiments for the best performing model. We then close with a discussion on the results, analysis of the errors the parser makes, and conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turkish",
"sec_num": "2"
},
{
"text": "Statistical dependency parsers first compute the probabilities of the unit-to-unit dependencies, and then find the most probable dependency tree T * among the set of possible dependency trees. This +'s indicate morpheme boundaries. The rounded rectangles show the words while the inflectional groups within the words that have more than 1 IG are emphasized with the dashed rounded rectangles. The inflectional features of each inflectional group as produced by the morphological analyzer are listed below. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "w 1 \u00d4 \u00d4 5 5 IG 1 IG 2 \u2022 \u2022 \u2022 IG * g 1 IG 1 IG 2 \u2022 \u2022 \u2022 IG * g 1 w 2 \u00d1 \u00d1 6 6 IG 1 IG 2 \u2022 \u2022 \u2022 IG * g 2 IG g 1 +1 \u2022 \u2022 \u2022 IG * g 1 +g 2 . . . . . . w n \u00d4 \u00d4 5 5 IG 1 IG 2 \u2022 \u2022 \u2022 IG * gn \u2022 \u2022 \u2022 IG * \u03a5 n \u03a5 i = \u00c8 i k=1 g k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "T * = argmax T P (T, S) = argmax T n\u22121 i=1 P (dep (w i , w H(i) ) | S)(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "where in our case S is a sequence of units (words, IGs) and T , ranges over possible dependency trees consisting of left-to-right dependency links dep (w i , w H(i) ) with w H(i) denoting the head unit to which the dependent unit, w i , is linked to. The distance between the dependent units plays an important role in the computation of the dependency probabilities. Collins (1996) employs this distance \u2206 i,H(i) in the computation of word-toword dependency probabilities",
"cite_spans": [
{
"start": 368,
"end": 382,
"text": "Collins (1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "P (dep (w i , w H(i) ) | S) \u2248 (2) P (link(w i , w H(i) ) | \u2206 i,H(i) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "suggesting that distance is a crucial variable when deciding whether two words are related, along with other features such as intervening punctuation. Chung and Rim (2004) propose a different method and introduce a new probability factor that takes into account the distance between the dependent and the head. The model in equation 3 takes into account the contexts that the dependent and head reside in and the distance between the head and the dependent.",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "Chung and Rim (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "P (dep (w i , w H(i) ) | S) \u2248 (3) P (link(w i , w H(i) )) | \u03a6 i \u03a6 H(i) ) \u2022 P (w i links to some head H(i) \u2212 i away|\u03a6 i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "Here \u03a6 i represents the context around the dependent w i and \u03a6 H(i) , represents the context around the head word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "P (dep (w i , w H(i) ) | S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "is the probability of the directed dependency relation between w i and w H(i) in the current sentence, while",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "P (link(w i , w H(i) ) | \u03a6 i \u03a6 H(i) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
{
"text": "is the probability of seeing a similar dependency (with w i as the dependent, w H(i) as the head in a similar context) in the training treebank. For the parsing models that will be described below, the relevant statistical parameters needed have been estimated from the Turkish treebank . Since this treebank is relatively smaller than the available treebanks for other languages (e.g., Penn Treebank), we have opted to model the bigram linkage probabilities in an unlexicalized manner (that is, by just taking certain morphosyntactic properties into account), to avoid, to the extent possible, the data sparseness problem which is especially acute for Turkish. We have also been encouraged by the success of the unlexicalized parsers reported recently (Klein and Manning, 2003; Chung and Rim, 2004) .",
"cite_spans": [
{
"start": 753,
"end": 778,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF5"
},
{
"start": 779,
"end": 799,
"text": "Chung and Rim, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
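{
"text": "To make the estimation of the first factor of Equation 3 concrete, the following minimal Python sketch (our own illustration; the count tables cooc and link and the function p_link are hypothetical names, and the paper does not prescribe an implementation) computes it as a relative frequency over unlexicalized tag configurations seen in training:",
"code": [
"from collections import Counter",
"",
"# Hypothetical count tables, filled in one pass over the training treebank:",
"# cooc[cfg] counts how often a (dep tag, head tag, contexts) configuration",
"# occurs; link[cfg] counts how often a dependency is annotated in it.",
"cooc = Counter()",
"link = Counter()",
"",
"def p_link(dep_tag, head_tag, ctx_dep, ctx_head):",
"    # P(link(w_i, w_H(i)) | Phi_i, Phi_H(i)) as a relative frequency",
"    cfg = (dep_tag, head_tag, ctx_dep, ctx_head)",
"    return link[cfg] / cooc[cfg] if cooc[cfg] else 0.0"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},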
{
"text": "For parsing, we use a version of the Backward Beam Search Algorithm (Sekine et al., 2000) developed for Japanese dependency analysis adapted to our representations of the morphological structure of the words. This algorithm parses a sentence by starting from the end and analyzing it towards the beginning. By making the projectivity assumption that the relations do not cross, this algorithm considerably facilitates the analysis.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Sekine et al., 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},
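{
"text": "The following minimal Python sketch (our own; the paper gives no pseudo-code) captures the backward strategy: units are scanned right to left, every unit except the last must pick a head on its right, candidate links that would cross an already chosen link are discarded, and only the beam_width best partial analyses are kept. link_logprob is a hypothetical stand-in for the log of the probability model in Equation 3.",
"code": [
"# A minimal backward beam search sketch under the projectivity assumption.",
"# Units are indexed 0..n-1; unit n-1 is the distinguished head of the sentence.",
"def backward_beam_parse(n, link_logprob, beam_width=5):",
"    beam = [({}, 0.0)]                    # (heads: dependent -> head, log prob)",
"    for i in range(n - 2, -1, -1):        # rightmost dependent first",
"        candidates = []",
"        for heads, logp in beam:",
"            for j in range(i + 1, n):     # candidate heads to the right of i",
"                # a new link (i, j) crosses an existing link (d, h) iff d < j < h",
"                if any(d < j < h for d, h in heads.items()):",
"                    continue",
"                candidates.append(({**heads, i: j}, logp + link_logprob(i, j)))",
"        beam = sorted(candidates, key=lambda c: -c[1])[:beam_width]",
"    return beam[0][0]                     # best head assignment found"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3"
},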
{
"text": "In this section we detail three models that we have experimented with for Turkish. All three models are unlexicalized and differ either in the units used for parsing or in the way contexts modeled. In all three models, we use the probability model in Equation 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of the Parsing Models",
"sec_num": "4"
},
{
"text": "Our morphological analyzer produces a rather rich representation with a multitude of morphosyntactic and morphosemantic features encoded in the words. However, not all of these features are necessarily relevant in all the tasks that these analyses can be used in. Further, different subsets of these features may be relevant depending on the function of a word. In the models discussed below, we use a reduced representation of the IGs to \"unlexicalize\" the words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
{
"text": "1. For nominal IGs, 4 we use two different tags depending on whether the IG is used as a dependent or as a head during (different stages of ) parsing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
{
"text": "\u2022 If the IG is used as a dependent, (and, only word-final IGs can be dependents), we represent that IG by a reduced tag consisting of only the case marker, as that essentially determines the syntactic function of that IG as a dependent, and only nominals have cases. \u2022 If the IG is used as a head, then we use only part-of-speech and the possessive agreement marker in the reduced tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
{
"text": "2. For adjective IGs with present/past/future participles minor part-of-speech, we use the part-of-speech when they are used as dependents and the part-of-speech plus the the possessive agreement marker when used as a head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
{
"text": "3. For other IGs, we reduce the IG to just the part-of-speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
{
"text": "Such a reduced representation also helps alleviate the sparse data problem as statistics from many word forms with only the relevant features are conflated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
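{
"text": "A minimal Python sketch of the reduction rules above (our own illustration; the feature inventories below are abbreviated stand-ins for the full tagset, and reduce_tag is a hypothetical name):",
"code": [
"# An IG is given as its feature list, e.g. ['Noun', 'A3sg', 'P3sg', 'Loc'];",
"# role is 'dep' or 'head'. The feature sets below are abbreviated.",
"CASES = {'Nom', 'Acc', 'Dat', 'Loc', 'Abl', 'Gen', 'Ins'}",
"AGR = {'P1sg', 'P2sg', 'P3sg', 'P1pl', 'P2pl', 'P3pl', 'Pnon'}",
"PARTICIPLES = {'PresPart', 'PastPart', 'FutPart'}",
"",
"def reduce_tag(ig, role):",
"    pos = ig[0]",
"    if pos in ('Noun', 'Pron'):                    # rule 1: nominal IGs",
"        if role == 'dep':                          # case marker only",
"            return next((f for f in ig if f in CASES), pos)",
"        return '+'.join([pos] + [f for f in ig if f in AGR])",
"    if pos == 'Adj' and any(f in PARTICIPLES for f in ig):   # rule 2",
"        if role == 'dep':",
"            return pos",
"        return '+'.join([pos] + [f for f in ig if f in AGR])",
"    return pos                                     # rule 3: POS only"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},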
{
"text": "We modeled the second probability term on the right-hand side of Equation 3 (involving the distance between the dependent and the head unit) in the following manner. First, we collected statistics over the treebank sentences, and noted that, if we count words as units, then 90% of dependency links link to a word that is less than 3 words away. Similarly, if we count distance in terms of IGs, then 90% of dependency links link to an IG that is less than 4 IGs away to the right. Thus we selected a parameter k = 4 for Models 1 and 3 below, where distance is measured in terms of words, and k = 5 for Model 2 where distance is measured in terms of IGs, as a threshold value at and beyond which a dependency is considered \"distant\". During actual runs, P (w i links to some head H(i) \u2212 i away|\u03a6 i ) was computed by interpolating P 1 (w i links to some head H(i) \u2212 i away|\u03a6 i ) estimated from the training corpus, and P 2 (w i links to some head H(i) \u2212 i away) the estimated probability for a length of a link when no contexts are considered, again estimated from the training corpus. When probabilities are estimated from the training set, all distances larger than k are assigned the same probability. If even after interpolation, the probability is 0, then a very small value is used. This is a modified version of the backed-off smoothing used by Collins (1996) to alleviate sparse data problems. A similar interpolation is used for the first component on the right hand side of Equation 3 by removing the head and the dependent contextual information all at once.",
"cite_spans": [
{
"start": 1350,
"end": 1364,
"text": "Collins (1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},
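{
"text": "A minimal Python sketch of this backed-off distance estimate (our own illustration; the count tables, the interpolation weight lam, and the floor EPSILON are hypothetical choices, not values from the paper):",
"code": [
"EPSILON = 1e-10   # a very small value returned in place of exact zero",
"",
"def p_distance(ctx, dist, k, ctx_dist_counts, ctx_counts,",
"               dist_counts, total_links, lam=0.7):",
"    d = min(dist, k)                           # pool all distances at/beyond k",
"    p1 = (ctx_dist_counts.get((ctx, d), 0) / ctx_counts[ctx]",
"          if ctx_counts.get(ctx) else 0.0)     # P1: conditioned on context",
"    p2 = dist_counts.get(d, 0) / total_links   # P2: context-free back-off",
"    p = lam * p1 + (1 - lam) * p2              # interpolate the two estimates",
"    return p if p > 0 else EPSILON"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifying IG Tags",
"sec_num": "4.1"
},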
{
"text": "In this model, we represent each word by a reduced representation of its last IG when used as a dependent, 5 and by concatenation of the reduced representation of its IGs when used as a head. Since a word can be both a dependent and a head word, the reduced representation to be used is dynamically determined during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1 -\"Unlexicalized\" Word-based Model",
"sec_num": "4.2"
},
{
"text": "Parsing then proceeds with words as units represented in this manner. Once the parser links these units, we remap these links back to IGs to recover the actual IG-to-IG dependencies. We already know that any outgoing link from a dependent will emanate from the last IG of that word. For the head word, we assume that the link lands on the first IG of that word. 6 For the contexts, we use the following scheme. A contextual element on the left is treated as a dependent and is modeled with its last IG, while a contextual element on the right is represented as if it were a head using all its IGs. We ignore any overlaps between contexts in this and the subsequent models.",
"cite_spans": [
{
"start": 362,
"end": 363,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1 -\"Unlexicalized\" Word-based Model",
"sec_num": "4.2"
},
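{
"text": "Recovering IG-to-IG dependencies from Model 1's word-to-word links then reduces to the following minimal sketch (our own; word_ig_spans is a hypothetical precomputed map):",
"code": [
"# Map a word-to-word link back onto IGs: the link leaves the last IG of the",
"# dependent word and, by the assumption above, lands on the first IG of the",
"# head word. word_ig_spans[w] = (first_ig_index, last_ig_index) of word w.",
"def word_link_to_ig_link(word_ig_spans, dep_word, head_word):",
"    return word_ig_spans[dep_word][1], word_ig_spans[head_word][0]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1 -\"Unlexicalized\" Word-based Model",
"sec_num": "4.2"
},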
{
"text": "In Figure 5 we show in a table the sample sentence in Figure 3 , the morphological analysis for each word and the reduced tags for representing the units for the three models. For each model, we list the tags when the unit is used as a head and when it is used as a dependent. For model 1, we use the tags in rows 3 and 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 5",
"ref_id": null
},
{
"start": 54,
"end": 62,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model 1 -\"Unlexicalized\" Word-based Model",
"sec_num": "4.2"
},
{
"text": "In this model, we represent each IG with reduced representations in the manner above, but do not concatenate them into a representation for the word. So our \"units\" for parsing are IGs. The parser directly establishes IG-to-IG links from word-final IGs to some IG to the right. The contexts that are used in this model are the IGs to the left (starting with the last IG of the preceding word) and the right of the dependent and the head IG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2 -IG-based Model",
"sec_num": "4.3"
},
{
"text": "The units and the tags we use in this model are in rows 5 and 6 in the table in Figure 5 . Note that the empty cells in row 4 corresponds to IGs which can not be syntactic dependents as they are not word-final.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model 2 -IG-based Model",
"sec_num": "4.3"
},
{
"text": "This model is almost exactly like Model 2 above. The two differences are that (i) for contexts we only use just the word-final IGs to the left and the right ignoring any non-word-final IGs in between (except for the case that the context and the head overlap, where we use the tag of the head IG in-stead of the final IG); and (ii) the distance function is computed in terms of words. The reason this model is used is that it is the word final IGs that determine the syntactic roles of the dependents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3 -IG-based Model with Word-final IG Contexts",
"sec_num": "4.4"
},
{
"text": "Since in this study we are limited to parsing sentences with only left-to-right dependency links 7 which do not cross each other, we eliminated the sentences having such dependencies (even if they contain a single one) and used a subset of 3398 such sentences in the Turkish Treebank. The gold standard part-of-speech tags are used in the experiments. The sentences in the corpus ranged between 2 words to 40 words with an average of about 8 words; 8 90% of the sentences had less than or equal to 15 words. In terms of IGs, the sentences comprised 2 to 55 IGs with an average of 10 IGs per sentence; 90% of the sentences had less than or equal to 15 IGs. We partitioned this set into training and test sets in 10 different ways to obtain results with 10-fold cross-validation. We implemented three baseline parsers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "1. The first baseline parser links a word-final IG to the first IG of the next word on the right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "2. The second baseline parser links a word-final IG to the last IG of the next word on the right. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "3. The third baseline parser is a deterministic rule-based parser that links each word-final IG to an IG on the right based on the approach of Nivre (2003) . The parser uses 23 unlexicalized linking rules and a heuristic that links any non-punctuation word not linked by the parser to the last IG of the last word as a dependent. Table 1 shows the results from our experiments with these baseline parsers and parsers that are based on the three models above. The three models have been experimented with different contexts around both the dependent unit and the head. In each row, columns 3 and 4 show the percentage of IG-IG dependency relations correctly recovered for all tokens, and just words excluding punctuation from the statistics, while columns 5 and 6 show the percentage of test sentences for which all dependency relations extracted agree with the Figure 5 : Tags used in the parsing models relations in the treebank. Each entry presents the average and the standard error of the results on the test set, over the 10 iterations of the 10-fold crossvalidation. Our main goal is to improve the percentage of correctly determined IG-to-IG dependency relations, shown in the fourth column of the table. The best results in these experiments are obtained with Model 3 using 1 unit on both sides of the dependent. Although it is slightly better than Model 2 with the same context size, the difference between the means (0.4\u00b10.2) for each 10 iterations is statistically significant.",
"cite_spans": [
{
"start": 143,
"end": 155,
"text": "Nivre (2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 1",
"ref_id": null
},
{
"start": 861,
"end": 869,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
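{
"text": "The first two baselines admit a direct implementation over the flattened IG units of the earlier sketch; a minimal Python version (our own; baseline_parse and the tuple layout are hypothetical):",
"code": [
"# Baselines 1 and 2 over (index, word, ig, is_word_final) unit tuples:",
"# every word-final IG except the sentence-final one links to the first",
"# (baseline 1) or the last (baseline 2) IG of the next word.",
"def baseline_parse(units, attach='first'):",
"    heads = {}",
"    finals = [u for u in units if u[3]]       # word-final IGs, left to right",
"    for dep, nxt in zip(finals, finals[1:]):",
"        if attach == 'last':",
"            heads[dep[0]] = nxt[0]            # nxt is the next word's last IG",
"        else:",
"            first_ig = next(u for u in units if u[1] == nxt[1])",
"            heads[dep[0]] = first_ig[0]       # first IG of the next word",
"    return heads"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},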
{
"text": "Since we have been using unlexicalized models, we wanted to test out whether a smaller training corpus would have a major impact for our current models. Table 2 shows results for Model 3 with no context and 1 unit on each side of the dependent, obtained by using only a 1500 sentence subset of the original treebank, again using 10-fold cross validation. Remarkably the reduction in training set size has a very small impact on the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Although all along, we have suggested that determining word-to-word dependency relationships is not the right approach for evaluating parser performance for Turkish, we have nevertheless performed word-to-word correctness evaluation so that comparison with other word based approaches can be made. In this evaluation, we assume that a dependency link is correct if we correctly determine the head word (but not necessarily the correct IG). Table 3 shows the word based results for the best cases of the models in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 440,
"end": 447,
"text": "Table 3",
"ref_id": null
},
{
"start": 513,
"end": 520,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We have also tested our parser with a pure word model where both the dependent and the head are represented by the concatenation of their IGs, that is, by their full morphological analysis except the root. The result for this case is given in the last row of Table 3 . This result is even lower than the rulebased baseline. 10 For this model, if we connect the 10 Also lower than Model 1 with no context (79.1\u00b11.1) dependent to the first IG of the head as we did in Model 1, the IG-IG accuracy excluding punctuations becomes 69.9\u00b13.1, which is also lower than baseline 3 (70.5%).",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our results indicate that all of our models perform better than the 3 baseline parsers, even when no contexts around the dependent and head units are used. We get our best results with Model 3, where IGs are used as units for parsing and contexts are comprised of word final IGs. The highest accuracy in terms of percent of correctly extracted IG-to-IG relations excluding punctuations (73.5%) was obtained when one word is used as context on both sides of the the dependent. 11 We also noted that using a smaller treebank to train our models did not result in a significant reduction in our accuracy indicating that the unlexicalized models are quite effective, but this also may hint that a larger treebank with unlexicalized modeling may not be useful for improving link accuracy.",
"cite_spans": [
{
"start": 476,
"end": 478,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "A detailed look at the results from the best performing model shown in in Table 4, 12 indicates that, accuracy decrases with the increasing sentence length. For longer sentences, we should employ more sophisticated models possibly including lexicalization.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 85,
"text": "Table 4, 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "A further analysis of the actual errors made by the best performing model indicates almost 40% of the errors are \"attachment\" problems: the dependent IGs, especially verbal adjuncts and arguments, link to the wrong IG but otherwise with the same morphological features as the correct one except for the root word. This indicates we may have to model distance in a more sophisticated way and perhaps use a limited lexicalization such as including limited non-morphological information (e.g., verb valency) into the tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "We have presented our results from statistical dependency parsing of Turkish with statistical models trained from the sentences in the Turkish treebank. The dependency relations are between sub-lexical units that we call inflectional groups (IGs) and the parser recovers dependency relations between these IGs. Due to the modest size of the treebank available to us, we have used unlexicalized statistical models, representing IGs by reduced representations of their morphological properties. For the purposes of this work we have limited ourselves to sentences with all left-to-right dependency links that do not cross each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We get our best results (73.5% IG-to-IG link accuracy) using a model where IGs are used as units for parsing and we use as contexts, word final IGs of the words before and after the dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Future work involves a more detailed understanding of the nature of the errors and see how limited lexicalization can help, as well as investigation of more sophisticated models such as SVM or memory-based techniques for correctly identifying dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "These are nouns, pronouns, and other derived forms that inflect with the same paradigm as nouns, including infinitives, past and future participles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Remember that other IGs in a word, if any, do not have any bearing on how this word links to its head word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This choice is based on the observation that in the treebank, 85.6% of the dependency links land on the first (and possibly the only) IG of the head word, while 14.4% of the dependency links land on an IG other than the first one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In 95% of the treebank dependencies, the head is the right of the dependent.8 This is quite normal; the equivalents of function words in English are embedded as morphemes (not IGs) into these words.9 Note that for head words with a single IG, the first two baselines behave the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We should also note that early experiments using different sets of morphological features that we intuitively thought should be useful, gave rather low accuracy results.12 These results are significantly higher than the best baseline (rule based) for all the sentence length categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by a research grant from TUBITAK (The Scientific and Technical Research Council of Turkey) and from Istanbul Technical University.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "Words+Punc Words only Words+Punc Words only Baseline 1 NA 59.9 \u00b10.3 63.9 \u00b10.7 21.4 \u00b10.6 24.0 \u00b10.7 Baseline 2 NA 58. The Context column entries show the context around the dependent and the head unit. Dl=1 and Dr=1 indicate the use of 1 unit left and the right of the dependent respectively. Hl=1 and Hr=1 indicate the use of 1 unit left and the right of the head respectively. Both indicates both head and the dependent have 1 unit of context on both sides. Table 3 : Results from word-to-word correctness evaluation.Sentence Length l (IGs) % Accuracy 1 < l \u2264 10 80.2 \u00b10.5 10 < l \u2264 20 70.1 \u00b10.4 20 < l \u2264 30 64.6 \u00b11.0 30 < l 62.7 \u00b11.3 ",
"cite_spans": [
{
"start": 63,
"end": 67,
"text": "\u00b10.3",
"ref_id": null
},
{
"start": 83,
"end": 87,
"text": "\u00b10.6",
"ref_id": null
},
{
"start": 629,
"end": 633,
"text": "\u00b11.3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Percentage of Sentences Relations Correct With ALL Relations Correct Parsing Model Context",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum-entropyinspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "1st Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy- inspired parser. In 1st Conference of the North American Chapter of the Association for Computa- tional Linguistics, Seattle, Washington.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unlexicalized dependency parser for variable word order languages based on local contextual pattern",
"authors": [
{
"first": "Hoojung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoojung Chung and Hae-Chang Rim. 2004. Un- lexicalized dependency parser for variable word or- der languages based on local contextual pattern. In Computational Linguistics and Intelligent Text Processing (CICLing-2004), Seoul, Korea. Lecture Notes in Computer Science 2945.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A statistical parser for Czech",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "505--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Jan Hajic, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meet- ing of the Association for Computational Linguis- tics, pages 505-518, University of Maryland.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Com- putational Linguistics, Santa Cruz, CA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Compu- tational Linguistics and 8th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 16-23, Madrid, Spain.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 423-430, Sapporo, Japan.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Japanese dependency analysis based on support vector machines",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Joint Sigdat Conference On Empirical Methods In Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and Yuji Matsumoto. 2000. Japanese dependency analysis based on support vector ma- chines. In Joint Sigdat Conference On Empirical Methods In Natural Language Processing and Very Large Corpora, Hong Kong.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Japanese dependency analysis using cascaded chunking",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Sixth Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Sixth Conference on Natural Language Learning, Taipei, Taiwan.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pseudoprojective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- projective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 99-106, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Memory-based dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "8th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In 8th Confer- ence on Computational Natural Language Learning, Boston, Massachusetts.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An efficient algorithm for projective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 8th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "23--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2003. An efficient algorithm for pro- jective dependency parsing. In Proceedings of 8th International Workshop on Parsing Technologies, pages 23-25, Nancy, France, April.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building and Exploiting Syntactically-annotated Corpora",
"authors": [
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "Bilge",
"middle": [],
"last": "Say",
"suffix": ""
},
{
"first": "Dilek Zeynep",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemal Oflazer, Bilge Say, Dilek Zeynep Hakkani-T\u00fcr, and G\u00f6khan T\u00fcr. 2003. Building a Turkish tree- bank. In Anne Abeille, editor, Building and Exploit- ing Syntactically-annotated Corpora. Kluwer Acad- emic Publishers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Two-level description of Turkish morphology",
"authors": [
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
}
],
"year": 1994,
"venue": "Literary and Linguistic Computing",
"volume": "9",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemal Oflazer. 1994. Two-level description of Turk- ish morphology. Literary and Linguistic Comput- ing, 9(2).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependency parsing with an extended finite-state approach",
"authors": [
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemal Oflazer. 2003. Dependency parsing with an extended finite-state approach. Computational Lin- guistics, 29(4).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Backward beam search algorithm for dependency analysis of Japanese",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2000,
"venue": "17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "754--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Sekine, Kiyotaka Uchimoto, and Hitoshi Isa- hara. 2000. Backward beam search algorithm for dependency analysis of Japanese. In 17th Inter- national Conference on Computational Linguistics, pages 754 -760, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "8th International Workshop of Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statis- tical dependency analysis with support vector ma- chines. In 8th International Workshop of Parsing Technologies, Nancy, France.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Dependency Relations for a Turkish and an English sentence",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Dependency Links and IGs not orthographic words.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Dependency links in an example Turkish sentence.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Figure 4: Sentence Structure",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Bu eskiev+de +ki g\u00fcl+\u00fcn b\u00f6yle b\u00fcy\u00fc +me+si herkes+i \u00e7ok etkile+di",
"html": null,
"content": "<table><tr><td/><td>Det</td><td/><td/><td/><td>Subj</td><td/><td/><td>Subj</td><td/><td/></tr><tr><td/><td>Mod</td><td/><td colspan=\"2\">Mod</td><td colspan=\"2\">Mod</td><td/><td/><td colspan=\"2\">Obj Mod</td></tr><tr><td>b u</td><td>eski</td><td>ev</td><td>+Adj</td><td>g\u00fcl</td><td>b\u00f6yle</td><td>b\u00fcy\u00fc</td><td>+Noun</td><td>herkes</td><td>\u00e7ok</td><td>etkile</td></tr><tr><td>+Det</td><td>+Adj</td><td>+Noun</td><td/><td>+Noun</td><td>+A dv</td><td>+Verb</td><td>+Inf</td><td>+Pron</td><td>+Adv</td><td>+Verb</td></tr><tr><td/><td/><td>+A3sg</td><td/><td>+A3sg</td><td/><td/><td>+A3sg</td><td>+A3pl</td><td/><td>+Past</td></tr><tr><td/><td/><td>+Pnon +Loc</td><td/><td>+Pnon +Gen</td><td/><td/><td>+P3sg +Nom</td><td>+Acc +Pnon</td><td/><td>+A3sg</td></tr><tr><td>This</td><td>old</td><td colspan=\"2\">house-at+that-is</td><td>rose's</td><td>such</td><td colspan=\"2\">grow +ing</td><td>everyone</td><td>very</td><td>impressed</td></tr><tr><td colspan=\"7\">Such growing of the rose in this old house impressed everyone very much.</td><td/><td/><td/><td/></tr></table>",
"num": null
}
}
}
}