ACL-OCL / Base_JSON /prefixD /json /disrpt /2021.disrpt-1.4.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:40:29.602446Z"
},
"title": "Delexicalised Multilingual Discourse Segmentation for DISRPT 2021 and Tense, Mood, Voice and Modality Tagging for 11 Languages",
"authors": [
{
"first": "Tillmann",
"middle": [],
"last": "D\u00f6nicke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humanities University of G\u00f6ttingen",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our participating system for the Shared Task on Discourse Segmentation and Connective Identification across Formalisms and Languages. Key features of the presented approach are the formulation as a clause-level classification task, a languageindependent feature inventory based on Universal Dependencies grammar, and compositeverb-form analysis. The achieved F1 is 92% for German and English and lower for other languages. The paper also presents a clauselevel tagger for grammatical tense, aspect, mood, voice and modality in 11 languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our participating system for the Shared Task on Discourse Segmentation and Connective Identification across Formalisms and Languages. Key features of the presented approach are the formulation as a clause-level classification task, a languageindependent feature inventory based on Universal Dependencies grammar, and compositeverb-form analysis. The achieved F1 is 92% for German and English and lower for other languages. The paper also presents a clauselevel tagger for grammatical tense, aspect, mood, voice and modality in 11 languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Despite the important role of discourse segmentation for natural language processing (NLP), there is no clear-cut definition of what a discourse segment is. Degand and Simon (2009) determine the boundaries of discourse segments as the intersection of clause boundaries and prosodic boundaries, which means specifically that a discourse segment spans one or several clauses (clauses as minimal discourse segments had been proposed in preceding works, e.g. Mann and Thompson (1988) ). We follow this approach and view discourse segmentation as a binary classification problem that predicts for a clause whether it is the start of a new discourse segment. Working with only text makes it impossible to fully implement Degand and Simon (2009) 's approach and include features that capture prosody and prosodic change. Instead, we represent clauses as morphosyntactic feature structures that capture grammatical roles (subject, object etc.), verbal categories and clause connectives, believing that the use of pronouns, the change of tense, aspect and mood, the presence of conjunctions and other linguistic features also signal segment boundaries.",
"cite_spans": [
{
"start": 157,
"end": 180,
"text": "Degand and Simon (2009)",
"ref_id": "BIBREF9"
},
{
"start": 455,
"end": 479,
"text": "Mann and Thompson (1988)",
"ref_id": "BIBREF18"
},
{
"start": 715,
"end": 738,
"text": "Degand and Simon (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The shared task provides discourse-segmented treebanks for 11 languages. All datasets exist in the Universal Dependencies (UD) format (Nivre Dataset Sents Conn. Delex. WO deu.rst.pcc 2, 193 et al., 2016) . UD grammar (UDG) builds on the idea that all natural languages can be described by a unique inventory of word categories and grammatical rules. Treebanks annotated in UDG thus share the same part-of-speech (POS) tags, morphological features (MFs) and dependency relations (DepRels), which encourages the development of multilingual applications. Things that still significantly differ between languages are the surface forms of words (obviously), the presence/absence of MFs and the order of words and constituents. To alleviate these dissimilarities, we will view sentences as delexicalised, unordered trees and assimilate morphosyntactic features between languages.",
"cite_spans": [
{
"start": 134,
"end": 185,
"text": "(Nivre Dataset Sents Conn. Delex. WO deu.rst.pcc 2,",
"ref_id": null
},
{
"start": 186,
"end": 189,
"text": "193",
"ref_id": null
},
{
"start": 190,
"end": 203,
"text": "et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Data and Task Table 1 gives an overview of the available data. There are 16 datasets for 11 languages, but the number of sentences for each dataset varies greatly, from 0.5k in spa.rst.sctb to 48.6k in eng.pdtb.pdtb.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 25,
"text": "Task Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In 13 of the datasets, only discourse segments are annotated: a token which is the begin of a new discourse segment is labelled with BeginSeg=Yes. In the remaining three datasets, the full discourse connective is annotated: a token which is the begin of a new discourse segment/connective is labelled with Seg=B-Conn and subsequent tokens that are part of the connective are labelled with Seg=I-Conn. The task for both annotation schemes is to identify the starts of discourse segments/connectives. In addition, the full connective should be identified for the latter scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Five datasets are only made available in a delexicalised format to participants without a Linguistic Data Consortium membership.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The following subsections briefly describe how distinct syntactic units are represented in UDG and what features are extracted for the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Universal Morphosyntactic Features",
"sec_num": "3"
},
{
"text": "Clauses can be extracted from a UDG tree by \"cutting\" specific clause-marking DepRels. These are: root, csubj, ccomp, xcomp, acl, advcl, parataxis, list, vocative and discourse, as well as conj if its head is itself governed by a clause-marking DepRel (cf. D\u00f6nicke, 2020 ). 1 Figure 1 shows an example sentence with three clauses, governed by discourse, root and xcomp, respectively. The first two clauses are the starts of a new discourse segment. We handle punctuation at clause boundaries separately. In the example, the comma (,) and the period (.) are stored as preceding and succeeding punctuation of the clause I 'll try.",
"cite_spans": [
{
"start": 257,
"end": 270,
"text": "D\u00f6nicke, 2020",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Clauses",
"sec_num": "3.1"
},
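The clause-cutting rule just described can be sketched in a few lines of Python. The token representation (dicts with CoNLL-U-style 'id', 'head' and 'deprel' fields) and all function names are our own illustration, not the paper's implementation:

```python
# Clause segmentation by "cutting" clause-marking DepRels, as described
# above. Token dicts with CoNLL-U-style fields are an assumption here.
CLAUSE_DEPRELS = {"root", "csubj", "ccomp", "xcomp", "acl", "advcl",
                  "parataxis", "list", "vocative", "discourse"}

def clause_roots(tokens):
    """Return the ids of tokens whose incoming DepRel opens a clause."""
    deprel_of = {t["id"]: t["deprel"] for t in tokens}
    roots = []
    for t in tokens:
        if t["deprel"] in CLAUSE_DEPRELS:
            roots.append(t["id"])
        elif t["deprel"] == "conj" and deprel_of.get(t["head"]) in CLAUSE_DEPRELS:
            # conj is clause-marking only if its head is itself governed
            # by a clause-marking DepRel
            roots.append(t["id"])
    return roots

# Toy version of the Figure 1 sentence ("OK, I'll try to send an email."):
toks = [
    {"id": 1, "head": 2, "deprel": "discourse"},  # OK
    {"id": 2, "head": 0, "deprel": "root"},       # try
    {"id": 3, "head": 2, "deprel": "xcomp"},      # send
]
print(clause_roots(toks))  # the three clause roots
```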
{
"text": "From a clause, we extract the following features: root token's DepRel and POS tag; preceding punctuation; succeeding punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clauses",
"sec_num": "3.1"
},
{
"text": "Noun phrases (NPs) realise grammatical roles within a clause. Like clauses, they can be ex- tracted from a UDG tree by cutting specific Dep-Rels. These are: nsubj, obj, iobj, obl and nmod. In Figure 1 , the second clause contains the subject NP I, and the third clause contains the object NP an email. The morphological feature structure (MFS) for each individual word (as given in the data) is shown in (1) and (2), respectively. \uf8ee",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NPs",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8ef \uf8ef \uf8f0 CASE Nom NUMBER Sing PERSON 1 PRONTYPE Prs \uf8f9 \uf8fa \uf8fa \uf8fb (1) DEFINITE Ind PRONTYPE Art NUMBER Sing",
"eq_num": "(2)"
}
],
"section": "NPs",
"sec_num": "3.2"
},
{
"text": "NP-level features are obtained by unifying the MFSs of the involved words into a single feature structure. As a grammatical rule, Case, Person, Number and Gender have to agree for all words within an NP. Sometimes, this rule is violated (in the data) by compound nouns like internet problems where the nouns differ in Number (singular vs. plural). Therefore, we take the agreement features only from the NP's root token; all other features are taken from all words (and are allowed to have multiple values). For a proper handling of analytic languages such as Chinese, which tend to mark features not by morphemes but by particles, we introduce a rule for particles that we apply to an NP's root token w before unifying features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPs",
"sec_num": "3.2"
},
{
"text": "Particle Rule If w has any particles (i.e. dependents with the POS tag PART), move all particles' features to w and delete the particles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPs",
"sec_num": "3.2"
},
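The Particle Rule and the NP-level unification described above can be sketched as follows; the token dicts and helper names are our illustration, not the authors' code:

```python
# A sketch of NP-level feature unification plus the Particle Rule from
# Section 3.2 (data layout is an assumption).
AGREEMENT = {"Case", "Person", "Number", "Gender"}

def particle_rule(root, words):
    """Move the features of PART dependents onto the root token and
    delete the particles."""
    kept = []
    for w in words:
        if w is not root and w.get("upos") == "PART":
            root["feats"].update(w["feats"])
        else:
            kept.append(w)
    return kept

def np_features(words, root):
    """Agreement features are taken from the NP's root token only; all
    other features are collected from every word and may therefore hold
    multiple values."""
    words = particle_rule(root, words)
    feats = {k: {v} for k, v in root["feats"].items() if k in AGREEMENT}
    for w in words:
        for k, v in w["feats"].items():
            if k not in AGREEMENT:
                feats.setdefault(k, set()).add(v)
    return feats

# "an email" from Figure 1: Number comes from the root noun,
# Definite/PronType are collected from the article.
an = {"upos": "DET", "feats": {"Definite": "Ind", "PronType": "Art", "Number": "Sing"}}
email = {"upos": "NOUN", "feats": {"Number": "Sing"}}
print(np_features([an, email], root=email))
```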
{
"text": "In Figure 2 , for example, the particle de [\u7684] has the feature CASE Gen , which is moved to the governing noun l\u00e8ix\u00edng [\u7c7b\u578b] . From each NP in a clause, we extract: root token's DepRel and POS tag; agreement features: Case, Person, Number, Gender; other nominal features: Degree, Definite, Animacy; lexical features: PronType, NumType, Poss, Reflex, Foreign, Abbr, Typo. 2 To make the features NPspecific, we prefix every feature with the root relation, e.g. NSUBJ_CASE Nom , assuming that a clause usually contains only one NP per DepRel.",
"cite_spans": [
{
"start": 119,
"end": 123,
"text": "[\u7c7b\u578b]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "NPs",
"sec_num": "3.2"
},
{
"text": "The composite verb in a clause consists of the main verb and its accompanying full, light, auxiliary and modal verbs as well as verbal particles. (Since we do not distinguish a simple verb form (e.g. try) from a compound verb form (e.g. will try), we use the term \"composite verb\" for all cases.) In UDG, we define these as tokens with the POS tag VERB or AUX and subordinate tokens with the POS tag PART and/or the DepRel compound. In Figure 1 , the second clause contains the composite verb 'll try, and the third clause contains the composite verb to send. The MFSs of 'll try (as given in the data) are shown in (3).",
"cite_spans": [],
"ref_spans": [
{
"start": 436,
"end": 444,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Composite Verbs",
"sec_num": "3.3"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "Unfortunately, MFs in the datasets are far from complete; the verbs in (3) are only labelled with VerbForm but not with the other verbal features: Aspect, Mood, Tense, Voice.-An issue that we will take up again in Section 4. For the sake of illustration, we now assume that the MFs are complete, as shown in (4). Note that English finite verbs do not mark Aspect and Voice at the morphological level and English infinitives do not have any inflectional features (both properties differ in other languages).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "2 Explanations and possible values for all of these features can be found at https://universaldependencies.org/u/feat/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "\uf8ee \uf8f0 TENSE Pres MOOD Ind VERBFORM Fin \uf8f9 \uf8fb VERBFORM Inf (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "Combining the MFSs of the individual words into a single feature structure is not as easily possible as for NPs since there are no linguistic unification/agreement rules amongst the words in a composite verb as they exist for NPs. A simple method for feature extraction would still be to use MFs and prefix them by the POS tag of the corresponding word (and allowing multiple values if there are more than one words with the same POS tag). The morphosyntactic feature structure (MSFS) resulting from (4) is shown in (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "\uf8ee \uf8ef \uf8ef \uf8f0 AUX_MOOD Ind AUX_TENSE Pres AUX_VERBFORM Fin VERB_VERBFORM Inf \uf8f9 \uf8fa \uf8fa \uf8fb (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "However, grammaticalised composite verb constructions are quite different for the languages of the world (and also for those in the shared task). Another way to represent (4) is as the grammatical feature structure (GFS) in (6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 ASPECT Imp MOOD Ind TENSE Fut VERBFORM Fin VOICE Act \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb",
"eq_num": "(6)"
}
],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "Arriving at grammatical features (GFs) is a complex task on its own, which is why we describe the procedure separately in Section 4. Note that the structure in (6) includes TENSE Fut , since will try is grammatically future tense, whereas (5) only includes AUX_TENSE Pres because of the morphological present tense of will. GFs assimilate universal clause representations in such that they encode which features are expressed by a composite verb and not how the verbs are composed. For example, most languages have grammatical future tense, but in some languages (e.g. English) future tense is only marked grammatically whereas in others (e.g. Basque) it is also marked morphologically. We thus assume that GFSs show a greater similarity between languages than MSFSs. Note, however, that GFSs still exhibit differences between languages, because not all languages have parallel grammaticalised constructions. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VERBFORM Fin VERBFORM Inf",
"sec_num": null
},
{
"text": "Some words are neither part of an NP nor of the composite verb. If these words are clause-level, i.e. directly governed by the clause's root token, we call them \"free discourse elements\". These elements comprise e.g. adverbs, complementisers and conjunctions, and are thus very interesting for the task of discourse segmentation. Therefore, we extract DepRel and POS tag from every free discourse element. As for NP-level features, we prefix every feature with the element's DepRel, e.g. MARK_POS SCONJ .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Free Discourse Elements",
"sec_num": "3.4"
},
{
"text": "When vectorising a document D = [c 1 , . . . , c n ], we get clause vectors c 1 , . . . , c n , which we then concatenate to context-sensitive vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": "X D = [ x 1 , . . . , x n ] using a window of 3 clauses: x i = c i\u22121 \u2022 c i \u2022 c i+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": ". For the context clauses c i\u22121 and c i+1 , we add additional features that indicate whether the clause is in the same sentence as c i and whether the clause is directly subordinate or directly superordinate to c i . The classes corresponding to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": "X D are Y D = [y 1 , . . . , y n ] with y i \u2208 {TRUE, FALSE} (see Section 5.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
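The windowed concatenation described above can be sketched in a few lines. Zero-padding at the document boundaries is our assumption; the paper does not specify boundary handling:

```python
# Concatenate each clause vector with its left and right neighbour
# (window of 3 clauses); boundary zero-padding is an assumption.
def context_vectors(clauses):
    """clauses: list of equal-length feature vectors c_1..c_n.
    Returns x_i = c_{i-1} + c_i + c_{i+1} (list concatenation)."""
    n = len(clauses)
    dim = len(clauses[0]) if n else 0
    pad = [0] * dim
    out = []
    for i in range(n):
        left = clauses[i - 1] if i > 0 else pad
        right = clauses[i + 1] if i < n - 1 else pad
        out.append(left + clauses[i] + right)
    return out

print(context_vectors([[1], [2], [3]]))  # [[0, 1, 2], [1, 2, 3], [2, 3, 0]]
```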
{
"text": "For the documents that include discourse connectives, we create additional vectors X Conn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": "D = [ d 1 , . . . , d m ] for the connectives d 1 , . . . , d m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": "Let c j be the clause that starts with d j . To construct d j , we extract from the first 5 tokens of c j : POS tag; DepRel; index (starting at 1) of head if the head is among the first five tokens, 0 otherwise. (All features are index-specific, e.g. 1_POS INTJ .) Since not every clause contains 5 or more tokens, we further add a feature with value min{|c j |, 5}. 4 We will use these features to predict the length of the discourse connectives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
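The connective feature extraction just described can be sketched as follows. We assume 'id' is the token's position within the clause (starting at 1); all field and function names are illustrative:

```python
# Index-specific features from the first 5 tokens of a clause, as used
# to predict connective length (token layout is an assumption).
def connective_features(clause, k=5):
    window = clause[:k]
    ids = {t["id"] for t in window}
    feats = {}
    for t in window:
        feats[f'{t["id"]}_POS'] = t["upos"]
        feats[f'{t["id"]}_DEPREL'] = t["deprel"]
        # head position if the head is among the first k tokens, else 0
        feats[f'{t["id"]}_HEAD'] = t["head"] if t["head"] in ids else 0
    # clauses may be shorter than k tokens, so record min{|c_j|, k}
    feats["CLAUSE_LEN"] = min(len(clause), k)
    return feats

clause = [
    {"id": 1, "upos": "ADV", "deprel": "advmod", "head": 3},  # head outside window
    {"id": 2, "upos": "SCONJ", "deprel": "mark", "head": 1},
]
print(connective_features(clause))
```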
{
"text": "Y Conn D = [|d 1 |, . . . , |d m |] (see Section 5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vectors",
"sec_num": "3.5"
},
{
"text": "D\u00f6nicke (2020) presents an algorithm for tagging the GFs Tense, Aspect, Mood, Voice and Modality (TMVM) of a clause in German. The algorithm identifies the words that contribute to a composite verb and uses a function R that maps a bag of MFSs to a GFS, like",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "R \uf8eb \uf8ed \uf8f1 \uf8f2 \uf8f3 \uf8ee \uf8f0 LEMMA will TENSE Pres MOOD Ind VERBFORM Fin \uf8f9 \uf8fb , [VERBFORM Inf] \uf8fc \uf8fd \uf8fe \uf8f6 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "4 0.3% of the discourse segments start with a connective that is longer than 5 tokens. These connectives are ignored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "= \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 ASPECT Imp MOOD Ind TENSE Fut VERBFORM Fin VOICE Act \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "where R relies on a comprehensive table of all composite verb constructions (i.e. the complete conjugation table of the language). Note that only the lemmas of auxiliary verbs are relevant for the algorithm since R does not depend on the main verb. In addition to a list of auxiliary verbs, a list of modal verbs is required. 5 Algorithm 1 shows an updated version of the original algorithm that has been modified to work with a broader range of languages, specifically the languages in the shared task. In the following, the algorithm is briefly described, with a focus on the adaptions made for multiple languages (numbers in parentheses refer to lines in the pseudocode); for further explanations see D\u00f6nicke (2020) .",
"cite_spans": [
{
"start": 704,
"end": 718,
"text": "D\u00f6nicke (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "Given a composite verb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "V = [v 1 , . . . , v |V | ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "in a language , first of all the particle rule from Section 3.2 is applied to all words (ll. 1-2). Considering the Chinese example in Figure 2 again, this moves the feature ASPECT Perf from the particle le [\u4e86] to its governing verb chu\u00e0ngji\u00e0n [\u521b\u5efa] and removes the particle from V . After this step, V contains only verbs. 6 The algorithm is designed for an OV language, i.e. a language in which the basic order of object (O) and verb (V) is O-V. More importantly for the algorithm, the basic order of auxiliary (Aux) and verb in OV languages is V-Aux, whereas it is Aux-V in VO languages (Dryer, 1992) . Thus, if the input language is a VO language (see Table 1 ), V has to be reversed before going on (ll. 3-4).",
"cite_spans": [
{
"start": 322,
"end": 323,
"text": "6",
"ref_id": null
},
{
"start": 588,
"end": 601,
"text": "(Dryer, 1992)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 654,
"end": 661,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "To counteract finite-verb movement in some languages (e.g. German and Dutch), finite verbs and non-finite verbs are selected separately (ll. 5-6) and then the finite verb is inserted at the syntactically highest position (ll. 7-9). After this step, all verbs in V should be ordered from syntactically lowest Algorithm 1: Compute features of composite verb V in language ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "1 for i = 1 to |V | do 2 particle_rule(v i ) 3 if is VO language then 4 V \u2190 [v |V | , . . . , v 1 ] 5 V f in \u2190 [finite verbs in V ] 6 V \u2190 [non-finite verbs in V ] 7 if |V f in | > 0 then 8 v f in \u2190 right-most finite verb in V f in 9 V \u2190 [v 1 , . . . , v |V | , v f in ] if |V | = 0 then return else if main verb in V then v main \u2190 right-most main verb in V else v main \u2190 left-most verb in V V \u2190 [v main , . . . , v f in ] M \u2190 [features * (v i , ) for i = 1 to |V |] for i = |V | to 1 do if v i is modal verb then 20 m i\u22121 \u2190 m i while |V | > 0 do Set v 1 to be the main verb F \u2190 \u00d7 1\u2264i\u2264|V | v i is not modal verb m |M |\u2212|V |+i if |F | i=1 |f i | j=1 |f ij | = 0 then 25 return A \u2190 {} for i = 1 to |F | do 28 A \u222a \u2190 R * (f i , ) if |A| = 0 \u2227 |V | = 1 then 30 A \u2190 m |M | if |A| > 0 then 32 A \u2190 filter(A) 33 a \u2190 combine(A) 34 a \u2190 unify_verb_form(a) 35 V modal \u2190 [modal verbs in V ] 36 V modal \u2190 unify_modals(V modal , ) 37 a \u2190 MODALITY V modal 38 return a V \u2190 [v 2 , . . . , v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "v i \u2208 V are stored in m i \u2208 M (l. 17)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": ", where m i is a set of MFSs since it is (theoretically) possible that a verb is morphologically ambiguous. However, since the MFs for each verb are given in the data, |m i | = 1 per default. 7, 8 As in the original algorithm, MFs of modal verbs overwrite those of syntactically lower verbs (ll. 18-20). All possible combinations of the involved verbs' MFSs, excluding modal verbs, are then stored in F = {f 1 , . . . , f |F | } (l. 23). In a simple case with no modal verbs and |m",
"cite_spans": [
{
"start": 192,
"end": 194,
"text": "7,",
"ref_id": null
},
{
"start": 195,
"end": 196,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "i | = 1 for all m i \u2208 M , |F | = 1 and f 1 contains the MFS of every verb v 1 , . . . , v |V | , i.e. f 1 = {m 11 , . . . , m |V | 1 }. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "Every combination f i \u2208 F is then analysed with the language-specific look-up table and the analyses (i.e. GFSs) are stored in A (ll. 26-28). 10 As mentioned in Section 3.3, a lot of MFs are missing in the data. R * treats missing features as features with wildcard values and returns all matching analyses, which means that the number of returned analyses for f i increases with the number of missing features in each f ij \u2208 f i and would become maximal if every f ij is empty. As a basic restriction, we require that at least one f ij is not empty and return an empty feature structure otherwise (ll. 24-25) .",
"cite_spans": [
{
"start": 142,
"end": 144,
"text": "10",
"ref_id": null
},
{
"start": 598,
"end": 609,
"text": "(ll. 24-25)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "In contrast to too many analyses, it is also possible that no analysis is found. In this case, the syntactically lowest verb is removed (l. 39) and the look-up is repeated (l. 21). If only one verb is left and still no analysis is found, A is set to the verb's MFSs (ll. 29-30).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "The analyses in A are then filtered (l. 32), e.g. by giving higher preference to analyses with 7 As in D\u00f6nicke (2020), we add the participle analysis to potential substitute infinitives in German. 8 We perform a small number of modifications to the MFs for cases where we think that the data is not labelled ideally. For example, some languages use VERBFORM Ger and some use TENSE Pres VERBFORM Part for very similar forms of the verb (gerunds and present participles). For this reason, the UD guidelines discourage the use of VERBFORM Ger (see https://universaldependencies.org/u/feat/VerbForm.html# Ger) and we convert it to the latter feature combination. 9 Actually, the Cartesian product yields an ordered combination [m1 1 , . . . , m |V | 1 ] but we treat it as unordered combination to be less prone to potential local verb movements.",
"cite_spans": [
{
"start": 95,
"end": 96,
"text": "7",
"ref_id": null
},
{
"start": 197,
"end": 198,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "10 The look-up tables have been created manually. For some languages, this required extensive study of composite verb constructions, and we want to acknowledge a few works that were very helpful in this process: Berro et al. 2019for Basque, Izadi and Rahimi (2015) for Persian, Babby and Brecht (1975) for Russian, Jendraschek (2011) for Turkish, and Li and Thompson (1989) for Chinese. VOICE Act and/or MOOD Ind . The remaining analyses are unified into a single GFS a, ignoring features with conflicting values (l. 33).",
"cite_spans": [
{
"start": 241,
"end": 264,
"text": "Izadi and Rahimi (2015)",
"ref_id": "BIBREF15"
},
{
"start": 278,
"end": 301,
"text": "Babby and Brecht (1975)",
"ref_id": "BIBREF1"
},
{
"start": 315,
"end": 333,
"text": "Jendraschek (2011)",
"ref_id": "BIBREF16"
},
{
"start": 351,
"end": 373,
"text": "Li and Thompson (1989)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "Since not all languages have the same types of non-finite verb forms, we normalise them as follows (l. 34):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "[VERBFORM Inf] a : a \u2190 VERBFORM Inf VERBFORM* Verb [VERBFORM Vnoun] a : a \u2190 VERBFORM Inf VERBFORM* Noun [VERBFORM Part] a : a \u2190 VERBFORM Part VERBFORM* Adj [VERBFORM Conv] a : a \u2190 VERBFORM Part VERBFORM* Adv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
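The four normalisation rules above amount to a small lookup table. The function name follows the pseudocode's unify_verb_form; the data layout is our sketch:

```python
# Normalisation of non-finite verb forms (Section 4, l. 34 of the
# algorithm) as a lookup table; dict-based GFS layout is an assumption.
NORMALISE = {
    "Inf":   ("Inf",  "Verb"),   # infinitives
    "Vnoun": ("Inf",  "Noun"),   # verbal nouns
    "Part":  ("Part", "Adj"),    # participles
    "Conv":  ("Part", "Adv"),    # converbs
}

def unify_verb_form(gfs):
    """Replace a language-specific non-finite VerbForm with a normalised
    VerbForm plus a VerbForm* category; other features are untouched."""
    vf = gfs.get("VerbForm")
    if vf in NORMALISE:
        norm, star = NORMALISE[vf]
        gfs = dict(gfs)          # do not mutate the caller's GFS
        gfs["VerbForm"] = norm
        gfs["VerbForm*"] = star
    return gfs

print(unify_verb_form({"VerbForm": "Vnoun"}))  # {'VerbForm': 'Inf', 'VerbForm*': 'Noun'}
```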
{
"text": "In a last step, we add the modal verbs to a (ll. 35-37). In D\u00f6nicke (2020) , the lemmas of the verbs are used but in our multilingual implementation, we map the lemmas to three categories of modal verbs (cf. Biber et al., 2002, p. 176) : permission/possibility/ability (POS), obligation/necessity (OBL), and volition/prediction (VOL).",
"cite_spans": [
{
"start": 60,
"end": 74,
"text": "D\u00f6nicke (2020)",
"ref_id": "BIBREF11"
},
{
"start": 208,
"end": 235,
"text": "Biber et al., 2002, p. 176)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical TMVM Tagging",
"sec_num": "4"
},
{
"text": "We expect a high interdependence between the extracted features, which is why we use decision trees for the classification of clauses. A decision tree is a statistical classification method that can both learn such complex dependencies and also visualise them in an understandable manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5"
},
{
"text": "Experiments with other classifiers, including complement Naive Bayes, random forest and multilayer perceptron, could not improve the performance over that of a simple decision tree. This suggests that the decision tree makes the best out of the available features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "5"
},
{
"text": "Given a training set X D train , we train a decision tree classifier with Gini impurity as split criterion. Since the performance of a decision tree strongly depends on its depth and leaf size, grid search is performed to select the optimal values for the maximum tree depth in {5, 10, 15, 20, 25, \u221e} and the minimum leaf size in {1, 2, 5, 10, 15, 20}. For the grid search, the development set X D dev corresponding to X D train is used for validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Segments",
"sec_num": "5.1"
},
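The grid search described above can be sketched in a classifier-agnostic way; the helper names and the callable-based interface are our illustration, not the authors' code:

```python
from itertools import product

# Hyper-parameter grid from the paper (None stands for unlimited depth)
MAX_DEPTHS = [5, 10, 15, 20, 25, None]
MIN_LEAF_SIZES = [1, 2, 5, 10, 15, 20]

def grid_search(train, dev, fit, score):
    """Fit one model per (max_depth, min_leaf_size) pair on `train` and
    keep the pair that scores best on the development set `dev`.
    `fit(train, depth, leaf)` and `score(model, dev)` are passed in so
    the sketch stays classifier-agnostic."""
    best, best_score = None, float("-inf")
    for depth, leaf in product(MAX_DEPTHS, MIN_LEAF_SIZES):
        model = fit(train, depth, leaf)
        s = score(model, dev)
        if s > best_score:
            best, best_score = (depth, leaf), s
    return best, best_score
```

With scikit-learn one would plug in, e.g., `fit=lambda tr, d, l: DecisionTreeClassifier(criterion="gini", max_depth=d, min_samples_leaf=l).fit(*tr)` and `score=lambda m, dev: m.score(*dev)`.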
{
"text": "The classifier that predicts the length of a connective is also a decision tree with Gini impurity as split criterion. We let this tree fully expand on the training set X Conn D train (maximum tree depth = \u221e; minimum leaf size = 1) since we assume that discourse connectives are like a closed class and generalising to unseen feature combinations is rarely needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Connectives",
"sec_num": "5.2"
},
{
"text": "Parsed vs. Plain As suggested in the shared task, we evaluate our systems in two main conditions: using the parsed/treebanked datasets (.conllu files) and using the plain/tokenised datasets (.tok files). We approach the second condition by preprocessing the plain datasets with spaCy (https: //spacy.io/) and training new classifiers on the processed training sets. SpaCy provides pretrained UDG models for all shared task's languages except German, Persian, Basque and Turkish. For these languages, we trained new models on the UD treebanks HDT (German), PerDT (Persian), BDT (Basque) and Kenet (Turkish) (Zeman et al., 2021) .",
"cite_spans": [
{
"start": 606,
"end": 626,
"text": "(Zeman et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Morphosyntactic vs. Grammatical In all experiments, we represent composite verbs either as morphosyntactic (M) or as grammatical (G) feature structures (as described in Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Monolingual vs. Multilingual Each system is evaluated on all 16 test sets. In the monolingual condition, we train a system on one dataset only. We further train a system on all training sets combined (ALL) as well as 16 systems on all but one training sets (CV). In the CV condition, we evaluate the system on the test set corresponding to the excluded training set. Thus, the CV condition corresponds to a scenario without training data for the test language. Tables 2 and 3 show the results in the parsed and the plain condition. Numbers are the F1 scores for discourse segmentation or connective identification, depending on the dataset. For the monolingual experiments, the highest value in each column is boldfaced. For the multilingual experiments, the higher value on each test set is underlined. In the monolingual experiments, the F1 scores for parsed data are on average 3.5% higher than those for plain data. The best result on a test set is usually achieved by the system trained on the corresponding training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 461,
"end": 475,
"text": "Tables 2 and 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The systems presented in this paper do not perform better than the best systems from DISRPT 5 7 30 33 11 14 8 8 1 1 10 10 9 14 8 8 9 12 14 15 12 14 11 12 10 10 37 37 1 1 2 zho.pdtb.cdtb 0 0 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 zho. Table 3 : Results on the plain data in %. Delexicalised datasets are excluded because they cannot be preprocessed; italicised results have been obtained by the shared task organisers.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "2019 (Zeldes et al., 2019) . A fundamental difference between previous systems and the current system is the classification approach: Whereas previous works performed a token-level classification, the current work tries a clause-level classification. The latter approach relies on the assumption that starts of discourse segments are almost always starts of clauses; and it was our mistake and maybe also bit of an unfortunate coincidence that we checked this hypothesis only for deu.rst.pcc and eng.sdrt.stac (the datasets which we mostly used for development), where indeed 95% and 97%, respectively, of the segment starts coincide with clause starts. As we can see in Table 4 , the percentage is much lower in other datasets. Since we train our decision tree to Table 4 : Percentage of segment starts that are also clause starts, and achieved recall, precision and F1 (see Table 2 ) for each dataset. F1 * 1 and F1 * 2 are the F1 scores of the individual classifiers that predict clause-initial segment starts and connective lengths, respectively. distinguish clauses where the first token is the start of a discourse segment from all other clauses (including clauses that contain starts of discourse segments at non-initial positions), the percentage sets an upper bound for the classification recall. The languages with the highest achieved precision are English (86%-99%), Portuguese (97%), German (95%) and Dutch (95%); the languages with the highest F1 are German (92%), English (74%-92%) and Dutch (90%). If only clause-initial segment starts are taken into account (F1 * 1 in Table 4 ), the F1 of the decision tree significantly increases for almost all datasets (+9% on average). The performance for determining the length of discourse connectives ranges between 89% and 97% (F2 * 1 in Table 4 ).",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Zeldes et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 671,
"end": 678,
"text": "Table 4",
"ref_id": null
},
{
"start": 765,
"end": 772,
"text": "Table 4",
"ref_id": null
},
{
"start": 876,
"end": 883,
"text": "Table 2",
"ref_id": "TABREF7"
},
{
"start": 1586,
"end": 1593,
"text": "Table 4",
"ref_id": null
},
{
"start": 1797,
"end": 1804,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "These results suggest that the clause-level approach could achieve reasonable results if segment starts would always coincide with clause starts. This precondition is, however, hard to fulfil, since there are not only different frameworks for discourse segmentation but also different notions of what a clause is. In this paper, we define clauses in terms of UDG. In practice, UD annotations are carried out by many different research groups or converted from non-UD treebanks and thus prone to inconsistencies that may also affect the annotations of clause-marking relations (e.g. de Marneffe et al., 2017) . Furthermore, a lot of the datasets in the shared task incorporate automatically created dependency trees (created by models trained on UD treebanks), which may lead to follow-up errors in the clause-splitting step. D\u00f6nicke (2020) reports an F1 of 81% for predicting clauses in a German text after preprocessing it with a spaCy model trained on the German UD treebanks. Even though this number only gives a rough estimate on how well our system identifies clauses, there is clearly room for improvement. One could also try to resolve the mismatch between segment starts and clause starts in a postprocessing step, e.g. by a second classifier that identifies the position of a segment start in a clause (similar to our connective-length classifier).",
"cite_spans": [
{
"start": 582,
"end": 607,
"text": "de Marneffe et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 825,
"end": 839,
"text": "D\u00f6nicke (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "All of the systems from DISRPT 2019 use lexical features, where the best systems (Muller et al., 2019; are recurrent neural networks. The system that is most similar to the current work is the (best) system from Bourgonje and Sch\u00e4fer (2019) , who use a random forest classifier and extract features at the token-level, e.g. surface form, POS tag, position in the sentence, succeeding punctuation mark. Like we do with clauses, they extract features from the current, the preceding and the succeeding token. For German and Basque, our clause-level, delexicalised and unordered-tree approach yields higher F1s than Bourgonje and Sch\u00e4fer (2019)'s random forest; these are, however, the only languages on which our system performs better. The motivation for not using lexical features was to create language-independent, universal representations for multilingual learning. However, lexical features potentially improve the performance in a monolingual setting. 11 11 A multilingual alternative to lexical features are semantic features, which we also experimented with in the development phase. We extracted semantic features for English verbs and their synonyms in the other languages from ConceptNet (Speer et al., 2017) , and added the features of a clause's main verb to its grammatical feature structure. (The most common semantic features are: change, contact, communication, motion, social, stative, possession, cognition, body, creation, perception, emotion.) Using these features could not improve our results. A possible reason for this is that we assigned semantic features without disambiguating verb senses and therefore a lot of verbs received a broad range of features. However, we are not aware of an existing multilingual resource for word Training on all languages does never improve over the performance of the best monolingual system. 
The results with a multilingual training set, however, might be distorted because eng.pdtb.pdtb, tur.pdtb.tdb and rus.rst.rrt constitute far larger parts in the multilingual training set than the other datasets. This is also visible in the CV experiments: when one of the large datasets is excluded, the performance drops more than when a smaller dataset is excluded. For example, the performance drops from 71% to 26% when eng.pdtb.pdtb is excluded, whereas it drops from 67% to 66% when spa.rst.sctb (the smallest dataset) is excluded. Although our system does not profit from multilingual training in the context of the shared task, it might be useful in scenarios where no training data is available for a language. For example, training and testing on spa.rst.rststb achieves an F1 of 85%, and training on other RST treebanks leads to 72%-83% on spa.rst.rststb (see Table 2 ). Note that training on the small but same-language treebank spa.rst.sctb yields 78%, whereas training on nld.rst.nldt, which is three times as large, yields 83%. Joining only some and not all datasets might improve the performance for individual languages as well. Future work on multilingual training could also experiment with balanced datasets.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Muller et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 212,
"end": 240,
"text": "Bourgonje and Sch\u00e4fer (2019)",
"ref_id": "BIBREF4"
},
{
"start": 1199,
"end": 1219,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 2723,
"end": 2730,
"text": "Table 2",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "The use of GFSs instead of MSFSs did not have a great impact on the classification performance (<1% difference on average). Inspection of the learned decision trees showed that the top-level features concern punctuation, clause types, free discourse elements and partially NPs, but features concerning the verb are less common. (As an extreme example, the decision tree for eng.sdrt.stac, see Appendix B, does not include any verbal feature.) Unexpectedly, tense, mood, voice etc. seem to be irrelevant for discourse segmentation, and so it does not matter how they are represented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "In this paper, we approached discourse segmentation as a clause-level classification task and represented clauses as delexicalised UD-based feature structures. While the approach works sufficiently on some datasets (e.g. German), the performance is generally lower than that of other approaches (cf. Zeldes et al., 2019) . A major reason for this sense disambiguation and semantic feature assignment.",
"cite_spans": [
{
"start": 300,
"end": 320,
"text": "Zeldes et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "is that, contrary to our expectation, boundaries of discourse segments do not typically fall onto clause boundaries in most datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "In the context of the shared task, we extended D\u00f6nicke (2020)'s algorithm for the grammatical analysis of composite verb forms and created the language-specific resources to run it for all 11 languages. Thus, we also contribute to the task of compound-verb analysis, which is (in contrast to morphological analysis) underrepresented in NLP. 12 However, annotating data with grammatical features and testing the algorithm goes beyond the scope of participating in the shared task and is left to future work.",
"cite_spans": [
{
"start": 341,
"end": 343,
"text": "12",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our system is available at https://gitlab.gwdg.de/ tillmann.doenicke/disrpt2021-tmvm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The three clauses from Figure 1 are The prefix clause, NP, verb or disc corresponds to the syntactic unit as described in Sections 3.1-3.4. Figure 3 : Decision tree learned on eng.sdrt.stac (using GFSs). \"TRUE\" are segment starts. The number prefixed to a feature is the offset to the current clause, e.g. feature of the preceding clause start with \"-1\".",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 35,
"text": "Figure 1 are",
"ref_id": null
},
{
"start": 140,
"end": 148,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Clause Representations",
"sec_num": null
},
{
"text": "If the head of a clause-marking DepRel is a modal verb, we do not consider its subtree as a clause because we do not want to separate modal verbs from the verbs they modify; in some treebanks, the modal verb governs the modified verb with an xcomp relation (whereas in the English treebanks, the modified verb governs the modal verb with an aux relation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Just to give an example: English has progressive aspect which German has not. The same holds for NPs: German has dative case which English has not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The algorithm differentiates between full verbs, auxiliary verbs and modal verbs. However, UDG only distinguishes full verbs (VERB) and auxiliary verbs (AUX), and modal verbs are tagged as either, depending on the language (see https: //universaldependencies.org/u/pos/AUX_.html). Therefore, a list of modal verbs is required to identify them. As a reviewer pointed out, one could also use the language-specific POS tags to identify modal verbs. In our implementation, we also need the lemmas for modal unification across languages (see text).6 As in linguistic works (e.g.Antonenko, 2008; Dobrushina, 2012), we also consider the Russian \u0431\u044b as particle although it is tagged with AUX in the Russian dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Fabricio Chalub, Shweta Chauhan, Ethan Chi, Taishi Chika, Yongseok Cho, Jinho Choi, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinkov\u00e1, Aur\u00e9lie Collomb, \u00c7agr\u0131 \u00c7\u00f6ltekin, Miriam Connor, Marine Courtin, Mihaela Cristescu, Philemon. Daniel, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Elisa Di Nuovo, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Sandra Eiche, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Toma\u017e Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Rich\u00e1rd Farkas, Mar\u00edlia Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cl\u00e1udia Freitas, Kazunori Fujita, Katar\u00edna Gajdo\u0161ov\u00e1, Daniel Galbraith, Marcos Garcia, Moa G\u00e4rdenfors, Sebastian Garza, Fabr\u00edcio Ferraz Gerardi, Kim Gerdes, Filip Ginter, Gustavo Godoy, Iakes Goenaga, Koldo Gojenola, Memduh G\u00f6k\u0131rmak, Yoav Goldberg, Xavier ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The nature of Russian subjunctive clauses",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Antonenko",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Antonenko. 2008. The nature of Russian sub- junctive clauses.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The syntax of voice in Russian",
"authors": [
{
"first": "H",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"D"
],
"last": "Babby",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brecht",
"suffix": ""
}
],
"year": 1975,
"venue": "Language",
"volume": "51",
"issue": "2",
"pages": "342--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard H. Babby and Richard D. Brecht. 1975. The syntax of voice in Russian. Language, 51(2):342- 367.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Basque and Romance: Aligning Grammars. Grammars and Sketches of the World's Languages",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1163/9789004395398"
]
},
"num": null,
"urls": [],
"raw_text": "Ane Berro, Fern\u00e1ndez Beatriz, and Jon Ortiz de Urbina, editors. 2019. Basque and Romance: Aligning Grammars. Grammars and Sketches of the World's Languages. Brill.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Longman grammar of spoken and written English. Second impression",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
},
{
"first": "Stig",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Conrad",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Finegan",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber, Stig Johansson, Geoffrey Leech, Susan Conrad, and Edward Finegan. 2002. Longman gram- mar of spoken and written English. Second impres- sion 2003.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual and cross-genre discourse unit segmentation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2714"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Bourgonje and Robin Sch\u00e4fer. 2019. Multi- lingual and cross-genre discourse unit segmentation. In Proceedings of the Workshop on Discourse Rela- tion Parsing and Treebanking 2019, pages 105-114, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A rule based method for the identification of TAM features in a PoS tagged corpus",
"authors": [
{
"first": "Narayan",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Girish Nath",
"middle": [],
"last": "Jha",
"suffix": ""
}
],
"year": 2014,
"venue": "Human Language Technology Challenges for Computer Science and Linguistics",
"volume": "",
"issue": "",
"pages": "178--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Narayan Choudhary, Pramod Pandey, and Girish Nath Jha. 2014. A rule based method for the iden- tification of TAM features in a PoS tagged cor- pus. In Human Language Technology Challenges for Computer Science and Linguistics, pages 178- 188, Cham. Springer International Publishing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Assessing the annotation consistency of the Universal Dependencies corpora",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Matias",
"middle": [],
"last": "Grioni",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth International Conference on Dependency Linguistics",
"volume": "",
"issue": "",
"pages": "108--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Matias Grioni, Jenna Kanerva, and Filip Ginter. 2017. Assessing the an- notation consistency of the Universal Dependencies corpora. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017), pages 108-115, Pisa,Italy. Link\u00f6ping Univer- sity Electronic Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "We are aware of works on Czech",
"authors": [
{
"first": "",
"middle": [],
"last": "\u017d\u00e1\u010dkov\u00e1",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "We are aware of works on Czech (\u017d\u00e1\u010dkov\u00e1 et al., 2000),",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Italian (Faro and Pavone",
"authors": [
{
"first": "(",
"middle": [],
"last": "Hindi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choudhary",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hindi (Choudhary et al., 2014), Italian (Faro and Pavone, 2015), German, French and English (Ramm et al., 2017; My- ers and Palmer, 2019).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On identifying basic discourse units in speech: theoretical and empirical issues",
"authors": [
{
"first": "Liesbeth",
"middle": [],
"last": "Degand",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"Catherine"
],
"last": "Simon",
"suffix": ""
}
],
"year": 2009,
"venue": "Discours",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.4000/discours.5852"
]
},
"num": null,
"urls": [],
"raw_text": "Liesbeth Degand and Anne Catherine Simon. 2009. On identifying basic discourse units in speech: theoreti- cal and empirical issues. Discours [En ligne], 4.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Subjunctive complement clauses in Russian",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Dobrushina",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "36",
"issue": "",
"pages": "121--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Dobrushina. 2012. Subjunctive complement clauses in Russian. Russian linguistics, 36(2):121- 156.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Clause-level tense, mood, voice and modality tagging for German",
"authors": [
{
"first": "Tillmann",
"middle": [],
"last": "D\u00f6nicke",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.18653/v1/2020.tlt-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Tillmann D\u00f6nicke. 2020. Clause-level tense, mood, voice and modality tagging for German. In Pro- ceedings of the 19th International Workshop on Tree- banks and Linguistic Theories, pages 1-17, D\u00fcssel- dorf, Germany. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Greenbergian word order correlations. Language",
"authors": [
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "68",
"issue": "",
"pages": "81--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew S. Dryer. 1992. The Greenbergian word order correlations. Language, 68:81-138.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Refined tagging of complex verbal phrases for the Italian language",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Faro",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Pavone",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Prague Stringology Conference",
"volume": "",
"issue": "",
"pages": "132--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Faro and Arianna Pavone. 2015. Refined tag- ging of complex verbal phrases for the Italian lan- guage. In Proceedings of the Prague Stringology Conference 2015, pages 132-145, Czech Technical University in Prague, Czech Republic.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multilingual segmentation based on neural networks and pre-trained word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Bengoetxea",
"suffix": ""
},
{
"first": "Aitziber Atutxa",
"middle": [],
"last": "Salazar",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "Diaz De Ilarraza",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2716"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Iruskieta, Kepa Bengoetxea, Aitziber Atutxa Salazar, and Arantza Diaz de Ilarraza. 2019. Multilingual segmentation based on neural networks and pre-trained word embeddings. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 125-132, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word order of Persian and English: A processing-based analysis",
"authors": [
{
"first": "Mehri",
"middle": [],
"last": "Izadi",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Rahimi",
"suffix": ""
}
],
"year": 2015,
"venue": "Education Journal",
"volume": "4",
"issue": "1",
"pages": "37--43",
"other_ids": {
"DOI": [
"10.11648/j.edu.20150401.18"
]
},
"num": null,
"urls": [],
"raw_text": "Mehri Izadi and Maryam Rahimi. 2015. Word order of Persian and English: A processing-based analysis. Education Journal, 4(1):37-43.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A fresh look at the tenseaspect system of Turkish",
"authors": [
{
"first": "Gerd",
"middle": [],
"last": "Jendraschek",
"suffix": ""
}
],
"year": 2011,
"venue": "Language Research",
"volume": "47",
"issue": "2",
"pages": "245--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerd Jendraschek. 2011. A fresh look at the tense- aspect system of Turkish. Language Research, 47(2):245-270.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mandarin Chinese: A Functional Reference Grammar",
"authors": [
{
"first": "Charles",
"middle": [
"N"
],
"last": "Li",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles N. Li and Sandra A. Thompson. 1989. Man- darin Chinese: A Functional Reference Grammar. University of California Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rhetorical structure theory: Toward a functional theory of text organization. Text",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "8",
"issue": "",
"pages": "243--281",
"other_ids": {
"DOI": [
"10.1515/text.1.1988.8.3.243"
]
},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text, 8:243-281.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full documents",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2715"
]
},
"num": null,
"urls": [],
"raw_text": "Philippe Muller, Chlo\u00e9 Braud, and Mathieu Morey. 2019. ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full docu- ments. In Proceedings of the Workshop on Dis- course Relation Parsing and Treebanking 2019, pages 115-124, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ClearTAC: Verb tense, aspect, and form classification using neural nets",
"authors": [
{
"first": "Skatje",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First International Workshop on Designing Meaning Representations",
"volume": "",
"issue": "",
"pages": "136--140",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3315"
]
},
"num": null,
"urls": [],
"raw_text": "Skatje Myers and Martha Palmer. 2019. ClearTAC: Verb tense, aspect, and form classification using neural nets. In Proceedings of the First Interna- tional Workshop on Designing Meaning Represen- tations, pages 136-140, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Universal Dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annotating tense, mood and voice for English, French and German",
"authors": [
{
"first": "Anita",
"middle": [],
"last": "Ramm",
"suffix": ""
},
{
"first": "Sharid",
"middle": [],
"last": "Lo\u00e1iciga",
"suffix": ""
},
{
"first": "Annemarie",
"middle": [],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anita Ramm, Sharid Lo\u00e1iciga, Annemarie Friedrich, and Alexander Fraser. 2017. Annotating tense, mood and voice for English, French and German. In Proceedings of ACL 2017, System Demonstra- tions, pages 1-6, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-first AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Thirty-first AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic tagging of compound verb groups in Czech corpora",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "\u017d\u00e1\u010dkov\u00e1",
"suffix": ""
},
{
"first": "Lubo\u0161",
"middle": [],
"last": "Popel\u00ednsk\u00fd",
"suffix": ""
},
{
"first": "Miloslav",
"middle": [],
"last": "Nepil",
"suffix": ""
}
],
"year": 2000,
"venue": "Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva \u017d\u00e1\u010dkov\u00e1, Lubo\u0161 Popel\u00ednsk\u00fd, and Miloslav Nepil. 2000. Automatic tagging of compound verb groups in Czech corpora. In Text, Speech and Dialogue, pages 115-120, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The DIS-RPT 2019 shared task on elementary discourse unit segmentation and connective detection",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
},
{
"first": "Debopam",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Erick",
"middle": [
"Galani"
],
"last": "Maziero",
"suffix": ""
},
{
"first": "Juliano",
"middle": [],
"last": "Antonio",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2713"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Zeldes, Debopam Das, Erick Galani Maziero, Ju- liano Antonio, and Mikel Iruskieta. 2019. The DIS- RPT 2019 shared task on elementary discourse unit segmentation and connective detection. In Proceed- ings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 97-104, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Amir Zeldes, Hanzhi Zhu, Anna Zhuravleva, and Rayan Ziane. 2021. Universal dependencies 2.8.1. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics",
"authors": [
{
"first": "Joel",
"middle": [
"C"
],
"last": "Wallenberg",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Wallin",
"suffix": ""
},
{
"first": "Abigail",
"middle": [],
"last": "Walsh",
"suffix": ""
},
{
"first": "Jing Xian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"North"
],
"last": "Washington",
"suffix": ""
},
{
"first": "Maximilan",
"middle": [],
"last": "Wendt",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Widmer",
"suffix": ""
},
{
"first": "Seyi",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Wir\u00e9n",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wittern",
"suffix": ""
},
{
"first": "Tsegay",
"middle": [],
"last": "Woldemariam",
"suffix": ""
},
{
"first": "Tak-Sum",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Wr\u00f3blewska",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Yako",
"suffix": ""
},
{
"first": "Kayo",
"middle": [],
"last": "Yamashita",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Yamazaki",
"suffix": ""
},
{
"first": "Chunxiao",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Koichi",
"middle": [],
"last": "Yasuoka",
"suffix": ""
},
{
"first": "Marat",
"middle": [
"M."
],
"last": "Yavrumyan",
"suffix": ""
},
{
"first": "Arife",
"middle": [
"Bet\u00fcl"
],
"last": "Yenice",
"suffix": ""
},
{
"first": "Olcay",
"middle": [
"Taner"
],
"last": "Y\u0131ld\u0131z",
"suffix": ""
},
{
"first": "Zhuoran",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Shorouq",
"middle": [],
"last": "Zahra",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
},
{
"first": "Hanzhi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Zhuravleva",
"suffix": ""
},
{
"first": "Rayan",
"middle": [],
"last": "Ziane",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Max- imilan Wendt, Paul Widmer, Seyi Williams, Mats Wir\u00e9n, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wr\u00f3blewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Arife Bet\u00fcl Yenice, Olcay Taner Y\u0131ld\u0131z, Zhuoran Yu, Zden\u011bk \u017dabokrtsk\u00fd, Shorouq Zahra, Amir Zeldes, Hanzhi Zhu, Anna Zhuravleva, and Rayan Ziane. 2021. Uni- versal dependencies 2.8.1. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Ap- plied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "\u90a3\u4e48 \u5b83 \u521b\u5efa \u4e86 \u4ec0\u4e48 \u7c7b\u578b \u7684 \u672f\u8bed \uff1f ADV PRON VERB PART PRON NOUN PART NOUN Example sentence from zho.rst.sctb's training set.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "pcc 89 89 ------53 53 69 69 71 71 57 56 82 81 67 67 62 62 71 73 63 63 ----65 eng.pdtb.pdtb --48 48 ---------------------------eng.rst.gum ----75 75 -------------------------eng.rst.rstdt ------80 80 -----------------------eng.sdrt.stac 78 83 ------67 69 60 62 62 71 55 62 79 81 68 72 58 58 72 70 59 57 ----55 eus.rst.ert 83 84 ------51 52 72 73 55 56 53 53 82 81 63 67 60 59 74 76 69 67 ----69 fas.rst.prstc 88 86 ------57 57 70 69 77 74 57 56 83 83 67 66 64 63 74 74 63 65 ----67 fra.sdrt.annodis 85 85 ------59 55 65 65 65 64 66 65 83 83 73 76 64 65 71 72 57 61 ----54 nld.rst.nldt 86 85 ------54 53 71 71 66 64 56 57 87 87 67 68 64 64 77 77 62 68 ----67 por.rst.cstn 84 85 ------54 54 68 67 64 61 60 62 84 85 77 77 65 65 74 72 62 59 ----69 rus.rst.rrt 87 87 ------48 51 70 68 61 65 56 58 80 83 68 72 71 71 73 73 64 64 ----68 spa.rst.rststb 84 82 ------43 46 68 69 57 64 55 54 79 80 69 68 63 61 80 81 68 68 ----70 spa.rst.sctb 85 82 ------51 46 65 65 59 59 53 53 81 77 64 63 60 55 77 78 69 68 ----70 tur.pdtb.tdb --------------------------41 41 ---zho.pdtb.cdtb ----------------------------",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Datasets: total number of sentences, whether discourse connectives are annotated, whether surface forms have been removed, and basic word order.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "|V | ] return \u222a \u2190 and \u2190 are augmented assignment operators for union and unification, respectively. to highest position.If V is not empty (ll. 10-11), the main verb is determined (ll.[12][13][14][15] and all syntactically lower verbs are removed from V (l. 16), because they are not relevant for TMVM tagging.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"text": "Test set deu. eng. eng. eng. eng. eus. fas. fra. nld. por. rus. spa. spa. tur. zho. zho. rst. pdtb. rst. rst. sdrt. rst. rst. sdrt. rst. rst. rst. rst. rst. pdtb. pdtb. rst. pcc pdtb gum rstdt stac ert prstc annodis nldt cstn rrt rststb sctb tdb cdtb sctb Training set",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"text": "Results on the parsed data in %. set deu. eng. eng. eng. eng. eus. fas. fra. nld. por. rus. spa. spa. tur. zho. zho. rst. pdtb. rst. rst. sdrt. rst. rst. sdrt. rst. rst. rst. rst. rst. pdtb. pdtb. rst.",
"num": null,
"html": null,
"content": "<table><tr><td>Test</td></tr></table>"
}
}
}
}