|
{ |
|
"paper_id": "H05-1002", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:34:22.390829Z" |
|
}, |
|
"title": "Data-driven Approaches for Information Structure Identification", |
|
"authors": [ |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Postolache", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Saarland", |
|
"location": { |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ivana", |
|
"middle": [], |
|
"last": "Kruijff-Korbayov\u00e1", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Saarland", |
|
"location": { |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Geert-Jan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kruijff", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper investigates automatic identification of Information Structure (IS) in texts. The experiments use the Prague Dependency Treebank which is annotated with IS following the Praguian approach of Topic Focus Articulation. We automatically detect t(opic) and f(ocus), using node attributes from the treebank as basic features and derived features inspired by the annotation guidelines. We present the performance of decision trees (C4.5), maximum entropy, and rule induction (RIPPER) classifiers on all tectogrammatical nodes. We compare the results against a baseline system that always assigns f(ocus) and against a rule-based system. The best system achieves an accuracy of 90.69%, which is a 44.73% improvement over the baseline (62.66%).", |
|
"pdf_parse": { |
|
"paper_id": "H05-1002", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper investigates automatic identification of Information Structure (IS) in texts. The experiments use the Prague Dependency Treebank which is annotated with IS following the Praguian approach of Topic Focus Articulation. We automatically detect t(opic) and f(ocus), using node attributes from the treebank as basic features and derived features inspired by the annotation guidelines. We present the performance of decision trees (C4.5), maximum entropy, and rule induction (RIPPER) classifiers on all tectogrammatical nodes. We compare the results against a baseline system that always assigns f(ocus) and against a rule-based system. The best system achieves an accuracy of 90.69%, which is a 44.73% improvement over the baseline (62.66%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Information Structure (IS) is a partitioning of the content of a sentence according to its relation to the discourse context. There are numerous theoretical approaches describing IS and its semantics (Halliday, 1967; Sgall, 1967; Vallduv\u00ed, 1990; Steedman, 2000) and the terminology used is diversesee (Kruijff-Korbayov\u00e1 and Steedman, 2003) for an overview. However, all theories consider at least one of the following two distinctions: (i) a Topic/Focus 1 distinction that divides the linguistic meaning of the sentence into parts that link the sentence content \uf731 We use the Praguian terminology for this distinction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 216, |
|
"text": "(Halliday, 1967;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 229, |
|
"text": "Sgall, 1967;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 245, |
|
"text": "Vallduv\u00ed, 1990;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 261, |
|
"text": "Steedman, 2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 339, |
|
"text": "(Kruijff-Korbayov\u00e1 and Steedman, 2003)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 563, |
|
"text": "\uf731", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "to the discourse context, and other parts that advance the discourse, i.e., add or modify information; and (ii) a background/kontrast 2 distinction between parts of the utterance which contribute to distinguishing its actual content from alternatives the context makes available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Information Structure is an important factor in determining the felicity of a sentence in a given context. Applications in which IS is crucial are textto-speech systems, where IS helps to improve the quality of the speech output (Prevost and Steedman, 1994; Moore et al., 2004) , and machine translation, where IS improves target word order, especially that of free word order languages (Stys and Zemke, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 257, |
|
"text": "(Prevost and Steedman, 1994;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 277, |
|
"text": "Moore et al., 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 409, |
|
"text": "(Stys and Zemke, 1995)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing theories, however, state their principles using carefully selected illustrative examples. Because of this, they fail to adequately explain how different linguistic dimensions cooperate to realize Information Structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we describe data-driven, machine learning approaches for automatic identification of Information Structure; we describe what aspects of IS we deal with and report results of the performance of our systems and make an error analysis. For our experiments, we use the Prague Dependency Treebank (PDT) (Haji\u010d, 1998) . PDT follows the theory of Topic-Focus Articulation (Haji\u010dov\u00e1 et al., 1998) and to date is the only corpus annotated with IS. Each node of the underlying structure of sentences in PDT is annotated with a TFA value: t(opic), differentiated in contrastive and non-contrastive, and f(ocus). Our system identifies these two TFA values automatically. We trained three different clas-\uf732 The notion 'kontrast' with a 'k' has been introduced in (Vallduv\u00ed and Vilkuna, 1998) to replace what Steedman calls 'focus', and to avoid confusion with other definitions of focus. sifiers, C4.5, RIPPER and MaxEnt using basic features from the treebank and derived features inspired by the annotation guidelines. We evaluated the performance of the classifiers against a baseline system that simulates the preprocessing procedure that preceded the manual annotation of PDT, by always assigning f(ocus), and against a rule-based system which we implemented following the annotation instructions. Our best system achieves a 90.69% accuracy, which is a 44.73% improvement over the baseline (62.66%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 325, |
|
"text": "(Haji\u010d, 1998)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 402, |
|
"text": "(Haji\u010dov\u00e1 et al., 1998)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 763, |
|
"end": 791, |
|
"text": "(Vallduv\u00ed and Vilkuna, 1998)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The organization of the paper is as follows. Section 2 describes the Prague Dependency Treebank and the Praguian approach of Topic-Focus Articulation, from two perspectives: of the theoretical definition and of the annotation guidelines that have been followed to annotate the PDT. Section 3 presents our experiments, the data settings, results and error analysis. The paper closes with conclusions and issues for future research (Section 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Prague Dependency Treebank (PDT) consists of newspaper articles from the Czech National Corpus (\u010cerm\u00e1k, 1997) and includes three layers of annotation:", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 113, |
|
"text": "(\u010cerm\u00e1k, 1997)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prague Dependency Treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. The morphological layer gives a full morphemic analysis in which 13 categories are marked for all sentence tokens (including punctuation marks).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prague Dependency Treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. The analytical layer, on which the \"surface\" syntax (Haji\u010d, 1998) is annotated, contains analytical tree structures, in which every token from the surface shape of the sentence has a corresponding node labeled with main syntactic functions like SUBJ, PRED, OBJ, ADV.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prague Dependency Treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. The tectogrammatical layer renders the deep (underlying) structure of the sentence (Sgall et al., 1986; Haji\u010dov\u00e1 et al., 1998) . Tectogrammatical tree structures (TGTSs) contain nodes corresponding only to the autosemantic words of the sentence (e.g., no preposition nodes) and to deletions on the surface level; the condition of projectivity is obeyed, i.e., no crossing edges are allowed; each node of the tree is assigned a functor such as ACTOR, PATIENT, ADDRESSEE, ORIGIN, EFFECT, the repertoire of which is very rich; elementary coreference links are annotated for pronouns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 106, |
|
"text": "(Sgall et al., 1986;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 129, |
|
"text": "Haji\u010dov\u00e1 et al., 1998)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prague Dependency Treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The tectogrammatical level of the PDT was motivated by the ever increasing need for large corpora to include not only morphological and syntactic information but also semantic and discourse-related phenomena. Thus, the tectogrammatical trees have been enriched with features indicating the information structure of sentences which is a means of showing their contextual potential. In the Praguian approach to IS, the content of the sentence is divided into two parts: the Topic is \"what the sentence is about\" and the Focus represents the information asserted about the Topic. A prototypical declarative sentence asserts that its Focus holds (or does not hold) about its Topic: Focus(Topic) or not-Focus(Topic).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The TFA definition uses the distinction between Context-Bound (CB) and Non-Bound (NB) parts of the sentence. To distinguish which items are CB and which are NB, the question test is applied, (i.e., the question for which a given sentence is the appropriate answer is considered). In this framework, weak and zero pronouns and those items in the answer which reproduce expressions present in the question (or associated to those present) are CB. Other items are NB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In example (1), (b) is the sentence under investigation, in which CB and NB items are marked. Sentence (a) is the context in which the sentence (b) is uttered, and sentence (c) is the question for which the sentence (b) is an appropriate answer:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) (a) Tom and Mary both came to John's party. It should be noted that the CB/NB distinction is not equivalent to the given/new distinction, as the pronoun \"her\" is NB although the cognitive entity, Mary, has already been mentioned in the discourse (therefore is given).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The following rules determine which lexical items (CB or NB) belong to the Topic or to the Focus of the sentence (Haji\u010dov\u00e1 et al., 1998; Haji\u010dov\u00e1 and Sgall, 2001 ):", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 136, |
|
"text": "(Haji\u010dov\u00e1 et al., 1998;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 161, |
|
"text": "Haji\u010dov\u00e1 and Sgall, 2001", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. The main verb and any of its direct dependents belong to the Focus if they are NB;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2. Every item that does not depend directly on the main verb and is subordinated to a Focus element belongs to the Focus (where \"subordinated to\" is defined as the irreflexive transitive closure of \"depend on\");", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "3. If the main verb and all its dependents are CB, then those dependents d i of the verb which have subordinated items s m that are NB are called 'proxi foci'; the items s m together with all items subordinated to them belong to the Focus (i, m > 1);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "4. Every item not belonging to the Focus according to 1 -3 belongs to the Topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Applying these rules for the sentence (b) in example (1) we find the Topic and the Focus of the sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "[John invited] T opic [only her] F ocus .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "It is worth mentioning that although most of the time, CB items belong to the Topic and NB items belong to the Focus (as it happens in our example too), there may be cases when the Focus contains some NB items and/or the Topic contains some CB items. Figure 1 shows such configurations: in the top-left corner the tectogrammatical representation of sentence (1) (b) is presented together with its Topic-Focus partitioning. The other three configurations are other possible tectogrammatical trees with their Topic-Focus partitionings; the top-right one corresponds to the example (2), the bottom-left to (3), and bottom-right to (4).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 259, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(2) Q: Which teacher did Tom meet?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A: TomCB metCB the teacherCB of chemistryNB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(3) Q: What did he think about the teachers?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A: HeCB likedNB the teacherCB of chemistryNB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(4) Q: What did the teachers do?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A: The teacherCB of chemistryNB metNB hisCB pupilsNB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic-Focus Articulation (TFA)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Within PDT, the TFA attribute has been annotated for all nodes (including the restored ones) from the tectogrammatical level. Instructions for the assignment of the TFA attribute have been specified in Figure 1 : Topic-Focus partitionings of tectogrammatical trees. (Bur\u00e1\u0148ov\u00e1 et al., 2000) and are summarized in Table 1. These instructions are based on the surface word order, the position of the sentence stress (intonation center -IC) 3 and the canonical order of the dependents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 289, |
|
"text": "(Bur\u00e1\u0148ov\u00e1 et al., 2000)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 210, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The TFA attribute has three values:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "1. t -for non-contrastive CB items;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "2. f -for NB items;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3. c -for contrastive CB items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this paper, we do not distinguish between contrastive and non-contrastive items, considering both of them as being just t. In the PDT annotation, the notation t (from topic) and f (from focus) was chosen to be used because, as we mentioned earlier, in the most common cases and in prototypical sentences, t-items belong to the Topic and f-items to the Focus. Prior the manual annotation, the PDT corpus was preprocessed to mark all nodes with the TFA attribute of f, as it is the most common value. Then the annotators corrected the value according to the guidelines in Table 1 . Figure 2 illustrates the tectogramatical tree structure of the following sentence: \uf733 In the PDT the intonation center is not annotated. However, the annotators were instructed to use their judgement where the IC would be if they uttered the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 573, |
|
"end": 580, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 591, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "1. The bearer of the IC (typically, the rightmost child of the verb) f 2. If IC is not on the rightmost child, everything after IC t 3. A left-side child of the verb (unless it carries IC) t 4. The verb and the right children of the verb before the f-node (cf. 1) that are canonically ordered f 5. Embedded attributes (unless repeated or restored) f 6. Restored nodes t 7. Indexical expressions (j\u00e1 I, ty you, t\u011bd now, tady here), weak pronouns, pronominal expressions with a general meaning (n\u011bkdo somebody, jednou once) (unless they carry IC) t 8. Strong forms of pronouns not preceded by a preposition (unless they carry IC) t Table 1 : Annotation guidelines; IC = Intonation Center.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 630, |
|
"end": 637, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Each node is labeled with the corresponding word's lemma, the TFA attribute, and the functor attribute. For example, votrok\u016f has lemma votrok, the TFA attribute f, and the functor APP (appurtenance). In order to measure the consistency of the annotation, Interannotator Agreement has been measured (Vesel\u00e1 et al., 2004) . 4 During the annotation process, there were four phases in which parallel annotations have been performed; a sample of data was chosen and annotated in parallel by three annotators. same TFA value (be it t, c or f). Because in our experiments we do not differentiate between t and c, considering both as t, we computed, in the last row of the table, the agreement between the three annotators after replacing the TFA value c with t. 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 319, |
|
"text": "(Vesel\u00e1 et al., 2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 323, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TFA annotation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this section we present data-driven, machine learning approaches for automatic identification of Information Structure. For each tectogrammatical node we detect the TFA value t(opic) or f(ocus) (that is CB or NB). With these values one can apply the rules presented in Subsection 2.1 in order to find the Topic-Focus partitioning of each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identification of topic and focus", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our experiments use the tectogrammatical trees from The Prague Dependency Treebank 2.0. 6 Statistics of the experimental data are shown in Table 3 . Our goal is to automatically label the tectogrammatical nodes with topic or focus. We built machine learning models based on three different well known techniques, decision trees (C4.5), rule induction (RIPPER) and maximum entropy (MaxEnt), in order to find out which approach is the most suitable for our task. For C4.5 and RIPPER we use the Weka implementations (Witten and Frank, 2000) and for MaxEnt we use the openNLP package. 7 \uf735 In (Vesel\u00e1 et al., 2004) , the number of cases when the annotators disagreed when labeling t or c is reported; this allowed us to compute the t/f agreement, by disregarding this number. \uf736 We are grateful to the researchers at the Charles University in Prague for providing us the data before the PDT 2.0 official release. All our models use the same set of 35 features (presented in detail in Appendix A), divided in two types:", |
|
"cite_spans": [ |
|
{ |
|
"start": 513, |
|
"end": 537, |
|
"text": "(Witten and Frank, 2000)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 609, |
|
"text": "(Vesel\u00e1 et al., 2004)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 146, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental settings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. Basic features, consisting of attributes of the tectogrammatical nodes whose values were taken directly from the treebank annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental settings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We used a total of 25 basic features, that may have between 2 and 61 values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental settings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "2. Derived features, inspired by the annotation guidelines. The derived features are computed using the dependency information from the tectogrammatical level of the treebank and the surface order of the words corresponding to the nodes. 8 We also used lists of forms of Czech pronouns that are used as weak pronouns, indexical expressions, pronouns with general meaning, or strong pronouns. All the derived features have boolean values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 239, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental settings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The classifiers were trained on 494,759 instances (78.3%) (cf. Table 3 ) (tectogrammatical nodes) from the training set. The performance of the classifiers was evaluated on 70,323 instances (11.2%) from the evaluation set. We compared our models against a baseline system that assigns focus to all nodes (as it is the most common value) and against a deterministic, rule-based system, that implements the instructions from the annotation guidelines. Table 4 shows the percentages of correctly classified instances for our models. We also performed a \uf738 In the tectogramatical level in the PDT, the order of the nodes has been changed during the annotation process of the TFA attribute, so that all t items precede all f items. Our features use the surface order of the words corresponding to the nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 457, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "10-fold cross validation, which for C4.5 gives accuracy of 90.62%. Table 4 : Correctly classified instances (the numbers are given as percentages). * The RIPPER classifier was trained with only 40% of the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 74, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The baseline value is considerably high due to the topic/focus distribution in the test set (a similar distribution characterizes the training set as well). The rule-based system performs very poorly, although it follows the guidelines according to which the data was annotated. This anomaly is due to the fact that the intonation center of the sentence, which plays a very important role in the annotation, is not marked in the corpus, thus the rule-based system doesn't have access to this information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The results show that all three models perform much better than the baseline and the rule-based system. We used the \u03c7 \uf732 test to examine if the difference between the three classifiers is statistically significant. The C4.5 model significantly outperforms the MaxEnt model (\u03c7 \uf732 = 113.9, p < 0.001) and the MaxEnt model significantly outperforms the RIPPER model although with a lower level of confidence (\u03c7 \uf732 = 9.1, p < 0.01).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The top of the decision tree generated by C4.5 in the training phase looks like this: It is worth mentioning that the RIPPER classifier was built with only 40% of the training set (with more data, the system crashes due to insufficient memory). Interestingly and quite surprisingly, the values of all three classifiers are actually greater than the interannotator agreement which has an average of 86.42%. What is the cause of the classifiers' success? How come that they perform better than the annotators themselves? Is it because they take advantage of a large amount of training data? To answer this question we have computed the learning curves. They are shown in the figure 3, which shows that, actually, after using only 1% of the training data (4,947 instances), the classifiers already perform very well, and adding more training data improves the results only slightly. On the other hand, for RIPPER, adding more data causes a decrease in performance, and as we mentioned earlier, even an impossibility of building a classifier. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "If errors don't come from the lack of training data, then where do they come from? To answer this question we performed an error analysis. For each instance (tectogrammatical node), we considered its context as being the set of values for the features presented in Appendix A. Table 5 displays in the second column the number of all contexts. The last three columns divide the contexts in three groups:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 284, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "1. Only t -all instances having these contexts are assigned t;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "2. Only f -all instances having these contexts are assigned f;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "3. Ambiguous -some instances that have these contexts are assigned t and some other are assigned f.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The last row of the table shows the number of instances for each type of context, in the training data. Table 5 : Contexts & Instances in the training set. Table 5 shows that the source of ambiguity (and therefore of errors) stays in 4,991 contexts that correspond to nodes that have been assigned both t and f. Moreover these contexts yield the largest amount of instances (72.49%). We investigated further these ambiguous contexts and we counted how many of them correspond to a set of nodes that are mostly assigned t (#t > #f), respectively f (#t < #f), and how many are highly ambiguous (half of the corresponding instances are assigned t and the other half f (#t = #f)). The numbers, shown in Table 6 , suggest that in the training data there are 41,851 instances (8.45%) (the sum of highlighted numbers in the third row of the Table 6 ) that are exceptions, meaning they have contexts that usually correspond to instances that are assigned the other TFA value. There are two explanations for these exceptions: either they are part of the annotators disagreement, or they have some characteristics that our set of features fail to capture. Table 6 : Ambiguous contexts in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 111, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 163, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 706, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 841, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1146, |
|
"end": 1153, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "#t > #f #t = #f #t < #f #", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The error analysis led us to the idea of implementing a na\u00efve predictor. This predictor trains on the training set, and divides the contexts into five groups. Table 7 describes these five types of contexts and displays the TFA value assigned by the na\u00efve predictor for each type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "If an instance has a context of type #t = #f, we decide to assign f because this is the most common value. Also, for the same reason, new contexts in the test set that don't appear in the training set are assigned f.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The performance of the na\u00efve predictor on the evaluation set is 89.88% (correctly classified instances), a value which is significantly higher than", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the training set, instances with a context of this type are: Predicted TFA value Only t all t t Only f all f f #t > #f more t than f t #t = #f half t, half f f #t < #f more f than t f unseen not seen f Table 7 : Na\u00efve Predictor: its TFA prediction for each type of context. the one obtained by the MaxEnt and RIPPER classifiers (\u03c7 \uf732 = 30.7, p < 0.001 and respectively \u03c7 \uf732 = 73.3, p < 0.001), and comparable with the C4.5 value, although the C4.5 classifier still performs significantly better (\u03c7 \uf732 = 26.3, p < 0.001).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 212, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Context Type", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To find out whether the na\u00efve predictor would improve if we added more data, we computed the learning curve, shown in Figure 3 . Although the curve is slightly more abrupt than the ones of the other classifiers, we do not have enough evidence to believe that more data in the training set would bring a significant improvement. We calculated the number of new contexts in the development set, and although the number is high (2,043 contexts), they correspond to only 2,125 instances. This suggests that the new contexts that may appear are very rare, therefore they cannot yield a big improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 126, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Context Type", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we investigated the problem of learning Information Structure from annotated data. The contribution of this research is to show for the first time that IS can be successfuly recovered using mostly syntactic features. We used the Prague Dependency Treebank which is annotated with Information Structure following the Praguian theory of Topic Focus Articulation. The results show that we can reliably identify t(opic) and f(ocus) with over 90% accuracy while the baseline is at 62%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Issues for further research include, on the one hand, a deeper investigation of the Topic-Focus Articulation in the Prague Dependency Treebank of Czech, by improving the feature set, considering also the distinction between contrastive and noncontrastive t items and, most importantly, by investigating how we can use the t/f annotation in PDT (and respectively our results) in order to detect the Topic/Focus partitioning of the whole sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We also want to benefit from our experience with the Czech data in order to create an English corpus annotated with Information Structure. We have already started to exploit a parallel English-Czech corpus, in order to transfer to the English version the topic/focus labels identified by our systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "4" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "In this appendix we provide a full list of the feature names and the values they take (a feature for MaxEnt being a combination of the name, value and the prediction).BASIC FEATURE POSSIBLE VALUES nodetype complex, atom, dphr, list, qcomplex is generated true, false functor ACT, LOC, DENOM, APP, PAT, DIR1, MAT, RSTR, THL, TWHEN, REG, CPHR, COMPL, MEANS, ADDR, CRIT, TFHL, BEN, ORIG, DIR3, TTILL ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 404, |
|
"text": "FEATURE POSSIBLE VALUES nodetype complex, atom, dphr, list, qcomplex is generated true, false functor ACT, LOC, DENOM, APP, PAT, DIR1, MAT, RSTR, THL, TWHEN, REG, CPHR, COMPL, MEANS, ADDR, CRIT, TFHL, BEN, ORIG, DIR3, TTILL", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Appendix A", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tagging of very large corpora: Topic-Focus Articulation", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Bur\u00e1\u0148ov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Bur\u00e1\u0148ov\u00e1, Eva Haji\u010dov\u00e1, and Petr Sgall. 2000. Tagging of very large corpora: Topic-Focus Articulation. In Proceedings of the 18th International Confer- ence on Computational Linguistics (COLING 2000), pages 139-144.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Building a syntactically annotated corpus: The Prague Dependency Treebank", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "Issues of valency and Meaning. Studies in Honor of Jarmila Panevov\u00e1. Karolinum", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d. 1998. Building a syntactically annotated corpus: The Prague Depen- dency Treebank. In Eva Haji\u010dov\u00e1, editor, Issues of valency and Meaning. Studies in Honor of Jarmila Panevov\u00e1. Karolinum, Prague.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Topic-focus and salience", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL 2001)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "268--273", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Haji\u010dov\u00e1 and Petr Sgall. 2001. Topic-focus and salience. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL 2001), pages 268-273, Toulose, France.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Topic-focus articulation, tripartite structures, and semantic content", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Partee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Studies in Linguistics and Philosophy, number 71", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Haji\u010dov\u00e1, Barbara Partee, and Petr Sgall. 1998. Topic-focus articulation, tripartite structures, and semantic content. In Studies in Linguistics and Phi- losophy, number 71. Dordrecht: Kluwer.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Notes on transitivity and theme in english, part ii", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Halliday", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Journal of Linguistic", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "199--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Halliday. 1967. Notes on transitivity and theme in english, part ii. Journal of Linguistic, (3):199-244.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discourse and Information Structure", |
|
"authors": [ |
|
{

"first": "Ivana",

"middle": [],

"last": "Kruijff-Korbayov\u00e1",

"suffix": ""

},
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Logic, Language and Information", |
|
"volume": "", |
|
"issue": "12", |
|
"pages": "249--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivana Kruijff-Korbayov\u00e1 and Mark Steedman. 2003. Discourse and Information Structure. Journal of Logic, Language and Information, (12):249-259.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Producing Contextually Appropriate Intonation in an Information-State Based Dialog System", |
|
"authors": [ |
|
{

"first": "Ivana",

"middle": [],

"last": "Kruijff-Korbayov\u00e1",

"suffix": ""

},

{

"first": "Stina",

"middle": [],

"last": "Ericsson",

"suffix": ""

},

{

"first": "Kepa",

"middle": [

"J."

],

"last": "Rodr\u00edguez",

"suffix": ""

},

{

"first": "Elena",

"middle": [],

"last": "Karagjosova",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Proceeding of European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivana Kruijff-Korbayov\u00e1, Stina Erricson, Kepa J. Rodr\u00edgues, and Elena Karagjosova. 2003. Producing Contextually Appropriate Intonation in an Information-State Based Dialog System. In Proceeding of European Chapter of the Association for Computational Linguistics, Budapest, Hungary.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Generating Tailored, Comparative Description in Spoken Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Johanna", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ellen" |
|
], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Seventeenth International Florida Artificial Intelligence Research Sociey Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johanna Moore, Mary Ellen Foster, Oliver Lemon, and Michael White. 2004. Generating Tailored, Comparative Description in Spoken Dialogue. In Pro- ceedings of the Seventeenth International Florida Artificial Intelligence Re- search Sociey Conference.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Information Based Intonation Synthesis", |
|
"authors": [ |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Prevost", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the ARPA Workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott Prevost and Mark Steedman. 1994. Information Based Intonation Synthe- sis. In Proceedings of the ARPA Workshop on Human Language Technology, Princeton, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The Meaning of the Sentence in Its Semantic and Pragmatic Aspects", |
|
"authors": [ |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jarmila", |
|
"middle": [], |
|
"last": "Panevov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petr Sgall, Eva Haji\u010dov\u00e1, and Jarmila Panevov\u00e1. 1986. The Meaning of the Sen- tence in Its Semantic and Pragmatic Aspects. Reidel, Dordrecht.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Functional sentence perspective in a generative description", |
|
"authors": [ |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Prague Studies in Mathematical Linguistics", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "203--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petr Sgall. 1967. Functional sentence perspective in a generative description. Prague Studies in Mathematical Linguistics, (2):203-225.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Information Structure and the syntax-phonology interface", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "", |
|
"issue": "34", |
|
"pages": "649--689", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Steedman. 2000. Information Structure and the syntax-phonology inter- face. Linguistic Inquiry, (34):649-689.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Incorporating Discourse Aspects in English-Polish MT: Towards Robust Implementation", |
|
"authors": [ |
|
{ |
|
"first": "Malgorzata", |
|
"middle": [], |
|
"last": "Stys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Zemke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Recent Advances in NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malgorzata Stys and Stefan Zemke. 1995. Incorporating Discourse Aspects in English-Polish MT: Towards Robust Implementation. In Recent Advances in NLP, Velingrad, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "On rheme and kontrast", |
|
"authors": [ |
|
{ |
|
"first": "Enrich", |
|
"middle": [], |
|
"last": "Vallduv\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Vilkuna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The Limits of Syntax", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Enrich Vallduv\u00ed and Maria Vilkuna. 1998. On rheme and kontrast. In P. Culicover and L. McNally, editors, Syntax and Semantics Vol 29: The Limits of Syntax. Academic Press, San Diego.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Enrich Vallduv\u00ed. 1990. The information component", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Enrich Vallduv\u00ed. 1990. The information component. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Czech National Corpus: A Case in Many Contexts", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Franti\u0161ek\u010derm\u00e1k", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "International Journal of Corpus Linguistics", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "181--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franti\u0161ek\u010cerm\u00e1k. 1997. Czech National Corpus: A Case in Many Contexts. International Journal of Corpus Linguistics, (2):181-197.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Annotators' Agreement: The Case of Topic-Focus Articulation", |
|
"authors": [ |
|
{ |
|
"first": "Kate\u0159ina", |
|
"middle": [], |
|
"last": "Vesel\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Havelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kate\u0159ina Vesel\u00e1, Ji\u0159\u00ed Havelka, and Eva Haji\u010dova. 2004. Annotators' Agreement: The Case of Topic-Focus Articulation. In Proceedings of the Language Re- sources and Evaluation Conference (LREC 2004).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Practical Machine Learning Tools and Techniques with Java Implementations", |
|
"authors": [ |
|
{

"first": "Ian",

"middle": [

"H."

],

"last": "Witten",

"suffix": ""

},

{

"first": "Eibe",

"middle": [],

"last": "Frank",

"suffix": ""

}
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian H. Witten and Eibe Frank. 2000. Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "(b) JohnCB invitedCB onlyNB herNB. (c) Whom did John invite?", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "did not shake the self-confidence of those bastards'.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Tectogramatical tree annotated with t/f.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Learning curves for C4.5 (+), RIPPER(\u00d7), MaxEnt( * ) and a na\u00efve predictor (2) (introduced in Section 3.3).", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "Interannotator Agreement for TFA assignment in PDT 2.0.The agreement for each of the four phases, as well as an average agreement, is shown inTable 2. The second row of the table displays the percentage of nodes for which all three annotators assigned the", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"text": "PDT data: Statistics for the training, development and evaluation sets.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |