{
"paper_id": "C04-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:20:08.174895Z"
},
"title": "Playing the Telephone Game: Determining the Hierarchical Structure of Perspective and Speech Expressions",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Breck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "News articles report on facts, events, and opinions with the intent of conveying the truth. However, the facts, events, and opinions appearing in the text are often known only secondor third-hand, and as any child who has played \"telephone\" knows, this relaying of facts often garbles the original message. Properly understanding the information filtering structures that govern the interpretation of these facts, then, is critical to appropriately analyzing them. In this work, we present a learning approach that correctly determines the hierarchical structure of information filtering expressions 78.30% of the time.",
"pdf_parse": {
"paper_id": "C04-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "News articles report on facts, events, and opinions with the intent of conveying the truth. However, the facts, events, and opinions appearing in the text are often known only secondor third-hand, and as any child who has played \"telephone\" knows, this relaying of facts often garbles the original message. Properly understanding the information filtering structures that govern the interpretation of these facts, then, is critical to appropriately analyzing them. In this work, we present a learning approach that correctly determines the hierarchical structure of information filtering expressions 78.30% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Newswire text has long been a primary target for natural language processing (NLP) techniques such as information extraction, summarization, and question answering (e.g. MUC (1998) ; NIS (2003) ; DUC (2003) ). However, newswire does not offer direct access to facts, events, and opinions; rather, journalists report what they have experienced, and report on the experiences of others. That is, facts, events, and opinions are filtered by the point of view of the writer and other sources. Unfortunately, this filtering of information through multiple sources (and multiple points of view) complicates the natural language interpretation process because the reader (human or machine) must take into account the biases introduced by this indirection. It is important for understanding both newswire and narrative text (Wiebe, 1994) , therefore, to appropriately recognize expressions of point of view, and to associate them with their direct and indirect sources.",
"cite_spans": [
{
"start": 174,
"end": 180,
"text": "(1998)",
"ref_id": "BIBREF12"
},
{
"start": 183,
"end": 193,
"text": "NIS (2003)",
"ref_id": null
},
{
"start": 196,
"end": 206,
"text": "DUC (2003)",
"ref_id": null
},
{
"start": 816,
"end": 829,
"text": "(Wiebe, 1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper introduces two kinds of expression that can filter information. First, we define a perspective expression to be the minimal span of text that denotes the presence of an explicit opinion, evaluation, emotion, speculation, belief, sentiment, etc. 1 Private state is the general term typically used to refer to these mental and emotional states that cannot be directly observed or verified (Quirk et al., 1985) . Further, we define the source of a perspective expression to be the experiencer of that private state, that is, the person or entity whose opinion or emotion is being conveyed in the text. Second, speech expressions simply convey the words of another individual -and by the choice of words, the reporter filters the original source's intent. Consider for example, the following sentences (in which perspective expressions are denoted in bold, speech expressions are underlined, and sources are denoted in italics):",
"cite_spans": [
{
"start": 398,
"end": 418,
"text": "(Quirk et al., 1985)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Charlie was angry at Alice's claim that Bob was unhappy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Philip Clapp, president of the National Environment Trust, sums up well the general thrust of the reaction of environmental movements: \"There is no reason at all to believe that the polluters are suddenly going to become reasonable.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Perspective expressions in Sentence 1 describe the emotions or opinion of three sources: Charlie's anger, Bob's unhappiness, and Alice's belief. Perspective expressions in Sentence 2, on the other hand, introduce the explicit opinion of one source, i.e. the reaction of the environmental movements. Speech expressions also perform filtering in these examples. The reaction of the environmental movements is filtered by Clapp's summarization, which, in turn, is filtered by the writer's choice of quotation. In addition, the fact that Bob was unhappy is filtered through Alice's claim, which, in turn, is filtered by the writer's choice of words for the sentence. Similarly, it is only according to the writer that Charlie is angry. The specific goal of the research described here is to accurately identify the hierarchical structure of perspective and speech expressions (pse's) in text. 2 al.'s (2003) \"expressive subjective elements\" are not the subject of study here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given sentences 1 and 2 and their pse's, for example, we will present methods that produce the structures shown in Figure 1 , which represent the multistage information filtering that should be taken into account in the interpretation of the text. We propose a supervised machine learning approach to the problem that relies on a small set of syntactically-based features. More specifically, the method first trains a binary classifier to make pairwise parent-child decisions among the pse's in the same sentence, and then combines the decisions to determine their global hierarchical structure. We compare the approach to two heuristicbased baselines -one that simply assumes that every pse is filtered only through the writer, and a second that is based on syntactic dominance relations in the associated parse tree. In an evaluation using the opinion-annotated NRRC corpus , the learning-based approach achieves an accuracy of 78.30%, significantly higher than both the simple baseline approach (65.57%) and the parse-based baseline (71.64%). We believe that this study provides a first step towards understanding the multi-stage filtering process that can bias and garble the information present in newswire text.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. We present related work in Section 2 and describe the machine learning approach in Section 3. The experimental methodology and results are presented in Sections 4 and 5, respectively. Section 6 summarizes our conclusions and plans for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper addresses the problem of identifying the hierarchical structure of perspective and speech expressions. We view this as a necessary and important component of a larger perspective-analysis amples, both types of pse appear in boldface. Note that the acronym 'pse' has been used previously with a different meaning (Wiebe, 1994 As far as we are aware, no single system exists that simultaneously solves all these problems. There is, however, quite a bit of work that addresses various pieces of this larger task, which we will now survey. Gerard (2000) proposes a computational model of the reader of a news article. Her model provides for multiple levels of hierarchical beliefs, such as the nesting of a primary source's belief within that of a reporter. However, Gerard does not provide algorithms for extracting this structure directly from newswire texts. Bethard et al. (2004) seek to extract propositional opinions and their holders. They define an opinion as \"a sentence, or part of a sentence that would answer the question 'How does X feel about Y?' \" A propositional opinion is an opinion \"localized in the propositional argument\" of certain verbs, such as \"believe\" or \"realize\". Their task then corresponds to identifying a pse, its associated direct source, and the content of the private state. However, they consider as pse's only verbs, and further restrict attention to verbs with a propositional argument, which is a subset of the perspective and speech expressions that we consider here. Table 1 , for example, shows the diversity of word classes that correspond to pse's in our corpus. Perhaps more importantly for the purposes of this paper, their work does not address information filtering issues, i.e. problems that arise when an opinion has been filtered through multiple sources. Namely, Bethard et al. 2004do not consider sentences that contain multiple pse's, and do not, therefore, need to identify any indirect sources of opinions. As shown in Table 2 , however, we find that sentences with multiple non-writer pse's (i.e. sentences that contain 3 or more total pse's) comprise a significant portion (29.98%) of our corpus. An advantage over our work, however, is that Bethard et al. 2004do not require separate solutions to pse identification and the identification of their direct sources. Automatic identification of sources has also been addressed indirectly by Gildea and Jurafsky's (2002) work on semantic role identification in that finding sources often corresponds to finding the filler of the agent role for verbs. Their methods then might be used to identify sources and associate them with pse's that are verbs or portions of verb phrases. Whether their work will also apply to pse's that are realized as other parts of speech is an open question. Wiebe (1994) , studies methods to track the change of \"point of view\" in narrative text (fiction). That is, the \"writer\" of one sentence may not correspond to the writer of the next sentence. Although this is not as frequent in newswire text as in fiction, it will still need to be addressed in a solution to the larger problem. Bergler (1993) examines the lexical semantics of speech event verbs in the context of generative lexicon theory. While not specifically addressing our problem, the \"semantic dimensions\" of reporting verbs that she extracts might be very useful as features in our approach.",
"cite_spans": [
{
"start": 323,
"end": 335,
"text": "(Wiebe, 1994",
"ref_id": "BIBREF16"
},
{
"start": 547,
"end": 560,
"text": "Gerard (2000)",
"ref_id": "BIBREF9"
},
{
"start": 869,
"end": 890,
"text": "Bethard et al. (2004)",
"ref_id": "BIBREF3"
},
{
"start": 2405,
"end": 2433,
"text": "Gildea and Jurafsky's (2002)",
"ref_id": "BIBREF10"
},
{
"start": 2799,
"end": 2811,
"text": "Wiebe (1994)",
"ref_id": "BIBREF16"
},
{
"start": 3128,
"end": 3142,
"text": "Bergler (1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 1516,
"end": 1523,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1983,
"end": 1990,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Larger Problem and Related Work",
"sec_num": "2"
},
{
"text": "Finally, Wiebe et al. (2003) present preliminary results for the automatic identification of perspective and speech expressions using corpus-based techniques. While the results are promising (66% F- measure), the problem is still clearly unsolved. As explained below, we will instead rely on manually tagged pse's for the studies presented here.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "Wiebe et al. (2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Larger Problem and Related Work",
"sec_num": "2"
},
{
"text": "Our task is to find the hierarchical structure among the pse's in individual sentences. One's first impression might be that this structure should be obvious from the syntax: one pse should filter another roughly when it dominates the other in a dependency parse. This heuristic, for example, would succeed for \"claim\" and \"unhappy\" in sentence 1, whose pse structure is given in Figure 1 and parse structure (as produced by the Collins parser) in Figure 2. 4 Even in sentence 1, though, we can see that the problem is more complex: \"angry\" dominates \"claim\" in the parse tree, but does not filter it. Unfortunately, an analysis of the parse-based heuristic on our training data (the data set will be described in Section 4), uncovered numerous, rather than just a few, sources of error. Therefore, rather than trying to handcraft a more complex collection of heuristics, we chose to adopt a supervised machine learning approach that relies on features identified in this analysis. In particular, we will first train a binary classifier to make pairwise decisions as to whether a given pse is the immediate parent of another. We then use a simple approach to combine these decisions to find the hierarchical information-filtering structure of all pse's in a sentence.",
"cite_spans": [
{
"start": 448,
"end": 459,
"text": "Figure 2. 4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Approach",
"sec_num": "3"
},
{
"text": "We assume that we have a training corpus of sentences, annotated with pse's and their hierarchical pse structure (Section 4 describes the corpus). Training instances for the binary classifier are pairs of pse's from the same sentence, pse target , pse parent 5 . We assign a class value of 1 to a training instance if pse parent is the immediate parent of pse target in the manually annotated hierarchical structure for the sentence, and 0 otherwise. For sentence 1, there are nine training instances generated: claim, writer , angry, writer , unhappy, claim (class 1), claim, angry , claim, unhappy , angry, claim , angry, unhappy , unhappy, writer , unhappy, angry (class 0). The features used to describe each training instance are explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Approach",
"sec_num": "3"
},
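As a concrete illustration of the instance generation just described, here is a minimal Python sketch. The pse identifiers and the gold-parent mapping are hypothetical stand-ins for the manual annotations, not an interface the corpus provides.

```python
def make_instances(pses, gold_parents):
    """Build <pse_target, pse_parent> pairs with binary labels.

    pses: identifiers of the pse's in one sentence, including "writer".
    gold_parents: maps each non-writer pse to its set of annotated parents
    (a set, to accommodate the ambiguous-parentage cases of Section 4).
    """
    instances = []
    for target in pses:
        if target == "writer":  # the writer's pse is always the root
            continue
        for parent in pses:
            if parent == target:
                continue
            label = 1 if parent in gold_parents.get(target, set()) else 0
            instances.append(((target, parent), label))
    return instances


# Sentence 1 yields the nine instances listed above, three of them positive.
sent1 = ["writer", "angry", "claim", "unhappy"]
gold1 = {"angry": {"writer"}, "claim": {"writer"}, "unhappy": {"claim"}}
instances = make_instances(sent1, gold1)
assert len(instances) == 9 and sum(label for _, label in instances) == 3
```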
{
"text": "During testing, we construct the hierarchical pse structure of an entire sentence as follows. For each pse in the sentence, ask the binary classifier to judge each other pse as a potential parent, and choose the pse with the highest confidence 6 . Finally, join these immediate-parent links to form a tree. 7 One might also try comparing pairs of potential parents for a given pse, or other more direct means of ranking potential parents. We chose what seemed to be the simplest method for this first attempt at the problem.",
"cite_spans": [
{
"start": 307,
"end": 308,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Approach",
"sec_num": "3"
},
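The test-time procedure can be sketched in the same style; `score` below is a hypothetical stand-in for the binary classifier's confidence that one pse is the immediate parent of another.

```python
def predict_structure(pses, score):
    """Attach each non-writer pse to the candidate parent with the highest
    classifier confidence; the resulting edges are read off as a tree
    (rarely the edges do not form a tree, and no correction is attempted)."""
    parent_of = {}
    for target in pses:
        if target == "writer":  # the writer's pse is the root
            continue
        candidates = [p for p in pses if p != target]
        parent_of[target] = max(candidates, key=lambda p: score(target, p))
    return parent_of


# Toy usage: a scorer that always prefers the writer's pse attaches
# every other pse directly to the writer.
edges = predict_structure(["writer", "claim", "unhappy"],
                          lambda t, p: 1.0 if p == "writer" else 0.0)
assert edges == {"claim": "writer", "unhappy": "writer"}
```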
{
"text": "Here we motivate and describe the 23 features used in our model. Unless otherwise stated, all features are binary (1 if the described condition is true, 0 otherwise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "Based on the performance of the parse-based heuristic, we include a pse parent -dominates-pse target feature in our feature set. To compensate for parse errors, however, we also include a variant of this that is 1 if the parent of pse parent dominates pse target .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse-based features (6).",
"sec_num": null
},
{
"text": "Many filtering expressions filter pse's that occur in their complements, but not in adjuncts. Therefore, we add variants of the previous two syntax-based features that denote whether the parent node dominates pse_target, but only if the first dependency relation is an object relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse-based features (6).",
"sec_num": null
},
{
"text": "5 We skip sentences where there is no decision to make (sentences with zero or one non-writer pse). Since the writer pse is the root of every structure, we do not generate instances with the writer pse in the pse_target position. 6 There is an ambiguity if the classifier assigns the same confidence to two potential parents. For evaluation purposes, we consider the classifier's response incorrect if any of the highest-scoring potential parents are incorrect. 7 The directed graph resulting from flawed automatic predictions might not be a tree (i.e. it might be cyclic and disconnected). Since this occurs very rarely (5 out of 9808 sentences on the test data), we do not attempt to correct any non-tree graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse-based features (6).",
"sec_num": null
},
{
"text": "For similar reasons, we include a feature calculating the domination relation based on a partial parse. Consider the following sentence: 3. He was criticized more than recognized for his policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse-based features (6).",
"sec_num": null
},
{
"text": "One of \"criticized\" or \"recognized\" will be the root of this dependency parse, thus dominating the other, and suggesting (incorrectly) that it filters the other pse. Because a partial parse does not attach all constituents, such spurious dominations are eliminated. The partial parse feature is 1 for fewer instances than pse parent -dominates-pse target , but it is more indicative of a positive instance when it is 1. So that the model can adjust when the parse is not present, we include a feature that is 1 for all instances generated from sentences on which the parser failed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse-based features (6).",
"sec_num": null
},
{
"text": "Forcing the model to decide whether pse parent is the parent of pse target without knowledge of the other pse's in the sentence is somewhat artificial. We therefore include several features that encode the relative position of pse parent and pse target in the sentence. Specifically, we add a feature that is 1 if pse parent is the root of the parse (and similarly for pse target ). We also include a feature giving the ordinal position of pse parent among the pse's in the sentence, relative to pse target (-1 means pse parent is the pse that immediately precedes pse target , 1 means immediately following, and so forth). To allow the model to vary when there are more potential parents to choose from, we include a feature giving the total number of pse's in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positional features (5).",
"sec_num": null
},
{
"text": "Some particular pse's are special, so we specify indicator features for four types of parents: the writer pse, and the lexical items \"said\" (the most common nonwriter pse) and \"according to\". \"According to\" is special because it is generally not very high in the parse, but semantically tends to filter everything else in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Special parents and lexical features (6).",
"sec_num": null
},
{
"text": "In addition, we include as features the part of speech of pse parent and pse target (reduced to noun, verb, adjective, adverb, or other), since intuitively we expected distinct parts of speech to behave differently in their filtering. Genre-specific features (6). Finally, journalistic writing contains a few special forms that are not always parsed accurately. Examples are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Special parents and lexical features (6).",
"sec_num": null
},
{
"text": "4. \"Alice disagrees with me,\" Bob argued.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Special parents and lexical features (6).",
"sec_num": null
},
{
"text": "5. Charlie, she noted, dislikes Chinese food.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Special parents and lexical features (6).",
"sec_num": null
},
{
"text": "The parser may not recognize that \"noted\" and \"argued\" should dominate all other pse's in sentences 4 and 5, so we attempt to recognize when a sentence falls into one of these two patterns. For disagrees, argued generated from sentence 4, features pse parent -pattern-1 and pse target -pattern-1 would be 1, while for dislikes, noted generated from sentence 5, feature pse parent -pattern-2 would be 1. We also add features that denote whether the pse in question falls between matching quote marks. Finally, a simple feature indicates whether pse parent is the last word in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Special parents and lexical features (6).",
"sec_num": null
},
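As an illustration of how a few of these 23 features might be assembled for one ⟨pse_target, pse_parent⟩ instance, here is a hedged sketch. The `sent` object and all of its methods (dominates, coarse_pos, in_quotes, and so on) are hypothetical helpers wrapping the parse and token annotations; they are not part of any of the tools named in this paper.

```python
def instance_features(target, parent, sent):
    """Collect an illustrative subset of the features described above for the
    pair <pse_target=target, pse_parent=parent>; `sent` is a hypothetical
    wrapper around the sentence's dependency parse, partial parse, and tokens."""
    f = {}
    # Parse-based features.
    f["parent-dominates-target"] = int(sent.dominates(parent, target))
    f["partial-parse-dominates"] = int(sent.partial_parse_dominates(parent, target))
    f["parse-failed"] = int(sent.parse_failed)
    # Positional features.
    f["parent-is-parse-root"] = int(sent.is_parse_root(parent))
    f["num-pses"] = len(sent.pses)
    # ordinal distance of the parent from the target within the sentence's pse list
    f["relative-position"] = sent.pses.index(parent) - sent.pses.index(target)
    # Special parents and lexical features.
    f["parent-is-writer"] = int(parent == "writer")
    f["parent-is-said"] = int(sent.text_of(parent).lower() == "said")
    f["parent-is-according-to"] = int(sent.text_of(parent).lower() == "according to")
    f["parent-pos"] = sent.coarse_pos(parent)  # noun / verb / adj / adv / other
    # Genre-specific features.
    f["target-in-quotes"] = int(sent.in_quotes(target))
    f["parent-is-last-word"] = int(sent.is_last_word(parent))
    return f
```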
{
"text": "We rely on a variety of resources to generate our features. The corpus (see Section 4) is distributed with annotations for sentence breaks, tokenization, and part of speech information automatically generated by the GATE toolkit (Cunningham et al., 2002) . 8 For parsing we use the Collins (1999) parser. 9 For partial parses, we employ CASS (Abney, 1997) . Finally, we use a simple finite-state recognizer to identify (possibly nested) quoted phrases.",
"cite_spans": [
{
"start": 229,
"end": 254,
"text": "(Cunningham et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 257,
"end": 258,
"text": "8",
"ref_id": null
},
{
"start": 282,
"end": 296,
"text": "Collins (1999)",
"ref_id": "BIBREF6"
},
{
"start": 342,
"end": 355,
"text": "(Abney, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3.2"
},
{
"text": "For classifier construction, we use the IND package (Buntine, 1993) to train decision trees (we use the mml tree style, a minimum message length criterion with Bayesian smoothing).",
"cite_spans": [
{
"start": 52,
"end": 67,
"text": "(Buntine, 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3.2"
},
{
"text": "The data for these experiments come from version 1.1 of the NRRC corpus . 10 . The corpus consists of 535 newswire documents (mostly from the FBIS), of which we used 66 (1375 sentences) for developing the heuristics and features, while keeping the remaining 469 (9808 sentences) blind (used for 10-fold cross-validation).",
"cite_spans": [
{
"start": 74,
"end": 76,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4"
},
{
"text": "Although the NRRC corpus provides annotations for all pse's, it does not provide annotations to denote directly their hierarchical structure within a sentence. This structure must be extracted from an attribute of each pse annotation, which lists the pse's direct and indirect sources. For example, the \"source chain\" for \"unhappy\" in sentence 1 would be (writer, Alice, Bob). The source chains allow us to automatically recover the hierarchical structure of the pse's: the parent of a pse with source chain (s_0, s_1, ..., s_{n-1}, s_n) is the pse with source chain (s_0, s_1, ..., s_{n-1}). Unfortunately, ambiguities can arise. Consider the following sentence: 6. Bob said, \"you're welcome\" because he was glad to see that Mary was happy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4"
},
{
"text": "8 GATE's sentences sometimes extend across paragraph boundaries, which seems never to be warranted. Inaccurately joining sentences has the effect of adding more noise to our problem, so we split GATE's sentences at paragraph boundaries, and introduce writer pse's for the newly created sentences. 9 We convert the parse to a dependency format that makes some of our features simpler, using a method similar to the one described in Xia and Palmer (2001). We also employ a method from Adam Lopez at the University of Maryland to find grammatical relationships between words (subject, object, etc.). 10 The original corpus is available at http://nrrc.mitre.org/NRRC/Docs_Data/MPQA_04/approval_mpqa.htm. Code and data used in our experiments are available at http://www.cs.cornell.edu/~ebreck/breck04playing/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4"
},
{
"text": "Both \"said\" and \"was glad\" have the source chain (writer, Bob), 11 while \"was happy\" has the source chain (writer, Bob, Mary). It is therefore not clear from the manual annotations whether \"was happy\" should have \"was glad\" or \"said\" as its parent. 5.82% of the pse's have ambiguous parentage (i.e. the recovery step finds a set of parents P (pse) with |P (pse)| > 1). For training, we assign a class value of 1 to all instances pse, par , par \u2208 P (pse). For testing, if an algorithm attaches pse to any element of P (pse), we score the link as correct (see Section 5.1). Since ultimately our goal is to find the sources through which information is filtered (rather than the pse's), we believe this is justified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4"
},
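The parent-recovery step described above is compact enough to sketch in Python. The tuple-of-sources representation of a "source chain" is a hypothetical stand-in for the corpus attribute, and the function returns a set of parents so that ambiguous cases such as sentence 6 surface naturally.

```python
def recover_parents(source_chains):
    """The parents of a pse with chain (s_0, ..., s_{n-1}, s_n) are the pse's
    whose chain is the prefix (s_0, ..., s_{n-1}); the returned set has more
    than one element exactly in the ambiguous cases discussed above."""
    parents = {}
    for pse, chain in source_chains.items():
        if len(chain) <= 1:  # the writer's pse is the root of the structure
            continue
        prefix = chain[:-1]
        parents[pse] = {p for p, c in source_chains.items() if c == prefix}
    return parents


# Source chains for sentence 1 (sources: writer, Charlie, Alice, Bob).
chains = {"writer-pse": ("writer",),
          "angry": ("writer", "Charlie"),
          "claim": ("writer", "Alice"),
          "unhappy": ("writer", "Alice", "Bob")}
assert recover_parents(chains)["unhappy"] == {"claim"}
assert recover_parents(chains)["angry"] == {"writer-pse"}
```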
{
"text": "For training and testing, we used only those sentences that contain at least two non-writer pse's 12 -for all other sentences, there is only one way to construct the hierarchical structure. Again, Table 2 presents a breakdown (for the test set) of the number of pse's per sentence -thus we only use approximately one-third of all the sentences in the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4"
},
{
"text": "How do we evaluate the performance of an automatic method of determining the hierarchical structure of pse's? Lin (1995) proposes a method for evaluating dependency parses: the score for a sentence is the fraction of correct parent links identified; the score for the corpus is the average sentence score.",
"cite_spans": [
{
"start": 110,
"end": 120,
"text": "Lin (1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Formally, the score for a method evaluated on the entire corpus (\"Lin\") is (1/|S|) \u2211_{s \u2208 S} |{pse | pse \u2208 NonWriterPses(s) \u2227 parent(pse) = autopar(pse)}| / |NonWriterPses(s)|, where S is the set of all sentences in the corpus, NonWriterPses(s) is the set of non-writer pse's in sentence s, parent(pse) is the correct parent of pse, and autopar(pse) is the automatically identified parent of pse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Table 3: Performance on test data. \"Lin\" is Lin's dependency score, \"perf\" is the fraction of sentences whose structure was identified perfectly, and \"bin\" is the performance of the binary classifier (broken down for positive and negative instances). \"Size\" is the number of sentences or pse pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "We also present results using two other (related) metrics. The \"perf\" metric measures the fraction of sentences whose structure is determined entirely correctly (i.e. \"perf\"ectly). \"Bin\" is the accuracy of the binary classifier (with a 0.5 threshold) on the instances created from the test corpus. We also report the performance on positive and negative instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|S|",
"sec_num": null
},
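To make the scoring concrete, here is a minimal Python sketch of the corpus-level "Lin" and "perf" computations, under a hypothetical representation in which each sentence is a (gold, predicted) pair of dicts keyed by non-writer pse (gold maps to a set of acceptable parents, to accommodate the ambiguous cases of Section 4).

```python
def corpus_scores(sentences):
    """Compute the corpus-level "Lin" and "perf" scores described above.

    sentences: list of (gold, predicted) pairs; gold maps each non-writer pse
    to its set of acceptable parents, predicted maps it to the single
    automatically chosen parent (hypothetical format)."""
    lin = perf = 0.0
    for gold, predicted in sentences:
        correct = sum(1 for pse in gold if predicted.get(pse) in gold[pse])
        lin += correct / len(gold)            # per-sentence fraction of correct links
        perf += 1.0 if correct == len(gold) else 0.0
    n = len(sentences)
    return lin / n, perf / n


# One perfectly analyzed sentence and one with one of its two links wrong.
data = [({"a": {"w"}, "b": {"a"}}, {"a": "w", "b": "a"}),
        ({"c": {"w"}, "d": {"c"}}, {"c": "w", "d": "w"})]
assert corpus_scores(data) == (0.75, 0.5)
```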
{
"text": "We compare the learning-based approach (decTree) to the heuristic-based approaches introduced in Section 3 -heurOne assumes that all pse's are attached to the writer's implicit pse; heurTwo is the parse-based heuristic that relies solely on the dominance relation 13 .",
"cite_spans": [
{
"start": 264,
"end": 266,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
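The two heuristic baselines are simple enough to sketch directly. The `dominators` helper below is hypothetical: it stands in for a lookup of the other pse's dominating a given pse in the dependency parse, ordered from nearest to farthest.

```python
def heur_one(pses):
    """heurOne: attach every non-writer pse to the writer's implicit pse."""
    return {pse: "writer" for pse in pses if pse != "writer"}


def heur_two(pses, dominators):
    """heurTwo: attach each pse to the pse most immediately dominating it in
    the dependency parse, falling back to the writer's pse (footnote 13)."""
    parents = {}
    for pse in pses:
        if pse == "writer":
            continue
        above = dominators(pse)  # hypothetical helper; nearest dominator first
        parents[pse] = above[0] if above else "writer"
    return parents
```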
{
"text": "We use 10-fold cross-validation on the evaluation data to generate training and test data (although the heuristics, of course, do not require training). The results of the decision tree method and the two heuristics are presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Encouragingly, our machine learning method uniformly and significantly 14 outperforms the two heuristic methods, on all metrics and in sentences with any number of pse's. The difference is most striking in the \"perf\" metric, which is perhaps the most intuitive. Also, the syntax-based heuristic (heurTwo) significantly 15 outperforms heurOne, confirming our intuitions that syntax is important in this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "As the binary classifer sees many more negative instances than positive, it is unsurprising that its performance is much better on negative instances. This suggests that we might benefit from machine learning methods for dealing with unbalanced datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "Examining the errors of the machine learning system on the development set, we see that for half of the pse's with erroneously identified parents, the parent is either the writer's pse, or a pse like \"said\" in sentences 4 and 5 having scope over the entire sentence. For example, 7. \"Our concern is whether persons used to the role of policy implementors can objectively assess and critique executive policies which impinge on human rights,\" said Ramdas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "Our model chose the parent of \"assess and critique\" to be \"said\" rather than \"concern.\" We also see from Table 4 that the model performs more poorly on sentences with more pse's. We believe that this reflects a weakness in our decision to combine binary decisions, because the model has learned that in general, a \"said\" or writer's pse (near the root of the structure) is likely to be the parent, while it sees many fewer examples of pse's such as \"concern\" that lie in the middle of the tree. Although we have ignored the distinction throughout this paper, error analysis suggests speech event pse's behave differently than private state pse's with respect to how closely syntax reflects their hierarchical structure. It may behoove us to add features to allow the model to take this into account. Other sources of error include erroneous sentence boundary detection, parenthetical statements (which the parser does not treat correctly for our purposes) and other parse errors, partial quotations, as well as some errors in the annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "Examining the learned trees is difficult because of their size, but looking at one tree to depth three reveals a fairly intuitive model. Ignoring the probabilities, the tree decides pse parent is the parent of pse target if and only if pse parent is the writer's pse (and pse target is not in quotation marks), or if pse parent is the word \"said.\" For all the trees learned, the root feature was either the writer pse test or the partial-parse-based domination feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "We have presented the concept of perspective and speech expressions, and argued that determining their hierarchical structure is important for natural language understanding of perspective. We have shown that identifying the hierarchical structure of pse's is amenable to automated analysis via a machine learning approach, although there is room for improvement in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In the future, we plan to address the related tasks discussed in Section 2, especially identifying pse's and their immediate sources. We are also interested in ways of improving the machine learning formulation of the current task, such as optimizing the binary classifier on the whole-sentence evaluation, or defining a different binary task that is easier to learn. Nevertheless, we believe that our results provide a step towards the development of natural language systems that can extract and summarize the viewpoints and perspectives expressed in text while taking into account the multi-stage information filtering process that can mislead more na\u00efve systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Note that implicit expressions of perspective, i.e.Wiebe et",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the rest of this paper, then, we ignore the distinction between perspective and speech expressions, so in future ex-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In(Wiebe, 2002), this is referred to as the inside.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For this heuristic and the features that follow, we will speak of the pse's as if they had a position in the parse tree. However, since pse's are often multiple words, and do not necessarily form a constituent, this is not entirely accurate. The parse node corresponding to a pse will be the highest node in the dependency parse corresponding to a word in the pse. We consider the writer's implicit pse to correspond to the root of the parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The annotators also performed coreference resolution on sources.12 Under certain circumstances, such as paragraph-long quotes, the writer of a sentence will not be the same as the writer of a document. In such sentences, the NRRC corpus contains additional pse's for any other sources besides the writer of the document. Since we are concerned in this work only with one sentence at a time, we discard all such implicit pse's besides the writer of the sentence. Also, in a few cases, more than one pse in a sentence was marked as having the writer as its source. We believe this to be an error and so discarded all but one writer pse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "That is, heurTwo attaches a pse to the pse most immediately dominating it in the dependency tree. If no other pse dominates it, a pse is attached to the writer's pse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by NSF Grant IIS-0208028 and by an NSF Graduate Research Fellowship. We thank Rebecca Hwa for creating the dependency parses. We also thank the Cornell NLP group for helpful suggestions on drafts of this paper. Finally, we thank Janyce Wiebe and Theresa Wilson for draft suggestions and advice regarding this problem and the NRRC corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": ") for descriptions of this method. 15 Using the same test as above, p < 0.01, except for the performance on sentences with more than 5 pse's, because of the small amount of data",
"authors": [
{
"first": "",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1993,
"venue": "using an approximate randomization test with 9,999 trials. See",
"volume": "",
"issue": "",
"pages": "430--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "p < 0.01, using an approximate randomization test with 9,999 trials. See (Eisner, 1996, page 17) and (Chinchor et al., 1993, pages 430-433) for descriptions of this method. 15 Using the same test as above, p < 0.01, except for the performance on sentences with more than 5 pse's, because of the small amount of data, where p < 0.02.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The SCOL manual. cass is avail",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1997. The SCOL manual. cass is avail- able from http://www.vinartus.net/spa/scol1h.tar.gz.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic dimensions in the field of reporting verbs",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Ninth Annual Conference of the University of Waterloo Centre for the New Oxford English Dictionary and Text Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Bergler. 1993. Semantic dimensions in the field of reporting verbs. In Proceedings of the Ninth An- nual Conference of the University of Waterloo Centre for the New Oxford English Dictionary and Text Re- search, Oxford, England, September.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic extraction of opinion propositions and their holders",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ashley",
"middle": [],
"last": "Thornton",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2004,
"venue": "Working Notes of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Hong Yu, Ashley Thornton, Vasileios Hatzivassiloglou, and Dan Jurafsky. 2004. Automatic extraction of opinion propositions and their holders. In Working Notes of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. March 22-24, 2004, Stanford.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning classification trees",
"authors": [
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
}
],
"year": 1993,
"venue": "Artificial Intelligence frontiers in statistics",
"volume": "",
"issue": "",
"pages": "182--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wray Buntine. 1993. Learning classification trees. In D. J. Hand, editor, Artificial Intelligence frontiers in statistics, pages 182-201. Chapman & Hall,London. Available at http://ic.arc.nasa.gov/projects/bayes- group/ind/IND-program.html.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating message understanding systems: An analysis of the third message understanding conference (MUC-3)",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "3",
"pages": "409--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor, Lynette Hirschman, and David Lewis. 1993. Evaluating message understanding systems: An analysis of the third message understanding conference (MUC-3). Computational Linguistics, 19(3):409-450.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Head-driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael John",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Collins",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael John Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GATE: A framework and graphical development environment for robust nlp tools and applications",
"authors": [
{
"first": "Hamish",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Tablan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL '02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamish Cunningham, Diana Maynard, Kalina Bont- cheva, and Valentin Tablan. 2002. GATE: A frame- work and graphical development environment for ro- bust nlp tools and applications. In Proceedings of the 40th Anniversary Meeting of the Association for Com- putational Linguistics (ACL '02), Philadelphia, July. 2003. Proceedings of the Workshop on Text Summariza- tion, Edmonton, Alberta, Canada, May. Presented at the 2003 Human Language Technology Conference.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An empirical comparison of probability models for dependency grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. An empirical comparison of proba- bility models for dependency grammar. Technical Re- port IRCS-96-11, IRCS, University of Pennsylvania.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modelling readers of news articles using nested beliefs",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Gerard",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Gerard. 2000. Modelling readers of news ar- ticles using nested beliefs. Master's thesis, Concordia University, Montr\u00e9al, Qu\u00e9bec, Canada.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational Linguistics, 28(3):245-288.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A dependency-based method for evaluating broad-coverage parsers",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1995,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1420--1427",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1995. A dependency-based method for evaluating broad-coverage parsers. In IJCAI, pages 1420-1427.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "500--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Seventh Message Understand- ing Conference (MUC-7). Morgan Kaufman, April. NIST. 2003. Proceedings of The Twelfth Text REtrieval Conference (TREC 2003), Gaithersburg, MD, Novem- ber. NIST special publication SP 500-255.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Comprehensive Grammar of the English Language",
"authors": [
{
"first": "Randolph",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Sidney",
"middle": [],
"last": "Greenbaum",
"suffix": ""
}
],
"year": 1985,
"venue": "Geoffrey Leech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "NRRC Summer Workshop on Multiple-Perspective Question Answering Final Report",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Wiebe, E. Breck, C. Buckley, C. Cardie, P. Davis, B. Fraser, D. Litman, D. Pierce, E. Riloff, and T. Wil- son. 2002. NRRC Summer Workshop on Multiple- Perspective Question Answering Final Report. Tech report, NRRC, Bedford, MA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Recognizing and Organizing Opinions Expressed in the World Press",
"authors": [
{
"first": "E",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maybury",
"suffix": ""
}
],
"year": 2003,
"venue": "Papers from the AAAI Spring Symposium on New Directions in Question Answering (AAAI tech report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, E. Breck, C. Buckley, C. Cardie, P. Davis, B. Fraser, D. Litman, D. Pierce, E. Riloff, T. Wilson, D. Day, and M. Maybury. 2003. Recognizing and Or- ganizing Opinions Expressed in the World Press. In Papers from the AAAI Spring Symposium on New Di- rections in Question Answering (AAAI tech report SS- 03-07). March 24-26, 2003. Stanford.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tracking point of view in narrative",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "233--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233-287.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Instructions for annotating opinions in newspaper articles",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe. 2002. Instructions for annotating opin- ions in newspaper articles. Technical Report TR-02- 101, Dept. of Comp. Sci., University of Pittsburgh.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Converting dependency structures to phrase structures",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the HLT Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia and Martha Palmer. 2001. Converting depen- dency structures to phrase structures. In Proc. of the HLT Conference.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Hierarchical structure of the perspective and speech expressions in sentences 1 and 2",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Dependency parse of sentence 1 according to the Collins parser.",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">: Breakdown of classes of pse's. \"writer\" de-</td></tr><tr><td colspan=\"2\">notes pse's with the writer as source. \"No parse\" denotes</td></tr><tr><td colspan=\"2\">pse's in sentences where the parse failed, and so the part</td></tr><tr><td>of speech could not be determined.</td><td/></tr><tr><td colspan=\"2\">number of pse's number of sentences</td></tr><tr><td>1</td><td>3612</td></tr><tr><td>2</td><td>3256</td></tr><tr><td>3</td><td>1810</td></tr><tr><td>4</td><td>778</td></tr><tr><td>5</td><td>239</td></tr><tr><td>&gt;5</td><td>113</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>: Breakdown of number of pse's per sentence</td></tr><tr><td>system. Such a system would be able to identify</td></tr><tr><td>all pse's in a document, as well as identify their</td></tr><tr><td>structure. The system would also identify the direct</td></tr><tr><td>source of each pse. Finally, the system would iden-</td></tr><tr><td>tify the text corresponding to the content of a private</td></tr><tr><td>state or the speech expressed by a pse. 3 Such a sys-</td></tr><tr><td>tem might analyze sentence 2 as follows:</td></tr><tr><td>(source: writer</td></tr><tr><td>pse: (implicit speech event)</td></tr><tr><td>content: Philip ... reasonable.\")</td></tr><tr><td>(source: clapp</td></tr><tr><td>pse: sums up</td></tr><tr><td>content: \"There ... reasonable.\")</td></tr><tr><td>(source: environmental movements</td></tr><tr><td>pse: reaction</td></tr><tr><td>content: (no text))</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td/><td>: Performance by number of pse's per sentence</td></tr><tr><td colspan=\"2\">method evaluated on the entire corpus (\"Lin\") is</td></tr><tr><td/><td>|{pse|pse\u2208Non writer pse s(s)\u2227parent(pse)=autopar(pse))}|</td></tr><tr><td>s\u2208S</td><td>|Non writer pse s(s)|</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
}
}
}
}