Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "W07-0208",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:39:50.003822Z"
},
"title": "Learning to Transform Linguistic Graphs",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Jijkoun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"addrLine": "Kruislaan 403",
"postCode": "1098 SJ",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": "[email protected]"
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"addrLine": "Kruislaan 403",
"postCode": "1098 SJ",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We argue in favor of the the use of labeled directed graph to represent various types of linguistic structures, and illustrate how this allows one to view NLP tasks as graph transformations. We present a general method for learning such transformations from an annotated corpus and describe experiments with two applications of the method: identification of non-local depenencies (using Penn Treebank data) and semantic role labeling (using Proposition Bank data).",
"pdf_parse": {
"paper_id": "W07-0208",
"_pdf_hash": "",
"abstract": [
{
"text": "We argue in favor of the the use of labeled directed graph to represent various types of linguistic structures, and illustrate how this allows one to view NLP tasks as graph transformations. We present a general method for learning such transformations from an annotated corpus and describe experiments with two applications of the method: identification of non-local depenencies (using Penn Treebank data) and semantic role labeling (using Proposition Bank data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Availability of linguistically annotated corpora such as the Penn Treebank (Bies et al., 1995) , Proposition Bank (Palmer et al., 2005) , and FrameNet (Johnson et al., 2003) has stimulated much research on methods for automatic syntactic and semantic analysis of text. Rich annotations of corpora has allowed for the development of techniques for recovering deep linguistic structures: syntactic non-local dependencies (Johnson, 2002; Hockenmaier, 2003; Dienes, 2004; Jijkoun and de Rijke, 2004) and semantic arguments (Gildea, 2001; Pradhan et al., 2005; Toutanova et al., 2005; Giuglea and Moschitti, 2006) . Most state-of-the-art methods for the latter two tasks use a cascaded architecture: they employ syntactic parsers and re-cast the corresponding tasks as pattern matching (Johnson, 2002) or classification (Pradhan et al., 2005) problems. Other meth-ods (Jijkoun and de Rijke, 2004) use combinations of pattern matching and classification.",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Bies et al., 1995)",
"ref_id": "BIBREF0"
},
{
"start": 114,
"end": 135,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 151,
"end": 173,
"text": "(Johnson et al., 2003)",
"ref_id": "BIBREF15"
},
{
"start": 419,
"end": 434,
"text": "(Johnson, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 435,
"end": 453,
"text": "Hockenmaier, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 454,
"end": 467,
"text": "Dienes, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 468,
"end": 495,
"text": "Jijkoun and de Rijke, 2004)",
"ref_id": "BIBREF12"
},
{
"start": 519,
"end": 533,
"text": "(Gildea, 2001;",
"ref_id": "BIBREF9"
},
{
"start": 534,
"end": 555,
"text": "Pradhan et al., 2005;",
"ref_id": "BIBREF18"
},
{
"start": 556,
"end": 579,
"text": "Toutanova et al., 2005;",
"ref_id": "BIBREF20"
},
{
"start": 580,
"end": 608,
"text": "Giuglea and Moschitti, 2006)",
"ref_id": "BIBREF10"
},
{
"start": 781,
"end": 796,
"text": "(Johnson, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 815,
"end": 837,
"text": "(Pradhan et al., 2005)",
"ref_id": "BIBREF18"
},
{
"start": 863,
"end": 891,
"text": "(Jijkoun and de Rijke, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The method presented in this paper belongs to the latter category. Specifically, we propose (1) to use a flexible and expressive graph-based representation of linguistic structures at different levels; and (2) to view NLP tasks as graph transformation problems: namely, problems of transforming graphs of one type into graphs of another type. An example of such a transformation is adding a level of the predicate argument structure or semantic arguments to syntactically annotated sentences. Furthermore, we describe a general method to automatically learn such transformations from annotated corpora. Our method combines pattern matching on graphs and machine learning (classification) and can be viewed as an extension of the Transformation-Based Learning paradigm (Brill, 1995) . After describing the method for learning graph transformations we demonstrate its applicability on two tasks: identification of non-local dependencies (using Penn Treebank data) and semantic roles labeling (using Proposition Bank data).",
"cite_spans": [
{
"start": 768,
"end": 781,
"text": "(Brill, 1995)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. In Section 2 we give our motivations for using graphs to encode linguistic data. In Section 3 we describe our method for learning graph transformations and in Section 4 we report on experiments with applications of our method. We conclude in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Trees and graphs are natural and common ways of encoding linguistic information, in particular, syn- Figure 1 shows a graph encoding of the Penn Treebank annotation of the local (solid edges) and non-local (dashed edges) syntantic structure of the sentence directors this month planned to seek more seats. In this example, the co-indexing-based implicit annotation of the non-local dependency (subject control) in the Penn Treebank (Bies et al., 1995) is made explicit in the graph-based encoding. Figure 2 shows a graph encoding of linguistic structures for the sentence Lorillard Inc stopped using crocodolite in sigarette filters in 1956. Here, solid lines correspond to surface syntactic structure, produced by Charniak's parser (Charniak, 2000) , and dashed lines are an encoding of the Proposition Bank annotation of the semantic roles with respect to the verb stopped.",
"cite_spans": [
{
"start": 432,
"end": 451,
"text": "(Bies et al., 1995)",
"ref_id": "BIBREF0"
},
{
"start": 733,
"end": 749,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 1",
"ref_id": null
},
{
"start": 498,
"end": 506,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "Graph-based representations allow for a uniform view on the linguistic structures on different layers. An advantage of such a uniform view is that apparently different NLP tasks can be considered as manipulations with graphs, in other words, as graph transformation problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "Consider the task of recovering non-local dependencies (such as control, WH-extraction, topicalization) in the surface syntactic phrase trees produced by the state-of-the-art parser of (Charniak, 2000) . Figure 3 shows a graph-based encoding of the output of the parser, and the task in question would consist in transforming the graph in Figure 3 into the graph in Figure 1 . We notice that this transformation can be realised as a sequence of independent and relatively simple graph transformations: adding nodes and edges to the graph or changing their labels (e.g., from NP to NP-SBJ).",
"cite_spans": [
{
"start": 185,
"end": 201,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 339,
"end": 347,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 366,
"end": 374,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "Similarly, for the example in Figure 2 , adding a semantic layer (dashed edges) to the syntactic structure can also be seen as transforming a graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "In general, we can view NLP tasks as adding additional linguistic information to text, based on the information already present: e.g., syntactic parsing taking part-of-speech tagged sentences as input (Collins, 1999) , or anaphora resolution taking sequences of syntactically analysed and namedentity-tagged sentences. If both input and output linguistic structures are encoded as graphs, such NLP tasks become graph transformation problems.",
"cite_spans": [
{
"start": 201,
"end": 216,
"text": "(Collins, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "In the next section we describe our general method for learning graph transformations from an annotated corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graphs for linguistic structures and language processing tasks",
"sec_num": "2"
},
{
"text": "We start with a few basic definitions. Similar to (Sch\u00fcrr, 1997) , we define \u0131emphgraph as a relational structure, i.e., a set of objects and relations between them; we represent such structures as sets of first-order logic atomic predicates defining nodes, directed edges and their attributes (labels). Constants used in the predicates represent objects (nodes and edges) of graphs, as well as attribute names and values. Atomic predicates node(\u2022), edge(\u2022, \u2022, \u2022) and attr(\u2022, \u2022, \u2022) define nodes, edges and their attributes. We refer to (Sch\u00fcrr, 1997; Jijkoun, 2006) for formal definitions and only illustrate these concepts with an example. The following set of predicates:",
"cite_spans": [
{
"start": 50,
"end": 64,
"text": "(Sch\u00fcrr, 1997)",
"ref_id": "BIBREF19"
},
{
"start": 536,
"end": 550,
"text": "(Sch\u00fcrr, 1997;",
"ref_id": "BIBREF19"
},
{
"start": 551,
"end": 565,
"text": "Jijkoun, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "node(n 1 ), node(n 2 ), edge(e, n 1 , n 2 ), attr(n 1 , label, Src), attr(n 2 , label, Dst)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "defines a graph with two nodes, n 1 and n 2 , having labels Src and Dst (encoded as attributes named label), and an (unlabelled) edge e going from n 1 to n 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "A pattern is an arbitrary graph and an occurence of a pattern P in graph G is a total injective homomorphism \u2126 from P to G, i.e., a mapping that associates each object of P with one object G and preserves the graph structure (relations between nodes, edges, attribute names and values). We will also use the term occurence to refer to the graph \u2126(P ), a subgraph of G, the image of the mapping \u2126 on P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "A graph rewrite rule is a triple r = lhs r , C r , rhs r : the left-hand side, the constraint and the right-hand side of r, respectively, where lhs r and rhs r are graphs and C r is a function that returns 0 or 1 given a graph G, pattern lhs r and its occurence in G (i.e., C r specifies a constraint on occurences of a pattern in a graph).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "To apply a rewrite rule r = lhs r , C r , rhs r to a graph G means finding all occurences of lhs r in G for which C r evaluates to 1, and replacing such occurences of lhs r with occurences of rhs r . Effectively, objects and relations present in lhs r but not in rhs r will be removed from G, objects and relations in rhs r but not in lhs r will be added to G, and common objects and relations will remain intact. Again, we refer to (Jijkoun, 2006) for formal definitions.",
"cite_spans": [
{
"start": 433,
"end": 448,
"text": "(Jijkoun, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "As will be discussed below, our method for learning graph transformations is based on the ability to compare pairs of graphs, identifying where the two graphs are similar and where they differ. An alignment of two graphs is a partial one-to-one homomorphism between their nodes and edges, such that if two edges of the two graphs are aligned, their respective endpoints are aligned as well. A maximal alignment of two graphs is an alignment that maximizes the sum of (1) the number of aligned objects (nodes and edges), and (2) the number of matching attribute values of all aligned objects. In other words, a maximal alignment identifies as many similarities between two graphs as possible. Given an alignment of two graphs, it is possible to extract a list of rewrite rules that can transform one graph into another. For a maximal alignment such a list will consist of rules with the smallest possible left-and right-hand sides. See (Jijkoun, 2006) for details.",
"cite_spans": [
{
"start": 935,
"end": 950,
"text": "(Jijkoun, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "As stated above, we view NLP applications as graph transformation modules. Our supervised method for learning graph transformation requires two corpora: input graphs In = {In k } and corresponding output graphs Out = {Out k }, such that Out k is the desired output of the NLP module on the input In k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "The result of the method is an ordered list of graph rewrite rules R = r 1 , . . . r n , that can be applied in sequence to input graphs to produce the output of the NLP module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "Our method for learning graph transformations follows the structure of Transformation-Based Learning (Brill, 1995) and proceeds iteratively, as shown in Figure 4 . At each iteration, we compare and align pairs of input and output graphs, identify possible rewrite rules and select rules with the most frequent left-hand sides. For each selected rewrite rule r, we extract all occurences of its left-hand side and use them to train a two-class classifier implementing the constraint C r : the classifier, given an encoding of an occurence of the left-hand side predicts whether this particular occurence should be replaced with the corresponding right-hand side. When encoding an occurence as a feature vector, we add as features all paths and all attributes of nodes and edges in the one-edge neighborhood from the nodes of the occurence. For the experiments described in this paper we used the SVM Light classifier (Joachims, 1999) with a standard linear kernel. See (Jijkoun, 2006) for details.",
"cite_spans": [
{
"start": 101,
"end": 114,
"text": "(Brill, 1995)",
"ref_id": "BIBREF2"
},
{
"start": 916,
"end": 932,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 968,
"end": 983,
"text": "(Jijkoun, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Learning graph transformations",
"sec_num": "3"
},
{
"text": "Having presented a general method for learning graph transformations, we now illustrate the method at work and describe two applications to concrete NLP problems: identification of non-local dependencies (with the Penn Treebank data) and semantic role labeling (with the Proposition Bank data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "4"
},
{
"text": "State-of-the-art statistical phrase structure parsers, e.g., Charniak's and Collins' parsers trained on the Penn Treebank, produce syntactic parse trees with bare phrase labels, (NP, PP, S, see Figure 3 ), i.e., providing surface grammatical analysis of sentences, even though the training corpus, the Penn Treebank, is richer and contains additional grammatical and semantic information: it distinguishes various types of modifiers, complements, subjects, objects and annotates non-local dependencies, i.e., relations between phrases not adjacent in the parse tree (see Figure 1) . The task of recovering this information in the parser's output has received a good deal of attention. (Campbell, 2004 ) presents a rulebased algorithm for empty node identification in syntactic trees, competitive with the machine learning methods we mention next. In (Johnson, 2002) a simple pattern-matching algorithm was proposed for inserting empty nodes into syntactic trees, with patterns extracted from the Penn Treebank. (Dienes, 2004) used a preprocessor that identified surface location of empty nodes and a syntactic parser incorporating non-local dependencies into its probabilis-tic model. (Jijkoun and de Rijke, 2004) described an extension of the pattern-matching method with a classifier trained on the dependency graphs derived from the Penn Treebank data. In order to apply our graph transformation method to the task of identifying non-local dependencies, we need to encode the information provided in the Penn Treebank annotations and in the output of a syntactic parser using directed labeled graphs. We used a straightforward encoding of syntactic trees, with nodes representing terminals and non-terminals and edges defining the parent-child relationship. For each node, we used the attribute type to specify whether it is a terminal or a non-terminal. Terminals corresponding to Penn empty nodes were marked with the attribute empty = 1. For each terminal (i.e., each word), the values of attributes pos, word and lemma provided the part-of-speech tag, the actual form and the lemma of the word. For non-terminals, the attribute label contained the label of the corresponding syntactic phrase. The coindexing of empty nodes and non-terminals used in the Penn Treebank to annotate non-local dependencies was encoded using explicit edges with a distinct type attribute, connecting empty nodes with their antecedents (e.g., the dashed edge in Figure 1 ). For each non-terminal node, its head child was marked by attaching attribute head with value 1 to the corre-sponding parent-child edge, and the lexical head of each non-terminal was explicitly indicated using additional edges with the attribute type = lexhead. We used a heuristic method of (Collins, 1999) for head identification.",
"cite_spans": [
{
"start": 685,
"end": 700,
"text": "(Campbell, 2004",
"ref_id": "BIBREF3"
},
{
"start": 850,
"end": 865,
"text": "(Johnson, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 1011,
"end": 1025,
"text": "(Dienes, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 2749,
"end": 2764,
"text": "(Collins, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 571,
"end": 580,
"text": "Figure 1)",
"ref_id": null
},
{
"start": 2446,
"end": 2454,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "When Penn Treebank sentences and the output of the parser are encoded as directed labeled graphs as described above, the task of identifying nonlocal dependencies can be formulated as transforming phrase structure graphs produced by a parser into graphs of the type used in Penn Treebank annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "We parsed the strings of the Penn Treebank with Charniak's parser and then used the data from sections 02-21 of the Penn Treebank for training: encoding of the parser's output was used as the corpus of input graphs for our learning method, and the encoding of the original Penn annotations was used as the corpus of output graphs. Similarly, we used the data of sections 00-01 for development and section 23 for testing. Using the input and output corpora, we ran the learning method as described above, at each iteration considering 20 most frequent left-hand sides of rewrite rules. At each iteration, the learned rewrite rules were applied to the current training and development corpora to create a corpus of input graphs for the next iteration (see Figure 4) and to estimate the performance of the system at the current iteration. The system was evaluated on the development corpus with respect to non-local dependencies using the \"strict\" evaluation measure of (Johnson, 2002) : the F 1 score of precision and recall of correctly identified empty nodes and antecedents. If the absolute improvement of the F 1 score for the evaluation measure was smaller than 0.1, the learning cycle was terminated, otherwise a new iteration was started.",
"cite_spans": [
{
"start": 967,
"end": 982,
"text": "(Johnson, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 754,
"end": 763,
"text": "Figure 4)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "The learning cycle terminated after 12 iterations. The resulting sequence of 12 \u00d7 20 = 240 graph rewrite rules was applied to the test corpus of input graphs: Charniak's parser output on the strings of section 23 of the Penn Treebank. The result was evaluated against the original annotations of the Penn Treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "The results of the evaluation of the system on empty nodes and non-local dependencies and the PARSEVAL F 1 score on local syntactic phrase structure against the test corpus at each iteration are shown in Table 1 . As one can expect, at each iteration the method extracts graph rewrite rules that introduce empty nodes and non-local relations into syntactic structures, increasing the recall. The performance of the final system (P/R/F 1 = 86.7/65.2/74.4) for the task of identifying non-local dependencies is comparable to the performance of the best model of (Dienes, 2004): P/R/F 1 =82.5/70.1/75.8. The PARSE-VAL score for the present system (88.4) is, however, higher than the 87.3 for the system of Dienes.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "Another effect of the learned transformations is changing node labels of non-terminals, specifically, modifying labels to include Penn functional tags (e.g., changing NP in the input graph in Figure 3 to NP-SBJ in the output graph in Figure 1 ). In fact, 17% of all learned rewrite rules involved only changing labels of non-terminal nodes. Analysis of the results showed that the system is capable of assigning Penn function tags to constituents produced by Charniak's parser with F 1 = 91.4 (we use here the evaluation measure of (Blaheta, 2004) : the F 1 score of the precision and recall for assigning function tags to constituents with surface spans correctly identified by Charniak's parser). Comparison to the evaluation results of the function tagging method presented in (Blaheta, 2004) is shown in Table 2 .",
"cite_spans": [
{
"start": 532,
"end": 547,
"text": "(Blaheta, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 780,
"end": 795,
"text": "(Blaheta, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 234,
"end": 242,
"text": "Figure 1",
"ref_id": null
},
{
"start": 808,
"end": 815,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "The present system outperforms the system of Blaheta on semantic tags such as -TMP or -MNR marking temporal and manner adjuncts, respectively, but performs worse on syntactic tags such as -SBJ or -PRD marking subjects and predicatives, (Blaheta, 2004 respectively. Note that the present method was not specifically designed to add functional tags to constituent labels. The method is not even \"aware\" that functional tags exists: it simply treats NP and NP-SBJ as different labels and tries to correct labels comparing input and output graphs in the training corpora.",
"cite_spans": [
{
"start": 236,
"end": 250,
"text": "(Blaheta, 2004",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "In general, of the 240 graph rewrite rules extracted during the 12 iterations of the method, 25% involved only one graph node in the left-hand side, 16% two nodes, 12% three nodes, etc. The two most complicated extracted rewrite rules involved left-hand sides with ten nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "We now switch to the second application of our graph transformation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "4.1"
},
{
"text": "Put very broadly, the task of semantic role labeling consists in detecting and labeling simple predicates: Who did what to whom, where, when, how, why, etc. There is no single definition of a universal set of semantic roles and moreover, different NLP applications may require different specificity of role labels. In this section we apply the graph transformation method to the task of identification of semantic roles as annotated in the Proposition Bank (Palmer et al., 2005) , PropBank for short. In PropBank, for all verbs (except copular) of the syntactically annotated sentences of the Wall Street Journal section of the Penn Treebank, semantic arguments are marked using references to the syntactic constituents of the Penn Treebank. For the 49,208 syntactically annotated sentences of the Penn Treebank, the PropBank annotated 112,917 verb predicates (2.3 predicates per sentence on average), with a total of 292,815 semantic arguments (2.6 arguments per predicate on average).",
"cite_spans": [
{
"start": 457,
"end": 478,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic role labeling",
"sec_num": "4.2"
},
{
"text": "PropBank does not aim at cross-verb semantically consistent labeling of arguments, but rather at annotating the different ways arguments of a verb can be realized syntactically in the corpus, which resulted in the choice of theory-neutral numbered labels (e.g., Arg0, Arg1, etc.) for semantic arguments. Figure 2 shows an example of a PropBank annotation (dashed edges).",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 312,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic role labeling",
"sec_num": "4.2"
},
{
"text": "In this section we address a specific NLP task: identifying and labeling semantic arguments in the output of a syntactic parser. For the example in Figure 2 this task corresponds to adding \"semantic\" nodes and edges to the syntactic tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic role labeling",
"sec_num": "4.2"
},
{
"text": "As before, in order to apply our graph transformation method, we need to encode the available information using graphs. Our encoding of syntactic phrase structure is the same as in Section 4.1 and the encoding of the semantic annotations of PropBank is straightforward. For each PropBank predicate, a new node with attributes type = propbank and label = pred is added. Another node with label = head and nodes for all semantic arguments of the predicate (with labels indicating PropBank argument names) are added and connected to the predicate node. Argument nodes with label ARGM (adjunct) additionally have a feature attribute with values TMP, LOC, etc., as specified in PropBank. The head node and all argument nodes are linked to their respective syntactic constituents, as specified in the PropBank annotation. All introduced semantic edges are marked with the attribute type = propbank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic role labeling",
"sec_num": "4.2"
},
{
"text": "As before, we used section 02-21 of the Prop-Bank (which annotates the same text as the Penn Treebank) to train our graph transformation system, section 00-01 for development and section 23 for testing. We ran three experiments, taking three different corpora of input graphs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic role labeling",
"sec_num": "4.2"
},
{
"text": "Treebank containing function tags, empty nodes, non-local dependencies, etc.;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "2. the output of Charniak's parser (i.e., bare syntactic trees) on the strings of sections 02-21; and 3. the output of Charniak's parser processed with the graph transformation system described in 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "For all three experiments we used the gold standard syntactic and semantic annotations from the Table 3 : Evaluation of our method for semantic role identification with Propbank: with Charniak parses and with parses processed by the system of Section 4.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "Penn Treebank and PropBank as the corpora of output graphs (for the experiment with bare Charniak parses, we dropped function tags, empty nodes and non-local dependencies from the syntactic annotation of the output graphs: we did not want our system to start recovering these annotations, but were interested in the identification of PropBank information alone). For each of the experiments, we used the corpora of input and output graphs as before, at each iteration extracting 20 rewrite rules with most frequent left-hand sides, applying the rules to the development data to measure the current performance of the system. We stopped the learning in case the performance improvement was less than a threshold and, otherwise, continued the learning loop. As our performance measure we used the F 1 score of precision and recall of the correctly identified and labeled nonempty constituents-semantic arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "In all experiments, the learning stopped after 11 or 12 iterations. The results of the evaluation of the system at each iteration on the test section of Prop-Bank are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "As one may expect, the performance of our semantic role labeler is substantially higher on the gold Penn Treebank syntactic structures than on the parser's output. Surprisingly, however, adding extra information to the parser's output (i.e., processing it with the system of Section 4.1) does not significantly improve the performance of the resulting system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "In Table 4 we compare our system for semantic System P R F 1 (Pradhan et al., 2005) 80.9 76.8 78.8 Here 81.0 70.4 75.3 Table 4 : Evaluation of our methods for semantic role identification with Propbank (12 first iterations).",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Pradhan et al., 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
},
{
"start": 119,
"end": 126,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "roles labeling with the output of Charniak's parser to the state-of-the-art system of (Pradhan et al., 2005) . While showing good precision, our system performs worse than state-of-the-art with respect to recall. Taking into account the iterative nature of the method and imperfect rule selection criteria (we simply take the most frequent left-hand sides), we believe that it is the rule selection and learning termination condition that account for the relatively low recall values. Indeed, in all three experiments described above the learning loop stops while the recall is still on the rise, albeit very slowly. It seems that a more careful rule selection mechanism and loop termination criteria are needed to address the recall problem.",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "(Pradhan et al., 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "the original syntactic structures of the Penn",
"sec_num": "1."
},
{
"text": "In this paper we argued that encoding diverse and complex linguistic structures as directed labeled graphs allows one to view many NLP tasks as graph transformation problems. We proposed a general method for learning graph transformation from annotated corpora and described experiments with two NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "For the task of identifying non-local dependencies and for function tagging our general method demonstrates performance similar to the state-ofthe-art systems, designed specifically for these tasks. For the PropBank semantic role labeling the method shows a relatively low recall, which can be explained by our sub-optimal \"rule of thumb\" heuristics (such as selecting 20 most frequent rewrite rules at each iteration of the learning method). We see two ways of avoiding such heuristics. First, one can define and fine-tune the heuristics for each specific application. Second, one can use more informed rewrite rule selection methods, based on graph-based relational learning and frequent subgraph detection algorithms (Cook and Holder, 2000; Yan and Han, 2002) . Furthermore, more experiments are required to see how the details of encoding linguistic information in graphs affect the performance of the method.",
"cite_spans": [
{
"start": 720,
"end": 743,
"text": "(Cook and Holder, 2000;",
"ref_id": null
},
{
"start": 744,
"end": 762,
"text": "Yan and Han, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This research was supported by the Netherlands Organization for Scientific Research (NWO) under project numbers 017. 001.190, 220-80-001, 264-70-050, 354-20-005, 600.065.120, 612-13-001, 612.000.106, 612.066.302, 612.069.006, 640.001.501, 640.002.501, and by the E.U. IST programme of the 6th FP for RTD under project MultiMATCH contract IST-033104.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bracketing guidelines for Treebank II style Penn Treebank project",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mac-Intyre",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Mark Ferguson, Karen Katz, and Robert Mac- Intyre. 1995. Bracketing guidelines for Treebank II style Penn Treebank project. Technical report, Uni- versity of Pennsylvania.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Function Tagging",
"authors": [
{
"first": "Don",
"middle": [],
"last": "Blaheta",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Don Blaheta. 2004. Function Tagging. Ph.D. thesis, Brown University.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "4",
"pages": "543--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Lin- guistics, 21(4):543-565.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using linguistic principles to recover empty categories",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "645--653",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Annual Meeting on Association for Computa- tional Linguistics, pages 645-653.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Meeting of NAACL",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Meeting of NAACL, pages 132-139.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Graph-based data mining",
"authors": [],
"year": null,
"venue": "IEEE Intelligent Systems",
"volume": "15",
"issue": "2",
"pages": "32--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graph-based data mining. IEEE Intelligent Systems, 15(2):32-41.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Statistical Parsing with Non-local Dependencies",
"authors": [
{
"first": "P\u00e9ter",
"middle": [],
"last": "Dienes",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u00e9ter Dienes. 2004. Statistical Parsing with Non-local Dependencies. Ph.D. thesis, Universit\u00e4t des Saarlan- des, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical Language Understanding Using Frame Semantics",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2001. Statistical Language Understand- ing Using Frame Semantics. Ph.D. thesis, University of California, Berkeley.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semantic role labeling via framenet, verbnet and propbank",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Giuglea",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "929--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Giuglea and Alessandro Moschitti. 2006. Se- mantic role labeling via framenet, verbnet and prop- bank. In Proceedings of the 21st International Con- ference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguis- tics, pages 929-936.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parsing with generative models of predicate-argument structure",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Meeting of ACL",
"volume": "",
"issue": "",
"pages": "359--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier. 2003. Parsing with generative mod- els of predicate-argument structure. In Proceedings of the 41st Meeting of ACL, pages 359-366.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Enriching the output of a parser using memory-based learning",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Jijkoun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maarten De Rijke",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin Jijkoun and Maarten de Rijke. 2004. Enrich- ing the output of a parser using memory-based learn- ing. In Proceedings of the 42nd Meeting of the Asso- ciation for Computational Linguistics (ACL'04), Main Volume, pages 311-318, Barcelona, Spain, July.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Graph Transformations for Natural Language Processing",
"authors": [
{
"first": "",
"middle": [],
"last": "Valentin Jijkoun",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin Jijkoun. 2006. Graph Transformations for Nat- ural Language Processing. Ph.D. thesis, University of Amsterdam.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Making large-scale svm learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making large-scale svm learning practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -Sup- port Vector Learning. MIT-Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "FrameNet: Theory and Practice",
"authors": [
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Miriam",
"middle": [
"R L"
],
"last": "Petruck",
"suffix": ""
},
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ellsworth",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher R. Johnson, Miriam R. L. Petruck, Collin F. Baker, Michael Ellsworth, Josef Ruppenhofer, and Charles J. Fillmore. 2003. FrameNet: Theory and Practice. http://www.icsi.berkeley.edu/ \u223c framenet.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A simple pattern-matching algorithm for recovering empty nodes and their antecedents",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th meeting of ACL",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2002. A simple pattern-matching al- gorithm for recovering empty nodes and their an- tecedents. In Proceedings of the 40th meeting of ACL, pages 136-143.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic role labeling using different syntactic views",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Wayne Ward, Kadri Hacioglu, Jim Mar- tin, and Dan Jurafsky. 2005. Semantic role label- ing using different syntactic views. In Proceedings of ACL-2005.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Handbook of Graph Grammars and Computing by Graph Transformation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sch\u00fcrr",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "479--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sch\u00fcrr. 1997. Programmed graph replacement sys- tems. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transfor- mation, chapter 7, pages 479-546.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint learning improves semantic role labeling",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Aria Haghighi, and Chris Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "gspan: Graph-based substructure pattern mining",
"authors": [
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 IEEE International Conference on Data Mining (ICDM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xifeng Yan and Jiawei Han. 2002. gspan: Graph-based substructure pattern mining. In Proceedings of the 2002 IEEE International Conference on Data Mining (ICDM).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Local and non-local syntantic relations. Syntactic structure and semantic roles. tactic structures (phrase trees, dependency structures). In this paper we use node-and edge-labeled directed graphs as our representational formalism.Figures 1 and 2give informal examples of such representations.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Output of a syntactic parser.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Structure of our method for learning graph transformations.",
"uris": null
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td>: Evaluation of our method for identification</td></tr><tr><td>of empty nodes and their antecedents (12 first itera-</td></tr><tr><td>tions).</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>)</td><td>Here</td></tr><tr><td>Type</td><td>Count</td><td>P / R / F1</td><td>P / R / F1</td></tr><tr><td>All tags</td><td>8480</td><td>-</td><td>93.3 / 89.6 / 91.4</td></tr><tr><td>Syntactic</td><td colspan=\"2\">4917 96.5</td><td/></tr></table>",
"type_str": "table",
"text": "/ 95.3 / 95.9 95.4 / 95.5 / 95.5 Semantic 3225 86.7 / 80.3 / 83.4 89.7 / 82.5 / 86.0"
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Evaluation of adding Penn Treebank function tags."
}
}
}
}