{
"paper_id": "H01-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:18.683879Z"
},
"title": "Automatic Pattern Acquisition for Japanese Information Extraction",
"authors": [
{
"first": "Kiyoshi",
"middle": [],
"last": "Sudo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"addrLine": "715 Broadway, 7th floor",
"postCode": "10003",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"addrLine": "715 Broadway, 7th floor",
"postCode": "10003",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"addrLine": "715 Broadway, 7th floor",
"postCode": "10003",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the central issues for information extraction is the cost of customization from one scenario to another. Research on the automated acquisition of patterns is important for portability and scalability. In this paper, we introduce Tree-Based Pattern representation where a pattern is denoted as a path in the dependency tree of a sentence. We outline the procedure to acquire Tree-Based Patterns in Japanese from un-annotated text. The system extracts the relevant sentences from the training data based on TF/IDF scoring and the common paths in the parse tree of relevant sentences are taken as extracted patterns.",
"pdf_parse": {
"paper_id": "H01-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the central issues for information extraction is the cost of customization from one scenario to another. Research on the automated acquisition of patterns is important for portability and scalability. In this paper, we introduce Tree-Based Pattern representation where a pattern is denoted as a path in the dependency tree of a sentence. We outline the procedure to acquire Tree-Based Patterns in Japanese from un-annotated text. The system extracts the relevant sentences from the training data based on TF/IDF scoring and the common paths in the parse tree of relevant sentences are taken as extracted patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information Extraction (IE) systems today are commonly based on pattern matching. New patterns need to be written when we customize an IE system for a new scenario (extraction task); this is costly if done by hand. This has led to recent research on automated acquisition of patterns from text with minimal pre-annotation. Riloff [4] reported a successful result for her procedure that needs only a pre-classified corpus. Yangarber [6] developed a procedure for unannotated natural language texts.",
"cite_spans": [
{
"start": 330,
"end": 333,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 432,
"end": 435,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "A common assumption they share is that the relevant documents include good patterns. Riloff implemented this idea by applying pre-defined heuristic rules to pre-classified (relevant) documents, and Yangarber went further so that the system could classify the documents by itself, given seed patterns specific to a scenario, and then find the best patterns in the relevant document set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Considering how they represent the patterns, we can see that, in general, Riloff and Yangarber relied on the sentence structure of English. Riloff's predefined heuristic rules are based on syntactic structures, such as \" subj active-verb\" and \"active-verb dobj \". Yangarber used triples of a predicate and some of its arguments, such as \" pred subj obj \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Our careful examination of Japanese revealed some challenges for automated pattern acquisition and information extraction that are specific to Japanese(-like) languages, and other challenges that arise regardless of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Challenges",
"sec_num": null
},
{
"text": "Free word order is one of the most significant problems in analyzing Japanese. To capture all the possible patterns given a predicate and its arguments, we need to permute the arguments and list all the patterns separately. For example, for \" subj dobj iobj predicate \" with the constraint that the predicate comes last in the sentence, there would be six possible patterns (permutations of three arguments). The number of patterns needed to cover even simple facts would be unacceptably high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Free Word-ordering",
"sec_num": null
},
{
"text": "There is also a difficulty in a language with a flexible case marking system, like Japanese. In particular, we found that, in Japanese, some of the arguments that are usually marked as object in English were variously marked by different postpositions, and some case markers (postpositions) are used for marking more than one grammatical category in different situations. For example, the topic marker in Japanese, \"wa\", can mark almost any entity that would have been variously marked in English. It is difficult to deal with this variety by simply fixing the number of arguments of a predicate for creating patterns in Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flexible case marking system",
"sec_num": null
},
{
"text": "Furthermore, we may want to capture the relationship between a predicate and a modifier of one of its arguments. In previous approaches, one had to introduce an ad hoc frame for such a relationship, such as \"verb obj [PP head-noun ]\", to extract the relationship between \"to assume\" and \" organization \" in the sentence \" person will assume the post of organization \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships beyond direct predicate-argument",
"sec_num": null
},
{
"text": "Another problem lies in relationships beyond clause boundaries, especially if the event is described in a subordinate clause. For example, for a sentence like \" organization announced that person retired from post ,\" it is hard to find a relationship between organization and the event of retiring without the global view from the predicate \"announce\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships beyond clausal boundaries",
"sec_num": null
},
{
"text": "These problems lead IE systems to fail to capture some of the arguments needed for filling the template. Overcoming the problems above makes the system capable of finding more patterns from the training data, and therefore, more slot-fillers in the template. In this paper, we introduce Tree-based pattern representation and consider how it can be acquired automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships beyond clausal boundaries",
"sec_num": null
},
{
"text": "Tree-based representation of patterns (TBP) is a representation of patterns based on the dependency tree of a sentence. A pattern is defined as a path in the dependency tree passing through zero or more intermediate nodes within the tree. The dependency tree is a directed tree whose nodes are bunsetsus or phrasal units, and whose directed arcs denote the dependency between two bunsetsus: A \u2192 B denotes A's dependency on B (e.g. A is a subject and B is a predicate.) Here dependency relationships are not limited to just those between a case-marked element and a predicate, but also include those between a modifier and its head element, which covers most relationships within sentences. 1 Figure 2 shows how TBP is used in comparison with the word-order based pattern, where A...F in the left part of the figure is a sequence of the phrasal units in a sentence appearing in this order and the tree in the right part is its dependency tree. To find the relationship between B and F, a word-order based pattern needs a dummy expression to hold C, D and E, while TBP can denote the direct relationship as B \u2192 F. TBP can also represent a complicated pattern for a node which is far from the root node in the dependency tree, like C \u2192 D \u2192 E, which is hard to represent without the sentence structure. For matching with TBP, the target sentence should be parsed into a dependency tree. Then all the predicates are detected and the subtrees which have a predicate node as a root are traversed to find a match with a pattern.",
"cite_spans": [
{
"start": 690,
"end": 691,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 692,
"end": 700,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TREE-BASED PATTERN REPRESENTA-TION (TBP) Definition",
"sec_num": "2."
},
{
"text": "TBP has some advantages for pattern matching over surface word-order-based patterns in addressing the problems mentioned in the previous section:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benefit of TBP",
"sec_num": null
},
{
"text": "Free word-order problem TBP can offer a direct representation of the dependency relationship even if the word-order is different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benefit of TBP",
"sec_num": null
},
{
"text": "Free case-marking problem TBP can freely traverse the whole dependency tree and find any significant path as a pattern. It does not depend on predefined case-patterns as Riloff [4] and Yangarber [6] did.",
"cite_spans": [
{
"start": 177,
"end": 180,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 195,
"end": 198,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benefit of TBP",
"sec_num": null
},
{
"text": "TBP can find indirect relationships, such as the relationship between a predicate and the modifier of the argument of the predicate. For example, the pattern \" organization -no post -to appoint\" can capture the relationship between \" organization \" and \"to be appointed\" in the sentence \" person was appointed to post of organization .\" (In this paper, we used the Japanese parser KNP [1] to obtain the dependency tree of a sentence.)",
"cite_spans": [
{
"start": 385,
"end": 388,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Indirect relationships",
"sec_num": null
},
{
"text": "Relationships beyond clausal boundaries TBP can capture relationships beyond clausal boundaries. The pattern \" post -to appoint COMP announce\" can find the relationship between \" post \" and \"to announce\". This relationship, later on, can be combined with the relationship between \" organization \" and \"to announce\" and merged into one event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indirect relationships",
"sec_num": null
},
{
"text": "In this section, we outline our procedure for automatic acquisition of patterns. We employ a cascading procedure, as is shown in Figure 3 . First, the original documents are processed by a morphological analyzer and NE-tagger. Then the system retrieves the relevant documents for the scenario as a relevant document set. The system, further, selects a set of relevant sentences as a relevant sentence set from those in the relevant document set. Finally, all the sentences in the relevant sentence set are parsed and the paths in the dependency tree are taken as patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "ALGORITHM",
"sec_num": "3."
},
{
"text": "Morphological analysis and Named Entity (NE) tagging are performed on the training data at this stage. We used JUMAN [2] for the former and an NE-system which is based on a decision tree algorithm [5] for the latter. The part-of-speech information given by JUMAN is also used in the later stages.",
"cite_spans": [
{
"start": 117,
"end": 120,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 197,
"end": 200,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Preprocessing",
"sec_num": "3.1"
},
{
"text": "The system first retrieves the documents that describe the events of the scenario of interest, called the relevant document set. A set of narrative sentences describing the scenario is selected to create a query for the retrieval. For this experiment, we set the size of the relevant document set to 300 and retrieved the documents using CRL's stochastic-model-based IR system [3], which performed well in the IR task in IREX, the Information Retrieval and Extraction evaluation project in Japan. All the sentences used to create the patterns are retrieved from this relevant document set.",
"cite_spans": [
{
"start": 377,
"end": 380,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Retrieval",
"sec_num": "3.2"
},
{
"text": "The system then calculates the TF/IDF-based score of relevance to the scenario for each sentence in the relevant document set and retrieves the n most relevant sentences as the source of the patterns, where n is set to 300 for this experiment. The retrieved sentences will be the source for pattern extraction in the next subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": "3.3"
},
{
"text": "First, the TF/IDF-based score for every word in the relevant document set is calculated. The TF/IDF score of word w is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": "3.3"
},
{
"text": "score(w) = TF(w) \u00b7 log((N + 0.5) / DF(w)) / log(N + 1) if w is a Noun, Verb, or Named Entity; 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": "3.3"
},
{
"text": "where N is the number of documents in the collection, TF(w) is the term frequency of w in the relevant document set and DF(w) is the document frequency of w in the collection. Second, the system calculates the score of each sentence based on the score of its words. However, unusually short sentences and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": "3.3"
},
{
"text": "[Figure 2: a sentence A B C D E F and its dependency tree, comparing the word-order-based pattern [B * F] with the tree-based patterns [B \u2192 F] and [C \u2192 E \u2192 F]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Retrieval",
"sec_num": "3.3"
},
{
"text": "unusually long sentences will be penalized. The TF/IDF score of sentence s is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2: Extraction using Tree-Based Pattern Representation",
"sec_num": null
},
{
"text": "score(s) = (\u03a3_{w \u2208 s} score(w)) / (length(s) + |length(s) - AVE|)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2: Extraction using Tree-Based Pattern Representation",
"sec_num": null
},
{
"text": "where length(s) is the number of words in s, and AVE is the average number of words in a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2: Extraction using Tree-Based Pattern Representation",
"sec_num": null
},
{
"text": "Based on the dependency tree of the sentences, patterns are extracted from the relevant sentences retrieved in the previous subsection. Figure 4 shows the procedure. First, the retrieved sentence is parsed into a dependency tree by KNP [1] (Stage 1). This stage also finds the predicates in the tree. Second, the system takes all the predicates in the tree as the roots of their own subtrees, as is shown in (Stage 2). Then each path from the root to a node is extracted, and these paths are collected and counted across all the relevant sentences. Finally, the system takes those paths with frequency higher than some threshold as extracted patterns. Figure 5 shows examples of the acquired patterns.",
"cite_spans": [
{
"start": 236,
"end": 239,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 4",
"ref_id": null
},
{
"start": 652,
"end": 660,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern Extraction",
"sec_num": "3.4"
},
{
"text": "It is not a simple task to evaluate how good the acquired patterns are without incorporating them into a complete extraction system with appropriate template generation, etc. However, finding matches between the patterns and portions of the test sentences can be a good measure of the performance of the patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENT",
"sec_num": "4."
},
{
"text": "The task for this experiment is to find a bunsetsu, a phrasal unit, that includes slot-fillers by matching the pattern to the test sentence. The performance is measured by recall and precision in terms of the number of slot-fillers that the matched patterns can find; these are calculated as follows: Recall = (# of Matched Relevant Slot-fillers) / (# of All Relevant Slot-fillers); Precision = (# of Matched Relevant Slot-fillers) / (# of All Matched Slot-fillers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENT",
"sec_num": "4."
},
{
"text": "The procedure proposed in this paper is based on bunsetsus, and an individual bunsetsu may contain more than one slot filler. In such cases the procedure is given credit for each slot filler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision = # of Matched Relevant Slot-fillers / # of All Matched Slot-fillers",
"sec_num": null
},
{
"text": "Strictly speaking, we don't know how many entities in a matched pattern might be slot-fillers when, actually, the pattern does not contain any slot-fillers (in the case of over-generation). We approximate the potential number of slot-fillers by assigning 1 if the (falsely) matched pattern does not contain any Named Entities, or the number of Named Entities in the (falsely) matched pattern otherwise. For example, if we have a pattern \"go to dinner\" for a management succession scenario and it matches falsely in some part of the test sentences, this match will add one to the number of All Matched Slot-fillers (the denominator of the precision). On the other hand, if the pattern is \" post person laugh\" and it falsely matches \"President Clinton laughed\", this will add two, the number of Named Entities in the pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "For the sake of comparison, we defined the baseline system with the patterns acquired by the same procedure but only from the direct relationships between a predicate and its arguments (PA in Figure 6 and 7) .",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 207,
"text": "Figure 6 and 7)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Precision = # of Matched Relevant Slot-fillers / # of All Matched Slot-fillers",
"sec_num": null
},
{
"text": "We chose the following two scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "Executive Management Succession: events in which corporate managers left their positions or assumed new ones regardless of whether it was a present (time of the report) or past event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "Items to extract: Date, person, organization, title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "Robbery Arrest: events in which robbery suspects were arrested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "Items to extract: Date, suspect, suspicion. For all the experiments, we used the Mainichi-Newspaper-95 corpus for training. As described in the previous section, the system retrieved 300 articles for each scenario as the relevant document set from the training data and it further retrieved 300 sentences as the relevant sentence set from which all the patterns were extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00c8\u00d6 \u00d7 \u00d3\u00d2 \u00d3 \u00c5 \u00d8 \u00ca \u00d0 \u00da \u00d2\u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7 \u00d3 \u00d0\u00d0 \u00c5 \u00d8 \u00cb\u00d0\u00d3\u00d8 \u00d0\u00d0 \u00d6\u00d7",
"sec_num": null
},
{
"text": "Test data was taken from Mainichi-Newspaper-94 by manually reviewing the data for one month. The statistics of the test data are shown in Table 1 . Figure 6 and Figure 7 show the results of the experiment for the executive management succession scenario and robbery arrest scenario, respectively. We ranked all the acquired patterns by calculating the sum of the TF/IDF-based score (same as for sentence retrieval in Section 3.3) for each word in the pattern and sorting them on this basis. Then we obtained the precision-recall curve by changing the number of the top-ranked patterns in the list. Figure 6 shows that TBP is superior to the baseline system both in recall and precision. The highest recall for TBP is 34% while the baseline gets 29% at the same precision level. On the other hand, at the same level of recall, TBP got higher precision (75%) than the baseline (70%).",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 598,
"end": 606,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We can also see from Figure 6 that the curve has a slightly anomalous shape where at lower recall (below 20%) the precision is also low for both TBP and the baseline. This is due to the fact that the pattern lists for both TBP and the baseline contain some unreliable patterns which get a high score because each word in the patterns gets higher score than others. Figure 7 shows the result of this experiment on the Robbery Arrest scenario. Although the overall recall is low, TBP achieved higher precision and recall (as high as 30% recall at 40% precision) than the baseline except at the anomalous point where both TBP and the baseline got a small number of perfect slot-fillers by a highly ranked pattern, namely \"gotoyogi-de taihosuru (to arrest ",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 365,
"end": 373,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The recall on the robbery arrest scenario is low mostly because we have not built a class for types of crimes. Once we have a classifier as reliable as the Named-Entity tagger, we can make a significant gain in the recall of the system. In turn, once we have a class name for crimes in the training data (automatically annotated by the classifier) instead of a separate name for each crime, it becomes a good indicator of whether a sentence should be used to acquire patterns. Incorporating the classes into patterns can also reduce the noisy patterns which do not carry any slot-fillers of the template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low Recall",
"sec_num": null
},
{
"text": "For example on the management succession scenario, all the slot-fillers defined there were able to be tagged by the Named-Entity tagger [5] we used for this experiment, including the title. Since we knew all the slot-fillers were in one of the classes, we also knew that patterns whose arguments were not classified into any of the classes would be unlikely to capture slot-fillers. So we could put more weight on those patterns which contained person , organization , post and date to collect the patterns with higher performance, and therefore we could achieve high precision.",
"cite_spans": [
{
"start": 136,
"end": 139,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Low Recall",
"sec_num": null
},
{
"text": "We also investigated other scenarios, namely the train accident and airplane accident scenarios, which we will not report on in this paper. However, some of the problems which arose may be worth mentioning, since they will arise in other, similar scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Erroneous Case Analysis",
"sec_num": null
},
{
"text": "Especially for the airplane accident scenario, most errors were identified as matching the effect or result of the incident. A typical example is \"Because of the accident, the airport had been closed for an hour.\" In the airplane accident scenario, the performance of the document retrieval and the sentence retrieval is not as good as for the other two scenarios, and therefore the frequency of relevant acquired patterns is rather low because of the noise. Further improvement in retrieval and a more robust approach are necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results or Effects of the Target Event",
"sec_num": null
},
{
"text": "If the scenario is specific enough to make it difficult as an IR task, the result of the document retrieval stage may include many documents related to the scenario in a broader sense but not specific enough for IE tasks. In this experiment, this was the case for the airplane accident scenario. The result of document retrieval included documents about other accidents in general, such as traffic accidents. Therefore, the sentence retrieval and pattern acquisition for these scenarios were affected by the results of the document retrievals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related but Not-Desired Sentences",
"sec_num": null
},
{
"text": "To apply the acquired patterns to an information extraction task, further steps are required besides those mentioned above. Since the patterns are a set of the binary relationships of a predicate and another element, it is necessary to merge the matched elements into a whole event structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FUTURE WORK Information Extraction",
"sec_num": "6."
},
{
"text": "We have not yet attempted any (lexical) generalization of pattern candidates. The patterns can be expanded by using a thesaurus and/or introducing a new (lexical) class suitable for a particular domain. For example, a class of expressions for flight numbers would clearly help the performance on the airplane accident scenario. In particular, the generalized patterns will help improve recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Necessity for Generalization",
"sec_num": null
},
{
"text": "As is discussed in the previous section, the performance of our system relies on each component. If the scenario is difficult for the IR task, for example, the whole result is affected. The investigation of a more conservative approach would be necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Necessity for Generalization",
"sec_num": null
},
{
"text": "The presented results show that our procedure of automatic pattern acquisition is promising. The procedure is quite general and addresses problems which are not specific to Japanese. With an appropriate morphological analyzer, a parser that produces a dependency tree and an NE-tagger, our procedure should be applicable to almost any language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translingualism",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001-00-1-8917 from the Space and Naval Warfare Systems Center San Diego. This paper does not necessarily reflect the position or the policy of the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "KN parser: Japanese dependency/case structure analyzer",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "the Proceedings of the Workshop on Sharable Natural Language Resources",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kurohashi and M. Nagao. Kn parser : Japanese dependency/case structure analyzer. In the Proceedings of the Workshop on Sharable Natural Language Resources, 1994.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Japanese morphological analyzing system: Juman",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Yamaji",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Taeki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagano",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Matsumoto, S. Kurohashi, O. Yamaji, Y. Taeki, and M. Nagano. Japanese morphological analyzing system: Juman. Kyoto University and Nara Institute of Science and Technology, 1997.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Information retrieval based on stochastic models in irex",
"authors": [
{
"first": "M",
"middle": [],
"last": "Murata",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ozaku",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 1994,
"venue": "the Proceedings of the IREX Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Murata, K. Uchimoto, H. Ozaku, and Q. Ma. Information retrieval based on stochastic models in IREX. In the Proceedings of the IREX Workshop, 1994.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatically generating extraction patterns from untagged text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 1996,
"venue": "the Proceedings of Thirteenth National Conference on Artificial Intelligence (AAAI-96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Riloff. Automatically generating extraction patterns from untagged text. In the Proceedings of Thirteenth National Conference on Artificial Intelligence (AAAI-96), 1996.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A decision tree method for finding and classifying names in Japanese texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shinnou",
"suffix": ""
}
],
"year": 1998,
"venue": "the Proceedings of the Sixth Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sekine, R. Grishman, and H. Shinnou. A decision tree method for finding and classifying names in Japanese texts. In the Proceedings of the Sixth Workshop on Very Large Corpora, 1998.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised discovery of scenario-level patterns for information extraction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tapanainen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Huttunen",
"suffix": ""
}
],
"year": 2000,
"venue": "the Proceedings of the Sixth Applied Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Yangarber, R. Grishman, P. Tapanainen, and S. Huttunen. Unsupervised discovery of scenario-level patterns for information extraction. In the Proceedings of the Sixth Applied Natural Language Processing Conference, 2000.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "IREX Homepage: http://cs.nyu.edu/cs/projects/proteus/irex. Tree-Based Pattern Representation; Word-order Pattern",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Pattern Acquisition Procedure Overall Process",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Figure 7 illustrates the precision-recall curve of this Stage 1 that person was appointed to post.) Pattern Acquisition from \" org -wa psn -ga pst -ni shuninsuru-to happyoshita.\"",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Result on Management Succession Scenario",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "Result on Robbery Arrest Scenario on suspicion of robbery)\" for the baseline and \" person yogisha number -o taihosuru (to arrest the suspect, person , age number )\".",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td>Robbery Arrest</td></tr><tr><td>Documents</td><td>28</td></tr><tr><td>Sentences</td><td>182</td></tr><tr><td>DATE</td><td>26</td></tr><tr><td>SUSPICION</td><td>34</td></tr><tr><td>SUSPECT</td><td>50</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}