|
{ |
|
"paper_id": "N06-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:45:50.302523Z" |
|
}, |
|
"title": "Preemptive Information Extraction using Unrestricted Relation Discovery", |
|
"authors": [ |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Shinyama", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We are trying to extend the boundary of Information Extraction (IE) systems. Existing IE systems require a lot of time and human effort to tune for a new scenario. Preemptive Information Extraction is an attempt to automatically create all feasible IE systems in advance without human intervention. We propose a technique called Unrestricted Relation Discovery that discovers all possible relations from texts and presents them as tables. We present a preliminary system that obtains reasonably good results.", |
|
"pdf_parse": { |
|
"paper_id": "N06-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We are trying to extend the boundary of Information Extraction (IE) systems. Existing IE systems require a lot of time and human effort to tune for a new scenario. Preemptive Information Extraction is an attempt to automatically create all feasible IE systems in advance without human intervention. We propose a technique called Unrestricted Relation Discovery that discovers all possible relations from texts and presents them as tables. We present a preliminary system that obtains reasonably good results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Every day, a large number of news articles are created and reported, many of which are unique. But certain types of events, such as hurricanes or murders, are reported again and again throughout a year. The goal of Information Extraction, or IE, is to retrieve a certain type of news event from past articles and present the events as a table whose columns are filled with a name of a person or company, according to its role in the event. However, existing IE techniques require a lot of human labor. First, you have to specify the type of information you want and collect articles that include this information. Then, you have to analyze the articles and manually craft a set of patterns to capture these events. Most existing IE research focuses on reducing this burden by helping people create such patterns. But each time you want to extract a different kind of information, you need to repeat the whole process: specify arti-cles and adjust its patterns, either manually or semiautomatically. There is a bit of a dangerous pitfall here. First, it is hard to estimate how good the system can be after months of work. Furthermore, you might not know if the task is even doable in the first place. Knowing what kind of information is easily obtained in advance would help reduce this risk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An IE task can be defined as finding a relation among several entities involved in a certain type of event. For example, in the MUC-6 management succession scenario, one seeks a relation between COMPANY, PERSON and POST involved with hiring/firing events. For each row of an extracted table, you can always read it as \"COMPANY hired (or fired) PERSON for POST.\" The relation between these entities is retained throughout the table. There are many existing works on obtaining extraction patterns for pre-defined relations (Riloff, 1996; Yangarber et al., 2000; Agichtein and Gravano, 2000; Sudo et al., 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 521, |
|
"end": 535, |
|
"text": "(Riloff, 1996;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 559, |
|
"text": "Yangarber et al., 2000;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 588, |
|
"text": "Agichtein and Gravano, 2000;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 607, |
|
"text": "Sudo et al., 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Unrestricted Relation Discovery is a technique to automatically discover such relations that repeatedly appear in a corpus and present them as a table, with absolutely no human intervention. Unlike most existing IE research, a user does not specify the type of articles or information wanted. Instead, a system tries to find all the kinds of relations that are reported multiple times and can be reported in tabular form. This technique will open up the possibility of trying new IE scenarios. Furthermore, the system itself can be used as an IE system, since an obtained relation is already presented as a table. If this system works to a certain extent, tuning an IE system becomes a search problem: all the tables are already built \"preemptively.\" A user only needs to search for a relevant table. Article dump be-hit 2005-09-23 Katrina New Orleans 2005-10-02 Longwang Taiwan 2005-11-20 Gamma Florida Keywords: storm, evacuate, coast, rain, hurricane Table 1 : Sample discovered relation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 785, |
|
"end": 974, |
|
"text": "relevant table. Article dump be-hit 2005-09-23 Katrina New Orleans 2005-10-02 Longwang Taiwan 2005-11-20 Gamma Florida Keywords: storm, evacuate, coast, rain, hurricane Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We implemented a preliminary system for this technique and obtained reasonably good performance. Table 1 is a sample relation that was extracted as a table by our system. The columns of the table show article dates, names of hurricanes and the places they affected respectively. The headers of the table and its keywords were also extracted automatically.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 104, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Unrestricted Relation Discovery, the discovery process (i.e. creating new tables) can be formulated as a clustering task. The key idea is to cluster a set of articles that contain entities bearing a similar relation to each other in such a way that we can construct a table where the entities that play the same role are placed in the same column.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Idea", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Suppose that there are two articles A and B, and both report hurricane-related news. Article A contains two entities \"Katrina\" and \"New Orleans\", and article B contains \"Longwang\" and \"Taiwan\". These entities are recognized by a Named Entity (NE) tagger. We want to discover a relation among them. First, we introduce a notion called \"basic pattern\" to form a relation. A basic pattern is a part of the text that is syntactically connected to an entity. Some examples are \"X is hit\" or \"Y's residents\". Figure 1 shows several basic patterns connected to the entities \"Katrina\" and \"New Orleans\" in article A. Similarly, we obtain the basic patterns for article B. Now, in Figure 2 , both entities \"Katrina\" and \"Longwang\" have the basic pattern \"headed\" in common. In this case, we connect these two entities to each other. Furthermore, there is also a common basic pattern \"was-hit\" shared by \"New Orleans\" and \"Taiwan\". Now, we found two sets of entities that can be placed in correspondence at the same time. What does this mean? We can infer that both entity sets (\"Katrina\"-\"New Orleans\" and \"Longwang\"-\"Taiwan\") represent a certain relation that has something in common: a hurricane name Figure 2 : Finding a similar relation from two articles. and the place it affected. By finding multiple parallel correspondences between two articles, we can estimate the similarity of their relations. Generally, in a clustering task, one groups items by finding similar pairs. After finding a pair of articles that have a similar relation, we can bring them into the same cluster. In this case, we cluster articles by using their basic patterns as features. However, each basic pattern is still connected to its entity so that we can extract the name from it. We can consider a basic pattern to represent something like the \"role\" of its entity. In this example, the entities that had \"headed\" as a basic pattern are hurricanes, and the entities that had \"was-hit\" as a basic pattern are the places it affected. By using basic patterns, we can align the entities into the corresponding column that represents a certain role in the relation. From this example, we create a two-by-two table, where each column represents the roles of the entities, and each row represents a different article, as shown in the bottom of Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 503, |
|
"end": 511, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 680, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1194, |
|
"end": 1202, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2312, |
|
"end": 2320, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic Idea", |
|
"sec_num": "2" |
|
}, |
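The alignment step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; the entity-to-pattern dictionaries below are made up to mirror the Katrina/Longwang example.

```python
from itertools import product

# Hypothetical input: each article maps its entities to their basic patterns.
article_a = {"Katrina": {"headed"}, "New Orleans": {"was-hit"}}
article_b = {"Longwang": {"headed"}, "Taiwan": {"was-hit"}}

def align_entities(ents_a, ents_b):
    """Pair entities from two articles that share at least one basic pattern.
    Each correspondence becomes a candidate column of the discovered table."""
    columns = []
    for (ea, pa), (eb, pb) in product(ents_a.items(), ents_b.items()):
        shared = pa & pb
        if shared:
            columns.append((shared, ea, eb))
    return columns

columns = align_entities(article_a, article_b)
if len(columns) >= 2:  # multiple parallel correspondences -> similar relation
    table = [[ea for _, ea, _ in columns],   # row for article A
             [eb for _, _, eb in columns]]   # row for article B
    print(table)  # [['Katrina', 'New Orleans'], ['Longwang', 'Taiwan']]
```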
|
{ |
|
"text": "We can extend this table by finding another article in the same manner. In this way, we gradually extend a table while retaining a relation among its columns. In this example, the obtained table is just what an IE system (whose task is to find a hurricane name and the affected place) would create. However, these articles might also include other things, which could represent different relations. For example, the governments might call for help or some casualties might have been reported. To obtain such relations, we need to choose different entities from the articles. Several existing works have tried to extract a certain type of relation by manually choosing different pairs of entities (Brin, 1998; Ravichandran and Hovy, 2002) . Hasegawa et al. (2004) tried to extract multiple relations by choosing entity types. We assume that we can find such relations by trying all possible combinations from a set of entities we have chosen in advance; some combinations might represent a hurricane and government relation, and others might represent a place and its casualties. To ensure that an article can have several different relations, we let each article belong to several different clusters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 696, |
|
"end": 708, |
|
"text": "(Brin, 1998;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 737, |
|
"text": "Ravichandran and Hovy, 2002)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 762, |
|
"text": "Hasegawa et al. (2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Idea", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a real-world situation, only using basic patterns sometimes gives undesired results. For example, \"(President) Bush flew to Texas\" and \"(Hurricane) Katrina flew to New Orleans\" both have a basic pattern \"flew to\" in common, so \"Bush\" and \"Katrina\" would be put into the same column. But we want to separate them in different tables. To alleviate this problem, we put an additional restriction on clustering. We use a bag-of-words approach to discriminate two articles: if the word-based similarity between two articles is too small, we do not bring them together into the same cluster (i.e. table). We exclude names from the similarity calculation at this stage because we want to link articles about the same type of event, not the same instance. In addition, we use the frequency of each basic pattern to compute the similarity of relations, since basic patterns like \"say\" or \"have\" appear in almost every article and it is dangerous to rely on such expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Idea", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the above explanation, we have assumed that we can obtain enough basic patterns from an article. However, the actual number of basic patterns that one can find from a single article is usually not enough, because the number of sentences is rather small in comparison to the variation of expressions. So having two articles that have multiple basic patterns in common is very unlikely. We extend the number of articles for obtaining basic patterns by using a cluster of comparable articles that report the same event instead of a single article. We call this cluster of articles a \"basic cluster.\" Using basic clusters instead of single articles also helps to increase the redundancy of data. We can give more confidence to repeated basic patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Increasing Basic Patterns", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that the notion of \"basic cluster\" is different from the clusters used for creating tables explained above. In the following sections, a cluster for creating a table is called a \"metacluster,\" because this is a cluster of basic clusters. A basic cluster consists of a set of articles that report the same event which happens at a certain time, and a metacluster consists of a set of events that contain the same relation over a certain period.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Increasing Basic Patterns", |
|
"sec_num": null |
|
}, |
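To make the two levels of clustering concrete, here is a minimal data-model sketch; the class and field names are our own and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class BasicCluster:
    """Comparable articles reporting one event at one point in time."""
    date: str
    article_ids: List[str]
    # basic patterns collected from the cluster's top-weighted entities
    patterns: Set[str] = field(default_factory=set)

@dataclass
class MetaCluster:
    """Events (basic clusters) sharing the same relation over a period of time;
    rendered as one table, with one row per basic cluster."""
    columns: List[str]                          # roles, e.g. basic-pattern headers
    rows: List[BasicCluster] = field(default_factory=list)
```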
|
{ |
|
"text": "We try to increase the number of articles in a basic cluster by looking at multiple news sources simultaneously. We use a clustering algorithm that uses a vector-space-model to obtain basic clusters. Then we apply cross-document coreference resolution to connect entities of different articles within a basic cluster. This way, we can increase the number of basic patterns connected to each entity. Also, it allows us to give a weight to entities. We calculate their weights using the number of occurrences within a cluster and their position within an article. These entities are used to obtain basic patterns later.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Increasing Basic Patterns", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also use a parser and tree normalizer to generate basic patterns. The format of basic patterns is crucial to performance. We think a basic pattern should be somewhat specific, since each pattern should capture an entity with some relevant context. But at the same time a basic pattern should be general enough to reduce data sparseness. We choose a predicate-argument structure as a natural solution for this problem. Compared to traditional constituent trees, a predicate-argument structure is a higher-level representation of sentences that has gained wide acceptance from the natural language community recently. In this paper we used a logical feature structure called GLARF proposed by Meyers et al. (2001a) . A GLARF converter takes a syntactic tree as an input and augments it with several features. Figure 3 shows a sample GLARF structure obtained from the sentence \"Katrina hit Louisiana's coast.\" We used GLARF for two reasons: first, unlike traditional constituent parsers, GLARF has an ability to regularize several linguistic phenomena such as participial constructions and coordination. This allows us to handle this syntactic variety in a uniform way. Second, an output structure can be easily converted into a directed graph that represents the relationship between each word, without losing significant information from the original sentence. Compared to an ordinary constituent tree, it is easier to extract syntactic relationships. In the next section, we discuss how we used this structure to generate basic patterns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 694, |
|
"end": 715, |
|
"text": "Meyers et al. (2001a)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 810, |
|
"end": 818, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Increasing Basic Patterns", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The overall process to generate basic patterns and discover relations from unannotated news articles is shown in Figure 4 . Theoretically this could be a straight pipeline, but due to the nature of the implementation we process some stages separately and combine them in the later stage. In the following subsection, we explain each component.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 121, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "First of all, we need a lot of news articles from multiple news sources. We created a simple web crawler that extract the main texts from web pages. We observed that the crawler can correctly take the main texts from about 90% of the pages from each news site. We ran the crawler every day on several news sites. Then we applied a simple clustering algorithm to the obtained articles in order to find a set of arti- cles that talk about exactly the same news and form a basic cluster. We eliminate stop words and stem all the other words, then compute the similarity between two articles by using a bag-of-words approach. In news articles, a sentence that appears in the beginning of an article is usually more important than the others. So we preserved the word order to take into account the location of each sentence. First we computed a word vector from each article:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web Crawling and Basic Clustering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "V w (A) = IDF(w) i\u2208P OS(w,A) exp(\u2212 i avgwords )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web Crawling and Basic Clustering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where V w (A) is a vector element of word w in article A, IDF (w) is the inverse document frequency of word w, and P OS(w, A) is a list of w's positions in the article. avgwords is the average number of words for all articles. Then we calculated the cosine value of each pair of vectors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web Crawling and Basic Clustering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Sim(A 1 , A 2 ) = cos(V (A 1 ) \u2022 V (A 2 ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web Crawling and Basic Clustering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We computed the similarity of all possible pairs of articles from the same day, and selected the pairs whose similarity exceeded a certain threshold (0.65 in this experiment) to form a basic cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web Crawling and Basic Clustering", |
|
"sec_num": "3.1" |
|
}, |
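As a rough sketch of this step (our own illustration, not the authors' code; it assumes the articles are already tokenized, stop-worded, and stemmed, and that IDF values have been computed elsewhere), the position-weighted vectors and the cosine test could look like this:

```python
import math
from collections import defaultdict

def word_vector(tokens, idf, avgwords):
    """Position-weighted word vector: V_w(A) = IDF(w) * sum_i exp(-i / avgwords),
    where i ranges over the positions of w in the article."""
    vec = defaultdict(float)
    for i, w in enumerate(tokens):
        vec[w] += math.exp(-i / avgwords)
    return {w: idf.get(w, 0.0) * v for w, v in vec.items()}

def cosine(v1, v2):
    dot = sum(v1[w] * v2.get(w, 0.0) for w in v1)
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def basic_cluster_pairs(articles, idf, threshold=0.65):
    """Return all same-day article pairs similar enough to seed a basic cluster.
    `articles` maps an article id to a (date, token list) pair."""
    avgwords = sum(len(t) for _, t in articles.values()) / len(articles)
    vecs = {a: word_vector(toks, idf, avgwords) for a, (_, toks) in articles.items()}
    ids = sorted(articles)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if articles[a][0] == articles[b][0] and cosine(vecs[a], vecs[b]) > threshold:
                pairs.append((a, b))
    return pairs
```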
|
{ |
|
"text": "After getting a set of basic clusters, we pass them to an existing statistical parser (Charniak, 2000) and rule-based tree normalizer to obtain a GLARF structure for each sentence in every article. The current implementation of a GLARF converter gives about 75% F-score using parser output. For the details of GLARF representation and its conversion, see Meyers et al. (2001b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 102, |
|
"text": "(Charniak, 2000)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 376, |
|
"text": "Meyers et al. (2001b)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing and GLARFing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In parallel with parsing and GLARFing, we also apply NE tagging and coreference resolution for each article in a basic cluster. We used an HMM-based NE tagger whose performance is about 85% in Fscore. This NE tagger produces ACE-type Named Entities 1 : PERSON, ORGANIZATION, GPE, LO-CATION and FACILITY 2 . After applying singledocument coreference resolution for each article, we connect the entities among different articles in the same basic cluster to obtain cross-document coreference entities with simple string matching.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NE Tagging and Coreference Resolution", |
|
"sec_num": "3.3" |
|
}, |
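The cross-document linking step can be sketched as follows. This is a simplification under our own assumptions about the data structures; the name normalization in the real system may be more involved than the plain lower-cased string match used here.

```python
from collections import defaultdict

def cross_document_entities(articles):
    """Link entity chains across articles of one basic cluster by exact string
    match on their representative (normalized) names.

    `articles` maps an article id to a list of (representative_name, chain) pairs;
    the result maps each name to the chains it groups together."""
    merged = defaultdict(list)
    for art_id, chains in articles.items():
        for name, chain in chains:
            merged[name.lower()].append((art_id, chain))
    return dict(merged)

# Hypothetical basic cluster with two articles mentioning the same storm.
cluster = {
    "a1": [("Katrina", ["Katrina", "the storm"]), ("New Orleans", ["New Orleans"])],
    "a2": [("Katrina", ["Hurricane Katrina", "it"])],
}
print(sorted(cross_document_entities(cluster)))  # ['katrina', 'new orleans']
```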
|
{ |
|
"text": "After getting a GLARF structure for each sentence and a set of documents whose entities are tagged and connected to each other, we merge the two outputs and create a big network of GLARF structures whose nodes are interconnected across different sentences/articles. Now we can generate basic patterns for each entity. First, we compute the weight for each cross-document entity E in a certain basic cluster as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "W E = e\u2208E mentions(e) \u2022 exp(\u2212C \u2022 f irstsent(e))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where e \u2208 E is an entity within one article and mentions(e) and f irstsent(e) are the number of mentions of entity e in a document and the position 1 The ACE task description can be found at http://www.itl.nist.gov/iad/894.01/tests/ace/ and the ACE guidelines at http://www.ldc.upenn.edu/Projects/ACE/ 2 The hurricane names used in the examples were recognized as PERSON. of the sentence where entity e first appeared, respectively. C is a constant value which was 0.5 in this experiment. To reduce combinatorial complexity, we took only the five most highly weighted entities from each basic cluster to generate basic patterns. We observed these five entities can cover major relations that are reported in a basic cluster. Next, we obtain basic patterns from the GLARF structures. We used only the first ten sentences in each article for getting basic patterns, as most important facts are usually written in the first few sentences of a news article. Figure 5 shows all the basic patterns obtained from the sentence \"Katrina hit Louisiana's coast.\" The shaded nodes \"Katrina\" and \"Louisiana\" are entities from which each basic pattern originates. We take a path of GLARF nodes from each entity node until it reaches any predicative node: noun, verb, or adjective in this case. Since the nodes \"hit\" and \"coast\" can be predicates in this example, we obtain three unique paths \"Louisiana+T-POS:coast (Louisiana's coast)\", \"Katrina+SBJ:hit (Katrina hit something)\", and \"Katrina+SBJ:hit-OBJ:coast (Katrina hit some coast)\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 149, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 303, |
|
"text": "2", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 954, |
|
"end": 962, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
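A minimal sketch of the entity-weighting and selection step follows; the data structures and names are illustrative assumptions, not the actual system.

```python
import math

def entity_weight(doc_mentions, C=0.5):
    """W_E = sum over per-document entities e of mentions(e) * exp(-C * firstsent(e)).

    `doc_mentions` is a list of (mentions, first_sentence_index) pairs, one per
    article in the basic cluster that contains this cross-document entity."""
    return sum(m * math.exp(-C * first_sent) for m, first_sent in doc_mentions)

def top_entities(cluster_entities, k=5):
    """Keep only the k most highly weighted cross-document entities of a basic
    cluster; these are the entities used to generate basic patterns."""
    ranked = sorted(cluster_entities.items(),
                    key=lambda kv: entity_weight(kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical cluster: entity -> [(mentions, first sentence index), ...] per article.
cluster = {
    "Katrina": [(5, 0), (3, 1)],
    "New Orleans": [(4, 0), (2, 2)],
    "Red Cross": [(1, 7)],
}
print(top_entities(cluster, k=2))  # ['Katrina', 'New Orleans']
```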
|
{ |
|
"text": "To increase the specificity of patterns, we generate extra basic patterns by adding a node that is immediately connected to a predicative node. (From this example, we generate two basic patterns: \"hit\" and \"hit-coast\" from the \"Katrina\" node.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Notice that in a GLARF structure, the type of each argument such as subject or object is preserved in an edge even if we extract a single path of a graph. Now, we replace both entities \"Katrina\" and \"Louisiana\" with variables based on their NE tags and obtain parameterized patterns: \"GPE+T-POS:coast (Louisiana's coast)\", \"PER+SBJ:hit (Katrina hit something)\", and \"PER+SBJ:hit-OBJ:coast (Katrina hit some coast)\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
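The path extraction just described can be sketched as follows. The graph encoding is our own simplified stand-in for a GLARF structure; the node and edge labels follow the paper's example.

```python
def basic_patterns(graph, entity, ne_type, predicates):
    """Collect paths from an entity node up to a predicative node (noun, verb,
    or adjective), then parameterize the entity with its NE type.
    `graph` maps a node to a list of (edge_label, head_node) pairs."""
    patterns = []

    def walk(node, path):
        for label, head in graph.get(node, []):
            new_path = path + ["%s:%s" % (label, head)]
            if head in predicates:
                patterns.append(ne_type + "+" + "-".join(new_path))
                # extra, more specific pattern: add a node immediately
                # connected to the predicative node (e.g. "hit-coast")
                for label2, nxt in graph.get(head, []):
                    patterns.append(ne_type + "+" + "-".join(new_path + ["%s:%s" % (label2, nxt)]))
            else:
                walk(head, new_path)

    walk(entity, [])
    return patterns

# "Katrina hit Louisiana's coast."
graph = {
    "Katrina": [("SBJ", "hit")],
    "Louisiana": [("T-POS", "coast")],
    "hit": [("OBJ", "coast")],
}
predicates = {"hit", "coast"}
print(basic_patterns(graph, "Katrina", "PER", predicates))
# ['PER+SBJ:hit', 'PER+SBJ:hit-OBJ:coast']
print(basic_patterns(graph, "Louisiana", "GPE", predicates))
# ['GPE+T-POS:coast']
```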
|
{ |
|
"text": "After taking all the basic patterns from every basic cluster, we compute the Inverse Cluster Frequency (ICF) of each unique basic pattern. ICF is similar to the Inverse Document Frequency (IDF) of words, which is used to calculate the weight of each basic pattern for metaclustering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Pattern Generation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Finally, we can perform metaclustering to obtain tables. We compute the similarity between each basic cluster pair, as seen in Figure 6 . X A and X B are the set of cross-document entities from basic clusters c A and c B , respectively. We examine all possible mappings of relations (parallel mappings of multiple entities) from both basic clusters, and find all the mappings M whose similarity score exceeds a certain threshold. wordsim(c A , c B ) is the bag-of-words similarity of two clusters. As a weighting function we used ICF:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 135, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Metaclustering", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "weight(p) = \u2212 log( clusters that include p all clusters )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metaclustering", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We then sort the similarities of all possible pairs of basic clusters, and try to build a metacluster by taking the most strongly connected pair first. Note that in this process we may assign one basic cluster to several different metaclusters. When a link is found between two basic clusters that were already assigned to a metacluster, we try to put them into all the existing metaclusters it belongs to. However, we allow a basic cluster to be added only if it can fill all the columns in that table. In other words, the first two basic clusters (i.e. an initial two-row table) determines its columns and therefore define the relation of that table.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metaclustering", |
|
"sec_num": "3.5" |
|
}, |
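Figure 6 can be read roughly as the following sketch. This is our simplified interpretation, not the actual code: it scores one-to-one entity mappings with ICF-weighted shared patterns and applies the size and score thresholds, while the bag-of-words check wordsim(c_A, c_B) is left out for brevity.

```python
import math
from itertools import combinations, permutations

def icf_weight(pattern, cluster_freq, n_clusters):
    """ICF weight of a basic pattern: -log(fraction of basic clusters containing it)."""
    return -math.log(cluster_freq.get(pattern, 1) / n_clusters)

def best_mapping(ents_a, ents_b, cluster_freq, n_clusters, t1=1, t2=1.0):
    """Find the best one-to-one entity mapping between two basic clusters.
    `ents_a` / `ents_b` map entity names to their sets of basic patterns;
    a mapping is kept only if its size exceeds t1 and its score exceeds t2."""
    best = None
    max_size = min(len(ents_a), len(ents_b))
    for size in range(max_size, t1, -1):
        for subset_b in combinations(ents_b, size):
            for subset_a in permutations(ents_a, size):
                mapping = list(zip(subset_a, subset_b))
                score = sum(icf_weight(p, cluster_freq, n_clusters)
                            for ea, eb in mapping
                            for p in ents_a[ea] & ents_b[eb])
                if score > t2 and (best is None or score > best[0]):
                    best = (score, mapping)
    return best  # None if no mapping passes the thresholds
```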
|
{ |
|
"text": "We used twelve newspapers published mainly in the U.S. We collected their articles over two months (from Sep. 21, 2005 -Nov. 27, 2005 . We obtained 643,767 basic patterns and 7,990 unique types. Then we applied metaclustering to these basic clusters Table 2 : Articles and obtained metaclusters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 118, |
|
"text": "(from Sep. 21, 2005", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 133, |
|
"text": "-Nov. 27, 2005", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "and obtained 302 metaclusters (tables). We then removed duplicated rows and took only the tables that had 3 or more rows. Finally we had 101 tables. The total number the of articles and clusters we used are shown in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 223, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluated the obtained tables as follows. For each row in a table, we added a summary of the source articles that were used to extract the relation. Then for each table, an evaluator looks into every row and its source article, and tries to come up with a sentence that explains the relation among its columns. The description should be as specific as possible. If at least half of the rows can fit the explanation, the table is considered \"consistent.\" For each consistent table, the evaluator wrote down the sentence using variable names ($1, $2, ...) to refer to its columns. Finally, we counted the number of consistent tables. We also counted how many rows in each table can fit the explanation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Method", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We evaluated 48 randomly chosen tables. Among these tables, we found that 36 tables were consistent. We also counted the total number of rows that fit each description, shown in Table 3 . Table 4 shows the descriptions of the selected tables. The largest consistent table was about hurricanes (Table 5) . Although we cannot exactly measure the recall of each table, we tried to estimate the recall by comparing this hurricane table to a manually created one (Table 6 ). We found 6 out of 9 hurricanes 3 . It is worth noting that most of these hurricane names were automatically disambiguated although our NE tagger didn't distinguish a hurricane name from a person 8/16 Nominee $2(PER) must be confirmed by $1(ORG). 4/7 $1(PER) urges $2(GPE) to make changes. 4/6 $1(GPE) launched an attack in $2(GPE).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 185, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 302, |
|
"text": "(Table 5)", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 467, |
|
"text": "(Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Rows of Table 4, as extracted (fitted/total row counts and relation descriptions): 8/16; Nominee $2(PER) must be confirmed by $1(ORG); 4/7; $1(PER) urges $2(GPE) to make changes; 4/6; $1(GPE) launched an attack in $2(GPE); 3/5; $1(PER) ran against $2(PER) in an election; 4/5; $2(PER) visited $1(GPE) on a diplomatic mission; 2/4; $2(PER) beat $1(PER) in golf; 4/4; $2(GPE) soldier(s) were killed in $1(GPE); 3/3; $2(PER) ran for governor of $1(GPE); 2/3; Boxer $1(PER) fought boxer $2(PER); 3/3.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "4.2"

},
|
{ |
|
"text": "3/3 Table 7 . We reviewed 10 incorrect rows from various tables and found 4 of them were due to coreference errors and one error was due to a parse error. The other 4 errors were due to multiple basic patterns distant from each other that happened to refer to a different event reported in the same cluster. The causes of the one remaining error was obscure. Most inconsistent tables were a mixture of multiple relations and some of their rows still looked consistent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 11, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We have a couple of open questions. First, the overall recall of our system might be lower than ex-isting IE systems, as we are relying on a cluster of comparable articles rather than a single document to discover an event. We might be able to improve this in the future by adjusting the basic clustering algorithm or weighting schema of basic patterns. Secondly, some combinations of basic patterns looked inherently vague. For example, we used the two basic patterns \"pitched\" and \"'s-series\" in the following sentence (the patterns are underlined): It is not clear whether this set of patterns can yield any meaningful relation. We are not sure how much this sort of table can affect overall IE performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this paper we proposed Preemptive Information Extraction as a new direction of IE research. As its key technique, we presented Unrestricted Relation Discovery that tries to find parallel correspondences between multiple entities in a document, and perform clustering using basic patterns as features. To increase the number of basic patterns, we used a cluster of comparable articles instead of a single document. We presented the implementation of our preliminary system and its outputs. We obtained dozens of usable tables. Table 6 : Hurricanes in North America between mid-Sep. and Nov. (from Wikipedia). Rows with a star (*) were actually extracted. The number of the source articles that contained a mention of the hurricane is shown in the right column. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 529, |
|
"end": 536, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Hurricane Katrina and Longwang shown in the previous examples are not included in this table. They appeared before this period.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported by the National Science Foundation under Grant IIS-00325657. This paper does not necessarily reflect the position of the U.S. Government. We would like to thank Prof. Ralph Grishman who provided useful suggestions and discussions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "More than 2,000 National Guard troops were put on active-duty alert to assist as Rita slammed into the string of islands and headed west, perhaps toward Texas", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "More than 2,000 National Guard troops were put on active-duty alert to assist as Rita slammed into the string of islands and headed west, perhaps toward Texas. ...", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Typhoon Damrey smashed into Vietnam on Tuesday after killing nine people in China", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Typhoon Damrey smashed into Vietnam on Tuesday af- ter killing nine people in China, ...", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Oil markets have been watching Wilma's progress nervously, ... but the threat to energy interests appears to have eased as forecasters predict the storm will turn toward Florida", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "3. Oil markets have been watching Wilma's progress ner- vously, ... but the threat to energy interests appears to have eased as forecasters predict the storm will turn to- ward Florida. ...", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Snowball: Extracting Relations from Large Plaintext Collections", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "References Eugene Agichtein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gravano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 5th ACM International Conference on Digital Libraries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "References Eugene Agichtein and L. Gravano. 2000. Snowball: Ex- tracting Relations from Large Plaintext Collections. In Proceedings of the 5th ACM International Conference on Digital Libraries (DL-00).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Extracting Patterns and Relations from the World Wide Web", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sergey Brin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "WebDB Workshop at EDBT '98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Brin. 1998. Extracting Patterns and Relations from the World Wide Web. In WebDB Workshop at EDBT '98.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of NAACL-2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL-2000.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Discovering relations among named entities from large corpora", |
|
"authors": [ |
|
{ |
|
"first": "Takaaki", |
|
"middle": [], |
|
"last": "Hasegawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Annual Meeting of Association of Computational Linguistics (ACL-04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the Annual Meeting of Association of Computational Linguistics (ACL-04).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Covering Treebanks with GLARF", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michiko", |
|
"middle": [], |
|
"last": "Kosaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "ACL/EACL Workshop on Sharing Tools and Resources for Research and Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Meyers, Ralph Grishman, Michiko Kosaka, and Shubin Zhao. 2001a. Covering Treebanks with GLARF. In ACL/EACL Workshop on Sharing Tools and Resources for Research and Education.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Parsing and GLARFing", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michiko", |
|
"middle": [], |
|
"last": "Kosaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of RANLP-2001", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Meyers, Michiko Kosaka, Satoshi Sekine, Ralph Grishman, and Shubin Zhao. 2001b. Parsing and GLARFing. In Proceedings of RANLP-2001, Tzigov Chark, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning surface text patterns for a question answering system", |
|
"authors": [ |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the As- sociation for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Automatically Generating Extraction Patterns from Untagged Text", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 13th National Conference on Artificial Intelligence (AAAI-96)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff. 1996. Automatically Generating Extrac- tion Patterns from Untagged Text. In Proceedings of the 13th National Conference on Artificial Intelligence (AAAI-96).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "An Improved Extraction Pattern Representation Model for Automatic IE Pattern Acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Kiyoshi", |
|
"middle": [], |
|
"last": "Sudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Annual Meeting of Association of Computational Linguistics (ACL-03)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An Improved Extraction Pattern Representa- tion Model for Automatic IE Pattern Acquisition. In Proceedings of the Annual Meeting of Association of Computational Linguistics (ACL-03).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Unsupervised Discovery of Scenario-Level Patterns for Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silja", |
|
"middle": [], |
|
"last": "Huttunen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING-00)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Unsupervised Discovery of Scenario-Level Patterns for Information Extraction. In Proceedings of the 18th International Conference on Computational Linguistics (COLING-00).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "GLARF structure of the sentence \"Katrina hit Louisiana's coast.\"", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "System overview.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Basic patterns obtained from the sentence \"Katrina hit Louisiana's coast.\"", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Santana pitched 5 1-3 gutsy innings in his postseason debut for the Angels, Adam Kennedy hit a goahead triple that sent Yankees outfielders crashing to the ground, and Los Angeles beat New York 5-3 Monday night in the decisive Game 5 of their AL playoff series.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "for each cluster pair (cA, cB) { XA = cA.entities XB = cB.entities for each entity mapping M = [(xA1, xB1), ..., (xAn, xBn)] \u2208 (2 |X A | \u00d7 2 |X B | ) { for each entity pair (xAi, xBi) { Pi = xAi.patterns \u2229 xBi.patterns pairscorei = p\u2208P i weight(p) < |M | and T 2 < mapscore and T 3 < wordsim(c A .words, c B .words) { link c A and c B with mapping M .", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>}</td><td>}</td><td>} mapscore = if T 1 }</td><td>pairscore i</td></tr><tr><td/><td/><td colspan=\"2\">Figure 6: Computing similarity of basic clusters.</td></tr><tr><td colspan=\"4\">Tables: Consistent tables Inconsistent tables Total Rows: Rows that fit the description 118 (73%) 36 (75%) 12 48 Rows not fitted 43 Total 161</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Evaluation results.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Description Storm $1(PER) probably affected $2(GPE).</td><td>Rows</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Description of obtained tables and the number of fitted/total rows. name. The second largest table (about nominations of officials) is shown in", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Hurricane table (\"Storm $1(PER) probably affected $2(GPE).\") and the actual expressions we used for extraction.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Articles 6 566 83 18 12 368 Oct 22-24 (Haiti, Dominican Rep.) 80 Hurricane Date (Affected Place) Philippe Sep 17-20 (?) * Rita Sep 17-26 (Louisiana, Texas, etc.) * Stan Oct 1-5 (Mexico, Nicaragua, etc.) * Tammy Oct 5-? (Georgia, Alabama) Vince Oct 8-11 (Portugal, Spain) * Wilma Oct 15-25 (Cuba, Honduras, etc.) Alpha * Beta Oct 26-31 (Nicaragua, Honduras) 55 * Gamma Nov 13-20 (Belize, etc.) 36</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |