|
{ |
|
"paper_id": "C10-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:58:59.543503Z" |
|
}, |
|
"title": "Simplicity is Better: Revisiting Single Kernel PPI Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Sung-Pil", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sung-Hyon", |
|
"middle": [], |
|
"last": "Myaeng", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "It has been known that a combination of multiple kernels and addition of various resources are the best options for improving effectiveness of kernel-based PPI extraction methods. These supplements, however, involve extensive kernel adaptation and feature selection processes, which attenuate the original benefits of the kernel methods. This paper shows that we are able to achieve the best performance among the stateof-the-art methods by using only a single kernel, convolution parse tree kernel. In-depth analyses of the kernel reveal that the keys to the improvement are the tree pruning method and consideration of tree kernel decay factors. It is noteworthy that we obtained the performance without having to use any additional features, kernels or corpora.", |
|
"pdf_parse": { |
|
"paper_id": "C10-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "It has been known that a combination of multiple kernels and addition of various resources are the best options for improving effectiveness of kernel-based PPI extraction methods. These supplements, however, involve extensive kernel adaptation and feature selection processes, which attenuate the original benefits of the kernel methods. This paper shows that we are able to achieve the best performance among the stateof-the-art methods by using only a single kernel, convolution parse tree kernel. In-depth analyses of the kernel reveal that the keys to the improvement are the tree pruning method and consideration of tree kernel decay factors. It is noteworthy that we obtained the performance without having to use any additional features, kernels or corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Protein-Protein Interaction (PPI) Extraction refers to an automatic extraction of the interactions between multiple protein names from natural language sentences using linguistic features such as lexical clues and syntactic structures. A sentence may contain multiple protein names and relations, i.e., multiple PPIs. For example, the sentence in Fig.1 contains a total of six protein names of varying word lengths and three explicit interactions (relations). The interaction type between phosphoprotein and the acronym P in the parentheses is \"EQUAL.\" A longer protein name phosphoprotein of vesicular stomatitis virus is related to nucleocapsid protein via \"INTERACT\" relation. Like the first PPI, nuc-leocapsid protein is equivalent to the abbreviated term N.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 352, |
|
"text": "Fig.1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is not straightforward to extract PPIs from a sentence or textual segment. There may be multiple protein names and their relationships, which are intertwined in a sentence. An interaction type may be expressed in a number of different ways. A significant amount of efforts have been devoted to kernel-based approaches to PPI extractions (PPIE) as well as relation extractions 2 (Zhang et al., 2006; Pyysalo et al., 2008; Guo-Dong et al., 2007; Zhang et al., 2008; Miwa et al., 2009) . They include word feature kernels, parse tree kernels, and graph kernels. One of the benefits of using a kernel method is that it can keep the original 1 BioInfer, Sentence ID:BioInfer.d10.s0 2 Relation extraction has been studied massively with the help of the ACE (www.nist.gov/tac) competition workshop and its corpora. The ACE corpora contain valuable information showing the traits of target entities (e.g., entity types, roles) for relation extraction in single sentences. Since all target entities are of the same type, protein name, in PPIE, however, we cannot use relational information that exists among entity types. This makes PPIE more challenging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 401, |
|
"text": "(Zhang et al., 2006;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 423, |
|
"text": "Pyysalo et al., 2008;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 446, |
|
"text": "Guo-Dong et al., 2007;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 466, |
|
"text": "Zhang et al., 2008;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 485, |
|
"text": "Miwa et al., 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "formation of target objects such as parse trees, not requiring extensive feature engineering for learning algorithms (Zelenko et al., 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 139, |
|
"text": "(Zelenko et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In an effort to improve the performance of PPIE, researchers have developed not only new kernels but also methods for combining them (GuoDong et al., 2007; Zhang et al., 2008; Airola et al., 2008; Miwa et al., 2009a; Miwa et al., 2009b) . While the intricate ways of combing various kernels and using extra resources have played the role of establishing strong baseline performance for PPIE, however, they are viewed as another form of engineering efforts. After all, one of the reasons the kernel methods have become popular is to avoid such engineering efforts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 155, |
|
"text": "(GuoDong et al., 2007;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 175, |
|
"text": "Zhang et al., 2008;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 196, |
|
"text": "Airola et al., 2008;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 216, |
|
"text": "Miwa et al., 2009a;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 236, |
|
"text": "Miwa et al., 2009b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Instead, we focus on a state-of-the-art kernel and investigate how it can be best utilized for enhanced performance. We show that even with a single kernel, convolution parse tree kernel in this case, we can achieve superior performance in PPIE by devising an appropriate preprocessing and factor adjustment method. The keys to the improvement are tree pruning and consideration of a tree kernel decay factor, which are independent of the machine learning model used in this paper. The main contribution of our work is the extension and application of the particular convolution tree kernel method for PPIE, which gives a lesson that a deep analysis and a subsequent extension of a kernel for maximal performance can override the gains obtained from engineering additional features or combining other kernels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remaining part of the paper is organized as follows. In section 2, we survey the existing approaches. Section 3 introduces the parse tree kernel model and its algorithm. Section 4 explains the performance improving factors applied to the parse tree kernel. The architecture of our system is introduced in section 5. Section 6 shows the improvements in effectiveness in multiple PPI corpora and finally we conclude our work in section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, numerous studies have attempted to extract PPI automatically from text. Zhou and He (2008) classified various PPIE approaches into three categories: linguistic, rule-based and machine learning and statistical methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 107, |
|
"text": "Zhou and He (2008)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Linguistic approaches involve constructing special grammars capable of syntactically expressing the interactions in sentences and then applying them to the language analyzers such as part-of-speech taggers, chunkers and parsers to extract PPIs. Based on the level of linguistic analyses, we can divide the linguistic approaches into two categories: shallow parsing (Sekimizu et al., 1998; Gondy et al., 2003) and full parsing methods (Temkin & Gilder, 2003; Nikolai et al., 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 388, |
|
"text": "(Sekimizu et al., 1998;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 408, |
|
"text": "Gondy et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 457, |
|
"text": "(Temkin & Gilder, 2003;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 479, |
|
"text": "Nikolai et al., 2004)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Rule-based approaches use manually defined sets of lexical patterns and find text segments that match the patterns. Blaschke et al. (1996) built a set of lexical rules based on clue words denoting interactions. Ono et al. (2001) defined a group of lexical and syntactic interaction patterns, embracing negative expressions, and applied them to extract PPIs from documents about \"Saccharomyces cerevisiae\" and \"Escherichia coli\". Recently, Fundel et al. (2007) proposed a PPI extraction model based on more systematic rules using a dependency parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 138, |
|
"text": "Blaschke et al. (1996)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 228, |
|
"text": "Ono et al. (2001)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "Fundel et al. (2007)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Machine learning and statistical approaches have been around for a while but have recently become a dominant approach for PPI extraction. These methods involve building supervised or semi-supervised models based on training sets and various feature extraction methods (Andrade & Valencia, 1998; Marcotte et al., 2001; Craven & Kumlien, 1999) . Among them, kernel-based methods have been studied extensively in recent years. Airola et al. (2008) attempted to extract PPIs using a graph kernel by converting dependency parse trees into the corresponding dependency graphs. Miwa et al. (2009a) utilized multiple kernels such as word feature kernels, parse tree kernels, and even graph kernels in order to improve the performance of PPI extraction. Their experiments based on five PPI corpora, however, showed that combining multiple kernels gave only minor improvements compared to other methods. To further improve the performance of the multiple kernel system, the same group combined multiple corpora to exploit additional features for a modified SVM model (Miwa et al., 2009b) . While they achieved the best performance in PPI extraction, it was possible only with additional kernels and corpora from which additional features were extracted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 294, |
|
"text": "(Andrade & Valencia, 1998;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 317, |
|
"text": "Marcotte et al., 2001;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 341, |
|
"text": "Craven & Kumlien, 1999)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 444, |
|
"text": "Airola et al. (2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 590, |
|
"text": "Miwa et al. (2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1077, |
|
"text": "(Miwa et al., 2009b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Unlike the aforementioned approaches trying to use all possible resources for performance enhancement, this paper aims at maximizing the performance of PPIE using only a single kernel without any additional resources. Without lowering the performance, we attempt to stick to the initial benefits of the kernel methods: simplicity and modularity (Shawe-Taylor & Cristianini, 2004).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The main idea of a convolution parse tree kernel is to sever a parse tree into its sub-trees and transfer it as a point in a vector space in which each axis denotes a particular sub-tree in the entire set of parse trees. If this set contains M unique sub-trees, the vector space becomes Mdimensional. The similarity between two parse trees can be obtained by computing the inner product of the two corresponding vectors, which is the output of the parse tree kernel. There are two types of parse tree kernels of different forms of sub-trees: one is SubTree Kernel (STK) proposed by Vishwanathan and Smola (2003) , and the other is SubSet Tree Kernel (SSTK) developed by Collins and Duffy (2001) . In STK, each sub-tree should be a complete tree rooted by a specific node in the entire tree and ended with leaf nodes. All the sub-trees must obey the production rules of the syntactic grammar. Meanwhile, SSTK can have any forms of sub-trees in the entire parse tree given that they should obey the production rules. It was shown that SSTK is much superior to STK in many tasks (Moschitti, 2006) . He also introduced a fast algorithm for computing a parse tree kernel and showed its beneficial effects on the semantic role labeling problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 611, |
|
"text": "Vishwanathan and Smola (2003)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 670, |
|
"end": 694, |
|
"text": "Collins and Duffy (2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1076, |
|
"end": 1093, |
|
"text": "(Moschitti, 2006)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Convolution Parse Tree Kernel Model for PPIE", |
|
"sec_num": "3" |
|
}, |
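A minimal illustrative sketch of this sub-tree vector view, with an invented inventory of M = 5 sub-trees and invented counts; a real system induces this feature space implicitly through the kernel rather than enumerating it.

from collections import Counter

# Toy inventory of M = 5 unique sub-trees observed across the whole tree set.
SUBTREES = ["S1", "S2", "S3", "S4", "S5"]

def subtree_vector(counts):
    # Map a parse tree, given as {sub-tree id: count}, to a point in the
    # M-dimensional space whose axes are the unique sub-trees.
    return [counts.get(s, 0) for s in SUBTREES]

def tree_kernel(counts1, counts2):
    # Kernel value = inner product of the two sub-tree count vectors.
    v1, v2 = subtree_vector(counts1), subtree_vector(counts2)
    return sum(a * b for a, b in zip(v1, v2))

# Hypothetical sub-tree counts for two parse trees T1 and T2.
t1 = Counter({"S1": 1, "S2": 2, "S3": 1, "S4": 1, "S5": 1})
t2 = Counter({"S1": 1, "S2": 1, "S3": 2, "S5": 1})
print(tree_kernel(t1, t2))  # 1*1 + 2*1 + 1*2 + 0 + 1*1 = 6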
|
{ |
|
"text": "A parse tree kernel can be computed by the following equation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Convolution Parse Tree Kernel Model for PPIE", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(1) where T i is i th parse tree and n 1 and n 2 are nodes in N T , the set of the entire nodes of T. \u03bb represents a tree kernel decay factor, which will be explained later, and \u03c3 decides the way the tree is severed. Finally \u0394(n 1 , n 2 , \u03bb, \u03c3) counts the number of the common sub-trees of the two parse trees rooted by n 1 and n 2 . Figure 2 shows the algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 342, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Convolution Parse Tree Kernel Model for PPIE", |
|
"sec_num": "3" |
|
}, |
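For reference, the recursion computed by the algorithm in Figure 2 can be written, in the standard Collins-Duffy/Moschitti formulation that this description appears to follow (our reading; treating σ as the STK/SSTK switch is an assumption here), as:

\Delta(n_1, n_2, \lambda, \sigma) =
\begin{cases}
0 & \text{if the production rules at } n_1 \text{ and } n_2 \text{ differ,} \\
\lambda & \text{if they are equal and } n_1, n_2 \text{ are pre-terminals,} \\
\lambda \prod_{j=1}^{nc(n_1)} \bigl(\sigma + \Delta(c_j(n_1), c_j(n_2), \lambda, \sigma)\bigr) & \text{otherwise,}
\end{cases}

where nc(n) is the number of children of n and c_j(n) is its j-th child; σ = 0 yields STK and σ = 1 yields SSTK.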
|
{ |
|
"text": "In this algorithm, the get_children_number function returns the number of the direct child nodes of the current node in a tree. The function named get_node_value gives the value of a node such as part-of-speeches, phrase tags and words. The get_production_rule function finds the grammatical rule of the current node and its children by inspecting their relationship. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Convolution Parse Tree Kernel Model for PPIE", |
|
"sec_num": "3" |
|
}, |
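A minimal Python sketch of how this recursion could be realized with the helper functions just described. It follows the standard Collins-Duffy/Moschitti recursion rather than the authors' actual Figure 2 code, and the Node class, the default lam = 0.4, and sigma = 1 (SSTK) are illustrative assumptions.

class Node:
    # Hypothetical parse-tree node: a label plus an ordered list of children.
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def get_children_number(node):
    return len(node.children)

def get_node_value(node):
    return node.label  # a POS tag, phrase tag, or word

def get_production_rule(node):
    # The grammar rule expanding this node into its children, e.g. NP -> DT NN.
    return (node.label, tuple(c.label for c in node.children))

def is_preterminal(node):
    return get_children_number(node) > 0 and all(
        get_children_number(c) == 0 for c in node.children)

def delta(n1, n2, lam=0.4, sigma=1):
    # Number of common sub-trees rooted at n1 and n2, damped by the decay factor.
    if get_production_rule(n1) != get_production_rule(n2):
        return 0.0
    if is_preterminal(n1):
        return lam
    prod = 1.0
    # Children can be zipped pairwise because the two productions match.
    for c1, c2 in zip(n1.children, n2.children):
        prod *= sigma + delta(c1, c2, lam, sigma)
    return lam * prod

def internal_nodes(tree):
    # All non-leaf nodes; bare words are not counted as roots of fragments.
    stack, out = [tree], []
    while stack:
        n = stack.pop()
        if n.children:
            out.append(n)
            stack.extend(n.children)
    return out

def tree_kernel(t1, t2, lam=0.4, sigma=1):
    # K(T1, T2) = sum of delta over all node pairs, as in equation (1).
    return sum(delta(n1, n2, lam, sigma)
               for n1 in internal_nodes(t1) for n2 in internal_nodes(t2))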
|
{ |
|
"text": "Tree pruning for relation extraction was firstly introduced by Zhang et al. (2006) and also referred to as \"tree shrinking task\" for removing less related contexts. They suggested five types of the pruning methods and later invented two more in Zhang et al. (2008) . Among them, the path-enclosed tree (PT) method was shown to give the best result in the relation extraction task based on ACE corpus. We opted for this pruning method in our work. Figure 3 shows how the PT method prunes a tree. To focus on the pivotal context, it preserves only the syntactic structure encompassing the two proteins at hand and the words in between them (the part enclosed by the dotted lines). Without pruning, all the words like addition, increased and activity would intricately participate in deciding the interaction type of this sentence. Another important effect of the tree pruning is its ability to separate features when two or more interactions exist in a sentence. As in Figure 1, each interaction involves its unique context even though a sentence has multiple interactions. With tree pruning, it is likely to extract context-sensitive features by ignoring external features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 82, |
|
"text": "Zhang et al. (2006)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 264, |
|
"text": "Zhang et al. (2008)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 455, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 967, |
|
"end": 973, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Pruning Methods", |
|
"sec_num": "4.1" |
|
}, |
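A simplified sketch of the path-enclosed tree idea: given the leaf positions of the two candidate proteins, drop every sub-tree lying entirely outside the enclosed span and then descend to the smallest sub-tree that still covers it. The tuple-based tree encoding and the example sentence are invented for illustration, and this is a simplified reading of the PT method of Zhang et al. (2006), not their exact procedure.

# A tree is (label, children); a leaf is (word, []).
def prune_path_enclosed(tree, start, end):
    # Keep only structure covering leaf positions start..end (inclusive):
    # the two protein mentions and the words between them.
    def recurse(node, offset):
        label, children = node
        if not children:
            return (node if start <= offset <= end else None), 1
        kept, width = [], 0
        for child in children:
            sub, w = recurse(child, offset + width)
            width += w
            if sub is not None:
                kept.append(sub)
        return ((label, kept) if kept else None), width
    pruned, _ = recurse(tree, 0)
    # Descend the single-child spine to the smallest sub-tree spanning the pair.
    while pruned and len(pruned[1]) == 1 and pruned[1][0][1]:
        pruned = pruned[1][0]
    return pruned

sentence = ("S", [("NP", [("PROT1", [])]),
                  ("VP", [("V", [("binds", [])]),
                          ("NP", [("PROT2", [])]),
                          ("PP", [("in", []), ("addition", [])])])])
print(prune_path_enclosed(sentence, 0, 2))  # the PP outside the protein pair is removed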
|
{ |
|
"text": "Collins and Duffy (2001) addressed two problems of the parse tree kernel. The first one is that its kernel value tends to be largely dominated by the size of two input trees. If they are large in size, it is highly probable for the kernel to accumulate a large number of overlapping counts in computing their similarity. Secondly, the kernel value of two identical parse trees can become overly large while the value of two different parse trees is much tiny in general. These two aspects can cause a trouble during a training phase because pairs of large parse trees that are similar to each other are disproportionately dominant. Consequently, the resulting models could act like nearest neighbor models (Collins and Duffy, 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 706, |
|
"end": 731, |
|
"text": "(Collins and Duffy, 2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Kernel Decay Factor", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To alleviate the problems, Collins and Duffy (2001) introduced a scalability parameter called decay factor, 0 < \u03bb \u2264 1 which scales the relative importance of tree fragments with their sizes as in line 33 of Fig. 2 . Based on the algorithm, a decay factor decreases the degree of contribution of a large sub-tree exponentially in kernel computation. Figure 4 illustrates both the way a tree kernel is computed and the effect of a decay factor. In the figure, T 1 and T 2 share four common sub-trees (S 1 , S 2 , S 3 , S 5 ). Let us assume that there are only two trees in a training set and only five unique sub-trees exist. Then each tree can be expressed by a vector whose elements are the number of particular sub-trees. Kernel value is obtained by computing the inner product of the two vectors. As shown in the figure, S 1 is a large sub-sub-trees, S 1 , S 2 S 3 , and S 4 , two of which (S 2 , and S 3 ) are duplicated in the inner product computation. It is highly probable for large sub-trees to contain many smaller subtrees, which lead to an over-estimated similarity value between two parse trees. As mentioned above, therefore, it is necessary to rein those large sub-trees with respect to their sizes in computing kernel values by using decay factors. In this paper, we treat the decay factor as one of the important optimization parameters for a PPI extraction task. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 213, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 357, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Kernel Decay Factor", |
|
"sec_num": "4.2" |
|
}, |
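To make the over-counting argument concrete, the toy computation below weights each shared fragment by lambda raised to its size (number of production rules), which is how the decay factor damps large fragments; the fragment sizes are invented and do not come from Figure 4.

# Hypothetical sizes (number of productions) of the fragments shared by T1 and T2.
shared_fragment_sizes = {"S1": 4, "S2": 1, "S3": 1, "S5": 2}

def decayed_kernel(sizes, lam):
    # Each common fragment contributes lam**size, so large fragments
    # are damped exponentially as lam decreases.
    return sum(lam ** s for s in sizes.values())

for lam in (1.0, 0.6, 0.2):
    print(lam, round(decayed_kernel(shared_fragment_sizes, lam), 3))
# lam = 1.0 counts every fragment fully; lam = 0.2 makes the large fragment S1 nearly negligible.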
|
{ |
|
"text": "In order to show the superiority of the simple kernel based method using the two factors used in this paper, compared to the resent results for PPIE using additional resources, we ran a series of experiments using the same PPI corpora cited in the literature. In addition, we show that the method is robust especially for cross-corpus experiments where a classifier is trained and tested with entirely different corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To evaluate our approach for PPIE, we used \"Five PPI Corpora 3 \" organized by Pyysalo et al. (2008) . It contains five different PPI corpora: AImed, BioInfer, HPRD50, IEPA and LLL. They have been combined in a unified XML format and \"binarized\" in case of involving multiple interaction types. Table 1 shows the size of each corpus in \"Five PPI Corpora.\" As mentioned before, a sentence can have multiple interactions, which results in the gaps between the number of sentences and the sum of the number of instances. Negative instances have been automatically generated by enumerating sentences with multiple proteins but not having interactions between them (Pyysalo et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 99, |
|
"text": "Pyysalo et al. (2008)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 681, |
|
"text": "(Pyysalo et al., 2008)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 301, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Corpora", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to parse each sentence, we used Charniak Parser 4 . For kernel-based learning, we expanded the original libsvm 2.89 5 (Chang & Lin, 2001 ) so that it has two additional kernels including parse tree kernel and composite kernel 6 along with four built-in kernels 7 Our experiment uses both macro-averaged and micro-averaged F-scores. Macro-averaging 3 http://mars.cs.utu.fi/PPICorpora/eval-standard.html 4 http://www.cs.brown.edu/people/ec/#software 5 http://www.csie.ntu.edu.tw/~cjlin/libsvm/ 6 A kernel combining built-in kernels and parse tree kernel 7 Linear, polynomial, radial basis function, sigmoid kernels computes F-scores for all the classes individually and takes average of the scores. On the other hand, micro-averaging enumerates both positive results and negative results on the whole without considering the score of each class and computes total F-score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 145, |
|
"text": "(Chang & Lin, 2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 271, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Settings", |
|
"sec_num": "5.2" |
|
}, |
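As a concrete reference for the two averaging schemes, the helper below computes both from per-class true-positive/false-positive/false-negative counts. It is a generic sketch of the standard definitions, not the paper's evaluation script, and the example counts are invented.

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(per_class_counts):
    # Compute F1 for each class separately, then average the class scores.
    return sum(f1(*c) for c in per_class_counts) / len(per_class_counts)

def micro_f1(per_class_counts):
    # Pool the counts over all classes first, then compute a single F1.
    tp, fp, fn = (sum(c[i] for c in per_class_counts) for i in range(3))
    return f1(tp, fp, fn)

# Invented counts (tp, fp, fn) for two classes, e.g. "interaction" vs. "no interaction".
counts = [(80, 20, 30), (400, 30, 20)]
print(round(macro_f1(counts), 3), round(micro_f1(counts), 3))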
|
{ |
|
"text": "In 10-fold cross validation, we apply the same split used in Airola et al., (2008) , Miwa et al., (2009a) and Miwa et al., (2009b) for comparisons. Also, we empirically estimate the regularization parameters of SVM (C-values) by conducting 10-fold cross validation on each training data. We do not adjust the SVM thresholds to the optimal value as in Airola et al., (2008) and Miwa et al., (2009a) . Table 2 shows the best scores of our system. The optimal decay factor varies with each corpus. In LLL, the optimal decay factor is 0.2 8 indicating that the shortage of data has forced our system to normalize parse trees more intensively with a strong decay factor in kernel computation in order to cover various syntactic structures. Miwa et al., (2009a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 82, |
|
"text": "Airola et al., (2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 105, |
|
"text": "Miwa et al., (2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 130, |
|
"text": "Miwa et al., (2009b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 372, |
|
"text": "Airola et al., (2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 397, |
|
"text": "Miwa et al., (2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 755, |
|
"text": "Miwa et al., (2009a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 407, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Settings", |
|
"sec_num": "5.2" |
|
}, |
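A minimal sketch of the kind of parameter search described above, assuming a precomputed kernel matrix K and a label array y. It uses scikit-learn's SVC with kernel="precomputed" purely for illustration; the paper itself used an extended libsvm 2.89, and the candidate C values are arbitrary.

import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def select_C(K, y, candidate_Cs=(0.1, 1.0, 10.0), n_splits=10):
    # Pick the SVM regularization parameter C by 10-fold CV on the training data,
    # slicing the precomputed kernel matrix into train/test blocks by hand.
    best_C, best_score = None, -1.0
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for C in candidate_Cs:
        scores = []
        for train, test in folds.split(K):
            clf = SVC(kernel="precomputed", C=C)
            clf.fit(K[np.ix_(train, train)], y[train])   # train-vs-train kernel block
            pred = clf.predict(K[np.ix_(test, train)])   # test-vs-train kernel block
            scores.append(f1_score(y[test], pred, average="macro"))
        if np.mean(scores) > best_score:
            best_C, best_score = C, float(np.mean(scores))
    return best_C, best_score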
|
{ |
|
"text": "Our system outperforms the previous results as in Table 2 . Even using rich feature vectors including Bag-Of-Words and shortest path trees generated from multiple corpora, Miwa et al., (2009b) reported 64.0% and 66.7% in AIMed and BioInfer, respectively. Our system, however, produced 67.0% in AIMed and 72.6% in BioInfer with a single parse tree kernel. We did not have to perform any intensive feature generation tasks using various linguistic analyzers and more importantly, did not use any additional corpora for training as done in Miwa et al., (2009b) . While the performance differences are not very big, we argue that obtaining higher performance values is significant because the proposed system did not use any of the additional efforts and resources.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 192, |
|
"text": "Miwa et al., (2009b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 557, |
|
"text": "Miwa et al., (2009b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 57, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To investigate the effect of the scaling parameter of the parse tree kernel in PPI extraction, we measure how the performance changes as the decay factor varies ( Figure 5 ). It is obvious that the decay factor influences the overall performance of PPI extraction. Especially, the Fscores of the small-scale corpora such as HPRD50 and LLL are influenced by the decay factor. The gaps between the best and worst scores in LLL and HPRD50 are 19.1% and 5.2%, respectively. The fluctuation in F-scores of the large-scale corpora (AIMed, BioInfer, IEPA) is not so extreme, which seems to stem from the abundance in syntactic and lexical forms that reduce the normalizing effect of the decay factor. The increase in the decay factor leads to the increase in the precision values of all the corpora except for LLL. The phenomenon is fairly plausible because the decreased normalization power causes the system to compute the tree similarities more intensively and therefore it classifies each instance in a strict and detailed manner. On the contrary, the recall values slightly decrease with respect to the decay factor, which indicates that the tree pruning (PT) has already conducted the normalization process to reduce the sparseness problem in each corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 171, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Most importantly, along with tree pruning, decay factor could boost the performance of our system by controlling the rigidness of the parse tree kernel in PPI extraction. Table 3 shows the results of the cross-corpus evaluation to measure the generalization power of our system as conducted in Airola et al., (2008) and Miwa et al., (2009a) . Miwa et al., (2009b) executed a set of combinatorial experiments by mixing multiple corpora and pre-sented their results. Therefore, it is not reasonable to compare our results with them due to the size discrepancy between training corpora. Nevertheless, we will compare our results with their approaches in later based on AIMed corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 315, |
|
"text": "Airola et al., (2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 340, |
|
"text": "Miwa et al., (2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 363, |
|
"text": "Miwa et al., (2009b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 178, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "As seen in Table 3 , our system outperforms the existing approaches in almost all pairs of corpora. In particular, in the multiple corporabased evaluations aimed at AIMed which has been frequently used as a standard set in PPI extraction, our approach shows prominent results compared with others. While other approaches showed the performance ranging from 33.3% to 60.8%, our approach achieved much higher scores between 55.9% and 67.0%. More specific observations are:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 18, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(1) Our PPIE method trained on any corpus except for IEPA outperforms the other approaches regardless of the test corpus only with a few exceptions with IEPA and LLL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(2) Even when using LLL or HPRD50, two smallest corpora, as training sets, our system performs well with every other corpus for testing. It indicates that our approach is much less vulnerable to the sizes of training corpora than other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(3) The degree of score fluctuation of our system across different testing corpora is much smaller than other regardless of the training data set. When trained on LLL, for example, the range for our system (55.9% ~ 82.1%) is smaller than the others (38.6% ~ 83.2% and 33.3% ~ 76.8%). (4) The cross-corpus evaluation reveals that our method outperforms the others significantly. This is more visibly shown especially when the large-scale corpora (AIMed and BioInfer) are used. (5) PPI extraction model trained on AIMed shows lower scores in IEPA and LLL as compared with other methods, which could trigger further investigation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In order to convince ourselves further the superiority of the proposed method, we compare it with other previously reported approaches. Table 4 lists the macro-averaged precision, recall and F-scores of the nine approaches tested on AIMed. While the experimental settings are different as reported in the literature, they are quite close in terms of the numbers of positive and negative documents.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 143, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "As seen in the table, the proposed method is superior to all the others in F-scores. The improvement in precision (12.8%) is most significant, especially in comparison with the work of Miwa et al., (2009b) , which used multiple corpora (AIMed + IEPA) for training and combined various kernels such as bag-of-words, parse trees and graphs. It is natural that the recall value is lower since a less number of patterns (features) must have been learned. What's important is that the proposed method has a higher or at least comparable overall performance without additional resources.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 205, |
|
"text": "Miwa et al., (2009b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our approach is significantly better than that of Airola et al., (2008) , which employed two different forms of graph kernels to improve the initial model. Since they did not use multiple corpora for training, the comparison shows the direct benefit of using the extension of the kernel.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "Airola et al., (2008)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PPI Extraction Performance", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To improve the performance of PPIE, recent research activities have had a tendency of increasing the complexity of the systems by combining various methods and resources. In this paper, however, we argue that by paying more (Miwa et al., 2009a) 60.8 53.1 68.3 68.1 73.5 (Airola et al., 2008) 56.4 47.1 69.0 67.4 74.5", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 244, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 291, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Works", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our System 65.2 72.6 71.9 72.9 78.4 (Miwa et al., 2009a) 49.6 68.1 68.3 71.4 76.9 (Airola et al., 2008) 47.2 61.3 63.9 68.0 78.0", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 56, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 103, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BioInfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our System 63.1 65.5 73.1 69.3 73.7 (Miwa et al., 2009a) 43.9 48.6 70.9 67.8 72.2 (Airola et al., 2008) 42 (Miwa et al., 2009a) 40.4 55.8 66.5 71.7 83.2 (Airola et al., 2008) 39.1 51.7 67.5 75.1 77.6 LLL Our System 55.9 64.4 69.4 71.4 82.1 (Miwa et al., 2009a) 38.6 48.9 64.0 65.6 83.2 (Airola et al., 2008) 33.3 42.5 59.8 64.9 76.8 Table 3 . Macro-averaged F1 scores in cross-corpora evaluation. Rows and columns correspond to the training and test corpora, respectively. We parallel our results with other recently reported results. All the split methods in 10-fold CV are the same for fair comparisons. attention to a single model and adjusting parameters more carefully, we can obtain at least comparable performance if not better. This paper indicates that a well-tuned parse tree kernel based on decay factor can achieve the superior performance in PPIE when it is preprocessed by the path-enclosed tree pruning method. It was shown in a series of experiments that our system produced the best scores in single corpus evaluation as well as cross-corpora validation in comparison with other state-ofthe-art methods. Contribution points of this paper are as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 56, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 103, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 174, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 260, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 307, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 340, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) We have shown that the benefits of using additional resources including richer features can be obtained by tuning a single tree kernel method with tree pruning and decaying factors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) We have newly found that the decay factor influences precision enhancement of PPIE and hence its overall performance as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) We have also revealed that the parse tree kernel method equipped with decay factors shows superior generalization power even with small corpora while presenting significant performance increase on cross-corpora experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a future study, we leave experiments with training the classifier with multiple corpora and deeper analysis of what aspects of the corpora gave different magnitudes of the improvements. Collins, M. & Duffy, N. (2001) . Convolution Kernels for Natural Language. NIPS-2001, (pp. 625-632) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 219, |
|
"text": "Collins, M. & Duffy, N. (2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 288, |
|
"text": "NIPS-2001, (pp. 625-632)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Craven, M. & Kumlien, J. (1999). Constructing biological knowledge bases by extracting information from text sources. Proceedings of the 7th International conference on intelligent systems for molecular biology, (pp.77-86), Heidelberg, Germany.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ding, J., Berleant, D., Nettleton, D. & Wurtele, E. (2002) . Mining MEDLINE: abstracts, sentences, or phrases?. Proceedings of PSB'02, Erkan, G., Ozgur, A., & Radev, D. R., (2007) . Semisupervised classification for extracting protein in-POS NEG ma-P ma-R ma-F \u03c3 F Our System 1,000 4,834 72.8 62.1 67.0 4.5 (Miwa et al., 2009b) 1,000 4,834 60.0 71.9 65.2 (Miwa et al., 2009a) 1,000 4,834 58.7 66.1 61.9 7.4 (Miwa et al., 2008) 1,005 4,643 60.4 69.3 61.5 (Miyao et al., 2008) 1,059 4,589 54.9 65.5 59.5 (Giuliano et al., 2006) --60.9 57.2 59.0 (Airola et al., 2008) 1,000 4,834 52.9 61.8 56.4 5.0 (Sae tre et al., 2007) 1,068 4,563 64.3 44.1 52.0 (Erkan et al., 2007) 951 4,020 59.6 60.7 60.0 (Bunescu & Mooney, 2005) --65.0 46.4 54. 2 Table 4 . Comparative results in AIMed. The number of positive instances (POS) and negative instances (NEG) and macro-averaged precision (ma-P), recall (ma-R) and F1-score (ma-F) are shown.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 58, |
|
"text": "Berleant, D., Nettleton, D. & Wurtele, E. (2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "PSB'02,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 179, |
|
"text": "Ozgur, A., & Radev, D. R., (2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 327, |
|
"text": "(Miwa et al., 2009b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 375, |
|
"text": "(Miwa et al., 2009a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 426, |
|
"text": "(Miwa et al., 2008)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 474, |
|
"text": "(Miyao et al., 2008)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 525, |
|
"text": "(Giuliano et al., 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 564, |
|
"text": "(Airola et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 618, |
|
"text": "(Sae tre et al., 2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 666, |
|
"text": "(Erkan et al., 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 716, |
|
"text": "(Bunescu & Mooney, 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 733, |
|
"end": 744, |
|
"text": "2 Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HPRD50", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We want to thank the anonymous reviewers for their valuable comments. This work has been supported in part by KRCF Grant, the Korean government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bjorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Pahikkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "9", |
|
"issue": "S2", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1186/1471-2105-9-S11-S2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Airola, A., Pyysalo, S., Bjorne, J., Pahikkala, T., Ginter, F. & Salakoski, T. (2008). All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC Bioinformatics, 9(S2), doi:10.1186/1471-2105-9- S11-S2.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic extraction of keywords from scientific text: application to the knowledge domain of protein families", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Andrade", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Valencia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Bioinformatics", |
|
"volume": "14", |
|
"issue": "7", |
|
"pages": "600--607", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrade, M. A. & Valencia, A. (1998). Automatic extraction of keywords from scientific text: appli- cation to the knowledge domain of protein fami- lies. Bioinformatics, 14(7), 600-607.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic extraction of biological information from scientific text: protein-protein interactions", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Blaschke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Andrade", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Ouzounis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Valencia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. Int. Conf. Intell. Syst. Mol. Biol", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "60--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blaschke, C., Andrade, M., Ouzounis, C. & Valencia, A. (1999). Automatic extraction of biological in- formation from scientific text: protein-protein in- teractions. Proc. Int. Conf. Intell. Syst. Mol. Biol., (pp. 60-67).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Comparative Experiments on Learning Information Extractors for Proteins and their Interactions", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kate", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Marcotte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Artif. Intell. Med., Summarization and Information Extraction from Medical Documents", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "139--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bunescu, R., Ge, R., Kate, R., Marcotte, E., Mooney, R., Ramani, A. & Wong, Y. (2005). Comparative Experiments on Learning Information Extractors for Proteins and their Interactions. Artif. Intell. Med., Summarization and Information Extraction from Medical Documents, 33, 139-155", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "teraction sentences using dependency parsing", |
|
"authors": [], |
|
"year": 2007, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "teraction sentences using dependency parsing. In EMNLP 2007.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "RelEx -Relation extraction using dependency parse trees", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Fundel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "K\u00fcffner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Zimmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Bioinformatics", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "365--371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fundel, K., K\u00fcffner, R. & Zimmer, R. (2007). RelEx -Relation extraction using dependency parse trees. Bioinformatics, 23, 365-371.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Exploiting Shallow Linguistic Information for Relation Extraction From Biomedical Literature", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Giuliano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Romano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Giuliano, C., Lavelli, A., Romano, L., (2006). Ex- ploiting Shallow Linguistic Information for Rela- tion Extraction From Biomedical Literature. Pro- ceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A shallow parser based on closed-class words to capture relations in biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Gondy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Hsinchun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Martinez Jesse", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "J. Biomed. Informatics", |
|
"volume": "36", |
|
"issue": "3", |
|
"pages": "145--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gondy, L., Hsinchun C. & Martinez Jesse D. (2003). A shallow parser based on closed-class words to capture relations in biomedical text. J. Biomed. Informatics. 36(3), 145-158.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Tree Kernel-based Relation Extraction with Context-Sensitive Structured Parse Tree Information", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Guodong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Qiaoming", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "728--736", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GuoDong, Z., Min, Z., Dong, H. J. & QiaoMing, Z. (2007). Tree Kernel-based Relation Extraction with Context-Sensitive Structured Parse Tree In- formation. Proceedings of the 2007 Joint Confe- rence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, (pp. 728-736)", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mining literature for protein-protein interactions", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Marcotte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Xenarios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Eisenberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Bioinformatics", |
|
"volume": "17", |
|
"issue": "4", |
|
"pages": "359--363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcotte, E. M., Xenarios, I. & Eisenberg D. (2001). Mining literature for protein-protein interactions. Bioinformatics, 17(4), 359-363.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Protein-protein interaction extraction by leveraging multiple kernels and parsers", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sae Tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Journal of Medical Informatics", |
|
"volume": "78", |
|
"issue": "12", |
|
"pages": "39--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miwa, M., Sae tre, R., Miyao, Y. & Tsujii J. (2009a). Protein-protein interaction extraction by leverag- ing multiple kernels and parsers. International Journal of Medical Informatics, 78(12), e39-e46.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Rich Feature Vector for Protein-Protein Interaction Extraction from Multiple Corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sae Tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miwa, M., Sae tre, R., Miyao, Y. & Tsujii J. (2009b). A Rich Feature Vector for Protein-Protein Inte- raction Extraction from Multiple Corpora. Pro- ceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, (pp. 121-130)", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Combining multiple layers of syntactic information for protein-protein interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sae Tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Third International Symposium on Semantic Mining in Biomedicine (SMBM 2008)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miwa, M., Sae tre, R., Miyao, Y., Ohta, T., & Tsujii, J. (2008). Combining multiple layers of syntactic information for protein-protein interaction extrac- tion. In Proceedings of the Third International Symposium on Semantic Mining in Biomedicine (SMBM 2008), (pp. 101-108)", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Task-oriented evaluation of syntactic parsers and their representations", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sae Tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 45th Meeting of the Association for Computational Linguistics (ACL'08:HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miyao, Y., Sae tre, R., Sagae, K., Matsuzaki, T., & Tsujii, J. (2008). Task-oriented evaluation of syn- tactic parsers and their representations. Proceed- ings of the 45th Meeting of the Association for Computational Linguistics (ACL'08:HLT).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Making tree kernels practical for natural language learning", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of EACL'06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moschitti, A. (2006). Making tree kernels practical for natural language learning. Proceedings of EACL'06, Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Extracting human protein interactions from MEDLINE using a full-sentence parser", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Nikolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Anton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Sergei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Svetalana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Alexander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Llya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Bioinformatics", |
|
"volume": "20", |
|
"issue": "5", |
|
"pages": "604--611", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolai, D., Anton, Y., Sergei, E., Svetalana, N., Alexander, N. & llya, M. (2004). Extracting hu- man protein interactions from MEDLINE using a full-sentence parser. Bioinformatics, 20(5), 604- 611.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automated extraction of information on protein-protein interactions from the biological literature", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ono", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hishigaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Tanigam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Takagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Bioinformatics", |
|
"volume": "17", |
|
"issue": "2", |
|
"pages": "155--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ono, T., Hishigaki, H., Tanigam, A. & Takagi, T. (2001). Automated extraction of information on protein-protein interactions from the biological li- terature. Bioinformatics, 17(2), 155-161.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Comparative analysis of five protein-protein interaction corpora", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Heimonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "9", |
|
"issue": "S6", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1186/1471-2105-9-S3-S6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pyysalo, S., Airola, A., Heimonen, J., Bj\u00f6rne, J., Ginter, F. & Salakoski, T. (2008). Comparative analysis of five protein-protein interaction corpo- ra. BMC Bioinformatics, 9(S6), doi:10.1186/1471-2105-9-S3-S6.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Syntactic features for protein-protein interaction extraction", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sae Tre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "LBM 2007 short papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sae tre, R., Sagae, K., & Tsujii, J. (2007). Syntactic features for protein-protein interaction extraction. In LBM 2007 short papers.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Identifying the interaction between genes and gene products based on frequently seen verbs in MEDLINE abstracts", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Sekimizu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Workshop on genome informatics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "62--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sekimizu, T., Park H. S. & Tsujii J. (1998). Identify- ing the interaction between genes and gene prod- ucts based on frequently seen verbs in MEDLINE abstracts. Workshop on genome informatics, vol. 9, (pp. 62-71).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Kernel Methods for Pattern Analysis", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Shawe-Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Cristianini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shawe-Taylor, J., Cristianini, N., (2004). Kernel Methods for Pattern Analysis, Cambridge Univer- sity Press.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Extraction of protein interaction information from unstructured text using a context-free grammar", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Temkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gilder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Bioinformatics", |
|
"volume": "19", |
|
"issue": "16", |
|
"pages": "2046--2053", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Temkin, J. M. & Gilder, M. R. (2003). Extraction of protein interaction information from unstructured text using a context-free grammar. Bioinformatics, 19(16), 2046-2053.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Fast Kernels for String and Tree Matching", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"V N" |
|
], |
|
"last": "Vishwanathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Smola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "569--576", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vishwanathan, S. V. N., Smola, A. J. (2003). Fast Kernels for String and Tree Matching. Advances in Neural Information Processing Systems, 15, 569-576, MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Exploring syntactic structured features over parse trees for relation extraction using kernel methods", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Guodong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Aiti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "formation Processing and Management", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "687--701", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, M., GuoDong, Z. & Aiti, A. (2008). Explor- ing syntactic structured features over parse trees for relation extraction using kernel methods. In- formation Processing and Management, 44, 687- 701", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features. 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "825--832", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, M., Zhang, J., Su, J. & Zhou, G. (2006). A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features. 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, (pp.825-832).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Extracting interactions between proteins from the literature", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "393--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou, D. & He, Y. (2008). Extracting interactions between proteins from the literature. Journal of Biomedical Informatics, 41, 393-407.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "An example sentence containing multiple PPIs involving different names of varying scopes and relations 1", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "\u0394 (n 1 , n 2 , \u03bb, \u03c3) algorithm 4 Performance Improving Factors", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Path-enclosed Tree (PT) Method", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "The effect of decaying in comparing two trees. n(\u2022) denotes #unique subtrees in a tree.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "It was determined by increasing it by 0.1 progressively through 10-fold cross validation.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"text": "Performance variation with respect to decay factor in Five PPI Corpora. Macroaveraged F1 (left), Precision (middle), Recall (right) evaluated by 10-fold CV", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td>1</td><td>FUNCTION delta(TreeNode n 1 , TreeNode n 2 , \u03bb, \u03c3)</td></tr><tr><td>2</td><td>n 1 = one node of T 1 ; n 2 = one node of T 2 ;</td></tr><tr><td>3</td><td>\u03bb = tree kernel decay factor; \u03c3 = tree division me-</td></tr><tr><td>4</td><td>thod;</td></tr><tr><td>5</td><td>BEGIN</td></tr><tr><td>6</td><td>nc 1 = get_children_number(n 1 );</td></tr><tr><td>7</td><td>nc 2 = get_children_number(n 2 );</td></tr><tr><td>8</td><td>IF nc 1 EQUAL 0 AND nc 2 EQUAL 0 THEN</td></tr><tr><td>9</td><td>nv 1 = get_node_value(n 1 );</td></tr><tr><td>10</td><td>nv 2 = get_node_value(n 2 );</td></tr><tr><td>11</td><td>IF nv 1 EQUAL nv 2 THEN RETURN 1;</td></tr><tr><td>12</td><td>ENDIF</td></tr><tr><td>13</td><td>np 1 = get_production_rule(n 1 );</td></tr><tr><td>14</td><td>np 2 = get_production_rule(n 2 );</td></tr><tr><td>15 16 17 18</td><td>IF np 1 AND nc 2 EQUAL 1 THEN</td></tr><tr><td>19</td><td>RETURN \u03bb;</td></tr><tr><td>20</td><td>END IF</td></tr><tr><td>21</td><td/></tr><tr><td>22</td><td>mult_delta = 1;</td></tr><tr><td>23 24</td><td>FOR I = 1 TO nc 1 nch 1 = I th child of n 1 ; nch 2 = I th child of n 2 ;</td></tr><tr><td>25</td><td>mult_delta = mult_delta \u00d7</td></tr><tr><td>26</td><td>(\u03c3 + delta(nch 1 , nch 2 , \u03bb, \u03c3));</td></tr><tr><td>27</td><td>END FOR</td></tr><tr><td>28</td><td>RETURN \u03bb \u00d7 mult_delta;</td></tr><tr><td>29</td><td>END</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "NOT EQUAL np 2 THEN RETURN 0; IF np 1 EQUAL np 2 AND nc 1 EQUAL 1" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Five PPI Corpora" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>AC: accuracy, ma-F: macro-averaged F1, \u03c3 ma-F :</td></tr><tr><td>standard deviation of F-scores in CV. A:AIMed,</td></tr><tr><td>B:BioInfer, H:HPRD50, I:IEPA, L:LLL. The</td></tr><tr><td>numbers in parentheses refer to the scores of</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "The highest results of the proposed system w.r.t. decay factors. DF: Decay Factor," |
|
} |
|
} |
|
} |
|
} |
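
The pseudocode in TABREF0 describes the delta routine at the core of the convolution parse tree kernel, parameterised by a decay factor \u03bb and a tree-division parameter \u03c3. The following is a minimal Python sketch of that routine as given in the table; the TreeNode class, its field names, and the handling of mismatching leaves are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the delta() routine from TABREF0 (convolution parse tree kernel).
# Assumptions: TreeNode, production(), and the leaf-mismatch return value are
# hypothetical conveniences; only the control flow follows the pseudocode.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TreeNode:
    label: str                                   # non-terminal symbol or terminal word
    children: List["TreeNode"] = field(default_factory=list)

    def production(self) -> str:
        # Production rule rooted at this node, e.g. "NP -> DT NN"
        return self.label + " -> " + " ".join(c.label for c in self.children)


def delta(n1: TreeNode, n2: TreeNode, lam: float, sigma: float) -> float:
    """Weighted count of common subtrees rooted at n1 and n2.

    lam is the decay factor; sigma selects the tree-division method
    (0: subtree kernel, 1: subset-tree kernel), as in TABREF0."""
    # Lines 8-12: two leaf nodes match iff their values are equal
    # (a mismatch contributes nothing; assumed behaviour).
    if not n1.children and not n2.children:
        return 1.0 if n1.label == n2.label else 0.0

    # Lines 15-16: different production rules contribute nothing.
    if n1.production() != n2.production():
        return 0.0

    # Lines 17-20: identical single-child (pre-terminal) productions contribute lam.
    if len(n1.children) == 1 and len(n2.children) == 1:
        return lam

    # Lines 22-28: recurse over aligned children and decay the product by lam.
    mult_delta = 1.0
    for c1, c2 in zip(n1.children, n2.children):
        mult_delta *= sigma + delta(c1, c2, lam, sigma)
    return lam * mult_delta


if __name__ == "__main__":
    # Tiny usage example: two identical "NP -> DT NN" fragments.
    t1 = TreeNode("NP", [TreeNode("DT", [TreeNode("the")]),
                         TreeNode("NN", [TreeNode("protein")])])
    t2 = TreeNode("NP", [TreeNode("DT", [TreeNode("the")]),
                         TreeNode("NN", [TreeNode("protein")])])
    print(delta(t1, t2, lam=0.5, sigma=1.0))
```

Because matching productions imply identical child labels, zipping the children mirrors the FOR loop of lines 23-27; setting sigma to 0 restricts matches to complete subtrees, while sigma = 1 also credits partial (subset) tree matches, which is where the decay factor studied in the paper takes effect.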