{
"paper_id": "S07-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:02.797199Z"
},
"title": "FBK-IRST: Kernel Methods for Semantic Relation Extraction",
"authors": [
{
"first": "Claudio",
"middle": [],
"last": "Giuliano",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Istituto per la Ricerca Scientifica e Tecnologica I-38050",
"location": {
"settlement": "Povo",
"region": "TN",
"country": "ITALY"
}
},
"email": "[email protected]"
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Istituto per la Ricerca Scientifica e Tecnologica I-38050",
"location": {
"settlement": "Povo",
"region": "TN",
"country": "ITALY"
}
},
"email": "[email protected]"
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Istituto per la Ricerca Scientifica e Tecnologica I-38050",
"location": {
"settlement": "Povo",
"region": "TN",
"country": "ITALY"
}
},
"email": "[email protected]"
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Istituto per la Ricerca Scientifica e Tecnologica I-38050",
"location": {
"settlement": "Povo",
"region": "TN",
"country": "ITALY"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an approach for semantic relation extraction between nominals that combines shallow and deep syntactic processing and semantic information using kernel methods. Two information sources are considered: (i) the whole sentence where the relation appears, and (ii) WordNet synsets and hypernymy relations of the candidate nominals. Each source of information is represented by kernel functions. In particular, five basic kernel functions are linearly combined and weighted under different conditions. The experiments were carried out using support vector machines as classifier. The system achieves an overall F 1 of 71.8% on the Classification of Semantic Relations between Nominals task at SemEval-2007.",
"pdf_parse": {
"paper_id": "S07-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an approach for semantic relation extraction between nominals that combines shallow and deep syntactic processing and semantic information using kernel methods. Two information sources are considered: (i) the whole sentence where the relation appears, and (ii) WordNet synsets and hypernymy relations of the candidate nominals. Each source of information is represented by kernel functions. In particular, five basic kernel functions are linearly combined and weighted under different conditions. The experiments were carried out using support vector machines as classifier. The system achieves an overall F 1 of 71.8% on the Classification of Semantic Relations between Nominals task at SemEval-2007.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The starting point of our research is an approach for identifying relations between named entities exploiting only shallow linguistic information, such as tokenization, sentence splitting, part-of-speech tagging and lemmatization (Giuliano et al., 2006) . A combination of kernel functions is used to represent two distinct information sources: (i) the global context where entities appear and (ii) their local contexts. The whole sentence where the entities appear (global context) is used to discover the presence of a relation between two entities. Windows of limited size around the entities (local contexts) provide useful clues to identify the roles played by the entities within a relation (e.g., agent and target of a gene interaction). In the task of detecting protein-protein interactions, we obtained state-of-the-art results on two biomedical data sets. In addition, promising results have been recently obtained for relations such as work for and org based in in the news domain 1 .",
"cite_spans": [
{
"start": 230,
"end": 253,
"text": "(Giuliano et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate the use of the above approach to discover semantic relations between nominals. In addition to the original feature representation, we have integrated deep syntactic processing of the global context and semantic information for each candidate nominals using WordNet as external knowledge source. Each source of information is represented by kernel functions. A tree kernel (Moschitti, 2004) is used to exploit the deep syntactic processing obtained using the Charniak parser (Charniak, 2000) . On the other hand, bag of synonyms and hypernyms is used to enhance the representation of the candidate nominals. The final system is based on five basic kernel functions (bag-ofwords kernel, global context kernel, tree kernel, supersense kernel, bag of synonyms and hypernyms kernel) linearly combined and weighted under different conditions. The experiments were carried out using support vector machines (Vapnik, 1998) as classifier.",
"cite_spans": [
{
"start": 402,
"end": 419,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF5"
},
{
"start": 504,
"end": 520,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF1"
},
{
"start": 930,
"end": 944,
"text": "(Vapnik, 1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present results on the Classification of Semantic Relations between Nominals task at SemEval-2007, in which sentences containing ordered pairs of marked nominals, possibly semantically related, have to be classified. On this task, we achieve an overall F 1 of 71.8% (B category evaluation), largely outperforming all the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to implement the approach based on syntactic and semantic information, we employed a linear weighted combination of kernels, using support vector machines as classifier. We designed two families of basic kernels: syntactic kernels and semantic kernels. These basic kernels are combined by exploiting the closure properties of kernels. We define our composite kernel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Methods for Relation Extraction",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K C (x 1 , x 2 ) as follows n i=1 w i K i (x 1 , x 2 ) K i (x 1 , x 1 )K i (x 2 , x 2 ) ,",
"eq_num": "(1)"
}
],
"section": "Kernel Methods for Relation Extraction",
"sec_num": "2"
},
{
"text": "where each basic kernel K i is normalized and w i \u2208 {0, 1} is the kernel weight. The normalization factor plays an important role in allowing us to integrate information from heterogeneous knowledge sources. All basic kernels, but the tree kernel (see Section 2.1.3), are explicitly calculated as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Methods for Relation Extraction",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K i (x 1 , x 2 ) = \u03c6(x 1 ), \u03c6(x 2 ) ,",
"eq_num": "(2)"
}
],
"section": "Kernel Methods for Relation Extraction",
"sec_num": "2"
},
{
"text": "where \u03c6(\u2022) is the embedding vector. Even though the resulting feature space has high dimensionality, an efficient computation of Equation 2 can be carried out explicitly since the input representations defined below are extremely sparse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Methods for Relation Extraction",
"sec_num": "2"
},
{
"text": "Syntactic kernels are defined over the whole sentence where the candidate nominals appear. Bunescu and Mooney (2005) and Giuliano et al. (2006) successfully exploited the fact that relations between named entities are generally expressed using only words that appear simultaneously in one of the following three contexts. Here, we investigate whether this assumption is also correct for semantic relations between nominals. Our global context kernel operates on the contexts defined above, where each context is represented using a bag-of-words. More formally, given a) ",
"cite_spans": [
{
"start": 91,
"end": 116,
"text": "Bunescu and Mooney (2005)",
"ref_id": "BIBREF0"
},
{
"start": 121,
"end": 143,
"text": "Giuliano et al. (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Kernels",
"sec_num": "2.1"
},
{
"text": "\u03c6C(R) = (tf (t1, C), tf (t2, C), . . . , tf (t l , C)) \u2208 R l , (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fore-Between",
"sec_num": null
},
{
"text": "where the function tf (t i , C) records how many times a particular token t i is used in C. Note that this approach differs from the standard bag-of-words as punctuation and stop words are included in \u03c6 C , while the nominals are not. To improve the classification performance, we have further extended \u03c6 C to embed n-grams of (contiguous) tokens (up to n = 3). By substituting \u03c6 C into Equation 2, we obtain the n-gram kernel K n , which counts uni-grams, bigrams, . . . , n-grams that two patterns have in common 2 . The Global Context kernel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fore-Between",
"sec_num": null
},
{
"text": "K GC (R 1 , R 2 ) is then defined as KF B (R1, R2) + KB(R1, R2) + KBA(R1, R2), (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fore-Between",
"sec_num": null
},
{
"text": "where K F B , K B and K BA are n-gram kernels that operate on the Fore-Between, Between and Between-After patterns respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fore-Between",
"sec_num": null
},
{
"text": "The bag-of-words kernel is defined as the previous kernel but it operates on the whole sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Kernel",
"sec_num": "2.1.2"
},
{
"text": "Tree kernels can trigger automatic feature selection and represent a viable alternative to the man-ual design of attribute-value syntactic features (Moschitti, 2004) . A tree kernel K T (t 1 , t 2 ) evaluates the similarity between two trees t 1 and t 2 in terms of the number of fragments they have in common. Let N t be the set of nodes of a tree t and F = {f 1 , f 2 , . . . , f |F| } be the fragment space of t 1 and t 2 . Then KT (t1, t2) = P",
"cite_spans": [
{
"start": 148,
"end": 165,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel",
"sec_num": "2.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n i \u2208Nt 1 P n j \u2208Nt 2 \u2206(ni, nj) ,",
"eq_num": "(5)"
}
],
"section": "Tree Kernel",
"sec_num": "2.1.3"
},
{
"text": "where \u2206(n i , n j ) = |F| k=1 I k (n i ) \u00d7 I K (n j ) and I k (n) = 1 if k is rooted in n, 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel",
"sec_num": "2.1.3"
},
{
"text": "For this task, we defined an ad-hoc class of structured features (Moschitti et al., 2006) , the Reduced Tree (RT), which can be derived from a sentence parse tree t by the following steps: (1) remove all the terminal nodes but those labeled as relation entities and those POS tagged as verbs, auxiliaries, prepositions, modals or adverbs; (2) remove all the internal nodes not covering any remaining terminal;",
"cite_spans": [
{
"start": 65,
"end": 89,
"text": "(Moschitti et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel",
"sec_num": "2.1.3"
},
{
"text": "(3) replace the entity words with placeholders that indicate the direction in which the relation should hold. Figure 1 shows a parse tree and the resulting RT structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Kernel",
"sec_num": "2.1.3"
},
{
"text": "In (Giuliano et al., 2006) , we used the local context kernel to infer semantic information on the candidate entities (i.e., roles played by the entities). As the task organizers provide the WordNet sense and role for each nominal, we directly use this information to enrich the feature space and do not include the local context kernel in the combination.",
"cite_spans": [
{
"start": 3,
"end": 26,
"text": "(Giuliano et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Kernels",
"sec_num": "2.2"
},
{
"text": "By using the WordNet sense key provided, each nominal is represented by the bag of its synonyms and hypernyms (direct and inherited hypernyms). Formally, given a relation example R, each nominal N is represented as a row vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag of Synonyms and Hypernyms Kernel",
"sec_num": "2.2.1"
},
{
"text": "\u03c6N (R) = (f (t1, N ), f (t2, N ), . . . , f (t l , N )) \u2208 R l , (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag of Synonyms and Hypernyms Kernel",
"sec_num": "2.2.1"
},
{
"text": "where the binary function f (t i , N ) records if a particular lemma t i is contained into the bag of synonyms and hypernyms of N. The bag of synonyms and hypernyms kernel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag of Synonyms and Hypernyms Kernel",
"sec_num": "2.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K S&H (R 1 , R 2 ) is defined as Ktarget(R1, R2) + Kagent(R1, R2),",
"eq_num": "(7)"
}
],
"section": "Bag of Synonyms and Hypernyms Kernel",
"sec_num": "2.2.1"
},
{
"text": "where K target and K agent are defined by substituting the embedding of the target and agent nominals into Equation 2 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag of Synonyms and Hypernyms Kernel",
"sec_num": "2.2.1"
},
{
"text": "WordNet synsets are organized into 45 lexicographer files, based on syntactic category and logical groupings. E.g., noun.artifact is for nouns denoting man-made objects, noun.attribute for nouns denoting attributes for people and objects etc. The supersense kernel K SS (R 1 , R 2 ) is a variant of the previous kernel that uses the names of the lexicographer files (i.e., the supersense) to index the feature space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supersense Kernel",
"sec_num": "2.2.2"
},
{
"text": "Sentences have been tokenized, lemmatized, and POS tagged with TextPro 3 . We considered each relation as a different binary classification task, and each sentence in the data set is a positive or negative example for the relation. The direction of the relation is considered labelling the first argument of the relation as agent and the second as target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Results",
"sec_num": "3"
},
{
"text": "All the experiments were performed using the SVM package SVMLight-TK 4 , customized to embed our own kernels. We optimized the linear combination weights w i and regularization parameter c using 10-fold cross-validation on the training set. We set the cost-factor j to be the ratio between the number of negative and positive examples. Table 1 shows the performance on the test set. We achieve an overall F 1 of 71.8% (B category evaluation), largely outperforming all the baselines, ranging from 48.5% to 57.0%. The average training plus test running time for a relation is about 10 seconds on a Intel Pentium M755 2.0 GHz. Figure 2 shows the learning curves on the test set. For all relations but theme-tool, accurate classifiers can be learned using a small fraction of training.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 343,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 625,
"end": 633,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup and Results",
"sec_num": "3"
},
{
"text": "Experimental results show that our kernel-based approach is appropriate also to detect semantic relations between nominals. However, differently from relation extraction between named entities, there is not a common kernel setup for all relations. E.g., for content-container we obtain the best performance combining the tree kernel and the bag of synonyms and hypernyms kernel; on the other hand, for instrument-agency the best performance is obtained by combining the global kernel and the supersense kernel. Surprisingly, the supersense kernel alone works quite well and obtains results comparable to the bag of synonyms and hypernyms kernel. This result is particularly interesting as a supersense tagger can easily provide a satisfactory accuracy (Ciaramita and Altun, 2006) . On the other hand, obtaining an acceptable accuracy in word sense disambiguation (required for a realistic application of the bag of synonyms and hypernyms kernel) is impractical as a sufficient amount of training for at least all nouns is currently not available. Hence, the supersense could play a crucial role to improve the performance when approaching this task without the nominals disambiguated. To model the global context using the Fore-Between, Between and Between-After contexts did not produce a significant improvement with respect to the bag-of-words model. This is mainly due to the fact that examples have been col-lected from the Web using heuristic patterns/queries, most of which implying Between patterns/contexts (e.g., for the cause-effect relation \"* comes from *\", \"* out of *\" etc.).",
"cite_spans": [
{
"start": 752,
"end": 779,
"text": "(Ciaramita and Altun, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "These results appear in a paper currently under revision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the literature, it is also called n-spectrum kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://tcc.itc.it/projects/textpro/ 4 http://ai-nlp.info.uniroma2.it/moschitti/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Claudio Giuliano, Alberto Lavelli and Lorenza Romano are supported by the X-Media project (http: //www.x-media-project.org), sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-026978.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "5"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 19th Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond J. Mooney. 2005. Subse- quence kernels for relation extraction. In Proceedings of the 19th Conference on Neural Information Pro- cessing Systems, Vancouver, British Columbia.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the First Meeting of the North American Chapter of the Association for Com- putational Linguistics, pages 132-139, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "594--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Pro- ceedings of the 2006 Conference on Empirical Meth- ods in Natural Language Processing, pages 594-602, Sydney, Australia, July.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Exploiting shallow linguistic information for relation extraction from biomedical literature",
"authors": [
{
"first": "Claudio",
"middle": [],
"last": "Giuliano",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006)",
"volume": "",
"issue": "",
"pages": "5--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudio Giuliano, Alberto Lavelli, and Lorenza Romano. 2006. Exploiting shallow linguistic information for re- lation extraction from biomedical literature. In Pro- ceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguis- tics (EACL-2006), Trento, Italy, 5-7 April.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic role labeling via tree kernel joint inference",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti, Daniele Pighin, and Roberto Basili. 2006. Semantic role labeling via tree kernel joint inference. In Proceedings of the Tenth Confer- ence on Computational Natural Language Learning, CoNLL-X.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A study on convolution kernels for shallow statistic parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2004. A study on convolution kernels for shallow statistic parsing. In Proceedings of the 42nd Meeting of the Association for Computa- tional Linguistics (ACL'04), Main Volume, pages 335- 342, Barcelona, Spain, July.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical Learning Theory",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Vapnik. 1998. Statistical Learning Theory. John Wiley and Sons, New York, NY.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "A content-container relation test sentence parse tree (a) and the corresponding RT structure (b). a relation example R, we represent a context C as a row vector",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Tokens before and between the two entities, e.g. \"the head of [ORG], Dr. [P ER]\". Between Only tokens between the two entities, e.g. \"[ORG] spokesman [P ER]\". Between-After Tokens between and after the two entities, e.g. \"[P ER], a [ORG] professor\".",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Results on the test set.",
"type_str": "table"
}
}
}
}