{
"paper_id": "S10-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:30.995681Z"
},
"title": "BUAP: An Unsupervised Approach to Automatic Keyphrase Extraction from Scientific Articles",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Ortiz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BUAP Puebla",
"location": {
"country": "Mexico"
}
},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BUAP Puebla",
"location": {
"country": "Mexico"
}
},
"email": "[email protected]"
},
{
"first": "Mireya",
"middle": [],
"last": "Tovar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BUAP Puebla",
"location": {
"country": "Mexico"
}
},
"email": "[email protected]"
},
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Jim\u00e9nez-Salazar",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, it is presented an unsupervised approach to automatically discover the latent keyphrases contained in scientific articles. The proposed technique is constructed on the basis of the combination of two techniques: maximal frequent sequences and pageranking. We evaluated the obtained results by using micro-averaged precision, recall and Fscores with respect to two different gold standards: 1) reader's keyphrases, and 2) a combined set of author's and reader's keyphrases. The obtained results were also compared against three different baselines: one unsupervised (TF-IDF based) and two supervised (Na\u00efve Bayes and Maximum Entropy).",
"pdf_parse": {
"paper_id": "S10-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, it is presented an unsupervised approach to automatically discover the latent keyphrases contained in scientific articles. The proposed technique is constructed on the basis of the combination of two techniques: maximal frequent sequences and pageranking. We evaluated the obtained results by using micro-averaged precision, recall and Fscores with respect to two different gold standards: 1) reader's keyphrases, and 2) a combined set of author's and reader's keyphrases. The obtained results were also compared against three different baselines: one unsupervised (TF-IDF based) and two supervised (Na\u00efve Bayes and Maximum Entropy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of automatic keyphrase extraction has been studied for several years. Firstly, as semantic metadata useful for tasks such as summarization (Barzilay and Elhadad, 1997; Lawrie et al., 2001; DAvanzo and Magnini, 2005) , but later recognizing the impact that good keyphrases would have on the quality of various Natural Language Processing (NLP) applications Turney, 1999; Barker and Corrnacchia, 2000; Medelyan and Witten, 2008) . Thus, the selection of important, topical phrases from within the body of a document may be used in order to improve the performance of systems dealing with different NLP problems such as, clustering, question-answering, named entity recognition, information retrieval, etc.",
"cite_spans": [
{
"start": 148,
"end": 176,
"text": "(Barzilay and Elhadad, 1997;",
"ref_id": "BIBREF1"
},
{
"start": 177,
"end": 197,
"text": "Lawrie et al., 2001;",
"ref_id": null
},
{
"start": 198,
"end": 224,
"text": "DAvanzo and Magnini, 2005)",
"ref_id": null
},
{
"start": 365,
"end": 378,
"text": "Turney, 1999;",
"ref_id": "BIBREF8"
},
{
"start": 379,
"end": 379,
"text": "",
"ref_id": null
},
{
"start": 423,
"end": 436,
"text": "Witten, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, a keyphrase may be considered as a sequence of one or more words that capture the main topic of the document, as that keyphrase is expected to represent one of the key ideas expressed by the document author. Following the previously mentioned hypothesis, we may take advantage of two different techniques of text analysis: maximal frequent sequences to extract a sequence of one or more words from a given text, and pageranking, expecting to extract those word sequences that represent the key ideas of the author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The interest on extracting high quality keyphrases from raw text has motivated forums, such as SemEval, where different systems may evaluate their performances. The purpose of SemEval is to evaluate semantic analysis systems. In particular, in this paper we are reporting the results obtained in Task #5 of SemEval-2 2010, which has been named: \"Automatic Keyphrase Extraction from Scientific Articles\". We focused this paper on the description of our approach and, therefore, we do not describe into detail the task nor the dataset used. For more information about this information read the \"Task #5 Description paper\", also published in this proceedings volume (Nam Kim et al., 2010) .",
"cite_spans": [
{
"start": 663,
"end": 685,
"text": "(Nam Kim et al., 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows. Section 2 describes into detail the components of the proposed approach. In Section 3 it is shown the performance of the presented system. Finally, in Section 4 a discussion of findings and further work is given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The approach presented in this paper relies on the combination of two different techniques for selecting the most prominent terms of a given text: maximal frequent sequences and pageranking. In Figure 1 we may see this two step approach, where we are considering a sequence to be equivalent to an n-gram. The complete description of the procedure is given as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 200,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Description of the approach",
"sec_num": "2"
},
{
"text": "We select maximal frequent sequences which we consider to be candidate keyphrases and, thereafter, we ranking them in order to determine which ones are the most importants (according to the pageranking algorithm). In the following subsections we give a brief description of these two techniques. Afterwards, we provide an algorithm of the presented approach. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the approach",
"sec_num": "2"
},
{
"text": "Definition: If a sequence p is a subsequence of q and the number of elements in p is equal to n, then the p is called an n-gram in q. Definition: A sequence p = a 1 \u2022 \u2022 \u2022 a k is a subsequence of a sequence q if all the items a i occur in q and they occur in the same order as in p. If a sequence p is a subsequence of a sequence q we say that p occurs in q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Frequent Sequences",
"sec_num": "2.1"
},
{
"text": "Definition: A sequence p is frequent in S if p is a subsequence of at least \u03b2 documents in S where \u03b2 is a given frequency threshold. Only one occurrence of sequence in the document is counted. Several occurrences within one document do not make the sequence more frequent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Frequent Sequences",
"sec_num": "2.1"
},
{
"text": "Definition: A sequence p is a maximal frequent sequence in S if there does not exists any sequence q in S such that p is a subsequence of q and p is frequent in S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Frequent Sequences",
"sec_num": "2.1"
},
{
"text": "The algorithm of PageRanking was defined by Brin and Page in (Brin and Page, 1998) . It is a graph-based algorithm used for ranking webpages. The algorithm considers input and output links of each page in order to construct a graph, where each vertex is a webpage and each edge may be the input or output links for this webpage. They denote as In(V i ) the set of input links of webpage V i , and Out(V i ) their output links. The algorithm proposed to rank each webpage based on the voting or recommendation of other webpages. The higher the number of votes that are cast for a vertex, the higher the importance of the vertex. Moreover, the importance of the vertex casting the vote determines how important the vote itself is, and this information is also taken into account by the ranking model.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Brin and Page, 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PageRanking",
"sec_num": "2.2"
},
{
"text": "Although this algoritm has been initially proposed for webpages ranking, it has been also used for other NLP applications which may model their corresponding problem in a graph structure. Eq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PageRanking",
"sec_num": "2.2"
},
{
"text": "(1) is the formula proposed by Brin and Page.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PageRanking",
"sec_num": "2.2"
},
{
"text": "V i ) = (1 \u2212 d) + d * j\u2208In(V i ) 1 |Out(V j )| S(V j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S(",
"sec_num": null
},
{
"text": "(1) where d is a damping factor that can be set between 0 and 1, which has the role of integrating into the model the probability of jumping from a given vertex to another random vertex in the graph. This factor is usually set to 0.85 (Brin and Page, 1998) .",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Brin and Page, 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S(",
"sec_num": null
},
{
"text": "There are some other propossals, like the one presented in (Mihalcea and Tarau, 2004) , where a textranking algorithm is presented. The authors consider a weighted version of PageRank and present some applications to NLP using unigrams. They also construct multi-word terms by exploring the conections among ranked words in the graph. Our algorithm differs from textranking in that we use MFS for feeding the PageRanking algorithm.",
"cite_spans": [
{
"start": 59,
"end": 85,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S(",
"sec_num": null
},
{
"text": "The complete algoritmic description of the presented approach is given in Algorithm 1. Readers and writers keyphrases may be quite different. In particular, writers usually introduce acronyms in their text, but they use the complete or expanded representation of these acronyms for their keyphrases. Therefore, we have included a module (Extract Acronyms) for extracting both, acronyms with their corresponding expanded version, which are used afterwards as output of our system. We have preprocessed the dataset removing stopwords and punctuation symbols. Lemmatization (TreeTagger 1 ) and stemming (Porter Stemmer (Porter, 1980) ) were also applied in some stages of preprocessing.",
"cite_spans": [
{
"start": 600,
"end": 630,
"text": "(Porter Stemmer (Porter, 1980)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "The M aximal F req Sequences module extracts maximal frequent sequences of words and we feed the PageRaking module (P ageRanking) with all these sequences for determining the most important ones. We use the structure of the scientific articles in order to determine in and out links of the sequences found. In fact, we use a neighborhood criterion (a pair of MFS in the same sentence) for determining the links between those MFS's. Once the ranking is calculated, we may select those sequences of a given length (unigrams, bigrams and trigrams) as output of our system. We also return a maximum of three acronyms, and their associated multiterm phrases (M ultiT erm), as candidate keyphrases. Determining the length and quantity of the sequences (n-grams) was experimentally deduced from the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "Algorithm 1: Algorithm of the Two Step approach for the Task #5 at SemEval-2 Input: A document set: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "D = {d 1 , d 2 , \u2022 \u2022 \u2022 } Output: A set K = {K 1 , K 2 , \u2022 \u2022 \u2022 } of keyphrases for each document d i : K i = {k i,1 , k i,2 , \u2022 \u2022 \u2022 } foreach d i \u2208 D do 1 AcronymSet = Extract Acronyms(d i ); 2 d 1 i = Pre Processing(d i ); 3 M F S = Maximal Freq Sequences(d 1 i );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "K i = K i CB; 26 end 27 return K = {K 1 , K 2 , \u2022 \u2022 \u2022 } 28",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "In this edition of the Task #5 of SemEval-2 2010, we tested three different runs, which were named: BU AP \u2212 1, BU AP \u2212 2 and BU AP \u2212 3. Definition and differences among the three runs are given in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "The results obtained with each run, together with three different baselines are given in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2.3"
},
{
"text": "In all tables, P , R, F mean micro-averaged precision, recall and F -scores. For baselines, there were provided 1,2,-3 grams as candidates and T F IDF as features. In Table 2 , T F IDF is an unsupervised method to rank the candidates based on T F IDF scores. N B and M E are supervised methods using Na\u00efve Bayes and maximum entropy in WEKA. In second column, R means to use the reader-assigned keyword set as goldstandard data and C means to use both authorassigned and reader-assigned keyword sets as answers.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3"
},
{
"text": "Notice from Tables 2 and 3 that we outperformed all the baselines for the Top 15 candidates. However, the Top 10 candidates were only outperformed by the Reader-Assigned keyphrases found. This implies that the Writer keyphrases we obtained were not of as good as the Reader ones. As we mentioned, readers and writers assign different keywords. The former write keyphrases based on the lecture done, by the latter has a wider context and their keyphrases used to be more complex. We plan to investigate this issue in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 26,
"text": "Tables 2 and 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3"
},
{
"text": "We have presented an approach based on the extraction of maximal frequent sequences which are then ranked by using the pageranking algorithm. Three different runs were tested, modifying the preprocessing stage and the number of bigrams given as output. We did not see an improvement when we used lemmatization of the documents. The run which obtained the best results was ranking by the organizer according to the top 15 best keyphrases, however, we may see that our runs need to be analysed more into detail in order to provide a re-ranking procedure for the best 15 keyphrases found. This procedure may improve the top 5 candidates precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "Description BU AP \u2212 1 : This run is exactly the one described in Algorithm 1. BU AP \u2212 2 : Same as BU AP \u2212 1 but lemmatization was applied a priori and stemming at the end. BU AP \u2212 3 : Same as BU AP \u2212 2 but output twice the number of bigrams. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run name",
"sec_num": null
},
{
"text": "http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially supported by CONA-CYT (Project #106625) and PROMEP (Grant #103.5/09/4213).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using noun phrase heads to extract document keyphrases",
"authors": [
{
"first": "]",
"middle": [
"K"
],
"last": "Corrnacchia2000",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrnacchia",
"suffix": ""
}
],
"year": 2000,
"venue": "13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Barker and Corrnacchia2000] K. Barker and N. Cor- rnacchia. 2000. Using noun phrase heads to extract document keyphrases. In 13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using lexical chains for text summarization",
"authors": [
{
"first": "]",
"middle": [
"R"
],
"last": "Elhadad1997",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL/EACL 1997 Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "10--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Barzilay and Elhadad1997] R. Barzilay and M. El- hadad. 1997. Using lexical chains for text sum- marization. In ACL/EACL 1997 Workshop on Intel- ligent Scalable Text Summarization, pages 10-17.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "DAvanzo and B. Magnini. 2005. A keyphrase-based approach to summarization:the lake system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Page",
"suffix": ""
}
],
"year": 1998,
"venue": "Document Understanding Conferences (DUC-2005)",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Brin and Page1998] S. Brin and L. Page. 1998. The anatomy of a large-scale hypertextual web search engine. In COMPUTER NETWORKS AND ISDN SYSTEMS, pages 107-117. Elsevier Science Pub- lishers B. V. [DAvanzo and Magnini2005] E. DAvanzo and B. Magnini. 2005. A keyphrase-based approach to summarization:the lake system. In Document Understanding Conferences (DUC-2005).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finding topic words for hierarchical summarization",
"authors": [
{
"first": "[",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 1999,
"venue": "16th International Joint Conference on AI",
"volume": "",
"issue": "",
"pages": "668--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Frank et al.1999] E. Frank, G.W. Paynter, I. Witten, C. Gutwin, and C.G. Nevill-Manning. 1999. Do- main specific keyphrase extraction. In 16th Interna- tional Joint Conference on AI, pages 668-673. [Lawrie et al.2001] D. Lawrie, W. B. Croft, and A. Rosenberg. 2001. Finding topic words for hi- erarchical summarization. In SIGIR 2001.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Domain independent automatic keyphrase indexing with small training sets",
"authors": [
{
"first": "]",
"middle": [
"O"
],
"last": "Witten2008",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Medelyan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Witten",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of American Society for Information Science and Technology",
"volume": "59",
"issue": "7",
"pages": "1026--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Medelyan and Witten2008] O. Medelyan and I. H. Witten. 2008. Domain independent automatic keyphrase indexing with small training sets. Jour- nal of American Society for Information Science and Technology, 59(7):1026-1040.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Textrank: Bringing order into texts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP 2004, ACL",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Mihalcea and Tarau2004] R. Mihalcea and P. Tarau. 2004. Textrank: Bringing order into texts. In EMNLP 2004, ACL, pages 404-411.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2010 task5: Automatic keyphrase extraction from scientific articles",
"authors": [
{
"first": "Kim",
"middle": [],
"last": "[nam",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fifth International Workshop on Semantic Evaluations (SemEval-2010). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Nam Kim et al.2010] S. Nam Kim, O. Medelyan, and M.Y. Kan. 2010. Semeval-2010 task5: Auto- matic keyphrase extraction from scientific articles. In Proceedings of the Fifth International Workshop on Semantic Evaluations (SemEval-2010). Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "M",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. F. Porter. 1980. An algorithm for suf- fix stripping. Program, 14(3).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning to extract keyphrases from text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Turney. 1999. Learning to extract keyphrases from text. Technical Report ERB-1057. (NRC #41622), National Research Council, Institute for Information Technology.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Kea:practical automatic key phrase extraction",
"authors": [],
"year": null,
"venue": "fourth ACM conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "254--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kea:practical automatic key phrase extraction. In fourth ACM conference on Digital libraries, pages 254-256.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Two step approach of BUAP Team at the Task #5 of SemEval-2",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Acronyms+|CT |+N U )); 24 CB = Top N Bigrams(CK, N );25",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Description of the three runs submitted to the Task #5 of SemEval-2 2010 IDF R 17.80% 7.39% 10.44% 13.90% 11.54% 12.61% 11.60% 14.45% 12.87% C 22.00% 7.50% 11.19% 17.70% 12.07% 14.35% 14.93% 15.28% 15.10% N B R 16.80% 6.98% 9.86% 13.30% 11.05% 12.07% 11.40% 14.20% 12.65% C 21.40% 7.30% 10.89% 17.30% 11.80% 14.03% 14.53% 14.87% 14.70% 30% 11.05% 12.07% 11.40% 14.20% 12.65% C 21.40% 7.30% 10.89% 17.30% 11.80% 14.03% 14.53% 14.87% 14.70%",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td>by</td><td colspan=\"2\">top 5 candidates</td><td/><td>top 10 candidates</td><td/><td/><td>top 15 candidates</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td>T F \u2212 M E</td><td colspan=\"2\">R 16.80% 6.98%</td><td>9.86%</td><td>13.</td><td/><td/><td/><td/></tr></table>",
"html": null
},
"TABREF1": {
"text": "BU AP \u2212 1 R 10.40% 4.32% 6.10% 13.90% 11.54% 12.61% 14.93% 18.60% 16.56% C 13.60% 4.64% 6.92% 17.60% 12.01% 14.28% 19.00% 19.44% 19.22% BU AP \u2212 2 R 10.40% 4.32% 6.10% 13.80% 11.46% 12.52% 14.67% 18.27% 16.27% C 14.40% 4.91% 7.32% 17.80% 12.14% 14.44% 18.73% 19.17% 18.95% BU AP \u2212 3 R 10.40% 4.32% 6.10% 12.10% 10.05% 10.98% 12.33% 15.37% 13.68% C 14.40% 4.91% 7.32% 15.60% 10.64% 12.65% 15.67% 16.03% 15.85%",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Baselines</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "The three different runs submitted to the competition",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}