|
{ |
|
"paper_id": "S10-1035", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:32.909082Z" |
|
}, |
|
"title": "WINGNUS: Keyphrase Extraction Utilizing Document Logical Structure", |
|
"authors": [ |
|
{ |
|
"first": "Thuy", |
|
"middle": [ |
|
"Dung" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Singapore", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Singapore",
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a system description of the WINGNUS team's work 1 for the SemEval-2010 task #5 Automatic Keyphrase Extraction from Scientific Articles. A key feature of our system is that it utilizes an inferred document logical structure in our candidate identification process to limit the number of phrases in the candidate list, while maintaining its coverage of important phrases. Our top performing system achieves an F 1 of 25.22% for the combined (author- and reader-assigned) keyphrases on the final test data. We note that the method we report here is novel and orthogonal to other systems, so it can be combined with other techniques to potentially achieve higher performance.",
|
"pdf_parse": { |
|
"paper_id": "S10-1035", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a system description of the WINGNUS team's work 1 for the SemEval-2010 task #5 Automatic Keyphrase Extraction from Scientific Articles. A key feature of our system is that it utilizes an inferred document logical structure in our candidate identification process to limit the number of phrases in the candidate list, while maintaining its coverage of important phrases. Our top performing system achieves an F 1 of 25.22% for the combined (author- and reader-assigned) keyphrases on the final test data. We note that the method we report here is novel and orthogonal to other systems, so it can be combined with other techniques to potentially achieve higher performance.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Keyphrases are noun phrases (NPs) that capture the primary topics of a document. While beneficial for applications such as summarization, clustering and indexing, only a minority of documents have manually assigned keyphrases, as assigning them is a time-consuming process. Automatic keyphrase generation is thus a focus for many researchers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most existing keyphrase extraction systems view this task as a supervised classification task in two stages: generating a list of candidates (candidate identification) and using answer keyphrases to distinguish true keyphrases (candidate selection). The selection model uses a set of features that capture the saliency of a phrase as a keyphrase. A major challenge of the keyphrase extraction task lies in the candidate identification process. A narrow candidate list will overlook some true keyphrases (favoring precision), whereas a broad list will produce more errors and require more processing in the later selection stage (favoring recall).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our previous system (Nguyen and Kan, 2007), we made use of the document logical structure in the proposed features. The premise of this earlier work was that keyphrases are distributed non-uniformly across the logical sections of a paper, favoring sections such as the introduction and related work. We introduced features indicating which sections a candidate occurs in. For our fielded system in this task (Kim et al., 2010), we further leverage the document logical structure in both the candidate identification and selection stages.",
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 45, |
|
"text": "(Nguyen and Kan, 2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions are as follows: 1) we suggest the use of a Google Scholar-based crawler to automatically find PDF files to enhance logical structure extraction; 2) we provide a study of keyphrase distribution with respect to different logical structures; and 3) based on the study results, we propose a candidate identification approach that uses logical structure to effectively limit the number of candidates considered while ensuring good coverage.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although we have plain text for all test input, we posit that logical structure recovery is much more robust given the original richly-formatted document (e.g., PDF), as font and formatting information can be used for detection. As a bridge between the plain text data provided by the organizers and the PDF input required to extract formatting features, we first describe our Google Scholar-based crawler that finds PDFs given plain texts. We then detail the logical structure extraction process.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our crawler 2 takes titles as input to query Google Scholar (GS) by means of web scraping. It processes GS results and performs approximate title matching using character-based Longest Common Subsequence similarity. Once a matching title with a high similarity score (> 0.7, set experimentally) is found, the crawler retrieves the list of available PDFs and starts downloading until one is correctly stored. We enforce that accepted PDFs have OCR text that closely matches the provided plain text in terms of lines and tokens.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google Scholar-based Paper Crawler", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the keyphrase task, we approximate the title inputs to our crawler by considering the first two lines of each plain text provided. For 140 train and 100 test input documents, the crawler downloaded 117 and 80 PDFs, of which 116 and 76 files are correct, respectively. This yields an acceptable level of performance in terms of (Precision, Recall) of (99.15%, 82.86%) for train and (95%, 76%) for test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google Scholar-based Paper Crawler", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Logical structure is defined as \"a hierarchy of logical components, for example, titles, authors, affiliations, abstracts, sections, etc.\" in (Mao et al., 2003). Operationalizing this definition, we employ in-house software, called SectLabel (Luong et al., to appear), to obtain comprehensive logical structure information for each document. SectLabel classifies each text line in a scholarly document with a semantic class (e.g., title, header, bodyText). Header lines are further classified into generic roles (e.g., abstract, intro, method).",
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 160, |
|
"text": "(Mao et al., 2003)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Structure Extraction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A prominent feature of SectLabel is that it is capable of utilizing rich information, such as font format and spatial layout, from optical character recognition (OCR) output when PDF files are present 3 . When PDFs are unavailable, SectLabel still performs logical structure discovery from plain text, but with degraded performance.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Structure Extraction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We perform a study of keyphrase distribution on the training data over different logical structures (LSs) to understand the importance of each section within documents. These LSs include: title, headers, abstract, introduction (intro), related work (rw), conclusion, and body text 4 (body).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Identification Phrase Distribution Study", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We make a key observation that within a paragraph, important phrases occur mostly in the first n sentences. To validate our hypothesis, we consider keyphrase distribution over body n , which is the subset of all of the body LS, limited to the first n sentences of each paragraph (n = 1, 2, 3 experimentally). Results in Table 1 show that individual LSs (title, headers, abstract, intro, rw, concl) contain a high concentration (i.e., density > 0.2) of keyphrases, with title and abstract having the highest density, and intro being the most dominant LS in terms of keyphrase count. With all these LSs and body, we obtain the full setting, covering 1994/2059=96.84% of all keyphrases appearing in the original text, fulltext, while effectively reducing the number of processed sentences by more than two-thirds.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 327, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Identification Phrase Distribution Study", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Considering only the first sentence of each paragraph in the body text, body 1 , yields fair keyphrase coverage of 1035/1411=73.35% relative to that of fulltext. The number of lines to be processed is much smaller, about a third, which validates our aforementioned hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Identification Phrase Distribution Study", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Results from the keyphrase distribution study motivate us to further explore the use of logical structures (LS). The idea is to limit the search scope of our candidate identification system while maintaining coverage. We propose a new approach, which extracts candidates according to the regular expression rules discussed in (Kim and Kan, 2009). However, instead of using the whole document text as input, we abridge the input text at different levels, from full to minimal. Recall is computed with respect to the total number of keyphrases in the original texts (2059). Table 2 shows that we gather a recall of 63.72% when considering a significantly abridged form of the input culled from the title, headers, abstract (abs) and introduction (intro) - minimal. Further adding related work (rw) and conclusion - medium - enhances the recall by 4.95%. When also adding the first line of each paragraph in the body text, we achieve a good recall of 76.74% while effectively halving the number of candidate phrases to be processed with respect to the fulltext input. Even though full 2 , full 3 , and full show further improvements in recall, we opt to use full 1 in our experimental runs, trading off some recall for lower computational complexity in the downstream classification.",
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 386, |
|
"text": "(Kim and Kan, 2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 613, |
|
"end": 620, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Keyphrase Extraction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following (Nguyen and Kan, 2007) , we use the Na\u00efve Bayes model implemented in Weka (Hall et al., 2009) for candidate phrase selection. As different learning models have been discussed in much previous work, we simply list the features with which we experimented. Our features 5 are as follows (where n indicates a numeric feature and b a boolean one):",
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 32, |
|
"text": "(Nguyen and Kan, 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 103, |
|
"text": "(Hall et al., 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F1-F3 (n): TF\u00d7IDF, term frequency, term frequency of substrings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F4-F5 (n): First and last occurrences (word offset).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F6 (n): Length of phrases in words. F7 (b): Typeface attribute (available when a PDF is present) - indicates whether any part of the candidate phrase appears in the document in bold or italic, a good hint for its relevance as a keyphrase.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F8 (b): InTitle -shows whether a phrase is also part of the document title.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F9 (n): TitleOverlap -the number of times a phrase appears in the title of other scholarly documents (obtained from a dump of the DBLP database).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F10-F14 (b): Header, Abstract, Intro, RW, Concl -indicate whether a phrase appears in headers, abstract, introduction, related work or conclusion sections, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F15-F19 (n): HeaderF, AbstractF, IntroF, RWF, ConclF -indicate the frequency of a phrase in the headers, abstract, introduction, related work or conclusion sections, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Phrase Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For this task (Kim et al., 2010), we are given two datasets: train (144 docs) and test (100 docs) with detailed answers for train. To tune our system, we split the train dataset into train and validation subsets: train t (104 docs) and train v (40 docs). Once the best setting is derived from train t -train v , we obtain the final model trained on the full data, and apply it to the test set for the final results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Our evaluation process proceeds in two stages: we first experiment with different feature combinations using the input types fulltext and full 1 . We then fix the best feature set and vary our abridged inputs to find the optimal one.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To evaluate the performance of individual features, we define a base feature set as F 1,4 and measure the performance of each feature added separately to the base. Results in Table 3 highlight the set of positive features: F 3,5,6,13,16 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 185, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Combination", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From the positive set F 3,5,6,13,16 , we tried different combinations for the two input types, as shown in Table 4. The results indicate that while fulltext obtains the best performance with F 3,6,5 added, using full 1 shows superior performance, at 28.18% F-score, with F 3,6 added. Hence, we have identified our best feature set as F 1,3,4,6 .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Combination", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Table 4 : Performance (F 1 ) over different feature combinations for fulltext and full 1 inputs. Table 5 gives the performance for the abridged inputs we tried with the best feature set F 1,3,4,6 . All of full 1 , full 2 , full 3 and full show improved performance compared to fulltext. We achieve our best performance with full 1 , at 28.18% F-score. These results validate the effectiveness of our approach of utilizing logical structure for candidate identification. We report the results we submitted in Table 6 . These figures are achieved using the best feature combination F 1,3,4,6 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 21, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 119, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 539, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Combination", |
|
"sec_num": null |
|
}, |
|
{

"text": "We have described and evaluated our keyphrase extraction system for the SemEval-2010 Task #5. With the use of logical structure in candidate identification, our system has demonstrated superior performance over systems that do not use such information. Moreover, we have effectively reduced the number of text lines and candidate phrases to be processed in the candidate identification and selection stages, respectively, by about half.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "6"

},

{

"text": "Table 5 : Performance over different abridged inputs using the best feature set F 1,3,4,6 . \"@N\" indicates the number of top N keyphrase matches.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 5",

"ref_id": null

}

],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "6"

},

{

"text": "System Description F@5 F@10 F@15 WINGNUS 1 full, F 1,3,4,6 20.65% 24.66% 24.95% WINGNUS 2 full 1 , F 1,3,4,6 20.45% 24.73% 25.22% Table 6 : Final results on the test data.",

"cite_spans": [],

"ref_spans": [

{

"start": 130,

"end": 137,

"text": "Table 6",

"ref_id": null

}

],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "6"

},
|
{ |
|
"text": "Our system takes advantage of the logical structure analysis, but not to the extent we had hoped. We had hypothesized that formatting features (F 7 ), such as bold and italics, would help discriminate keyphrases, but our limited experiments for this task did not validate this. Similarly, external knowledge should help in the keyphrase task, but the prior knowledge about keyphrase likelihood (F 9 ) from DBLP hurt performance in our tests. We plan to explore these issues further in future work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This work was supported by a National Research Foundation grant \"Interactive Media Search\" (grant # R-252-000-325-279).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://wing.comp.nus.edu.sg/~lmthang/GS/",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We note that the PDFs contain the author-assigned keyphrases of the document, but we filtered this information before passing the documents to our keyphrase system to ensure a fair test. 4 We utilize the comprehensive output of our logical structure system to filter out copyright, email, equation, figure, caption, footnote, and reference lines.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Detailed feature definitions are described in (Nguyen and Kan, 2007; Kim and Kan, 2009).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The WEKA data mining software: an update", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Pfahringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Reutemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "SIGKDD Explor. Newsl", |
|
"volume": "11", |
|
"issue": "1", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. SIGKDD Explor. Newsl., 11(1):10-18.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Re-examining automatic keyphrase extraction approaches in scientific articles", |
|
"authors": [ |
|
{

"first": "Su",

"middle": [

"Nam"

],

"last": "Kim",

"suffix": ""

},

{

"first": "Min-Yen",

"middle": [],

"last": "Kan",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Nam Kim and Min-Yen Kan. 2009. Re-examining automatic keyphrase extraction approaches in scien- tific articles. In MWE '09.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Task 5: Automatic keyphrase extraction from scientific articles", |
|
"authors": [ |
|
{

"first": "Su",

"middle": [

"Nam"

],

"last": "Kim",

"suffix": ""

},

{

"first": "Alyona",

"middle": [],

"last": "Medelyan",

"suffix": ""

},

{

"first": "Min-Yen",

"middle": [],

"last": "Kan",

"suffix": ""

},

{

"first": "Timothy",

"middle": [],

"last": "Baldwin",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Nam Kim, Alyona Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Task 5: Automatic keyphrase extraction from scientific articles. In Se- mEval.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Logical structure recovery in scholarly articles with rich document features",
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thuy", |
|
"middle": [ |
|
"Dung" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "IJDLS (forthcoming, accepted for publication)",
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Thuy Dung Nguyen, and Min-Yen Kan. to appear. Logical structure recovery in schol- arly articles with rich document features. IJDLS. Forthcoming, accepted for publication.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Document structure analysis algorithms: a literature survey", |
|
"authors": [ |
|
{ |
|
"first": "Song", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Azriel", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapas", |
|
"middle": [], |
|
"last": "Kanungo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. SPIE Electronic Imaging", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Song Mao, Azriel Rosenfeld, and Tapas Kanungo. 2003. Document structure analysis algorithms: a lit- erature survey. In Proc. SPIE Electronic Imaging.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Keyphrase extraction in scientific publications", |
|
"authors": [

{

"first": "Thuy",

"middle": [

"Dung"

],

"last": "Nguyen",

"suffix": ""

},

{

"first": "Min-Yen",

"middle": [],

"last": "Kan",

"suffix": ""

}

],

"year": 2007,
|
"venue": "ICADL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyphrase extraction in scientific publications. In ICADL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Keyphrase distribution over different logical structures computed from the 144 training documents. The type counts of author-assigned (ath), reader-assigned (rder) and combined (comb) keyphrases are shown. Sent indicates the number of sentences in each LS. The Den column gives the density of keyphrases for each LS.", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: Performance of individual features (on fulltext) added separately to the base set F 1,4 .</td></tr><tr><td colspan=\"2\">in Table 4. The results indicate that while fulltext obtains the best performance with F 3,6,5 added, using full 1 shows superior performance at 28.18% F-score with F 3,6 added. Hence, we have identified our best feature set as F 1,3,4,6 .</td></tr><tr><td/><td>fulltext full 1</td></tr><tr><td>base (F 1,4 )</td><td>23.42% 22.60%</td></tr><tr><td>+ F 3,6</td><td>25.88% 28.18%</td></tr><tr><td>+ F 3,6,5</td><td>26.21% 26.21%</td></tr><tr><td>+ F 3,6,5,13</td><td>24.90% 26.21%</td></tr><tr><td>+ F 3,6,5,16</td><td>24.24% 26.70%</td></tr><tr><td>+ F 3,6,5,13,16</td><td>23.42% 26.70%</td></tr></table>",
|
"num": null, |
|
"text": "", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |