|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:05:02.142878Z" |
|
}, |
|
"title": "TermEval 2020: RACAI's automatic term extraction system", |
|
"authors": [ |
|
{ |
|
"first": "Vasile", |
|
"middle": [], |
|
"last": "P\u0103i\u0219", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Romanian Academy CASA ACADEMIEI", |
|
"location": { |
|
"addrLine": "13 \"Calea 13 Septembrie\"", |
|
"postCode": "050711", |
|
"settlement": "Bucharest", |
|
"country": "ROMANIA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Romanian Academy CASA ACADEMIEI", |
|
"location": { |
|
"addrLine": "13 \"Calea 13 Septembrie\"", |
|
"postCode": "050711", |
|
"settlement": "Bucharest", |
|
"country": "ROMANIA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes RACAI's automatic term extraction system, which participated in the TermEval 2020 shared task on English monolingual term extraction. We discuss the system architecture, some of the challenges that we faced as well as present our results in the English competition.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes RACAI's automatic term extraction system, which participated in the TermEval 2020 shared task on English monolingual term extraction. We discuss the system architecture, some of the challenges that we faced as well as present our results in the English competition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic term extraction, also known as ATE, is a wellknown task within the domain of natural language processing. Given a text (this can be either a fragment or an entire corpus), an automatic term extractor system will produce a list of terms (single or multiword expressions) characteristic for the domain of text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Felber, in the \"Terminology Manual\" (Felber, 1984) , defines a term as \"any conventional symbol representing a concept defined in a subject field\". Nevertheless, considering current practice in natural language processing tasks, it is not always possible to give a general definition applicable for the workings of a term extractor. One question is whether or not to include named entities as part of the identified terms. This problem is also raised by the organizers of the TermEval 2020 shared task, each system being evaluated twice, once including and once excluding named entities 1 . Furthermore, since named entity recognizers can be trained on many classes (such as diseases or chemicals for example), another potential question is what kinds of entities (if any) can be included as part of the identified terms. However, an agreement must be made that all identified terms must be specific to the domain of the analyzed text, regardless of inclusion or not of named entities. For example, in the shared task's provided training dataset, the named entity \"United States Dressage Federation\" is included as a term in the \"equestrian\" section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 50, |
|
"text": "(Felber, 1984)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The present paper presents our attempt at constructing an automatic term extraction system in the context of the TermEval 2020 shared task on monolingual term extraction (Rigouts Terryn et al., 2020) . We start by presenting related research, then continue with the description of our system and finally present concluding remarks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 199, |
|
"text": "(Rigouts Terryn et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The usefulness of the term identification process is both in its own use, such as creation of document indices, and as a pre-processing step in other more advanced processes, such as machine translation. Furthermore, the output produced by an automatic system can be manually validated by a human user in order to remove irrelevant terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Traditional approaches for ATE (Kageura, 1998 ) make use of statistical features such as word frequency or \"termhood\" (degree of relatedness of a proposed term to the domain) metrics. Additionally, information such as part of speech can be used to further filter candidate terms. Term formalization attempts can be identified in the literature as early as e.g. 1996, when Frantzi and Ananiadou (1996) defined C-value as a basic measure of termhood, a principle we have also used in one of our algorithms. In this section, we will briefly mention the inner workings of some existing term extraction algorithms that we used in our term extraction system. For a detailed coverage of this rather vast sub-domain of NLP, the reader is referred to e.g. Pazienza et al. (2005) or the more recent Firoozeh et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 45, |
|
"text": "(Kageura, 1998", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 400, |
|
"text": "Frantzi and Ananiadou (1996)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 769, |
|
"text": "Pazienza et al. (2005)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 789, |
|
"end": 811, |
|
"text": "Firoozeh et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
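
{

"text": "To make the C-value idea concrete, the following is a minimal Python sketch of the basic measure (our illustration, not code from the cited work): longer and more frequent candidates score higher, and frequency inside longer candidates is discounted. The '+1' length smoothing is an assumption added here so that single-word candidates receive a non-zero score.\n\nimport math\nfrom collections import defaultdict\n\ndef c_value(freq):\n    # freq: dict mapping a candidate term (a tuple of words) to its corpus frequency\n    nested_freq = defaultdict(int)   # total frequency of longer terms containing the candidate\n    nested_count = defaultdict(int)  # number of longer terms containing the candidate\n    terms = list(freq)\n    for longer in terms:\n        for shorter in terms:\n            if len(shorter) < len(longer) and contains(longer, shorter):\n                nested_freq[shorter] += freq[longer]\n                nested_count[shorter] += 1\n    scores = {}\n    for t in terms:\n        adjusted = freq[t]\n        if nested_count[t]:\n            adjusted -= nested_freq[t] / nested_count[t]  # discount nested occurrences\n        scores[t] = math.log2(len(t) + 1) * adjusted\n    return scores\n\ndef contains(longer, shorter):\n    # contiguous containment of one word tuple in another\n    n = len(shorter)\n    return any(longer[i:i + n] == shorter for i in range(len(longer) - n + 1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related work",

"sec_num": "2."

},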
|
{ |
|
"text": "TextRank (Mihalcea and Tarau, 2004 ) is a term extraction algorithm using a graph representation of the text in which each word is a node and an edge is created between words collocated within a certain window of words. Based on the number of links to each node a score is computed similar to the PageRank algorithm (Brin and Page, 1998) . Further filtering is performed based on the part of speech of the words. The graph is created based on single words. However, as the last step of the algorithm a reconstruction of multi-word terms is performed if multiple single word terms are collocated in the sentence. RAKE, an acronym for Rapid Automatic Keyword Extraction (Rose et al., 2010) , combines graph measures such as the degree (number of connected edges) with statistical measures such as word frequency. Furthermore, RAKE uses a strategy similar to TextRank for combining single words that occur together at least twice into a multi-word term. An interesting idea deriving from the RAKE paper is the importance of the stop words list used. In this context, it is mentioned that FOX (Fox, 1989) stop list produces an increase in the F1 score for the RAKE algorithm. An improvement over the initial RAKE algorithm is described in Gupta et al. (2016) . Campos et al. (2020) present YAKE, which makes use of statistical features. According to their analysis 2 it is comparable or even better in some cases to previous stateof-the-art methods. In the HAMLET system (Rigouts Terryn et al., 2019) a number of 152 features are computed on each candidate term and a binary decision tree classifier is trained. Candidates are determined based on their part of speech, but the patterns of occurrence are determined automatically based on training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 34, |
|
"text": "(Mihalcea and Tarau, 2004", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 337, |
|
"text": "(Brin and Page, 1998)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 668, |
|
"end": 687, |
|
"text": "(Rose et al., 2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1089, |
|
"end": 1100, |
|
"text": "(Fox, 1989)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1235, |
|
"end": 1254, |
|
"text": "Gupta et al. (2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1257, |
|
"end": 1277, |
|
"text": "Campos et al. (2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
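
{

"text": "As an illustration of the graph-based approach, the sketch below (ours, simplified relative to the original TextRank) links words co-occurring within a fixed window and ranks them with PageRank; the window size and part-of-speech filter are illustrative choices.\n\nimport networkx as nx\n\ndef textrank_keywords(tagged_tokens, window=4, top_k=10):\n    # tagged_tokens: list of (word, pos) pairs; keep only nouns and adjectives\n    words = [w.lower() for w, pos in tagged_tokens if pos.startswith(('NN', 'JJ'))]\n    graph = nx.Graph()\n    for i, w in enumerate(words):\n        for v in words[i + 1:i + window]:\n            if v != w:\n                graph.add_edge(w, v)  # edge between words collocated in the window\n    scores = nx.pagerank(graph)  # PageRank over the co-occurrence graph\n    return sorted(scores, key=scores.get, reverse=True)[:top_k]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related work",

"sec_num": "2."

},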
|
{ |
|
"text": "The dataset proposed for the TermEval task is described in detail in the task paper (Rigouts Terryn et al., 2020) . However, several aspects must be mentioned. It is comprised of 4 domains: wind energy ('wind'), corruption ('corp'), horse dressage ('equi') and heart failure ('hf'). The first 3 domains were provided with annotations for training purposes, while the heart failure domain was used for testing. All the domains were made available in English, French and Dutch.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 113, |
|
"text": "(Rigouts Terryn et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "For the purposes of our experiments, we focused on the English version of the corpus. However, we tried to keep our algorithms independent of the actual language being used. Towards this end, we used only resources normally available for many languages, such as annotations and stop words, and did not create any rules or patterns specific to the English language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "One of the primary processing operations was to annotate the corpus with part-of-speech and lemma information. For this purpose, we used Stanford CoreNLP (Manning et al., 2014) . Furthermore, we precomputed statistical indicators based on the corpus, such as n-gram frequency, document frequency and letters used (in some cases terms contained non-English letters). Statistics were computed for both the corpus and the provided training annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 176, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
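
{

"text": "The pre-computed indicators are plain counts over the tokenized corpus; a minimal sketch follows (the tokenization itself, done with CoreNLP in our pipeline, is assumed to have already happened):\n\nfrom collections import Counter\n\ndef corpus_statistics(documents, max_n=3):\n    # documents: one list of tokens per file in the domain sub-corpus\n    ngram_freq = Counter()  # corpus-wide n-gram frequency\n    doc_freq = Counter()    # number of documents containing each n-gram\n    letters = set()         # character inventory (terms may contain non-English letters)\n    for tokens in documents:\n        seen = set()\n        for n in range(1, max_n + 1):\n            for i in range(len(tokens) - n + 1):\n                gram = tuple(t.lower() for t in tokens[i:i + n])\n                ngram_freq[gram] += 1\n                seen.add(gram)\n        doc_freq.update(seen)  # each document counts once per n-gram\n        letters.update(c for t in tokens for c in t)\n    return ngram_freq, doc_freq, letters",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset and basic processing",

"sec_num": "3."

},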
|
{ |
|
"text": "Unfortunately, the corpus is not balanced with respect to the different domains. Therefore, some statistical indicators may be less meaningful. For example, the corruption part of the corpus contains 12 annotated texts with an additional 12 texts provided without annotations. However, the equestrianism part contains 34 annotated text files and 55 unannotated documents. Furthermore, the evaluation section on heart failure contains 190 files. This seems to suggest that indicators like document frequency (the number of documents containing a certain word/expression) may be more meaningful for certain sections and less meaningful for others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "More statistics regarding the English domains of the corpus are presented in Table 1 One of the characteristics specific only to the wind energy section of the corpus is the presence of mathematical formulas in some of the files. We could not identify an easy way to automatically remove them and did not want to manually perform this action. For example, \"CP\" is considered a term and it also appears in some formulas. Furthermore, there are lines of text presumably between formulas which look similar to a formula, like \"CP ,max CT CTr\" or full lines of text containing embedded formulas. Even more, the term \"PCO2\", indicated in the gold annotations, seems to only appear inside a formula (\"PCO2 = TCO2 -HCO2 PCO2\"). Therefore, in order to avoid removal of potentially useful portions of text, the files were used as they were provided.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Given these discrepancies between the different domain sub-corpora, it was our assumption, from the beginning, that different algorithms will obtain different results on each of the domains. Therefore, we started first by analyzing the results provided by known algorithms on the training parts of the corpus. These results are presented in Tables 2, 3 , 4 and are compared against the provided annotations with named entities included. In these tables, the algorithm with the best F1 score in each section is marked in bold. The \"1W\" specification besides an algorithm denotes the score for single word terms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 352, |
|
"text": "Tables 2, 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
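
{

"text": "The scores reported below compare an extracted term list against the gold annotations; a minimal sketch of such an evaluation, assuming exact, lowercased matching between term lists:\n\ndef evaluate(extracted, gold):\n    # extracted, gold: iterables of term strings\n    extracted = {t.lower() for t in extracted}\n    gold = {t.lower() for t in gold}\n    tp = len(extracted & gold)  # true positives: exact matches\n    p = tp / len(extracted) if extracted else 0.0\n    r = tp / len(gold) if gold else 0.0\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset and basic processing",

"sec_num": "3."

},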
|
{ |
|
"text": "In accordance with our previous observation, because of the imbalances between the different sections of the corpus, from Table 2 it can easily be seen that most of the algorithms perform better on the \"equi\" section and worse on the other sections. In some cases, there are even extreme differences. For example, the YAKE implementation gives on multi-word expressions an F1 score of 22.3 on the \"equi\" section and only 5.94 on the \"wind\" section. This is improved for single word expressions with 12% on the \"equi\" section and less then 3% for the other sections. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 129, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and basic processing", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Looking at the above tables, two observations can be made: a) no single system performs best on all three sections; b) systems tend to balance precision and recall, but in extreme cases they prefer either precision (for example the YAKE method in \"corp\" and \"wind\" sections) or recall (for example the RAKE method).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "A first idea that we explored was to implement a voting mechanism between the systems. However, the results presented only slight improvements. Without a complete and in-depth analysis, we concluded that each system was good at identifying certain terms (based on their pattern of occurrence) but performing badly for other terms. Therefore, we decided to extend the basic system and implement additional algorithms that would try to complement and extend the previous ones, by using new methods and finally use the same voting mechanism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The first algorithm, PLEARN (from \"pattern learn\") is trying to identify patterns based on statistics computed on the train set annotations and their appearance in context. We used the following features: letters accepted in annotations (for example there is no term using \",\"), stop words accepted at start or end of a term (for example there is no term starting or ending with \"and\"), stop words accepted inside multi word terms, stop words accepted before or after a term (for example \"and\" usually is not contained within a term but rather it separates two distinct terms, thus appearing before or after a term), suffixes of words other than stop words present in terms (usually we tend to find nouns as terms, but we tried not to impose this condition, thus we only checked the suffixes of words).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
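
{

"text": "A simplified illustration of how such patterns can be collected from the gold annotations (ours, not the released PLEARN implementation; the three-character suffix length is an assumption):\n\ndef learn_patterns(gold_terms, stop_words):\n    # gold_terms: list of annotated terms, each a list of word tokens\n    allowed_chars = set()   # characters observed inside terms\n    boundary_stops = set()  # stop words seen at the start or end of a term\n    inner_stops = set()     # stop words seen inside multi-word terms\n    suffixes = set()        # word-final trigrams of non-stop words\n    for term in gold_terms:\n        allowed_chars.update(c for w in term for c in w)\n        for i, w in enumerate(term):\n            lw = w.lower()\n            if lw in stop_words:\n                if i == 0 or i == len(term) - 1:\n                    boundary_stops.add(lw)\n                else:\n                    inner_stops.add(lw)\n            else:\n                suffixes.add(lw[-3:])\n    return allowed_chars, boundary_stops, inner_stops, suffixes",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},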
|
{ |
|
"text": "For the purpose of the algorithm, all information was extracted automatically from the training set and no manual conditions or word lists were created. One immediate problem with the algorithm is that the training set did not provide the actual position of the term. Therefore, if the same word or multi-word expression was used both as term and as a non-term then the feature extraction part was not able to identify this case. Nevertheless, the algorithm was able to produce the good recall that we were expecting, presented in A second algorithm used a clustering approach, thus we'll refer to it as \"CLUS\" for the purposes of this paper. In this case we worked under the assumption that terms belonging to a particular domain will tend to cluster together because they will be related in meaning. In order to model this relation, we represented the words using word embeddings and used the cosine distance. For the clustering algorithm, we implemented a DBSCAN algorithm (Ester et al., 1996) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 976, |
|
"end": 996, |
|
"text": "(Ester et al., 1996)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
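
{

"text": "A minimal sketch of the clustering step, using scikit-learn's DBSCAN as a stand-in for our own implementation; eps and min_samples are illustrative values:\n\nimport numpy as np\nfrom sklearn.cluster import DBSCAN\n\ndef cluster_terms(candidates, embeddings, eps=0.4, min_samples=3):\n    # candidates: single-word term candidates; embeddings: dict word -> vector\n    vocab = [w for w in candidates if w in embeddings]\n    X = np.array([embeddings[w] for w in vocab])\n    labels = DBSCAN(eps=eps, min_samples=min_samples, metric='cosine').fit_predict(X)\n    # keep words that fall inside a dense cluster; label -1 marks noise\n    return [w for w, label in zip(vocab, labels) if label != -1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},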
|
{ |
|
"text": "The input for the clustering algorithm was composed of the terms identified by the PLEARN algorithm. From these terms we kept only the single word terms. Furthermore, we decided to use an approach similar to the one used in TextRank to compose at the end multi-word terms based on the colocation of single word terms. This last operation was done in a post-processing step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
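
{

"text": "The post-processing step can be sketched as follows (our simplified illustration): adjacent single-word terms in a sentence are merged into one multi-word candidate.\n\ndef merge_collocated(sentence, single_terms):\n    # sentence: list of tokens; single_terms: set of accepted single-word terms\n    merged, run = [], []\n    for tok in sentence + [None]:  # the None sentinel flushes the last run\n        if tok is not None and tok.lower() in single_terms:\n            run.append(tok)\n        else:\n            if len(run) > 1:\n                merged.append(' '.join(run))\n            run = []\n    return merged",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},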
|
{ |
|
"text": "For the word embedding representation we considered necessary to use a model trained on a large enough corpus to allow for words to be used in different domains, including those of interest for this work. Therefore, we decided to use a word embeddings model trained on the Open American National Corpus (Ide, 2008) . Furthermore, due to the relatively short time available for the task participation, we decided to use a pre-trained model 3 . Results are given in Table 6 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 314, |
|
"text": "(Ide, 2008)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 471, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "This algorithm already has a much better F1 score for single word terms then all the other algorithms tested. In the case of the \"wind\" section the F1 score is almost double (45.02%) then the best previous result (22.79%). Table 6 : Precision, Recall, F1 measures for the CLUS algorithm on the training parts of the corpus", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 230, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Since the CLUS algorithm works on single word terms and only in the post-processing step combines them to create multi-word terms, we decided to work on a third algorithm that would work directly with multi-word expression candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The third (and last) algorithm that we developed is called WEMBF (word embeddings filtered) and, as its name implies, uses the word embeddings vector representation of words to measure the termhood of each word. The algorithm executes the following steps: 1) Tokenizes and POS tags all text files of the specified domain of the corpus, using the NLTK Python library (Bird et al., 2009) ;", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 385, |
|
"text": "(Bird et al., 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2) Extracts all NPs from the domain sub-corpus, using simple prenominal-nominal patterns, including all prepositional phrases headed by the preposition 'of', which are almost always attached to the previous NP. Furthermore, it deletes any determiners that start NPs and removes URLs, emails, numbers and other entities considered to be irrelevant for the term extraction task;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3) For each content word (i.e. nouns, adjectives, adverbs and verbs) of each NP, computes a cosine distance between two word embeddings vectors. The first vector is obtained from training on a \"general\"-domain corpus containing news, literature, sports, etc., being careful not to include texts from the domain of interest. The second vector is obtained from training only on the domain of interest (e.g. 'wind'); 4) Score each NP by averaging the previously computed cosine distance of its member content words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
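
{

"text": "Steps 3 and 4 can be sketched as below; general_vec and domain_vec stand for the two embedding models and, like the content-tag filter, are assumptions of this illustration:\n\nimport numpy as np\n\nCONTENT_TAGS = ('NN', 'JJ', 'RB', 'VB')  # nouns, adjectives, adverbs, verbs\n\ndef cosine_distance(u, v):\n    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\ndef np_termhood(np_tagged, general_vec, domain_vec):\n    # np_tagged: list of (word, pos) pairs for one NP; average the\n    # general-vs-domain distance over its content words\n    dists = [cosine_distance(general_vec[w], domain_vec[w])\n             for w, pos in np_tagged\n             if pos.startswith(CONTENT_TAGS) and w in general_vec and w in domain_vec]\n    return sum(dists) / len(dists) if dists else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},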
|
{ |
|
"text": "Step 4 of the WEMBF algorithm gives us a preliminary term list on the assumption that the larger the cosine distance of the general and domain word embeddings vectors is, the more likely is that the word is a term in the domain of interest. However, the obtained list contains too many NPs which makes it perform poorly in terms of precision. Thus, we decided to remove some term NPs from this initial list, using the following filters: a) Only keep NPs which appear (are embedded) in other NPs from the preliminary term list (Frantzi and Ananiadou, 1996) . The number of occurrences (in other NPs) is kept for each surviving NP to be rescored later;", |
|
"cite_spans": [ |
|
{ |
|
"start": 526, |
|
"end": 555, |
|
"text": "(Frantzi and Ananiadou, 1996)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "b) Remove all single-word terms that appear as head nouns in other NPs on the assumption that if they can be modified, they are too general to be kept as terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The termhood score of each NP in the final list is modified by multiplying the following indicators: the original score of the NP, the number of words in the NP, the number of NPs in which this NP appeared.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus, if an NP has more words, it appeared in many other NPs and its average cosine distance (between the general domain and the domain of interest) of its member content words is higher, the NP is more likely to be a term.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
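
{

"text": "The rescoring is a simple product of the three indicators; a minimal sketch (the candidate triples are an assumed representation for this illustration):\n\ndef rescore(candidates):\n    # candidates: (np_words, avg_cosine_distance, nesting_count) triples, where\n    # nesting_count is how many other candidate NPs contain this NP\n    scored = [(' '.join(words), dist * len(words) * count)\n              for words, dist, count in candidates]\n    return sorted(scored, key=lambda pair: pair[1], reverse=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},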
|
{ |
|
"text": "Results of the WEMBF term extraction algorithm are given in Table 8 Table 8 . Precision, Recall, F1 measures for the WEMBF algorithm on the training parts of the corpus", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 75, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The WEMBF algorithm has a performance similar to the PLEARN algorithm for single words, even though with a more balanced precision and recall, but better performance for multi-word terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The final step in our approach was to construct an ensemble module that takes the annotations from different algorithms and combines them together via a voting scheme. This is presented schematically in Figure 1 . Each algorithm is fed into the voting module, having one vote for the final result. An exception is in the case of PLEARN and CLUS algorithms which are linked together and thus constitute a single vote.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 211, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "P%", |
|
"sec_num": null |
|
}, |
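
{

"text": "A minimal sketch of the voting module; the chained PLEARN+CLUS output enters as one voter, and the acceptance threshold is an illustrative parameter:\n\nfrom collections import Counter\n\ndef vote(voter_outputs, threshold=2):\n    # voter_outputs: one set of extracted terms per voter\n    # (the chained PLEARN+CLUS output counts as a single voter)\n    votes = Counter()\n    for terms in voter_outputs:\n        votes.update({t.lower() for t in terms})\n    return {term for term, v in votes.items() if v >= threshold}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Architecture",

"sec_num": "4."

},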
|
{ |
|
"text": "Once the test set annotations were released, we were able to evaluate our system, including all the other algorithms on the final data. When comparing this information with results based on the different training sections, we must keep in mind the peculiarities of each section of the corpus, as presented in Table 1 above. Evaluation results on the \"heart failure\" section are presented in Table 9 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 316, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 398, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System evaluation", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Our CLUS algorithm performed best on the single word terms giving an F1 score of 53.48 with balanced precision and recall. Furthermore, the PLEARN algorithm produced the best recall, which was to be expected since it was designed especially for this purpose. However, the final algorithm with the combination of all of them did perform better on the multi-word terms, this being reflected in the final F1 score. Table 9 . Precision, Recall, F1 measures of different algorithms on the evaluation set (\"heart failure\").", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 419, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System evaluation", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "This paper presented our system proposal 4 for the TermEval 2020 shared task. We started by investigating the performance of existing algorithms. Then went on and created three new algorithms: PLEARN, CLUS and WEMBF as described in section 4. Finally, we constructed an ensemble module, based on voting, which combined the results of all the algorithms in order to produce the final results. Evaluation on the \"heart failure\" dataset is presented in Table 9 above.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 450, |
|
"end": 457, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The approach behind the ACTER dataset, of building a term annotated corpus in multiple languages is very interesting and it was extremely helpful for building our automatic term extractor system. It is our hope that this or a similar approach could be used for Romanian language as well. In this context, we envisage extending our term extractor to support Romanian language and further include it in the RELATE platform (P\u0103i\u0219 et al., 2019) dedicated to processing Romanian language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 440, |
|
"text": "(P\u0103i\u0219 et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We managed to successfully use pre-trained word embeddings on a large corpus for our CLUS algorithm. This proves that transfer learning is a possibility that should be explored also in the field of term extraction. Therefore, amongst our future work we'll try to use the same approach for the Romanian language, by using pretrained word embeddings (P\u0103i\u0219 and Tufi\u0219, 2018) on the Reference Corpus of Contemporary Romanian Language (CoRoLa) (Mititelu et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 370, |
|
"text": "(P\u0103i\u0219 and Tufi\u0219, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 461, |
|
"text": "(Mititelu et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Part of this work was conducted in the context of the ReTeRom project. Part of this work was conducted in the context of the Marcell project.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "https://termeval.ugent.be/task-evaluation/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/LIAAD/yake", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://data.world/jaredfern/oanc-word-embeddings", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/racai-ai/TermEval2020", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Natural Language Processing with Python ---Analyzing Text with the Natural Language Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bird, S., Klein, E. and Loper, E. (2009). Natural Language Processing with Python ---Analyzing Text with the Natural Language Toolkit. O'Reilly Media; available online at http://www.nltk.org/book_1ed/.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Brin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Page", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "YAKE! Keyword Extraction from Single Documents using Multiple Local Features", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Campos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Mangaravite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Pasquali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jatowt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jorge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nunes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jatowt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Information Sciences Journal. Elsevier", |
|
"volume": "509", |
|
"issue": "", |
|
"pages": "257--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Campos, R., Mangaravite, V., Pasquali, A., Jatowt, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A density-based algorithm for discovering clusters in large spatial databases with noise", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kriegel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "226--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ester, M., Kriegel, H. P., Sander, J. and Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96),pp 226-231.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Terminology Manual. Paris: International Information Centre for Terminology", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Felber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felber, H. (1984). Terminology Manual. Paris: International Information Centre for Terminology.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Keyword extraction: Issues and methods", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Firoozeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nazarenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Alizon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Daille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Natural Language Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Firoozeh, N., Nazarenko, A., Alizon, F. and Daille, B. (2019). Keyword extraction: Issues and methods. Natural Language Engineering, pages 1-33, Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A stop list for general text", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Fox", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "ACM SIGIR Forum", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "19--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fox, C. (1989). A stop list for general text. ACM SIGIR Forum, vol. 24, pp. 19-21. ACM, New York, USA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Extracting Nested Collocations", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Frantzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "41--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frantzi, K. T. and Ananiadou, Sophia. (1996) Extracting Nested Collocations. In Proceedings of the 16th conference on Computational Linguistics -Volume 1, pages 41-46. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Rake-pmi automated keyphrase extraction: An unsupervised approach for automated extraction of keyphrases", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the International Conference on Informatics and Analytics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gupta, S., Mittal, N., & Kumar, A. (2016). Rake-pmi automated keyphrase extraction: An unsupervised approach for automated extraction of keyphrases. In Proceedings of the International Conference on Informatics and Analytics, pp. 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The American National Corpus: Then, Now, and Tomorrow", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Selected Proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages, Cascadilla Proceedings Project", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ide, N. (2008). The American National Corpus: Then, Now, and Tomorrow. In Michael Haugh, Kate Burridge, Jean Mulder and Pam Peters (eds.), Selected Proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages, Cascadilla Proceedings Project, Sommerville, MA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Methods of automatic term recognition", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kageura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Umino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Terminology", |
|
"volume": "3", |
|
"issue": "2", |
|
"pages": "259--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kageura, K.; Umino, B. (1998). Methods of automatic term recognition. Terminology. 3(2):259-289.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The Stanford CoreNLP Natural Language Processing Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manning, C.D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S.J. and McClosky, D. (2014). The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "TextRank: Bringing Order into Text", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Tarau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "404--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihalcea, R., Tarau, P. (2004). TextRank: Bringing Order into Text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing EMNLP 2004, pp 404-411.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The Reference Corpus of Contemporary Romanian Language (CoRoLa)", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Mititelu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tufi\u0219", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Irimia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th Language Resources and Evaluation Conference -LREC'18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mititelu, B.V., Tufi\u0219, D. and Irimia, E. (2018). The Reference Corpus of Contemporary Romanian Language (CoRoLa). In Proceedings of the 11th Language Resources and Evaluation Conference - LREC'18, Miyazaki, Japan, European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Terminology Extraction: An Analysis of Linguistic and Statistical Approaches", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pennacchiotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M ;" |
|
], |
|
"last": "Zanzotto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tufi\u0219", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Knowledge Mining. Studies in Fuzziness and Soft Computing", |
|
"volume": "185", |
|
"issue": "", |
|
"pages": "403--409", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pazienza M.T., Pennacchiotti M. and Zanzotto F.M. (2005). Terminology Extraction: An Analysis of Linguistic and Statistical Approaches. In: Sirmakessis S. (eds) Knowledge Mining. Studies in Fuzziness and Soft Computing, vol 185. Springer, Berlin, Heidelberg P\u0103i\u0219, V., Tufi\u0219, D. (2018). Computing distributed representations of words using the COROLA corpus. In Proceedings of the Romanian Academy, Series A, Volume 19, Number 2/2018, pp. 403-409.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Integration of Romanian NLP tools into the RELATE platform", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "P\u0103i\u0219", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tufi\u0219", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the International Conference on Linguistic Resources and Tools for Processing Romanian Language -CONSILR 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P\u0103i\u0219, V., Tufi\u0219, D. and Ion, R. (2019). Integration of Romanian NLP tools into the RELATE platform. In Proceedings of the International Conference on Linguistic Resources and Tools for Processing Romanian Language -CONSILR 2019, pages 181-192.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "TermEval 2020: Shared Task on Automatic Term Extraction Using the Annotated Corpora for Term Extraction Research (ACTER) Dataset", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rigouts Terryn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Drouin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of CompuTerm", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rigouts Terryn, A., Drouin, P., Hoste, V., & Lefever, E. (2020). TermEval 2020: Shared Task on Automatic Term Extraction Using the Annotated Corpora for Term Extraction Research (ACTER) Dataset. In Proceedings of CompuTerm 2020.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Analysing the Impact of Supervised Machine Learning on Automatic Term Extraction: HAMLET vs TermoStat", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rigouts Terryn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Drouin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of Recent Advances in Natural Language Processing -RANLP 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1012--1021", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rigouts Terryn, A., Drouin, P., Hoste, V., & Lefever, E. (2019). Analysing the Impact of Supervised Machine Learning on Automatic Term Extraction: HAMLET vs TermoStat. In Proceedings of Recent Advances in Natural Language Processing -RANLP 2019, pages 1012-1021, Varna, Bulgaria, Sep 2-4, 2019.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatic keyword extraction from individual documents", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Engel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Cramer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Cowley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Text mining: applications and theory", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rose, S., Engel, D., Cramer, N., & Cowley, W. (2010). Automatic keyword extraction from individual documents. Text mining: applications and theory, 1, 1- 20.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A statistical interpretation of term specificity and its application in retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Sparck", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "Journal of Documentation", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "11--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sparck Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11-21.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "RACAI's term extraction system architecture that participated in TermEval 2020", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Statistics regarding the English sections of the corpus" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">: Precision, Recall, F1 measures for tested</td></tr><tr><td colspan=\"2\">algorithms on the \"equi\" section</td><td/><td/></tr><tr><td/><td>P%</td><td>R%</td><td>F1%</td></tr><tr><td>TFIDF 1W</td><td>16.02</td><td>27.29</td><td>20.19</td></tr><tr><td>TFIDF</td><td>7.81</td><td>18.65</td><td>11.01</td></tr><tr><td>RAKE 1W</td><td>16.80</td><td>75.30</td><td>27.47</td></tr><tr><td>RAKE</td><td>12.95</td><td>65.08</td><td>21.60</td></tr><tr><td>YAKE 1W</td><td>30.94</td><td>8.57</td><td>13.42</td></tr><tr><td>YAKE</td><td>11.81</td><td>9.88</td><td>10.76</td></tr><tr><td>TRANK 1W</td><td>17.67</td><td>39.24</td><td>24.37</td></tr><tr><td>TRANK</td><td>17.05</td><td>18.40</td><td>17.70</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">: Precision, Recall, F1 measures for tested</td></tr><tr><td colspan=\"2\">algorithms on the \"corp\" section</td><td/><td/></tr><tr><td/><td>P%</td><td>R%</td><td>F1%</td></tr><tr><td>TFIDF 1W</td><td>17.30</td><td>19.96</td><td>18.54</td></tr><tr><td>TFIDF</td><td>13.18</td><td>11.60</td><td>12.34</td></tr><tr><td>RAKE 1W</td><td>13.62</td><td>58.13</td><td>22.07</td></tr><tr><td>RAKE</td><td>13.90</td><td>63.17</td><td>22.79</td></tr><tr><td>YAKE 1W</td><td>64.29</td><td>3.18</td><td>6.06</td></tr><tr><td>YAKE</td><td>12.37</td><td>3.91</td><td>5.94</td></tr><tr><td>TRANK 1W</td><td>14.57</td><td>34.81</td><td>20.54</td></tr><tr><td>TRANK</td><td>14.11</td><td>13.62</td><td>13.86</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>.</td><td/></tr><tr><td/><td>P%</td><td>R%</td><td>F1%</td></tr><tr><td>Equi 1W</td><td>21.28</td><td>87.56</td><td>34.24</td></tr><tr><td>Equi</td><td>7.96</td><td>86.22</td><td>14.57</td></tr><tr><td>Corp 1W</td><td>15.61</td><td>91.43</td><td>26.66</td></tr><tr><td>Corp</td><td>4.85</td><td>89.86</td><td>9.19</td></tr><tr><td>Wind 1W</td><td>13.37</td><td>89.93</td><td>23.28</td></tr><tr><td>Wind</td><td>5.53</td><td>88.33</td><td>10.41</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Precision, Recall, F1 measures for the PLEARN algorithm on the training parts of the corpus" |
|
} |
|
} |
|
} |
|
} |