{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:05:08.414278Z"
},
"title": "TermEval 2020: Using TSR Filtering Method to Improve Automatic Term Extraction",
"authors": [
{
"first": "Antoni",
"middle": [],
"last": "Oliver",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Oberta de Catalunya Barcelona",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
},
{
"first": "Merc\u00e8",
"middle": [],
"last": "V\u00e0zquez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Oberta de Catalunya Barcelona",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The identification of terms from domain-specific corpora using computational methods is a highly time-consuming task because terms have to be validated by specialists. In order to improve term candidate selection, we have developed the Token Slot Recognition (TSR) method, a filtering strategy based on terminological tokens which is used to rank extracted term candidates from domain-specific corpora. We have implemented this filtering strategy in TBXTools. In this paper we present the system we have used in the TermEval 2020 shared task on monolingual term extraction. We also present the evaluation results for the system for English, French and Dutch and for two corpora: corruption and heart failure. For English and French we have used a linguistic methodology based on POS patterns, and for Dutch we have used a statistical methodology based on n-grams calculation and filtering with stop-words. For all languages, TSR (Token Slot Recognition) filtering method has been applied. We have obtained competitive results, but there is still room for improvement of the system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The identification of terms from domain-specific corpora using computational methods is a highly time-consuming task because terms have to be validated by specialists. In order to improve term candidate selection, we have developed the Token Slot Recognition (TSR) method, a filtering strategy based on terminological tokens which is used to rank extracted term candidates from domain-specific corpora. We have implemented this filtering strategy in TBXTools. In this paper we present the system we have used in the TermEval 2020 shared task on monolingual term extraction. We also present the evaluation results for the system for English, French and Dutch and for two corpora: corruption and heart failure. For English and French we have used a linguistic methodology based on POS patterns, and for Dutch we have used a statistical methodology based on n-grams calculation and filtering with stop-words. For all languages, TSR (Token Slot Recognition) filtering method has been applied. We have obtained competitive results, but there is still room for improvement of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic Term Extraction (ATE) has been considered a relevant Natural Language Processing task involving terminology since the early 1980s, due to its accurate terminology construction that can improve a wide range of tasks, such as ontology learning, computer-assisted translation or information retrieval. However, automatic term extraction methods implemented up to now usually involve extracting a large list of term candidates that has to be manually selected by specialists (Bourigault et al., 2001; Vivaldi and Rodr\u00edguez, 2001 ), a highly timeconsuming activity and a repetitive task that poses the risk of being unsystematic, and very costly in economic terms (Loukachevitch, 2012; Conrado et al., 2013; Vasiljevs et al., 2014) .",
"cite_spans": [
{
"start": 481,
"end": 506,
"text": "(Bourigault et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 507,
"end": 534,
"text": "Vivaldi and Rodr\u00edguez, 2001",
"ref_id": "BIBREF24"
},
{
"start": 669,
"end": 690,
"text": "(Loukachevitch, 2012;",
"ref_id": "BIBREF11"
},
{
"start": 691,
"end": 712,
"text": "Conrado et al., 2013;",
"ref_id": "BIBREF5"
},
{
"start": 713,
"end": 736,
"text": "Vasiljevs et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In order to achieve a more accurate term candidate selection, we implemented the Token Slot Recognition (TSR) method, a filtering strategy based on terminological tokens used to rank extracted term candidates from domain-specific corpora. The TSR filtering method has been implemented in TBXTools, a term extraction tool, and can be used both with statistical and linguistic term extraction .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The main goal of this paper is to determine whether the TSR filtering method could provide an accurate term candidate's selection from the Annotated Corpora for Term Extraction Research (ACTER) Dataset , provided by the organizers of the TermEval 2020 shared task on monolingual term extraction (Rigouts Terryn et al., 2020) . The TSR filtering method is based on reference terms to provide a precise term candidate selection.",
"cite_spans": [
{
"start": 295,
"end": 324,
"text": "(Rigouts Terryn et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This paper is structured as follows: in Section 2, the background of automatic term extraction is described. In Sections 3 and 4, the TSR filtering method and the TBXTools are described. In Section 5, the experimental part is presented. In section 6 the discussion about the obtained results is presented. The paper is concluded with some final remarks and ideas for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Under the generic name of Automatic Terminology Extraction (ATE) we can find a series of techniques and algorithms for the detection of terms in corpora. ATE programs provide a list of term candidates, that is, a set of words or group of words with high probability of being terms. Results of the ATE programs should be revised by human specialists. The methods for ATE can be classified in two main groups: (Pazienza et al., 2005 ):",
"cite_spans": [
{
"start": 408,
"end": 430,
"text": "(Pazienza et al., 2005",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
{
"text": "\u2022 Statistical methods: term extraction is performed based on statistical properties (Salton et al., 1975) and usually implies the calculation of n-grams of words and filtering them with a list of stop-words. Although the most common and easiest to implement statistical property is the term candidate frequency, a long set of statistical measures and other approaches have been developed for term candidate scoring and ranking (Evert and Krenn, 2001; V\u00e0zquez and Oliver, 2013; Astrakhantsev et al., 2015) .",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF19"
},
{
"start": 427,
"end": 450,
"text": "(Evert and Krenn, 2001;",
"ref_id": "BIBREF9"
},
{
"start": 451,
"end": 476,
"text": "V\u00e0zquez and Oliver, 2013;",
"ref_id": "BIBREF25"
},
{
"start": 477,
"end": 504,
"text": "Astrakhantsev et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
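{
"text": "To make the statistical approach concrete, the following minimal sketch (our own Python illustration, not TBXTools code) counts n-grams and discards candidates that begin or end with a stop-word, returning frequency-ranked candidates:\n\nfrom collections import Counter\n\ndef extract_candidates(tokens, stopwords, nmin=2, nmax=3, minfreq=2):\n    counts = Counter()\n    for n in range(nmin, nmax + 1):\n        for i in range(len(tokens) - n + 1):\n            ngram = tuple(tokens[i:i + n])\n            # Reject candidates that start or end with a stop-word.\n            if ngram[0] in stopwords or ngram[-1] in stopwords:\n                continue\n            counts[ngram] += 1\n    return [(\" \".join(ng), f) for ng, f in counts.most_common() if f >= minfreq]\n\ntokens = \"the interest rate and the exchange rate affect the interest rate\".split()\nprint(extract_candidates(tokens, {\"the\", \"and\"}))\n# [('interest rate', 2)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},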
{
"text": "\u2022 Linguistic methods (Bourigault, 1992) : term extraction is performed based on linguistic properties. Most of the systems use a set of predefined morphosyntactic patterns (Evans and Zhai, 1996) . After term candidates are extracted using the patterns, a set of statistical measures, the simplest of them being the frequency, are also used to rank the candidates (Daille et al., 1994) .",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "(Bourigault, 1992)",
"ref_id": "BIBREF3"
},
{
"start": 172,
"end": 194,
"text": "(Evans and Zhai, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 363,
"end": 384,
"text": "(Daille et al., 1994)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
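{
"text": "The linguistic approach can be sketched in the same way (an illustration, not TBXTools code): only n-grams whose POS sequence matches a predefined morphosyntactic pattern are kept as candidates. The patterns below are typical noun-phrase patterns of the kind discussed above:\n\nPATTERNS = {(\"NN\",), (\"JJ\", \"NN\"), (\"NN\", \"NN\"), (\"NN\", \"IN\", \"NN\")}\n\ndef match_patterns(tagged, nmin=1, nmax=3):\n    # tagged: a list of (word, POS) pairs produced by a tagger.\n    for n in range(nmin, nmax + 1):\n        for i in range(len(tagged) - n + 1):\n            window = tagged[i:i + n]\n            if tuple(tag for _, tag in window) in PATTERNS:\n                yield \" \".join(word for word, _ in window)\n\ntagged = [(\"high\", \"JJ\"), (\"interest\", \"NN\"), (\"rate\", \"NN\")]\nprint(sorted(set(match_patterns(tagged))))\n# ['high interest', 'interest', 'interest rate', 'rate']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},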
{
"text": "Most of the systems may be considered as hybrid, as they use both approaches in a higher or lesser extent (Earl, 1970) . A recent study indicates that the hybrid approaches are the most relevant, and the strategies that use noun identification, compound terms and TF-IDF metrics are the most significant (Valaski and Malucelli, 2015) .",
"cite_spans": [
{
"start": 106,
"end": 118,
"text": "(Earl, 1970)",
"ref_id": "BIBREF7"
},
{
"start": 304,
"end": 333,
"text": "(Valaski and Malucelli, 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
{
"text": "In the last few years a semantic and contextual information is used to improve term extraction systems. The first one involves using lexical semantic categories from an external lexical source of the corpus, such as WordNet (Miller, 1995) . The second one involves extracting the semantic categories of the words from the same corpus through contextual elements that refer to the syntacticsemantic combination of words (Velardi et al., 2001 ). Recently, external semantic resources are also used for building ontologies in the medical domain (Bouslimi et al., 2016) .",
"cite_spans": [
{
"start": 224,
"end": 238,
"text": "(Miller, 1995)",
"ref_id": "BIBREF12"
},
{
"start": 419,
"end": 440,
"text": "(Velardi et al., 2001",
"ref_id": "BIBREF23"
},
{
"start": 542,
"end": 565,
"text": "(Bouslimi et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
{
"text": "As already mentioned, with any of these methods we are able to detect a set of term candidates, that is, units with a high chance of being real terms. After the automatic procedure, manual revision must be performed in order to select the real terms from the list of term candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic terminology extraction",
"sec_num": "2."
},
{
"text": "To get a more accurate term candidate selection from specialized corpora, we implemented the Token Slot Recognition (TSR) method (V\u00e0zquez and Oliver, 2018) , a filtering strategy which uses terminological units to rank extracted term candidates from domain-specific corpora.",
"cite_spans": [
{
"start": 129,
"end": 155,
"text": "(V\u00e0zquez and Oliver, 2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},
{
"text": "The algorithm is based on the concept of terminological token (a token or word of a term) to filter out term candidates. Thus, an unigram term is formed by a token that can be the first token of a term (FT) or the last token of a term (LT) depending on the language, a bigram term is formed by FT LT, a trigram term is formed by FT MT LT (where MT is the middle token of a term), and a tetragram term is formed by FT MT1 MT2 LT. In general, an n-gram term is formed by FT MT1 [..] MTn-2 LT. For example: for English, a unigram term like \"rate\" can be considered an LT unit as it can also be part of a bigram term like \"interest rate\". However, a term like \"interest\" can be considered either an LT unit, such as \"vested interest\", or an FT, like \"interest rate\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},
{
"text": "The algorithm reads the terminological tokens from a list of already known terms and stores them taking into account its position in the terminological unit (first, middle, last). As a list of already known terms a terminological database for the language and subject can be used. If no terminological database is available, a first terminology extraction without TSR filtering can be performed to create a small set of terms to use for TSR filtering. TSR filtering can be performed iteratively to enrich the set of already known terms to use in the next TSR filtering process. Thus, the TSR method filters term candidates by taking into account their tokens. To do so, two filtering variants are designed: strict and flexible filtering. In strict TSR filtering, a term candidate will be kept only if all the tokens are present in the corresponding position. In flexible TSR filtering, a term candidate will be kept if any of the tokens is present in the corresponding position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},
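{
"text": "The two variants can be sketched as follows (our own illustration, not the TBXTools implementation; unigram reference terms are treated here as LT units, which is a simplifying assumption):\n\ndef slots(term_tokens):\n    # Map each token of a known term to its slot: FT, MT or LT.\n    if len(term_tokens) == 1:\n        return {(\"LT\", term_tokens[0])}\n    pairs = {(\"FT\", term_tokens[0]), (\"LT\", term_tokens[-1])}\n    pairs |= {(\"MT\", t) for t in term_tokens[1:-1]}\n    return pairs\n\ndef tsr_keep(candidate, known_slots, strict=True):\n    cand = candidate.split()\n    test = all if strict else any  # strict: every slot known; flexible: any slot known\n    checks = [(\"FT\", cand[0]) in known_slots, (\"LT\", cand[-1]) in known_slots]\n    checks += [(\"MT\", t) in known_slots for t in cand[1:-1]]\n    return test(checks)\n\nknown = slots(\"interest rate\".split()) | slots(\"exchange rate\".split())\nprint(tsr_keep(\"interest rate\", known, strict=True))   # True: FT and LT both known\nprint(tsr_keep(\"inflation rate\", known, strict=True))  # False: 'inflation' unseen as FT\nprint(tsr_keep(\"inflation rate\", known, strict=False)) # True: 'rate' is a known LT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},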
{
"text": "The algorithm performs this filtering process recursively, that is, by enlarging the list of terminological tokens with the new selected term candidates. In strict mode this is not possible, because all the validated candidates are formed with already known terminological tokens. With flexible filtering it is possible to extract new terminological units, as the candidates are validated if they have a terminological unit in any position. Furthermore, we designed a combined TSR filtering variant. In combined TSR filtering, strict filtering is first used and is then followed by flexible filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},
{
"text": "Using flexible and combined TSR filtering variants the term candidates are processed in each iteration in descending order of frequency. If a term candidate is not filtered out, this is stored in the output stack following that order. Since the process is recursive in these filtering strategies, the term candidates filtered out in the previous iteration are processed again in descending order of frequency in the following iterations. The process is repeated until no new terminological tokens are detected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},
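{
"text": "The iterative behaviour of the flexible variant can be sketched as follows (again an illustration under simplified assumptions: only first and last tokens are tracked, and the token sets are enriched as candidates are accepted):\n\ndef flexible_pass(candidates, first, last):\n    kept, rest = [], []\n    for cand in candidates:  # candidates arrive in descending order of frequency\n        toks = cand.split()\n        if toks[0] in first or toks[-1] in last:\n            kept.append(cand)\n            first.add(toks[0])  # enrich the terminological token sets\n            last.add(toks[-1])\n        else:\n            rest.append(cand)\n    return kept, rest\n\nfirst, last = {\"interest\"}, {\"rate\"}\ncandidates = [\"credit default\", \"credit swap\", \"interest swap\"]\noutput = []\nwhile True:\n    kept, candidates = flexible_pass(candidates, first, last)\n    if not kept:\n        break  # stop when a pass detects no new terminological tokens\n    output.extend(kept)\nprint(output)  # ['interest swap', 'credit swap', 'credit default']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Slot Recognition filtering method",
"sec_num": "3."
},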
{
"text": "TBXTools ) is a Python class that provides methods to perform a set of terminology extraction and management tasks. Using this class, Python programs performing state-of-the art terminology extraction tasks can be written with few lines of code. A completely new version of TBXTools have been developed. The old version stored most of the data in memory and this provoked memory problems when working with large corpora. The new version of TBXTools uses a SQLite database to store all the data of a given terminology extraction project, allowing us to work with very big corpora in standard computers with no memory restrictions. Using this database we can open again a project, and we can continue to work in the project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
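{
"text": "The design idea can be sketched with Python's standard sqlite3 module (a simplified illustration; this is not the actual TBXTools schema): counts live on disk instead of in memory, so a project file can be closed and reopened later without recomputation.\n\nimport sqlite3\n\ncon = sqlite3.connect(\"project.sqlite\")  # created on first use, reopened afterwards\ncon.execute(\"CREATE TABLE IF NOT EXISTS ngrams (ngram TEXT PRIMARY KEY, freq INTEGER)\")\n\ndef add_ngram(ngram):\n    # Upsert; requires SQLite 3.24 or later.\n    con.execute(\"INSERT INTO ngrams VALUES (?, 1) \"\n                \"ON CONFLICT(ngram) DO UPDATE SET freq = freq + 1\", (ngram,))\n\nfor ng in [\"interest rate\", \"interest rate\", \"exchange rate\"]:\n    add_ngram(ng)\ncon.commit()\nprint(con.execute(\"SELECT * FROM ngrams ORDER BY freq DESC\").fetchall())\n# [('interest rate', 2), ('exchange rate', 1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},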
{
"text": "To use TBXTools a Python3 interpreter 1 should be installed on the computer. As the interpreter is available for most operating systems, TBXTools can be used in Linux, Windows and Mac.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
{
"text": "A sample script to perform statistical terminology extraction over the corpus corpus.txt, using bigrams and trigrams, and filtering with stopwords (stop-eng.txt) is shown below. Term candidates are stored in candidates.txt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
{
"text": "from TBXTools import * e=TBXTools() e.create_project(\"project.sqlite\",\"eng\") e.load_sl_corpus(\"corpus.txt\") e.ngram_calculation(nmin=2,nmax=3) e.load_sl_stopwords(\"stop-eng.txt\") e.statistical_term_extraction() e.save_term_candidates(\"candidates.txt\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
{
"text": "The use of TBXTools is very easy but some minimal knowledge of Python is required. In the near future a graphical user interface providing the main functionalities will be developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
{
"text": "TBXTools holds a free licence (GNU GPL) and can be downloaded from its Sourceforge page 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TBXTools, a term extraction tool",
"sec_num": "4."
},
{
"text": "We have participated in the TermEval 2020 shared task on monolingual term extraction in order to provide an accurate term candidate's selection in three languages (English, French and Dutch) and two domain-specific corpora (Corruption and Heart failure) using the ACTER Dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1."
},
{
"text": "We report in the sections below the results we have obtained for the Corruption corpora, a manually created corpora with the help of the Dutch DGT of the European Commission; and Heart failure corpora, a manually collected corpora based on a corpus of titles (Hoste et al., 2019) . Both corpora are part of the ACTER Dataset.",
"cite_spans": [
{
"start": 259,
"end": 279,
"text": "(Hoste et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1."
},
{
"text": "\u2022 For English and French corpora: linguistic strategy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 For Dutch corpora: statistical strategy For all the strategies and language pairs a TSR filtering method has been performed. To use TSR filtering a reference terminological glossary should be used. The IATE 3 database has been used in the experiments. We have downloaded the TBX file and used the IATExtract.jar program provided to get a subset for the subjects LAW and HEALTH for the three working languages. Then, for each language we have selected the full form terms with a confidence score of 3 or higher. In Table 1 the number of terms for each reference glossary can be observed.",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
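{
"text": "We performed this selection with the IATExtract.jar tool. Purely as an illustration, the same kind of filtering could be sketched in Python over a TBX export; the element and attribute names below (termEntry, langSet, tig, and the termNote types termType and reliabilityCode) follow common TBX usage and are assumptions about the export format:\n\nimport xml.etree.ElementTree as ET\n\nXML_LANG = \"{http://www.w3.org/XML/1998/namespace}lang\"\n\ndef tbx_terms(path, lang, min_reliability=3):\n    for entry in ET.parse(path).getroot().iter(\"termEntry\"):\n        for langset in entry.iter(\"langSet\"):\n            if langset.get(XML_LANG) != lang:\n                continue\n            for tig in langset.iter(\"tig\"):\n                notes = {n.get(\"type\"): (n.text or \"\") for n in tig.iter(\"termNote\")}\n                # Keep full form terms with a reliability of 3 or higher.\n                if (notes.get(\"termType\") == \"fullForm\"\n                        and notes.get(\"reliabilityCode\", \"0\").isdigit()\n                        and int(notes[\"reliabilityCode\"]) >= min_reliability):\n                    yield tig.findtext(\"term\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},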
{
"text": "The linguistic strategy has been performed in the following steps. In Figure 2 the scripts used for each step are shown:",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 Corpus tagging has been performed using Freeling (Padr\u00f3 and Stanilovsky, 2012) through its Python API. \u2022 Automatic learning of POS patterns: Using the tagged corpus and the list of reference terms, a set of POS patterns are automatically learnt. TBXTools can provide a list of learnt patterns along with its frequency, that is, the number of terms that can be detected with the given POS pattern. In Figure 1 an example of the learnt patterns is shown. These patterns are manually revised and some of them are dropped. To decide whether to accept or reject a pattern we take into account its frequency and the examples of extracted terms that can be retrieved using TBXTools. In Table 2 the number of automatically learnt and accepted patterns are shown.",
"cite_spans": [
{
"start": 51,
"end": 80,
"text": "(Padr\u00f3 and Stanilovsky, 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 402,
"end": 410,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
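{
"text": "Pattern learning can be sketched as follows (an illustration of the idea, not the exact TBXTools algorithm): every tagged n-gram whose surface form matches a reference term contributes its POS sequence as a candidate pattern, and pattern frequencies are accumulated for the manual revision step.\n\nfrom collections import Counter\n\ndef learn_patterns(tagged, reference_terms, nmax=5):\n    patterns = Counter()\n    for n in range(1, nmax + 1):\n        for i in range(len(tagged) - n + 1):\n            window = tagged[i:i + n]\n            if \" \".join(w for w, _ in window) in reference_terms:\n                patterns[tuple(t for _, t in window)] += 1\n    return patterns\n\ntagged = [(\"money\", \"NN\"), (\"laundering\", \"NN\"), (\"of\", \"IN\"), (\"assets\", \"NNS\")]\nprint(learn_patterns(tagged, {\"money laundering\", \"assets\"}))\n# Counter({('NNS',): 1, ('NN', 'NN'): 1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},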
{
"text": "\u2022 Linguistic terminology extraction and TSR filtering: the terminology extraction is performed using the tagged corpus and the accepted POS patterns. An additional step of filtering using stop-words and a step of nested terms detection are performed. For English a list of 399 stop-words is used and for French a list of 352 stop-words. As a last step, a combined TSR filtering using the IATE reference terms is performed. As a result, a list of term candidates is obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "The script for statistical automatic terminology extraction performed for Dutch can be observed in Figure 3 :",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 N-gram calculation (with n from 1 to 5) and filtering wit stop-words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 Case normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 Nested terms detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "\u2022 Dropping some term candidates using a rejection regular expressions list. This list usually includes combinations of .+ (any character) \\w+ (combinations of word characters, that is [a-zA-Z0-9\\_], \\W+ (combinations of non word characters) and [0-9]+ (numbers). Each element of the regular expression will be matched against each component of the given n-gram. For example, the regular expression .+ \\W+ would reject any bigram with the second element containing one or more non-word characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
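{
"text": "A sketch of this rejection step (our own illustration): each rejection pattern is a sequence of regular expressions matched component-wise against the n-gram, and a candidate is dropped when every component matches.\n\nimport re\n\nREJECT = [(\".+\", r\"\\W+\"), (\"[0-9]+\",)]  # e.g. drop 'word + punctuation' bigrams and bare numbers\n\ndef rejected(ngram_tokens):\n    for pattern in REJECT:\n        if len(pattern) == len(ngram_tokens) and all(\n                re.fullmatch(p, tok) for p, tok in zip(pattern, ngram_tokens)):\n            return True\n    return False\n\nprint(rejected([\"rate\", \"%\"]))         # True: matches '.+ \\W+'\nprint(rejected([\"2019\"]))              # True: matches '[0-9]+'\nprint(rejected([\"interest\", \"rate\"]))  # False: kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},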
{
"text": "\u2022 TSR filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two different strategies have been used:",
"sec_num": null
},
{
"text": "The number of term candidates obtained for each language and corpus are shown in Table 3 . The evaluation of the results has been performed using the term list provided by the organizers of the task. As no detection of named entities is done in our scripts, the sets of terms including named entities are used. In Table 4 the number of tokens of each corpus along with the number of terms are shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 314,
"end": 321,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and evaluation",
"sec_num": "5.2."
},
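{
"text": "For clarity, precision, recall and F1 at a cutoff k are computed as in the following sketch (a standard evaluation, written here as an illustration):\n\ndef prf_at(candidates, gold, k):\n    top = set(candidates[:k])\n    tp = len(top & gold)  # true positives among the top-k candidates\n    p = tp / min(k, len(candidates))\n    r = tp / len(gold)\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1\n\ngold = {\"interest rate\", \"exchange rate\", \"inflation\"}\nranked = [\"interest rate\", \"rate\", \"exchange rate\", \"market\"]\nprint(prf_at(ranked, gold, 2))  # (0.5, 0.333..., 0.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and evaluation",
"sec_num": "5.2."
},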
{
"text": "1,001 740 358 Heart failure 1,066 900 744 As the TSR filtering method aims to filter and resort term candidates with a high likelihood to be terms in the top positions, for each corpus and language, we show the evaluation results for subsets of the list of candidates: the top 100, 200, 500 and 1,000 (when the number of candidates is higher than 1,000). The last row of the Table of results shows the overall values.",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 392,
"text": "Table of results",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "In Table 6 the evaluation values for the Corruption corpus for English are shown. As we can observe, best values of precision are achieved for the top positions: 37% of precision for the top 100 candidates, whereas we achieve 26.4% for the overall set (position 1001). But values of recall and F 1 show that top candidates results are very low, because we are getting fewer candidates than the current number of terms in the corpus. To illustrate this benefits of using TSR filtering, in Table 5 Results for the Corruption corpus for French have a similar behaviour (see Table 7 ), but we tend to get lower precision but higher recall for all the evaluation positions. The overall results for French achieves lower precision but higher recall, yielding to almost exact F 1 value. Table 8 ), where we achieve worse values both of precision (11.5%) and recall (3,2%), yielding to a very low value of F 1 (0.05). It may suggest that the statistical methodology doesn't work well for this language.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 488,
"end": 495,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 571,
"end": 578,
"text": "Table 7",
"ref_id": "TABREF10"
},
{
"start": 780,
"end": 787,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "In Tables 9, 10 and 11 we can observe the values for the Heart failure corpus. These values are the one that have been compared with other participants in the shared task.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 15,
"text": "Tables 9, 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "In general, if we compare the results for the Corruption corpus and the Heart failure corpus we observe a higher",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "Corpus tagging:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "from TBXTools import * extractor=TBXTools() extractor.create project(\"ACTER-corruption-ling-eng.sqlite\",\"eng\",overwrite=True) extractor.load sl corpus(\"corpus-en.txt\") extractor.start freeling api(\"en\") extractor.tag freeling api() extractor.save sl tagged corpus(\"corpus-tagged-en.txt\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "Automatic learning of POS patterns from TBXTools import * extractor=TBXTools() extractor.create project(\"learnpatterns.sqlite\",\"eng\",overwrite=True) extractor.load sl tagged corpus(\"corpus-tagged-en.txt\") extractor.load evaluation terms(\"IATE-LAW-eng.txt\",nmin=1,nmax=5) extractor.tagged ngram calculation(nmin=1,nmax=5,minfreq=1) extractor.learn linguistic patterns(\"learnt-patterns-eng.txt\",representativity=100)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "Linguistic terminology extraction and TSR filtering:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "from TBXTools import * extractor=TBXTools() extractor.create project(\"linguistic-tsr.sqlite\",\"eng\",overwrite=True) extractor.load sl tagged corpus(\"corpus-tagged-en.txt\") extractor.load linguistic patterns(\"clean-patterns-eng.txt\") extractor.tagged ngram calculation(nmin=1,nmax=5,minfreq=2) extractor.load sl stopwords(\"stop-eng.txt\") extractor.linguistic term extraction(minfreq=2) extractor.nest normalization(verbose=False) extractor.tsr(\"IATE-LAW-eng.txt\",type=\"combined\",max iterations=100) extractor.save term candidates(\"candidates-linguistic-tsr-eng.txt\",minfreq=2,show measure=True) Figure 2 : Steps and scripts for linguistic terminology extraction from TBXTools import * extractor=TBXTools() extractor.create project(\"statistical-tsr-nld.sqlite\",\"nld\",overwrite=True) extractor.load sl corpus(\"corpus-nl.txt\") extractor.ngram calculation(nmin=1,nmax=5,minfreq=2) extractor.load sl stopwords(\"stop-nld.txt\") extractor.load sl exclusion regexps(\"regexps.txt\") extractor.statistical term extraction(minfreq=2) extractor.case normalization(verbose=True) extractor.nest normalization(verbose=True) extractor.regexp exclusion() extractor.tsr(\"IATE-HEALTH-nld.txt\",type=\"combined\",max iterations=100) extractor.save term candidates(\"candidates-tsr-nld.txt\",minfreq=2,show measure=True) The difference in the results between languages can be explained by the different strategies used. For English and French corpora we have used linguistic terminology extraction obtaining better results. Results for English and French are comparable, and the differences between them can be produced by different factors: the precision of the tagger for each language, the number of POS tags in the tagset for each language, French having a higher number of tags. This fact can make the revision of the automatically learnt patterns more difficult.",
"cite_spans": [],
"ref_spans": [
{
"start": 593,
"end": 601,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "The different results obtained for the two corpora, Corruption and Heart failure, can be due to several factors. Although the size of the corpora for every subject and every language is almost equal, the number of different terms in Heart failure is higher. For example, for English the Corruption corpus has 45,218 tokens and 1,174 terms, whereas the Heart failure corpus has almost the same number of tokens (45,665) but more than twice number of terms (2,585). The IATE reference terms used for the Token Slot Recognition filtering for Heart failure is almost twice the number of terms used for Corruption (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 614,
"end": 622,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus eng fra nld Corruption",
"sec_num": null
},
{
"text": "The experimental results confirm that the combined TSR filtering method we have implemented to identify terms from Corruption and Heart failure domain-specific corpora is productive in terms of precision than recall for all three languages. As for Corruption domain the best results are obtained for English and as for Heart failure the best results are obtained for French. To apply the TSR filtering strategy we have use IATE glossaries for law and health. These glossaries are domain-specific, but for broader domains than the corpora. Results obtained could be enhanced using more specific reference glossaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "The low results obtained for Dutch may be explained by the statistical methodology used. We decided to use statistical terminology extraction because the tagger we use, Freeling, is not available for Dutch. In further experiments we plan to use any available Dutch tagger, as for example TreeTagger 4 (Schmid, 1994) or Frog 5 (Bosch et al., 2007) . We will adapt the output of these taggers to the TBXTools format for tagged corpora and perform a linguistic terminology extraction.",
"cite_spans": [
{
"start": 301,
"end": 315,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF20"
},
{
"start": 326,
"end": 346,
"text": "(Bosch et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "In the TermEval 2020 shared task on monolingual term extraction we have implemented the combined TSR filtering method using TBXTools in order to extract the highest number of terms from Corruption and Heart failure corpora from the ACTER Dataset. This methodology uses tokens from already known terms, in this case from IATE glossaries, to search term candidates containing some tokens related to the subject of the corpora. The process is iterative and the list of terminological tokens can be enriched in each iteration, allowing the discovery of completely new terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7."
},
{
"text": "The results obtained from the shared task can confirm that the combined TSR filtering method is suitable for term candidates extraction in any domain-specific corpora. Moreover, the TSR filtering method results would have been better if the reference terms had been more closely associated with the subject corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7."
},
{
"text": "As a future work, we plan to test the TSR filtering method with larger corpora and in other languages and domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7."
},
{
"text": "https://www.python.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.cis.uni-muenchen.de/\u02dcschmid/ tools/TreeTagger/ 5 https://languagemachines.github.io/frog/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Methods for automatic term recognition in domain-specific text collections: A survey. Programming and Computer Software",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Astrakhantsev",
"suffix": ""
},
{
"first": "D",
"middle": [
"G"
],
"last": "Fedorenko",
"suffix": ""
},
{
"first": "D",
"middle": [
"Y"
],
"last": "Turdakov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "41",
"issue": "",
"pages": "336--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Astrakhantsev, N. A., Fedorenko, D. G., and Turdakov, D. Y. (2015). Methods for automatic term recognition in domain-specific text collections: A survey. Program- ming and Computer Software, 41(6):336-349.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An efficient memory-based morphosyntactic tagger and parser for dutch",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Busser",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Canisius",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2007,
"venue": "LOT Occasional Series",
"volume": "7",
"issue": "",
"pages": "191--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bosch, A. v. d., Busser, B., Canisius, S., and Daelemans, W. (2007). An efficient memory-based morphosyntac- tic tagger and parser for dutch. LOT Occasional Series, 7:191-206.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bourigault",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jacquemin",
"suffix": ""
},
{
"first": "M.-C",
"middle": [],
"last": "Homme",
"suffix": ""
}
],
"year": 2001,
"venue": "Recent Advances in Computational Terminology",
"volume": "",
"issue": "",
"pages": "page iix--xviii",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bourigault, D., Jacquemin, C., and L'Homme, M.-C. (2001). Introduction. In Recent Advances in Compu- tational Terminology, page iix-xviii, Amsterdam, The Netherlands. John Benjamins.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Surface grammatical analysis for the extraction of terminological noun phrases",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bourigault",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th Conference on Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "977--981",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bourigault, D. (1992). Surface grammatical analysis for the extraction of terminological noun phrases. In Pro- ceedings of the 14th Conference on Computational Lin- guistics -Volume 3, COLING '92, pages 977-981, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A medical collaboration network for medical image analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bouslimi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Akaichi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gaith Ayadi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hedhli",
"suffix": ""
}
],
"year": 2016,
"venue": "Network Modeling Analysis in Health Informatics and Bioinformatics",
"volume": "5",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bouslimi, R., Akaichi, J., Gaith Ayadi, M., and Hedhli, H. (2016). A medical collaboration network for medical image analysis. In Network Modeling Analysis in Health Informatics and Bioinformatics, 5, pages 1-11.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exploration of a rich feature set for automatic term extraction",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Conrado",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "S",
"middle": [
"O"
],
"last": "Rezende",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Artificial Intelligence and Its Applications",
"volume": "",
"issue": "",
"pages": "342--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conrado, M. S., Pardo, T., and Rezende, S. O. (2013). Exploration of a rich feature set for automatic term ex- traction. In Advances in Artificial Intelligence and Its Applications, Lecture Notes in Computer Science, page 342-354, Berlin, Heidelberg. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards automatic extraction of monolingual and bilingual terminology",
"authors": [
{
"first": "B",
"middle": [],
"last": "Daille",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "J.-M",
"middle": [],
"last": "Lang\u00e9",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "515--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daille, B., Gaussier, E., and Lang\u00e9, J.-M. (1994). Towards automatic extraction of monolingual and bilingual termi- nology. In Proceedings of the 15th Conference on Com- putational Linguistics -Volume 1, COLING '94, pages 515-521, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Experiments in automatic extracting and indexing",
"authors": [
{
"first": "L",
"middle": [
"L"
],
"last": "Earl",
"suffix": ""
}
],
"year": 1970,
"venue": "Information Storage and Retrieval",
"volume": "6",
"issue": "4",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Earl, L. L. (1970). Experiments in automatic extract- ing and indexing. Information Storage and Retrieval, 6(4):313 -330.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Noun-phrase analysis in unrestricted text for information retrieval",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL '96",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. A. and Zhai, C. (1996). Noun-phrase analysis in unrestricted text for information retrieval. In Proceed- ings of the 34th Annual Meeting on Association for Com- putational Linguistics, ACL '96, pages 17-24, Strouds- burg, PA, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Methods for the qualitative evaluation of lexical association measures",
"authors": [
{
"first": "S",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Krenn",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "188--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evert, S. and Krenn, B. (2001). Methods for the qualita- tive evaluation of lexical association measures. In Pro- ceedings of the 39th Annual Meeting on Association for Computational Linguistics, page 188-195. AWERProce- dia Information Technology Computer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The trade-off between quantity and quality. comparing a large web corpus and a small focused corpus for medical terminology extraction",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vanopstal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rigouts Terryn",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lefever",
"suffix": ""
}
],
"year": 2019,
"venue": "Across Languages and Cultures",
"volume": "",
"issue": "",
"pages": "197--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoste, V., Vanopstal, K., Rigouts Terryn, A., and Lefever, E. (2019). The trade-off between quantity and quality. comparing a large web corpus and a small focused cor- pus for medical terminology extraction. In Across Lan- guages and Cultures, pages 197-211.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic term recognition needs multiple evidence",
"authors": [
{
"first": "N",
"middle": [
"V"
],
"last": "Loukachevitch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)",
"volume": "",
"issue": "",
"pages": "2401--2407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Loukachevitch, N. V. (2012). Automatic term recognition needs multiple evidence. In Proceedings of the 8th Inter- national Conference on Language Resources and Evalu- ation (LREC 2012), page 2401-2407.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. (1995). Wordnet: a lexical database for en- glish. In Communications of the ACM, 38, page 39-41.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Tbxtools: a free, fast and flexible tool for automatic terminology extraction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Oliver",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "V\u00e0zquez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "473--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver, A. and V\u00e0zquez, M. (2015). Tbxtools: a free, fast and flexible tool for automatic terminology extraction. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 473- 479.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "TBXTools: A free, fast and flexible tool for automatic terminology extraction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Oliver",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "V\u00e0zquez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "473--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver, A. and V\u00e0zquez, M. (2015). TBXTools: A free, fast and flexible tool for automatic terminology extrac- tion. In Proceedings of Recent Advances in Natural Lan- guage Processing (RANLP-2015), pages 473-479.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Freeling 3.0: Towards wider multilinguality",
"authors": [
{
"first": "L",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Stanilovsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Language Resources and Evaluation Conference (LREC 2012)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Padr\u00f3, L. and Stanilovsky, E. (2012). Freeling 3.0: To- wards wider multilinguality. In Proceedings of the Lan- guage Resources and Evaluation Conference (LREC 2012), Istanbul, Turkey, May. ELRA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Terminology extraction: an analysis of linguistic and statistical approaches",
"authors": [
{
"first": "M",
"middle": [
"T"
],
"last": "Pazienza",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "F",
"middle": [
"M"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2005,
"venue": "Knowledge mining",
"volume": "",
"issue": "",
"pages": "255--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pazienza, M. T., Pennacchiotti, M., and Zanzotto, F. M. (2005). Terminology extraction: an analysis of linguistic and statistical approaches. In Knowledge mining, pages 255-279. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "No uncertain terms: A dataset for monolingual and multilingual automatic term extraction from comparable corpora. Language Resources and Evaluation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rigouts Terryn",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lefever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rigouts Terryn, A., Hoste, V., and Lefever, E. (2019). No uncertain terms: A dataset for monolingual and multilin- gual automatic term extraction from comparable corpora. Language Resources and Evaluation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Termeval 2020: Shared task on automatic term extraction using the annotated corpora for term extraction research (acter) dataset",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rigouts Terryn",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Drouin",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lefever",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Computational Terminology CompuTerm 2020",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rigouts Terryn, A., Drouin, P., Hoste, V., and Lefever, E. (2020). Termeval 2020: Shared task on automatic term extraction using the annotated corpora for term extrac- tion research (acter) dataset. In Proceedings of Compu- tational Terminology CompuTerm 2020, COMPUTERM 2020, pages 1-4, Paris, France. European Language Re- sources Association.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A theory of term importance in automatic text analysis",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "C.-S",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "C",
"middle": [
"T"
],
"last": "",
"suffix": ""
}
],
"year": 1975,
"venue": "Journal of the American society for Information Science",
"volume": "26",
"issue": "1",
"pages": "33--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G., Yang, C.-S., and Yu, C. T. (1975). A theory of term importance in automatic text analysis. Journal of the American society for Information Science, 26(1):33- 44.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (1994). Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, Manchester, UK.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Approaches and strategies to extract relevant terms: How are they being applied?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Valaski",
"suffix": ""
},
{
"first": "R",
"middle": [
"S"
],
"last": "Malucelli",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Artificial Intelligence (ICAI 2015)",
"volume": "",
"issue": "",
"pages": "478--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valaski, J., R. S. and Malucelli, A. (2015). Approaches and strategies to extract relevant terms: How are they being applied? In Proceedings of the International Conference on Artificial Intelligence (ICAI 2015), page 478-484, San Diego, USA. The Steering Committee of the World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Service model for semi-automatic generation of multilingual terminology resources",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vasiljevs",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pinnis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gornostay",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Terminology and Knowledge Engineering Conference",
"volume": "",
"issue": "",
"pages": "67--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasiljevs, A., Pinnis, M., and Gornostay, T. (2014). Ser- vice model for semi-automatic generation of multilin- gual terminology resources. In Proceedings of the Ter- minology and Knowledge Engineering Conference, page 67-76.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Identification of relevant terms to support the construction of domain ontologies",
"authors": [
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Missikoff",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Workshop on Human Language Technology and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velardi, P., Missikoff, M., and Basili, R. (2001). Iden- tification of relevant terms to support the construction of domain ontologies. In Proceedings of the Workshop on Human Language Technology and Knowledge Man- agement, pages 1-8, Morristown, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving term extraction by combining different techniques",
"authors": [
{
"first": "J",
"middle": [],
"last": "Vivaldi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Rodr\u00edguez",
"suffix": ""
}
],
"year": 2001,
"venue": "In Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication",
"volume": "",
"issue": "",
"pages": "31--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivaldi, J. and Rodr\u00edguez, H. (2001). Improving term ex- traction by combining different techniques. In Terminol- ogy. International Journal of Theoretical and Applied Is- sues in Specialized Communication, page 31-48, Ams- terdam, The Netherlands. John Benjamins.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving term candidate validation using ranking metrics",
"authors": [
{
"first": "M",
"middle": [],
"last": "V\u00e0zquez",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Oliver",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 3rd World Conference on Information Technology (WCIT-2012)",
"volume": "",
"issue": "",
"pages": "1348--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00e0zquez, M. and Oliver, A. (2013). Improving term can- didate validation using ranking metrics. In Proceedings of the 3rd World Conference on Information Technology (WCIT-2012), page 1348-1359. AWERProcedia Infor- mation Technology Computer.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving term candidates selection using terminological tokens",
"authors": [
{
"first": "M",
"middle": [],
"last": "V\u00e0zquez",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Oliver",
"suffix": ""
}
],
"year": 2018,
"venue": "In Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication",
"volume": "",
"issue": "",
"pages": "122--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00e0zquez, M. and Oliver, A. (2018). Improving term can- didates selection using terminological tokens. In Termi- nology. International Journal of Theoretical and Applied Issues in Specialized Communication, pages 122-147, Amsterdam, The Netherlands. John Benjamins.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Example of automatically learnt patterns.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>228 |#|NN</td></tr><tr><td>112 |#|JJ |#|NN</td></tr><tr><td>40 |#|JJ #||NNS</td></tr><tr><td>36 |#|NN |#|IN |#|NN</td></tr><tr><td>32 |#|NN |#|NN</td></tr></table>",
"html": null,
"text": "Number of terms in the reference glossaries"
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Number of learnt and accepted POS patterns."
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Corpus</td><td>lang tokens terms</td></tr><tr><td>Corruption</td><td>eng 45,218 1,174</td></tr><tr><td>Corruption</td><td>fra 50,403 1,217</td></tr><tr><td>Corruption</td><td>nld 47,288 1,295</td></tr><tr><td>Heart failure</td><td>eng 45,665 2,585</td></tr><tr><td>Heart failure</td><td>fra 46,626 2,423</td></tr><tr><td>Heart failure</td><td>nld 47,734 2,257</td></tr></table>",
"html": null,
"text": "Number of term candidates"
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Number of tokens and terms"
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">Position Precision Recall F1</td></tr><tr><td>100</td><td>0.23</td><td>0.02</td><td>0.036</td></tr><tr><td>200</td><td>0.205</td><td>0.035</td><td>0.06</td></tr><tr><td>300</td><td>0.207</td><td>0.053</td><td>0.084</td></tr><tr><td>400</td><td>0.21</td><td>0.072</td><td>0.107</td></tr><tr><td>500</td><td>0.21</td><td>0.089</td><td>0.125</td></tr><tr><td>600</td><td>0.22</td><td>0.112</td><td>0.149</td></tr><tr><td>700</td><td>0.22</td><td>0.131</td><td>0.164</td></tr><tr><td>800</td><td>0.212</td><td>0.145</td><td>0.172</td></tr><tr><td>1000</td><td>0.2</td><td>0.17</td><td>0.184</td></tr><tr><td>2395</td><td>0.151</td><td>0.307</td><td>0.202</td></tr></table>",
"html": null,
"text": "we offer results of term candidates extraction without filtering for the corruption English corpus."
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">: Evaluation results: Corruption English with no</td></tr><tr><td>TSR filtering</td><td/><td/><td/></tr><tr><td colspan=\"4\">Position Precision Recall F1</td></tr><tr><td>100</td><td>0.37</td><td>0.032</td><td>0.058</td></tr><tr><td>200</td><td>0.36</td><td>0.061</td><td>0.105</td></tr><tr><td>500</td><td>0.336</td><td>0.143</td><td>0.201</td></tr><tr><td>1000</td><td>0.264</td><td>0.225</td><td>0.243</td></tr><tr><td>1001</td><td>0.264</td><td>0.225</td><td>0.243</td></tr></table>",
"html": null,
"text": ""
},
"TABREF8": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": ""
},
"TABREF10": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Evaluation results: Corruption French</td></tr><tr><td>The situation is different for Corruption corpus in Dutch</td></tr><tr><td>(see</td></tr></table>",
"html": null,
"text": ""
},
"TABREF12": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">: Evaluation results: Corruption Dutch</td></tr><tr><td colspan=\"4\">are obtained again for Dutch, but results are much better</td></tr><tr><td colspan=\"4\">than results obtained from Corruption corpus (29% vs.</td></tr><tr><td colspan=\"4\">11.5% of precision and 9.6% vs. 3.2% of recall).</td></tr><tr><td colspan=\"4\">Position Precision Recall F1</td></tr><tr><td>100</td><td>0.35</td><td>0.014</td><td>0.026</td></tr><tr><td>200</td><td>0.435</td><td>0.034</td><td>0.062</td></tr><tr><td>500</td><td>0.43</td><td>0.083</td><td>0.139</td></tr><tr><td>1000</td><td>0.347</td><td>0.134</td><td>0.194</td></tr><tr><td>1066</td><td>0.343</td><td>0.142</td><td>0.2</td></tr></table>",
"html": null,
"text": ""
},
"TABREF13": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">Position Precision Recall F1</td></tr><tr><td>100</td><td>0.37</td><td>0.015</td><td>0.029</td></tr><tr><td>200</td><td>0.375</td><td>0.031</td><td>0.057</td></tr><tr><td>500</td><td>0.384</td><td>0.079</td><td>0.131</td></tr><tr><td>900</td><td>0.363</td><td>0.135</td><td>0.197</td></tr></table>",
"html": null,
"text": "Evaluation results: Heart failure English"
},
"TABREF14": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">Position Precision Recall F1</td></tr><tr><td>100</td><td>0.44</td><td>0.019</td><td>0.037</td></tr><tr><td>200</td><td>0.385</td><td>0.034</td><td>0.063</td></tr><tr><td>500</td><td>0.352</td><td>0.078</td><td>0.128</td></tr><tr><td>744</td><td>0.29</td><td>0.096</td><td>0.144</td></tr></table>",
"html": null,
"text": "Evaluation results: Heart failure French"
},
"TABREF15": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Evaluation results: Heart failure Dutch"
}
}
}
}