|
{ |
|
"paper_id": "W94-0104", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:46:48.661228Z" |
|
}, |
|
"title": "Study and Implementation of Combined Techniques for Automatic Extraction of Terminology", |
|
"authors": [ |
|
{ |
|
"first": "Bdatrice", |
|
"middle": [], |
|
"last": "Daille", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "TALANA University", |
|
"location": { |
|
"addrLine": "Paris 7 Case, Place Jussieu", |
|
"postCode": "7003 2, F-75251", |
|
"settlement": "Paris Cedex 05", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "daille@linguist@fr" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents an original method and its implementation to extract terminology from corpora by combining linguistic filters and statistical methods. Starting from a linguistic study of the terms of telecommunication domain, we designed a number of filters which enable us to obtain a first selection of sequences that may be considered as terms. Various statistical scores are applied to this selection and results are evaluated. This method has been applied to French and to English, but this paper deals only with French.", |
|
"pdf_parse": { |
|
"paper_id": "W94-0104", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents an original method and its implementation to extract terminology from corpora by combining linguistic filters and statistical methods. Starting from a linguistic study of the terms of telecommunication domain, we designed a number of filters which enable us to obtain a first selection of sequences that may be considered as terms. Various statistical scores are applied to this selection and results are evaluated. This method has been applied to French and to English, but this paper deals only with French.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A terminology bank contains the vocabulary of a technical domain: terms, which refer to its concepts. Building a terminological bank requires a lot of time and both linguistic and technical knowledge. The issue, at stake, is the automatic extraction of terminology of a specific domain from a corpus. Current research on extracting terminology uses either linguistic specifications or statistical approaches. Concerning the former, [Bouriganlt, 1992] has proposed a program which extracts automatically from a corpus sequences of lexical units whose morphosyntax characterizes maximal technical noun phrases. This list of sequences is given to a terminologist to be checked. For the latter, several works ( [Lafon, 1984] , [Church and Hanks, 1990] , [Calzolari and Bindi, 1990] , [Smadja and McKeown, 1990] ) have shown that statistical scores are useful to extract collocations from corpora. The main problem with one or the other approach is the \"noise\": indeed, morphosyntactic criteria are not sufficient to isolate terms, and collocations extracted thanks to statistical methods belong to various types of associations: functional, semantical, thematical or uncharacterizable ones. Our goal is to use statistical scores for extracting tech-nical compounds only and to forget about the other types of collocations. We proceed in two steps: first, apply a linguistic filter which selects candidates from the corpus; then, apply statistical scores to rank these candidates and select the scores which fit our purpos(~ best, in other words scores that concentrate their high values to terms and their low values to co-occurrcuccs which are not terms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 450, |
|
"text": "[Bouriganlt, 1992]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 720, |
|
"text": "[Lafon, 1984]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 747, |
|
"text": "[Church and Hanks, 1990]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 777, |
|
"text": "[Calzolari and Bindi, 1990]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 806, |
|
"text": "[Smadja and McKeown, 1990]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In a first part, we therefore study the linguistic specifications on the nature of terms in the technical domain of telecommunications for French. Then, taking into account these linguistics results, we present the method and the program which extracts andcounts the candidate terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Terms are mainly multi-word units of nominal type that could be characterized by a range of morphological, syntactic or semantic properties. The main property of nominal terms is the morphosyntactic one: its str,cture belongs to well-known morphosyntactic structures such asN ADJ, N1 de N2, etc. that have been studied by [Mathieu-Colas, 1988] for French. Some graphic indications (hyphen), morphological indications (restrictious in flexion) and syntactic ones (absence of determiners) could also be good clues that a noun phrase is a term. We have also employed a semantic criteria: the criterion of unique referent. A term refers to an unique and universal concept. However, it is not obvious to apply this criterion to a technical domain where we are not expert. So, we have interpreted the criterion of unique referent by the one of unique translation. A French term is always identically translated, mostly by a compound or a simple noun in English. We have extracted mare,ally terms following these criteria from our bilingual corpus, available in French and English, the Satellite Communication Handbook (SCH) containing 200 000 words in ,,ach language. Then, we have classified terms following their lengths; the length of a term is defined as the numb,,r of main items it contains. 1 From this classification, it at)p('ars that terms of length 2 are by far the most frcquenl, ones. As statistical methods ask for a good rcl)rcsentatiou in number of the samples, we decided to extract in a first round only terms of length 2 that we will call base-term which matched a list of previously Of course, terms exist whose length is greater than 2. But the majority of terms of length greater than 2 are created recursively from base-terms. We have distinguished three operations that lead to a term of length 3 from a term of length 1 or 2: \"overcomposition\", modificatio,, and coordination. We illustrate now these operations with a few examples where the base-terms appear inside brackets:", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 343, |
|
"text": "[Mathieu-Colas, 1988]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic specifications", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Two kinds of overcomposition have been pointed out: ow'rcomposition by juxtaposition and overcomposition by substitution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. Ovcrcomposltion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A term obtained by juxtaposition is built with at least one base-term whose structure will not be altered. The example below illustrate the juxtaposition of a base-term and a simple noun: Orthographic variants concern N1 PREP N2 structure. For this structure, the number of N2 is generally fixed, either singular or plural. However, we 2In this case, the length of the term is equal to 4 have encountered some exceptions: rdseau(x) ~ satellite, rgseaux(x) fi satellites (satellite network(s)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(a) .I u xtaposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ni PREPI [N2 PREP2 N3] modulation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(a) .I u xtaposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Morphosyntactic variants refer to the presence or not of an article before the N2 in the N1 PREP N~ structure: ligne d'abonng, lignes de l'abonng (subscriber lines), to the optional character of the preposition: tension hdlice, tension d'hdlice (helix voltage) and to synonymy relation between two base-terms of different structures: for example N ADJ and N1 d N2: rgseau commutd, rgseau d commutation (switched network) 3. Elliptical variants A base-term of length 2 could be called up by an elliptic form: for example: ddbit which is used instead of dgbit binaire (bit rate).", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 260, |
|
"text": "(helix voltage)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 420, |
|
"text": "(switched network)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphosyntactic variants", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "After this linguistic investigation, we decide to concentrate on terms of length 2 (base-terms) which seem by far the most frequent ones. Moreover, the majority of terms whose length is greater than 2 are built from base-terms. A statistical approach requires a good sampling that base-terms provide. To filter base-terms from the corpus, we use their morphosyntaetic structures. For this task, we need a tagged corpus where each item comes with its part-of-speech and its lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphosyntactic variants", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The part-of-speech is used to filter and the lemma to obtain an optimal sampling. We have use the stochastic tagger and the lemmatizer of the Scientific Center of IBM-France developed by the speech recognition team ([Ddrouault, 1985] and [E1-B~ze, 19931).", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 233, |
|
"text": "([Ddrouault, 1985]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphosyntactic variants", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We now face a choice: we can either isolate collocations using statistics and then apply linguistic filters, or apply linguistic filters and then statistics. It is the latter strategy that has been adopted: indeed, the former asks for the use of a window of an arbitrary size; if you take a small window size, you will miss a lot of occurrences, mainly morphosyntactic variants, base-terms modified by an inserted modifier, very frequent in French, and coordinated base-terms; if you take a longer one, you will obtain occurrences that do not refer to the same conceptual entity, a lot of ill-formed sequences which do not characterizes terms, and moreover wrong frequency counts as several short sequences are masked by only one long sequence. Using first linguistic filters based on part-of-speech tags appears as the best solution. Moreover, as patterns that characterizes base-terms can be described by regular expressions, the use of finite automata seems a natural way to extract and count the occurrences of the candidate base-terms. The frequency counts of the occurrences of the candidate terms are crucial as they are the parameters of the statistical scores. A wrong frequency count implies wrong or not relevant values of statistical scores. The objective is to optimize the count of base-terms occurrences and to minimize the count of incorrect oc-currences. Graphical, orthographic and xnorpho.sy.t;wtic variants of base-terms (except synomymic varbmt,~) are taken into account as well as some syntactic variations that affect the base-terms structure: coordhmtion and insertion of modifiers. Coordimttion of two base-terms rarely leads to the creation of a new tcrnt of length greater than 2, so it is reasonable to thi.k that the sequence gquipements de or a modified base-term, namely antenne de rgception modified by the inserted adjective parabolique. On one hand, we don't want to extract terms of length greater than 2, but on the other hand, it is not possible to ignore adjective insertion. So, we have chosen to accept insertion of adjective inside N1 PREP N~ structure. This choice implies the extraction of terms of length 3 of N 1 ADJ PREP N2 structure that are considered as terms of length 2. However, such cases are rare and the majority of N1 ADJ PREP N2 sequences refer to a N1 PREP N2 base-term modified by an adjective. Each occurrence of a base-terms is counted equally; we consider that there is equiprobability of the term appearance in the corpus. The occurrences of morphological sequences which characterize base-terms are classified under pairs: a pair is composed of two main items in a fixed order and collects all the sequences where the two lemmas of the pair appear in one of the allowed morphosyntactic patterns; for example, the se- Pairs sorted accordin to fr__g...~,._ ", |
|
"cite_spans": [ |
|
{ |
|
"start": 1767, |
|
"end": 1769, |
|
"text": "de", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic filters", |
|
"sec_num": null |
|
}, |
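The filtering step described above lends itself to a small illustration. The sketch below is an editorial addition, not the authors' implementation: it scans a POS-tagged, lemmatized token list with a few allowed tag patterns and counts candidate (lemma1, lemma2) pairs of main items. The tag names N, ADJ, PREP, DET, the pattern list and the toy sentence are all assumptions.

```python
from collections import Counter

# Hypothetical tagged input: (word form, lemma, part-of-speech) triples, as produced
# by any tagger/lemmatizer; tags and examples are illustrative assumptions.
tagged = [
    ("réseau", "réseau", "N"), ("à", "à", "PREP"), ("satellites", "satellite", "N"),
    ("liaison", "liaison", "N"), ("par", "par", "PREP"), ("satellite", "satellite", "N"),
    ("station", "station", "N"), ("spatiale", "spatial", "ADJ"),
]

# Allowed tag patterns for base-terms; an inserted ADJ or DET is tolerated,
# mirroring the syntactic variations accepted in the text.
PATTERNS = [
    ("N", "ADJ"),                # N ADJ
    ("N", "PREP", "N"),          # N1 PREP N2
    ("N", "PREP", "DET", "N"),   # N1 PREP DET N2
    ("N", "ADJ", "PREP", "N"),   # N1 ADJ PREP N2 (adjective insertion)
]

def extract_pairs(tokens):
    """Count (lemma1, lemma2) pairs whose tag sequence matches an allowed pattern."""
    counts = Counter()
    tags = [tag for _, _, tag in tokens]
    for i in range(len(tokens)):
        for pattern in PATTERNS:
            j = i + len(pattern)
            if tuple(tags[i:j]) == pattern:
                # the two main items are the first and last tokens of the match
                counts[(tokens[i][1], tokens[j - 1][1])] += 1
    return counts

print(extract_pairs(tagged))
# Counter({('réseau', 'satellite'): 1, ('liaison', 'satellite'): 1, ('station', 'spatial'): 1})
```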
|
{ |
|
"text": "The problem to solve now is to discover which statistical score is the best to isolate terms among our list of Candidates. So, we compute several measures: frequencies, association criteria, Shannon diversity and distance scores. All these measures could not be used for the same purpose: frequencies are the parameters of the association criteria, association criteria propose a conceptual sort of the couples, and Shannon diversity an<i distance measures are not discriminatory scores but provide other types of informations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Statistics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From a statistical point of view, the two lemmas of a pair could be considered as two qualitative variables whose link has to be tested. A contingency table is defined for each pair (Li, Lj):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frequencies and Association criteria", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I II Lj I Lj, with j' # j Li, with i' \u00a2 i c d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frequencies and Association criteria", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where: A property of these scores is that their values increase with the strength of the bond of the lemmas. We have tried out several scores (more than ten) including IM, \u2022 2 and Loglike and we have sorted the pairs following the score value. Each score proposes a conceptual sort of the pairs. This sort, however, could put at the top of the list compounds that belong to general language rather than to the telecommunication domain. As we want to obtain a list of telecommunication terms, it is essential to evaluate the correlation between the score values and the pairs and to find out which scores are the best to extract terminology. Therefore, we compare the values obtained for each score to a reference list of the domain. We have used the terminology data bank of the EEC, telecommunication section, which has been elaborated by experts. This evaluation has been done for 2 200 French pairs3of N1 de (DEW) N 2 structure extracted from our corpus SCH (200 000 words). Each score provides as a result a list where the candidates are sorted following the score value. We have defined equivalence classes which generally collect 50 successive pairs of the list. The results of a score are represented graphically thanks to an histogram in which the x-axis represents the pairs sorted according to the score value, and y-axis the ratio of the number of pairs belonging to the reference list divided by the number of pairs per equivalence class, i.e. generally 50 pairs. If all the pairs of an equivalence class belong to the ref-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frequencies and Association criteria", |
|
"sec_num": null |
|
}, |
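As an illustration of the scores just named, the sketch below (an editorial addition) computes the association ratio, the Phi-square coefficient and the Loglike coefficient from the 2x2 contingency counts a, b, c, d, following the formulas reproduced as (1)-(3) in this section; the toy counts are assumptions, not figures from the paper.

```python
import math

def association_scores(a, b, c, d):
    """Scores for one pair from the 2x2 contingency cells a, b, c, d.

    Note: the association ratio is coded literally as reproduced in the text;
    some presentations use relative frequencies or multiply a by the grand total,
    which shifts the value by a constant.
    """
    im = math.log2(a / ((a + b) * (a + c)))                                # association ratio (1)
    phi2 = (a * d - b * c) ** 2 / ((a + b) * (a + c) * (b + c) * (b + d))  # Phi-square (2)
    xlx = lambda x: x * math.log(x) if x > 0 else 0.0                      # convention 0 log 0 = 0
    loglike = (xlx(a) + xlx(b) + xlx(c) + xlx(d)
               - xlx(a + b) - xlx(a + c) - xlx(b + d) - xlx(c + d))        # Loglike (3)
    return im, phi2, loglike

# Toy counts (assumed): the pair occurs 20 times; its first lemma occurs 50 times
# and its second lemma 40 times over a total of 2200 pairs.
print(association_scores(20, 30, 20, 2130))
```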
|
|
{ |
|
"text": "erence list, we obtain the maximum ratio of 1; if none of the pairs appear in the reference list, the minimum ratio of 0 is reached. The ideal score should assign its high values (resp. low) to good (resp. bad) pairs, i.e. candidates which belong (resp. which don't belong) to the reference list. In other words, the histogram of the ideal score should assign to equivalence classes containing the high values (resp. low values) of the score a ratio close to 1 (resp. 0). We are not going to present here all the histograms obtained (see [Daille, 1994] ). All of 3Only pairs which appear at least twice in the corpus have been retained. them show a general growing trend that confirm that the score values increase with the strength of the bond of the ]emma. However, the growth is more or less clear, with more or less sharp variations. The most beautiSd histogram is the simple frequency of the pair (see Figure 1) . This histogram shows that more frequent tile pair is, the more likely the pair is a term. Frequency is the most significant score to detect terms of a technical domain. This results contradicts numerous results of lexical resources, which claim that association criteria are more significant than frequency: for example, all the most frequent pairs whose terminological status is undoubted share low values of association ratio (formula 1) as for example rdseau d satellites (satellite network} IM=2.57, liaison par satellite (satellite link) IM=2.72, circuit tglgphonique (telephone circuit )IM=3.32, station spatiale (space station) IM=l.17 etc. Tile remaining problem with the sort proposed by frequency is that it integrates very quickly bad candidates, i.e. pairs which are not terms. So, we have preferred to elect the Loglike coefficient (formula 3) the best score. Indeed, Loglike coefficient which is a real statistical test, takes into account the pair frequency but accepts very little noise for high values. To give an element of comparison, the first bad candidate with frequency for the general pattern N1 (PREP (DEW)) N2 is the pair (cas, transmission) which appears in 56th place; this pair, which is also the first bad candidate with Loglike, appears in 176th place. We give in figure 2 the topmost 11 french pairs sorted by the Loglike coefficient (Logl) (Nbc is the number of the pair occurrences and IM the value of association ratio).", |
|
"cite_spans": [ |
|
{ |
|
"start": 538, |
|
"end": 552, |
|
"text": "[Daille, 1994]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 907, |
|
"end": 916, |
|
"text": "Figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Frequencies and Association criteria", |
|
"sec_num": null |
|
}, |
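The evaluation procedure described above (sorting candidates by a score, grouping them into equivalence classes of 50 pairs, and measuring the proportion found in the reference list) can be sketched as follows. This is an editorial illustration; the scores and the reference set are invented, and the helper name histogram_ratios is an assumption.

```python
# Minimal sketch: one ratio per equivalence class, between 0 and 1.
def histogram_ratios(scored_pairs, reference, class_size=50):
    """scored_pairs: dict {(lemma1, lemma2): score}; reference: set of reference pairs."""
    ranked = sorted(scored_pairs, key=scored_pairs.get, reverse=True)
    ratios = []
    for start in range(0, len(ranked), class_size):
        block = ranked[start:start + class_size]
        ratios.append(sum(pair in reference for pair in block) / len(block))
    return ratios

scores = {("réseau", "satellite"): 83.0, ("liaison", "satellite"): 60.2,
          ("cas", "transmission"): 3.1}
reference = {("réseau", "satellite"), ("liaison", "satellite")}
print(histogram_ratios(scores, reference, class_size=2))  # [1.0, 0.0]
```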
|
{ |
|
"text": "Diversity has been introduced by [Shannon, 1948] and chara,~terizes the marginal distribution of the lemma of a pair through the range of pairs . Its computation uses a contingency table of length n: we give below as an ,xample the contingency table which is .ouns with regards to a given adjective. These distributio,s arc called \"marginal distributions\" of the nouns and the adjectives for the N ADJ structure. Diversity is computed for each lemma appearing in a pair, using the fornmla: We note H1, diversity of the first lemma of a pair and !t2 diversity of the second lemma. We take into account the diversity normalized by the number of occurrences of the pairs:", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 48, |
|
"text": "[Shannon, 1948]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 260, |
|
"text": ". Its computation uses a contingency table of length n: we give below as an ,xample the contingency table which is", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Diversity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Hi = nbi. log hi. -~ nblj log nblj", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Diversity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Hi hi --", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Diversity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "nij Hj hj = -nij The normalized diversities hi and h2 are defined from Ill and H2. 'l'h~, normalized diversity provides interesting informations about the distribution of the pair lemmas in the set of pairs. A lemma with a high diversity means that it appears in several pairs in equal proportion; conw'rscly, a lemma which appear only in one pair owns a zero diversity (minimal value) and this, whatever is the frequency of the pair. High values of hi applied to the pairs of N ^DJ structure characterizes nouns that could l)c seen as key-words of the domain: r#sean (network), s~gnal, antenne (antenna), satellite. Conversely, high values of h~ applied to the pairs of N ADJ structure characterizes adjectives which do not take part to base-MWVs as n&essaire (necessary), suivant (following), important, different (various) , tel (such), etc. The pairs with a zero diversity on one of their lemma receive high values of association ratio and other association criteria and a non-definite value of Loglike coefficient. However, the diversity is more precise because it indicates if the two lemmas appear only together as for (ocEan, indien) (indian ocean) (Hl=hl=H2=h2=0), or if not, which of the two lemmas appear only with the other, as for (r&,eau, maill~) (mesh network) (H2=hz=0), where the adjective mailM apl:wears only with rdseau or for (\u00a2odeur, id&al) (ideal coder) (Hi=hi=0) where the noun codeur appears only with the adjective ideal. Other examples are: Oh, salomon) (solomon island), (h~lium, gazeux) (helium gas), (suppresscur, bzho) (echo suppressor). These pairs collects many frozen compounds and collocations of the current language. In future work, we will investigate how to incorporate the nice results provided by diversity into an automatic extraction algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 795, |
|
"end": 825, |
|
"text": "important, different (various)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Diversity", |
|
"sec_num": null |
|
}, |
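A small sketch of the diversity computation (formulas 4 and 5) may help; it is an editorial addition with invented pair counts, not figures from the paper.

```python
import math
from collections import Counter, defaultdict

# Toy pair counts (assumed) for pairs of N ADJ structure: (noun, adjective) -> nb_ij.
pair_counts = Counter({("onde", "progressif"): 5, ("onde", "porteur"): 7,
                       ("réseau", "maillé"): 4, ("océan", "indien"): 3})

def diversities(counts):
    """Return the normalized diversities h1 (first lemma) and h2 (second lemma)."""
    left, right = defaultdict(int), defaultdict(int)
    for (l1, l2), nb in counts.items():          # marginal counts nb_i. and nb_.j
        left[l1] += nb
        right[l2] += nb
    H1, H2 = {}, {}
    for (l1, l2), nb in counts.items():
        # H_i = nb_i. log nb_i. - sum_j nb_ij log nb_ij (formula 4), symmetrically for H_j
        H1.setdefault(l1, left[l1] * math.log(left[l1]))
        H1[l1] -= nb * math.log(nb)
        H2.setdefault(l2, right[l2] * math.log(right[l2]))
        H2[l2] -= nb * math.log(nb)
    # formula (5): normalization by the marginal counts
    return ({l: H1[l] / left[l] for l in H1}, {l: H2[l] / right[l] for l in H2})

h1, h2 = diversities(pair_counts)
print(round(h1["onde"], 3), h1["réseau"], h2["maillé"])   # e.g. 0.679 0.0 0.0
```

As in the text, a lemma that occurs in a single pair (réseau, maillé here) gets a zero diversity regardless of how frequent that pair is.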
|
{ |
|
"text": "French base-terms often accept modifications of their internal structure as it has been demonstrated previously. Each time, an occurrence of a pair is extracted and counted, two distances are computed: the number of items Dist and the number of main items MDist which occur between the two lemmas. Then, for each couple, the mean and the variance of the number of items and main items are computed. The variance formula is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance Measures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "v(x) = =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance Measures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The distance measures bring interesting informations which concern the morphosyntactic variations of the base-terms, but they don't allow to take a decision upon the status of term or non-term of a candidate. A pair which has no distance variation, whatever is the distance, is or is not a term; we give now some examples of pairs which have no distance variations and which are not terms: paire de signal (a pair of signaO, type d'antenne (a type off antenna), organigramme de la figure (diagram of the figure) , etc. We illustrate below how the distance measures allow to attribute to a pair its elementary type automatically, for example, either N1 N2, N1 PREP N2, N1 PREP DET N2, or Ni ADJ PREP (VET) N2 for the general N1 (PREP (VET)) N2 structure. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 481, |
|
"end": 511, |
|
"text": "figure (diagram of the figure)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Distance Measures", |
|
"sec_num": null |
|
}, |
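The use of the distance statistics to assign an elementary pattern can be sketched as follows; this is an editorial illustration, and the mapping from mean distance to pattern is an assumption covering the general N1 (PREP (DET)) N2 case, not the authors' exact rules.

```python
def mean_variance(values):
    """Mean and (population) variance of a list of Dist values."""
    m = sum(values) / len(values)
    return m, sum((x - m) ** 2 for x in values) / len(values)

def elementary_pattern(distances):
    """Guess the elementary type of a pair from its Dist values (illustrative mapping)."""
    mean, var = mean_variance(distances)
    if var > 0:
        return "variable structure (coordination or inserted modifiers)"
    return {0: "N1 N2 or N ADJ", 1: "N1 PREP N2",
            2: "N1 PREP DET N2 or N1 ADJ PREP N2"}.get(int(mean), "longer insertion")

# réseau ... satellites always separated by one item ("à"): an N1 PREP N2 pair
print(elementary_pattern([1, 1, 1]))   # N1 PREP N2
print(elementary_pattern([1, 2, 1]))   # variable structure (coordination or inserted modifiers)
```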
|
{ |
|
"text": "We presented a combining approach for automatic term extraction. Starting from a first selection of lemma pairs representing candidate terms from a morphosyntactic point of view, we have applied and evaluated several statistical scores. Results were surprising: most association criteria (for example, mutual association) didn't give good results contrary to frequency. This bad behavior of the association criteria could be explained by the introduction of linguistic filters. We can notice anyway that frequency characterizes undoubtedly terms, contrary to association criteria which select in their high values frozen compounds belonging to general language. However, we preferred to elect the Loglike criterion rather than frequency as the best score. This latter takes into account frequency of the pairs but provide a conceptual sort of high accuracy. Our system which uses finite automata allows to increase the results of the extraction of lexical resources and to demonstrate the efficiency to incorporate linguistics in a statistic system. This method has been extended to bilingual terminology extraction using aligned corpora ([Daille et al., 1994] ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1138, |
|
"end": 1160, |
|
"text": "([Daille et al., 1994]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "I would like to thank the IBM-France team, aJl<l i. particular l~ric Gaussier and Jean-Marc Laug6, Ibr tl,~ tagged and lemmatized version of the French corpus all(I for their evaluation of statistics; Owen Rainbow f.r r~.viewing. This research was supported by the Eurol)~'at~ Commission and IBM-France, I.hrough I.Iw IC1'-10/~;.1 project,.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Surfa~'c grammatical analysis for the extraction of terminological noun phrases", |
|
"authors": [], |
|
"year": 1990, |
|
"venue": "Proceedings of the Thirteenth International Conference on ComPutational Linguistics", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "22--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "References [Bourigault, 1992] Didier Bourigault. 1992. Surfa~'c grammatical analysis for the extraction of termino- logical noun phrases. In Proceedings of the Fourte,'nth International Conference on Computational Linguis- tics (COLING-9~), Nantes, France. [Calzolari and Bindi, 1990] Nicoletta Calzolari a,d Remo Bindi. 1990. Acquisition of lexical information from a large textual Italian corpus. In Proceedings of the Thirteenth International Conference on ComPu- tational Linguistics, Helsinki, Finland. [Church and Hanks, 1990] Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mu- tual information, and lexicography. Computational Linguistics, vol. 16, n \u00b0 1, pp. 22-29.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Approche mixtc pour l'extraction automatique de terminoloqie : statistique lexicale et filtres linguistiques", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Daille ; Daille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Bdatrice Daille, I~;ric Gaussier and Jean-Marc Lang6. Towards Automatic Extraction of Monolingual and Bilingual Ternfinology. COLING-94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daille, 1994] B6atrice Daille. 1994. Approche mixtc pour l'extraction automatique de terminoloqie : statistique lexicale et filtres linguistiques. Ph D thesis, University Paris 7, France. [Daille et al., 1994] 1994. Bdatrice Daille, I~;ric Gaussier and Jean-Marc Lang6. Towards Automatic Extraction of Monolingual and Bilingual Ternfinol- ogy. COLING-94, Kyoto, Japon. [D~rouault, 1985] Anne-Marie Ddrouault. 1985. Mod- glisation d'une langue naturelle pour la ddsambigua- tion des chaines phondtiques. PhD thesis, University Paris VII, France.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Accurate Methods for the Statistics of Surprise and Coincidence", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Dunning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dunning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Marc E1-B/~ze. 1093. Lea Moddies de Langage Probabilistes : Quelques Domaines d'Applications. Habilitation b. diriger des reeherches", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dunning, 1993] Ted Dunning. 1993. Accurate Meth- ods for the Statistics of Surprise and Coincidence. Computational Linguistics, vol. 19, n \u00b0 1. [EI-B6ze, 1993] Marc E1-B/~ze. 1093. Lea Mod- dies de Langage Probabilistes : Quelques Domaines d'Applications. Habilitation b. diriger des reeherches (Thesis required in France to be a professor), Univer- sity Paris-Nord, France. [Jacquemin, 1991] Christian Jacquemin. 1991. 7hans- formations des noms composds. PhD thesis, Univer- sity Paris 7, France.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Concordances for parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Lafon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nneth", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "K~", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Proceedings of the Seventh Annual Co,re'fence of the UW Centre for the New OED and 71'.rt Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "fon, 1984] Pierre Lafon. 1984. Ddpouilleme,ts et Statistiques en Lexicomdtrie, Gen~ve, Slatkine, Champion. [Gale and Church, 1991] William A.Gale and K~,n- neth W.Church. 1991. Concordances for parallel texts. In Proceedings of the Seventh Annual Co,re'f- ence of the UW Centre for the New OED and 71'.rt Research, Usin 9 Corpora, pp. 40-62, Oxford, (J.K. [Mathieu-Colas, 1988] Michel Mathieu-Colas. 1988.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatically extracti llg ~md representing collocations for language generation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Shannon ; Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Smadja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1948, |
|
"venue": "Proceedings of the ~,Sth Annual Meeting of. the Association for Computational Linguistics", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "252--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Typologie de~ noms compos6s, Technical report n \u00b0 7, Paris, Programme de Recherches Coordonndes \"In. formalique ct Linguistique\", University Paris 13. [Slmnnon, 1948] C. Shannon. 1948. The mathemati- cal theory of communication. Bell Systems Technical .Journal, 27. [Smadja and McKeown, 1990] Frank A. Smadja and Kathleen R. McKeown. 1990. Automatically extract- i llg ~md representing collocations for language gener- ation. In : Proceedings of the ~,Sth Annual Meeting of. the Association for Computational Linguistics, pp. 252-259, Pittsburgh.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Frequency histogram" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "stands for the frequency of pairs involving both Li and Lj, b stands for the frequency of pairs involving Li and Lj,, c stands for tile frequency of pairs involving Li, and Lj, d stands for the frequency of pairs involving Li, and The statistical literature proposes many scores which can be used to test the strength of the bond between the two variables of a contingency table. Some are well-known such as the association ratio, close to the concept of mutual information, introduced by [Church and Hanks, 1990]: a IM = log~ (a+b)Ca+c) b)(a + c)(b + c)(b + d) Loglike = a loga + blogb + clogc + dlogd -( a + b)log(a + b) -( a + c) logCa -I-c) -(b + d) logCb + d) -( c + d) logCc + d)" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "associated to the pairs of N ADJ structure: progressi] ] porteur L.:2:._l] The line counts nbi., which are found in the last column, represent the distribution of the adjectives with regards to a given noun. The columns counts nb.j, which are found on the last line, represent the distribution of the" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>tenne parabolique de rdception (parabolic receiving an-</td></tr><tr><td>tenna), this sequence could be a term of length 3 (ob-</td></tr><tr><td>tained either by over-composition or by modification)</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "modulation et d,' d, ~- modulation (modulation and demodulation cqaipmvnls)is equivalent to the sequence gquipement ,h. modul.tt,m et dquipement de d~modulation (modulation equipment and demodulation equipment). Insertion of moditicrs inside a base-term structure does not raise problem, expect when this modifier is an adjective inserted inside a N1 PREP N2 structure. Let us examine the sequence an-", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>H{onde..) = nb(onde,.) lognb(onde,.) -</td></tr><tr><td>( nb( onde,progre~,i l ) log nb( onde,pr ogre,ai ] ) \"k</td></tr><tr><td>nb(onde,porteur) log nb(onde,porteur) + \u2022 \u2022 .)</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "j=l ilj = nb.j log n4 -~ nbo log nblj i=1For example, using the contingency table of the N ^vJ structure above, diversity of the noun onde is equal to:", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |