|
{ |
|
"paper_id": "W94-0103", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:47:07.276937Z" |
|
}, |
|
"title": "The Noisy Channel and the Braying Donkey", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tor Vergata", |
|
"location": { |
|
"settlement": "Roma", |
|
"region": "ITALY" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Teresa" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tor Vergata", |
|
"location": { |
|
"settlement": "Roma", |
|
"region": "ITALY" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The title of this paper playfully contrasts two rather different approaches to language analysis. The \"Noisy Channel\" 's are the promoters of statistically based approaches to language learning. Many of these studies are based on the Shannons's Noisy Channel model. The \"Braying Donkey\" 's are those oriented towards theoretically motivated language models. They are interested in any type of language expressions (such as the famous \"Donkey Sentences\"), regardless of their frequency in real language, because the focus is the study of human communication. In the past few years, we supported a more balanced approach. While our major concern is applicability to real NLP systems, we think that, after aLl, quantitative methods in Computational Linguistic should provide not only practical tools for language processing, but also some linguistic insight. Since, for sake of space, in this paper we cannot give any complete account of our research, we will present examples of \"linguistically appealing\", automatically acquired, lexical data (selectional restrictions of words) obtained trough an integrated use of knowledge-based and statistical techniques. We discuss the pros and cons of adding symbolic knowledge to the corpus linguistic recipe.", |
|
"pdf_parse": { |
|
"paper_id": "W94-0103", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The title of this paper playfully contrasts two rather different approaches to language analysis. The \"Noisy Channel\" 's are the promoters of statistically based approaches to language learning. Many of these studies are based on the Shannons's Noisy Channel model. The \"Braying Donkey\" 's are those oriented towards theoretically motivated language models. They are interested in any type of language expressions (such as the famous \"Donkey Sentences\"), regardless of their frequency in real language, because the focus is the study of human communication. In the past few years, we supported a more balanced approach. While our major concern is applicability to real NLP systems, we think that, after aLl, quantitative methods in Computational Linguistic should provide not only practical tools for language processing, but also some linguistic insight. Since, for sake of space, in this paper we cannot give any complete account of our research, we will present examples of \"linguistically appealing\", automatically acquired, lexical data (selectional restrictions of words) obtained trough an integrated use of knowledge-based and statistical techniques. We discuss the pros and cons of adding symbolic knowledge to the corpus linguistic recipe.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "All the researchers in the field of Computational Linguistics, no matter what their specific interest may be, must have noticed the impetuous advance of the promoter of statistically based methods in linguistics. This is evident not only because of the growing number of papers in many Computational Linguistic conferences and journals, but also because of the many specific initiatives, such as workshops, special issues, and interest groups. An historical account of this \"empirical renaissance\" is provide in [Church and Mercer, 1993] . The general motivations are: availability of large on-line texts, on one side, emphasis on scalability and concrete deliverables, on the other side. We agree on the claim, supported by the authors, that statistical methods potentially outperform knowledge based methods in terms of coverage and human cost. The human cost., however, is not zero. Most statistically based methods either rely on a more or less shallow level of linguistic preprocessing, or they need non trivial human intervention for an initial estimate of the parameters (training). This applies in particular to statistical methods based on Shannon's Noisy Channel Model (n-gram models). As far as coverage is concerned~ so far no method described in literature could demonstrate an adequate coverage of the linguistic phenomena being studied. For example, in collocational analysis, statistically refiable associations are obtained only for a small fragment of the corpus. The problem of\" \"low counts\" (i.e. linguistic patterns that were never, or rarely found) has not been analyzed appropriately in most papers, as convincingly demonstrated in [Dunning, 1993] . In addition, there are other performance figures, such as adequacy, accuracy and \"linguistic appeal\" of the acquired knowledge for a given application, for which the supremacy of statistics is not entirely demonstrated. Our major objection to purely statistically based approaches is in fact that they treat language expressions like stings of signals. At its extreme, this perspective may lead to results that by no means have practical interest, but give no contribution to the study of language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 537, |
|
"text": "[Church and Mercer, 1993]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1655, |
|
"end": 1670, |
|
"text": "[Dunning, 1993]", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The \"Noisy Channel\" 's", |
|
"sec_num": "1." |
|
}, |
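{

"text": "To make the \"low counts\" point concrete, the likelihood-ratio statistic advocated in [Dunning, 1993] can be sketched in a few lines. The following Python fragment is ours, not Dunning's code: it computes the G2 statistic for a 2x2 contingency table of co-occurrence counts, which stays well behaved even when counts are small:\nimport math\n\ndef g2(k11, k12, k21, k22):\n    # Log-likelihood ratio (G2) for a 2x2 contingency table of\n    # co-occurrence counts, in the spirit of [Dunning, 1993].\n    n = k11 + k12 + k21 + k22\n    def term(obs, exp):\n        return obs * math.log(obs / exp) if obs > 0 else 0.0\n    r1, r2 = k11 + k12, k21 + k22\n    c1, c2 = k11 + k21, k12 + k22\n    return 2.0 * (term(k11, r1 * c1 / n) + term(k12, r1 * c2 / n) + term(k21, r2 * c1 / n) + term(k22, r2 * c2 / n))\n\n# Usable even for rare collocations, where chi-square is unreliable:\nprint(g2(3, 97, 10, 9890))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The \"Noisy Channel\" 's",

"sec_num": null

},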
|
{ |
|
"text": "On the other side of the barricade, there are the supporters of more philosophical, and theoretically sound, models of language. We hope these scholars will excuse us for categorising their very serious work under such a funny label. Our point was to playfully emphasise that the principal interest in human models of language communication motivated the study of rather odd language expressions, like the famous \"Donkey Sentences \"1. The importance of these sentences is not their frequency in spoken, or written language (which is probably close to zero), but the specific linguistic phenomena they represent. The supporters of theoretically based approaches cannot be said to ignore the problem of applicability and scalability but this is not a priority in their research. Some of these studies rely on statistical analyses to gain evidence of some phenomenon, or to support empirically a theoretical framework, but the depth of the lexical model posited eventually makes a truly automatic learning impossible or at least difficult on a vast scale. The ensign of this approach is Pustejovsky, who defined a theory of lexical semantics making use of a rich knowledge representation framework, called the qualia structure. Words in the lexicon are proposed to encode all the important aspects of meaning, ranging from their argument structure, primitive decomposition, and conceptual organisation. The theory of qualia has been presented in several papers, but the reader may refer to [Pustejovsky and Boguraev, 1993] , for a rather complete and recent account of this research. Pustejovsky confronted with the problem of automatic acquisition more extensively in [Pustejovsky et al. 1993] . The experiment described, besides producing limited results (as remarked by the author itself), is hardly reproducible on a large scale, since it presupposes the identification of an appropriate conceptual schema that generalises the semantics of the word being studied. The difficulty to define sealable methods for lexical acquisition is an obvious drawback of using a rich lexical model. Admittedly, corpus research is seen by many authors in this area, as a tool to fine-tune lexical structures and support theoretical hypothesis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1487, |
|
"end": 1519, |
|
"text": "[Pustejovsky and Boguraev, 1993]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1666, |
|
"end": 1691, |
|
"text": "[Pustejovsky et al. 1993]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. Adding semantics to the corpus statistics recipe... Indeed, the growing literature in lexical statistics demonstrates that much can be done using purely statistical methods. This is appealing, since the need for heavy human intervention precluded to NLP techniques a substantial impact on real world applications. However, we should not forget that one of the ultimate objectives of Computational Linguistic is to acquire some deeper insight of human communication. Knowledge-based, or syraboHc, techniques should not be banished as impractical, since no computational system can ever learn anything interesting if it does not embed some, though primitive, semantic model 2. In the last few years, we promoted a more integrated use of statistically and knowledge based models in language learning. Though our major concern is applicability and scalability in NLP systems, we do not believe that the human can be entirely kept out of the loop. However, his/her contribution in defining the semantic bias of a lexical learning system should ideally be reduced to a limited amount of time constrained, well understood, actions, to be performed by easily founded professionals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similar constraints are commonly accepted when customising Information Retrieval and Database systems. Since we are very much concerned with sealability asia[ with what we call linguistic appeal, our effort has been to demonstrate that \"some\" semantic knowledge can be modelled at the price of limited human intervention, resulting in a higher informative power of the linguistic data extracted from corpora. With purely statistical approaches, the aequked lexical information has no finguisfic content per se until a human analyst assigns the correct interpretation to the data. Semantic modelling can be more or less coarse, but in any case it provides a means to categorise lwhere the poor donkey brays since it is beated all the time.. 2this is often referred to as the semantic bias in Machine Learning language phenomena rather that sinking the linguist in milfions of data (collocations, ngrams, or the like), and it supports a more finguistically oriented large scale language analysis. Finally, symbolic computation, unlike for statistical computation, adds predictive value to the data, and ensures statistically reliable data even for relatively small corpora. Since in the past 3-4 years all our research was centred on finding a better balance between shallow (statistically based) and deep (knowledge based) methods for lexical learning, we cannot give for sake of brevity any complete account of the methods and algorithms that we propose. The interest reader is referred to [Basili et al, 1993 b and c] , for a summary of ARIOSTO, an integrated tool for extensive acquisition of lexieal knowledge from corpora that we used to demonstrate and validate our approach. The learning algorithms that we def'med, acquire some useful type of lexical knowledge (disambiguation cues, selectional restrictions, word categories) through the statistical processing of syntactically and semantically tagged collocations. The statistical methods are based on distributional analysis (we defined a measure called mutual conditioned plausibility, a derivation of the well known mutual information), and cluster analysis (a COBWEB-like algorithm for word classification is presented in [Basili et al, 1993,a] ). The knowledge based methods are morphosyntactic processing [Basili et al, 1992b ] and some shallow level of semantic categorisation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1490, |
|
"end": 1518, |
|
"text": "[Basili et al, 1993 b and c]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2184, |
|
"end": 2206, |
|
"text": "[Basili et al, 1993,a]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2269, |
|
"end": 2289, |
|
"text": "[Basili et al, 1992b", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
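{

"text": "The text above names, but does not define, the mutual conditioned plausibility measure. As a rough sketch of the kind of distributional score involved, the fragment below computes plain pointwise mutual information over tagged verb-noun collocations; both the names and the measure itself are simplifications of ours, and the actual ARIOSTO measure differs:\nimport math\nfrom collections import Counter\n\ndef pmi_table(pairs):\n    # pmi(v, n) = log p(v, n) / (p(v) p(n)) over observed collocations.\n    pair_c = Counter(pairs)\n    v_c = Counter(v for v, _ in pairs)\n    n_c = Counter(n for _, n in pairs)\n    total = len(pairs)\n    return {(v, n): math.log(c * total / (v_c[v] * n_c[n])) for (v, n), c in pair_c.items()}\n\npairs = [('sell', 'share'), ('sell', 'share'), ('buy', 'share'), ('sell', 'wine')]\nprint(pmi_table(pairs))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "2...and the \"Braying Donkey\"'s",

"sec_num": null

},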
|
{ |
|
"text": "Since the use of syntactic processing in combination with probability calculus is rather well established in corpus linguistics, we will not discuss here the particular methods, and measures, that we defined. Rather, we will concentrate on semantic categorisation, since this aspect more closely relates to the focus of this workshop: What knowledge can be represented symbolically and how can it be obtained on a large scale ? The title of the workshop, Combing symbolic and statistical approaches.., presupposes that, indeed, one such combination is desirable, and this was not so evident in the literature so far.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, the what-and-how issue raised by the workshop organisers is a crucial one. It seems there is no way around: the more semantics, the less coverage. Is that so true?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We think that in part, it is, but not completely. For example, categorizing collocations via semantic tagging, as we propose, add predictive power to the collected collocadons, since it is possible to forecast the probability of collocations that have not been detected in the training corpus. Hence the coverage is, generally speaking, higher. In the next section we will discuss the problem of finding the best source for semantic categorization. There are many open issues here, that we believe an intersting matter of discussion for the workshop. In the last section we (briefly) present an example of very useful type of lexical knowledge that can be extracted by the use of semantic categorization in combination with statistical methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2...and the \"Braying Donkey\"'s", |
|
"sec_num": null |
|
}, |
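{

"text": "How semantic tags let one forecast unseen collocations can be sketched as class-based generalisation: counts are pooled over the semantic tag of one word, so a pair never observed in the training corpus still receives a non-zero estimate. This is a simplified sketch under our own assumptions, not the ARIOSTO algorithm:\nfrom collections import Counter\n\ntag = {'shareholder': 'HUMAN_ENTITY', 'tax-payer': 'HUMAN_ENTITY', 'wine': 'BY_PRODUCT'}\nobserved = [('sell', 'to', 'shareholder'), ('assign', 'to', 'tax-payer')]\n\n# Pool counts over the semantic class of the noun.\nclass_counts = Counter((v, p, tag[n]) for v, p, n in observed)\ntotal = sum(class_counts.values())\n\ndef estimate(v, p, n):\n    # (sell, to, tax-payer) was never observed, but inherits mass from\n    # the class pattern (sell, to, HUMAN_ENTITY).\n    return class_counts[(v, p, tag[n])] / total\n\nprint(estimate('sell', 'to', 'tax-payer'))  # non-zero although unseen",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "2...and the \"Braying Donkey\"'s",

"sec_num": null

},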
|
{ |
|
"text": "We first presented the idea of adding semantic tags in corpus analysis in [B'asili et al. 1991 and 1992a] , but other contemporaneous and subsequent papers introduced some notion of semantic categorisation in corpus analysis. [Boggess et al, 1991] used rather fine-tuned seleetional restrictions to classify word pairs and triples detected by an n-gram model based part of speech tagger. [Grishman 1992 ] generalises automatically acquired word triples using a manually prepared full word taxonomy. More recently, the idea of using some kind of semantics seems to gain a wider popularity. [Resnik and Hearst, 1993] use Wordnet categories to tag syntactic associations detected by a shallow parser. [Utsuro et al., 1993] categorise words using the \"Bunrui Goi Hyou\" (Japanese) thesaurus. In ARIOSTO, we initially used hand assigned semantic categories for two italian corpora, since on-line thesaura are notcurrently available in Italian. For an English corpus, we later used Wordnet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 105, |
|
"text": "[B'asili et al. 1991 and 1992a]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 247, |
|
"text": "[Boggess et al, 1991]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 402, |
|
"text": "[Grishman 1992", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 614, |
|
"text": "[Resnik and Hearst, 1993]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 719, |
|
"text": "[Utsuro et al., 1993]", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sources of semantic categorization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We mark with semantic tags all the words that are included at least in one collocation extracted from each application corpus. In defining semantic tags, we pursued two contrasting requirements: portability and reduced manual cost, on one side, and the value-added to the data by the semantic markers, on the other side. The compromise we conformed to is to select about 10-15 \"naive\" tags, that mediate at best between generality and domain-appropriateness. Hand tagging was performed on a commercial and a legal domain (hereafter CD and LD), both in Italian. Examples of tags in the CD are: MACHINE (grindstone, tractor, engine), BY_PRODUCT (wine, milk, juice). In the LD, examples are: DOCUMENT (law, comma, invoice)and REALESTATE (field, building, house). There are categories in common between the two domains, such as HUMAI~ENTITY, PLACE, etc. The appropriate level of generality for categories, is roughly selected according to the criterion that words in a domain should be evenly distributed among categories. For example, BY_PRODUCT is not at the same level as HUMAN EN1TY in a domain general classification, but in the CD there is a very large number of words in this class. For what concerns ambiguous words, many subtle ambiguities are eliminated because of the generality of the tags. Since all verbs are either ACTs or STATEs, one has no choices in classifying an ambiguous verb like make. This is obviously a simplification, and we will see later its consequences. On the other side, many ambiguous senses of make are not found in a given domain. For example, in the commercial domain, make essentially is used in the sense of manufacturing. Despite the generality of the tags used, we experiment that, while the categorisation of animates and concrete entities is relatively simple, words that do not relate with bodily experience, such as abstract entities and the majority of verbs, pose hard problems. An alternative to manual classification is using on-line thesaura, such as Roget's and Wordnet categories in English 3. We experimented Wordnet on our English domain (remote sensing abstracts, RSD). The use of domain-general categories, such as those found in thesaura, has its evident drawbacks, namely that the categorisation principles used by the linguists are inspired by philosophical concerns and personal intuitions, while the purpose of a type hierarchy in a NLP system is more practical, for example expressing at the highest level 3There is an on-going European initiative to translate Wordnet in other languages, among whiela Iuflian of generality the selectional constrains of words in a given domain. For one such practical objective, a suitable categorization pnnciple is similarity in words usage. Though Wordnet categories rely also on a study of collocations in corpora (the Brown corpus), word similarity in contexts is only one of the classification pdncipia adopted, surely not prevailing. For example, the words resource, archive and file are used in the RSD almost interchangeably (e.g. access, use, read .from resource, archive, file). However, resource and archive have no common supertyp\u00a2 in Wordnet. Another pro.blem is over-ambiguity. Given a specific application, Wordnet tags create many unnecessary ambiguity. For example, we were rather surprised to find the word high classified as a PERSON (--soprano) and as an ORGANIZATION (=high school). 
\"this wide-speetnma classification is very useful on a purely linguistic ground, but renders the classification unusable as it is, for most practical applications. In the RSD, we had 5311 different words of which 2796 are not classified in Wordnet because they are technical terms, proper nouns and labels. For the \"known\" words, the avergae ambiguity in Wordnet is 4.76 senses per word. In order to reduce part of the ambiguity, we (manually) selected 14 high-level Wordnet nodes, like for example: COGNITION, ARTIFACT, ABSTRACTION, PROPERTY, PERSON, that seemed appropriate for the domain. This reduced the average ambiguity to 1.67, which is still a bit too high (soprano ?), i.e. it does not reflect the ambiguity actually present in the domain. There is clearly the need of using some context-driven disambiguafion method to automatically reduce the ambiguity of Wordnet tags. For example, we are currently experimenting an algorithm to automatically select from Wordnet the \"best level\" categories for a given corpus, and eliminate part of the unwanted ambiguity. The algorithm is based on the Machine Learning method for word categorisation, inspired by the well known study on basic-level categories [Rosch, 1978] , presented in [Basili et al, 1993a] . Other methods that seem applicable to the problem at hand have been presented in the literature [Yarowsky 1992] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 3346, |
|
"end": 3357, |
|
"text": "(--soprano)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 4605, |
|
"end": 4618, |
|
"text": "[Rosch, 1978]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 4634, |
|
"end": 4655, |
|
"text": "[Basili et al, 1993a]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 4754, |
|
"end": 4769, |
|
"text": "[Yarowsky 1992]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sources of semantic categorization", |
|
"sec_num": null |
|
}, |
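{

"text": "The sense-pruning step just described (from 4.76 to 1.67 senses per word) can be sketched with the modern NLTK interface to Wordnet, which obviously postdates the paper; the selection of high-level nodes shown here is abridged and purely illustrative:\nfrom nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')\n\n# A few of the 14 hand-picked high-level nodes mentioned in the text.\nTOP = {'cognition', 'artifact', 'abstraction', 'property', 'person'}\n\ndef reduced_senses(word):\n    # Keep only senses whose hypernym closure reaches a selected node,\n    # discarding e.g. high = soprano when PERSON is implausible for the domain.\n    keep = []\n    for s in wn.synsets(word, pos=wn.NOUN):\n        heads = {h.name().split('.')[0] for h in s.closure(lambda x: x.hypernyms())}\n        if heads & TOP:\n            keep.append(s)\n    return keep\n\nprint(len(wn.synsets('high', pos=wn.NOUN)), '->', len(reduced_senses('high')))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sources of semantic categorization",

"sec_num": null

},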
|
{ |
|
"text": "Since our objective is to show that adding semantics to the standard corpus linguistics recipe (collocations + statistics) renders the acquired data more linguistically appealing, this section is devoted to the linguistic analysis of a case-based lexicon. The algorithm to acquire the lexicon, implemented in the ARIOSTQLEX system, has been extensively described in [Basili et al, 1993c] . In short, the algorithms works as follows: V_prep_N (sell, to,shareholder) and V_prep_N(assign, to, tax-payer) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 387, |
|
"text": "[Basili et al, 1993c]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 464, |
|
"text": "(sell, to,shareholder)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 500, |
|
"text": "V_prep_N(assign, to, tax-payer)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing wine, statements, and data: on the acquisition of selectionai restrictions in sub languages", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "are later used in AIOSTO LEX for a more refined lexical acquisition phase. We have shown in [Basili et al, 1992a ] that in sub languages there are many unintuitive ways of relating concepts to each other, that would have been very hard to find without the help of an automatic procedure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 112, |
|
"text": "[Basili et al, 1992a", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
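{

"text": "The clustering of collocations described above can be sketched in a few lines: V_prep_N triples are grouped by the preposition and the semantic tag of the noun, so that (sell, to, shareholder) and (assign, to, tax-payer) fall into the same cluster. Tags and data are illustrative; this is not the ARIOSTO code:\nfrom collections import defaultdict\n\ntags = {'shareholder': 'HUMAN_ENTITY', 'tax-payer': 'HUMAN_ENTITY', 'wine': 'BY_PRODUCT'}\ntriples = [('sell', 'to', 'shareholder'), ('assign', 'to', 'tax-payer'), ('sell', 'of', 'wine')]\n\n# Cluster V_prep_N collocations by (preposition, semantic tag of the noun).\nclusters = defaultdict(list)\nfor v, p, n in triples:\n    clusters[(p, tags[n])].append((v, n))\n\nfor key, members in clusters.items():\n    print(key, members)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": ">(beneficiary)->[HUMAN_ENTITY]). These coarse grained selectional restrictions",

"sec_num": null

},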
|
{ |
|
"text": "Then, for each content word w, we acquire all the collocations in which it participates. We select among ambiguous patterns using a preference method described in [Basili et al, 1993 b, d ]. The detected collocations for a word w are then generalised using the coarse grained selectional restrictions 4We did not discuss of syntactic tags for brevity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 187, |
|
"text": "[Basili et al, 1993 b, d", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our (not-so) shallow parser detects productive pairs and triples like verb subject and direct object (N_V and V N, respectively) , prepositional triples between non adjacent words (N_prep_N, V_lxeP_~, etc. acquired during the previous phase. For example, the following collocations including the word measurement in the RSD: prep_N(determineJrom, measurement) and V_prep_N( infer, from, measurement) let the ARIOSTO_LEX system learn the following selectional restriction:", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 128, |
|
"text": "(N_V and V N, respectively)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 205, |
|
"text": "(N_prep_N, V_lxeP_~, etc.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 359, |
|
"text": "prep_N(determineJrom, measurement)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 399, |
|
"text": "V_prep_N( infer, from, measurement)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "V prep_N(deriveJrom, measurement), V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[COGNrNONI <-(/igurative_source)<-[measurement],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where COGNITION is a Wordnet category for the verbs determine, infer and derive, and figurative_source is one of the conceptual relations used. Notice that the use of conceptual relations is not strictly necessary, though it adds semantic value to the data. One could simply store the syntactic subeategorization of each word along with lhe semantic restriction on the accompanying word in a collocation, e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "something like: measurement. (V_prep_N .from, COGNITION(V)). It is also possible to cluster, for each verb or verbal noun, all the syntactic subcategorization frames for which there is an evidence in the corpus. In this case, lexieal acquisition is entirely automatic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
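{

"text": "The generalisation step illustrated above (from determine, infer, derive + from + measurement to [COGNITION] <-(figurative_source)<- [measurement]) can be sketched as follows: observed triples are mapped to (verb class, preposition, noun) keys, and a restriction is proposed only when at least two similar patterns support it, as with the two-pattern threshold mentioned for ARIOSTO_LEX below. Code and names are illustrative, not the original implementation:\nfrom collections import defaultdict\n\nverb_class = {'determine': 'COGNITION', 'infer': 'COGNITION', 'derive': 'COGNITION'}\nobserved = [('determine', 'from', 'measurement'), ('infer', 'from', 'measurement')]\n\n# Group collocations by (verb class, preposition, noun).\nsupport = defaultdict(list)\nfor v, p, n in observed:\n    support[(verb_class[v], p, n)].append(v)\n\n# Propose a selectional restriction only with two or more supporting patterns.\nrestrictions = [k for k, vs in support.items() if len(vs) >= 2]\nprint(restrictions)  # [('COGNITION', 'from', 'measurement')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": ">(beneficiary)->[HUMAN_ENTITY]). These coarse grained selectional restrictions",

"sec_num": null

},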
|
{ |
|
"text": "The selectional restrictions extensively acquired by ARIOSTQLEX are a useful type of lexical knowledge that could be used virtually in any NLP system. Importantly, the linguistic material acquired is linguistically appealing since it provides evidence for a systematic study of sub languages. From a cross analysis of words usage in three different domains we gained evidence that many linguistic patterns do not generalise across sub languages. Hence, the application corpus is an ideal source for lexical acqu~ition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ">(beneficiary)->[HUMAl~ENTITY]). These coarse grained selectional restrictions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In fig. 1 we show one of the screen out of ARIOSTO_LEX. The word shown is measurement, very frequent in the RSD, as presented to the linguist. Three windows show, respectively, the lexical entry that ARIOSTQLEX proposes to acquire, a list of accepted patterns for which only one example was found (lexical patterns are generalized only when at least two similar patterns are found), and a list of rejected patterns. The linguist can modify or accept any of these choices.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 9, |
|
"text": "fig. 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each acquire d selectional restriction is represented as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "pre semJex (word, conceptual relation, semantic tag 5, direction, SE, CF) the first four arguments identify the selectional restriction and the direction of the conceptual relation, Le.: produce ->(theme)-> COGNITIVE_CONTENT ->(source)->INSTRUMENTALITY e.g. the satellite produced an image with high accuracy, the NASA produced the data..", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 17, |
|
"text": "(word,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 18, |
|
"end": 38, |
|
"text": "conceptual relation,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 39, |
|
"end": 54, |
|
"text": "semantic tag 5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 55, |
|
"end": 65, |
|
"text": "direction,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 66, |
|
"end": 69, |
|
"text": "SE,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 70, |
|
"end": 73, |
|
"text": "CF)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "in the CD (commercial) we found: produce ->(obj)-> ARTIFACT ->(agent)-> HUMAN_ENTITY ->(instruraent)->MACHINE 6 e.g.: la d/tta produce vino con macchinari propri (*the company produces wine with owned machinery) and in the LD (legal): produce ->agent)->HUMAI~ENTITY ->(theme)-> DOCUMENT e.g.: il contn'buente deve produrre la dichiarazione (the tax payer must produce a statement)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
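{

"text": "The pre_sem_lex record introduced above can be rendered as a small data structure. This is a minimal sketch in Python; note that the text does not spell out the SE and CF arguments (presumably score and confidence figures), so they are kept opaque here:\nfrom dataclasses import dataclass\n\n@dataclass\nclass PreSemLex:\n    # pre_sem_lex(word, conceptual_relation, semantic_tag, direction, SE, CF)\n    word: str\n    conceptual_relation: str\n    semantic_tag: str\n    direction: str  # '->' or '<-'\n    se: float  # not expanded in the paper\n    cf: float  # not expanded in the paper\n\n# The RSD pattern produce ->(theme)-> COGNITIVE_CONTENT:\nentry = PreSemLex('produce', 'theme', 'COGNITIVE_CONTENT', '->', 0.0, 0.0)\nprint(entry)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "25",

"sec_num": null

},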
|
{ |
|
"text": "It is interesting to see which company the word \"ground\" keeps in the three domains. The RSD is mostly concerned with its physical properties, since we find patterns like: measure ->(obj)-> PROPERTY/A'ITR/BUTE <-(characteristic)<ground (e.g. to measure the feature, emissivity, image ,surface of ground)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the CD, terreno (=ground) ARTIFACT (a system, an archive) . In the CD, the pattern is: , , pr w.sore. I e~(measur~er.t, th4m, A, \"(\". ~.~,'_//, n I I, *. |lOOO\",\". 11000\",\" *\" ). Fe.sv~.. I vx(messw'we~nt, thyme,SO, \"(\" ,G.V.N,nl 1. \".?O00\", ' .?000\", \"-* ). prv.~m, l vMmns4arvm*~t, U'~mm, ~, * (\" ,G.V.H,nl I, \".11000 \u00b0 , *. riO00', ' *\" ). pre.~,4~n. 1 oaCmea~ur'm;nnt, type.o\u00a2,CGl~, \",'.\" ,G.I(.P.N, of, \". 10000 \u00b0 , *. 10000\", \"-\" ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 28, |
|
"text": "the CD, terreno (=ground)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 29, |
|
"end": 60, |
|
"text": "ARTIFACT (a system, an archive)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "rw.e. r,n'a. I a ,~( qt4~ S~ rm~.ir.t ~ aoJoc?., PA, \")\" ,G.f/.ri,n! I, \".22noo\", ' .~2000\", \"-\" ). I'u\" ~.~.um. | *~(men mr,.r~*r.t, anJm=~., Pit, *)\", G.H.P_I/,OF, \".3flOflO',\" .32000\", \"-' ). pre.se~, lt~(~l~sure~ent. ~: |ect ,PRo \")\" ,G_V.R.ni l o \" .2000\" , \" .32000\" ,'-\" ) . pr e_ |era. I e <~ me~sur e~er.t. ::.|ec t. PRo \")',G.tJ.N.n! I. \".~OOO\" ~\". 13000\",\" -\" ). G..q.tl(sir.gle,nii,~eas~r~er~,) ; -CRC Fig.1 The screenout for the lexical entry of the word \"measurement\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 279, |
|
"text": "~: |ect ,PRo \")\" ,G_V.R.ni l o \" .2000\" , \" .32000\" ,'-\" )", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 406, |
|
"text": "G..q.tl(sir.gle,nii,~eas~r~er~,)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 419, |
|
"text": "Fig.1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "25", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "It is interesting to see how little in common these patterns have. Clearly, this information could not he derived from dictionaries or thesaura.Though the categories used to cluster patterns of use are very high level (especially for verbs), still they capture very well the specific phenomena of each sublanguage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we supported the idea that \"some amount\" of symbofic knowledge (high-level semantic markers) can be added to the standard lexical statistics recipe with several advantages, among which categorization, predictive power, and linguitic appeal of the acquired knowledge. For sake of space, we could not provide all the evidence (algorithms, data and performance evaluation) to support our arguments. We briefly discussed, and gave examples, of our system for the semiautomatic acquisition, on a large scale, of selectionl restrictions. ARIOSTO_LEX has its merits and limitations. The merit is that it acquires extensively, with limited manual cost, a very useful type of semantic knowledge, usable virtually in any NLP system. We demonstrated with several examples that selectional restrictions do not generalize across sublanguages, and acquiring them by hand is often inintuitive and very time-consuming. The limitation is that the choice of the appropriate conceptual types is non trivial, even when selecting very high-level tags. On the other hand, selecting categories from on-line thesaura poses many problems, particularly because the categorization principia adopted, may not be adequate for the practical purposes of a NLP system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding remarks", |
|
"sec_num": "6." |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "P.I ,re_ I !~r", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "It. ~a. 1 e,4(amam~-~ir~t t,~aarm4r~.l, \"(\" ~ G.N.P.I ,re_ I !~r.<_ 1 e < (~.~ m ~\" eJc *n t ,l~u,-rmse, .1,\" (\", G_i',/_P. ':**", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "V(a.ea sur ~0r.41 ~ t ,~1 |. \u00a2cmdtaC t, 1) G. N.~'( mea sur ~ll~.en t ,nl 1 ,COrrtlpO~, 1) i G. N. V(m, vsur~n~er.t, ~t * I ~,*~blv, | ) G.N.V(m.vsstm't4.~r.t,n~ I", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. N. V(a.ea sur ~0r.41 ~ t ,~1 |. \u00a2cmdtaC t, 1) G. N.~'( mea sur ~ll~.en t ,nl 1 ,COrrtlpO~, 1) i G. N. V(m, vsur~n~er.t, ~t * I ~,*~blv, | ) G.N.V(m.vsstm't4.~r.t,n~ I, :.w.Hve, 1 ) G.N.V(mesc.urcm~nt,~l~ :, pot ~Qrm~ 1)", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "G_N.VOu~:rar~er.t, nl : ~ r~-ov iml ~ | } ~.N_~'(maa~u~'#mqnt,nl I ,relate, 1)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G_N.VOu~:rar~er.t, nl : ~ r~-ov iml ~ | } ~.N_~'(maa~u~'#mqnt,nl I ,relate, 1)", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "(me~sure~r.ent ,nl 1 ~r esp\u00a9nle", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"~" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G,N.~.'(me~sure~r.ent ,nl 1 ~r esp\u00a9nle.~., 1)", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "( ~unphot rca, i t er ~ n ~ I ,mval;ur ore!mr1 t ~ 2 ) G_N_V(mvvsucwr.entm,I 1 ~ ub~", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "G_N_", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G_N_.~( ~unphot rca, i t er ~ n ~ I ,mval;ur ore!mr1 t ~ 2 ) G_N_V(mvvsucwr.entm,I 1 ~ ub~,;~ in.2)", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "~: G_N.H(&Cr.~e?.fc3ounCier',nl t~measur'onmn~;3) G.H.H(pntltnna, n| 1 ~aea~'~.ent, 3) :~' G. N,.'((me t eosa t. rd I ,measur~ent, 3) ~::i} U_N.H(Sa ,e I I I re.n| I, ~easur linen", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "V{", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "~", |
|
"middle": [ |
|
"~" |
|
], |
|
"last": "Sho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
"," |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "~", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. N. V{ meac, ur e~r.er, t, n ~. ~., ShO..,, ~ ) .~: G_N.H(&Cr.~e?.fc3ounCier',nl t~measur'onmn~;3) G.H.H(pntltnna, n| 1 ~aea~'~.ent, 3) :~' G. N,.'((me t eosa t. rd I ,measur~ent, 3) ~::i} U_N.H(Sa ,e I I I re.n| I, ~easur linen, ,4)", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "P, ff(!nv~ vat ~l~.~t L, frurl, V~ t~ I I | iv, 4 )", |
|
"authors": [ |
|
{ |
|
"first": "\u2022", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "\u2022 ~['] G. N.P, ff(!nv~ vat ~l~.~t L, frurl, V~ t~ I I | iv, 4 )", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "u'~ent) 2 -CgC", |
|
"authors": [ |
|
{ |
|
"first": "%", |
|
"middle": [], |
|
"last": "G_N", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "% G_N,~'/f. llqutc,ntl,mets~..u'~ent) 2 -CgC", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "~/_t/(~,r~il,aelsu~'o~er.t) t -tax G.H_tl(~.aa~ssur~a;mt~H | ,r.ee3. ~) ; -C~C G.t/.tJ(mvWlurC,.~eot,rdl~itartd;rd)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "~/_t/(~,r~il,aelsu~'o~er.t) t -tax G.H_tl(~.aa~ssur~a;mt~H | ,r.ee3. ~) ; -C~C G.t/.tJ(mvWlurC,.~eot,rdl~itartd;rd) t o C/;", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using word association for syntactic disambiguation, in \"Trends in Artificial Intelligence", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "G.F/.N(mu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Basili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Lecture Notes in AI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C G.f/.N(mu,nl I ,mea~ar,i:~lr.t) 1 -CRC G_~/_ff(nel~.bo~'_~EE el l.mn~Su~',wa.nt) ! * CRC References [Basili el. al . 1991] R. Basili, M.T. Pazienza, P.Velardi, Using word association for syntactic disambiguation, in \"Trends in Artificial Intelligence\", Ardizzone. Gaglio, Sorbello Eds.. in Lecture Notes in AI. Springer Vedag, 1991. [Basili et al, 1992a] Basili, R., Pazienza, M.T..", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "What can be learned from raw texts ?, forthcoming on Journal of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Velardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Besili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Velsrdi ; Baalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Besili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Besili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Velardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "; L", |
|
"middle": [], |
|
"last": "Boggess", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Computational Lexicons: the Neat Examples and the Odd Exemplars, Proc. of Third Int. Con\u00a3 on Applied Natural Language Processing", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "114--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Velardi. P., Computational Lexicons: the Neat Examples and the Odd Exemplars, Proc. of Third Int. Con\u00a3 on Applied Natural Language Processing, Trento, Italy, 1-3 April, 1992. [Basili et al, 1992 b] Besili, R., M.T. Pazienza, P. Velardl, A shallow syntactic analyzer to extract word associations from corpora, Literary and Linguistic Computing, 1992, vol. 7, n. 2, 114-124. [Basili et al. 1993 a] Basili, R., Pazienza, M.T., Velsrdi, Hierarch/cal clust~ing of verbs, ACL-SIGLEX Workshop on I.~xical Acquisition, Columbus Ohio, June, 1993. [Basili at al, 1993b] Baalli, R., M.T. Pazienza, P. Velardi, What can be learned from raw texts ?, forthcoming on Journal of Machine Translation. [Besili et al, 1993c] Besili, R., M.T. Pazienza, P. Velardi, Acquisition of selectional patterns, fo~ming on Journal of Machine Translation. [Basili et al, 1993d] Besili, R., M.T. Pazienza, P. Velardi, Semi-automatic extraction of linguisitc information for syntactic disambiguation, Applied Artificial Intelligence, vol. 4, 1993. [Boggus et al. 1992] L. Boggess, R. Agarwal, R. Davis, Disambiguation of Prepositional Phrases in Automatically Labeled Technical Texts, Proc. of AAAI 1991 [Church and ME'reef, 1993] K. Church and L. Mercer, Introduction to the sepuial issue on Computational Linguistics using large corpora, Computational Linguistics, vol. 19. n.1, 1993", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Accurate methods for statistical surprise and coincidence", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Dunning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Dunning, t993] T. Dunning, Accurate methods for statistical surprise and coincidence, Computational Linguistics, voL 19. n.l, 1993", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Word-sense disambiguation using statistical models of Roger's categories trained on large corpora", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of COLING-92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Grishman, 1992] R. Grishman, Acquisition of selectional patterns, Proc. of COLING-92, Nantes, 1992 [Yarowsky 1992] D. Yarowsky, Word-sense disambiguation using statistical models of Roger's categories trained on large corpora, INoc. of COLING- 92, Nantes, 1992", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Principle of categorization, in Cognition and Categorization, Erlbaum 1978", |
|
"authors": [ |
|
{ |
|
"first": "Boguraev ; J", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "; J", |
|
"middle": [], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Anick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Resnik and Hearst. 1993] P. Reanik and M. Hearst, Structural ambiguity and conceptual relations, ACL workshop on very large corpora", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "193--224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Pustejovsky and Boguraev, 1993] J. Pustejovsky and B. Boguraev, Lexical Knowledge representation and natural language processing, Artificial Intelligence, voi.63, n. 1-2, pp. 193-224, October 1993 [Pustejovski et al, 1993] J. Pustejovsky, S. Bergler, and P. Anick, Lexical semantic techniques for corpus analysis, Computational Linguistics, vol. 19, n.2. 1993 [Rosch, 1978 ] E. Rosch, Principle of categorization, in Cognition and Categorization, Erlbaum 1978. [Resnik and Hearst. 1993] P. Reanik and M. Hearst, Structural ambiguity and conceptual relations, ACL workshop on very large corpora, Ohio State University, June 22, 1993", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Verbal case frame acquisition from bilingual corpora, proc", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Utsuro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Utsuro et al, 1993] T. Utsuro, Y. Matsumoto. M. Nagao, Verbal case frame acquisition from bilingual corpora, proc. of UCAI-93", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "First, collocations extracted from the application corpus are clustered according to the semantic and syntactic tag 4 of one or both the co-occurring content words. The result are what we call clustered association d a t a .For example," |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "-t~(~,ur=~.~,~-;,-~-~,-d~i~/C;TTb0b \u2022 ;..Y~=:--,')..'" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "i F~v~v~v~(nrvvs~.~`t*..~r~t~u~vc;~)~*G~P~M~ni~2~2~+~)~ w'v.vun_ lva{~v~',r.vr.t, :nw'v\u00a2~vr Iv, (Cn. ~TTR. \") \u00b0 .G.&.H,.t |, \u00b0.2~000\", \u00b0.20000 \u00b0,'-\u00b0). : rr4. I tinier,. 1 ex(.~sur ~.~nt ~ ~ns~rl~.~lr.t ~ | NS, \")' ,G_H,P.NI -frm' ~ ' .4000o ~ \". 3~000\" , ,, ). pre_li~.~o.le~(~eas~,a'e~cent.character~st,l~,AT'TR Ira, ...... .~\"~-.~.-:~-\" ...... ,~m,r~l ] )re.l|r..~G. le~(:eas~'4~ent~charact4rfstta,t~,\" r ~U|T 1 iro_ I I~c. 1 o<(.~aour ecent, I n.~'ur=|n t, ZH$, \".~\" ,I r.v,surm.t:-) G.rl_Nfseccl~l,nil.~easurtr.ent) ! -CRC" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": ".H.Hl:~...v_~.~-~hv#~)vl,r*i |,mva~u/'~Jrn~ir.~.) ~i: T~ x _ ..I I I( themo ) ~.--('aCt, h.u'~an act|on, I'~man activity':*, \"8Create# hat.urn| s~l~,~rv~lt~ ~lCl~t~f~c dlSCl~11n4\":\u00b0] ( type_Of ) (--('lOstrac~lon*:,, \"\u00a2antent, c\u00a9gr.ltive \u00a2oee:er.t. mental ot~Ject':*, *COOn1 t tof b kr, owleduv\" : \u2022 ) ( object ) --> ['nature: object':', \"~r~perty' :, ) (FIg. lo\u00a2 ) --:) ('let, human aCt!On, I'N.man act~vltv':', \":Ogn1,1On, k\u00a2.ovledge\" :-] ( Instr.,amvnt ) --) ('t.stru'tqlr, ttllty':.] ( Icc|L~lon ) --> r'loc*t'lon'" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td colspan=\"4\">TRANSACTION->(obj)-> terreno</td></tr><tr><td/><td colspan=\"4\">(vendere, acquistare, permutare terreno =</td></tr><tr><td/><td/><td colspan=\"3\">sell, buy, exchange a ground)</td></tr><tr><td/><td colspan=\"4\">AMOUNT <-(source)<-terreno</td></tr><tr><td/><td colspan=\"4\">(e.g. costo, rendita, di terreno= price,</td></tr><tr><td/><td colspan=\"4\">revenue of ( =deriving from the ownership</td></tr><tr><td/><td/><td/><td/><td>o~ ground)</td></tr><tr><td colspan=\"5\">And what is managed in the three domains?</td></tr><tr><td>In</td><td>the</td><td>RSD,</td><td>one</td><td>manages</td></tr><tr><td colspan=\"3\">COGNITIVECONTENT,</td><td/><td>such as image,</td></tr><tr><td colspan=\"3\">information, data</td><td/><td/></tr><tr><td/><td/><td/><td/><td>is the direct</td></tr><tr><td/><td/><td/><td/><td>object of physical ACTs such as cultivate,</td></tr><tr><td/><td/><td/><td/><td>reclaim, plough, etc. But is also found in</td></tr><tr><td/><td/><td/><td/><td>patterns like:</td></tr><tr><td/><td/><td/><td/><td>BYPRODUCT ->(source)-> terreno</td></tr><tr><td/><td/><td/><td/><td>(e.g. patate, carote ed altn'prodotti del</td></tr><tr><td/><td/><td/><td/><td>terreno = potatoes, carrots and other ground</td></tr><tr><td/><td/><td/><td/><td>products)</td></tr><tr><td/><td/><td/><td/><td>in the LD, terreno is a real estate, object of</td></tr><tr><td/><td/><td/><td/><td>transactions, and taxable as such. The</td></tr><tr><td/><td/><td/><td/><td>generaJised pattern is:</td></tr><tr><td/><td/><td/><td/><td>(though it will be available soon), we cannot</td></tr><tr><td/><td/><td/><td/><td>classify automaO~lly.</td></tr></table>", |
|
"html": null, |
|
"text": "6MACHINE is the same as the Wordnet classINSTRUMENTALITY. Notice that we usedWordnet categories for our English corpus only later in our research. Perhaps we could fmd a Wordnet tag name for each of our previous manually assigned tags in the two Italian domains, but this would be only useful for presentation purposes. In fact, since there is not as yet an Italian version of Wordnet", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |