{
"paper_id": "W96-0203",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:59:04.118560Z"
},
"title": "Unsupervised Learning of Syntactic Knowledge: methods and measures",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Supervised methods for ambiguity resolution learn in \"sterile\" environments, in the absence of syntactic noise. However, in many language engineering applications manually tagged corpora are neither available nor easily implemented. On the other hand, the \"exportability\" of disambiguation cues acquired from a given, noise-free domain (e.g. the Wall Street Journal) to other domains is not obvious. Unsupervised methods of lexical learning likewise have many inherent limitations. First, the types of syntactic ambiguity phenomena occurring in real domains are much more complex than the standard V N PP patterns analyzed in the literature. Second, especially in sublanguages, syntactic noise seems to be a systematic phenomenon.",
"pdf_parse": {
"paper_id": "W96-0203",
"_pdf_hash": "",
"abstract": [
{
"text": "Supervised methods for ambiguity resolution learn in \"sterile\" environments, in the absence of syntactic noise. However, in many language engineering applications manually tagged corpora are neither available nor easily implemented. On the other hand, the \"exportability\" of disambiguation cues acquired from a given, noise-free domain (e.g. the Wall Street Journal) to other domains is not obvious. Unsupervised methods of lexical learning likewise have many inherent limitations. First, the types of syntactic ambiguity phenomena occurring in real domains are much more complex than the standard V N PP patterns analyzed in the literature. Second, especially in sublanguages, syntactic noise seems to be a systematic phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "systematic phenomenon, because many ambiguities occur within identical phrases. In such cases there is little hope of acquiring higher statistical evidence for the correct attachment. Class-based models may reduce this problem only to a certain degree, depending upon the richness of the sublanguage and upon the size of the application corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because of these inherent difficulties, we believe that syntactic learning should be a gradual process, in which the most difficult decisions are made as late as possible, using increasingly refined levels of knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper we present an incremental, class-based, unsupervised method to reduce syntactic ambiguity. We show that our method achieves a considerable compression of noise, preserving only those ambiguous patterns for which shallow techniques do not allow reliable decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Several corpus-based methods for syntactic ambiguity resolution have recently been presented in the literature. In (Hindle and Rooth, 1993), hereafter H&R, lexicalized rules are derived according to the probability of noun-preposition or verb-preposition bigrams for ambiguous structures like verb-noun-preposition-noun sequences. This method has been criticised because it does not consider the PP object in the attachment decision scheme. However, collecting bigrams rather than trigrams reduces the well-known problem of data sparseness.",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "(Hindle and Rooth, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "In subsequent studies, trigrams rather than bigrams were collected from corpora to derive disambiguation cues. In (Collins and Brooks, 1995) the problem of data sparseness is approached with a supervised back-off model, with interesting results. In (Resnik and Hearst, 1993) class-based trigrams are obtained by generalizing the PP head, using WordNet synonymy sets. In (Ratnaparkhi et al, 1994) word classes are derived automatically with a clustering procedure. (Franz, 1995) uses a loglinear model to estimate preferred attachments according to the linguistic features of co-occurring words (e.g. bigrams, the accompanying noun determiner, etc.). (Brill and Resnik, 1994) use transformation-based error-driven learning (Brill, 1992) to derive disambiguation rules based on simple context information (e.g. right and left adjacent words or POSs).",
"cite_spans": [
{
"start": 114,
"end": 139,
"text": "(Collins and Brooks,1995)",
"ref_id": null
},
{
"start": 249,
"end": 274,
"text": "(Resnik and Hearst, 1993)",
"ref_id": null
},
{
"start": 370,
"end": 395,
"text": "(Ratnaparkhi et al, 1994)",
"ref_id": "BIBREF6"
},
{
"start": 464,
"end": 476,
"text": "(Franz, 1995",
"ref_id": "BIBREF4"
},
{
"start": 651,
"end": 675,
"text": "(Brill and Resnik, 1994)",
"ref_id": "BIBREF2"
},
{
"start": 722,
"end": 735,
"text": "(Brill, 1992)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "All these approaches need extensive collections of positive examples (i.e. hand-corrected attachment instances) in order to trigger the acquisition process. Probabilistic, backed-off or loglinear models rely entirely on noise-free data, that is, correct parse trees or bracketed structures. In general the training set is the parsed Wall Street Journal (Marcus et al, 1993), with few exceptions, and the size of the training samples is around 10-20,000 test cases. Some methods do not require manually validated PP attachments, but word collocations are collected from large sets of noise-free data. Unfortunately, in language engineering applications, manually tagged corpora are neither widely available nor easily implemented 1. On the other hand, the \"exportability\" of disambiguation cues obtained in a given domain (e.g. WSJ) to other domains is not obvious.",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "(Marcus et al, 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "Unsupervised methods have, on their side, serious limitations: * First, the types of syntactic ambiguity phenomena that occur are on average much more complex than the standard verb-noun-preposition-noun patterns analyzed in the literature. The H&R method has proved very weak on complex phenomena like verb-noun-preposition-noun-preposition-noun sequences (see (Franz, 1995)). Other methods (supervised or not) do not consider more complex ambiguous structures.",
"cite_spans": [
{
"start": 361,
"end": 373,
"text": "(Franz,1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "* Second, in real environments, and especially in sublanguages, syntactic noise seems to be a systematic phenomenon. Many ambiguities occur within several identical phrases, hence the \"wrong\" and the \"right\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "associations may gain the same statistical evidence. Therefore, there are intrinsic limitations to the possibility of using purely statistical approaches to ambiguity resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "The nature of ambiguous phenomena in untagged corpora has not been studied in detail in the literature, although such an analysis would be very useful from a language engineering standpoint. Accordingly, section 2 is devoted to an experimental analysis of the complexity and recurrence of ambiguous phenomena in sublanguages. This analysis demonstrates that syntactic disambiguation in the large cannot be achieved by the use of knowledge induced exclusively from the corpus. We think that corpus-based techniques are useful to significantly reduce, not to eliminate, the ambiguous phenomena. In section 3, we describe an unsupervised, class-based, incremental, syntactic disambiguation method that is aimed at reducing noisy collocates, to the extent that this is allowed by the observation of corpus phenomena. The approach that we support is to reduce syntactic ambiguity through an incremental process. Decisions are deferred until enough evidence of a noisy phenomenon has been gained. First, a kernel of shallow grammatical competence is used to extract a collection of noise-prone syntactic collocates. Then, a global data analysis is performed to review local choices and derive new statistical distributions. This incremental process can be iterated until the system reaches a kernel of \"hard\" cases for which there is no more evidence for a reliable decision. The output of the last iteration represents a less noisy environment on which additional learning processes can be triggered (e.g. sense disambiguation, acquisition of subcategorization frames, ...). These later inductive phases may rely on some level of a priori knowledge, like for example the naive case relations used in the ARIOSTO_LEX system (Basili et al, 1993c, 1996). 1 It is not just a matter of time, but also of required linguistic skills (see for example (Marcus et al, 1993)).",
"cite_spans": [
{
"start": 1368,
"end": 1388,
"text": "(Marcus et al, 1993)",
"ref_id": "BIBREF5"
},
{
"start": 1832,
"end": 1852,
"text": "(Basili et al, 1993c",
"ref_id": "BIBREF1"
},
{
"start": 1853,
"end": 1874,
"text": "(Basili et al, , 1996",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised vs. supervised models of syntactic learning",
"sec_num": null
},
{
"text": "and recurrence of ambiguous patterns in corpora",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "In the previous section we pointed out that unsupervised lexical learning methods must cope with complex and repetitive ambiguities. We now describe an experiment to measure these phenomena in corpora. In this experiment, we wish to demonstrate that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The types of syntactic ambiguity are much more complex than V N PP or N N PP patterns. In a realistic environment, the correct attachment must be selected among several possibilities, not just two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The fundamental assumption of most common statistical analyses is that the events being analyzed (productive word pairs or triples in our case) are independent. Instead, ambiguous patterns are highly repetitive, especially in sublanguages. This means that in many cases, unless we work in the absence of noise, the \"correct\" and \"wrong\" associations in an ambiguous phrase acquire the same or similar statistical evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "To conduct the experiment, we used a shallow syntactic analyzer (SSA). To smooth the weight of ambiguous esl's in lexical learning, each detected esl is weighted by a measure called plausibility. To simplify, the plausibility of a detected esl is roughly inversely proportional to the number of mutually excluding syntactic structures in the text segment that generated the esl (see (Basili et al, 1993a) for details).",
"cite_spans": [
{
"start": 382,
"end": 402,
"text": "(Basili et al, 1993a",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "In the following, we show examples of collision sets extracted from the LD (an English word-by-word translation is provided for the sentence fragments that generated a collision set). It is important to observe that the complexity does not arise simply from the number of colliding tuples but also from the structure of ambiguous patterns (e.g. non-consecutive word strings, as in the second example). To measure the complexity of the ambiguous structures, we collected from fragments of the two corpora all the ambiguous collision sets, i.e. those with more than one esl. 10,433 collision sets were found in the ENEA corpus and 30,130 in the LD 3. Figure 1 plots the percentage of colliding esl's vs. the cardinality of collision sets. The average size of ambiguous collision sets is about 4 in both corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "Of course SSA introduces additional noise due to its shallow nature (see the referred papers for an evaluation of performance 4), but as far as our experiment is concerned (measuring the complexity of collision sets) SSA still provides a good testbed. In fact, some esl can be missed in a collision set, or some spurious attachment can be detected, but on average these phenomena are sufficiently rare and in any case they tend to be equally probable. In the second experiment we measure the recurrence of ambiguous patterns. This phenomenon is known to be typical of sublanguages, but was never analyzed in detail. A straightforward measure of recurrence is provided by the average Mutual Information of colliding esl's. This figure measures the probability of co-occurrence of two esl's in a collision set. If the Mutual Information is high, it means that the measured phenomena (productive word tuples) do not occur independently in collision sets, i.e. they systematically occur in reciprocal ambiguity in the corpus. The consequence is that statistically based lexical learning methods are faced not only with the problem of data sparseness (events that are never or rarely encountered), but also with the problem of systematic ambiguity (events that always occur in the same sequence). 3 The LD test corpus is larger; in addition, the legal language is more verbose and less concise than the scientific style that characterizes the ENEA corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "4 We measured an average of 80% precision and 75% recall over three corpora, one of which in English. This phenomenon is likely to be more relevant in sublanguages (medicine, law, engineering) than in narrative texts, but sublanguages are at the basis of many important applications. The average Mutual Information was evaluated by first computing, in the standard way, the Mutual Information of all the pairs of esl's that co-occurred in at least one collision set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "MI(esl_i, esl_j) = log2 [ Prob(esl_i, esl_j) / ( Prob(esl_i) Prob(esl_j) ) ] (1) where the probability is evaluated over the space of collision sets with cardinality > 1.",
"cite_spans": [
{
"start": 43,
"end": 46,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "Tables 1 and 2 summarize the results of the experiment, showing the average MI, standard deviation and variance for the two domains. The values in Table 1 show that the average MI is close to perfect correlation 5 and has a small variance, especially in the ENEA corpus, which is in technical style. This result could be biased by the esl's occurring just once in the collision sets, hence we repeated the computation for the pairs of esl's occurring at a frequency higher than the average (> 2, in both domains). The results are reported in Table 2. It is seen that the values remain rather high, still with a small variance.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 70,
"text": "Tables 1 and 2",
"ref_id": "TABREF2"
},
{
"start": 549,
"end": 556,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "Clustering the esl's would seem an obvious way to reduce this problem. Therefore, in a subsequent experiment we clustered the heads of PPs in the collision sets using a set of high-level semantic tags (for a discussion on semantic tagging see (Basili et al, 1993b)) 6. 5 Two esl's occurring exactly at the average frequency (1.9 in LD) are in perfect correlation when their MI is equal to 13.8. For example, the esl V_P_N(to_present, within, april_15_1974) is generalized as:",
"cite_spans": [
{
"start": 357,
"end": 379,
"text": "(Basili et al, , 1993b",
"ref_id": "BIBREF1"
},
{
"start": 404,
"end": 446,
"text": "V_P_N ( to_present, within, apriL15_1974 )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "V_P_N(to_present, within, TEMPORAL_ENTITY).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "Because of sense ambiguity, the collision sets became 20,353 in the ENEA corpus, and 42,681 in the LD. The average frequency of \"right-generalized\" esl's is now 4.28 in the ENEA and 4.64 in the LD. The results are summarised in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "Notice that the phenomenon of systematic ambiguity is much less striking (lower MI and higher variance), though it is not eliminated. It is also important that the two corpora, though very different in style, behave in the same way as far as systematic ambiguity is concerned. For example, consider the following sentence fragment: 6 Class-based approaches are widely employed. Clusters are created by means of distributional techniques in (Ratnaparkhi et al, 1994), while in (Resnik and Hearst, 1993) low-level synonym sets in WordNet are used. Instead, we use high-level tags (human, time, abstraction etc.), manually assigned in Italian domains and automatically assigned from WordNet in English domains. For the sake of brevity, we do not re-discuss the matter here. See the aforementioned papers.",
"cite_spans": [
{
"start": 439,
"end": 464,
"text": "(Ratnaparkhi et al, 1994)",
"ref_id": "BIBREF6"
},
{
"start": 476,
"end": 501,
"text": "(Resnik and Hearst, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The conclusion we may derive from these two experiments is that most syntactic disambiguation methods presented in the literature are tested in an unrealistic environment. This does not mean that they don't work, but simply that their applicability to real domains is yet to be proven. Application corpora are noisy, may not be very large, and include repetitive and complex ambiguities that are an obstacle to reliable statistical learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The experiments also stress the importance of class based models of lexical learning. Clustering \"similar\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "phenomena is an obvious way of reducing the problems just outlined. Unfortunately, Table 3 shows that generalization reduces, but does not eliminate, the problem of repetitive patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "An incremental architecture for unsupervised reduction of syntactic ambiguity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The previous section shows that we need to be more realistic in approaching the problem of syntactic ambiguity resolution in the large. Certain results can be obtained with purely statistical methods, but there are many complex cases for which there seems to be a clear need for less shallow techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The approach that we have undertaken is to attack the problem of syntactic ambiguity through increasingly refined learning phases. The first stage is noise compression, in which we adopt an incremental syntactic learning method to create a more suitable framework for subsequent steps of learning. Noise compression is performed essentially by the use of shallow NLP and statistical techniques. This method is described hereafter, while the subsequent steps, which use deeper (rule-based) levels of knowledge, are implemented in the ARIOSTO_LEX lexical learning system, described in (Basili et al., 1993b, 1993c and 1996).",
"cite_spans": [
{
"start": 584,
"end": 605,
"text": "(Basili et al., 1993b",
"ref_id": "BIBREF1"
},
{
"start": 606,
"end": 629,
"text": "(Basili et al., , 1933c",
"ref_id": null
},
{
"start": 630,
"end": 654,
"text": "(Basili et al., and 1996",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "The process of incremental noise reduction works as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "1. First, use a surface grammatical competence (i.e. SSA) to derive the (noise-prone) set of observations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "2. Cluster the collocational data according to semantic categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "3. Apply class based disambiguation operators to reduce the initial source of noise, by first disambiguating the non-persistent ambiguity phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "4. Derive new statistical distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "5. Repeat step 2.-4. on the remaining (i.e. persistent) ambiguous phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "The incremental disambiguation activity stops when no more evidence can be derived to solve new ambiguous cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "In order to accomplish the outlined noise reduction process we need: (i) a disambiguation operator and (ii) a disambiguation strategy to eliminate at each step \"some\" noisy collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A feedback algorithm for noise reduction",
"sec_num": null
},
{
"text": "Mutual Conditioned Plausibility (MCPl) (Basili et al., 1993a). Given an esl, the value of its corresponding MCPl is defined as follows:",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Basili et al.,1993a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The class based disambiguation operator is the",
"sec_num": null
},
{
"text": "Conditioned Plausibility (MCPl) of a prepositional attachment esl(w, mod(p, n)) is: MCPl(esl(w, mod(p, n))) = [ \u03a3_{y\u2208\u0393} pl(esl(w, mod(p, y))) / \u03a3_{\u2200h, y\u2208\u0393} pl(esl(h, mod(p, y))) ] \u00b7 [ \u03a3_{y\u2208\u0393} pl(esl(w, mod(p, y))) / \u03a3_{\u2200y} pl(esl(w, mod(p, y))) ] (2)",
"cite_spans": [
{
"start": 184,
"end": 201,
"text": "(esl(w, mod(p, y)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DEF(Mutual Conditioned Plausibility): The Mutual",
"sec_num": null
},
{
"text": "",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEF(Mutual Conditioned Plausibility): The Mutual",
"sec_num": null
},
{
"text": "where \u0393 is the high-level semantic tag assigned to the modifier n and pl() is the plausibility function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEF(Mutual Conditioned Plausibility): The Mutual",
"sec_num": null
},
{
"text": "of the generalized esl's were presented in the previous section. For example, to the computation of the MCPl of esl(reddito, (di, persona)) contribute esl's like esl(reddito, (di, professionista)) and esl(reddito, (di, azienda)), where professionista, persona and azienda are instances of HUMAN_ENTITY.",
"cite_spans": [
{
"start": 206,
"end": 233,
"text": "( reddito, ( di, azienda) )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "After a first scan of the corpus by the SSA and after the computation of global MCPl values, a primary knowledge base is available. This knowledge is fully corpus driven, and it is obtained without a preliminary training set of hand-tagged patterns. Each esl in a collision set has its own MCPl value, which has been globally derived from the corpus. The MCPl is thus employed to remove the less plausible attachments proposed by the grammar, with a consequent reduction in size of the related collision sets. When more than one esl remains in a collision set, the system is not forced to decide, and a further disambiguation step is attempted later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "After the first scan of the corpus by means of the SSA grammar, the corpus is re-written as a set of possibly ambiguous Collision Sets, i.e. if C is the corpus and CS_i a Collision Set, we have: CS_i \u2229 CS_j = \u2205 for i \u2260 j, i, j = 0, 1, 2, ..., N, where N is the total number of collision sets found in the corpus. The cardinality of a generic collision set is directly proportional to the degree of ambiguity of its members. The feedback algorithm tries to reduce the cardinality of the collision sets, following the general feedback algorithm for noise reduction given in Table 4. It should be noted that the above feedback strategy has three main phases: (step 2.2) statistical induction of syntactic preference scores; (step 2.3) a testing phase, which is necessary in order to quantify the performance of disambiguation criteria derived from the current statistical distributions; (step 2.3.1) a learning phase, to filter out the syntactically odd esl's (i.e. esl's with locally low MCPl values).",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": ", fori\u00a2j,i,j=O, 1,2,...N",
"ref_id": null
}
],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 4",
"ref_id": null
},
{
"start": 511,
"end": 519,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "C = CS_0 \u222a CS_1 \u222a ... \u222a CS_i \u222a ... \u222a CS_N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": null
},
{
"text": "According to the disambiguate as late as possible strategy, the learning and testing phases have different objectives:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "\u2022 During the learning phase, the objective is to make only highly reliable decisions, by eliminating those esl's with a very low plausibility, while delaying unreliable choices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "\u2022 During the test phase, the objective is to evaluate the ability of the system at separating, within each collision set, correct from wrong attachment candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "This results in two different disambiguation algorithms: the learning phase is used only to remove hell esl's from the collision sets, without forcing any paradise choice (e.g. a maximum-likelihood candidate). In the test phase, esl's are classified as (locally) correct or wrong according to their relative values of MCPl.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "The learning phase, called the i-th learning step, is guided by the following algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "1. Identify all Collision Sets of the corpus, CS_i, i = 0, 1, ..., N. Step 2 is further specified in Table 5. In step 3 of the Learning algorithm, the new plausibility values are redistributed among the surviving esl's according to the following rule: pl_{i+1}(esl(h, mod(p, w))) = pl_i(esl(h, mod(p, w))) \u00b7 pl_i(CS_i) / pl_{i+1}(CS_{i+1}) (3)",
"cite_spans": [
{
"start": 253,
"end": 270,
"text": "(esl(h, mod(p, w)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "where i is the learning step and CS_{i+1} (\u2282 CS_i) does not contain esl's that have been placed in hell during step i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "After each learning step, the upgraded plausibility values provide newer MCPl scores that are more reliable, because the hell esl's have been discarded. The evaluation of each learning step is carried out by testing the syntactic disambiguation on a selected set of corpus sentences where ambiguities have been manually solved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "The general test algorithm is defined in Table 6. In Table 6, notice that precision and recall evaluate the ability of the system both at eliminating truly wrong esl's and at accepting truly correct esl's, since, as remarked in section 2, our objective is noise compression rather than full syntactic disambiguation. Notice also that, because of their different classification objectives, learning and testing use different decision thresholds.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 54,
"end": 61,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "Experimental Results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "To evaluate numerically the benefits of the feedback algorithm, several experiments were run and several performance indexes evaluated. The corpus selected for experimenting with the incremental technique is the LD: the size of the corpus is about 500,000 words. The SSA grammar in LD has about 25 DCG rules and it generates 29 ... At first, we need to study the system classification parameters, \u03c3 and \u03c4 (see Tables 5 and 6). During the learning phase, we wish to eliminate as many hell esl's as possible, because the more noise has been eliminated from the source syntactic data, the more reliable is the application of the later inductive operators (i.e. the ARIOSTO lexical learning system). However, we know from the experiments in section 2 that the competence that we are using (shallow NLP and statistical operators) is insufficient to cope with highly repetitive ambiguities. The threshold \u03c3 is therefore a crucial parameter, because it must establish the best trade-off between precision of choices (i.e. it must classify as hell truly noisy esl's) and impact on noise compression (i.e. it must remove as much noise as possible). Table 7 shows the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 395,
"end": 405,
"text": "Tables (5)",
"ref_id": "TABREF6"
},
{
"start": 1126,
"end": 1133,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "To select the best value for ~r, we measured the values of recall and precision (defined in Table 6 ) according to different values for r. These measures have been derived from the early (thus noisy) state of knowledge where just the SSA grammar, and no learning, was applied to the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "According to the results of Table 7 , r = 0.2 was selected for the better trade-off between recall, precision and coverage. The learning steps have then be performed with a threshold value o\" = 0.2 over the LD corpus. In each phase the corresponding recall and precision have been measured.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "The results of the experiment are summarised in Figure 2. Figure 2 .A plots recall versus precision that have been obtained in the early (prior to learning) stage (Step 0), after 1 (Step 1) and 2 (Step 2) learning iterations. Each measure is evaluated for a different value of the testing threshold r, that varies from 0.5 to 0.0 from left to right in Fig. 2. A. Figure 2 .B plots the Information Gain (Kononenko and Bratko, 1991) an information theory index that, roughly speaking, measures the quality of the statistical distributions of the correct vs. wrong esl's. Figure 2 .D plots the Coverage, i.e. the number of decided cases over the total number of possible decisions. Finally, Table 8 reports the performance (at the Step 0 phase) of the H&R Lexical Association (LA) 7. We experiment this disambiguation operator just because the HLzR method has, among the others, the merit of being easily reproducible.",
"cite_spans": [
{
"start": 402,
"end": 430,
"text": "(Kononenko and Bratko, 1991)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 48,
"end": 66,
"text": "Figure 2. Figure 2",
"ref_id": null
},
{
"start": 352,
"end": 359,
"text": "Fig. 2.",
"ref_id": null
},
{
"start": 363,
"end": 371,
"text": "Figure 2",
"ref_id": null
},
{
"start": 569,
"end": 577,
"text": "Figure 2",
"ref_id": null
},
{
"start": 688,
"end": 696,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "The first four figures give a global overview of the method. In Fig. 2.A ( Step 1), a significant improvement in precision can be observed. For r = 0.5 the improvement in recall (.5) and precision (.85) is more sensible.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 74,
"text": "Fig. 2.A (",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "Furthermore a better coverage (60 %) is shown in Fig. 2 .D (Step 1). A further index to evaluate the status of the system knowledge about the PP-attachment problem is the Information Gain ((Kononenko and Bratko, 1991) and (Basili et al, 1996) ). The posterior probability (see algorithms in Table 5 and 6) improves over the \"blind\" prior probability as much as it increases the confidence of correct eslls and decreases the confidence of wrong esl~s. The improvement is quantified by means of the number of saved bits necessary to describe the correct decisions when moving from prior to posterior probability. The Information Gain does not depend on the selected thresholds, since it acts on all the probability values, and it is related to the complexity of the learning task. It gives a measure of the global trend of the statistical decision model. A significant improvement measured over the testset (12% to 24% relative increment) is shown by Fig. 2 .B as a result of the learning steps. As discussed in (Basili et a1.,1994) , the Information Gain produces performance results that may contrast with precision and recall.",
"cite_spans": [
{
"start": 189,
"end": 218,
"text": "((Kononenko and Bratko, 1991)",
"ref_id": "BIBREF5"
},
{
"start": 223,
"end": 243,
"text": "(Basili et al, 1996)",
"ref_id": "BIBREF2"
},
{
"start": 1012,
"end": 1032,
"text": "(Basili et a1.,1994)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 292,
"end": 299,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 950,
"end": 957,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
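The "saved bits" view of the Information Gain described above can be sketched as follows. This is a simplified illustration of the idea behind the Kononenko-Bratko score, not their exact formulation; the input format (prior probability, posterior probability, and correctness for each esl) is our own assumption.

```python
from math import log2

def information_gain(cases):
    """Average bits saved moving from prior to posterior probability.

    cases: list of (pprior, ppost, correct) triples for the test esl's
    (assumed input format). A correct esl whose posterior rises above its
    prior saves bits; a wrong esl whose posterior drops saves bits as well.
    """
    total = 0.0
    for pprior, ppost, correct in cases:
        if correct:
            # bits saved encoding the decision to keep a correct esl
            total += log2(ppost) - log2(pprior)
        else:
            # bits saved encoding the decision to reject a wrong esl
            total += log2(1 - ppost) - log2(1 - pprior)
    return total / len(cases)
```

As in the text, this index is threshold-free: it depends on all the probability values rather than on a decision cut-off.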
{
"text": "In fact, in the learning step 2, we observed decreased performance of precision and recall. The overlearning effect is common of feedback algorithms. Furthermore, the small size of the corpus is likely to anticipate 7Unfike H&R, we did not use the t-score as a decision criteria, but forced the system to decide according to different values of the thresholds r for sake of readabifity of the comparison. Technical details of our treatment of the LA operator within our grammatical framework can be found in (Basili et a1,1994) , 30 this phenomenon. The problem is clearly due to the highly repetitive ambiguities. The system quickly removes from the corpus syntactically wrong esl's with low MCP1. But now let's consider a collision set with two esl's that almost constantly occur together. Their MCPI tends to acquire exactly the same value. Thus, they will stay in the limbo forever. But if one of the two, accidentally the wrong, has an even minimal additional evidence with respect to its competitor, this initially small advantage may be emphasized by the plausibility redistribution rule 38 . Hence once the learning algorithm reaches the \"hard cases\" and is still forced to discriminate, it gets at stuck, and may take accidental decisions. This phenomenon occurs very early in our domains, and this could be easily foreseen according to the high correlation between esl's that we measured. For the current experimental setup, our data show a significant reduction of noise with a significant 40% compression of the data after step 1, and a correspondent slight improvement in precision-recall, given the complexity of the task (see the Lexical Association performance in Table 8 , for a comparison). However, the phenomena that we analyzed in Section 2 have a negative impact on the possibility of a longer incremental learning process. We do not believe that experimenting over different domains would give different results. 
In fact, the Legal and Environmental sublanguages are very different in style, and not so narrow in scope. Rather, we believe that the size of the corpora may be in fact too small. We could hope in a higher variability of language patterns by training over 1-2 million words corpora.",
"cite_spans": [
{
"start": 508,
"end": 527,
"text": "(Basili et a1,1994)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1680,
"end": 1687,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
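The degenerate behaviour on highly correlated esl's can be illustrated with a toy version of the filter-and-redistribute step. The dict-of-masses representation, the uniform prior 1/|CS|, and the exact removal condition are our own assumptions for illustration, not the authors' formulation of MCP1:

```python
def prune_and_redistribute(cs, sigma):
    """One learning pass over a single collision set (toy sketch).

    cs: dict mapping each esl to its plausibility mass (assumed
    representation). sigma: learning threshold. Members whose posterior
    falls below (1 - sigma) times the uniform prior are damned (removed);
    the survivors' mass is then redistributed (renormalized).
    """
    n = len(cs)
    total = sum(cs.values())
    prior = 1.0 / n
    survivors = {e: m for e, m in cs.items()
                 if (m / total) / prior >= 1 - sigma}
    if not survivors:                 # never damn an entire collision set
        return dict(cs)
    s = sum(survivors.values())
    # redistribute the full mass among the surviving esl's
    return {e: m / s * total for e, m in survivors.items()}
```

With two esl's of near-equal mass, neither falls below the cut, so both stay in limbo at every pass, whereas even a modest asymmetry lets one competitor damn the other, mirroring the "hard cases" discussed above.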
{
"text": "Swhereas, for more independent phenomena, 3 should emphasize the right attachments. 1,0 .............................................................. Further improvements could also be obtained using a more refined discriminator than MCP1, but there is no free lunch. If the corpus is our unique source of knowledge, it is not possible to learn things for which there is no evidence. Only if we can rely on some apriori model of the world, even a naive model 9 to guide difficult choices, then we can hope in a better coverage of repetitive phenomena.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 150,
"text": "1,0 ..............................................................",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Learning and Testing disambiguation cues",
"sec_num": null
},
{
"text": "As a conclusion we may claim that corpus-driven lexical learning should result from the interaction of cooperating inductive processes triggered by several knowledge sources. The described method is a combination of numerical techniques (e.g. the probability driven MCP1 disambiguation operator) and some logical devices:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "\u2022 a shallow syntactic analyzer that embodies a surface and portable grammatical competence helpful in triggering the overall induction;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "\u2022 a naive semantic type system to obviate the problem of data sparseness and to give the learning system some explanatory power",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "The interaction of such components has been exploited in an incremental process. In the experiments, the performance over a typical NLP task 10 (i.e. PPdisambiguation) has been significantly improved by this a cooperative approach. Moreover, on the language engineering standpoint the main consequences are a significant data compression and a corresponding improvement of the overall system efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
},
{
"text": "One of the purposes of this paper was to show that, despite the good results recently obtained in the field of corpus-driven lexical learning, we must still demonstrate that NLP techniques, after the advent of lexical statistics, are industrially competitive. And one good way for doing so, is by measuring ourselves with the full complexities of language. More effort should thus be de-Voted in evaluating the performance of lexical learning methods in real world, noisy domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Computational Lexicons: the Neat Examples and the Odd Exemplars",
"authors": [
{
"first": "",
"middle": [],
"last": "References (basili",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basiti",
"suffix": ""
},
{
"first": "M",
"middle": [
"T"
],
"last": "Pazienza",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of Third Int. Conf. on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "REFERENCES (Basili et a1.,1992) Basiti, R., Pazienza, M.T., Velardi, P., Computational Lexicons: the Neat Examples and the Odd Exemplars, Proc. of Third Int. Conf. on Applied Natural Language Processing, Trento, Italy, 1-3 April, 1992.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating the information gain of probability-based PP-disambiguation methods",
"authors": [
{
"first": "",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 1993,
"venue": "9tike for example the coarse selectional restrictions used by the ARIOSTO_LEX system (see refereed papers) 1\u00b0although inherently hard for an unsupervised noiseprone framework",
"volume": "1",
"issue": "",
"pages": "175--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "9tike for example the coarse selectional restrictions used by the ARIOSTO_LEX system (see refereed papers) 1\u00b0although inherently hard for an unsupervised noise- prone framework (Basili et al,1993a) Basili, R., A. Marziali, M.T. Pazienza, Modelling syntactic uncertainty in lexical acquisition from texts, Journal of Quantitative Linguistics, vol.1, n.1, 1994. (Basili et al,1993b) Basili, R., M.T. Pazienza, P. Velardi, What can be learned from raw texts ?, Journal of Ma- chine Translation, 8:147-173,1993. (Basili et a1,1993c) Basiti, R., M.T. Pazienza, P. Velardi, Acquisition of selectional patterns, Journal of Machine Translation, 8:175-201,1993. (Basili et al.,1994a) Basiti, R., M.T. Pazienza, P.Velardi, A (not-so) shallow parser for collocational analysis, Proc. of Coting '94, Kyoto, Japan, 1994. (Basili et al.,1994b) Basiti, R., M.H.Candito, M.T. Pazienza, P. Velardi, Evaluating the information gain of probability-based PP-disambiguation methods, Proc. of International Conference on New Methods in Lan- guage Processing, Manchester, September 1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A rule-based approach to prepositional phrase attachment disambiguation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "M",
"middle": [
"T"
],
"last": "Pazienza",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of the 3rd Conf. on Applied Natural Language Processing",
"volume": "85",
"issue": "",
"pages": "1198--1204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Basiti et a1.,1996), Basili, R., M.T. Pazienza, P.Velardi, An Empirical Symbolic Approach to Natural Language Processing, Artificial Intelligence, to appear on vol. 85, August 1996 (Brill 1992) Brill, E., A simple rule-based part of speech tagger, in Proc. of the 3rd Conf. on Applied Natural Language Processing, ACL, Trento Italy (Brill and Resnik,1994) Brill E., Resnik P., A rule-based ap- proach to prepositional phrase attachment disambigua- tion, in Proc. of COLING 94, 1198-1204",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Prepositional Phrase Attachment trough a Backed-off Model, 3rd. Workshop on Very Large Corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brooks",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Collins and Brooks,1995) Collins M. and Brooks J., Prepo- sitional Phrase Attachment trough a Backed-off Model, 3rd. Workshop on Very Large Corpora, MT, 1995",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A statistical approach to learning prepositional phrase attachment disambiguation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 1995,
"venue": "Hindle and Rooth,1993) Hindle D. and Rooth M., Structural Ambiguity and Lexical Relations",
"volume": "19",
"issue": "",
"pages": "103--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Franz,1995), Franz A., A statistical approach to learn- ing prepositional phrase attachment disambiguation, in Proc. of IJCAI Workshop on New Approaches to Learning for Natural Language Processing, Montreal 1995. (Hindle and Rooth,1993) Hindle D. and Rooth M., Struc- tural Ambiguity and Lexical Relations, Computational Linguistics, 19(1): 103-120.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Information-Based Evaluation Criterion for Classifier's Performance, Machine Learning",
"authors": [
{
"first": "(",
"middle": [],
"last": "Kononenko",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bratko ; Kononenko",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bratko ; Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "6",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Kononenko and Bratko, 1991) Kononenko I., I. Bratko, Information-Based Evaluation Criterion for Classi- fier's Performance, Machine Learning, 6,67-80, 1991. (Marcus et al, 1993) Marcus M., Santorini B. and Marcinkiewicz M., Building a large annotated corpus in English: The Penn Tree Bank, Computational Lin- guistics, 19(2): 313-330.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ratnaparkhi, Rynar and Roukos, A maximum entropy model for prepositional phrase attachment",
"authors": [
{
"first": "(",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1993,
"venue": "Resnik and Hearst, 1993) Resnik P. and Hearst M., Structural Ambiguity and Conceptual Relations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "(Ratnaparkhi et al, 1994), Ratnaparkhi, Rynar and Roukos, A maximum entropy model for prepositional phrase at- tachment. In ARPA Workshop on Human language Technology, plainsboro, N J, 1994. (Resnik and Hearst, 1993) Resnik P. and Hearst M., Struc- tural Ambiguity and Conceptual Relations, in Proc. of 1st Workshop on Very Large Corpora, 1993.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Percentage of collision sets Vs. number of colliding tuples for the LD.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Fig-",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": ",7Y ................. 'f,i ............ .... ............... ....... / ........... i .............. / .............. ~,",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "... 0,8 .................................................................. i 0,6 ............................................................... : .............................. i .............................. Cm, erage in three learning steps -Figure 2: Incremental Learning: Experimental Results",
"num": null
},
"TABREF2": {
"content": "<table><tr><td/><td>LD</td><td>ENEA</td></tr><tr><td/><td colspan=\"2\">(30,130 CS) (10,433 CS)</td></tr><tr><td>Average MI</td><td>13.65</td><td>12.9</td></tr><tr><td>IT</td><td>1.8</td><td>0.84</td></tr><tr><td/><td>3.2</td><td>.72</td></tr><tr><td>average frequency of esl's</td><td>1.9</td><td>1.43</td></tr></table>",
"html": null,
"text": "Mutual Information of co-occurring esl's",
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>two domains</td><td/><td/><td/></tr><tr><td/><td>LD</td><td>LD</td><td>ENEA</td></tr><tr><td/><td>(all esl's)</td><td>(high freq.esl's)</td><td>(all esl's )</td><td>(his</td></tr><tr><td>Average MI</td><td>11.5</td><td/><td>11.00</td></tr><tr><td>IT</td><td>3.10</td><td>2.15</td><td>2.65</td></tr><tr><td>0. 2</td><td>9.62</td><td>4.66</td><td>7.05</td></tr></table>",
"html": null,
"text": "Mutual Information of right-generalized esl's in",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>quency higher than average</td><td/><td/></tr><tr><td/><td>LD</td><td>ENEA</td></tr><tr><td colspan=\"2\">Average MI 11.60</td><td>11.60</td></tr><tr><td>a</td><td>2.05</td><td>1.12</td></tr><tr><td>a z</td><td>4.23</td><td>1.27</td></tr></table>",
"html": null,
"text": "Mutual Information of esl's occurring with fre-",
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>that occurs in the LD corpus almost 200 times.</td></tr><tr><td>The global plausibility of the syntactic collocates (i)</td></tr><tr><td>imposta-di-persona (tax-of-people) and (ii) reddito-di-</td></tr><tr><td>persona (income-of-people)is (i) 91.66 and (ii) 93.69.</td></tr></table>",
"html": null,
"text": "imposta sul reddito delle persone ... ( *... tax on the income of people ...)",
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Disambiguation Algorithm: Learning Phase</td></tr></table>",
"html": null,
"text": "CS = { el,e2,...eN } be any collision set in the corpus, where e~s are esl's Let -~ be the prior probability (pprior). Let MCPI(ei) be the Mutual Conditional Plausibility (2) of ei The posterior probability of el, pposti, REMOVE ei from CS, i.e. PUT it in the hell set OTHERWISE ei is a limbo esl. IF Vi \u00a2 j ei is in hell MOVE ej in the paradise set of all CSi step by step: esl with \"lower\" MCPI values (as globally derived from all the corpus) are filtered out; the MCP1 values are then redistributed among the remaining esl~s. In a picturesque way, we can say that discarded esl~s are damned (the hell is the right place), while survived esl~s are waiting for next judgment (the limbo is the right place for this wait state); at the end of the algorithm, if there is a single winner esl, it will gain the paradise. Persistently ambiguous esl of the corpus may remain still ambiguous within the corresponding collision sets: limbo will be their place forever. The algorithm will try to obtain as many paradise esl~s (i.e.",
"type_str": "table",
"num": null
},
"TABREF8": {
"content": "<table><tr><td colspan=\"4\">Let MCPl(ei) be the Mutual</td></tr><tr><td colspan=\"4\">Conditional Plausibility (2) of ei</td></tr><tr><td colspan=\"4\">The posterior probability of el, pposti, is defined as</td></tr><tr><td>ppos</td><td>-. __ , -</td><td>MCPI(ei)</td></tr><tr><td colspan=\"4\">Let r E [0, 11 be a given test threshold.</td></tr><tr><td colspan=\"4\">For each CS and for each ei E CS DO:</td></tr><tr><td/><td colspan=\"2\">e2_.ez/.t IF -prior &gt; 1 + r THE</td><td>N</td></tr><tr><td/><td colspan=\"3\">(F ei is correct, i.e. manually validated, THEN</td></tr><tr><td/><td/><td colspan=\"2\">++TruePositives;</td></tr><tr><td/><td colspan=\"2\">OTHERWISE</td></tr><tr><td/><td/><td colspan=\"2\">++ FalsePositives;</td></tr><tr><td/><td colspan=\"2\">OTHERWISE IF ~</td><td>&lt; 1 -r THEN</td></tr><tr><td/><td colspan=\"3\">IF e~ is correct pp~, Or~HEN</td></tr><tr><td/><td/><td colspan=\"2\">++ FalseNegatives;</td></tr><tr><td/><td colspan=\"2\">OTHERWISE</td></tr><tr><td/><td/><td colspan=\"2\">++True Negatives;</td></tr><tr><td/><td colspan=\"2\">++Ncases</td></tr><tr><td colspan=\"2\">precision =</td><td/></tr><tr><td/><td/><td colspan=\"2\">TruePositives-~ TrueNe~atives</td></tr><tr><td colspan=\"4\">TruePositives+ TrueNegatives+ FalsePositives\u00f7 FalseNegatives</td></tr><tr><td colspan=\"4\">recall = TruePositives-~ TrueNe~tatives</td></tr><tr><td/><td colspan=\"2\">Ncases</td></tr><tr><td colspan=\"2\">coverage =</td><td/></tr><tr><td colspan=\"4\">TruePositives~TrueNe~atives+ FalsePositives+FalseNegatives</td></tr><tr><td/><td/><td colspan=\"2\">Ncases</td></tr></table>",
"html": null,
"text": "Disambiguation Algorithm: Learning Phase Let CS= { el,e2,...eN } be any collision set the test set and Ncases be the number of test cases. Let -~ be the prior probability (pprior).",
"type_str": "table",
"num": null
},
"TABREF9": {
"content": "<table><tr><td>r</td><td colspan=\"3\">Coverage Recall i Precision</td></tr><tr><td>0.0</td><td>99.8%</td><td>0.75</td><td>0.749</td></tr><tr><td>0.05</td><td>95.0%</td><td>0.72</td><td>0.75</td></tr><tr><td>0.1</td><td>87.4%</td><td>0.69</td><td>0.79</td></tr><tr><td>0.2</td><td>77.8%</td><td>0.62</td><td>0.80</td></tr><tr><td>0.5</td><td>49.9%</td><td>0.42</td><td>0.84</td></tr><tr><td colspan=\"4\">240,493 esl's from the whole corpus. Of these only 10%</td></tr><tr><td colspan=\"4\">of esl's are initially unambiguous, while all the remain-</td></tr><tr><td colspan=\"4\">ing are limbo esl's. A testset of 1,154 hand corrected</td></tr><tr><td colspan=\"4\">collision sets was built. 5,285 different esl's are in the</td></tr><tr><td colspan=\"4\">testset. An average of 25.9% correct groups have been</td></tr><tr><td colspan=\"4\">found in the testset, again demonstrating a great level</td></tr><tr><td colspan=\"3\">of ambiguity in the source data.</td><td/></tr></table>",
"html": null,
"text": "Performance values of the MCP1 without learning",
"type_str": "table",
"num": null
},
"TABREF10": {
"content": "<table><tr><td>r</td><td colspan=\"3\">Coverage Recall Precision</td></tr><tr><td>'0.0</td><td>100%</td><td>0.610</td><td>0.610</td></tr><tr><td>0.05</td><td>96.5%</td><td>0.594</td><td>0.615</td></tr><tr><td>'0.1</td><td>93.8%</td><td>0.578</td><td>0.616</td></tr><tr><td>0.2</td><td>86.4%</td><td>0.544</td><td>0.631</td></tr><tr><td>\"0.5</td><td>71.9%</td><td>0.465</td><td>0.647</td></tr><tr><td colspan=\"4\">ure 2.C measures the Data Compression, that is the</td></tr><tr><td colspan=\"4\">mere reduction of eis's in the corpus. The compres-</td></tr><tr><td colspan=\"4\">sion is measured as the ratio between hell's els's and</td></tr><tr><td colspan=\"3\">the number of the observed esl's.</td><td/></tr></table>",
"html": null,
"text": "Performance values of the LA without learning",
"type_str": "table",
"num": null
}
}
}
}