{
"paper_id": "H91-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:32:57.241867Z"
},
"title": "A Statistical Approach to Sense Disambiguation in Machine Translation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stepheu",
"middle": [
"A Della"
],
"last": "Pietra",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "T",
"middle": [
"D"
],
"last": "Della",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe ~ statisticM technique for assigning senses to words. An instance of ~ word is assigned ;~ sense by asking a question about the context in which the word ~tppears. The qttestlou is constructed to ha, re high mutua,1 i~fformation with the word's translations.",
"pdf_parse": {
"paper_id": "H91-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe ~ statisticM technique for assigning senses to words. An instance of ~ word is assigned ;~ sense by asking a question about the context in which the word ~tppears. The qttestlou is constructed to ha, re high mutua,1 i~fformation with the word's translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An a,lluring a,spect of the staMstica,1 a,pproa,ch to ins,chine tra,nsla,tion rejuvena.ted by Brown, et al., [_1] is the systems.tic framework it provides for a.tta.cking the problem of lexicM dis~tmbigua.tion. For example, the system they describe tra,ns]a.tes th.e French sentence Je vais prendre la ddeision a,s [ will make the decision, thereby correctly interpreting prendre a.s make, The staMstica.l tra.nslation model, which supplies English. tra,nsla,tions of French words, prefers the more common tra.nslation take, but the trigram la.ngu.age mode] recognizes tha.t the three-word sequence make the decision is much more proba])le tha.n take the decision.",
"cite_spans": [
{
"start": 94,
"end": 100,
"text": "Brown,",
"ref_id": null
},
{
"start": 101,
"end": 108,
"text": "et al.,",
"ref_id": null
},
{
"start": 109,
"end": 113,
"text": "[_1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The system is not a.lwa,ys so successful. It incorrectly renders Je vats prendre ma propre ddcision a.s 1 will take my own decision. Here, the la.nguage model does not realize tha, t take my own decision is improbable beca,use take a,nd decision no longer fall within a. single trigram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Errors such a.s this a,re common because otlr sta,tistical models o.ly capture loca,l phenomena,; if l, he context necessa,ry to determine ~ transla, tion fa,lls outside the scope of our models, the word is likely to be tra,nsla,ted incorrectly. However, if the re]evant co.-text is encoded locally, the word should be tra, nsla, ted correctly. We ca,n a,chieve this within the traditionM p,~radigm of a.na,lysis -tra,nsfer -synthesis by incorpora,ting into the ana,lysis pha,se a, sense--disa, mbigu~tion compo,ent that assigns sense la, bels to French words. ]if prendre is labeled with one sense in the context of ddcisiou but wil.h a, different sense in other contexts, then the tra,nsla,tion model will learn from training data tha,t the first sense usua,lly tra.nslates to make, where.a,s the other sense usua,lly tra.nslates to take.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "In this paper, we describe a. sta, tistica,1 procedure for constructing a. sense-disambiguation eomponent that label words so as to elucida.te their translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "As described by Brown, et al. []] , in the sta.tistica.1 a.l)proa.ch to transla, tion, one chooses for tile tra,nsla,tion of a. French sentence .F, tha.t English sentence E which ha.s the greatest l)robability, Pr(EIF), a.ccordi,g to a, model of th.e tra, ns]ation process. By Ba.yes' r,,le, Pr(EI ~') = Pr(E) Pr(FIE )/Pr(.F). Since the (lenomina.tor does not del)end on E, the sentence for which Pr(EIF ) is grea, test is also the sentence for which the product Pr(E) Pr(FIE ) is grea~test. The first term in this product is a~ sta, tisticM cha.ra.cterization of the, English ]a.nguage a, nd the second term is a, statistical cha.ra.cteriza,timt of the process by which English sentences are tra.nslated into French. We can compute neither of these probabilities precisely. Rather, in statistical tra.nslat, iou, we employ a. language model P,,od~l(E) which 1)rovide, s a,n estima.te of Pr (E) and a, lrav, slatiov, model which provides a,n estimate of t'r ( Vl/~:).",
"cite_spans": [
{
"start": 16,
"end": 33,
"text": "Brown, et al. []]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
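{
"text": "To make the decision rule concrete, here is a minimal sketch of noisy-channel decoding in Python. The toy log-probability tables and the two-sentence candidate list are hypothetical illustrations, not the models of [1]:

# Hypothetical toy scores: log Pr(E) from a language model and
# log Pr(F|E) from a translation model.
LOG_P_LM = {
    'I will make the decision': -7.1,
    'I will take the decision': -9.4,
}
LOG_P_TM = {
    ('Je vais prendre la decision', 'I will make the decision'): -12.0,
    ('Je vais prendre la decision', 'I will take the decision'): -11.2,
}

def decode(f, candidates):
    # Choose E maximizing Pr(E) Pr(F|E); the denominator Pr(F) is
    # constant in E and can be ignored.
    return max(candidates, key=lambda e: LOG_P_LM[e] + LOG_P_TM[(f, e)])

print(decode('Je vais prendre la decision',
             ['I will make the decision', 'I will take the decision']))
# The language model outweighs the translation model's preference for
# 'take', so 'make the decision' wins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},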
{
"text": "The performance of the system depends on the extent to which these statistical models approximate the actual probabilities. A useful gauge of this is tile cross entropy 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "E,F which measures the average uncertainty that the model has about the English translation E of a French sentence F. A better model has less uncertainty and thus a lower cross entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H(EIF)-= -~ Pr(E,F) log PmoZ~,(EI F) (1)",
"sec_num": null
},
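{
"text": "As a sketch, the cross entropy (1) can be estimated on a held-out sample of sentence pairs; model_log2_prob is a hypothetical function returning log2 P_model(E | F):

def cross_entropy(pairs, model_log2_prob):
    # Empirical estimate of H(E|F): the average number of bits the model
    # needs to encode E given F, over pairs (E, F) drawn from the true
    # distribution (here, a held-out corpus).
    return -sum(model_log2_prob(e, f) for e, f in pairs) / len(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H(E|F) = - Σ_{E,F} Pr(E,F) log P_model(E|F) (1)",
"sec_num": null
},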
{
"text": "A shortcoming of the architecture described above is that it requires the statistical models to deal directly with English and French sentences. Clearly the probability distributions Pr(E) and Pr(FIE ) over sentences are immensely complicated. On the other hand, in practice the statistical models must be relatively simple in order that their parameters can be reliably estimated from a manageable amount of training data. This usually means that they are restricted to the modeling of local linguistic phenonrena. As a. result, the estimates Pmodcz(E) and Pmodd(F I E) will be inaccurate. This difficulty can be addressed by integrating statistical models into the traditional machine translation architecture of analysis-transfer-synthesis. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H(EIF)-= -~ Pr(E,F) log PmoZ~,(EI F) (1)",
"sec_num": null
},
{
"text": "English sentence E from E t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A synthesis component which reconstructs an",
"sec_num": "3."
},
{
"text": "For statistical modeling we require that the synthesis transformation E ~ ~ E be invertible. Typically, analysis and synthesis will involve a sequence of successive transformations in which F p is incrementally tin this equation and in the remainder of the paper, we use bold face letters (e.g. E) for random variables and roman letters (e.g. E) for the values of random variables. constructed from F, or E is incrementally recovered from E I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A synthesis component which reconstructs an",
"sec_num": "3."
},
{
"text": "'File purpose of analysis and synthesis is to facilitate the task of statistical transfer. This will be the case if the probability distribution Pr (E ~, F ~) is easier to model then the original distribution Pr (E, F). In practice this nleans that E' and F' should encode global linguistic facts about E and F in a local form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A synthesis component which reconstructs an",
"sec_num": "3."
},
{
"text": "The utility of tile analysis and synthesis transformatious can be measured in terms of cross-entropy. Thus transfotma.tions F -+ F' and t~/ ---+ E are useful if we Call construct models ' P ,~od~t( F I E') and P',,,oa+,(E') such that H(E' I r') < H(EIF ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A synthesis component which reconstructs an",
"sec_num": "3."
},
{
"text": "In this paper we present a statistical method for automatically constructing analysis and synthesis transformations which perform cross-lingual word-sense labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "The goal of such transformations is to label the words of a French sentence so as to ehlcidate their English. trauslations, and, conversely, to label the words of an English sentence so as to elucidate their French translations. For exa.mple, in some contexts the French verb prendre translates as to take, but in other contexts it translates as to make. A sense disambiguation transformation, by examining the contexts, might label occurrences of prendre that likely mean to take with one lal)el, and other occurrences of prendre with another label. Then the uncertainty in the translation of prendre given the label would be less than the uncertainty in the translation of prendre without the label. All, hough tile label does not provide any infof mation that is not already present in the context, it encodes this information locally. Thus a local statistical model for the transfer of labeled sentences should be more accurate than one for the transfer of unlal)eled ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "While the translation o:f a word depends on many woMs in its context, we can often obtain information by looking at only a single word. For example, in the sentence .Ic vats prendre ma propre ddeision (I will 'make my own decisiou), tile verb prendre should be translated as make because its object is ddcision. If we replace ddcision by voiture then prendre should be translated as take: Je vais prendre ma propre voiture (l will take my own car). Thus we can reduce the uncertainity in the translation of prendre by asking a question about its object, which is often the first noun to its right, and we might assign a sense to prendre",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "based upon the answer to this question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "In It doute que Ins ndtres gagnent (He doubts that we will win), the word il should be translated as he.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "On the other hand, if we replace doute by faut then il should be translated as it: It faut que les nStres gagnent (It is necessary that we win). Here, we might assign a sense label to il by asking a,bout the identity of the first verb to its right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "These examples motivate a. sense-labeling scheme in which the la.bel of a word is determined by a question aJ)out an informant word in its context. In the first example, the informant of prendre is the first noun to the right; in. the second example, the infof mant of ilis the first verb to the right. If we want to assign n senses to a word then we can consider a question with n answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
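{
"text": "A sketch of this labeling rule in Python. The informant-extraction helper, the part-of-speech tags, and the particular binary question (including the toy list of 'make' objects) are hypothetical illustrations:

def first_to_right(words, tags, i, pos):
    # Informant site: first word with part of speech pos to the right of i.
    for j in range(i + 1, len(words)):
        if tags[j] == pos:
            return words[j]
    return None

# A 2-ary question about the informant of 'prendre': objects that suggest
# the 'make' sense get sense 1, all others sense 2.
MAKE_OBJECTS = {'decision', 'parole', 'connaissance'}

def sense_of_prendre(words, tags, i):
    informant = first_to_right(words, tags, i, 'NOUN')
    return 1 if informant in MAKE_OBJECTS else 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},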
{
"text": "We can fit this scheme into the fl:amework of the previous section a.s follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSE DISAMBIGUATION",
"sec_num": null
},
{
"text": "tures E' and F r consist of sequences of words labeled by their senses. Thus F' is a sentence over the expanded vocabulary whose 'words' f' are pairs (f,l) where f is a word in the original French vocabulary and 1 is its sense label. Similarly, E \u00a2 is a sentence over the expanded vocabulary whose words e t are pairs (e, l) where e is a.n English word and l is its sense label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Intermediate Structures. The intermediate struc-",
"sec_num": null
},
{
"text": "French word and each English word we choose an informant site, such as first noun to the left, and an n-ary question about the va,lue of the informant at that site. The analysis transformation F ~ U and the inverse synthesis transfof marion E ~ E ~ map a sentence to the intermediate structure in which each word is labeled by a sense determined by the question a])out its informant. The synthesis transformation E ~ ~ E maps a labeled sentence to a sentence in which the labels have been removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The analysis and synthesis transformations. For each",
"sec_num": null
},
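{
"text": "A sketch of these transformations as operations on sequences of (word, label) pairs; the questions object, with a per-position answer method, is an assumed stand-in for the informants and questions chosen below:

def label_sentence(words, questions):
    # Analysis F -> F' (and inverse synthesis E -> E'): attach to each word
    # the answer of its question applied to its informant.
    return [(w, questions.answer(words, i)) for i, w in enumerate(words)]

def strip_labels(labeled):
    # Synthesis E' -> E: remove the labels. Because the labels are a
    # deterministic function of the sentence, E' can be recomputed from E,
    # so the transformation is invertible in the required sense.
    return [w for w, _ in labeled]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The analysis and synthesis transformations. For each",
"sec_num": null
},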
{
"text": "The probability models. We use the translation model that was discussed in [l] for both e;~oaet(F'lE') and for P,nodd(FIE). We use a trigram language model. [1] for P,,~oa~a(E) and",
"cite_spans": [
{
"start": 75,
"end": 78,
"text": "[l]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The analysis and synthesis transformations. For each",
"sec_num": null
},
{
"text": "In order to construct these tra.nsformations we need to choose for each English and French word a.n informant and a question. As suggested in the previous section, a criterion for doing this is that of minimizing the (:ross entropy H(E' I F'). In the remainder of the l)aper we present an algorithm for doing this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The analysis and synthesis transformations. For each",
"sec_num": null
},
{
"text": "We begin by reviewing our statistical model for the translation of a sentence from one language to another ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE TRANSLATION MODEL",
"sec_num": null
},
{
"text": "The l)urpose of a translation model is to compute the prol)al)i]ity P,,odet(T [ S) of transforming a source sentence S into a. target sentence T. For our simple mode], we assume that each word of S independent]y I)rodnces zero or mote words from the target vocabulary and that these words are then ordered to produce T. We use the term alignment to refer to an association between words in T and words in S. 3. The pa.ra,meters of the distortion model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},
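{
"text": "A sketch of the joint probability of equation (3) (see FIGREF0), with the distortion term stubbed out; the log-parameter tables log_p_t and log_p_n are hypothetical inputs:

def log_p_joint(source, target, align, log_p_t, log_p_n):
    # log P(T, A | S): each target word t is generated by the source word
    # align[j] it is aligned to, and each source word s generates n(s)
    # target words. The distortion term of (3) is omitted here.
    lp = 0.0
    fertility = [0] * len(source)
    for j, t in enumerate(target):
        i = align[j]
        lp += log_p_t[(t, source[i])]
        fertility[i] += 1
    for i, s in enumerate(source):
        lp += log_p_n[(fertility[i], s)]
    return lp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},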
{
"text": "We determine values for these parameters using maximv.m likelihood training. Thus we collect a large bilingual corpus consisting of pairs of sentences (S, T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},
{
"text": "which are translations of one another, and we seek parameter va.lues that maximize the likelihood of this training data as computed by the model. This is equivalent to minimizing the cross entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},
{
"text": "If(T IS) = -~ Pt~,i,,(S,T) log P,,,od,t(TI S) (4) S,T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},
{
"text": "where Ptr~.i,~(S,T) is the empirical distribution obtained by counting the number of times that the pair (S, T) occurs in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},
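{
"text": "A sketch of maximum likelihood training viewed as in (4): the empirical distribution is obtained by counting sentence pairs, and the quantity to minimize is the cross entropy under that distribution; model_log2_prob is a hypothetical scoring function returning log2 P_model(T | S):

from collections import Counter

def empirical_distribution(corpus):
    # P_train(S, T): relative frequency of each sentence pair.
    counts = Counter(corpus)
    return {pair: c / len(corpus) for pair, c in counts.items()}

def training_cross_entropy(corpus, model_log2_prob):
    # H(T | S) of (4). Minimizing this over the model parameters is
    # equivalent to maximizing the likelihood of the training data.
    p_train = empirical_distribution(corpus)
    return -sum(p * model_log2_prob(s, t) for (s, t), p in p_train.items())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of the Model",
"sec_num": null
},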
{
"text": "The sum over alignments in (2) is too expensive to compute directly since the number of alignments increases exponentially with sentence length. It is useful to approximate this sum by the single term corresponding to the alignment, A(S,T), with greatest probability. We refer to this approximation as the Viterbi approzimation and to A(S,T) as the Viterbi alignment. ",
"cite_spans": [
{
"start": 27,
"end": 30,
"text": "(2)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Viterbi Approximation",
"sec_num": null
},
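{
"text": "If the fertility and distortion terms are ignored, the highest-probability alignment factorizes over target words, so a sketch of the Viterbi alignment reduces to an independent argmax per target word (log_p_t is the hypothetical word-translation table of the earlier sketches):

def viterbi_alignment(source, target, log_p_t):
    # Approximate A(S, T): align each target word to the source word most
    # likely to have generated it. This is exact only for the lexical term
    # of (3); fertility and distortion are ignored here.
    return [max(range(len(source)), key=lambda i: log_p_t[(t, source[i])])
            for t in target]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Viterbi Approximation",
"sec_num": null
},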
{
"text": "We wa.nt a similar exl)ression for the cross entropy use the generic symbol ~ to denote ~ normalizing fa.ctor that norgn com, er!s counts to probabilities. We let the actua.1 value of .ol I,e implicit from the context. Thus, for example, in the left ha.nd equation of (7), the normalizing factor is norm = ~,,, c(s, t) which equals tile a,verage length of target sentences. In the right hand equation of (7), the normalizing fa.ctor is the average ]engt.h of source sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(10)",
"sec_num": null
},
{
"text": "The best value of the information is thus a.n infimiim over both the choice for 2. and the choice for the q . This suggests the following iterative procedure for obtaining a good 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(10)",
"sec_num": null
},
{
"text": "1. For given q, find the best E: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(10)",
"sec_num": null
},
{
"text": "E(x) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(10)",
"sec_num": null
},
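{
"text": "A sketch of this alternating procedure in the style of k-means. Each informant value x carries a conditional distribution p(s | x) over source words; step (1) assigns x to the sense c whose representative distribution q(s | c) is nearest in Kullback-Leibler divergence, and step (2) refits each q(s | c) from its members. The uniform weighting of informant values and the smoothing floor are simplifying assumptions:

import math
from collections import defaultdict

def kl(p, q):
    # D(p || q); q is floored to avoid taking the log of zero.
    return sum(px * math.log(px / max(q.get(s, 0.0), 1e-12))
               for s, px in p.items() if px > 0)

def alternating_minimization(p_s_given_x, n_senses, iterations=20):
    xs = list(p_s_given_x)
    sense = {x: i % n_senses for i, x in enumerate(xs)}  # arbitrary start
    for _ in range(iterations):
        # Step (2): refit q(s | c) as the average of member distributions.
        q = defaultdict(lambda: defaultdict(float))
        size = defaultdict(int)
        for x, c in sense.items():
            size[c] += 1
            for s, p in p_s_given_x[x].items():
                q[c][s] += p
        for c in q:
            for s in q[c]:
                q[c][s] /= size[c]
        # Step (1): reassign each x to its nearest sense in KL divergence.
        new = {x: min(q, key=lambda c: kl(p_s_given_x[x], q[c])) for x in xs}
        if new == sense:
            break  # no further improvement
        sense = new
    return sense",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(10)",
"sec_num": null
},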
{
"text": "In this paper we presented a general framework for integrating analysis and synthesis with statistical translation, and within this framework we invcstigated cross-lingnal sense labeling. We gave an algorithm for antoinatically constructing a simple labeling transformation that assigns a sense to a word by asking a question about a single word of the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": null
},
{
"text": "In a companion paper [3] we present results of translation experiments using a sense-labeling cvnlponent that employs a similar algorithn~. We are currently studying the auton~atic construction of more complex transformations which utilize more detailed contextual informa tion.",
"cite_spans": [
{
"start": 21,
"end": 24,
"text": "[3]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": null
}
],
"back_matter": [
{
"text": "where H(S) is the cross entropy of P,+od+t(S) and I(s, t) is tire mutual information between t and s for the probability distribution p(s, t).The additional approximation that we require is HiT) ,~ LTHit) =---LT ~p(t)log pi t) t (12) where p(t) is the marginal of p (s,t) . This amounts to approximating Pmod\u00a2l(T) by the unigram distribution that is closest to it in cross entropy. Granting this, formula (11) is a consequence of (9) and of the identities",
"cite_spans": [
{
"start": 229,
"end": 233,
"text": "(12)",
"ref_id": null
},
{
"start": 266,
"end": 271,
"text": "(s,t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "For sensing target sentences, a question about an informant is a f, nction ~ from the target vocabulary into the set of possible senses. If the informant of t is z, then t is assigned the sense 5(z). We want to choose the function fi(z) to minimize the cross entropy It(S IT'). Front formula (34), we see that this ?~Or77%x:e(z)=cAn exhaustive search for the best ~ requires a computation that is exponential in the number of values of x and is not practical. In previous work [3] we found a good ~ usi,g the flip-flop algorithm [4] , which is only al)l)licable if the number of senses is restricted to two. Since then, we have developed a different Mgorithm that can be used to find 5 for any number of senses.The algorithm uses the technique of alternating minimization, and is similar to the k-means algorithm for determining pattern clusters and to the generalized Lloyd algorithm for designing vector quantitizers. A discussion of alternating minimization, together with refcrences, can be found in Chou [5] .The algorithm is ba,sed on tile fact that, up to a constant independent of 5, the mutual information l(s,t t I t) can be expressed as an infimum over condi- ",
"cite_spans": [
{
"start": 477,
"end": 480,
"text": "[3]",
"ref_id": "BIBREF3"
},
{
"start": 529,
"end": 532,
"text": "[4]",
"ref_id": "BIBREF4"
},
{
"start": 1009,
"end": 1012,
"text": "[5]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HCS IT) = Hi T I S) -HCT) + I/iS), HCt,) = HCt I +) + I(+, t). (13) Target Questions",
"sec_num": null
},
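{
"text": "A sketch of the underlying score for comparing candidate informant sites and questions: the mutual information between the informant's value and the aligned source word, estimated from counts collected over Viterbi alignments. The list of (informant value, source word) pairs is a hypothetical input; a candidate is better when this score is higher:

import math
from collections import Counter

def mutual_information(pairs):
    # I(x; s) in bits from (informant_value, source_word) pairs: how much
    # the informant tells us about the translation.
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2(c * n / (p_x[x] * p_y[y]))
               for (x, y), c in p_xy.items())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Questions",
"sec_num": null
},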
{
"text": "We now present an algorithm for finding good informants and questions for sensing. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SELECTING QUESTIONS",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A stat,istic:xl a.pproa.ch to madline transla",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "((A stat,istic:xl a.pproa.ch to madline transla.tion,))",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Initial estimates of word tra.nsla.tion prol)a.Bilities",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dellal'ietra",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Uella",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "It",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. Dellal'ietra, V. Uella.Pietra., and It. Mercer, \"Initial estimates of word tra.nsla.tion prol)a.Bilities.\" In prepa,ra.tion.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word sense disainbigua.tion wing statistica.1 metl~ods,\" in proceeding.^ 29th Annual h4eeting of the ~'ssociatioltjor",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Della",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "Comp~itationnl Lin-g~rislics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. Della.Pietra., V. DellaPietra, and R. Mercer, \"Word sense disainbigua.tion wing statistica.1 metl~ods,\" in proceeding.^ 29th Annual h4eeting of the ~'ssociatioltjor Comp~itationnl Lin- g~rislics, (Berkeley, CA), June 1991.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An iterat,ive \"flip-flop\"') a.pproximation of the most inIorma.tive split in the construction of decision trees",
"authors": [
{
"first": "A",
"middle": [
"Na"
],
"last": "Das",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Na",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hamoo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Picheny",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Powell",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the IEEE Inlernnlionir.1 Con,jerence on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Na.das, D. Na.hamoo, M. Picheny, a.nd J. Pow- ell, \"An iterat,ive \"flip-flop\"') a.pproximation of the most inIorma.tive split in the construction of de- cision trees,\" in Proceedings of the IEEE Inlernn- lionir.1 Con,jerence on Acoustics, Speech and Signal Processing, (Toronto, Cana.da.), May 1991.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chon, Applicntions of I~? , j o r m a t i o~ Theory to Pnttcrn Recognition and the Design of Decision 'I?ree.s and Trellises. PhD t,hesis, Sta.nford Universit,y",
"authors": [],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "1' . Chon, Applicntions of I~? , j o r m a t i o~ Theory to Pnttcrn Recognition and the Design of Decision 'I?ree.s and Trellises. PhD t,hesis, Sta.nford Univer- sit,y, .Inne 1988.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The probability P,,oda(T I S) is the sum of the probabilities of all possible alignnmnts A between S and T The joint probal)ility P,,odft(7', A I S) of T and a patticula.r a.]ignmeut is given by 1',,,od\u00a2,(7', A IS) = (a) H P(tl\"~A(t)) II P(iZA(s) ls)-Pdi.'toTtio'(T, A I S)."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "llere .iA(t) is tile word of ,5' aligned with t in the alignmen t A, a.nd fi.A (s) is the number of words of T aligned with s ill A. Tile distortion model Pdistortlon describes tile ordering of tile words of T. We will not give it explicitly. The parameters in (3) are I. The l)robabilities p(n ] s) that a word s in the source language generates n target words; 2. \"File prol)abilities p(t I s) that s generates the word t;"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Let c(s,t) be the expected number of times that s is aligned with t in the Viterbi alignnmnt of a pair of sentences drawn at random from the training data.. Let c(s, n) be the expected number of times that s is aligned with n words. Then c(s,t) = ~ P,~o,~(S,T)c(s,t l J(S,T) ) S,T e(s,n) = ~ Pt,~i,(S,T)c(s, n I A(S,T) ) (5) S,T where c(.s,t I A) is the number of times that s is aligned with t in the alignment A, and c(s, n I A) is the number of times that s generates n target words in A. It can be shown [2] that these counts are also averages with respect to the model c(s, t) = ~ P,,,oda(S, T) c(s, t I A(,5', T) ) S,T ~(s,~) = ~ P.,o~,(S,T)e(s,,~ I A(S,T)). (6) S,T By normalizing the counts c(s,t) and c(s,n) we obtain probability distributions p(s, t) and p(s, n) 2 these equations and in the remainder of the paper, weThe conditional distributions p(t I s) and p(n Is) are the Viterbi approximation estimates [or the parameters of the model. The marginals satisfy s) and u(t) are the unigram distributions of s and t and Fz(s) = ~ p(n I s)n is the average number of target words aligned with s. These formulae reflect the fact that in any alignment each target word is aligned with exactly one source word. CROSS ENTROPY ]n this section we express the cross entropies H ( S I T ) and ][(S ~ I Tt) in terms of the information between source and target words. In the Viterbi approximation the cross entropy H(T IS) is given by H(T I s) : Lr { H(t I s) + H(n t ~) } (9) where LT is the average length of the target sentences in the training data, and lt(t I s) and It(n I s) are the conditional entropies for the probability distributions 1,(s, t) and p(.., ~): H(t Is) = -~p(s,t) log p(tls) ,%t ,\"(,, I~) : -~p(,,,,~) log v(.,l~)."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "o~.dT I S) P,,~o~z(S),this cross entropy depends on both the translation model, ]',,,oact(T I S), and the language model, P,,.oact(S). We now show that with a suitable additional approxitn ation H(S I T) : Lr { H(n I+) -~(+,t) } + H(S) (~1)"
},
"TABREF2": {
"num": null,
"html": null,
"text": "argmin,D(p(s ( x , t ) ; g(s ( c)). Iunction 2: from the source voca1)iila.ry int'o the set of possible senses. We want to chose 2. I s) + T( n , s' I s ). In analogy to (18), and we can again find a good 2 by alternating minimiza.tion.",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">2. For this El find the best 3:</td><td/><td/></tr><tr><td colspan=\"5\">3. Iterate steps (1) a.nd (2) ilntil no fnrther increase in I ( s , t' I t ) results.</td></tr><tr><td colspan=\"2\">Source Questions</td><td/><td/><td/></tr><tr><td colspan=\"5\">For sensing source sentences, a, question a.bont an</td></tr><tr><td colspan=\"5\">informant is a this is</td></tr><tr><td>equivalent</td><td>to</td><td>~na.ximizing</td><td>the</td><td>sum</td></tr><tr><td>I ( t , s t</td><td/><td/><td/><td/></tr></table>"
}
}
}
}