{
"paper_id": "1992",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:44:28.207447Z"
},
"title": "Analysis, Statistical Transfer, and Synthesis in Machine Translation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We reinterpret the system described by Brown et al. [1] in terms of the analysis-transfersynthesis paradigm common in machine translation. We describe enhanced analysis and synthesis components that apply a number of simple linguistic transformations so the transfer component operates from a string of French morphemes to a string of English morphemes. We report the results of a comparison of the new system with the old system on 100 short test sentences. The new system correctly translates 60% of these sentences while the old system correctly translates only 39% of them.",
"pdf_parse": {
"paper_id": "1992",
"_pdf_hash": "",
"abstract": [
{
"text": "We reinterpret the system described by Brown et al. [1] in terms of the analysis-transfersynthesis paradigm common in machine translation. We describe enhanced analysis and synthesis components that apply a number of simple linguistic transformations so the transfer component operates from a string of French morphemes to a string of English morphemes. We report the results of a comparison of the new system with the old system on 100 short test sentences. The new system correctly translates 60% of these sentences while the old system correctly translates only 39% of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The analysis-transfer-synthesis architecture is shown in Figure 1. Brown et al. [1] describe a statistical model for generating English sentences and for translating these sentences into French. They show that this model can be combined with a stack-based search strategy to make a system for translating sentences from French to English. Their system is an example of the analysis-transfer-synthesis architecture in which the analysis and synthesis components have become vestigial: their analysis component simply transforms character strings *This work was supported, in part, by DARPA contract N00014-91-C-0135, administered by the Office of Naval Research.",
"cite_spans": [
{
"start": 76,
"end": 79,
"text": "[1]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "into strings of French words while their synthesis component carries out the reverse, transforming strings of English words into character strings. Their statistically based transfer component carries out the transformation from French words to English words directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we elaborate the analysis and synthesis components. Figure 2 shows the structure of our system including its analysis and synthesis components. Notice that we do not model English character strings directly, but rather the intermediate English text. Brown et al. [1] make use of a large collection of aligned French-English sentence pairs [2] as data from which they algorithmically extract the parameters of their translation and language models [3] . In order for us to extract the parameters of our translation and language models, we need a large sample of input-output pairs from the English-to-French translation model. It is important, therefore, that the transformation induced by the synthesis component be invertible, since then, by passing the French member of a French-English sentence pair through the analysis component, and the English member of the pair through the inverse of the synthesis component, we can create an input-output pair for the translation model. From the data processing theorem [4] , we know that we can expect neither the analysis component nor the inverse of the synthesis component to add information to a sentence. At best, they can rearrange information that is already present; at worst, they can actually destroy information, mapping two or more distinct sentences into the same intermediate structure. Sometimes, we intend to destroy information as, for example, when we correct misspelled words or choose a canonical spelling for words with several variants, but the main value of these components comes from making available locally to our primitive statistical models information that is manifest from the global structure of a sentence.",
"cite_spans": [
{
"start": 278,
"end": 281,
"text": "[1]",
"ref_id": null
},
{
"start": 354,
"end": 357,
"text": "[2]",
"ref_id": "BIBREF2"
},
{
"start": 462,
"end": 465,
"text": "[3]",
"ref_id": "BIBREF3"
},
{
"start": 1028,
"end": 1031,
"text": "[4]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of the paper, we describe the five steps that make up the analysis component and the inverse of the synthesis component for our new system and present results showing their effect on the performance of the system. These steps are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Transform character strings to word strings. The intimate familiarity with reading and writing that we all share predisposes us to a ready acceptance of the idea that a body of text is a collection of words strung together when, in reality, it is a collection of characters strung together. In the main, the passage from a string of characters to a string of words is uneventful with words being delineated by blanks and punctuation, but at intervals it becomes tortuous. Successful navigation through a lengthy text demands many, often arbitrary, decisions. We encode many of these decisions in a list of several thousand character sequences that we treat as single words. We also analyze a number of character sequences into two or more words, writing, for example, a les for aux, de le for du, and it was not for 'twasn't. Except for hyphens connecting two words, we treat individual punctuation marks as words. We also treat digits as words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to distinguish between sequences like 1 2 and 12, we attach a code to each word showing how it is connected to the previous word. For English text, we use three codes according as the word is separated by a space from the previous word, connected directly to it, or separated from it by an intervening hyphen. Thus, light house may represent light house, lighthouse, or light-house depending on the value of the connection code attached to house. For French text, we use two additional codes that allow us to represent, for example, a-t-il and qu'il as a il and que il depending on the value of the connection code attached to il.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We must also deal with the complication presented by uppercase and lowercase letters. Simplicio is again giving his master a hard time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Simplicio: When do two sequences of characters represent the same word?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Salviati: When they're the same sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Simplicio: So the and The are different words?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Salviati: Don't be ridiculous! You have to ignore differences in case. Simplicio: But how do you know when to ignore case and when not to?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Salviati: You just know! Sadly, computers do not just know: they have to guess. Happily, the entropy of case is only 0.04 bits per letter [5] , so guessing is not entirely out of the question. We imagine each word to consist of an uncased token together with a case pattern that specifies the case of each letter. The case pattern of a word in context is a corrupted version of the true-case pattern that it would have in the absence of typographical error or arbitrary convention. Thus, in Today, John works for MacPherson at IBm, the first and last words have as tokens TODAY and IBM, as case patterns UL + and UUL + , and as true-case patterns L + and U + . The case and true-case patterns agree for the remaining words in this example.",
"cite_spans": [
{
"start": 138,
"end": 141,
"text": "[5]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "We assign true-case patterns using the following algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "1. If the word is part of a name, choose as its true-case pattern the most probable true-case pattern for the word that also begins with U.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "2. Otherwise, if the word belongs to a list of words that have a unique true-case pattern, choose that pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "3. Otherwise, if the word begins a sentence, choose as its true-case pattern the most probable true-case pattern for the word. 4. Otherwise, choose as the true-case pattern the case pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
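The four-step true-case decision above is simple enough to sketch directly. The lookup tables passed in here (unique patterns, most probable patterns) are assumptions standing in for the lists described in the surrounding text; the function names are hypothetical.

```python
def true_case_pattern(word, case_pattern, *, in_name, sentence_initial,
                      unique_pattern, most_probable_pattern,
                      most_probable_upper_pattern):
    """Four-way decision for a word's true-case pattern, following the
    numbered steps above. All lookup arguments are hypothetical stand-ins
    for the lists and statistics the paper describes."""
    if in_name:
        # step 1: names take the most probable pattern beginning with U
        return most_probable_upper_pattern
    if unique_pattern is not None:
        # step 2: the word is on the unique-true-case list
        return unique_pattern
    if sentence_initial:
        # step 3: undo sentence-initial capitalization
        return most_probable_pattern
    # step 4: otherwise trust the observed case pattern
    return case_pattern
```

For example, a sentence-initial "The" with observed pattern UL+ falls through to step 3 and receives the most probable pattern L+.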
{
"text": "We recognize names with a finite-state machine that incorporates a list of 12,937 common last names and 3,717 common first names, as well as a number of onomastic antecedents (such as Mr., Mlle., Dr., etc.) and a number of onomastic consequents (such as Jr., Sr., Ph.D., etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "We assign a word to the list of words with a unique true-case pattern provided the entropy of case patterns for the word is less than 0.3 bits. In addition, we have examined the 40,000 most frequent English words and assigned a unique true-case pattern to 9,144 of them. We have also examined the 10,000 most frequent French words and have assigned a unique true-case pattern to 3,794 of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
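The 0.3-bit threshold above is the entropy of a word's empirical case-pattern distribution. A minimal sketch of that computation (the function name is ours, not the paper's):

```python
from collections import Counter
from math import log2

def case_pattern_entropy(observed_patterns):
    """Entropy, in bits, of the empirical case-pattern distribution of a
    word. Under the scheme above, a word whose entropy falls below 0.3
    bits is assigned a unique true-case pattern."""
    counts = Counter(observed_patterns)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())
```

A word seen 99 times as L+ and once as UL+ has entropy of about 0.08 bits, comfortably under the threshold, so its occasional capitalization is treated as noise.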
{
"text": "We have determined the most probable case pattern for each of the remaining words by examining a collection of 67 million English words and a collection of 72 million French words, in each case excluding words that begin sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case",
"sec_num": "2.1"
},
{
"text": "Using the steps described above, we have processed 1,778,620 pairs of French and English sentences from our Canadian Hansard corpora [2] . Because many of the words that appear only once in this collection are typographical errors, we excluded all such singletons from our vocabularies. In this way, we arrived at an English vocabulary of 40,806 words, and a French vocabulary of 57,800 words. We replaced all singletons in both texts with the unknown word.",
"cite_spans": [
{
"start": 133,
"end": 136,
"text": "[2]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Data",
"sec_num": "2.2"
},
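The vocabulary construction above (count words, drop singletons as probable typographical errors, replace them with the unknown word) can be sketched as follows; the function and token names are ours.

```python
from collections import Counter

def build_vocabulary(sentences, unknown="<unk>"):
    """Count words over a corpus of tokenized sentences, drop words that
    occur only once (often typographical errors), and replace those
    singletons in the text with a single unknown-word token."""
    counts = Counter(w for s in sentences for w in s)
    vocab = {w for w, n in counts.items() if n > 1}
    cleaned = [[w if w in vocab else unknown for w in s] for s in sentences]
    return vocab, cleaned
```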
{
"text": "As a prelude to syntactic and morphological analysis, we tag words in context with parts of speech to show their grammatical function. We use 163 tags for the English text and 157 tags for the French text, roughly categorized as shown in Table 1 . We employ a statistical, hidden Markov model tagging algorithm [6, 7] embodied in a set of programs developed by Merialdo [8] .",
"cite_spans": [
{
"start": 311,
"end": 314,
"text": "[6,",
"ref_id": "BIBREF6"
},
{
"start": 315,
"end": 317,
"text": "7]",
"ref_id": "BIBREF7"
},
{
"start": 370,
"end": 373,
"text": "[8]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Part-of-Speech Annotation",
"sec_num": "3"
},
{
"text": "Tagging algorithms of this type are most successful when their parameters are extracted from a large body of hand-labelled data. We had at our disposal 1.9 million words of hand-labelled English text, divided about evenly between text from the Associated Press newswire and text from the English half of our Hansard data. This data was labelled at Lancaster University under the direction of Geoff Leech. We used 1,666,191 words of this data for training and 232,090 words for smoothing. On the remaining 23,062 words of test data, the trained system correctly labels 94% of the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Annotation",
"sec_num": "3"
},
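The tagger described above is a hidden Markov model decoded over hand-labelled training statistics. A minimal bigram-HMM Viterbi decoder of the same general shape (the probabilities below are toy values, not the trained parameters of [6, 7, 8]):

```python
from math import log

def viterbi(words, tags, start, trans, emit):
    """Decode the most likely tag sequence under a bigram HMM: a minimal
    sketch of the model family used by the tagger described above, with
    crude floor smoothing for unseen emissions."""
    floor = 1e-12
    # best[t] = (log probability, tag path) of the best path ending in tag t
    best = {t: (log(start[t]) + log(emit[t].get(words[0], floor)), [t])
            for t in tags}
    for w in words[1:]:
        best = {t: max((best[p][0] + log(trans[p][t])
                        + log(emit[t].get(w, floor)), best[p][1] + [t])
                       for p in tags)
                for t in tags}
    return max(best.values())[1]
```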
{
"text": "We also had available 1,283,344 words of French text from a variety of sources collected and labelled by our colleagues at the IBM Paris Scientific Center [9] , and a second set of 27,454 hand-labelled words from the French part of our Hansard data. We trained the parameters using the larger set of data and smoothed them using the smaller set [10, 8] . Because of the small quantity of hand-tagged French Hansard data, we took two additional steps in the hope of better imprinting the stamp of Hansard French on our parameters. First, we re-estimated the parameters by running one iteration of the forward-backward algorithm on an additional corpus of 13,433,404 words of untagged data from the French part of our Hansard corpus [11, 12, 8] . Finally, we used the 27,454 words of tagged Hansard data once again to smooth these new estimates. With this system, we correctly tagged 93% of the words in a new set of 24,649 words of Hansard French hand-labelled for us by our colleagues at the IBM Paris Scientific Center.",
"cite_spans": [
{
"start": 155,
"end": 158,
"text": "[9]",
"ref_id": "BIBREF9"
},
{
"start": 344,
"end": 348,
"text": "[10,",
"ref_id": "BIBREF10"
},
{
"start": 349,
"end": 351,
"text": "8]",
"ref_id": "BIBREF8"
},
{
"start": 730,
"end": 734,
"text": "[11,",
"ref_id": "BIBREF11"
},
{
"start": 735,
"end": 738,
"text": "12,",
"ref_id": "BIBREF12"
},
{
"start": 739,
"end": 741,
"text": "8]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Annotation",
"sec_num": "3"
},
{
"text": "We do not actually perform any syntactic analysis of either the French or the English texts. Instead, we carry out a number of syntactically motivated transformations designed to make sentences in the two languages more similar to one another. Each transformation is made with the aid of a finite state recognizer. Of course, neither English nor French can be described by a simple finite state mechanism. In some cases, therefore, our simple rules will fail to apply where they should or will apply where they should not. While this is regrettable, we take a purely pragmatic attitude toward these errors: if the performance of the system improves when we use a transformation, then the transformation is good, otherwise it is bad.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysis",
"sec_num": "4"
},
{
"text": "We apply two transformations to English sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Transformations",
"sec_num": "4.1"
},
{
"text": "1. We undo question inversion when we can find it; 2. We move adverbs out of multiword verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Transformations",
"sec_num": "4.1"
},
{
"text": "The primary purpose of these transformations is to place the words in a multiword verb in sequence so as to facilitate later morphological analysis. The secondary purpose is to reduce the local statistical variety of English sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Transformations",
"sec_num": "4.1"
},
{
"text": "One of the signals of the interrogative in English is the inversion of the subject and the first word of the verb. Speakers of American English prefer to invert the subject with an auxiliary verb rather than a main verb, and so are more comfortable adding some form of the empty auxiliary do. It is our intention that our question inversion transformation work as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.1.1"
},
{
"text": "Has the grocery store any eggs?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.1.1"
},
{
"text": "\u21d2 The grocery store has any eggs QINV",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.1.1"
},
{
"text": "\u21d2 Why farmers should be growing less wheat QINV. Because of errors in grammatical tagging, compounded with the primitive nature of the rules that we employ to achieve this goal, we succeed only about 40% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Will the President run for election again? \u21d2 The President will run for election again QINV Why should farmers be growing less wheat?",
"sec_num": null
},
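The English question-inversion transformation above can be sketched as a token-level operation. Identifying the subject noun phrase is the hard part (and the source of many of the errors mentioned above); here its length is simply passed in, and case normalization and wh-fronting are ignored. All names are ours.

```python
def undo_question_inversion(tokens, subject_len):
    """Move the fronted auxiliary back after the subject noun phrase,
    drop the question mark, and append the QINV marker. `subject_len` is
    the length of the subject NP; finding it is only assumed here."""
    aux, rest = tokens[0], tokens[1:]
    subject, tail = rest[:subject_len], rest[subject_len:]
    if tail and tail[-1] == "?":
        tail = tail[:-1]
    return subject + [aux] + tail + ["QINV"]
```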
{
"text": "We move an adverb that is adjacent to a verb, or contained within a multiword verb, to a position immediately following the verb and mark it to show where it originated. We treat not as an adverb for this purpose, and when there is an empty use of a form of to do in the vicinity, we combine it with the not and treat the combination as an adverb. Thus, we intend the following types of transformations: John does not like turnips.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
{
"text": "\u21d2 John likes do_not_M1 turnips.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
{
"text": "Iraq will probably not be completely balkanized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
{
"text": "\u21d2 Iraq will be balkanized probably_M2 not_M2 completely_M3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
{
"text": "Here, the M1 at the end of do_not shows that in the original sentence it preceded the first word (in this case, the only word) of the verb; the M2 appended to both probably and not shows that they originally preceded the second word in the sequence will be balkanized; and the M3 at the end of completely shows that it preceded balkanized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
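The adverb-movement rule just described reduces to a single pass over the verb group: count verb words, and mark each adverb with the index of the verb word it preceded. A minimal sketch over a pre-tagged verb group (names and tags are ours):

```python
def move_adverbs(verb_group):
    """Move adverbs out of a multiword verb group. `verb_group` is a list
    of (token, tag) pairs tagged 'V' or 'ADV'. Returns the verb words in
    order, followed by the moved adverbs, each marked _Mk where k is the
    position of the verb word it originally preceded."""
    verbs, moved, seen = [], [], 0
    for tok, tag in verb_group:
        if tag == "V":
            seen += 1
            verbs.append(tok)
        else:
            # this adverb preceded verb word number seen + 1
            moved.append(f"{tok}_M{seen + 1}")
    return verbs + moved
```

Applied to will probably not be completely balkanized, this reproduces the example above: will be balkanized probably_M2 not_M2 completely_M3.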
{
"text": "We feel that the statistical connection between the subject and the verb is stronger than that between the verb and its object. Therefore, in order to make the best use of the trigram model that we employ for predicting the intermediate English text, we move adverbs to the end of the verb sequence rather than, for example, to the beginning. In this way, the subject and the verb are more likely to fall within the same three-word sequence. Of course, it would be better to move these adverbs out of the way altogether so that we could capture not only the dependence of the verb on its subject, but also the dependence of the object on the verb and on the subject. A more satisfying treatment of this kind must await the development of more general language modelling techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adverb Movement",
"sec_num": "4.1.2"
},
{
"text": "We apply four transformations to French sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French Transformations",
"sec_num": "4.2"
},
{
"text": "1. We undo question inversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French Transformations",
"sec_num": "4.2"
},
{
"text": "2. We combine pairs like ne ... pas, ne ... rien, etc. into single words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French Transformations",
"sec_num": "4.2"
},
{
"text": "3. We move pronouns that function as direct, indirect, or reflexive objects of verbs to a position following the verb and mark them to show their function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French Transformations",
"sec_num": "4.2"
},
{
"text": "We move adjectives to a position preceding the nouns that they modify and adverbs to a position following the verbs that they modify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "These transformations facilitate the morphological analysis of multiword verbs and also move French a little bit in the direction of English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In French as in English, the interrogative is often signalled by inversion of the subject and the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "Unravelling this is easier in French than in English because, when the subject is a pronoun, the French mark the disturbed words by connecting them with a hyphen. When the subject is not a pronoun, the subject and verb retain their declarative order, but a pronoun that agrees with the subject is added after the verb and attached to it by a hyphen. It is our intention that our question inversion transformation work as follows: Jean mange-t-il comme un cochon ? \u21d2 Jean mange comme un cochon QINV2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "Mangez-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "In these examples, the digit after QINV distinguishes between the case when we invert the verb and its pronoun subject and the case when we make that inversion and then discard the pronoun. We successfully unscramble question inversion about 80% of the time. Because it is sometimes difficult to recognize a complex subject, we make most of our mistakes in the QINV2-type questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "The French can also construct questions by attaching the sequence est-ce que to the front of the corresponding declarative sentence. Therefore, we also perform transformations like the following: Est-ce que Jean n'a jamais mang\u00e9 comme un cochon ? \u21d2 Jean a ne_jamais mang\u00e9 comme un cochon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "Est-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "Sometimes, we fail to find the second member of the pair, and so we succeed only about 75% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Inversion",
"sec_num": "4.2.1"
},
{
"text": "In French, the definite articles le, la, l', and les can also be used as direct objects. In this use, they precede the verb of which they are the object. When we encounter these or other object pronouns before a verb, we move them to a position following the verb and label them according to our understanding of the function that they serve. The following examples should make our intention clear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "Je vous le donnerai.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "\u21d2 Je donnerai le_DPRO vous_IPRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "Vous vous lavez les mains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "\u21d2 Vous lavez vous_RPRO les mains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "Je y penserai.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "\u21d2 Je penserai \u00e0 y_PRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "J'en ai plus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "\u21d2 Je ai plus de en_PRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
{
"text": "Notice that when moving the allative and ablative pronominal clitics (y and en), we also include a preposition. Some pronouns, such as nous and vous, can function as direct, indirect, or reflexive objects. If we are unsure of which role one of these words is playing, we tag it with _CPRO when it is moved. About 5% of the time we mis-tag a pronoun that we have moved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Object Pronouns",
"sec_num": "4.2.3"
},
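The pronoun-movement examples above can be sketched as follows. The role lookup and the direct-before-indirect ordering of the moved pronouns are our simplifications; in the real system the role comes from the analysis itself, with _CPRO as the fallback when it is unclear.

```python
# Hypothetical ordering for moved pronouns: direct, indirect, reflexive,
# then unclear; invented here to match the donnerai example above.
ROLE_ORDER = {"DPRO": 0, "IPRO": 1, "RPRO": 2, "CPRO": 3}

def move_object_pronouns(tokens, verb_index, roles):
    """Move preverbal object pronouns to a position after the verb,
    suffixing each with its role tag (_DPRO, _IPRO, _RPRO, or _CPRO).
    `roles` is a hypothetical pronoun-to-role lookup."""
    kept, moved = [], []
    for tok in tokens[:verb_index]:
        if tok in roles:
            moved.append(f"{tok}_{roles[tok]}")
        else:
            kept.append(tok)
    moved.sort(key=lambda m: ROLE_ORDER[m.rsplit("_", 1)[1]])
    return kept + [tokens[verb_index]] + moved + tokens[verb_index + 1:]
```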
{
"text": "To make the French structures presented to the statistical models used in transfer as close a possible to the English structures, we move French adjectives in front of the nouns they modify. We do not record the fact that these adjectives have been moved, and so conflate such phrases as un homme grand and un grand homme. We will remedy this defect in future versions of our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Adverbs and Adjectives",
"sec_num": "4.2.4"
},
{
"text": "We also move French adverbs to a position after the verbs that they modify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Moving Adverbs and Adjectives",
"sec_num": "4.2.4"
},
{
"text": "The translation system described by Brown et al. [1] treats words as unanalyzed wholes. From the fact that parle is translated as speaks, they adduce no evidence for the translation of parl\u00e9 as spoken. But even regular verbs in French have many distinct forms, some of which can be quite rare. In a 30 million word sample of French text from our Hansard data, only 24 of the 35 different forms of the verb parler actually occur. For less common verbs, fewer than half of the possible forms may be realized in the data. This effusion of disguises for the same underlying object dilutes the effectiveness of our training procedure.",
"cite_spans": [
{
"start": 49,
"end": 52,
"text": "[1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "We perform simple inflectional morphological analysis of verbs, nouns, adjectives, and adverbs so that the fraternity of the several forms of the same word is manifest in the intermediate structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "In English, we analyze the several conjugations of the same verb and the singular and plural forms of nouns. The examples below illustrate the level of detail in our morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "He was eating the peas more quickly than I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "\u21d2 He PAST_PROGRESSIVE to_eat the pea N_PLURAL quick er_ADV than I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "Nous en mangeons rarement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "\u21d2 Nous 1ST_PERSON_PLURAL_PRESENT_INDICATIVE manger rare ment_ADV de en_PRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "Ils se sont lav\u00e9s les mains sales. \u21d2 Ils 3RD_PERSON_PLURAL_PAST laver se_RPRO les sale main N_PLURAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "Notice in the last example that we retain no indication of the original number on French adjectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "We also discard any distinction in gender. Thus, in the intermediate French, adjectives always appear in their masculine singular form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
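A table-driven sketch of the inflectional analysis illustrated above, including the deliberate loss of number and gender on French adjectives. The tiny table is illustrative only; the real system covers whole paradigms.

```python
# Each surface form maps to its morpheme sequence, in the order used in
# the examples above. This toy table is an illustration, not the system's
# actual lexicon.
ANALYSES = {
    "peas": ["pea", "N_PLURAL"],
    "rarement": ["rare", "ment_ADV"],
    "sales": ["sale"],  # French adjective: number and gender discarded
    "mangeons": ["1ST_PERSON_PLURAL_PRESENT_INDICATIVE", "manger"],
}

def analyze(word):
    """Return the morpheme sequence for a word, or the word itself when
    no analysis applies."""
    return ANALYSES.get(word, [word])
```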
{
"text": "In a recent paper, Brown et al. [13] describe a method for dividing the occurrences of a word in context into a small set of senses so as to achieve a high mutual information between the translation of a word and its sense. In the translation model that we use, we assume that each English word acts independently of the other English words in a sentence to generate a series of French words [1, 3] . By labelling the words in the intermediate French and English structures with senses that reflect the context in which they occur, we provide some global contextual information to what is essentially a local model of the translation process.",
"cite_spans": [
{
"start": 32,
"end": 36,
"text": "[13]",
"ref_id": "BIBREF13"
},
{
"start": 389,
"end": 392,
"text": "[1,",
"ref_id": null
},
{
"start": 393,
"end": 395,
"text": "3]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Disambiguation",
"sec_num": "6"
},
{
"text": "We assign senses to 1000 of the most frequent French words. For example, we map prendre to prendre_1 in the sentence Je vais prendre ma propre voiture, but to prendre_2 in the sentence Je vais prendre ma propre d\u00e9cision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Disambiguation",
"sec_num": "6"
},
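The prendre example can be sketched as an informant-based relabelling. Note the hedge: the actual method (Brown et al. [13]) chooses an informant position and partitions its values statistically, to maximize mutual information with the word's translation; the hand-built INFORMANT_SENSE table and the function below are hypothetical stand-ins that merely mimic that outcome for two sentences.

```python
# Illustrative stand-in for the sense-labelling step. The real method
# learns which contexts separate the senses; this table is hand-built
# and only mimics its output for the "prendre" example.
INFORMANT_SENSE = {
    "voiture": "prendre_1",   # prendre ma propre voiture  ("take")
    "décision": "prendre_2",  # prendre ma propre décision ("make")
}

def label_prendre(tokens):
    """Relabel occurrences of 'prendre' using the first known informant to its right."""
    out = []
    for i, tok in enumerate(tokens):
        if tok == "prendre":
            sense = next((INFORMANT_SENSE[t] for t in tokens[i + 1:]
                          if t in INFORMANT_SENSE), "prendre_1")
            out.append(sense)
        else:
            out.append(tok)
    return out
```

Applied to "Je vais prendre ma propre voiture" this produces prendre_1, and to "Je vais prendre ma propre décision" it produces prendre_2, matching the labelling described above.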
{
"text": "In the corresponding final step of the inverse of the synthesis component, we assign senses to 1000 of the most frequent English words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Disambiguation",
"sec_num": "6"
},
{
"text": "We have compared the performance of a translation system incorporating the analysis and synthesis components described above to a simpler system in which the analysis and synthesis components carry out only the first step of the complete five-step procedure. In both systems, we use a trigram language model in the transfer component as compared with a bigram language model as described by Brown et al. [1] .",
"cite_spans": [
{
"start": 404,
"end": 407,
"text": "[1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "We restrict our attention to vocabularies of 40,809 English words and 57,802 French words. In the enhanced system, morphological analysis reduces these to 33,041 English morphemes and 31,115 French morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "We estimated the parameters of the translation model for each system from a set of 1,778,620 pairs of French and English sentences from the Canadian Hansard data [1, 2] . Each of these sentences is 30 words or less in length. We tested both systems on the same set of 100 randomly selected Hansard sentences each containing at most 10 words. We judged as acceptable 39 of the translations produced by the simpler system as compared with 60 of those produced by the enhanced system.",
"cite_spans": [
{
"start": 162,
"end": 165,
"text": "[1,",
"ref_id": null
},
{
"start": 166,
"end": 168,
"text": "2]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
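As a quick sanity check not reported in the paper, a standard two-proportion z-test on the 39/100 versus 60/100 acceptability counts suggests the improvement is unlikely to be chance. The test below is our sketch, using the usual pooled-variance formula.

```python
import math

# Two-proportion z-test on the reported acceptability counts:
# 39/100 acceptable for the simpler system, 60/100 for the enhanced one.
# This significance check is our own illustration; the paper gives only counts.
def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # pooled success rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_proportion_z(39, 100, 60, 100)
print(round(z, 2))  # prints 2.97, well beyond the 1.96 threshold at p < 0.05
```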
{
"text": "We have described analysis and synthesis components for use in a statistical translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Each of the transformations that make up these components is achieved with the aid of a simple finite-state recognizer. Many of them work poorly and yet, together, they produce a system with a significantly higher translation accuracy. Much of the credit for this successful performance in the face of adversity must be laid at the door of the statistical transfer component, which frames no hypotheses but is guided entirely by the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "In work of this type, it is desirable to be able ascribe certain increments of performance to certain of the steps in the analysis or synthesis component, and thus to assess the value of the various transformations. Making such an assessment would require of us that we construct a series of analysis and synthesis components with different members of the series including different ones of the steps that make up the complete system. Unfortunately, each such construction must have a differently trained statistical transfer component. Because training is a costly undertaking, we have not made any of these collateral investigations and are, therefore, unable to say which of the new analysis and synthesis steps is the most valuable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mercer, and P. S. Roossin, \"A statistical approach to machine translation,\" Computational Linguistics, vol. 16, pp. 79-85, June 1990.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Aligning sentences in parallel corpora",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, J. C. Lai, and R. L. Mercer, \"Aligning sentences in parallel corpora,\" in Proceed- ings 29th Annual Meeting of the Association for Computational Linguistics, (Berkeley, CA), pp. 169-176, June 1991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The mathematics of machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. DellaPietra, V. J. DellaPietra, and R. L. Mercer, \"The mathematics of machine translation: Parameter estimation.\" Submitted to Computational Linguistics, 1991.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Information Theory and Reliable Communication",
"authors": [
{
"first": "R",
"middle": [
"G"
],
"last": "Gallager",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. G. Gallager, Information Theory and Reliable Communication. John Wiley & Sons, Inc., 1968.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An estimate of an upper bound for the entropy of english",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": ".",
"middle": [
"J C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. DellaPietra, V. J. DellaPietra, .J. C. Lai, and R. L. Mercer, \"An estimate of an upper bound for the entropy of english.\" Submitted to Computational Linguistics, 1991.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Stochastic modeling for automatic speech understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 1975,
"venue": "Speech Recognition",
"volume": "",
"issue": "",
"pages": "521--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Baker, \"Stochastic modeling for automatic speech understanding,\" in Speech Recognition (R. Reddy, ed.), pp. 521-541, New York: Academic Press, 1975.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Part of speech assignment by a statistical decision algorithm",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bahl",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1976,
"venue": "Abstracts of Papers from the International Symposium on Information Theory",
"volume": "",
"issue": "",
"pages": "88--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Bahl and R. Mercer, \"Part of speech assignment by a statistical decision algorithm,\" in Abstracts of Papers from the International Symposium on Information Theory, (Ronneby, Sweden), pp. 88-89, June 1976.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tagging text with a probabilistic model",
"authors": [
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Merialdo, \"Tagging text with a probabilistic model,\" Tech. Rep. RC 15972, IBM Research Division, 1990.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Natural language modeling for phoneme-to-text transcription",
"authors": [
{
"first": "A",
"middle": [],
"last": "Derouault",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1986,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "S",
"issue": "",
"pages": "742--749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Derouault and B. Merialdo, \"Natural language modeling for phoneme-to-text transcrip- tion,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. S, pp. 742-749, November 1986.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpolated estimation of Markov source parameters from sparse data",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1980,
"venue": "Proceedings of the Workshop on Pattern Recognition in Practice",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Jelinek and R. L. Mercer, \"Interpolated estimation of Markov source parameters from sparse data,\" in Proceedings of the Workshop on Pattern Recognition in Practice, (Amsterdam, The Netherlands: North-Holland), May 1980.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process",
"authors": [
{
"first": "L",
"middle": [],
"last": "Baum",
"suffix": ""
}
],
"year": 1972,
"venue": "Inequalities",
"volume": "3",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Baum, \"An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process,\" Inequalities, vol. 3, pp. 1-8, 1972.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A maximum likelihood approach to continuous speech recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Bahl",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1983,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "5",
"pages": "179--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. R. Bahl, F. Jelinek, and R. L. Mercer, \"A maximum likelihood approach to continu- ous speech recognition,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, pp. 179-190, March 1983.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word sense disambiguation using statistical methods",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "265--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. DellaPietra, V. J. DellaPietra, and R. L. Mercer, \"Word sense disambigua- tion using statistical methods,\" in Proceedings 29th Annual Meeting of the Association for Computational Linguistics, (Berkeley, CA), pp. 265-270, June 1991.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "is one of the classical paradigms in machine translation. The analysis component recasts the source sentence into an intermediate form, the transfer component reworks this intermediate form into a second intermediate form more compatible with the target language, and the synthesis component constructs the target language translation of the original source sentence from this new intermediate form.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Annotate words according to their grammatical function.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Apply some rudimentary syntactic analysis. 4. Extract inflectional morphology. 5. Assign statistically derived senses to some of the common words. The traditional architecture for a machine translation system consists of the three stages of analysis, transfer and synthesis. A French sentence is first analyzed and thereby converted into an intermediate structure which captures the linguistic relationships between different components of the sentence. This structure is then transferred into an intermediate English structure. Finally, an English sentence is synthesized from the intermediate English structure.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "A Traditional Machine Translation Architecture Statistical Transfer in an Analysis-Transfer-Synthesis Architecture. The resulting intermediate representation, for both the French and the English text, is a string of morphs some of which are limited by sense designations.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "So Bill and bill are the same word? Salviati: No. Bill is a name and a bill is something you pay. With proper names, case matters. Simplicio: What about the two May's in May I pay in May? Salviati: The first one is not a proper name. It's only capitalized because it's the first word in the sentence.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "the same noun; and the positive, comparative, and superlative forms of adjectives and adverbs. In French, we analyze the several conjugations of the same verb; and masculine, feminine, singular, and plural forms of the same noun or adjective. In both languages, we analyze each verb into a tense marker and an infinitive.",
"uris": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>Simplicio is discussing the nature of words with his master Salviati. Let's listen:</td></tr><tr><td>Simplicio: How do you find words in text?</td></tr><tr><td>Salviati: Words occur between spaces.</td></tr><tr><td>Simplicio: What about however,? Is that one word or two?</td></tr><tr><td>Salviati: Oh well, you have to separate out the commas.</td></tr><tr><td>Simplicio: Periods too?</td></tr><tr><td>Salviati: Of course.</td></tr><tr><td>Simplicio: What about Mr.?</td></tr><tr><td>Salviati: Certain abbreviations have to be handled specially.</td></tr><tr><td>Simplicio: How about shouldn't? One word or two?</td></tr><tr><td>Salviati: One.</td></tr><tr><td>Simplicio: So shouldn't is different from should not?</td></tr><tr><td>Salviati: Yes.</td></tr><tr><td>Simplicio: And Gauss-Bonnet as in the Gauss-Bonnet Theorem?</td></tr><tr><td>Salviati: Two names, two words.</td></tr><tr><td>Simplicio: But if you split words at hyphens, what do you do with vis-\u00e0-vis?</td></tr><tr><td>Salviati: One word-don't ask me why.</td></tr><tr><td>Simplicio: How about stingray?</td></tr><tr><td>Salviati: One word, of course.</td></tr><tr><td>Simplicio: And manta ray?</td></tr><tr><td>Salviati: One word: it's just like stingray.</td></tr><tr><td>Simplicio: But there's a space.</td></tr><tr><td>Salviati: Too bad!</td></tr><tr><td>Simplicio: How about inasmuch as?</td></tr><tr><td>Salviati: Two.</td></tr><tr><td>Simplicio: Are you sure?</td></tr><tr><td>Salviati: No.</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Dealing with ne . . . pas Often in a French sentence one finds the verb sandwiched between ne and some other word which together serve to negate or otherwise modify the meaning of the sentence. It is our intention to make transformations like the following:",
"content": "<table><tr><td>ce que vous mangez des l\u00e9gumes?</td></tr><tr><td>\u21d2 Vous mangez des l\u00e9gumes EST_CE_QUE</td></tr><tr><td>Est-ce que vous le lui avez donn\u00e9?</td></tr><tr><td>\u21d2 Vous le lui avez donn\u00e9 EST_CE_QUE</td></tr><tr><td>Est-ce que Jean mange comme un cochon?</td></tr><tr><td>\u21d2 Jean mange comme un cochon EST_CE_QUE</td></tr><tr><td>4.2.2 Je ne sais pas.</td></tr><tr><td>\u21d2 Je sais ne_pas.</td></tr><tr><td>Il n'y en a plus.</td></tr><tr><td>\u21d2 II y en a ne_plus.</td></tr><tr><td>Jean n'a jamais mang\u00e9 comme un cochon.</td></tr></table>"
}
}
}
}