{
"paper_id": "W03-0302",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:08:57.240644Z"
},
"title": "ProAlign: Shared Task System Description",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta Edmonton",
"location": {
"postCode": "T6G 2E8",
"settlement": "Alberta",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta Edmonton",
"location": {
"postCode": "T6G 2E8",
"settlement": "Alberta",
"country": "Canada"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "ProAlign combines several different approaches in order to produce high quality word word alignments. Like competitive linking, ProAlign uses a constrained search to find high scoring alignments. Like EM-based methods, a probability model is used to rank possible alignments. The goal of this paper is to give a bird's eye view of the ProAlign system to encourage discussion and comparison. 1 Alignment Algorithm at a Glance We have submitted the ProAlign alignment system to the WPT'03 shared task. It received a 5.71% AER on the English-French task and 29.36% on the Romanian-English task. These results are with the no-null data; our output was not formatted to work with explicit nulls. ProAlign works by iteratively improving an alignment. The algorithm creates an initial alignment using search, constraints, and summed \u03c6 2 correlation-based scores (Gale and Church, 1991). This is similar to the competitive linking process (Melamed, 2000). It then learns a probability model from the current alignment, and conducts a constrained search again, this time scoring alignments according to the probability model. The process continues until results on a validation set begin to indicate over-fitting. For the purposes of our algorithm, we view an alignment as a set of links between the words in a sentence pair. Before describing the algorithm, we will define the following notation. Let E be an English sentence e 1 , e 2 ,. .. , e m and let F be a French sentence f 1 , f 2 ,. .. , f n. We define a link l(e i , f j) to exist if e i and f j are a translation (or part of a translation) of one another. We define the null link l(e i , f 0) to exist if e i does not correspond to a translation for any French word in F. The null link l(e 0 , f j) is defined similarly. An alignment",
"pdf_parse": {
"paper_id": "W03-0302",
"_pdf_hash": "",
"abstract": [
{
"text": "ProAlign combines several different approaches in order to produce high quality word word alignments. Like competitive linking, ProAlign uses a constrained search to find high scoring alignments. Like EM-based methods, a probability model is used to rank possible alignments. The goal of this paper is to give a bird's eye view of the ProAlign system to encourage discussion and comparison. 1 Alignment Algorithm at a Glance We have submitted the ProAlign alignment system to the WPT'03 shared task. It received a 5.71% AER on the English-French task and 29.36% on the Romanian-English task. These results are with the no-null data; our output was not formatted to work with explicit nulls. ProAlign works by iteratively improving an alignment. The algorithm creates an initial alignment using search, constraints, and summed \u03c6 2 correlation-based scores (Gale and Church, 1991). This is similar to the competitive linking process (Melamed, 2000). It then learns a probability model from the current alignment, and conducts a constrained search again, this time scoring alignments according to the probability model. The process continues until results on a validation set begin to indicate over-fitting. For the purposes of our algorithm, we view an alignment as a set of links between the words in a sentence pair. Before describing the algorithm, we will define the following notation. Let E be an English sentence e 1 , e 2 ,. .. , e m and let F be a French sentence f 1 , f 2 ,. .. , f n. We define a link l(e i , f j) to exist if e i and f j are a translation (or part of a translation) of one another. We define the null link l(e i , f 0) to exist if e i does not correspond to a translation for any French word in F. The null link l(e 0 , f j) is defined similarly. An alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A for two sentences E and F is a set of links such that every word in E and F participates in at least one link, and a word linked to e 0 or f 0 participates in no other links. If e occurs in E x times and f occurs in F y times, we say that e and f co-occur xy times in this sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ProAlign conducts a best-first search (with constant beam and agenda size) to search a constrained space of possible alignments. A state in this space is a partial alignment, and a transition is defined as the addition of a single link to the current state. Any link which would create a state that does not violate any constraint is considered to be a valid transition. Our start state is the empty alignment, where all words in E and F are implicitly linked to null. A terminal state is a state in which no more links can be added without violating a constraint. Our goal is to find the terminal state with the highest probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
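{
"text": "The following is a minimal Python sketch of this constrained best-first search, included for concreteness only; it is not the authors' implementation, and the scoring function score and constraint checker violates are assumed, hypothetical helpers.\n\nimport heapq\nfrom itertools import count\n\ndef best_first_search(E, F, score, violates, beam_size=40):\n    tie = count()  # tie-breaker so the heap never compares states directly\n    start = frozenset()  # empty alignment: every word implicitly linked to null\n    agenda = [(-score(start), next(tie), start)]\n    best_score, best_state = float('-inf'), None\n    while agenda:\n        neg, _, state = heapq.heappop(agenda)\n        # A transition adds one link whose resulting state violates no constraint.\n        succ = [state | {(i, j)}\n                for i in range(1, len(E) + 1)\n                for j in range(1, len(F) + 1)\n                if not violates(state, (i, j))]\n        if not succ:\n            # Terminal state: no link can be added without a violation.\n            if -neg > best_score:\n                best_score, best_state = -neg, state\n            continue\n        for s in succ:\n            heapq.heappush(agenda, (-score(s), next(tie), s))\n        # Constant agenda size: keep only the beam_size best candidates.\n        agenda = heapq.nsmallest(beam_size, agenda)\n    return best_state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},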
{
"text": "To complete this algorithm, one requires a set of constraints and a method for determining which alignment is most likely. These are presented in the next two sections. The algorithm takes as input a set of English-French sentence pairs, along with dependency trees for the English sentences. The presence of the English dependency tree allows us to incorporate linguistic features into our model and linguistic intuitions into our constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The model used for scoring alignments has no mechanism to prevent certain types of undesirable alignments, such as having all French words align to the same English word. To guide the search to correct alignments, we employ two constraints to limit our search for the most probable alignment. The first constraint is the one-to-one constraint (Melamed, 2000) : every word (except the null words e 0 and f 0 ) participates in exactly one link.",
"cite_spans": [
{
"start": 343,
"end": 358,
"text": "(Melamed, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "The second constraint, known as the cohesion constraint (Fox, 2002) , uses the dependency tree (Mel'\u010duk, 1987) of the English sentence to restrict possible link combinations. Given the dependency tree T E and a (partial) alignment A, the cohesion constraint requires that phrasal cohesion is maintained in the French sentence. If two phrases are disjoint in the English sentence, the alignment must not map them to overlapping intervals in the French sentence. This notion of phrasal constraints on alignments need not be restricted to phrases determined from a dependency structure. However, the experiments conducted in (Fox, 2002) indicate that dependency trees demonstrate a higher degree of phrasal cohesion during translation than other structures.",
"cite_spans": [
{
"start": 56,
"end": 67,
"text": "(Fox, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 95,
"end": 110,
"text": "(Mel'\u010duk, 1987)",
"ref_id": "BIBREF6"
},
{
"start": 622,
"end": 633,
"text": "(Fox, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "Consider the partial alignment in Figure 1 . The most probable lexical match for the English word to is the French word\u00e0. When the system attempts to link to and a, the distinct English phrases [the reboot] and [the host to discover all the devices] will be mapped to intervals in the French sentence, creating the induced phrasal in-",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "tervals [\u00e0 . . . [r\u00e9initialisation] . . . p\u00e9riph\u00e9riques].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "Regardless of what these French phrases will be after the alignment is completed, we know now that their intervals will overlap. Therefore, this link will not be added to the partial alignment. To define this notion more formally, let T E (e i ) be the subtree of T E rooted at e i . The phrase span of e i , spanP(e i , T E , A), is the image of the English phrase headed by e i in F given a (partial) alignment A. More precisely, spanP(e i , T E , A) = [k 1 , k 2 ], where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "k 1 = min{j|l(u, j) \u2208 A, e u \u2208 T E (e i )} k 2 = max{j|l(u, j) \u2208 A, e u \u2208 T E (e i )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "The head span is the image of e i itself. We define spanH(e i , T E , A) = [k 1 , k 2 ], where Figure 1 , for the node reboot, the phrase span is [4, 4] and the head span is also [4, 4] ; for the node discover (with the link between to and\u00e0 in place), the phrase span is [2, 11] and the head span is the empty set \u2205.",
"cite_spans": [
{
"start": 146,
"end": 149,
"text": "[4,",
"ref_id": null
},
{
"start": 150,
"end": 152,
"text": "4]",
"ref_id": null
},
{
"start": 179,
"end": 182,
"text": "[4,",
"ref_id": null
},
{
"start": 183,
"end": 185,
"text": "4]",
"ref_id": null
},
{
"start": 271,
"end": 274,
"text": "[2,",
"ref_id": null
},
{
"start": 275,
"end": 278,
"text": "11]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "k 1 = min{j|l(i, j) \u2208 A} k 2 = max{j|l(i, j) \u2208 A} In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "With these definitions of phrase and head spans, we define two notions of overlap, originally introduced in (Fox, 2002) as crossings. Given a head node e h and its modifier e m , a head-modifier overlap occurs when:",
"cite_spans": [
{
"start": 108,
"end": 119,
"text": "(Fox, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "spanH(e h , T E , A) \u2229 spanP(e m , T E , A) = \u2205",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "Given two nodes e m1 and e m2 which both modify the same head node, a modifier-modifier overlap occurs when:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
{
"text": "spanP(e m1 , T E , A) \u2229 spanP(e m2 , T E , A) = \u2205 Following (Fox, 2002) , we say an alignment is cohesive with respect to T E if it does not introduce any head-modifier or modifier-modifier overlaps. For example, the alignment A in Figure 1 is not cohesive because spanP (reboot, T E , A) = [4, 4] intersects spanP (discover, T E , A) = [2, 11]. Since both reboot and discover modify causes, this creates a modifiermodifier overlap. One can check for constraint violations inexpensively by incrementally updating the various spans as new links are added to the partial alignment, and checking for overlap after each modification. More details on the cohesion constraint can be found in .",
"cite_spans": [
{
"start": 60,
"end": 71,
"text": "(Fox, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},
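{
"text": "The span computations and overlap tests above translate directly into code. Below is a minimal sketch under assumed representations: an alignment A as a set of (i, j) link pairs and subtree(i) giving the English word positions in T_E(e_i); it is not the authors' implementation.\n\ndef span(js):\n    # An interval [k1, k2], or None for the empty set.\n    return (min(js), max(js)) if js else None\n\ndef span_p(i, subtree, A):\n    # spanP(e_i, T_E, A): image in F of the phrase headed by e_i.\n    return span([j for (u, j) in A if u in subtree(i)])\n\ndef span_h(i, A):\n    # spanH(e_i, T_E, A): image in F of e_i itself.\n    return span([j for (u, j) in A if u == i])\n\ndef overlap(s, t):\n    # Interval intersection; the empty set overlaps nothing.\n    return s is not None and t is not None and s[0] <= t[1] and t[0] <= s[1]\n\ndef cohesive(head, modifiers, subtree, A):\n    # Head-modifier overlap: spanH(e_h) intersects spanP(e_m).\n    if any(overlap(span_h(head, A), span_p(m, subtree, A)) for m in modifiers):\n        return False\n    # Modifier-modifier overlap between two modifiers of the same head.\n    return not any(overlap(span_p(m1, subtree, A), span_p(m2, subtree, A))\n                   for m1 in modifiers for m2 in modifiers if m1 < m2)\n\nIn practice the spans would be updated incrementally as links are added, as described above, rather than recomputed from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "2"
},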
{
"text": "We define the word alignment problem as finding the alignment A that maximizes P (A|E, F ). ProAlign models P (A|E, F ) directly, using a different decomposition of terms than the model used by IBM (Brown et al., 1993) . In the IBM models of translation, alignments exist as artifacts of a stochastic process, where the words in the English sentence generate the words in the French sentence. Our model does not assume that one sentence generates the other. Instead it takes both sentences as given, and uses the sentences to determine an alignment. An alignment A consists of t links {l 1 , l 2 , . . . , l t }, where each l k = l(e i k , f j k ) for some i k and j k . We will refer to consecutive subsets of A as l j i = {l i , l i+1 , . . . , l j }. Given this notation, P (A|E, F ) can be decomposed as follows:",
"cite_spans": [
{
"start": 198,
"end": 218,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "P (A|E, F ) = P (l t 1 |E, F ) = t k=1 P (l k |E, F, l k\u22121 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "At this point, we factor P (l k |E, F, l k\u22121 1 ) to make computation feasible. Let C k = {E, F, l k\u22121 1 } represent the context of l k . Note that both the context C k and the link l k imply the occurrence of e i k and f j k . We can rewrite P (l k |C k ) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "P (l k |C k ) = P (l k , C k ) P (C k ) = P (C k |l k )P (l k ) P (C k , e i k , f j k ) = P (l k |e i k , f j k ) \u00d7 P (C k |l k ) P (C k |e i k , f j k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "Here P (l k |e i k , f j k ) is link probability given a cooccurrence of the two words, which is similar in spirit to Melamed's explicit noise model (Melamed, 2000) . This term depends only on the words involved directly in the link. The ratio P (C k |l k ) P (C k |ei k ,fj k ) modifies the link probability, providing context-sensitive information.",
"cite_spans": [
{
"start": 149,
"end": 164,
"text": "(Melamed, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "C k remains too broad to deal with in practical systems. We will consider only a subset FT k of relevant features of C k . We will make the Na\u00efve Bayes-style assumption that these features ft \u2208 FT k are conditionally independent given either l k or (e i k , f j k ). This produces a tractable formulation for P (A|E, F ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "t k=1 \uf8eb \uf8ed P (l k |e i k , f j k ) \u00d7 ft\u2208FT k P (ft|l k ) P (ft|e i k , f j k ) \uf8f6 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
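{
"text": "As a sketch, the factored model can be scored as below, assuming the component probabilities have already been estimated as lookup tables; the names p_link, p_ft_link and p_ft_cooc are hypothetical.\n\ndef link_score(e, f, active_fts, p_link, p_ft_link, p_ft_cooc):\n    # P(l_k | e, f) times the context ratio P(ft | l_k) / P(ft | e, f)\n    # for each active feature.\n    s = p_link.get((e, f), 0.0)\n    for ft in active_fts:\n        s *= p_ft_link.get((ft, e, f), 1.0) / p_ft_cooc.get((ft, e, f), 1.0)\n    return s\n\ndef alignment_score(links, fts_of, p_link, p_ft_link, p_ft_cooc):\n    # P(A | E, F) as the product of per-link scores.\n    prob = 1.0\n    for (e, f) in links:\n        prob *= link_score(e, f, fts_of((e, f)), p_link, p_ft_link, p_ft_cooc)\n    return prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},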
{
"text": "More details on the probability model used by ProAlign are available in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},
{
"text": "For the purposes of the shared task, we use two feature types. Each type could have any number of instantiations for any number of contexts. Note that each feature type is described in terms of the context surrounding a word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features used in the shared task",
"sec_num": "3.1"
},
{
"text": "The first feature type ft a concerns surrounding links. It has been observed that words close to each other in the source language tend to remain close to each other in the translation (S. Vogel and Tillmann, 1996) . To capture this notion, for any word pair (e i , f j ), if a link l(e i , f j ) exists within a window of two words (where i \u2212 2 \u2264 i \u2264 i + 2 and j \u2212 2 \u2264 j \u2264 j + 2), then we say that the feature ft a (i \u2212 i , j \u2212 j , e i ) is active for this context. We refer to these as adjacency features.",
"cite_spans": [
{
"start": 185,
"end": 214,
"text": "(S. Vogel and Tillmann, 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features used in the shared task",
"sec_num": "3.1"
},
{
"text": "The second feature type ft d uses the English parse tree to capture regularities among grammatical relations between languages. For example, when dealing with French and English, the location of the determiner with respect to its governor is never swapped during translation, while the location of adjectives is swapped frequently. For any word pair (e i , f j ), let e i be the governor of e i , and let rel be the relationship between them. If a link l(e i , f j ) exists, then we say that the feature ft d (j \u2212 j , rel ) is active for this context. We refer to these as dependency features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features used in the shared task",
"sec_num": "3.1"
},
{
"text": "Take for example Figure 2 which shows a partial alignment with all links completed except for those involving the. Given this sentence pair and English parse tree, we can extract features of both types to assist in the alignment of the 1 . The word pair (the 1 , l ) will have an active adjacency feature ft a (+1, +1, host) as well as a dependency feature ft d (\u22121, det). These two features will work together to increase the probability of this correct link. In contrast, the incorrect link (the 1 , les) will have only ft d (+3, det), which will work to lower the link probability, since most determiners are located before their governors.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Features used in the shared task",
"sec_num": "3.1"
},
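{
"text": "A sketch of how the two feature types might be extracted for a candidate pair (e_i, f_j); the inputs (links as (i, j) pairs, E as a word list, governor and rel maps from the English parse) are assumed, hypothetical representations.\n\ndef adjacency_features(i, j, links, E):\n    # ft_a(i - i2, j - j2, e_i2) for each link within the two-word window.\n    return [('ft_a', i - i2, j - j2, E[i2 - 1])\n            for (i2, j2) in links\n            if abs(i - i2) <= 2 and abs(j - j2) <= 2 and (i2, j2) != (i, j)]\n\ndef dependency_feature(i, j, links, governor, rel):\n    # ft_d(j - j2, rel): offset of f_j from the image f_j2 of e_i's governor.\n    g = governor.get(i)  # position of the governor of e_i, or None at the root\n    for (u, j2) in links:\n        if u == g:\n            return ('ft_d', j - j2, rel[i])\n    return None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features used in the shared task",
"sec_num": "3.1"
},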
{
"text": "Since we always work from a current alignment, training the model is a simple matter of counting events in the current alignment. Link probability is the number of time two words are linked, divided by the number of times they co-occur. The various feature probabilities can be calculated by also counting the number of times a feature occurs in the context of a linked pair of words, and the number of times the feature is active for co-occurrences of the same word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the model",
"sec_num": "3.2"
},
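{
"text": "A counting sketch of this training step, assuming the corpus is available as an iterable of (E, F, links) triples; this format is hypothetical.\n\nfrom collections import Counter\n\ndef link_probabilities(corpus):\n    linked, cooc = Counter(), Counter()\n    for E, F, links in corpus:\n        for (i, j) in links:\n            linked[(E[i - 1], F[j - 1])] += 1\n        # If e occurs x times in E and f occurs y times in F, the double\n        # loop counts the pair x * y times, matching the co-occurrence\n        # definition given earlier.\n        for e in E:\n            for f in F:\n                cooc[(e, f)] += 1\n    return {pair: n / cooc[pair] for pair, n in linked.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the model",
"sec_num": "3.2"
},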
{
"text": "Considering only a single, potentially noisy alignment for a given sentence pair can result in reinforcing errors present in the current alignment during training. To avoid this problem, we sample from a space of probable alignments, as is done in IBM models 3 and above (Brown et al., 1993) , and weight counts based on the likelihood of each alignment sampled under the current probability model. To further reduce the impact of rare, and potentially incorrect events, we also smooth our probabilities using m-estimate smoothing (Mitchell, 1997) .",
"cite_spans": [
{
"start": 271,
"end": 291,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
},
{
"start": 531,
"end": 547,
"text": "(Mitchell, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training the model",
"sec_num": "3.2"
},
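{
"text": "For reference, the m-estimate (Mitchell, 1997) augments the observed counts with m virtual samples drawn from a prior probability p, which pulls estimates for rare events toward the prior; a one-line sketch:\n\ndef m_estimate(n_c, n, prior, m):\n    # (n_c + m * prior) / (n + m): n_c successes out of n trials,\n    # smoothed toward the prior with m virtual samples.\n    return (n_c + m * prior) / (n + m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the model",
"sec_num": "3.2"
},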
{
"text": "The result of the constrained alignment search is a highprecision, word-to-word alignment. We then relax the word-to-word constraint, and use statistics regarding collocations with unaligned words in order to make many-toone alignments. We also employ a further relaxed linking process to catch some cases where the cohesion constraint ruled out otherwise good alignments. These auxiliary methods are currently not integrated into our search or our probability model, although that is certainly a direction for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Alignments",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "We have presented a brief overview of the major ideas behind our entry to the WPT'03 Shared Task. Primary among these ideas are the use of a cohesion constraint in search, and our novel probability model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "This project is funded by and jointly undertaken with Sun Microsystems, Inc. We wish to thank Finola Brady, Bob Kuhns and Michael McHugh for their help. We also wish to thank the WPT'03 reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"S A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, V. S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computa- tional Linguistics, 19(2):263-312.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. Submitted.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Phrasal cohesion and statistical machine translation",
"authors": [
{
"first": "Heidi",
"middle": [
"J"
],
"last": "Fox",
"suffix": ""
}
],
"year": 2002,
"venue": "2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "304--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In 2002 Conference on Empiri- cal Methods in Natural Language Processing (EMNLP 2002), pages 304-311.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Identifying word correspondences in parallel texts",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1991,
"venue": "4th Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "152--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.A. Gale and K.W. Church. 1991. Identifying word correspondences in parallel texts. In 4th Speech and Natural Language Workshop, pages 152-157. DARPA, Morgan Kaufmann.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word alignment with cohesion constraint",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Colin Cherry. 2003. Word alignment with cohesion constraint. Submitted.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "I. Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 2000. Models of translational equiv- alence among words. Computational Linguistics, 26(2):221-249, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dependency syntax: theory and practice",
"authors": [
{
"first": "Igor",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor A. Mel'\u010duk. 1987. Dependency syntax: theory and practice. State University of New York Press, Albany.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Machine Learning",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Mitchell. 1997. Machine Learning. McGraw Hill.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "16th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Ney S. Vogel and C. Tillmann. 1996. HMM-based word alignment in statistical translation. In 16th In- ternational Conference on Computational Linguistics, pages 836-841, Copenhagen, Denmark, August.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "An Example of Cohesion Constraint",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Feature Extraction Example",
"type_str": "figure",
"num": null
}
}
}
}