|
{ |
|
"paper_id": "S01-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:35:30.905192Z" |
|
}, |
|
"title": "The Japanese Translation Task: Lexical and Structural Perspectives", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Okazaki", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Takenobu", |
|
"middle": [], |
|
"last": "Tokunagat", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hozumi", |
|
"middle": [], |
|
"last": "Tanakat", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes two distinct attempts at the SENSEVAL-2 Japanese translation task. The first implementation is based on lexical similarity and builds on the results of Baldwin (2001b; 2001a), whereas the second is based on structural similarity via the medium of parse trees and includes a basic model of conceptual similarity. Despite its simplistic nature, the lexical method was found to perform the better of the two, at 49.1% accuracy, as compared to 41.2% for the structural method and 36.8% for the baseline.", |
|
"pdf_parse": { |
|
"paper_id": "S01-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes two distinct attempts at the SENSEVAL-2 Japanese translation task. The first implementation is based on lexical similarity and builds on the results of Baldwin (2001b; 2001a), whereas the second is based on structural similarity via the medium of parse trees and includes a basic model of conceptual similarity. Despite its simplistic nature, the lexical method was found to perform the better of the two, at 49.1% accuracy, as compared to 41.2% for the structural method and 36.8% for the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Translation retrieval is defined as the task of, for a given source language (11) input, retrieving the target language (12) string which best translates it. Retrieval is carried out over a translation memory made up of translation records, that is 11 strings coupled with an 12 translation. A single translation retrieval task was offered in SENSEVAL-2, from Japanese into English, and it is this task that we target in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Conventionally, translation retrieval is carried out by way of determining the 11 string in the translation memory most similar to the input, and returning the 12 string paired with that string as a translation for the input. It is important to realise that at no point is the output compared back to the input to determine its \"translation adequacy\", a job which is left up to the system user.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Determination of the degree of similarity between the input and 11 component of each translation record can take a range of factors into consideration, including lexical (character or word) content, word order, parse tree topology and conceptual similarity. In this paper, we focus on a simple character-based (lexical) method and more sophisticated parse tree comparison (structural) method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Both methods discussed herein are fully unsupervised. The lexical method makes use of no external resources or linguistic knowledge whatsoever. It treats each string as a \"bag of character bigrams\" and calculates similarity according to Dice's Coefficient. The structural method, on the other hand, relies on both morphological and syntactic analysis, in the form of the publicly-available JUMAN (Kuro-hashi and Nagao, 1998b) and KNP (Kurohashi and Nagao, 1998a) systems, respectively, and also the Japanese Goi-Taikei thesaurus (Ikehara et al., 1997) to measure conceptual distance. A parse tree is generated for the 11 component of each translation record, and also each input, and similarity gauged by both topological resemblance between parse trees and conceptual similarity between nodes of the parse tree.", |
|
"cite_spans": [ |
|
{ |
|
"start": 434, |
|
"end": 462, |
|
"text": "(Kurohashi and Nagao, 1998a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 551, |
|
"text": "(Ikehara et al., 1997)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Translation records used by the two systems were taken exclusively from the translation memory provided for the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the proceeding sections, we briefly review the Japanese translation task ( \u00a7 2) and detail our particular use of the data provided for the task ( \u00a7 3). Next, we outline the lexical method ( \u00a7 4) and structural method ( \u00a7 5), and compare and discuss the performance of the two methods ( \u00a7 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Japanese translation task data was made up of a translation memory and test set. The translation memory was dissected into 320 disjoint segments according to headwords, with an average of 21.6 translation records per headword (i.e. 6920 translation records overall). The purpose of the task was to select for a given headword which (if any) of the translation records gave a suitable translation for that word. The task stipulated that a maximum of one translation record could be selected for each input (allowing for the possibility of an unassignable output, indicating that no appropriate translation could be found). Translations were selected by way of a translation record ID, and systems were not required to actually identify what part of the 12 string in the selected translation record was the translation for the headword. Translation records took the form of Japanese-English pairings of word clusters, isolated phrases, clauses or sentences containing the headword, at an average of 8.0 Japanese characters 1 and 4.0 English words per translation record. In some instances, multiple semantically-equivalent translations were given for a single expression, such as \"corporation which is in danger of bankruptcy\" and \"unsound corporation\" for abunai kigyo; all such occurrences were marked by the annotator. For some other translation records, the annotator had provided a list of lexical variants or a paraphrase of the Ll expression to elucidate its meaning (not necessarily involving the headword), or made a note as to typical arguments taken by that expression (e.g. \"refers to a person\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the test data, inputs took the form of paragraphs taken from newspaper articles, within which a single headword had been identified for translation. The average input length was 697.9 characters, nearly 90 times the Ll component of each translation record. In its raw form, therefore, the translation task differs from a conventional translation retrieval task in that translation records and inputs are not directly comparable, in the sense that translation records are never going to provide a full translation approximation for the overall input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a:Japting the task data to our purposes, we first earned out limited normalisation of both the translation memory and test data by: (a) replacing all numerical expressions with a common NUM marker and (b) normalising punctuation. ' In order to maximise the disambiguating potential of the translation memory, we next set about automatically deriving as many discrete translation records as possible from the original translation memory. Multiple lexical variants of the same basic translation record (indexed identically) were generated in the case that: (a) a lexical alternate was provided (in which case all variants were listed in parallel); (b) a paraphrase was provided by the annotator (irrespective of whether the paraphrase included the headword or not); (c) syntactic or semantic preferences were listed for particular arguments in the basic translation record (in which case lexical ~.rariants took the form of strings expanded by adding m each preference as a string). At the same time, for each headword, any repetitions of the same Ll string were completely removed from the translation record data. This equates to the assumption that the translation listed first in the translation memory is the most salient or commonplace.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data preparation", |
|
"sec_num": "3" |
|
}, |
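
{

"text": "As an illustration of this normalisation, the following is a minimal Python sketch, assuming a simple regex-based NUM substitution and a small, purely illustrative set of punctuation mappings; the function name and exact character classes are our own rather than those of the original system.

import re

def normalise(s):
    # (a) replace all numerical expressions (ASCII or full-width digits)
    # with a common NUM marker
    s = re.sub(r'[0-9０-９]+', 'NUM', s)
    # (b) normalise punctuation: map full-width variants to a single
    # canonical form (illustrative subset only)
    return s.replace('，', '、').replace('．', '。')

print(normalise('１９９８年，売上は2000万円．'))  # -> 'NUM年、売上はNUM万円。'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data preparation",

"sec_num": "3"

},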
|
{ |
|
"text": "This method of translation record derivation resulted in a total of 152 new translation records wh~reas the removal of duplicate Ll strings fo; a given headword resulted in the deletion of 670 translation records; the total number of translation records was thus 6402, at an average of 20.0 translation records per headword.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data preparation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experimented with a number of methods for abbreviating the inputs, so as to achieve direct comparability between inputs and translation records. First, .we extracted the clause containing the headword mstance to be translated. This was achieved through a number of ad hoc heuristics driven by the analysis of punctuation. These clause-level instances served as the inputs for the str\u2022uctur\u2022al method. We 56 then further \"windowed\" the inputs for the lexical method, by allowing a maximum of 10 characters to either side of the headword. No attempt was made to identify or enforce the observation of word boundaries in this process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data preparation", |
|
"sec_num": "3" |
|
}, |
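
{

"text": "To make the windowing concrete, here is a minimal sketch; the function name is hypothetical, the width of 10 characters either side of the headword follows the description above, and the fallback when the headword is not found is our own assumption.

def window_input(clause, headword, width=10):
    # allow a maximum of `width` characters to either side of the headword,
    # making no attempt to observe word boundaries
    i = clause.find(headword)
    if i == -1:
        return clause  # headword not found: fall back to the whole clause
    return clause[max(0, i - width):i + len(headword) + width]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data preparation",

"sec_num": "3"

},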
|
{ |
|
"text": "As stated above, the lexical method is based on character-based indexing, meaning that each string is naively treated as a sequence of characters. Rather than treat each individual character as a single segment, however, we chunk adjacent characters into bigrams in order to capture local character contiguity. String similarity is then determined by way of Dice's Coefficient, calculated according to: B1gram frequency is weighted according to character type: a bigram made up entirely of hiragana charac-t~rs (gener_ally used in functional words/ particles) is given a weight of 0.2 and all other bigrams a weight of 1. Note that Dice's Coefficient ignores segment order, and that each string is thus treated as a \"bag of character bigrams\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "sim1 (IN~,, T Ri) = 2 x LeEIN,~", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
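
{

"text": "The following Python sketch implements the weighted character bigram Dice's Coefficient above. We assume, as one reading of the formula, that the character-type weighting applies both to the overlapping bigram frequencies and to the bigram lengths, and we approximate the hiragana test with a simple code-point range.

from collections import Counter

def weight(bigram):
    # bigrams made up entirely of hiragana characters are weighted 0.2,
    # all other bigrams 1
    return 0.2 if all('ぁ' <= ch <= 'ん' for ch in bigram) else 1.0

def bigrams(s):
    # the 'bag of character bigrams' representation: order is discarded
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def sim1(in_m, tr_i):
    b1, b2 = bigrams(in_m), bigrams(tr_i)
    overlap = sum(weight(e) * min(b1[e], b2[e]) for e in b1.keys() & b2.keys())
    length = (sum(weight(e) * f for e, f in b1.items())
              + sum(weight(e) * f for e, f in b2.items()))
    return 2 * overlap / length if length else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The lexical method",

"sec_num": "4"

},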
|
{ |
|
"text": "Our choice of the combination of Dice's Coefficient, character-based indexing and character higrams (rather than any other n-gram order or mixed n-gram model) is based on the findings of Baldwin (2001b; 2001a) , who compared character-and wordbased indexing in combination with both segment order-sensitive and bag-of-words similarity measures and with various n-gram models. As a result of extensive evaluation, Baldwin found the combination of character bigram-based indexing and a bagof-words method (in the form of either the vector space model or Dice's Coefficient) to be optimal. Our choice of Dice's Coefficient over the vector space m?del is due to the vector space model tending to bhthely prefer shorter strings in cases of low-level character overlap, and the ability of Dice's Coefficient to pick up on subtle string similarities under such high-noise conditions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 202, |
|
"text": "Baldwin (2001b;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 209, |
|
"text": "2001a)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Given the limited lexical context in translation records (8.0_ Japanese characters on average), our method 1s highly susceptible to the effects of data sparseness. While we have no immediate way of reconciling this shortcoming, it is possible to make use of_ t~e rich lexical context of the full inputs (i.e. in ong~nal paragraph form rather than clause or windowed clause form). Direct comparison of the full inputs with translation records is undesirable as high levels of spurious matches can be expected outside the scope of the original translation record expression. Inter-comparison of full inputs, on the other hand, provides a primitive model of domain similarity. Assuming that high similarity correlates with a high level of domain correspondence, we can apply a cross-lingual corollary of the \"one sense per discourse\" observation (Gale et al., 1992) in stipulating that a given word will be translated consistently within a given domain. By ascertaining that a given input closely resembles a second input, we can use the combined translation retrieval results for the two inputs to hone in on the optimal translation for the two. We term this procedure domain-based similarity consolidation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 843, |
|
"end": 862, |
|
"text": "(Gale et al., 1992)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The overall retrieval process thus involves: (1) carrying out standard translation retrieval based on the abbreviated input, (2) using the original test set to determine the full input string most similar to the current input, and (3) performing translation retrieval independently using the abbreviated form of the maximally similar alternate input. Numerically, the combined similarity is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "simz(INm,TRi) = 0.5 (sim1(IN;;,,TRi) +max siml(INmJNn) sim1(IN~, T Ri)) no;im", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where INm is the current input (full form), IN:'n is the abbreviated form of INrn, sim1 is as defined above, and INn is any input string other than the current input. Note that the multiplication by 0.5 simply normalises the output of sim2 to the range [0, 1] . For each input INrn, the ID for that translation record which is deemed most similar to IN m is returned, with translation records occurring earlier in the translation memory selected in the case of a tie. 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 256, |
|
"text": "[0,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 259, |
|
"text": "1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The lexical method", |
|
"sec_num": "4" |
|
}, |
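
{

"text": "A sketch of domain-based similarity consolidation, assuming sim1 as defined above; full_inputs and abbrev_inputs are hypothetical parallel lists holding the full and abbreviated forms of all test inputs.

def sim2(m, full_inputs, abbrev_inputs, tr_i):
    # step (2): find the alternate full input most similar to the current one
    others = [n for n in range(len(full_inputs)) if n != m]
    best = max(others, key=lambda n: sim1(full_inputs[m], full_inputs[n]))
    # steps (1) and (3): combine retrieval for the current input with
    # retrieval for the maximally similar alternate input; the 0.5
    # normalises the result back to the range [0, 1]
    return 0.5 * (sim1(abbrev_inputs[m], tr_i)
                  + sim1(full_inputs[m], full_inputs[best]) * sim1(abbrev_inputs[best], tr_i))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The lexical method",

"sec_num": "4"

},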
|
{ |
|
"text": "The structural method\u2022 contrasts starkly with the lexical method in that it is heavily resourcedependent, requiring a morphological analyser, parser and thesaurus. It operates over the same translation memory data as the lexical method, but uses only the abbreviated forms of the inputs (to the clause level) and does not consider inter-input similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "JUMAN (Kurohashi and Nagao, 1998b) is first used to segment each string (translation records and inputs), based on the output of which, the KNP parser (Kurohashi and Nagao, 1998a ) is used to derive a parse tree for the string. The reason for abbreviating inputs only as far as the clause level for the structural method, is to enhance parseability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 34, |
|
"text": "(Kurohashi and Nagao, 1998b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 178, |
|
"text": "(Kurohashi and Nagao, 1998a", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Further pruning takes place implicitly further downstream as part of the parse tree matching process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "KNP returns a binary parse tree, with leaves corresponding to optionally case-marked phrases. Each leaf node is simplified to the phrase head and the (optional) case marker normalised (according to the KNP output).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As for the lexical method, all translation records corresponding to the current headwQrd are matched against the parse tree for the input, and the ID of the closest-matching tree returned. In comparing a given pair of parse trees T 1 and T 2 , we proceed as follows in direction dE {up, down}:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "1. Set p 1 to the leaf node containing the headword in T 1 , and similarly initialise p 2 in T 2 ; initialise n to 0 (n, concepLsim(p},p})) 4. Increment n by 1, set p 1 and p 2 to their respective adjacent leaf nodes in direction d within the parse tree; goto step 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 139, |
|
"text": "(n, concepLsim(p},p}))", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
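
{

"text": "The following is a direct transcription of steps 1-4, under our own minimal representation of a leaf node (case marker, filler, and the adjacent leaf in each direction); concept_sim is sketched below. The behaviour when one tree runs out of leaves is not specified above, so we simply stop the walk at that point.

from dataclasses import dataclass, field

@dataclass
class Leaf:
    case: str     # (normalised) case marker
    filler: str   # phrase head
    adj: dict = field(default_factory=dict)  # 'up'/'down' -> adjacent Leaf

def match(p1, p2, d):
    n = 0  # step 1: p1, p2 start at the headword leaves of T1, T2
    while p1 is not None and p2 is not None:
        if p1.case != p2.case:      # step 2: case markers differ
            return (n, 0.0)
        if p1.filler != p2.filler:  # step 3: fillers differ
            return (n, concept_sim(p1.filler, p2.filler))
        n += 1                      # step 4: advance one leaf in direction d
        p1, p2 = p1.adj.get(d), p2.adj.get(d)
    return (n, 0.0)                 # one tree exhausted (our assumption)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The structural method",

"sec_num": "5"

},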
|
|
{ |
|
"text": "Here, p~ is the case marker associated with node pi, p} is the filler associated with node pi, and the =f. operator represents lexical inequality; concepLsim calculates the conceptual similarity of the two fillers in question according to the Goi-Taikei thesaurus (Ikehara et al., 1997) . We do this by, for each sense \u2022 pairing of the fillers, determining the least common hypernym and the number of edges separating each sense node from the least common hypernym. The conceptual distance of the given senses is then determined according to the inverse of the greater of the two edge distances to the hypernym node, and the overall conceptual distance for the two fillers as the minimum such sense-wise conceptual distance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 286, |
|
"text": "(Ikehara et al., 1997)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We match both up and down the tree structure from the headword node, and evaluate the combined similarity as the sum of the individual elements of the returned tuples. That is, if an upward match returned (i, m) and a downward match (j, n), the overall similarity would be (i + j, m + n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The translation output is the ID of the translation record producing the greatest such similarity, where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "( w, x) > (y, z) iff w > y or ( w = y 1\\ x > z).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As a result, conceptual similarity is essentially a tiebreaking mechanism, and the principal determining factor is the number of phrase levels over which the parse trees match. In the case that there is a tie for best translation, the translation record with the longest Ll string is (arbitrarily) chosen, and in the case that this doesn't resolve the stalemate, a translation record is chosen randomly. In the case that all translation records score (0, 0), we deem there to be no suitable translation in the translation memory, and return unassignable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
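
{

"text": "Putting the pieces together, record selection reduces to a max over (levels, similarity) tuples, which Python compares exactly as defined above; records and the attributes .score, .l1 and .id are hypothetical, and the final random tie-break is omitted here (max simply keeps the first maximum encountered).

def select(records):
    # each record carries .score = (levels, concept_sim), .l1 and .id;
    # tuple comparison gives (w, x) > (y, z) iff w > y or (w = y and x > z),
    # with ties on score broken by the longest L1 string
    best = max(records, key=lambda r: (r.score, len(r.l1)))
    if best.score == (0, 0.0):
        return 'unassignable'  # no translation record matched at all
    return best.id",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The structural method",

"sec_num": "5"

},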
|
{ |
|
"text": "As mentioned in Section 2, crude selectional preferences (of the form PERSON or BUILDING) were provided on certain argument slots in trans-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The structural method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Accuracy 49.1% 41.2% 36.8% Table 1 : Results lation records. These were supported by semiautomatically mapping the preference type onto the Goi-Taikei thesaurus structure, and modifying the ioperator to non-sense subsumption of the translation record filler by the input selectional preference, in step 3 of the parse tree match algorithm. Selectional preferences were automatically mapped onto nodes of the same name if they existed, and manually linked to the thesaurus otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 34, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical Structural Baseline", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The translation retrieval accuracy for the two methods is given in Table 1 , along with a baseline accuracy arrived at through random translation record selection for the given headword. Note that as we attempt to translate all inputs, the presented accuracy figures correspond to both recall and precision.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 74, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The most striking feature of the results is that the lexical method has a clear advantage over the structural method, while both methods outperform the baseline. Obviously, it would be going too far to discount structural methods outright based on this limited evaluation, particularly as the lexical method has undergone extensive testing and tuning over other datasets, whereas the structural method is novel to this task. It is surprising, however, that a technique as simple as the lexical method, requiring no external resources and ignoring even word boundaries and word order, should perform so well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The main area in which the structural method fell short was unassignable inputs where no translation record displayed even the same case marking on the headword. Indeed 130 or 10.8% of inputs were tagged unassignable, despite them comprising only 0.3% of the solution set. Note, however, that even for only those inputs where the structural method was able to produce a match, the lexical method significantly outperformed the structural method (50.2% vs. 45.4%, respectively).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Conversely for the lexical method, at present, a translation record is selected irrespective of the magnitude of the similarity value, and it would be a trivial process to implement a similarity cutoff, below which an unassignable result would be returned. Preliminary analysis of the correlation between the lowest similarity values and inputs annotated as unassignable indicates that this method could be moderately successful (see Baldwin et al. (to appear)).", |
|
"cite_spans": [ |
|
{ |
|
"start": 434, |
|
"end": 448, |
|
"text": "Baldwin et al.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
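
{

"text": "Such a cutoff would be a one-line extension of the selection step; a hypothetical sketch, with theta and the record attributes our own:

def select_with_cutoff(records, theta):
    # return unassignable when even the best similarity falls below theta
    best = max(records, key=lambda r: r.score)
    return 'unassignable' if best.score < theta else best.id",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results and discussion",

"sec_num": "6"

},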
|
{ |
|
"text": "The translation task was designed such that participants didn't get access to annotated inputs until after the submission of final results, meaning that parameter settings and fine-tuning of techniques had to be carried out according to intuition only. Post hoc evaluation of methods such as domain-based similarity consolidation suggests that it does have a significant impact on system performance (Baldwin et al., to appear) , although even in its basic configuration (using clause inputs and no domain-based similarity consolidation), the lexical method is superior to the structural method as presented herein.", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 427, |
|
"text": "(Baldwin et al., to appear)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In conclusion, this paper has served to describe each of a lexical and structural translation retrieval method, as applied to the SENSEVAL-2 Japanese translation task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The lexical method modelled strings as a bag of character bigrams, but incorporated a number of novel techniques including domain-based similarity consolidation in reaching a final decision as to the translation record most similar to the input. The structural method, on the other hand, compared parse trees and had recourse to conceptual similarity, but in a relatively rudimentary form. Of the two proposed methods, the lexical method proved to be clearly superior, although both methods were well above the baseline performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Ignoring punctuation but including each numeric digit as a single character.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "freqTR;(e) and len(TRi) are defined similarly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Based on the observation that translation records are roughly ordered according to commonality. Ties were observed 7.5% of the time, with the mean number of top-scoring translation records being 1.12.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This paper was supported in part by the Research Collaboration between the Nippon Telegraph and Telephone Company (NTT) Communication Science Laboratories and CSLI, Stanford University.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The successes and failures of lexical and structural translation retrieval", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Okazaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Tokunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Transactions of the IEICE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Baldwin, A. Okazaki, T. Tokunaga, and H. Tanaka. to appear. The successes and failures of lexical and structural translation retrieval. In Transactions of the IEICE.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Low-cost, high-performance translation retrieval: Dumber is better", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Baldwin. 2001a. Low-cost, high-performance translation retrieval: Dumber is better. In Proc. of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001), pages 18-25.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Making Lexical Sense of Japanese-English Machine Translation: A Disambiguation Extravaganza", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Baldwin. 2001b. Making Lexical Sense of Japanese-English Machine Translation: A Dis- ambiguation Extravaganza. Ph.D. thesis, Tokyo Institute of Technology.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "One sense per discourse", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of the 4th DARPA Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Gale, K. Church, and D. Yarowsky. 1992. One sense per discourse. In Proc. of the 4th DARPA Speech and Natural Language Workshop, pages 233-7.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Nihongo Goi Taikei -A Japanese Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ikehara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miyazaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Yokoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Shirai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nakaiwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ogura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Ooyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Hayashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Ikehara, M. Miyazaki, A. Yokoo, S. Shi- rai, H. Nakaiwa, K. Ogura, Y. Ooyama, and Y. Hayashi. 1997. Nihongo Goi Taikei -A Japanese Lexicon. Iwanami Shoten. 5 volumes. (In Japanese).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Building a Japanese parsed corpus while improving the parsing system", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proc. of the 1st International Conference on Language Resources and Evaluation (LREC'98)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "719--743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Kurohashi and M. Nagao. 1998a. Building a Japanese parsed corpus while improving the pars- ing system. In Proc. of the 1st International Con- ference on Language Resources and Evaluation (LREC'98), pages 719-24.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Nihongo keitai-.. kaiseki sisutemu JUMAN [Japanese morphological analysis system JUMAN] version 3.5. Technical report", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Kurohashi and M. Nagao. 1998b. Nihongo keitai- .. kaiseki sisutemu JUMAN [Japanese morphologi- cal analysis system JUMAN] version 3.5. Techni- cal report, Kyoto University. (In Japanese).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": ",,TR; min (JreqiN,;,(e),freqTR,(e)) len(IN;;,) + len(TRi) where IN;,.. is the abbreviated version of the input string IN m (see above) and T Ri is a translation record; each e is a character bigram occurring in either IN;,.. or TRi, freqiN* (e) is defined as the weighted frequency of bigram'\" type e in IN;,.., and le~ (IN;,..) is the character bigram length of IN;,... 2", |
|
"uris": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |