|
{ |
|
"paper_id": "D12-1026", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:23:15.011897Z" |
|
}, |
|
"title": "N-gram-based Tense Models for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhengxian", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Soochow University", |
|
"location": { |
|
"postCode": "215006", |
|
"settlement": "Suzhou", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Human Language Technology", |
|
"location": { |
|
"postCode": "138632", |
|
"country": "Singapore" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chewlim", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Singapore", |
|
"location": { |
|
"postCode": "117417", |
|
"country": "Singapore" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Soochow University", |
|
"location": { |
|
"postCode": "215006", |
|
"settlement": "Suzhou", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Tense is a small element to a sentence, however, error tense can raise odd grammars and result in misunderstanding. Recently, tense has drawn attention in many natural language processing applications. However, most of current Statistical Machine Translation (SMT) systems mainly depend on translation model and language model. They never consider and make full use of tense information. In this paper, we propose n-gram-based tense models for SMT and successfully integrate them into a state-of-the-art phrase-based SMT system via two additional features. Experimental results on the NIST Chinese-English translation task show that our proposed tense models are very effective, contributing performance improvement by 0.62 BLUE points over a strong baseline.", |
|
"pdf_parse": { |
|
"paper_id": "D12-1026", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Tense is a small element to a sentence, however, error tense can raise odd grammars and result in misunderstanding. Recently, tense has drawn attention in many natural language processing applications. However, most of current Statistical Machine Translation (SMT) systems mainly depend on translation model and language model. They never consider and make full use of tense information. In this paper, we propose n-gram-based tense models for SMT and successfully integrate them into a state-of-the-art phrase-based SMT system via two additional features. Experimental results on the NIST Chinese-English translation task show that our proposed tense models are very effective, contributing performance improvement by 0.62 BLUE points over a strong baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For many NLP applications, such as event extraction and summarization, tense has been regarded as a key factor in providing temporal order. However, tense information has been largely overlooked by current SMT research. Consider the following example: Although the translated text produced by Moses 1 is understandable, it has very odd tense combination from the grammatical aspect, i.e. with tense inconsistency (is/does in REF vs. is/did in Moses). Obviously, slight modification, such as changing \"is\" into \"was\", can much improve the readability of the translated text. It is also interesting to note that such modification can much affect the evaluation. If we change \"did\" to \"does\", the BLEU-4 score increases from 22.65 to 27.86 (as matching the 4-gram \"does not reflect the\" in REF). However, if we change \"is\" to \"was\", the BLEU score decreases from 22.65 to 21.44.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "SRC: \u00f9 ' B$\u00b4e\u00d4 (J , \u00d8 U \u2021N", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The above example seems special. To testify its impact on SMT in wider range, we design a special experiment based on the 2005 NIST MT data (see section 6.1). This data has 4 references. We choose one reference and modify its sentences with error tense 2 . After that, we use other 3 references to measure this reference. The modified reference leads to a sharp drop in BLEU-4 score, from 52.46 to 50.27 in all. So it is not a random phenomenon that tense can affect translation results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The key is how to detect tense errors and choose correct tenses during the translation procedure. By carefully comparing the references with Moses output, we obtain the following useful observations, Observation(1): to most simple sentences, coordinate verbs should be translated with the same tense while they have different tense in Moses output;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Observation(2): to some compound sentences, the subordinate clause should have the consistent tense with its main clause while Moses fails; Observation(3): the diversity of tense usage in a document is common. However, in most cases, the neighbored sentences tends to share the same main tense. In some extreme examples, one tense (past or present), even dominates the whole document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One possible solution to model above observations is using rules. Dorr (2002) refers to six basic English tense structures and defines the possible paired combinations of \"present, past, future\". But the practical cases are very complicated, especially in news domain. There are a lot of complicated sentences in news articles. Our preliminary investigation shows that such six paired combinations can only cover limited real cases in Chinese-English SMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 77, |
|
"text": "Dorr (2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper proposes a simple yet effective method to model above observations. For each target sentence in the training corpus, we first parse it and extract its tense sequence. Then, a target-side tense n-gram model is constructed. Such model can be used to estimate the rationality of tense combination in a sentence and thus supervise SMT to reduce tense inconsistency errors against Observations (1) and (2) in the sentence-level. In comparison, Observation (3) actually reflects the tense distributions among one document. After extracting each main tense for each sentence, we build another tense ngram model in the document-level. For clarity, this paper denotes document-level tense as \"inter-tense\" and sentence-level tense as \"intra-tense\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "After that, we propose to integrate such tense models into SMT systems in a dynamic way. It is well known there are many errors in the current MT output (David et al., 2006) . Unlike previously making trouble with reference texts, the BLEU-4 score cannot be influenced obviously by modifying a small part of abnormal sentences in a static way. In our system, both inter-tense and intra-tense model are integrated into a SMT system via additional features and thus can supervise the decoding procedure. During decoding, once some words with correct tense can be determined, with the help of language model and other related features, the small component-\"tense\"-can affect surrounding words and improve the performance of the whole sentence. Our experimental results (see the examples in Sec-tion 6.4) show the effectiveness of this way.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 173, |
|
"text": "(David et al., 2006)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Rather than the rule-based model, our models are fully statistical-based. So they can be easily scaled up and integrated into either phrase-based or syntaxbased SMT systems. In this paper, we employ a strong phrase-based SMT baseline system, as proposed in Gong et al. (2011) , which uses document as translation unit, for better incorporating documentlevel information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 275, |
|
"text": "Gong et al. (2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows: Section 2 reviews the related work. Section 3 and 4 are related to tense models. Section 3 describes the preprocessing work for building tense models. Section 4 presents how to build target-side tense models and discuss their characteristics. Section 5 shows our way of integrating such tense models into a SMT system. Session 6 gives the experimental results. Finally, we conclude this paper in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we focus on related work on integrating the tense information into SMT. Since both interand intra-tense models need to analyze and extract tense information, we also give a brief overview on tense prediction (or tagging).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The tense prediction task often needs to build a model based on a large corpus annotated with temporal relations and thus its focus is on how to recognize, interpret and normalize time expressions. As a representative, Lapata and Lascarides (2006) proposed a simple yet effective data-intensive approach. In particular, they trained models on main and subordinate clauses connected with some special temporal marker words, such as \"after\" and \"before\", and employed them in temporal inference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 247, |
|
"text": "Lapata and Lascarides (2006)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Another typical task is cross-lingual tense predication. Some languages, such as English, are inflectional, whose verbs can express tense via certain stems or suffix, while others, such as Chinese often lack inflectional forms. Take Chinese to English translation as example, if Chinese text contains particle word \" (Le)\", the nearest Chinese verb prefers to be translated into English verb with the past tense. Ye and Zhang (2005) , Ye et al. (2007) and Liu et al. (2011) focus on labeling the tenses for keywords in source-side language. Ye and Zhang (2005) first built a small amount of manually-labeled data, which provide the tense mapping from Chinese text to English text. Then, they trained a CRF-based tense classifier to label tense on Chinese documents. Ye et al. (2007) further reported that syntactic features contribute most to the marking of aspectual information. Liu et al. (2011) proposed a parallel mapping method to automatically generate annotated data. In particular, they used English verbs to label tense information for Chinese verbs via a parallel Chinese-English corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 432, |
|
"text": "Ye and Zhang (2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 451, |
|
"text": "Ye et al. (2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 473, |
|
"text": "Liu et al. (2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 560, |
|
"text": "Ye and Zhang (2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 766, |
|
"end": 782, |
|
"text": "Ye et al. (2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 898, |
|
"text": "Liu et al. (2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "It is reasonable to label such source-side verb to supervise the translation process since the tense of English sentence is often determined by verbs. The problem is that due to the diversity of English verb inflection, it is difficult to map such Chinese tense information into the English text. To our best knowledge, although above works attempt to serve for SMT, all of them fail to address how to integrate them into a SMT system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Dorr (1992) described a two-level knowledge representation model based on Lexical Conceptual Structures (LCS) for machine translation which integrates the aspectual information and the lexical-semantic information. Her system is based on an inter-lingual model and does not belong to a SMT system. Olsen et al. (2001) relied on LCS to generate appropriately-tensed English translations for Chinese. In particular, they addressed tense reconstruction on a binary taxonomy (present and past) for Chinese text and reported that incorporating lexical aspect features of telicity can obtain a 20% to 35% boost in accuracy on tense realization. Ye et al. (2006) showed that incorporating latent features into tense classifiers can boost the performance. They reported the tense resolution results based on the best-ranked translation text produced by a SMT system. However, they did not report the variation of translation performance after introducing tense information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 317, |
|
"text": "Olsen et al. (2001)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 655, |
|
"text": "Ye et al. (2006)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine Translation with Tense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this paper, tense modeling is done on the targetside language. Since our experiments are done on Chinese to English SMT, our tense models are learned only from the English text. In the literature, the taxonomy of English tenses typically includes three basic tenses (i.e. present, past and future) plus their combination with the progressive and perfective aspects. Here, we consider four basic tenses: present, past, future and UNK (unknown) and ignore the aspectual information. Furthermore, we assume that one sentence has only one main tense but maybe has many subordinate tenses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing for Tense Modeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This section describes the preprocessing work of building tense models, which mainly involves how to extract tense sequence via tense verbs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing for Tense Modeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Lapata et al. 2006used syntactic parse trees to find clauses connected with special aspect markers and collected them to train some special classifiers for temporal inference. Inspired by their work, we use the Stanford parser 3 to parse tense sequence for each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Take the following three typical sentences as examples, (a) is a simple sentence which contains two coordinate verbs, while (b) and (c) are compound sentences and (b) contains a quoted text. Figure 1 shows the parse tree with Penn Treebank style for each sentence, which has strict level structures and POS tags for all the terminal words. Here, the level structures mainly contribute to extract main tense for each sentence (to be described in Section 3.2) and POS tags are utilized to detect tense verbs, i.e. verbs with tense information.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 199, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tense Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Normally, POS tags in the parse tree can distinguish five different forms of verbs: the base form (tagged as VB), and forms with overt endings D for past tense, G for present participle, N for past participle, and Z for third person singular present. It is worth noting that VB, VBG and VBN cannot determine the specific tenses by themselves. In addition, the verbs with POS tag \"MD\" need to be specially considered to distinguish future tense from other tenses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Algorithm 1 illustrates how to determine what tense a node has. If the return value is not \"UNK\", the node belongs to a tense verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Algorithm 1 Determine the tense of a node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The TreeNode of one parse tree, leaf node; Output:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The tense, tense; 1: tense = \"U N K 2: Obtaining the POS tag lpostag from leaf node; 3: Obtaining the word lword from leaf node;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "4: if (lpostag in [\"V BP , \"V BZ ]) then 5: tense = \"present 6: else if (lpostag == \"V BD ]) then 7: tense = \"past 8: else if (lpostag == \"M D ]) then 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if (lword in [\"will , \"ll , \"shall ]) then ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
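As a companion to Algorithm 1, here is a minimal Python sketch of the same decision. The node is modeled as a (word, POS tag) pair rather than a Stanford parse-tree leaf, and the handling of modals other than "will/'ll/shall", which the listing above truncates, is an assumption that simply falls back to "UNK".

```python
def tense_of_node(word: str, pos_tag: str) -> str:
    """Decide the tense carried by a single leaf node (word, POS tag)."""
    tense = "UNK"
    if pos_tag in ("VBP", "VBZ"):          # non-3rd / 3rd person singular present
        tense = "present"
    elif pos_tag == "VBD":                 # past-tense verb
        tense = "past"
    elif pos_tag == "MD":                  # modal: only a few signal the future tense
        if word.lower() in ("will", "'ll", "shall"):
            tense = "future"
    # VB, VBG and VBN cannot determine a tense by themselves, so "UNK" remains.
    return tense

# A node tagged VBD is a tense verb in the past tense; a VBG node is not a tense verb.
assert tense_of_node("announced", "VBD") == "past"
assert tense_of_node("walking", "VBG") == "UNK"
```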
|
{ |
|
"text": "As described in Section 1, the inter-tense (document-level) refers to the main tense of one sentence and the intra-tense (sentence-level) corresponds to all tense sequence of one sentence. This section introduces how to recognize the main tense and extract all useful tense sequence for each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The idea of determining the main tense is to find the tense verb located in the top level of a parse tree. According to the Penn Treebank style, the method to determine the main tense can be described as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(1) Traverse the parse tree top-down until a tree node containing more than one child is identified, denoted as S m . (2) Consider each child of S m with tag \"VP\", recursively traverse such \"VP\" node to find a tense verb. If found, use it as the main tense and return the tense; if not, go to step (3). (3) Consider each child of S m with tag \"S\", which actually corresponds to subordinate clause of this sentence. Starting from the first subordinate clause, apply the similar policy of step (2) to find the tense verb. If not found, search remaining subordinate clauses. (4) If no tense verb found, return \"UNK\" as the main tense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
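The sketch below illustrates steps (1)-(4), assuming the parse is available as an nltk.Tree in Penn Treebank style and reusing tense_of_node from the Algorithm 1 sketch above; it is an illustration of the procedure, not the authors' implementation.

```python
from nltk import Tree   # the paper uses Stanford parser output in Penn Treebank style

def find_tense_verb(node):
    """Recursively search a subtree (top-down) for the first tense verb."""
    if isinstance(node, Tree):
        if len(node) == 1 and isinstance(node[0], str):   # pre-terminal (POS, word)
            t = tense_of_node(node[0], node.label())
            return t if t != "UNK" else None
        for child in node:
            t = find_tense_verb(child)
            if t is not None:
                return t
    return None

def main_tense(tree):
    # (1) Walk down until a node with more than one child is found: S_m.
    s_m = tree
    while isinstance(s_m, Tree) and len(s_m) == 1:
        s_m = s_m[0]
    if not isinstance(s_m, Tree):
        return "UNK"
    # (2) Prefer "VP" children directly dominated by S_m.
    for child in s_m:
        if isinstance(child, Tree) and child.label() == "VP":
            t = find_tense_verb(child)
            if t:
                return t
    # (3) Otherwise look into subordinate clauses ("S" children), left to right.
    for child in s_m:
        if isinstance(child, Tree) and child.label() == "S":
            t = find_tense_verb(child)
            if t:
                return t
    # (4) No tense verb found.
    return "UNK"

# Toy example in Penn Treebank style:
tree = Tree.fromstring("(S (NP (NNP Israel)) (VP (VBZ renounces) (NP (DT the) (NN plan))))")
print(main_tense(tree))   # -> present
```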
|
{ |
|
"text": "Here, \"VP\" nodes dominated by S m directly are preferred over those located in subordinate clauses. This is to ensure that the main tense is decided by the top-level tense verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Take Figure 1 as an example, the main tense of sentence (a) and (b) can be determined only by step (2). The tense verb of \"(VBZ renounces)\" dominated in the \"VP\" tag determines that (a) is in present tense. Similarly the node \"(VBD added)\" indicates that (b) is in past tense. Sentence (c) needs to be further treated by step (3) since there is no \"VP\" nodes dominated by S m directly. The node \"(VBD said)\" located in the first subordinate clause shows its main tense is \"past\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 13, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The next task is to extract the tense sequence for each sentence. They are determined by all tense verbs in this sentence according to the strict topdown order. For example, the tense sequence of sentence (a), (b) and (c) are {present, present}, {present, future, past} and {past, past, past}. In order to explore whether the main tense of intra-tense model has an impact on SMT or not, we introduce a special marker \"*\" to denote the main tense. So the tense sequence marked with main tense of (a), (b) and (c) are {present*, present},{present, future, past*} and {past*, past, past}. It is worth noting, the intra-tense model (see Section 4) based on the latter tense sequence is different to the former.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense Extraction Based on Tense Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "After applying the previous method to extract tense for an English text corpus, we can obtain a big tense corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Given the current tense is indexed as t i , we call the previous n \u2212 1 tenses plus the current tense as tense n-gram.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Based on the tense corpus, tense n-gram statistics can be done according to the Formula 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (t i |t (i\u2212(n\u22121)) , ..., t (i\u22121)) = count(t (i\u2212(n\u22121)) , . . . , t (i\u22121) , t i ) count(t (i\u2212(n\u22121)) , ..., t (i\u22121) )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here, the function of \"count\" return the tense n-gram frequency. In order to avoid doing specific smoothing work, we estimate tense n-gram probability using SRI language modeling (SRILM) tool (Stolcke, 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 207, |
|
"text": "(Stolcke, 2002)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To compute the probability of intra-tense n-gram, we first extract all tense sequence for each sentence and put them into a new file. Based on this new file, we can get the intra-tense n-gram model via SRILM tool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To compute the probability of inter-tense n-gram, we need to extract the main tense for each sentence at first. Then, for each document, we re-organized the main tenses of all sentences into a special line. After putting all these special lines into a new file, we can use SRILM to obtain the inter-tense n-gram model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tense N-gram Estimation", |
|
"sec_num": "4.1" |
|
}, |
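A minimal sketch of the two corpus-preparation pipelines and the SRILM calls, assuming per-sentence tense sequences and main tenses have already been extracted as in Section 3 and that SRILM's ngram-count binary is on the PATH. The toy main-tense choice in the example is an assumption; the paper derives the main tense from the parse tree.

```python
import subprocess

def write_intra_corpus(docs, path):
    """One line per sentence: that sentence's full tense sequence."""
    with open(path, "w") as f:
        for doc in docs:
            for sent_tenses in doc:              # e.g. ["past", "past", "present"]
                f.write(" ".join(sent_tenses) + "\n")

def write_inter_corpus(docs, main_tense_of, path):
    """One line per document: the main tense of each of its sentences."""
    with open(path, "w") as f:
        for doc in docs:
            f.write(" ".join(main_tense_of(s) for s in doc) + "\n")

def train_tense_lm(corpus_path, lm_path, order=2):
    # SRILM estimates the n-gram model with its own smoothing, so no
    # task-specific smoothing code is needed on our side.
    subprocess.run(["ngram-count", "-order", str(order),
                    "-text", corpus_path, "-lm", lm_path], check=True)

# docs: a list of documents, each a list of per-sentence tense sequences.
docs = [[["past", "past"], ["past", "present"]], [["present", "present"]]]
write_intra_corpus(docs, "intra_tense.txt")
# Toy main-tense choice (first tense); the paper derives it from the parse tree.
write_inter_corpus(docs, lambda sent: sent[0], "inter_tense.txt")
train_tense_lm("intra_tense.txt", "intra_tense.lm", order=2)
train_tense_lm("inter_tense.txt", "inter_tense.lm", order=2)
```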
|
{ |
|
"text": "We construct n-gram-based tense models on English Gigaword corpus (LDC2003T05). This corpus is used to build language model for most SMT systems. It includes 30221 documents (we remove such files: file size is less than 1K or the number of continuous \"UNK\" tenses is greater than 5). Figure 2 shows the unigram and bigram probabilities (Log10-style) for intra-tense and inter-tense.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 292, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Characteristic of Tense N-gram Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The part (a) and (c) in Figure 2 refer to unigram. The horizontal axis indicts tense type, and the vertical axis shows its probabilities. The parts (a) and (c) also indicate \"present\" and \"past\" are two main tense types in news domain.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 32, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Characteristic of Tense N-gram Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The part (b) and (d) refer to bigram. The horizontal axis indicts history tense. Each different colorful bar indicts one current tense. The vertical axis shows the transfer possibilities from a history tense to a current tense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristic of Tense N-gram Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The part (b) 4 reflects transfer possibilities of tense types in one sentence. It also slightly reflects some linguistic information. For example, in one sentence, the probability of co-occurrence of \"present \u2192 present\", \"past \u2192 past\" and \"future \u2192 present\" is more than other combinations, which can be against tense inconsistency errors described in Observation (1) and (2) (see Section 1). However, it seems strange that \"present \u2192 past\" exceeds \"present \u2192 future\". We checked our corpus and found a lot of sentences like this-\"the bill has been . . . , he said. \".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Characteristic of Tense N-gram Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The part (d) shows tense type can be switched between two neighbored sentences. However, it shows the strong tendency to use the same tense type for Figure 2 : statistics of intra-tense and inter-tense N-gram neighbored sentences. This statistics conform to the previous observation (3) very much.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 157, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Characteristic of Tense N-gram Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section, we discuss how to integrate the previous tense models into a SMT system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integrating N-gram-based Tense Models into SMT", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "It is well known that the translation process of SMT can be modeled as obtaining the best translation e of the source sentence f by maximizing following posterior probability (Brown et al., 1993) : ", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic phrase-based SMT system", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "e best =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic phrase-based SMT system", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where P (e|f ) is a translation model and P lm is a language model. Our baseline is a modified Moses, which follows Koehn et al. (2003) and adopts similar six groups of features. Besides, the log-linear model ( Och and Ney, 2000) is employed to linearly interpolate these features for obtaining the best translation according to the formula 3:", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 135, |
|
"text": "Koehn et al. (2003)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 229, |
|
"text": "( Och and Ney, 2000)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic phrase-based SMT system", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e best = arg max e M m=1 \u03bb m h m (e, f )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Basic phrase-based SMT system", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where h m (e, f ) is a feature function, and \u03bb m is the weight of h m (e, f ) optimized by a discriminative training method on a held-out development data (Och, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 166, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic phrase-based SMT system", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Our system works as follows: When a hypothesis has covered all source-side words during the decoding procedure, the decoder first obtains tense sequence for such hypothesis and computes intra-tense feature F s (see Section 5.3). At the same time, it recognizes the main tense of this hypothesis and associate the main tense of previous sentence to compute inter-tense feature F m (see Section 5.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Workflow of Our System", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Next, the decoder uses such two additional feature values to re-score this hypothesis automatically and choose one hypothesis with highest score as the final translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Workflow of Our System", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "After translating one sentence, the decoder caches its main tense and pass it to the next sentence. When one document has been processed, the decoder cleans this cache.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Workflow of Our System", |
|
"sec_num": "5.2" |
|
}, |
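A schematic rendering of this workflow is sketched below; all names (translate_document, decoder.decode and its keyword arguments) are hypothetical, since the real integration lives inside the modified Moses decoder.

```python
def translate_document(sentences, decoder, intra_lm, inter_lm):
    """Translate one document, threading the previous sentence's main tense
    through the decoder so that the inter-tense feature can be computed."""
    prev_main_tense = None          # the cache is empty at the start of a document
    outputs = []
    for src in sentences:
        # The decoder scores every completed hypothesis with two extra features:
        #   F_s from the hypothesis' own tense sequence (intra_lm),
        #   F_m from (prev_main_tense -> hypothesis main tense) under inter_lm.
        hyp, main_tense = decoder.decode(src,
                                         intra_lm=intra_lm,
                                         inter_lm=inter_lm,
                                         prev_main_tense=prev_main_tense)
        outputs.append(hyp)
        prev_main_tense = main_tense        # cache the main tense for the next sentence
    return outputs                          # the cache is discarded with this call
```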
|
{ |
|
"text": "In order to successfully implement above workflow, we should firstly design some related features, then resolve another key problem of determining tense (especially main tense) for SMT output. They are described in Section 5.3 and 5.4 respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Workflow of Our System", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Although the previous tense models show strong tendency to use the consistent tenses for one sentence or one document, other tense combinations also can be permitted. So we should use such models in a soft and dynamic way. We design two features: inter-tense feature and intra-tense feature. And the weight of each feature is tuned by the MERT script in Moses packages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Given main tense sequence of one document t 1 , . . . , t m , the inter-tense feature F m is calculated according to the following formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F m = m i=2 P (t i |t i\u2212(n\u22121) , . . . , t (i\u22121) )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The P (\u2022) of formula 4 can be estimated by the formula 1. It is worth noting the first sentence of one document often scares tense information since it corresponds to the title at most cases. To the first sentence, the P (\u2022) value is set to 1 4 (4 tense types). Given tense sequence of one sentence s 1 , . . . , s e (e > 1), the intra-tense feature F s is calculated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F s = e\u22121 e i=2 P (s i |s i\u2212(n\u22121) , . . . , s (i\u22121) )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Here the square-root operator is used to avoid punishing translations with long tense sequence. It is worth noting if the sentence only contains one tense, the P (\u2022) value of formula 5 is also set to 1 4 . Since the average length of intra-tense sequence is about 2.5, we mainly consider intra-tense bigram model and thus n equals to 2. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Additional Features", |
|
"sec_num": "5.3" |
|
}, |
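A small sketch of both feature computations under a bigram model (n = 2), assuming a lookup function p(current, history) backed by the SRILM tense models; the handling of the first (title) sentence and of single-tense sentences follows our reading of the description above.

```python
def inter_tense_feature(main_tenses, p, first_p=0.25):
    """F_m: product of bigram probabilities over the document's main tenses.
    The factor involving the first sentence (usually a title with scarce
    tense information) is fixed to 1/4, one over the number of tense types."""
    f_m = 1.0
    for i in range(1, len(main_tenses)):
        f_m *= first_p if i == 1 else p(main_tenses[i], main_tenses[i - 1])
    return f_m

def intra_tense_feature(tenses, p):
    """F_s: the (e-1)-th root of the product of bigram probabilities over a
    sentence's tense sequence; single-tense sentences get the value 1/4."""
    e = len(tenses)
    if e <= 1:
        return 0.25
    prod = 1.0
    for prev, cur in zip(tenses, tenses[1:]):
        prod *= p(cur, prev)
    return prod ** (1.0 / (e - 1))

# Toy bigram table; a real system would query the SRILM tense models instead.
bigram = {("past", "past"): 0.62, ("present", "past"): 0.21}
p = lambda cur, prev: bigram.get((cur, prev), 0.05)
print(intra_tense_feature(["past", "past", "present"], p))   # ~0.36
```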
|
{ |
|
"text": "The current SMT systems often produce odd translations partly because of abnormal word ordering and uncompleted text etc. For these abnormal translated texts, the syntactic parser cannot work well in our initial experiments, so the previous method to parse main tense and tense sequence of regular texts cannot be applied here too.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Fortunately, the solely utilization of Stanford POS tagger for our SMT output is not bad although it has the same issues described in Och et al. (2002) . The reason may be that phrase-based SMT contains short contexts that POS tagger can utilize while the syntax parser fails.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 151, |
|
"text": "Och et al. (2002)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Once obtaining a completed hypothesis, the decoder will pass it to the Stanford POS tagger and according to tense verbs to get all tense sequence for this hypothesis. However, since the POS tagger can not return the information about level structures, the decoder cannot recognize the main tense from such tense sequence. Liu et al. (2011) once used target-side verbs to label tense of source-side verbs. It is natural to consider whether Chinese verbs can provide similar clues in an opposite direction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 339, |
|
"text": "Liu et al. (2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Since Chinese verbs have good correlation with English verbs (described in section 6.2), we obtain Figure 3 : trees for parallel sentences main tense for SMT output according to such tense verb, which corresponds to the \"VV\"(Chinese POS labels are different to English ones, \"VV\" refers to Chinese verb) node in the top level of the source-side parse tree. Take Figure 3 as an example, the English node \"(VBD announced)\" is a tense verb which can tell the main tense for this English sentence. The Chinese verb \"(VV\u00fa \u00d9)\" in the top-level of the Chinese parse tree is just the corresponding part for this English verb.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 107, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 370, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "So, before translating one sentence, the decoder first parses it and records the location of one Chinese \"VV\" node which located in the top-level, denotes this location as S area .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Once a completed hypothesis is generated, according to the phrase alignment information, the decoder can map S area into the English location T area and obtain the main tense according to the POS tags in T area .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "If T area does not contain tense verb, such as the verb POS tags in the list of {VB, VBN, VBG}, which cannot tell tense type by themselves, our system permits to find main tense in the left/right 3 words neighbored to T area . And the proportion that the top-level verb of Chinese has a verb correspondence in English can reach to 83% in this way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining Tense For SMT Output", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In our experiment, SRI language modeling toolkit was used to train a 5-Gram general language model on the Xinhua portion of the Gigaword corpus. Word alignment was performed on the training parallel corpus using GIZA++ ( Och and Ney, 2000) in two directions. For evaluation, the NIST BLEU script (version 13) with the default setting is used to calculate the BLEU score (Papineni et al., 2002) , which measures case-insensitive matching of 4-grams. To see whether an improvement is statistically significant, we also conduct significance tests using the paired bootstrap approach (Koehn, 2004) . In this paper, \"***\" and \"**\" denote p-values equal to 0.05, and bigger than 0.05, which mean significantly better, moderately better respectively. We use FBIS as the training data, the 2003 NIST MT evaluation test data as the development data, and the 2005 NIST MT test data as the test data. Table 1 shows the statistics of these data sets (with document boundaries annotated).", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 239, |
|
"text": "( Och and Ney, 2000)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 393, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 593, |
|
"text": "(Koehn, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 890, |
|
"end": 897, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setting for SMT", |
|
"sec_num": "6.1" |
|
}, |
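For reference, a compact sketch of the paired bootstrap test (Koehn, 2004) used here; it resamples per-sentence scores for illustration, whereas a faithful implementation would recompute corpus-level BLEU on each resample.

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Paired bootstrap resampling: how often does system A beat system B
    when the test set is resampled with replacement?  Returns an approximate
    p-value for the null hypothesis that A is not better than B."""
    rng = random.Random(seed)
    n, wins_a = len(scores_a), 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins_a += 1
    return 1.0 - wins_a / samples

# A p-value no larger than 0.05 corresponds to "***" in the paper's notation.
```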
|
{ |
|
"text": "In this section, an additional experiment is designed to show English Verbs have close correspondence with Chinese Verbs. We use the Stanford POS tagger to tag the Chinese and English sentences in our training corpus respectively at first. Then we utilize Giza++ to build alignment for these special Word-POS pairs. According to the alignment results, we find the corresponding relation for some special POS tags in two languages. The \"Number\" column of Table 2 shows the numbers of Chinese words with \"VV\" tag corresponding to English words with different verb POS tags.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 461, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Correlation of Chinese Verbs and English Verbs", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We found Chinese verbs have more than 77% possibilities to align to English verbs in total. However, our method will fail when some special Chinese sentences only contain noun predicates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Correlation of Chinese Verbs and English Verbs", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "All the experiment results are showed on the table 3. Our Baseline is a modified Moses. The major modification is input and output module in order to translate using document as unit. The performance of our baseline exceeds the baseline reported by Gong et al. (2011) The system denoted as \"Baseline+F m \" integrates the inter-tense feature. The performance boosts 0.57(***) in BLEU score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 267, |
|
"text": "Gong et al. (2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "The system denoted as \"Baseline+F s \" integrates the intra-tense feature into the baseline. The improvement is less than the inter-tense model, only 0.31(**). It seems the tenses in one sentence has more flexible formats than the document-level ones. It is worth noting, this method can gain higher performance on the develop data than the one of \"Baseline+F m \" while fail to improve the test data. Maybe the related weight is tuned over-fit.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "The system denoted as \"Baseline+F s (*)\" is slightly different from \"Baseline+F s \". This experiment is to check whether the main tense has an impact on intra-tense model or not (see Section 3.2). Here, the intra-tense model based on the tense sequence with main tense marker is slightly different to the model showed in Figure 2 . The results are slightly better than the previous system by 0.13.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 329, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Finally, we use the two features together (Baseline+F m +F s ). The best way improved the performance by 0.62(***) in BLEU score over our baseline. Table 4 shows special examples whose intra-tenses are changed in our proposed system. The example 1 and 2 show such modification can improve the BLEU score but the example 3 obtains negative impact. From these examples, we can see not only tense verbs have changed but also their surrounding words have subtle variation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 155, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Translation sentence 1 8.64 Baseline: the gulf countries , the bahraini royal family members by the military career of part of the banned to their marriage stories like children , have become the theme of television films . 19.71 Ours: the gulf country is a member of the bahraini royal family , a risk by military career risks part of the banned to their marriage like children , has become a story of the television and film industry . 2 17.16 Baseline:economists said that the sharp appreciation of the euro , in the recent investigation continues to have an impact on economic confidence , it is estimated that the economy is expected to rebound to pick up . 24.25 Ours: economists said that the sharp appreciation of the euro , in the recent investigation continued to have an impact on economic confidence and therefore no reason to predict the economy expected to pick up a rebound . 3 73.03 Baseline: the middle east news agency said that , after the concerns of all parties concerned in the middle east peace process , israel and palestine , egypt , the united states , russia and several european countries will hold a meeting in washington . 72.95 Ours: the middle east news agency said that after the concerns of all parties in the middle east peace process , israel and palestine , egypt , the united states , russia and several european countries held a meeting in washington . From the results showed on Table 3 , the document-level tense model seems more effective than the sentence-level one. We manually choose and analyzed 5 documents with significant improvement in the test data. The part (a) of Figure 4 shows the main tense distributions of one reference. The main tense distributions for the baseline and our proposed system are showed in the part (b) and (c) respectively. These documents have different numbers of sentences but all less than 10. The vertical axis indicates different tense: 1 to \"past\", 2 to \"present\", 3 to \"future\" and 4 to \"UNK\". It is obvious that our system has closer distributions to the ones of this reference.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1419, |
|
"end": 1426, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1617, |
|
"end": 1625, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "No. BLEU", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The examples in Table 5 indicate the joint impact of inter-tense and intra-tense model on SMT. Sen-Src: 1)\u00b1\u00da \u00f4\u00ac K \u00b5\u00a3\u02dc^\u00cc \u2021 \u00b4, |AE \u00f4\"\u00ab ; \u00bd\u00c2 \u00f4\u00c2 \" 2)nVd\" )\u02dc|\" +E Cnd \u2022 \u00bcO c \u00e0 \u00dcW \u00a2 \u00cb|\u00f0 , \u00f9\u00b4L o c 5 \u00b1\u00da \u00db \u00c4 g ON nV d\" - \u2021 + <\u00d4 \u00eb\\ \u2039 ! \u0178; \" Ref: 1)Israeli settlers blockaded a major road to protest a mortar attack on the settlement area. 2)PLO leader Abbas had also been allowed to go to the West Bank town of Bethlehem , which is the first time in the past four years Israeli authorities have allowed a senior Palestinian leader to attend Christmas celebrations. Baseline: 1)israel has imposed a main road to protest by mortars attack .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 23, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "No. BLEU", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2)the palestinian leader also visited the west bank cities and towns to bethlehem , which in the past four years , the israeli authorities allowed the palestinian leading figures attended the ceremony . Ours: 1)israel has imposed a main road to protest against the mortars attack .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No. BLEU", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2)leader of the palestinian liberation organization have also been allowed to go to the west bank towns , bethlehem in the past four years . this is the first time the israeli authorities allow palestinian leading figures attended the ceremony . Table 5 : the joint impact of inter-tense and intra-tense models on SMT tence 1) and 2) are two neighbored sentences in one document. Both the reference and our output tend to use the same main tense type, but the former is in \"past\" tense and the latter is in \"present\" tense. The baseline cannot show such tendency. Although our main tense is different to the reference one, the consistent tenses in document level bring better translation results than the baseline. And the tenses in sentence level also show better consistency than the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 253, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "No. BLEU", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper explores document-level SMT from the tense perspective. In particular, we focus on how to build document-level and sentence-level tense models and how to integrate such models into a popular SMT system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Compared with the inter-tense model which greatly improves the performance of SMT, the intra-tense model needs to be further explored. The reasons are many folds, e.g. the failure to exclude quoted texts when modeling intra-tense, since tenses in quoted texts behave much diversely from normal texts. In the future work, we will focus on modeling intratense variation according to specific sentence types and using more features to improve it. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/moses/ 2 Such changes are small by mainly modifying one auxiliary verb for a sentence, such as \"is \u2192 was\", \"has \u2192 had\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://nlp.stanford.edu/software/lex-parser.shtml", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The co-occurrence of the \"UNK\" tense and other tense types in one sentence cannot happen, so the \"UNK\" tense is omitted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiment, the intra-tense bigram model can obtain the comparable performance to the trigram model. And the inter-tense trigram model can not exceed the bigram one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research is supported by part by NUS FRC Grant R252-000-452-112, the National Natural Sci- ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.F. Brown, S.A. Della Pietra, V.J. Della Pietra and R.L. Mercer. 1992. The Mathematics of Statistical Ma- chine Translation: Parameter Estimation. Computa- tional Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Error Analysis of Machine Translation Output", |
|
"authors": [ |
|
{ |
|
"first": "Vilar", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Dharo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "697--702", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vilar David, Jia Xu, DHaro L. F., and Hermann Ney. 2006. Error Analysis of Machine Translation Output. In Proceedings of the 5th International Conference on Language Resources and Evaluation, pages 697-702, Genoa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A parameterized approach to integrating aspect with lexical-semantics for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Dorr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of of ACL-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--264", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie J. Dorr. 1992. A parameterized approach to integrating aspect with lexical-semantics for machine translation. In Proceedings of of ACL-2002, pages 257-264.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Constraints on the Generation of Tense, Aspect, and Connecting Words from Temporal Expressions", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Gaasterland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie J. Dorr and Terry Gaasterland. 2002. Constraints on the Generation of Tense, Aspect, and Connecting Words from Temporal Expressions. Technical Reports from UMIACS.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cached-based Document-level Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhengxian", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "909--919", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengxian Gong, Min Zhang and Guodong Zhou. 2011. Cached-based Document-level Statistical Ma- chine Translation. In Proceedings of the 2011 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 909-919.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Statistical Phrase-Based Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielmarcu", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of NAACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och ,and DanielMarcu. 2003. Statistical Phrase-Based Translation. In Proceedings of NAACL-2003, pages 48-54.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Statistical Significance Tests for Machine Translation Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natu- ral Language Processing, pages 388-395.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning Sentence-internal Temporal Relations", |
|
"authors": [ |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "85--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mirella Lapata, Alex Lascarides. 2006. Learning Sentence-internal Temporal Relations. Journal of Ar- tificial Intelligence Research, 27:85-117.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning from Chinese-English Parallel Data for Chinese Tense Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Feifan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of IJCNLP-2011", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1116--1124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feifan Liu, Fei Liu and Yang Liu. 2011. Learning from Chinese-English Parallel Data for Chinese Tense Pre- diction. In Proceedings of IJCNLP-2011, pages 1116- 1124,Chiang Mai, Thailand.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Improved Statistical Alignment Models", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of of ACL-2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved Sta- tistical Alignment Models. In Proceedings of of ACL- 2000, pages 440-447.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A smorgasbord of Features for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sanjeev Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of NAACL-2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, et.al. 2002. A smorgasbord of Features for Statistical Machine Translation. In Proceedings of NAACL-2004, pages 440-447.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Minimum Error Rate Training in Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of ACL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL-2003,pages 160-167, Sapporo, Japan,July.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Implicit Cues for Explicit Generation: Using Telicity as a Cue for Tense Structure in a Chinese to English MT System", |
|
"authors": [ |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Olsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [], |
|
"last": "Van Ess-Dykema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Weinberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of MT Summit VIII", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mari Olsen, David Traum,Carol Van Ess-Dykema and Amy Weinberg. 2001. Implicit Cues for Explicit Gen- eration: Using Telicity as a Cue for Tense Structure in a Chinese to English MT System. In Proceedings of MT Summit VIII, Santiago de Compostella, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weijing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- jing Zhu. 2002. BLEU: A Method for Automatic E- valuation of Machine Translation. In Proceedings of ACL-2002, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SRILM-an extensible language modeling toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "901--904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proceedings of the Internation- al Conference on Spoken Language Processing,pages 901-904.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Tense tagging for verbs in cross-lingual context: A case study", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of IJCNLP-2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "885--895", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Ye and Zhu Zhang. 2005. Tense tagging for verbs in cross-lingual context: A case study. In Proceedings of IJCNLP-2005, pages 885-895.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Latent features in Temporal Reference Translation. Fifth SIGHAN Workshop on Chinese Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Li Fossum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Ye, V.li Fossum, Steven Abney. 2006. Laten- t features in Temporal Reference Translation. Fifth SIGHAN Workshop on Chinese Language Processing, pages 48-55.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Aspect Marker Generation in English-to-Chinese Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl-Michael", |
|
"middle": [], |
|
"last": "Schnelder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of MT Summit XI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "521--527", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Ye, Karl-Michael Schnelder, Steven Abney. 2007. Aspect Marker Generation in English-to-Chinese Ma- chine Translation. Proceedings of MT Summit XI, pages 521-527,Copenhagen, Denmark.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "(a) Japan's constitution renounces the right to go to war and prohibits the nation from having military forces except for selfdefense.(b) \"We also hope Hong Kong will not be affected by diseases like the severe acute respiratory syndrome again!\" , added Ms.Yang.(c) Cheng said he felt at home in Hong Kong and he sincerely wished Hong Kong more peaceful and more prosperous.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "The Stanford parse trees with Penn Treebank style", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "if (lword in [\"would , \"could ]) then 16: end if 17: return tense;", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "|e)P lm (e)", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "the comparison of the inter-tense distributions for reference, baseline and our proposed system", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "y3 oe\u00b3 , \u2022 \u2122 \u2021N \u00a5I \u2020 \u00ee \u2020 m \u00ba\u0160 'X \" REF:The embargo is a result of the Cold War and does not reflect the present situation nor the partnership between China and the EU. MOSES: the embargo is the result of the cold war, not reflect the present situation, it did not reflect the partnership with the european union." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Corpus statistics" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "The Chinese and English Verb Pos Alignment" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "The performance of using different feature combinations" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Examples with tense variation using intra-tense model" |
|
} |
|
} |
|
} |
|
} |