{
"paper_id": "R19-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:01:11.914892Z"
},
"title": "Automatic Propbank Generation for Turkish",
"authors": [
{
"first": "Koray",
"middle": [],
"last": "Ak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "I\u015f\u0131k Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": "[email protected]"
},
{
"first": "Olcay",
"middle": [
"Taner"
],
"last": "Y\u0131ld\u0131z",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic role labeling (SRL) is an important task for understanding natural languages, where the objective is to analyse propositions expressed by the verb and to identify each word that bears a semantic role. It provides an extensive dataset to enhance NLP applications such as information retrieval, machine translation, information extraction, and question answering. However, creating SRL models are difficult. Even in some languages, it is infeasible to create SRL models that have predicate-argument structure due to lack of linguistic resources. In this paper, we present our method to create an automatic Turkish PropBank by exploiting parallel data from the translated sentences of English PropBank. Experiments show that our method gives promising results.",
"pdf_parse": {
"paper_id": "R19-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic role labeling (SRL) is an important task for understanding natural languages, where the objective is to analyse propositions expressed by the verb and to identify each word that bears a semantic role. It provides an extensive dataset to enhance NLP applications such as information retrieval, machine translation, information extraction, and question answering. However, creating SRL models are difficult. Even in some languages, it is infeasible to create SRL models that have predicate-argument structure due to lack of linguistic resources. In this paper, we present our method to create an automatic Turkish PropBank by exploiting parallel data from the translated sentences of English PropBank. Experiments show that our method gives promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic role labeling (SRL) is a well defined task that identifies semantic roles of the words in a sentence. Event characteristics and participants are simply identified by answering \"Who did what to whom\" questions. Having this semantic information facilitates NLP applications such as machine translation, information extraction, and question answering. After the development of statistical machine learning methods in the area of computational linguistics, learning complex linguistic knowledge has became feasible for NLP applications. Recent semantic resources specifically for SRL which provides input for developing statistical approaches are FrameNet (Fillmore et al., 2004) , PropBank Palmer, 2002) (2003) , (2005) , (Bonial et al., 2014) and NomBank (2004) . These resources enables us to understand language structure by providing a stable semantic representation.",
"cite_spans": [
{
"start": 661,
"end": 684,
"text": "(Fillmore et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 696,
"end": 716,
"text": "Palmer, 2002) (2003)",
"ref_id": null
},
{
"start": 719,
"end": 725,
"text": "(2005)",
"ref_id": null
},
{
"start": 728,
"end": 749,
"text": "(Bonial et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 754,
"end": 768,
"text": "NomBank (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Among these resources PropBank is a commonly used semantic resource which includes predicate -argument structure by stating the roles that each predicate can take along with the annotated corpora. It has been applied to more than 15 different languages. However, manually creating such semantic resource is labor-intensive, timeconsuming and most importantly requires a professional linguistic perspective. Also limited linguistic data further blocks generating PropBanklike resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various studies such as Zhuang and Zong (2010) , Van der Plas et al. 2011) (2014, Kozhevnikov and Titov (2013) , Akbik et al. (2015) , which transfer semantic information using parallel corpus, are presented to cope with these problems. In this way, semantic information projected from a resource-rich language (English) to a language with inadequate resources and Prop-Bank of the target language is automatically generated. Here the assumption is translated parallel sentences generally share same semantic information. Word and constituent based alignment techniques are widely used to construct mapping between source and target languages for annotation projection. Previous studies report translation divergences and language specific differences affect the quality of the projection. Filtering projections using learning methods is suggested to increase precision. In this paper, we present our study to create automatic Turkish PropBank using parallel sentences from English PropBank.",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "Zhuang and Zong (2010)",
"ref_id": "BIBREF19"
},
{
"start": 82,
"end": 110,
"text": "Kozhevnikov and Titov (2013)",
"ref_id": "BIBREF13"
},
{
"start": 113,
"end": 132,
"text": "Akbik et al. (2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows: we first give brief information about English and Turkish PropBanks in Section 2. In Section 3, Studies for the automatic proposition bank generation are discussed. In the next section proposed methods are presented. First, we explain the annotation projection using parallel sentence trees. Then, we propose methods for aligning parallel sentence phrases not aligned with tree structure. Finally, in Section 5, we conclude with the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "PropBank is the bank of propositions where predicate-argument information of the corpora is annotated and semantic roles or arguments that each verb can take are posited. It is constituted on the Penn Treebank (Marcus et al., 1993) Wall Street Journal [WSJ] . The primary goal is to label syntactic elements in a sentence with specific argument roles to standardize labels for the similar arguments such as the window in John broke the window and the window broke. PropBank uses conceptual labels for arguments from Arg0 to Arg5. Only Arg0 and Arg1 indicate the same roles across different verbs where Arg0 means agent or causer and Arg1 is the patient or theme. The rest of the argument roles can vary across different verbs. They can be instrument, start point, end point, beneficiary, or attribute. Moreover, PropBank uses ArgM's as modifier labels where the role is not specific to the verb group and generalizes over the corpora such as location, temporal, purpose, or cause etc. arguments. The first version of English PropBank, named as The Original PropBank, is constructed for only verbal predicates whereas the latest version includes all syntactic realizations of event and state semantics by focusing different expressions in form of nouns, adjectives and multi-word expressions to represent complete event relations within and across sentences.",
"cite_spans": [
{
"start": 210,
"end": 231,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF14"
},
{
"start": 252,
"end": 257,
"text": "[WSJ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English PropBank",
"sec_num": "2.1"
},
{
"text": "There have been different attempts to construct Turkish PropBank in the literature. \u015e ahin (2016a; 2016b), \u015e ahin and Adal\u0131 (2017) report semantic role annotation of arguments in the Turkish dependency treebank. They construct PropBank by using ITU-METU-Sabanc\u0131 Treebank (IMST). In these studies, frame files of Turkish PropBank are constructed and extended by utilizing crowdsourcing. 20,060 semantic roles are annotated in 5,635 sentences. The size of the resource is stated as a drawback in the study. Recently, Ak et al. (2018) construct another Turkish Proposition Bank using translated sentences of English PropBank. So far, 9,560 of 17,000 translated sentences are annotated with semantic roles. Also, framesets are created for 1,330 verbs and 1,914 verb senses. These stud-ies constitute a base for Turkish proposition bank, but their size is limited and construction of these proposition banks consumed a lot of time.",
"cite_spans": [
{
"start": 515,
"end": 531,
"text": "Ak et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PropBank Studies for Turkish",
"sec_num": "2.2"
},
{
"text": "PropBanks are also generated automatically for resource-scarce languages by using parallel corpus. In this section, proposition bank studies for automatic generation are presented. Zhuang and Zong (2010) proposed performing SRL on parallel corpus of different languages and merging the result via a joint inference model can improve SRL results for both input languages. In the study an English and Chinese parallel corpus is used. First each predicate is processed by monolingual SRL systems separately for producing argument candidates. After the candidates formed, a Joint Inference model selects the candidate that is reasonable to the both languages. Also, a log-linear model is formulated to evaluate the consistency. This approach increased F1 scores 1.52 and 1.74 respectively for Chinese and English. Van der Plas et al. 2011presents cross-lingual semantic transfer from English to French. English syntactic-semantic annotations were transferred using word alignments to French language. French semantic annotations gathered from the first step were then trained with a French joint syntactic-semantic parser along with the French syntactic annotations trained separately. Joint syntactic-semantic parser is used for learning the relation between semantic and syntactic structure of the target language and reduces the errors arising from the first step. This approach reaches 4% lower than the upper bound for predicates and 9% for arguments. Kozhevnikov (2013) shows SRL model transfer from one language to another can be achieved by using shared feature representation. Shared feature representation for language pairs is constructed based on syntactic and lexical information. Afterwards, a semantic role labeling model is trained for source language and then used for the target language. As a result SRL model of the target language is generated. Process only requires a source language model and parallel data to construct target SRL model. Approach is applied for English, French, Czech and Chinese languages.",
"cite_spans": [
{
"start": 181,
"end": 203,
"text": "Zhuang and Zong (2010)",
"ref_id": "BIBREF19"
},
{
"start": 1453,
"end": 1471,
"text": "Kozhevnikov (2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic PropBank Generation Studies",
"sec_num": "3"
},
{
"text": "In the next study, Van der Plas (2014) improves the labeling results with respect to the previous work (Van der Plas et al., 2011) by building separate models for arguments and predicates. Also, problems of transferring semantic annotations using parallel corpus is examined in the paper. Token-totoken basis annotation transfer, translation shifts, and alignment errors in the previous work is replaced with a global approach that aggregates information at corpus level. Instead of using English semantic annotations of roles and predicate together with French PoS tags to generate French semantic annotations, English annotations of predicates and roles used separately to generate one predicate and one role semantic annotations separately. Akbik et al. propose a two stage approach (Akbik et al., 2015) . In the first stage only filtered semantic annotation is projected. Since high confidence semantic labels projected, resulting target semantic labels will be high in precision and low in recall. In the next stage, completed target language sentences sampled and a classifier is trained to add new labels to boost recall and preserve precision. Proposed system is applied on 7 different languages from 3 different language family. These languages are Chinese, Arabic, French, German, Hindi, Russian, and Spanish.",
"cite_spans": [
{
"start": 786,
"end": 806,
"text": "(Akbik et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic PropBank Generation Studies",
"sec_num": "3"
},
{
"text": "Among the studies for Turkish proposition bank, Ak et al. (2018) is constructed on parallel English -Turkish sentences from the Original English PropBank. We have used the corpus provided in this study to automatically generate proposition bank.",
"cite_spans": [
{
"start": 48,
"end": 64,
"text": "Ak et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Penn Treebank structure offers advantages for building fully tagged data set in accordance with syntactic labels, morphological labels and parallel sentences. We used this structure to add English PropBank labels for each word in the corpus. In this manner, we exploited this parallel dataset to transfer English PropBank annotations to an automatic Turkish PropBank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Turkish PropBank Using Parallel Sentence Trees",
"sec_num": "4.1"
},
{
"text": "Original English PropBank corpus (Palmer et al., 2004) is accessible through Linguistic Data Consortium (LDC). This resource is the initial version of the English PropBank and it only includes the relations with verbal predicates. In the newer versions adjective and noun relations are also annotated. Since we compare projection results with manually annotated corpus (Ak et al., 2018) which only contains verbal relations, we use the initial version of the English PropBank. We downloaded this dataset and imported annotations for the selected sentences. After this step 6,060 sentences among 9,558 were enhanced with the English annotations. Below in Figure 1 , a sample sentence is presented. English annotations are inserted inside \"englishPropbank\" tags right after Turkish annotations which reside in \"propbank\" tags. Some of the words have only English annotation, because there is no word translated in the Turkish sentence for this node. As an example, \"their\" in Figure 1 has annotations in the en-glishPropbank tag but there is no equivalent translation in Turkish, presented as \"*NONE*\", so propbank tag does not exist. English tags have predicate information that annotation belongs to. \"M\u00fc\u015fterilerinin\" (customers) in the same example has \"ARG0$like 01#ARG1$think 01\" in the englishPropbank tag which means there exists at least two words whose root is in verb form. Here the word is annotated with respect to \"like\" and \"think\" separately. We have separated multiple annotations with \"#\" sign and in each annotation predicate label and role is distinguished by \"$\" sign. In the Turkish annotation, WordNet id of the predicate was used instead of predicate label.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Palmer et al., 2004)",
"ref_id": "BIBREF16"
},
{
"start": 369,
"end": 386,
"text": "(Ak et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 654,
"end": 662,
"text": "Figure 1",
"ref_id": null
},
{
"start": 974,
"end": 982,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "English PropBank Labels",
"sec_num": "4.1.1"
},
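A minimal sketch of how the multi-predicate annotation strings described above can be parsed. The separator conventions ("#" between annotations, "$" between role and predicate label) are taken from the text; the function name and the assumption that annotations arrive as plain strings are illustrative.

```python
# Minimal sketch: parse an "englishPropbank" value such as
# "ARG0$like 01#ARG1$think 01" into (role, predicate_label) pairs.
# "#" separates annotations and "$" separates the role from the
# predicate label, as described in the text; the name is hypothetical.

def parse_english_propbank(tag_value):
    """Return a list of (role, predicate_label) tuples."""
    annotations = []
    for item in tag_value.split("#"):
        role, _, predicate = item.partition("$")
        annotations.append((role, predicate))
    return annotations

print(parse_english_propbank("ARG0$like 01#ARG1$think 01"))
# [('ARG0', 'like 01'), ('ARG1', 'think 01')]
```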
{
"text": "After importing English annotations, it is necessary to determine predicate(s) of the Turkish sentences. Morphological structures of the words are examined to detect predicate candidates. Words were morphologically and semantically analyzed in translated Penn TreeBank. We have used \"mor-phologicalAnalysis\" tag to check the morphological structure of the words. In Figure 1 , sample morphological structure is displayed.",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 374,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transfering Annotations to Automatic Turkish PropBank Using Parallel Sentences",
"sec_num": "4.1.2"
},
{
"text": "The word which has a verb root and verb according to last inflectional group is treated as the predicate of the sentence. Once we found a word suitable for these conditions, we gathered English PropBank annotation. If it is also labeled as predicate in English proposition bank, we got the predicate label, e.g. like 01, to find annotations with respect to this predicate. We searched for the found Figure 1 : Part of a sentence tree : English PropBank annotations reside in \"englishPropBank\" tags.",
"cite_spans": [],
"ref_spans": [
{
"start": 399,
"end": 407,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transfering Annotations to Automatic Turkish PropBank Using Parallel Sentences",
"sec_num": "4.1.2"
},
{
"text": "predicate label in the annotations and transfered annotations matching with the predicate label. If we could not find a predicate in Turkish sentence or the corresponding English label did not contain Predicate role annotation, we skipped to the next predicate candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfering Annotations to Automatic Turkish PropBank Using Parallel Sentences",
"sec_num": "4.1.2"
},
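A sketch of the predicate detection and transfer loop described in this subsection, under the assumption that each word carries its morphological analysis and the parsed (role, predicate label) pairs from the "englishPropbank" tag; all field and helper names are assumptions, not the authors' actual API.

```python
# Hypothetical data layout: each word is a dict with "root_pos",
# "last_ig", "english_annotations" (list of (role, label) pairs) and
# a "propbank" list that receives the transferred roles.

def is_predicate_candidate(word):
    # Verb root and verb according to the last inflectional group.
    return word["root_pos"] == "VERB" and word["last_ig"] == "VERB"

def english_predicate_label(word):
    # Return e.g. "like 01" if the English side marks this word as predicate.
    for role, label in word["english_annotations"]:
        if role == "PREDICATE":
            return label
    return None

def transfer_annotations(sentence):
    for word in sentence:
        if not is_predicate_candidate(word):
            continue
        label = english_predicate_label(word)
        if label is None:
            continue  # skip to the next predicate candidate
        word["propbank"].append("PREDICATE")
        # Transfer every annotation in the sentence matching this predicate.
        for other in sentence:
            for role, lab in other["english_annotations"]:
                if lab == label and role != "PREDICATE":
                    other["propbank"].append(role)
```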
{
"text": "During the transfer, a mapping was needed due to the difference between English and Turkish (Ak et al., 2018) argument labeling. English PropBank corpus has \"-\" sign in ArgM's like ARGM-TMP and also some of the arguments from Arg1 to Arg5 are labeled with the prepositions such as ARG1-AT, ARG2-BY etc. We processed these differences and then transferred labels into the \"propbank\" tags. After analyzing Turkish sentences we found out some sentences have more than one predicate, so we continued to search for another predicate in the sentence and ran the same procedure for each predicate candidate.",
"cite_spans": [
{
"start": 92,
"end": 109,
"text": "(Ak et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfering Annotations to Automatic Turkish PropBank Using Parallel Sentences",
"sec_num": "4.1.2"
},
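A sketch of the label mapping described above. Only the two differences named in the text are handled (the "-" in ArgM labels and the preposition suffixes on numbered arguments); the authors' full mapping is not given, so this is an assumption.

```python
import re

def normalize_label(label):
    # ARGM-TMP -> ARGMTMP (drop the "-" in modifier labels).
    if label.startswith("ARGM-"):
        return label.replace("-", "")
    # ARG1-AT, ARG2-BY -> ARG1, ARG2 (drop preposition suffixes).
    return re.sub(r"^(ARG[0-5])-\w+$", r"\1", label)

assert normalize_label("ARGM-TMP") == "ARGMTMP"
assert normalize_label("ARG1-AT") == "ARG1"
assert normalize_label("ARG0") == "ARG0"
```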
{
"text": "Annotations gathered from the English sentence were compared with the Turkish hand-annotated proposition bank (Ak et al., 2018) . Comparisons were done at the word level by checking the annotations for each corpus. Among the 6,060 sentences enhanced with English PropBank roles, 848 sentences did not have a predicate in Turkish proposition bank. Therefore, in 5,212 sentences, 44,779 word annotations were compared. 31,813 annotations were transferred from English to Turkish. Results of the comparison are presented in Table 1 . 19,373 words annotated with PropBank roles correctly . 6,441 annotations are incorrect, PropBank tags are different in both corpus. 5,999 annotations are undetermined, valid PropBank labels transferred from English annotations but no annotation exists in hand annotated proposition bank. Annotations to be compared is not valid so we did not include this set in the evaluation. When we remove undetermined 5,999 words in the comparison; 19,373 annotations from 25,814 annotations are correct, which gives us \u223c75% accuracy for transferred and comparable set. These 5,999 annotations may be hand-annotated and recompared for validity of the transferred annotations.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "(Ak et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1.3"
},
{
"text": "In Table 2 , we present occurrences of erroneous annotation transfers. Only top ten occurrences are presented. Arg0-Arg1 transfers are the most occurred incorrect transfers 1,843 among 6,441 incorrect annotations. Second most occurred error is in Arg1-Arg2 labels. Errors in Arg0-Arg1 and Arg1-Arg2 labels forms \u223c44% of the transfer errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1.3"
},
{
"text": "On the other hand, when we look at the all word results, 12,966 roles were not transferred. If we take these untransferred instances as incorrect; 19,373 annotations out of 38,780 annotation are true and the accuracy drops to \u223c50%. However, 8,837 of untransferred annotation are not an-Different Arguments # of Occurrence ARG0-ARG1 1,843 ARG1-ARG2 961 ARG2-ARGMEXT 462 ARG1-PREDICATE 255 ARG0-ARG2 229 ARG4-ARGMEXT 226 ARG1-ARGMPNC 220 ARG1-ARGMMNR 186 ARG1-ARGMTMP 160 ARG1-ARGMLOC 148 ",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 462,
"text": "ARG0-ARG1 1,843 ARG1-ARG2 961 ARG2-ARGMEXT 462 ARG1-PREDICATE 255 ARG0-ARG2 229 ARG4-ARGMEXT 226 ARG1-ARGMPNC 220 ARG1-ARGMMNR",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1.3"
},
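The accuracy figures in the two paragraphs above follow directly from the reported counts; a small worked check (all numbers taken from the text):

```python
# Worked check of the reported accuracy figures (counts from the text).
correct, incorrect = 19_373, 6_441
untransferred = 12_966
valid_untransferred = 4_129   # untransferred words with a valid role

comparable = correct + incorrect                 # 25,814
print(correct / comparable)                      # 0.7505 -> ~75%

with_untransferred = comparable + untransferred  # 38,780
print(correct / with_untransferred)              # 0.4996 -> ~50%

valid_only = comparable + valid_untransferred    # 29,943
print(correct / valid_only)                      # 0.6470 -> ~65%
```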
{
"text": "In the previous method, annotation projection using parallel sentence trees is discussed. However, finding such a resource in a special format is difficult especially if you are working with a resourcescarce language. Most of the time creating a formatted parallel resource like tree structured sentences complicates translation procedure. In this section, automatic generation with translated sentences without tree structure will be examined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Turkish PropBank Using Parallel Sentence Phrases",
"sec_num": "4.2"
},
{
"text": "For the phrase sentences, English sentences retranslated without tree structure. Prior the annotation projection, linguists in the team annotated phrase sentences and populated \"propbank\" and \"shallowParse\" tags so that we check the correctness after the annotation transfer. 6,511 sentences among 9,557 phrase sentences have predicate according to hand annotations for newly translated sentences. However, only 5,259 sentences have English PropBank annotation, so we take this set to transfer annotations. As you remember, the same number in the previous section was 5,212. Here translation and annotation differences change the processed sentence count. Tag structure of Penn Treebank is preserved to simplify morphologic and semantic analysis requirements during the annotation transfer. In Figure 2 , sample phrase sentence can be seen. Unlikely Figure 1 , syntactic tags which indicate tree structure are not included. We used original tree formatted English sentence to extract English propbank annotations. However, since the target sentence do not have tree structure definition we used other word alignment methods to determine annotation projection.",
"cite_spans": [],
"ref_spans": [
{
"start": 794,
"end": 802,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 850,
"end": 858,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phrase Sentence Structure",
"sec_num": "4.2.1"
},
{
"text": "In order to transfer annotations, first we tried to match predicates of English sentence and Turkish translation. Again we utilize \"morphological-Analysis\" tags to determine predicate candidates in the phrase sentence. Words which have a verb root and verb according to last inflectional group is treated as the predicate candidates of the sentence. Once we found all the words ensuring these conditions, we gathered all English PropBank annotation labels which are tagged as \"Predicate\" in 'englishPropbank\" tag. To align predicates in different languages, we tried to exploit WordNet's (Ehsani et al., 2018) interlingual mapping capabilities. For each predicate in English sentence we find Turkish translation by searching English synset id in the WordNet. English synset id is located in englishSemantics tags as in the sample in Figure 1 . If there exists any translation in the WordNet, we take Turkish synset id and search it in the predicate candidates found for phrase sentence. Whenever translation found, we align predicates and try to transfer annotation with respect to aligned English label. For annotation transfer of other arguments we again align words using Word-Net's interlingual mapping. An example WordNet record is presented in Figure 3 .",
"cite_spans": [
{
"start": 588,
"end": 609,
"text": "(Ehsani et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 833,
"end": 841,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1250,
"end": 1258,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Semantic Alignment Using WordNet",
"sec_num": "4.2.2"
},
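A sketch of the WordNet-based predicate alignment described above. `interlingual` stands in for the Turkish WordNet's English-to-Turkish synset index (Ehsani et al., 2018); the dict interface and all field names are assumptions.

```python
def align_predicate(english_predicate, turkish_candidates, interlingual):
    # Map the English synset id (taken from the englishSemantics tag,
    # e.g. "ENG31-01781131-v") to a Turkish synset id.
    turkish_synset = interlingual.get(english_predicate["synset_id"])
    if turkish_synset is None:
        return None  # no interlingual mapping: alignment impossible
    # Look for that Turkish synset among the predicate candidates.
    for candidate in turkish_candidates:
        if candidate["synset_id"] == turkish_synset:
            return candidate
    return None
```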
{
"text": "First results gathered with only WordNet mapping were very low. True annotation count is 2,195 among 29,168 annotations tagged manually which yields 7.53%. However, transferred false annotation count is only 342. System heavily relies on semantic annotations for both English and Turkish words where some of the words failed to have semantic annotation. We look deeper into dataset provided by Ak et al. (2018) , 11,006 English words do not have semantic annotation so we failed to match these words with Turkish counterparts.",
"cite_spans": [
{
"start": 394,
"end": 410,
"text": "Ak et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Alignment Using WordNet",
"sec_num": "4.2.2"
},
{
"text": "Some words are not annotated semantically such as, proper nouns, time, date, numbers, ordinal numbers, percentiles, fractional numbers, number intervals, and reel numbers. Most of these words are same in Turkish translation so we matched English and Turkish words by string match. For example if a sentence contains proper noun \"Dow Jones\", the same string also exists in the Turkish translation too. However, it may take additional suffixes, so we only check whether English words starts with Turkish root word. Also, translational differences are encountered like decimal separator in English is \".\" where some Turkish translations \",\" is used. We replace this differences by looking whether the first morphological tag is \"NUM\". After these tunings, we rerun the procedure and get 2,680 true and 531 false annotations which increases true annotations to 9.19%. Another problem is erroneous semantic annotations. If English and Turkish semantic annotation is not right, alignment is not possible. Even in the best scenario where both word is annotated, if Word-Net mapping is incomplete, an alignment can not be established.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Alignment Using WordNet",
"sec_num": "4.2.2"
},
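A sketch of the string-match fallback and decimal-separator fix described above; the matching direction, the helper name, and the prefix test are assumptions.

```python
def string_match(english_word, turkish_word, first_morph_tag):
    # Normalize the decimal separator when the word is a number:
    # "1.5" in English may appear as "1,5" in the Turkish translation.
    if first_morph_tag == "NUM":
        english_word = english_word.replace(".", ",")
    # The Turkish side may carry extra suffixes, so test the prefix.
    return turkish_word.startswith(english_word)

print(string_match("Dow", "Dow'un", "NOUN"))  # True: suffixed proper noun
print(string_match("1.5", "1,5'lik", "NUM"))  # True after normalization
```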
{
"text": "As an alternative we decided to reinforce annotation transfer by using constituent boundaries identified with shallowParse tags by our linguist team mates. Example of shallowParse tags can be seen in Figure 2 . Prior to the annotation transfer, phrase sentences are annotated for constituent boundaries which can be used to group argument roles in the sentence. After transferring annotations with respect to semantic annotations, we run another method over phrase sentences which calculates maximum argument types for each constituent and tags any untagged word with the calculated max argument role within the constituent boundary. This procedure further enhance true annotations to 4,255 but also increase false annotations to 1,202. After constituent boundary calculation, correct annotation transfer percent is increased to \u223c14.59%. In Figure 4 annotation of the sentence 7076.train is presented. Untagged words in \"\u00d6zne\" and \"Zarf T\u00fcmleci\" constituent boundaries are tagged with the found argument role within the boundary. Note that, we did not use the constituent types but we use boundaries of the constituents.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 841,
"end": 849,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Alignment Using WordNet",
"sec_num": "4.2.2"
},
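A sketch of the reinforce step described above: within each constituent boundary, the most frequent transferred role is propagated to the untagged words. The data layout is an assumption; as in the text, only the boundaries are used, not the constituent types.

```python
from collections import Counter

def reinforce(constituents):
    # constituents: list of lists of word dicts with an optional "role".
    for constituent in constituents:
        roles = Counter(w["role"] for w in constituent if w.get("role"))
        if not roles:
            continue  # nothing to propagate in this constituent
        majority_role = roles.most_common(1)[0][0]
        for word in constituent:
            if not word.get("role"):
                word["role"] = majority_role

# Usage sketch: a subject constituent with one transferred ARG0.
ozne = [{"word": "Senato", "role": "ARG0"}, {"word": "versiyonu"}]
reinforce([ozne])
print(ozne[1]["role"])  # ARG0
```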
{
"text": "Word alignment through semantic relation requires fair semantic annotation for both languages and also sufficient semantic mapping between languages. We search different word alignment methods between English and Turkish sentences. IBM Figure 4 : Annotation reinforced with respect to constituent boundaries: (1) English sentence (2) constituent boundaries identified with shallow-Parse tags for sentence in 7076.train, (3) Argument roles for the same sentence after annotation transfer, (4) Argument roles for the same sentence after reinforce method.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 244,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "alignment models offer solution to our word alignment problem. IBM Models are mainly used for statistical machine translation to train a translation model and an alignment model. IBM Model 1 (Brown et al., 1993) is the primary word alignment model offered by IBM. It is widely used for solving word alignments while working with parallel corpora. It is a generative probabilistic model that calculates probabilities for each word alignment from source sentence to target sentence. It takes a corpus of paired sentences from two languages as training data. These paired sentences are possible translation of the sentences from source language to target. With this training corpus, parameters of the model estimated using EM (expectation maximization). IBM Model 2 has an additional model for alignment and introduce alignment distortion parameters. We decided to use IBM model 1 & 2 to establish word alignments instead of Word-Net's interligual mapping. We input sentence pairs and gather alignment probabilities for each English word to Turkish equivalent. 244,024 word pairs are taken as output where for each English word, 10 most probable Turkish words are listed. Alignment probabilities for word \"Reserve\" is presented in Table 3 and 4 for IBM Model 1 and 2 respectively.",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1228,
"end": 1235,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
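A minimal sketch of obtaining lexical alignment probabilities with IBM Model 1. The paper does not state which implementation was used; this sketch uses NLTK's `IBMModel1`, and the toy bitext is invented for illustration.

```python
from nltk.translate import AlignedSent, IBMModel1

# Toy parallel corpus: each AlignedSent pairs a Turkish (target)
# sentence with its English (source) sentence.
bitext = [
    AlignedSent(["rezerv", "bankasi"], ["reserve", "bank"]),
    AlignedSent(["merkez", "bankasi"], ["central", "bank"]),
    AlignedSent(["rezerv"], ["reserve"]),
]

# Parameters are estimated with EM over the paired sentences.
ibm1 = IBMModel1(bitext, 5)

# P(Turkish word | English word), analogous to Tables 3 and 4.
print(ibm1.translation_table["rezerv"]["reserve"])
```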
{
"text": "After gathering alignment data, we transfer annotations to phrase sentences from English Prop-Bank labels in the tree structured sentences. All words tagged with \"PREDICATE\" tag in English sentence are stored into a map which includes predicate label from the \"englishPropbank\" tag e.g. \"like 01\" and English word from the \"english\" tag e.g. \"like\". Then we search alignments for each found English predicate. Here we observed that aligned Turkish words may not occur in the phrase sentence as they found in the alignment table. Words may include additional suffixes, so we use Finite State Machine(FSM) morphological analyzer available in our NLP Toolkit of Ak et al. (2018) to extract roots of the aligned Turkish words. Since we have several possible morphological parse for each aligned word, we created an array for possible roots. In parallel, we found predicate candidates from the phrase sentence as we stated in the previous methods. Then we tried to match aligned words and possible roots with the found predicate candidates. If there exists a predicate candidate that matches with the aligned word or one of its roots in the array, we tagged the candidate as \"PREDICATE\" and update map as predicate label and synset id of Turkish predicate. After finishing predicate discovery, we transfer annotations for found predicates. To do that we look for the annotations with respect to the predicate labels in the map. For each record in map we took the predicate label and corresponding Turkish synset id. When we found an annotation with this predicate label, first we extract the argument and try to find aligned word for the processed English word. For the alignment again we find the most probable word from the table and use FSM morphological analyzer to extract possible roots. Then for each word we search Turkish sentence to match words with aligned word or possible roots extracted. If matched Turkish words do not have argument annotation, we transfer argument with the synset id found in the map record.",
"cite_spans": [
{
"start": 659,
"end": 675,
"text": "Ak et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
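A sketch of the suffix-tolerant matching used when locating an aligned Turkish word in the phrase sentence. `analyzer` stands in for the FSM morphological analyzer in the NLP Toolkit of Ak et al. (2018); its interface and the matching details here are assumptions.

```python
def match_aligned_word(aligned_word, sentence_words, analyzer):
    # Aligned words may not occur in the phrase sentence exactly as they
    # appear in the alignment table, so collect the roots of all
    # possible morphological parses and match on those as well.
    candidates = {aligned_word}
    candidates.update(parse.root for parse in analyzer.parse(aligned_word))
    for word in sentence_words:
        if word in candidates or any(word.startswith(r) for r in candidates):
            return word
    return None
```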
{
"text": "As we discuss in the previous annotation transfer procedure 4.2.2, some of the English words such as proper nouns, time, date, numbers, ordinal numbers, percentiles, fractional numbers, number intervals, and reel numbers stay same or take additional suffixes in Turkish translation. So we include the same method used for matching these words. In a case words are not aligned with the information from alignment table, and a valid annotation present in English word, we search exact string match or any word starts with the root of English word in the Turkish sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "We run our procedure with IBM Model 1 & 2 separately. We add reinforce step previously used in Section 4.2.2. Unlikely previous attempts, after examining language structure we decided to add rules to tag any untagged words after annotation transfer. We observed argument types affect noun inflections, for some argument types the last word in constituent boundary is taking certain suffixes. So first we find untagged word and select the last word in its constituent boundary. Since we run reinforce step beforehand, only untagged constituents exists in the sentence. In this respect, we set the following rules to determine argument annotation for untransfered words;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "\u2022 For nouns and proper nouns:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "-Have no suffix then ARG0 -Last morpheme tag is \"ACCUSATIVE\" (-(y)H, -nH) or \"DATIVE\" (-(y)A, -nA) then ARG1 -Last morpheme tag is \"LOCATIVE\" (-DA, -nDA) or \"ABLATIVE\" (-DAn, -nDAn ) then ARGMLOC -Last morpheme tag is \"INSTRUMENTAL\" (-(y)lA) then ARG2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "\u2022 For all word types -Morphological parse contains date, time then ARGMTMP -Morphological parse contains cardinal number, fraction, percent, range, real number, ordinal number then ARGMEXT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
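A sketch of the fallback rules listed above, applied to the last word of each still-untagged constituent. The tag names mirror the rules in the text; the data layout (a set of morphological tags, a POS field, and the final inflectional morpheme) is an assumption.

```python
def rule_based_role(last_word):
    tags = last_word["morph_tags"]         # set of morphological parse tags
    if last_word["pos"] in ("NOUN", "PROPN"):
        last = last_word["last_morpheme"]  # final inflectional morpheme
        if last is None:
            return "ARG0"                  # bare noun, no suffix
        if last in ("ACCUSATIVE", "DATIVE"):
            return "ARG1"
        if last in ("LOCATIVE", "ABLATIVE"):
            return "ARGMLOC"
        if last == "INSTRUMENTAL":
            return "ARG2"
    if tags & {"DATE", "TIME"}:
        return "ARGMTMP"
    if tags & {"NUM", "FRACTION", "PERCENT", "RANGE", "REAL", "ORDINAL"}:
        return "ARGMEXT"
    return None
```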
{
"text": "We use these rules to tag any untagged word. After applying these rules annotation transfer result is as shown in Table 5 and 6. Results show that rules applied slightly change the correct annotations. For model 1 rules output much more correct annotation than the incorrect ones whereas in model 2 the number of correct and incorrect annotations gathered are nearly same. However, precision for model 1 is improved to 59.44% and for model 2 precision become 59.86%. ",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Word Alignment Using IBM Alignment Models",
"sec_num": "4.2.3"
},
{
"text": "We proposed methods to generate automatic Turkish proposition bank by transferring crosslanguage semantic information. Using the parallelism with English proposition bank gives us an opportunity to create a proposition bank in a short time with less effort. We currently have 64% accuracy with the hand-annotated proposition bank (Ak et al., 2018) for parallel sentence trees. When we consider only transferred annotations, accuracy is rising to \u223c75%. We also present annotation projection to phrase sentences using WordNet and IBM alignment models. WordNet alignment heavily relies on semantic annotations, correct annotations transferred after this method is \u223c14.59%. However, 4,255 correct argument roles are transferred among 5,457 arguments which means 79% of the transferred roles are correct. To increase annotation transfer for phrase sentences, we have also proposed alignment with IBM Model 1 and 2. Both models yields \u223c60% correct annotations. Annotations transferred with these methods can provide a basis for proposition bank creation in resource-scarce languages. Annotations may then be checked quickly by the annotators and proposition bank reach the final state.",
"cite_spans": [
{
"start": 330,
"end": 347,
"text": "(Ak et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ARG2-for (2) [Daha az s\u0131k\u0131 t\u00fcrden bir Senato versiyonu]\u00d6 zne -Subject [a\u015fag\u0131 yukar\u0131 be\u015f y\u0131l i\u00e7in]Zarf T\u00fcmleci -Adverbial Clause [d\u00fc\u015f\u00fclebilirligi]Nesne -Object [ertelerdi",
"authors": [
{
"first": "",
"middle": [],
"last": "Argm-Mod",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ARGM-MOD [defer]Predicate [the deductibility]ARG1 for [roughly five years.]ARG2-for (2) [Daha az s\u0131k\u0131 t\u00fcrden bir Senato versiyonu]\u00d6 zne -Subject [a\u015fag\u0131 yukar\u0131 be\u015f y\u0131l i\u00e7in]Zarf T\u00fcmleci -Adverbial Clause [d\u00fc\u015f\u00fclebilirligi]Nesne -Object [ertelerdi.]Y\u00fcklem -Predicate (3) [Daha az s\u0131k\u0131 t\u00fcrden bir]NONE [Senato]ARG0 [versiyonu]NONE [a\u015fag\u0131 yukar\u0131]ARG2",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The nombank project: An interim report",
"authors": [
{
"first": "References",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zielinska",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Meyers A., R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The nombank project: An interim report. In HLT- NAACL 2004 Workshop: Frontiers in Corpus Anno- tation. Association for Computational Linguistics, Boston, Massachusetts, USA, pages 24-31.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Construction of a Turkish proposition bank",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ak",
"suffix": ""
},
{
"first": "O",
"middle": [
"T"
],
"last": "Y\u0131ld\u0131z",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Esgel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Toprak",
"suffix": ""
}
],
"year": 2018,
"venue": "Turkish Journal of Electrical Engineering and Computer Science",
"volume": "26",
"issue": "",
"pages": "570--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ak, O. T. Y\u0131ld\u0131z, V. Esgel, and C. Toprak. 2018. Construction of a Turkish proposition bank. Turk- ish Journal of Electrical Engineering and Computer Science 26:570 -581.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating high quality proposition banks for multilingual semantic role labeling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Chiticariu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL (1). The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "397--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Akbik, L. Chiticariu, M. Danilevsky, Y. Li, S. Vaithyanathan, and H. Zhu. 2015. Generating high quality proposition banks for multilingual se- mantic role labeling. In ACL (1). The Association for Computer Linguistics, pages 397-407.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Propbank: Semantics of new predicate types",
"authors": [
{
"first": "C",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bonn",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Conger",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Bonial, J. Bonn, K. Conger, J. D. Hwang, and M. Palmer. 2014. Propbank: Semantics of new predicate types. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC-2014). European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Comput. Linguist",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical ma- chine translation: Parameter estimation. Comput. Linguist. 19(2):263-311.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Framing of verbs for turkish propbank",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "TurCLing 2016 in conj. with 17th International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. G. \u015e ahin. 2016a. Framing of verbs for turkish prop- bank. In TurCLing 2016 in conj. with 17th Interna- tional Conference on Intelligent Text Processing and Computational Linguistics (CICLING 2016).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Verb sense annotation for turkish propbank via crowdsourcing",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "17th International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. G. \u015e ahin. 2016b. Verb sense annotation for turkish propbank via crowdsourcing. In 17th International Conference on Intelligent Text Processing and Com- putational Linguistics (CICLING 2016).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Annotation of semantic roles for the Turkish proposition bank. Language Resources and Evaluation",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Adal\u0131",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. G. \u015e ahin and E. Adal\u0131. 2017. Annotation of seman- tic roles for the Turkish proposition bank. Language Resources and Evaluation .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Constructing a wordnet for Turkish using manual and automatic annotation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ehsani",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Solak",
"suffix": ""
},
{
"first": "O",
"middle": [
"T"
],
"last": "Y\u0131ld\u0131z",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Transactions on Asian Low-Resource Language Information Processing",
"volume": "17",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Ehsani, E. Solak, and O. T. Y\u0131ld\u0131z. 2018. Construct- ing a wordnet for Turkish using manual and auto- matic annotation. ACM Transactions on Asian Low- Resource Language Information Processing 17(3).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "FrameNet and Representing the Link between Semantic and Syntactic Relations, Institute of Linguistics",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
}
],
"year": 2004,
"venue": "Language and Linguistics Monographs Series B",
"volume": "",
"issue": "",
"pages": "19--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. J. Fillmore, J. Ruppenhofer, and Collin F. Baker. 2004. FrameNet and Representing the Link be- tween Semantic and Syntactic Relations, Institute of Linguistics, Academia Sinica, Taipei, pages 19-62. Language and Linguistics Monographs Series B.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "From treebank to propbank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "LREC. European Language Resources Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Kingsbury and M. Palmer. 2002. From treebank to propbank. In LREC. European Language Resources Association.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Propbank: The next level of treebank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Treebanks and Lexical Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Kingsbury and M. Palmer. 2003. Propbank: The next level of treebank. In Proceedings of Treebanks and Lexical Theories. V\u00e4xj\u00f6, Sweden.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cross-lingual transfer of semantic role labeling models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kozhevnikov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1). The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "1190--1200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kozhevnikov and I. Titov. 2013. Cross-lingual transfer of semantic role labeling models. In ACL (1). The Association for Computer Linguistics, pages 1190-1200.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of en- glish: The penn treebank. Computational linguistics 19(2):313-330.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Comput. Linguist",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Comput. Linguist. 31(1):71-106.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Proposition bank i. Philadelphia: Linguistic Data Consortium. LDC2004T14",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Babko-Malaya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cotton",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Snyder",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Palmer, P. Kingsbury, O. Babko-Malaya, S. Cotton, and B. Snyder. 2004. Proposition bank i. Philadel- phia: Linguistic Data Consortium. LDC2004T14.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Global methods for cross-lingual semantic role and predicate labelling",
"authors": [
{
"first": "L",
"middle": [],
"last": "Van Der Plas",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING. ACL",
"volume": "",
"issue": "",
"pages": "1279--1290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Van der Plas, M. Apidianaki, and C. Chen. 2014. Global methods for cross-lingual semantic role and predicate labelling. In COLING. ACL, pages 1279- 1290.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Scaling up automatic cross-lingual semantic role annotation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Van Der Plas",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL (Short Papers). The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "299--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Van der Plas, P. Merlo, and J. Henderson. 2011. Scaling up automatic cross-lingual semantic role an- notation. In ACL (Short Papers). The Association for Computer Linguistics, pages 299-304.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Joint inference for bilingual semantic role labeling",
"authors": [
{
"first": "T",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "304--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Zhuang and C. Zong. 2010. Joint inference for bilin- gual semantic role labeling. In Proceedings of the 2010 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics, Stroudsburg, PA, USA, EMNLP '10, pages 304-314.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Part of a phrase sentence : Translated words in Turkish tags. Helper tags gives additional information for each word.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Sample WordNet record found by searching \"ENG31-01781131-v\", English synset id, from the sentence inFigure 1.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>: Counts of different argument annotations</td></tr><tr><td>between transferred annotations and hand annota-</td></tr><tr><td>tions.</td></tr><tr><td>notated in the hand-annotated corpus. Only 4,129</td></tr><tr><td>are valid PropBank arguments. In this respect, if</td></tr><tr><td>we count only valid arguments for untransferred</td></tr><tr><td>annotations, accuracy is \u223c65%.</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF4": {
"content": "<table><tr><td colspan=\"3\">: Word alignment probabilities for English</td></tr><tr><td colspan=\"3\">word \"Reserve\" calculated by IBM Model 1.</td></tr><tr><td colspan=\"3\">English Word Turkish Word Probability</td></tr><tr><td>Reserve</td><td>Reserve</td><td>0.67700755</td></tr><tr><td>Reserve</td><td>Rezerv</td><td>0.14360766</td></tr><tr><td>Reserve</td><td>Federe</td><td>0.06154614</td></tr><tr><td>Reserve</td><td>Bankas\u0131</td><td>0.05265972</td></tr><tr><td>Reserve</td><td>tasarruf</td><td>0.03072182</td></tr><tr><td>Reserve</td><td>kurulu\u015flar\u0131na</td><td>0.02117394</td></tr><tr><td colspan=\"2\">Reserve\u00fczerindeki</td><td>0.01111856</td></tr><tr><td>Reserve</td><td>bu</td><td>0.00212005</td></tr><tr><td>Reserve</td><td>kurumlar\u0131na</td><td>0.00004452</td></tr><tr><td>Reserve</td><td>Merkez</td><td>0.00000002</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>: Word alignment probabilities for English</td></tr><tr><td>word \"Reserve\" calculated by IBM Model 2.</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"4\">IBM Model 2 + Reinforce + Rules</td></tr><tr><td/><td>Transfered</td><td colspan=\"2\">Untransfered</td></tr><tr><td colspan=\"2\">Correct Incorrect Undetermined 14,457 17,464 9,635</td><td>H.A. Not H.A.</td><td>1,078 2,075</td></tr><tr><td>Total</td><td colspan=\"2\">41,556 Total</td><td>3,153</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Results for IBM Model 1 alignment.",
"html": null
},
"TABREF8": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Results for IBM Model 2 alignment.",
"html": null
}
}
}
}