|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:21:39.169808Z" |
|
}, |
|
"title": "Cross-lingual Semantic Role Labelling with the Valpal database knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Chinmay", |
|
"middle": [], |
|
"last": "Choudhary", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Ireland", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Colm", |
|
"middle": [], |
|
"last": "O'Riordan",
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Ireland", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cross-lingual Transfer Learning typically involves training a model on a high-resource source language and applying it to a low-resource target language. In this work we introduce a lexical database called Valency Patterns Leipzig (ValPal) which provides argument-pattern information about various verb-forms in multiple languages, including low-resource languages. We also provide a framework to integrate the ValPal database knowledge into the state-of-the-art LSTM-based model for cross-lingual semantic role labelling. Experimental results show that integrating such knowledge resulted in an improvement in the performance of the model on all the target languages on which it is evaluated.",
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cross-lingual Transfer Learning typically involves training a model on a high-resource source language and applying it to a low-resource target language. In this work we introduce a lexical database called Valency Patterns Leipzig (ValPal) which provides argument-pattern information about various verb-forms in multiple languages, including low-resource languages. We also provide a framework to integrate the ValPal database knowledge into the state-of-the-art LSTM-based model for cross-lingual semantic role labelling. Experimental results show that integrating such knowledge resulted in an improvement in the performance of the model on all the target languages on which it is evaluated.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Semantic role labeling (SRL) is the task of identifying various semantic arguments such as Agent, Patient, Instrument, etc. for each target verb (predicate) within an input sentence. SRL is useful as an intermediate step in numerous high-level NLP tasks, such as information extraction (Christensen et al., 2011; Bastianelli et al., 2013), automatic document categorization (Persson et al., 2009), text-summarization (Khan et al., 2015), question-answering (Shen and Lapata, 2007), etc. State-of-the-art approaches to SRL such as (Zhou and Xu, 2015; He et al., 2017a,b; Wang et al., 2021) are supervised approaches which require a large annotated dataset to be trained on, thus limiting their utility to only high-resource languages. This issue of data-sparsity (in low-resource languages) has been effectively addressed with numerous cross-lingual approaches to SRL, including Annotation Projection approaches (Pad\u00f3 and Lapata, 2009; Kozhevnikov and Titov, 2013; Akbik et al., 2015; Aminian et al., 2019a), Model Transfer approaches (McDonald and Nivre, 2013; Swayamdipta et al., 2016; Daza and Frank, 2019; Cai and Lapata, 2020a) and Machine Translation approaches (Fei et al., 2020).",
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 319, |
|
"text": "(Christensen et al., 2011;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 345, |
|
"text": "Bastianelli et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 403, |
|
"text": "(Persson et al., 2009", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 445, |
|
"text": "(Khan et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 488, |
|
"text": "(Shen and Lapata, 2007)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 556, |
|
"text": "(Zhou and Xu, 2015;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 576, |
|
"text": "He et al., 2017a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 595, |
|
"text": "Wang et al., 2021)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 916, |
|
"end": 939, |
|
"text": "(Pad\u00f3 and Lapata, 2009;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 940, |
|
"end": 968, |
|
"text": "Kozhevnikov and Titov, 2013;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 988, |
|
"text": "Akbik et al., 2015;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1011, |
|
"text": "Aminian et al., 2019a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1040, |
|
"end": 1066, |
|
"text": "(McDonald and Nivre, 2013;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1067, |
|
"end": 1092, |
|
"text": "Swayamdipta et al., 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1093, |
|
"end": 1114, |
|
"text": "Daza and Frank, 2019;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1137, |
|
"text": "Cai and Lapata, 2020a)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1173, |
|
"end": 1191, |
|
"text": "(Fei et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we use the Valency Patterns Leipzig (ValPal) online database 1 (Hartmann et al., 2013), which is a multilingual lexical database originally created by the linguistic research community to study the similarities and differences in verb-patterns across various world languages. Furthermore, we provide a framework that utilises the knowledge available in the ValPal database to improve the performance of the state-of-the-art cross-lingual approach to the SRL task.",
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 100, |
|
"text": "(Hartmann et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Valency Patterns Leipzig (ValPal) is a comprehensive multilingual lexical database which provides semantic and syntactic information about different verb-forms in various languages, including many low-resource languages. The ValPal database provides values for the following features for each verb-form: 1. Valency: the total number of arguments that a base verb-form can take.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ValPal Database", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. Argument-pattern: the type and order of arguments taken by a base verb-form in its most common usage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ValPal Database", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. Alterations: the alternate argument-patterns that can be taken by either the base verb-form or any of its morphological variants. Table 1 depicts the information about three lexical units, namely cook, kochen and cuocere, as provided in the ValPal database. Note that, due to space constraints, Table 1 lists only a few of the alterations provided for these verb-forms in the ValPal database. The lexical units cook, kochen and cuocere are the English, German and Italian words representing the base verb-form for the verb activity COOKING.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 282, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ValPal Database", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1 http://ValPal.info/",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ValPal Database", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the ValPal database each argument-pattern (including alterations) is coded with a unique coding-frame. For example, in Table 1, the argument-pattern of the English base verb-form cook is coded as follows",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 123, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coding of Argument-patterns", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1 \u2212 nom > V.subj[1] > 2 \u2212 acc", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding of Argument-patterns", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The code indicates that the base verb-form cook takes 2 arguments in its most common usage (valency of 2). The first argument is the cooker (indicated as 1-nom) and the second one is the cooked food (indicated as 2-acc). V.subj[1] indicates the verb with the first argument as its agent. The order of arguments is cooker-V-cooked food (e.g. She is cooking the fish.). Verb-form cook also has an alteration called Causative-Inchoative with the derived argument-pattern as follows.",
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 222, |
|
"text": "[1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding of Argument-patterns", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2 \u2212 acc > V.subj[1]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding of Argument-patterns", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "This argument-pattern indicates that the verb-form can also take the argument order cooked food-V, with the Agent argument missing from the sentence (e.g. The fish is cooking.).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding of Argument-patterns", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "ValPal provides a unique coding-set for each language. The codes in these coding-sets indicate various argument-types, including modifier argument-types. For example, the codes NP-Nom, NP-acc and LOC-NP indicate the AGENT (Arg0), PATIENT (Arg1) and modifier LOCATION (ArgM-LOC) arguments respectively in the coding-sets of all languages. The codes with+NP and mit+NP-dat indicate the INSTRUMENT argument in the English and German coding-sets. Similarly, the code UTT-NP indicates the TEMPORAL argument in most coding-sets. In these codes, the NP indicates the valency-index occupied by the respective argument within the argument-pattern (e.g. the code 2\u2212acc in the argument-pattern 2 \u2212 acc > V.subj[1] indicates argument-type PATIENT with the valency-index of 2).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding-sets", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As already explained, the ValPal database also provides a list of alternate argument-patterns (called alterations) for each verb-form. Some of these alterations are morpho-independent as they can be taken by the respective base verb in any morphological form, whereas others are morpho-dependent as they can be taken by the respective verb only in a specific morphological form. For example, both the Reflexive-Passive and Impersonal Passive alterations of the Italian base verb-form cuocere, outlined in Table 1, are morpho-dependent alterations as these alterations are observed only when the verb-form possesses the morpheme si. The ValPal database was originally created by the linguistic research community, typically to study the similarities and differences in verb-patterns across various world languages. However, this knowledge can also be used by the NLP research community for building models for data-sparse languages.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 504, |
|
"end": 511, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Alteration Types", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "One shortcoming of the ValPal database is that its vocabulary is limited for many languages. If we encounter a verb in the training-set that is missing from ValPal, we utilise the FrameNet database to extract the desired argument-pattern and its alterations from ValPal itself.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet to aid ValPal", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "To extract this knowledge about the missing verb, we first extract the frame of the missing verb from the respective FrameNet database. Subsequently we extract a replacement-verb that belongs to the same frame (as that of the missing verb) and is available in the ValPal database. Finally, we assign the argument-pattern and alterations of this replacement-verb to the missing verb. For example, the verb barbecue is missing from the ValPal database. Yet, the verb barbecue belongs to the frame COOKING-45.1 in the English FrameNet (Berkeley). Another verb-form, cook, belongs to the same frame (COOKING-45.1) and is available in the ValPal database. Thus we use the argument-patterns provided in ValPal for the verb-form cook as the argument-patterns for barbecue. ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet to aid ValPal", |
|
"sec_num": "2.4" |
|
}, |
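The fallback lookup described above can be sketched as follows. This is a minimal illustration: the frame memberships and the single ValPal entry below are toy stand-ins for the real FrameNet and ValPal databases, not actual contents.

```python
# Hypothetical miniature stand-ins for FrameNet frame membership and
# ValPal argument-pattern entries (illustrative data only).
FRAME_OF = {"barbecue": "COOKING-45.1", "cook": "COOKING-45.1"}
VALPAL = {"cook": ["1-nom > V.subj[1] > 2-acc"]}

def argument_patterns(verb):
    """Return ValPal argument-patterns for a verb, falling back to a
    replacement verb that shares the missing verb's FrameNet frame."""
    if verb in VALPAL:
        return VALPAL[verb]
    frame = FRAME_OF.get(verb)
    for other, patterns in VALPAL.items():
        if frame is not None and FRAME_OF.get(other) == frame:
            return patterns
    return []
```

With this sketch, the missing verb barbecue inherits the patterns of cook, mirroring the paper's example.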
|
|
{ |
|
"text": "The ValPal argument-pattern for the English base verb-form tie is outlined as equation 1 (as Q). We use this as an example to demonstrate the process of converting an argument-pattern to a FOL rule.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FOL rules from ValPal", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Q = 1 \u2212 nom > V.subj[1] > 2 \u2212 acc > LOC \u2212 3(> with + 4) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FOL rules from ValPal", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this step, we translate all of ValPal's argument-patterns (including alterations) for all lexical verb-forms in the target-language l to Propbank Label-orders. The entire process of translating a ValPal argument-pattern P of any language l into a Propbank Label-order involves two simple text-processing sub-steps, described in sections 3.1.1 and 3.1.2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translate argument-patterns to Propbank Order",
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As already explained in section 2.2, the ValPal database provides a unique coding-set for each language. In this sub-step, we examined the entire coding-set for language l to identify the codes that refer to a modifier argument-type (e.g. LOC-NP and UTT-NP in the English coding-set, for the LOCATION and TEMPORAL modifier-arguments), and created a mapping table that maps these modifier-argument codes to the corresponding Propbank annotations (e.g. LOC-NP mapped to ARGM-LOC; UTT-NP mapped to ARGM-TMP). The coding-set of any language in the ValPal database is small, thus making it feasible to manually create such a mapping table.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Replace modifier argument-types", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "Subsequently, we used this mapping table to replace all modifier argument-types (if any) in the argument-pattern P being translated with the corresponding Propbank label. After replacing a modifier argument-type, we reduce by one the valency-index of all the arguments that follow it in the argument-pattern being translated.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Replace modifier argument-types", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "Q = 1 \u2212 nom > V.subj[1] > 2 \u2212 acc > ARGM \u2212 LOC(> with + 3) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Replace modifier argument-types", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "For example, the argument-pattern outlined in equation 1 comprises only one modifier argument-type, namely LOC-3. We replaced this with the corresponding Propbank label, namely ARGM-LOC, and reduced the valency-index of all argument-types following this replaced argument-type by 1 (thus (with + 4) is re-written as (with + 3)).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Replace modifier argument-types", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "Hence the argument-pattern in Equation 1 would be re-written as equation 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Replace modifier argument-types", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "After replacing all modifier argument-types in the argument-pattern by the process described in section 3.1.1, we simply replace each leftover argument in the ValPal argument-pattern P by the string 'ARGx', where x is valencyIndex \u2212 1. Hence the arguments 1 \u2212 nom, 2 \u2212 acc and with + 3 (with valency-indexes 1, 2 and 3 respectively) in equation 2 would be replaced by ARG0, ARG1 and ARG2 respectively. Finally, we replaced V.subj[NP] with V and removed all bracket symbols. Hence the argument-pattern outlined as equation 2 would be translated as equation 3.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rewrite all non-modifier argument types", |
|
"sec_num": "3.1.2" |
|
}, |
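The two text-processing sub-steps above (replacing modifier codes, then renumbering the remaining arguments) can be sketched as follows. This is a minimal illustration over a simplified list-based pattern encoding; the MODIFIER_MAP entries and the code syntax are assumptions for demonstration, not the authors' implementation.

```python
# Assumed (hypothetical) mapping from ValPal modifier codes to PropBank labels.
MODIFIER_MAP = {"LOC": "ARGM-LOC", "UTT": "ARGM-TMP"}

def translate_pattern(pattern):
    """Translate a ValPal-style argument-pattern (list of codes) into a
    PropBank label order, e.g. turning eq. (1) into eq. (3)."""
    out = []
    shift = 0  # valency slots consumed by replaced modifiers (sub-step 3.1.1)
    for code in pattern:
        head = code.split("-")[0]
        if code.startswith("V"):           # the verb slot, e.g. V.subj[1]
            out.append("V")
        elif head in MODIFIER_MAP:         # sub-step 3.1.1: modifier -> ARGM-*
            out.append(MODIFIER_MAP[head])
            shift += 1
        else:                              # sub-step 3.1.2: ARGx, x = index - 1
            index = int(head.replace("with+", "")) - shift
            out.append(f"ARG{index - 1}")
    return out

# The paper's running example: 1-nom > V.subj[1] > 2-acc > LOC-3 > with+4
result = translate_pattern(["1-nom", "V.subj[1]", "2-acc", "LOC-3", "with+4"])
```

On the running example this yields the label order ARG0 > V > ARG1 > ARGM-LOC > ARG2.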
|
{ |
|
"text": "Q = ARG0 > V > ARG1 > ARGM \u2212 LOC > ARG2 (3)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rewrite all non-modifier argument types", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "Once we have represented all argument-patterns (including alterations) for all lexical verb-forms of language l as allowed Propbank Label-orders, we rewrite each verb-form and Propbank Label-order pair as a FOL rule. For example, the pair of the verb-form tie and its corresponding allowed Propbank Label-order outlined as equation 3 is represented by the FOL rule indicated as equation 4.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f = baseForm(V, tie) \u2227 pattern(Y, Q) (4)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here Q is the Propbank label-order outlined in equation 3, and Y is the Propbank tag-sequence predicted by a neural-network model for any input token-sequence. The logic-constraint in equation 4 would be true if the verb for which the arguments are being predicted is a variant of the base verb-form tie and the predicted SRL tag sequence Y satisfies the label order Q. While checking whether a predicted SRL tag sequence follows a specific order, we ignore the 'O' annotations ('O' indicates the semantic role label 'NULL' in the Propbank annotation scheme). For example, the SRL tag sequence ARG0, ARG0, O, O, V, ARG1, ARGM-LOC, O, ARG2 follows the argument-pattern. To check whether the verb for which the arguments are being predicted is a morphological variant of the specific base verb-form, we perform stemming of both the base verb-form and the token from the sentence which is tagged 'V' by the model. If the stem strings are equal, we consider the verb token to be a variant of the base verb-form. If an argument-pattern (represented as a Propbank label-order) is for a morpho-dependent alteration, then the morphological constraint is also added to the FOL rule representing the argument-pattern. For example, in Table 1 the argument-pattern Reflexive-Passive is a morpho-dependent alteration. This argument-pattern is represented as the FOL rule defined by equation 5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
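The order-checking and stem-matching described above can be sketched as follows. This is a simplified stand-in: the exact-prefix "stemming" and the plain list matching are illustrative assumptions, not the authors' implementation.

```python
def follows_order(tags, order):
    """Check that a predicted SRL tag sequence follows a PropBank label
    order, ignoring 'O' tags and collapsing adjacent repeats (so that
    multi-token arguments count once)."""
    collapsed = []
    for tag in tags:
        if tag == "O":
            continue
        if not collapsed or collapsed[-1] != tag:
            collapsed.append(tag)
    return collapsed == order

def same_base_verb(token, base_form):
    """Crude stand-in for stemming: treat the token as a variant of the
    base verb-form if the first few characters agree."""
    t, b = token.lower(), base_form.lower()
    return t.startswith(b[:4]) or b.startswith(t[:4])
```

On the paper's example, the sequence ARG0, ARG0, O, O, V, ARG1, ARGM-LOC, O, ARG2 satisfies the label order ARG0 > V > ARG1 > ARGM-LOC > ARG2.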
|
{ |
|
"text": "f = baseForm(V, cuocere) \u2227 morphoForm(V, si) \u2227 pattern(Y, Q) (5)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here Q represents the corresponding label-sequence for the Reflexive-Passive argument-pattern.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The rule morphoForm(V, si) constrains the verb V to have the morpheme si for the rule to be true. Hence we obtain a set of FOL rules F l representing the entire ValPal database knowledge about language l (with each verb-form and argument-pattern pair provided in the ValPal database for the language l as a single FOL-rule f \u2208 F l ). These FOL rules are used during the fine-tuning of a cross-lingual neural-network model for SRL in the target-language l. During fine-tuning, the model is always rewarded if it predicts an SRL tag-sequence Y which satisfies at least one of the FOL rules f \u2208 F l , and penalised otherwise. Section 4.3 will explain the fine-tuning process in more detail.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Write Propbank Label order as FOL rule", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We utilized the state-of-the-art approach to cross-lingual SRL in low-resource languages proposed by (Cai and Lapata, 2020b) as our Base Approach. The approach comprises two key components, namely the Semantic Role Labeler and the Semantic Role Compressor. The Semantic Role Labeler is a simple Bi-LSTM model with a Biaffine Role Scorer (Dozat and Manning, 2016) . Given an input sentence X = x 1 ...x T of length T, the model accepts a pre-trained multilingual contextualized word-embedding e x i and a predicate-indicator embedding p x i for all x i \u2208 X as input. For each word x i \u2208 X, the topmost biaffine layer computes the scores of all semantic roles to be assigned to x i as s i \u2208 R |nr| , where n r is the size of the semantic role set. Hence the probability values of all SRL labels to be assigned to word x i can be computed by applying the softmax function over s i . Subsequently, the Semantic Role Compressor is another Bi-LSTM model which compresses the useful information about arguments, predicates and their roles from the outputs of the Semantic Role Labeler (e.g., by automatically filtering unrelated or conflicting information) into a matrix R \u2208 R nr * dr , where d r denotes the length of the hidden representation for each semantic role. The approach assumes the availability of a fully annotated source-language corpus and a parallel corpus of source-target sentences for training. Each model-training step involves two independent sequential sub-steps, namely the supervised training and the cross-lingual training.",
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 124, |
|
"text": "(Cai and Lapata, 2020b)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 351, |
|
"text": "(Dozat and Manning, 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Approach", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the source-language training sub-step, a batch is randomly selected from the annotated source-language corpus to train both the Semantic Role Labeler and the Semantic Role Compressor simultaneously by minimizing the total loss computed by equation 6.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Approach", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L total = L CE + L KL", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Base Approach", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here L CE is the Cross-entropy loss between the true labels and the labels predicted by the Labeler, whereas L KL is the KL Divergence loss (Kullback and Leibler, 1951) between the distributions predicted by the Compressor and the Labeler. After the supervised training sub-step, a batch is sampled from the parallel source-target data to perform the cross-lingual training sub-step. We refer to the original work (Cai and Lapata, 2020b) for the details of the cross-lingual training sub-step and the inference.",
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 159, |
|
"text": "(Kullback and Leibler, 1951)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 413, |
|
"text": "(Cai and Lapata, 2020b)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Approach", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this work we modified the training process described in section 4.1 to include the ValPal knowledge in the model parameters. Each training step in our proposed training process involves four independent sequential sub-steps. Firstly, in the Labeler pre-training sub-step, we randomly sample a batch from the annotated source-language corpus and the Semantic Role Labeler is trained on it by minimizing the cross-entropy loss (L CE ) between true and predicted roles. Secondly, in the Labeler fine-tuning sub-step, the ValPal knowledge is injected into the parameters of the Semantic Role Labeler by the process described in section 4.3. Thirdly, in the Compressor training sub-step, the Semantic Role Compressor is trained on the sampled source-language batch by minimizing the KL Divergence loss (L KL ) between the distributions predicted by the Compressor and the fine-tuned Labeler (the Labeler parameters are fixed in this sub-step). Finally, we perform the cross-lingual training sub-step, which is identical to that performed by the original authors (section 4.1).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training with Valpal knowledge", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This section describes the framework adopted by us to induce the target-language-specific ValPal database knowledge, expressed as a set of FOL rules F l , into the pre-trained Semantic Role Labeler. Our framework is inspired by the Deep Probabilistic Logic (DPL) framework proposed by (Wang and Poon, 2018) . The framework assumes the availability of only an unlabelled target-language corpus. Hence, for the Labeler fine-tuning sub-step, we randomly sample a batch from the already available parallel source-target data and utilise only the target-language part of it. Let X = x 1 .....x T be an input sentence and Y = y 1 .....y T be any SRL-tag sequence. Further, let \u03a8 be the pre-trained Bi-LSTM based Semantic Role Labeler, such that \u03a8(X, Y ) denotes the conditional probability P (Y |X) as output by the final softmax layer of \u03a8. The fine-tuning of this pre-trained \u03a8 to a specific target-language l requires an unlabelled target-language training corpus. Given such an unlabelled target-language corpus X targ , for each X \u2208 X targ we input the sentence X into the pre-trained \u03a8 to compute the most probable SRL-tag sequence Y as Y = argmax\u0176 (\u03a8(X,\u0176 )). Subsequently, we input both the sentence X and its predicted most-probable SRL tag-sequence Y into all the FOL rules in F l to compute their values (as 0.0 or 1.0). The DPL framework defines the conditional probability distribution P (F l , Y |X) as equation 7.",
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 305, |
|
"text": "(Wang and Poon, 2018)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeler fine-tuning with ValPal", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(F_l, Y|X) = \\frac{\\prod_{f \\in F_l} \\exp(w \\cdot f(X, Y)) \\cdot \\Psi(X, Y)}{\\exp(w)}",
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Labeler fine-tuning with ValPal", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The framework assumes the knowledge-constraints to be log-linear and thus defines each knowledge-constraint as exp(w.f (X, Y )), where f \u2208 F l is the FOL rule representing the respective knowledge-constraint. Here w is the pre-decided reward-weight assigned to all constraints. Hence the predicted output-sequence Y would be rewarded (as its likelihood would increase by a factor of exp(w)) if it follows one of the argument-patterns defined in the ValPal database for the respective verb for which the arguments are being predicted (f (X, Y ) = 1.0). However, no penalty is applied for not following the correct argument-pattern.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeler fine-tuning with ValPal", |
|
"sec_num": "4.3" |
|
}, |
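The effect of this log-linear factorization can be sketched numerically. This is a toy illustration of equation 7 with a single rule; the function name and arguments are assumptions, not part of the DPL framework's API.

```python
import math

def joint_prob(psi_xy, rule_values, w):
    """Toy version of P(F_l, Y | X): each FOL-rule value f(X, Y) in
    {0.0, 1.0} contributes the log-linear factor exp(w * f), and the
    result is normalized by exp(w)."""
    factor = math.prod(math.exp(w * f) for f in rule_values)
    return factor * psi_xy / math.exp(w)

# A sequence satisfying the rule keeps its labeler probability, while a
# violating sequence is down-weighted by a factor of exp(w) relative to it.
satisfied = joint_prob(0.5, [1.0], 2.0)
violated = joint_prob(0.5, [0.0], 2.0)
```

This makes the "reward by a factor of exp(w)" in the text concrete: the ratio satisfied/violated is exp(w).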
|
{ |
|
"text": "The ideal way to optimize the weights of (fine-tune) the model \u03a8 is by maximizing P (F l |X) and updating the parameters through backpropagation. We can compute P (F l |X) by summing over all possible SRL-tag sequences as P (F l |X) = \u03a3 Y P (F l , Y |X). However, computing P (F l , Y |X) by equation 7 for all possible output-sequences, and subsequently backpropagating through it, for each training example is computationally very expensive. Thus the DPL framework also provides a more efficient EM-based approach (Moon, 1996) to the parameter fine-tuning, which is adopted by us. The full process of learning the parameters of \u03a8 (initialized with parameters pre-trained on the source language) is outlined as Algorithm 1. ",
|
"cite_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 524, |
|
"text": "(Moon, 1996)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Algorithm 1: repeat: for each X \u2208 X targ do: (E-Step) Y \u2190 argmax\u0176 (\u03a8(X,\u0176 )); q(Y ) \u2190 P (F l , Y |X) (by equation 7); (M-Step) \u03a8 \u2190 argmin\u03a8 (D KL (q(Y )||\u03a8(X, Y ))); end for; until convergence",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.3.1" |
|
}, |
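Algorithm 1's E/M alternation can be sketched with the neural labeler replaced by an explicit (tabular) distribution over candidate tag sequences; with such a tabular "model", the M-step's KL minimization reduces to setting Psi = q. All names and the candidate sequences here are illustrative assumptions.

```python
import math

def em_finetune(psi, rule, w=2.0, steps=20):
    """Toy Algorithm 1. psi: dict mapping candidate tag sequences to
    probabilities; rule: FOL-rule value f(Y) in {0.0, 1.0}."""
    for _ in range(steps):
        # E-step: q(Y) proportional to exp(w * f(Y)) * psi(Y)  (equation 7)
        q = {y: math.exp(w * rule(y)) * p for y, p in psi.items()}
        z = sum(q.values())
        q = {y: v / z for y, v in q.items()}
        # M-step: for a tabular model, the argmin of KL(q || psi) is psi = q
        psi = q
    return psi

start = {("ARG0", "V", "ARG1"): 0.4, ("ARG1", "V", "ARG0"): 0.6}
tuned = em_finetune(start, rule=lambda y: 1.0 if y[0] == "ARG0" else 0.0)
# probability mass shifts onto the rule-satisfying sequence
```

Each iteration multiplies the relative weight of rule-satisfying sequences by exp(w), so the distribution concentrates on sequences consistent with the ValPal-derived rules.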
|
{ |
|
"text": "For each training-example X \u2208 X targ , Algorithm 1 implements three steps. In the first step, it predicts the most probable SRL-tag sequence Y for the given training-example X as Y = argmax\u0176 (\u03a8(X,\u0176 )) with the current parameter values of \u03a8.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "In the E-step, q(Y ) = P (F l , Y |X) is computed by applying equation 7 with the current parameters of \u03a8. Finally, in the M-step it keeps q(Y ) fixed and updates the parameters of \u03a8 by minimizing the KL-divergence (Kullback and Leibler, 1951) loss between q(Y ) and the probability of Y from \u03a8(X, Y ) (i.e. P (Y |X)). In other words, in each epoch step, the model first computes the joint likelihood of F l and Y , i.e. P (F l , Y |X), with the current model parameters, and subsequently it updates the parameters so that the predicted likelihood of Y is as close to P (F l , Y |X) as possible.",
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 237, |
|
"text": "(Kullback and Leibler, 1951)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.3.1" |
|
}, |
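The E/M alternation above can be sketched with a toy example. The three-sequence candidate set, the rule-based target distribution `q`, and the gradient-descent M-step below are illustrative stand-ins for the LSTM labeller \u03a8 and the FOL-rule likelihood P (F l , Y |X), not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy "model": logits over 3 candidate SRL-tag sequences for one sentence.
logits = [0.2, 0.1, 0.0]

# Hypothetical rule-derived target q(Y) \u221d P(F_l, Y | X): sequence 0
# satisfies the valency rule strongly, sequence 2 violates it.
q = [0.7, 0.2, 0.1]

lr = 0.5
for _ in range(200):                       # M-step via gradient descent
    p = softmax(logits)
    # gradient of KL(q || softmax(logits)) w.r.t. the logits is (p - q)
    logits = [l - lr * (pi - qi) for l, pi, qi in zip(logits, p, q)]
```

After the loop, the model distribution softmax(logits) has moved close to the fixed rule-based target q, which is exactly what the M-step's KL minimization achieves with q held fixed.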
|
{ |
|
"text": "This section describes the experiments performed to evaluate the proposed model.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We experimented with four languages, namely English (en), German (de), Chinese (zh) and Italian (it), as these languages are covered in both the ValPal database and the CoNLL 2009 Shared Task (Hajic et al., 2009) dataset. The Semantic Role Labeller requires a fully-annotated training dataset in the high-resource source language. We utilized the Universal Proposition Banks provided at https://github.com/System-T/UniversalPropositions for the CoNLL 2009 Shared Task, for training the Semantic Role Labeller and evaluating the various systems. The Semantic Role Compressor component, on the other hand, requires sentence-paired parallel corpora in the source and target languages. We used the Europarl parallel text-corpus (Koehn et al., 2005) and the large-scale EN-ZH parallel corpus (Xu, 2019) to train the Semantic Role Compressor, as used by (Cai and Lapata, 2020b) . We used the target-language part of the same parallel corpora independently for the ValPal knowledge induction, as it simply requires an unlabelled text-corpus in the target language.",
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 220, |
|
"text": "(Hajic et al., 2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 758, |
|
"text": "(Koehn et al., 2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 803, |
|
"end": 813, |
|
"text": "(Xu, 2019)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 864, |
|
"end": 887, |
|
"text": "(Cai and Lapata, 2020b)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We computed the language-independent BERT embeddings fed into the networks using the pre-trained Multilingual BERT (mBERT) (Wu and Dredze, 2019) model. Given a sentence S, we tokenised the whole sentence using the WordPiece tokeniser (Wu et al., 2016) and, following (Cai and Lapata, 2020b) , fed this token-sequence into the pre-trained mBERT provided by (Turc et al., 2019) . The embedding of any word w \u2208 S, i.e. e w , is computed by taking the average of the mBERT outputs of all WordPiece tokens corresponding to word w. These word-embeddings are subsequently frozen during the training of the networks. Table 2 outlines the hyper-parameters used during training.",
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 146, |
|
"text": "(Wu and Dredze, 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 252, |
|
"text": "(Wu et al., 2016", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 276, |
|
"text": "(Cai and Lapata, 2020b)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 351, |
|
"text": "(Turc et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 576, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model-configurations", |
|
"sec_num": "5.2" |
|
}, |
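The per-word embedding computation described above (averaging the mBERT outputs of a word's WordPiece tokens) can be sketched as follows; the token split and the 2-dimensional vectors are toy stand-ins for the real WordPiece tokeniser and mBERT outputs.

```python
# Toy WordPiece split: "labelling" -> ["label", "##ling"]
wordpieces = {"semantic": ["semantic"], "labelling": ["label", "##ling"]}

# Hypothetical mBERT output, one vector per WordPiece token.
mbert_out = {
    "semantic": [1.0, 0.0],
    "label":    [0.0, 2.0],
    "##ling":   [2.0, 0.0],
}

def word_embedding(word):
    """Average the sub-token vectors belonging to one word, e_w."""
    vecs = [mbert_out[t] for t in wordpieces[word]]
    n = len(vecs)
    return [sum(col) / n for col in zip(*vecs)]
```

A word that is not split (here "semantic") keeps its single sub-token vector unchanged; a split word gets the element-wise mean of its pieces.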
|
{ |
|
"text": "We compared the performance of our proposed model against the base-model (4.1) as well as numerous other state-of-the-art baselines. These baselines include two annotation-projection based models, namely Bootstrap (Aminian et al., 2017) and CModel (Aminian et al., 2019b) , as well as two strong mixture-of-experts models: MOE (Guo et al., 2018) , which focuses on combining language-specific features automatically, and MAN-MOE, which learns language-invariant features with a multinomial adversarial network as a shared feature extractor. We also compared with PGN (Fei et al., 2020) , the state-of-the-art translation-based model, which translates the source annotated corpus into the target language, performs annotation projection, and subsequently trains the model on both the source and the translated corpus. We utilised the source code provided by the authors of each of these baselines to train and test them.",
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 235, |
|
"text": "(Aminian et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 270, |
|
"text": "(Aminian et al., 2019b)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 350, |
|
"text": "(Guo et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 593, |
|
"text": "PGN (Fei et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Algorithm 2 Full training process. Here, the function FineTune represents the process outlined as Algorithm 1 and the function CrossTrain represents the cross-lingual training procedure adopted by (Cai and Lapata, 2020b) . L CE is the cross-entropy loss and L KL is the KL-divergence loss. Require: Annotated source-language corpus {X Tagged , Y Tagged }; parallel source-target corpus {X S Parallel , X T Parallel }; set of FOL rules representing the entire ValPal database knowledge of the target language F l ; batch-size b; number of epochs E Initialize:",
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 216, |
|
"text": "(Cai and Lapata, 2020b)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Semantic Role Labeler \u03a8; Semantic Role Compressor \u03a6 steps \u2190 |X T g |/b for epoch \u2190 1 to E do for step \u2190 1 to steps do X, Y \u2190 Sample({X T g , Y T g },b) X S , X T \u2190 Sample({X S P r , X T P r },b) \u25b7 Labeler pre-training \u03a8 \u2190 argmin\u03a8(L CE (Y ||\u03a8(X))) \u25b7 Labeler fine-tuning \u03a8 \u2190 FineTune(X T , F l , \u03a8, b) \u25b7 Compressor training \u03a6 \u2190 argmin\u03a6(L KL (\u03a8(X)||\u03a6(X)))",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "\u25b7 Cross-lingual training \u03a6, \u03a8 \u2190 CrossTrain(X S , X T , \u03a6, \u03a8) end for end for 6 Results",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In the first set of experiments we trained the models on a single source language, English, and tested them on the target languages zh, it and de. In these settings, we trained the models on the English UPB train-dataset and tested them on the UPB test-sets of the target languages. Table 3 shows the labeled F-scores achieved on each of these target languages. In Table 4 , Base-wo-Compressor refers to the base model without the SRL compressor, whereas Base-full refers to the full base model.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Monolingual training", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Results in Table 3 show that for both the Base-wo-Compressor and Base-full models, adding ValPal database knowledge improved performance on all three target languages. Furthermore, for all three target languages, the improvement in performance of both Base-wo-Compressor and Base- Table 4 outlines the results obtained under the polyglot training settings. For each experiment within these settings, the models are trained on a joint polyglot corpus of three of the four languages en, it, de and zh, excluding the target language for which the results are outlined. For each experiment, the training-corpus size is fixed to 600,000 tokens to ensure controlled experiment settings. We created such a polyglot corpus by randomly sampling sentences from the UPB train-set of each of the three source languages until the token-count became approximately equal to 100,000, concatenating all these sampled datasets and randomly shuffling the order. Alignment-projection based approaches and Base-full are not evaluated in the polyglot settings as these approaches require parallel-aligned source and target language sentence-pairs. Results show that adding ValPal knowledge improves the performance of the Base-wo-Compressor model even within the polyglot settings. Furthermore, although Base-wo-Compressor performs better in the polyglot training settings than in the monolingual settings for most of the target languages, the improvement in its performance due to ValPal knowledge injection is the same in both settings. This is because the fine-tuning of the model with ValPal database knowledge is performed only with the unlabelled target-language corpus.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 18, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 288, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Monolingual training", |
|
"sec_num": "6.1" |
|
}, |
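The polyglot-corpus construction described above (sample roughly a fixed token budget per source language, concatenate, shuffle) can be sketched as below; the toy sentences and the small per-language budget are illustrative stand-ins for the actual UPB train-sets and the 100,000-token budget.

```python
import random

def sample_until(sentences, token_budget, rng):
    """Randomly sample sentences until the token budget is roughly met."""
    pool = sentences[:]
    rng.shuffle(pool)
    out, count = [], 0
    for sent in pool:
        if count >= token_budget:
            break
        out.append(sent)
        count += len(sent.split())  # whitespace tokens as a proxy
    return out

def build_polyglot(corpora, per_lang_budget, seed=0):
    """corpora: dict lang -> list of sentences (toy stand-in for UPB)."""
    rng = random.Random(seed)
    mixed = []
    for lang, sents in corpora.items():
        mixed.extend(sample_until(sents, per_lang_budget, rng))
    rng.shuffle(mixed)          # interleave the three source languages
    return mixed

# Three toy source-language corpora, 6 tokens per sentence.
corpora = {lang: [f"{lang} sent {i} a b c" for i in range(100)]
           for lang in ("en", "de", "zh")}
polyglot = build_polyglot(corpora, per_lang_budget=50)
```

Because each toy sentence has 6 tokens and sampling stops once the budget is reached, each language contributes 9 sentences (54 tokens) here; with the paper's 100,000-token budget the same loop yields the balanced three-language mix described above.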
|
{ |
|
"text": "It can be observed in Tables 3 and 4 that the improvement on the target language it is much lower than the improvements observed on zh, de and en. The reason is that we extended the ValPal vocabulary of en, zh and de using English FrameNet (Barkley), Chinese FrameNet (Yang et al., 2018) and German FrameNet (of Texas) by the process described in section 2.4, whereas the Italian FrameNet is not publicly available.",
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 283, |
|
"text": "(Yang et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 36, |
|
"text": "Tables 3 and 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance with extended vocabularies", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We performed experiments to analyze the impact of this vocabulary extension on performance. Table 5 outlines the results of these experiments. It can be observed that extending the ValPal vocabulary with FrameNet does lead to a significant improvement in performance.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 106, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance with extended vocabularies", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Valency Patterns Leipzig (ValPal) is a multilingual lexical database which provides knowledge about the argument patterns of various verb-forms in multiple languages, including numerous low-resource languages. The database was originally created by the linguistics community to study the similarities and differences in verb patterns across the world's languages. In this work we utilised this database to improve the performance of the state-of-the-art cross-lingual model for the SRL task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We evaluated a framework for integrating the entire ValPal knowledge about any low-resource target language into an LSTM-based model. Our proposed framework only requires an unannotated target-language corpus for the knowledge integration.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Generating high quality proposition banks for multilingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Chiticariu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Danilevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunyao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivakumar", |
|
"middle": [], |
|
"last": "Vaithyanathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huaiyu", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "397--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Laura Chiticariu, Marina Danilevsky, Yunyao Li, Shivakumar Vaithyanathan, and Huaiyu Zhu. 2015. Generating high quality proposition banks for multilingual semantic role labeling. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 397-407.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Transferring semantic roles using translation and syntactic information", |
|
"authors": [ |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Aminian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Sadegh Rasooli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.01411" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2017. Transferring semantic roles using translation and syntactic information. arXiv preprint arXiv:1710.01411.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cross-lingual transfer of semantic roles: From raw text to semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Aminian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Sadegh Rasooli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.03256" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2019a. Cross-lingual transfer of seman- tic roles: From raw text to semantic roles. arXiv preprint arXiv:1904.03256.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Cross-lingual transfer of semantic roles: From raw text to semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Aminian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Sadegh Rasooli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.03256" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2019b. Cross-lingual transfer of seman- tic roles: From raw text to semantic roles. arXiv preprint arXiv:1904.03256.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "English framenet", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Icsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Barkley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ICSI Barkley. English framenet.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Textual inference and meaning representation in human robot interaction", |
|
"authors": [ |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Bastianelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Castellucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Croce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, and Roberto Basili. 2013. Textual inference and meaning representation in human robot interac- tion. In Proceedings of the Joint Symposium on Se- mantic Processing. Textual Inference and Structures in Corpora, pages 65-69.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Alignment-free cross-lingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3883--3894", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Cai and Mirella Lapata. 2020a. Alignment-free cross-lingual semantic role labeling. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3883-3894.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Alignment-free cross-lingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3883--3894", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Cai and Mirella Lapata. 2020b. Alignment-free cross-lingual semantic role labeling. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3883-3894.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Multisource cross-lingual model transfer: Learning what to share", |
|
"authors": [ |
|
{ |
|
"first": "Xilun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan Awadallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xilun Chen, Ahmed Hassan Awadallah, Hany Has- san, Wei Wang, and Claire Cardie. 2018. Multi- source cross-lingual model transfer: Learning what to share. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An analysis of open information extraction based on semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Janara", |
|
"middle": [], |
|
"last": "Christensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the sixth international conference on Knowledge capture", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janara Christensen, Stephen Soderland, and Oren Et- zioni. 2011. An analysis of open information extrac- tion based on semantic role labeling. In Proceedings of the sixth international conference on Knowledge capture, pages 113-120.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Translate and label! an encoder-decoder approach for crosslingual semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Daza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.11326" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angel Daza and Anette Frank. 2019. Translate and label! an encoder-decoder approach for cross- lingual semantic role labeling. arXiv preprint arXiv:1908.11326.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Deep biaffine attention for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01734" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Cross-lingual semantic role labeling with highquality translated training corpus", |
|
"authors": [ |
|
{ |
|
"first": "Meishan", |
|
"middle": [], |
|
"last": "Hao Fei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.06295" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Fei, Meishan Zhang, and Donghong Ji. 2020. Cross-lingual semantic role labeling with high- quality translated training corpus. arXiv preprint arXiv:2004.06295.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Multi-source domain adaptation with mixture of experts", |
|
"authors": [ |
|
{ |
|
"first": "Jiang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Darsh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiang Guo, Darsh J Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of ex- perts. EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Llu\u0131s marquez, adam meyers, joakim nivre, sebastian pad\u00f3, jan\u0161tep\u00e1nek, pavel stran\u00e1k, mihai surdeanu, nianwen xue, and yi zhang. 2009. the conll-2009 shared task: Syntactic and semantic dependencies in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Antonia" |
|
], |
|
"last": "Mart\u0131", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Hajic, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, and Maria Antonia Mart\u0131. 2009. Llu\u0131s marquez, adam meyers, joakim nivre, sebas- tian pad\u00f3, jan\u0161tep\u00e1nek, pavel stran\u00e1k, mihai sur- deanu, nianwen xue, and yi zhang. 2009. the conll- 2009 shared task: Syntactic and semantic dependen- cies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1-18.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Valency Patterns Leipzig online database", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iren Hartmann, Martin Haspelmath, and Bradley Tay- lor, editors. 2013. The Valency Patterns Leipzig on- line database. Max Planck Institute for Evolution- ary Anthropology, Leipzig.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Deep semantic role labeling: What works and what's next", |
|
"authors": [ |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "473--483", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017a. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 473-483.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Jointly predicting predicates and arguments in neural semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "473--483", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017b. Jointly predicting predicates and ar- guments in neural semantic role labeling. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 473-483.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A framework for multi-document abstractive summarization based on semantic role labelling", |
|
"authors": [ |
|
{ |
|
"first": "Atif", |
|
"middle": [], |
|
"last": "Khan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naomie", |
|
"middle": [], |
|
"last": "Salim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yogan Jaya", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Applied Soft Computing", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "737--747", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atif Khan, Naomie Salim, and Yogan Jaya Kumar. 2015. A framework for multi-document abstrac- tive summarization based on semantic role labelling. Applied Soft Computing, 30:737-747.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "MT summit", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn et al. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79-86. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Crosslingual transfer of semantic role labeling models", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Kozhevnikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1190--1200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Kozhevnikov and Ivan Titov. 2013. Cross- lingual transfer of semantic role labeling models. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1190-1200.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "On information and sufficiency. The annals of mathematical statistics", |
|
"authors": [ |
|
{ |
|
"first": "Solomon", |
|
"middle": [], |
|
"last": "Kullback", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Leibler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1951, |
|
"venue": "", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathe- matical statistics, 22(1):79-86.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Yvonne irmbach-brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith hall, slav petrov, hao zhang, oscar t\u00e4ckstr\u00f6m, claudia bedini, n\u00faria bertomeu castell\u00f3, and jungmee lee. universal dependency annotation for multilingual parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "92--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald and Joakim Nivre. 2013. Yvonne irmbach-brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith hall, slav petrov, hao zhang, oscar t\u00e4ckstr\u00f6m, claudia bedini, n\u00faria bertomeu castell\u00f3, and jungmee lee. universal dependency an- notation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics, pages 92-97.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The expectation-maximization algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Todd",

"middle": ["K"],

"last": "Moon",
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "IEEE Signal processing magazine", |
|
"volume": "13", |
|
"issue": "6", |
|
"pages": "47--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Todd K Moon. 1996. The expectation-maximization algorithm. IEEE Signal processing magazine, 13(6):47-60.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Crosslingual annotation projection for semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "307--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2009. Cross- lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307- 340.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Text categorization using predicate-argument structures", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Persson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Nugues", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 17th Nordic Conference of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Persson, Richard Johansson, and Pierre Nugues. 2009. Text categorization using predicate-argument structures. In Proceedings of the 17th Nordic Con- ference of Computational Linguistics (NODALIDA 2009), pages 142-149.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Using semantic roles to improve question answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceed- ings of the 2007 joint conference on empirical meth- ods in natural language processing and computa- tional natural language learning (EMNLP-CoNLL), pages 12-21.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Greedy, joint syntacticsemantic parsing with stack lstms", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.08954" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Greedy, joint syntactic- semantic parsing with stack lstms. arXiv preprint arXiv:1606.08954.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Well-read students learn better: On the importance of pre-training compact models", |
|
"authors": [ |
|
{ |
|
"first": "Iulia", |
|
"middle": [], |
|
"last": "Turc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.08962v2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962v2.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Deep probabilistic logic: A unifying framework for indirect supervision", |
|
"authors": [ |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.08485" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hai Wang and Hoifung Poon. 2018. Deep probabilistic logic: A unifying framework for indirect supervi- sion. arXiv preprint arXiv:1808.08485.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "An mrc framework for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxian", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaofei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
},

{

"first": "Jun",

"middle": [],

"last": "He",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2109.06660" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nan Wang, Jiwei Li, Yuxian Meng, Xiaofei Sun, and Jun He. 2021. An mrc framework for semantic role labeling. arXiv preprint arXiv:2109.06660.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of bert", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.09077" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of bert. arXiv preprint arXiv:1904.09077.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{

"first": "Quoc",

"middle": ["V"],

"last": "Le",

"suffix": ""

},

{

"first": "Mohammad",

"middle": [],

"last": "Norouzi",

"suffix": ""

},

{

"first": "Wolfgang",

"middle": [],

"last": "Macherey",

"suffix": ""

},

{

"first": "Maxim",

"middle": [],

"last": "Krikun",

"suffix": ""

},

{

"first": "Yuan",

"middle": [],

"last": "Cao",

"suffix": ""

},

{

"first": "Qin",

"middle": [],

"last": "Gao",

"suffix": ""

},

{

"first": "Klaus",

"middle": [],

"last": "Macherey",

"suffix": ""

}
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Nlp chinese corpus: Large scale chinese corpus for nlp", |
|
"authors": [ |
|
{ |
|
"first": "Bright", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bright Xu. 2019. Nlp chinese corpus: Large scale chi- nese corpus for nlp.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Transfer of frames from english framenet to construct chinese framenet: a bilingual corpus-based approach", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Han", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hen-Hsen", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "An-Zi", |
|
"middle": [], |
|
"last": "Yen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hsin-Hsi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Han Yang, Hen-Hsen Huang, An-Zi Yen, and Hsin-Hsi Chen. 2018. Transfer of frames from en- glish framenet to construct chinese framenet: a bilin- gual corpus-based approach. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "End-to-end learning of semantic role labeling using recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1127--1137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1127-1137.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Sample verb-form knowledge in Valpal database", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Algorithm 1 Fine-tuning of Semantic Role Labeller Require: Target Language corpus X targ ; set of FOL rules F l representing the entire Valpal database knowledge; Pre-trained LSTM based Semantic Role Labeller \u03a8; Number of Epochs N", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Hyper-parameter settings for input and train-ing (first block), semantic role labeler (second block) and semantic role compressor (third block). Semantic role labeler and Semantic role compressor are same as</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "Results for Monolingual settings (with extended vocab for de and zh). The improvements of the full models due to Valpal knowledge injection are the same, i.e. 0.7 for it, 4.5 for de and 4.6 for zh (average 3.3). This provides evidence that the improvement is indeed due to the Valpal knowledge injection.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Model</td><td>it</td><td>de</td><td>zh</td><td>en</td><td>avg</td></tr><tr><td>MAN-</td><td colspan=\"5\">57.7 66.2 65.9 66.0 63.9</td></tr><tr><td>MOE</td><td/><td/><td/><td/><td/></tr><tr><td>MoE</td><td colspan=\"5\">57.1 63.5 66.1 64.1 62.7</td></tr><tr><td>PGN</td><td colspan=\"5\">58.0 65.7 66.9 67.8 64.6</td></tr><tr><td>Base-wo-</td><td colspan=\"5\">37.6 50.2 48.9 49.9 46.6</td></tr><tr><td>Compressor</td><td/><td/><td/><td/><td/></tr><tr><td>Base-wo-</td><td colspan=\"5\">38.5 54.7 53.6 54.8 50.4</td></tr><tr><td>Compressor</td><td/><td/><td/><td/><td/></tr><tr><td>+ Valpal</td><td/><td/><td/><td/><td/></tr><tr><td>Increase</td><td>0.9</td><td>4.5</td><td>4.7</td><td>4.9</td><td>3.8</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"text": "Results in Polyglot settings", |
|
"type_str": "table", |
|
"content": "<table><tr><td>6.2 Polyglot training</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"text": "Results with and without ext-vocab", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |