|
{ |
|
"paper_id": "S13-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:41:33.981684Z" |
|
}, |
|
"title": "Predicting the Compositionality of Multiword Expressions Using Translations in Multiple Languages", |
|
"authors": [ |
|
{ |
|
"first": "Bahar", |
|
"middle": [], |
|
"last": "Salehi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "NICTA Victoria Research Laboratory", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Melbourne Victoria", |
|
"location": { |
|
"postCode": "3010", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we propose a simple, languageindependent and highly effective method for predicting the degree of compositionality of multiword expressions (MWEs). We compare the translations of an MWE with the translations of its components, using a range of different languages and string similarity measures. We demonstrate the effectiveness of the method on two types of English MWEs: noun compounds and verb particle constructions. The results show that our approach is competitive with or superior to state-of-the-art methods over standard datasets.", |
|
"pdf_parse": { |
|
"paper_id": "S13-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we propose a simple, languageindependent and highly effective method for predicting the degree of compositionality of multiword expressions (MWEs). We compare the translations of an MWE with the translations of its components, using a range of different languages and string similarity measures. We demonstrate the effectiveness of the method on two types of English MWEs: noun compounds and verb particle constructions. The results show that our approach is competitive with or superior to state-of-the-art methods over standard datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A multiword expression (MWE) is any combination of words with lexical, syntactic or semantic idiosyncrasy (Sag et al., 2002; Baldwin and Kim, 2009) , in that the properties of the MWE are not predictable from the component words. For example, with ad hoc, the fact that neither ad nor hoc are standalone English words, makes ad hoc a lexicallyidiosyncratic MWE; with shoot the breeze, on the other hand, we have semantic idiosyncrasy, as the meaning of \"to chat\" in usages such as It was good to shoot the breeze with you 1 cannot be predicted from the meanings of the component words shoot and breeze.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 124, |
|
"text": "(Sag et al., 2002;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 147, |
|
"text": "Baldwin and Kim, 2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality of MWEs", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Semantic idiosyncrasy has been of particular interest to NLP researchers, with research on binary compositional/non-compositional MWE clas-sification (Lin, 1999; , or a three-way compositional/semi-compositional/noncompositional distinction (Fazly and Stevenson, 2007) . There has also been research to suggest that MWEs span the entire continuum from full compositionality to full non-compositionality (McCarthy et al., 2003; Reddy et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 161, |
|
"text": "(Lin, 1999;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 268, |
|
"text": "(Fazly and Stevenson, 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 426, |
|
"text": "(McCarthy et al., 2003;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 446, |
|
"text": "Reddy et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality of MWEs", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Investigating the degree of MWE compositionality has been shown to have applications in information retrieval and machine translation (Acosta et al., 2011; Venkatapathy and Joshi, 2006) . As an example of an information retrieval system, if we were looking for documents relating to rat race (meaning \"an exhausting routine that leaves no time for relaxation\" 2 ), we would not be interested in documents on rodents. These results underline the need for methods for broad-coverage MWE compositionality prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 155, |
|
"text": "(Acosta et al., 2011;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 185, |
|
"text": "Venkatapathy and Joshi, 2006)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality of MWEs", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this research, we investigate the possibility of using an MWE's translations in multiple languages to measure the degree of the MWE's compositionality, and investigate how literal the semantics of each component is within the MWE. We use Panlex to translate the MWE and its components, and compare the translations of the MWE with the translations of its components using string similarity measures. The greater the string similarity, the more compositional the MWE is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality of MWEs", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Whereas past research on MWE compositionality has tended to be tailored to a specific MWE type (McCarthy et al., 2007; Kim and Baldwin, 2007; Fazly et al., 2009) , our method is applicable to any MWE type in any language. Our experiments over two English MWE types demonstrate that our method is competitive with state-of-the-art methods over standard datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 118, |
|
"text": "(McCarthy et al., 2007;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 141, |
|
"text": "Kim and Baldwin, 2007;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 161, |
|
"text": "Fazly et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality of MWEs", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most previous work on measuring MWE compositionality makes use of lexical, syntactic or semantic properties of the MWE. One early study on MWE compositionality was Lin (1999) , who claimed that the distribution of non-compositional MWEs (e.g. shoot the breeze) differs significantly from the distribution of expressions formed by substituting one of the components with a semantically similar word (e.g. shoot the wind). Unfortunately, the method tends to fall down in cases of high statistical idiosyncrasy (or \"institutionalization\"): consider frying pan which is compositional but distributionally very different to phrases produced through wordsubstitution such as sauteing pan or frying plate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 174, |
|
"text": "Lin (1999)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Some research has investigated the syntactic properties of MWEs, to detect their compositionality (Fazly et al., 2009; McCarthy et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 118, |
|
"text": "(Fazly et al., 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 141, |
|
"text": "McCarthy et al., 2007)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The assumption behind these methods is that noncompositional MWEs are more syntactically fixed than compositional MWEs. For example, make a decision can be passivised, but shoot the breeze cannot. One serious problem with syntax-based methods is their lack of generalization: each type of MWE has its own characteristics, and these characteristics differ from one language to another. Moreover, some MWEs (such as noun compounds) are not flexible syntactically, no matter whether they are compositional or non-compositional (Reddy et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 544, |
|
"text": "(Reddy et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Much of the recent work on MWEs focuses on their semantic properties, measuring the semantic similarity between the MWE and its components using different resources, such as WordNet (Kim and Baldwin, 2007) or distributional similarity relative to a corpus (e.g. based on Latent Semantic Analysis: Schone and Jurafsky (2001) , , Reddy et al. (2011) ). The size of the corpus is important in methods based on distributional similarity. Unfortunately, however, large corpora are not available for all languages. Reddy et al. (2011) hypothesize that the number of common co-occurrences between a given MWE and its component words indicates the de-gree of compositionality of that MWE. First, the cooccurrences of a given MWE/word are considered as the values of a vector. They then measure the Cosine similarity between the vectors of the MWE and its components. presented four methods to measure the compositionality of English verb particle constructions. Their best result is based on the previously-discussed method of Lin (1999) for measuring compositionality, but uses a more-general distributional similarity model to identify synonyms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 205, |
|
"text": "Baldwin, 2007)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 323, |
|
"text": "Schone and Jurafsky (2001)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 347, |
|
"text": "Reddy et al. (2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 528, |
|
"text": "Reddy et al. (2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1019, |
|
"end": 1029, |
|
"text": "Lin (1999)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, a few studies have investigated using parallel corpora to detect the degree of compositionality (Melamed, 1997; Moir\u00f3n and Tiedemann, 2006; de Caseli et al., 2010; Salehi et al., 2012) . The general approach is to word-align the source and target language sentences and analyse alignment patterns for MWEs (e.g. if the MWE is always aligned as a single \"phrase\", then it is a strong indicator of non-compositionality). de Caseli et al. (2010) consider non-compositional MWEs to be those candidates that align to the same target language unit, without decomposition into word alignments. Melamed (1997) suggests using mutual information to investigate how well the translation model predicts the distribution of words in the target text given the distribution of words in the source text. Moir\u00f3n and Tiedemann (2006) show that entropy is a good indicator of compositionality, because word alignment models are often confused by non-compositional MWEs. However, this assumption does not always hold, especially when dealing with high-frequency non-compositional MWEs. Salehi et al. (2012) tried to solve this problem with high frequency MWEs by using word alignment in both directions. 3 They computed backward and forward entropy to try to remedy the problem with especially high-frequency phrases. However, their assumptions were not easily generalisable across languages, e.g., they assume that the relative frequency of a specific type of MWE (light verb constructions) in Persian is much greater than in English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 121, |
|
"text": "(Melamed, 1997;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 149, |
|
"text": "Moir\u00f3n and Tiedemann, 2006;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 173, |
|
"text": "de Caseli et al., 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 194, |
|
"text": "Salehi et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 611, |
|
"text": "Melamed (1997)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 825, |
|
"text": "Moir\u00f3n and Tiedemann (2006)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1076, |
|
"end": 1096, |
|
"text": "Salehi et al. (2012)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Although methods using bilingual corpora are intuitively appealing, they have a number of drawbacks. The first and the most important problem is data: they need large-scale parallel bilingual corpora, which are available for relatively few language pairs. Second, since they use statistical measures, they are not suitable for measuring the compositionality of MWEs with low frequency. And finally, most experiments have been carried out on English paired with other European languages, and it is not clear whether the results translate across to other language pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this research, we use the translations of MWEs and their components to estimate the relative degree of compositionality of a MWE. There are several resources available to translate words into various languages such as Babelnet (Navigli and Ponzetto, 2010 ), 4 Wiktionary, 5 Panlex (Baldwin et al., 2010) and Google Translate. 6 As we are ideally after broad coverage over multiple languages and MWEs/component words in a given language, we exclude Babelnet and Wiktionary from our current research. Babelnet covers only six languages at the time of writing this paper, and in Wiktionary, because it is constantly being updated, words and MWEs do not have translations into the same languages. This leaves translation resources such as Panlex and Google Translate. However, after manually analysing the two resources for a range of MWEs, we decided not to use Google Translate for two reasons: (1) we consider the MWE out of context (i.e., we are working at the type level and do not consider the usage of the MWE in a particular sentence), and Google Translate tends to generate compositional translations of MWEs out of context; and (2) Google Translate provides only one translation for each component word/MWE. This left Panlex.", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 257, |
|
"text": "(Navigli and Ponzetto, 2010", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 306, |
|
"text": "(Baldwin et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 330, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Panlex is an online translation database that is freely available. It contains lemmatized words and MWEs in a large variety of languages, with lemmabased (and less frequently sense-based) links between them. The database covers more than 1353 languages, and is made up of 12M lemmas and expressions. The translations are sourced from handmade electronic dictionaries, making it more accu-rate than translation dictionaries generated automatically, e.g. through word alignment. Usually there are several direct translations for a word/MWE from one language to another, as in translations which were extracted from electronic dictionaries. If there is no direct translation for a word/MWE in the database, we can translate indirectly via one or more pivot languages (indirect translation: Soderland et al. 2010). For example, English ivory tower has direct translations in only 13 languages in Panlex, including French (tour d'ivoire) but not Esperanto. There is, however, a translation of tour d'ivoire into Esperanto (ebura turo), allowing us to infer an indirect translation between ivory tower and ebura turo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources", |
|
"sec_num": "3" |
|
}, |
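The pivot-based fallback described above can be sketched as a two-hop lookup over a toy dictionary. The `DIRECT` table, the `translate` helper and the language codes are illustrative assumptions, not the actual Panlex interface:

```python
# Toy direct-translation table keyed by (source, target) language pairs.
# The entries mirror the ivory tower example above; they are not real
# Panlex data.
DIRECT = {
    ("eng", "fra"): {"ivory tower": {"tour d'ivoire"}},
    ("fra", "epo"): {"tour d'ivoire": {"ebura turo"}},
}

def translate(term, src, tgt, pivots=("fra",)):
    """Return direct translations if any; otherwise fall back to
    translating via one or more pivot languages."""
    direct = DIRECT.get((src, tgt), {}).get(term)
    if direct:
        return set(direct)
    indirect = set()
    for pivot in pivots:
        for mid in DIRECT.get((src, pivot), {}).get(term, set()):
            indirect |= DIRECT.get((pivot, tgt), {}).get(mid, set())
    return indirect
```

With these toy entries, the English-to-Esperanto translation of ivory tower has no direct entry and is recovered via the French pivot, mirroring the example above.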
|
{ |
|
"text": "We evaluate our method over two datasets, as described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "REDDY (Reddy et al., 2011) : 90 English (binary) noun compounds (NCs), where the overall NC and each component word has been annotated for compositionality on a scale from 0 (non-compositional) to 5 (compositional). In order to avoid issues with polysemy, the annotators were presented with each NC in a sentential context. The authors tried to achieve a balance of compositional and noncompositional NCs: based on a threshold of 2.5, the dataset consists of 43 (48%) compositional NCs, 46 (51%) NCs with a compositional usage of the first component, and 54 (60%) NCs with a compositional usage of the second component.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 26, |
|
"text": "(Reddy et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "BANNARD (Bannard, 2006 ): 160 English verb particle constructions (VPCs) were annotated for compositionality relative to each of the two component words (the verb and the particle). Each annotator was asked to annotate each of the verb and particle as yes, no or don't know. Based on the majority annotation, among the 160 VPCs, 122 (76%) are verb-compositional and 76 (48%) are particlecompositional.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 22, |
|
"text": "(Bannard, 2006", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We compute the proportion of yes tags to get the compositionality score. This dataset, unlike REDDY, does not include annotations for the compositionality of the whole VPC, and is also less balanced, containing more VPCs which are verb-compositional than verb-non-compositional. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To predict the degree of compositionality of an MWE, we require a way to measure the semantic similarity of the MWE with its components. Our hypothesis is that compositional MWEs are more likely to be word-for-word translations in a given language than non-compositional MWEs. Hence, if we can locate the translations of the components in the translation of the MWE, we can deduce that it is compositional. Our second hypothesis is that the more languages we use as the basis for determining translation similarity between the MWE and its component words, the more accurately we will be able to estimate compositionality. Thus, rather than using just one translation language, we experiment with as many languages as possible. Figure 1 provides a schematic outline of our method. The MWE and its components are translated using Panlex. Then, we compare the translation of the MWE with the translations of its components. In order to locate the translation of each component in the MWE translation, we use string simi- larity measures. The score shown in Figure 1 is derived from a given language. In Section 6, we show how to combine scores across multiple languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 727, |
|
"end": 735, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1054, |
|
"end": 1062, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As an example of our method, consider the English-to-Persian translation of kick the bucket as a non-compositional MWE and make a decision as a semi-compositional MWE (Table 1) . 7 By locating the translation of decision (tasmim) in the translation of make a decision (tasmim gereftan), we can deduce that it is semi-compositional. However, we cannot locate any of the component translations in the translation of kick the bucket. Therefore, we conclude that it is non-compositional. Note that in this simple example, the match is word-level, but that due to the effects of morphophonology, the more likely situation is that the components don't match exactly (as we observe in the case of khadamaat and khedmat for the public service example), which motivates our use of string similarity measures which can capture partial matches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 180, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 176, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We consider the following string similarity measures to compare the translations. In each case, we normalize the output value to the range [0, 1], where 1 indicates identical strings and 0 indicates completely different strings. We will indicate the translation of the MWE in a particular language t as MWE t , and the translation of a given component in language t as component t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The LCS measure finds the longest common substring between two strings. For example, the LCS between ABABC and BABCAB is BABC. We calculate a normalized similarity value based on the length of the LCS as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Longest common substring (LCS):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LongestCommonString (MWE t , component t ) min(len(MWE t ), len(component t ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Longest common substring (LCS):", |
|
"sec_num": null |
|
}, |
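As a sketch, the normalized LCS score above can be computed with standard dynamic programming; `longest_common_substring` and `lcs_score` are illustrative names, not code from the paper:

```python
def longest_common_substring(a: str, b: str) -> str:
    """Longest common substring of a and b via dynamic programming."""
    best_end, best_len = 0, 0
    # dp[i][j] = length of the common suffix of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return a[best_end - best_len:best_end]

def lcs_score(mwe_t: str, component_t: str) -> float:
    """len(LCS) normalized by the shorter string, in [0, 1]."""
    if not mwe_t or not component_t:
        return 0.0
    lcs = longest_common_substring(mwe_t, component_t)
    return len(lcs) / min(len(mwe_t), len(component_t))
```

On the example from the text, ABABC vs. BABCAB, this recovers BABC and a score of 4/5 = 0.8.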
|
{ |
|
"text": "Levenshtein (LEV1): The Levenshtein distance calculates for the number of basic edit operations required to transpose one word into the other. Edits consist of single-letter insertions, deletions or substitutions. We normalize LEV1 as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Longest common substring (LCS):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1 \u2212 LEV1 (MWE t , component t ) max(len(MWE t ), len(component t ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Longest common substring (LCS):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One well-documented feature of Levenshtein distance (Baldwin, 2009) is that substitutions are in fact the combination of an addition and a deletion, and as such can be considered to be two edits. Based on this observation, we experiment with a variant of LEV1 with this penalty applied for substitutions. Similarly to LEV1, we normalize as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 67, |
|
"text": "(Baldwin, 2009)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Levenshtein with substitution penalty (LEV2):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1 \u2212 LEV2 (MWE t , component t ) len(MWE t ) + len(component t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Levenshtein with substitution penalty (LEV2):", |
|
"sec_num": null |
|
}, |
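Both Levenshtein variants can be sketched with a single dynamic program parameterized by the substitution cost (1 for LEV1, 2 for LEV2); the helper names are illustrative:

```python
def edit_distance(a: str, b: str, sub_cost: int = 1) -> int:
    """Levenshtein distance; sub_cost=2 treats a substitution as an
    addition plus a deletion (the LEV2 variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else sub_cost
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # match / substitution
        prev = curr
    return prev[-1]

def lev1_score(mwe_t: str, component_t: str) -> float:
    """1 - LEV1 normalized by the longer string."""
    denom = max(len(mwe_t), len(component_t)) or 1
    return 1 - edit_distance(mwe_t, component_t) / denom

def lev2_score(mwe_t: str, component_t: str) -> float:
    """1 - LEV2 normalized by the summed lengths."""
    denom = (len(mwe_t) + len(component_t)) or 1
    return 1 - edit_distance(mwe_t, component_t, sub_cost=2) / denom
```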
|
{ |
|
"text": "This method is based on the Needleman-Wunsch algorithm, 8 and was developed to locally-align two protein sequences (Smith and Waterman, 1981) . It finds the optimal similar regions by maximizing the number of matches and minimizing the number of gaps necessary to align the two sequences. For example, the optimal local sequence for the two sequences below is AT\u2212\u2212ATCC, in which \"-\" indicates a gap:", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 141, |
|
"text": "(Smith and Waterman, 1981)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Smith Waterman (SW)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "8 The Needleman-Wunsch (NW) algorithm, was designed to align two sequences of amino-acids (Needleman and Wunsch, 1970) . The algorithm looks for the sequence alignment which maximizes the similarity. As with the LEV score, NW minimizes edit distance, but also takes into account character-tocharacter similarity based on the relative distance between characters on the keyboard. We exclude this score, because it is highly similar to the LEV scores, and we did not obtain encouraging results using NW in our preliminary experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 118, |
|
"text": "(Needleman and Wunsch, 1970)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Smith Waterman (SW)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As the example shows, it looks for the longest common string but has an in-built mechanism for including gaps in the alignment (with penalty). This characteristic of SW might be helpful in our task, because there may be morphophonological variations between the MWE and component translations (as seen above in the public service example). We normalize SW similarly to LCS: len(alignedSequence) min(len(MWE t ), len(component t ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Seq1: ATGCATCCCATGAC Seq2: TCTATATCCGT", |
|
"sec_num": null |
|
}, |
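A minimal Smith-Waterman sketch follows, using assumed scoring parameters (match +1, mismatch and gap −1) and a simplified representation of the aligned sequence; it then applies the normalization above. Names and parameters are illustrative, not from the paper:

```python
def sw_align(a: str, b: str, match: int = 1, mismatch: int = -1, gap: int = -1) -> str:
    """Smith-Waterman local alignment; returns the aligned region of a,
    with '-' marking gaps against b (a simplification)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Trace back from the best-scoring cell until the score drops to 0.
    aligned, (i, j) = [], best_pos
    while i > 0 and j > 0 and H[i][j] > 0:
        diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
        if H[i][j] == diag:
            aligned.append(a[i - 1]); i -= 1; j -= 1
        elif H[i][j] == H[i - 1][j] + gap:
            aligned.append(a[i - 1]); i -= 1
        else:
            aligned.append('-'); j -= 1
    return ''.join(reversed(aligned))

def sw_score(mwe_t: str, component_t: str) -> float:
    """len(alignedSequence) / min(len, len), capped at 1 since the
    gapped alignment can be longer than the shorter string."""
    if not mwe_t or not component_t:
        return 0.0
    return min(1.0, len(sw_align(mwe_t, component_t)) /
               min(len(mwe_t), len(component_t)))
```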
|
{ |
|
"text": "Given the scores calculated by the aforementioned string similarity measures between the translations for a given component word and the MWE, we need some way of combining scores across component words. 9 First, we measure the compositionality of each component within the MWE (s 1 and s 2 ):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "s 1 = f 1 (sim 1 (w 1 , MWE ), ..., sim i (w 1 , MWE )) s 2 = f 1 (sim 1 (w 2 , MWE ), ..., sim i (w 2 , MWE ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where sim is a string similarity measure, sim i indicates that the calculation is based on translations in language i, and f 1 is a score combination function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Then, we compute the overall compositionality of the MWE (s 3 ) from s 1 and s 2 using f 2 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "s 3 = f 2 (s 1 , s 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Since we often have multiple translations for a given component word/MWE in Panlex, we exhaustively compute the similarity between each MWE translation and component translation, and use the highest similarity as the result of sim i . If an instance does not have a direct/indirect translation in Panlex, we assign a default value, which is the mean of the highest and lowest annotation score (2.5 for REDDY and 0.5 for BANNARD). Note that word order is not an issue in our method, as we calculate the similarity independently for each MWE component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
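The max-over-translation-pairs step, with the default score for missing translations, can be sketched as follows; `difflib.SequenceMatcher`'s ratio stands in here for whichever string similarity measure is used, and the function name is illustrative:

```python
from difflib import SequenceMatcher

def sim_language(mwe_translations, component_translations, default=0.5):
    """Highest similarity over all (MWE translation, component translation)
    pairs in one language; `default` is the fallback when no direct or
    indirect translation exists (0.5 for BANNARD; 2.5 on the 0-5 REDDY
    scale before rescaling)."""
    if not mwe_translations or not component_translations:
        return default
    return max(SequenceMatcher(None, m, c).ratio()
               for m in mwe_translations for c in component_translations)
```

On the Persian example above, the component translation tasmim scores much higher against tasmim gereftan than the kick the bucket components score against their MWE translation.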
|
{ |
|
"text": "In this research, we consider simple functions for f 1 such as mean, median, product, min and max. was selected to be the same as f 1 in all situations, except when we use mean for f 1 . Here, following Reddy et al. 2011, we experimented with weighted mean:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "f 2 (s 1 , s 2 ) = \u03b1s 1 + (1 \u2212 \u03b1)s 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
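The two-stage combination can be sketched with mean as f 1 and the weighted mean as f 2; the score vectors are toy numbers, and the default α is the 0.7 tuned for REDDY:

```python
def f1_mean(per_language_scores):
    """f_1: combine one component's similarity scores across languages."""
    return sum(per_language_scores) / len(per_language_scores)

def f2_weighted(s1, s2, alpha=0.7):
    """f_2: overall MWE compositionality from the two component scores."""
    return alpha * s1 + (1 - alpha) * s2

# Toy per-language similarities for the two components of one MWE.
s1 = f1_mean([0.8, 0.6, 0.7])
s2 = f1_mean([0.2, 0.1, 0.3])
s3 = f2_weighted(s1, s2)
```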
|
{ |
|
"text": "Based on 3-fold cross validation, we chose \u03b1 = 0.7 for REDDY. 10 Since we do not have judgements for the compositionality of the full VPC in BANNARD (we instead have separate judgements for the verb and particle), we cannot use f 2 for this dataset. observed that nearly all of the verb-compositional instances were also annotated as particle-compositional by the annotators. In line with this observation, we use s 1 (based on the verb) as the compositionality score for the full VPC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 64, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computational Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our method is based on the translation of an MWE into many languages. In the first stage, we chose 54 languages for which relatively large corpora were available. 11 The coverage, or the number of instances which have direct/indirect translations in Panlex, varies from one language to another. In preliminary experiments, we noticed that there is a high correlation (about 0.50 for BANNARD and Table 4 : The 10 best languages for the particle component of BANNARD using LCS. about 0.80 for REDDY) between the usefulness of a language and its translation coverage on MWEs. Therefore, we excluded languages with MWE translation coverage of less than 50%. Based on nested 10-fold cross validation in our experiments, we select the 10 most useful languages for each crossvalidation training partition, based on the Pearson correlation between the given scores in that language and human judgements. 12 The 10 best languages are selected based only on the training set for each fold. (The languages selected for each fold will later be used to predict the compositionality of the items in the testing portion for that fold.) In Tables 2, 3 f 1 sim() N1 and 4, we show how often each language was selected in the top-10 languages over the combined 100 (10\u00d710) folds of nested 10-fold cross validation, based on LCS. 13 The tables show that the selected languages were mostly consistent over the folds. The languages are a mixture of Romance, Germanic and languages from other families (based on Voegelin and Voegelin (1977)), with no standout language which performs well in all cases (indeed, no language occurs in all three tables). Additionally, there is nothing in common between the verb and the particle top-10 languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1311, |
|
"end": 1313, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 402, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1124, |
|
"end": 1135, |
|
"text": "Tables 2, 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Selection", |
|
"sec_num": "7" |
|
}, |
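The per-fold language ranking can be sketched as follows; `pearson` is a minimal implementation of Pearson's r, and the language codes and score vectors are toy data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def top_languages(scores_by_lang, gold, k=10):
    """Rank languages by correlation of their similarity scores with the
    human judgements on the training fold, keeping the best k."""
    ranked = sorted(scores_by_lang,
                    key=lambda lang: pearson(scores_by_lang[lang], gold),
                    reverse=True)
    return ranked[:k]
```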
|
|
{ |
|
"text": "As mentioned before, we perform nested 10-fold cross-validation to select the 10 best languages on the training data for each fold. The selected languages for a given fold are then used to compute s 1 Table 6 : Correlation on BANNARD (VPC), based on the best-10 languages for the verb and particle individually and s 2 (and s 3 for NCs) for each instance in the test set for that fold. The scores are compared with human judgements using Pearson's correlation. The results are shown in Tables 5 and 6 . Among the five functions we experimented with for f 1 , Mean performs much more consistently than the others. Median is less prone to noise, and therefore performs better than Prod, Max and Min, but it is still worse than Mean. For the most part, LCS and SW perform better than the other measures. There is little to separate these two methods, partly because they both look for a sequence of similar characters, unlike LEV1 and LEV2 which do not consider contiguity of match.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 208, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 500, |
|
"text": "Tables 5 and 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The results support our hypothesis that using multiple target languages rather than one, results in a more accurate prediction of MWE compositionality. Our best result using the 10 selected languages on REDDY is 0.649, as compared to the best singlelanguage correlation of 0.497 for Portuguese. On BANNARD, the best LCS result for the verb component is 0.406, as compared to the best single-language correlation of 0.350 for Lithuanian. Reddy et al. (2011) reported a correlation of 0.714 on REDDY. Our best correlation is 0.649. Note that Reddy et al. (2011) base their method on identification of MWEs in a corpus, thus requiring MWEspecific identification. Given that this has been shown to be difficult for MWE types including English VPCs (McCarthy et al., 2003; Baldwin, 2005) , the fact that our method is as competitive as this is highly encouraging, especially when you consider that it can equally be applied to different types of MWEs in other languages. Moreover, the computational processing required by methods based on distributional similarity is greater than our method, as it does not require processing a large corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 456, |
|
"text": "Reddy et al. (2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 559, |
|
"text": "Reddy et al. (2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 731, |
|
"end": 767, |
|
"text": "English VPCs (McCarthy et al., 2003;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 782, |
|
"text": "Baldwin, 2005)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Finally, we experimented with combining our method (STRINGSIM MEAN ) with a reimplementation of the method of Reddy et al. (2011) , based on simple averaging, as detailed in Table 7 . The results are higher than both component methods and the stateof-the-art for REDDY, demonstrating the complementarity between our proposed method and methods based on distributional similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 129, |
|
"text": "Reddy et al. (2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 181, |
|
"text": "Table 7", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In Table 8 , we compare our results (STRINGSIM MEAN ) with those of , who interpreted the dataset as a binary classification task. The dataset used in their study is a subset of BANNARD, containing 40 VPCs, of which 29 (72%) were verb compositional and 23 (57%) were particle compositional. By applying a threshold of 0.5 over the output of our regression model, we binarize the VPCs into the compositional and non-compositional classes. According to the results shown in Table 6 , LCS is a better similarity measure for this task. Our proposed method has higher results than the best results of , in part due to their reliance on VPC identification, and the low recall on the task, as reported in the paper. Our proposed method does not rely on a corpus or MWE identification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 479, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We analyse items in REDDY which have a high difference (more than 2.5) between the human annotation and our scores (using LCS and Mean). The words are cutting edge, melting pot, gold mine and ivory tower, which are non-compositional accord-ing to REDDY. After investigating their translations, we came to the conclusion that the first three MWEs have word-for-word translations in most languages. Hence, they disagree with our hypothesis that wordfor-word translation is a strong indicator of compositionality. The word-for-word translations might be because of the fact that they have both compositional and non-compositional senses, or because they are calques (loan translations). However, we have tried to avoid such problems with calques by using translations into several languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "For ivory tower (\"a state of mind that is discussed as if it were a place\") 14 we noticed that we have a direct translation into 13 languages. Other languages have indirect translations. By checking the direct translations, we noticed that, in French, the MWE is translated to tour and tour d'ivoire. A noisy (wrong) translation of tour \"tower\" resulted in wrong indirect translations for ivory tower and an inflated estimate of compositionality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "In this study, we proposed a method to predict MWE compositionality based on the translation of the MWE and its component words into multiple languages. We used string similarity measures between the translations of the MWE and each of its components to predict the relative degree of compositionality. Among the four similarity measures that we experimented with, LCS and SW were found to be superior to edit distance-based methods. Our best results were found to be competitive with state-of-theart results using vector-based approaches, and were also shown to complement state-of-the-art methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "10" |
|
}, |
|
{ |
|
"text": "In future work, we are interested in investigating whether alternative ways of combining our proposed method with vector-based models can lead to further enhancements in results. These models could be especially effective when comparing translations which are roughly synonymous but not string-wise similar. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "10" |
|
}, |
|
{ |
|
"text": "Precision Recall F-score (\u03b2 = 1) Accuracy 0.608 0.666 0.636 0.600 STRINGSIM MEAN 0.862 0.718 0.774 0.693 Table 8 : Results for the classification task. STRINGSIM MEAN is our method using Mean for f 1 NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 112, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The example is taken from http://www. thefreedictionary.com", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This definition is from WordNet 3.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The IBM models (Brown et al., 1993), e.g., are not bidirectional, which means that the alignments are affected by the alignment direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://lcl.uniroma1.it/babelnet/ 5 http://www.wiktionary.org/ 6 http://translate.google.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that the Persian words are transliterated into English for ease of understanding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that in all experiments we only combine scores given by the same string similarity measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We considered values of \u03b1 from 0 to 1, incremented by 0.1.11 In future work, we intend to look at the distribution of translations of the given MWE and its components in corpora for many languages. The present method does not rely on the availability of large corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that for VPCs, we calculate the compositionality of only the verb part, because we don't have the human judgements for the whole VPC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since our later results show that LCS and SW have higher results, we only show the best languages using LCS. These largely coincide with those for SW.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This definition is from Wordnet 3.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Timothy Baldwin, Su Nam Kim, and the anonymous reviewers for their valuable comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Identification and treatment of multiword expressions applied to information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Aline", |
|
"middle": [], |
|
"last": "Otavio Costa Acosta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Villavicencio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Viviane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Moreira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the ALC Workshop on MWEs: from Parsing and Generation to the Real World", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Otavio Costa Acosta, Aline Villavicencio, and Viviane P Moreira. 2011. Identification and treatment of multi- word expressions applied to information retrieval. In Proceedings of the ALC Workshop on MWEs: from Parsing and Generation to the Real World (MWE 2011), pages 101-109.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Multiword expressions", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su", |
|
"middle": [ |
|
"Nam" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Handbook of Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin and Su Nam Kim. 2009. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing. CRC Press, Boca Raton, USA, 2nd edition.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An empirical model of multiword expression decomposability", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takaaki", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Widdows", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the ACL-2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An empirical model of multiword expression decomposability. In Proceed- ings of the ACL-2003 Workshop on Multiword Expres- sions: Analysis, Acquisition and Treatment, pages 89- 96, Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Panlex and lextract: Translating all words of all languages of the world", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Pool", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Colowick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin, Jonathan Pool, and Susan M Colow- ick. 2010. Panlex and lextract: Translating all words of all languages of the world. In Proceedings of the 23rd International Conference on Computational Lin- guistics: Demonstrations, pages 37-40.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The deep lexical acquisition of English verb-particle constructions", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computer Speech and Language, Special Issue on Multiword Expressions", |
|
"volume": "19", |
|
"issue": "4", |
|
"pages": "398--414", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin. 2005. The deep lexical acquisition of English verb-particle constructions. Computer Speech and Language, Special Issue on Multiword Expres- sions, 19(4):398-414.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The hare and the tortoise: Speed and reliability in translation retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Machine Translation", |
|
"volume": "23", |
|
"issue": "4", |
|
"pages": "195--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin. 2009. The hare and the tortoise: Speed and reliability in translation retrieval. Machine Translation, 23(4):195-240.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A statistical approach to the semantics of verbparticles", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Bannard, Timothy Baldwin, and Alex Lascarides. 2003. A statistical approach to the semantics of verb- particles. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment-Volume 18, pages 65-72.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Acquiring Phrasal Lexicons from Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [ |
|
"James" |
|
], |
|
"last": "Bannard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin James Bannard. 2006. Acquiring Phrasal Lexicons from Corpora. Ph.D. thesis, University of Edinburgh.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Alignment-based extraction of multiword expressions. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Helena", |
|
"middle": [], |
|
"last": "Medeiros De Caseli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Ramisch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "59--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helena Medeiros de Caseli, Carlos Ramisch, Maria das Gra\u00e7as Volpe Nunes, and Aline Villavicencio. 2010. Alignment-based extraction of multiword expressions. Language Resources and Evaluation, 44(1):59-77.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Distinguishing subtypes of multiword expressions using linguistically-motivated statistical measures", |
|
"authors": [ |
|
{ |
|
"first": "Afsaneh", |
|
"middle": [], |
|
"last": "Fazly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suzanne", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the ACL 2007 Workshop on A Broader Perspective on Multiword Expressions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Afsaneh Fazly and Suzanne Stevenson. 2007. Dis- tinguishing subtypes of multiword expressions using linguistically-motivated statistical measures. In Pro- ceedings of the ACL 2007 Workshop on A Broader Per- spective on Multiword Expressions, pages 9-16.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised type and token identification of idiomatic expressions", |
|
"authors": [ |
|
{ |
|
"first": "Afsaneh", |
|
"middle": [], |
|
"last": "Fazly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suzanne", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computational Linguistics", |
|
"volume": "35", |
|
"issue": "1", |
|
"pages": "61--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Detecting compositionality of english verb-particle constructions using semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "Nam", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 7th Meeting of the Pacific Association for Computational Linguistics (PACLING 2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Nam Kim and Timothy Baldwin. 2007. Detecting compositionality of english verb-particle constructions using semantic similarity. In Proceedings of the 7th Meeting of the Pacific Association for Computational Linguistics (PACLING 2007), pages 40-48.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic identification of noncompositional phrases", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "317--324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin. 1999. Automatic identification of non- compositional phrases. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 317- 324.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Detecting a continuum of compositionality in phrasal verbs", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment-Volume 18, pages 73-80.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Detecting compositionality of verbobject combinations using selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sriram", |
|
"middle": [], |
|
"last": "Venkatapathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind K", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "369--379", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy, Sriram Venkatapathy, and Aravind K Joshi. 2007. Detecting compositionality of verb- object combinations using selectional preferences. In Proceedings of the 2007 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP- CoNLL), pages 369-379.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic discovery of noncompositional compounds in parallel data", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Fifth Workshop on Very Large Corpora. EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 1997. Automatic discovery of non- compositional compounds in parallel data. In Pro- ceedings of the Fifth Workshop on Very Large Cor- pora. EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Identifying idiomatic expressions using automatic word-alignment", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Begona Villada Moir\u00f3n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the EACL 2006 Workshop on Multi-wordexpressions in a multilingual context", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Begona Villada Moir\u00f3n and J\u00f6rg Tiedemann. 2006. Identifying idiomatic expressions using automatic word-alignment. In Proceedings of the EACL 2006 Workshop on Multi-wordexpressions in a multilingual context, pages 33-40.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Babelnet: Building a very large multilingual semantic network", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2010. Ba- belnet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 216-225, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Saul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Needleman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wunsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Journal of molecular biology", |
|
"volume": "48", |
|
"issue": "3", |
|
"pages": "443--453", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saul B Needleman and Christian D Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443-453.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An empirical study on compositionality in compound nouns", |
|
"authors": [ |
|
{ |
|
"first": "Siva", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "210--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in com- pound nouns. In Proceedings of IJCNLP, pages 210- 218.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Multiword expressions: A pain in the neck for nlp", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Sag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bond", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 3rd International Conference on Intelligent Text Processing Computational Linguistics (CICLing-2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Sag, Timothy Baldwin, Francis Bond, Ann Copes- take, and Dan Flickinger. 2002. Multiword ex- pressions: A pain in the neck for nlp. In Proceed- ings of the 3rd International Conference on Intelligent Text Processing Computational Linguistics (CICLing- 2002), pages 189-206. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Automatic identification of Persian light verb constructions", |
|
"authors": [ |
|
{ |
|
"first": "Bahar", |
|
"middle": [], |
|
"last": "Salehi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Narjes", |
|
"middle": [], |
|
"last": "Askarian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afsaneh", |
|
"middle": [], |
|
"last": "Fazly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th International Conference on Intelligent Text Processing Computational Linguistics (CICLing-2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bahar Salehi, Narjes Askarian, and Afsaneh Fazly. 2012. Automatic identification of Persian light verb con- structions. In Proceedings of the 13th International Conference on Intelligent Text Processing Computa- tional Linguistics (CICLing-2012), pages 201-210.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Is knowledgefree induction of multiword unit dictionary headwords a solved problem", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Schone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 6th Conference on Empirical Methods in Natural Language Processing (EMNLP 2001)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Schone and Dan Jurafsky. 2001. Is knowledge- free induction of multiword unit dictionary headwords a solved problem. In Proceedings of the 6th Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP 2001), pages 100-108.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Identification of common molecular subsequences", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Waterman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "Journal of Molecular Biology", |
|
"volume": "147", |
|
"issue": "", |
|
"pages": "195--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "TF Smith and MS Waterman. 1981. Identification of common molecular subsequences. Journal of Molecular Biology, 147:195-197.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Panlingual lexical translation via probabilistic inference", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kobi", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Skinner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Sammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Artificial Intelligence", |
|
"volume": "174", |
|
"issue": "9", |
|
"pages": "619--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Soderland, Oren Etzioni, Daniel S Weld, Kobi Reiter, Michael Skinner, Marcus Sammer, Jeff Bilmes, et al. 2010. Panlingual lexical translation via probabilistic inference. Artificial Intelligence, 174(9):619-637.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Using information about multi-word expressions for the word-alignment task", |
|
"authors": [ |
|
{ |
|
"first": "Sriram", |
|
"middle": [], |
|
"last": "Venkatapathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sriram Venkatapathy and Aravind K Joshi. 2006. Using information about multi-word expressions for the word-alignment task. In Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties, pages 20-27.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Classification and index of the world's languages", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"Frederick" |
|
], |
|
"last": "Voegelin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florence", |
|
"middle": [ |
|
"Marie" |
|
], |
|
"last": "Voegelin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Frederick Voegelin and Florence Marie Voegelin. 1977. Classification and index of the world's languages, volume 4. Elsevier Science Ltd.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Schematic of our proposed method", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "English MWEs and their components, with their translations in Persian. Direct matches between the translation of an MWE and its components are shown in bold; partial matches are underlined.", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "The 10 best languages for REDDY using LCS.", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">: The 10 best languages for the verb component</td></tr><tr><td colspan=\"2\">of BANNARD using LCS.</td><td/></tr><tr><td/><td>VPC:particle</td><td/></tr><tr><td>Language</td><td colspan=\"2\">Frequency Family</td></tr><tr><td>French</td><td>100</td><td>Romance</td></tr><tr><td>Icelandic</td><td>100</td><td>Germanic</td></tr><tr><td>Thai</td><td>100</td><td>Kam-thai</td></tr><tr><td>Indonesian</td><td>92</td><td>Indonesian</td></tr><tr><td>Spanish</td><td>90</td><td>Romance</td></tr><tr><td>Tamil</td><td>87</td><td>Dravidian</td></tr><tr><td>Turkish</td><td>83</td><td>Turkic</td></tr><tr><td>Catalan</td><td>79</td><td>Romance</td></tr><tr><td>Occitan</td><td>76</td><td>Romance</td></tr><tr><td>Romanian</td><td>69</td><td>Romance</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Correlation on REDDY (NCs). N1, N2, and NC are the first component of the noun compound, its second component, and the noun compound itself, respectively.", |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>sim()</td><td>STRINGSIM MEAN</td><td>STRINGSIM MEAN + Reddy et al.</td></tr><tr><td>SW</td><td>0.637</td><td>0.735</td></tr><tr><td>LCS</td><td>0.649</td><td>0.742</td></tr><tr><td>LEV1</td><td>0.523</td><td>0.724</td></tr><tr><td>LEV2</td><td>0.577</td><td>0.726</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Correlation after combining Reddy et al.'s method and our method with the mean for f1 (STRINGSIM MEAN). The correlation using Reddy et al.'s method alone is 0.714.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |