{
"paper_id": "R13-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:49.907784Z"
},
"title": "Capturing Anomalies in the Choice of Content Words in Compositional Distributional Semantic Space",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Kochmar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work, we present a new task for testing compositional distributional semantic models. Recently, there has been a spate of research into how distributional representations of individual words can be combined to represent the meaning of phrases. Vecchi et al. (2011) have shown that some compositional models, including the additive and multiplicative models of Mitchell and Lapata (2008; 2010) and the linear map-based model of Baroni and Zamparelli (2010), can be applied to detect semantically anomalous adjectivenoun combinations. We extend their experiments and apply these models to the combinations extracted from texts written by learners of English. Our work contributes to the field of compositional distributional semantics by introducing a new test paradigm for semantic models and shows how these models can be used for error detection in language learners' content word combinations.",
"pdf_parse": {
"paper_id": "R13-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work, we present a new task for testing compositional distributional semantic models. Recently, there has been a spate of research into how distributional representations of individual words can be combined to represent the meaning of phrases. Vecchi et al. (2011) have shown that some compositional models, including the additive and multiplicative models of Mitchell and Lapata (2008; 2010) and the linear map-based model of Baroni and Zamparelli (2010), can be applied to detect semantically anomalous adjectivenoun combinations. We extend their experiments and apply these models to the combinations extracted from texts written by learners of English. Our work contributes to the field of compositional distributional semantics by introducing a new test paradigm for semantic models and shows how these models can be used for error detection in language learners' content word combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Vector-based (distributional) models are widely used for representing the meaning of single words. They rely on the assumption that word meaning can be learned from the linguistic environment and can be approximated by a word's distribution across contexts. Words are represented as vectors in a high-dimensional space, with vector dimensions encoding word co-occurrence with contextual elements -other words within a local window, words linked by specific dependencies to the target word, and so forth. Distributional models provide a clear basis for interpreting word meaning, as well as a simple means for measuring semantic similarity. These properties have been exploited in many NLP tasks, including automatic thesaurus extraction (Grefenstette, 1994) , word sense induction (Sch\u00fctze, 1998) and disambiguation (McCarthy et al., 2004) , collocation extraction (Schone and Jurafsky, 2001 ) and others.",
"cite_spans": [
{
"start": 737,
"end": 757,
"text": "(Grefenstette, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 781,
"end": 796,
"text": "(Sch\u00fctze, 1998)",
"ref_id": "BIBREF21"
},
{
"start": 801,
"end": 839,
"text": "disambiguation (McCarthy et al., 2004)",
"ref_id": null
},
{
"start": 865,
"end": 891,
"text": "(Schone and Jurafsky, 2001",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to single words, the distribution of phrases cannot be used as a reliable approximation of their meaning, as phrase vectors are much sparser. Irrespective of the size of the corpus considered, some content word combinations will remain unattested as a consequence of their Zipf-like distributions. For example, Vecchi et al. (2011) have shown that both semantically acceptable and semantically deviant word combinations will be absent from large English corpora. A promising alternative is to use compositional models which combine distributional vectors for the component words in some way, for example, using a direct vector combination function (Kintsch, 2001; Mitchell and Lapata, 2008; Erk and Pad\u00f3, 2008; Thater et al., 2010) or linear transformations on vectors (Baroni and Zamparelli, 2010) .",
"cite_spans": [
{
"start": 323,
"end": 343,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
},
{
"start": 660,
"end": 675,
"text": "(Kintsch, 2001;",
"ref_id": null
},
{
"start": 676,
"end": 702,
"text": "Mitchell and Lapata, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 703,
"end": 722,
"text": "Erk and Pad\u00f3, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 723,
"end": 743,
"text": "Thater et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 781,
"end": 810,
"text": "(Baroni and Zamparelli, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In spite of the spate of recent work in this area, the question of how to combine word representations is far from answered. Compositional models can be assessed by their ability both to provide a solid theoretical basis for meaning composition and to represent composite meaning for relevant practical tasks. Promising results have been shown with such models on similarity detection and paraphrase ranking (Mitchell and Lapata, 2008; Erk and Pad\u00f3, 2008; Thater et al., 2010) , adjectivenoun vector prediction (Baroni and Zamparelli, 2010) and semantic anomaly detection (Vecchi et al., 2011) . Of these tasks, the latter appears to be particularly challenging since it addresses the ability of compositional models to account for linguistic productivity.",
"cite_spans": [
{
"start": 408,
"end": 435,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 436,
"end": 455,
"text": "Erk and Pad\u00f3, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 456,
"end": 476,
"text": "Thater et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 511,
"end": 540,
"text": "(Baroni and Zamparelli, 2010)",
"ref_id": "BIBREF2"
},
{
"start": 572,
"end": 593,
"text": "(Vecchi et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "No corpus can effectively sample all possible content word combinations. On the other hand, some corpus-attested word combinations may appear semantically deviant when considered out of context (for example, when they are used metaphorically). Vecchi et al. (2011) have focused on unattested adjective-noun (AN) combinations and noted that if a combination does not occur in a corpus, it may be due to various reasons including data sparsity as well as nonsensicality. The task of distinguishing between the two cases is challenging. Vecchi et al. use the following examples:",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. blue rose b. residential steak",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whereas both may well be unattested in a corpus, the concept of blue rose is perfectly conceivable while that of residential steak is nonsensical and only interpretable in specifically-constructed discourse contexts. Vecchi et al. argue that there should be a detectable difference between the model-generated representations for the semantically deviant combinations and those for the acceptable ones, and assess compositional models by their ability to capture this difference. Vecchi et al. have created a set of corpus-unattested AN combinations, annotated them as semantically acceptable or deviant, and applied the additive (add) and multiplicative (mult) models of Mitchell and Lapata (2008) and adjective-specific linear maps (alm) of Baroni and Zamparelli (2010) .",
"cite_spans": [
{
"start": 217,
"end": 230,
"text": "Vecchi et al.",
"ref_id": null
},
{
"start": 672,
"end": 698,
"text": "Mitchell and Lapata (2008)",
"ref_id": "BIBREF16"
},
{
"start": 743,
"end": 771,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given that promising results have been obtained in their experiments, we propose that a useful extension of this task is to test the compositional models on errors in content word combinations extracted from texts written by learners of English. This task provides a natural setting for testing semantic models on genuine examples and is a potential practical application for such models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language learners' errors are diverse, but many of them can naturally be explained in terms of nonproductive, semantically anomalous combination of content words (Leacock et al., 2010) . Learners may lack robust intuitions about words' selectional preferences and subtle differences in meaning, so they may confuse near-synonyms, overuse words with broad meaning, and otherwise choose words inappropriately. Consider the following examples extracted from our data:",
"cite_spans": [
{
"start": 162,
"end": 184,
"text": "(Leacock et al., 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) a. * big importance vs great importance b. * economical crisis vs economic crisis c. * deep regards vs kind regards d. best moment vs best time",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These examples illustrate that learner errors can often be explained by confusions stemming from similar meaning (2a) or form (2b). When a word combination appears to be nonsensical as in 2c, the words chosen might still be related to the appropriate ones in the learner's mental lexicon. We recognise that although error detection in learners' content word combinations is a natural extension to semantic anomaly detection, it also poses additional difficulties that semantic models might not be able to deal with. For example, some erroneous word combinations may not be completely devoid of compositional meaning, while violating language conventions. However, semantic models might still be able to capture some of these conventions. Another challenge is that some expressions cannot be unambiguously classified as either correct or incorrect, as their interpretation depends on the context of use: best moment (2d) is appropriate when used to denote a short period of time, but it is often incorrectly used by learners instead of best time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To make our work comparable with previous work on semantic anomaly, we investigate AN combinations extracted from texts written by nonnative speakers of English, and apply the add, mult and alm models of semantic composition. The main contributions of this work are to show that error detection in content word combinations provides a natural testbed and useful application for the compositional distributional models, and that the results obtained on this task provide a more natural estimate of the models' performance than ones based on artificially constructed examples. If the compositional distributional models can distinguish between correct and incorrect content word combinations, these models can then be used for writing or pedagogical assistance. To the best of our knowledge, this is the first attempt to handle learner errors in the choice of content words using compositional distributional semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Plan of the paper. We overview related work on error detection and discuss the three models of semantic composition in Section 2. Section 3 presents the data and experimental setup. We discuss the results of our experiments in Section 4 and conclude in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on error detection has mostly been concerned with function words, such as determiners and prepositions (Leacock et al., 2010; Dale et al., 2012) . Such errors are more frequent, but they are also more systematic which makes them easier to detect. Function words constitute a closed class, so the set of possible corrections is also limited. By comparison, errors in content word combinations pose a bigger challenge. Since content words primarily express meaning rather than encode syntax, detection and correction of such errors depend on a system's ability, in the limit, to recognise the communicative intent of the writer. Moreover, the set of possible corrections is much larger than for function words.",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Leacock et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 135,
"end": 153,
"text": "Dale et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Detection in Content Words",
"sec_num": "2.1"
},
{
"text": "Previous work has either focused on correction alone assuming that errors are already detected (Liu et al., 2009; Dahlmeier and Ng, 2011) , or has reformulated the task as writing improvement (Shei and Pain, 2000; Wible et al., 2003; Chang et al., 2008; Futagi et al., 2008; Park et al., 2008; Yi et al., 2008) . In the former case error detection, which is a difficult task in itself, is not addressed, while in the latter case it is integrated into that of suggesting alternatives according to some metric (for example, frequency or mutual information). In some cases, a database of typical errors in word combinations is collected from learner texts and suggestions are only made for these errorprone combinations. Otherwise suggestions will be made for many acceptable phrases.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Liu et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 114,
"end": 137,
"text": "Dahlmeier and Ng, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 192,
"end": 213,
"text": "(Shei and Pain, 2000;",
"ref_id": "BIBREF22"
},
{
"start": 214,
"end": 233,
"text": "Wible et al., 2003;",
"ref_id": "BIBREF25"
},
{
"start": 234,
"end": 253,
"text": "Chang et al., 2008;",
"ref_id": null
},
{
"start": 254,
"end": 274,
"text": "Futagi et al., 2008;",
"ref_id": null
},
{
"start": 275,
"end": 293,
"text": "Park et al., 2008;",
"ref_id": "BIBREF19"
},
{
"start": 294,
"end": 310,
"text": "Yi et al., 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Detection in Content Words",
"sec_num": "2.1"
},
{
"text": "In this work, we treat error detection in the choice of content words as an independent task and assess the ability of compositional distributional models to discriminate incorrect from correct AN combinations -a frequent source of error in learner texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Detection in Content Words",
"sec_num": "2.1"
},
{
"text": "In the additive and multiplicative compositional models of Mitchell and Lapata (2008; , the components of the composite vector are obtained by component-wise operations applied to the word vectors. If c is a word combination vector and a and b are word vectors, then c's i-th component is the sum of the i-th components of a and b for the add model:",
"cite_spans": [
{
"start": 59,
"end": 85,
"text": "Mitchell and Lapata (2008;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = a i + b i",
"eq_num": "(1)"
}
],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
{
"text": "and the product of the corresponding components for the mult model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = a i b i",
"eq_num": "(2)"
}
],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
{
"text": "An advantage of using these models is that they provide a clear and simple interpretation of vector composition, requiring no training or tuning. They have also been shown to be promising models of composition in a number of NLP tasks, including semantic anomaly detection (Vecchi et al., 2011) . However, the principal weakness of these models is that they use commutative operations, and therefore fail to represent the difference in the grammatical function of the component words, their order, and \"headedness\". For example, these models would produce the same composite vectors for component vector and vector component.",
"cite_spans": [
{
"start": 273,
"end": 294,
"text": "(Vecchi et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
{
"text": "In addition, the add model does not take \"incompatibility\" of constituent vectors along individual dimensions into account. If one vector has a high value in its i-th dimension while another vector has 0, the composed vector will receive the high value from the first input vector, even though, intuitively, this dimension should get 0 or near-0 value. This problem does not arise with the mult model. On the other hand, the mult model is heavily biased towards dimensions with high values in both input vectors (Baroni et al., 2012) .",
"cite_spans": [
{
"start": 512,
"end": 533,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composition by Component-wise Operations",
"sec_num": "2.2"
},
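To make the component-wise operations concrete, here is a minimal Python/NumPy sketch of the add and mult composition functions of Eqs. (1) and (2); it is ours, not part of the paper, and the toy vectors and function names are invented for illustration. It reproduces the two behaviours discussed above: how add and mult treat "incompatible" dimensions, and the order-blindness caused by commutativity.

```python
import numpy as np

def compose_add(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """add model: c_i = a_i + b_i (Eq. 1)."""
    return a + b

def compose_mult(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """mult model: c_i = a_i * b_i (Eq. 2)."""
    return a * b

# Toy vectors: the adjective loads on dimension 0 only, the noun on dimension 1.
adj = np.array([5.0, 0.0, 1.0])
noun = np.array([0.0, 4.0, 1.0])

# add keeps the high values of "incompatible" dimensions (5.0 and 4.0 survive),
# whereas mult zeroes them out, as discussed above.
print(compose_add(adj, noun))   # [5. 4. 2.]
print(compose_mult(adj, noun))  # [0. 0. 1.]

# Both operations are commutative, so word order ("component vector" vs
# "vector component") is lost.
assert np.allclose(compose_add(adj, noun), compose_add(noun, adj))
assert np.allclose(compose_mult(adj, noun), compose_mult(noun, adj))
```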
{
"text": "The adjective-specific linear maps of Baroni and Zamparelli (2010) take the grammatical functions of the words within a combination into account. Focusing on AN combinations, they try to model the fact that adjectives modify nouns and the resulting combination is nominal. They note that the meaning of nouns can be represented with their distributional vectors, but the meaning of attributive adjectives cannot be fully captured by their distribution alone: for example, new in new friend is not the same as new in new shoes. The meaning of the adjective new is defined through its application to the denotations of the nouns. Therefore, Baroni and Zamparelli (2010) suggest treating adjectives as distributional functions that map between semantic vectors representing nouns to ones representing AN combinations. Within this approach, adjectives are represented with weight matrices. The composition is defined by matrix-by-vector multiplication as follows:",
"cite_spans": [
{
"start": 38,
"end": 66,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF2"
},
{
"start": 639,
"end": 667,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Functions and Linear Maps",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (noun) = def F \u00d7 a = b",
"eq_num": "(3)"
}
],
"section": "Distributional Functions and Linear Maps",
"sec_num": "2.3"
},
{
"text": "where F is the matrix representing an adjective and encoding function f, which maps the input noun vector a to the output AN vector b. The ij-th cell of the matrix contains the weight determining how much the component corresponding to the jth context element in the noun vector contributes to the value assigned to the i-th context element in the AN vector (Baroni et al., 2012) . These weights are estimated separately for each adjective from all corpus-observed noun-AN vector pairs using (multivariate) partial least squares regression.",
"cite_spans": [
{
"start": 358,
"end": 379,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Functions and Linear Maps",
"sec_num": "2.3"
},
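As an illustration of Eq. (3), the following toy sketch (ours; the matrix values, dimensionality and names are invented) applies an adjective's weight matrix F to a noun vector a to obtain the AN vector b:

```python
import numpy as np

D = 4  # toy semantic-space dimensionality

# F: the weight matrix representing one adjective (learned from corpus data);
# cell F[i, j] says how much context element j of the noun vector contributes
# to context element i of the AN vector. Values here are invented.
F = np.array([[1.0, 0.2, 0.0, 0.0],
              [0.0, 0.9, 0.1, 0.0],
              [0.3, 0.0, 1.1, 0.0],
              [0.0, 0.0, 0.0, 0.5]])

a = np.array([2.0, 1.0, 0.0, 3.0])  # distributional vector of the noun

b = F @ a  # Eq. (3): the adjective, applied as a linear map, yields the AN vector
print(b)   # [2.2 0.9 0.6 1.5]
```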
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Functions and Linear Maps",
"sec_num": "2.3"
},
{
"text": "We have extracted a set of AN combinations from the publicly available CLC-FCE dataset (Yannakoudakis et al., 2011), a subset of the Cambridge Learner Corpus (CLC), 1 which is a large corpus of texts produced by English language learners sitting Cambridge Assessment's examinations. 2 These texts have been manually errorcoded (Nicholls, 2003) . Using the error annotation, we have divided extracted ANs into two subsets -correctly used ANs and those that are annotated with error codes due to inappropriate choice of an adjective or/and noun. 3 For the ANs that are used correctly in some contexts and incorrectly in others we use the most frequent annotation from the data.",
"cite_spans": [
{
"start": 283,
"end": 284,
"text": "2",
"ref_id": null
},
{
"start": 327,
"end": 343,
"text": "(Nicholls, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 544,
"end": 545,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.1"
},
{
"text": "Our test set contains 4681 correct and 530 incorrect combinations. In contrast to Vecchi et al. (2011) , who have used a limited set of constituent adjectives and nouns and an approximately equal number of semantically acceptable and deviant combinations, our test set is more skewed towards correct combinations and consists of a wider range of constituent words. It also includes ANs occurring in the BNC 4 -3294 of the correct test ANs and 256 of the incorrect ones are corpus-attested. The set of corpus-attested ANs annotated as incorrect in our data includes lowfrequency combinations from the BNC, as well as combinations whose error-annotation depends on context. We believe that this test set reflects practical applications of semantic anomaly detection more closely. 5",
"cite_spans": [
{
"start": 82,
"end": 102,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.1"
},
{
"text": "1 http://www.cup.cam.ac.uk/gb/elt/catalogue/subject/custom/ item3646603/Cambridge-International-Corpus-Cambridge-Learner-Corpus/ 2 http://www.cambridgeenglish.org 3 The corresponding error codes are RJ and RN. 4 http://www.natcorp.ox.ac.uk/ 5 The examples extracted for our experiments are publicly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.1"
},
{
"text": "In constructing the semantic space we follow the procedure outlined in Vecchi et al. (2011) . We populate the semantic space with a large number of distributional vectors for the target elements -constituent nouns and adjectives from the test ANs, and the most frequent nouns and adjectives from a corpus of English as well as AN combinations of these words. To estimate the frequency rankings, we use a concatenation of two wellformed English corpora -the 100M word BNC and the Web-derived 2B word ukWaC corpus. 6 The semantic space is represented by a matrix encoding word co-occurrences, with the rows representing the target elements and the columns representing a set of 10K context words consisting of 6,590 nouns, 1,550 adjectives and 1,860 verbs most frequent in the combined corpus. The ijth cell of the original matrix contains a sentenceinternal co-occurrence count of the i-th target element with the j-th context word. The raw sentence-internal co-occurrence counts from the original matrix have been transformed into Local Mutual Information scores (Baroni and Zamparelli, 2010; Evert, 2005 ).",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
},
{
"start": 513,
"end": 514,
"text": "6",
"ref_id": null
},
{
"start": 1063,
"end": 1092,
"text": "(Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 1093,
"end": 1104,
"text": "Evert, 2005",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
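For concreteness, here is a minimal sketch of the Local Mutual Information weighting applied to the raw co-occurrence counts, assuming the standard definition LMI = O * log(O / E), with E the expected count under independence (Evert, 2005); the implementation details and toy counts are ours, not the paper's.

```python
import numpy as np

def local_mutual_information(counts: np.ndarray) -> np.ndarray:
    """Transform a (targets x contexts) co-occurrence count matrix into
    Local Mutual Information scores: LMI = O * log(O / E), where E is the
    expected count under independence of target and context (Evert, 2005)."""
    total = counts.sum()
    row_sums = counts.sum(axis=1, keepdims=True)   # target marginals
    col_sums = counts.sum(axis=0, keepdims=True)   # context marginals
    expected = row_sums @ col_sums / total
    with np.errstate(divide="ignore", invalid="ignore"):
        lmi = counts * np.log(counts / expected)
    # Zero counts produce 0 * log(0); map those cells to a 0 score.
    return np.nan_to_num(lmi, nan=0.0, neginf=0.0)

# Toy 3-target x 4-context count matrix
counts = np.array([[10.,  0.,  2.,  1.],
                   [ 3.,  8.,  0.,  2.],
                   [ 0.,  1.,  5.,  9.]])
print(local_mutual_information(counts).round(2))
```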
{
"text": "An interesting research question is how much data are needed to obtain reliable word cooccurrence counts. We estimate the word cooccurrence statistics using the BNC only, and leave it for future research to explore the impact of estimating them from larger corpora, for example, the ukWaC or the concatenated corpus mentioned above. We lemmatise, tag and parse the data with the RASP system (Briscoe et al., 2006; Andersen et al., 2008) , and extract all statistics at the lemma level.",
"cite_spans": [
{
"start": 391,
"end": 413,
"text": "(Briscoe et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 414,
"end": 436,
"text": "Andersen et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "The target elements are selected as follows: we first select the 4K adjectives and 8K nouns which are most frequent in the concatenated corpus. In each case, we exclude the top 50 most frequent words since those may have too general meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "Next, we extract the constituent adjectives and nouns from our test data and populate the semantic space with the words not yet contained in it. As a result, our semantic space contains 8,364 nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "Since we aim at investigating AN behaviour in a highly-populated semantic space, we add more AN combinations to that. We select 218 very frequent adjectives (occurring more than 100K but available at http://www.cl.cam.ac.uk/\u02dcek358/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "6 http://wacky.sslmit.unibo.it/ less than 740K times) and merge them with the adjectives from the test ANs. We generate all possible AN combinations by crossing this combined set of adjectives and the set of 8,364 nouns. This results in a set of ANs of which 1,6M combinations are corpus-attested. From these we randomly choose 62,205 ANs that occur more than 100 times in the corpus. As a result, we populate our semantic space with ANs with the number of unique corpus-attested combinations per adjective ranging from 1 to 1,226 and being 84.52 on average. Since we apply our approach to real data, we cannot avoid having a different number of training examples for different adjectives. It is worth exploring how many training examples are needed for a single adjective, since some highly frequent adjectives may have more training examples in the data, while some adjectives may require more training examples than others due to polysemy or lack of strong selectional preferences. Finally, we check our test set against the combined corpus and add 1,131 test ANs which are corpus-attested but not yet contained in the semantic space. Our final semantic space consists of 8,364 nouns, 4,353 adjectives and 63,336 corpusattested ANs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "We perform all operations on vectors in the full semantic space, using a 76,053 \u00d7 10K matrix. We leave it for future research to perform dimensionality reduction (for example, using Singular Value Decomposition) and to compare the results with the ones reported here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Space Construction",
"sec_num": "3.2"
},
{
"text": "For the add and mult models, the AN vectors are obtained by component-wise addition and multiplication without normalisation. For the alm model, the weight coefficients are estimated with multivariate partial least squares regression using the R pls package (Mevik and Wehrens, 2007) , using the leave-one-out training regime. This model is computationally expensive since a separate weight matrix must be learned for each adjective and since we use the non-reduced semantic space. Therefore, for the experiments presented here we limit the number of test adjectives to 38. The selected adjectives are, on the one hand, frequently misused by language learners, and, on the other, have a manageable number of training examples. The reduced set of test ANs consists of 347 combinations.",
"cite_spans": [
{
"start": 258,
"end": 283,
"text": "(Mevik and Wehrens, 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Methods",
"sec_num": "3.3"
},
{
"text": "The number of latent variables used by the training algorithm depends on the number of available noun-AN training pairs. We have gradually changed this number from 3 to 20 depending on the adjective and the number of available training pairs with the aim of keeping the independentvariable-to-training-item ratio stable. However, we have not optimised this number and leave it for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Methods",
"sec_num": "3.3"
},
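The following sketch illustrates the per-adjective training just described. The paper estimates the maps with the R pls package; purely as an illustration, this substitutes scikit-learn's PLSRegression, and the leave-one-out helper, toy dimensionality and random stand-in data are our assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def loo_an_vector(noun_vecs: np.ndarray, an_vecs: np.ndarray,
                  held_out: int, n_components: int = 10) -> np.ndarray:
    """Leave-one-out regime: fit the adjective's linear map on all
    corpus-observed noun-AN pairs except `held_out`, then predict the
    held-out AN vector from its noun vector."""
    mask = np.ones(len(noun_vecs), dtype=bool)
    mask[held_out] = False
    # n_components = number of latent variables (varied 3-20 in the paper)
    pls = PLSRegression(n_components=n_components)
    pls.fit(noun_vecs[mask], an_vecs[mask])
    return pls.predict(noun_vecs[held_out:held_out + 1])[0]

rng = np.random.default_rng(1)
# Toy stand-ins for one adjective's 40 corpus-observed noun-AN vector pairs
nouns, ans = rng.random((40, 300)), rng.random((40, 300))
print(loo_an_vector(nouns, ans, held_out=0).shape)  # (300,)
```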
{
"text": "Once the composite vectors are obtained, the next question is how to distinguish between the vectors for correct and anomalous combinations. Vecchi et al. (2011) propose three simple measures for distinguishing between the two sets of vectors:",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Semantic Anomaly",
"sec_num": "3.4"
},
{
"text": "1. Vector Length (VLen): they hypothesise that vectors for anomalous ANs are shorter than those for acceptable ones. Since the distributional vectors encode word occurrence, words that do not \"match\" semantically should have their co-occurrence counts distributed differently along the dimensions, and their composition is expected to have many near-0 values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Semantic Anomaly",
"sec_num": "3.4"
},
{
"text": "2. Cosine with the Noun Vector (CosN): they hypothesise that in nonsensical ANs the meaning of the input nouns is degraded and their model-generated vectors are situated further away from the original noun vectors. For example, since a big dog is still a dog and an *extensive dog is less clearly so, in the semantic space the vector for big dog would be closer to that of dog than the vector for *extensive dog to dog. Semantically deviant ANs are expected to have lower cosine between their vectors and the original noun vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Semantic Anomaly",
"sec_num": "3.4"
},
{
"text": "it is hypothesised that deviant ANs will have fewer close neighbours and be more \"isolated\" in the semantic space. This is measured by the average cosine with the top 10 nearest neighbours, which is assumed to be lower for anomalous ANs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
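These three measures can be computed directly from the model-generated vectors; below is a minimal NumPy sketch in which the function names, the brute-force neighbour search and the toy data are ours:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def vlen(an_vec: np.ndarray) -> float:
    """VLen: anomalous ANs are hypothesised to yield shorter vectors."""
    return float(np.linalg.norm(an_vec))

def cos_n(an_vec: np.ndarray, noun_vec: np.ndarray) -> float:
    """CosN: cosine between the AN vector and the input noun vector."""
    return cosine(an_vec, noun_vec)

def dens(an_vec: np.ndarray, space: np.ndarray, k: int = 10) -> float:
    """Dens: average cosine with the top-k nearest neighbours in the space."""
    sims = np.sort([cosine(an_vec, row) for row in space])[::-1]
    return float(np.mean(sims[:k]))

rng = np.random.default_rng(2)
space = rng.random((1000, 300))  # toy semantic space (rows = vectors)
an, noun = rng.random(300), rng.random(300)
print(vlen(an), cos_n(an, noun), dens(an, space))
```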
{
"text": "We hypothesise that some cues alternative to the ones already proposed may also be effective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
{
"text": "1. Cosine with the Adjective Vector (CosA): since both add and mult models are symmetric and both input vectors contribute to the Table 1 : p values for the add model output combination equally, we also measure the distance to the original adjective vector.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
{
"text": "2. Ranked Density (RDens): we define close proximity to the model-generated AN vector as the neighbourhood populated with vectors for which the cosine to the AN vector is higher than 0.8. Since the number of close neighbours is different for different ANs, we measure ranked density as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
{
"text": "N i=1 rank i distance i ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
{
"text": "where N is the number of neighbours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Density of the AN Neighbourhood (Dens):",
"sec_num": "3."
},
{
"text": "imity (Num): the number of close neighbours itself can be used as a measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Neighbours within Close Prox-",
"sec_num": "3."
},
{
"text": "4. Component Overlap (COver): we assume that AN combinations, unless they are idiomatic, are similar to the constituent words or combinations with the same constituents. The models can be assessed by their ability to place the AN vector in the neighbourhood populated by similar words and combinations. We measure this as the proportion of nearest neighbours containing same constituent words as in the tested ANs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Neighbours within Close Prox-",
"sec_num": "3."
},
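Here is a sketch of the four alternative cues, under assumptions of ours flagged in the comments: in particular, RDens takes distance_i = 1 - cosine, which the text above does not spell out, and COver is computed over neighbour labels; all names and toy inputs are invented.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cos_a(an_vec, adj_vec):
    """CosA: cosine with the original adjective vector (add/mult are symmetric)."""
    return cosine(an_vec, adj_vec)

def close_neighbours(an_vec, space, threshold=0.8):
    """Close proximity: neighbours whose cosine to the AN vector exceeds 0.8.
    Returns (rank, distance) pairs; distance_i = 1 - cosine is our assumption."""
    sims = np.array([cosine(an_vec, row) for row in space])
    order = np.argsort(sims)[::-1]
    return [(rank + 1, 1.0 - sims[i]) for rank, i in enumerate(order)
            if sims[i] > threshold]

def rdens(an_vec, space):
    """RDens: sum over close neighbours of rank_i / distance_i."""
    return sum(rank / dist for rank, dist in close_neighbours(an_vec, space))

def num(an_vec, space):
    """Num: the number of close neighbours itself."""
    return len(close_neighbours(an_vec, space))

def cover(neighbour_labels, adjective, noun):
    """COver: proportion of nearest neighbours sharing a constituent word."""
    hits = [lab for lab in neighbour_labels if adjective in lab or noun in lab]
    return len(hits) / len(neighbour_labels)

# Toy usage, echoing the bad intention example from Section 4.2
print(cover(["intention", "main intention", "real intention"], "bad", "intention"))
```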
{
"text": "We use the measures described above and compute the difference between the mean values for the correct and incorrect model-generated ANs. We apply the unpaired t-test, assuming a twotailed distribution, to assess the statistical significance of the difference between these values. In Tables 1 to 3 we report p values estimating statistical significance at the 0.05 level, and statistical significance is marked with an asterisk ( * ).",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 298,
"text": "Tables 1 to 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
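The significance testing procedure amounts to an unpaired two-tailed t-test between the per-AN measure values of the two classes; a minimal SciPy sketch follows, with normally distributed toy values standing in for real measure outputs:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
# Toy measure values (e.g. CosN) for correct and incorrect model-generated ANs,
# matching the test-set sizes reported in Section 3.1
correct_vals = rng.normal(loc=0.60, scale=0.1, size=4681)
incorrect_vals = rng.normal(loc=0.55, scale=0.1, size=530)

# Unpaired two-tailed t-test on the difference between the means
t_stat, p_value = ttest_ind(correct_vals, incorrect_vals)
flag = "(*)" if p_value < 0.05 else ""  # significance at the 0.05 level
print(f"t = {t_stat:.3f}, p = {p_value:.4f} {flag}")
```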
{
"text": "We assume that there might be a difference between the corpus-attested and corpus-unattested * ) 0.0300 ( * ) 0.0001 ( * ) Num 0.0001 ( * ) 0.0091 ( * ) 0.0001 ( * ) COver 0.0041 ( * ) 0.0096 ( * ) 0.7317 Table 2 : p values for the mult model test ANs, with each of the subgroups being more homogeneous than the entire test set. Our corpusunattested examples are more similar to the ANs considered by Vecchi et al. (2011) . We report the results on the full set of test ANs, as well as on each of the two subgroups separately. Our goals are to:",
"cite_spans": [
{
"start": 93,
"end": 96,
"text": "* )",
"ref_id": null
},
{
"start": 401,
"end": 421,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "\u2022 comparatively evaluate performance of the three composition models;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "\u2022 assess the appropriateness of the proposed metrics;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "\u2022 investigate models' performance on the corpus-attested and corpus-unattested combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Of the three composition models, the mult model (Table 2) shows the best results overall. The alm model (Table 3) shows statistically significant difference between the model-generated vectors for the correct and incorrect combinations with the cosines and component overlap, but it does not detect the difference on the corpusunattested subset with any of the metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 57,
"text": "(Table 2)",
"ref_id": null
},
{
"start": 104,
"end": 113,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparative Performance of the Models",
"sec_num": "4.1"
},
{
"text": "The add model (Table 1) shows statistically significant differences only with the cosine measures on the corpus-unattested subset. The poor performance of this model may be due to its weaknesses outlined in Section 2.2. Also, Baroni and Zamparelli (2010) note that normalisation may help improving its performance.",
"cite_spans": [
{
"start": 226,
"end": 254,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 14,
"end": 23,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparative Performance of the Models",
"sec_num": "4.1"
},
{
"text": "Cosines to the original input vectors show promising results with all three models. In contrast to the results reported by Vecchi et al. (2011) , the density of the semantic neighbourhood does not differ significantly with any of the models, but since Table 4 : Top 3 neighbours for each model many of the combinations tested in our experiments are not genuinely anomalous, the fact that they are situated in densely populated semantic neighbourhoods is not surprising. Measures based on close proximity neighbourhood -RDens and Num -show statistical difference when applied to the mult-generated vectors only. With COver, the alm model, followed by the mult model, produce sensible results. Table 4 shows the top 3 nearest neighbours found by the models for the correct AN bad intention and the incorrect * bad information. The latter is annotated as incorrect since its meaning is quite vague and a possible correction is inaccurate information. Note that only the alm model is able to discriminate between the correct and the incorrect word combinations suggesting sensible nearest neighbours for bad intention and less sensible ones for *bad information.",
"cite_spans": [
{
"start": 123,
"end": 143,
"text": "Vecchi et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 4",
"ref_id": null
},
{
"start": 692,
"end": 699,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appropriateness of the Metrics",
"sec_num": "4.2"
},
{
"text": "Our results show that the models perform differently on the two subsets and somewhat better on corpus-attested ANs. However, the results also confirm that appropriate models and metrics can be found to distinguish between correct and incorrect ANs in both subsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attested vs Unattested Combinations",
"sec_num": "4.3"
},
{
"text": "In this paper we have introduced a new task on which compositional distributional semantic models can be tested. Our results support the hypothesis that semantic models can be applied to detect errors in the choice of content words by English language learners. The original contribution of our paper is to show how compositional and distributional semantics can be linked to error detection to provide a solution to a practical task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our results suggest that with the metrics considered it is easier to detect the difference between the model-generated vectors for the correct and incorrect word combinations with the multiplicative model. On the other hand, qualitative analysis suggests that the adjective-specific linear maps of Baroni and Zamparelli (2010) are superior, since they place the model-generated vectors in semantically sensible neighbourhoods.",
"cite_spans": [
{
"start": 298,
"end": 326,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We plan to investigate further whether the use of a bigger corpus for collecting word co-occurrence statistics provides more reliable counts, and whether dimensionality reduction and/or normalisation of the models improves the results. We also plan to apply the alm model to a larger number of examples. Some other models such as the ones by Erk and Pad\u00f3 (2008) and Thater et al. (2010) which take selectional preferences and context into account may yield better results on this task, and we plan to test this experimentally in the future. Finally, since these models can discriminate between correct and anomalous combinations, the next step is to incorporate them into an error detection classifier.",
"cite_spans": [
{
"start": 342,
"end": 361,
"text": "Erk and Pad\u00f3 (2008)",
"ref_id": "BIBREF7"
},
{
"start": 366,
"end": 386,
"text": "Thater et al. (2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We are grateful to Cambridge ESOL, a division of Cambridge Assessment, and Cambridge University Press for supporting this research and for granting us access to the CLC for research purposes. We would like to thank Helen Yannakoudakis, \u00d8istein Andersen and the anonymous reviewers for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The BNC parsed with RASP4UIMA",
"authors": [
{
"first": "\u00d8",
"middle": [],
"last": "Andersen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nioche",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andersen \u00d8., Nioche J., Briscoe T. and Carroll J. 2008. The BNC parsed with RASP4UIMA. In Proceedings of the 6th International Conference on Language Re- sources and Evaluation (LREC).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Frege in Space: A Program for Compositional Distributional Semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni M., Bernardi R. and Zamparelli R. 2012. Frege in Space: A Program for Compo- sitional Distributional Semantics. http: //clic.cimec.unitn.it/composes/ materials/frege-in-space.pdf",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nouns are vectors, adjectives are matrices: Representing adjectivenoun constructions in semantic space",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the EMNLP-2010",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni M. and Zamparelli R. 2010. Nouns are vectors, adjectives are matrices: Representing adjective- noun constructions in semantic space. In Proceed- ings of the EMNLP-2010, pp. 1183-1193.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Second Release of the RASP System",
"authors": [
{
"first": "E",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Watson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL-2006 Interactive Presentation Sessions",
"volume": "",
"issue": "",
"pages": "59--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Briscoe E., Carroll J., and Watson R. 2006. The Sec- ond Release of the RASP System. In Proceedings of the COLING/ACL-2006 Interactive Presentation Sessions, pp. 59-68.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An automatic collocation writing assistant for Taiwanese EFL learners: A case of corpusbased NLP technology",
"authors": [
{
"first": "Y",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
},
{
"first": "H",
"middle": [
"J"
],
"last": "Chen",
"suffix": ""
},
{
"first": "H",
"middle": [
"C"
],
"last": "Liou",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "21",
"issue": "",
"pages": "283--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang Y.C., Chang J.S., Chen H.J., and Liou H.C. 2012. An automatic collocation writing assistant for Taiwanese EFL learners: A case of corpus- based NLP technology. Computer Assisted Lan- guage Learning, 21(3):283-299.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Correcting Semantic Collocation Errors with L1-induced Paraphrases",
"authors": [
{
"first": "D",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the EMNLP-2011",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dahlmeier D. and Ng H.T. 2011. Correcting Se- mantic Collocation Errors with L1-induced Para- phrases. In Proceedings of the EMNLP-2011, pp. 107-117.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Anisimoff",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Narroway",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "54--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale R., Anisimoff I., and Narroway G. 2012. HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task. In Proceedings of the 7th Workshop on Innovative Use of NLP for Build- ing Educational Applications, pp. 54-62.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Structured Vector Space Model for Word Meaning in Context",
"authors": [
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the EMNLP-2008",
"volume": "",
"issue": "",
"pages": "897--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erk K. and Pad\u00f3 S. 2008. A Structured Vector Space Model for Word Meaning in Context. In Proceedings of the EMNLP-2008, pp. 897-906.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Statistics of Word Cooccurrences. Dissertation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Evert",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evert S. 2005. The Statistics of Word Cooccurrences. Dissertation, Stuttgart University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A computational approach to detecting collocation errors in the writing of non-native speakers of English",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Futagi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Deane",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Assisted Language Learning",
"volume": "21",
"issue": "4",
"pages": "353--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Futagi Y., Deane P., Chodorow M., and Tetreault J. 2009. A computational approach to detecting col- location errors in the writing of non-native speakers of English. Computer Assisted Language Learning, 21(4):353-367.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Explorations in Automatic Thesaurus Discovery",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette G. 1994. Explorations in Automatic The- saurus Discovery. Kluwer Academic Publishers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automated Grammatical Error Detection for Language Learners",
"authors": [
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leacock C., Chodorow M., Gamon M. and Tetreault J. 2010. Automated Grammatical Error Detection for Language Learners. Morgan and Claypool Publish- ers.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automated suggestions for miscollocations",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Liu",
"suffix": ""
},
{
"first": "-E",
"middle": [],
"last": "Wible",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Tsao N.-L",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 4th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "47--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu A. L.-E., Wible D., and Tsao N.-L. 2009. Auto- mated suggestions for miscollocations. In Proceed- ings of the 4th Workshop on Innovative Use of NLP for Building Educational Applications, pp. 47-50.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Finding predominant word senses in untagged text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Koeling",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "280--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCarthy D., Koeling R., Weeds J. and Carroll J. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pp. 280- 287.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The pls package: Principal component and partial least squares regression in R",
"authors": [
{
"first": "B",
"middle": [],
"last": "Mevik",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wehrens",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Statistical Software",
"volume": "18",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mevik B. and Wehrens R. 2007. The pls package: Principal component and partial least squares re- gression in R. Journal of Statistical Software, 18(2).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell J. and Lapata M. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pp. 236-244.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "",
"pages": "1388--1429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell J. and Lapata M. 2010. Composition in dis- tributional models of semantics. Cognitive Science, 34:1388-1429.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Cambridge Learner Corpus: Error coding and analysis for lexicography and ELT",
"authors": [
{
"first": "D",
"middle": [],
"last": "Nicholls",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Corpus Linguistics conference",
"volume": "",
"issue": "",
"pages": "572--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholls D. 2003. The Cambridge Learner Cor- pus: Error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics con- ference, pp. 572-581.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Is the sky pure today? AwkChecker: an assistive tool for detecting and correcting collocation errors",
"authors": [
{
"first": "T",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lank",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Poupart",
"suffix": ""
},
{
"first": "Terry",
"middle": [
"M"
],
"last": "",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 21st annual ACM symposium on User interface software and technology",
"volume": "",
"issue": "",
"pages": "121--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park T., Lank E., Poupart P., and Terry M. 2008. Is the sky pure today? AwkChecker: an assistive tool for detecting and correcting collocation errors. In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 121- 130.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Is Knowledge-Free Induction of Multiword Unit Dictionary Headwords a Solved Problem",
"authors": [
{
"first": "P",
"middle": [],
"last": "Schone",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schone P. and Jurafsky D. 2001. Is Knowledge-Free Induction of Multiword Unit Dictionary Headwords a Solved Problem?. Pittsburg, PA, pp. 100-108.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sch\u00fctze H. 1998. Automatic word sense discrimina- tion. Computational Linguistics, 24(1):97-123.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An ESL Writer's Collocation Aid",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Shei",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pain",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Assisted Language Learning",
"volume": "13",
"issue": "2",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shei C.C. and Pain H. 2000. An ESL Writer's Collo- cation Aid. Computer Assisted Language Learning, 13(2):167-182.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Contextualizing Semantic Representations Using Syntactically Enriched Vector Models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "948--957",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thater S., F\u00fcrstenau, H., and Pinkal M. 2010. Contex- tualizing Semantic Representations Using Syntacti- cally Enriched Vector Models. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pp. 948-957.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Linear) maps of the impossible: Capturing semantic anomalies in distributional space",
"authors": [
{
"first": "E",
"middle": [],
"last": "Vecchi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the DISCO Workshop at ACL-2011",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vecchi E., Baroni M. and Zamparelli R. 2011. (Lin- ear) maps of the impossible: Capturing semantic anomalies in distributional space. In Proceedings of the DISCO Workshop at ACL-2011, pp. 1-9.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bootstrapping in a language-learning environment",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wible",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Kwo",
"suffix": ""
},
{
"first": "N.-L",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lin H.-L",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Computer Assisted Learning",
"volume": "19",
"issue": "4",
"pages": "90--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wible H., Kwo C.-H., Tsao N.-L., Liu A., and Lin H.- L. 2003. Bootstrapping in a language-learning en- vironment. Journal of Computer Assisted Learning, 19(4):90-102.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A New Dataset and Method for Automatically Grading ESOL Texts",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannakoudakis H., Briscoe T. and Medlock B. 2011. A New Dataset and Method for Automatically Grad- ing ESOL Texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies, 1:180- 189.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Web-based English Proofing System for English as a Second Language Users",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the third International Joint Conference on Natural Language Processing (IJCNLP)",
"volume": "",
"issue": "",
"pages": "619--624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi X., Gao J., and Dolan W.B. 2008. A Web-based En- glish Proofing System for English as a Second Lan- guage Users. In Proceedings of the third Interna- tional Joint Conference on Natural Language Pro- cessing (IJCNLP), pp. 619-624.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"num": null,
"text": "p values for the alm model",
"type_str": "table",
"content": "<table><tr><td>AN bad intention</td><td>* bad information</td></tr><tr><td>add bad,</td><td>information,</td></tr><tr><td>bad company,</td><td>other information,</td></tr><tr><td>bad image</td><td>real information</td></tr><tr><td colspan=\"2\">mult uncomplicated, uncomplicated,</td></tr><tr><td>improbable,</td><td>improbable,</td></tr><tr><td>suggestive</td><td>humane</td></tr><tr><td>alm intention,</td><td>people,</td></tr><tr><td colspan=\"2\">main intention, blind people,</td></tr><tr><td>real intention</td><td>like-minded</td></tr></table>",
"html": null
}
}
}
}