{
"paper_id": "S12-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:40.001323Z"
},
"title": "\"Could you make me a favour and do coffee, please?\": Implications for Automatic Error Correction in English and Dutch",
"authors": [
{
"first": "Sophia",
"middle": [],
"last": "Katrenko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UiL-OTS Utrecht University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The correct choice of words has proven challenging for learners of a second language and errors of this kind form a separate category in error typology. This paper focuses on one known example of two verbs that are often confused by non-native speakers of Germanic languages, to make and to do. We conduct experiments using syntactic information and immediate context for Dutch and English. Our results show that the methods exploiting syntactic information and distributional similarity yield the best results. 1. Can information on semantic classes of direct",
"pdf_parse": {
"paper_id": "S12-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "The correct choice of words has proven challenging for learners of a second language and errors of this kind form a separate category in error typology. This paper focuses on one known example of two verbs that are often confused by non-native speakers of Germanic languages, to make and to do. We conduct experiments using syntactic information and immediate context for Dutch and English. Our results show that the methods exploiting syntactic information and distributional similarity yield the best results. 1. Can information on semantic classes of direct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When learning a second language, non-native speakers make errors at all levels of linguistic analysis, from pronunciation and intonation to language use. Word choice errors form a substantial part of all errors made by learners and may also be observed in writing or speech of native speakers. This category of errors includes homophones. Some commonly known confusions in English are accept-except, advice-advise, buy-by-bye, ate-eight, to name but a few. Other errors can be explained by a non-native speaker's inability to distinguish between words because there exists only one corresponding word in their native language. For example, Portuguese and Spanish speakers have difficulties to differentiate between te doen (to do) and te maken (to make), and Turkish between kunnen (can), weten (to know) and kennen (to know) in Dutch (Coenen et al., 1979) . Adopting terminology from Golding and Roth (1999) and Rozovskaya and Roth (2010) , do/make and kunnen/kennen/weten form two confusion sets. However, unlike the case of kunnen/kennen/weten, where the correct choice is often determined by syntactic context 1 , the choice between to make and to do can be motivated by semantic factors. It has been argued in the literature that the correct use of these verbs depends on what is being expressed: to do is used to refer to daily routines and activities, while to make is used to describe constructing or creating something. Since word choice errors have different nature, we hypothesize that there may exist no uniform approach to correct them.",
"cite_spans": [
{
"start": 829,
"end": 856,
"text": "Dutch (Coenen et al., 1979)",
"ref_id": null
},
{
"start": 885,
"end": 908,
"text": "Golding and Roth (1999)",
"ref_id": "BIBREF4"
},
{
"start": 913,
"end": 939,
"text": "Rozovskaya and Roth (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "State-of-the-art spell-checkers are able to detect spelling and agreement errors but fail to find words used incorrectly, e.g. to distinguish to make from to do. Motivated by the implications that the correct prediction of two verbs of interest may have for automatic error correction, we model the problem of choosing the correct verb in a similar vein to selectional preferences. The latter has been considered for a variety of applications, e. g. semantic role labeling (Zapirain et al., 2009) . Words such as be or do have been often excluded from consideration because they are highly polysemous and \"do not select strongly for their arguments\" (McCarthy and Carroll, 2003) . In this paper, we study whether semantic classes of arguments may be used to determine the correct predicate (e.g., to make or to do) and consider the following research questions: objects potentially help to correct verb choice errors?",
"cite_spans": [
{
"start": 473,
"end": 496,
"text": "(Zapirain et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 650,
"end": 678,
"text": "(McCarthy and Carroll, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. How do approaches using contextual and syntactic information compare when predicting to make vs. to do?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organised as follows. Section 2.1 discusses the methods, followed by Section 2.2 on data. The experimental findings are presented in Section 2.3. We conclude in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We re-examine several approaches to selectional preferences in the context of error correction. Existing methods fall into one of two categories, either those relying on information from WordNet (Mc-Carthy and Carroll, 2003) , or data-driven (Erk, 2007; Schulte im Walde, 2010; Pado et al., 2007) . For the purpose of our study, we focus on the latter.",
"cite_spans": [
{
"start": 195,
"end": 224,
"text": "(Mc-Carthy and Carroll, 2003)",
"ref_id": null
},
{
"start": 242,
"end": 253,
"text": "(Erk, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 254,
"end": 277,
"text": "Schulte im Walde, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 278,
"end": 296,
"text": "Pado et al., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2"
},
{
"text": "For each verb in question, we have a frequencybased ranking list of nouns co-occurring with it (verb-object pairs) which we use for the first two methods. Rooth et al. (1999) have proposed a soft-clustering method to determine selectional preferences, which models the joint distribution of nouns n and verbs v by conditioning them on a hidden class c. The probability of a pair (v, n) then equals",
"cite_spans": [
{
"start": 155,
"end": 174,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (v, n) = c\u2208C P (c)P (v|c)P (n|c)",
"eq_num": "(1)"
}
],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "Similarity-based method The next classifier we use combines similarity between nouns with ranking information and is a modification of the method described in (Pado et al., 2007) . First, for all words n i on the ranking list their frequency scores are normalised between 0 and 1, f i . Then, they are weighed by the similarity score between a new noun n j and a corresponding word on the ranking list, n i , and the noun with the highest score (1-nearest neighbour) is selected:",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Pado et al., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max n i f i \u00d7 sim(n j , n i )",
"eq_num": "(2)"
}
],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "Finally, two highest scores for each verb's ranking list are compared and the verb with higher score is selected as a preferred one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "In addition, if we sum over all seen words instead of choosing the nearest neighbour, this will lead to the original approach by Pado et al. (2007) . In the experimental part we consider both approaches (the original method is referred to as SMP while the nearest neighbour approach is marked by SMknn) and study whether there is any difference between the two when a verb that allows many different arguments is considered (e.g., it may be better to use the nearest neighbour approach for to do rather than aggregating over all similarity scores).",
"cite_spans": [
{
"start": 129,
"end": 147,
"text": "Pado et al. (2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "Bag-of-words (BoW) approach This widely used approach to document classification considers contextual words and their frequencies to represent documents (Zellig, 1954) . We restrict the length of the context around two verbs (within a window of \u00b12 and \u00b13 around the focus word, make or do) and build a Naive Bayes classifier.",
"cite_spans": [
{
"start": 153,
"end": 167,
"text": "(Zellig, 1954)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantic clustering (LSC)",
"sec_num": null
},
{
"text": "Both verbs, to make and to do, license complements of various kinds, e. g. they can be mono-transitive, ditransitive, and complex transitive (sentences 1, 2, and 3, respectively). Furthermore, make can be part of idiomatic ditransitives (e.g., make use of, make fun of, make room for) and phrasal mono-transitives (e.g., make up) . For English, we use one of the largest corpora available, the PukWAC (over 2 billion words, 30GB) ( Baroni et al., 2009) , which has been parsed by MaltParser (Nivre and Scholz, 2004) . We extract all sentences with to do or to make (based on lemmata). The verb to make occurs in 2,13% of sentences, and the verb to do in 3,27% of sentences in the PukWAC corpus. Next, we exclude from consideration phrasal mono-transitives and select sentences where verb complements are nouns (Table 1) .",
"cite_spans": [
{
"start": 432,
"end": 452,
"text": "Baroni et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 491,
"end": 515,
"text": "(Nivre and Scholz, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 810,
"end": 819,
"text": "(Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2.2"
},
{
"text": "For experiments in Dutch, we use the \"Wikipedia Dump Of 2010\" corpus, which is a part of Lassy Large corpus (159 million tokens), and is parsed by the Alpino parser (Bouma et al., 2001) . Unlike in English data, to make occurs here more often than to do (3,3% vs. 1%). This difference can be explained by the fact that to do is also an auxiliary verb in English which leads to more occurrences in total. Similarly to the English data set, phrasal monotransitives are filtered out. Finally, the sentences that contain either to make or to do from wiki01 up to wiki07 (19,847 sentences in total) have been selected for training and wiki08 (1,769 sentences in total) for testing. To be able to compare our results against the performance on English data, we sample a subset from PukWAC which is of the same size as Dutch data set and is referred to as EN (sm).",
"cite_spans": [
{
"start": 165,
"end": 185,
"text": "(Bouma et al., 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.2"
},
{
"text": "To measure distributional similarity for the nearest neighbour method, we use first-order and second-order similarity based on Lin's information theoretic measure (Lin, 1998) . For both languages, similarity scores have been derived given a subset of Wikipedia (276 million tokens for English and 114 million tokens for Dutch) using the DISCO API (Kolb, 2009) . Table 2 and Table 3 summarize our results. When referring to similarity-based methods, the symbols (f) and (s) indicate first-order and second-order similarity. For the BoW models, \u00b12 and \u00b13 corresponds to the context length. The performance is measured by true positive rate (TP) per class, overall accuracy (Acc) and coverage (Cov). The former indicates in how many cases the correct class label (make or do) has been predicted, while the latter shows how many examples a system was able to classify. Coverage is especially indicative for LCS and semantic similarity approaches because they may fail to yield predictions. For these methods, we provide two evaluations. First, in order to be able to compare results against the BoW approach, we measure accuracy and coverage on all test examples. In such a case, if some direct objects occur very often in the test set and are classified correctly, accuracy scores will be boosted. Therefore, we also provide the second evaluation where we measure accuracy and coverage on (unique) test examples regardless of how frequent they are. This evaluation will give us a better insight into how well LCS and similarity-based methods work. Finally, we tested several settings for the LSC method and the results presented here are obtained for 20 clusters and 50 iterations. We remove stop words 2 but do not take any other preprocessing steps.",
"cite_spans": [
{
"start": 163,
"end": 174,
"text": "(Lin, 1998)",
"ref_id": "BIBREF6"
},
{
"start": 347,
"end": 359,
"text": "(Kolb, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 362,
"end": 381,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2.2"
},
{
"text": "For both languages, it is more difficult to predict to do than to make, although the differences in performance on Dutch data (NL) are much smaller than on English data (EN (sm) ). An interesting observation is that using second-order similarity slightly boosts performance for to make but is highly undesirable for predicting to do (decrease in accuracy for around 15%) in Dutch. This may be explained by the fact that the objects of to do are already very generic. Our findings on English data are that the similaritybased approach is more sensitive to the choice of aggregating over all words in the training set or selecting the nearest neighbour. In particular, we obtained better performance when choosing the nearest neighbour for to do but aggregating over all scores for to make. The results on Dutch and English data are in general not always comparable. In addition to the differences in performance of similarity-based methods, the BoW models work better for predicting to do in English but to make in Dutch.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "(EN (sm)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "2.3"
},
{
"text": "As expected, similarity-based approaches yield higher coverage than LSC, although the latter is superior in terms of accuracy (in all cases but to do in English). Since LSC turned out to be the most computationally efficient method, we have also run it on larger subsets of the PukWAC data set, up to the entire corpus. We have not noticed any signifi- cant changes in performance; the results for the entire data set, EN (all), are given in the first row of Table 2 . Table 3 shows the results for the methods using direct object information on unique objects, which gives a more realistic assessment of their performance. At closer inspection, we noticed that many non-classified cases in Dutch refer to compounds. For instance, bluegrassmuziek (bluegrass music) cannot be compared against known words in the training set. In order to cover such cases, existing methods may benefit from morphological analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 469,
"end": 476,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "2.3"
},
{
"text": "In order to predict the use of two often confused verbs, to make and to do, we have compared two methods to modeling selectional preferences against the bag-of-words approach. The BoW method is always outperformed by LCS and similarity-based approaches, although the differences in performance are much larger for to do in Dutch and for to make in English. In this study, we do not use any corpus of non-native speakers' errors and explore how well it is possible to predict one of two verbs provided that the context words have been chosen correctly. In the future work, we plan to label all incorrect uses of to make and to do and to correct them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "3"
},
{
"text": "Kunnen is a modal verb followed by the main verb, kennen takes a direct object as in, e.g., to know somebody, and weten is often followed by a clause (as in I know that).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use stop word lists for English and Dutch from http: //snowball.tartarus.org/algorithms/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author thanks anonymous reviewers for their valuable comments. This work is supported by a VICI grant number 277-80-002 by the Netherlands Organisation for Scientific Research (NWO).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-Crawled Corpora",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
},
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
}
],
"year": 2009,
"venue": "Language Resources and Evaluation",
"volume": "43",
"issue": "3",
"pages": "209--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Silvia Bernardini and Adriano Fer- raresi and Eros Zanchetta. 2009. The WaCky Wide Web: A Collection of Very Large Linguistically Pro- cessed Web-Crawled Corpora. Language Resources and Evaluation 43(3), pp. 209-226.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Alpino: Wide-coverage Computational Analysis of Dutch",
"authors": [
{
"first": "Gosse",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics in the Netherlands 2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide-coverage Computational Analysis of Dutch. In Computational Linguistics in the Nether- lands 2000. Enschede.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Leren van fouten: een analyse van de meest voorkomende Nederlandse taalfouten, die gemaakt worden door Marokkaanse, Turkse, Spaanse en Portugese kinderen",
"authors": [
{
"first": "Jos\u00e9e",
"middle": ["A."],
"last": "Coenen",
"suffix": ""
},
{
"first": "W.",
"middle": [],
"last": "van Wiggen",
"suffix": ""
},
{
"first": "R.",
"middle": [],
"last": "Bok-Bennema",
"suffix": ""
}
],
"year": 1979,
"venue": "Stichting ABC, Contactorgaan voor de Innovatie van het Onderwijs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9e A. Coenen, W. van Wiggen, and R. Bok-Bennema. 1979. Leren van fouten: een analyse van de meest voorkomende Nederlandse taalfouten, die gemaakt worden door Marokkaanse, Turkse, Spaanse en Por- tugese kinderen. Amsterdam: Stichting ABC, Contac- torgaan voor de Innovatie van het Onderwijs.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A simple, similarity-based model for selectional preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL 2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of ACL 2007. Prague, Czech Republic, 2007.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Winnow-Based Approach to Context-Sensitive Spelling Correction",
"authors": [
{
"first": "Andrew",
"middle": ["R."],
"last": "Golding",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "1-3",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew R. Golding and Dan Roth. 1999. A Winnow- Based Approach to Context-Sensitive Spelling Correc- tion. Machine Learning 34(1-3), pp. 107-130.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Experiments on the difference between semantic similarity and relatedness",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Kolb",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 17th Nordic Conference on Computational Linguistics -NODALIDA '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Kolb. 2009. Experiments on the difference be- tween semantic similarity and relatedness. In Pro- ceedings of the 17th Nordic Conference on Compu- tational Linguistics -NODALIDA '09, Odense, Den- mark, May 2009.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic Retrieval and Clustering of Similar Words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL 1998",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. Automatic Retrieval and Clustering of Similar Words. In Proceedings of COLING-ACL 1998, Montreal.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Disambiguating nouns, verbs and adjectives using automatically acquired selectional preferences",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "4",
"pages": "639--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy and John Carroll. 2003. Disambiguat- ing nouns, verbs and adjectives using automatically acquired selectional preferences. Computational Lin- guistics, 29(4), pp. 639-654.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deterministic dependency parsing of English text",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING 04.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Flexible, Corpus-Based Modelling of Human Plausibility Judgements",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP/CoNLL",
"volume": "",
"issue": "",
"pages": "400--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3, Ulrike Pad\u00f3 and Katrin Erk. 2007. Flex- ible, Corpus-Based Modelling of Human Plausibility Judgements. In Proceedings of EMNLP/CoNLL 2007. Prague, Czech Republic, pp. 400-409.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Inducing a Semantically Annotated Lexicon via EM-Based Clustering",
"authors": [
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Detlef",
"middle": [],
"last": "Prescher",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ACL 99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mats Rooth, Stefan Riezler and Detlef Prescher. 1999. Inducing a Semantically Annotated Lexicon via EM- Based Clustering. In Proceedings of ACL 99.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating Confusion Sets for Context-Sensitive Error Correction",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "961--970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Rozovskaya and Dan Roth. 2010. Generating Confusion Sets for Context-Sensitive Error Correction. In Proceedings of EMNLP, pp. 961-970.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Comparing Computational Approaches to Selectional Preferences -Second-Order Co-Occurrence vs",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1381--1388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde. 2010. Comparing Com- putational Approaches to Selectional Preferences - Second-Order Co-Occurrence vs. Latent Semantic Clusters. In Proceedings of the 7th International Con- ference on Language Resources and Evaluation, Val- letta, Malta, pp. 1381-1388.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generalizing over Lexical Features: Selectional Preferences for Semantic Role Classification",
"authors": [
{
"first": "Be\u00f1at",
"middle": [],
"last": "Zapirain",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. Suntec",
"volume": "",
"issue": "",
"pages": "73--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Be\u00f1at Zapirain, Eneko Agirre and Llu\u00eds M\u00e0rquez. 2009. Generalizing over Lexical Features: Selectional Pref- erences for Semantic Role Classification. In Proceed- ings of the ACL-IJCNLP 2009 Conference Short Pa- pers. Suntec, Singapore, pp. 73-76.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "1. Andrew made [a cake] dobj . 2. Andrew made [his mum] iobj [a cake] dobj .",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "The number of sentences in English (EN) and Dutch (NL) corpora (the last two columns correspond to the number of sentences where direct objects are nouns).",
"num": null
},
"TABREF4": {
"content": "<table><tr><td colspan=\"8\">: True positive rate (TP, %), accuracy (Acc, %) and coverage (Cov, %) for the experiments on English (EN)</td></tr><tr><td colspan=\"2\">and Dutch (NL) data.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>LANG</td><td>Method</td><td colspan=\"6\">TP (to make) Cov (to make) TP (to do) Cov (to do) Acc (all) Cov (all)</td></tr><tr><td colspan=\"2\">EN (sm) LSC</td><td>80.88</td><td>77.12</td><td>52.60</td><td>74.76</td><td>73.73</td><td>76.51</td></tr><tr><td/><td>SMP (f)</td><td>73.17</td><td>97.29</td><td>45.99</td><td>90.78</td><td>66.49</td><td>95.60</td></tr><tr><td/><td>SMP (s)</td><td>77.00</td><td>97.29</td><td>33.69</td><td>90.78</td><td>66.36</td><td>95.60</td></tr><tr><td/><td colspan=\"2\">SMknn (f) 31.18</td><td>97.29</td><td>82.35</td><td>90.78</td><td>43.76</td><td>95.60</td></tr><tr><td/><td colspan=\"2\">SMknn (s) 4.36</td><td>98.82</td><td>98.93</td><td>90.78</td><td>25.76</td><td>95.60</td></tr><tr><td>NL</td><td>LSC</td><td>94.85</td><td>63.40</td><td>86.59</td><td>76.64</td><td>92.39</td><td>66.83</td></tr><tr><td/><td>SMP (f)</td><td>87.55</td><td>81.37</td><td>77.00</td><td>93.45</td><td>84.24</td><td>84.50</td></tr><tr><td/><td>SMP (s)</td><td>91.16</td><td>81.37</td><td>54.00</td><td>93.45</td><td>80.52</td><td>84.50</td></tr><tr><td/><td colspan=\"2\">SMknn (f) 80.72</td><td>81.37</td><td>76.00</td><td>93.45</td><td>79.66</td><td>84.50</td></tr><tr><td/><td colspan=\"2\">SMknn (s) 85.54</td><td>81.37</td><td>55.00</td><td>93.45</td><td>76.79</td><td>84.50</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "True positive rate (TP, %), accuracy (Acc, %) and coverage (Cov, %) for the experiments on English (EN) and Dutch (NL) unique direct objects.",
"num": null
}
}
}
}