{
"paper_id": "D15-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:18.706688Z"
},
"title": "Flexible Domain Adaptation for Automated Essay Scoring Using Correlated Linear Regression",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Phandi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kian",
"middle": [
"Ming",
"A"
],
"last": "Chai",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most of the current automated essay scoring (AES) systems are trained using manually graded essays from a specific prompt. These systems experience a drop in accuracy when used to grade an essay from a different prompt. Obtaining a large number of manually graded essays each time a new prompt is introduced is costly and not viable. We propose domain adaptation as a solution to adapt an AES system from an initial prompt to a new prompt. We also propose a novel domain adaptation technique that uses Bayesian linear ridge regression. We evaluate our domain adaptation technique on the publicly available Automated Student Assessment Prize (ASAP) dataset and show that our proposed technique is a competitive default domain adaptation algorithm for the AES task.",
"pdf_parse": {
"paper_id": "D15-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "Most of the current automated essay scoring (AES) systems are trained using manually graded essays from a specific prompt. These systems experience a drop in accuracy when used to grade an essay from a different prompt. Obtaining a large number of manually graded essays each time a new prompt is introduced is costly and not viable. We propose domain adaptation as a solution to adapt an AES system from an initial prompt to a new prompt. We also propose a novel domain adaptation technique that uses Bayesian linear ridge regression. We evaluate our domain adaptation technique on the publicly available Automated Student Assessment Prize (ASAP) dataset and show that our proposed technique is a competitive default domain adaptation algorithm for the AES task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Essay writing is a common task evaluated in schools and universities. In this task, students are typically given a prompt or essay topic to write about. Essay writing is included in high-stakes assessments, such as Test of English as a Foreign Language (TOEFL) and Graduate Record Examination (GRE). Manually grading all essays takes a lot of time and effort for the graders. This is what Automated Essay Scoring (AES) systems are trying to alleviate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automated Essay Scoring uses computer software to automatically evaluate an essay written in an educational setting by giving it a score. Work related to essay scoring can be traced back to 1966, when Ellis Page created computer grading software called Project Essay Grade (PEG). Research on AES has continued through the years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The recent Automated Student Assessment Prize (ASAP) Competition 1 sponsored by the Hewlett Foundation in 2012 has renewed interest in this topic. The agreement between the scores assigned by state-of-the-art AES systems and the scores assigned by human raters has been shown to be relatively high. See Shermis and Burstein (2013) for a recent overview of AES.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AES is usually treated as a supervised machine learning problem, either as a classification, regression, or rank preference task. Using this approach, a training set in the form of human graded essays is needed. However, human graded essays are not readily available. This is perhaps why research in this area was mostly done by commercial organizations. After the ASAP competition, research interest in this area has been rekindled because of the released dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the recent AES-related work is prompt-specific. That is, an AES system is trained using essays from a specific prompt and tested against essays from the same prompt. These AES systems will not work as well when tested against a different prompt. Furthermore, generating the training data each time a new prompt is introduced will be costly and time-consuming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose domain adaptation as a solution to this problem. Instead of hiring people to grade new essays each time a new prompt is introduced, domain adaptation can be used to adapt the old prompt-specific system to suit the new prompt. This way, a smaller number of training essays from the new prompt is needed. In this paper, we propose a novel domain adaptation technique based on Bayesian linear ridge regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. In Section 2, we give an overview of related work on AES and domain adaptation. Section 3 describes the AES task and the features used. Section 4 presents our novel domain adaptation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 5 describes our data, experimental setup, and evaluation metric. Section 6 presents and discusses the results. We conclude in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first introduce related work on automated essay scoring, followed by domain adaptation in the context of natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since the first AES system, Project Essay Grade, was created in 1966, a number of commercial systems have been deployed. One such system, e-rater (Attali and Burstein, 2004), is even used as a replacement for the second human grader in the Test of English as a Foreign Language (TOEFL) and Graduate Record Examination (GRE). Other AES commercial systems also exist, such as IntelliMetric 2 and Intelligent Essay Assessor (Foltz et al., 1999).",
"cite_spans": [
{
"start": 145,
"end": 172,
"text": "(Attali and Burstein, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 422,
"end": 442,
"text": "(Foltz et al., 1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Essay Scoring",
"sec_num": "2.1"
},
{
"text": "AES is generally considered as a machine learning problem. Some work, such as PEG (Page, 1994) and e-rater, considers it as a regression problem. PEG uses a large number of features with regression to predict the human score. e-rater uses natural language processing (NLP) techniques to extract a smaller number of complex features, such as grammatical errors and lexical complexity, and uses them with stepwise linear regression (Attali and Burstein, 2004). Others, like (Larkey, 1998), take the classification approach. (Rudner and Liang, 2002) uses Bayesian models for classification and treats AES as a text classification problem. Intelligent Essay Assessor uses Latent Semantic Analysis (LSA) (Landauer et al., 1998) as a measure of semantic similarity between essays. Other recent work uses the preference ranking based approach (Yannakoudakis et al., 2011; Chen and He, 2013).",
"cite_spans": [
{
"start": 82,
"end": 94,
"text": "(Page, 1994)",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 456,
"text": "(Attali and Burstein, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 520,
"end": 544,
"text": "(Rudner and Liang, 2002)",
"ref_id": "BIBREF18"
},
{
"start": 697,
"end": 720,
"text": "(Landauer et al., 1998)",
"ref_id": "BIBREF12"
},
{
"start": 834,
"end": 862,
"text": "(Yannakoudakis et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 863,
"end": 881,
"text": "Chen and He, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Essay Scoring",
"sec_num": "2.1"
},
{
"text": "In this paper, we also treat AES as a regression problem, following PEG and e-rater. We use regression because the range of scores of the essays could be very large and a classification approach does not work well in this case. It also allows us to model essay scores as continuous values and scale them easily in the case of different score ranges between the source essay prompt and the target essay prompt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Essay Scoring",
"sec_num": "2.1"
},
{
"text": "The features used differ among the systems, ranging from simple features (e.g., word length, essay length, etc.) to more complex features (e.g., grammatical errors). Some of these features are generic in the sense that they could apply to all kinds of prompts. Such features include the number of spelling errors, grammatical errors, lexical complexity, etc. Others are prompt-specific features, such as bag-of-words features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Essay Scoring",
"sec_num": "2.1"
},
{
"text": "The knowledge learned from a single domain might not be directly applicable to another domain. For example, a named entity recognition system trained on labeled news data might not perform as well on biomedical texts (Jiang and Zhai, 2007). We can solve this problem either by getting labeled data from the other domain, which might not be available, or by performing domain adaptation. Domain adaptation is the task of adapting knowledge learned in a source domain to a target domain. Various approaches to this task have been proposed and used in the context of NLP. Some commonly used approaches include EasyAdapt (Daum\u00e9 III, 2007), instance weighting (IW) (Jiang and Zhai, 2007), and structural correspondence learning (SCL) (Blitzer et al., 2006).",
"cite_spans": [
{
"start": 217,
"end": 239,
"text": "(Jiang and Zhai, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 618,
"end": 635,
"text": "(Daum\u00e9 III, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 662,
"end": 684,
"text": "(Jiang and Zhai, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 732,
"end": 754,
"text": "(Blitzer et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "2.2"
},
{
"text": "We can divide the approaches of domain adaptation into two categories based on the availability of labeled target data. The case where a small number of labeled target data is available is usually referred to as supervised domain adaptation (such as EasyAdapt and IW). The case where no labeled target domain data is available is usually referred to as unsupervised domain adaptation (such as SCL). In our work, we focus on supervised domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "2.2"
},
{
"text": "Daum\u00e9 III (2007) described a domain adaptation scheme called EasyAdapt which makes use of feature augmentation. Suppose we have a feature vector x in the original feature space. This scheme maps the instance using the mapping functions \u03a6_s(x) and \u03a6_t(x) for the source and target domain respectively, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "2.2"
},
{
"text": "\u03a6_s(x) = \u27e8x, x, 0\u27e9, \u03a6_t(x) = \u27e8x, 0, x\u27e9,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "2.2"
},
{
"text": "and 0 is a zero vector of length |x|. This adaptation scheme is attractive because of its simplicity and ease of use as a pre-processing step, and also because it performs quite well despite its simplicity. It has been used in various NLP tasks such as word segmentation (Monroe et al., 2014), machine translation, word sense disambiguation (Zhong et al., 2008), and short answer scoring (Heilman and Madnani, 2013). Our work is an extension of this scheme in the sense that it is a generalization of EasyAdapt.",
"cite_spans": [
{
"start": 271,
"end": 292,
"text": "(Monroe et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 343,
"end": 363,
"text": "(Zhong et al., 2008)",
"ref_id": "BIBREF21"
},
{
"start": 391,
"end": 418,
"text": "(Heilman and Madnani, 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "2.2"
},
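For concreteness, the EasyAdapt augmentation can be sketched in a few lines of Python. This is an illustrative implementation of ours, not code from the paper; the function names are hypothetical:

```python
def easyadapt_source(x):
    # Source-domain mapping: Phi_s(x) = <x, x, 0>
    x = list(x)
    return x + x + [0.0] * len(x)

def easyadapt_target(x):
    # Target-domain mapping: Phi_t(x) = <x, 0, x>
    x = list(x)
    return x + [0.0] * len(x) + x

# A p-dimensional feature vector becomes 3p-dimensional: the first block is
# shared between the domains, the other two blocks are domain-specific.
print(easyadapt_source([1.0, 2.0]))  # [1.0, 2.0, 1.0, 2.0, 0.0, 0.0]
print(easyadapt_target([1.0, 2.0]))  # [1.0, 2.0, 0.0, 0.0, 1.0, 2.0]
```

A single linear model trained on the augmented vectors then learns shared and domain-specific weights simultaneously.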
{
"text": "This section describes the Automated Essay Scoring (AES) task and the features we use for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Essay Scoring",
"sec_num": "3"
},
{
"text": "In AES, the input to the system is a student essay, and the output is the score assigned to the essay. The score assigned by the AES system will be compared against the human assigned score to measure their agreement. Common agreement measures used include Pearson's correlation, Spearman's correlation, and quadratic weighted Kappa (QWK). We use QWK in this paper, which is also the evaluation metric in the ASAP competition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3.1"
},
{
"text": "We model the AES task as a regression problem and use Bayesian linear ridge regression (BLRR) as our learning algorithm. We choose BLRR so as to use the correlated BLRR approach, which will be explained in Section 4. We use an open source essay scoring system, EASE (Enhanced AI Scoring Engine) 3, to extract the features. EASE was created by one of the winners of the ASAP competition, so the features it uses have proven to be robust. Table 1 gives the features used by EASE.",
"cite_spans": [],
"ref_spans": [
{
"start": 467,
"end": 474,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features and Learning Algorithm",
"sec_num": "3.2"
},
{
"text": "Useful n-grams are defined as n-grams that separate good scoring essays from bad scoring essays, determined using the Fisher test (Fisher, 1922). Good scoring essays are essays with a score greater than or equal to the average score, and the remainder are considered as bad scoring essays. The top 201 n-grams with the highest Fisher values are then chosen as the bag features. We perform the calculation of useful n-grams separately for source and target domain essays, and join them together using set union during the domain adaptation experiment. This is done to prevent the system from choosing only n-grams from the source domain as the useful n-grams, since the number of source domain essays is much larger than the number of target domain essays. EASE uses NLTK (Bird et al., 2009) for POS tagging and stemming, aspell for spell-checking, and WordNet (Fellbaum, 1998) to get the synonyms. Correct POS tags are generated using a grammatically correct text (provided by EASE). POS tag sequences not included in the correct POS tags are considered as bad POS. EASE uses scikit-learn (Pedregosa et al., 2011) for extracting unigram and bigram features. For linear regression, a constant feature of value one is appended for the bias.",
"cite_spans": [
{
"start": 129,
"end": 143,
"text": "(Fisher, 1922)",
"ref_id": "BIBREF7"
},
{
"start": 761,
"end": 780,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 849,
"end": 865,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 1082,
"end": 1106,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Learning Algorithm",
"sec_num": "3.2"
},
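The useful-n-gram selection described above can be sketched as follows. This is our own illustration using an exact Fisher test over unigrams held in sets; EASE's actual implementation may differ in its test variant and feature extraction:

```python
from math import comb

def _hyper(a, b, c, d):
    # Hypergeometric probability of the exact 2x2 table [[a, b], [c, d]]
    return comb(a + b, a) * comb(c + d, c) / comb(a + b + c + d, a + c)

def fisher_exact_p(a, b, c, d):
    # Two-sided Fisher exact test: sum the probabilities of all tables with
    # the same margins that are no more likely than the observed one
    row1, col1, n = a + b, a + c, a + b + c + d
    p_obs = _hyper(a, b, c, d)
    p = 0.0
    for x in range(max(0, row1 + col1 - n), min(row1, col1) + 1):
        px = _hyper(x, row1 - x, col1 - x, n - row1 - col1 + x)
        if px <= p_obs + 1e-12:
            p += px
    return p

def select_ngrams(essays, scores, k):
    # essays: list of sets of n-grams; good = score >= mean, as in the paper
    mean = sum(scores) / len(scores)
    good = [s >= mean for s in scores]
    n_good = sum(good)
    ranked = []
    for t in sorted(set().union(*essays)):
        a = sum(1 for e, g in zip(essays, good) if g and t in e)
        c = sum(1 for e, g in zip(essays, good) if not g and t in e)
        b, d = n_good - a, (len(essays) - n_good) - c
        ranked.append((fisher_exact_p(a, b, c, d), t))
    ranked.sort()  # most discriminative (smallest p-value) first
    return [t for _, t in ranked[:k]]

essays = [{"clear", "thesis"}, {"clear"}, {"um"}, {"um", "thesis"}]
scores = [3, 3, 1, 1]
print(select_ngrams(essays, scores, k=2))  # ['clear', 'um']
```

The tokens that occur only in good (or only in bad) essays get the smallest p-values and are selected first; the paper keeps the top 201 per domain and unions the two sets.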
{
"text": "First, consider the single-task setting. Let x \u2208 R^p be the feature vector of an essay, where p is the number of features in x. The generative model for an observed real-valued score y is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "\u03b1 \u223c \u0393(\u03b1_1, \u03b1_2), \u03bb \u223c \u0393(\u03bb_1, \u03bb_2), w \u223c N(0, \u03bb^{-1} I), f(x) := x^T w, y \u223c N(f(x), \u03b1^{-1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "Here, \u03b1 and \u03bb are Gamma-distributed hyper-parameters of the model; w \u2208 R^p is the normally distributed weight vector of the model; f is the latent function that returns the \"true\" score of an essay represented by x by linear combination; and y is the noisy observed score of x. Now, consider the two-task setting, where we indicate the source task and the target task by superscripts s and t. Given an essay with feature vector x, we consider its observed scores y^s and y^t when evaluated in task s and task t separately. We have scale hyper-parameters \u03b1 and \u03bb sampled as before. In addition, we have the correlation \u03c1 between the two tasks. The generative model relating the two tasks is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "\u03c1 \u223c p_\u03c1, w^t, w^s \u223c N(0, \u03bb^{-1} I), f^t(x) := x^T w^t, f^s(x) := \u03c1 x^T w^t + (1 \u2212 \u03c1^2)^{1/2} x^T w^s, y^t \u223c N(f^t(x), \u03b1^{-1}), y^s \u223c N(f^s(x), \u03b1^{-1}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "where p_\u03c1 is a chosen distribution over the correlation; and w^t and w^s are the weight vectors of the target and the source tasks respectively, and they are identically distributed but independent. In this setting, it can be shown that the correlation between the latent scoring functions for the target and the source tasks is \u03c1. That is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(f^t(x) f^s(x')) = \u03bb^{-1} \u03c1 x^T x'.",
"eq_num": "(1)"
}
],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
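The generative model and equation (1) can be checked empirically with a small Monte Carlo sketch. This is our own illustration with assumed values for ρ and λ, not code from the paper:

```python
import random

def sample_latent_scores(x, xp, rho, lam, rng):
    # One draw of (f^t(x), f^s(x')) from the two-task generative model
    p = len(x)
    wt = [rng.gauss(0.0, lam ** -0.5) for _ in range(p)]  # w^t ~ N(0, lam^{-1} I)
    ws = [rng.gauss(0.0, lam ** -0.5) for _ in range(p)]  # w^s ~ N(0, lam^{-1} I)
    ft = sum(xi * wi for xi, wi in zip(x, wt))
    fs = sum(xi * (rho * wti + (1.0 - rho ** 2) ** 0.5 * wsi)
             for xi, wti, wsi in zip(xp, wt, ws))
    return ft, fs

# Monte Carlo check of equation (1): E(f^t(x) f^s(x')) = lam^{-1} rho x^T x'
rng = random.Random(0)
rho, lam = 0.6, 1.0
draws = [sample_latent_scores([1.0], [1.0], rho, lam, rng) for _ in range(100000)]
emp = sum(ft * fs for ft, fs in draws) / len(draws)
print(abs(emp - rho / lam) < 0.05)  # empirical cross-moment is close to 0.6
```

With x = x' = [1.0] the predicted cross-moment is exactly ρ/λ, which the sample average recovers up to Monte Carlo noise.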
{
"text": "This, in fact, is a generalization of the EasyAdapt scheme, for which the correlation \u03c1 is fixed at 0.5 [(Daum\u00e9 III, 2007), see eq. 3]. Two other common values for \u03c1 are 1 and 0; the former corresponds to a straightforward concatenation of the source and target data, while the latter is the shared-hyperparameter setting which shares \u03b1 and \u03bb between the source and target domain. By adjusting \u03c1, the model moves smoothly between these three regimes of domain adaptation. EasyAdapt is attractive because of its (frustratingly) easy use via encoding the correlation within an expanded feature representation scheme. In the same way, the current setup can be achieved readily by the expanded feature representation",
"cite_spans": [
{
"start": 106,
"end": 123,
"text": "(Daum\u00e9 III, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03a6^t(x) = \u27e8x, 0_p\u27e9, \u03a6^s(x) = \u27e8\u03c1x, (1 \u2212 \u03c1^2)^{1/2} x\u27e9",
"eq_num": "(2)"
}
],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "in R^{2p} for the target and the source tasks. Associated with this expanded feature representation is the weight vector w := (w^t, w^s), also in R^{2p}. As we shall see in Section 4.1, such a representation eases the estimation of the parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
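The expanded representation in equation (2) is straightforward to implement. The sketch below (ours, with hypothetical function names) also verifies that the target-source inner product recovers the ρ x^T x' structure of equation (1):

```python
import math

def expand_target(x):
    # Phi^t(x) = <x, 0_p>
    return list(x) + [0.0] * len(x)

def expand_source(x, rho):
    # Phi^s(x) = <rho x, (1 - rho^2)^{1/2} x>
    c = math.sqrt(1.0 - rho ** 2)
    return [rho * xi for xi in x] + [c * xi for xi in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Cross-domain inner product is rho * (x . x'); within-source it is x . x'
x, xp, rho = [1.0, 2.0], [3.0, 1.0], 0.5
print(dot(expand_target(x), expand_source(xp, rho)))  # 0.5 * 5.0 = 2.5
print(dot(expand_source(x, rho), expand_source(xp, rho)))  # 5.0
```

This is why a single ridge regression on the expanded vectors reproduces the correlated prior: the Gram matrix of the expanded features already carries the ρ structure.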
{
"text": "The above model is related to the multi-task Gaussian Process model that has been used for joint emotion analysis (Beck et al., 2014). There, the intrinsic coregionalisation model (ICM) was used with a squared-exponential covariance function. Here, we use the simpler linear covariance function (Rasmussen and Williams, 2006), and this leads to Bayesian linear ridge regression. There are two reasons for this choice. The first is that a linear combination of carefully chosen features, especially lexical ones, usually gives good performance in NLP tasks. The second is given in the preceding paragraph: an intuitive feature expansion representation of the domain adaptation process that allows ease of parameter estimation.",
"cite_spans": [
{
"start": 114,
"end": 133,
"text": "(Beck et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 299,
"end": 329,
"text": "(Rasmussen and Williams, 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "The above model is derived from the Cholesky decomposition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "[1 \u03c1; \u03c1 1] = [1 0; \u03c1 (1 \u2212 \u03c1^2)^{1/2}] [1 \u03c1; 0 (1 \u2212 \u03c1^2)^{1/2}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
{
"text": "of the desired correlation matrix that will eventually lead to equation (1). Other choices are possible, as long as equation (1) is satisfied. However, the current choice has the desired property that the w^t portion of the combined weight vector is directly interpretable as the weights for the features in the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlated Bayesian Linear Ridge Regression",
"sec_num": "4"
},
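The Cholesky identity can be verified numerically with a short sketch of ours (rows of the 2x2 matrices are Python lists):

```python
import math

def correlation_cholesky(rho):
    # Lower-triangular factor L with L L^T = [[1, rho], [rho, 1]]
    return [[1.0, 0.0],
            [rho, math.sqrt(1.0 - rho ** 2)]]

def reconstruct(L):
    # L L^T for a 2x2 lower-triangular matrix
    return [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = reconstruct(correlation_cholesky(0.3))
print([[round(v, 10) for v in row] for row in M])  # [[1.0, 0.3], [0.3, 1.0]]
```

The first row of L being (1, 0) is exactly what makes f^t depend on w^t alone, so w^t stays interpretable as the target-domain weights.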
{
"text": "We estimate the parameters (\u03b1, \u03bb, \u03c1) of the model using penalized maximum likelihood. For \u03b1 and \u03bb, Gamma prior distributions are used. For \u03c1, we impose a distribution with density p_\u03c1(\u03c1) = 1 + a \u2212 2a\u03c1 for \u03c1 \u2208 [0, 1], where a \u2208 [\u22121, 1]. This distribution is supported only on [0, 1]; negative \u03c1s are not supported because we think that negative transfer of information from source to target domain prompts in this essay scoring task is improbable. In our application, we slightly bias the correlations towards zero with a = 1/10 in order to ameliorate spurious correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "For the training data, let there be n^t examples in the target domain and n^s in the source domain. Let X^t (resp. X^s) be the n^t-by-p (resp. n^s-by-p) design matrix for the training data in the target (resp. source) domain. Let y^t and y^s be the corresponding observed essay scores. The expanded feature matrix due to equation (2) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "X := [X^t 0; \u03c1X^s (1 \u2212 \u03c1^2)^{1/2} X^s].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "Similarly, let y be the stacking of y^t and y^s. Let K := \u03bb^{-1} X X^T + \u03b1^{-1} I, which is also known as the Gramian of the observations. The log marginal likelihood of the training data is (Rasmussen and Williams, 2006)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "L = \u2212(1/2) y^T K^{-1} y \u2212 (1/2) log |K| \u2212 ((n^t + n^s)/2) log 2\u03c0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "This is penalized to give L_p by adding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "(\u03b1_1 \u2212 1) log \u03b1 \u2212 \u03b1_2 \u03b1 + \u03b1_1 log \u03b1_2 \u2212 log \u0393(\u03b1_1) + (\u03bb_1 \u2212 1) log \u03bb \u2212 \u03bb_2 \u03bb + \u03bb_1 log \u03bb_2 \u2212 log \u0393(\u03bb_1) + log(1 + a \u2212 2a\u03c1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "The estimation of these parameters is then done by optimising L_p. In our implementation, we use scikit-learn for estimating \u03b1 and \u03bb in an inner loop, and we use gradient descent for estimating \u03c1 in the outer loop using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "\u2202L_p/\u2202\u03c1 = (1/2) tr[(\u03b3\u03b3^T \u2212 K^{-1}) \u2202K/\u2202\u03c1] \u2212 2a/(1 + a \u2212 2a\u03c1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
{
"text": "\u03b3 := K^{-1} y and \u2202K/\u2202\u03c1 = \u03bb^{-1} [0 X^t (X^s)^T; X^s (X^t)^T 0].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood Estimation",
"sec_num": "4.1"
},
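The marginal likelihood and its ρ-gradient can be sketched with NumPy as below. This is our own illustration, not the paper's modified scikit-learn code; a finite-difference comparison serves as a self-check that the analytic gradient matches the likelihood:

```python
import numpy as np

def gramian(Xt, Xs, rho, alpha, lam):
    # Expanded design matrix (equation (2)) and K = lam^{-1} X X^T + alpha^{-1} I
    c = np.sqrt(1.0 - rho ** 2)
    X = np.block([[Xt, np.zeros_like(Xt)], [rho * Xs, c * Xs]])
    return X @ X.T / lam + np.eye(X.shape[0]) / alpha

def log_marginal_likelihood(y, K):
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * y @ np.linalg.solve(K, y) - 0.5 * logdet - 0.5 * len(y) * np.log(2 * np.pi)

def dK_drho(Xt, Xs, lam):
    # dK/drho = lam^{-1} [[0, X^t (X^s)^T], [X^s (X^t)^T, 0]]
    off = Xt @ Xs.T
    nt, ns = Xt.shape[0], Xs.shape[0]
    return np.block([[np.zeros((nt, nt)), off], [off.T, np.zeros((ns, ns))]]) / lam

def dL_drho(y, K, dK):
    # (1/2) tr[(gamma gamma^T - K^{-1}) dK/drho], with gamma = K^{-1} y
    gamma = np.linalg.solve(K, y)
    return 0.5 * np.trace((np.outer(gamma, gamma) - np.linalg.inv(K)) @ dK)

# Finite-difference check on random data
rng = np.random.default_rng(0)
Xt, Xs = rng.normal(size=(3, 2)), rng.normal(size=(4, 2))
y = rng.normal(size=7)
alpha, lam, rho, eps = 2.0, 1.5, 0.3, 1e-6
analytic = dL_drho(y, gramian(Xt, Xs, rho, alpha, lam), dK_drho(Xt, Xs, lam))
numeric = (log_marginal_likelihood(y, gramian(Xt, Xs, rho + eps, alpha, lam))
           - log_marginal_likelihood(y, gramian(Xt, Xs, rho - eps, alpha, lam))) / (2 * eps)
print(abs(analytic - numeric) < 1e-4)
```

The penalty contributes the extra term −2a/(1 + a − 2aρ) to ∂L_p/∂ρ; it is a constant-cost addition and is omitted from the check above.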
{
"text": "We report the mean prediction as the score of an essay. This uses the mean weight vector w\u0304 = \u03bb^{-1} X^T K^{-1} y \u2208 R^{2p}, which may be partitioned into two vectors w\u0304^t and w\u0304^s, each in R^p. The prediction for a new essay represented by x_* in the target domain is then given by x_*^T w\u0304^t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "4.2"
},
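A minimal sketch (ours, assuming NumPy) of the mean-prediction step:

```python
import numpy as np

def mean_weights(X, K, y, lam):
    # Posterior mean weight vector: w_bar = lam^{-1} X^T K^{-1} y, in R^{2p}
    return X.T @ np.linalg.solve(K, y) / lam

def predict_target(x_new, w_bar):
    # A target-domain essay uses only the target block w_bar^t (first p entries)
    p = len(x_new)
    return float(np.dot(x_new, w_bar[:p]))

# Tiny worked example with p = 1: two training rows, near-noiseless scores
X = np.array([[1.0, 0.0],   # expanded target-domain row <x, 0>
              [0.0, 1.0]])  # a source-only direction
y = np.array([2.0, 3.0])
lam, alpha = 1.0, 1e6
K = X @ X.T / lam + np.eye(2) / alpha
w_bar = mean_weights(X, K, y, lam)
print(round(predict_target([1.0], w_bar), 3))  # 2.0
```

With essentially no observation noise, the target prediction reproduces the target-domain training score, as expected.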
{
"text": "In this section, we will give a brief description of the dataset we use, describe our experimental setup, and explain the evaluation metric we use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use the ASAP dataset 4 for our domain adaptation experiments. This dataset contains 8 prompts of different genres. The average length of the essays differs for each prompt, ranging from 150 to 650 words. The essays were written by students ranging from grade 7 to grade 10. All the essays were graded by at least 2 human graders. The genres include narrative, argumentative, and response essays. The prompts also have different score ranges, as shown in Table 2. We pick four pairs of essay prompts to perform our experiments. In each experiment, one of the essay prompts from the pair will be the source domain and the other essay prompt will be the target domain. The essay set pairs we choose are 1 \u2192 2, 3 \u2192 4, 5 \u2192 6, and 7 \u2192 8, where the pair 1 \u2192 2 denotes using prompt 1 as the source domain and prompt 2 as the target domain, for example. These pairs are chosen based on the similarities in their genres, score ranges, and median scores. The aim is to have similar source and target domains for effective domain adaptation.",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We use 5-fold cross validation on the ASAP training data for evaluation. This is because the official test data of the competition is not released to the public. We divide the target domain data randomly into 5 folds. One fold is used as the test data, while the remaining four folds are collected together and then sub-sampled to obtain the target-domain training data. The sizes of the sub-sampled target-domain training data are 10, 25, 50 and 100, with the larger sets containing the smaller sets. All essays from the source domain are used. Table 2: Selected details of the ASAP data, as (Set, # Essays, Genre, Avg len, Score range, Median): (1, 1,783, ARG, 350, 2-12, 8); (2, 1,800, ARG, 350, 1-6, 3); (3, 1,726, RES, 150, 0-3, 1); (4, 1,772, RES, 150, 0-3, 1); (5, 1,805, RES, 150, 0-4, 2); (6, 1,800, RES, 150, 0-4, 2); (7, 1,569, NAR, 250, 0-30, 16); (8, 723, NAR, 650, 0-60, 36). For the genre column, ARG denotes argumentative essays, RES denotes response essays, and NAR denotes narrative essays.",
"cite_spans": [],
"ref_spans": [
{
"start": 545,
"end": 807,
"text": "Genre Avg len Range Median 1 1,783 ARG 350 2-12 8 2 1,800 ARG 350 1-6 3 3 1,726 RES 150 0-3 1 4 1,772 RES 150 0-3 1 5 1,805 RES 150 0-4 2 6 1,800 RES 150 0-4 2 7 1,569 NAR 250 0-30 16 8 723 NAR 650 0-60 36 Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "Our evaluation considers the following six ways in which we train the AES model: SourceOnly Using essays from the source domain only;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "TargetOnly Using 10, 25, 50, and 100 sampled essays from the target domain only;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "SharedHyper Using correlated Bayesian linear ridge regression (BLRR) with \u03c1 fixed to 0 on source domain essays and sampled essays from the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "EasyAdapt As SharedHyper, but with \u03c1 = 0.5;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "Concat As SharedHyper, but with \u03c1 fixed to 1.0;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "ML-\u03c1 Using correlated BLRR with \u03c1 maximizing the likelihood of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "Since the source and target domain may have different score ranges, we scale the scores linearly to range from \u22121 to 1. When predicting on the test essays, the predicted scores of our system will be linearly scaled back to the target domain score range and rounded to the nearest integer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
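The scaling step can be sketched directly. This is our illustration, using prompt 1's 2-12 score range from Table 2:

```python
def scale_to_unit(score, lo, hi):
    # Linearly map a raw score in [lo, hi] to [-1, 1]
    return 2.0 * (score - lo) / (hi - lo) - 1.0

def unscale(y, lo, hi):
    # Map a prediction back to the score range and round to the nearest integer
    return round((y + 1.0) / 2.0 * (hi - lo) + lo)

print(scale_to_unit(7, 2, 12))  # 0.0 (the midpoint of the range)
print(unscale(0.0, 2, 12))      # 7
```

Because both domains are mapped to the same [-1, 1] interval, the regression weights transfer even when the source and target prompts use different score scales.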
{
"text": "We build upon scikit-learn's implementation of BLRR for our learning algorithm. To ameliorate the effects of different scales of features, we normalize the features: length, POS, and prompt features are linearly scaled to range from 0 to 1 according to the training data; and the feature values for bag-of-words features are log(1 + count) instead of the actual counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
{
"text": "We use scikit-learn version 0.15.2, NLTK version 2.0b7, and aspell version 0.60.6.1 in this experiment. The BLRR code (bayes.py) in scikit-learn is modified to obtain valid likelihoods for use in the outer loop for estimating \u03c1. We use scikit-learn's default value of 10^{-6} for the parameters \u03b1_1, \u03b1_2, \u03bb_1, and \u03bb_2. Table 3: In-domain experimental results.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Score Set # Essays",
"sec_num": null
},
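Assuming scikit-learn's `BayesianRidge` estimator (the public interface to the BLRR code in `bayes.py`), the hyperparameters mentioned above can be set explicitly as shown below; the toy data is ours:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.RandomState(0)
X = rng.rand(50, 4)                               # toy feature matrix
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.randn(50)

# alpha_1, alpha_2, lambda_1, and lambda_2 all default to 1e-6;
# compute_score=True records the log marginal likelihood per iteration,
# the quantity the paper's modified bayes.py exposes for estimating rho.
model = BayesianRidge(alpha_1=1e-6, alpha_2=1e-6,
                      lambda_1=1e-6, lambda_2=1e-6,
                      compute_score=True)
model.fit(X, y)
```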
{
"text": "Quadratic weighted kappa (QWK) is used to measure the agreement between the human rater and the system. We choose this evaluation metric since it is the official evaluation metric of the ASAP competition. Other work that uses the ASAP dataset, such as (Chen and He, 2013), also uses this metric. QWK is calculated as",
"cite_spans": [
{
"start": 231,
"end": 250,
"text": "(Chen and He, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.3"
},
{
"text": "\u03ba = 1 \u2212 \\frac{\\sum_{i,j} w_{i,j} O_{i,j}}{\\sum_{i,j} w_{i,j} E_{i,j}},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.3"
},
{
"text": "where O = (O_{i,j}), W = (w_{i,j}), and E = (E_{i,j}) are the matrices of observed scores, weights, and expected scores respectively. Entry O_{i,j} is the number of essays that receive a score of i from the first rater and a score of j from the second rater. The weight entries are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.3"
},
{
"text": "w_{i,j} = (i \u2212 j)^2 / (N \u2212 1)^2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.3"
},
{
"text": "where N is the number of possible ratings. Matrix E is calculated by taking the outer product of the two raters' score histogram vectors, which is then normalized to have the same sum as O.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "5.3"
},
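Putting the three formulas together, a self-contained QWK sketch (assuming integer scores already shifted to start at 0; the function name is ours):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, num_ratings):
    """QWK between two lists of integer scores in {0, ..., num_ratings - 1}."""
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)

    # O[i, j]: number of essays scored i by rater A and j by rater B.
    O = np.zeros((num_ratings, num_ratings))
    for a, b in zip(rater_a, rater_b):
        O[a, b] += 1

    # E: outer product of the two score histograms, normalized to sum(O).
    E = np.outer(np.bincount(rater_a, minlength=num_ratings),
                 np.bincount(rater_b, minlength=num_ratings)).astype(float)
    E *= O.sum() / E.sum()

    # w[i, j] = (i - j)^2 / (N - 1)^2
    i, j = np.indices((num_ratings, num_ratings))
    W = (i - j) ** 2 / (num_ratings - 1) ** 2

    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives a kappa of 1; chance-level agreement gives 0, and systematic disagreement can be negative.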
{
"text": "In-domain results for comparison First, we determine indicative upper bounds on the QWK scores using Bayesian linear ridge regression (BLRR). To this end, we perform 5-fold cross-validation by training and testing within each domain. This is also done with linear support vector machine (SVM) regression to confirm that BLRR is a competitive method for this task. In addition, since the ASAP data has at least 2 human annotators for each essay, we also calculate the human agreement score. The results are shown in Table 3. We see that the BLRR scores are close to the human agreement scores for prompt 1. The bag-of-words weight ratios discussed next are computed using weights learned in the in-domain setting; see Table 1 for the complete list of features. For domains 2, 4, 6, and 8, which are the target domains in the domain adaptation experiments, these ratios are 0.37, 0.73, 0.69, and 0.93. The ratios for the other four domains are similarly high. This shows that bag-of-words features play a significant role in the prediction of the essay scores. We examine the number of bag-of-words features that 100 additional target domain essays would add to SourceOnly; that is, we compare the bag-of-words features of SourceOnly with those of SharedHyper, EasyAdapt, Concat, and ML-\u03c1 for n_t = 100.",
"cite_spans": [],
"ref_spans": [
{
"start": 665,
"end": 672,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "The numbers of these additional features, averaged over the five folds, are 269, 351, 377, and 291 for target domains 2, 4, 6, and 8 respectively. In terms of percentages, these are 67%, 87%, 94%, and 72% more features over SourceOnly. Such a large number of additional bag-of-words features contributed by target-domain essays, together with the fact that these features are given high weights, means that target-domain essays are important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "We now compare the four domain adaptation methods: SharedHyper, EasyAdapt, Concat, and ML-\u03c1. Recall that the first three are constrained cases of the last, with \u03c1 fixed to 0, 0.5, and 1 respectively. First, we see that SharedHyper is a rather poor domain adaptation method for AES, because it gives the lowest QWK score, except when using 25, 50, and 100 target essays in adapting from prompt 7 to prompt 8, where it is better than Concat. In fact, its scores are generally close to the TargetOnly scores. This is unsurprising, since in SharedHyper the weights are effectively not shared between the target and source training examples: only the hyper-parameters \u03b1 and \u03bb are shared. This is a weak form of information sharing between the target and source domains. Hence, we expect it to perform suboptimally when the target and source domains bear more than a spurious relationship, which is indeed the case here because we have chosen the source and target domain pairs based on their similarities, as described in Section 5.1. We now focus on EasyAdapt, Concat, and ML-\u03c1, which are the better domain adaptation methods in our results. We see that ML-\u03c1 gives either the best or second-best scores, except for the one case of 5 \u2192 6 with 10 target essays. In comparison, although Concat performs consistently well for 1 \u2192 2, 3 \u2192 4, and 5 \u2192 6, its QWK scores for 7 \u2192 8 are quite poor and even lower than those of TargetOnly for 25 or more target essays. In contrast to Concat, EasyAdapt performs well for 7 \u2192 8 but not so well for the other three domain pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing domain adaptation methods",
"sec_num": null
},
{
"text": "Let us examine the reason for the contrasting results between EasyAdapt and Concat to appreciate the flexibility afforded by ML-\u03c1. The values of \u03c1 estimated by ML-\u03c1 for the pairs 1 \u2192 2, 3 \u2192 4, 5 \u2192 6, and 7 \u2192 8 with 100 target essays are 0.81, 0.97, 0.76, and 0.63, averaged over five folds. The lower estimated correlation \u03c1 for 7 \u2192 8 means that prompt 7 and prompt 8 are not as similar as the other pairs are. In such a case, Concat, which in effect treats the target domain as identical to the source domain, can perform very poorly. For the other three pairs, which are more similar, the correlation of 0.5 assumed by EasyAdapt is not strong enough to fully exploit the similarities between the domains. Unlike Concat and EasyAdapt, ML-\u03c1 has the flexibility to move effectively between different degrees of domain similarity or relatedness based on the source domain and target domain training data. In view of this, we consider ML-\u03c1 to be a competitive default domain adaptation algorithm for the AES task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing domain adaptation methods",
"sec_num": null
},
{
"text": "In retrospect, our results make it clear why prompts 7 and 8 are not as similar as we would have hoped for more effective domain adaptation. Both prompts ask for narrative essays, which by nature are very prompt-specific and require words and phrases relating directly to the prompts. In fact, referring to the earlier discussion of the contributions by target-domain essays, we see that the weights for the bag-of-words features for prompt 8 contribute a high 93% of the total. When we examine the bag-of-words features, we see that prompt 7 (which is to write about patience) contributes only 19% of the bag-of-words features of prompt 8 (which is to write about laughter) in the in-domain experiment. This means that 81% of the bag-of-words features, which are important to narrative essays, must be contributed by the target-domain essays relating to prompt 8. Future work on domain adaptation for AES can explore choosing the prior p_\u03c1 on \u03c1 to better reflect the nature of the essays involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing domain adaptation methods",
"sec_num": null
},
{
"text": "In this work, we investigate the effectiveness of domain adaptation when only a small number of target domain essays is available. We have shown that domain adaptation can achieve better results than using just the small amount of target domain data, or just a large amount of data from a different domain. As such, our research will help reduce the annotation work that human graders must do when a new prompt is introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.kaggle.com/c/asap-aes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.vantagelearning.com/products/intellimetric/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/edx/ease",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.kaggle.com/c/asap-aes/data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table 4: QWK scores of the six methods on four domain adaptation experiments, ranging from using 10 target-domain essays (second column) to 100 target-domain essays (fifth column). The scores are averages over 5 folds. Setting a \u2192 b means the AES system is trained on essay set a and tested on essay set b. For each set of six results comparing the methods, the best score is boldfaced and the second-best score is underlined. The BLRR scores are close to the human agreement scores for prompts 5 to 8, but fall short by 10% to 20% for prompts 2 to 4. We also see that BLRR is comparable to linear SVM regression, giving almost the same performance for prompts 4 to 7, slightly poorer performance for prompts 1 to 3, and much better performance for prompt 8. The subsequent discussion in this section will refer to the BLRR scores in Table 3 for in-domain scores. Importance of domain adaptation The results of the domain adaptation experiments are tabulated in Table 4, where the best scores are boldfaced and the second-best scores are underlined. As expected, for pairs 1 \u2192 2, 3 \u2192 4, and 5 \u2192 6, all the scores are below their corresponding upper bounds from the in-domain setting in Table 3. However, for pair 7 \u2192 8, the QWK score for domain adaptation with 100 target essays outperforms that of the in-domain setting, albeit only by 0.4%. This can be explained by the small number of essays in prompt 8 that can be used in both the in-domain and domain adaptation settings, and by the fact that domain adaptation additionally involves prompt 7, which has more than twice the number of essays; see column two in Table 2. Hence, domain adaptation is effective in the context of a small number of target essays combined with a large number of source essays. This can also be seen in Table 4, where we have simulated small sets of target essays with sizes 10, 25, 50, and 100. When we compare the scores of TargetOnly against the best and second-best scores, we find that domain adaptation is effective and important in improving the QWK scores. 
By the above argument alone, one might have thought that an overwhelmingly large number of source domain essays would be sufficient for the target domain. However, this is not true. When we compare the scores of SourceOnly against the best and second-best scores, we find that domain adaptation again improves the QWK scores. In fact, with just 10 additional target domain essays, effective domain adaptation can improve over SourceOnly for all target domains 2, 4, 6, and 8. This is the first time that the effects of domain adaptation have been shown in the AES task. In addition, the large improvement with a small number of additional target domain essays in 5 \u2192 6 and 7 \u2192 8 suggests the highly domain-dependent nature of the task: learning on one essay prompt and testing on another should be strongly discouraged. Contributions by target-domain essays It is instructive to understand why domain adaptation is important for AES. To this end, we estimate the contribution of bag-of-words features to the overall prediction by computing the ratio \\frac{\\sum_{i \\in \\text{bag-of-words features}} w_i^2}{\\sum_{i \\in \\text{all features}} w_i^2}",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 4",
"ref_id": null
},
{
"start": 779,
"end": 786,
"text": "Table 3",
"ref_id": null
},
{
"start": 906,
"end": 913,
"text": "Table 4",
"ref_id": null
},
{
"start": 1131,
"end": 1138,
"text": "Table 3",
"ref_id": null
},
{
"start": 1539,
"end": 1546,
"text": "Table 2",
"ref_id": null
},
{
"start": 1696,
"end": 1703,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "QWK Scores",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automated essay scoring with e-rater\u00ae v. 2.0",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2004,
"venue": "Educational Testing Service",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Jill Burstein. 2004. Automated es- say scoring with e-rater R v. 2.0. Technical report, Educational Testing Service.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint emotion analysis via multi-task Gaussian processes",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Beck, Trevor Cohn, and Lucia Specia. 2014. Joint emotion analysis via multi-task Gaussian pro- cesses. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Domain adaptation with structural correspondence learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of the 2006 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automated essay scoring by maximizing human-machine agreement",
"authors": [
{
"first": "Hongbo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adap- tation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. Bradford.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the interpretation of \u03c7 2 from contingency tables, and the calculation of p",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fisher",
"suffix": ""
}
],
"year": 1922,
"venue": "Journal of the Royal Statistical Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald A Fisher. 1922. On the interpretation of \u03c7 2 from contingency tables, and the calculation of p. Journal of the Royal Statistical Society.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The intelligent essay assessor: Applications to educational technology",
"authors": [
{
"first": "W",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Darrell",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Laham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Landauer",
"suffix": ""
}
],
"year": 1999,
"venue": "Interactive Multimedia Electronic Journal of Computer-Enhanced Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W Foltz, Darrell Laham, and Thomas K Lan- dauer. 1999. The intelligent essay assessor: Appli- cations to educational technology. Interactive Mul- timedia Electronic Journal of Computer-Enhanced Learning.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An empirical comparison of features and tuning for phrase-based machine translation",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Daniel Cer, and Christopher D. Man- ning. 2014. An empirical comparison of features and tuning for phrase-based machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ets: domain adaptation and stacking for short answer scoring",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Nitin Madnani. 2013. Ets: do- main adaptation and stacking for short answer scor- ing. In Proceedings of the Seventh International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Instance weighting for domain adaptation in NLP",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In Pro- ceedings of the 45th Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An introduction to latent semantic analysis",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Darrell",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Laham",
"suffix": ""
}
],
"year": 1998,
"venue": "Discourse Processes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer, Peter W Foltz, and Darrell La- ham. 1998. An introduction to latent semantic anal- ysis. Discourse Processes.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic essay grading using text categorization techniques",
"authors": [
{
"first": "S",
"middle": [],
"last": "Leah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Larkey",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leah S Larkey. 1998. Automatic essay grading using text categorization techniques. In Proceedings of the 21st International ACM SIGIR Conference on Re- search and Development in Information Retrieval.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word segmentation of informal Arabic with domain adaptation",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Monroe, Spence Green, and Christopher D Man- ning. 2014. Word segmentation of informal Ara- bic with domain adaptation. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Computer grading of student prose, using modern concepts and software",
"authors": [
{
"first": "Ellis",
"middle": [],
"last": "Batten",
"suffix": ""
},
{
"first": "Page",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1994,
"venue": "The Journal of Experimental Education",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellis Batten Page. 1994. Computer grading of student prose, using modern concepts and software. The Journal of Experimental Education.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gaussian Processes for Machine Learning",
"authors": [
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"K I"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. MIT Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automated essay scoring using Bayes' theorem",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Tahung",
"middle": [],
"last": "Rudner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2002,
"venue": "The Journal of Technology, Learning and Assessment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence M Rudner and Tahung Liang. 2002. Au- tomated essay scoring using Bayes' theorem. The Journal of Technology, Learning and Assessment.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Handbook of Automated Essay Evaluation: Current Applications and New Directions",
"authors": [],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D. Shermis and Jill Burstein, editors. 2013. Handbook of Automated Essay Evaluation: Current Applications and New Directions. Routledge.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Word sense disambiguation using OntoNotes: An empirical study",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Zhong, Hwee Tou Ng, and Yee Seng Chan. 2008. Word sense disambiguation using OntoNotes: An empirical study. In Proceedings of the 2008 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Number of sentence-ending punctuation symbols (\".\", \"?\", or \"!\") Average word length Part of speech (POS) Number of bad POS n-grams Number of bad POS n-grams divided by the total number of words in the essay Prompt Number of words in the essay that appear in the prompt Number of words in the essay that appear in the prompt divided by the total number of words in the essay Number of words in the essay that are words or synonyms of words appearing in the prompt Number of words in the essay that are words or synonyms of words appearing in the prompt divided by the total number of words in the essay Bag of words Count of useful unigrams and bigrams (unstemmed) Count of stemmed and spell-corrected useful unigrams and bigrams"
},
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}