|
{ |
|
"paper_id": "W18-0506", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:26:33.846031Z" |
|
}, |
|
"title": "Second Language Acquisition Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Burr", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brust", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Gustafson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Masato", |
|
"middle": [], |
|
"last": "Hagiwara", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "ETS", |
|
"location": { |
|
"settlement": "Princeton", |
|
"region": "NJ", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present the task of second language acquisition (SLA) modeling. Given a history of errors made by learners of a second language, the task is to predict errors that they are likely to make at arbitrary points in the future. We describe a large corpus of more than 7M words produced by more than 6k learners of English, Spanish, and French using Duolingo, a popular online language-learning app. Then we report on the results of a shared task challenge aimed studying the SLA task via this corpus, which attracted 15 teams and synthesized work from various fields including cognitive science, linguistics, and machine learning.", |
|
"pdf_parse": { |
|
"paper_id": "W18-0506", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present the task of second language acquisition (SLA) modeling. Given a history of errors made by learners of a second language, the task is to predict errors that they are likely to make at arbitrary points in the future. We describe a large corpus of more than 7M words produced by more than 6k learners of English, Spanish, and French using Duolingo, a popular online language-learning app. Then we report on the results of a shared task challenge aimed studying the SLA task via this corpus, which attracted 15 teams and synthesized work from various fields including cognitive science, linguistics, and machine learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As computer-based educational apps increase in popularity, they generate vast amounts of student learning data which can be harnessed to drive personalized instruction. While there have been some recent advances for educational software in domains like mathematics, learning a language is more nuanced, involving the interaction of lexical knowledge, morpho-syntactic processing, and several other skills. Furthermore, most work that has applied natural language processing to language learner data has focused on intermediate-toadvanced students of English, particularly in assessment settings. Much less work has been devoted to beginners, learners of languages other than English, or ongoing study over time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose second language acquisition (SLA) modeling as a new computational task to help broaden our understanding in this area. First, we describe a new corpus of language learner data, containing more than 7.1M words, annotated for production errors that were made by more than 6.4k learners of English, Spanish, and French, during their first 30 days of learning with Duolingo (a popular online language-learning app).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Then we report on the results of a \"shared task\" challenge organized by the authors using this SLA modeling corpus, which brought together 15 research teams. Our goal for this work is threefold: (1) to synthesize years of research in cognitive science, linguistics, and machine learning, (2) to facilitate cross-dialog among these disciplines through a common large-scale empirical task, and in so doing (3) to shed light on the most effective approaches to SLA modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our learner trace data comes from Duolingo: a free, award-winning, online language-learning platform. Since launching in 2012, more than 200 million learners worldwide have enrolled in Duolingo's game-like courses, either via the website 1 or mobile apps. Figure 1 (a) is a screen-shot of the home screen, which specifies the game-like curriculum. Each icon represents a skill, aimed at teaching thematically or grammatically grouped words or concepts. Learners can tap an icon to access lessons of new material, or to review material once all lessons are completed. Learners can also choose to get a personalized practice session that reviews previouslylearned material from anywhere in the course by tapping the \"practice weak skills\" button.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 264, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shared Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To create the SLA modeling corpus, we sampled from Duolingo users who registered for a course and reached at least the tenth row of skill icons within the month of November 2015. By limiting the data to new users who reach this level of the course, we hope to better capture beginners' broader language-learning process, including repeated interaction with vocabulary and grammar Figure 1: Duolingo screen-shots for an English-speaking student learning French (iPhone app, 2017). (a) The home screen, where learners can choose to do a \"skill\" lesson to learn new material, or get a personalized practice session by tapping the \"practice weak skills\" button. (b-d) Examples of the three exercise types included in our shared task experiments, which require the student to construct responses in the language they are learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Collection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "over time. Note that we excluded all learners who took a placement test to skip ahead in the course, since these learners are likely more advanced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Collection", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "An important question for SLA modeling is: to what extent does an approach generalize across languages? While the majority of Duolingo users learn English-which can significantly improve job prospects and quality of life (Pinon and Haydon, 2010 )-Spanish and French are the second and third most popular courses. To encourage researchers to explore language-agnostic features, or unified cross-lingual modeling approaches, we created three tracks: English learners (who speak Spanish), Spanish learners (who speak English), and French learners (who speak English).", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 244, |
|
"text": "(Pinon and Haydon, 2010", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Three Language Tracks", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The goal of the task is as follows: given a history of token-level errors made by the learner in the learning language (L2), accurately predict the errors they will make in the future. In particular, we focus on three Duolingo exercise formats that require the learners to engage in active recall, that is, they must construct answers in the L2 through translation or transcription. Figure 1 (b) illustrates a reverse translate item, where learners are given a prompt in the language they know (e.g., their L1 or native language), and translate it into the L2. Figure 1 (c) illustrates a reverse tap item, which is a simpler version of the same format: learners construct an answer using a bank of words and distractors. Figure 1 (d) is a listen item, where learners hear an utterance in the L2 they are learning, and must transcribe it. Duolingo does include many other exercise formats, but we focus on these three in the current work, since constructing L2 responses through translation or transcription is associated with deeper levels of processing, which in turn is more strongly associated with learning (Craik and Tulving, 1975) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1111, |
|
"end": 1136, |
|
"text": "(Craik and Tulving, 1975)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 391, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 569, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 729, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Label Prediction Task", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Since each exercise can have multiple correct answers (due to synonyms, homophones, or ambiguities in tense, number, formality, etc.), Duolingo uses a finite-state machine to align the learner's response to the most similar reference answer form a large set of acceptable responses, based on token string edit distance (Levenshtein, 1966) . For example, Figure 1 2 shows how we use these alignments to generate labels for the SLA modeling task. In this case, an English (from Spanish) learner was asked to translate, \"\u00bfCu\u00e1ndo puedo ayudar?\" and wrote \"wen can help\" instead of \"When can I help?\" This produces two errors (a typo and a missing pronoun). We ignore capitalization, punctuation, and accents when matching tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 338, |
|
"text": "(Levenshtein, 1966)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 362, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Label Prediction Task", |
|
"sec_num": "2.3" |
|
}, |
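
{

"text": "To make the labeling procedure concrete, the following Python sketch approximates it: it selects the acceptable answer closest to the learner's response under a rough token-level edit distance, then labels each reference token 0 if it aligns to a response token and 1 otherwise. This is an illustration only; Duolingo's production system is a finite-state machine, accent folding is omitted, and the function names and the use of difflib are our own.\n\nimport re\nfrom difflib import SequenceMatcher\n\ndef normalize(token):\n    # ignore capitalization and punctuation (accent folding omitted in this sketch)\n    return re.sub(r'[^\\w]', '', token.lower())\n\ndef closest_reference(response, acceptable):\n    # pick the acceptable answer with the smallest insert/delete token edit distance\n    resp = [normalize(t) for t in response.split()]\n    def distance(ref):\n        tokens = [normalize(t) for t in ref.split()]\n        m = SequenceMatcher(a=tokens, b=resp, autojunk=False)\n        matched = sum(block.size for block in m.get_matching_blocks())\n        return len(tokens) + len(resp) - 2 * matched\n    return min(acceptable, key=distance)\n\ndef label_tokens(reference, response):\n    # label each reference token 0 (matched in the response) or 1 (error)\n    ref = [normalize(t) for t in reference.split()]\n    resp = [normalize(t) for t in response.split()]\n    labels = [1] * len(ref)\n    for block in SequenceMatcher(a=ref, b=resp, autojunk=False).get_matching_blocks():\n        for i in range(block.a, block.a + block.size):\n            labels[i] = 0\n    return list(zip(reference.split(), labels))\n\n# label_tokens('When can I help', 'wen can help') -> [('When', 1), ('can', 0), ('I', 1), ('help', 0)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Label Prediction Task",

"sec_num": "2.3"

},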
|
{ |
|
"text": "Sample data from the resulting corpus can be found in Figure 3 . Each token from the reference answer is labeled according to the alignment with the learner's response (the final column: 0 for correct and 1 for incorrect). Tokens are grouped together by exercise, including user-, exercise-, and session-level meta-data in the previous line (marked by the # character). We included all exercises done by the users sampled from the 30-day data collection window.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 62, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Set Format", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The overall format is inspired by the Universal Dependencies (UD) format 2 . Column 1 is a unique B64-encoded token ID, column 2 is a token (word), and columns 3-6 are morpho-syntactic features from the UD tag set (part of speech, morphology features, and dependency parse labels and edges). These were generated by processing the aligned reference answers with Google SyntaxNet (Andor et al., 2016) . Because UD tags are meant to be language-agnostic, it was our goal to help make cross-lingual SLA modeling more straightforward by providing these features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 399, |
|
"text": "(Andor et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set Format", |
|
"sec_num": "2.4" |
|
}, |
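
{

"text": "A minimal reader for this format might look like the sketch below. It assumes, as in Figure 3, that exercises are separated by blank lines, that meta-data lines start with the # character and hold space-separated key:value pairs, and that the label appears as a final seventh column in the labeled TRAIN and DEV releases; the field names are our own.\n\ndef read_slam_exercises(path):\n    exercises, tokens, meta = [], [], {}\n    for line in open(path, encoding='utf-8'):\n        line = line.strip()\n        if not line:  # a blank line closes the current exercise\n            if tokens:\n                exercises.append({'meta': meta, 'tokens': tokens})\n                tokens, meta = [], {}\n            continue\n        if line.startswith('#'):  # exercise meta-data line\n            meta = dict(kv.split(':', 1) for kv in line.lstrip('# ').split())\n            continue\n        cols = line.split()  # one reference-answer token per line\n        token = {'id': cols[0], 'token': cols[1], 'pos': cols[2],\n                 'morph': cols[3], 'dep_label': cols[4], 'dep_head': cols[5]}\n        if len(cols) > 6:  # label column (absent from the unlabeled TEST release)\n            token['label'] = int(cols[6])\n        tokens.append(token)\n    if tokens:\n        exercises.append({'meta': meta, 'tokens': tokens})\n    return exercises",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Set Format",

"sec_num": "2.4"

},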
|
{ |
|
"text": "Exercise meta-data includes the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set Format", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u2022 user: 8-character unique anonymous user ID for each learner (B64-encoded) \u2022 countries: 2-character ISO country codes from which this learner has done exercises \u2022 days: number of days since the learner started learning this language on Duolingo \u2022 client: session device platform \u2022 session: session type (e.g., lesson or practice) \u2022 format: exercise format (see Figure 1) \u2022 time: the time (in seconds) it took the learner to submit a response for this exercise. Lesson sessions (about 77% of the data set) are where new words or concepts are introduced, although lessons also include previously-learned material (e.g., each exercise attempts to introduce only one new word or inflection, so all other tokens should have been seen by the student be- . fore). Practice sessions (22%) should contain only previously-seen words and concepts. Test sessions (1%) are mini-quizzes that allow a student to skip out of a single skill in the curriculum (i.e., the student may have never seen this content before in the Duolingo app, but may well have had prior knowledge before starting the course). It is worth mentioning that for the shared task, we did not provide actual learner responses, only the closest reference answers. Releasing such data (at least in the TEST set) would by definition give away the labels and might undermine the task. However, we plan to release a future version of the corpus that is enhanced with additional meta-data, including the actual learner responses.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 371, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Set Format", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "2 http://universaldependencies.org TRAIN DEV TEST Track Users Tokens (Err) Tokens (Err) Tokens (Err) English 2.6k 2.6M (13%) 387k (14%) 387k (15%) Spanish 2.6k 2.0M (14%) 289k (16%) 282k (16%) French 1.2k 927k (16%) 138k (18%) 136k (18%) Overall 6.4k 5.5M (14%) 814k (15%) 804k (16%)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set Format", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The data were released in two phases. In phase 1 (8 weeks), TRAIN and DEV partitions were released with labels, along with a baseline system and evaluation script, for system development. In phase 2 (10 days), the TEST partition was released without labels, and teams submitted predictions to CodaLab 3 for blind evaluation. To allow teams to compare different system parameters or features, they were allowed to submit up to 10 predictions total (up to 2 per day) during this phase. Table 1 reports summary statistics for each of the data partitions for all three tracks. We created TRAIN, DEV, and TEST partitions as follows. For each user, the first 80% of their exercises were placed in the TRAIN set, the subsequent 10% in DEV, and the final 10% in TEST. Hence the three data partitions are sequential, and contain ordered observations for all users.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 484, |
|
"end": 491, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Challenge Timeline", |
|
"sec_num": "2.5" |
|
}, |
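
{

"text": "Since the split is sequential within each user's history, it can be sketched in a few lines of Python (assuming each user's exercises are already in chronological order):\n\ndef split_user_history(exercises):\n    # first 80% of a user's time-ordered exercises -> TRAIN, next 10% -> DEV, final 10% -> TEST\n    n = len(exercises)\n    train_end, dev_end = int(0.8 * n), int(0.9 * n)\n    return exercises[:train_end], exercises[train_end:dev_end], exercises[dev_end:]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Challenge Timeline",

"sec_num": "2.5"

},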
|
{ |
|
"text": "Note that because the three data partitions are sequential, and the DEV set contains observations that are potentially valuable for making TEST set predictions, most teams opted to combine the TRAIN and DEV sets to train their systems in final phase 2 evaluations. Figure 3: Sample exercise data from an English learner over time: roughly two, five, and ten days into the course.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenge Timeline", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "We use area under the ROC curve (AUC) as the primary evaluation metric for SLA modeling (Fawcett, 2006) . AUC is a common measure of ranking quality in classification tasks, and can be interpreted as the probability that the system will rank a randomly-chosen error above a randomlychosen non-error. We argue that this notion of ranking quality is particularly useful for evaluating systems that might be used for personalized learning, e.g., if we wish to prioritize words or exercises for an individual learner's review based on how likely they are to have forgotten or make errors at a given point in time. We also report F1 score-the harmonic mean of precision and recall-as a secondary metric, since it is more common in similar skewed-class labeling tasks (e.g., Ng et al., 2013) . Note, however, that F1 can be significantly improved simply by tuning the classification threshold (fixed at 0.5 for our evaluations) without affecting AUC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 103, |
|
"text": "(Fawcett, 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 769, |
|
"end": 785, |
|
"text": "Ng et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "2.6" |
|
}, |
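
{

"text": "Both metrics can be computed directly from per-token error probabilities, for example with scikit-learn; the sketch below fixes the F1 threshold at 0.5 as in our evaluations, and the data structures are our own:\n\nfrom sklearn.metrics import roc_auc_score, f1_score\n\ndef evaluate(labels, probs, threshold=0.5):\n    # labels: 0/1 per token; probs: predicted probability that the token is an error\n    auc = roc_auc_score(labels, probs)\n    f1 = f1_score(labels, [int(p >= threshold) for p in probs])\n    return auc, f1\n\n# evaluate([0, 1, 0, 1], [0.1, 0.8, 0.4, 0.3]) -> (0.75, 0.667)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "2.6"

},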
|
{ |
|
"text": "A total of 15 teams participated in the task, of which 13 responded to a brief survey about their approach, and 11 submitted system description papers. All but two of these teams submitted predictions for all three language tracks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Official shared task results are reported in Table 2. System ranks are determined by sorting teams according to AUC, and using DeLong's test (DeLong et al., 1988) to identify statistical ties. For the remainder of this section, we provide a summary of each team's approach, ordered by the team's average rank across all three tracks. Certain teams are marked with modeling choice indicators (\u2662, \u2663, \u2021), which we discuss further in \u00a75.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 162, |
|
"text": "(DeLong et al., 1988)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "SanaLabs (Nilsson et al., 2018 ) used a combination of recurrent neural network (RNN) predictions with those of a Gradient Boosted Decision Tree (GBDT) ensemble, trained independently for each track. This was motivated by the observation that RNNs work well for sequence data, while GBDTs are often the best-performing non-neural model for shared tasks using tabular data. They also engineered several token context features, and learner/token history features such as number of times seen, time since last practice, etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 30, |
|
"text": "(Nilsson et al., 2018", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "singsound (Xu et al., 2018) used an RNN architecture using four types of encoders, representing different types of features: token context, linguistic information, user data, and exercise format. The RNN decoder integrated information from all four encoders. Ablation experiments revealed the context encoder (representing the token) contributed the most to model performance, while the linguistic encoder (representing grammatical information) contributed the least.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 27, |
|
"text": "(Xu et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "NYU (Rich et al., 2018) used an ensemble of GBDTs with features engineered based on psychological theories of cognition. Predictions for each track were averaged between a track-specific model and a unified model (trained on data from all three tracks). In addition to the word, user, and exercise features provided, the authors included word lemmas, corpus frequency, L1-L2 cognates, and features indicating user motivation and diligence (derived from usage patterns), and others. Ablation studies indicated that most of the performance was due to the user and token features. TMU (Kaneko et al., 2018) used a combination of two bidirectional RNNs-the first to predict potential user errors at a given token, and a second to track the history of previous answers by each user. These networks were jointly trained through a unified objective function. The authors did not engineer any additional features, but did train a single model for all three tracks (using a track ID feature to distinguish among them).", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 23, |
|
"text": "(Rich et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 603, |
|
"text": "(Kaneko et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "CECL (Bestgen, 2018 ) used a logistic regression approach. The base feature set was expanded to include many feature conjunctions, including word n-grams crossed with the token, user, format, and session features provided with the data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 19, |
|
"text": "(Bestgen, 2018", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Cambridge (Yuan, 2018) trained two RNNsa sequence labeler, and a sequence-to-sequence model taking into account previous answers-and found that averaging their predictions yielded the best results. They focused on the English track, experimenting with additional features derived from other English learner corpora. Hyper-parameters were tuned for English and used as-is for other tracks, with comparable results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 22, |
|
"text": "(Yuan, 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "UCSD (Tomoschuk and Lovelett, 2018 ) used a random forest classifier with a set of engineered features motivated by previous research in memory and linguistic effects in SLA, including \"word neighborhoods,\" corpus frequency, cognates, and repetition/experience with a given word. The system also included features specific to each user, such as mean and variance of error rates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 34, |
|
"text": "(Tomoschuk and Lovelett, 2018", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "LambdaLab used GBDT models independently for each track, deriving their features from confirmatory analysis of psychologically-motivated hypotheses on the TRAIN set. These include proxies for student engagement, spacing effect, response time, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "nihalnayak (Nayak and Rao, 2018) used a logistic regression model similar to the baseline, but added features inspired by research in codemixed language-learning where context plays an important role. In particular, they included word, part of speech, and metaphone features for previous:current and current:next token pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 32, |
|
"text": "(Nayak and Rao, 2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Grotoco (Klerke et al., 2018) also used logistic regression, including word lemmas, frequency, cognates, and user-specific features such as word error rate. Interestingly, the authors found that ignoring each user's first day of exercise data improved their predictions, suggesting that learners first needed to familiarize themselves with app before their data were reliable for modeling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 29, |
|
"text": "(Klerke et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "jilljenn (Vie, 2018) used a deep factorization machine (DeepFM), a neural architecture developed for click-through rate prediction in recommender systems. This model allows learning from both lower-order and higher-order induced features and their interactions. The DeepFM outperformed a simple logistic regression baseline without much additional feature engineering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 20, |
|
"text": "(Vie, 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Other teams did not submit system description papers. However, according to a task organizer survey ymatusevych used a linear model with multilingual word embeddings, corpus frequency, and several L1-L2 features such as cognates. Additionally, simplelinear used an ensemble of some sort (for the French track only). renhk and zlb241 provided no details about their systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "SLAM_baseline is the baseline system provided by the task organizers. It is a simple logistic regression using data set features, trained separately for each track using stochastic gradient descent on the TRAIN set only.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
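
{

"text": "A comparable baseline can be sketched with scikit-learn by one-hot encoding the provided categorical features and fitting a logistic regression with stochastic gradient descent; the feature dictionary and hyper-parameters below are illustrative rather than the exact released configuration:\n\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import make_pipeline\n\nbaseline = make_pipeline(\n    DictVectorizer(),  # one-hot encodes the categorical data set features\n    SGDClassifier(loss='log_loss', alpha=1e-5, max_iter=10),\n)\n# train_feats: one dict per token, e.g. {'user': u, 'token': w, 'part_of_speech': p, 'format': f}\n# baseline.fit(train_feats, train_labels)\n# probs = baseline.predict_proba(dev_feats)[:, 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "3"

},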
|
{ |
|
"text": "SLA modeling is a rich problem, and presents a opportunity to synthesize work from various subfields in cognitive science, linguistics, and machine learning. This section highlights a few key concepts from these fields, and how they relate to the approaches taken by shared task participants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Item response theory (IRT) is a common psychometric modeling approach used in educational software (e.g., Chen et al., 2005) . In its simplest form (Rasch, 1980) , an IRT model is a logistic regression with two weights: one representing the learner's ability (i.e., user ID), and the other representing the difficulty of the exercise or test item (i.e., token ID). An extension of this idea is the additive factor model (Cen et al., 2008) which adds additional \"knowledge components\" (e.g., lexical, morphological, or syntactic features). Teams that employed linear models (including our baseline) are essentially all additive factor IRT models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 124, |
|
"text": "Chen et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 161, |
|
"text": "(Rasch, 1980)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 438, |
|
"text": "(Cen et al., 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
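
{

"text": "Writing sigma for the logistic function, theta_u for the ability of learner u, b_i for the difficulty of item i, and x_{ik} for the k-th knowledge-component feature of item i, the Rasch model and its additive factor extension can be written (in our notation) as:\n\nP(\\mathrm{correct} \\mid u, i) = \\sigma(\\theta_u - b_i)\n\nP(\\mathrm{correct} \\mid u, i) = \\sigma\\Big(\\theta_u - b_i + \\sum_k \\beta_k x_{ik}\\Big)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related Work",

"sec_num": "4"

},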
|
{ |
|
"text": "For decades, tutoring systems have also employed sequence models like HMMs to perform knowledge tracing (Corbett and Anderson, 1995) , a way of estimating a learner's mastery of knowledge over time. RNN-based approaches that encode user performance over time (i.e., that span across exercises) are therefore variants of deep knowledge tracing (Piech et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 132, |
|
"text": "(Corbett and Anderson, 1995)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 363, |
|
"text": "(Piech et al., 2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Relatedly, the spacing effect (Dempster, 1989 ) is the observation that people will not only learn but also forget over time, and they remember more effectively through scheduled practices that are spaced out. Settles and Meeder (2016) and Ridgeway et al. (2017) recently proposed non-linear regressions that explicitly encode the rate of forgetting as part of a decision surface, however none of the current teams chose to do this. Instead, forgetting was either modeled through engineered features (e.g., user/token histories), or opaquely handled by sequential RNN architectures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 45, |
|
"text": "(Dempster, 1989", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 235, |
|
"text": "Settles and Meeder (2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 262, |
|
"text": "Ridgeway et al. (2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "SLA modeling also bears some similarity to research in grammatical error detection (Leacock et al., 2010) and correction (Ng et al., 2013) . For these tasks, a model is given a (possibly ill-formed) sequence of words produced by a learner, and the task is to identify which are mistakes. SLA modeling is in some sense the opposite: given a well-formed sequence of words that a learner should be able to produce, identify where they are likely to make mistakes. Given these similarities, a few teams adapted state-of-the-art GEC/GED approaches to create their SLA modeling systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 105, |
|
"text": "(Leacock et al., 2010)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 138, |
|
"text": "(Ng et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Finally, multitask learning (e.g., Caruana, 1997 ) is the idea that machine learning systems can do better at multiple related tasks by trying to solve them simultaneously. For example, recent work in machine translation has demonstrated gains through learning to translate multiple languages with a unified model (Dong et al., 2015) . Similarly, the three language tracks in this work presented an opportunity to explore a unified multitask framework, which a few teams did with positive results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 48, |
|
"text": "Caruana, 1997", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 333, |
|
"text": "(Dong et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this section, we analyze the various modeling choices explored by the different teams in order to shed light on what kinds of algorithmic and feature engineering decisions appear to be useful for the SLA modeling task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Meta-Analyses", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here we attempt to answer the question of whether particular machine learning algorithms have a significant impact on task performance. For example, the results in Table 2 suggest that the algorithmic choices indicated by (\u2662, \u2663, \u2021) are particularly effective. Is this actually the case?", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 171, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Algorithms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To answer this question, we partitioned the TEST set into 6.4k subsets (one for each learner), and computed per-user AUC scores for each team's predictions (83.9k observations total). We also coded each team with indicator variables to describe their algorithmic approach, and used a regression analysis to determine if these algorithmic variations had any significant effects on learnerspecific AUC scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Algorithms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To analyze this properly, however, we need to determine whether the differences among modeling choices are actually meaningful, or can simply be explained by sampling error due to random variations among users, teams, or tracks. To do this, we use a linear mixed-effects model (cf., Baayen, 2008, Ch. 7) . In addition to modeling the fixed effects of the various learning algorithms, we can also model the random effects represented by the user ID (learners may vary by ability), the team ID (teams may differ in other aspects not captured by our schema, e.g., the hardware used), and the track ID (tracks may vary inherently in difficulty). Table 3 presents a mixed-effects analysis for the algorithm variations used by at least 3 teams. The intercept can be interpreted as the \"average\" AUC of .786. Controlling for the random effects of user (which exhibits a wide standard deviation of \u00b1.086 AUC), team (\u00b1.013), and track (\u00b1.011), three of the algorithmic choices are at least marginally significant (p < .1). For example, we might expect a system that uses RNNs to model learner mastery over time would add +.028 to learner-specific AUC (all else being equal). Note that most teams' systems that were not based on RNNs or tree ensembles used logistic regression, hence the \"linear model\" effect is negligible (effectively treated as a control condition in the analysis).", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 303, |
|
"text": "Baayen, 2008, Ch. 7)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 642, |
|
"end": 649, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Algorithms", |
|
"sec_num": "5.1" |
|
}, |
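
{

"text": "This analysis can be reproduced with an off-the-shelf mixed-effects implementation. The sketch below uses statsmodels, encoding the crossed user, team, and track random effects as variance components; the equivalent lme4-style formula would be auc ~ rnn + tree_ensemble + linear + multitask + (1|user) + (1|team) + (1|track), and the column names are our own:\n\nimport numpy as np\nimport statsmodels.formula.api as smf\n\ndef algorithm_effects(df):\n    # df: one row per (team, user) pair, with that team's per-user TEST AUC, 0/1 indicators\n    # for its algorithmic choices, and user/team/track identifier columns (names are assumptions)\n    model = smf.mixedlm(\n        'auc ~ rnn + tree_ensemble + linear + multitask',\n        data=df,\n        groups=np.ones(len(df)),  # a single group, so the random effects below are crossed\n        vc_formula={'user': '0 + C(user)', 'team': '0 + C(team)', 'track': '0 + C(track)'},\n    )\n    return model.fit()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Algorithms",

"sec_num": "5.1"

},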
|
{ |
|
"text": "These results suggest two key insights for SLA modeling. First, non-linear algorithms are particularly desirable 4 , and second, multitask learning approaches that share information across tracks (i.e., languages) are also effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Algorithms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We would also like to get a sense of which features, if any, significantly affect system performance. Table 4 lists features provided with the SLA modeling data set, as well as several newlyengineered feature types that were employed by at least three teams (note that the precise details may vary from team to team, but in our view aim to cap- . ture the same phenomena). We also include each feature's popularity and an effect estimate 5 . Broadly speaking, results suggest that feature engineering had a much smaller impact on system performance than the choice of learning algorithm. Only \"response time\" and \"days in course\" showed even marginally significant trends.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 109, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Sets", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Of particular interest is the observation that morpho-syntactic features (described in \u00a72.4) actually seem to have weakly negative effects. This echoes singsound's finding that their linguistic encoder contributed the least to system performance, and Cambridge determined through ablation studies that these features in fact hurt their system. One reasonable explanation is that these automaticallygenerated features contain too many systematic parsing errors to provide value. (Note that NYU artificially introduced punctuation to the exercises and re-parsed the data in their work.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As for newly-engineered features, word information such as frequency, semantic embeddings, and stemming were popular. It may be that these features showed such little return because our corpus was too biased toward beginners-thus representing a very narrow sample of language-for these features to be meaningful. Cognate features were an interesting idea used by a few teams, and may have been more useful if the data included users from a wider variety of different L1 language backgrounds. Spaced repetition features also exhibited marginal (but statistically insignificant) gains. We posit that the 30-day window we used for data collection was simply not long enough for these features to capture more longterm learning (and forgetting) trends.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Another interesting research question is: what is the upper-bound for this task? This can be estimated by treating each team's best submission as an independent system, and combining the results using ensemble methods in a variety of ways. Such analyses have been previously applied to other shared task challenges and meta-analyses (e.g., Malmasi et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 361, |
|
"text": "Malmasi et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The oracle system is meant to be an upperbound: for each token in the TEST set, the oracle outputs the team prediction with the lowest error for that particular token. We also experiment with stacking (Wolpert, 1992) by training a logistic regression classifier using each team's prediction as an input feature 6 . Finally, we also pool system predictions together by taking their average (mean). Table 5 reports AUC for various ensemble methods as well as some of the top performing team systems for all three tracks. Interestingly, the oracle is exceptionally accurate (>.993 AUC and >.884 F1, not shown). This indicates that the potential upper limit of performance on this task is quite high, since there exists a near-perfect ranking of tokens in the TEST set based only on predictions from these 15 diverse participating teams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 216, |
|
"text": "(Wolpert, 1992)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 404, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ensemble Analysis", |
|
"sec_num": "5.3" |
|
}, |
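
{

"text": "Given a matrix of per-token predictions with one column per team, both ensembles are short to express; the sketch below is illustrative (in our analysis the stacking classifier weights were additionally averaged across 10 cross-validation folds):\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef oracle(labels, team_probs):\n    # per token, output the team prediction with the lowest error for that particular token\n    best = np.argmin(np.abs(team_probs - labels[:, None]), axis=1)\n    return team_probs[np.arange(len(labels)), best]\n\ndef stack(labels, team_probs):\n    # stacking: logistic regression over each team's prediction as an input feature\n    clf = LogisticRegression().fit(team_probs, labels)\n    return clf.predict_proba(team_probs)[:, 1]\n\n# labels: np.ndarray of 0/1 token labels; team_probs: array of shape (n_tokens, n_teams)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Ensemble Analysis",

"sec_num": "5.3"

},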
|
{ |
|
"text": "The stacking classifier produces significantly better rankings than any of the constituent systems alone, while the average (over all teams) ranked between the 3rd and 4th best system in all three tracks. Inspection of stacking model weights revealed that it largely learned to trust the topperforming systems, so we also tried simply averaging the top 3 systems for each track, and this method was statistically tied with stacking for the English and French tracks (p = 0.002 for Spanish). Interestingly, the highest-weighted team in each track's stacking model was singsound (+2.417 on average across the three models), followed teams and learning algorithms. It would be interesting to revisit these ideas using a more diverse and longitudinal data set in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To support ongoing research in SLA modeling, current and future releases of our data set will be publicly maintained online at:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "https://doi.org/10.7910/DVN/8SWHNO.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "https://www.duolingo.com", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://codalab.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Interestingly, the only linear model to rank among the top 5 (CECL) relied on combinatorial feature conjunctionswhich effectively alter the decision surface to be non-linear with respect to the original features. The RNN hidden nodes and GBDT constituent trees from other top systems may in fact be learning to represent these same feature conjunctions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is similar to the analysis in \u00a75.1, except that we regress on each feature separately. That is, a feature is the only fixed effect in the model (alongside intercept), while still controlling for user, team, and track random effects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that we only have TEST set predictions for each team. While we averaged stacking classifier weights across 10 folds using cross-validation, the reported AUC is still likely an over-estimate, since the models were in some sense trained on the TEST set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to acknowledge Bo\u017cena Paj\u0105k, Joseph Rollinson, and Hideki Shima for their help planning and co-organizing the shared task. Eleanor Avrunin and Natalie Glance made significant contributions to early versions of the SLA modeling data set, and Anastassia Loukina and Kristen K. Reyher provided helpful advice regarding mixed-effects modeling. Finally, we would like to thank the organizers of the NAACL-HLT 2018 Workshop on Innovative Use of NLP for Building Educational Applications (BEA) for providing a forum for this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "In this work, we presented the task of second language acquisition (SLA) modeling, described a large data set for studying this task, and reported on the results of a shared task challenge that explored this new domain. The task attracted strong participation from 15 teams, who represented a wide variety of fields including cognitive science, linguistics, and machine learning. Among our key findings is the observation that, for this particular formulation of the task, the choice of learning algorithm appears to be more important than clever feature engineering. In particular, the most effective teams employed sequence models (e.g., RNNs) that can capture user performance over time, and tree ensembles (e.g., GBDTs) that can capture non-linear relationships among features. Furthermore, using a multitask framework-in this case, a unified model that leverages data from all three language tracks-can provide further improvements.Still, many teams opted for a simpler algorithm (e.g., logistic regression) and concentrated instead on more psychologically-motivated features. While these teams did not always perform as well, several demonstrated through ablation studies that these features can be useful within the limitations of the algorithm. It is possible that the constraints of the SLA modeling data set (beginner language, homogeneous L1 language background, short 30-day time frame, etc.) prevented these features from being more useful across different", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Globally normalized transition-based neural networks", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Andor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Presta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. 2016. Glob- ally normalized transition-based neural networks. CoRR, abs/1603.06042.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Analyzing Linguistic Data: A Practical Introduction to Statistics using R", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Baayen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.H. Baayen. 2008. Analyzing Linguistic Data: A Practical Introduction to Statistics using R. Cam- bridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Predicting second language learner successes and mistakes by means of conjunctive features", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bestgen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Bestgen. 2018. Predicting second language learner successes and mistakes by means of conjunctive fea- tures. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multitask learning. Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "41--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Caruana. 1997. Multitask learning. Machine Learn- ing, 28:41-75.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Comparing two IRT models for conjunctive skills", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Koedinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Junker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Intelligent Tutoring Systems (ITS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "796--798", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Cen, K. Koedinger, and B. Junker. 2008. Compar- ing two IRT models for conjunctive skills. In Pro- ceedings of the Conference on Intelligent Tutoring Systems (ITS), pages 796-798. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Personalized e-learning system using item response theory", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computers & Education", |
|
"volume": "44", |
|
"issue": "3", |
|
"pages": "237--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.M. Chen, H.M. Lee, and Y.H. Chen. 2005. Person- alized e-learning system using item response theory. Computers & Education, 44(3):237-255.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Feature engineering for second language acquisition modeling", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Hauff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Houben", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Chen, C. Hauff, and G.J. Houben. 2018. Feature engineering for second language acquisition model- ing. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Corbett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Anderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "253--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A.T. Corbett and J.R. Anderson. 1995. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted In- teraction, 4(4):253-278.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Depth of processing and the retention of words in episodic memory", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"I M" |
|
], |
|
"last": "Craik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Tulving", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Journal of Experimental Psychology", |
|
"volume": "104", |
|
"issue": "", |
|
"pages": "268--294", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F.I.M. Craik and E. Tulving. 1975. Depth of process- ing and the retention of words in episodic memory. Journal of Experimental Psychology, 104:268-294.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Delong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Delong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Clarke-Pearson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Biometrics", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "837--845", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E.R. DeLong, D.M. DeLong, and D.L. Clarke-Pearson. 1988. Comparing the areas under two or more corre- lated receiver operating characteristic curves: a non- parametric approach. Biometrics, 44:837-845.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Spacing effects and their implications for theory and practice", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Dempster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Educational Psychology Review", |
|
"volume": "1", |
|
"issue": "4", |
|
"pages": "309--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F.N. Dempster. 1989. Spacing effects and their impli- cations for theory and practice. Educational Psy- chology Review, 1(4):309-330.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multi-task learning for multiple language translation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1723--1732", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Dong, H. Wu, W. He, D. Yu, and H. Wang. 2015. Multi-task learning for multiple language transla- tion. In Proceedings of the Association for Compu- tational Linguistics (ACL), pages 1723-1732. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An introduction to ROC analysis", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Fawcett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Pattern Recognition Letters", |
|
"volume": "27", |
|
"issue": "8", |
|
"pages": "861--874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861-874.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "TMU system for SLAM-2018", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kaneko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kajiwara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Kaneko, T. Kajiwara, and M. Komachi. 2018. TMU system for SLAM-2018. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Grotoco@SLAM: Second language acquisition modeling with simple features, learners and task-wise models", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Klerke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Alonso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Klerke, H.M. Alonso, and B. Plank. 2018. Gro- toco@SLAM: Second language acquisition mod- eling with simple features, learners and task-wise models. In Proceedings of the NAACL-HLT Work- shop on Innovative Use of NLP for Building Educa- tional Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Automated grammatical error detection for language learners", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gamon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "1--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Leacock, M. Chodorow, M. Gamon, and J. Tetreault. 2010. Automated grammatical error detection for language learners. Synthesis Lectures on Human Language Technologies, 3(1):1-134.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Binary codes capable of correcting deletions, insertions, and reversals", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "Soviet Physics Doklady", |
|
"volume": "10", |
|
"issue": "8", |
|
"pages": "707--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V.I. Levenshtein. 1966. Binary codes capable of cor- recting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707-710.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A report on the 2017 Native Language Identification shared task", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Evanini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Cahill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pugh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Hamill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Napolitano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the EMNLP Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Malmasi, K. Evanini, A. Cahill, J. Tetreault, R. Pugh, C. Hamill, D. Napolitano, and Y. Qian. 2017. A report on the 2017 Native Language Identification shared task. In Proceedings of the EMNLP Work- shop on Innovative Use of NLP for Building Edu- cational Applications (BEA), pages 62-75, Copen- hagen, Denmark. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Context based approach for second language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Nayak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N.V. Nayak and A.R. Rao. 2018. Context based ap- proach for second language acquisition. In Pro- ceedings of the NAACL-HLT Workshop on Innova- tive Use of NLP for Building Educational Applica- tions (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The CoNLL-2013 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Hadiwinoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H.T. Ng, S.M. Wu, Y. Wu, C. Hadiwinoto, and J. Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL), pages 1-12. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Second language acquisition modeling: An ensemble approach", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Osika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sydorchuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Sahin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Huss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Nilsson, A. Osika, A. Sydorchuk, F. Sahin, and A. Huss. 2018. Second language acquisition mod- eling: An ensemble approach. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Deep knowledge tracing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Piech", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bassen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ganguli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sahami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Guibas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sohl-Dickstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "505--513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Piech, J. Bassen, J. Huang, S. Ganguli, M. Sahami, L.J. Guibas, and J. Sohl-Dickstein. 2015. Deep knowledge tracing. In Advances in Neural Informa- tion Processing Systems (NIPS), pages 505-513.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The benefits of the English language for individuals and societies: Quantitative indicators from Cameroon", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pinon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Haydon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Euromonitor International for the British Council", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Pinon and J. Haydon. 2010. The benefits of the En- glish language for individuals and societies: Quanti- tative indicators from Cameroon, Nigeria, Rwanda, Bangladesh and Pakistan. Technical report, Eu- romonitor International for the British Council.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Probabilistic models for some intelligence and attainment tests", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rasch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Rasch. 1980. Probabilistic models for some in- telligence and attainment tests. The University of Chicago Press.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Modeling second-language learning from a psychological perspective", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Popp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Gureckis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Rich, P.O. Popp, D. Halpern, A. Rothe, and T. Gureckis. 2018. Modeling second-language learning from a psychological perspective. In Pro- ceedings of the NAACL-HLT Workshop on Innova- tive Use of NLP for Building Educational Applica- tions (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Forgetting of foreign-language skills: A corpusbased analysis of online tutoring software", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ridgeway", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mozer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Cognitive Science", |
|
"volume": "41", |
|
"issue": "4", |
|
"pages": "924--949", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Ridgeway, M.C. Mozer, and A.R. Bowles. 2017. Forgetting of foreign-language skills: A corpus- based analysis of online tutoring software. Cognitive Science, 41(4):924-949.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A trainable spaced repetition model for language learning", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Meeder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1848--1858", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Settles and B. Meeder. 2016. A trainable spaced repetition model for language learning. In Proceed- ings of the Association for Computational Linguis- tics (ACL), pages 1848-1858. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A memorysensitive classification model of errors in early second language learning", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Tomoschuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lovelett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Tomoschuk and J. Lovelett. 2018. A memory- sensitive classification model of errors in early sec- ond language learning. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Deep factorization machines for knowledge tracing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Vie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.J. Vie. 2018. Deep factorization machines for knowl- edge tracing. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Ed- ucational Applications (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Stacked generalization", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wolpert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Neural Networks", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "241--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.H. Wolpert. 1992. Stacked generalization. Neural Networks, 5(2):241-259.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "CLUF: A neural model for second language acquisition modeling", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Xu, J. Chen, and L. Qin. 2018. CLUF: A neural model for second language acquisition modeling. In Proceedings of the NAACL-HLT Workshop on Inno- vative Use of NLP for Building Educational Appli- cations (BEA). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Neural sequence modelling for learner error prediction", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Educational Applications (BEA). ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Z. Yuan. 2018. Neural sequence modelling for learner error prediction. In Proceedings of the NAACL-HLT Workshop on Innovative Use of NLP for Building Ed- ucational Applications (BEA). ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "An illustration of how data labels are generated. Learner responses are aligned with the most similar reference answer, and tokens from the reference that do not match are labeled errors.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "(b) shows an example of corrective feedback based on such an alignment.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Figure 2 shows how we use these alignments to generate labels for the SLA modeling task. In this case, an English (from Spanish) learner was asked to translate, \"\u00bfCu\u00e1ndo puedo ayudar?\" and wrote \"wen can help\" instead of \"When can I help?\" This produces two errors (a typo and a missing pronoun). We ignore capitalization, punctuation, and accents when matching tokens.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "Summary of the SLA modeling data set." |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>oMGsnnH/0101</td><td>When</td><td>ADV</td><td>PronType=Int|fPOS=ADV++WRB</td><td>advmod</td><td>4 1</td></tr><tr><td>oMGsnnH/0102</td><td>can</td><td>AUX</td><td>VerbForm=Fin|fPOS=AUX++MD</td><td>aux</td><td>4 0</td></tr><tr><td>oMGsnnH/0103</td><td>I</td><td>PRON</td><td>Case=Nom|Number=Sing|Person=1|PronType=Prs|fPOS=PRON++PRP</td><td>nsubj</td><td>4 1</td></tr><tr><td>oMGsnnH/0104</td><td>help</td><td>VERB</td><td>VerbForm=Inf|fPOS=VERB++VB</td><td>ROOT</td><td>0 0</td></tr><tr><td colspan=\"5\"># user:XEinXf5+ countries:CO days:5.707 client:android session:practice format:reverse_translate time:22</td><td/></tr><tr><td>W+QU2fm70301</td><td>He</td><td>PRON</td><td>Case=Nom|Gender=Masc|Number=Sing|Person=3|PronType=Prs|fPOS=PRON++PRP</td><td>nsubj</td><td>3 0</td></tr><tr><td>W+QU2fm70302</td><td>'s</td><td>AUX</td><td>Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin|fPOS=AUX++VBZ</td><td>aux</td><td>3 1</td></tr><tr><td>W+QU2fm70303</td><td>wearing</td><td>VERB</td><td>Tense=Pres|VerbForm=Part|fPOS=VERB++VBG</td><td>ROOT</td><td>0 0</td></tr><tr><td>W+QU2fm70304</td><td>two</td><td>NUM</td><td>NumType=Card|fPOS=NUM++CD</td><td>nummod</td><td>5 0</td></tr><tr><td>W+QU2fm70305</td><td>shirts</td><td>NOUN</td><td>Number=Plur|fPOS=NOUN++NNS</td><td>dobj</td><td>3 0</td></tr><tr><td colspan=\"5\"># user:XEinXf5+ countries:CO days:10.302 client:web session:lesson format:reverse_translate time:28</td><td/></tr><tr><td>vOeGrMgP0101</td><td>We</td><td>PRON</td><td>Case=Nom|Number=Plur|Person=1|PronType=Prs|fPOS=PRON++PRP</td><td>nsubj</td><td>2 0</td></tr><tr><td>vOeGrMgP0102</td><td>eat</td><td>VERB</td><td>Mood=Ind|Tense=Pres|VerbForm=Fin|fPOS=VERB++VBP</td><td>ROOT</td><td>0 1</td></tr><tr><td>vOeGrMgP0103</td><td>cheese</td><td>NOUN</td><td>Degree=Pos|fPOS=ADJ++JJ</td><td>dobj</td><td>2 1</td></tr><tr><td>vOeGrMgP0104</td><td>and</td><td>CONJ</td><td>fPOS=CONJ++CC</td><td>cc</td><td>2 0</td></tr><tr><td>vOeGrMgP0105</td><td>they</td><td>PRON</td><td>Case=Nom|Number=Plur|Person=3|PronType=Prs|fPOS=PRON++PRP</td><td>nsubj</td><td>6 0</td></tr><tr><td>vOeGrMgP0106</td><td>eat</td><td>VERB</td><td>Mood=Ind|Tense=Pres|VerbForm=Fin|fPOS=VERB++VBP</td><td>conj</td><td>2 1</td></tr><tr><td>vOeGrMgP0107</td><td>fish</td><td>NOUN</td><td>fPOS=X++FW</td><td>dobj</td><td>6 0</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "# user:XEinXf5+ countries:CO days:2.678 client:web session:practice format:reverse_translate time:6" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Final results. Ranks (\u2191) are determined by statistical ties (see text). Markers indicate which systems</td></tr><tr><td>include recurrent neural architectures (\u2662), decision tree ensembles (\u2663), or a multitask model across all tracks ( \u2021).</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "Mixed-effects analysis of learning algorithms." |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Summary of system features-both provided</td></tr><tr><td>(top) and team-engineered (bottom)-with team popu-</td></tr><tr><td>larity and univariate mixed-effects estimates.</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |