{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:10:25.574640Z"
},
"title": "Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Classrooms",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {}
},
"email": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Perkoff",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Talk moves\" are specific discursive strategies used by teachers and students to facilitate conversations in which students share their thinking, and actively consider the ideas of others, and engage in rich discussions. Experts in instructional practices often rely on cues to identify and document these strategies, for example by annotating classroom transcripts. Prior efforts to develop automated systems to classify teacher talk moves using transformers achieved a performance of 76.32% F1. In this paper, we investigate the feasibility of using enriched contextual cues to improve model performance. We applied state-of-the-art deep learning approaches for Natural Language Processing (NLP), including Robustly optimized bidirectional encoder representations from transformers (Roberta) with a special input representation that supports previous and subsequent utterances as context for talk moves classification. We worked with the publically available TalkMoves dataset, which contains utterances sourced from real-world classroom sessions (human-transcribed and annotated). Through a series of experimentations, we found that a combination of previous and subsequent utterances improved the transformers' ability to differentiate talk moves (by 2.6% F1). These results constitute a new state of the art over previously published results and provide actionable insights to those in the broader NLP community who are working to develop similar transformer-based classification models.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Talk moves\" are specific discursive strategies used by teachers and students to facilitate conversations in which students share their thinking, and actively consider the ideas of others, and engage in rich discussions. Experts in instructional practices often rely on cues to identify and document these strategies, for example by annotating classroom transcripts. Prior efforts to develop automated systems to classify teacher talk moves using transformers achieved a performance of 76.32% F1. In this paper, we investigate the feasibility of using enriched contextual cues to improve model performance. We applied state-of-the-art deep learning approaches for Natural Language Processing (NLP), including Robustly optimized bidirectional encoder representations from transformers (Roberta) with a special input representation that supports previous and subsequent utterances as context for talk moves classification. We worked with the publically available TalkMoves dataset, which contains utterances sourced from real-world classroom sessions (human-transcribed and annotated). Through a series of experimentations, we found that a combination of previous and subsequent utterances improved the transformers' ability to differentiate talk moves (by 2.6% F1). These results constitute a new state of the art over previously published results and provide actionable insights to those in the broader NLP community who are working to develop similar transformer-based classification models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There is a strong theoretical and empirical basis for encouraging students' active participation in inquiry-based and socially constructed classroom environments (Vygotsky, 1978; Webb et al., 2008) . Numerous efforts exist to support teachers to become more purposeful and effective in their efforts to facilitate such environments (Herbel-Eisenmann, 2017; Chen et al., 2020) . Most approaches to providing teachers with detailed feedback about their discourse strategies require highly trained human observers (Correnti et al., 2015; Wolf et al., 2005) . However, recent research has shown that the development and training of deep learning models to automate and scale certain discourse analyses from instructional episodes is feasible (Song et al., 2021) , effective (Demszky et al., 2021) , and reliable (Donnelly et al., 2017; Jensen et al., 2020; Suresh et al., 2019) .",
"cite_spans": [
{
"start": 162,
"end": 178,
"text": "(Vygotsky, 1978;",
"ref_id": "BIBREF40"
},
{
"start": 179,
"end": 197,
"text": "Webb et al., 2008)",
"ref_id": "BIBREF41"
},
{
"start": 332,
"end": 356,
"text": "(Herbel-Eisenmann, 2017;",
"ref_id": "BIBREF13"
},
{
"start": 357,
"end": 375,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 511,
"end": 534,
"text": "(Correnti et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 535,
"end": 553,
"text": "Wolf et al., 2005)",
"ref_id": "BIBREF42"
},
{
"start": 738,
"end": 757,
"text": "(Song et al., 2021)",
"ref_id": "BIBREF32"
},
{
"start": 770,
"end": 792,
"text": "(Demszky et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 808,
"end": 831,
"text": "(Donnelly et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 832,
"end": 852,
"text": "Jensen et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 853,
"end": 873,
"text": "Suresh et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Accountable talk theory offers well-defined, research-based practices for teachers to engage in high-quality instruction, including the use of specific talk moves that promote students' equitable participation in a rigorous learning environment Resnick et al., 2018) . By using talk moves, teachers place the \"intellectual heavy lifting\" and balance of talk toward students and help ensure that the discussions will be purposeful, coherent, and productive (Michaels et al., 2010) . Talk moves support classroom discourse to move beyond the traditional Initiate, Response, Evaluate linguistic sequence (Mehan, 1979) ; namely, by replacing the act of evaluating with practices that support a collective understanding that builds on and extends mathematical ideas .In this way, talk moves enable dialogue shifts from teacher directed recitation to true discussions in which knowledge is informally shared and constructed rather than transmitted.",
"cite_spans": [
{
"start": 245,
"end": 266,
"text": "Resnick et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 456,
"end": 479,
"text": "(Michaels et al., 2010)",
"ref_id": "BIBREF22"
},
{
"start": 601,
"end": 614,
"text": "(Mehan, 1979)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper draws inspiration from speech recognition systems for spoken dialog systems to investigate the feasibility of applying a novel input representation that utilizes tokens from previous and subsequent utterances to classify teacher talk moves (Schukat-Talamazzini et al., 1994) . We explore three different context setups: previous-only utterances, subsequent-only utterances, and both previous and subsequent utterances (equal numbers of each) with different window sizes. In addition to the longer dialog window experiments, we re-port findings from fine-tuning transformers such as BigBird (Zaheer et al., 2020) and Longformer (Beltagy et al., 2020) which are architected to support longer sequences. Similarly, we report findings from fine-tuning MathBERT, a transformer architecture that was trained to establish semantic correspondence between mathematical formulas and their corresponding context (Peng et al., 2021) . For training and evaluation, we use the TalkMoves dataset comprising 567 lesson transcripts derived from video recordings of K-12 mathematics classrooms . The main contributions of this work are summarized as follows:",
"cite_spans": [
{
"start": 251,
"end": 285,
"text": "(Schukat-Talamazzini et al., 1994)",
"ref_id": "BIBREF30"
},
{
"start": 601,
"end": 622,
"text": "(Zaheer et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 638,
"end": 660,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 912,
"end": 931,
"text": "(Peng et al., 2021)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide evidence for improved performance when fine-tuning transfomers with longer dialog windows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We observed that transformer architectures designed to handle longer contexts such as Longformer do not provide any additional benefit in differentiating instructional strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We observed that math-based models pretrained on mathematical formula understanding do not provide any improvement over the generic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section briefly describes the accountable talk theory framework, followed by a literature review on deep learning models for Natural Language Processing (NLP) focused on adding additional contexts and learning long-term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Accountable talk theory identifies and defines an explicit set of discourse moves intended to elicit a response within a classroom lesson (O'Connor and Michaels, 2019). These well-defined discursive techniques have been incorporated into various instructional practices and frameworks e.g., (Boston, 2012; Candela et al., 2020; Michaels et al., 2010) . Their specificity makes talk moves well-suited for supervised multi-label sentence-pair classification. A number of research teams have made considerable progress in developing automated \"intelligent agents\" that are trained to emulate the role of the teacher. These agents prompt students to use designated aspects of accountable talk, such as revoicing and asking students to agree/disagree with another student. They typically act as facilitators or tutors during small group, text-based, online settings, taking part in and helping to focus the discussion at opportune moments e.g. (Adamson et al., 2013; Hmelo-Silver et al., 2013; Tegos et al., 2015) . and team developed an online application that provides personalized feedback to teachers on their classroom discourse practices, including the prevalence of talk moves. The system is fully automated and requires no human processing beyond the initial uploading of classroom recordings. Such education-focused NLP applications are in high demand to provide reliable feedback to teachers based on the accountable talk theory.",
"cite_spans": [
{
"start": 291,
"end": 305,
"text": "(Boston, 2012;",
"ref_id": "BIBREF2"
},
{
"start": 306,
"end": 327,
"text": "Candela et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 328,
"end": 350,
"text": "Michaels et al., 2010)",
"ref_id": "BIBREF22"
},
{
"start": 939,
"end": 961,
"text": "(Adamson et al., 2013;",
"ref_id": "BIBREF0"
},
{
"start": 962,
"end": 988,
"text": "Hmelo-Silver et al., 2013;",
"ref_id": "BIBREF14"
},
{
"start": 989,
"end": 1008,
"text": "Tegos et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accountable talk theory framework",
"sec_num": "2.1"
},
{
"text": "The introduction of transformers has revolutionized the field of natural language processing. Unlike Recurrent Neural Networks (RNNs) and Long Short Term Memory networks (LSTMs), where training is performed sequentially, the design of transformer architecture enables parallel processing and allows for the creation of rich latent embeddings (Vaswani et al., 2017) . Latent contextual representation of utterances through the self-attention mechanism makes transformers a powerful tool for various downstream applications such as question answering and text summarization (Devlin et al., 2018) . Research efforts to learn long-term dependencies with transformers were first introduced in Transformer-XL . Transformer-XL is a novel architecture that focuses on learning dependencies beyond the fixed length of vanilla transformers without disrupting the temporal coherence. This is achieved by saving the hidden state sequence of the previous segment to be used as context for the current segments, also known as the segment-level recurrence mechanism. In addition, to better encode the relationship between words, Transformer-XL uses relative positional embeddings. Results show that Transformer-XL can learn dependencies across the text with a window size of 900 words. Following Transformer-XL, proposed XL-Net, which is a generalized autoregressive pretraining method that leverages the capabilities of Transformer-XL to solve the pre-train-finetune discrepancy commonly identified in early architectures such as BERT. XL-Net introduced two new developments. As an extension to the standard Causal Language Modeling (CLM), XL-Net uses permutation language mod-eling, which considers all possible permutations of the words within a sentence during the training phase. Also, XL-Net uses a secondary attention stream that focuses on the positional information of the predicted token. This additional attention stream led XL-Net to outperform many contemporary transformer architectures in downstream tasks, such as text classification. Similarly, to address the problem of processing long sequences with transformers, (Beltagy et al., 2020) introduced Longformer, which extends vanilla transformers with a modified self-attention mechanism to process long documents. The classic self-attention mechanism in BERT is computationally expensive, which explains the restriction of the maximum sequence length of 512 tokens. Instead, Longformer combines dilated sliding windows with global attention to achieve similar performance. As a result of reducing the computational complexity, Longformer can process long input sequences beyond the previously defined segment length of 512 tokens. Like Longfomers, Big-Bird (Zaheer et al., 2020 ) uses a sparse attention mechanism that includes a random attention component.",
"cite_spans": [
{
"start": 342,
"end": 364,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 572,
"end": 593,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 2118,
"end": 2140,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 2710,
"end": 2730,
"text": "(Zaheer et al., 2020",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers for additional context and long-term dependencies",
"sec_num": "2.2"
},
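As a minimal, self-contained sketch (not from the paper) of the point above, the snippet below shows a Longformer checkpoint accepting an input well beyond the 512-token limit of vanilla BERT/RoBERTa via the HuggingFace transformers library; the checkpoint name and toy input are illustrative assumptions.

```python
# Sketch only: Longformer handles sequences beyond the 512-token limit thanks to
# its sliding-window attention. Checkpoint and toy text are illustrative.
from transformers import LongformerTokenizer, LongformerModel

tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

long_text = " ".join(["word"] * 900)   # far past the 512-token limit of vanilla BERT
enc = tok(long_text, truncation=True, max_length=1024, return_tensors="pt")
out = model(**enc)
print(out.last_hidden_state.shape)     # (1, sequence_length, hidden_size)
```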
{
"text": "Over the past few years, we have seen an increasing trend in other approaches to supporting transformers to learn long-term dependencies, such as modifying pre-training methods and the classic attention mechanism. For example, to learn dependencies across documents, (Xie et al., 2020) adopted a simple approach to truncate the document used for classification. Similarly, ) used a chunking approach where documents were broken down into multiple chunks, and the activations were then combined to perform the tasks. Another recent example is the BERT-Seq model for classifying Collaborative Problem Solving (Pugh et al., 2021). The BERT-Seq model uses a special input representation that combines embeddings from adjacent utterances as contextual cues for the model. Building on the prior work, we explored new ways to enrich transformers with additional contextual cues.",
"cite_spans": [
{
"start": 267,
"end": 285,
"text": "(Xie et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers for additional context and long-term dependencies",
"sec_num": "2.2"
},
{
"text": "Currently, generating information about teachers' discourse strategies requires highly trained instructional experts to hand-code transcripts from classroom sessions (Correnti et al., 2015; Wolf et al., 2005) , an approach that is expensive and not readily scalable. Encouragingly, a small number of researchers have recently trained computer models to automate and scale discourse analyses from instructional episodes, detecting educationally important discursive features such as instructional talk, authentic teacher questions, elaborated evaluation, and uptake (Dale et al., 2022; Demszky et al., 2021; Jensen et al., 2020) . In prior work, (Suresh et al., 2021b,a) fine-tuned Roberta (Liu et al., 2019) to classify talk moves for each teacher utterance from a given classroom transcript. The input to Roberta was student-teacher sentence pairs, where the student sentence appeared immediately prior to the teacher's utterance. This paper builds upon the previous work to add contextual cues to transformers in various ways and evaluate their performance using the TalkMoves dataset. We experiment with modifying the input representation by combining multiple previous and subsequent utterances as context to classify teacher talk moves. This work serves as an example of how we can find new ways to use advances in natural language processing with classic ideas from speech recognition systems for spoken dialog system to capture the rich conversations between teachers and students in order to improve performance in applied domains such as education.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "(Correnti et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 190,
"end": 208,
"text": "Wolf et al., 2005)",
"ref_id": "BIBREF42"
},
{
"start": 565,
"end": 584,
"text": "(Dale et al., 2022;",
"ref_id": "BIBREF9"
},
{
"start": 585,
"end": 606,
"text": "Demszky et al., 2021;",
"ref_id": "BIBREF10"
},
{
"start": 607,
"end": 627,
"text": "Jensen et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 645,
"end": 669,
"text": "(Suresh et al., 2021b,a)",
"ref_id": null
},
{
"start": 689,
"end": 707,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work and Novelty",
"sec_num": "3"
},
{
"text": "This section discusses the different approaches we took to enrich contextual cues in the TalkMoves model in an effort to enhance performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
{
"text": "The TalkMoves dataset used in this study comprises 567 transcripts, including 174,186 teacher and 59,874 student utterances . All the transcripts were human-generated from classroom audio and video recordings from K-12 mathematics classrooms. They were annotated for six teacher talk moves by two experts who established high inter-rater reliability (Suresh et al., 2021b . The talk moves in the dataset follow an uneven distribution, with certain moves being much more frequent than others (Figure 1 ). \"Keeping everyone together\" and \"pressing for accuracy\" are the most frequently used, whereas \"getting students to relate\" and \"pressing for reasoning\" are the least common. For training and testing split, we used the same split specified by in the TalkMoves dataset. Each teacher utterance in the TalkMoves dataset is annotated with one of six dif-ferent teacher talk moves and \"None\". These talk moves are broadly classified into three categories based on their instructional purpose (Resnick et al., 2018): (1) accountability to the learning community, (2) accountability to content knowledge, and (3) accountability to rigorous thinking. See Table 1 for a brief description of each talk move, along with examples.",
"cite_spans": [
{
"start": 350,
"end": 371,
"text": "(Suresh et al., 2021b",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 491,
"end": 500,
"text": "(Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1150,
"end": 1158,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In this study, we began working with transformers to classify talk moves. Prior attempts using non-transformers architecture achieved lower performance (65% F1 compared to 76.32% F1 with transformers) (Suresh et al., 2019 (Suresh et al., , 2021b . The fine-tuned Roberta model proposed in ) employed a input representation of studentteacher sentence pairs to combine any given teacher utterance with the immediately prior student utterance (Suresh et al., 2021b) . In order to understand the gaps in this model's performance, conducted an error analysis using a confusion matrix to consider examples where the Talk-Moves models were underperforming and often generated misclassifications. An initial analysis of those examples revealed several instances where the actual real-world context for the misclassified teacher utterance extended beyond the current representation of the previous student utterance. For example, consider the following dialogue \"Student: Yes; Teacher: What do you think?\". With limited context, it seems unclear if the teacher was relating to what a student said earlier or trying to prompt them to think. This challenge of limited context from prior work motivated us to find new ways to add contextual information to the existing models in order to improve performance.",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "(Suresh et al., 2019",
"ref_id": "BIBREF37"
},
{
"start": 222,
"end": 245,
"text": "(Suresh et al., , 2021b",
"ref_id": "BIBREF35"
},
{
"start": 440,
"end": 462,
"text": "(Suresh et al., 2021b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Research Motivation",
"sec_num": "4.2"
},
{
"text": "Constraints on the number of sequences in vanilla transformers, such as BERT and Roberta, prevents the direct application of transformers where there is a reliance on long-term dependencies. For example, consider a classroom session where a teacher encourages student X to think based on what student Y said earlier in the session. Without the expanded dialogue context, it can be challenging for transformers (and even humans) to classify the utterances. If we could expand the representation of available information such that it included the entire classroom session, the transformers may be more likely to learn to establish the long-term de-pendencies across the focal utterances or tokens. Given the importance of local context (Kovaleva et al., 2019) , our input representation was modified from student-teacher sentence pairs to a fixed-size window surrounding each teacher utterance. This adjusted representation is atypical compared to the recommended input for fine-tuning, where a unique token separates two sequences (i.e., [SEP] in Bert and </s> in Roberta) (Devlin et al., 2018; Liu et al., 2019) . There is a general notion that fine-tuning multiple utterances with multiple separator tokens, while theoretically possible, is not likely to work well. This notion was motivated by vanilla transformers, which were originally pre-trained on individual sentences or sentence pairs. We challenge this assumption by including additional past and future utterances in our adjusted input representation (Figure 2 ).",
"cite_spans": [
{
"start": 734,
"end": 757,
"text": "(Kovaleva et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 1037,
"end": 1042,
"text": "[SEP]",
"ref_id": null
},
{
"start": 1072,
"end": 1093,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 1094,
"end": 1111,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1512,
"end": 1521,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Context-addition experiments",
"sec_num": "4.3"
},
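As a concrete illustration of the windowed input representation described above, the following is a minimal sketch assuming the HuggingFace RobertaTokenizer; the helper name build_windowed_input and the example utterances are hypothetical and are not the authors' code.

```python
# Minimal sketch of packing a fixed-size window of utterances around the target
# teacher utterance into one RoBERTa input, with separator tokens between turns.
# Helper name and example utterances are illustrative only.
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def build_windowed_input(prev_utts, target_utt, next_utts, max_len=512):
    """Join previous context, the target teacher utterance, and subsequent
    context into one sequence, separated by the model's separator token."""
    sep = tokenizer.sep_token  # "</s>" for RoBERTa
    parts = list(prev_utts) + [target_utt] + list(next_utts)
    text = f" {sep} ".join(parts)
    return tokenizer(text, truncation=True, max_length=max_len,
                     padding="max_length", return_tensors="pt")

# Previous/subsequent window of size 1 around the target utterance.
enc = build_windowed_input(
    prev_utts=["Student: I think the answer is twelve."],
    target_utt="Teacher: Can someone repeat what she just said?",
    next_utts=["Student: She said the answer is twelve."],
)
print(enc["input_ids"].shape)  # (1, 512)
```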
{
"text": "To establish a baseline performance level and generate information regarding the impact of context in classifying talk moves, we began with a simple input representation that includes only the target teacher utterance without any additional context. The output layer was a softmax over seven classes i.e., the six talk moves and \"none\" (no talk move). We also reproduced results from prior work on Roberta-base . Following that, we experimented with three context setups: previous-only utterances, subsequent-only utterances, and both previous and subsequent utterances (equal numbers of each). In each setup, we evaluated several different window sizes. For example, the previous-only condition with a window size of three would have the immediately previous three utterances (with student(s) and/or the teacher as the speakers) serving as context cues for classifying the target utterance. If there was no prior utterance (such as at the start of a classroom session), we prepended empty strings. Similarly, given the previous and subsequent utterances condition with a window size of two, the target utterance would have two previous utterances prepended to the left and two subsequent utterances appended to the right. Separator tokens differentiated all of the utterances. As an additional preprocessing step, all utterances were truncated to 30 tokens long. The choice of truncation length was decided based on the distribution of sequence length (number of tokens) for all utterances in the dataset (see Figure 3) . A token size of 30 accounted for more than 95% of the utterances in the dataset (two standard deviations from the mean of the sequence length of seven tokens). We then fine-tuned transformers on the TalkMoves training set with different parameters using Amazon EC2 instances. We followed the recommended parameters from (Suresh et al., 2019 including learning rate (2e-5, 3e-5, 4e-5, 5e-5), number of epochs (3-6), batch size (4,8,16,32), warmup steps (0,100,1000) and maximum sequence length (512 for Roberta-like models) and (512,1024 for Longformer and BigBird). The performance on the testing set after fine-tuning is reported based on F1 measures and MCC (Suresh et al., 2021a) . These measures work well for skewed datasets like Talk-Moves (Chicco and Jurman, 2020; Suresh et al., 2021b) . The code was implemented in Python 3.8 with Pytorch and HuggingFace library (Wolf et al., 2019) . In addition to the context-addition experiments with Roberta-base, we fine-tuned similar transformers architectures. XLNet, Longformer and BigBird are transformer architectures which support longer sequences. Since the TalkMoves dataset is composed of utterances from K-12 mathematics classrooms, we fine-tuned MathBERT, a pretrained architecture with focus on mathematical formula understanding.",
"cite_spans": [
{
"start": 1843,
"end": 1863,
"text": "(Suresh et al., 2019",
"ref_id": "BIBREF37"
},
{
"start": 2183,
"end": 2205,
"text": "(Suresh et al., 2021a)",
"ref_id": "BIBREF33"
},
{
"start": 2269,
"end": 2294,
"text": "(Chicco and Jurman, 2020;",
"ref_id": "BIBREF6"
},
{
"start": 2295,
"end": 2316,
"text": "Suresh et al., 2021b)",
"ref_id": "BIBREF35"
},
{
"start": 2395,
"end": 2414,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 1511,
"end": 1520,
"text": "Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Context-addition experiments",
"sec_num": "4.3"
},
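A self-contained sketch of the fine-tuning setup described above (RoBERTa-base with a seven-way output layer) using the HuggingFace Trainer; the toy in-memory dataset and the specific hyperparameter values picked from the reported search ranges are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: fine-tuning roberta-base for 7-way talk-move classification.
# The toy dataset stands in for the windowed TalkMoves inputs.
import torch
from transformers import (RobertaTokenizer, RobertaForSequenceClassification,
                          TrainingArguments, Trainer)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=7)

# Tiny stand-in data; real inputs are separator-joined utterance windows.
texts = ["Student: I agree. </s> Teacher: What do you think?"] * 8
labels = [0, 1, 2, 3, 4, 5, 6, 0]
enc = tokenizer(texts, truncation=True, max_length=512, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

args = TrainingArguments(
    output_dir="talkmoves-roberta",
    learning_rate=2e-5,               # searched over 2e-5 .. 5e-5 in the text
    num_train_epochs=3,               # searched over 3-6
    per_device_train_batch_size=4,    # searched over 4, 8, 16, 32
    warmup_steps=0,                   # searched over 0, 100, 1000
    logging_steps=1,
)

Trainer(model=model, args=args, train_dataset=ToyDataset()).train()
```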
{
"text": "In this section, we present the results from our experiments that involved providing additional context to transformers to support the process of learning long-term dependencies. The experiments were repeated with ten random seeds, and the average score is reported (Table 2, 3) . For brevity, we report performance only on Roberta-base (the best performing model from (Suresh et al., 2021b) as indicated in the first column of ( Table 2) and transformers such as Longformer and Bigbird (Table 3) . All the models are Base models (Large models are beyond the scope of this work). In the second column, we describe the context that was provided to the target teacher utterance for classification. For example, Previous 1 should be interpreted as a single previous utterance prepended to the target teacher's utterance. Similarly, Subsequent 1 should be interpreted as a single subsequent utterance appended to the target utterance. The third and final column describes the performance of the testing set.",
"cite_spans": [
{
"start": 369,
"end": 391,
"text": "(Suresh et al., 2021b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 266,
"end": 278,
"text": "(Table 2, 3)",
"ref_id": "FIGREF1"
},
{
"start": 430,
"end": 438,
"text": "Table 2)",
"ref_id": null
},
{
"start": 487,
"end": 497,
"text": "(Table 3)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For imbalanced datasets like TalkMoves, the Matthew Correlation Coefficient (MCC) and F1 measure are good indicators of model performance. An MCC score of +1 indicates a perfect correlation while 0 indicates a random correlation and -1 indicates a negative correlation. Similarly, the F1 score ranges from 0-100% where 100% indicates perfect performance. We begin with the No-Context condition which achieved a performance of 71.93% F1. On prepending the immediately prior or subsequent student utterance, the model achieved a performance of 76.32% F1 ). Next we turn to results from various context conditions with different window sizes followed by results from Longformer, BigBird, and other models. The maximum sequence length in most of these models was 512 with the exception of Longformer and Bigbird which had a sequence length upto 1024. The results presented in this work are comprehensive but not exhaustive since training and testing for all possible models and parameters is infeasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
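For reference, the reported metrics can be computed as in the sketch below with scikit-learn; the label ids and the weighted F1 averaging are assumptions, since the text does not state the averaging scheme.

```python
# Illustrative only: computing F1 and MCC for a 7-class talk-move prediction task.
from sklearn.metrics import f1_score, matthews_corrcoef

y_true = [0, 3, 3, 1, 6, 2]   # gold label ids (0-5 = talk moves, 6 = "None"); toy values
y_pred = [0, 3, 1, 1, 6, 2]   # model predictions; toy values

print(f"F1: {f1_score(y_true, y_pred, average='weighted') * 100:.2f}%")
print(f"MCC: {matthews_corrcoef(y_true, y_pred):.3f}")
```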
{
"text": "The results table clearly illustrates the impor-tance of context in enhancing performance. Starting with Roberta-Base, the performance on the previous-only condition gradually increased with an increase in window-size and saturated for larger window-sizes. Similarly, we observed an improvement in performance for the subsequent-only condition. However, we did not see any significant improvement for larger window-sizes in this condition, possibly due to the negative impact in performance on \"Revoicing\" and \"Restating\" which rely on immediately prior student sentences. Moreover, the combination of previous and subsequent utterances resulted in the best performing model. The performance gradually increased proportionally with a window size up to 7 before saturating. Likewise, the performance on Longformer, XLNet and BigBird were comparable with similar input representation. The most surprising result was the performance on MathBert which was signficantly lower than other models. In summary, Roberta-Base with equal previous-subsequent condition (size =7) outperformed rest of the models and constitutes the state-of-the-art results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The primary motivation of the error analysis using a confusion matrix was to improve the performance on the under-performing talk move categories and identify patterns among the misclassfied utterances to be leveraged as features for the models. When comparing the confusion matrix from prior work ) (see Table 4 ), the current study shows a significant improvement in performance across all the teacher talk moves labels except \"Restating\" (see Table 5 ). With \"Restating\", we hypothesize that the decrease in performance was a result of supplementing additional context. Further analysis has to be performed in order to validate this claim.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 446,
"end": 453,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Based on the results from our experiments to improve the performance of a talk moves classifier using transformers, it is evident that longer dialog windows play an important role in differentiating talk moves. We successfully validated that the local discursive context is an important feature in classifying teacher talk moves. We generated a 4% F1 increase in performance when including a single additional utterance (either previous or subsequent) as compared to the no-context condition. Also, we observed that previous utterances are more impactful than future utterances for classifying talk moves. This finding is not surprising given that several talk moves, such as the teacher \"restating\" and \"revoicing\" what a student has already said, depend entirely on previous utterances as context. We also observed that context windows with a combination of previous and future utterances outperform either condition alone. Finally, we found that a window size of seven previous and subsequent utterances achieves the best performance. Beyond the identified size of seven, the performance decreases. It is possible that much earlier or much later utterances provide confusing or conflicting contextual information, which hinders model performance. It is equally likely that longer dialog windows could lead to overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Prior efforts to address the imbalanced nature of TalkMoves dataset through weighted loss resulted in reduced performance (Suresh et al., 2019) . As an alternative, we attempted to generate synthetic samples of tokenized utterances through SMOTE (Synthetic Minority Oversampling Data) (Chawla et al., 2002) . With SMOTE, it was challenging to retain the syntactic information of the generated examples. It was also difficult to generate the supporting contextual student and teacher utterances. Preliminary efforts did not yield any improvement in performance.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Suresh et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 285,
"end": 306,
"text": "(Chawla et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
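A minimal sketch of the kind of SMOTE oversampling attempt mentioned above, using the imbalanced-learn package; applying SMOTE directly to fixed-length token-id vectors is an assumption about the setup, and the interpolated ids it produces illustrate why syntactic information is hard to retain.

```python
# Sketch only: SMOTE over stand-in token-id vectors for an imbalanced label set.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.integers(0, 50265, size=(200, 30))   # 30 token ids per utterance (stand-in data)
y = np.array([0] * 180 + [1] * 20)           # heavily imbalanced labels (stand-in data)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(X_res.shape, np.bincount(y_res))       # minority class oversampled to parity
# The synthetic rows are interpolations of token ids that no longer correspond to
# well-formed utterances, mirroring the difficulty noted in the text.
```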
{
"text": "To further improve the performance, we have identified two future directions that appear worthwhile to consider: (1) experimenting with punctuation and other linguistic markers in the existing TalkMoves dataset and (2) collecting more training data. In the TalkMoves dataset, all the punctuation and other non-alphanumeric characters from the teacher and student utterances were removed. These text processing steps are typical for most text-based NLP applications to produce text that closely aligns with the output of Automated Speech Recognition (ASR) systems. However, we hypothesize that punctuation could play a significant role in differentiating one talk move from another. For example, \"Agreed?\" with a question mark can be considered an instance of \"Keeping everyone together\" whereas \"Agreed\" as a statement would be an instance of \"None.\" It remains to be determined the extent to which including punctuation markers might impact the performance of the models. Similarly, we can try incorporating speaker turns to indicate a student or teacher turn in previous and subsequent utterances as additional features to the model. Another option that warrants consideration is supplementing data for the purpose of model pretraining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "TalkMoves dataset (github.com/SumnerLab/TalkMoves) is a relatively small dataset for pretraining transformers when compared to Roberta which was pretrained on millions of data points. At the same time, we recognize the challenge in the collecting and annotating thousands of classroom transcripts. Moreover, there are important privacy concerns and other ethical considerations, given that these data involve minors, use proper names (which can be critical information for talk moves classification), and can be challenging to access in large quantities. We could potentially explore active learning to achieve greater accuracy with limited samples (Settles, 2009) . Active learning is often sought as an option in machine learning applications where unlabeled instances are abundantly available (Schr\u00f6der et al., 2021) .",
"cite_spans": [
{
"start": 649,
"end": 664,
"text": "(Settles, 2009)",
"ref_id": "BIBREF31"
},
{
"start": 796,
"end": 819,
"text": "(Schr\u00f6der et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Documenting consequential elements of classroom instruction and providing teachers with feedback on their practices are critical endeavors in the education field. Taking into consideration the strong need to provide reliable feedback to teachers on productive classroom discourse, we need robust models to automatically classify teacher talk moves with high reliability. In this paper, we report on a number of experiments that involved providing longer dialog windows to the transformers in an effort to improve model performance. Based on these experiments, we generated a state-of-the-art 2.6% F1 improvement in performance (78.92% F1) over the previous models, primarily by adding a set number of previous and subsequent utterances to the input representation. Clearly, there are both challenges and opportunities for the development of innovative uses of AI techniques, particularly as they can be incorporated into tools that support teacher and student learning. The findings from this research open new avenues for exploration that can benefit both the education and NLP communi- ties who might adopt our methods in applications where the local context may prove critical to improving performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "The research team would like to thank Eddie Dombower and his team at Curve 10 for their contributions to the design and implementation of the TalkBack application. This material is based upon work supported by the National Science Foundation under Grant Numbers 1600325 and 1837986. This research was supported by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions expressed are those of the authors and do not represent views of the NSF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Intensification of group knowledge exchange with academically productive talk agents",
"authors": [
{
"first": "David",
"middle": [],
"last": "Adamson",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Ashe",
"suffix": ""
},
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yaron",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"Penstein"
],
"last": "Ros\u00e9",
"suffix": ""
}
],
"year": 2013,
"venue": "CSCL (1)",
"volume": "",
"issue": "",
"pages": "10--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Adamson, Colin Ashe, Hyeju Jang, David Yaron, and Carolyn Penstein Ros\u00e9. 2013. Intensification of group knowledge exchange with academically pro- ductive talk agents. In CSCL (1), pages 10-17.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessing instructional quality in mathematics",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Boston",
"suffix": ""
}
],
"year": 2012,
"venue": "The Elementary School Journal",
"volume": "113",
"issue": "1",
"pages": "76--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Boston. 2012. Assessing instructional quality in mathematics. The Elementary School Journal, 113(1):76-104.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discourse actions to promote student access",
"authors": [
{
"first": "G",
"middle": [],
"last": "Amber",
"suffix": ""
},
{
"first": "Melissa",
"middle": [
"D"
],
"last": "Candela",
"suffix": ""
},
{
"first": "Juli",
"middle": [
"K"
],
"last": "Boston",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dixon",
"suffix": ""
}
],
"year": 2020,
"venue": "Mathematics Teacher: Learning and Teaching PK",
"volume": "12",
"issue": "4",
"pages": "266--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amber G Candela, Melissa D Boston, and Juli K Dixon. 2020. Discourse actions to promote student access. Mathematics Teacher: Learning and Teaching PK-12, 113(4):266-277.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Smote: synthetic minority over-sampling technique",
"authors": [
{
"first": "V",
"middle": [],
"last": "Nitesh",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "W Philip",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of artificial intelligence research",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research, 16:321-357.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficacy of video-based teacher professional development for increasing classroom discourse and student learning",
"authors": [
{
"first": "Gaowei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "K",
"middle": [
"H"
],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Sherice",
"middle": [
"N"
],
"last": "Chan",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"B"
],
"last": "Clarke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Resnick",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of the Learning Sciences",
"volume": "29",
"issue": "4-5",
"pages": "642--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaowei Chen, Carol KK Chan, Kennedy KH Chan, Sherice N Clarke, and Lauren B Resnick. 2020. Ef- ficacy of video-based teacher professional develop- ment for increasing classroom discourse and student learning. Journal of the Learning Sciences, 29(4- 5):642-680.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation",
"authors": [
{
"first": "Davide",
"middle": [],
"last": "Chicco",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Jurman",
"suffix": ""
}
],
"year": 2020,
"venue": "BMC genomics",
"volume": "21",
"issue": "1",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davide Chicco and Giuseppe Jurman. 2020. The advan- tages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. BMC genomics, 21(1):1-13.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving teaching at scale: Design for the scientific measurement and learning of discourse practice. Socializing Intelligence Through Academic Talk and Dialogue",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Correnti",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Kay"
],
"last": "Stein",
"suffix": ""
},
{
"first": "Margaret",
"middle": [
"S"
],
"last": "Smith",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Greeno",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Ashley",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Correnti, Mary Kay Stein, Margaret S Smith, James Scherrer, Margaret McKeown, James Greeno, and Kevin Ashley. 2015. Improving teaching at scale: Design for the scientific measurement and learning of discourse practice. Socializing Intelligence Through Academic Talk and Dialogue. AERA, 284.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transformer-xl: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.02860"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language mod- els beyond a fixed-length context. arXiv preprint arXiv:1901.02860.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Toward the automated analysis of teacher talk in secondary ela classrooms. Teaching and Teacher Education",
"authors": [
{
"first": "Amanda",
"middle": [
"J"
],
"last": "Meghan E Dale",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"A"
],
"last": "Godley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Capello",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Patrick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Donnelly",
"suffix": ""
},
{
"first": "K D'",
"middle": [],
"last": "Sidney",
"suffix": ""
},
{
"first": "Sean",
"middle": [
"P"
],
"last": "Mello",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kelly",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "110",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meghan E Dale, Amanda J Godley, Sarah A Capello, Patrick J Donnelly, Sidney K D'Mello, and Sean P Kelly. 2022. Toward the automated analysis of teacher talk in secondary ela classrooms. Teaching and Teacher Education, 110:103584.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Can automated feedback improve teachers' uptake of student ideas",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Heather",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Piech",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Jing Liu, Heather C Hill, Dan Juraf- sky, and Chris Piech. 2021. Can automated feedback improve teachers' uptake of student ideas? evidence from a randomized controlled trial in a large-scale online course.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Words matter: automatic detection of teacher questions in live classroom discourse using linguistics, acoustics, and context",
"authors": [
{
"first": "J",
"middle": [],
"last": "Patrick",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [],
"last": "Donnelly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blanchard",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Olney",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Sidney K D'",
"middle": [],
"last": "Nystrand",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mello",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Seventh International Learning Analytics & Knowledge Conference",
"volume": "",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick J Donnelly, Nathaniel Blanchard, Andrew M Olney, Sean Kelly, Martin Nystrand, and Sidney K D'Mello. 2017. Words matter: automatic detection of teacher questions in live classroom discourse using linguistics, acoustics, and context. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, pages 218-227. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mathematics Discourse in Secondary Classrooms: A Practice-based Resource for Professional Learning: Facilitator Guide",
"authors": [
{
"first": "A",
"middle": [],
"last": "Beth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Herbel-Eisenmann",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth A Herbel-Eisenmann. 2017. Mathematics Dis- course in Secondary Classrooms: A Practice-based Resource for Professional Learning: Facilitator Guide. Math Solutions.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The international handbook of collaborative learning",
"authors": [
{
"first": "Cindy",
"middle": [
"E"
],
"last": "Hmelo-Silver",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Chinn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "O'donnell",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cindy E Hmelo-Silver, Clark A Chinn, Angela M O'Donnell, and Carol Chan. 2013. The international handbook of collaborative learning.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Promoting rich discussions in mathematics classrooms: Using personalized, automated feedback to support reflection and instructional change. Teaching and Teacher Education",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Karla",
"middle": [],
"last": "Scornavacco",
"suffix": ""
},
{
"first": "Charis",
"middle": [],
"last": "Harty",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Vivian",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Jacobs, Karla Scornavacco, Charis Harty, Abhi- jit Suresh, Vivian Lai, and Tamara Sumner. 2022. Promoting rich discussions in mathematics class- rooms: Using personalized, automated feedback to support reflection and instructional change. Teaching and Teacher Education.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Toward automated feedback on teacher discourse to enhance teacher learning",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Jensen",
"suffix": ""
},
{
"first": "Meghan",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Patrick",
"suffix": ""
},
{
"first": "Cathlyn",
"middle": [],
"last": "Donnelly",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Sidney K D'",
"middle": [],
"last": "Godley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mello",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Jensen, Meghan Dale, Patrick J Donnelly, Cath- lyn Stone, Sean Kelly, Amanda Godley, and Sid- ney K D'Mello. 2020. Toward automated feedback on teacher discourse to enhance teacher learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bert for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.09091"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019. Bert for coreference reso- lution: Baselines and analysis. arXiv preprint arXiv:1908.09091.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Revealing the dark secrets of bert",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.08593"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. arXiv preprint arXiv:1908.08593.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning lessons. Harvard University Press",
"authors": [
{
"first": "Hugh",
"middle": [],
"last": "Mehan",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugh Mehan. 1979. Learning lessons. Harvard Univer- sity Press Cambridge, MA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Conceptualizing talk moves as tools: Professional development approaches for academically productive discussion. Socializing intelligence through talk and dialogue",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Michaels",
"suffix": ""
},
{
"first": "Catherine O'",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "347--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Michaels and Catherine O'Connor. 2015. Con- ceptualizing talk moves as tools: Professional de- velopment approaches for academically productive discussion. Socializing intelligence through talk and dialogue, pages 347-362.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Accountable talk\u00ae sourcebook",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Michaels",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Catherine"
],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Megan",
"middle": [
"Williams"
],
"last": "Hall",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"B"
],
"last": "Resnick",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Michaels, Mary Catherine O'Connor, Megan Williams Hall, and Lauren B Resnick. 2010. Accountable talk\u00ae sourcebook. Pittsburg, PA: Institute for Learning University of Pittsburgh.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Supporting teachers in taking up productive talk moves: The long road to professional learning at scale. International",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Michaels",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Educational Research",
"volume": "97",
"issue": "",
"pages": "166--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine O'Connor and Sarah Michaels. 2019. Sup- porting teachers in taking up productive talk moves: The long road to professional learning at scale. Inter- national Journal of Educational Research, 97:166- 175.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Scaling down\" to explore the role of talk in learning: From district intervention to controlled classroom study. Socializing intelligence through academic talk and dialogue",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Michaels",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Chapin",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "111--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine O'Connor, Sarah Michaels, and Suzanne Chapin. 2015. Scaling down\" to explore the role of talk in learning: From district intervention to controlled classroom study. Socializing intelligence through academic talk and dialogue, pages 111-126.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Mathbert: A pre-trained model for mathematical formula understanding",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Liangcai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.00377"
]
},
"num": null,
"urls": [],
"raw_text": "Shuai Peng, Ke Yuan, Liangcai Gao, and Zhi Tang. 2021. Mathbert: A pre-trained model for math- ematical formula understanding. arXiv preprint arXiv:2105.00377.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Say what? automatic modeling of collaborative problem solving skills from student speech in the wild",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Pugh",
"suffix": ""
},
{
"first": "Shree",
"middle": [
"Krishna"
],
"last": "Subburaj",
"suffix": ""
},
{
"first": "Arjun",
"middle": [
"Ramesh"
],
"last": "Rao",
"suffix": ""
},
{
"first": "Angela",
"middle": [
"EB"
],
"last": "Stewart",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Andrews-Todd",
"suffix": ""
},
{
"first": "Sidney",
"middle": [
"K"
],
"last": "D'Mello",
"suffix": ""
}
],
"year": 2021,
"venue": "International Educational Data Mining Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L Pugh, Shree Krishna Subburaj, Arjun Ramesh Rao, Angela EB Stewart, Jessica Andrews-Todd, and Sidney K D'Mello. 2021. Say what? automatic mod- eling of collaborative problem solving skills from stu- dent speech in the wild. International Educational Data Mining Society.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Accountable talk: Instructional dialogue that builds the mind",
"authors": [
{
"first": "Lauren",
"middle": [
"B"
],
"last": "Resnick",
"suffix": ""
},
{
"first": "Christa",
"middle": [
"SC"
],
"last": "Asterhan",
"suffix": ""
},
{
"first": "Sherice",
"middle": [
"N"
],
"last": "Clarke",
"suffix": ""
}
],
"year": 2018,
"venue": "The International Academy of Education (IAE) and the International Bureau of Education (IBE) of the United Nations Educational, Scientific and Cultural Organization (UNESCO)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren B Resnick, Christa SC Asterhan, and Sherice N Clarke. 2018. Accountable talk: Instructional dia- logue that builds the mind. Geneva, Switzerland: The International Academy of Education (IAE) and the International Bureau of Education (IBE) of the United Nations Educational, Scientific and Cultural Organization (UNESCO).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Andreas Niekler, and Martin Potthast. 2021. Uncertainty-based query strategies for active learning with transformers",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Schr\u00f6der",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2107.05687"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Schr\u00f6der, Andreas Niekler, and Martin Pot- thast. 2021. Uncertainty-based query strategies for active learning with transformers. arXiv preprint arXiv:2107.05687.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Speech recognition for spoken dialogue systems",
"authors": [
{
"first": "E",
"middle": [],
"last": "Schukat-Talamazzini",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Niemann",
"suffix": ""
}
],
"year": 1994,
"venue": "Progress and Prospects of Speech Research and Technology: Proc. of the CRIM/FORWISS Workshop",
"volume": "1",
"issue": "",
"pages": "110--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Schukat-Talamazzini, T Kuhn, and H Niemann. 1994. Speech recognition for spoken dialogue systems. In Progress and Prospects of Speech Research and Tech- nology: Proc. of the CRIM/FORWISS Workshop, PAI, volume 1, pages 110-120.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic classification of semantic content of classroom dialogue",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shunwei",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Tianyong",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Zixin",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Educational Computing Research",
"volume": "59",
"issue": "3",
"pages": "496--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Song, Shunwei Lei, Tianyong Hao, Zixin Lan, and Ying Ding. 2021. Automatic classification of se- mantic content of classroom dialogue. Journal of Educational Computing Research, 59(3):496-521.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Using ai to promote equitable classroom discussions: The talkmoves application",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Charis",
"middle": [],
"last": "Clevenger",
"suffix": ""
},
{
"first": "Vivian",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Artificial Intelligence in Education",
"volume": "",
"issue": "",
"pages": "344--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Suresh, Jennifer Jacobs, Charis Clevenger, Vi- vian Lai, Chenhao Tan, James H Martin, and Tamara Sumner. 2021a. Using ai to promote equitable class- room discussions: The talkmoves application. In International Conference on Artificial Intelligence in Education, pages 344-348. Springer.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The talkmoves dataset: K-12 mathematics lesson transcripts annotated for teacher and student discursive moves",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Charis",
"middle": [],
"last": "Harty",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Perkoff",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2022,
"venue": "13th International Conference on Language Resources and Evaluation (LREC 2022",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Suresh, Jennifer Jacobs, Charis Harty, Margaret Perkoff, James H Martin, and Tamara Sumner. 2022. The talkmoves dataset: K-12 mathematics lesson transcripts annotated for teacher and student discur- sive moves. 13th International Conference on Lan- guage Resources and Evaluation (LREC 2022).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Using transformers to provide teachers with personalized feedback on their classroom discourse: The talkmoves application",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Vivian",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Suresh, Jennifer Jacobs, Vivian Lai, Chenhao Tan, Wayne Ward, James H Martin, and Tamara Sum- ner. 2021b. Using transformers to provide teach- ers with personalized feedback on their classroom discourse: The talkmoves application. AAAI 2021",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Spring Symposium on Artificial Intelligence for K-12 Education",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spring Symposium on Artificial Intelligence for K-12 Education.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Automating analysis and feedback to improve mathematics teachers' classroom discourse",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Foland",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI conference on artificial intelligence",
"volume": "33",
"issue": "",
"pages": "9721--9728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Suresh, Tamara Sumner, Jennifer Jacobs, Bill Foland, and Wayne Ward. 2019. Automating analy- sis and feedback to improve mathematics teachers' classroom discourse. In Proceedings of the AAAI con- ference on artificial intelligence, volume 33, pages 9721-9728.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Promoting academically productive talk with conversational agent interventions in collaborative learning settings",
"authors": [],
"year": 2015,
"venue": "Stergios Tegos, Stavros Demetriadis, and Anastasios Karakostas",
"volume": "87",
"issue": "",
"pages": "309--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stergios Tegos, Stavros Demetriadis, and Anastasios Karakostas. 2015. Promoting academically produc- tive talk with conversational agent interventions in collaborative learning settings. Computers & Educa- tion, 87:309-325.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Interaction between learning and development. Readings on the development of children",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Vygotsky",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "23",
"issue": "",
"pages": "34--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Vygotsky. 1978. Interaction between learning and development. Readings on the development of chil- dren, 23(3):34-41.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The role of teacher instructional practices in student collaboration",
"authors": [
{
"first": "Noreen",
"middle": [
"M"
],
"last": "Webb",
"suffix": ""
},
{
"first": "Megan",
"middle": [
"L"
],
"last": "Franke",
"suffix": ""
},
{
"first": "Marsha",
"middle": [],
"last": "Ing",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Tondra",
"middle": [],
"last": "De",
"suffix": ""
},
{
"first": "Deanna",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Battey",
"suffix": ""
}
],
"year": 2008,
"venue": "Contemporary educational psychology",
"volume": "33",
"issue": "3",
"pages": "360--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noreen M Webb, Megan L Franke, Marsha Ing, Angela Chan, Tondra De, Deanna Freund, and Dan Battey. 2008. The role of teacher instructional practices in student collaboration. Contemporary educational psychology, 33(3):360-381.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Classroom talk for rigorous reading comprehension instruction",
"authors": [
{
"first": "Mikyung Kim",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"C"
],
"last": "Crosson",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"B"
],
"last": "Resnick",
"suffix": ""
}
],
"year": 2005,
"venue": "Reading Psychology",
"volume": "26",
"issue": "1",
"pages": "27--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikyung Kim Wolf, Amy C Crosson, and Lauren B Resnick. 2005. Classroom talk for rigorous read- ing comprehension instruction. Reading Psychology, 26(1):27-53.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Huggingface's transformers: State-ofthe-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of- the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Unsupervised data augmentation for consistency training",
"authors": [
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "6256--6268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmenta- tion for consistency training. Advances in Neural Information Processing Systems, 33:6256-6268.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for lan- guage understanding. Advances in neural informa- tion processing systems, 32.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Big bird: Transformers for longer sequences",
"authors": [
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Guru",
"middle": [],
"last": "Guruganesh",
"suffix": ""
},
{
"first": "Kumar",
"middle": [
"Avinava"
],
"last": "Dubey",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Ainslie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Ontanon",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Ravula",
"suffix": ""
},
{
"first": "Qifan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "17283--17297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283-17297.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of teacher talk moves in the TalkMoves dataset"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Modifying the input representation to support additional previous and subsequent utterances"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Number of utterances (frequency) vs sequence length (number of tokens) in TalkMoves dataset"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Category</td><td>Talk move</td><td>Description</td><td>Example</td></tr><tr><td/><td/><td>Teacher Talk Moves</td><td/></tr><tr><td>Learning</td><td>Keeping everyone to-</td><td>Prompting students to be ac-</td><td>\"What did Eliza just say her</td></tr><tr><td>Community</td><td>gether</td><td>tive listeners and orienting</td><td>equation was?\"</td></tr><tr><td/><td/><td>students to each other</td><td/></tr><tr><td>Learning</td><td>Getting students to re-</td><td>Prompting students to react to</td><td>\"Do you agree with Juan that</td></tr><tr><td>Community</td><td>late to another's ideas</td><td>what a classmate said</td><td>the answer is 7/10?\"</td></tr><tr><td>Learning</td><td>Restating</td><td>Repeating all or part of what</td><td>\"Add two here.\"</td></tr><tr><td>Community</td><td/><td>a student said word for word</td><td/></tr><tr><td>Content</td><td>Pressing for accuracy</td><td>Prompting students to make a</td><td>\"Can you give an example of</td></tr><tr><td>Knowledge</td><td/><td>mathematical contribution or</td><td>an ordered pair?\"</td></tr><tr><td/><td/><td>use mathematical language</td><td/></tr><tr><td>Rigorous</td><td>Revoicing</td><td>Repeating what a student said</td><td>\"Julia told us she would add</td></tr><tr><td>Thinking</td><td/><td>but adding on or changing the</td><td>two here.\"</td></tr><tr><td/><td/><td>wording</td><td/></tr><tr><td>Rigorous</td><td colspan=\"2\">Pressing for reasoning Prompting students to explain,</td><td>\"Why could I argue that the</td></tr><tr><td>Thinking</td><td/><td>provide evidence, share their</td><td>slope should be increasing?\"</td></tr><tr><td/><td/><td>thinking behind a decision, or</td><td/></tr><tr><td/><td/><td>connect ideas or representa-</td><td/></tr><tr><td/><td/><td>tions</td><td/></tr></table>",
"html": null,
"text": "Teacher talk moves from TalkMoves dataset",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Model</td><td>Context</td><td>MCC F1 (%)</td></tr><tr><td colspan=\"2\">Roberta-Base Previous 7 -Subsequent 7</td><td>0.7805 78.92</td></tr><tr><td>MathBERT</td><td>Previous 7 -Subsequent 7</td><td>0.6890 70.18</td></tr><tr><td>XLNet</td><td>Previous 7 -Subsequent 7</td><td>0.7709 78.06</td></tr><tr><td>Longformer</td><td>Previous 7 -Subsequent 7</td><td>0.7752 78.47</td></tr><tr><td>BigBird</td><td>Previous 7 -Subsequent 7</td><td>0.7694 77.89</td></tr><tr><td>BigBird</td><td colspan=\"2\">Previous 10 -Subsequent 10 0.7603 77.11</td></tr></table>",
"html": null,
"text": "Performance on classification of teacher talk moves on other models",
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Roberta-Base (Immediate Student)</td><td>Actual</td><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Precision Recall F1</td></tr><tr><td>0 -None</td><td/><td colspan=\"3\">42786 1779 67</td><td>54</td><td>232</td><td colspan=\"2\">1091 74</td><td>0.93</td><td>0.93</td><td>0.934</td></tr><tr><td>1 -Keeping Everyone together</td><td/><td>1599</td><td colspan=\"4\">6549 106 139 99</td><td>518</td><td>30</td><td>0.73</td><td>0.72</td><td>0.73</td></tr><tr><td>2 -Getting students to relate</td><td/><td>171</td><td>177</td><td colspan=\"2\">715 0</td><td>2</td><td>120</td><td>33</td><td>0.71</td><td>0.59</td><td>0.64</td></tr><tr><td>3 -Restating</td><td>Predicted</td><td>112</td><td>18</td><td>3</td><td colspan=\"2\">932 21</td><td>12</td><td>0</td><td>0.79</td><td>0.85</td><td>0.82</td></tr><tr><td>4 -Revoicing</td><td/><td>562</td><td>72</td><td>2</td><td>47</td><td colspan=\"2\">1063 44</td><td>0</td><td>0.72</td><td>0.59</td><td>0.62</td></tr><tr><td>5 -Pressing for accuracy</td><td/><td>762</td><td>367</td><td/><td>9</td><td>51</td><td colspan=\"3\">8289 669 0.82</td><td>0.86</td><td>0.84</td></tr><tr><td>6 -Pressing for reasoning</td><td/><td>56</td><td>6</td><td colspan=\"2\">315 1</td><td>1</td><td>86</td><td colspan=\"2\">753 0.79</td><td>0.82</td><td>0.80</td></tr></table>",
"html": null,
"text": "Confusion matrix from Roberta-Base with Immediate student utterance as context",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Roberta-Base (Previous 7 -Subsequent 7) Actual</td><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Precision Recall F1</td></tr><tr><td>0 -None</td><td/><td colspan=\"2\">14594 522</td><td>42</td><td>40</td><td colspan=\"2\">122 312</td><td>16</td><td>0.94</td><td>0.93</td><td>0.94</td></tr><tr><td>1 -Keeping Everyone together</td><td/><td>512</td><td colspan=\"2\">2321 53</td><td>26</td><td>26</td><td>130</td><td>4</td><td>0.77</td><td>0.76</td><td>0.76</td></tr><tr><td>2 -Getting students to relate</td><td/><td>31</td><td>23</td><td colspan=\"2\">206 0</td><td>0</td><td>37</td><td>9</td><td>0.64</td><td>0.67</td><td>0.65</td></tr><tr><td>3 -Restating</td><td>Predicted</td><td>25</td><td>8</td><td>1</td><td colspan=\"2\">263 7</td><td>2</td><td>0</td><td>0.73</td><td>0.86</td><td>0.79</td></tr><tr><td>4 -Revoicing</td><td/><td>179</td><td>24</td><td>0</td><td>25</td><td colspan=\"2\">326 7</td><td>1</td><td>0.66</td><td>0.58</td><td>0.62</td></tr><tr><td>5 -Pressing for accuracy</td><td/><td>207</td><td>112</td><td>21</td><td>5</td><td>12</td><td colspan=\"2\">2678 41</td><td>0.84</td><td>0.87</td><td>0.85</td></tr><tr><td>6 -Pressing for reasoning</td><td/><td>8</td><td>2</td><td>1</td><td>0</td><td>0</td><td>27</td><td colspan=\"2\">242 0.77</td><td>0.86</td><td>0.82</td></tr></table>",
"html": null,
"text": "Confusion matrix from Roberta-Base with Previous-7 and Subsequent-7 utterances as context. Compared toTable 4, we see an improvement in F1 score for almost all of the talk moves except Restating.",
"num": null
}
}
}
}