{
"paper_id": "N12-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:04:50.030706Z"
},
"title": "Towards Using EEG to Improve ASR Accuracy",
"authors": [
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"addrLine": "5000 Forbes Avenue",
"postCode": "15213-3891",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Kai-Min",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"addrLine": "5000 Forbes Avenue",
"postCode": "15213-3891",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jack",
"middle": [],
"last": "Mostow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"addrLine": "5000 Forbes Avenue",
"postCode": "15213-3891",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We report on a pilot experiment to improve the performance of an automatic speech recognizer (ASR) by using a single-channel EEG signal to classify the speaker's mental state as reading easy or hard text. We use a previously published method (Mostow et al., 2011) to train the EEG classifier. We use its probabilistic output to control weighted interpolation of separate language models for easy and difficult reading. The EEG-adapted ASR achieves higher accuracy than two baselines. We analyze how its performance depends on EEG classification accuracy. This pilot result is a step towards improving ASR more generally by using EEG to distinguish mental states.",
"pdf_parse": {
"paper_id": "N12-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "We report on a pilot experiment to improve the performance of an automatic speech recognizer (ASR) by using a single-channel EEG signal to classify the speaker's mental state as reading easy or hard text. We use a previously published method (Mostow et al., 2011) to train the EEG classifier. We use its probabilistic output to control weighted interpolation of separate language models for easy and difficult reading. The EEG-adapted ASR achieves higher accuracy than two baselines. We analyze how its performance depends on EEG classification accuracy. This pilot result is a step towards improving ASR more generally by using EEG to distinguish mental states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humans use speech to communicate what's on their mind. However, until now, automatic speech recognizers (ASR) and dialogue systems have had no direct way to take into account what is going on in a speaker's mind. Some work has attempted to infer cognitive states from volume and speaking rate to adapt language modeling (Ward and Vega, 2009) or from query click logs (Hakkani-T\u00fcr et al., 2011) to detect domains. A new way to address this limitation is to infer mental states from electroencephalogram (EEG) signals.",
"cite_spans": [
{
"start": 320,
"end": 341,
"text": "(Ward and Vega, 2009)",
"ref_id": "BIBREF5"
},
{
"start": 367,
"end": 393,
"text": "(Hakkani-T\u00fcr et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EEG is a voltage signal that can be measured on the surface of the scalp, arising from large areas of coordinated neural activity. This neural activity varies as a function of development, mental state, and cognitive activity, and EEG can measurably detect such variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, a few companies have scaled back medical grade EEG technology to create portable EEG headsets that are commercially available and simple to use. The NeuroSky MindSet TM (2009) , for example, is an audio headset equipped with a single-channel EEG sensor. It measures the voltage between an electrode that rests on the forehead and electrodes in contact with the ear. Unlike the multi-channel electrode nets worn in labs, the sensor requires no gel or saline for recording, and requires no expertise to wear. Even with the limitations of recording from only a single sensor and working with untrained users, Furthermore, Mostow et al.(2011) used its output signal to distinguish easy from difficult reading, achieving above-chance accuracy. Here we build on that work by using the output of such classifiers to adapt language models for ASR and thereby improve recognition accuracy.",
"cite_spans": [
{
"start": 176,
"end": 185,
"text": "TM (2009)",
"ref_id": null
},
{
"start": 629,
"end": 648,
"text": "Mostow et al.(2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most similar work is Jou and Schultz's (2008) use of electromyographic (EMG) signals generated by human articulatory muscles in producing speech. They showed that augmenting acoustic features with these EMG features can achieve rudimentary silent speech detection. Pasley et al. (2012) used electrocorticographic (ECoG) recordings from nonprimary auditory cortex in the human superior temporal gyrus to reconstruct acoustic information in speech sounds. Our work differs from these efforts in that we use a consumer-grade single-channel EEG sensor measuring frontal lobe activities, and that we use the detected mental state just to help improve ASR performance rather than to dictate or reconstruct speech, which are much harder tasks.",
"cite_spans": [
{
"start": 25,
"end": 49,
"text": "Jou and Schultz's (2008)",
"ref_id": "BIBREF1"
},
{
"start": 269,
"end": 289,
"text": "Pasley et al. (2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 describes how to use machine learning to distinguish mental states associated with easy and difficult readings. Section 3 describes how we use EEG classifier output to adapt ASR language models. Section 4 uses an oracle simulation to show how increasing EEG classifier accuracy will affect ASR accuracy. Section 5 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use training and testing data from Mostow et al.'s (2011) experiment, which presented text passages, one sentence at a time, to 10 adults and 11 nine-to ten-yearolds wearing a Neurosky Mindset TM (2009) . They read three easy and three difficult texts aloud, in alternating order. The \"easy\" passages were from texts classified by the Common Core Standards 1 at the K-1 level. The \"difficult\" passages were from practice materials for the Graduate Record Exam 2 and the ACE GED test 3 . Across the reading conditions, passages ranged from 62 to 83 words long. Although instructed to read the text aloud, the readers (especially children) did not always read correctly or follow the displayed sentences.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "Mostow et al.'s (2011)",
"ref_id": null
},
{
"start": 196,
"end": 205,
"text": "TM (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mental State Classification Using EEG",
"sec_num": "2"
},
{
"text": "Following Mostow et al. (2011) , we trained binary logistic regression classifiers to estimate the probability that an EEG signal is associated with reading an easy (or difficult) sentence. As features for logistic regression we used the streams of values logged by the MindSet:",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Mostow et al. (2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mental State Classification Using EEG",
"sec_num": "2"
},
{
"text": "1. The raw EEG signal, sampled at 512 Hz 2. A filtered version of the raw signal, also sampled at 512 Hz, which is raw signal smoothed over a window of 2 seconds 3. Proprietary \"attention\" and \"meditation\" measures, reported at 1 Hz 4. A power spectrum of 1Hz bands from 1-256 Hz, reported at 8 Hz 5. An indicator of signal quality, reported at 1 Hz Head movement or system instability led to missing or poor-quality EEG data for some utterances, which we excluded in order to focus on utterances with clear acoustic and EEG signals. The features for each utterance consisted of measures 1-4, averaged over the utterance, excluding the 15% of observations where measure 5 reported poor signals. After filtering, the data includes 269 utterances from adults and 243 utterances from children, where 327 utterances are for the easy passages and 185 utterances are for the difficult passages. To balance the classes, we used the undersampling method for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mental State Classification Using EEG",
"sec_num": "2"
},
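The utterance-level feature construction just described can be sketched as follows. This is our illustration, not the paper's code; the array layout and function name are assumptions:

```python
import numpy as np

def utterance_features(samples, good_quality):
    """Average each EEG measure over an utterance (measures 1-4),
    keeping only samples whose signal-quality indicator (measure 5)
    reported a good signal. `samples` is an (n_samples, n_measures)
    array; `good_quality` is a boolean mask of length n_samples."""
    samples = np.asarray(samples, dtype=float)
    mask = np.asarray(good_quality, dtype=bool)
    return samples[mask].mean(axis=0)

# Example: the third sample is flagged as poor quality and excluded.
feats = utterance_features([[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]],
                           [True, True, False])
```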
{
"text": "We trained a reader-specific classifier on each reader's data from all but one text passage, tested it on each sentence in the held-out passage, performed this procedure for each passage, and averaged the results to crossvalidate accuracy within readers. We computed classification accuracy as the percentage of utterances classified correctly. Classification accuracy for adults', children's, and total oral reading was 71.49%, 58.74%, and 65.45% respectively. A one-tailed t-test, with classification accuracy on an utterance as the random variable, showed that EEG classification was significantly better than chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mental State Classification Using EEG",
"sec_num": "2"
},
{
"text": "Traditional ASR decodes a word sequence W * from the acoustic model and language model as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "W * = argmax W P (W | A) (1) = argmax W P (A | W ) \u2022 P (W ) P (A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "To incorporate EEG, we include mental state N as an additional observation in the decoding procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "W * = argmax W P (W | A, N ) (2) = argmax W P (A | W ) \u2022 P (W | N ) P (A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "The six passages use a vocabulary of 430 distinct words. To evaluate the impact on ASR accuracy of using EEG to adapt language models, we needed acoustic models appropriate for the speakers. For adult speech, we used the US English HUB4 Acoustic Model from CMU Sphinx. For children's speech, we used Project LISTEN's acoustic models trained on children's oral reading.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "We used separate trigram language models (with bigram and unigram backoff) for easy and difficult text -EasyLM, trained on the three easy passages, and Diffi-cultLM, trained on the three difficult passages. Both language models used the same lexicon, consisting of the 430 words in all six target passages. All experiments used the same ASR parameter values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "As a gold standard, all utterances were manually transcribed by a native English speaker. To measure ASR performance, we computed Word Accuracy (WACC) as the number of words recognized correctly minus insertions divided by number of words in the reference transcripts for each reader, and averaged them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
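The WACC metric just defined can be sketched in Python. The word-level alignment procedure (a standard edit-distance dynamic program with backtrace) is our assumption; the paper states only the formula, correct words minus insertions over reference length:

```python
def wacc(reference, hypothesis):
    """Word Accuracy = (correct - insertions) / len(reference words),
    counted from a standard edit-distance alignment (unit costs)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimal edit cost aligning ref[:i] with hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Backtrace to count correctly recognized words and insertions.
    i, j, correct, insertions = len(ref), len(hyp), 0, 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            correct += ref[i - 1] == hyp[j - 1]
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            insertions += 1
            j -= 1
        else:
            i -= 1
    return (correct - insertions) / len(ref)
```

Note that, unlike plain word error rate, each inserted word directly subtracts 1/len(reference) from the score, so WACC can be negative for very noisy hypotheses.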
{
"text": "Then we can adapt the language model to estimate P (W | N ) using mental state information. Using the EEG classifier described in Section 2, we adapted the language model separately for each utterance, using three types of language model adaptation: hard selection, soft selection, and combination with ASR output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Adaptation for ASR",
"sec_num": "3"
},
{
"text": "Given the probabilistic estimate that a given utterance was easy or difficult (S Easy (N ) and S Difficult (N )), hard selection simply picks EasyLM if the utterance was likelier to be easy, or DifficultLM otherwise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Selection of Language Models",
"sec_num": "3.1"
},
{
"text": "P Hard (W | N ) = I C (N ) \u2022 P Easy (W ) (3) + (1 \u2212 I C (N )) \u2022 P Diff (W ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Selection of Language Models",
"sec_num": "3.1"
},
{
"text": "Here I C (N ) = 1 if S Easy (N ) > S Difficult (N ), and 0 otherwise, and P Easy (W ) and P Diff (W ) are the probabilities of word W in EasyLM and DifficultLM, respectively. For comparison, the Random Pick baseline randomly picks either EasyLM or DifficultLM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Selection of Language Models",
"sec_num": "3.1"
},
{
"text": "P Random (W ) = I R \u2022 P Easy (W ) (4) + (1 \u2212 I Random ) \u2022 P Diff (W ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Selection of Language Models",
"sec_num": "3.1"
},
{
"text": "Here I R is randomly set to 0 or 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Selection of Language Models",
"sec_num": "3.1"
},
{
"text": "Mental state classification based on EEG is imperfect, and using only the corresponding language model (Ea-syLM or DifficultLM) to decode the target utterance is liable to perform worse when the classifier is wrong. Thus, we use the classifier's probabilistic estimate that the utterance is easy (or difficult) as interpolation weights to linearly combine EasyLM and DifficultLM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "P Soft (W | N ) = w Easy (N ) \u2022 P Easy (W ) (5) + w Diff (N ) \u2022 P Diff (W ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "Here w Easy (N ) and w Diff (N ) are from classifier's output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w Easy (N ) = S Easy (N ), w Diff (N ) = S Diff (N )",
"eq_num": "(6)"
}
],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "Additionally, we can adjust the range of weights by smoothing the probability outputted by the EEG classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w Easy (N ) = \u03b4 + S Easy (N ) 2\u03b4 + 1 ,",
"eq_num": "(7)"
}
],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "w Diff (N ) = (\u03b4 + S Diff (N )) / (2\u03b4 + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "Here S Easy (N ) (or S Diff (N )) is the classifier's probabilistic estimate that the sentence is easy (or difficult) and \u03b4 is the smoothing weight, which we set to 0.5. After smoothing the probabilities, w Easy (N ) and w Diff (N ) each lie within the interval [0.25, 0.75], and w Easy (N ) + w Diff (N ) = 1. That is, Soft Selection with smoothing interpolates the two language models, but assigns a weight of at least 0.25 to each one to reduce the impact of EEG classifier errors. Notice that \u03b4 = 0 is equivalent to EEG Soft Selection without smoothing. For comparison, the Equal Weight baseline interpolates EasyLM and DifficultLM with equal weights:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
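Equations (5)-(7) can be illustrated with a short sketch (function names are ours, not the paper's): the classifier's probability is smoothed so each language model keeps a weight of at least \u03b4/(2\u03b4 + 1) = 0.25 when \u03b4 = 0.5, and the two models are then linearly interpolated:

```python
def smoothed_weights(s_easy, delta=0.5):
    """Smooth the EEG classifier's P(easy) into interpolation weights
    (Eq. 7). With delta = 0.5 each weight lies in [0.25, 0.75];
    delta = 0 recovers unsmoothed Soft Selection (Eq. 6)."""
    s_diff = 1.0 - s_easy
    w_easy = (delta + s_easy) / (2 * delta + 1)
    w_diff = (delta + s_diff) / (2 * delta + 1)
    return w_easy, w_diff

def p_soft(p_easy_w, p_diff_w, s_easy, delta=0.5):
    """Interpolated language-model probability of a word (Eq. 5)."""
    w_easy, w_diff = smoothed_weights(s_easy, delta)
    return w_easy * p_easy_w + w_diff * p_diff_w
```

For example, an overconfident classifier output of 1.0 is tempered to a weight of 0.75, so DifficultLM still contributes to decoding even when the classifier is wrong.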
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P Equal (W ) = 0.5 \u2022 P Easy (W ) + 0.5 \u2022 P Diff (W )",
"eq_num": "(8)"
}
],
"section": "Soft Selection of Language Models",
"sec_num": "3.2"
},
{
"text": "Given the ASR results from the Equal Weight baseline, we can derive S Easy (N ) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with ASR Output",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S Easy (N ) = \u03b1 \u2022 S Easy (N )",
"eq_num": "(9)"
}
],
"section": "Combination with ASR Output",
"sec_num": "3.3"
},
{
"text": "+ (1 \u2212 \u03b1) \u2022 P Easy (W 0 ) P Easy (W 0 ) + P Diff (W 0 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with ASR Output",
"sec_num": "3.3"
},
{
"text": "Here we can estimate S Easy (N ) based on the classifier's output and the probability of the recognized words W 0 in EasyLM. We can derive S Diff (N ) in the same way. Then we can use (5) and 7to re-decode the utterances by using S Easy (N ) and S Diff (N ). Here \u03b1 is a linear interpolation weight, where we set to 0.5 to give equal weights to ASR output and EEG. For comparison, the ASR baseline uses weights from only the ASR results, where \u03b1 = 0. Notice that the case of \u03b1 = 1 is equivalent to EEG Soft Selection with smoothing. Table 1 shows the performance of our proposed approaches and the corresponding baselines as measured by WACC. According to one-tailed t-tests with word accuracy of an utterance as the random variable, the results in boldface are significantly better tgan their respective baselines (p \u2264 0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 533,
"end": 540,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combination with ASR Output",
"sec_num": "3.3"
},
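The blend in (9) can be sketched with a minimal function (the name is hypothetical); P Easy (W 0 ) and P Diff (W 0 ) are the two language models' probabilities of the first-pass hypothesis W 0:

```python
def combined_s_easy(s_easy_eeg, p_easy_w0, p_diff_w0, alpha=0.5):
    """Blend the EEG classifier's estimate with language-model evidence
    from the Equal Weight first pass (Eq. 9). alpha = 1 reduces to EEG
    Soft Selection with smoothing; alpha = 0 is the ASR-only baseline."""
    lm_term = p_easy_w0 / (p_easy_w0 + p_diff_w0)
    return alpha * s_easy_eeg + (1.0 - alpha) * lm_term
```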
{
"text": "Hard Selection (row b) outperforms the Random Pick baseline (row a). Soft Selection without smoothing (row d) has similar performance as Hard Selection because the classifier often outputs probability estimates that are either 1 or 0. However, Soft Selection with smoothing (row e) outperforms the Equal Weight baseline (row c). The Weight from ASR baseline (row f) is better than the other baselines. Weight from ASR and EEG (row g) can further improve performance, but it's not better than Soft Selection with smoothing (row e) -evidence that EEG gives good estimation for choosing language models. In short, Table 1 shows that using EEG to choose between EasyLM and DifficultLM achieves higher ASR accuracy than the baselines that do not use EEG.",
"cite_spans": [],
"ref_spans": [
{
"start": 611,
"end": 618,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Proposed Approaches",
"sec_num": "3.4"
},
{
"text": "Comparing the first two baselines, the Equal Weight baseline (row c) outperforms the Random Pick baseline (row a) in every column, because the loss in ASR accuracy from picking the wrong language model outweighs the improvement from picking the right one. Similarly, EEG-based Soft Selection with smoothing (row e) outperforms EEG-based Hard Selection (row b) in every column because the interpolated language model is more robust to EEG classification error. The third base-line, Weight from ASR (row f) depends solely on ASR results to estimate weights; it performs better than other baselines, but not as well as EEG-based Soft Selection with smoothing (row e). That is, using EEG alone can weight the two language models better than ASR alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Proposed Approaches",
"sec_num": "3.4"
},
{
"text": "To explore the relationship between EEG classifier accuracy and the effect of EEG-based adaptation on ASR accuracy, we simulate different classification accuracies and used Hard Selection to predict the resulting ASR accuracy by selecting between the ASR output from Ea-syLM and DifficultLM according to the simulated classifier accuracy. We use the resulting Word Accuracy to predict ASR performance at that level of EEG classifier accuracy. Figure 1 plots predicted ASR WACC against simulated EEG classification accuracy. As expected, the predicted ASR accuracy increases as EEG classification accuracy increases, for both groups (adults and children) and both levels of difficulty (easy and difficult). However, Figure 1a and 1b shows that WACC was much lower for children than for adults, especially on difficult utterances, where even 100% simulated EEG classifier accuracy achieves barely 20% WACC. One explanation is that on difficult sentences, children produced reading mistakes and/ or off-task speech. In contrast, adults read better and stayed on task. Not only is predicted ASR accuracy higher on adults' reading, it improves substantially as simulated EEG classifier accuracy increases.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 715,
"end": 725,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Oracle Simulation",
"sec_num": "4"
},
{
"text": "This paper shows that classifying EEG signals from an inexpensive single-channel device can help adapt language models to significantly improve ASR performance. An interpolated language model smoothed to compensate for classification errors yielded the best performance. ASR performance depended on the accuracy of mental state classification. Future work includes improving EEG classification accuracy, detecting other relevant mental states, such as emotion, and improving ASR by using word-level EEG classification. A neurologically-informed ASR may better capture what people intend to communicate, and augment acoustic input with non-verbal cues to ASR or dialogue systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.corestandards.org 2 http://majortests.com/gre/reading comprehension.php 3 http://college.cengage.com:80/devenglish/resources/reading ace/students",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080628 to Carnegie Mellon University. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied of the Institute or the U.S. Department of Education. We thank the students, educators, and LISTENers who helped create our data, and the reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bootstrapping domain detection using query click logs for new domains Proceedings of InterSpeech",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hakkani-Tnr",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "709--712",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakkani-Tnr, D., Tur, G., Heck, L., and Shriberg, E. 2011. Bootstrapping domain detection using query click logs for new domains Proceedings of InterSpeech, 709-712.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ears: Electromyograpical Automatic Recognition of Speech",
"authors": [
{
"first": "S.-C",
"middle": [
"S"
],
"last": "Jou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Biosignals",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jou, S.-C. S. and Schultz, T.. 2008. Ears: Electromyograpical Automatic Recognition of Speech. Proceedings of Biosig- nals, 3-12.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Toward Exploiting EEG Input in a Reading Tutor",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mostow",
"suffix": ""
},
{
"first": "K.-M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 15th International Conference on Artificial Intelligence in Education",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostow, J., Chang, K.-M., and Nelson, J. 2011. Toward Ex- ploiting EEG Input in a Reading Tutor. Proceedings of the 15th International Conference on Artificial Intelligence in Education, 230-237.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "NeuroSky's Sense TM Meters and Detection of Mental State",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NeuroSky 2009. NeuroSky's Sense TM Meters and Detection of Mental State: Neurisky, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Reconstructing speech from auditory cortex",
"authors": [
{
"first": "B",
"middle": [
"N"
],
"last": "Pasley",
"suffix": ""
}
],
"year": 2012,
"venue": "PLos Biology",
"volume": "10",
"issue": "1",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasley, B. N. and et al. 2012. Reconstructing speech from au- ditory cortex. PLos Biology, 10(1), 1-13.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards the use of cognitive states in language modeling",
"authors": [
{
"first": "N",
"middle": [
"G"
],
"last": "Ward",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vega",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "323--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ward, N. G. and Vega, A. 2009. Towards the use of cognitive states in language modeling. Proceedings of ASRU, 323-326.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "The simulated accuracy graphs plot the predicted ASR word accuracy against the level of EEG classification accuracy simulated by an oracle.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td>WACC</td><td colspan=\"5\">Adult Easy Difficult All Easy Difficult All Child</td></tr><tr><td>(a)</td><td>Baseline 1: Random Pick</td><td>54.5</td><td>51.2</td><td>53.8 32.8</td><td>14.7</td><td>30.6</td></tr><tr><td>(b)</td><td>EEG-based: Hard Selection</td><td>57.6</td><td>49.4</td><td>52.7 36.4</td><td>17.0</td><td>32.8</td></tr><tr><td>(c)</td><td>Baseline 2: Equal Weight</td><td>63.2</td><td>59.9</td><td>56.5 37.3</td><td>19.5</td><td>33.4</td></tr><tr><td colspan=\"3\">(d) EEG-based: Soft Selection w/o smoothing 57.2</td><td>48.8</td><td>52.4 35.8</td><td>17.2</td><td>32.5</td></tr><tr><td colspan=\"2\">(e) EEG-based: Soft Selection w/ smoothing</td><td>66.0</td><td>62.3</td><td>64.2 39.8</td><td>22.7</td><td>36.2</td></tr><tr><td>(f)</td><td>Baseline 3: Weight from ASR (\u03b1 = 0)</td><td>63.8</td><td>60.6</td><td>61.5 39.2</td><td>20.0</td><td>35.0</td></tr><tr><td>(g)</td><td>Weight from ASR and EEG (\u03b1 = 0.5)</td><td>64.5</td><td>63.4</td><td>63.5 39.2</td><td>21.9</td><td>36.0</td></tr><tr><td/><td colspan=\"5\">Table 1: ASR performance of proposed approaches using EEG-based classification of mental states.</td><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">For comparison,</td></tr><tr><td/><td/><td colspan=\"5\">the Random Pick baseline randomly picks either EasyLM</td></tr><tr><td/><td/><td colspan=\"2\">or DifficultLM:</td><td/><td/><td/></tr></table>",
"text": "and P Easy (W ) and P Diff (W ) are the probability of word W in EasyLM and DifficultLM, respectively.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}