{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:35:37.221675Z"
},
"title": "Impact of ASR on Alzheimer's Disease Detection: All Errors are Equal, but Deletions are More Equal than Others",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Balagopalan",
"suffix": "",
"affiliation": {
"laboratory": "Winterlight Labs Toronto",
"institution": "",
"location": {
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Ksenia",
"middle": [],
"last": "Shkaruta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Tech Atlanta",
"location": {
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic Speech Recognition (ASR) is a critical component of any fully-automated speechbased dementia detection model. However, despite years of speech recognition research, little is known about the impact of ASR accuracy on dementia detection. In this paper, we experiment with controlled amounts of artificially generated ASR errors and investigate their influence on dementia detection. We find that deletion errors affect detection performance the most, due to their impact on the features of syntactic complexity and discourse representation in speech. We show the trend to be generalisable across two different datasets for cognitive impairment detection. As a conclusion, we propose optimising the ASR to reflect a higher penalty for deletion errors in order to improve dementia detection performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic Speech Recognition (ASR) is a critical component of any fully-automated speechbased dementia detection model. However, despite years of speech recognition research, little is known about the impact of ASR accuracy on dementia detection. In this paper, we experiment with controlled amounts of artificially generated ASR errors and investigate their influence on dementia detection. We find that deletion errors affect detection performance the most, due to their impact on the features of syntactic complexity and discourse representation in speech. We show the trend to be generalisable across two different datasets for cognitive impairment detection. As a conclusion, we propose optimising the ASR to reflect a higher penalty for deletion errors in order to improve dementia detection performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There is a rapid growth in the number of people living with Alzheimer's disease (AD) (Alzheimer's Association, 2018) . Clinical research has shown that quantifiable signs of cognitive decline associated with AD and mild cognitive impairment (MCI) are detectable in spontaneous speech (Bucks et al., 2000; Sajjadi et al., 2012) . Machine learning (ML) models have proved to be successful in detecting AD using speech and language variables, such as syntactic and lexical complexity of language extracted from the transcripts of the speech Meil\u00e1n et al., 2012; Rentoumi et al., 2014) . Since transcripts should be accurate enough to properly represent syntactic and linguistic characteristics, current approaches (Fraser et al., 2013; Zhu et al., 2019) frequently rely on 100% accurate human-created transcripts produced by trained transcriptionists. However in real-life speech-based applications of AD detection, ASR is used and it produces noisy, error-prone transcripts (Yousaf et al., 2019) . To our best knowledge, while the importance of well-performing ASR in speech classification has been studied in depth (Zhou et al., 2016) , no prior research was done to understand what patterns of speech are influenced the most by ASR errors such as word deletions and substitutions, and how this impacts performance of AD detection using ML models.",
"cite_spans": [
{
"start": 85,
"end": 116,
"text": "(Alzheimer's Association, 2018)",
"ref_id": null
},
{
"start": 284,
"end": 304,
"text": "(Bucks et al., 2000;",
"ref_id": "BIBREF3"
},
{
"start": 305,
"end": 326,
"text": "Sajjadi et al., 2012)",
"ref_id": "BIBREF19"
},
{
"start": 538,
"end": 558,
"text": "Meil\u00e1n et al., 2012;",
"ref_id": "BIBREF9"
},
{
"start": 559,
"end": 581,
"text": "Rentoumi et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 711,
"end": 732,
"text": "(Fraser et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 733,
"end": 750,
"text": "Zhu et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 972,
"end": 993,
"text": "(Yousaf et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1114,
"end": 1133,
"text": "(Zhou et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on this issue and study the effect of deletion, insertion and substitution errors on lexico-syntactic language features and their resulting effect on classification performance. The effect of these errors on binary AD-healthy classification performance is studied and suggestions are provided on how to improve ASR in order to maintain reasonable AD classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We identify that deletion errors affect the classification more than substitution and insertion errors on two datasets of spontaneous impaired speech. The effect of these deletion errors are most profound on features related to syntactic complexity and discourse representations in speech, such as production rules, word-level structure and repetitions. These features are also identified as being the most important for the classification task using a feature gradient-based importance metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "DementiaBank (DB) The DementiaBank 1 dataset is a large dataset of pathological speech. It consists of narrative picture descriptions from participants aged between 45 to 90 (Becker et al., 1994) . Out of the 210 participants in the study, 117 were diagnosed with AD (180 samples of speech) and 93 were healthy (HC, 229 samples). Voice recordings and manual transcriptions (following CHAT protocol (MacWhinney, 2000) samples. This dataset is used for the experiments in Section 4, 5, and 6. Healthy Aging (HA) The Healthy Aging dataset (Balagopalan et al., 2018) consists of speech samples of 97 participants with no cognitive impairment diagnosis, all older than 50 years. Every participant describes a picture, analogous to the DB dataset. The dataset constitutes 8.5 hours of audio with manual transcriptions. Each speech sample is associated with a score on the Montreal Cognitive Assessment (MoCA) (Nasreddine et al., 2005) . Based on published cut-off scores (Nasreddine et al., 2005) for presence of MCI (minimum score for healthy participants is 26), we obtain class-labels for this dataset.",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Becker et al., 1994)",
"ref_id": "BIBREF2"
},
{
"start": 398,
"end": 416,
"text": "(MacWhinney, 2000)",
"ref_id": "BIBREF8"
},
{
"start": 536,
"end": 562,
"text": "(Balagopalan et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 903,
"end": 928,
"text": "(Nasreddine et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 965,
"end": 990,
"text": "(Nasreddine et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "The Automatic Speech Recognition (ASR) system we use for this work is based on the opensource Kaldi toolkit (Povey et al., 2011) . ASR uses ASPiRE chain model trained on multi-condition Fisher English corpus as a 3-gram language model. Rates of ASR errors for healthy and impaired speakers for DB and HA datasets are in Table 1 . Majority of errors arise from deletions and substitutions for both datasets and groups.",
"cite_spans": [
{
"start": 108,
"end": 128,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "ASR Setup",
"sec_num": "2.2"
},
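The per-type error rates in Table 1 can be reproduced from pairs of reference (manual) and hypothesis (ASR) transcripts with a standard word-level Levenshtein alignment. The sketch below is our own illustration, not the authors' code; it counts deletions, insertions and substitutions from a minimal-cost alignment and reports them, as in Table 1, as shares of the total error count.

```python
# Count deletion/insertion/substitution errors from a minimal-cost word alignment.
def error_counts(ref_words, hyp_words):
    """Return (substitutions, deletions, insertions) for one transcript pair."""
    R, H = len(ref_words), len(hyp_words)
    # dp[i][j] = (cost, subs, dels, inss) for aligning ref[:i] with hyp[:j]
    dp = [[None] * (H + 1) for _ in range(R + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, R + 1):
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2] + 1, c[3])            # only deletions
    for j in range(1, H + 1):
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2], c[3] + 1)            # only insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            miss = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            s, d, n = dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1]
            dp[i][j] = min(
                (s[0] + miss, s[1] + miss, s[2], s[3]),        # match / substitution
                (d[0] + 1, d[1], d[2] + 1, d[3]),              # deletion
                (n[0] + 1, n[1], n[2], n[3] + 1),              # insertion
            )
    _, subs, dels, inss = dp[R][H]
    return subs, dels, inss

ref = "the boy is stealing cookies from the jar".split()
hyp = "the boy stealing cookie from a jar".split()
subs, dels, inss = error_counts(ref, hyp)
total = subs + dels + inss
print(f"Del {dels / total:.1%}  Ins {inss / total:.1%}  Sub {subs / total:.1%}  "
      f"(WER {total / len(ref):.1%})")
```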
{
"text": "Following previous studies Balagopalan et al., 2018) , we automatically extract 507 lexico-syntactic and acoustic features. To simplify the presentation, the extracted features are aggregated into the following major groups:",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "Balagopalan et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Aggregation",
"sec_num": "3.1"
},
{
"text": "Syntactic Complexity: features to analyze the syntactic complexity of speech, such as number of occurrence of various production rules, mean length of clause (in words) etc. Lexical Complexity and Richness : measures of lexical density and variation, such as average familiarity scores of all nouns, age of word acquisition, frequency of POS tags etc. Discourse mapping: features that help identify cohesion in speech using a speech graph-based representation of message organization in speech (Mota et al., 2012) . Examples of features include the number of edges in the graph, number of selfloops, cosine-distance across unique utterances etc.",
"cite_spans": [
{
"start": 494,
"end": 513,
"text": "(Mota et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Aggregation",
"sec_num": "3.1"
},
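As a concrete illustration of the discourse-mapping group, the short sketch below (ours, assuming the networkx package; the paper's exact graph construction follows Mota et al. (2012) and may differ in detail) builds a word graph from a transcript and reads off a few of the listed features.

```python
# Toy speech-graph features: each word is a node, consecutive words share a directed edge.
import networkx as nx

def speech_graph_features(words):
    g = nx.MultiDiGraph()
    g.add_nodes_from(words)
    g.add_edges_from(zip(words, words[1:]))
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "self_loops": nx.number_of_selfloops(g),                    # immediate repetitions
        "parallel_edges": g.number_of_edges() - nx.DiGraph(g).number_of_edges(),
    }

transcript = "the the boy is is taking a cookie and the girl is laughing".split()
print(speech_graph_features(transcript))
```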
{
"text": "Additionally, we extract features quantifying difficulty in finding the right words (e.g. filled pauses), measures related to description of content in the picture (e.g. number of content units), coherence in speaking at local and global level, and acoustic measures. such as MFCC and Zero Crossing Rate related voice representations (full list in App.A.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Aggregation",
"sec_num": "3.1"
},
{
"text": "We introduce artificial ASR errors to understand if any specific error type influences the classification performance more than others. In previous research it was shown that lexical and syntactic groups of features extracted from transcripts of speech have different predictive power in dementia classification . As such, we hypothesize that different ASR error types may influence the features differently and would cause different effects on classification performance. The non-artificial output of ASR combines the errors of deletion, insertion and substitution in some proportion, thus not allowing analysis of the individual effects of each error type separately. This is why we generate each type of errors artificially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial ASR Errors",
"sec_num": "3.2.1"
},
{
"text": "We follow a method similar to the one used by Fraser et al. (2013) to artificially add errors to manual transcripts at predefined 20%, 40% and 60% WER rates. All altered words w, where w refers to a word in gold-standard manual transcripts, are selected at random. The following modifications are done: a) deletion -word instance w is deleted, b) insertion -new word w 1 is added after the word w, c) substitution -word w is replaced with a new word w 1 .",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "Fraser et al. (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
{
"text": "For deletion we simply delete random words from manual transcript at a specified rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
{
"text": "To substitute word w, we select a unigram from 2,000 most used unigrams from Fisher language model that has the smallest Levenshtein distance with word w based on the phonemic model from The Carnegie Mellon Pronouncing on Pronouncing Dictionary (Weide, 1998). If word w is not found in the Fisher language model a random unigram from the top 2,000 is used for substitution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
{
"text": "For insertion, we select a word from the bigram list from the language model that has the highest probability to follow after word w and insert it if it does not match the following word in transcript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
{
"text": "In case of a match, the next most probable word is inserted. If word w is not found in bigram list a random unigram is used for insertion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
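A minimal sketch of this simulation procedure is given below. It is our own simplification: insertion and substitution words are drawn uniformly from a small stand-in vocabulary, whereas the paper selects substitutions by phonemic Levenshtein distance over the Fisher unigrams and insertions by bigram probability.

```python
import random

def corrupt(words, rate, kind, vocab, seed=0):
    """Apply artificial ASR errors of one kind ('del', 'ins' or 'sub') at a target WER."""
    rng = random.Random(seed)
    positions = set(rng.sample(range(len(words)), round(rate * len(words))))
    out = []
    for i, w in enumerate(words):
        if i not in positions:
            out.append(w)
        elif kind == "del":
            continue                             # drop the selected word
        elif kind == "sub":
            out.append(rng.choice(vocab))        # replace it with another word
        elif kind == "ins":
            out.extend([w, rng.choice(vocab)])   # keep it and add a word after it
    return out

vocab = ["well", "you", "know", "like", "so"]    # stand-in for the top Fisher unigrams
ref = "the boy is stealing cookies from the jar".split()
for kind in ("del", "ins", "sub"):
    print(kind, " ".join(corrupt(ref, 0.4, kind, vocab)))
```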
{
"text": "To verify if simulated errors are a fair approximation of what is seen on a true ASR output, we have calculated the BLEU score (Papineni et al., 2002) between the manual and ASR-generated transcripts and compared them to the BLEU score between the manual transcripts and the transcripts with artificially simulated errors. The correlation between these two BLEU scores is strong and significant for both datasets (Spearman \u21e2 = 0.72, p < 0.001 for DB; \u21e2 = 0.66, p < 0.001 for HA), i.e. transcripts with simulated errors are corrupted with respect to the manual transcripts in a similar manner as the ASR-generated transcripts are.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Addition Method",
"sec_num": "3.3"
},
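The validity check described above can be sketched as follows (our illustration, assuming NLTK's smoothed sentence-level BLEU and SciPy's Spearman correlation; the paper does not state its exact BLEU configuration).

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

smooth = SmoothingFunction().method1

def bleu(reference_tokens, hypothesis_tokens):
    # Sentence-level BLEU of a hypothesis against a single manual reference.
    return sentence_bleu([reference_tokens], hypothesis_tokens, smoothing_function=smooth)

def bleu_agreement(manual, asr, simulated):
    """Spearman correlation between per-sample BLEU(manual, ASR) and BLEU(manual, simulated)."""
    bleu_asr = [bleu(m, a) for m, a in zip(manual, asr)]
    bleu_sim = [bleu(m, s) for m, s in zip(manual, simulated)]
    return spearmanr(bleu_asr, bleu_sim)

# manual[i], asr[i] and simulated[i] are word lists for the i-th speech sample (toy data here).
manual = [s.split() for s in ("the boy is stealing cookies",
                              "the girl is laughing loudly",
                              "mother is washing the dishes")]
asr = [s.split() for s in ("the boy stealing cookie",
                           "a girl is laughing loudly",
                           "mother washing the dishes")]
simulated = [s.split() for s in ("boy is stealing cookies",
                                 "the girl is laughing",
                                 "mother is washing dishes")]
print(bleu_agreement(manual, asr, simulated))
```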
{
"text": "We perturb all lexico-syntactic features or equivalently features that could be affected by ASR errors such as deletions, insertions, and/or substitutions, to mimic random sources of errors using Gaussian noise. We do this to compare and differentiate from the consequences of ASR errors. This modification is implemented by adding a randomized number to the extracted feature values where the mean of the number added to a given feature is zero and the standard deviation varies depending on the amount of noise we add (see App.A.2 for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Addition",
"sec_num": "3.3.1"
},
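A compact sketch of this noise baseline follows (ours; the exact standard-deviation scaling is given in the paper's App. A.2, so here the noise level is assumed to be proportional to each feature's own standard deviation).

```python
import numpy as np

def add_feature_noise(X, noise_level, rng=None):
    """Add zero-mean Gaussian noise to a feature matrix X of shape (n_samples, n_features).

    Assumption: the per-feature noise std is noise_level times that feature's own std,
    a stand-in for the scaling described in App. A.2 of the paper.
    """
    rng = rng or np.random.default_rng(0)
    sigma = noise_level * X.std(axis=0, keepdims=True)
    return X + rng.normal(size=X.shape) * sigma

X = np.random.default_rng(1).normal(size=(5, 3))   # toy lexico-syntactic feature values
print(add_feature_noise(X, noise_level=0.4))
```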
{
"text": "Model: All our experiments are based on predictions obtained from a 2-hidden layer neural network (see App.A.3 for details). We chose this model type and parameter-setting since it attained performance on-par with previously published results with 10-fold cross-validation on gold-standard manual DB transcripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Setup",
"sec_num": "3.4"
},
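A minimal stand-in for this setup is sketched below, using scikit-learn's MLPClassifier under stratified 10-fold cross-validation; the hidden-layer sizes and other hyperparameters are our assumptions (the paper's values are in its App. A.3), and the data here are random placeholders rather than the extracted features.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_samples, 507) lexico-syntactic + acoustic features; y: 0 = healthy, 1 = AD.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 507))                    # placeholder feature matrix
y = rng.integers(0, 2, size=120)                   # placeholder labels

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32),     # two hidden layers (sizes assumed)
                  max_iter=500, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f"10-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```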
{
"text": "We evaluate performance of classifying samples of speech to two classes -AD or healthy -using the DB dataset. Figure 1 shows that deletion errors affect classification performance significantly more than insertion and substitution errors do. 40% of deletions reduce F1 score by more than 10%, while 40% of insertions only result in 2.8%, and 40% of substitutions -in 6.3% of F1 score reduction. These differences become even more pronounced with adding a bigger amount of errors. Trajectory of F1 score with varying levels of noise is substantially different from that with varying deletion errors but not that with insertions or substitutions, showing that insertion and substitution errors influence classification performance in a way that is similar to a random noise. Deletion errors, however, have a significantly stronger effect on classification. It is also interesting to note that the model utilizing automatic transcripts from ASR retains a level of performance at 74.96% (Table 2) , which is comparable to the potential decrease in performance due to the rate of ASR deletion errors. Different effects of errors on classification performance suggest that some features, extracted from the speech samples and used as an input for the classification algorithm, are affected far more substantially by deletions rather than any other type of errors. This leads us to inspect the correlation of feature values and the amount of deletions.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 983,
"end": 992,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Changes in Classification Performance Due to Simulated Errors",
"sec_num": "4"
},
{
"text": "In order to understand why deletions errors influence the classification performance significantly more than other error types, we identify features maintaining higher correlation with the amount of deletions than that with the amount of insertions and substitutions. We observe 18 features in total that distinctively correlate with deletions. Out of these, the absolute majority of 15 features (83.33% of all selected) are associated with syntactic complexity (production rules of a constituency parser) and discourse phenomena (graph self-loop with 3 edges) and 3 (16.7%) -with lexical richness in speech. Other feature groups, such as acoustic features or those associated with word finding difficulty, do not meet the required conditions. Such results show that syntactic structure of language is much more vulnerable to deletions than to other ASR errors. This can be explained by the fact that insertions and substitutions use words from the language model (i.e. most probable words) for the modifications, which to some extent helps maintain basic syntactic rules and structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distinctive Effects of Deletion Errors",
"sec_num": "5"
},
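Our reading of the selection criterion above can be sketched as follows: a feature counts as distinctively deletion-sensitive if its Spearman correlation with the simulated deletion rate is significant and larger in magnitude than its correlations with the insertion and substitution rates (the paper's exact thresholds may differ).

```python
import numpy as np
from scipy.stats import spearmanr

def deletion_sensitive_features(features, del_rate, ins_rate, sub_rate, alpha=0.05):
    """Indices of features correlating more with deletions than with other error types.

    features: (n_samples, n_features) values extracted from corrupted transcripts;
    del_rate, ins_rate, sub_rate: per-sample simulated error rates.
    """
    selected = []
    for k in range(features.shape[1]):
        r_del, p_del = spearmanr(features[:, k], del_rate)
        r_ins, _ = spearmanr(features[:, k], ins_rate)
        r_sub, _ = spearmanr(features[:, k], sub_rate)
        if p_del < alpha and abs(r_del) > max(abs(r_ins), abs(r_sub)):
            selected.append(k)
    return selected

rng = np.random.default_rng(0)
del_rate, ins_rate, sub_rate = (rng.uniform(0, 0.6, 100) for _ in range(3))
features = np.column_stack([
    1.0 - del_rate + 0.05 * rng.normal(size=100),   # a deletion-sensitive feature
    rng.normal(size=100),                           # an unrelated feature
])
print(deletion_sensitive_features(features, del_rate, ins_rate, sub_rate))   # -> [0]
```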
{
"text": "Correlation between the number of deletions and features of syntactic structure shows the vulnerability of the feature group representing syntactic complexity and discourse phenomena to ASR deletion errors. However, it does not explain a decrease in classification performance when adding deletion errors. In Section 6 we inspect if features of syntactic complexity are more influential in AD detection than other characteristics of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distinctive Effects of Deletion Errors",
"sec_num": "5"
},
{
"text": "In order to quantify the importance of input features for classification, we obtain the gradient of the output prediction loss with respect to input features on a manually-transcribed version of the DB dataset. We define gradient-based importance for feature k for an input, X i,j , in the training set for a classification model as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "imp i,j,k = @L(y i,j , p i,j ) @X i,j,k",
"eq_num": "(1)"
}
],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "where L denotes the loss criterion (binary crossentropy loss), y i,j is the ground-truth label, p i,j \u21e2 [0, 1] is the prediction probability; p i,j > 0.5 denotes an AD prediction, k is a given feature (1 to D ), and i is a number of samples (1 to N j ) in the training set in fold j of the DB dataset classification setup. Hence, to obtain the average importance for feature k in a single fold, we compute:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "imp j,k = 1/N j N j X i=1 @L(y i,j , p i,j ) @X i,j,k",
"eq_num": "(2)"
}
],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "This importance is then averaged across the 10- Table 3 : Importance of the two feature groups, summarised as the mean value of the top-10 most important features selected for HC and AD components, number of features having significant Spearman correlation with deletion errors, and the rank of each group.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "folds to obtain the final importance, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "imp k = 1/10 10 X j=1 imp j,k",
"eq_num": "(3)"
}
],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "In order to interpret high-level patterns of input importance, we aggregate the feature importances into the groups defined in Section 3.1, where aggregation of importances involves averaging the absolute gradient-importance, |imp k |, of features belonging to that group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
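Equations (1)-(3) can be computed directly with automatic differentiation. The sketch below is our own, assuming a PyTorch model in place of the paper's implementation: it takes the per-sample gradient of the binary cross-entropy loss with respect to the input features, averages it over one fold (Eq. 2), and then averages the absolute importances over a hypothetical feature-group index set; averaging over the 10 folds (Eq. 3) follows the same pattern.

```python
import torch
import torch.nn as nn

D = 507                                            # number of input features
# Toy stand-in for the trained 2-hidden-layer classifier used in the paper.
model = nn.Sequential(nn.Linear(D, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss(reduction="none")

def gradient_importance(model, X, y):
    """imp_{j,k}: per-feature gradient of the loss, averaged over the samples of one fold."""
    X = X.clone().requires_grad_(True)
    losses = loss_fn(model(X).squeeze(1), y.float())
    losses.sum().backward()                        # row i of X.grad is dL_i / dX_i
    return X.grad.mean(dim=0)                      # shape (D,), Eq. (2)

X = torch.randn(32, D)                             # toy fold of feature vectors
y = torch.randint(0, 2, (32,))
imp = gradient_importance(model, X, y)

# Group-level summary: mean |imp_k| over a (hypothetical) index set of one feature group.
syntactic_idx = torch.arange(0, 100)               # assumed indices of syntactic features
print(imp.abs()[syntactic_idx].mean().item())
```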
{
"text": "Results provided in Table 3 show that the average normalised importance of the features associated with syntactic complexity and discourse is higher than the average importance of lexical richness features, when top-10 most important features across all the groups are selected for comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "To conclude, the feature group of syntactic complexity and discourse phenomena is affected significantly and distinctively the most by deletion errors as seen in Section 5. This group is also important for classification as seen in Table 3 , indicating why classification is affected significantly by deletion errors. Hence, we track the effects from the initial step of adding artificial errors of different amounts to obtaining the final predictions in this manner.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model-based Analysis of Feature Importance",
"sec_num": "6"
},
{
"text": "In order to test how well our conclusions generalise to a different dataset of impaired speech, we repeat the same experiments performed on DB on the HA dataset (Section 2.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalisability Evaluation",
"sec_num": "7"
},
{
"text": "We follow the same method, as described in Section 3 to extract the features and classify samples. Similarly to the results obtained on DB data, with HA deletion errors affect classification performance the most. Furthermore, deletion errors differentiate the same feature group of syntactic complexity and discourse phenomena: with HA dataset, 39 features correlate with deletions stronger than with insertions or substitutions, with 79.49% of features belonging to the aggregate group of syntactic complexity and discourse, and 20.51% -to the group of lexical richness. The rank of feature groups, based on the average absolute Spearman correlation of all the features included in the groups, correspond to the rank observed with DB dataset, with a stronger significant correlation corresponding to the group of syntactic complexity, rather than lexical richness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalisability Evaluation",
"sec_num": "7"
},
{
"text": "We observe that simulated deletion errors have a strong effect on classification performance when detecting cognitive impairment from speech and language, which can be traced back to their effect on syntactic complexity and discourse representations. With this observation in mind, the practical suggestion would be to optimise the ASR to reflect a higher penalty for deletion errors to improve dementia detection performance. For example, the decoder can be parametrised to find a balance between insertions and deletions, so that the number of deletion errors is minimised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "However, dealing with deletions in training time is not trivial, so in future work, we will focus on the optimisation of ASR performance and its effect on AD detection. Careful ASR error management, following previous work by Simonnet et al. (2017) , could help enable strong fully-automated speechbased predictive models for dementia detection.",
"cite_spans": [
{
"start": 226,
"end": 248,
"text": "Simonnet et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "https://dementia.talkbank.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Alzheimer's disease facts and figures",
"authors": [],
"year": 2018,
"venue": "Alzheimer's & Dementia",
"volume": "14",
"issue": "3",
"pages": "367--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alzheimer's Association. 2018. 2018 Alzheimer's dis- ease facts and figures. Alzheimer's & Dementia, 14(3):367-429.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Effect of Heterogeneous Data for Alzheimer's Disease Detection from Speech",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Balagopalan",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": ""
},
{
"first": "Marzyeh",
"middle": [],
"last": "Ghassemi",
"suffix": ""
}
],
"year": 2018,
"venue": "NIPS Workshop on Machine Learning for Health ML4H",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aparna Balagopalan, Jekaterina Novikova, Frank Rudzicz, and Marzyeh Ghassemi. 2018. The Effect of Heterogeneous Data for Alzheimer's Disease De- tection from Speech. In NIPS Workshop on Machine Learning for Health ML4H.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The natural history of Alzheimer's disease: description of study cohort and accuracy of diagnosis",
"authors": [
{
"first": "T",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boiler",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Oscar",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"L"
],
"last": "Saxton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcgonigle",
"suffix": ""
}
],
"year": 1994,
"venue": "Archives of Neurology",
"volume": "51",
"issue": "6",
"pages": "585--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James T Becker, Fran\u00e7ois Boiler, Oscar L Lopez, Ju- dith Saxton, and Karen L McGonigle. 1994. The natural history of Alzheimer's disease: description of study cohort and accuracy of diagnosis. Archives of Neurology, 51(6):585-594.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Analysis of spontaneous, conversational speech in dementia of Alzheimer type: Evaluation of an objective technique for analysing lexical performance",
"authors": [
{
"first": "S",
"middle": [],
"last": "Romola",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Bucks",
"suffix": ""
},
{
"first": "Joanne",
"middle": [
"M"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Gordon",
"middle": [
"K"
],
"last": "Cuerden",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilcock",
"suffix": ""
}
],
"year": 2000,
"venue": "Aphasiology",
"volume": "14",
"issue": "1",
"pages": "71--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romola S Bucks, Sameer Singh, Joanne M Cuer- den, and Gordon K Wilcock. 2000. Analysis of spontaneous, conversational speech in dementia of Alzheimer type: Evaluation of an objective tech- nique for analysing lexical performance. Aphasiol- ogy, 14(1):71-91.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic speech recognition in the diagnosis of primary progressive aphasia",
"authors": [
{
"first": "Kathleen",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": ""
},
{
"first": "Naida",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Rochon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the fourth workshop on speech and language processing for assistive technologies",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Fraser, Frank Rudzicz, Naida Graham, and Elizabeth Rochon. 2013. Automatic speech recogni- tion in the diagnosis of primary progressive aphasia. In Proceedings of the fourth workshop on speech and language processing for assistive technologies, pages 47-54.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic features identify Alzheimer's disease in narrative speech",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "Jed",
"middle": [
"A"
],
"last": "Fraser",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Meltzer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Alzheimer's Disease",
"volume": "49",
"issue": "2",
"pages": "407--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen C Fraser, Jed A Meltzer, and Frank Rudzicz. 2016. Linguistic features identify Alzheimer's dis- ease in narrative speech. Journal of Alzheimer's Dis- ease, 49(2):407-422.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic analysis of syntactic complexity in second language writing",
"authors": [
{
"first": "Xiaofei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "International journal of corpus linguistics",
"volume": "15",
"issue": "4",
"pages": "474--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofei Lu. 2010. Automatic analysis of syntactic com- plexity in second language writing. International journal of corpus linguistics, 15(4):474-496.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Childes Project: Tools for Analyzing Talk: Vol. II: The Database",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Macwhinney",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian MacWhinney. 2000. The Childes Project: Tools for Analyzing Talk: Vol. II: The Database. Mahwah.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Acoustic markers associated with impairment in language processing in Alzheimer's disease. The Spanish journal of psychology",
"authors": [
{
"first": "J",
"middle": [
"G"
],
"last": "Juan",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Meil\u00e1n",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Mart\u00ednez-S\u00e1nchez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carro",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "S\u00e1nchez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "15",
"issue": "",
"pages": "487--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan JG Meil\u00e1n, Francisco Mart\u00ednez-S\u00e1nchez, Juan Carro, Jos\u00e9 A S\u00e1nchez, and Enrique P\u00e9rez. 2012. Acoustic markers associated with impairment in lan- guage processing in Alzheimer's disease. The Span- ish journal of psychology, 15(2):487-494.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speech graphs provide a quantitative measure of thought disorder in psychosis",
"authors": [
{
"first": "B",
"middle": [],
"last": "Natalia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mota",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Nivaldo",
"suffix": ""
},
{
"first": "Nathalia",
"middle": [],
"last": "Vasconcelos",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"C"
],
"last": "Lemos",
"suffix": ""
},
{
"first": "Osame",
"middle": [],
"last": "Pieretti",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [
"A"
],
"last": "Kinouchi",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cecchi",
"suffix": ""
},
{
"first": "Sidarta",
"middle": [],
"last": "Copelli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ribeiro",
"suffix": ""
}
],
"year": 2012,
"venue": "PloS one",
"volume": "7",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia B Mota, Nivaldo AP Vasconcelos, Nathalia Lemos, Ana C Pieretti, Osame Kinouchi, Guillermo A Cecchi, Mauro Copelli, and Sidarta Ribeiro. 2012. Speech graphs provide a quantitative measure of thought disorder in psychosis. PloS one, 7(4):e34928.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The montreal cognitive assessment, moca: a brief screening tool for mild cognitive impairment",
"authors": [
{
"first": "Natalie",
"middle": [
"A"
],
"last": "Ziad S Nasreddine",
"suffix": ""
},
{
"first": "Val\u00e9rie",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "B\u00e9dirian",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Charbonneau",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Whitehead",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Collin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jeffrey",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Cummings",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chertkow",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of the American Geriatrics Society",
"volume": "53",
"issue": "4",
"pages": "695--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziad S Nasreddine, Natalie A Phillips, Val\u00e9rie B\u00e9dirian, Simon Charbonneau, Victor Whitehead, Isabelle Collin, Jeffrey L Cummings, and Howard Chertkow. 2005. The montreal cognitive assessment, moca: a brief screening tool for mild cognitive impair- ment. Journal of the American Geriatrics Society, 53(4):695-699.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexical Features Are More Vulnerable, Syntactic Features Have More Predictive Power",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Aparna",
"middle": [],
"last": "Balagopalan",
"suffix": ""
},
{
"first": "Ksenia",
"middle": [],
"last": "Shkaruta",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP Workshop on on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Aparna Balagopalan, Ksenia Shkaruta, and Frank Rudzicz. 2019. Lexical Fea- tures Are More Vulnerable, Syntactic Features Have More Predictive Power. In EMNLP Workshop on on Noisy User-generated Text.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825-2830.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. Technical report, IEEE Signal Processing Society.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Features and machine learning classification of connected speech samples from patients with autopsy proven Alzheimer's disease with and without additional vascular pathology",
"authors": [
{
"first": "Vassiliki",
"middle": [],
"last": "Rentoumi",
"suffix": ""
},
{
"first": "Ladan",
"middle": [],
"last": "Raoufian",
"suffix": ""
},
{
"first": "Samrah",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Celeste A De",
"middle": [],
"last": "Jager",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Garrard",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Alzheimer's Disease",
"volume": "42",
"issue": "s3",
"pages": "3--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vassiliki Rentoumi, Ladan Raoufian, Samrah Ahmed, Celeste A de Jager, and Peter Garrard. 2014. Fea- tures and machine learning classification of con- nected speech samples from patients with autopsy proven Alzheimer's disease with and without addi- tional vascular pathology. Journal of Alzheimer's Disease, 42(s3):S3-S17.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Abnormalities of connected speech in semantic dementia vs Alzheimer's disease",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Seyed",
"suffix": ""
},
{
"first": "Karalyn",
"middle": [],
"last": "Sajjadi",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Patterson",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Tomek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nestor",
"suffix": ""
}
],
"year": 2012,
"venue": "Aphasiology",
"volume": "26",
"issue": "6",
"pages": "847--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed Ahmad Sajjadi, Karalyn Patterson, Michal Tomek, and Peter J Nestor. 2012. Abnormali- ties of connected speech in semantic dementia vs Alzheimer's disease. Aphasiology, 26(6):847-866.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Asr error management for improving spoken language understanding",
"authors": [
{
"first": "Edwin",
"middle": [],
"last": "Simonnet",
"suffix": ""
},
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Camelin",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
},
{
"first": "Renato",
"middle": [],
"last": "De Mori",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edwin Simonnet, Sahar Ghannay, Nathalie Camelin, Yannick Est\u00e8ve, and Renato de Mori. 2017. Asr er- ror management for improving spoken language un- derstanding. In Interspeech 2017.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The CMU pronouncing dictionary",
"authors": [
{
"first": "",
"middle": [],
"last": "Robert L Weide",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert L Weide. 1998. The CMU pronouncing dictionary. URL: http://www. speech. cs. cmu. edu/cgibin/cmudict.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A comprehensive study of mobile-health based assistive technology for the healthcare of dementia and alzheimer's disease (ad). Health Care Management Science",
"authors": [
{
"first": "Kanwal",
"middle": [],
"last": "Yousaf",
"suffix": ""
},
{
"first": "Zahid",
"middle": [],
"last": "Mehmood",
"suffix": ""
},
{
"first": "Tanzila",
"middle": [],
"last": "Israr Ahmad Awan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saba",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanwal Yousaf, Zahid Mehmood, Israr Ahmad Awan, Tanzila Saba, Riad Alharbey, Talal Qadah, and Mayda Abdullateef Alrige. 2019. A comprehensive study of mobile-health based assistive technology for the healthcare of dementia and alzheimer's dis- ease (ad). Health Care Management Science, pages 1-23.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Speech Recognition in Alzheimer's Disease and in its Assessment",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2016,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1948--1952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zhou, Kathleen C Fraser, and Frank Rudzicz. 2016. Speech Recognition in Alzheimer's Disease and in its Assessment. In INTERSPEECH, pages 1948-1952.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Detecting cognitive impairments by agreeing on interpretations of linguistic features",
"authors": [
{
"first": "Zining",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zining Zhu, Jekaterina Novikova, and Frank Rudzicz. 2019. Detecting cognitive impairments by agreeing on interpretations of linguistic features. NAACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Effect of a controlled amount of ASR errors and random noise on classification performance.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": ") are available for all",
"content": "<table><tr><td colspan=\"2\">Dataset</td><td>Del (%)</td><td>Ins (%)</td><td>Sub (%)</td></tr><tr><td>DB</td><td>HC AD</td><td>54.14 56.98</td><td>4.27 3.89</td><td>41.59 39.13</td></tr><tr><td>HA</td><td>HC MCI</td><td>24.37 21.78</td><td>13.11 14.81</td><td>62.52 63.40</td></tr></table>",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Rates of ASR errors on DB and HA datasets.",
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Effect of original ASR on classification performance with the DB dataset.",
"content": "<table/>",
"num": null
}
}
}
}