{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:18.535915Z"
},
"title": "Towards Understanding ASR Error Correction for Medical Conversations",
"authors": [
{
"first": "Anirudh",
"middle": [],
"last": "Mani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Abridge AI Inc",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Shruti",
"middle": [],
"last": "Palaskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sandeep",
"middle": [],
"last": "Konam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Abridge AI Inc",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Domain Adaptation for Automatic Speech Recognition (ASR) error correction via machine translation is a useful technique for improving out-of-domain outputs of pre-trained ASR systems to obtain optimal results for specific in-domain tasks. We use this technique on our dataset of Doctor-Patient conversations using two off-the-shelf ASR systems: Google ASR (commercial) and the ASPIRE model (open-source). We train a Sequenceto-Sequence Machine Translation model and evaluate it on seven specific UMLS Semantic types, including Pharmacological Substance, Sign or Symptom, and Diagnostic Procedure to name a few. Lastly, we breakdown, analyze and discuss the 7% overall improvement in word error rate in view of each Semantic type.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Domain Adaptation for Automatic Speech Recognition (ASR) error correction via machine translation is a useful technique for improving out-of-domain outputs of pre-trained ASR systems to obtain optimal results for specific in-domain tasks. We use this technique on our dataset of Doctor-Patient conversations using two off-the-shelf ASR systems: Google ASR (commercial) and the ASPIRE model (open-source). We train a Sequenceto-Sequence Machine Translation model and evaluate it on seven specific UMLS Semantic types, including Pharmacological Substance, Sign or Symptom, and Diagnostic Procedure to name a few. Lastly, we breakdown, analyze and discuss the 7% overall improvement in word error rate in view of each Semantic type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Off-the-shelf ASR systems like Google ASR are becoming increasingly popular each day due to their ease of use, accessibility, scalability and most importantly, effectiveness. Trained on large datasets spanning different domains, these services enable accurate speech-to-text capabilities to companies and academics who might not have the option of training and maintaining a sophisticated state-ofthe-art in-house ASR system. However, for all the benefits these cloud-based systems provide, there is an evident need for improving their performance when used on in-domain data such as medical conversations. Approaching ASR Error Correction as a Machine Translation task has proven to be useful for domain adaptation and resulted in improvements in word error rate and BLEU score when evaluated on Google ASR output (Mani et al., 2020) .",
"cite_spans": [
{
"start": 815,
"end": 834,
"text": "(Mani et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, it is important to analyze and understand how domain adapted speech may vary from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transcript Reference you also have a pacemaker because you had sick sinus syndrome and it's under control Google ASR you also have a taste maker because you had sick sinus syndrome and it's under control S2S you also have a pacemaker because you had sick sinus syndrome and it's under control Reference like a heart disease uh atrial fibrillation Google ASR like a heart disease asian populations S2S like a heart disease atrial fibrillation Table 1 : Examples from Reference, Google ASR transcription and corresponding S2S model output for two medical words, \"pacemaker\" and \"atrial fibrillation\". In this work, we investigate how adapting transcription to domain and context can help reduce such errors, especially with respect to medical words categorized under different Semantic types of the UMLS ontology.",
"cite_spans": [],
"ref_spans": [
{
"start": 442,
"end": 449,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "ASR outputs. We approach this problem by using two different types of metrics -1) overall transcription quality, and 2) domain specific medical information. For the first one, we use standard speech metric like word error rate for two different ASR system outputs, namely, Google Cloud Speech API 1 (commercial), and ASPIRE model (open-source) (Peddinti et al., 2015) . For the second type of evaluation, we use the UMLS 2 ontology (O., 2004) and analyze the S2S model output for a subset of semantic types in the ontology using a variety of performance metrics to build an understanding of effect of the Sequence to Sequence transformation.",
"cite_spans": [
{
"start": 344,
"end": 367,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 432,
"end": 442,
"text": "(O., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
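For concreteness, here is a minimal sketch of the word error rate metric used in the first type of evaluation, computed as word-level Levenshtein distance normalized by reference length. The function name `word_error_rate` and the example strings are our own illustration, not code from the paper.

```python
# Minimal WER sketch: word-level edit distance divided by reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution or match
                dp[i - 1][j] + 1,                               # deletion
                dp[i][j - 1] + 1,                               # insertion
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Table 1 example: "taste maker" vs. "pacemaker" costs one substitution
# plus one insertion against a 16-word reference, so WER = 2/16 = 0.125.
ref = "you also have a pacemaker because you had sick sinus syndrome and it's under control"
hyp = "you also have a taste maker because you had sick sinus syndrome and it's under control"
print(f"WER: {word_error_rate(ref, hyp):.3f}")
```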
{
"text": "While the need for ASR correction has become more and more prevalent in recent years with the successes of large-scale ASR systems, machine translation and domain adaptation for error correction are still relatively unexplored. In this paper, we build upon the work done by Mani et al. (Mani et al., 2020) . However, D'Haro and Banchs (D'Haro and Banchs, 2016) first explored the use of machine translation to improve automatic transcription and they applied it to robot commands dataset and human-human recordings of tourism queries dataset. ASR error correction has also been performed based on ontology-based learning in (Anantaram et al., 2018) . They investigate the use of including accent of speaker and environmental conditions on the output of pre-trained ASR systems. Their proposed approach centers around bioinspired artificial development for ASR error correction. (Shivakumar et al., 2019) explore the use of noisy-clean phrase context modeling to improve ASR errors. They try to correct unrecoverable errors due to system pruning from acoustic, language and pronunciation models to restore longer contexts by modeling ASR as a phrase-based noisy transformation channel. Domain adaptation with off-the-shelf ASR has been tried for pure speech recognition tasks in high and low resource scenarios with various training strategies Renals, 2014, 2015; Meng et al., 2017; Sun et al., 2017; Shinohara, 2016; Dalmia et al., 2018) but the goal of these models was to build better ASR systems that are robust to domain change. Domain adaptation for ASR transcription can help improve the performance of domain-specific downstream tasks such as medication regimen extraction (Selvaraj and Konam, 2019).",
"cite_spans": [
{
"start": 274,
"end": 305,
"text": "Mani et al. (Mani et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 624,
"end": 648,
"text": "(Anantaram et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1343,
"end": 1362,
"text": "Renals, 2014, 2015;",
"ref_id": null
},
{
"start": 1363,
"end": 1381,
"text": "Meng et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 1382,
"end": 1399,
"text": "Sun et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 1400,
"end": 1416,
"text": "Shinohara, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 1417,
"end": 1437,
"text": "Dalmia et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Using the reference texts and pre-trained ASR hypothesis, we have access to parallel data that is in-domain (reference text) and out-of-domain (hypothesis from ASR), both of which are transcriptions of the same speech signal. With this parallel data, we now frame the adaptation task as a translation problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for Error Correction",
"sec_num": "3"
},
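As a minimal illustration of this framing, the sketch below pairs ASR hypotheses (the "source language") with reference transcripts (the "target language") by utterance ID. The dictionary layout, utterance IDs, and function name are our own assumptions for illustration.

```python
# Sketch of the parallel-corpus framing: the ASR hypothesis is the
# "source language" and the human reference is the "target language".

def make_parallel_pairs(asr_utterances: dict, reference_utterances: dict):
    """Pair ASR output with references that share an utterance ID."""
    pairs = []
    for utt_id, hypothesis in asr_utterances.items():
        if utt_id in reference_utterances:
            pairs.append((hypothesis, reference_utterances[utt_id]))
    return pairs

asr = {"utt_001": "like a heart disease asian populations"}
ref = {"utt_001": "like a heart disease uh atrial fibrillation"}
print(make_parallel_pairs(asr, ref))
```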
{
"text": "Sequence-to-Sequence Models : Sequence-tosequence (S2S) models (Sutskever et al., 2014) have been applied to various sequence learning tasks including speech recognition and machine translation. Attention mechanism (Bahdanau et al., 2014) is used to align the input with the output sequences in these models. The encoder is a deep stacked Long Short-Term Memory Network and the decoder is a shallower uni-directional Gated Recurrent Unit acting as a language model for decoding the input sequence into either the transcription (ASR) or the translation (MT). Attention-based S2S models do not require alignment information between the source and target data, hence useful for monotonic and non-monotonic sequence-mapping tasks. In our work, we are mapping ASR output to reference hence it is a monotonic mapping task where we use this model.",
"cite_spans": [
{
"start": 63,
"end": 87,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 215,
"end": 238,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for Error Correction",
"sec_num": "3"
},
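The sketch below is a minimal PyTorch rendering of the architecture described above: a deep stacked LSTM encoder, additive (Bahdanau-style) attention, and a shallower unidirectional GRU decoder trained with teacher forcing. The layer sizes and hyperparameters are illustrative assumptions, not the paper's reported configuration; only the vocabulary size (12,934) comes from the paper.

```python
import torch
import torch.nn as nn

class Seq2SeqCorrector(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512,
                 enc_layers=3, dec_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Deep stacked LSTM encoder over ASR-hypothesis tokens.
        self.encoder = nn.LSTM(emb_dim, hid_dim, num_layers=enc_layers,
                               batch_first=True)
        # Shallower unidirectional GRU decoder; input = [embedding; context].
        self.decoder = nn.GRU(emb_dim + hid_dim, hid_dim,
                              num_layers=dec_layers, batch_first=True)
        # Additive attention: score(s, h) = v^T tanh(W [s; h])
        self.attn_w = nn.Linear(hid_dim * 2, hid_dim)
        self.attn_v = nn.Linear(hid_dim, 1, bias=False)
        self.out = nn.Linear(hid_dim, vocab_size)

    def attend(self, dec_state, enc_outs):
        # dec_state: (B, H); enc_outs: (B, T, H)
        query = dec_state.unsqueeze(1).expand(-1, enc_outs.size(1), -1)
        scores = self.attn_v(torch.tanh(
            self.attn_w(torch.cat([query, enc_outs], dim=-1))))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * enc_outs).sum(dim=1)  # context vector: (B, H)

    def forward(self, src_ids, tgt_ids):
        enc_outs, _ = self.encoder(self.embed(src_ids))
        dec_hidden = torch.zeros(1, src_ids.size(0), enc_outs.size(-1))
        logits = []
        for t in range(tgt_ids.size(1)):  # teacher forcing over target steps
            context = self.attend(dec_hidden[-1], enc_outs)
            step_in = torch.cat([self.embed(tgt_ids[:, t]), context], dim=-1)
            dec_out, dec_hidden = self.decoder(step_in.unsqueeze(1), dec_hidden)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)  # (B, T_tgt, vocab)

model = Seq2SeqCorrector(vocab_size=12934)
src = torch.randint(0, 12934, (2, 10))   # ASR hypothesis token IDs
tgt = torch.randint(0, 12934, (2, 12))   # reference token IDs
print(model(src, tgt).shape)  # torch.Size([2, 12, 12934])
```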
{
"text": "We use a dataset of 3807 de-identified Doctor-Patient conversations containing 288,475 utterances split randomly into 230,781 training utterances and 28,847 for validation and test each. The total vocabulary for the machine translation task is 12,934 words in the ASR output generated using Google API and ground truth files annotated by humans in the training set. We only train word-based translation models in this study to match ASR transcriptions and ground truth with further downstream evaluations. To choose domain-specific medical words, we use a pre-defined ontology by Unified Medical Language System (UMLS) (O., 2004), giving us an exhaustive list of over 20,000 medications. We access UMLS ontology through the Quickumls package (Soldaini and Goharian, 2016) , and use seven semantic types -Pharmacological Substance (PS), Sign or Symptom (SS), Diagnostic Procedure (DP), Body Part, Organ, or Organ Component (BPOOC), Disease or Syndrome (DS), Laboratory or Test Result (LTR), and Organ or Tissue Function (OTF). These are thereby referred by their acronyms in this paper. These seven semantic types were chosen to cover a spread of varied number of utterances available for each type's presence, from lowest (OTF) to the highest (PS) in our dataset.",
"cite_spans": [
{
"start": 742,
"end": 771,
"text": "(Soldaini and Goharian, 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
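A minimal sketch of extracting these seven semantic types with the QuickUMLS package follows. The installation path is a placeholder, the TUI codes are the standard UMLS identifiers we believe correspond to the seven types, and the exact constructor arguments and result keys may differ across QuickUMLS versions; this is an illustration, not the paper's extraction code.

```python
from quickumls import QuickUMLS

# Assumed UMLS TUI codes for the paper's seven semantic types.
SEMTYPES = {
    "T121": "PS",     # Pharmacologic Substance
    "T184": "SS",     # Sign or Symptom
    "T060": "DP",     # Diagnostic Procedure
    "T023": "BPOOC",  # Body Part, Organ, or Organ Component
    "T047": "DS",     # Disease or Syndrome
    "T034": "LTR",    # Laboratory or Test Result
    "T042": "OTF",    # Organ or Tissue Function
}

# Path to a local QuickUMLS installation (placeholder).
matcher = QuickUMLS("/path/to/quickumls/installation",
                    accepted_semtypes=set(SEMTYPES))

def medical_terms(utterance: str):
    """Return (matched n-gram, semantic-type acronym) pairs in an utterance."""
    terms = []
    for candidates in matcher.match(utterance, best_match=True):
        best = candidates[0]  # candidates are ranked by similarity
        for tui in best["semtypes"]:
            if tui in SEMTYPES:
                terms.append((best["ngram"], SEMTYPES[tui]))
    return terms

print(medical_terms("you also have a pacemaker because you had sick sinus syndrome"))
```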
{
"text": "Alignment: Since the ground truth is at utterance level, and ASR system output transcripts are at word level, specific alignment handling techniques are required to match the output of multiple ASR systems. This is achieved using utterance level timing information i.e., start and end time of an utterance, and obtaining the corresponding words in the ASR system output transcript based on word-level timing information (start and end time of each word). To make sure same utterance ID is used across all ASR outputs and the ground truth, we first process our primary ASR output transcripts from Google Cloud Speech API based on the ground truth and create random training, validation and test splits. For each ground truth utterance in these dataset splits, we also generate corresponding utterances from ASPIRE output transcripts similar to the process mentioned above. This results in two datasets corresponding to Google Cloud Speech and ASPIRE ASR models, where utterance IDs are conserved across datasets. However, this does lead to ASPIRE dataset having a lesser utterances as we process Google ASR outputs first in an effort maximize the size of our primary ASR model dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
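The sketch below makes the timing-based alignment concrete: each ground-truth utterance carries a (start, end) span, each ASR word carries word-level timestamps, and an utterance's hypothesis is the concatenation of the words falling inside its span. The data structures and example timestamps are illustrative assumptions.

```python
def align_words_to_utterances(utterances, asr_words):
    """utterances: [(utt_id, start, end)]; asr_words: [(word, start, end)].
    Returns {utt_id: hypothesis string} built from words inside each span."""
    aligned = {}
    for utt_id, utt_start, utt_end in utterances:
        words = [w for w, w_start, w_end in asr_words
                 if w_start >= utt_start and w_end <= utt_end]
        aligned[utt_id] = " ".join(words)
    return aligned

utts = [("utt_001", 0.0, 3.2), ("utt_002", 3.2, 5.0)]
words = [("you", 0.1, 0.3), ("also", 0.3, 0.6), ("have", 0.6, 0.9),
         ("a", 0.9, 1.0), ("taste", 1.0, 1.4), ("maker", 1.4, 1.8)]
print(align_words_to_utterances(utts, words))
```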
{
"text": "Pre-trained ASR: We use the Google Cloud Speech API for Google ASR transcription and the JHU ASPIRE model (Peddinti et al., 2015) as two off-the-shelf ASR systems in this work. Google Speech API is a commercial service that charges users per minute of speech transcribed, while the ASPIRE model is an open-source ASR model. We explore the trends we observe in both-a commercial API as well as an open-source model. metrics, with an absolute improvement of 7% in WER and a 4 point absolute improvement in BLEU scores on Google ASR. While the Google ASR output can be stripped of punctuation for a better comparison, it is an extra post-processing step and breaks the direct output modeling pipeline. If necessary, ASPIRE model output and the references can be inserted with punctuation as well.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
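For reference, a minimal sketch of transcribing audio with the Google Cloud Speech-to-Text Python client, requesting word-level timestamps for the alignment step above, with optional punctuation stripping for a punctuation-free WER comparison. The bucket URI is a placeholder, and the API surface follows the v1 Python client; it may differ in other client versions and is not the paper's pipeline code.

```python
import string
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,  # word-level start/end times
)
audio = speech.RecognitionAudio(uri="gs://your-bucket/conversation.wav")
response = client.recognize(config=config, audio=audio)

def strip_punctuation(text: str) -> str:
    """Optional post-processing for a punctuation-free WER comparison."""
    return text.translate(str.maketrans("", "", string.punctuation)).lower()

for result in response.results:
    alternative = result.alternatives[0]
    print(strip_punctuation(alternative.transcript))
```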
{
"text": "In Table 4 , we compare S2S adapted outputs with Google ASR for each semantic type, broken down by Precision, Recall and F1 scores. The two outputs are also compared directly by counting utterances where S2S model made the utterance better with respect to a semantic term -it was present in the reference and S2S output but not Google ASR, and cases where S2S model made the utterance worse -semantic term was present in the reference and Google ASR but not S2S output. We refer to this metric as semantic intersection in this work. As observed, the F1 scores are higher for S2S outputs for all the semantic types in the Ontology, except for one (BPOOC) where it ties. In terms of Precision and Recall too, S2S performs better for most categories. These numbers can be discussed with a couple of underlying factors -how common or rare the semantic terms are on average for each semantic type, and how many training examples has the model seen for those terms. This is important to consider as Google ASR learns on a much larger vocabulary of words spanning many different domains, where as S2S is trained on a domain specific dataset. For example, we see a large gain on Precision for DP, which can be attributed to the rarity of the terms under this category, like 'echocardiogram', 'pacemaker', etc. Its also for this reason we see only a slight improvement in Precision for PS even though it has the most number of training examples. Many of the medication names are rare, but a lot of them are pretty common nowadays even though they are domain specific, like 'aspirin'. Moreover, this is also supported by the numbers observed for BPOOC, where terms like 'legs', 'heart' and 'lungs' are the top 3 most frequently occurring words.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.2"
},
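A minimal sketch of the per-semantic-type comparison follows: precision, recall, and F1 over term occurrences, plus raw counts for the semantic-intersection notion defined above (better: term in reference and S2S but not Google ASR; worse: term in reference and Google ASR but not S2S). Inputs are per-utterance term sets for a single semantic type; the function name is our own, and we do not attempt to reproduce the paper's normalization of the reported SI numbers.

```python
def semantic_scores(reference_sets, google_sets, s2s_sets):
    """Each argument: list of per-utterance sets of medical terms."""
    tp = fp = fn = better = worse = 0
    for ref, goog, s2s in zip(reference_sets, google_sets, s2s_sets):
        tp += len(s2s & ref)
        fp += len(s2s - ref)
        fn += len(ref - s2s)
        better += len((ref & s2s) - goog)  # S2S recovered a missed term
        worse += len((ref & goog) - s2s)   # S2S destroyed a correct term
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1, better, worse

# One utterance: S2S recovers "pacemaker" (better=1), loses nothing (worse=0).
print(semantic_scores([{"pacemaker"}], [set()], [{"pacemaker"}]))
```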
{
"text": "The number of unique terms for the S2S output are lower in comparison to Google ASR and reference as observed in Table 4 . This might indicate that the S2S model is incorrectly modifying some Google ASR output medical terms which may not have as many examples in the Training set. However, our semantic intersection metric indicates that we get an overall improvement in all categories, except for DP. We hypothesize this to be largely due to a combination of how rare the words are, and the overall number of training examples for DP being low. When we calculate semantic intersection on the Full set, we get almost equal results for S2S and Google ASR outputs, 0.5 and 0.6 respectively. When we look at our top 5 and bottom 5 least frequent terms for each semantic types, almost all the terms overlap between S2S, Google ASR and reference, even though the number of unique terms might be less for S2S. Overall, it is evident from analyzing the results that as the number of occurrences increases for each medical term, the performance of the S2S model in identifying errors and correcting them increases rapidly, as shown in Table 2 and Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1127,
"end": 1146,
"text": "Table 2 and Table 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.2"
},
{
"text": "In a production environment, the S2S model may be confidently used for correcting ASR errors for top K most frequently occurring medical terms, where the value of K must be decided based on the dataset available for training. Future extension of this work will also be looking into the class imbalance problem for a more robust performance on different semantic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.2"
},
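One way such a top-K gate could look in practice is sketched below: accept the S2S correction only when every word it introduces is among the K most frequent medical terms seen in training, and otherwise fall back to the raw ASR output. The frequency table, function name, and threshold are illustrative assumptions, not a deployment recipe from the paper.

```python
from collections import Counter

def gated_correction(asr_text, s2s_text, term_counts: Counter, k: int):
    """Keep the S2S output only if every changed word is a top-K term."""
    top_k = {term for term, _ in term_counts.most_common(k)}
    changed = set(s2s_text.split()) - set(asr_text.split())
    return s2s_text if changed <= top_k else asr_text

counts = Counter({"aspirin": 450, "pacemaker": 120, "echocardiogram": 8})
print(gated_correction("you have a taste maker",
                       "you have a pacemaker", counts, k=2))
```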
{
"text": "We present an analysis of how ASR Error Correction using Machine Translation impacts the different semantic types of the UMLS ontology for a medical conversation. We run the S2S model on a dataset of Doctor-Patient conversations as a post-processing step to optimize the Google off-theshelf ASR system. We use different input representations and compare the performance of our S2S model using WER and BLEU scores on Google ASR and ASPIRE outputs. We deep dive into how our adaptation model affect medical WER for each semantic type, and breakdown the results using Precision, Recall, F1 and Semantic Intersection numbers between S2S and Google ASR. We establish the robustness of S2S model performance for more frequently occurring medical terms. In the future, we want to explore other representations like phonemes which might capture ASR errors better, and address the class imabalance problem for rarer medical terms in different semantic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://cloud.google.com/speech-to-text/ 2 The Unified Medical Language System is a collection of medical thesauri maintained by the US National Library of Medicine",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the University of Pittsburgh Medical Center (UPMC) and Abridge AI Inc. for providing access to de-identified data of Doctor-Patient conversations used in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Repairing asr output by artificial development and ontology based learning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Anantaram",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Sangroya",
"suffix": ""
},
{
"first": "Mrinal",
"middle": [],
"last": "Rawat",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Chhabra",
"suffix": ""
}
],
"year": 2018,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "5799--5801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Anantaram, Amit Sangroya, Mrinal Rawat, and Aish- warya Chhabra. 2018. Repairing asr output by arti- ficial development and ontology based learning. In IJCAI, pages 5799-5801.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Domain robust feature extraction for rapid low resource asr development",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Dalmia",
"suffix": ""
},
{
"first": "Xinjian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "258--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Dalmia, Xinjian Li, Florian Metze, and Alan W Black. 2018. Domain robust feature ex- traction for rapid low resource asr development. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 258-265. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic correction of asr outputs by using machine translation",
"authors": [
{
"first": "Luis Fernando",
"middle": [],
"last": "D'Haro",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Fernando D'Haro and Rafael E Banchs. 2016. Au- tomatic correction of asr outputs by using machine translation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Asr error correction and domain adaptation using machine translation",
"authors": [
{
"first": "Anirudh",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Palaskar",
"suffix": ""
},
{
"first": "Nimshi",
"middle": [],
"last": "Venkat Meripo",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Konam",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anirudh Mani, Shruti Palaskar, Nimshi Venkat Meripo, Sandeep Konam, and Florian Metze. 2020. Asr error correction and domain adaptation using ma- chine translation. In 2020 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised adaptation with domain separation networks for robust speech recognition",
"authors": [
{
"first": "Zhong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Zhuo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vadim",
"middle": [],
"last": "Mazalov",
"suffix": ""
},
{
"first": "Jinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.08010"
]
},
"num": null,
"urls": [],
"raw_text": "Zhong Meng, Zhuo Chen, Vadim Mazalov, Jinyu Li, and Yifan Gong. 2017. Unsupervised adaptation with domain separation networks for robust speech recognition. arXiv preprint arXiv:1711.08010.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The unified medical language system (umls): integrating biomedical terminology",
"authors": [
{
"first": "O",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic Acids Res",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/nar/gkh061"
],
"PMID": [
"14681409"
],
"PMCID": [
"PMC308795"
]
},
"num": null,
"urls": [],
"raw_text": "Bodenreider O. 2004. The unified medical language system (umls): integrating biomedical terminol- ogy. Nucleic Acids Res. 2004 Jan 1;32(Database issue):D267-70. doi: 10.1093/nar/gkh061. PubMed PMID: 14681409; PubMed Central PMCID: PMC308795. Nucleic Acids Res.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Jhu aspire system: Robust lvcsr with tdnns, ivector adaptation and rnn-lms",
"authors": [
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Guoguo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vimal",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "539--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijayaditya Peddinti, Guoguo Chen, Vimal Manohar, Tom Ko, Daniel Povey, and Sanjeev Khudanpur. 2015. Jhu aspire system: Robust lvcsr with tdnns, ivector adaptation and rnn-lms. In 2015 IEEE Work- shop on Automatic Speech Recognition and Under- standing (ASRU), pages 539-546. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Medication regimen extraction from medical conversations",
"authors": [
{
"first": "Sai",
"middle": [
"P"
],
"last": "Selvaraj",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Konam",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.04961"
]
},
"num": null,
"urls": [],
"raw_text": "Sai P Selvaraj and Sandeep Konam. 2019. Medica- tion regimen extraction from medical conversations. arXiv preprint arXiv:1912.04961.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adversarial multi-task learning of deep neural networks for robust speech recognition",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Shinohara",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Shinohara. 2016. Adversarial multi-task learn- ing of deep neural networks for robust speech recog- nition. Proc. Interspeech 2016.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning from past mistakes: improving automatic speech recognition output via noisy-clean phrase context modeling",
"authors": [
{
"first": "Prashanth Gurunath",
"middle": [],
"last": "Shivakumar",
"suffix": ""
},
{
"first": "Haoqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Georgiou",
"suffix": ""
}
],
"year": 2019,
"venue": "APSIPA Transactions on Signal and Information Processing",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prashanth Gurunath Shivakumar, Haoqi Li, Kevin Knight, and Panayiotis Georgiou. 2019. Learning from past mistakes: improving automatic speech recognition output via noisy-clean phrase context modeling. APSIPA Transactions on Signal and In- formation Processing, 8.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Quickumls: a fast, unsupervised approach for medical concept extraction",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Soldaini",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2016,
"venue": "MedIR workshop, sigir",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Soldaini and Nazli Goharian. 2016. Quickumls: a fast, unsupervised approach for medical concept extraction. In MedIR workshop, sigir, pages 1-4.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An unsupervised deep domain adaptation approach for robust speech recognition",
"authors": [
{
"first": "Sining",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Binbin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Yanning",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Neurocomputing",
"volume": "257",
"issue": "",
"pages": "79--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sining Sun, Binbin Zhang, Lei Xie, and Yanning Zhang. 2017. An unsupervised deep domain adap- tation approach for robust speech recognition. Neu- rocomputing, 257:79-87.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning hidden unit contributions for unsupervised speaker adaptation of neural network acoustic models",
"authors": [
{
"first": "Pawel",
"middle": [],
"last": "Swietojanski",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2014,
"venue": "Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawel Swietojanski and Steve Renals. 2014. Learning hidden unit contributions for unsupervised speaker adaptation of neural network acoustic models. In Spoken Language Technology Workshop (SLT), 2014",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Differentiable pooling for unsupervised speaker adaptation",
"authors": [
{
"first": "Pawel",
"middle": [],
"last": "Swietojanski",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2015,
"venue": "Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4305--4309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawel Swietojanski and Steve Renals. 2015. Differ- entiable pooling for unsupervised speaker adapta- tion. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4305-4309. IEEE.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Breakdown of the Full Data based on REF.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "Results for adaptive training experiments with Google ASR and ASPIRE model. We compare absolute gains in WER and BLEU scores with un-adapted ASR output.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"text": ", 532 0.86 , 0.85 0.61 , 0.55 0.72 , 0.67 0.10, 0.02 DS 210, 302, 310 0.75 , 0.75 0.68 , 0.68 0.76 , 0.75 0.03, 0.02 BPOOC 173, 235, 222 0.82 , 0.81 0.70 , 0.70 0.75 , 0.75 0.02, 0.02 SS 144, 169, 181 0.87 , 0.88 0.74 , 0.72 0.8 , 0.79 0.03, 0.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Ontology Unique Terms</td><td/><td colspan=\"2\">S2S adpt, ASR o/p</td></tr><tr><td/><td>S, G, R</td><td>P</td><td>R</td><td>F1</td><td>SI</td></tr><tr><td>PS</td><td colspan=\"5\">282, 39301</td></tr><tr><td>DP</td><td>54, 73, 82</td><td colspan=\"4\">0.89 , 0.75 0.65 , 0.70 0.75 , 0.72 0.02, 0.07</td></tr><tr><td>LTR</td><td>26, 26, 33</td><td colspan=\"4\">0.77 , 0.85 0.67 , 0.61 0.72 , 0.71 0.07, 0.01</td></tr><tr><td>OTF</td><td>26, 32, 26</td><td colspan=\"4\">0.79 , 0.74 0.79 , 0.77 0.79 , 0.75 0.04, 0.02</td></tr></table>"
},
"TABREF5": {
"text": "Medical WER results per Ontology for adaptive training experiments on Test data. We use Precision, Recall, F1 and Semantic Intersection (as defined in 5.2) metrics for comparing S2S model output to Google ASR.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}