ACL-OCL / Base_JSON /prefixE /json /eacl /2021.eacl-demos.38.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:43:01.481181Z"
},
"title": "Conversational Agent for Daily Living Assessment Coaching Demo",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Gaydhani",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Raymond",
"middle": [],
"last": "Finzel",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sheena",
"middle": [],
"last": "Dufresne",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Gini",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Serguei",
"middle": [
"Vs"
],
"last": "Pakhomov",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Conversational Agent for Daily Living Assessment Coaching (CADLAC) is a multi-modal conversational agent system designed to impersonate \"individuals\" with various levels of ability in activities of daily living (ADLs: e.g., dressing, bathing, mobility, etc.) for use in training professional assessors how to conduct interviews to determine one's level of functioning. The system is implemented on the Mind-Meld platform for conversational AI and features a Bidirectional Long Short-Term Memory topic tracker that allows the agent to navigate conversations spanning 18 different ADL domains, a dialogue manager that interfaces with a database of over 10,000 historical ADL assessments, a rule-based Natural Language Generation (NLG) module, and a pre-trained open-domain conversational sub-agent (based on GPT-2) for handling conversation turns outside of the 18 ADL domains. CADLAC is delivered via state-of-the-art web frameworks to handle multiple conversations and users simultaneously and is enabled with voice interface. The paper includes a description of the system design and evaluation of individual components followed by a brief discussion of current limitations and next steps.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Conversational Agent for Daily Living Assessment Coaching (CADLAC) is a multi-modal conversational agent system designed to impersonate \"individuals\" with various levels of ability in activities of daily living (ADLs: e.g., dressing, bathing, mobility, etc.) for use in training professional assessors how to conduct interviews to determine one's level of functioning. The system is implemented on the Mind-Meld platform for conversational AI and features a Bidirectional Long Short-Term Memory topic tracker that allows the agent to navigate conversations spanning 18 different ADL domains, a dialogue manager that interfaces with a database of over 10,000 historical ADL assessments, a rule-based Natural Language Generation (NLG) module, and a pre-trained open-domain conversational sub-agent (based on GPT-2) for handling conversation turns outside of the 18 ADL domains. CADLAC is delivered via state-of-the-art web frameworks to handle multiple conversations and users simultaneously and is enabled with voice interface. The paper includes a description of the system design and evaluation of individual components followed by a brief discussion of current limitations and next steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A person's ability to function independently in everyday life depends on multiple factors including, but not limited to, intact physical and mental capacity. In the United States, significant public resources are dedicated to providing assistance to those in need. A key aspect of assistance programs is to provide ongoing assessment of individuals * Equal contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "to determine their level of functioning (e.g. independent, needs supervision, needs physical assistance, or dependent) and their specific needs in order to provide assistance appropriately. These assessments are conducted by certified assessors specifically trained for this purpose. A challenge in the assessment process is to ensure consistency across large numbers of assessors with various degrees of experience and interview skills and to prepare novice assessors for the variety of interactions they will experience in the field. The Conversational Agent for Daily Living Assessment Coaching (CADLAC) is designed to coach certified assessors to conduct their assessment interviews in a natural conversational style that simulates real interactions. Previously, dialogue systems similar to CADLAC have been developed (Campillos Llanos et al., 2015; Nirenburg et al., 2008; Jaffe et al., 2015; Laleye et al., 2020) . These systems simulate \"Virtual Patients\", which are used in healthcare education. CADLAC is tailored to support novel application domains of function and disability. An example of the interaction with the conversational agent is shown in Figure 1 . The interface and a video highlighting the system can be found here 1 2 .",
"cite_spans": [
{
"start": 822,
"end": 853,
"text": "(Campillos Llanos et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 854,
"end": 877,
"text": "Nirenburg et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 878,
"end": 897,
"text": "Jaffe et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 898,
"end": 918,
"text": "Laleye et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1160,
"end": 1168,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used two sources of data in order to inform CADLAC system design, train machine learning models, and to develop a database to support rulebased approaches used by the system. One source of data consisted of a survey that was administered to certified assessors, and the other consisted of anonymized historical assessment data shared by the Minnesota Department of Human Services (DHS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We designed a survey to collect sample dialogues from certified assessors. This survey was administered to approximately 1,700 assessors statewide. The assessors were asked to recall some of their past assessments and provide examples of interactions that they had with people during the assessment interviews. Specifically, each example consists of up to 3 dialogue turns between the assessor and the person being interviewed, the gender and age category of the person, domain of the conversation, and the person's ability level within the domain. The data consists of assessments of activities of daily living (ADLs -e.g., walking) and instrumental activities of daily living (iADLS -e.g., paying bills) in 18 functional domains related to personal cares, movement, household management, and eating/meal preparation. We also manually annotated the assessor questions for 6 intents: challenges, preferences, equipment, helper, generic, and frequency. We were able to collect a total of 2,885 dialogues through the survey. A sample record from the resulting dataset, including the annotations for intents, is shown in Table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Survey Data",
"sec_num": "2.1"
},
{
"text": "CADLAC relies on a database of over 10,000 historical assessments, conducted by experienced certified assessors and managed by Minnesota DHS. Each historical assessment contains fields that indicate the person's ability to function in ADLs and Domain: Grooming Ability: Physical Assistance Assessor-1: \"Can you tell me about how you take care of your grooming needs?\" intent -generic Participant-1: \"I have a hard time.\" Assessor-2: \"Can you brush your hair?\" intent -challenges Participant-2: \"No, I can't reach my hair to get it brushed in the back.\" Assessor-3: \"Who helps you to brush your hair?\" intent -helper Participant-3: \"My daughter helps me to brush my hair.\" Age: 65-84 Gender: Female iADLs in addition to basic demographic information such as age range and sex of the person being assessed. It also contains certified assessors' notes taken during the assessments. These notes represent very brief descriptions of the assessed person's challenges, preferences, and equipment they use to help them, among other information organized by the ADL and iADL domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Profiles",
"sec_num": "2.2"
},
{
"text": "Historical data was anonymized by DHS staff for inclusion in CADLAC by removing any individually identifiable information including individuals' names and exact age information that was converted to age ranges. Furthermore, sensitive personal information such as phone number, email, location, etc. was excluded from the historical data, keeping the privacy of the individuals protected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Profiles",
"sec_num": "2.2"
},
{
"text": "These anonymized historical assessments are used to generate synthetic profiles of \"individuals\" that specify varying levels of independence in everyday functioning and specific needs. These profiles are created by mapping the categorical attributes related to the independence levels in the historical assessment to those levels specified for the conversational agent (CA). Additionally, assessor notes about challenges, preferences, and equipment from the historical data were populated in the synthetic profiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Profiles",
"sec_num": "2.2"
},
{
"text": "The profiles are used to customize the CA and generate natural language responses that are tailored to the question asked by the assessor and are as consistent as possible with all of the information in the profile. For example, if the synthetic profile states that the individual being assessed is completely dependent on external assistance in the mobility domain, the responses generated by CAD-LAC to a question about the ability to perform heavy housekeeping should not indicate any degree of independence in this domain either. The profiles include a numeric representation of the independence level of the \"person\" represented by the profile. These numeric representations are used to compare assessments produced by novice assessors using CADLAC for training to those produced by experienced assessors, and to provide summary feedback about the assessment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Profiles",
"sec_num": "2.2"
},
{
"text": "Despite the fact that these profiles are based on data from real individuals assessed in the state of Minnesota, the profiles may potentially convey biases present in the underlying data. In order to minimize potential systematic bias, the historical data used to construct the profiles were randomly sampled from a diverse population of assessed individuals with equal proportions by sex and with the following race distribution: 17.1% African American; 2.4% American Indian; Asian or Pacific Islander 7.7%; Hispanic 2.6%; White 64.4%; Two or more races 1.1%; and Unknown race 4.6%. The current prototype of CADLAC does not use race information; however, this information is available in the underlying data and can be used to adjust the composition of the synthetic profile database as needed for assessor training purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Profiles",
"sec_num": "2.2"
},
{
"text": "CADLAC is implemented on the MindMeld platform for conversational AI applications (Raghuvanshi et al., 2018 ) that relies on a commonly used modular dialogue system design consisting broadly of natural language understanding (NLU), natural language generation (NLG) and a dialogue state tracker/manager (DM) components. These components of CADLAC prototype have been developed using a hybrid machine learning and rule-based approaches. The prototype is currently deployed via a web service written in Python with the modern asynchronous web Responder framework. This web service is responsible for accepting requests from a user-facing web client, managing user sessions, and passing conversation objects into the Dialogue Parser. The web-based client supports text-only, voice-only or hybrid modalities. This demonstration will focus on showing the natural dialogue between human users and CADLAC aimed at assessing the level of functioning of the \"individual\" impersonated by CADLAC and the feedback provided to the users regarding their assessments. The system architecture is shown in Figure 2 .",
"cite_spans": [
{
"start": 82,
"end": 107,
"text": "(Raghuvanshi et al., 2018",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1089,
"end": 1097,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "System Design",
"sec_num": "3"
},
{
"text": "The domain classifier (a.k.a. topic tracker) categorizes the input query into one of 18 domains related to ADLs and iADLs, as well as two additional domains: \"generic follow-up question\" and \"unsupported\". CADLAC's domain recognizer comprises a BiLSTM neural network (Hochreiter and Schmidhuber, 1997 ) that we trained on available survey data using GloVe embeddings (Pennington et al., 2014) to represent the semantics of input tokens. We evaluated this model using 10-fold crossvalidation resulting in a mean f-score of 0.801 and an accuracy of 0.830 across all domains.",
"cite_spans": [
{
"start": 267,
"end": 300,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF6"
},
{
"start": 367,
"end": 392,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Classifier",
"sec_num": "4.1"
},
{
"text": "Next, the NLU module recognizes the intent of the user query. In our case, each domain has the following intents that reflect the nature of the ques-tions asked by assessors: challenges, preferences, generic, equipment, unsupported, helper, and frequency. These intents specify the type of information that the assessor wants to elicit. We used the survey data to train an intent classifier for each domain using the same BiLSTM architecture that we used for the domain recognizer. The results of 10-fold cross-validation for this component consist of a range of f-scores from 0.704 to 0.927 that vary by domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intent Classifier",
"sec_num": "4.2"
},
{
"text": "We also trained a Named Entity Recognizer to identify the words or phrases, referred to as \"entities\", present in the input query (e.g., shirt, shoes, pants are entities in the dressing domain). These entities are then used to fill the empty slots in the natural language response or select an appropriate response from the knowledge base. We also use a rule-based language parser within MindMeld to model the dependencies between the recognized entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognizer",
"sec_num": "4.3"
},
{
"text": "The dialogue manager consists of the dialogue state tracker, which maps the input query to appropriate dialogue states. Each dialogue state is responsible for handling a particular type of query. We use a rule-based and pattern matching procedure, which depends on the domain and the intent of the input query, to define the dialogue states. One of the important functionalities of the CA is to handle follow-up questions as illustrated in Figure 3 . For this purpose, we use the domain of the previous turn and make a transition to the dialogue state specified by the intent of the current turn. If the intent of the question is unsupported, then we use the intent of the previous turn and the domain of the current turn, and make a transition to the corresponding dialogue state. The unsupported queries are handled by the neural model based on GPT-2 (Zhang et al., 2020) as illustrated in Figure 4 .",
"cite_spans": [
{
"start": 853,
"end": 873,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 440,
"end": 448,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 892,
"end": 900,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Dialogue Manager",
"sec_num": "4.4"
},
{
"text": "We use a rule-based approach in which we first look up a field in the knowledge-base of historical assessments that corresponds to the identified topic and intent for a specific synthetic profile (e.g., challenges[intent] with dressing[domain]). Information contained in historical assessments is underspecified and is not usable as a natural language response. For example, it may contain a note \"Be- havioral issues\" for challenges with dressing. We manually annotated a subset of over 100 assessments, where the annotators were instructed to become familiar with the person's level of functioning in various domains and use that knowledge to convert the historical notes to a format that would sound more natural yet still consistent with the synthetic profile (e.g., \"Behavioral issues\" note for a 5 year old child's assessment would be converted to \"He can't dress by himself because he throws a tantrum each time he has to change clothes.\") The current prototype of CADLAC's dialogue manager queries the knowledge base for these manually converted responses and returns a response that most closely matches the named entities mentioned in the user's question. If no natural language response is found, CADLAC generates a generic response randomly chosen from a set of responses consistent with the synthetic profile (e.g., for a profile of a person who requires intermittent physical assistance with dressing, the response may be \"I need someone to help me with this\"). We are currently experimenting with transformer neural models used in machine translation in order to determine if they can \"learn\" the mapping between the original historical assessment notes and the natural language responses; however, the current demo does not include these models yet. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Generation",
"sec_num": "5"
},
{
"text": "The feedback to users being trained to perform assessments is provided via a visual interface designed to compare users' assessments to those stored in synthetic profiles as illustrated in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Feedback",
"sec_num": "6"
},
{
"text": "In order to enable voice input-output capabilities in CADLAC we implemented a Automatic Speech Recognition (ASR) and a Text-to-Speech (TTS) web services. Both services are implemented using PyTorch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Services",
"sec_num": "7"
},
{
"text": "Voice activity is streamed from the web client to the web server in real time using an implementation of WebRTC peer connections. The WebRTC protocols are available in most modern browsers, and include hooks to access media devices, standards for establishing peer connections, and asynchronous data channels. The implementation of WebRTC that was used for the python web server was AIORTC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Services",
"sec_num": "7"
},
{
"text": "After voice data arrive at the server they are passed to the ASR service, which transcribes English words from the speech utterance. These words take the place of the text from the chat interface for the rest of the conversational turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Services",
"sec_num": "7"
},
{
"text": "We trained an ASR system based on Baidu's Deep Speech 2 architecture (Amodei et al., 2016) implemented in PyTorch 3 consisting of 3 convolutional neural network (CNN) layers, followed by 5 bidirectional recurrent neural network (RNN) layers with gated recurrent units (GRU), a single lookeahead convolution layer followed by a fully connected layer and a single softmax layer. The system was trained using the Connectionist Temporal Classification (CTC) loss function (Graves et al., 2006) .",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Amodei et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 468,
"end": 489,
"text": "(Graves et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR Service",
"sec_num": "7.1"
},
{
"text": "In addition to the default greedy search decoding over the hypotheses produced by the softmax layer, the system's implementation also can use a beam search decoder with a standard n-gram language model. We used default hyperparameters: size of the RNN layers was set to 800 GRU units; starting learning rate was set to 0.0003 with the annealing parameter set to 1.1 and momentum of 0.9. Audio signal processing consisted of transforming the audio from the time to the frequency domain via Short-time Fourier transform as implemented by the Python librosa library. The signal was sampled in frames of 20 milliseconds overlapping by 10 milliseconds. The resulting input vectors to the first CNN layer of the Deep Speech 2 network consisted of 160 values representing the power spectrum of each frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR Service",
"sec_num": "7.1"
},
{
"text": "A collection of speech corpora available from the Linguistic Data Consortium was used as training data. These corpora include the Wall Street Journal (WSJ: LDC93S6A, LDC94S13B), Resource Management (RM -LDC93S3A), TIMIT (LDC93S1), FFMTIMIT (LDC96S32), DCIEM/HCRC (LDC96S38), USC-SFI MALACH corpus (LDC2019S11), Switchboard-1 (LDC97S62), and Fisher (LDC2004S13, LDC2005S13). In addition to these corpora, we used the following publicly available data: TalkBank (CMU, ISL, SBCSAE collections) (MacWhinney and Wagner, 2010), Common Voice (CV: Version 1.0)) corpus 4 , Voxforge corpus 5 , TED-LIUM corpus (Release 2) (Rousseau et al., 2014) , LibriSpeech (Panayotov et al., 2015) , Flicker8K (Hodosh et al., 2013) , CSTR VCTK corpus (Veaux et al., 2017) , and the Spoken Wikipedia Corpus (SWC-English (K\u00f6hn et al., 2016) ). Audio samples from all of these these data sources were split into pieces shorter than 25 seconds in duration. The total size of the resulting corpus was approximately 4,991 hours of audio (2,000 hours contributed by the Fisher corpus alone). Finally, we also used audio data from various prior studies that were conducted at the University of Minnesota consisting of story recall, verbal fluency, and spontaneous narrative tasks. With the exception of the Fisher and Switchboard corpora, all other data were recorded at a minimum of 16 kHz sampling frequency. The Fisher and Switchboard corpora contain narrow-band telephone conversations sampled at 8 KHz. All data were either downsampled or upsampled and converted using the SoX toolkit 6 to a single channel 16 bit 16 kHz PCM WAVE format.",
"cite_spans": [
{
"start": 613,
"end": 636,
"text": "(Rousseau et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 651,
"end": 675,
"text": "(Panayotov et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 688,
"end": 709,
"text": "(Hodosh et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 729,
"end": 749,
"text": "(Veaux et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 797,
"end": 816,
"text": "(K\u00f6hn et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR Service",
"sec_num": "7.1"
},
{
"text": "The performance of the ASR service was evaluated off-line using the heldout portion of the TED-LIUM corpus. Without using a language model for rescoring the output of the neural model (greedy decoding), the word error rate (WER) and character error rate (CER) of our ASR system were 18.84 and 5.24, which are comparable to those previously reported for the same dataset also using a Deep Speech 2 system (WER: 28.1, CER: 9.2) (Hernandez et al., 2018) . Using a 4-gram language model constructed with the SRILM Toolkit (Stolcke, 2002) from the English language portion of the 1 Billion words text corpus 7 model with Kneser-Ney smoothing (Ney et al., 1994) resulted in improving ASR accuracy to WER: 15.73 and CER: 4.57.",
"cite_spans": [
{
"start": 426,
"end": 450,
"text": "(Hernandez et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 518,
"end": 533,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF26"
},
{
"start": 637,
"end": 655,
"text": "(Ney et al., 1994)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR Service",
"sec_num": "7.1"
},
{
"text": "We used a pre-trained model based on Tacotron2 (Shen et al., 2017) and WaveGlow (Prenger et al., 2018) for the text-to-speech service. This model was implemented in PyTorch and is based on the NVIDIA's GitHub repositories for Tacotron2 8 and WaveGlow 9 . The Tacotron2 model converts the input text to mel spectrograms and then the WaveGlow model uses the mel spectrograms to generate speech. The Tacotron2 implementation used here slightly differs from the one described in by Shen et al. (2017) : it uses Dropout (Srivastava et al., 2014) regularization instead of Zoneout (Krueger et al., 2016) for the LSTM layers, and replaces the WaveNet model with the WaveGlow model. The models are trained on the LJ Speech (Ito and Johnson, 2017) dataset using mixed precision training (Micikevicius et al., 2017) .",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Shen et al., 2017)",
"ref_id": null
},
{
"start": 80,
"end": 102,
"text": "(Prenger et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 478,
"end": 496,
"text": "Shen et al. (2017)",
"ref_id": null
},
{
"start": 515,
"end": 540,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 575,
"end": 597,
"text": "(Krueger et al., 2016)",
"ref_id": null
},
{
"start": 778,
"end": 805,
"text": "(Micikevicius et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TTS Service",
"sec_num": "7.2"
},
{
"text": "The above model generates speech in female voice since it is trained on the LJ Speech dataset, which has voice samples from a single female speaker. However, our system has synthetic profiles for both males and females. In order to generate speech for a male profile, the current implementation relies on pitch manipulation tech-niques. Specifically, we use the phonetics software Praat (Boersma and Weenink, 2018) through the library Parselmouth (Jadoul et al., 2018) , which exposes the functionality and algorithms of Praat in Python. To change the female voice to a male voice, we set the parameter formant shift ratio to 0.85 and new pitch median to 100 Hz. The formant shift ratio determines the frequencies of the formants and the new pitch median determines the median pitch of the male voice. Using these specific values of the parameters gives us the best results. However, we are currently exploring ways to retrain the Tacotron2 and WaveGlow model on a male voice dataset to generate better quality outputs.",
"cite_spans": [
{
"start": 387,
"end": 414,
"text": "(Boersma and Weenink, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 447,
"end": 468,
"text": "(Jadoul et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TTS Service",
"sec_num": "7.2"
},
{
"text": "One of the limitations of the current implementation of CADLAC is that it does not currently learn from user input. One of the next key steps in further development of this system is to implement active learning components for domain and intent classification, ASR, and other supervised components of the system. We are also currently developing a formal evaluation of the usability of this system with human end-users. Specifically, we plan to use metrics of sensibility and specificity for each system response as proposed by Adiwardana et al. (2020) in addition to overall subjective measures of dialogue success, conversation naturalness, and intelligibility of responses. We also plan to evaluate the system for any potential bias in responses generated by the system and develop ways of un-biasing the system via hybrid rule-based and data-driven approaches .",
"cite_spans": [
{
"start": 528,
"end": 552,
"text": "Adiwardana et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Steps",
"sec_num": "8"
},
{
"text": "The work on this project was supported by funding from the Minnesota Department of Human Services. We would like to thank the people at DSD and MNIT for help with project specifications, gathering of historical data, and expert guidance on domain-specific aspects of the project. We would also like to thank Pamela Miller, Sidney Kiltie, and Elise Moore for help with transforming certified assessor notes to natural language format and Julia Garbuz for helping to develop and conduct the surveys of DHS assessors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "9"
},
{
"text": "Demo: https://rxinformatics.net/cadlac 2 Video: https://vimeo.com/500734362",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/SeanNaren/ deepspeech.pytorch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://voice.mozilla.org 5 http://www.voxforge.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://sox.sourceforge.net 7 https://github.com/ciprian-chelba/ 1-billion-word-language-modeling-benchmark 8 https://github.com/NVIDIA/tacotron2 9 https://github.com/NVIDIA/waveglow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Deep speech 2: End-to-end speech recognition in english and mandarin",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Rishita",
"middle": [],
"last": "Sundaram Ananthanarayanan",
"suffix": ""
},
{
"first": "Jingliang",
"middle": [],
"last": "Anubhai",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Battenberg",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "Guoliang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jingdong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhijie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Chrzanowski",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Niandong",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Erich",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Linxi",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Fougner",
"suffix": ""
},
{
"first": "Caixia",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Awni",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "Lappi Vaino",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Johannes",
"suffix": ""
},
{
"first": "Cai",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Billy",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "Libby",
"middle": [],
"last": "Legresley",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Weigao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiangang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dongpeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "Zongfeng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Raiman",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Shubho",
"middle": [],
"last": "Seetapun",
"suffix": ""
},
{
"first": "Kavya",
"middle": [],
"last": "Sengupta",
"suffix": ""
},
{
"first": "Anuroop",
"middle": [],
"last": "Srinet",
"suffix": ""
},
{
"first": "Haiyuan",
"middle": [],
"last": "Sriram",
"suffix": ""
},
{
"first": "Liliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jidong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kaifu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhijian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhiqian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Likai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning",
"volume": "48",
"issue": "",
"pages": "173--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guo- liang Chen, Jie Chen, Jingdong Chen, Zhijie Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Ke Ding, Niandong Du, Erich Elsen, Jesse En- gel, Weiwei Fang, Linxi Fan, Christopher Fougner, Liang Gao, Caixia Gong, Awni Hannun, Tony Han, Lappi Vaino Johannes, Bing Jiang, Cai Ju, Billy Jun, Patrick LeGresley, Libby Lin, Junjie Liu, Yang Liu, Weigao Li, Xiangang Li, Dong- peng Ma, Sharan Narang, Andrew Ng, Sherjil Ozair, Yiping Peng, Ryan Prenger, Sheng Qian, Zongfeng Quan, Jonathan Raiman, Vinay Rao, San- jeev Satheesh, David Seetapun, Shubho Sengupta, Kavya Srinet, Anuroop Sriram, Haiyuan Tang, Lil- iang Tang, Chong Wang, Jidong Wang, Kaifu Wang, Yi Wang, Zhijian Wang, Zhiqian Wang, Shuang Wu, Likai Wei, Bo Xiao, Wen Xie, Yan Xie, Dani Yo- gatama, Bin Yuan, Jun Zhan, and Zhenyao Zhu. 2016. Deep speech 2: End-to-end speech recogni- tion in english and mandarin. In Proceedings of the 33rd International Conference on International Con- ference on Machine Learning -Volume 48, ICML'16, page 173-182. JMLR.org.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Praat: doing phonetics by computer",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Boersma",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weenink",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Boersma and David Weenink. 2018. Praat: doing phonetics by computer [Computer program].",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Description of the Patient-Genesys dialogue system",
"authors": [
{
"first": "Dhouha",
"middle": [],
"last": "Leonardo Campillos Llanos",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Anne-Laure",
"middle": [],
"last": "Bilinski",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Ligozat",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosset",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "438--440",
"other_ids": {
"DOI": [
"10.18653/v1/W15-4660"
]
},
"num": null,
"urls": [],
"raw_text": "Leonardo Campillos Llanos, Dhouha Bouamor,\u00c9ric Bilinski, Anne-Laure Ligozat, Pierre Zweigenbaum, and Sophie Rosset. 2015. Description of the Patient- Genesys dialogue system. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 438-440, Prague, Czech Republic. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Faustino",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning, ICML '06",
"volume": "",
"issue": "",
"pages": "369--376",
"other_ids": {
"DOI": [
"10.1145/1143844.1143891"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented se- quence data with recurrent neural networks. In Pro- ceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 369-376, New York, NY, USA. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ted-lium 3: Twice as much data and corpus repartition for experiments on speaker adaptation",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Hernandez",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Tomashenko",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
}
],
"year": 2018,
"venue": "Speech and Computer",
"volume": "",
"issue": "",
"pages": "198--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Hernandez, Vincent Nguyen, Sahar Ghan- nay, Natalia Tomashenko, and Yannick Est\u00e8ve. 2018. Ted-lium 3: Twice as much data and corpus repartition for experiments on speaker adaptation. In Speech and Computer, pages 198-208, Cham. Springer International Publishing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {
"DOI": [
"10.1613/jair.3994"
]
},
"num": null,
"urls": [],
"raw_text": "M. Hodosh, P. Young, and J. Hockenmaier. 2013. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. Journal of Artificial Intelligence Research, 47:853-899.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The lj speech dataset",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Ito",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/ LJ-Speech-Dataset/.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Introducing Parselmouth: A Python interface to Praat",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Jadoul",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "De Boer",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Phonetics",
"volume": "71",
"issue": "",
"pages": "1--15",
"other_ids": {
"DOI": [
"10.1016/j.wocn.2018.07.001"
]
},
"num": null,
"urls": [],
"raw_text": "Yannick Jadoul, Bill Thompson, and Bart de Boer. 2018. Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71:1-15.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpreting questions with a log-linear ranking model in a virtual patient dialogue system",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Jaffe",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Schuler",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Danforth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.3115/v1/W15-0611"
]
},
"num": null,
"urls": [],
"raw_text": "Evan Jaffe, Michael White, William Schuler, Eric Fosler-Lussier, Alex Rosenfeld, and Douglas Dan- forth. 2015. Interpreting questions with a log-linear ranking model in a virtual patient dialogue system. In Proceedings of the Tenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 86-96, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mining the spoken wikipedia for speech data and beyond",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Stegen",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Baumann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne K\u00f6hn, Florian Stegen, and Timo Baumann. 2016. Mining the spoken wikipedia for speech data and beyond. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mohammad Pezeshki",
"authors": [
{
"first": "David",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Tegan",
"middle": [],
"last": "Maharaj",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Kram\u00e1r",
"suffix": ""
}
],
"year": null,
"venue": "Yoshua Bengio, Aaron Courville, and Chris Pal. 2016. Zoneout: Regularizing rnns by randomly preserving hidden activations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Krueger, Tegan Maharaj, J\u00e1nos Kram\u00e1r, Moham- mad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. 2016. Zoneout: Regularizing rnns by randomly preserving hidden activations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A French medical conversations corpus annotated for a virtual patient dialogue system",
"authors": [
{
"first": "A",
"middle": [
"A"
],
"last": "Fr\u00e9jus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Laleye",
"suffix": ""
},
{
"first": "Antonia",
"middle": [],
"last": "Ga\u00ebl De Chalendar",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Blani\u00e9",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Brouquet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Behnamou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "574--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9jus A. A. Laleye, Ga\u00ebl de Chalendar, Antonia Blani\u00e9, Antoine Brouquet, and Dan Behnamou. 2020. A French medical conversations corpus an- notated for a virtual patient dialogue system. In Pro- ceedings of the 12th Language Resources and Evalu- ation Conference, pages 574-580, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Does gender matter? towards fairness in dialogue systems",
"authors": [
{
"first": "Haochen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jamell",
"middle": [],
"last": "Dacon",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zitao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Does gender matter? towards fairness in dialogue systems.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository. Gesprachsforschung: Online-Zeitschrift Zur Verbalen Interaktion",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Macwhinney",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "11",
"issue": "",
"pages": "154--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian MacWhinney and Johannes Wagner. 2010. Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository. Gesprachsforschung: Online-Zeitschrift Zur Ver- balen Interaktion, 11:154-173.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On structuring probabilistic dependencies in stochastic language modelling",
"authors": [
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Ute",
"middle": [],
"last": "Essen",
"suffix": ""
},
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
}
],
"year": 1994,
"venue": "Computer Speech and Language",
"volume": "8",
"issue": "",
"pages": "1--38",
"other_ids": {
"DOI": [
"10.1006/csla.1994.1001"
]
},
"num": null,
"urls": [],
"raw_text": "Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependencies in stochas- tic language modelling. Computer Speech and Lan- guage, 8:1-38.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language understanding in Maryland virtual patient",
"authors": [
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Beale",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Mcshane",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Jarrell",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Fantry",
"suffix": ""
}
],
"year": 2008,
"venue": "Coling 2008: Proceedings of the workshop on Speech Processing for Safety Critical Translation and Pervasive Applications",
"volume": "",
"issue": "",
"pages": "36--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergei Nirenburg, Stephen Beale, Marjorie McShane, Bruce Jarrell, and George Fantry. 2008. Language understanding in Maryland virtual patient. In Col- ing 2008: Proceedings of the workshop on Speech Processing for Safety Critical Translation and Per- vasive Applications, pages 36-39, Manchester, UK. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Librispeech: An asr corpus based on public domain audio books",
"authors": [
{
"first": "V",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2015.7178964"
]
},
"num": null,
"urls": [],
"raw_text": "V. Panayotov, G. Chen, D. Povey, and S. Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP), pages 5206-5210.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Waveglow: A flow-based generative network for speech synthesis",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valle",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2018. Waveglow: A flow-based generative network for speech synthesis.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Developing production-level conversational interfaces with shallow semantic parsing",
"authors": [
{
"first": "Arushi",
"middle": [],
"last": "Raghuvanshi",
"suffix": ""
},
{
"first": "Lucien",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Raghunathan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "157--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arushi Raghuvanshi, Lucien Carroll, and Karthik Raghunathan. 2018. Developing production-level conversational interfaces with shallow semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 157-162.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Enhancing the ted-lium corpus with selected data for language modeling and more ted talks",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Rousseau",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Del\u00e9glise",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Rousseau, Paul Del\u00e9glise, and Yannick Est\u00e8ve. 2014. Enhancing the ted-lium corpus with selected data for language modeling and more ted talks. In LREC.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Yannis Agiomyrgiannakis, and Yonghui Wu. 2017. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Zongheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rif",
"middle": [
"A"
],
"last": "Skerry-Ryan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saurous",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2017. Natural tts synthesis by condi- tioning wavenet on mel spectrogram predictions.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Mach. Learn. Res",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural net- works from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. Srilm -an extensible language modeling toolkit. In INTERSPEECH.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Veaux",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Yamagishi",
"suffix": ""
},
{
"first": "Kirsten",
"middle": [],
"last": "Macdonald",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Veaux, Junichi Yamagishi, and Kirsten Macdonald. 2017. Cstr vctk corpus: English multi- speaker corpus for cstr voice cloning toolkit.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dialogpt: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL, system demonstration",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation. In ACL, system demonstration.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of interaction with the conversational agent.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "CADLAC system architecture.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Response to a follow-up question. The second question of the conversation refers to the previous domain of dressing.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Response to off-topic questions.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Assessment feedback. Top row shows values in the profile only for those domains assessed up to a checkpoint. Bottom row shows user-selected assessments.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table/>",
"text": "Example dialogue from the survey.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}