{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:41:49.162363Z"
},
"title": "A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results",
"authors": [
{
"first": "Xinxin",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"addrLine": "San Diego La Jolla",
"postCode": "92093",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Ndapa",
"middle": [],
"last": "Nakashole",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"addrLine": "San Diego La Jolla",
"postCode": "92093",
"region": "CA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is the loss of human interaction that is central to doctor-patient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: a human avatar to facilitate medically grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes that may give the patient more options for using the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence. We present a high-level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is the loss of human interaction that is central to doctor-patient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: a human avatar to facilitate medically grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes that may give the patient more options for using the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence. We present a high-level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "NLP is in a position to bring forth scalable, cost-effective solutions for promoting well-being. Such solutions can serve many segments of the population, such as people living in medically underserved communities with limited access to clinicians, and people with limited mobility. These solutions can also serve those interested in self-monitoring (Torous et al., 2014 ) their own health. There is evidence that these technologies can be effective (Mayo-Wilson, 2007; Fitzpatrick et al., 2017) . However, despite interest, such technologies suffer from low adoption (Donkin et al., 2013) . One hypothesis for this limited adoption is the loss of human interaction which is central to doctor-patient encounters (Fitzpatrick et al., 2017) . In this paper we seek to address this limitation via a conversational agent that emulates one aspect of in-person doctor-patient interactions: a human avatar to facilitate grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes that may give the patient more options for using the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence.",
"cite_spans": [
{
"start": 347,
"end": 367,
"text": "(Torous et al., 2014",
"ref_id": "BIBREF11"
},
{
"start": 447,
"end": 466,
"text": "(Mayo-Wilson, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 467,
"end": 492,
"text": "Fitzpatrick et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 565,
"end": 586,
"text": "(Donkin et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 709,
"end": 735,
"text": "(Fitzpatrick et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The human body is complex, and information about how it functions fills entire books. Yet it is important for individuals to know about conditions that can affect the human body, in order to practice continued monitoring and prevention to keep severe medical situations at bay. To this end, our well-being agent includes a medical question answering interaction mode (MedicalQABot). For mental health, social isolation and loneliness can have adverse health consequences such as anxiety, depression, and suicide. Our well-being agent therefore also includes a social interaction mode (SocialBot), wherein the agent can be an approximation of a human companion. The MedicalQABot is less conversational but accomplishes the task of answering questions. The SocialBot seeks to be conversational while providing some information. And there is a third interaction mode, the ChatBot, which in our work is used as a last-resort mode; it is conversational but does not provide much information of substance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To test the ideas of our proposed agent, we are developing a grounded well-being conversational agent called \"Marie Bot Wellbeing\". This paper presents a sketch of the high-level design of our Marie system, and some preliminary results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: An illustration of the MedicalQA interaction mode. Here the agent's answer is grounded on our human avatar. The affected body part, the large intestine, is highlighted on the avatar. Dialogue shown: \"Marie, who is at risk of getting colon cancer?\" \"Colon cancer typically affects older adults, though it can happen at any age.\" \"Which body parts are affected?\" \"Colon cancer begins in the large intestine (colon). Let me show you on my body ...\"",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An important consideration when developing technology for healthcare is that there is low tolerance for errors. Erroneous information can have severe negative consequences. We design the MedicalQABot and the SocialBot with this consideration in mind. Our design philosophy consists of the following tenets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Reputable answers: Only provide answers to questions for which we have answers from reputable sources, instead of considering information from every corner of the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Calibrated confidence scores: Even though the answers come from reputable sources, the model must make various decisions, including which specific answer to retrieve for a given question. For these predictions, we must know what we do not know, and provide only information about which the model is fairly certain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Visualize: Whenever an answer can be visualized to some degree, we should provide a visualization to accompany the text answer to help clarify, and reduce misunderstanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. Graceful failure: When one of the interaction modes fails, another interaction mode can take over.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Organization In what follows, we discuss how the above tenets are manifested in our agent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: We begin with a high-level overview of the design of the different parts of the agent (Sections 2 to 4); We next discuss the current prototype implementation and preliminary results (Section 5); We next present related work (Section 6); and close with a discussion (Section 7) and concluding remarks (Section 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In navigating between the different interaction modes, we design our system as follows. Based on the user utterance, a binary classifier predicts which interaction mode (MedicalQABot vs. SocialBot) should handle it. Suppose the classifier predicts that the utterance is a question asking for medical information on a topic, but our MedicalQA component determines that we have no information on that topic; our goal is then to let the SocialBot take over if it has information on that topic and can meaningfully hold a conversation about it. For the SocialBot, when it is missing the necessary information, our goal is to have it fall back to ChatBot mode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes and Dialog Management",
"sec_num": "2"
},
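The fallback policy described above can be sketched as follows; this is a minimal illustration with hypothetical predicate and helper names (is_medical, topic_of), not the system's actual code.

```python
# Minimal sketch of the fallback routing described above. The predicate
# names (is_medical, topic_of) and mode labels are hypothetical.

def route(utterance, is_medical, medical_topics, social_topics, topic_of):
    """Pick an interaction mode for one user utterance."""
    topic = topic_of(utterance)
    if is_medical(utterance):
        if topic in medical_topics:
            return "MedicalQABot"
        if topic in social_topics:   # graceful failure: SocialBot takes over
            return "SocialBot"
        return "ChatBot"             # last resort
    if topic in social_topics:
        return "SocialBot"
    return "ChatBot"
```

With `topic_of` as, say, a trivial keyword extractor, a medical question on a covered topic routes to the MedicalQABot, a medical question on an uncovered but socially known topic hands over to the SocialBot, and anything else falls through to the ChatBot.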
{
"text": "An example SocialBot dialogue: \"Marie, I am thinking of getting an Afghan hound.\" \"Their long, fine-textured coat requires considerable care and grooming.\" \"Ok, but how about temperament?\" \"They can be aloof and dignified, but happy and clownish when playing.\" \"Are they good with cats?\" \"This breed, as is the case with many sighthounds, has a high prey drive and may not get along with small animals.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes and Dialog Management",
"sec_num": "2"
},
{
"text": "Some aspects of the human body are well-understood; many diseases and medical conditions have been studied for years, so many medical questions have already been asked and their answers are known. One approach to medical QA is therefore a retrieval-based one, which consists of two steps. First, we collect and create a knowledge vault of frequently asked questions and their curated answers from reputable sources. Second, given a user question, we must match it to one of the questions in the QA knowledge vault. However, when people pose their questions, they are not aware of the exact words used in the questions of the knowledge vault. We must therefore match user questions to the correct question in the knowledge vault. A simple approach is keyword search, but this misses many compositional effects. Another is to treat this as an entailment problem: given a user question, find the questions in the knowledge vault that entail the user question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction Modes and Dialog Management",
"sec_num": "2"
},
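As a concrete baseline for the matching step above, keyword overlap can be scored with Jaccard similarity over word sets. This sketch is illustrative only: the function names are ours, and the paper's entailment-based matcher would replace the scoring function.

```python
# Illustrative keyword-overlap baseline for matching a user question to
# a vault question. Jaccard similarity over word sets is our stand-in
# scoring choice, not the paper's entailment approach.

def jaccard(a, b):
    """Word-set Jaccard similarity of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_question(user_q, vault):
    """Return the most similar vault question and its score."""
    best = max(vault, key=lambda q: jaccard(user_q, q))
    return best, jaccard(user_q, best)
```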
{
"text": "We develop a human avatar to help users better understand medical information, and to help them specify their questions more precisely. The avatar is meant to be used in two ways. It was drawn by a medical illustrator we hired from Upwork.com.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grounding to Human Anatomy Avatar",
"sec_num": "3.2"
},
{
"text": "Bot \u2192 Patient: When an answer mentions body parts, the relevant body parts are highlighted on the avatar: \"This medical condition affects the following body parts.\" An illustration of this direction is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 216,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grounding to Human Anatomy Avatar",
"sec_num": "3.2"
},
{
"text": "Patient \u2192 Bot: When the user describes their condition, they can point to a body part by clicking: \"I am not feeling well here.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grounding to Human Anatomy Avatar",
"sec_num": "3.2"
},
{
"text": "For the SocialBot, we propose to create a knowledge vault of topics that will enable the bot to have engaging conversations with humans on topics of interest, including current events. For example, the bot can say \"Sure, we can talk about German beer\" or \"I see you want to talk about Afghan hounds\". The topics will be mined from Wikipedia, news sources, and social media including Reddit. For the SocialBot, we wish to model the principles of a good conversation: having something interesting to say, and showing interest in what the conversation partner says (Ostendorf, 2018) . 5 Prototype Implementation & Preliminary Experiments",
"cite_spans": [
{
"start": 563,
"end": 580,
"text": "(Ostendorf, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SocialBot Mode",
"sec_num": "4"
},
{
"text": "Having discussed the high-level design goals, in the following sections we present specifics of our initial prototype. Our prototype's language understanding capabilities are limited. They can be thought of as placeholders that allowed us to quickly develop a prototype. These simple capabilities will be replaced as we develop more advanced language processing methods for our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prototype Implementation & Preliminary Experiments",
"sec_num": "5"
},
{
"text": "We describe the data used in our current prototype.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We collected Medline data 1 , containing 1031 high-level medical topics. We extracted the summaries and split the text into questions and answers. We generated several data files from this dataset: question-topic pairs, answer-topic pairs, and question-answer pairs. The data size and split information is presented in Table 3. We describe their usage in detail in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Medline Data",
"sec_num": null
},
{
"text": "Medical Dialogue Data We use the MedDialog dataset (Zeng et al., 2020) , which has 0.26 million dialogues between patients and doctors. The raw dialogues were obtained from healthcaremagic.com and icliniq.com. We also use MedQuAD (the Medical Question Answering Dataset) (Ben Abacha and Demner-Fushman, 2019), which contains 47,457 medical question-answer pairs created from 12 NIH 2 websites.",
"cite_spans": [
{
"start": 51,
"end": 70,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Medline Data",
"sec_num": null
},
{
"text": "News Category Dataset We also use the News Category dataset from Kaggle 3 . It contains 41 topics. We use the data from 39 topics, excluding \"Healthy Living\" and \"Wellness\", which might overlap with the medical domain. We extract the short description field from the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Medline Data",
"sec_num": null
},
{
"text": "We collected questions and comments from 30 subreddits. We treat each subreddit as one topic. The number of questions for each topic is shown in Table 7 . This Reddit data is to be used for our SocialBot.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Reddit Data",
"sec_num": null
},
{
"text": "As shown in Figure 3 , our system makes a number of decisions upon receiving a user utterance. First, the system predicts if the utterance should be handled by the MedicalQABot or by the SocialBot.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "5.2"
},
{
"text": "If the MedicalQABot is predicted to handle the utterance, then an additional decision is made: which medical topic the utterance is about. If we are not certain, the system puts the user in the loop by asking them to confirm the topic. If the user says the top predicted topic is not the correct one, we present them with the next topic in the ranking and ask again, up to 4 times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "5.2"
},
{
"text": "(Data split: Train 286370, Valid 35796, Test 35797.) If the SocialBot is predicted to handle the utterance, the goal is to have the system decide among the various general topics and current events for which the system has collected information. If the topic is outside the scope of what the SocialBot knows, the system resorts to a ChatBot, which may just give generic responses and engage in chitchat dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Train 286370 Valid 35796",
"sec_num": null
},
{
"text": "We train this classifier to determine whether the user's input is related to the medical domain. We use the output of a BERT encoder as the input to a linear classification layer trained with a cross-entropy loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mode Prediction Classifier",
"sec_num": "5.3"
},
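A self-contained sketch of such a classifier: a linear layer trained with softmax cross-entropy on sentence encodings. The paper uses a BERT encoder; a bag-of-words encoding stands in here purely so the example runs on its own, and the toy sentences are ours.

```python
import numpy as np

# Sketch of the mode-prediction classifier: a linear layer trained with
# cross-entropy on sentence encodings. A bag-of-words encoder stands in
# for BERT so the example is self-contained; the toy data is illustrative.

def encode(sent, vocab):
    v = np.zeros(len(vocab))
    for w in sent.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def train(X, y, classes=2, lr=0.5, steps=200):
    W = np.zeros((X.shape[1], classes))
    for _ in range(steps):
        z = X @ W
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - np.eye(classes)[y]) / len(y)  # CE gradient
    return W

medical = ["what causes colon cancer", "symptoms of diabetes"]
social = ["talk about afghan hounds", "news about german beer"]
vocab = {w: i for i, w in enumerate(sorted({w for s in medical + social
                                            for w in s.split()}))}
X = np.array([encode(s, vocab) for s in medical + social])
y = np.array([1, 1, 0, 0])  # 1 = medical, 0 = social
W = train(X, y)
```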
{
"text": "We choose the positive examples from the MedQuAD dataset, and negative examples from the News Category dataset. The training data information is shown in Table 1 . And the evaluation results are shown in Table 2 . This performance is potentially better than in real-life settings, because the medical (Medline) vs. non-medical (Kaggle news) data is cleanly separated. In reality, a user utterance might be \"I am not happy, I have a headache\": the user may not want medical advice, but simply to chat a bit to distract them from the headache.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 196,
"end": 203,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Mode Prediction Classifier",
"sec_num": "5.3"
},
{
"text": "Medical Topic Classifier If the user utterance is routed to the MedicalQABot, the MedicalQABot first predicts the medical category of the user's input. We use the Medline data, which contains 1031 topics, to train this classifier. The dataset information is shown in Table 3 . The evaluation results of our medical topic classifier are shown in Table 4 . As shown in Figure 3 , we ask a topic confirmation question after the topic classifier, which is used to let the user confirm the correctness of the output from the topic classifier. Figure 3 : Our proposed pipeline. Section 5 has more details on the implementation of our current prototype.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 340,
"end": 347,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 350,
"end": 356,
"text": "Figure",
"ref_id": null
},
{
"start": 504,
"end": 512,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MedicalQA Implementation",
"sec_num": "5.4"
},
{
"text": "(Data split: Train 12082, Valid 3021, Test 615.) We do not always need the confirmation, however. We set a threshold for the confidence score of the classifier. If the confidence score is higher than the threshold, meaning that our classifier is sufficiently confident in its output, we skip the confirmation question and retrieve the answer directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Train 12082 Valid 3021",
"sec_num": null
},
{
"text": "To make the classifier confidence scores more reliable, we use posterior calibration to encourage the confidence level to correspond to the probability that the classifier is correct (Chuan Guo, 2017; Schwartz et al., 2020) . The method learns a single parameter, the temperature T . The temperature is introduced into the output logits of the model as follows:",
"cite_spans": [
{
"start": 183,
"end": 200,
"text": "(Chuan Guo, 2017;",
"ref_id": "BIBREF2"
},
{
"start": 201,
"end": 223,
"text": "Schwartz et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Train 12082 Valid 3021",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mathrm{pred} = \\arg\\max_i \\frac{\\exp(z_i/T)}{\\sum_j \\exp(z_j/T)}",
"eq_num": "(1)"
}
],
"section": "Train 12082 Valid 3021",
"sec_num": null
},
{
"text": "{z_i} are the logits of the model and T is the temperature that needs to be optimized. T is optimized on a validation set to maximize the log-likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Train 12082 Valid 3021",
"sec_num": null
},
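Equation (1) with validation-set fitting can be sketched as below. The grid search is an illustrative way to optimize T (any 1-D optimizer over the validation NLL would do), and the toy logits are ours.

```python
import numpy as np

# Sketch of temperature scaling, Eq. (1): divide logits by T before the
# softmax, and pick the T that minimizes validation NLL (i.e. maximizes
# log-likelihood). Grid search stands in for a proper 1-D optimizer.

def softmax_T(z, T):
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    def nll(T):
        p = softmax_T(val_logits, T)
        return -np.log(p[np.arange(len(val_labels)), val_labels]).mean()
    return min(grid, key=nll)

# An overconfident toy model: two of three confident predictions are
# wrong, so calibration should choose T > 1 (softer probabilities).
logits = np.array([[4.0, 0.0], [3.0, 1.0], [0.5, 0.0]])
labels = np.array([0, 1, 1])
T = fit_temperature(logits, labels)
probs = softmax_T(logits, T)
```

Note that subtracting the row maximum before dividing by T leaves the result identical to softmax(z/T); it is only there for numerical stability.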
{
"text": "(Precision 0.7585, Recall 0.7621, F-1 score 0.7603, Accuracy 0.7597.) MedicalQA Retriever After we determine the topic of the user's input, we can retrieve the answer from the Medline dataset. We split the paragraphs in the Medline data into single sentences and label them with the topics they belong to. We train the retriever using this augmented Medline data, split into train, validation and test sets with a ratio of 8:1:1. The current retriever is based on the BERT next-sentence-prediction model. We use the score from the model to rank each candidate answer, and concatenate the top 3 as the response of the agent. The evaluation result is shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 658,
"end": 665,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Train 12082 Valid 3021",
"sec_num": null
},
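The ranking-and-concatenation step can be sketched as follows; the scores here stand in for the BERT next-sentence-prediction scores, and the function name is ours.

```python
# Sketch of the retriever's final step: rank candidate answer sentences
# by score (stand-ins for BERT next-sentence-prediction scores) and
# concatenate the top 3 as the agent's response.

def compose_response(scored_sentences, k=3):
    """scored_sentences: list of (sentence, score) pairs."""
    ranked = sorted(scored_sentences, key=lambda p: p[1], reverse=True)
    return " ".join(s for s, _ in ranked[:k])
```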
{
"text": "Our initial version of the human avatar contains 49 key body parts for the front and 33 key body parts for the back. The front and back body part keywords are shown in Table 8 and 9. As future work, our goal is a more complete avatar with a comprehensive list of body parts.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "MedicalQA Grounding with Human Avatar",
"sec_num": "5.5"
},
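The Bot to Patient grounding direction can be sketched as a keyword scan over the answer text. The body-part list below is a small illustrative subset of the 49 front / 33 back keywords, and naive substring matching is our stand-in for whatever matcher the system actually uses.

```python
# Sketch of Bot -> Patient grounding: find which avatar body-part
# keywords occur in an answer so they can be highlighted. The keyword
# set is a small illustrative subset; substring matching is a naive
# stand-in (e.g. "colon" would also fire inside "colonial").

BODY_PARTS = {"large intestine", "colon", "stomach", "heart", "lungs"}

def parts_to_highlight(answer, parts=BODY_PARTS):
    text = answer.lower()
    return sorted(p for p in parts if p in text)
```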
{
"text": "Example grounded answers in our prototype system are shown in Figures 4 and 5 . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MedicalQA Grounding with Human Avatar",
"sec_num": "5.5"
},
{
"text": "For our SocialBot, we have currently collected data from Reddit, where each subreddit corresponds to a topic, as shown in Table 7. The topic classifier, posterior calibrator, and answer retriever are the same as in the MedicalQABot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SocialBot Implementation",
"sec_num": "5.6"
},
{
"text": "Currently implemented is the last-resort ChatBot, for which we have two versions: one derived from a base language model, and another from a fine-tuned language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ChatBot Implementation",
"sec_num": "5.7"
},
{
"text": "We use a large-scale pretrained language model, OpenAI GPT, as our base language model. We use the idea of transfer learning, which starts from a language model pre-trained on a large corpus, and then fine-tunes it on the end task. This idea was inspired by the huggingface convai project (Wolf, 2019) .",
"cite_spans": [
{
"start": 282,
"end": 294,
"text": "(Wolf, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": null
},
{
"text": "Fine-tuning on Medical Dialogue Dataset: We use the Medical Dialogue data (Zeng et al., 2020) to fine-tune the pre-trained language model. We use the questions as the chat history and the answers as the current reply. The training set contains the portion from healthcaremagic, and the test set the portion from icliniq.",
"cite_spans": [
{
"start": 74,
"end": 93,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": null
},
{
"text": "The evaluation results of our language model ChatBot are shown in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Language Models",
"sec_num": null
},
{
"text": "(ChatBot evaluation: pre-trained model NLL 5.4277, PPL 227.6291; fine-tuned model NLL 3.2750, PPL 26.4423.) 6 Related Work Medical Conversational Agents Academic and industry NLP research continues to push the frontiers of conversational agents; for example, Meena from Google was trained on a large collection of raw text (Daniel Adiwardana, 2020) . That work found that an end-to-end neural network with sufficiently low perplexity can surpass the sensibleness and specificity of existing chatbots that rely on complex, handcrafted frameworks. Medical dialogue has also been pursued from various angles for automatic diagnosis (Wei et al., 2018; Xu et al., 2019) .",
"cite_spans": [
{
"start": 280,
"end": 305,
"text": "(Daniel Adiwardana, 2020)",
"ref_id": null
},
{
"start": 594,
"end": 612,
"text": "(Wei et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 613,
"end": 629,
"text": "Xu et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLL",
"sec_num": null
},
{
"text": "Grounding to Human Avatar IBM Research developed a human avatar for patient-doctor interactions (Elisseeff, 2007) with a focus on visualizing electronic medical records. By clicking on a particular body part on the avatar, the doctor can trigger the search of medical records and retrieve relevant information. Their focus on electronic medical records is different from our grounded medical question answering focus.",
"cite_spans": [
{
"start": 96,
"end": 113,
"text": "(Elisseeff, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLL",
"sec_num": null
},
{
"text": "Another work (Charette, 2013) analyzed whether avatars can help close the doctor-patient communication gap. This study showed that poor communication between doctors and patients often leads patients to not follow their prescribed treatment regimens. Their thesis is that an avatar system can help patients better understand the doctor's diagnosis. They put medical data, FDA data and user-generated content into a single site that lets people search this integrated content by clicking on a virtual body.",
"cite_spans": [
{
"start": 13,
"end": 29,
"text": "(Charette, 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLL",
"sec_num": null
},
{
"text": "Quality and Quantity of Data In order for users to find the agent useful, and for the agent to really have a positive impact, we must provide answers to more questions. We need to extract more questions from a diverse set of reputable sources, while improving coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Challenges",
"sec_num": "7.1"
},
{
"text": "Comprehensive Visualizations For the visualization and human avatar grounding to be useful, a more comprehensive avatar is required, with all the parts that make up the human body. Medical ontologies such as SNOMED CT, part of the Unified Medical Language System (UMLS) 4 , contain a comprehensive list of the human body structures, which we can exploit and provide to a medical illustrator. 4 https://www.nlm.nih.gov/research/umls/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Challenges",
"sec_num": "7.1"
},
{
"text": "Privacy When we deploy our system, we will respect user privacy by not asking for identifiers. Additionally, we will store our data anonymously. Any real-world data will only be accessible to researchers directly involved with our study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7.2"
},
{
"text": "False Information False or erroneous information in our data sources could lead our agent to present answers with potentially dire consequences. Our approach of only answering medical questions for which we have high quality, human curated answers seeks to address this concern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7.2"
},
{
"text": "System Capabilities Transparency Following prior work on automated health systems, our goal is to be clear and transparent about system capabilities (Kretzschmar et al., 2019) .",
"cite_spans": [
{
"start": 149,
"end": 175,
"text": "(Kretzschmar et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7.2"
},
{
"text": "We have presented a high level overview of the design philosophy of Marie Bot Wellbeing, a grounded, multi-interaction mode well-being conversational agent. The agent is designed to mitigate the limited adoption that plagues agents for healthcare despite patient interest. We reported details of our prototype implementation, and preliminary results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "There is much more to be done to fully realize Marie, which is part of our ongoing work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "1 https://medlineplus.gov/xml.html 2 https://www.nih.gov/ 3 https://www.kaggle.com/rmisra/news-category-dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A question-entailment approach to question answering",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Bioinform",
"volume": "20",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3119-4"
]
},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha and Dina Demner-Fushman. 2019. A question-entailment approach to question answer- ing. BMC Bioinform., 20(1):511:1-511:23.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Can Avatars Help Close the Doctor-Patient Communication Gap?",
"authors": [
{
"first": "Robert",
"middle": [
"N"
],
"last": "Charette",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert N. Charette. 2013. Can Avatars Help Close the Doctor-Patient Communication Gap?",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On Calibration of Modern Neural Networks",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Yu Sun Kilian",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Weinberger Chuan Guo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pleiss",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04599v2"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun Kilian Q. Weinberger Chuan Guo, Geoff Pleiss. 2017. On Calibration of Modern Neural Networks. arXiv:1706.04599v2.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Rethinking the doseresponse relationship between usage and outcome in an online intervention for depression: randomized controlled trial",
"authors": [
{
"first": "Liesje",
"middle": [],
"last": "Donkin",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Hickie",
"suffix": ""
},
{
"first": "Sharon",
"middle": [
"L"
],
"last": "Christensen",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Naismith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neal",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of medical Internet research",
"volume": "15",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liesje Donkin, Ian B Hickie, Helen Christensen, Sharon L Naismith, Bruce Neal, Nicole L Cock- ayne, and Nick Glozier. 2013. Rethinking the dose- response relationship between usage and outcome in an online intervention for depression: random- ized controlled trial. Journal of medical Internet re- search, 15(10):e231.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "IBM Research Unveils 3D Avatar to Help Doctors Visualize Patient Records and Improve Care",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Elisseeff",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Elisseeff. 2007. IBM Research Unveils 3D Avatar to Help Doctors Visualize Patient Records and Improve Care.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial",
"authors": [
{
"first": "Kathleen",
"middle": [
"Kara"
],
"last": "Fitzpatrick",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Darcy",
"suffix": ""
},
{
"first": "Molly",
"middle": [],
"last": "Vierhile",
"suffix": ""
}
],
"year": 2017,
"venue": "JMIR mental health",
"volume": "4",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior ther- apy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR mental health, 4(2):e19.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Can your phone be your therapist? young people's ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support",
"authors": [
{
"first": "Kira",
"middle": [],
"last": "Kretzschmar",
"suffix": ""
},
{
"first": "Holly",
"middle": [],
"last": "Tyroll",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Pavarini",
"suffix": ""
}
],
"year": 2019,
"venue": "Biomedical informatics insights",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kira Kretzschmar, Holly Tyroll, Gabriela Pavarini, Ar- ianna Manzini, Ilina Singh, and NeurOx Young Peo- ple's Advisory Group. 2019. Can your phone be your therapist? young people's ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomedical in- formatics insights, 11:1178222619829083.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Mayo-Wilson",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological medicine",
"volume": "37",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Mayo-Wilson. 2007. Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis. Psychological medicine, 37(8):1211-author.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a socialbot: Lessons learned from 10m conversations",
"authors": [
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL Keynotes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mari Ostendorf. 2018. Building a socialbot: Lessons learned from 10m conversations. NAACL Keynotes.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The right tool for the job: Matching model and instance complexities",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "6640--6651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. 2020. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6640-6651. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Smartphone ownership and interest in mobile applications to monitor symptoms of mental health conditions",
"authors": [
{
"first": "John",
"middle": [],
"last": "Torous",
"suffix": ""
},
{
"first": "Rohn",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Matcheri",
"middle": [],
"last": "Keshavan",
"suffix": ""
}
],
"year": 2014,
"venue": "JMIR mHealth and uHealth",
"volume": "2",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Torous, Rohn Friedman, and Matcheri Keshavan. 2014. Smartphone ownership and interest in mobile applications to monitor symptoms of mental health conditions. JMIR mHealth and uHealth, 2(1):e2.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Task-oriented dialogue system for automatic diagnosis",
"authors": [
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Qianlong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Huaixiao",
"middle": [],
"last": "Tou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xuan-Jing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Dai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "201--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuan-Jing Huang, Kam-Fai Wong, and Xiang Dai. 2018. Task-oriented dialogue sys- tem for automatic diagnosis. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 201-207.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How to build a State-of-the-Art Conversational AI with Transfer Learning",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf. 2019. How to build a State-of-the-Art Conversational AI with Transfer Learning.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "End-to-end knowledge-routed relational dialogue system for automatic diagnosis",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qixian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Jianheng",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7346--7353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jian- heng Tang, and Liang Lin. 2019. End-to-end knowledge-routed relational dialogue system for au- tomatic diagnosis. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 7346-7353.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Meddialog: A large-scale medical dialogue dataset",
"authors": [
{
"first": "Guangtao",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Wenmian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zeqian",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ruisi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiaqi",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9241--9250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangtao Zeng, Wenmian Yang, Zeqian Ju, Yue Yang, Sicheng Wang, Ruisi Zhang, Meng Zhou, Jiaqi Zeng, Xiangyu Dong, Ruoyu Zhang, et al. 2020. Meddialog: A large-scale medical dialogue dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9241-9250.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An illustration of the SocialBot interaction mode 3 MedicalQABot Mode 3.1 Knowledge vault of QA pairs",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Human avatar visual answer example from our prototype: The affected body part, the liver, is highlighted on the avatar.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Human Avatar Visual Answer Example From our Prototype: Diabetes/Blood Sugar and how the avatars help close the doctor-patient communication gap.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Valid accuracy 0.9970</td></tr><tr><td>Test accuracy 0.9972</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Interaction Mode Prediction Data",
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Interaction Mode Prediction Evaluation Results",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>: Medical Topic Classifier Training Data Infor-</td></tr><tr><td>mation</td></tr><tr><td>Train accuracy 0.8812</td></tr><tr><td>Test accuracy 0.8358</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF4": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Medical Topic Classifier Evaluation Results",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "MedicalQA Retriever Evaluation Results",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Language Model Evaluation. Negative log likelihood (NLL) and Perplexity (PPL)",
"num": null
},
"TABREF8": {
"content": "<table><tr><td>ankle</td><td>arm</td><td>breast</td></tr><tr><td>cheeks</td><td>chin</td><td>collar bone</td></tr><tr><td>ear lobe</td><td>ear</td><td>elbow</td></tr><tr><td>eyebrows</td><td>eyelashes</td><td>eyelids</td></tr><tr><td>eyes</td><td>finger</td><td>foot</td></tr><tr><td>forehead</td><td>groin</td><td>hair</td></tr><tr><td>hand</td><td>heart</td><td>hip</td></tr><tr><td>intestines</td><td>jaw</td><td>knee</td></tr><tr><td>lips</td><td>liver</td><td>lungs</td></tr><tr><td>mouth</td><td>neck</td><td>nipple</td></tr><tr><td>nose</td><td>nostril</td><td>pancreas</td></tr><tr><td>pelvis</td><td>rectum</td><td>ribs</td></tr><tr><td>shin</td><td>shoulder blade</td><td>shoulder</td></tr><tr><td>spinal cord</td><td>spine</td><td>stomach</td></tr><tr><td>teeth</td><td>thigh</td><td>throat</td></tr><tr><td>thumb</td><td>toes</td><td>tongue</td></tr><tr><td>waist</td><td>wrist</td><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Number of questions we extracted from each SubReddit",
"num": null
},
"TABREF9": {
"content": "<table><tr><td>ankle</td><td>anus</td><td>arm</td></tr><tr><td>back</td><td>brain</td><td>buttocks</td></tr><tr><td>calf</td><td>ear lobe</td><td>ear</td></tr><tr><td>elbow</td><td>finger</td><td>foot</td></tr><tr><td>heart</td><td>intestines</td><td>kidney</td></tr><tr><td>knee</td><td>liver</td><td>lungs</td></tr><tr><td>neck</td><td>palm</td><td>pancreas</td></tr><tr><td>pelvis</td><td>rectum</td><td>ribs</td></tr><tr><td>scalp</td><td colspan=\"2\">shoulder blade shoulder</td></tr><tr><td>spinal cord</td><td>spine</td><td>stomach</td></tr><tr><td>thigh</td><td>thumb</td><td>wrist</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Human Avatar Front Body Parts",
"num": null
},
"TABREF10": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Human Avatar Back Body Keywords. Some body parts can be visualized from both the front and back.",
"num": null
}
}
}
}