{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:20.852162Z"
},
"title": "Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization",
"authors": [
{
"first": "Bharath",
"middle": [],
"last": "Chintagunta",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Namit",
"middle": [],
"last": "Katariya",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Amatriain",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In medical dialogue summarization, summaries must be coherent and must capture all the medically relevant information in the dialogue. However, learning effective models for summarization require large amounts of labeled data which is especially hard to obtain. We present an algorithm to create synthetic training data with an explicit focus on capturing medically relevant information. We utilize GPT-3 as the backbone of our algorithm and scale 210 human labeled examples to yield results comparable to using 6400 human labeled examples (\u223c30x) leveraging low-shot learning and an ensemble method. In detailed experiments, we show that this approach produces high quality training data that can further be combined with human labeled data to get summaries that are strongly preferable to those produced by models trained on human data alone both in terms of medical accuracy and coherency.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In medical dialogue summarization, summaries must be coherent and must capture all the medically relevant information in the dialogue. However, learning effective models for summarization require large amounts of labeled data which is especially hard to obtain. We present an algorithm to create synthetic training data with an explicit focus on capturing medically relevant information. We utilize GPT-3 as the backbone of our algorithm and scale 210 human labeled examples to yield results comparable to using 6400 human labeled examples (\u223c30x) leveraging low-shot learning and an ensemble method. In detailed experiments, we show that this approach produces high quality training data that can further be combined with human labeled data to get summaries that are strongly preferable to those produced by models trained on human data alone both in terms of medical accuracy and coherency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With increasing usage of telehealth platforms , large scale ecosystems of providers and patients have become apparent. This has exacerbated the need for comprehensive visit summaries of the medical dialogues by the attending practitioner in order to facilitate accurate handoffs to other care providers or as a means of recording the interaction. However, having providers write summaries after each encounter is not only time consuming but also costly, limiting the scalability of telehealth platforms (Shanafelt et al., 2016) In these settings, an automated summarizer that can assist the practitioners can be extremely valuable. However, an important challenge of end-toend medical dialogue summarization is the lack of large scale annotated datasets. Annotation of medical dialogues is expensive and slow because they need to be curated by trained experts. This is further compounded by the fact that labeled data may not be publicly shared because of patient privacy concerns and HIPAA regulations.",
"cite_spans": [
{
"start": 503,
"end": 527,
"text": "(Shanafelt et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent approaches to summarization (Qi et al., 2020; Zhang et al., 2019) use transfer learning where a pre-trained model (e.g. through self supervision of learning a language model) is fine tuned with a labeled dataset. However, fine-tuning still requires hundreds to thousands of labeled examples to obtain reasonable performance. Methods such as (Joshi et al., 2020) aim to partially overcome these issues through modeling strategies that directly learn important inductive biases from smaller amounts of data. In addition, (Joshi et al., 2020) also handled data sparsity by leveraging a key insight of sequential nature of information flow in a medical dialogue: global summary of the dialogue can be composed from local dialogue turns (snippets). This enables collecting training data for snippets as opposed to the full conversation -an insight, we use in our paper as well.",
"cite_spans": [
{
"start": 35,
"end": 52,
"text": "(Qi et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 53,
"end": 72,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 348,
"end": 368,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 526,
"end": 546,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, OpenAI developed GPT-3, a neural language model that is capable of natural language generation and completion of tasks like classification, question-answering, and summarization (Brown et al., 2020) . The focus of that work is to enable task-agnostic and zero-shot or low-shot performance as opposed to a pre-trained model that needs to be fine-tuned separately on every downstream task. In this paper, we investigate the following question: How can a low-shot learner such as GPT-3 be leveraged to scale training data for medical dialogue summarization models? In answering this question within the context of GPT-3 as a black box proprietary API 1 , we took into account multiple considerations:",
"cite_spans": [
{
"start": 188,
"end": 208,
"text": "(Brown et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Medical Correctness (Joshi et al., 2020) : Medical summarization warrants high recall and therefore the summarizer should be good at (1) capturing all the medical information (med-ications, symptoms, etc.) discussed in the dialogue and (2) discern all the affirmatives and negatives on medical conditions correctly (e.g. no allergies, having a cough for 2 days).",
"cite_spans": [
{
"start": 22,
"end": 42,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Privacy Concerns: At inference time, an API call to external services such GPT-3 may not always be possible due to HIPAA and privacy concerns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Practitioner in the loop: The technique needs to be easily amenable to a feedback loop that allows for leveraging manually curated human labels. This feedback loop is extremely important because the diversity and the long tail of data distribution in medical dialogue means that there can be parts of the summary that need to be edited by practitioners for medical correctness and completeness. Note that these edits can be used as additional data for improving the underlying model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Taking into account these considerations, this paper makes the following contributions ( Figure 1 for a quick overview):",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce a medically-aware GPT-3 data labeler, GPT-3-ENS , that combines medical knowledge and an ensemble of GPT-3 for the purpose of medical dialogue summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce the idea of using GPT-3-ENS as a dataset generator to facilitate learning an in-house summarization model. Our experiments show that we can obtain the same performance as that of human labeled dataset with 30x smaller amount of human labeled data. With only 210 expert curated summaries and GPT-3 as a labeled data simulator, we can mimic the performance of a summarization model trained on 6400 expert curated summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 By combining generated datasets from GPT-3-ENS with a human labeled dataset, we show that we can obtain better performance than models trained on either one of the data sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows: \u00a7 2 discusses related work, \u00a7 3 explores whether GPT-3 can be used directly for medical summarization, \u00a7 4 introduces our approach, \u00a7 5 and \u00a7 6 describe our datasets and metrics respectively while \u00a7 7 illustrates our experiments. We end the paper with \u00a7 8 discussing our conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Overview of our proposed approach: we train models on a mix of GPT-3-ENS synthesized and human labeled data to get performance better than models trained on either of the sources 2 Related work Summarization Emergence of sequence to sequence models and attention mechanisms (Sutskever et al., 2014) has led to rapid progress on extractive (Nallapati et al., 2017) , abstractive (Nallapati et al., 2016; Zhang et al., 2019) and hybrid models (See et al., 2017; Gu et al., 2016) for summarization. Much of the recent work has shown these models to generate near-human coherent summaries while retaining reasonable factual correctness. Dialogue summarization: While most neural summarization has focused on news corpora, recent work has tried to tackle unique challenges associated with summarizing dialogues. (Goo and Chen, 2018) proposes using dialogue history encoders based on the type of dialogue section to inform the generation. (Liu et al., 2019a) propose using key points as a means of categorizing sections of dialogue.",
"cite_spans": [
{
"start": 284,
"end": 308,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 349,
"end": 373,
"text": "(Nallapati et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 388,
"end": 412,
"text": "(Nallapati et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 413,
"end": 432,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 451,
"end": 469,
"text": "(See et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 470,
"end": 486,
"text": "Gu et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 817,
"end": 837,
"text": "(Goo and Chen, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 943,
"end": 962,
"text": "(Liu et al., 2019a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Medical dialogue summarization Existing work (Alsentzer and Kim, 2018; Zhang et al., 2018; Liu et al., 2019b; Krishna et al., 2020a,b; Joshi et al., 2020) in this space focuses on effective summarization by incorporating medical knowledge from a modeling perspective. Our work also focuses on incorporating medical knowledge from a data labeling perspective. We show how we leverage pretrained language models and low-shot learning (Brown et al., 2020) to collect labeled data for medical dialogue summarization. We also show how this data can improve performance over models that are trained solely on existing human labeled data.",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Alsentzer and Kim, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 71,
"end": 90,
"text": "Zhang et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 91,
"end": 109,
"text": "Liu et al., 2019b;",
"ref_id": "BIBREF14"
},
{
"start": 110,
"end": 134,
"text": "Krishna et al., 2020a,b;",
"ref_id": null
},
{
"start": 135,
"end": 154,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Background: Can GPT-3 serve as a medical summarizer?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ignoring the privacy concerns and practitioner-inthe-loop considerations, we first explore whether GPT-3 (Brown et al., 2020 ) is a good medical summarizer by itself. GPT-3 takes as input a priming context to perform the task on a previously unseen example. Priming context refers to the text description of a task and a few demonstrations of the task being accomplished (in our case, that would be dialogue snippet summarization). Table 1 column 2 provides examples of summaries generated by the GPT-3 model. We can clearly see that it misses a number of important pieces of information in the snippets -first, missing medical concepts making the summary unusable (Rows 1-2). Second, the model may not always get the affirmations correct (Row 3). Third, the summary may repeat redundant information from the doctor's queries (Row 4).",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "GPT-3 (Brown et al., 2020",
"ref_id": null
}
],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on these observations, we might prematurely conclude that GPT-3 can not be used for medical summarization task. However, our key observation in exploring GPT-3 is that it is sensitive to the priming context (also reported in (Liu et al., 2021) ), as the model does not learn but just adheres to the examples given. As we show in 4, we exploit this variability in GPT-3 output via ensembling and infusion of medical knowledge so that it can be used as a part of an effective low-shot learning approach to medical summarization.",
"cite_spans": [
{
"start": 231,
"end": 249,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We are interested in a model that uses only a small amount of human labeled data to learn an effec- Table 1 : Input dialogue snippets along with summaries generated by GPT-3 in column 2 and our approach, GPT-3-ENS , in column 3. tive medical dialogue summarizer. At the same time, we want such a model to be used in a practical practitioner-in-the-loop setting where medical correctness and patient privacy are of paramount importance.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "In order to achieve these goals, we propose a two-pronged approach 1. Introduce GPT-3-ENS where we infuse medical knowledge into GPT-3 and use it within an inner loop to make it effective at medical summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "2. Leverage GPT-3-ENS as a data generator to obtain a large training set 2 to train an in-house medical dialogue summarization model. Such an in-house model can be used at inference time without the practical constraints related to protecting patient privacy that would require full de-identification to be applied in any conversation, if we were to access the GPT-3 service. It also lends itself well to the practioner-in-the-loop setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "4.1 GPT-3-ENS : Medically-aware ensemble of As discussed in 3, GPT-3 is quite sensitive to the priming context. While one approach may be to provide GPT-3 with the most informative context for a task, this itself is a daunting task and can potentially be tackled if we had a large number of labeled examples (which is the exact problem we want to tackle with GPT-3). Drawing on the learning from vast literature in ensembling techniques c.f. (Bishop et al., 1995) , our first key insight is that if we can generate multiple summaries from GPT-3 using a variety of priming contexts, then we should be able to ensemble these outputs to identify the summary that is ideal for the dialogue. This insight leads to a question on how to ensemble multiple text summaries. The answer to this question relies on the core requirement for medical summarization: we care about the coverage of medical concepts mentioned and therefore the best ensembling function is the one that returns the summary with the most medical information in the dialog input.",
"cite_spans": [
{
"start": 442,
"end": 463,
"text": "(Bishop et al., 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "In Algorithm 1 we provide our approach to the medically aware GPT-3 ensemble GPT-3-ENS . We assume access to a small set of labeled examples L. For each input dialog snippet, D, we get K summaries, by invoking GPT-3 each time with N examples sampled randomly without replacement from L. We also assume access to a medical entity extractor that can discern the medical concepts from both the dialogue snippet and the summary. The algorithm returns the best summary that has the highest recall in terms of capturing the medical concepts in the dialogue. For this purpose, we use an in-house medical concept extractor MEDICALEN-TITYRECOGNIZER that can identify medical concepts from a given piece of text. This extractor has access to the universe of medical concepts based on Unified Medical Knowledge Systems 3 , which includes patient symptoms, disorders, laboratory tests and medications. Note that any medical entity recognizer (cf. (Fu et al., 2019) and references therein) that has coverage for all these types of medical concepts found in medical conversations can be used.",
"cite_spans": [
{
"start": 935,
"end": 952,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "Algorithm 1 Medically aware GPT-3 ensemble summarizer (GPT-3-ENS ) Require: dialogue snippet T , ensembling trials K, universe L of labeled examples, medical entity extractor M edicalEntityRecognizer, GPT3 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "C * \u2190 M edicalEntityRecognizer(T ) 2: for i \u2190 1, \u2022 \u2022 \u2022 , K do 3: S \u2190 sample N examples from L 4: summary i \u2190 GPT3(S, T ) 5: C i \u2190 M edicalEntityRecognizer( summary i ) 6: end for 7: best \u2190 arg max i |C i \u2229C * | |C * | 8: return summary best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
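{
"text": "The following is a minimal Python sketch of Algorithm 1 (a non-authoritative illustration; gpt3_summarize and medical_entity_recognizer are hypothetical stand-ins for the GPT-3 API wrapper and the in-house concept extractor described above):\n\nimport random\n\ndef gpt3_ens(snippet, labeled_examples, K, N, medical_entity_recognizer, gpt3_summarize):\n    # C* in Algorithm 1: concepts mentioned in the dialogue snippet.\n    target_concepts = medical_entity_recognizer(snippet)\n    pool = list(labeled_examples)\n    random.shuffle(pool)  # assumes len(pool) >= K * N\n    best_summary, best_recall = None, -1.0\n    for i in range(K):\n        # N priming examples per trial; no example is reused across the K trials.\n        priming = pool[i * N:(i + 1) * N]\n        summary = gpt3_summarize(priming, snippet)\n        covered = medical_entity_recognizer(summary)\n        # Medical concept recall of this candidate (|C_i \u2229 C*| / |C*|).\n        recall = len(covered & target_concepts) / max(len(target_concepts), 1)\n        if recall > best_recall:\n            best_summary, best_recall = summary, recall\n    return best_summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},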
{
"text": "Reconsider Table 1 for qualitative comparison between GPT-3 and GPT-3-ENS . We can see that summaries obtained using GPT-3-ENS capture the medical concepts comprehensively (shown in bold) and also have better grammatical structure. We also quantitatively validate the summaries on a small data set distinct from what is used for priming(see \u00a7 6.2 for guidelines). In Figure 2 , based on doctor evaluation, we can see that GPT-3-ENS is significantly better at summarization than GPT-3 . : Doctor evaluation of which among GPT-3 and GPT-3-ENS summaries they considered \"best\" showing that GPT-3-ENS is a better approach for labeling",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 1",
"ref_id": null
},
{
"start": 367,
"end": 375,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Infusing Medical Knowledge in GPT-3 for use as a Data Generator",
"sec_num": "4"
},
{
"text": "We use GPT-3-ENS described in 4.1 as our labeled data generator. In particular, we use our approach to collect a large amount of labeled examples that serve as inputs to training an off-the-shelf summarization model. This resolves the concern of using GPT-3 in a real world application where the patient's conversation (in its raw form) needs to be exchanged with an external third party such as OpenAI/GPT-3 which may not have design/privacy regulations around HIPAA. In our approach, however, with the help of experts, it is easy to ensure that the dialogues that will used for priming as well as in the training set are chosen following privacy protocols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-3-ENS as a data labeler",
"sec_num": "4.2"
},
{
"text": "We collected a random subset of medical conversation dialogues from our chat-based telemedicine platform. Often medical conversation follows a linear ordering of medical history gathering (understanding patient symptoms) that enables creating the summary of the dialog by stitching together summaries of the snippets in chronological order (Joshi et al., 2020) . Therefore, we split each dialogue into a series of local dialogue snippets using a simple heuristic: the turns between two subsequent questions by a physician corresponds to a snippet. The length of these snippets ranged anywhere from two turns (a physician question and patient response) to ten turns. We had medical doctors 4 summarize these snippets. The doctors were asked to summarize the sections as they would for a typical clinical note by including all of the relevant history taking information. If a local snippet did not contain any history taking information it was excluded from annotations. For example in the beginning or end of conversations there may be turns that are purely greetings and not part of the patient history taking process. Further some snippets maybe purely educational in nature and are excluded as well. We eventually obtained a total of 6900 labeled snippet-summary pairs.",
"cite_spans": [
{
"start": 340,
"end": 360,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
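{
"text": "As a rough illustration of the snippet-splitting heuristic above, the following sketch assumes each dialogue is a list of (speaker, text) turns with speaker in {'DR', 'PT'}; treating every physician turn as a question boundary is a simplification of the actual heuristic:\n\ndef split_into_snippets(turns):\n    # A new snippet starts at each physician (DR) turn, so a snippet spans\n    # the turns between two subsequent physician questions.\n    snippets, current = [], []\n    for speaker, text in turns:\n        if speaker == 'DR' and current:\n            snippets.append(current)\n            current = []\n        current.append((speaker, text))\n    if current:\n        snippets.append(current)\n    return snippets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},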
{
"text": "Human labeled dataset train/test split: From the 6900 labeled snippet-summary pairs (denoted as H 6900 ), we generated a randomly sampled test set T = 500 that we use in all our evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "The dataset H 6900 \u2212 T is used to generate the priming dataset for GPT-3 related models as well as the datasets we use to train our summarization models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "GPT-3-ENS dataset: Let GCF k p be the dataset of size p generated using GPT-3-ENS with k ensembling trials. To generate dataset GCF K=k , we require {H n } k i=1 datasets (note the independence on p), and thus n \u00d7 k labeled examples for priming. These n \u00d7 k examples are randomly sampled from the universe of human labeled examples H 6900 \u2212 T . In our experiments, we sample without replacement so that no examples are reused across the k tries. To allow comparison between our experiments with different K values, we use the same seed for random sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
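{
"text": "A sketch of how the k disjoint priming sets could be drawn (an illustration, assuming H is the pool H_6900 \u2212 T as a Python list; the fixed seed mirrors the comparability requirement above):\n\nimport random\n\ndef draw_priming_sets(H, k, n, seed=0):\n    # n * k examples sampled without replacement, so no example is reused\n    # across the k ensembling trials; the same seed keeps experiments with\n    # different K values comparable.\n    rng = random.Random(seed)\n    chosen = rng.sample(range(len(H)), n * k)\n    return [[H[j] for j in chosen[i * n:(i + 1) * n]] for i in range(k)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},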
{
"text": "Multiple studies have shown that automated metrics in NLP do not always correlate well to human judgments as they may not fully capture coherent sentence structure and semantics (Stephen Roller, 2020; Kry\u015bci\u0144ski et al., 2019) . Since medical dialogue summarization would be used to assist health care, it is important for doctors to evaluate the quality of the output.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "Kry\u015bci\u0144ski et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6"
},
{
"text": "While we measure model performance on standard metrics of ROUGE (Lin, 2004) 5 , we also measure a model's effectiveness in capturing the medical concepts that are of importance, and their negations (Joshi et al., 2020) Medical Concept Coverage: The concept coverage set of metrics captures the coverage of medical terms in the model's output summary with respect to the ground truth. In particular, let C be the set of medical concepts in the reference summary and\u0108 be the set of concepts in the summary output by the",
"cite_spans": [
{
"start": 64,
"end": 75,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 76,
"end": 77,
"text": "5",
"ref_id": null
},
{
"start": 198,
"end": 218,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},
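{
"text": "For reference, ROUGE-L F1 can be computed with the rouge-score package cited in footnote 5 (a sketch with the default configuration; the example strings are illustrative):\n\nfrom rouge_score import rouge_scorer\n\nscorer = rouge_scorer.RougeScorer(['rougeL'])\n# score() takes (reference, prediction) and returns precision/recall/F1.\nscores = scorer.score('Has had a cough for about 2 days.',\n                      'Patient has had a cough for 2 days.')\nprint(scores['rougeL'].fmeasure)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},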
{
"text": "model. Then, Concept recall = N n=1 |\u0108 (n) \u2229C (n) | N n=1 |C (n) | and Concept precision = N n=1 |\u0108 (n) \u2229C (n) | N n=1 |\u0108 (n) | .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},
{
"text": "We use these to compute a Concept F1 6 We use an in-house medical entity extractor to extract medical concepts in the summary. Medical concepts in the decoded summary that weren't present in the original conversation would be false positives and vice versa for false negatives.",
"cite_spans": [
{
"start": 37,
"end": 38,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},
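{
"text": "A minimal sketch of the concept coverage metrics as defined above, where extract_concepts is a hypothetical stand-in for the in-house medical entity extractor returning a set of normalized concepts:\n\ndef concept_metrics(references, predictions, extract_concepts):\n    # Micro-averaged concept precision, recall and F1 over N examples.\n    tp = ref_total = pred_total = 0\n    for ref, pred in zip(references, predictions):\n        ref_c, pred_c = extract_concepts(ref), extract_concepts(pred)\n        tp += len(ref_c & pred_c)\n        ref_total += len(ref_c)\n        pred_total += len(pred_c)\n    # Conservative 0.0 scores when the extractor finds no concepts at all.\n    recall = tp / ref_total if ref_total else 0.0\n    precision = tp / pred_total if pred_total else 0.0\n    denom = precision + recall\n    f1 = 2 * precision * recall / denom if denom else 0.0\n    return precision, recall, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},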
{
"text": "Negation Correctness: To measure the effectiveness of the model to identify the negated status of medical concepts, we use Negex (Harkema et al., 2009) to determine negated concepts. Of the concepts present in the decoded summary, we evaluate precision and recall on whether the decoded negations were accurate for the decoded concepts and compute a negation F1 6 .",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Harkema et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},
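{
"text": "A simplified NegEx-style check, shown only to make the negation metric concrete; the toy trigger list stands in for the full NegEx rule set of (Harkema et al., 2009):\n\nNEGATION_TRIGGERS = ('no ', 'not ', 'denies ', 'without ')  # toy subset\n\ndef is_negated(concept, text):\n    # Crude window check: is the concept preceded by a negation trigger\n    # within a short window? Real NegEx also handles scope and pseudo-negations.\n    idx = text.lower().find(concept.lower())\n    if idx < 0:\n        return False\n    window = text.lower()[max(0, idx - 20):idx]\n    return any(t in window for t in NEGATION_TRIGGERS)\n\nNegation precision and recall are then computed over the concepts present in the decoded summary, by comparing the predicted negation status against the reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated metrics",
"sec_num": "6.1"
},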
{
"text": "We also had doctors, who serve patients on our telehealth platform, evaluate the summaries produced by the models. Given the local dialogue snippets and the generated summary, we asked them to evaluate the extent to which the summary captured factually correct and medically relevant information from the snippet. Depending on what percentage of the concepts were correctly mentioned in the decoded summary of the provided snippet, the doctors graded the summaries with All (100%), Most (at least 75%), Some (at least 1 fact but less than 75%), None (0%) labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doctor Evaluation",
"sec_num": "6.2"
},
{
"text": "We also formulated a comparison task where given summaries generated by different models and the associated dialogue, they were asked which summary was the \"best\" from a usability perspective. Usability was defined as whether the summary could stand in as a replacement for reading the dialogue snippet i.e. whether it captures the correct concepts from the snippet and whether the negations are accurate. The doctors had the ability to use \"all\" and \"none\" in this task depending on if all models being compared captured a good summary or if none of them did.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doctor Evaluation",
"sec_num": "6.2"
},
{
"text": "To avoid bias, the doctors do not know the model that produced the summary in both the experiments. In the comparison task, the summaries were provided in randomized order so that there is no bias in the order of presentation of the summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doctor Evaluation",
"sec_num": "6.2"
},
{
"text": "Additional models considered: To evaluate the efficacy of GPT-3-ENS as a source of labeled data generator, we considered models with distinct objective functions for abstractive and hybrid (abstractive/extractive) summarization. We used PEGASUS (Zhang et al., 2019) for abstractive summarization and Dr. Summarize which we denote as DRSUM (Joshi et al., 2020) for extractive summarization. For DRSUM , we also use their best performing variant (referred as 2M-PGEN in (Joshi et al., 2020) ) which penalizes generator loss and favors extractive copying.",
"cite_spans": [
{
"start": 245,
"end": 265,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 339,
"end": 359,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 468,
"end": 488,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},
{
"text": "Implementation Details: We used GPT-3 via the API released by OpenAI 7 . Maximum response length was set to 128 tokens, temperature to 0.6 and presence and frequency penalties both set to 0. For GPT-3-ENS , we use K = 10 ensembling trials for all our experiments, unless otherwise specified. We observed that N = 21 was the maximum number of examples we could prime GPT-3 with given the maximum context window length of 2048 tokens for the API. We therefore fix the size of our priming dataset to be 21 in all experiments which invoke GPT-3. Hence we set L to be a random subset of 210 examples from H 6900 \u2212 T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},
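{
"text": "A sketch of the GPT-3 call with the settings listed above, using the completion API of that era (the engine name is an assumption, since the exact engine is not stated):\n\nimport openai\n\ndef gpt3_complete(prompt):\n    # Settings from above: 128-token responses, temperature 0.6,\n    # presence and frequency penalties of 0.\n    response = openai.Completion.create(\n        engine='davinci',  # assumption: engine not specified here\n        prompt=prompt,\n        max_tokens=128,\n        temperature=0.6,\n        presence_penalty=0,\n        frequency_penalty=0,\n        stop='[STOP]',\n    )\n    return response.choices[0].text.strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},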
{
"text": "We followed parameter settings for DR-SUM from (Joshi et al., 2020) for pretraining on the CNN-Dailymail dataset. We then fine-tuned on our summarization task dataset with a batch size of 16, source_max_tokens = 400, response_max_tokens = 200 and max_grad_norm clipped at 2.0, for two epochs with a learning rate of 0.15 using Adagrad optimizer.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},
{
"text": "We used the PEGASUS implementation that is pretrained on CNN-Dailymail 8 provided by (Wolf et al., 2020) . We fine-tuned it on our summarization task dataset with an effective batch size of 256, source_max_tokens = 512, response_max_tokens = 128 for two epochs using Adafactor 9 optimizer at the default settings in Hugging Face. For both PEGASUS and DRSUM , we used a beam size of four for decoding.",
"cite_spans": [
{
"start": 85,
"end": 104,
"text": "(Wolf et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},
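{
"text": "A condensed sketch of the PEGASUS fine-tuning setup above with Hugging Face Transformers (data loading, batching to an effective batch size of 256, and the training loop are elided; the example strings are illustrative):\n\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\nfrom transformers.optimization import Adafactor\n\nmodel_name = 'google/pegasus-cnn_dailymail'\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name)\noptimizer = Adafactor(model.parameters())  # default settings\n\nsnippet = 'DR: How long have you had the cough?[SEP]PT: About 2 days'\nsummary = 'Has had a cough for about 2 days.'\ninputs = tokenizer(snippet, truncation=True, max_length=512, return_tensors='pt')\nlabels = tokenizer(summary, truncation=True, max_length=128, return_tensors='pt').input_ids\n\nloss = model(**inputs, labels=labels).loss  # one optimization step\nloss.backward()\noptimizer.step()\noptimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "7"
},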
{
"text": "We compare PEGASUS and DRSUM trained on human labeled data H 6400 and GPT-3-ENS synthesized data GCF K=10 6400 . Note that synthesizing GCF K=10 6400 needed all of 21 \u2022 10 = 210 human labeled examples, where 21, as a reminder, is the maximum number of inputs that can be used for priming. Table 2 compares quantitative performance of PEGASUS and DRSUM trained on these two datasets. The main observation is that with only Table 2 : Automated evaluation of summarization models trained with different data labeling methodologies.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 2",
"ref_id": null
},
{
"start": 422,
"end": 429,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training summarization models using data labeled by GPT-3-ENS",
"sec_num": "7.1"
},
{
"text": "Note that the amount of human labeled data is still pretty low (210), compared to 6400 when we do not use our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training summarization models using data labeled by GPT-3-ENS",
"sec_num": "7.1"
},
{
"text": "210 human labeled examples, our approach GPT-3-ENS is able to generate a large amount of training data for both pre-trained summarization models, PEGASUS and DRSUM , in such a manner that they yield comparable (or better perfomance) than if they had been trained with only 6400(\u223c30x) human labeled examples. For PEGASUS , the summarization performance improves drastically compared to model fine-tuned using only the human labeled data. We hypothesize that data generated from GPT-3-ENS can serve as quality training data for abstractive models such as PEGASUS but not so much for hybrid models such as DRSUM due to GPT-3 being a generative language model. The summaries written by our human doctors have writing structure similar to that of a hybrid summarization model such as DR-SUM that is more extractive in nature. This can explain why DRSUM did not show performance gain when using generated data from GPT-3-ENS . The key, however, is that it still did perform on par.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training summarization models using data labeled by GPT-3-ENS",
"sec_num": "7.1"
},
{
"text": "In the same Table 2 , we also present the results with increased amounts of data (12800 and 25600) from GPT-3-ENS . There is little or no further improvement in the automated metrics of concept and negation F1. However, ROUGE-L F1 improves reflecting the improvements in coherency of the summaries. We leave this area as future work to explore.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training summarization models using data labeled by GPT-3-ENS",
"sec_num": "7.1"
},
{
"text": "Since GPT-3 relies on limited local priming context (N = 21) it may not be agile in providing robust summaries for a multitude of variations in snippets, focusing on the exploitation part of the exploration-exploitation trade-off. We hypothesize that best summaries then will be synthesized by a model trained on a dataset with human and GPT-3-ENS labeled examples. To evaluate this, we introduced a mixing parameter \u03b1, the ratio of GPT-3-ENS labeled examples to human labeled examples. For instance, with 6400 human labeled examples, \u03b1 = 0.5 implies the dataset contains 6400 human labeled examples along with 0.5 * 6400 = 3200 GPT-3-ENS generated examples. We experiment with \u03b1 = 0.5, 1, 2, 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of combining human labeled data with data labeled by GPT-3-ENS",
"sec_num": "7.2"
},
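{
"text": "A sketch of the mixing described above (illustrative; the datasets are plain Python lists of labeled examples):\n\ndef mix_datasets(human, synthetic, alpha):\n    # alpha is the ratio of GPT-3-ENS labeled examples to human labeled ones,\n    # e.g. 6400 human examples with alpha = 0.5 adds 3200 synthetic examples.\n    n_synth = int(alpha * len(human))\n    return human + synthetic[:n_synth]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of combining human labeled data with data labeled by GPT-3-ENS",
"sec_num": "7.2"
},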
{
"text": "From Table 4 , we observe that for both PEGA-SUS and DRSUM , mixture of human labeled and GPT-3-ENS data consistently improves almost all automated metrics for all \u03b1 values 10 The lift in metrics is lower for DRSUM , again illustrating the idea we highlighted in \u00a7 7.1 of GPT-3-ENS data being more amenable to abstractive models such as PEGASUS than for hybrid or extractive-biased models such as DRSUM . Table 3 provides qualitative comparison between summaries generated by each of these models.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 4",
"ref_id": null
},
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Effect of combining human labeled data with data labeled by GPT-3-ENS",
"sec_num": "7.2"
},
{
"text": "For simplicity, we chose the smallest GPT-3-ENS mix i.e. \u03b1 = 0.5 for human evaluation where we ask doctors to evaluate summaries from model trained on human, GPT-3-ENS and human+GPT-3-ENS data. Figure 3 and Figure 4 show that doctors prefer summaries from the model trained on the mixture data over those produced by models trained on human or GPT-3-ENS data alone, in terms of amount of medical information captured as well as the overall quality of the summary. Furthermore, Figure 3 (b) also shows that for PEGASUS , doctors prefer the summaries from a model trained on GCF K=10 6400 (which needed only 210 human labeled examples) over those produced by a model trained on 6400 human labeled examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Figure 3",
"ref_id": null
},
{
"start": 207,
"end": 215,
"text": "Figure 4",
"ref_id": null
},
{
"start": 477,
"end": 485,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of combining human labeled data with data labeled by GPT-3-ENS",
"sec_num": "7.2"
},
{
"text": "We introduced a medically-aware GPT-3 data labeler, GPT-3-ENS , for the task of medical conversation summarization. At the heart of the approach is a medically aware ensembling criterion that ensembles multiple summaries for an input from a powerful low-shot learner such as GPT-3. We showed that this approach can generate quality Table 4 : Combining human labeled datasets with datasets generated using our proposed approach Figure 3 : Doctor evaluation of amount of medical information covered by summaries provided by PEGA-SUS models and which ones they considered \"best\" Figure 4 : Doctor evaluation of amount of medical information covered by summaries provided by DR-SUM models and which ones they considered \"best\" training data for medical dialogue summarization models while ensuring medical correctness. We show that using a very small number of human labeled examples, 210, we are able to produce more medically correct and better quality summaries than using roughly thirty times as many human labeled examples for two different summarization models. In this work we used a simple ensembling technique that dialogue summaries should retain all the medical information discussed in the dialogue. Future work could be to improve our ensembling function to take into account other medical priors such as affirmations and importance/relevance of the information in the dialog. Snippet Summary Prompt PT: Today spit out a bit of mucus and noticed a bit of blood. DR: Okay, how long have you been on these medications? PT: About 2 years Has been on these medications for about 2 years Today spit out a bit of mucus and noticed a bit of blood. [STOP] Okay, how long have you been on these medications?[SEP]About 2 years [SUMMARIZED] Has been on these medications for about 2 years.",
"cite_spans": [
{
"start": 1650,
"end": 1656,
"text": "[STOP]",
"ref_id": null
},
{
"start": 1726,
"end": 1738,
"text": "[SUMMARIZED]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 4",
"ref_id": null
},
{
"start": 427,
"end": 435,
"text": "Figure 3",
"ref_id": null
},
{
"start": 576,
"end": 584,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "[STOP] DR: Is the bleeding from the anal opening and not the vagina? Has something similar happened before? PT: yes from the anal opening ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We utilize a fairly simple prompt to have GPT-3 generate summaries. Each example (snippet_text, summary_text) is concatenated to the empty string with the following transformation: \"{snip-pet_text}[SUMMARY]{summary_text}[STOP]\" to form the prompt. We seperate the conversational turns in snippet_text with the \"[SEP]\" token. Table 5 shows a prompt that would be generated and used to prime GPT-3 given two examples. As mentioned in \u00a7 7 in our experiments we use 21 examples to generate a prompt",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "A GPT-3 Prompt",
"sec_num": null
},
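{
"text": "A sketch of this prompt construction (illustrative; example pairs are (snippet_text, summary_text) with turns already joined by [SEP]):\n\ndef build_prompt(examples, new_snippet):\n    prompt = ''\n    for snippet_text, summary_text in examples:\n        prompt += snippet_text + '[SUMMARY]' + summary_text + '[STOP]'\n    # Assumption: the unseen snippet is appended with a trailing [SUMMARY]\n    # marker so that GPT-3 completes the summary.\n    return prompt + new_snippet + '[SUMMARY]'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A GPT-3 Prompt",
"sec_num": null
},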
{
"text": "https://beta.openai.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unlike data at inference time, training data is fixed and can be ensured to be privacy protected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nlm.nih.gov/research/ umls/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These are the same doctors who practice on the same telemedicine platform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the following package with default configuration: https://github.com/google-research/ google-research/tree/master/rouge 6 Note if there are no concepts detected in the snippet and summary by the entity extractor, then a conservative F1 score of 0 is given for that example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://beta.openai.com/ 8 https://huggingface.co/google/ pegasus-cnn_dailymail 9 https://huggingface.co/transformers/ main_classes/optimizer_schedules.html# adafactor-pytorch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note here that the claim is not that increasing \u03b1 improves metrics but that mixing GPT-3-ENS and human labeled data improves metrics over models trained only using human data. We leave it as a future work on how to trade-off between human and GPT-3-ENS labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Extractive summarization of EHR discharge notes",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer and Anne Kim. 2018. Extractive summarization of EHR discharge notes. CoRR, abs/1810.12085.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural networks for pattern recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bishop",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M Bishop et al. 1995. Neural networks for pattern recognition. Oxford university press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development of clinical concept extraction applications: A methodology review",
"authors": [
{
"first": "Sunyang",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sungrim",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"J"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Feichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yanshan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Yiqing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Sunghwan",
"middle": [],
"last": "Sohn",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunyang Fu, David Chen, Sijia Liu, Sungrim Moon, Kevin J. Peterson, Feichen Shen, Yanshan Wang, Li- wei Wang, Andrew Wen, Yiqing Zhao, Sunghwan Sohn, and Hongfang Liu. 2019. Development of clinical concept extraction applications: A method- ology review. CoRR, abs/1910.11377.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts",
"authors": [
{
"first": "Chih-Wen",
"middle": [],
"last": "Goo",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo and Yun-Nung Chen. 2018. Abstrac- tive dialogue summarization with sentence-gated modeling optimized by dialogue acts. CoRR, abs/1809.05715.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1154"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Context: An algorithm for determining negation, experiencer, and temporal status from clinical reports",
"authors": [
{
"first": "Henk",
"middle": [],
"last": "Harkema",
"suffix": ""
},
{
"first": "John",
"middle": [
"N"
],
"last": "Dowling",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Thornblade",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
}
],
"year": 2009,
"venue": "Biomedical Natural Language Processing",
"volume": "42",
"issue": "",
"pages": "839--851",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henk Harkema, John N. Dowling, Tyler Thornblade, and Wendy W. Chapman. 2009. Context: An al- gorithm for determining negation, experiencer, and temporal status from clinical reports. Journal of Biomedical Informatics, 42(5):839 -851. Biomedi- cal Natural Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dr. summarize: Global summarization of medical dialogue by exploiting local structures",
"authors": [
{
"first": "Anirudh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Namit",
"middle": [],
"last": "Katariya",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Amatriain",
"suffix": ""
},
{
"first": "Anitha",
"middle": [],
"last": "Kannan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.08666"
]
},
"num": null,
"urls": [],
"raw_text": "Anirudh Joshi, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2020. Dr. summarize: Global sum- marization of medical dialogue by exploiting local structures. arXiv preprint arXiv:2009.08666.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating soap notes from doctor-patient conversations",
"authors": [
{
"first": "Kundan",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Sopan",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"P"
],
"last": "Bigham",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kundan Krishna, Sopan Khosla, Jeffrey P. Bigham, and Zachary C. Lipton. 2020a. Generating soap notes from doctor-patient conversations.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extracting structured data from physician-patient conversations by predicting noteworthy utterances",
"authors": [
{
"first": "Kundan",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Pavel",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schloss",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jeffrey",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Bigham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lipton",
"suffix": ""
}
],
"year": 2020,
"venue": "Explainable AI in Healthcare and Medicine",
"volume": "",
"issue": "",
"pages": "155--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kundan Krishna, Amy Pavel, Benjamin Schloss, Jef- frey P Bigham, and Zachary C Lipton. 2020b. Ex- tracting structured data from physician-patient con- versations by predicting noteworthy utterances. In Explainable AI in Healthcare and Medicine, pages 155-169. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluating the factual consistency of abstractive text summarization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Kry\u015bci\u0144ski",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic dialogue summary generation for customer service",
"authors": [
{
"first": "Chunyi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jieping",
"middle": [],
"last": "Ye",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD '19",
"volume": "",
"issue": "",
"pages": "1957--1965",
"other_ids": {
"DOI": [
"10.1145/3292500.3330683"
]
},
"num": null,
"urls": [],
"raw_text": "Chunyi Liu, Peng Wang, Jiang Xu, Zang Li, and Jieping Ye. 2019a. Automatic dialogue summary generation for customer service. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD '19, page 1957-1965, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "What makes good in-context examples for gpt-3?",
"authors": [
{
"first": "Jiachang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3?",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Topic-aware pointergenerator networks for summarizing spoken conversations",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sheldon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ai",
"middle": [
"Ti"
],
"last": "Aw",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F. Chen. 2019b. Topic-aware pointer- generator networks for summarizing spoken conver- sations.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "COVID-19 transforms health care through telemedicine: evidence from the field",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Devin M Mann",
"suffix": ""
},
{
"first": "Rumi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chunara",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Oded",
"middle": [],
"last": "Testa",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of the American Medical Informatics Association. Ocaa072",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/jamia/ocaa072"
]
},
"num": null,
"urls": [],
"raw_text": "Devin M Mann, Ji Chen, Rumi Chunara, Paul A Testa, and Oded Nov. 2020. COVID-19 transforms health care through telemedicine: evidence from the field. Journal of the American Medical Informatics Asso- ciation. Ocaa072.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17",
"volume": "",
"issue": "",
"pages": "3075--3081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Proceedings of the Thirty-First AAAI Con- ference on Artificial Intelligence, AAAI'17, page 3075-3081. AAAI Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Gu\u00ec",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7aglar Gu\u00cc \u2021l\u00e7ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training",
"authors": [
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Jiusheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter Liu, and Christopher Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction",
"authors": [
{
"first": "Tait",
"middle": [
"D"
],
"last": "Shanafelt",
"suffix": ""
},
{
"first": "Lotte",
"middle": [
"N"
],
"last": "Dyrbye",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Sinsky",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Satele",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Sloan",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"P"
],
"last": "West",
"suffix": ""
}
],
"year": 2016,
"venue": "Mayo Clinic Proceedings",
"volume": "91",
"issue": "",
"pages": "836--848",
"other_ids": {
"DOI": [
"10.1016/j.mayocp.2016.05.007"
]
},
"num": null,
"urls": [],
"raw_text": "Tait D. Shanafelt, Lotte N.Dyrbye, Christine Sinsky, Omar Hasan, Daniel Satele, Jeff Sloan, and Colin P. West. 2016. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clinic Proceedings, 91:836-848.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems -Vol- ume 2, NIPS'14, page 3104-3112, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.08777"
]
},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning to summarize radiology findings",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daisy",
"middle": [
"Yi"
],
"last": "Ding",
"suffix": ""
},
{
"first": "Tianpei",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christo- pher D. Manning, and Curtis P. Langlotz. 2018. Learning to summarize radiology findings. CoRR, abs/1809.04698.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 2: Doctor evaluation of which among GPT-3 and GPT-3-ENS summaries they considered \"best\" showing that GPT-3-ENS is a better approach for labeling"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The bleeding is from the anal opening. Is the bleeding from the anal opening and not the vagina? Has something similar happened before?[SEP]yes from the anal opening[SUMMARIZED]The bleeding is from the anal opening.[STOP]"
},
"TABREF3": {
"content": "<table><tr><td>Models</td><td>Train Data Source</td><td>Negation</td><td>Metrics Concept</td><td>ROUGE-L</td></tr><tr><td/><td/><td>F1</td><td>F1</td><td>F1</td></tr><tr><td>PEGASUS</td><td>H 6400</td><td>21.09</td><td>35.96</td><td>55.59</td></tr><tr><td>\u03b1 = 0.5</td><td>H 6400 + GCF K=10 3200</td><td>30.14</td><td>43.49</td><td>62.45</td></tr><tr><td>\u03b1 = 1</td><td>H 6400 + GCF K=10 6400</td><td>30.70</td><td>43.73</td><td>60.63</td></tr><tr><td>\u03b1 = 2</td><td>H 6400 + GCF K=10 12800</td><td>29.43</td><td>41.02</td><td>59.85</td></tr><tr><td>\u03b1 = 3</td><td>H 6400 + GCF K=10 25600</td><td>31.93</td><td>44.68</td><td>61.05</td></tr><tr><td>DRSUM</td><td>H 6400</td><td>26.75</td><td>39.95</td><td>52.70</td></tr><tr><td>\u03b1 = 0.5</td><td>H 6400 + GCF K=10 3200</td><td>27.51</td><td>40.46</td><td>53.39</td></tr><tr><td>\u03b1 = 1</td><td>H 6400 + GCF K=10 6400</td><td>27.18</td><td>40.36</td><td>51.00</td></tr><tr><td>\u03b1 = 2</td><td>H 6400 + GCF K=10 12800</td><td>27.19</td><td>40.68</td><td>53.07</td></tr><tr><td>\u03b1 = 3</td><td>H 6400 + GCF K=10 25600</td><td>26.33</td><td>39.89</td><td>52.29</td></tr></table>",
"type_str": "table",
"text": "Input conversation snippets along with summaries generated by models trained on different data",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Prompt for GPT-3 given two examples",
"num": null,
"html": null
}
}
}
}