{
"paper_id": "D08-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:54.051294Z"
},
"title": "Revealing the Structure of Medical Dictations with Conditional Random Fields",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Jancsary",
"suffix": "",
"affiliation": {
"laboratory": "Austrian Research Institute for Artificial Intelligence",
"institution": "",
"location": {
"addrLine": "Freyung 6/6",
"postCode": "A-1010",
"settlement": "Vienna"
}
},
"email": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Matiasek",
"suffix": "",
"affiliation": {
"laboratory": "Austrian Research Institute for Artificial Intelligence",
"institution": "",
"location": {
"addrLine": "Freyung 6/6",
"postCode": "A-1010",
"settlement": "Vienna"
}
},
"email": ""
},
{
"first": "Harald",
"middle": [],
"last": "Trost",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Medical University Vienna",
"location": {
"country": "Austria"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic processing of medical dictations poses a significant challenge. We approach the problem by introducing a statistical framework capable of identifying types and boundaries of sections, lists and other structures occurring in a dictation, thereby gaining explicit knowledge about the function of such elements. Training data is created semiautomatically by aligning a parallel corpus of corrected medical reports and corresponding transcripts generated via automatic speech recognition. We highlight the properties of our statistical framework, which is based on conditional random fields (CRFs) and implemented as an efficient, publicly available toolkit. Finally, we show that our approach is effective both under ideal conditions and for real-life dictation involving speech recognition errors and speech-related phenomena such as hesitation and repetitions.",
"pdf_parse": {
"paper_id": "D08-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic processing of medical dictations poses a significant challenge. We approach the problem by introducing a statistical framework capable of identifying types and boundaries of sections, lists and other structures occurring in a dictation, thereby gaining explicit knowledge about the function of such elements. Training data is created semiautomatically by aligning a parallel corpus of corrected medical reports and corresponding transcripts generated via automatic speech recognition. We highlight the properties of our statistical framework, which is based on conditional random fields (CRFs) and implemented as an efficient, publicly available toolkit. Finally, we show that our approach is effective both under ideal conditions and for real-life dictation involving speech recognition errors and speech-related phenomena such as hesitation and repetitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is quite common to dictate reports and leave the typing to typists -especially for the medical domain, where every consultation or treatment has to be documented. Automatic Speech Recognition (ASR) can support professional typists in their work by providing a transcript of what has been dictated. However, manual corrections are still needed. In particular, speech recognition errors have to be corrected. Furthermore, speaker errors, such as hesitations or repetitions, and instructions to the transcriptionist have to be removed. Finally, and most notably, proper structuring and formatting of the report has to be performed. For the medical domain, fairly clear guidelines exist with regard to what has to be dictated, and how it should be arranged. Thus, missing headings may have to be inserted, sentences must be grouped into paragraphs in a meaningful way, enumeration lists may have to be introduced, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of the work presented here was to ease the job of the typist by formatting the dictation according to its structure and the formatting guidelines. The prerequisite for this task is the identification of the various structural elements in the dictation which will be be described in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "complaint dehydration weakness and diarrhea full stop Mr. Will Shawn is a 81-year-old cold Asian gentleman who came in with fever and Persian diaper was sent to the emergency department by his primary care physician due him being dehydrated period . . . neck physical exam general alert and oriented times three known acute distress vital signs are stable . . . diagnosis is one chronic diarrhea with hydration he also has hypokalemia neck number thromboctopenia probably duty liver cirrhosis . . . a plan was discussed with patient in detail will transfer him to a nurse and facility for further care . . . end of dictation Figure 1 shows a fragment of a typical report as recognized by ASR, exemplifying some of the problems we have to deal with:",
"cite_spans": [],
"ref_spans": [
{
"start": 625,
"end": 633,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Punctuation and enumeration markers may be dictated or not, thus sentence boundaries and numbered items often have to be inferred; \u2022 the same holds for (sub)section headings;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 finally, recognition errors complicate the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dehydration, weakness and diarrhea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CHIEF COMPLAINT",
"sec_num": null
},
{
"text": "Mr. Wilson is a 81-year-old Caucasian gentleman who came in here with fever and persistent diarrhea. He was sent to the emergency department by his primary care physician due to him being dehydrated. . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HISTORY OF PRESENT ILLNESS",
"sec_num": null
},
{
"text": "He is alert and oriented times three, not in acute distress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PHYSICAL EXAMINATION GENERAL:",
"sec_num": null
},
{
"text": "VITAL SIGNS: Stable. . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PHYSICAL EXAMINATION GENERAL:",
"sec_num": null
},
{
"text": "1. Chronic diarrhea with dehydration. He also has hypokalemia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIAGNOSIS",
"sec_num": null
},
{
"text": "2. Thromboctopenia, probably due to liver cirrhosis. . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIAGNOSIS",
"sec_num": null
},
{
"text": "The plan was discussed with the patient in detail. Will transfer him to a nursing facility for further care. . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PLAN AND DISCUSSION",
"sec_num": null
},
{
"text": "When properly edited and formatted, the same dictation appears significantly more comprehensible, as can be seen in figure 2. In order to arrive at this result it is necessary to identify the inherent structure of the dictation, i.e. the various hierarchically nested segments. We will recast the segmentation problem as a multi-tiered tagging problem and show that indeed a good deal of the structure of medical dictations can be revealed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig. 2: A typical medical report",
"sec_num": null
},
{
"text": "The main contributions of our paper are as follows: First, we introduce a generic approach that can be integrated seamlessly with existing ASR solutions and provides structured output for medical dictations. Second, we provide a freely available toolkit for factorial conditional random fields (CRFs) that forms the basis of aforementioned approach and is also applicable to numerous other problems (see section 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fig. 2: A typical medical report",
"sec_num": null
},
{
"text": "The structure recognition problem dealt with here is closely related to the field of linear text segmentation with the goal to partition text into coherent blocks, but on a single level. Thus, our task generalizes linear text segmentation to multiple levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A meanwhile classic approach towards domainindependent linear text segmentation, C99, is presented in Choi (2000) . C99 is the baseline which many current algorithms are compared to. Choi's algorithm surpasses previous work by Hearst (1997) , who proposed the Texttiling algorithm. The best results published to date are -to the best of our knowledge -those of Lamprier et al. (2008) .",
"cite_spans": [
{
"start": 102,
"end": 113,
"text": "Choi (2000)",
"ref_id": "BIBREF1"
},
{
"start": 227,
"end": 240,
"text": "Hearst (1997)",
"ref_id": "BIBREF3"
},
{
"start": 361,
"end": 383,
"text": "Lamprier et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The automatic detection of (sub)section topics plays an important role in our work, since changes of topic indicate a section boundary and appropriate headings can be derived from the section type. Topic detection is usually performed using methods similar to those of text classification (see Sebastiani (2002) for a survey). Matsuov (2003) presents a dynamic programming algorithm capable of segmenting medical reports into sections and assigning topics to them. Thus, the aims of his work are similar to ours. However, he is not concerned with the more fine-grained elements, and also uses a different machinery.",
"cite_spans": [
{
"start": 294,
"end": 311,
"text": "Sebastiani (2002)",
"ref_id": "BIBREF19"
},
{
"start": 327,
"end": 341,
"text": "Matsuov (2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When dealing with tagging problems, statistical frameworks such as HMMs (Rabiner, 1989) or, recently, CRFs (Lafferty et al., 2001 ) are most commonly applied. Whereas HMMs are generative models, CRFs are discriminative models that can incorporate rich features. However, other approaches to text segmentation have also been pursued. E.g., McDonald et al. (2005) present a model based on multilabel classification, allowing for natural handling of overlapping or non-contiguous segments.",
"cite_spans": [
{
"start": 72,
"end": 87,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF16"
},
{
"start": 107,
"end": 129,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF7"
},
{
"start": 339,
"end": 361,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, the work of Ye and Viola (2004) bears similarities to ours. They apply CRFs to the parsing of hierarchical lists and outlines in handwritten notes, and thus have the same goal of finding deep structure using the same probabilistic framework.",
"cite_spans": [
{
"start": 21,
"end": 40,
"text": "Ye and Viola (2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For representing our segmentation problem we use a trick that is well-known from chunking and named entity recognition, and recast the problem as a tagging problem in the so-called BIO 1 notation. Since we want to assign a type to every segment, OUTSIDE labels are not needed. However, we perform seg- mentation on multiple levels, therefore multiple label chains are required. Furthermore, we also want to assign types to certain segments, thus the labels need an encoding for the type of segment they represent. Figure 3 illustrates this representation: B-T i denotes the beginning of a segment of type T i , while I-T i indicates that the segment of type T i continues. By adding label chains, it is possible to group the segments of the previous chain into coarser units. Tree-like structures of unlimited depth can be expressed this way 2 . The gray lines in figure 3 denote dependencies between nodes. Node labels also depend on the input token sequence in an arbitrarily wide context window.",
"cite_spans": [],
"ref_spans": [
{
"start": 514,
"end": 522,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem Representation",
"sec_num": "3"
},
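A minimal sketch (our own, not part of the paper or the VieCRF toolkit) of the multi-level BIO representation described above: one label chain per segmentation level over the same token sequence, plus a decoder that recovers typed segments from a chain. The segment type names are invented for the example.

```python
# Minimal sketch of multi-level BIO tagging: one label chain per level.
# decode_chain recovers typed (start, end, type) segments from one chain.

def decode_chain(labels):
    """Recover (start, end, type) segments from a BIO label chain."""
    segments, start = [], None
    for i, lab in enumerate(labels):
        tag, _ = lab.split("-", 1)
        if tag == "B":
            if start is not None:
                segments.append((start, i, labels[start].split("-", 1)[1]))
            start = i
    if start is not None:
        segments.append((start, len(labels), labels[start].split("-", 1)[1]))
    return segments

# Two chains over the same six tokens; the coarser chain groups the finer one.
chain1 = ["B-Heading", "I-Heading", "B-Sent", "I-Sent", "B-Sent", "I-Sent"]
chain2 = ["B-Diagnosis", "I-Diagnosis", "I-Diagnosis",
          "I-Diagnosis", "B-Plan", "I-Plan"]

print(decode_chain(chain1))  # [(0, 2, 'Heading'), (2, 4, 'Sent'), (4, 6, 'Sent')]
print(decode_chain(chain2))  # [(0, 4, 'Diagnosis'), (4, 6, 'Plan')]
```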
{
"text": "... t 1 t 2 t 3 t 4 time step ... ... ... ... ... ... ... ... ... t 5 t 6 tokens level 1 level 2 level 3 ... < < < ... B-T 3 B-T 4 B-T 1 I-T 3 I-T 4 I-T 1 I-T 3 I-T 4 B-T 2 I-T 3 I-T 4 I-T 2 B-T 3 I-T 4 B-T 2 I-T 3 I-T 4 I-T 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Representation",
"sec_num": "3"
},
{
"text": "The raw data available to us consists of two parallel corpora of 2007 reports from the area of medical consultations, dictated by physicians. The first corpus, C RCG , consists of the raw output of ASR (figure 1), the other one, C COR , contains the corresponding corrected and formatted reports (figure 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "In order to arrive at an annotated corpus in a for-mat suitable for the tagging problem, we first have to analyze the report structure and define appropriate labels for each segmentation level. Then, every token has to be annotated with the appropriate begin or inside labels. A report has 625 tokens on average, so the manual annotation of roughly 1.25 million tokens seemed not to be feasible. Thus we decided to produce the annotations programmatically and restrict manual work to corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "When inspecting reports in C COR , a human reader can easily identify the various elements a report consists of, such as headings -written in bold on a separate line -introducing sections, subheadings -written in bold followed by a colon -introducing subsections, and enumerations starting with indented numbers followed by a period. Going down further, there are paragraphs divided into sentences. Using these structuring elements, a hierarchic data structure comprising all report elements can be induced. Sections and subsections are typed according to their heading. There exist clear recommendations on structuring medical reports, such as E2184-02 (ASTM International, 2002) . However, actual medical reports still vary greatly with regard to their structure. Using the aforementioned standard, we assigned the (sub)headings that actually appeared in the data to the closest type, introducing new types only when absolutely necessary. Finally we arrived at a structure model with three label chains:",
"cite_spans": [
{
"start": 654,
"end": 680,
"text": "(ASTM International, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of report structure",
"sec_num": "4.1"
},
{
"text": "\u2022 Sentence level, with 4 labels: Heading, Subheading, Sentence, Enummarker \u2022 Subsection level, with 45 labels: Paragraph, Enumelement, None and 42 subsection types (e.g. VitalSigns, Cardiovascular ...) \u2022 Section level, with 23 section types (e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of report structure",
"sec_num": "4.1"
},
{
"text": "ReasonForEncounter, Findings, Plan ...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of report structure",
"sec_num": "4.1"
},
{
"text": "Since the reports in C COR are manually edited they are reliable to parse. We employed a broad-coverage dictionary (handling also multi-word terms) and a domain-specific grammar for parsing and layout information. A regular heading grammar was used for mapping (sub)headings to the defined (sub)section labels (for details see Jancsary (2008) ). The output of the parser is a hedge data structure from which the annotation labels can be derived easily. However, our goal is to develop a model for recognizing the report structure from the dictation, thus we have to map the newly created annotation of reports in C COR onto the corresponding reports in C RCG . The basic idea here is to align the tokens of C COR with the tokens in C RCG and to copy the annotations (cf. figure 4 3 ). There are some peculiarities we have to take care of during alignment:",
"cite_spans": [
{
"start": 327,
"end": 342,
"text": "Jancsary (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus annotation",
"sec_num": "4.2"
},
{
"text": "C COR OP C RCG . . . . . . . . . . . . . . . B \u2212 Head CHIEF del Head COMPLAINT sub complaint B \u2212 Head B \u2212 Sent Dehydration sub dehydration B \u2212 Sent Sent , del",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus annotation",
"sec_num": "4.2"
},
{
"text": "1. non-dictated items in C COR (e.g. punctuation, headings) cost function. It assigns tokens that are similar (either from a semantic or phonetic point of view) a low cost for substitution, whereas dissimilar tokens receive a prohibitively expensive score. Costs for deletion and insertion are assigned inversely. Semantic similarity is computed using Wordnet (Fellbaum, 1998) and UMLS (Lindberg et al., 1993) . For phonetic matching, the Metaphone algorithm (Philips, 1990) was used (for details see Huber et al. (2006) ).",
"cite_spans": [
{
"start": 360,
"end": 376,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF2"
},
{
"start": 386,
"end": 409,
"text": "(Lindberg et al., 1993)",
"ref_id": null
},
{
"start": 459,
"end": 474,
"text": "(Philips, 1990)",
"ref_id": "BIBREF15"
},
{
"start": 501,
"end": 520,
"text": "Huber et al. (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus annotation",
"sec_num": "4.2"
},
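To make the alignment idea concrete, here is a hypothetical sketch of an edit distance with a similarity-based substitution cost. The real system derives similarity from WordNet/UMLS semantics and Metaphone phonetics; `similar` below is a crude stand-in, and all names are ours.

```python
# Sketch: Levenshtein-style alignment cost with a token-similarity-based
# substitution cost, so 'persistent'/'Persian'-like pairs align cheaply.

def similar(a, b):
    """Placeholder for semantic/phonetic similarity in [0, 1]."""
    if a == b:
        return 1.0
    return 0.8 if a[:3].lower() == b[:3].lower() else 0.0

def align_cost(src, tgt, ins=1.0, dele=1.0, mismatch=10.0):
    """Dynamic-programming alignment cost over two token sequences."""
    n, m = len(src), len(tgt)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * dele
    for j in range(1, m + 1):
        d[0][j] = j * ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = mismatch * (1.0 - similar(src[i - 1], tgt[j - 1]))
            d[i][j] = min(d[i - 1][j] + dele,     # delete from src
                          d[i][j - 1] + ins,      # insert from tgt
                          d[i - 1][j - 1] + sub)  # similarity-weighted subst.
    return d[n][m]

print(align_cost("dehydration , weakness".split(),
                 "dehydration weakness and".split()))
```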
{
"text": "The annotation discussed above is the first step towards building a training corpus for a CRF-based approach. What remains to be done is to provide observations for each time step of the observed entity, i.e. for each token of a report; these are expected to give hints with regard to the annotation labels that are to be assigned to the time step. The observations, associated with one or more annotation labels, are usually called features in the machine learning literature. During CRF training, the parameters of these features are determined such that they indicate the significance of the observations for a certain label or label combination; this is the basis for later tagging of unseen reports. We use the following features for each time step of the reports in C COR and C RCG :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Generation",
"sec_num": "4.3"
},
{
"text": "\u2022 Lexical features covering the local context of \u00b1 2 tokens (e.g., patient@0, the@-1, is@1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Generation",
"sec_num": "4.3"
},
{
"text": "\u2022 Syntactic features indicating the possible syntactic categories of the tokens (e.g., NN@0, JJ@0, DT@-1 and be+VBZ+aux@1) \u2022 Bag-of-word (BOW) features intend to capture the topic of a text segment in a wider context of \u00b1 10 tokens, without encoding any order. Tokens are lemmatized and replaced by their UMLS concept IDs, if available, and weighed by TF. Thus, different words describing the same concept are considered equal. \u2022 Semantic type features as above, but using UMLS semantic types instead of concept IDs provide a coarser level of description. \u2022 Relative position features: The report is divided into eight parts corresponding to eight binary features; only the feature corresponding to the part of the current time step is set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Generation",
"sec_num": "4.3"
},
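As an illustration of the feature set above, the following sketch generates lexical and bag-of-words features for one time step. The function name and feature-string formats are invented; the actual system additionally maps lemmas to UMLS concept IDs and weights them by term frequency.

```python
# Illustrative feature extraction for one token position (names are ours).

def features_at(tokens, t, lex_window=2, bow_window=10):
    feats = []
    # Lexical features: token@offset within +/- lex_window.
    for off in range(-lex_window, lex_window + 1):
        i = t + off
        if 0 <= i < len(tokens):
            feats.append(f"{tokens[i].lower()}@{off}")
    # Bag-of-words features: unordered context of +/- bow_window tokens.
    lo, hi = max(0, t - bow_window), min(len(tokens), t + bow_window + 1)
    for w in sorted(set(tokens[lo:hi])):
        feats.append(f"bow:{w.lower()}")
    return feats

tokens = "the patient is alert and oriented".split()
print(features_at(tokens, 1))  # features for 'patient'
```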
{
"text": "Conditional random fields (Lafferty et al., 2001) are conditional models in the exponential family. They can be considered a generalization of multinomial logistic regression to output with non-trivial internal structure, such as sequences, trees or other graphical models. We loosely follow the general notation of Sutton and McCallum (2007) in our presentation.",
"cite_spans": [
{
"start": 26,
"end": 49,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF7"
},
{
"start": 316,
"end": 342,
"text": "Sutton and McCallum (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "Assuming an undirected graphical model G over an observed entity x and a set of discrete, interdependent random variables 4 y, a conditional random field describes the conditional distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x; \u03b8) = 1 Z(x) c\u2208G \u03c6 c (y c , x; \u03b8 c )",
"eq_num": "(1)"
}
],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "The normalization term Z(x) sums over all possible joint outcomes of y, i.e., ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z(x) = y p(y |x; \u03b8)",
"eq_num": "(2)"
}
],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "\u03c6 c (y c , x; \u03b8 c ) = exp \uf8eb \uf8ed |\u03b8c| k=1 \u03bb ck f ck (x, y c ) \uf8f6 \uf8f8 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "In practice, for efficiency reasons, independence assumptions have to be made about variables y \u2208 y, so G is restricted to small cliques (say, (|c| \u2264 3). Thus, the sufficient statistics only depend on a limited number of variables y c \u2286 y; they can, however, access the whole observed entity x. This is in contrast to generative approaches which model a joint distribution p(x, y) and therefore have to extend the independence assumptions to elements x \u2208 x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
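A toy instance of equation (3), assuming binary feature functions: the factor value is the exponential of a weighted sum of sufficient statistics. Everything here is illustrative and not VieCRF's API.

```python
import math

def factor_value(weights, feature_fns, x, y_c):
    """phi_c(y_c, x; theta_c) = exp(sum_k lambda_ck * f_ck(x, y_c))."""
    return math.exp(sum(w * f(x, y_c) for w, f in zip(weights, feature_fns)))

# Two toy feature functions over a label pair (a two-node transition clique):
f1 = lambda x, y_c: 1.0 if y_c == ("B-Head", "I-Head") else 0.0
f2 = lambda x, y_c: 1.0 if y_c[0].startswith("B-") else 0.0

print(factor_value([1.5, 0.3], [f1, f2], None, ("B-Head", "I-Head")))
# exp(1.5 + 0.3) ~ 6.05
```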
{
"text": "The factor-specific parameters \u03b8 c of a CRF are typically tied for certain cliques, according to the problem structure (i.e., \u03b8 c 1 = \u03b8 c 2 for two cliques c 1 , c 2 with tied parameters). E.g., parameters are usually tied across time if G is a sequence. The factors can then be partitioned into a set of clique templates C = {C 1 , C 2 , . . . C P }, where each clique template C p is a set of factors with tied parameters \u03b8 p and corresponding sufficient statistics {f pk (\u2022)}. The CRF can thus be rewritten as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "p(y|x) = 1 Z(x) Cp\u2208C \u03c6 c \u2208Cp \u03c6 c (y c , x; \u03b8 p ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "Furthermore, in practice, the sufficient statistics {f pk (\u2022)} are computed from a subset x c \u2286 x that is relevant to a factor \u03c6 c (\u2022). In a sequence labelling task, tokens x \u2208 x that are in temporal proximity to an output variable y \u2208 y are typically most useful. Nevertheless, in our notation, we will let factors depend on the whole observed entity x to denote that all of x can be accessed if necessary. For our structure recognition task, the graphical model G exhibits the structure shown in figure 3, i.e., there are multiple connected chains of variables with factors defined over single-node cliques and two-node cliques within and between chains; the parameters of factors are tied across time. This corresponds to the factorial CRF structure described in Sutton and McCallum (2005) . Structure recognition using conditional random fields then involves two separate steps: parameter estimation, or training, is concerned with selecting the parameters of a CRF such that they fit the given training data. Prediction, or testing, determines the best label assignment for unknown examples.",
"cite_spans": [
{
"start": 766,
"end": 792,
"text": "Sutton and McCallum (2005)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Recognition with CRFs",
"sec_num": "5"
},
{
"text": "Given IID training data D = {x (i) , y (i) } N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": ", parameter estimation determines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 * = argmax \u03b8 N i p(y (i) |x (i) ; \u03b8 )",
"eq_num": "(5)"
}
],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "i.e., those parameters that maximize the conditional probability of the CRF given the training data. In the following, we will not explicitly sum over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "; as Sutton and McCallum (2007) note, the training instances x (i) , y (i) can be considered disconnected components of a single undirected model G.",
"cite_spans": [
{
"start": 5,
"end": 31,
"text": "Sutton and McCallum (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "We thus assume G and its factors \u03c6 c (\u2022) to extend over all training instances. Unfortunately, (5) cannot be solved analytically. Typically, one performs maximum likelihood estimation (MLE) by maximizing the conditional log-likelihood numerically:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(\u03b8) = Cp\u2208C \u03c6 c \u2208Cp |\u03b8p| k=1 \u03bb pk f pk (x, y c ) \u2212 log Z(x)",
"eq_num": "(6)"
}
],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "Currently, limited-memory gradient-based methods such as LBFGS (Nocedal, 1980) are most commonly employed for that purpose 5 . These require the partial derivatives of (6), which are given by:",
"cite_spans": [
{
"start": 63,
"end": 78,
"text": "(Nocedal, 1980)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202 \u2202\u03bb pk = \u03c6 c \u2208Cp f pk (x, y c ) \u2212 y c f pk (x, y c )p(y c |x)",
"eq_num": "(7)"
}
],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "and expose the intuitive form of a difference between the expectation of a sufficient statistic according to the empiric distribution and the expectation according to the model distribution. The latter term requires marginal probabilities for each clique c, denoted by p(y c |x). Inference on the graphical model G (see sec 5.2) is needed to compute these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
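The gradient in equation (7) translates directly into code: for each factor of a template, the empirical feature value minus its expectation under the clique marginals. The data structures (`observed`, `marginals`) are our own simplifications, not the paper's.

```python
# Sketch of the partial derivative w.r.t. one weight lambda_pk.
# marginals[c] maps each candidate assignment y'_c to p(y'_c | x).

def gradient_pk(f_pk, x, cliques, observed, marginals):
    grad = 0.0
    for c in cliques:
        grad += f_pk(x, observed[c])  # empirical expectation
        grad -= sum(p * f_pk(x, y_c)  # model expectation
                    for y_c, p in marginals[c].items())
    return grad

f = lambda x, y_c: 1.0 if y_c == ("B", "I") else 0.0
obs = {0: ("B", "I")}
marg = {0: {("B", "I"): 0.7, ("B", "B"): 0.3}}
print(gradient_pk(f, None, [0], obs, marg))  # 1.0 - 0.7 ~ 0.3
```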
{
"text": "Depending on the structure of G, inference can be very expensive. In order to speed up parameter estimation, which requires inference to be performed for every training example and for every iteration of the gradient-based method, alternatives to MLE have been proposed that do not require inference. We show here a factor-based variant of pseudolikelihood as proposed by Sanner et al. (2007) :",
"cite_spans": [
{
"start": 372,
"end": 392,
"text": "Sanner et al. (2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "p (\u03b8) = Cp\u2208C \u03c6 c \u2208Cp log p(y c |x, MB (\u03c6 c )) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
{
"text": "where the factors are conditioned on the Markov blanket, denoted by M B 6 . The gradient of (8) can be computed similar to (7), except that the marginals p c (y c |x) are also conditioned on the Markov blanket, i.e., p c (y c |x, MB (\u03c6 c )). Due to its dependence on the Markov blanket of factors, pseudolikelihood cannot be applied to prediction, but only to parameter estimation, where the \"true\" assignment of a blanket is known.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.1"
},
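A minimal sketch of one pseudolikelihood term from equation (8), under the simplifying assumption that a single `score` function returns the local log-potential of a clique assignment with its Markov blanket held fixed at the observed values:

```python
import math

def log_local_cond(score, x, y_c_obs, candidates, blanket):
    """log p(y_c | x, MB): renormalize the local log-potential over the
    candidate assignments of y_c only, with the blanket fixed."""
    log_z = math.log(sum(math.exp(score(x, y, blanket)) for y in candidates))
    return score(x, y_c_obs, blanket) - log_z

# Toy local potential: label "B" scores higher than "I" in this context.
score = lambda x, y, mb: 2.0 if y == "B" else 0.5
print(log_local_cond(score, None, "B", ["B", "I"], None))  # ~ -0.2014
```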
{
"text": "We employ a Gaussian prior for training of CRFs in order to avoid overfitting. Hence, if f (\u03b8) is the original objective function (e.g., log-likelihood or log-pseudolikelihood), we optimize a penalized version f (\u03b8) instead, such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "5.1.1"
},
{
"text": "f (\u03b8) = f (\u03b8) \u2212 |\u03b8| k=1 \u03bb 2 k 2\u03c3 2 and \u2202f \u2202\u03bb k = \u2202f \u2202\u03bb k \u2212 \u03bb k \u03c3 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "5.1.1"
},
{
"text": "The tuning parameter \u03c3 2 determines the strength of the penalty; lower values lead to less overfitting. Gaussian priors are a common choice for parameter estimation of log-linear models (cf. Sutton and McCallum (2007) ).",
"cite_spans": [
{
"start": 191,
"end": 217,
"text": "Sutton and McCallum (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "5.1.1"
},
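The penalty and its gradient from section 5.1.1 are a few lines of code. A sketch, assuming the objective is being maximized and `theta` is a flat list of weights:

```python
# Gaussian-prior (L2) penalty: f'(theta) = f(theta) - sum_k lambda_k^2/(2 sigma^2),
# with the matching gradient correction -lambda_k / sigma^2.

def penalized(f, grad, theta, sigma2):
    pf = f(theta) - sum(l * l for l in theta) / (2.0 * sigma2)
    pg = [g - l / sigma2 for g, l in zip(grad(theta), theta)]
    return pf, pg

f = lambda th: -(th[0] - 1.0) ** 2          # toy objective to maximize
grad = lambda th: [-2.0 * (th[0] - 1.0)]
print(penalized(f, grad, [0.5], sigma2=1000.0))  # (-0.250125, [0.9995])
```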
{
"text": "Inference on a graphical model G is needed to efficiently compute the normalization term Z(x) and marginals p c (y c |x) for MLE, cf. equation 6. Using belief propagation (Yedidia et al., 2003) , more precisely its sum-product variant, we can compute the beliefs for all cliques c \u2208 G. In a treeshaped graphical model G, these beliefs correspond exactly to the marginal probabilities p c (y c |x). However, if the graph contains cycles, so-called loopy belief propagation must be performed. The message updates are then re-iterated according to some schedule until the messages converge. We use a TRP schedule as described by Wainwright et al. (2002) . The resulting beliefs are then only approximations to the true marginals. Moreover, loopy belief propagation is not guaranteed to terminate in general -we investigate this phenomenon in section 6.5.",
"cite_spans": [
{
"start": 171,
"end": 193,
"text": "(Yedidia et al., 2003)",
"ref_id": "BIBREF24"
},
{
"start": 626,
"end": 650,
"text": "Wainwright et al. (2002)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5.2"
},
{
"text": "With regard to the normalization term Z(x), as equation 2shows, naive computation requires summing over all assignments of y. This is too expensive to be practical. Fortunately, belief propagation produces an alternative factorization of p(y|x); i.e., the conditional distribution defining the CRF can be expressed in terms of the marginals gained during sum-product belief propagation. This representation does not require any additional normalization, so Z(x) need not be computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5.2"
},
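For intuition, here is sum-product message passing on a plain chain, where belief propagation is exact; the factorial CRFs used here contain cycles, so the actual system runs loopy BP with a TRP schedule instead. Potentials are toy numbers, not learned factors.

```python
# Exact sum-product on a chain: forward/backward messages, then per-node
# marginals obtained by multiplying messages with the local potential.

def chain_marginals(unary, pairwise):
    """unary[t][l]: potential of label l at step t; pairwise[i][j]: edge
    potential. Returns normalized per-node marginals."""
    n, L = len(unary), len(unary[0])
    fwd = [[1.0] * L for _ in range(n)]
    bwd = [[1.0] * L for _ in range(n)]
    for t in range(1, n):            # forward messages
        for j in range(L):
            fwd[t][j] = sum(fwd[t - 1][i] * unary[t - 1][i] * pairwise[i][j]
                            for i in range(L))
    for t in range(n - 2, -1, -1):   # backward messages
        for i in range(L):
            bwd[t][i] = sum(bwd[t + 1][j] * unary[t + 1][j] * pairwise[i][j]
                            for j in range(L))
    marg = []
    for t in range(n):
        b = [fwd[t][k] * unary[t][k] * bwd[t][k] for k in range(L)]
        z = sum(b)
        marg.append([v / z for v in b])
    return marg

print(chain_marginals([[2.0, 1.0], [1.0, 3.0]], [[1.0, 0.5], [0.5, 1.0]]))
```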
{
"text": "Once the parameters \u03b8 have been estimated from training data, a CRF can be used to predict the labels of unknown examples. The goal is to find:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y * = argmax y p(y |x; \u03b8)",
"eq_num": "(9)"
}
],
"section": "Prediction",
"sec_num": "5.3"
},
{
"text": "i.e., the assignment of y that maximizes the conditional probability of the CRF. Again, naive computation of (9) is intractable. However, the max-product variant of loopy belief propagation can be applied to approximately find the MAP assignment of y (maxproduct can be seen as a generalization of the wellknown Viterbi algorithm to graphical models). For structure recognition in medical reports, we employ a post-processing step after label prediction with the CRF model. As in Jancsary (2008) , this step enforces the constraints of the BIO notation and applies some trivial non-local heuristics that guarantee a consistent global view of the resulting structure.",
"cite_spans": [
{
"start": 480,
"end": 495,
"text": "Jancsary (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction",
"sec_num": "5.3"
},
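A sketch of exact MAP decoding on a single chain via the Viterbi algorithm, the special case that max-product generalizes to arbitrary graphs; labels and log-scores are toy values.

```python
# Viterbi decoding: dynamic programming over log-scores with backpointers.

def viterbi(unary, pairwise):
    """unary[t][l]: log-score of label l at step t; pairwise[i][j]:
    transition log-score. Returns the highest-scoring label sequence."""
    n, L = len(unary), len(unary[0])
    score, back = [unary[0][:]], []
    for t in range(1, n):
        row, ptr = [], []
        for j in range(L):
            best = max(range(L), key=lambda i: score[t - 1][i] + pairwise[i][j])
            row.append(score[t - 1][best] + pairwise[best][j] + unary[t][j])
            ptr.append(best)
        score.append(row)
        back.append(ptr)
    path = [max(range(L), key=lambda l: score[-1][l])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Sticky transitions override the weak preference for label 1 at step 1:
print(viterbi([[2.0, 0.0], [0.0, 1.0], [0.5, 0.0]],
              [[1.0, 0.0], [0.0, 1.0]]))  # [0, 0, 0]
```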
{
"text": "For evaluation, we generally performed 3-fold crossvalidation for all performance measures. We created training data from the reports in C COR so as to simulate a scenario under ideal conditions, i.e., perfect speech recognition and proper dictation of punctuation and headings, without hesitation or repetitions. In contrast, the data from C RCG reflects real-life conditions, with a wide variety of speech recognition error rates and speakers frequently hesitating, repeating themselves and omitting punctuation and/or headings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "Depending on the experiment, two different subsets of the two corpora were considered:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "\u2022 C {COR,RCG}-ALL : All 2007 reports were used, resulting in 1338 training examples and 669 testing examples at each CV-iteration. \u2022 C {COR,RCG}-BEST : The corpus was restricted to those 1002 reports that yielded the lowest word error rate during alignment (see section 4.2). Each CV-iteration hence amounts to 668 training examples and 334 testing examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "From the crossvalidation runs, a 95%-confidence interval for each measure was estimated as follows: where\u0232 is the sample mean, s is the sample standard deviation, N is the sample size (3), \u03b1 is the desired significance level (0.05) and t (\u03b1/2,N \u22121) is the upper critical value of the t-distribution with N \u2212 1 degrees of freedom. The confidence intervals are indicated in the \u00b1 column of tables 1, 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Y \u00b1 t (\u03b1/2,N \u22121) s \u221a N =\u0232 \u00b1 t (0.025,2) s \u221a 3",
"eq_num": "(10)"
}
],
"section": "Experiments and Results",
"sec_num": "6"
},
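Equation (10) can be reproduced in a few lines. The per-fold accuracies below are hypothetical; t_(0.025,2) ~ 4.303 is the upper critical value of the t-distribution with 2 degrees of freedom.

```python
from math import sqrt
from statistics import mean, stdev

def ci95(samples, t_crit=4.303):
    """Mean and half-width of the 95% CI for N samples (here N = 3)."""
    return mean(samples), t_crit * stdev(samples) / sqrt(len(samples))

acc = [0.9655, 0.9648, 0.9641]  # hypothetical per-fold accuracies
m, half = ci95(acc)
print(f"{m:.4f} +/- {half:.4f}")  # 0.9648 +/- 0.0017
```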
{
"text": "For CRF training, we minimized the penalized, negative log-pseudolikelihood using LBFGS with m = 3. The variance of the Gaussian prior was set to \u03c3 2 = 1000. All supported features were used for univariate factors, while the bivariate factors within chains and between chains were restricted to bias weights. For testing, loopy belief propagation with a TRP schedule was used in order to determine the maximum a posteriori (MAP) assignment. We use VieCRF, our own implementation of factorial CRFs, which is freely available at the author's homepage 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "In order to determine the number of required training iterations, an experiment was performed that compares the progress of the Accuracy measure on a validation set to the progress of the loss function on a training set. The data was randomly split into a training set (2/3 of the instances) and a validation set. Accuracy on the validation set was computed using the intermediate CRF parameters \u03b8 t every 5 iterations of LBFGS. The resulting plot (figure 5) demonstrates that the progress of the loss function corresponds well to that of the Accuracy measure, Table 2 : Accuracy on a high-quality subset thus an \"early stopping\" approach might be tempting to cut down on training times. However, during earlier stages of training, the CRF parameters seem to be strongly biased towards high-frequency labels, so other measures such as macro-averaged F1 might suffer from early stopping. Hence, we decided to allow up to 800 iterations of LBFGS. Table 1 shows estimated accuracies for C COR-ALL and C RCG-ALL . Overall, high accuracy (> 97%) can be achieved on C COR-ALL , showing that the approach works very well under ideal conditions. Performance is still fair on the noisy data (C RCG-ALL ; Accuracy > 86%). It should be noted that the labels are unequally distributed, especially in chain 0 (there are very few BEGIN labels). Thus, the baseline is substantially high for this chain, and other measures may be better suited for evaluating segmentation quality (cf. section 6.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 561,
"end": 568,
"text": "Table 2",
"ref_id": null
},
{
"start": 945,
"end": 952,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis of training progress",
"sec_num": "6.1"
},
{
"text": "Measuring the effect of the imprecise reference annotation of C RCG is difficult without a corresponding, manually created golden standard. However, to get a feeling for the impact of the noise induced by speech recognition errors and sloppy dictation Table 3 : Per-chain WindowDiff on the full corpus on the quality of the semi-automatically generated annotation, we conducted an experiment with subsets C COR-BEST and C RCG-BEST . The results are shown in table 2. Comparing these results to table 1, one can see that overall accuracy decreased for C COR-BEST , whereas we see an increase for C RCG-BEST . This effect can be attributed to two different phenomena:",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "On the effect of noisy training data",
"sec_num": "6.3"
},
{
"text": "\u2022 In C COR-BEST , no quality gains in the annotation could be expected. The smaller number of training examples therefore results in lower accuracy. \u2022 Fewer speech recognition errors and more consistent dictation in C RCG-BEST allow for better alignment and thus a better reference annotation. This increases the actual prediction performance and, furthermore, reduces the number of label predictions that are erroneously counted as a misprediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the effect of noisy training data",
"sec_num": "6.3"
},
{
"text": "Thus, it is to be expected that manual correction of the automatically created annotation results in significant performance gains. Preliminary annotation experiments have shown that this is indeed the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the effect of noisy training data",
"sec_num": "6.3"
},
{
"text": "Accuracy is not the best measure to assess segmentation quality, therefore we also conducted experiments using the WindowDiff measure as proposed by Pevzner and Hearst (2002) . WindowDiff returns 0 in case of a perfect segmentation; 1 is the worst possible score. However, it only takes into account segment boundaries and disregards segment types. Table 3 shows the WindowDiff scores for C COR-ALL and C RCG-ALL . Overall, the scores are quite good and are consistently below 0.2. Furthermore, C RCG-ALL scores do not suffer as badly from inaccurate reference annotation, since \"near misses\" are penalized less strongly. Table 4 : Convergence behaviour of loopy BP",
"cite_spans": [
{
"start": 149,
"end": 174,
"text": "Pevzner and Hearst (2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 349,
"end": 356,
"text": "Table 3",
"ref_id": null
},
{
"start": 622,
"end": 629,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation quality",
"sec_num": "6.4"
},
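For reference, a compact sketch of the WindowDiff computation as we read Pevzner and Hearst (2002): slide a window of size k over the token sequence and count the windows in which reference and hypothesis disagree on the number of boundaries. Representing segmentations as boundary-position sets is our own choice.

```python
def window_diff(ref_bounds, hyp_bounds, n_tokens, k):
    """ref_bounds/hyp_bounds: sets of boundary positions; a boundary at p
    separates token p-1 from token p. Returns a score in [0, 1]."""
    def count(bounds, lo, hi):
        return sum(1 for p in bounds if lo < p <= hi)
    disagree = sum(1 for i in range(n_tokens - k)
                   if count(ref_bounds, i, i + k) != count(hyp_bounds, i, i + k))
    return disagree / (n_tokens - k)

# A near miss (boundary at 8 instead of 7) is penalized only mildly:
print(window_diff({3, 7}, {3, 8}, n_tokens=10, k=3))  # ~ 0.143
```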
{
"text": "In section 5.2, we mentioned that loopy BP is not guaranteed to converge in a finite number of iterations. Since we optimize pseudolikelihood for parameter estimation, we are not affected by this limitation in the training phase. However, we use loopy BP with a TRP schedule during testing, so we must expect to encounter non-convergence for some examples. Theoretical results on this topic are discussed by Heskes (2004) . We give here an empirical observation of convergence behaviour of loopy BP in our setting; the maximum number of iterations of the TRP schedule was restricted to 1,000. Table 4 shows the percentage of examples converging within this limit and the average number of iterations required by the converging examples, broken down by the different corpora. From these results, we conclude that there is a connection between the quality of the annotation and the convergence behaviour of loopy BP. In practice, even though loopy BP didn't converge for some examples, the solutions after 1,000 iterations where satisfactory.",
"cite_spans": [
{
"start": 408,
"end": 421,
"text": "Heskes (2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 593,
"end": 600,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convergence of loopy belief propagation",
"sec_num": "6.5"
},
{
"text": "We have presented a framework which allows for identification of structure in report dictations, such as sentence boundaries, paragraphs, enumerations, (sub)sections, and various other structural elements; even if no explicit clues are dictated. Furthermore, meaningful types are automatically assigned to subsections and sections, allowing -for instance -to automatically assign headings, if none were dictated. For the preparation of training data a mechanism has been presented that exploits the potential of parallel corpora for automatic annotation of data. Using manually edited formatted reports and the corresponding raw output of ASR, reference annotation can be generated that is suitable for learning to iden-tify structure in ASR output. For the structure recognition task, a CRF framework has been employed and multiple experiments have been performed, confirming the practicability of the approach presented here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "7"
},
{
"text": "One result deserving further investigation is the effect of noisy annotation. We have shown that segmentation results improve when fewer errors are present in the automatically generated annotation. Thus, manual correction of the reference annotation will yield further improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "7"
},
{
"text": "Finally, the framework presented in this paper opens up exciting possibilities for future work. In particular, we aim at automatically transforming report dictations into properly formatted and rephrased reports that conform to the requirements of the relevant domain. Such tasks are greatly facilitated by the explicit knowledge gained during structure recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "7"
},
{
"text": "BEGIN -INSIDE -OUTSIDE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note, that since we omit a redundant top-level chain, this structure technically is a hedge rather than a tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ". dictated words that do not occur in C COR (meta instructions, repetitions) 3. non-identical but corresponding items (recognition errors, reformulations)Since it is particularly necessary to correctly align items of the third group, standard string-edit distance based methods(Levenshtein, 1966) need to be augmented. Therefore we use a more sophisticated3 This approach can easily be generalized to multiple label chains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our case, the discrete outcomes of the random variables y correspond to the annotation labels described in the previous section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recently, stochastic gradient descent methods such as Online LBFGS(Schraudolph et al., 2007) have been shown to perform competitively.6 Here, the Markov blanket of a factor \u03c6 c denotes the set of variables occurring in factors that share variables with \u03c6 c , noninclusive of the variables of \u03c6 c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.ofai.at/\u02dcjeremy.jancsary/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work presented here has been carried out in the context of the Austrian KNet competence network COAST. We gratefully acknowledge funding by the Austrian Federal Ministry of Economics and Labour, and ZIT Zentrum fuer Innovation und Technologie, Vienna. The Austrian Research Institute for Artificial Intelligence is supported by the Austrian Federal Ministry for Transport, Innovation, and Technology and by the Austrian Federal Ministry for Science and Research.Furthermore, we would like to thank our anonymous reviewers for many insightful comments that helped us improve this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ASTM E2184-02: Standard specification for healthcare document formats",
"authors": [
{
"first": "",
"middle": [],
"last": "Astm International",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ASTM International. 2002. ASTM E2184-02: Standard specification for healthcare document formats.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Advances in domain independent linear text segmentation",
"authors": [
{
"first": "Freddy",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the first conference on North American chapter of the Association for Computation Linguistics",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Choi. 2000. Advances in domain independent linear text segmentation. In Proceedings of the first conference on North American chapter of the Associa- tion for Computation Linguistics, pages 26-33.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WordNet: an electronic lexical database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum. 1998. WordNet: an electronic lexical database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Texttiling: Segmenting text into multi-paragraph subtopic passages",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "1",
"pages": "36--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):36-47.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the uniqueness of loopy belief propagation fixed points",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Heskes",
"suffix": ""
}
],
"year": 2004,
"venue": "Neural Comput",
"volume": "16",
"issue": "11",
"pages": "2379--2413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Heskes. 2004. On the uniqueness of loopy belief propagation fixed points. Neural Comput., 16(11):2379-2413.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mismatch interpretation by semantics-driven alignment",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Huber",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Jancsary",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of KONVENS '06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Huber, Jeremy Jancsary, Alexandra Klein, Jo- hannes Matiasek, and Harald Trost. 2006. Mismatch interpretation by semantics-driven alignment. In Pro- ceedings of KONVENS '06.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic mod- els for segmenting and labeling sequence data. In Pro- ceedings of the Eighteenth International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Toward a more global and coherent segmentation of texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Amghar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Levrat",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Saubion",
"suffix": ""
}
],
"year": 2008,
"venue": "Applied Artificial Intelligence",
"volume": "23",
"issue": "",
"pages": "208--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Lamprier, T. Amghar, B. Levrat, and F. Saubion. 2008. Toward a more global and coherent segmen- tation of texts. Applied Artificial Intelligence, 23:208- 234, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Binary codes capable of correcting deletions, insertions and reversals",
"authors": [
{
"first": "Vladimir",
"middle": [
"I"
],
"last": "Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet Physics Doklady",
"volume": "10",
"issue": "8",
"pages": "707--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10(8):707-710.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical methods for text segmentation and topic detection",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matsuov",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Matsuov. 2003. Statistical methods for text segmentation and topic detection. Master's the- sis, Rheinisch-Westf\u00e4lische Technische Hochschule Aachen.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Flexible text segmentation with structured multilabel classification",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP)",
"volume": "",
"issue": "",
"pages": "987--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Flexible text segmentation with structured multilabel classification. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 987-994.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Updating Quasi-Newton matrices with limited storage",
"authors": [
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1980,
"venue": "Mathematics of Computation",
"volume": "35",
"issue": "",
"pages": "773--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorge Nocedal. 1980. Updating Quasi-Newton matri- ces with limited storage. Mathematics of Computa- tion, 35:773-782.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Pevzner and Marti Hearst. 2002. A critique and improvement of an evaluation metric for text segmen- tation. Computational Linguistics, 28(1), March.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hanging on the metaphone",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Philips",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Language",
"volume": "",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Philips. 1990. Hanging on the metaphone. Computer Language, 7(12).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. R. Rabiner. 1989. A tutorial on hidden Markov mod- els and selected applications in speech recognition. Proceedings of the IEEE, 77:257-286, February.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning CRFs with hierarchical features: An application to go",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Sanner",
"suffix": ""
},
{
"first": "Thore",
"middle": [],
"last": "Graepel",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Machine Learning (ICML) workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Sanner, Thore Graepel, Ralf Herbrich, and Tom Minka. 2007. Learning CRFs with hierarchical fea- tures: An application to go. International Conference on Machine Learning (ICML) workshop.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A stochastic Quasi-Newton Method for online convex optimization",
"authors": [
{
"first": "Nicol",
"middle": [
"N"
],
"last": "Schraudolph",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "G\u00fcnter",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of 11th International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicol N. Schraudolph, Jin Yu, and Simon G\u00fcnter. 2007. A stochastic Quasi-Newton Method for online convex optimization. In Proceedings of 11th International Conference on Artificial Intelligence and Statistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Machine learning in automated text categorization",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Computing Surveys",
"volume": "34",
"issue": "1",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Sebastiani. 2002. Machine learning in auto- mated text categorization. ACM Computing Surveys, 34(1):1-47.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Composition of Conditional Random Fields for transfer learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technologies / Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2005. Composi- tion of Conditional Random Fields for transfer learn- ing. In Proceedings of Human Language Technologies / Empirical Methods in Natural Language Processing (HLT/EMNLP).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An introduction to Conditional Random Fields for relational learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Introduction to Statistical Relational Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2007. An intro- duction to Conditional Random Fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tree-based reparameterization framework for analysis of sum-product and related algorithms",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Wainwright",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"S"
],
"last": "Willsky",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Transactions on Information Theory",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Wainwright, Tommi Jaakkola, and Alan S. Will- sky. 2002. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Transactions on Information Theory, 49(5).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning to parse hierarchical lists and outlines using Conditional Random Fields",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Viola",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Ninth International Workshop on Frontiers in Handwriting Recognition (IWFHR'04)",
"volume": "",
"issue": "",
"pages": "154--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Ye and Paul Viola. 2004. Learning to parse hi- erarchical lists and outlines using Conditional Ran- dom Fields. In Proceedings of the Ninth International Workshop on Frontiers in Handwriting Recognition (IWFHR'04), pages 154-159. IEEE Computer Soci- ety.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Understanding Belief Propagation and its Generalizations, Exploring Artificial Intelligence in the New Millennium",
"authors": [
{
"first": "Jonathan",
"middle": [
"S"
],
"last": "Yedidia",
"suffix": ""
},
{
"first": "William",
"middle": [
"T"
],
"last": "Freeman",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2003,
"venue": "Science & Technology Books",
"volume": "",
"issue": "8",
"pages": "236--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan S. Yedidia, William T. Freeman, and Yair Weiss, 2003. Understanding Belief Propagation and its Gen- eralizations, Exploring Artificial Intelligence in the New Millennium, chapter 8, pages 236-239. Science & Technology Books, January.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Raw output of speech recognition"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Multi-level segmentation as tagging problem"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Mapping labels via alignment"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "and ensures the probabilistic interpretation of p(y|x). The graphical model G describes interdependencies between the variables y; we can then model p(y|x) via factors \u03c6 c (\u2022) that are defined over cliques c \u2208 G. The factors \u03c6 c (\u2022) are computed from sufficient statistics {f ck (\u2022)} of the distribution (corresponding to the features mentioned in the previous section) and depend on possibly overlapping sets of parameters \u03b8 c \u2286 \u03b8 which together form the parameters \u03b8 of the conditional distribution:"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Accuracy vs. loss function on C RCG-ALL"
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Estimated Accuracies</td><td/><td colspan=\"3\">Estimated Accuracies</td></tr><tr><td/><td>Acc.</td><td>\u00b1</td><td/><td>Acc.</td><td>\u00b1</td></tr><tr><td colspan=\"3\">Average 96.48% 0.82</td><td colspan=\"3\">Average 87.73% 2.07</td></tr><tr><td>Chain 0</td><td colspan=\"2\">99.55% 0.08</td><td>Chain 0</td><td colspan=\"2\">93.77% 0.68</td></tr><tr><td>Chain 1</td><td colspan=\"2\">94.64% 0.23</td><td>Chain 1</td><td colspan=\"2\">87.59% 1.79</td></tr><tr><td>Chain 2</td><td colspan=\"2\">95.25% 2.16</td><td>Chain 2</td><td colspan=\"2\">81.81% 3.79</td></tr><tr><td>Joint</td><td colspan=\"2\">90.65% 2.15</td><td>Joint</td><td>70.91%</td><td>4.50</td></tr><tr><td colspan=\"2\">(a) CCOR-BEST</td><td/><td colspan=\"2\">(b) CRCG-BEST</td></tr></table>",
"num": null,
"text": "Accuracy on the full corpus",
"html": null
}
}
}
}