{
"paper_id": "N12-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:04:57.909642Z"
},
"title": "Getting More from Segmentation Evaluation",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Scaiano",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Ottawa Ottawa",
"location": {
"postCode": "K1N 6N5",
"region": "ON",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Ottawa Ottawa",
"location": {
"postCode": "K1N 6N5",
"region": "ON",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a new segmentation evaluation measure, WinPR, which resolves some of the limitations of WindowDiff. WinPR distinguishes between false positive and false negative errors; produces more intuitive measures, such as precision, recall, and F-measure; is insensitive to window size, which allows us to customize near miss sensitivity; and is based on counting errors not windows, but still provides partial reward for near misses.",
"pdf_parse": {
"paper_id": "N12-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a new segmentation evaluation measure, WinPR, which resolves some of the limitations of WindowDiff. WinPR distinguishes between false positive and false negative errors; produces more intuitive measures, such as precision, recall, and F-measure; is insensitive to window size, which allows us to customize near miss sensitivity; and is based on counting errors not windows, but still provides partial reward for near misses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "WindowDiff (Pevzner and Hearst, 2002) has become the most frequently used measure to evaluate segmentation. Segmentation is the task of dividing a stream of data (text or other media) into coherent units. These units may be motivated topically (Malioutov and Barzilay, 2006) , structurally (Stokes, 2003) (Malioutov et al., 2007) (Jancsary et al., 2008) , or visually (Chen et al., 2008) , depending on the domain and task. Segmentation evaluation is difficult because exact comparison of boundaries is too strict; a partial reward is required for close boundaries.",
"cite_spans": [
{
"start": 11,
"end": 37,
"text": "(Pevzner and Hearst, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 244,
"end": 274,
"text": "(Malioutov and Barzilay, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 290,
"end": 304,
"text": "(Stokes, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 305,
"end": 329,
"text": "(Malioutov et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 330,
"end": 353,
"text": "(Jancsary et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 368,
"end": 387,
"text": "(Chen et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"The WindowDiff metric is a variant of the P k measure, which penalizes false positives and near misses equally.\" (Malioutov et al., 2007) . WindowDiff uses a sliding window over the segmentation; each window is evaluated as correct or incorrect. WindowDiff is effectively 1 \u2212 accuracy for all windows, but accuracy is sensitive to the balance of positive and negative data being evaluated. The positive and negative balance is determined by the window size. Small windows produce more negatives, thus WindowDiff recommends using a window size (k) of half the average segment length. This produces an almost equal number of positive windows (containing boundaries) and negative windows (without boundaries).",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "(Malioutov et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "Equation 1 represents the window size (k), where N is the total number of sentences (or content units). Equation 2 is WindowDiff's traditional definition, where R is the number of reference boundaries in the window from i to i+k, and C is the number of computed boundaries in the same window. The comparison (> 0) is sometimes forgotten, which produces strange values not bound between 0 and 1; thus we prefer equation 3 to represent WindowDiff, as it emphasizes the comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k = N 2 * number of segments (1) WindowDiff = 1 N \u2212 k N \u2212k i=0 (|R i,i+k \u2212 C i,i+k | > 0)(2) WindowDiff = 1 N \u2212 k N \u2212k i=0 (R i,i+k = C i,i+k )",
"eq_num": "(3)"
}
],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "Figure 1 illustrates WindowDiff's sliding window evaluation. Each rectangle represents a sentence, while the shade indicates to which segment it truly belongs (reference segmentation). The vertical line represents a computed boundary. This example contains a near miss (misaligned boundary). In this example, we are using a window size of 5. The columns i, R, C, W represent the window position, the number of boundaries from the reference (true) segmentation in the window, the number of boundaries from the computed segmentation in the window, and whether the values agree, respectively. Only windows up to i = 5 are shown, but to process the entire segmentation 8 windows are required. Franz et al. (2007) note that WindowDiff does not allow different segmentation tasks to optimize different aspects, or tolerate different types of errors. Tasks requiring a uniform theme in a segment might tolerate false positives, while tasks requiring complete ideas or complete themes might accept false negatives. Georgescul et al. (2009) note that while Win-dowDiff technically penalizes false positives and false negatives equally, false positives are in fact more likely; a false positive error occurs anywhere were there are more computed boundaries than boundaries in the reference, while a false negative error can only occur when a boundary is missed. Consider figure 1, only 3 of the 8 windows contain a boundary; only those 3 windows may have false negatives (a missed boundary), while all other windows may contain false positives (too many boundaries). Lamprier et al. (2008) note that errors near the beginning and end of a segmentation are actually counted slightly less than other errors. Lamprier offers a simple correction for this problem, by adding k \u2212 1 phantom positions, which have no boundaries, at the beginning and at the end sequence. 
The addition of these phantom boundaries allows for windows extending outside the segmentation to be evaluated, and thus allowing for each position to be count k times. Example E in figure 4 in the next section will illustrate this point. Consider example D in figure 4; this error will only be accounted for in the first window, instead of the typical k windows.",
"cite_spans": [
{
"start": 689,
"end": 708,
"text": "Franz et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 1007,
"end": 1031,
"text": "Georgescul et al. (2009)",
"ref_id": "BIBREF2"
},
{
"start": 1557,
"end": 1579,
"text": "Lamprier et al. (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "i R C W 0 0 0 D 1 0 0 D 2 0 1 X 3 1 D 4 1 1 D 5 1 0 X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "Furthermore, tasks may want to adjust sensitivity or reward for near misses. Naturally, one would be inclined to adjust the window size, but changing the window size will change the balance of positive windows and negative windows. Changing this balance has a significant impact on how WindowDiff functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "Some researchers have questioned what the Win-dowDiff value tells us; how do we interpret it?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WindowDiff",
"sec_num": "2"
},
{
"text": "WinPR is derived from WindowDiff, but differs on one main point: WinPR evaluates boundary positions, while WindowDiff evaluates regions (or windows). WinPR is a set of equations (4-7) ( Figure 2 ) producing a confusion matrix. The confusion matrix allows for the distinction between false positive and negative errors, and can be used with Precision, Recall, and F-measure. Furthermore, the window size may be changed to adjust near-miss sensitivity without affecting the the interpretation of the confusion matrix.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "N is the number of content units and k represents the window size. WinPR includes the Lamprier (2008) correction, thus the sum is from 1 \u2212 k to N instead of 1 to N \u2212 k as with WindowDiff. min and max refer to the tradition computer science functions which select the minimal or maximal value from a set of two values. True negatives (5) start with a negative term, which removes the value of the phantom positions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "Each WinPR equation is a summation over all windows. To understand the intuition behind each equation, consider Figure 3 . R and C represent the number of boundaries from the reference and computed segmentations, respectively, in the i th window, up to a maximum of k. The overlapping region represents the TPs. The difference is the error, while the sign of the difference indicates whether they are FPs or FNs. The WinPR equations select the difference using the max function, forcing negative values to 0. The remainder, up to k, represents the TNs. ",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "k C i R i 0 0 C R TP error TN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "True Positives = TP = N i=1\u2212k min(R i,i+k , C i,i+k )",
"eq_num": "(4)"
}
],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Negatives = TN = \u2212k(k \u2212 1) + N i=1\u2212k (k \u2212 max(R i,i+k , C i,i+k ))",
"eq_num": "(5)"
}
],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "False Positives = FP = N i=1\u2212k max(0, C i,i+k \u2212 R i,i+k )",
"eq_num": "(6)"
}
],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "False Negatives = FN = N i=1\u2212k max(0, R i,i+k \u2212 C i,i+k )",
"eq_num": "(7)"
}
],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "Figure 2: Equations for the WinPR confusion matrix 3 = (6/2). Each window contains 3 content units, thus we consider 4 potential boundary positions (the edges are inclusive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WinPR",
"sec_num": "3"
},
{
"text": "[Legend from Figure 4: A) Correct boundary; B) Missed boundary; C) Near boundary; D) Extra boundary; E) Extra boundaries]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WinPR",
"sec_num": null
},
{
"text": "Extra boundaries Example A provides a baseline for comparison; B is a false negative (a missed boundary); C is a near miss; D is an extra boundary at the beginning of the sequence, providing an example of Lamprier's criticism. E includes two errors near each other. Notice how the additional errors in E have have a very small impact on the WindowDiff value. WindowDiff should penalize an error k times, once for each window in which it appears, with the exception of near misses which have partial reward and penalization. D is only penalized in one window, because most of the other windows would be outside the sequence. E contains two errors, but they are not fully penalized because they appear in overlapping windows. Furthermore, using a single metric does not indicate if the errors are false positives or false negatives. This information is important to the development of a segmentation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A) Correct boundary B)",
"sec_num": null
},
{
"text": "If we apply WinPR to examples A-E, we get the results in Table 2 . We will calculate precision and recall using the WinPR confusion matrix, shown under WinP and WinR respectively. You will note that we can easily see whether an error is a false positive or a false negative. As we would expect, false positives affect precision, and false negatives affect recall. Near misses manifest as equal parts false positive and false negative. In example E, each error is counted, unlike WindowDiff. In Table 2 , note that each potential boundary position is considered k (the window size) times. Thus, each positive or negative boundary assignment is counted k times; near misses producing a blend of values: TP, FP, FN. We refer to the normalized con-fusion matrix (or normalized WinPR), as the confusion matrix divided by the window size. If near misses are not considered, this confusion matrix gives the exact count of boundary assignments.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 494,
"end": 501,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "A) Correct boundary B)",
"sec_num": null
},
{
"text": "What is not apparent in Table 2 , is that WinPR is insensitive to window size, with the exception of near misses. Thus adjusting the window size can be used to adjust the tolerance or sensitivity to near misses. Large window sizes are more forgiving of near misses, smaller window size are more strict.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "A) Correct boundary B)",
"sec_num": null
},
{
"text": "WinPR does not provide any particular values indicating the number of near misses, their distance, or contribution to the evaluation. Because WinPR's window size only affects near miss sensitivity, and not the positive/negative balance like in WindowDiff, we can subtract two normalized confusion matrices using different window sizes. The difference between the confusion matrices gives the impact of near misses under different window sizes. Choosing a very strict window size (k = 1), and subtracting it from another window size would effectively provide the contribution of the near misses to the confusion matrix. In many circumstances, using several window sizes may be desirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Near Misses and Window Size",
"sec_num": "3.1"
},
{
"text": "We ran numerous tests on artificial segmentation data composed of 40 segments, with a mean segment length of 40 content units, and standard deviations varying from 10 to 120. All tests showed that a false positive or a false negative error is always penalized k times, as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variations in Segment Size: Validation by Simulation",
"sec_num": "3.2"
},
{
"text": "Using a reference segmentation of 40 segments, we derived two flawed segments: we added 20 extra boundaries to one, and removed 18 boundaries from the other. Both produced WindowDiff values of 0.22, while WinPR provided WinP = 0.66 and WinR = 1.0 for the addition of boundaries and WinP = 1.00 and WinR = 0.54 for the removal of boundaries. WinPR highlights the differences in the nature of the two flawed segmentations, while WinDiff masks both the number and types of errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WinPR Applied to a Complete Segmentation",
"sec_num": "3.3"
},
{
"text": "We presented a new evaluation method for segmentation, called WinPR because it produces a confusion matrix from which Precision and Recall can be derived. WinPR is easy to implement and provides more detail on the types of errors in a computed segmentation, as compared with the reference. Some of the major benefits of WinPR, as opposed to Win-dowDiff are presented below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "1. Distinct counting of false positives and false negatives, which helps in algorithm selection for downstream tasks and helps with analysis and optimization of an algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "2. The confusion matrix is easier to interpret than a WindowDiff value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "3. WinPR counts errors from boundaries, not windows, thus close errors are not masked 4. Precision, and Recall are easier to understand than WindowDiff.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "5. F-measure is effective when a single value is required for comparison. 2008 WinPR counts boundaries, not windows, which has analytical benefits, but WindowDiff's counting of windows provides an evaluation of segmentation by region. Thus WindowDiff is more appropriate when an evaluator is less interested in the types and the number of errors and more interested in the percentage of the sequence that is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "Thanks to Dr. Stan Szpakowicz for all his help refining the arguments and the presentation of this paper. Thanks to Anna Kazantseva for months of discussions about segmentation and the evaluation problems we each faced. Thanks to Natural Sciences and Engineering Research Council of Canada (NSERC) for funding our research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Movie scene segmentation using background information",
"authors": [
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liao",
"suffix": ""
}
],
"year": 2008,
"venue": "Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L Chen, YC Lai, and H Liao. 2008. Movie scene segmentation using background information. Pattern Recognition, Jan.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "User-oriented text segmentation evaluation measure",
"authors": [
{
"first": "M",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mccarley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2007,
"venue": "SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Franz, J McCarley, and J Xu. 2007. User-oriented text segmentation evaluation measure. SIGIR '07 Pro- ceedings of the 30th annual international ACM SIGIR conference on Research and development in informa- tion retrieval, Jan.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An analysis of quantitative aspects in the evaluation of thematic segmentation algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Georgescul",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Armstrong",
"suffix": ""
}
],
"year": 2009,
"venue": "SigDIAL '06 Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Georgescul, A Clark, and S Armstrong. 2009. An analysis of quantitative aspects in the evaluation of the- matic segmentation algorithms. SigDIAL '06 Proceed- ings of the 7th SIGdial Workshop on Discourse and Dialogue, Jan.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Revealing the structure of medical dictations with conditional random fields",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jancsary",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Matiasek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Trost",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Jancsary, J Matiasek, and H Trost. 2008. Revealing the structure of medical dictations with conditional ran- dom fields. EMNLP '08 Proceedings of the Confer- ence on Empirical Methods in Natural Language Pro- cessing, Jan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On evaluation methodologies for text segmentation algorithms",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Amghar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levrat",
"suffix": ""
}
],
"year": 2008,
"venue": "19th IEEE International Conference on Tools with Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S Lamprier, T Amghar, and B Levrat. 2008. On evalu- ation methodologies for text segmentation algorithms. 19th IEEE International Conference on Tools with Ar- tificial Intelligence -Vol.2, Jan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Making sense of sound: Unsupervised topic segmentation over acoustic input",
"authors": [
{
"first": "I",
"middle": [],
"last": "Malioutov",
"suffix": ""
},
{
"first": "; I",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Malioutov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I Malioutov and R Barzilay. 2006. Minimum cut model for spoken lecture segmentation. ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Associ- ation for Computational Linguistics, Jan. I Malioutov, A Park, R Barzilay, and R Glass. 2007. Making sense of sound: Unsupervised topic segmen- tation over acoustic input. Proceeding of the Annual Meeting of the Association for Computation Linguis- tics 2007, Jan.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L Pevzner and M Hearst. 2002. A critique and improve- ment of an evaluation metric for text segmentation. Computational Linguistics, Jan.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Spoken and written news story segmentation using lexical chains",
"authors": [
{
"first": "",
"middle": [],
"last": "Stokes",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: HLT-NAACL2003 Student Research Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N Stokes. 2003. Spoken and written news story seg- mentation using lexical chains. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: HLT-NAACL2003 Student Re- search Workshop, Jan.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Illustration of counting boundaries in windows",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "WinPR within Window Counting Demostration Consider how WindowDiff and WinPR handle the examples in Figure 4. These examples use the same basic representation as Figure 1 in section 2. Each segment is 6 units long and the window size is",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Example segmentations",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "the window size can customize an evaluation's tolerance of near misses 8. WinPR provides a method of detecting the impact of near misses on an evaluation",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": "lists the number of correct and incorrect windows, and the WindowDiff value for each example.",
"html": null,
"content": "<table><tr><td colspan=\"4\">Example Correct Incorrect WindowDiff</td></tr><tr><td>A</td><td>10</td><td>0</td><td>0</td></tr><tr><td>B</td><td>6</td><td>4</td><td>0.4</td></tr><tr><td>C</td><td>8</td><td>2</td><td>0.2</td></tr><tr><td>D</td><td>9</td><td>1</td><td>0.1</td></tr><tr><td>E</td><td>4</td><td>6</td><td>0.6</td></tr><tr><td colspan=\"4\">Table 1: WindowDiff values for examples A to E</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": "WinPR values for examples A to E",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}