{
"paper_id": "M95-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:47.646824Z"
},
"title": "STATISTICAL SIGNIFICANCE OF MUC-6 RESULT S",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M95-1004",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The results of the MUC-6 evaluation must be analyzed to determine whether close scores significantl y distinguish systems or whether the differences in those scores are a matter of chance. In order to do such an analysis , a method of computer intensive hypothesis testing was developed by SAIC for the MUC-3 results and has been use d for distinguishing MUC scores since that time . The implementation of this method for the MUC evaluations was firs t described in [1] and later the concepts behind the statistical model were explained in a more understandable manne r in [2] . This paper gives the results of the statistical testing for the three MUC-6 tasks where a single metric could b e associated with a system's performance .",
"cite_spans": [
{
"start": 466,
"end": 469,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 573,
"end": 576,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The general method employed to analyze the MUC-6 results is the Approximate Randomization method described in [3] . It is a computer intensive method which approximates the entire sample space in such a way as t o allow us to determine the significance of the differences in F-Measures between each pair of systems and th e confidence in that significance . The general method was applied on the basis of a message-by-message shuffling of a pair of MUC systems' responses to rule out differences that could have occurred by chance and to give us a picture o f the similarities of the systems in terms of performance .",
"cite_spans": [
{
"start": 110,
"end": 113,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
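{
"text": "The shuffling procedure can be sketched as follows (a minimal illustration in Python; this is not the SAIC awk/C implementation, and the per-message tally format and all names are invented for exposition):\n\nimport random\n\ndef f_measure(tallies):\n    # tallies: list of per-message (correct, produced, possible) count triples\n    correct = sum(t[0] for t in tallies)\n    produced = sum(t[1] for t in tallies)\n    possible = sum(t[2] for t in tallies)\n    recall = correct / possible\n    precision = correct / produced\n    return 2 * precision * recall / (precision + recall)\n\ndef significance(a, b, shuffles=10000):\n    # Approximate Randomization: swap the two systems' per-message\n    # responses at random and count how often the shuffled F-Measure\n    # difference is at least as large as the observed one\n    observed = abs(f_measure(a) - f_measure(b))\n    extreme = 0\n    for _ in range(shuffles):\n        sa, sb = [], []\n        for ta, tb in zip(a, b):\n            if random.random() < 0.5:\n                sa.append(ta); sb.append(tb)\n            else:\n                sa.append(tb); sb.append(ta)\n        if abs(f_measure(sa) - f_measure(sb)) >= observed:\n            extreme += 1\n    return (extreme + 1) / (shuffles + 1)\n\nA returned significance level below the 0.01 cutoff indicates that the observed difference is unlikely to be due to chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},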
{
"text": "The method sorts systems into like and unlike categories . The results are shown in the following three table s for Named Entity, Template Element, and Scenario Template . These three all use the F-Measure as the single measur e for systems as defined in [4] and in the MUC-6 Test Scores appendix to this proceedings . The parameters in the F -Measure used are such that recall and precision scores are combined with equal weighting . Note that Coreference was not characterized by F or any other unified measure because of the linkages that were being evaluated . Of course, an F-Measure is calculable, but more research is necessary before we can conclude that it will combine recall an d precision in a way that is meaningful for these evaluations .",
"cite_spans": [
{
"start": 255,
"end": 258,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
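{
"text": "With recall R and precision P combined at equal weighting (van Rijsbergen's measure with beta set to 1), the single measure reduces to the harmonic mean F = 2PR / (P + R). For example, R = 0.60 and P = 0.70 give F of about 0.646, closer to the lower of the two scores than the arithmetic mean would be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},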
{
"text": "The statistical results reported here are based on the strictest cutoff point for significance level (0 .01) an d high confidence in the assigned level (at least 99%) . What this method does not tell us is a numerical range withi n which F is not a significant distinguisher (such as plus or minus 3%) . Instead it provides lists of similar systems . We have to be careful to not confuse the numerical order of the F-Measures with a ranking of systems and to instead loo k at the groupings on these charts . If a group or a single system is off by itself, then that group or single system i s significantly different from its non-members . However, if there is overlap (and there is a lot of it in these results), the n the ranking of the grouped systems is impossible. In addition, two similarly acting systems could use very differen t approaches to data extraction, so there may be some other value that distinguishes these systems that has not been measured in MUC-6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "To prevent human error, the entire process of doing the statistical analysis is automated . An awk program extracts tallies that appear in the score report output by the scoring software and puts them in a file to be fed to the C program for approximate randomization . The C program re-calculates F-measure, recall, and precision from raw tallies for higher accuracy than during the approximate randomization comparisons . The scoring program is slow i n emacslisp and would be slowed further by calculations with higher accuracy . The statistical program outputs th e significance and confidence levels in a matrix format for the analyst to inspect . Although 10,000 shuffles are carried out, the C program is fast . Results are depicted in lists of systems that are all equivalent, i .e ., the differences in thei r scores were due to chance .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing",
"sec_num": null
},
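{
"text": "The pipeline can be pictured as follows (a hypothetical sketch; the score report format and the program and file names are not given in this paper and are invented here): awk -f extract_tallies.awk score.report > tallies.txt pulls the raw tallies out of the score report, and approx_rand tallies_a.txt tallies_b.txt 10000 runs the shuffles and prints the significance and confidence matrix for the analyst.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing",
"sec_num": null
},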
{
"text": "The results are reported in a tabular format . The row headings contain the F-Measures for the systems an d the rows are ordered from highest to lowest F. The columns are ordered in the same way as the rows and the header s contain the numerical order of the F values rather than the F value itself because of the size of the table on the page .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "To use the table, you first determine which system you are interested in and identify its F-Measure in the left column, then look across the row or down the corresponding column to see which systems' F-Measures its F-Measure is not significantly different from . The systems that make up that group can be considered to have gotte n their different F-Measures just by chance .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "You can see, for instance, that among the Named Entity systems, the two lowest scoring systems ar e significantly different from each other and all of the all of the other systems . The two systems above them form a group which are significantly different from the other systems, but not from each other . A similar case appears i n Template Element at the low and high end of the scores . However, the important thing to note is that there is a larg e amount of overlap otherwise . The Scenario Template test shows even more overlap than the other two tasks .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "The groupings in these tables allow an ordering that is less clean than we would like, but that is realistic a t this point in the evaluation methodology research . In addition to looking at the scores, evaluation research on a mor e granular level is needed to understand the differences in the systems' performance . Such research could revea l strengths and weaknesses in extracting certain information and lead to test designs that focus research in areas tha t will directly impact operational value . Also, other factors that are of interest to consumers, such as speed , development data requirements, and so on, need to be considered when making comprehensive comparisons o f systems .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "The entire community would benefit from more refined measured values and a better understanding of ho w the differences in human performance influence the results . Distinguishing systems at such a strict cutoff as we use i n the statistics may only be justified if variations in human performance are smaller . After all, it is the human interpretation of the task definitions that informs the systems during development . Especially in Named Entity where machine performance and human performance are close, we would expect to see inherent human differences i n interpreting language during both system and answer key development to be a considerable factor holding th e machines back .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
}
],
"back_matter": [
{
"text": "Similar4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 2 0 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NE Statistical Results",
"sec_num": null
},
{
"text": "48.14 3 3 6/ 3 3 3 3 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3 3 3 6/ 3 3 3 3",
"sec_num": "48.96"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating Message Understanding Systems : An Analysi s of the Third Message Understanding Conference (MUC-3)",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N., Hirschman, L ., and D . Lewis (1993) \"Evaluating Message Understanding Systems : An Analysi s of the Third Message Understanding Conference (MUC-3) \" Computational Linguistics 19(3) .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Statistical Significance of the MUC-4 Results",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Fourth Messag e Understanding Conference (MUC-4)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N . (1992) . \"The Statistical Significance of the MUC-4 Results\" Proceedings of the Fourth Messag e Understanding Conference (MUC-4) . Morgan Kaufmann, Publishers . San Mateo, CA .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Computer Intensive Methods for Testing Hypotheses : An Introduction",
"authors": [
{
"first": "W",
"middle": [],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noreen, W. (1989) Computer Intensive Methods for Testing Hypotheses : An Introduction . John Wiley & Sons .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Information Retrieval",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Rijsbergen, C .J . (1979) Information Retrieval. London : Butterworths .",
"links": null
}
},
"ref_entries": {}
}
}