|
{ |
|
"paper_id": "M98-1030", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:16:15.370394Z" |
|
}, |
|
"title": "", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Dennis", |
|
"middle": [], |
|
"last": "Connolly", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Lynette", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Each record consists of a header and a list of slots. The header is an identification string for the object, followed by the token \":=\" on the same line. The header's identification string is enclosed in angle brackets, and consists of three pieces of information: Object type Document Number One-up Number Each slot in the body of the record consists of a slot name, followed by a colon, and the slot's fills. Set fills and string fills may be enclosed in matching single or double quotes. The format of pointer fills is the same as that of the string which identifies an object in its header.", |
|
"pdf_parse": { |
|
"paper_id": "M98-1030", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Each record consists of a header and a list of slots. The header is an identification string for the object, followed by the token \":=\" on the same line. The header's identification string is enclosed in angle brackets, and consists of three pieces of information: Object type Document Number One-up Number Each slot in the body of the record consists of a slot name, followed by a colon, and the slot's fills. Set fills and string fills may be enclosed in matching single or double quotes. The format of pointer fills is the same as that of the string which identifies an object in its header.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The information extraction task descriptions often include a BNF which describes the different types of objects in the task. The scorer makes some some further assumptions about the format of template files which are not specified in the BNF's:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template File Caveats", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "all objects from a document should be grouped in one place in the template file. an object's header should be on its own line. if a line has a slot name, the name should be the first non-blank token on the line. there should be only one fill per line. a line containing a fill may have \"link information\" at the end of the line: SLOT_NAME: \"a slot fill\" ##392#404#textsfilename This is a pair of pound signs (\"##\") followed by the \"start offset\" of the fill, then a single pound sign followed by the \"end offset\" of the fill, then another single pound sign, followed by the name of the texts file. None of the offset information is used in scoring, but it may be used in later versions of the scorer to highlight portions of the texts file. At present the scorer reads the start offset and end offset, but ignores the name of the texts file. The texts file name should not contain any pound signs. comments may be inserted into the template files on lines that have a pound sign or a semicolon as the very first character on a line.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template File Caveats", |
|
"sec_num": null |
|
}, |
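{
"text": "To make the slot and link-information format concrete, here is a small illustrative sketch of how such a line could be pulled apart. The regular expression and the function name are ours, not the scorer's; they simply follow the description above (slot name first, one fill, optional link information at the end of the line).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template File Caveats",
"sec_num": null
},
{
"text": "import re\n\n# Illustrative sketch, not part of the MUC scorer: parse one slot line of the form\n#   SLOT_NAME: \"a slot fill\" ##392#404#textsfilename\n# The trailing link information is optional; the scorer reads the two offsets and\n# ignores the texts file name.\nSLOT_LINE = re.compile(r'^\\s*(?P<slot>\\S+):\\s*(?P<fill>.*?)'\n                       r'(?:\\s*##(?P<start>\\d+)#(?P<end>\\d+)#(?P<file>\\S+))?\\s*$')\n\ndef parse_slot_line(line):\n    m = SLOT_LINE.match(line)\n    if m is None:\n        return None\n    fill = m.group('fill').strip()\n    fill = fill.strip('\\'\"')  # fills may be enclosed in matching single or double quotes\n    return {'slot': m.group('slot'),\n            'fill': fill,\n            'start': int(m.group('start')) if m.group('start') else None,\n            'end': int(m.group('end')) if m.group('end') else None,\n            'file': m.group('file')}\n\nprint(parse_slot_line('SLOT_NAME: \"a slot fill\" ##392#404#textsfilename'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template File Caveats",
"sec_num": null
},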
|
{ |
|
"text": "The coreference and named entity tasks involve adding Standard Generalized Markup Language (SGML) to the the texts file to create the key and response files.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SGML Task Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SGML is a very flexible and powerful language for adding structure to computer documents. The MUC scoring software recognizes a subset of SGML when it scores the coreference and named entity tasks. This discussion is a (very) simplified description of SGML.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Scoring Software's View of SGML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An SGML tag is a character string inserted into a text file. Tags usually come in pairs, consisting of an open tag and a close tag. A pair of tags enclose a section of the text. For example, here is a piece of text, then the same text with some SGML tags added.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Scoring Software's View of SGML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Be glad you don't work On the Bungle-bung bridge, That they're building Across Boober Bay at Bum Ridge. <ADVICE> Be glad you don't work On the <STRUCTURE>Bungle-bung bridge</STRUCTURE>, That they're building Across <BODY TYPE=\"WATER\">Boober Bay</BODY> at <LOC>Bum Ridge</LOC>. </ADVICE> Open tags start with an open angle bracket, and are followed immediately by the generic identifier for that type of tag. Next come a sequence of attribute definitions for that type of tag. The end of an open tag is the close angle bracket. Close tags start with an open angle bracket, then a slash and the same generic identifier as close tag. Close tags don't have attributes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Scoring Software's View of SGML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the above example, the three tag pairs have generic identifiers ADVICE, STRUCTURE, BODY, and LOC. Only the BODY tag has an attribute, named TYPE, with a value of WATER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Scoring Software's View of SGML", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all MUC tasks, the texts file already has some SGML tags. In the coreference and named entity tasks, the annotators and systems add more tags to the texts to create the keys and responses. The scoring software converts the tags (together with the text they enclose) into objects which have the same internal structure as the objects for the information extraction tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion of SGML tags to MUC objects", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For example, here's some text marked up with TIMEX tags, which were part of the MUC6 named entity task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion of SGML tags to MUC objects", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "<TIMEX TYPE=\"DATE\" ALT=\"fiscal 1994\">the first six months of fiscal 1994</TIMEX> The scorer would convert the text into an object which in a template file would look like this:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion of SGML tags to MUC objects", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "<TIMEX-DOCNUM1-1> := TEXT: \"the first six months of fiscal 1994\" /\"fiscal 1994\" TYPE: DATE", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion of SGML tags to MUC objects", |
|
"sec_num": null |
|
}, |
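{
"text": "As an illustration of this conversion, a single tagged span can be turned into such a record with a few lines of code. This sketch is not the scorer's actual parser: the regular expressions, the function name, and the way the document number and one-up number are supplied are simplifications. The ALT attribute becomes a second TEXT alternative, as in the example above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of SGML tags to MUC objects",
"sec_num": null
},
{
"text": "import re\n\n# Illustrative sketch, not the scorer's parser: convert one tagged span such as\n#   <TIMEX TYPE=\"DATE\" ALT=\"fiscal 1994\">the first six months of fiscal 1994</TIMEX>\n# into a small record resembling the template object shown above.\nTAG = re.compile(r'<(?P<gid>\\w+)(?P<attrs>[^>]*)>(?P<text>.*?)</(?P=gid)>', re.S)\nATTR = re.compile(r'(\\w+)=\"([^\"]*)\"')\n\ndef to_record(markup, docnum='DOCNUM1', oneup=1):\n    m = TAG.search(markup)\n    attrs = dict(ATTR.findall(m.group('attrs')))\n    texts = [m.group('text')]\n    if 'ALT' in attrs:\n        texts.append(attrs['ALT'])  # the ALT value is an alternative TEXT fill\n    return {'id': '<%s-%s-%d>' % (m.group('gid'), docnum, oneup),\n            'TEXT': texts,\n            'TYPE': attrs.get('TYPE')}\n\nprint(to_record('<TIMEX TYPE=\"DATE\" ALT=\"fiscal 1994\">'\n                'the first six months of fiscal 1994</TIMEX>'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of SGML tags to MUC objects",
"sec_num": null
},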
|
{ |
|
"text": "In the coreference and named entity tasks, there are some things to be careful of when you are preparing keys or responses. One thing is to not delete or insert any characters outside of the SGML tags. Doing this almost always confuses the scoring software and lowers the score. To see if you've changed anything you shouldn't have, you can use the unix \"sed\" command, or something similar, as in this example with the coreference tags:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SGML task caveats", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "unix% sed 's/<COREF[^>]*>//g' rsp | sed 's/<\\/COREF[^>]*>//g' >rsp.notags unix% diff texts rsp.notags", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SGML task caveats", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The sed command above removes the COREF tags from the responses file (named rsp), and then compares what's left to the original texts file (named texts). The diff command will then show what part of the original texts file has been changed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SGML task caveats", |
|
"sec_num": null |
|
}, |
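{
"text": "If sed is not convenient, the same check can be sketched in a few lines of Python. The script below is only an illustration and is not part of the scoring software; it strips the COREF open and close tags from the response and reports the first character position at which what is left differs from the texts file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGML task caveats",
"sec_num": null
},
{
"text": "import re\nimport sys\n\n# Illustrative alternative to the sed/diff check above, not part of the scorer:\n# strip COREF open and close tags from the response file and report the first\n# place where what is left differs from the original texts file.\ndef strip_coref_tags(s):\n    return re.sub(r'</?COREF[^>]*>', '', s)\n\ndef first_difference(texts_path, rsp_path):\n    texts = open(texts_path).read()\n    stripped = strip_coref_tags(open(rsp_path).read())\n    for i, (a, b) in enumerate(zip(texts, stripped)):\n        if a != b:\n            return i\n    if len(texts) != len(stripped):\n        return min(len(texts), len(stripped))\n    return None  # None means no characters outside the tags were changed\n\nif __name__ == '__main__':\n    pos = first_difference(sys.argv[1], sys.argv[2])  # e.g. texts rsp\n    print('files identical after tag removal' if pos is None\n          else 'first difference at character %d' % pos)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGML task caveats",
"sec_num": null
},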
|
{ |
|
"text": "The MUC scoring software prints several reports to show how the key and response compared. There is a score report, which only shows \"the numbers.\" There's also report summary, which shows in more detail how the key and response objects were aligned. For the coreference task, there is a \"partitions\" file, which shows how the key and response equivalence classes compared. And there is a \"map history\" file, which gives a detailed, if not very readable, description of how the objects were aligned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Output File Formats", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The \"report summary\" files show how the fills and objects of the keys and responses align. There are three types of report summary files: one for the coreference task, one for the named entity task, and one for the information extraction tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Report Summary Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here's a section of a report summary file from the coreference task: Document 930620083 COR \"Clinton\" \"Clinton\" COR \"Clinton\" \"Clinton\" COR \"the White House\" \"White House\" COR \"The current briefing room\" \"The current briefing room\" MIS \"allies of the securities exchanges\" \"\" MIS \"securities\" \"\" MIS \"Clinton transition officials\" \"\" MIS \"government\" \"\" MIS \"the committee\" \"\" SPU \"\" \"Kitty Higgins\" SPU \"\" \"an aide\" SPU \"\" \"Michigan\" OPT \"the Clinton camp\" \"\" OPT \"the government\" \"\" OPT \"briefing\" \"\" A coreference report summary shows how the COREF objects were aligned by the scorer. Each line has three fields. The first field is a three letter abbreviation telling how a pair of objects are aligned. The abbeviations are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Correct. The key and response objects agree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COR", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Missing. There was a key object but no response object. SPU", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MIS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Spurious. There was a response object but no key object. OPT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MIS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Optional. There was a key object but no response object, but the key object was marked \"optional\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MIS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The second field gives the text from the key object (if any), and the third field gives the text from the reponse object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MIS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here's a section of a report summary from the Named Entity task:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "--------------------------------------------------------------------------------------------------------- Document 930620083 TAG TYPE TEXT KEY_TYPE RSP_TYPE KEY_TEXT RSP_TEXT --------------------------------------------------------------------------------------------------------- ENAMEX", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "cor cor PERSON PERSON \"Consuela Washington\" \"Consuela Washington\" ENAMEX cor inc PERSON PERSON \"John Dingell\" \"Washington\" ENAMEX cor inc PERSON PERSON \"Carter\" \"Tim Wirth\" TIMEX cor cor DATE DATE \"01/19/93\" \"01/19/93\" ENAMEX mis mis PERSON \"Washington\" \"\" ENAMEX spu spu ORGANIZATION \"\" \"Exchange\" ENAMEX spu spu ORGANIZATION \"\" \"Old Executive Office\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The named entity report summary file gives a one-line-per-object-pair description of how the objects were aligned. Each line has seven fields. The first is the generic identifier of the tag which defines the object. The second and third contain three-letter abbreviations for how the key and response objects or fills compared. The abbreviations are: cor Correct. The key and response fills agree. inc Incorrect. The key and response fills disagree. mis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Missing. There was a key fill but no response fill. spu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Spurious. There was a response fill but no key fill. opt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Optional. There was a key object but no response object, but the key object was marked \"optional\". The key object's fills are also counted as \"optional\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The fourth and fifth fields are the key and response TYPE fills, if there are any. The sixth and seventh fields are the key and response TEXT fields. If the key contained more than one TEXT fill (through use of the ALT attribute), the one that was aligned with the response fill is the one shown. Missing. There was a key but no response. spu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Spurious. There was a response but no key. opt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Optional. There was a key but no response, and the key object or slot was marked optional. uns Unscored. The object or slot isn't scored. rem", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Removed. This is for pointers to optional objects. If a key pointer points to an optional key object that was not aligned with any response object, the fill is \"removed,\" and doesn't count toward the score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The second column shows the name of the slots for the key and response object for the line (and the lines following if there are multiple fills in the slot).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The third and fourth columns show the key and response object records, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Report Summaries", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the coreference task, there is an extra report generated, which shows the COREF objects' equivalence classes, and how they are partitioned by the comparison between keys and responses. Key equivalence classes are surrounded by star characters (*****), and response equivalence classes by equal signs (=====).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference \"Partition\" Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here is a portion of a partition file that gives one key equivalence class from a MUC 6 document. Each line containing COREF objects begins with a \"C\" or an \"M\", for \"correct\" or \"missing.\" Correct objects' lines have, in order from left to right, the start offset of the noun phrase in the texts file. the end offset of the noun phrase in the texts file. the ID of the key COREF object. the ID of the key COREF object to which this object points (or \"NULL\" if the object has no REF attribute). the ID of the response COREF object aligned with the key coref object. the noun phrase that was marked up to create the object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference \"Partition\" Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the \"missing\" objects' lines, the fields are the same except the response object's ID is, of course, missing. Note that there are blank lines between some of the COREF object lines. These show the partitions of the key equivalence class by the response. While the key ties together every noun phrase between the stars, the response doesn't, so there are \"breaks\" in the equivalence class. These breaks are what are counted to get the recall error. The precision error is got from the response equivalence classes in a symmetric manner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference \"Partition\" Files", |
|
"sec_num": null |
|
}, |
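{
"text": "The counting of \"breaks\" can be made concrete with a small sketch. Here the key and response equivalence classes are assumed to be sets of mention identifiers; the function names and data structures are ours rather than the scorer's, but the arithmetic follows the description above: recall comes from how the response partitions each key equivalence class, and precision is computed symmetrically from the response classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference \"Partition\" Files",
"sec_num": null
},
{
"text": "# Illustrative sketch of the partition counting described above.\ndef pieces(key_class, response_classes):\n    # number of pieces the key equivalence class is broken into by the response:\n    # every response class that intersects it is one piece, and every mention of\n    # it that appears in no response class is a singleton piece of its own\n    covered = set()\n    count = 0\n    for rsp in response_classes:\n        overlap = key_class & rsp\n        if overlap:\n            count += 1\n            covered |= overlap\n    return count + len(key_class - covered)\n\ndef coref_recall(key_classes, response_classes):\n    # links recovered divided by links in the key; the breaks are the missing links\n    num = sum(len(s) - pieces(s, response_classes) for s in key_classes)\n    den = sum(len(s) - 1 for s in key_classes)\n    return num / den if den else 1.0\n\ndef coref_precision(key_classes, response_classes):\n    # symmetric: count the breaks in the response equivalence classes instead\n    return coref_recall(response_classes, key_classes)\n\n# made-up mention IDs: one key class of four mentions, broken up by the response\nkey = [{1, 2, 3, 4}]\nrsp = [{1, 2}, {3}]\nprint(coref_recall(key, rsp))  # 1 of the 3 key links is recovered -> 0.333...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference \"Partition\" Files",
"sec_num": null
},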
|
{ |
|
"text": "The \"map history\" output file is meant primarily for other computer programs to read. It consists of one large Tcl-style list. Each element of this list is itself a list which corresponds to one \"document\" from the keys and/or responses file. The document lists also contain lists, and this nesting of lists continues on down to the single fill level. Lists in the hierarchy consist of attribute name/attribute value pairs. Attribute names start with a hyphen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Map History Files", |
|
"sec_num": null |
|
}, |
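{
"text": "Because the whole file is one Tcl-style list, one convenient way to read it from another program is to let a Tcl interpreter do the splitting. The snippet below is only an illustration and makes two assumptions that are not the scorer's: that Python's tkinter module is available, and that the file has the default name \"map_history\" mentioned later. It splits the outer list into one element per document; nested levels can be split the same way, one level at a time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Map History Files",
"sec_num": null
},
{
"text": "import tkinter\n\n# Illustrative only: use the Tcl interpreter that comes with Python's tkinter to\n# split the outer Tcl-style list of the map history file. Each top-level element\n# corresponds to one document; deeper levels can be split with further calls.\ntcl = tkinter.Tcl()             # a bare Tcl interpreter, no GUI is created\nwith open('map_history') as f:  # the scorer's default output file name\n    data = f.read()\ndocuments = tcl.tk.splitlist(data)\nprint(len(documents), 'document-level elements')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Map History Files",
"sec_num": null
},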
|
{ |
|
"text": "In hierarchy order, the attributes are: How a single string fill looks when it is compared to another fill. Leading and trailing whitespace has been trimmed, certain substrings have been removed, and all intertoken whitespaces are changed to single space characters. For example, the string \"a corporation that manages the Seaport\" would be changed to \"that manages the seaport\" (depending on how the scorer is configured), because the premodifier \"a\" and the corporate designator \"corporation\" are both removed, and all characters are made lowercase. docnum the string identifying the document in the texts file doctallies the totals of the (in order) possible, actual, correct, partial, incorrect, missing, spurious, and noncommittal single-fill \"tallies\" for the entire document. doc_section in the SGML tasks (named entity and coreference), the name SGML tags which enclose the object in the texts document, e.g. \"HEADLINE\" or \"TEXT\". fill the fill as it appeared in the key or response file (with one exception: in the coreference task's REF fill, this is how the REF attribute would look if it were written as a template object pointer). key_obj_id", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Hierarchy of Map History Lists", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Document level -docnum -doctallies -class_pairs Class pair level -class_name -class_tallies -obj_pairs Object Pair level -obj_pair_status -obj_pair_tallies -key_obj_id -key_obj_optional -key_obj_rep_id -key_obj_start_offset -key_obj_end_offset -rsp_obj_id -rsp_obj_optional -rsp_obj_rep_id -rsp_obj_start_offset -rsp_obj_end_offset -doc_section -slot_pairs Slot pair level -slot_name -slot_tallies -key_slot_optional -rsp_slot_optional -multi-fill_pairs Multi-fill", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Hierarchy of Map History Lists", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The identification string of the key object of the pair. key_obj_optional Whether the object in the key is marked optional. This attribute has no value following it in the list; its presence alone means the key was marked optional. key_obj_rep_id Almost always the same as the key_obj_id. In the scenario template task of past MUC's, there have been objects in the key that are \"identical\". All objects are put in equivalence classes (different from the equivalence classes of the coreference task), so that pointers to any object in an equivalence class are still counted correct, even though they don't point to exactly the same object. key_obj_end_offset in the SGML tasks (named entity and coreference), the position in the texts file, measured from the beginning of the file, where the close tag for the object is. key_obj_start_offset in the SGML tasks (named entity and coreference), the position in the texts file, measured from the beginning of the file, where the open tag for the object is. key_single_fill a list describing the single fill from the key. key_slot_optional whether the slot was marked optional in the key. This attribute has no value associated with it. If the attribute name is there, it means the slot was marked optional. multi_fill_pairs the list describing how the key slot fill alternatives were aligned with the response alternatives. (When scoring system responses, there should be only one response alternative. For interannotator comparisons, both key and response may have many alternatives.) multi_fill_tallies the tallies for the single fills in this pairing of alternatives (see multi_fill_pairs). obj_pair_status How the objects of a pair compared at the object-level; correct, incorrect, etc. obj_pair_tallies the tallies for the single fills in this pair of objects. obj_pairs a list describing how objects of one type were aligned. rsp_obj_id", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Hierarchy of Map History Lists", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The identification string of the rsp object of the pair. rsp_obj_optional Whether the object in the response is marked optional. rsp_obj_rep_id Almost always the same as the rsp_obj_id. In the scenario template task of past MUC's, there have been objects in response that are identical. All objects are put in equivalence classes (different from the equivalence classes of the coreference task), so that pointers to any object in an equivalence class are still counted correct, even though they don't point to exactly the same object. rsp_single_fill a list describing the single fill from the response. single_fill_pair_status A three-character abbreviation for how the two single fills in a pair compared. single_fill_pair_tallies", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Hierarchy of Map History Lists", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another way for writing the single_fill_pair_status, that is compatible with all other tallies up the hierarchy. single_fill_pairs the list describing how one list of key single fills (possibly from many alternatives) was aligned with one list of response single fills. slot_name the name of the slots which are paired here. slot_pairs the list describing how the two objects' slots compared. slot_tallies the tallies for the single fills in this slot's comparison. type the type of the single fill (set fill, string fill, or pointer fill).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Hierarchy of Map History Lists", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Information Extraction Score Report", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figure shows one page from a scores file for the MUC-6 scenario template task. There is one page of scores for each document in the task, plus one page for the totals over all documents. Each page is divided into four sections. The first section shows the \"text filtering\" or \"relevance\" scores. These have to do with judging whether each document is even relevant to the scenario the NLP system should be looking for. The second section gives the object scores, which shows how the keys and response agree at the object level. The third section shows how well the keys and responses agree at the slot fill level. Only the slot scores determine the final scores, which are the last thing on a page.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score Files", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The template element and template relation score reports are identical to the scenario template score reports, except that they have no text filtering section. 0 0 2| 62 0 50| 0 0 97 0 100 100 type 110 69| 50 0 1| 59 18 12| 45 72 54 26 2 61 locale 41 7| 4 0 3| 34 0 8| 10 57 83 0 43 90 country 41 7| 6 0 1| 34 0 5| 15 86 83 0 14 85 comment 0 0| 0 0 0| 0 0 15| 0 0 0 0 0 0 person | | | name 130 138| 82 0 13| 35 43 7| 63 59 27 31 14 53 alias 83 79| 56 0 3| 24 20 5| 67 71 29 25 5 46 title 79 78| 60 0 0| 19 18 5| 76 77 24 23 0 38 comment 0 0| 0 0 0| 0 0 1| 0 0 0 0 0 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 654, |
|
"text": "0 0 2| 62 0 50| 0 0 97 0 100 100 type 110 69| 50 0 1| 59 18 12| 45 72 54 26 2 61 locale 41 7| 4 0 3| 34 0 8| 10 57 83 0 43 90 country 41 7| 6 0 1| 34 0 5| 15 86 83 0 14 85 comment 0 0| 0 0 0| 0 0 15| 0 0 0 0 0 0 person | | | name 130 138| 82 0 13| 35 43 7| 63 59 27 31 14 53 alias 83 79| 56 0 3| 24 20 5| 67 71 29 25 5 46 title 79 78| 60 0 0| 19 18 5| 76 77 24 23 0 38 comment 0 0| 0 0 0| 0 0 1| 0 0 0 0 0", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Score Files", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Here is a page from a score report for the named entity task: type 926 937| 878 0 20| 28 39 21| 95 94 3 4 2 9 text 926 937| 876 0 22| 28 39 21| 95 93 3 4 2 9 status 0 0| 0 0 0| 0 0 38| 0 0 0 0 0 0 alt 0 0| 0 0 0| 0 0 0| 0 0 0 0 0 0 timex | | | type 111 112| 107 0 0| 4 5 6| 96 96 4 4 0 8 text 111 112| 98 0 9| 4 5 11| 88 88 4 4 8 16 status 0 0| 0 0 0| 0 0 6| 0 0 0 0 0 0 alt 0 0| 0 0 0| 0 0 0| 0 0 0 0 0 0 numex | | | type 93 101| 90 0 0| 3 11 0| 97 89 3 11 0 13 text 93 101| 90 0 0| 3 11 0| 97 89 3 11 0 13 status The report has several parts: subtask scores Each named entity tag contains an attribute categorizing the marked-up text. This section shows how well the response did for each category. section scores Each document is already marked up with SGML even before the keys and responses are made. This section summarizes how the response did for each \"section\" of the SGML document. object scores Tallies at the object level. These tallies don't contribute to the final score at the bottom of the page. slot scores Tallies at the slot level. It is the slot level tallies which are used to determine the final score.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 647, |
|
"text": "type 926 937| 878 0 20| 28 39 21| 95 94 3 4 2 9 text 926 937| 876 0 22| 28 39 21| 95 93 3 4 2 9 status 0 0| 0 0 0| 0 0 38| 0 0 0 0 0 0 alt 0 0| 0 0 0| 0 0 0| 0 0 0 0 0 0 timex | | | type 111 112| 107 0 0| 4 5 6| 96 96 4 4 0 8 text 111 112| 98 0 9| 4 5 11| 88 88 4 4 8 16 status 0 0| 0 0 0| 0 0 6| 0 0 0 0 0 0 alt 0 0| 0 0 0| 0 0 0| 0 0 0 0 0 0 numex | | | type 93 101| 90 0 0| 3 11 0| 97 89 3 11 0 13 text 93 101| 90 0 0| 3 11 0| 97 89 3 11 0 13 status", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entity Score Report", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Here is a coreference task score report: There is one line for each document in the corpus. From left to right, the fields of each line are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference Score Report", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "the document number of the line's article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference Score Report", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number equivalence classes of COREF objects in the key and response, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The recall score, as a fraction and as a percent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The precision score, as a fraction and as a percent. 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "the f-score, if you give recall and precision equal weight. 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The scoring software has three configuration files, that you use to specify how the keys and responses are compared. The reason there are three files is partly historical and partly because parsing some of the configuration options differs a little. In future versions the three files will probably coalesce into one file.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Configuration File Formats", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "You must specify the name of the main configuration file on the command line when you invoke the scorer. The configuration file tells the scorer how to compare the keys and responses. It consists of a list of options. Each option is specified by a colon (\":\") as the first character of a line, followed immediately (no spaces) by the name of the option. After some more spaces come the value or values of the option. Values are separated by spaces. Values which themselves contain spaces must be enclosed in single or double quotes. The current options are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Configuration File Format", |
|
"sec_num": null |
|
}, |
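{
"text": "To make the option syntax concrete, here is an illustrative sketch of a reader for this format. It is not the scorer's own parser; Python's shlex module is used only because it happens to honor the single- and double-quote rule described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Configuration File Format",
"sec_num": null
},
{
"text": "import shlex\n\n# Illustrative sketch of reading the main configuration file format described\n# above: an option line starts with \":\" as its first character, followed\n# immediately by the option name; the remaining tokens are the option's values,\n# and values containing spaces are quoted.\ndef read_options(path):\n    options = {}\n    for line in open(path):\n        if not line.startswith(':'):\n            continue                    # not an option line\n        tokens = shlex.split(line[1:])  # shlex honors single and double quotes\n        if tokens:\n            options[tokens[0]] = tokens[1:]\n    return options\n\n# e.g. the :class_defs example later in this document would come back as\n# {'class_defs': ['enamex enamex scored 0', 'numex numex scored 0', 'timex timex scored 0']}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Configuration File Format",
"sec_num": null
},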
|
{ |
|
"text": "The strings which declare the scorer objects' types and give their score report names and mapping order. This is a required entry in the configuration file. Each class_def string is a quadruple of tokens: the name of the class 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "class_defs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "the name of the class that you want to appear in the score report 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "class_defs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "either \"scored\" or \"unscored,\" depending on whether you want the object-level scoring to count this class of object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "class_defs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that this value doesn't affect whether the fills within the objects are scored. Slot-level scoring is specified in the slot_defs option, described below. the map threshold. The f-score for each slot of an object is calculated and multiplied by that slot's map weight.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The weighted f-scores are them summed, and if they exceed the object's map threshold, then the response and key objects are deemed similar enough to be aligned. For the past couple of MUC's, the threshold has been set to 0 and the map weights have be made really big, so that if the two objects agree in just one fill of one slot, they may be aligned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Here is an example of the class_def option, for the named entity task:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ":class_defs \"enamex enamex scored 0\" \"numex numex scored 0\" \"timex timex scored 0\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The class def strings should be in the order that you want the classes of objects aligned. For the named entity, template element and coreference tasks, this order is unimportant. But for the template relation and scenario template tasks, the pointer fills are judged correct or incorrect based on whether or not the objects they point to are aligned. So the aligning should always start with objects that contain no pointer fills, and proceed to objects whose only pointer fills reference objects without pointer fills, etc. (See the section on how the TR and ST tasks are scored, below.) content_name In the scenario template task, the name of the slot in the \"template\" object (see the template_name option below) which must have fills if the document is relevant to the ST task, and which must not have fills if the document is not relevant to the ST task. Default: \"content\". corporate_designators A list of substrings which will be removed from string fills before they are compared. As the name implies, it's usually a list of strings like \"corporation\", \"ltd\", etc. Note that if you want to remove substrings that themselves have postmodifiers (see below), you must specify the substrings with postmodifiers changed to spaces, and all resulting spaces in the corporate designator string squashed into one. For instance, if you don't want the string \"S.A. DE C.V.\" to affect stringfill comparisons, it should go into the configuration file as \"S A DE C V\", with one space between the \"A\" and the \"DE\". (But for the coreference task, you should not take out the postmodifiers.) Default: the empty list. doc_section_groups Used in the Named Entity task, to group doc_section (see below) scores. In MET 2, the documents in the texts file have various SGML formats. For example, in some documents there is a HEADLINE tag, but in other documents, the tag is called HL. To get a score for all document sections which are the same semantically, but differ in their tags, you can \"group\" the similar tags, by putting, for example, \"Headline HEADLINE HL\" as one value for this option. The first token in a value string is what you want to call the group. The rest of the tokens are the name of doc sections. You must also specify the rest of the tokens in the \"doc_sections\" option described below. If this option is not in the configuration file, the doc_sections scores are used. If it's specified, the scorer only gives the tallies for the names given. Default: None; uses doc_sections instead. doc_sections", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The names of the SGML sections that should be parsed for coreference or named entity objects, and which will be used to report \"document section scores\" in the named entity task. The default is this list of sections: <DOC>, <DATELINE>, <DD>, <HEADLINE>, and <TEXT>. Note that as long as the documents are enclosed by <DOC>, all of the objects will be parsed by default. Having tags that don't really occur in the documents won't hurt anything. If one section is nested in another section, it is the innermost section which will be reported for the score. For example, if there are HEADLINE's inside the TEXT, the objects will be considered to be inside the HEADLINE. dump_map_history", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Whether or not to print the map history report. Default: \"no\". If anything else, the map history will be printed. equatable_objects In the scenario template task, which objects may possibly be identical. In MUC 6, the \"IN_AND_OUT\" objects were like this. Default: no objects. key_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the keys file. Default: \"keys\". map_history_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the map history output file. Default: \"map_history\" muc_base_directory A string that is prepended to the names of all filename options. This allows you to give the absolute pathname of all filenames without a lot of typing. Defaults to the empty string. ne_subtask_names A list of strings, each with three tokens. The first token is the object type. The second token is the slot name. The third token is the fill value. The tallies for all fills of that value in that slot in that type of object will be reported in the NE subtask section of the score report. Default: the following strings: \"enamex type organization\" \"enamex type person\" \"enamex type location\" \"enamex type other\" \"timex type date\" \"timex type time\" \"timex type other\" \"numex type money\" \"numex type percent\" \"numex type other\" optional_status_slot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the slot in all objects that you use to specify that an objects is optional, by putting the string \"OPTIONAL\" or \"OPT\" as the slot's only fill. partition_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the partition output file for the coreference task. postmodifiers A list of strings that are changed to spaces in stringfills before the stringfills are compared. Usually used so that punctuation marks don't affect the comparisons. Default: the empty list. premodifiers A list of tokens that are removed from the beginning of stringfills before they are compared. Usually used so that the words \"a,\" \"an,\" and \"the\" don't affect the scoring. report_field_separator A character string that is printed between the fields of the information extraction-style \"report summary\" files. The default is the vertical bar (\"|\"). report_summary_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the report summary file. Default: \"report_summary\". response_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the responses file. Default: \"responses\". score_report_file", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the scores file. Default: \"scores\". scoring_method One of either \"key2response\" or \"key2key\". \"Key2response\" is the default. Key2key is used for interannotator comparisons. scoring_task One of \"coreference\", \"named_entity\", \"template_element\", \"template_relation\", and \"scenario_template.\" There is no default for this option. It must specified. sgml_ALT_slot In the named entity task, the name of the slot whose contents will be moved into the TEXT slot (see sgml_TEXT_slot below), as an alternative to the contents got from the text between the SGML tags. sgml_DOCNUM_gid", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the SGML tag which identifies the section which holds the document numbers. Default: DOCNO. Note that every document in the keys or responses file must have this section. The document number is simply every digit [0-9] in the specified document section. sgml_DOC_gid", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the SGML tags which enclose one entire document. Default: \"DOC\". sgml_ID_slot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the coreference task, the name of the attribute of the tags for the task which give the unique identification string for the object. Default: \"ID\". sgml_MIN_slot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the coreference task, the name of the attribute which holds the \"head\" of the noun phrases enclosed in the coreference tags. Default: \"MIN\". sgml_REF_slot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the coreference task, the name of the attribute which holds the pointer some other \"identical\" object in the document. Default: \"REF\". sgml_TEXT_slot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the named entity and coreference tasks, the name of the slot into which the text between the open and close tags goes. Default: \"TEXT\". sgml_TYPE_slot In the named entity task, the name of the slot for the categorization subtask. Default: \"TYPE\". sgml_alternative_separator", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the named entity and coreference tasks, the character which separates alternatives within attribute values. Note that for the current tasks, this is only relevant for the keys. Default: the vertical bar character, (\"|\"). sgml_attribute_quote_char", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the named entity and coreference tasks, if a tag's attribute value is a string which contains a double quote, the scorer's parser will become confused. This option contains the character which has been substituted for the double quote in the keys or responses file. (Again, in the current tasks, this will only affect how the keys are prepared, since the response don't have attributes that might contain quotes.) Default: the \"star\" character (\"*\"). slot_defs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The list of slot definitions. This is a required entry in the configuration file. Each slot definition consists of six tokens: The name of the class of object to which the slot belongs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The name of the slot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The name of the slot that you want printed in the score report file.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Either \"scored\" or \"unscored,\" depending on whether you want the fills of this slot to be scored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The map weight. See the entry for the class_def option for an explanation of this number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The slot type; either \"set\", \"string\", or \"pointer\" (you may put anything here for a pointer slot. The scorer only looks to see that it isn't \"set\" or \"string\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here's an example of the slot_defs option for the named entity class:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": ":slot_defs \"enamex text text scored 4 string\" \"enamex type type scored 4 set\" \"enamex status status unscored 4 set\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\"enamex alt alt unscored 4 string\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\"timex text text scored 4 string\" \"timex type type scored 4 set\" \"timex status status unscored 4 set\" \"timex alt alt unscored 4 string\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\"numex text text scored 4 string\" \"numex type type scored 4 set \" \"numex status status unscored 4 set\" \"numex alt alt unscored 4 string\" stringfill_correct_comparison one of \"ORIG\", \"STRAIGHTENED\", or \"CLEAN\". Which part of a pair of stringfills is compared to see if they match. If ORIG, the original stringfills are compared. If STRAIGHTENED, some massaging is performed: Whitespaces are trimmed before and after the fills, and all whitespaces between the tokens are turned into single spaces. If CLEAN, the premodifiers, postmodifiers, and corporate designators strings (see the option descriptions for these last three) are removed from the string. Default: CLEAN stringfill_partial_comparison If the stringfills don't match correctly, the comparison used to see if partial credit is given for the match. See stringfill_correct_comparison for the possible values. In additions to the three values listed there, you may specifiy NONE (the default) if you want no partial credit given. template_name", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
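{
"text": "The STRAIGHTENED and CLEAN comparisons can be pictured with a short sketch. The tiny word lists below are stand-ins for the premodifiers, postmodifiers, and corporate_designators options, and the function names are ours; the real scorer is driven by the configuration file, and CLEAN also lowercases the fill, as in the clean_fill example from the map history section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": "6"
},
{
"text": "# Illustrative sketch of the STRAIGHTENED and CLEAN stringfill comparisons.\n# The word lists are stand-ins for the premodifiers, postmodifiers, and\n# corporate_designators configuration options described above.\nPREMODIFIERS = ['a', 'an', 'the']\nPOSTMODIFIERS = ['.', ',']\nCORPORATE_DESIGNATORS = ['corporation', 'ltd']\n\ndef straightened(fill):\n    # trim leading and trailing whitespace, collapse internal runs to one space\n    return ' '.join(fill.split())\n\ndef clean(fill):\n    s = straightened(fill).lower()\n    for p in POSTMODIFIERS:\n        s = s.replace(p, ' ')                     # postmodifiers become spaces\n    words = s.split()\n    while words and words[0] in PREMODIFIERS:     # premodifiers come off the front\n        words = words[1:]\n    words = [w for w in words if w not in CORPORATE_DESIGNATORS]\n    return ' '.join(words)\n\n# the example used in the map history discussion:\nprint(clean('a corporation that manages the Seaport'))  # -> 'that manages the seaport'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": "6"
},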
|
{ |
|
"text": "The name of the \"template\" object used in the scenario template object. This object has a \"content\" slot (see the \"content_name\" option) whose filling or leaving empty determines whether document is relevant to the scenario. For scoring the text-filtering part of the task, only one one template object per document will be checked for content. Default \"TEMPLATE\". use_IE_report_summary Defaults to \"no\". If anything else, the one-line-per-object report summaries used for the named entity and coreference tasks will be replaced with the template-object-record-style report summaries used in the Information Extraction tasks. (This option doesn't affect the TE, TR, or ST tasks).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The methods for scoring the Template Element, Template Relation, Scenario Template, and Named Entity tasks are very similar. From the standpoint of calculating scores, The template element (TE) task is the basic task of these four. This section will explain how TE is scored, and subsequent sections will tell how the NE, TR, and ST tasks can be seen as extensions to TE scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation of Scores Template Element (TE) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Simply put, the final score for the four tasks is found by aligning the key objects with the response objects and then comparing the objects' single fills. Structures are aligned at each level of the object/slot/multi-fill/single-fill structure hierarchy. However, it is the single-fill alignments that we count to get the score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation of Scores Template Element (TE) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The result of aligning one key single fill to one response single fill (or of leaving one key or response single fill unaligned) is called a tally. There are six kinds of tallies:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation of Scores Template Element (TE) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "COR Correct the two single fills are considered identical. INC Incorrect the two single fills are not identical. PAR Partially Correct the two single fills are not identical, but partial credit should still be given. MIS Missing a key object has no response object aligned with it. SPU Spurious a response object has no key object aligned with it. NON Noncommittal the alignment doesn't contribute anything to the scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation of Scores Template Element (TE) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given a set of tallies, there are several values calculated in the alignment and final scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation of Scores Template Element (TE) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number of fills in the key which contribute to the final score. Intuitively, information extraction systems often sacrifice precision for recall, or vice versa. If a system is tuned to \"catch everything\" (good recall), it often catches more than it should (bad precision). And if it tries to be conservative (good precision), it tends to miss some information (bad recall). When evaluating responses, then, one has to be careful about comparing one response from a system tuned for high recall to another response from a system tuned for high precision. van Rijsbergen's F-measure is used to combine recall and precision measures into one measure. The formula for F is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2 ((beta) + 1.0) * P * R F = ------------------------ 2 ((beta) * P) + R", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where beta is the relative weight of precision and recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
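{
"text": "Written out as a small illustrative sketch (the function names and the sample tallies below are made up), the measures defined above and the F-measure combine like this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Possible",
"sec_num": null
},
{
"text": "# Illustrative sketch of the tally-based measures and van Rijsbergen's F-measure.\ndef possible(cor, inc, par, mis, spu):\n    return cor + inc + par + mis\n\ndef actual(cor, inc, par, mis, spu):\n    return cor + inc + par + spu\n\ndef recall(cor, inc, par, mis, spu):\n    pos = possible(cor, inc, par, mis, spu)\n    return (cor + 0.5 * par) / pos if pos else 0.0\n\ndef precision(cor, inc, par, mis, spu):\n    act = actual(cor, inc, par, mis, spu)\n    return (cor + 0.5 * par) / act if act else 0.0\n\ndef f_measure(p, r, beta=1.0):\n    # beta is the relative weight of precision and recall\n    denom = beta * beta * p + r\n    return (beta * beta + 1.0) * p * r / denom if denom else 0.0\n\n# made-up tallies: 90 correct, 3 missing, 11 spurious, nothing partial or incorrect\nr = recall(90, 0, 0, 3, 11)\np = precision(90, 0, 0, 3, 11)\nprint(round(r, 2), round(p, 2), round(f_measure(p, r), 2))  # 0.97 0.89 0.93",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Possible",
"sec_num": null
},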
|
{ |
|
"text": "The following measures are also calculated from the tallies, and are in the score report: When aligning two multi-fills, the scoring software pairs all single-fills of the multi-fills. For example, if the key multi-fill has three single-fills, and the response multi-fill has two multi-fills, then the scorer creates six pairs of single-fills. Each single-fill pair has an F-score associated with it. The scorer sorts these single-fill pairs by F-score in decreasing order. It then proceeds down the sorted list, picking out pairs of single-fills for which neither single-fill has been chosen yet, and adding them to the final alignment for that pair of multi-fills. Any key or response single fills left over (in our example, there would be a key single fill left) is tallied as missing or spurious.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
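{
"text": "The greedy pairing just described looks roughly like the sketch below. It is illustrative only: the scoring function passed in stands in for the single-fill F-score, and at the object level the same idea is applied with a weighted F-score and a map threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Possible",
"sec_num": null
},
{
"text": "# Illustrative sketch of the greedy alignment described above: form all key/response\n# pairs, sort them by F-score in decreasing order, and accept a pair only when\n# neither member has already been used. Leftover keys and responses end up tallied\n# as missing and spurious respectively.\ndef greedy_align(key_items, rsp_items, fscore):\n    pairs = sorted(((fscore(k, r), ki, ri)\n                    for ki, k in enumerate(key_items)\n                    for ri, r in enumerate(rsp_items)),\n                   reverse=True)\n    used_k, used_r, alignment = set(), set(), []\n    for f, ki, ri in pairs:\n        if ki not in used_k and ri not in used_r:\n            alignment.append((ki, ri, f))\n            used_k.add(ki)\n            used_r.add(ri)\n    missing = [ki for ki in range(len(key_items)) if ki not in used_k]\n    spurious = [ri for ri in range(len(rsp_items)) if ri not in used_r]\n    return alignment, missing, spurious\n\n# toy example: three key single fills, two response single fills, exact matching\nprint(greedy_align(['a', 'b', 'c'], ['b', 'x'], lambda k, r: 1.0 if k == r else 0.0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Possible",
"sec_num": null
},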
|
{ |
|
"text": "A key slot is aligned with a response slot when the two slots have the same name. The lone multi-fill in the response slot is aligned with the multi-fill in the key slot that results in the best multi-fill-to-multi-fill F-score. Any leftover multi-fills in the key slot are unscored, and are tallied as \"noncommittal\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Key objects are aligned with response objects of the same object \"type\" or \"class\" To choose which objects are paired, the scorer first generates all possible pairs of objects in the class. The F-score for each pair of objects is calculated from the way the objects' single-fills align. The weighted F-score is also calculated, by multiplying each slot-pair's F-score by the mapping weight of that slot, and summing the factors. The object pairs are sorted by (unweighted) F-score in decreasing order. Then the scorer proceeds down the sorted list, picking out pairs of objects for which neither single-fill has been chosen yet, and for which the weighted F-score exceeds the threshold for that type of object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If any objects are left over after this, the scorer looks for any key objects which are marked \"optional\". The single fills of these objects are tallied as non-committal. If any key objects are left after this, their single-fills are tallied as missing. The single fills of any leftover response objects are tallied as spurious.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When all classes of objects have been aligned, the tallies are summed, and the resulting measures are calculated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Possible", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the TR and ST tasks, the scoring proceeds just as in TE scoring, but the order of alignment of objects is important. It is helpful to look at the classes of objects in a TR or ST task as vertices of a topological graph. If one type of object has a slot containing pointers to another type of object, then the graph has a directed edge from the first class to the pointed-to class:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Relation (TR) and Scenario Template (ST) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When comparing a key pointer fill to a response pointer fill, the only way the scorer can compare the pointers is by looking to see if the objects to which they point have already been aligned by the scorer. If they have, and if the object pointed to by the key pointer is aligned to the object pointed to by the response pointer, then the pointers are tallied as correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Relation (TR) and Scenario Template (ST) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since pointer correctness is defined in this way, the directed graph cannot have any directed cycles in it. Further, the scorer has to align the objects so that any pointed-to objects must already have been aligned. So in the above figure, the order of mapping could be D-B-C-A or D-C-B-A. Any other order would confuse the scorer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Relation (TR) and Scenario Template (ST) Scoring", |
|
"sec_num": null |
|
}, |
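{
"text": "A safe mapping order is just a topological ordering of the pointer graph. The sketch below is illustrative: the dependency table is a made-up stand-in for the A/B/C/D example referred to above (A points to B and C, which both point to D), and Python's graphlib is simply a convenient way to produce such an ordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Relation (TR) and Scenario Template (ST) Scoring",
"sec_num": null
},
{
"text": "from graphlib import TopologicalSorter  # Python 3.9+\n\n# Illustrative sketch: order the object classes so that every pointed-to class is\n# aligned before any class that points to it. The table is a made-up stand-in for\n# the A/B/C/D example: A points to B and C, and B and C both point to D.\npoints_to = {\n    'A': {'B', 'C'},\n    'B': {'D'},\n    'C': {'D'},\n    'D': set(),\n}\norder = list(TopologicalSorter(points_to).static_order())\nprint(order)  # e.g. ['D', 'B', 'C', 'A'] -- D is aligned first, A last",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Relation (TR) and Scenario Template (ST) Scoring",
"sec_num": null
},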
|
{ |
|
"text": "The only other difference between the TR and ST task and the TE task is the existence of implicitly optional objects in the key. In TR, a \"relation\" object that points to an optional \"template element\" object is optional, whether it's marked optional or not. And in ST, an object is implicitly optional if the only pointers pointing to that object are in optional slots or in one one multi-fill of a slot, but not in another multi-fill of the same slot (ie, there is an alternative multi-fill in the slot that doesn't point to the object).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Relation (TR) and Scenario Template (ST) Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Named Entity task is scored like the Template Element task, except that the objects which are aligned must come from SGML elements in the same position of the original text file. For instance, if in the key the name \"Bill Clinton\" is tagged in the first paragraph of an article, and in the response \"Bill Clinton\" is tagged in the tenth paragraph, the objects will not be aligned, even if they would give an F-score of 100%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity (NE) Task Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ":use_IE_report_summary yes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "POS = COR + INC + PAR + MIS ACT ActualThe number of fills in the response.ACT = COR + INC + PAR + SPU REC Recall a measure of how much of the key fills were produced in the response. COR + (0.5 * PAR) REC = -----------------POS PRE Precision a measure of how much of the response fills are actually in the key. COR + (0.5 * PAR) PRE = -----------------ACT", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "------------------COR + INC + PAR ERR Error per response fill INC + (0.5 * PAR) + SPU + MIS ERR = -------------------------------COR + INC + PAR + SPU + MIS", |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Single fill substructure level</td></tr><tr><td>-type</td></tr><tr><td>-fill</td></tr><tr><td>-clean_fill</td></tr><tr><td>-start_offset</td></tr><tr><td>-end_offset</td></tr><tr><td>Description of Map History Attributes</td></tr><tr><td>pair level</td></tr><tr><td>-multi_fill_tallies</td></tr><tr><td>-single_fill_pairs</td></tr><tr><td>Single fill pair level</td></tr><tr><td>-single_fill_pair_status</td></tr><tr><td>-single_fill_pair_tallies</td></tr><tr><td>-key_single_fill</td></tr><tr><td>-rsp_single_fill</td></tr></table>", |
|
"text": "Attribute values are strings, lists, integers, or nonexistent if the attribute's presence alone implies something. At present, the attributes are: class_name the name of a object type, e.g. \"ENAMEX\". class_pairs a list describing how the groups of objects of the same type were aligned. class_tallies single-fill tallies for all objects of one type in one document. clean_fill", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |