{
"paper_id": "M95-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:52.715258Z"
},
"title": "STERLING SOFTWARE : AN NLTOOLSET-BASED SYSTEM FOR MUC-6",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M95-1020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "For a little over two years, Sterling Software ITD has been developing the Automatic Templatin g System (ATS) [1] for automatically extracting entity and event data in the counter-narcotics domain from military messages. This system, part of the Counter Drug Intelligence System (CDIS), was built around the NLToolset [2] , which was originally developed by GE and is now being developed and supported b y Lockheed-Martin . Early results showed that the system was performing better than the human analyst s in all aspects.",
"cite_spans": [
{
"start": 110,
"end": 113,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 318,
"end": 321,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "ATS was in its final delivery phase at the same time as our MUC-6 development . We elected to participate despite this conflict, but it did limit us to 4 person-weeks on MUC-6, forcing us to scale bac k from our original plans and only participate in the NE and TE tasks . The results were more tha n gratifying. (Figure 1 ) consists of 5 major components, applied in sequence : Lexical Analysis, Reduction, Extraction, Merging, Postprocessing. It was designed to share as much of the processin g sequence between tasks as possible . The processing for NE followed the identical sequence of step s (Lexical Analysis, and Reduction) as was followed for the TE and ST tasks, then diverged to its ow n Postprocessing component to write the NE file . The Reduction steps taken to identify portions of text for marking in NE also filled the slots with the appropriate text for the TE task. The processing specific to S T diverged after all the phrase-level Reductions for NE and TE had been performed .",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 322,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
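{
"text": "A minimal sketch in Python of how this staged design shares the front end (all function names here are hypothetical stand-ins, not NLToolset calls): NE and TE run the same Lexical Analysis and Reduction stages, then diverge.\n\ndef run_pipeline(article, task):\n    # Shared front end for all tasks (NE, TE, ST).\n    sentences = lexical_analysis(article)\n    sentences = apply_reduction_stages(sentences)\n    if task == 'NE':\n        # NE diverges straight to its own postprocessing step.\n        return write_ne_file(article, sentences)\n    # TE (and ST) continue through Extraction and Merging.\n    expectations = extract_expectations(sentences, task)\n    expectations = merge_expectations(expectations)\n    return write_templates(expectations)\n\n# Stub stages so the sketch runs end to end; each stands in for the real logic.\ndef lexical_analysis(article): return [s.split() for s in article.split('.') if s.strip()]\ndef apply_reduction_stages(sentences): return sentences\ndef write_ne_file(article, sentences): return article\ndef extract_expectations(sentences, task): return [{} for s in sentences]\ndef merge_expectations(expectations): return expectations\ndef write_templates(expectations): return expectations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},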
{
"text": "The heart of the system is a sophisticated pattern-matcher, which is used repeatedly in the course o f processing to identify text for Reduction or Extraction. While the NLToolset also provides a parser, afte r some initial development we abandoned it on ATS, and did not use it on MUC-6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ST expect",
"sec_num": null
},
{
"text": "The Lexical Analysis component has several subcomponents . First, a tokenizer converts the input string for the entire article into a sequence of tokens . We modified the NLToolset-supplied tokenizer to try to prevent it from reordering or dropping text in ways that made it difficult to map back to th e original text when writing the NE output file; we also modified it to preserve upper-vs lower-cas e information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},
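{
"text": "A sketch of the property we needed from the tokenizer (ours was a modified NLToolset tokenizer, not this): every token keeps its character offsets and original casing, so the NE writer can later map reductions back onto the untouched source text.\n\nimport re\n\ndef tokenize(text):\n    # Each token records (string, start, end) into the original text;\n    # nothing is reordered or dropped, and case is preserved.\n    return [(m.group(0), m.start(), m.end()) for m in re.finditer(r'\\S+', text)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},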
{
"text": "The second step in Lexical Analysis is the actual lexicon lookup, which attaches information from th e lexicon to the tokens . This includes morphological analysis, which was useful primarily for determinin g the root form of nationalities, such as \"Canadian\" -> CANADA. It also includes finding multi-token lexicon entries, such as \"New York\" and \"Coca-Cola\" . Since we weren't using the parser, the part-ofspeech obtained by a lexical lookup was of interest mainly if it was something like city-name or orgname ; we did also try to prevent the inappropriate inclusion of verbs, prepositions, etc in names, wit h mixed results .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},
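{
"text": "A sketch of greedy longest-match lookup for multi-token entries (the lexicon contents here are illustrative; the real lexicon carried far richer features):\n\nLEXICON = {\n    'new york': {'pos': 'city-name', 'country': 'UNITED STATES'},\n    'coca-cola': {'pos': 'org-name'},\n    'canadian': {'pos': 'nationality', 'root': 'CANADA'},\n}\n\ndef lexicon_lookup(tokens, max_len=3):\n    out, i = [], 0\n    while i < len(tokens):\n        for n in range(max_len, 0, -1):  # prefer the longest matching entry\n            key = ' '.join(tokens[i:i + n]).lower()\n            if key in LEXICON:\n                out.append((tokens[i:i + n], LEXICON[key]))\n                i += n\n                break\n        else:  # no entry at any length: attach an empty feature set\n            out.append(([tokens[i]], {}))\n            i += 1\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},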
{
"text": "The third step in Lexical Analysis is the insertion of special marker tokens to indicate capitalize d words . This was needed to be able to usethat information in name recognition, since there did not appea r to be any good way to get the pattern matcher to use the capitalization information contained in th e original tokens .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},
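{
"text": "A sketch of the marker insertion (the real pass worked over NLToolset token structures):\n\ndef insert_cap_markers(tokens):\n    # Put a *CAP* pseudo-token in front of each capitalized word so the\n    # pattern matcher can test capitalization like any other token.\n    out = []\n    for tok in tokens:\n        if tok[:1].isupper():\n            out.append('*CAP*')\n        out.append(tok)\n    return out\n\n# insert_cap_markers(['Succeed', 'James', 'at', 'the', 'helm'])\n# -> ['*CAP*', 'Succeed', '*CAP*', 'James', 'at', 'the', 'helm']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},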
{
"text": "Finally, Lexical Analysis splits the token sequence into sentences, including one each for headline , dateline, and date.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": null
},
{
"text": "The Reduction components each consist of one or more stages of applying the NLToolset's pattern matcher to phrases . Any phrase matched is \"reduced\", usually but not always to a single multi-token, o r \"mtoken\" . In each stage, all the patterns appropriate to that stage are tried on each sentence in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
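{
"text": "A sketch of what a single reduction does to a sentence: the matched span is replaced, usually by one mtoken carrying slots. (The pattern matcher that finds the spans is the NLToolset's and is not shown.)\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass MToken:\n    kind: str                       # e.g. '*ORG*', '*LOC*', '*MONEY*'\n    slots: dict = field(default_factory=dict)\n\ndef reduce_span(tokens, start, end, kind, slots):\n    # Replace tokens[start:end] with one mtoken; later stages treat the\n    # mtoken as an ordinary token and can match against it.\n    return tokens[:start] + [MToken(kind, slots)] + tokens[end:]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},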
{
"text": "The very first reduction stage is a \"junk\" reduction to delete tables so they are not seen by subsequent reduction stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
{
"text": "Each subsequent reduction has two useful side-effects : 1) identifying which tokens form the heart of the reduction and therefore should be marked for the NE task, and 2) filling the slots of the mtokens wit h appropriate pieces of the text that was reduced, for the TE task . Note that these two purposes often conflict --for example, city, state references and date ranges were supposed to have pieces marke d separately, but were reduced to single mtokens with one set of slot fillers . This called for some carefu l engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
{
"text": "The applications of reduction patterns are done in sequence rather than all at once for a number o f reasons : First, some references to a person, organization, or location may not be recognizable b y themselves, but other references to the same thing may be easier to spot . Therefore, every new thing reduced is added to a temporary lexicon, and another reduction step is applied to look for othe r references (with certain allowed variations) to those same things ; for example, relatively easy-torecognize references to \"Mr. Jones\" or \"Robert L . James\" would enable later recognition of the more problematic \"Barnaby Jones\" and \"James\" . And when adding to this lexicon, appropriate variations in a n (organization) name are included so that they would be recognized if they occured ; for example: Name Possible variations \"Paramount Pictures Corp .\" \"Paramount\" \"Paramount Pictures \" \"New York Post\" \"Post\" \"Kidder , Peabody & Co.\" \"Kidder\" \"Kidder Peabody \" \"National Labor Relations Board\" \"NLRB \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
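{
"text": "A sketch of generating allowed variations when a newly reduced organization is added to the temporary lexicon (the heuristics are simplified stand-ins for the ones described above):\n\ndef org_variations(name):\n    # Drop a trailing corporate designator, try the first word alone,\n    # and build an acronym such as 'NLRB'.\n    designators = {'Corp.', 'Co.', 'Inc.', 'Ltd.'}\n    words = [w for w in name.replace(',', '').split() if w != '&']\n    variants = set()\n    if words and words[-1] in designators:\n        words = words[:-1]\n        variants.add(' '.join(words))   # 'Paramount Pictures', 'Kidder Peabody'\n    if len(words) > 1:\n        variants.add(words[0])          # 'Paramount', 'Kidder'\n        acronym = ''.join(w[0] for w in words if w[0].isupper())\n        if len(acronym) > 1:\n            variants.add(acronym)       # 'NLRB'\n    variants.discard(name)\n    return variants",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},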
{
"text": "When such a \"secondary\" organization reference is reduced, the text is put in the org_alias slot; the full form is pulled from the lexicon and put in the org_name slot to ensure proper merging (see below) of the two referents .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
{
"text": "Second, the results of reductions can be used to provide additional context for later reductions; for example, person reduction is done after organization, so a reduced organization can help the patter n matcher recognize a person, as in the token sequence [ARTIE MCDONALD , *ORG* 'S PRESIDENT] , where *ORG* is the mtoken produced by the earlier reduction . A reduction can also involve multiple previously-reduced mtokens, filling the slots of one with information from another ; for example, the reduction of the token sequence [*ORG* , A *LOC* -BASED MANUFACTURER] includes filling th e org_descriptor, org_locale, and org_country slots of *ORG* with the descriptive phrase and th e information from *LOC* .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduction",
"sec_num": null
},
{
"text": "An Extraction component uses the results of a pattern match to generate an \"expectation\" and fill its slots with pieces of the text matched . For ST, a typical expectation represents an event, with the person, organization, date, etc mtokens in the clause that was matched being used to fill its slots . For TE, each expectation is a trivial one containing one person or organization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction",
"sec_num": null
},
{
"text": "The NLToolset provides a merging tool, which merges expectations of the same type (person , organization, etc) as long as the fillers of their corresponding slots do not conflict; a conflict occurs if both have a filler, the fillers are different, and the slot is not allowed to have multiple fillers . Obviously, the org_alias and org_descriptor slots were allowed to have multiple fillers and org_name was not .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},
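{
"text": "A sketch of that conflict rule (slot names from the text; the merge machinery itself is the NLToolset's, and multi-filler slots are modeled as lists):\n\nMULTI_FILLER = {'org_alias', 'org_descriptor'}\n\ndef conflicts(a, b):\n    # Same-type expectations conflict if any shared single-filler slot\n    # is filled differently in each.\n    return any(slot not in MULTI_FILLER and a[slot] != b[slot]\n               for slot in set(a) & set(b))\n\ndef merge(a, b):\n    merged = dict(a)\n    for slot, val in b.items():\n        if slot in MULTI_FILLER:\n            merged[slot] = sorted(set(merged.get(slot, [])) | set(val))\n        else:\n            merged.setdefault(slot, val)\n    return merged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},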
{
"text": "During reduction, our system actually splits a person's name across slots called given_name , family_name, and suffix_name, so that the expectations for, say, \"Harry L . James, Jr .\" and \"Mr. James \" would be merged. It also carefully fills slots such as org_type and a few others added just for thi s purpose so as to prevent improper merges ; for example, it reduces the token sequence [THE *ORG * UNIT] to two *ORG* mtokens, one old and one new, with slots filled so that they could not merge wit h each other .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},
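{
"text": "A sketch of the name split that lets \"Harry L. James, Jr.\" merge with \"Mr. James\" (the title and suffix lists are illustrative):\n\ndef split_person(name):\n    suffixes = {'Jr.', 'Sr.', 'II', 'III'}\n    titles = {'Mr.', 'Mrs.', 'Ms.', 'Sen.'}\n    parts = name.replace(',', ' ').split()\n    slots = {}\n    if parts and parts[0] in titles:\n        slots['per_title'] = parts.pop(0)\n    if parts and parts[-1] in suffixes:\n        slots['suffix_name'] = parts.pop()\n    if parts:\n        slots['family_name'] = parts.pop()\n    if parts:\n        slots['given_name'] = ' '.join(parts)\n    return slots\n\n# Both of these share family_name 'James' with no conflicting slots, so\n# their expectations merge:\n# split_person('Harry L. James, Jr.') -> given 'Harry L.', family 'James', suffix 'Jr.'\n# split_person('Mr. James')           -> title 'Mr.', family 'James'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},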
{
"text": "Initially, we relied on this merging tool to bring together separated org names and descriptors, such a s \"NEC Corp . ... the giant Japanese computer manufacturer\". We soon found, however, that even with careful use of slot fillers to prevent descriptors for commercial organizations from merging with, say, th e name of a government organization or a library, too many merges were incorrect . We therefore devised a separate stack mechanism which keeps track of the org mtokens for each sentence; when an or g descriptor is reduced in the final TE reduction stage, the stack is searched starting at the current sentence , to find the closest suitable referent that precedes the descriptor, and to add the descriptor text to th e mtoken for that referent. This approach worked quite well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},
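{
"text": "A sketch of the stack search (the structure is inferred from the description above; the names are hypothetical):\n\ndef attach_descriptor(org_stack, sent_idx, descriptor, suitable):\n    # org_stack[i] lists the org mtokens (as slot dicts) seen in sentence i,\n    # in order. Search backward from the current sentence for the closest\n    # preceding org that can plausibly take this descriptor.\n    for i in range(sent_idx, -1, -1):\n        for org in reversed(org_stack[i]):\n            if suitable(org, descriptor):\n                org.setdefault('org_descriptor', []).append(descriptor)\n                return org\n    return None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging",
"sec_num": null
},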
{
"text": "For the NE task, the postprocessing step consists of traversing the token sequences in parallel with the original text, writing the original text and inserting markers as the reduction results attached to eac h token indicated . We had to go back to original text to include those portions of the article header whic h were not processed, and to recover from cases where the tokenizer had dropped characters despite our modifications .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": null
},
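{
"text": "A sketch of the NE writer, with the reduction results flattened to (start, end, type) character spans over the original text (a hypothetical interface; the real step walks the token sequences in parallel with the text):\n\ndef write_ne(text, spans):\n    # spans: non-overlapping (start, end, netype), e.g. [(0, 11, 'PERSON')].\n    out, pos = [], 0\n    for start, end, netype in sorted(spans):\n        out.append(text[pos:start])\n        out.append('<ENAMEX TYPE=\"' + netype + '\">' + text[start:end] + '</ENAMEX>')\n        pos = end\n    out.append(text[pos:])\n    return ''.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": null
},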
{
"text": "For the TE task, the postprocessing step consists of traversing the list of expectations and writing a template for each, performing final clean-ups like removing duplicate aliases, combining th e person_name pieces, skipping slots used only to control merging, etc .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": null
},
{
"text": "The bulk of the time spent in knowledge engineering was spent developing the patterns for all th e Reduction and Extraction stages . These patterns were devised to take advantage of all the loca l contextual clues we could come up with, including uppervs lower-case information and descriptiv e appositives . Our results show that this approach works well ; and the modularity of the patterns makes i t easy to add coverage as we discover additional clues (such as those we discuss in the walkthrough with respect to organizations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The reliance on case information meant that headlines were a bit of a problem ; despite giving them somewhat special treatment, our error rate was higher there than elsewhere: I-IL 136 142 119 0 7 16 10 0 88 84 7 11 22 6 DD 60 60 60 0 0 0 0 0 100 100 0 0 0 0 DL 52 52 52 0 0 0 0 0 100 100 0 0 0 0 TXT 2046 2024 1889 0 47 88 110 0 92 93 5 4 11 2 There was some lexicon work, as well. This included entries for all the countries, with alternat e phrases (such as \"West Germany\" for \"Federal Republic of Germany\") and irregular derivations (such a s \"Dutch\" for \"Netherlands\"), and entries for major cities and geographical regions, with their countr y information included . For organizations, we limited it to a few dozen major ones that have no reliabl e internal clues and often occur without any contextual clues (such as \"White House\", \"Fannie Mae\", \"Bi g Board\", \"Coca-Cola\" and \"Coke\", \"Macy's\", \"Exxon\", etc) . The results on the walkthrough article (see Table 1 ) compared to our overall results show that this wa s indeed a relatively difficult article. They show three issues worth discussing.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 403,
"text": "I-IL 136 142 119 0 7 16 10 0 88 84 7 11 22 6 DD 60 60 60 0 0 0 0 0 100 100 0 0 0 0 DL 52 52 52 0 0 0 0 0 100 100 0 0 0 0 TXT 2046 2024 1889 0 47 88 110 0 92 93 5 4 11 2",
"ref_id": "TABREF2"
},
{
"start": 1020,
"end": 1027,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "First, we had low precision on timex . Two out of the three \"spurious\" dates are due to our apparentl y mistaken belief that \"yesterday\" and \"tomorrow\" were supposed to be marked . This knowledge engineering error led to the worst recall or precision number on our overall NE results, a precision o n timex of 84; avoiding that error would have raised it to 94 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "Second, recall and precision on organizations was a bit low . The system missed both \"Fallon McElligott\" and \"McCann-Erickson\" . On the former, a phrase like \"ad agency Fallon McElligott\" woul d have caused it to be found, but the actual phrase \"other ad agencies, such as Fallon McElligott\" did not . On the latter, not having a pattern to cover things like \"chief executive officer of McCann-Erickson\" wa s an omission on our part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "Other organization errors were : getting \"New York Times\", which in this article is incorrect ; missing the two descriptors for \"Ammirati & Puris\" and the locale for \"Coca-Cola\" . The locale error points ou t another major cause of poor results --a next-to-last-minute change in the final TE pattern for picking u p combination of organization name plus location and/or descriptor, inadequately tested, led t o inadvertantly dropping coverage of the most basic of combinations : [*ORG* $lprep *LOC*], where $lprep is a macro for: \"one of ',' 'in\"of\" . This unfortunate error had the following effect on total locale slot scor e on TE :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "POS ACT COR REC PRE actual 110 59 42 38 7 1 corrected 110 76 59 54 7 1 Third, problems with persons . The system decided \"McCann\" was a person, based on \"the McCan n family\"; since it did not recognize \"McCann-Erickson\" as a company, every reference to \"McCann\" wa s therefore marked as a person. Due to inadequate restrictions on our use of capitalization, the system als o decided \"While McCann\" and \"One McCann\" were distinct persons . It decided that \"John J . Dooner, Jr.\" and \"John Dooner\" were distinct persons ; the \"Jr .\" would not have caused it to make that decision, but th e \"J .\" did . After the Lexical Analysis, the input string has been converted into a list of 52 sentences, each sentenc e containing a list of tokens; this list includes *CAP* tokens inserted in front of every capitalized token . Attached to each token is the result of the lexical lookup .",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 79,
"text": "ACT COR REC PRE actual 110 59 42 38 7 1 corrected 110 76 59 54 7 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "Note that at this point lexical lookup has replaced the surface representation of \"Coke\" and \"CEO\" wit h their \"canonical\" forms. Every token contains its original string, so we can still recover it for use in fillin g slots .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The lookup on \"Atlanta\" has provided the information that it is a city and that its country is the US. The initial Reduction stages take care of money, percent, date, time, and location, then \"secondary \" references to location . The only things worth noting here are the \"yesterday\" errors already discussed , that the system decided \"60 pounds\" was a reference to money, and that the information in the lexica l entry for \"Atlanta\" was used to fill the slots of the *LOC* mtoken .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The next Reduction stages take care of \"primary\" then \"secondary\" references to organizations .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The primary stage picks up \"Interpublic Group\", \"PaineWebber\", \"Coca-Cola\", \"Coke\", \"Creative Artist s Agency\", 'WPP Group\", \"Ammirati & Puris\", \"New York Yacht Club\" and \"New York Times\" . It misses \"Fallon McBride\" and \"McCann-Erickson\" for reasons already noted . The only reason it get s \"PaineWebber\", \"Coca-Cola\", and \"Coke\" is because they are in the lexicon ; the others are all picked up by match various patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "In this article, the only secondary reference is \"CAA\" as a reference to \"Creative Artists Agency\" . While the system does manufacture acronyms as potential secondary references when certainpattems match , the pattern which enabled it to determine that \"Creative Artists Agency\" was a commercial organizatio n was unfortunately not one of them .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The next Reduction stages take care of \"primary\" then \"secondary\" references to persons .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The primary stage picks up \"James\", \"John Dooner\", \"Kevin Goldman\", \"Robert L . James\", \"John J . Dooner, Jr .\", \"Mr . James\", \"Mr . Dooner\", \"Alan Gottesman\", \"Peter Kim\", \"Walter Thompson\", \"Marti n Puris\", and (alas) \"McCann\" . These are found on the strength of titles like \"Mr .\" and \"Sen .\", known first names, and contextual clues such as known occupations like \"president\", \"analyst\", etc . \"James\" in th e headline is found because it follows \"succeed\"; \"McCann\" is found because of \"McCann family\" .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "The secondary stage picks up all remaining references to \"McCann\" . Since \"McCann-Erickson\" was not recognized as an organization, all those occurrences are picked up, too . And since we failed to mak e adverbs off-limits as new first names in this stage, it decides that \"While McCann\" and \"One McCann\" (note the capitalization) are distinct persons . <DOC> <DOCID> wsj94_026 .0231 </DOCID> <DOCNO> 940224-0133 . </DOCNO> <HL> Marketing & Media --Advertising: @ <ENAMEX TYPE=\"PERSON\">John Dooner</ENAMEX> Will Succeed <ENAMEX TYPE=\"PERSON'>James</ENAMEX > @ At Helm of <ENAMEX TYPE=\"PERSON\">McCann</ENAMEX>-Erickso n @ @ By <ENAMEX TYPE=\"PERSON\">Kevin Goldman</ENAMEX> </HL > <DD> <TIMEX TYPE=\"DATE\">02/24/94</TIMEX> </DD > <SO> WALL STREET JOURNAL (J), PAGE B8 </SO>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KNOWLEDGE ENGINEERIN G",
"sec_num": null
},
{
"text": "One of the many differences between <ENAMEX TYPE=\"PERSON\">Robert L . James</ENAMEX>, chairman and chief executive officer of <ENAMEX TYPE=\"PERSON\">McCann</ENAMEX>-Erickson , and <ENAMEX TYPE=\"PERSON\">John J . Dooner Jr .</ENAMEX>, the agency's president and chie f operating officer, is quite telling: Mr . <ENAMEX TYPE=\"PERSON\">James</ENAMEX> enjoy s sailboating, whil e Mr . <ENAMEX TYPE=\"PERSON\">Dooner</ENAMEX> owns a powerboat .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<CO> IPG K </CO> <IN> ADVERTISING (ADV), ALL ENTERTAINMENT & LEISURE (ENT) , FOOD PRODUCTS (FOD), FOOD PRODUCERS, EXCLUDING FISHING (OFP) , RECREATIONAL PRODUCTS & SERVICES (REC), TOYS (TMF) </IN> <TXT> <p >",
"sec_num": null
},
{
"text": "However, odds of that happening are slim since word from <ENAMEX TYPE=\"ORGANIZATION\">Coke</ENAMEX> headquarters i n <ENAMEX TYPE=\"LOCATION\">Atlanta</ENAMEX> is that CAA and other ad agencies, such as Fallon McElligott, will continue to handle <ENAMEX TYPE=\"ORGANIZATION\">Coke</ENAMEX > advertising .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<CO> IPG K </CO> <IN> ADVERTISING (ADV), ALL ENTERTAINMENT & LEISURE (ENT) , FOOD PRODUCTS (FOD), FOOD PRODUCERS, EXCLUDING FISHING (OFP) , RECREATIONAL PRODUCTS & SERVICES (REC), TOYS (TMF) </IN> <TXT> <p >",
"sec_num": null
},
{
"text": "Mr . <ENAMEX TYPE=\"PERSON\">Dooner</ENAMEX> met with <ENAMEX TYPE=\"PERSON\">Martin Puris</ENAMEX>, president and chief executive officer o f <ENAMEX TYPE=\"ORGANIZATION\">Ammirati & Puris< /ENAMEX>, abou t <ENAMEX TYPE=\"PERSON\">McCann</ENAMEX>'s acquiring the agency with billings o f <NUMEX TYPE=\"MONEY\">$400 million</NUMEX>, but nothing has materialized . cases where the descriptor is an appositive, the referenced organization is included in the pattern match ; otherwise, if the appositive is a definite reference, the stack of organization references is searched for th e putative antecedant. In either case, the descriptor and locale information (if any) is inserted into slots o f the organization mtoken . In retrospect, including indefinite references that are not appositives appears to have been the wrong thing to do .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<CO> IPG K </CO> <IN> ADVERTISING (ADV), ALL ENTERTAINMENT & LEISURE (ENT) , FOOD PRODUCTS (FOD), FOOD PRODUCERS, EXCLUDING FISHING (OFP) , RECREATIONAL PRODUCTS & SERVICES (REC), TOYS (TMF) </IN> <TXT> <p >",
"sec_num": null
},
{
"text": "Then there is the trivial Extraction step which turns the organization and person mtokens int o \"expectations\" . This is followed by the Merging step which merges expectations together whereve r possible. This includes merging the expectations for \"James\", \"Robert L . James\", \"Mr . James\" (several occurences); \"Coca-Cola\", \"Coke\" ; etc .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<CO> IPG K </CO> <IN> ADVERTISING (ADV), ALL ENTERTAINMENT & LEISURE (ENT) , FOOD PRODUCTS (FOD), FOOD PRODUCERS, EXCLUDING FISHING (OFP) , RECREATIONAL PRODUCTS & SERVICES (REC), TOYS (TMF) </IN> <TXT> <p >",
"sec_num": null
},
{
"text": "Per_Title: \"Mr .\" Given_Name : \"Robert L .\" Family_Name: \"James\" Per_Alias : \"James\" Organization :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Before Person:",
"sec_num": null
},
{
"text": "Org_Name: \"Coca-Cola\" Org_Alias : \"Coca-Cola\" \"Coke \" Known: \"Yes\" After Person: Per_Title : \"Mr .\" Per_Name : \"Robert L. James \" Per_Alias : \"James\" Organization :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Before Person:",
"sec_num": null
},
{
"text": "Org_Name: \"Coca-Cola\" Org_Alias: \"Coke\" <ORGANIZATION-9402240133-3> : = ORG_TYPE : \"COMPANY\" ORG_NAME : \"Coca-Cola\" ORG_ALIAS: \"Coke\" <ORGANIZATION-9402240133-4> : _ ORG_DESCRIPTOR: \"the big Hollywood talent agency\" ORG_COUNTRY: \"UNITED STATES \" ORG_LOCALE : \"Hollywood CITY \" ORG_TYPE : \"COMPANY\" ORG_NAME : \"Creative Artists Agency \" The overall results (see Table 2 ) were obtained in 4 person-weeks of effort, lifting some pattern and code ideas from the ATS, which worked on a very different set of message types, and wasting a few day s on the ST task and on filling in date templates . These results show that our semantic-pattern-based approach to entity detection and templating is a very good one, and one which can be brought to bear o n a new application quickly .",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Before Person:",
"sec_num": null
},
{
"text": "As we have noted, dramatic improvements in the worst numbers (timex in NE, org locale and country in TE) would have been obtained with very minor changes in the patterns --literally, a couple hour s worth of work . The org locale fix would actually have given us the highest f-measure on that category : 61 .3 . Despite that \"couple hours\" estimate, we would have to say that our greatest limiting factor wa s time --time to test more thoroughly and isolate the causes of the biggest problems . Slowness of the system was a problem but not a major one, as it took only a minute or two per article .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS AND CONCLUSION S",
"sec_num": null
},
{
"text": "After those two improvements, we turn to the problem of org descriptors --although we had th e highest f-measure, it was only 43.6, which shows that there is still room for improvement . Here, the solutions are less obvious . One step to take is to add to the patterns to allow modifier phrases after the head noun in a descriptor noun phrase, such as \"the agency with billings of $400 million\" . More exploration is needed on this, especially in light of the fact that both the recall and precision rates were low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS AND CONCLUSION S",
"sec_num": null
},
{
"text": "Another area where we would like to make changes is in the order of reduction stages . For example, the system currently does all person reductions after organization reductions . This meant we had to prevent the secondary organization reduction from matching what are clearly person names (eg: primary \"Schecter Group\" -/-> secondary \"Mr. Schecter\"). The solution, clearly, is to apply some of the perso n patterns before the organization patterns .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS AND CONCLUSION S",
"sec_num": null
},
{
"text": "Since all the processing occurs without any regard to the types of events discussed in the articles, the system we have developed here is easily portable across domains. If a domain required a different set o f template slots than used for MUC-6, the patterns would be unchanged but the reduction code that fill s the slots, and the postprocessing code that reports them, would have to be modified slightly .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS AND CONCLUSION S",
"sec_num": null
},
{
"text": "We have demonstrated, on MUC-6 and on CDIS, that we have an excellent approach to both entity an d event extraction on a range of document types . We hope to have the opportunity to continue this work , as funding permits .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS AND CONCLUSION S",
"sec_num": null
}
],
"back_matter": [
{
"text": "Now, NE and TE processing diverge . For NE, the system uses the original text of the article to write a copy . It traverses the token sequences in parallel with the original text, using the fact that each toke n contains information on all the reductions it was involved in to determine where to insert begin and en d brackets . It only pays attention to the final reduction except in the case of locations inside money, wher e brackets are inserted for both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Family_Name: \"James\" Per_Alias : \"James\" Person : Given_Name: \"Robert L.\" Family_Name: \"James\" Person : Per_Title: \"Mr .\" Family_Name: \"James\" Per_Alias : \"James \" Organization : Org_Name : \"Coca-Cola\" Org_Alias: \"Coca-Cola \" Known: \"Yes \" Organization: Org_Name : \"Coca-Cola \" Org_Alias: \"Coke\" Known: \"Yes\" Figure 5 : Merging For TE, there is one final Reduction stage to take care of organization descriptors and locations . Here, the system finds descriptors \"the big Hollywood talent agency\" and \"a hot agency\", but not \"a qualit y operation\" and \"the agency with billings of $400 million\" . The former omission was deliberate, due to too many spurious matches when it was included ; the latter was a construct we did not think to include . In After Person :Per_Titie : \"Mr .\" Given_Name : \"Robert L.\" Family_Name: \"James\" Per_Alias : \"James\" Person: Given_Name: \"While\" Family_Name : \"McCann\" Per_Alias: \"McCann\" Person: Given_Name : \"One\" Family_Name : \"McCann\" Organization : Org_Name : \"Coca-Cola\" Org_Alias: \"Coca-Cola\" \"Coke\" Known: \"Yes \"",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 317,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Before Person :",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Automated Templating System for Database Updat e from Unformatted Message Traffic",
"authors": [
{
"first": "L",
"middle": [],
"last": "Osterholtz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mcneilly",
"suffix": ""
}
],
"year": 1995,
"venue": "ONDCP International Technology Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osterholtz, L. and Lee, R . and McNeilly, C ., \"The Automated Templating System for Database Updat e from Unformatted Message Traffic\", 1995 ONDCP International Technology Symposium, Oct. 1995",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lexico-Semantic Pattern Matching as a Companion t o Parsing in Text Applications",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Jacobs",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Krupka",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Rau",
"suffix": ""
}
],
"year": 1991,
"venue": "Fourth DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacobs, P.S. and Krupka, G.R. and Rau, L .F., \"Lexico-Semantic Pattern Matching as a Companion t o Parsing in Text Applications\", Fourth DARPA Speech and Natural Language Workshop, Feb . 1991",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "System DesignOur MUC-6 system",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Now, the walkthrough . (*SO-HL* *CAP* MARKETING *AMPERSAND* *CAP* MEDIA *DASHES* *CAP* ADVERTISIN G *COLON* *AT* *CAP* JOHN *CAP* DOONER *CAP* WILL *CAP* SUCCEED *CAP* JAMES *AT * *CAP* AT *CAP* HELM OF *CAP* MCCANN *HYPHEN* *CAP* ERICKSON *AT* *DASHES* *AT * *CAP* BY *CAP* KEVIN *CAP* GOLDMAN *EO-HL* ) (*SO-DD* 102 I *SLASH* 124 I *SLASH* 194 I *EO-DD* ) (*SOT* *CAP* ONE OF THE MANY DIFFERENCES BETWEEN *CAP* ROBERT *CAP* ABBREV_L *CAP* JAMES *COMMA* CHAIRMAN AND CHIEF-EXECUTIVE-OFFICER OF *CAP* MCCAN N *HYPHEN* *CAP* ERICKSON *COMMA* AND *CAP* JOHN *CAP* ABBREV LJ *CAP* DOONE R *CAP* ABBREVJR *COMMA* THE AGENCY *APOSTROPHE-S* PRESIDENT AND CHIEF-OPERATING-OFFICER *COMMA* IS QUITE TELLING *COLON* *CAP* ABBREV_MR *CAP* JAMES ENJOYS SAILBOATING *COMMA* WHILE *CAP* ABBREV_MR *CAP* DOONER OWNS A POWERBOAT *PERIOD* ) (*CAP* HOWEVER *COMMA* ODDS OF THAT HAPPENING ARE SLIM SINCE WORD FROM *CAP * COCA-COLA HEADQUARTERS IN *CAP* ATLANTA IS THAT *CAP* CAA AND OTHER A D AGENCIES *COMMA* SUCH AS *CAP* FALLON *CAP* MCELLIGOTT *COMMA* WILL CONTINU E TO HANDLE *CAP* COCA-COLA ADVERTISING *PERIOD* ) (*DOUBLEQUOTE* *EO-P* *SO-P* *CAP* ABBREV_MR *CAP* DOONER MET WITH *CAP* MARTIN *CAP* PURIS *COMMA* PRESIDENT AND CHIEF-EXECUTIVE-OFFICER OF *CAP* AMMIRAT I *AMPERSAND* *CAP* PURIS *COMMA* ABOUT *CAP* MCCANN *APOSTROPHE-S* ACQUIRING THE AGENCY WITH BILLINGS OF *DOLLAR* 1400 I MILLION *COMMA* BUT NOTHING HA S MATERIALIZED *PERIOD* ) After Lexical Analysi s",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "*SO-HL* *CAP* MARKETING *AMPERSAND* *CAP* MEDIA *DASHES* *CAP* ADVERTISIN G *COLON* *AT* *IND* *CAP* WILL *CAP* SUCCEED *IND* *AT* *CAP* AT *CAP* HELM OF *IND * *HYPHEN* *CAP* ERICKSON *AT* *DASHES* *AT* *CAP* BY *IND* *EO-HL* ) (*SO-DD* *DATE* *EO-DD* ) (*SOT* *CAP* ONE OF THE MANY DIFFERENCES BETWEEN *IND* *COMMA* CHAIRMAN AN D CHIEF-EXECUTIVE-OFFICER OF *IND* *HYPHEN* *CAP* ERICKSON *COMMA* AND *IND* *COMMA* THE AGENCY *APOSTROPHE-S* PRESIDENT AND CHIEF-OPERATING-OFFICE R *COMMA* IS QUITE TELLING *COLON* *CAP* *IND* ENJOYS SAILBOATING *COMMA* WHIL E *CAP* *IND* OWNS A POWERBOAT *PERIOD* ) (*CAP* HOWEVER *COMMA* ODDS OF THAT HAPPENING ARE SLIM SINCE WORD FROM *ORG * HEADQUARTERS IN *LOC* IS THAT *CAP* CAA AND OTHER AD AGENCIES *COMMA* SUCH A S *CAP* FALLON *CAP* MCELLIGOTT *COMMA* WILL CONTINUE TO HANDLE *ORG* ADVERTISING *PERIOD* ) (*DOUBLEQUOTE* *EO-P* *SO-P* *CAP* *IND* MET WITH *IND* *COMMA* PRESIDENT AN D CHIEF-EXECUTIVE-OFFICER OF *ORG* *COMMA* ABOUT *IND* *APOSTROPHE-S* ACQUIRING THE AGENCY WITH BILLINGS OF *MONEY* *COMMA* BUT NOTHING HAS MATERIALIZE D *PERIOD* ) Figure 3 : After Entity Reduction s",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "NE Results",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Postprocess Slot AdjustmentsFinally, the Postprocessing step writes each expectation to the TE result file, making final adjustments to the slot fillers as needed .",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "Figure 7: TE Results",
"type_str": "figure"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"3\">WALKTHROUGH</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">NE: 9402240133</td><td colspan=\"7\">key: 9402240133-1 response: 9402240133-1</td><td/><td/><td/><td/><td/><td/></tr><tr><td>SLOT</td><td colspan=\"6\">POS ACT COR PAR INC SPU</td><td colspan=\"3\">MIS NON REC</td><td>PRE</td><td colspan=\"3\">UND OVG ERR</td><td>SUB</td></tr><tr><td colspan=\"2\">&lt;enamex&gt; 69</td><td>68</td><td>63</td><td>0</td><td>0</td><td>5</td><td>6</td><td>0</td><td>91</td><td>93</td><td>9</td><td>7</td><td/><td>0</td></tr><tr><td>type</td><td>69</td><td>68</td><td>53</td><td>0</td><td>10</td><td>5</td><td>6</td><td>0</td><td>77</td><td>78</td><td>9</td><td>7</td><td/><td>16</td></tr><tr><td>text</td><td>69</td><td>68</td><td>60</td><td>0</td><td>3</td><td>5</td><td>6</td><td>0</td><td>87</td><td>88</td><td>9</td><td>7</td><td/><td>5</td></tr><tr><td>&lt;timex&gt;</td><td>6</td><td>9</td><td>6</td><td>0</td><td>0</td><td>3</td><td>0</td><td>0</td><td colspan=\"2\">100 67</td><td>0</td><td>33</td><td/><td>0</td></tr><tr><td>type</td><td>6</td><td>9</td><td>6</td><td>0</td><td>0</td><td>3</td><td>0</td><td>0</td><td colspan=\"2\">100 67</td><td>0</td><td>33</td><td/><td>0</td></tr><tr><td>text</td><td>6</td><td>9</td><td>6</td><td>0</td><td>0</td><td>3</td><td>0</td><td>0</td><td colspan=\"2\">100 67</td><td>0</td><td>33</td><td/><td>0</td></tr><tr><td colspan=\"2\">&lt;numex&gt; 6</td><td>7</td><td>6</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td colspan=\"2\">100 86</td><td>0</td><td>14</td><td/><td>0</td></tr><tr><td>type</td><td>6</td><td>7</td><td>6</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td colspan=\"2\">100 86</td><td>0</td><td>14</td><td/><td>0</td></tr><tr><td>text</td><td>6</td><td>7</td><td>6</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td colspan=\"2\">100 86</td><td>0</td><td>14</td><td/><td>0</td></tr><tr><td>TOTAL</td><td colspan=\"4\">162 168 137 0</td><td>13</td><td>18</td><td>12</td><td>0</td><td>84</td><td>82</td><td>7</td><td>11</td><td>24</td><td>9</td></tr><tr><td colspan=\"9\">TE : 9402240133 key: 9402240133 response : 9402240133</td><td/><td/><td/><td/><td/><td/></tr><tr><td>SLOT</td><td colspan=\"14\">POS ACT COR PAR INC MIS SPU NON REC PRE UND OVG ERR SUB</td></tr><tr><td colspan=\"2\">organization 10</td><td>9</td><td>8</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>80</td><td>89</td><td>10</td><td>0</td><td>20</td><td>11</td></tr><tr><td>name</td><td>10</td><td>8</td><td>7</td><td>0</td><td>1</td><td>2</td><td>0</td><td>0</td><td>70</td><td>88</td><td>20</td><td>0</td><td>30</td><td>13</td></tr><tr><td>alias</td><td>3</td><td>1</td><td>1</td><td>0</td><td>0</td><td>2</td><td>0</td><td>6</td><td>33</td><td colspan=\"2\">100 67</td><td>0</td><td>67</td><td>0</td></tr><tr><td colspan=\"2\">descriptor 3</td><td>2</td><td>1</td><td>0</td><td>1</td><td>1</td><td>0</td><td>6</td><td>33</td><td>50</td><td>33</td><td>0</td><td>67</td><td>50</td></tr><tr><td>type</td><td>10</td><td>8</td><td>8</td><td>0</td><td>0</td><td>2</td><td>0</td><td>0</td><td>80</td><td colspan=\"2\">100 20</td><td>0</td><td>20</td><td>0</td></tr><tr><td>locale</td><td>2</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>7</td><td>50</td><td colspan=\"2\">100 50</td><td>0</td><td>50</td><td>0</td></tr><tr><td>country</td><td>2</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>7</td><td>50</td><td colspan=\"2\">100 
50</td><td>0</td><td>50</td><td>0</td></tr><tr><td>person</td><td>6</td><td>10</td><td>6</td><td>0</td><td>0</td><td>0</td><td>4</td><td>0</td><td colspan=\"2\">100 60</td><td>0</td><td>40</td><td>40</td><td>0</td></tr><tr><td>name</td><td>6</td><td>10</td><td>6</td><td>0</td><td>0</td><td>0</td><td>4</td><td>0</td><td colspan=\"2\">100 60</td><td>0</td><td>40</td><td>40</td><td>0</td></tr><tr><td>alias</td><td>3</td><td>4</td><td>2</td><td>0</td><td>0</td><td>1</td><td>2</td><td>4</td><td>67</td><td>50</td><td>33</td><td>50</td><td>60</td><td>0</td></tr><tr><td>title</td><td>2</td><td>3</td><td>2</td><td>0</td><td>0</td><td>0</td><td>1</td><td>4</td><td colspan=\"2\">100 67</td><td>0</td><td>33</td><td>33</td><td>0</td></tr><tr><td>TOTAL</td><td>41</td><td>38</td><td>29</td><td>0</td><td>2</td><td>10</td><td>7</td><td>34</td><td>71</td><td>76</td><td>24</td><td>18</td><td>40</td><td>6</td></tr></table>",
"type_str": "table",
"num": null,
"text": ""
},
"TABREF4": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Overall Scores"
}
}
}
}