{
"paper_id": "S10-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:51.305082Z"
},
"title": "SemEval-2010 Task 13: TempEval-2",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brandeis University",
"location": {
"settlement": "Massachusetts",
"country": "USA"
}
},
"email": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ILC-CNR",
"location": {
"settlement": "Pisa",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brandeis University",
"location": {
"settlement": "Massachusetts",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Tempeval-2 comprises evaluation tasks for time expressions, events and temporal relations, the latter of which was split up in four sub tasks, motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction easier. Manually annotated data were provided for six languages: Chinese, English, French, Italian, Korean and Spanish.",
"pdf_parse": {
"paper_id": "S10-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Tempeval-2 comprises evaluation tasks for time expressions, events and temporal relations, the latter of which was split up in four sub tasks, motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction easier. Manually annotated data were provided for six languages: Chinese, English, French, Italian, Korean and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ultimate aim of temporal processing is the automatic identification of all temporal referring expressions, events and temporal relations within a text. However, addressing this aim is beyond the scope of an evaluation challenge and a more modest approach is appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 2007 SemEval task, TempEval-1 Verhagen et al., 2009) , was an initial evaluation exercise based on three limited temporal ordering and anchoring tasks that were considered realistic both from the perspective of assembling resources for development and testing and from the perspective of developing systems capable of addressing the tasks. 1 TempEval-2 is based on TempEval-1, but is more elaborate in two respects: (i) it is a multilingual task, and (ii) it consists of six subtasks rather than three.",
"cite_spans": [
{
"start": 34,
"end": 56,
"text": "Verhagen et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 344,
"end": 345,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of this paper, we first introduce the data that we are dealing with. Which gets us in a position to present the list of task introduced by TempEval-2, including some motivation as to why we feel that it is a good idea to split up temporal relation classification into sub tasks. We proceed by shortly describing the data resources and their creation, followed by the performance of the systems that participated in the tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 The Semeval-2007 task was actually known simply as TempEval, but here we use Tempeval-1 to avoid confusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The TempEval annotation language is a simplified version of TimeML. 2 using three TimeML tags: TIMEX3, EVENT and TLINK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "TIMEX3 tags the time expressions in the text and is identical to the TIMEX3 tag in TimeML. Times can be expressed syntactically by adverbial or prepositional phrases, as shown in the following example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "(1) a. on Thursday b. November 15, 2004 c. Thursday evening d. in the late 80's e. later this afternoon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "The two main attributes of the TIMEX3 tag are TYPE and VAL, both shown in the example (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "(2) November 22, 2004 type=\" DATE\" val=\"2004-11-22\" For TempEval-2, we distinguish four temporal types: TIME (at 2:45 p.m.), DATE (January 27, 1920, yesterday), DURATION (two weeks) and SET (every Monday morning). The VAL attribute assumes values according to an extension of the ISO 8601 standard, as enhanced by TIMEX2.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "DATE\" val=\"2004-11-22\"",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
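{
"text": "As a minimal sketch, assuming a simple in-memory representation (the class and field names below are illustrative only, not part of the TempEval-2 format), the TIMEX3 information from example (2) could be stored as follows:\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass Timex3:\n    text: str  # the extent of the time expression\n    type: str  # one of TIME, DATE, DURATION, SET\n    val: str   # ISO 8601-style value, as enhanced by TIMEX2\n\ntimex = Timex3(text=\"November 22, 2004\", type=\"DATE\", val=\"2004-11-22\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},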
{
"text": "Each document has one special TIMEX3 tag, the Document Creation Time (DCT), which is interpreted as an interval that spans a whole day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "The EVENT tag is used to annotate those elements in a text that describe what is conventionally referred to as an eventuality. Syntactically, events are typically expressed as inflected verbs, although event nominals, such as \"crash\" in killed by the crash, should also be annotated as EVENTS. The most salient event attributes encode tense, aspect, modality and polarity information. Examples of some of these features are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "(3) should have bought tense=\"PAST\" aspect=\"PERFECTIVE\" modality=\"SHOULD\" polarity=\"POS\" (4) did not teach tense=\"PAST\" aspect=\"NONE\" modality=\"NONE\" polarity=\"NEG\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "The relation types for the TimeML TLINK tag form a fine-grained set based on James Allen's interval logic (Allen, 1983) . For TempEval, the set of labels was simplified to aid data preparation and to reduce the complexity of the task. We use only six relation types including the three core relations BEFORE, AFTER, and OVERLAP, the two less specific relations BEFORE-OR-OVERLAP and OVERLAP-OR-AFTER for ambiguous cases, and finally the relation VAGUE for those cases where no particular relation can be established.",
"cite_spans": [
{
"start": 106,
"end": 119,
"text": "(Allen, 1983)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "Temporal relations come in two broad flavours: anchorings of events to time expressions and orderings of events. Events can be anchored to an adjacent time expression as in examples 5 and 6 or to the document creation time as in 7. The country defaulted e2 on debts for that entire year. BEFORE(e2, dct) In addition, events can be ordered relative to other events, as in the examples below.",
"cite_spans": [
{
"start": 288,
"end": 298,
"text": "BEFORE(e2,",
"ref_id": null
},
{
"start": 299,
"end": 303,
"text": "dct)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "(8) The President spoke e1 to the nation on Tuesday on the financial crisis. He had conferred e2 with his cabinet regarding policy the day before. AFTER(e1,e2) (9) The students heard e1 a fire alarm e2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "OVERLAP(e1,e2) (10) He said e1 they had postponed e2 the meeting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TempEval Annotation",
"sec_num": "2"
},
{
"text": "3 TempEval-2 Tasks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "We can now define the six TempEval tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "A. Determine the extent of the time expressions in a text as defined by the TimeML TIMEX3 tag. In addition, determine value of the features TYPE and VAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "B. Determine the extent of the events in a text as defined by the TimeML EVENT tag. In addition, determine the value of the features CLASS, TENSE, ASPECT, POLARITY, and MODALITY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "C. Determine the temporal relation between an event and a time expression in the same sentence. This task is further restricted by requiring that either the event syntactically dominates the time expression or the event and time expression occur in the same noun phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "D. Determine the temporal relation between an event and the document creation time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "E. Determine the temporal relation between two main events in consecutive sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "F. Determine the temporal relation between two events where one event syntactically dominates the other event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "Of these tasks, C, D and E were also defined for TempEval-1. However, the syntactic locality restriction in task C was not present in TempEval-1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "Task participants could choose to either do all tasks, focus on the time expression task, focus on the event task, or focus on the four temporal relation tasks. In addition, participants could choose one or more of the six languages for which we provided data: Chinese, English, French, Italian, Korean, and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "We feel that well-defined tasks allow us to structure the workflow, allowing us to create taskspecific guidelines and using task-specific annotation tools to speed up annotation. More importantly, each task can be evaluated in a fairly straightforward way, contrary to for example the problems that pop up when evaluating two complex temporal graphs for the same document. In addition, tasks can be ranked, allowing systems to feed the results of one (more precise) task as a feature into another task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "Splitting the task into substask reduces the error rate in the manual annotation, and that merging the different sub-task into a unique layer as a postprocessing operation (see figure 1) provides better ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AFTER(e1,e2)",
"sec_num": null
},
{
"text": "The data for the five languages were prepared independently of each other and do not comprise a parallel corpus. However, annotation specifications and guidelines for the five languages were developed in conjunction with one other, in many cases based on version 1.2.1 of the TimeML annotation guidelines for English 3 . Not all corpora contained data for all six tasks. All corpora include event and timex annotation. The French corpus contained a subcorpus with temporal relations but these relations were not split into the four tasks C through F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "Annotation proceeded in two phases: a dual annotation phase where two annotators annotate each document and an adjudication phase where a judge resolves disagreements between the annotators. Most languages used BAT, the Brandeis Annotation Tool (Verhagen, 2010) , a generic webbased annotation tool that is centered around the notion of annotation tasks. With the task decomposition allowed by BAT, it is possible to structure the complex task of temporal annotation by splitting it up in as many sub tasks as seems useful. As 3 See http://www.timeml.org. such, BAT was well-suited for TempEval-2 annotation.",
"cite_spans": [
{
"start": 245,
"end": 261,
"text": "(Verhagen, 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "We now give a few more details on the English and Spanish data, skipping the other languages for reasons that will become obvious at the beginning of section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "The English data sets were based on TimeBank (Pustejovsky et al., 2003; Boguraev et al., 2007) , a hand-built gold standard of annotated texts using the TimeML markup scheme. 4 However, all event annotation was reviewed to make sure that the annotation complied with the latest guidelines and all temporal relations were added according to the Tempeval-2 relation tasks, using the specified relation types.",
"cite_spans": [
{
"start": 45,
"end": 71,
"text": "(Pustejovsky et al., 2003;",
"ref_id": "BIBREF2"
},
{
"start": 72,
"end": 94,
"text": "Boguraev et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "The data released for the TempEval-2 Spanish edition is a fragment of the Spanish TimeBank, currently under development. Its documents are originally from the Spanish part of the AnCora corpus (Taul\u00e9 et al., 2008) . Data preparation followed the annotation guidelines created to deal with the specificities of event and timex expressions in Spanish (Saur\u00ed et al., 2009a; Saur\u00ed et al., 2009b) .",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Taul\u00e9 et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 349,
"end": 370,
"text": "(Saur\u00ed et al., 2009a;",
"ref_id": "BIBREF3"
},
{
"start": 371,
"end": 391,
"text": "Saur\u00ed et al., 2009b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "For the extents of events and time expressions (tasks A and B), precision, recall and the f1-measure are used as evaluation metrics, using the following formulas:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "precision = tp/(tp + f p) recall = tp/(tp + f n) f -measure = 2 * (P * R)/(P + R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "Where tp is the number of tokens that are part of an extent in both key and response, fp is the number of tokens that are part of an extent in the response but not in the key, and fn is the number of tokens that are part of an extent in the key but not in the response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
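{
"text": "A minimal sketch of this computation, assuming key and response extents are given as sets of (sentence index, token index) pairs (the function name and representation are illustrative assumptions, not part of the official scorer):\n\ndef extent_scores(key, response):\n    # tokens that are part of an extent in both key and response\n    tp = len(key & response)\n    # tokens that are part of an extent in the response but not in the key\n    fp = len(response - key)\n    # tokens that are part of an extent in the key but not in the response\n    fn = len(key - response)\n    p = tp / (tp + fp) if tp + fp else 0.0\n    r = tp / (tp + fn) if tp + fn else 0.0\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},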
{
"text": "For attributes of events and time expressions (the second part of tasks A and B) and for relation types (tasks C through F) we use an even simpler metric: the number of correct answers divided by the number of answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "Eight teams participated in TempEval-2, submitting a grand total of eighteen systems. Some of these systems only participated in one or two tasks while others participated in all tasks. The distribution over the six languages was very uneven: sixteen systems for English, two for Spanish and one for English and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "The results for task A, recognition and normalization of time expressions, are given in tables 2 and 3. The results for Spanish are more uniform and generally higher than the results for English. For Spanish, the f-measure for TIMEX3 extents ranges from 0.88 through 0.91 with an average of 0.89; for English the f-measure ranges from 0.26 through 0.86, for an average of 0.78. However, due to the small sample size it is hard to make any generalizations. In both languages, type detection clearly was a simpler task than determining the value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "The results for task B, event recognition, are given in tables 4 and 5. Both tables contain results for both Spanish and English, the first part of each ta- As with the time expressions results, the sample size for Spanish is small, but note again the higher f-measure for event extents in Spanish. Table 6 shows the results for all relation tasks, with the Spanish systems in the first two rows and the English systems in the last six rows. Recall that for Spanish the training and test sets only contained data for tasks C and D.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "Interestingly, the version of the TIPSem systems that were applied to the Spanish data did much better on task C compared to its English cousins, but much worse on task D, which is rather puzzling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "Such a difference in performance of the systems could be due to differences in annotation accurateness, or it could be due to some particularities of how the two languages express certain temporal Table 6 : Results for relation tasks aspects, or perhaps the one corpus is more homogeneous than the other. Again, there are not enough data points, but the issue deserves further attention.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "For each task, the test data provided the event pairs or event-timex pairs with the relation type set to NONE and participating systems would replace that value with one of the six allowed relation types. However, participating systems were allowed to not replace NONE and not be penalized for it. Those cases would not be counted when compiling the scores in table 6. Table 7 lists those systems that did not classify all relation and the percentage of relations for each task that those systems did not classify. team C D E F TRIOS 25% 19% 36% 31% TRIPS 20% 10% 17% 10% The results are very similar except for task D, but if we take a away the one outlier (the NCSUjoint score of 0.21) then the average becomes 0.78 with a standard deviation of 0.05. However, we had expected that for TempEval-2 the systems would score better on task C since we added the restriction that the event and time expression had to be syntactically adjacent. It is not clear why the results on task C have not improved.",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 376,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "System Results",
"sec_num": "6"
},
{
"text": "In this paper, we described the TempEval-2 task within the SemEval 2010 competition. This task involves identifying the temporal relations between events and temporal expressions in text. Using a subset of TimeML temporal relations, we show how temporal relations and anchorings can be annotated and identified in six different languages. The markup language adopted presents a descriptive framework with which to examine the temporal aspects of natural language information, demonstrating in particular, how tense and temporal information is encoded in specific sentences, and how temporal relations are encoded between events and temporal expressions. This work paves the way towards establishing a broad and open standard metadata markup language for natural language texts, examining events, temporal expressions, and their orderings. One thing that would need to be addressed in a follow-up task is what the optimal number of tasks is. Tempeval-2 had six tasks, spread out over six languages. This brought about some logistical challenges that delayed data delivery and may have given rise to a situation where there was simply not enough time for many systems to properly prepare. And clearly, the shared task was not successful in attracting systems to four of the six languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "See http://www.timeml.org for language specifications and annotation guidelines",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See www.timeml.org for details on TimeML, Time-Bank is distributed free of charge by the Linguistic Data Consortium (www.ldc.upenn.edu), catalog number LDC2006T08.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Many people were involved in TempEval-2. We want to express our gratitude to the following key contributors: Nianwen Xue, Estela Saquete, Lotus Goldberg, Seohyun Im, Andr\u00e9 Bittar, Nicoletta Calzolari, Jessica Moszkowicz and Hyopil Shin.Additional thanks to Joan Banach, Judith Domingo, Pau Gim\u00e9nez, Jimena del Solar, Teresa Su\u00f1ol, Allyson Ettinger, Sharon Spivak, Nahed Abul-Hassan, Ari Abelman, John Polson, Alexandra Nunez, Virginia Partridge, , Amber Stubbs, Alex Plotnick, Yuping Zhou, Philippe Muller and The work on the Spanish corpus was supported by a EU Marie Curie International Reintegration Grant (PIRG04- GA-2008-239414). Work on the English corpus was supported under the NSF-CRI grant 0551615, \"Towards a Comprehensive Linguistic Annotation of Language\" and the NSF-INT-0753069 project \"Sustainable Interoperability for Language Technology (SILT)\", funded by the National Science Foundation.Finally, thanks to all the participants, for sticking with a task that was not always as flawless and timely as it could have been in a perfect world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Maintaining knowledge about temporal intervals",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1983,
"venue": "Communications of the ACM",
"volume": "26",
"issue": "11",
"pages": "832--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allen. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832-843.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Timebank evolution as a community resource for timeml parsing. Language Resource and Evaluation",
"authors": [
{
"first": "Bran",
"middle": [],
"last": "Boguraev",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Rie",
"middle": [],
"last": "Ando",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "41",
"issue": "",
"pages": "91--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bran Boguraev, James Pustejovsky, Rie Ando, and Marc Verhagen. 2007. Timebank evolution as a community resource for timeml parsing. Language Resource and Evaluation, 41(1):91-115.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The TimeBank Corpus. Corpus Linguistics",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Lazo",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, David Day, Lisa Ferro, Robert Gaizauskas, Patrick Hanks, Marcia Lazo, Roser Saur\u00ed, Andrew See, Andrea Setzer, and Beth Sund- heim. 2003. The TimeBank Corpus. Corpus Lin- guistics, March.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Annotating events in spanish. timeml annotation guidelines",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Batiukova",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Saur\u00ed, Olga Batiukova, and James Pustejovsky. 2009a. Annotating events in spanish. timeml an- notation guidelines. Technical Report Version TempEval-2010., Barcelona Media -Innovation Center.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annotating time expressions in spanish. timeml annotation guidelines",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "Estela",
"middle": [],
"last": "Saquete",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Saur\u00ed, Estela Saquete, and James Pustejovsky. 2009b. Annotating time expressions in spanish. timeml annotation guidelines. Technical Report Version TempEval-2010, Barcelona Media -Inno- vation Center.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ancora: Multilevel annotated corpora for catalan and spanish",
"authors": [
{
"first": "Mariona",
"middle": [],
"last": "Taul\u00e9",
"suffix": ""
},
{
"first": "Toni",
"middle": [],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariona Taul\u00e9, Toni Mart\u00ed, and Marta Recasens. 2008. Ancora: Multilevel annotated corpora for catalan and spanish. In Proceedings of the LREC 2008, Marrakesh, Morocco.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2007 task 15: Tempeval temporal relation identification",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Schilder",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hepple",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the Fourth Int. Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "75--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval tempo- ral relation identification. In Proc. of the Fourth Int. Workshop on Semantic Evaluations (SemEval- 2007), pages 75-80, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The tempeval challenge: identifying temporal relations in text. Language Resources and Evaluation",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Schilder",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hepple",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Moszkowicz",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Jessica Moszkowicz, and James Pustejovsky. 2009. The tempeval challenge: iden- tifying temporal relations in text. Language Re- sources and Evaluation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Brandeis Annotation Tool",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
}
],
"year": 2010,
"venue": "Language Resources and Evaluation Conference, LREC 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Verhagen. 2010. The Brandeis Annotation Tool. In Language Resources and Evaluation Conference, LREC 2010, Malta.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Mary taught e1 on Tuesday morning t1 OVERLAP(e1,t1) (6) They cancelled the evening t2 class e2 OVERLAP(e2,t2) (7) Most troops will leave e1 Iraq by August of 2010. AFTER(e1,dct)",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Merging Relations and more reliable results (annotated data) than doing a complex task all at once.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">language tokens C D E F X</td></tr><tr><td colspan=\"2\">Chinese 23,000</td></tr><tr><td>English</td><td>63,000</td></tr><tr><td>Italian</td><td>27,000</td></tr><tr><td>French</td><td>19,000</td></tr><tr><td>Korean</td><td>14,000</td></tr><tr><td>Spanish</td><td>68,000</td></tr></table>",
"num": null,
"text": "gives the size of the training set and the relation tasks that were included."
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">Task A results for Spanish</td></tr><tr><td>team</td><td>p</td><td>r</td><td>f</td><td>type val</td></tr><tr><td>Edinburgh</td><td colspan=\"4\">0.85 0.82 0.84 0.84 0.63</td></tr><tr><td colspan=\"5\">HeidelTime1 0.90 0.82 0.86 0.96 0.85</td></tr><tr><td colspan=\"5\">HeidelTime2 0.82 0.91 0.86 0.92 0.77</td></tr><tr><td>JU CSE</td><td colspan=\"4\">0.55 0.17 0.26 0.00 0.00</td></tr><tr><td>KUL</td><td colspan=\"4\">0.78 0.82 0.80 0.91 0.55</td></tr><tr><td>KUL Run 2</td><td colspan=\"4\">0.73 0.88 0.80 0.91 0.55</td></tr><tr><td>KUL Run 3</td><td colspan=\"4\">0.85 0.84 0.84 0.91 0.55</td></tr><tr><td>KUL Run 4</td><td colspan=\"4\">0.76 0.83 0.80 0.91 0.51</td></tr><tr><td>KUL Run 5</td><td colspan=\"4\">0.75 0.85 0.80 0.91 0.51</td></tr><tr><td>TERSEO</td><td colspan=\"4\">0.76 0.66 0.71 0.98 0.65</td></tr><tr><td>TIPSem</td><td colspan=\"4\">0.92 0.80 0.85 0.92 0.65</td></tr><tr><td>TIPSem-B</td><td colspan=\"4\">0.88 0.60 0.71 0.88 0.59</td></tr><tr><td>TRIOS</td><td colspan=\"4\">0.85 0.85 0.85 0.94 0.76</td></tr><tr><td>TRIPS</td><td colspan=\"4\">0.85 0.85 0.85 0.94 0.76</td></tr><tr><td>USFD2</td><td colspan=\"4\">0.84 0.79 0.82 0.90 0.17</td></tr></table>",
"num": null,
"text": ""
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Task A results for English"
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>team</td><td>p</td><td>r</td><td>f</td></tr><tr><td>TIPSem</td><td colspan=\"3\">0.90 0.86 0.88</td></tr><tr><td colspan=\"4\">TIPSem-B 0.92 0.85 0.88</td></tr><tr><td>team</td><td>p</td><td>r</td><td>f</td></tr><tr><td colspan=\"4\">Edinburgh 0.75 0.85 0.80</td></tr><tr><td>JU CSE</td><td colspan=\"3\">0.48 0.56 0.52</td></tr><tr><td>TIPSem</td><td colspan=\"3\">0.81 0.86 0.83</td></tr><tr><td colspan=\"4\">TIPSem-B 0.83 0.81 0.82</td></tr><tr><td>TRIOS</td><td colspan=\"3\">0.80 0.74 0.77</td></tr><tr><td>TRIPS</td><td colspan=\"3\">0.55 0.88 0.68</td></tr></table>",
"num": null,
"text": "ble contains the results for Spanish and the next part the results for English."
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">: Event extent results</td><td/></tr><tr><td colspan=\"5\">The column headers in table 5 are abbrevia-</td></tr><tr><td colspan=\"5\">tions for polarity (pol), mood (moo), modality</td></tr><tr><td colspan=\"5\">(mod), tense (tns), aspect (asp) and class (cl). Note</td></tr><tr><td colspan=\"5\">that the English team chose to include modality</td></tr><tr><td colspan=\"4\">whereas the Spanish team used mood.</td><td/></tr><tr><td>team</td><td>pol</td><td>moo tns</td><td>asp</td><td>cl</td></tr><tr><td>TIPSem</td><td colspan=\"4\">0.92 0.80 0.96 0.89 0.66</td></tr><tr><td colspan=\"5\">TIPSem-B 0.92 0.79 0.96 0.89 0.66</td></tr><tr><td>team</td><td>pol</td><td>mod tns</td><td>asp</td><td>cl</td></tr><tr><td colspan=\"5\">Edinburgh 0.99 0.99 0.92 0.98 0.76</td></tr><tr><td>JU CSE</td><td colspan=\"4\">0.98 0.98 0.30 0.95 0.53</td></tr><tr><td>TIPSem</td><td colspan=\"4\">0.98 0.97 0.86 0.97 0.79</td></tr><tr><td colspan=\"5\">TIPSem-B 0.98 0.98 0.85 0.97 0.79</td></tr><tr><td>TRIOS</td><td colspan=\"4\">0.99 0.95 0.91 0.98 0.77</td></tr><tr><td>TRIPS</td><td colspan=\"4\">0.99 0.96 0.67 0.97 0.67</td></tr></table>",
"num": null,
"text": ""
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Event attribute results"
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>C</td><td>D</td><td>E</td></tr><tr><td colspan=\"4\">tempeval-1 average 0.59 0.76 0.51</td></tr><tr><td>stddev</td><td colspan=\"3\">0.03 0.03 0.05</td></tr><tr><td colspan=\"4\">tempeval-2 average 0.61 0.70 0.53</td></tr><tr><td>stddev</td><td colspan=\"3\">0.04 0.22 0.05</td></tr></table>",
"num": null,
"text": "Percentage not classifiedA comparison with the Tempeval-1 results from Semeval-2007 may be of interest. Six systems participated in the TempEval-1 tasks, compared to seven or eight systems for TempEval-2.Table 8lists the average scores and the standard deviations for all the tasks (on the English data) that Tempeval-1 and Tempeval-2 have in common."
},
"TABREF10": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": ""
}
}
}
}