{
"paper_id": "K16-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:31.891200Z"
},
"title": "Identifying Temporality of Word Senses Based on Minimum Cuts",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hasanuzzaman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Caen Normandie",
"location": {
"settlement": "Caen",
"country": "France"
}
},
"email": ""
},
{
"first": "G",
"middle": [],
"last": "Dias",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Caen Normandie",
"location": {
"settlement": "Caen",
"country": "France"
}
},
"email": ""
},
{
"first": "S",
"middle": [],
"last": "Ferrari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Caen Normandie",
"location": {
"settlement": "Caen",
"country": "France"
}
},
"email": ""
},
{
"first": "Y",
"middle": [],
"last": "Mathet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Caen Normandie",
"location": {
"settlement": "Caen",
"country": "France"
}
},
"email": ""
},
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The ability to capture time information is essential to many natural language processing and information retrieval applications. Therefore, a lexical resource associating word senses to their temporal orientation might be crucial for the computational tasks aiming at the interpretation of language of time in texts. In this paper, we propose a semi-supervised minimum cuts strategy that makes use of WordNet glosses and semantic relations to supplement WordNet entries with temporal information. Intrinsic and extrinsic evaluations show that our approach outperforms prior semi-supervised non-graph classifiers.",
"pdf_parse": {
"paper_id": "K16-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "The ability to capture time information is essential to many natural language processing and information retrieval applications. Therefore, a lexical resource associating word senses to their temporal orientation might be crucial for the computational tasks aiming at the interpretation of language of time in texts. In this paper, we propose a semi-supervised minimum cuts strategy that makes use of WordNet glosses and semantic relations to supplement WordNet entries with temporal information. Intrinsic and extrinsic evaluations show that our approach outperforms prior semi-supervised non-graph classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recognizing temporal information can significantly improve the functionality of information retrieval (Campos et al., 2014) and natural language processing (Mani et al., 2005) applications.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Campos et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 156,
"end": 175,
"text": "(Mani et al., 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most text applications have been relying on rule-based time taggers such as HeidelTime (Str\u00f6tgen and Gertz, 2015) or SUTime (Chang and Manning, 2012) to identify and normalize time mentions in texts. Although interesting levels of performance have been seen (UzZaman et al., 2013) , their coverage is limited to the finite number of rules they implement. Let's take the following sentence: \"Apple's iPhone is currently one of the most popular smartphone\". When labeled by SU-Time 1 or HeidelTime 2 , the adverb currently is correctly tagged with the PRESENT_REF value. However, if we change the sentence to \"Apple's iPhone is one of the most popular smartphones at the present day\", no temporal mention is found, although one may expect that within this context currently and present day share some equivalent temporal dimension. Such systems would certainly benefit from the existence of a temporal resource enumerating a large set of possible time variants (Kuzey et al., 2016) .",
"cite_spans": [
{
"start": 87,
"end": 113,
"text": "(Str\u00f6tgen and Gertz, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 258,
"end": 280,
"text": "(UzZaman et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 959,
"end": 979,
"text": "(Kuzey et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In parallel, new trends have emerged in the context of human temporal orientation (Schwartz et al., 2015) . The underlying idea is to understand how past, present, and future emphasis in text may affect people's finances, health, and happiness. For that purpose, temporal classifiers are built to detect the overall temporal dimension of a given sentence. For instance, the following Facebook post \"can't wait to get a pint tonight\" would be tagged as FUTURE. Successful features include timexes, specific temporal (past, present, future) words from a commercial dictionary, but also ngrams, thus indicating that temporality may be embodied by multi-word terms, whose temporal orientation is unknown.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Schwartz et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a consequence, discovering the temporal orientation of words is a challenging issue that may benefit many text applications. Whereas most prior studies have focused on temporal expressions and events, there has been a lack of work looking at the temporal orientation of word senses. In this paper, we focus on automatically timetagging word senses in WordNet (Miller, 1995) as past, present, future, or atemporal based on their glosses and relational semantic structures in the line of and Hasanuzzaman et al. (2014b) . In particular, we propose a semi-supervised graph-based strategy that relies on the max-flow min-cut theorem (Papadimitriou and Steiglitz, 1998; Blum and Chawla, 2001) , that finds successive minimum cuts in a connected graph to time-tag each synset as one of the four dimensions. Compared to previous work based on propagation strategies , the exploration of WordNet's graph structure with minimum cuts allows us to independently model both temporal connotation and semantic denotation. In order to evaluate our proposal, both intrinsic (inter-annotator agreement and temporal sense classification) and extrinsic (temporal sentence classification and temporal relation annotation) evaluations have been performed. In both cases, the proposed methodology outperformed state-of-the-art approaches. developed TempoWordNet (TWnL), an extension of WordNet, where each synset is augmented with its temporal connotation (past, present, future, or atemporal). It mainly relies on the quantitative analysis of the glosses associated to synsets, and on the use of the resulting vector space model representations for semisupervised synset classification. In particular, temporal classifiers are learned over manually labeled synsets (seed list), and new learning synsets are chosen based on their specific semantic relation (e.g. hyponymy) with synsets from the seed list. Their class is given by the synset they have been propagated from. This process is iterated until cross-validation accuracy drops. The final classifier is used to time-tag all WordNet synsets.",
"cite_spans": [
{
"start": 362,
"end": 376,
"text": "(Miller, 1995)",
"ref_id": "BIBREF9"
},
{
"start": 493,
"end": 520,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
},
{
"start": 632,
"end": 667,
"text": "(Papadimitriou and Steiglitz, 1998;",
"ref_id": "BIBREF11"
},
{
"start": 668,
"end": 690,
"text": "Blum and Chawla, 2001)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While show that TWnL can be useful to time-tag web queries, less comprehensive results are shown in Filannino and Nenadic (2014) , where TWnL learning features do not lead to any classification improvements. Moreover, mention that exclusive semantic propagation is error-prone as some semantic relations do not preserve temporal connotation. As a consequence, Hasanuzzaman et al. (2014b) defined two different propagation strategies: probabilistic and hybrid, leading to TWnP and TWnH, respectively. They follow the exact same idea of , but for probabilistic propagation, new synsets are chosen from the most confidently classified synsets over the whole of WordNet at each iteration. In addition, for the hybrid expansion, new learning instances are included if they are highly representative of a given class but at the same time demonstrate high average semantic similarity over the seed list. Although some slight improvements were seen, no conclusive position could be reached due to the limited scope of the evaluation as well as discrepancies between human judgment, and automatic classification results.",
"cite_spans": [
{
"start": 100,
"end": 128,
"text": "Filannino and Nenadic (2014)",
"ref_id": null
},
{
"start": 360,
"end": 387,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One of the main weaknesses of the aforementioned approaches is that they mostly rely on the ability of the methodology to provide new learning instances by propagation within WordNet. However, in all cases, they do not take proper advantage of the relational structure of WordNet. Indeed, semantic coherence (for TWnL and TWnH) is only calculated between new instances and synsets from the seed list, but never between new instances themselves. 3 However, one may expect that highly correlated new instances should be treated commonly. One solution to deal with this problem is to define the classification problem as an optimization process, where both semantic coherence and temporal orientation are treated as combined objectives. For that purpose, we propose to adapt the standard s-t mincut algorithm (Blum and Chawla, 2001 ) to our particular semisupervised multi-class learning problem.",
"cite_spans": [
{
"start": 806,
"end": 828,
"text": "(Blum and Chawla, 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The s-t mincut algorithm is based on finding minimum cuts in a graph, and uses pairwise relationships among examples in order to learn from both labeled and unlabeled data. In particular, it outputs a classification corresponding to partitioning a graph in a way that minimizes the number of similar pairs of examples that are given different labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with s-t mincut",
"sec_num": "3"
},
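To make the intuition concrete, here is a minimal sketch of binary classification via an s-t minimum cut using networkx; the items, individual scores (ind1/ind2), and association weights below are toy values invented for illustration, not the paper's data.

```python
import networkx as nx

G = nx.Graph()
items = ["x1", "x2", "x3"]

# Individual scores: non-negative estimates of class membership (toy values).
ind1 = {"x1": 0.9, "x2": 0.6, "x3": 0.2}  # degree of belonging to C1 (source side)
ind2 = {"x1": 0.1, "x2": 0.4, "x3": 0.8}  # degree of belonging to C2 (sink side)

for x in items:
    G.add_edge("s", x, capacity=ind1[x])
    G.add_edge(x, "t", capacity=ind2[x])

# Association scores: how important it is that two items share a class.
G.add_edge("x1", "x2", capacity=0.7)
G.add_edge("x2", "x3", capacity=0.3)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
print(cut_value)               # cost of the optimal cut
print(S - {"s"}, T - {"t"})    # items assigned to C1 and C2, respectively
```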
{
"text": "Let us consider n items x 1 , . . . x n to divide into two classes C 1 and C 2 based on two different types of information. The first information type -the individual score denoted as ind j (x i ) -measures the non-negative estimate of each x i belonging to class C j based on the features of x i alone. The second information type -the association score denoted as assoc(x i , x k ) -represents the non-negative estimate of how important is that x i and x k be in the same class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Principles",
"sec_num": "3.1"
},
{
"text": "This situation can be represented as an undirected graph G with vertices {v 1 , . . . , v n , s,t}, where s and t are respectively the source and sink vertices, which represent each class label and one vertex v i corresponds to a given item x i . If s (resp. t) corresponds to class C 1 (resp. C 2 ), we add n edges (s, v i ), each with weight ind 1 (x i ), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Principles",
"sec_num": "3.1"
},
{
"text": "n edges (v i ,t), each with weight ind 2 (x i ). Fi- nally, we add n 2 edges (v i , v k ), each with weight assoc(x i , x k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Principles",
"sec_num": "3.1"
},
{
"text": "The learning process corresponds to finding the minimum cut in G that minimizes some cost function, where (i) a cut (S, T ) of G is a partition of its nodes into sets S = {s} \u222a S and T = {t} \u222a T where s / \u2208 S and t / \u2208 T , and (ii) its cost cost(S, T ) is the sum of the weights of all edges crossing from S to T , as defined in equation 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Principles",
"sec_num": "3.1"
},
{
"text": "\u2211 x\u2208C 1 ind 2 (x) + \u2211 x\u2208C 2 ind 1 (x) + \u2211 x i \u2208C 1 ,x k \u2208C 2 assoc(x i , x k ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Principles",
"sec_num": "3.1"
},
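As a sanity check, equation (1) can be transcribed directly into code. The sketch below assumes ind1/ind2 map items to their individual scores and assoc maps unordered item pairs to association weights; these names follow the paper's notation but the data structures are illustrative.

```python
def cut_cost(C1, C2, ind1, ind2, assoc):
    """Cost of the cut placing C1 with the source and C2 with the sink (eq. 1)."""
    cost = sum(ind2[x] for x in C1)    # C1 items severed from the sink class
    cost += sum(ind1[x] for x in C2)   # C2 items severed from the source class
    cost += sum(w for (xi, xk), w in assoc.items()
                if (xi in C1) != (xk in C1))  # association edges crossing the cut
    return cost
```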
{
"text": "Formulating the task of temporality detection on word senses in terms of graphs allows us to model item-specific and pair-wise information independently. As a consequence, machine learning algorithms representing temporal indicators can be used to derive individual scores for a particular sense in isolation. The edges weighted by the individual scores of a vertex (sense) to the source/sink can be interpreted as the probability of a sense belonging to a given temporal class without taking into account similarity to other senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "At the same time, we can use conceptualsemantic relations from WordNet to derive the association scores. The edges between two senses weighted by the association scores can indicate how similar two senses are. If two senses are connected via a temporality-preserving relation, they are likely to both belong to a temporal class. For instance, hyponymy relation is usually a temporality-preserving relation, 4 where two hyponyms such as present, nowadays -the period of time that is happening now and now -the momentary present are both temporal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "To detect the temporal orientation of word senses, and Hasanuzzaman et al. (2014b) adopted a single view instead of two views on the data. The ability to combine two views on the data is precisely one of the strengths of the s-t mincut strategy.",
"cite_spans": [
{
"start": 55,
"end": 82,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "Second, the s-t mincut algorithm is a semisupervised framework. This is essential as the existing labeled datasets for our problem are small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "In addition, glosses are short, leading to sparse high-dimensional vectors in standard feature representations. Furthermore, WordNet connections between different parts of the WordNet hierarchy can be sparse, leading to relatively isolated senses in a graph in a supervised framework. The mincut strategy allows us to import unlabeled data that can serve as bridges to isolated components. More importantly, the unlabeled data can be related to the labeled data (by some WordNet relation) and might help to pull unlabeled data to the right cuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "It is also important to note that transductive methods such as the s-t mincut algorithm particularly suit our case study as all learning examples are known. However, the addition of new word senses would require the re-application of the method to the entire graph. Indeed, the model does not learn to predict unseen examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages and Disadvantages",
"sec_num": "3.2"
},
{
"text": "The formulation of our mincut strategy for temporal classification of synsets involves the following steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.3"
},
{
"text": "Step I. We define two vertices s (source) and t (sink), which correspond to the temporal and atemporal categories, respectively. Vertices s and t are classification vertices, and all other vertices (labeled, unlabeled, and test) are example vertices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.3"
},
{
"text": "Step II. The labeled examples are connected to the classification vertices they belong to via edges with high constant non-negative weight. The unlabeled examples are connected to the classification vertices via edges weighted with non-negative scores that indicate the degree of belonging to both the temporal and atemporal categories. Weights (i.e. individual scores) are calculated based on a supervised classifier learned from labeled examples (cf. Section 3.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.3"
},
{
"text": "Step III. For all pairs of example vertices, for which there exists a listed semantic relation in WordNet, an edge is created. This one receives a non-negative score that indicates the degree of semantic relationship between both vertices and corresponds to the association score (cf. Section 3.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.3"
},
{
"text": "Step IV. The max-flow theorem (Papadimitriou and Steiglitz, 1998) is applied over the built graph to find the minimum s-t cut. 5 Step V. The temporal partition is then divided into three temporal sub-partitions (past, present, and future) following a hierarchical strategy. First, we define two new vertices s and t, which correspond to past and not_past categories, respectively, and follow steps II through IV . This divides the subgraph into two disjoint subsets, i.e. past synsets, and synsets belonging either to present or future. Finally, we repeat steps II through IV , where vertices s and t correspond to future and present, respectively (cf. Section 3.6).",
"cite_spans": [
{
"start": 30,
"end": 65,
"text": "(Papadimitriou and Steiglitz, 1998)",
"ref_id": "BIBREF11"
},
{
"start": 127,
"end": 128,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.3"
},
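A sketch of the hierarchical procedure (Steps I-V) is given below. The helper binary_mincut is hypothetical: it stands for Steps I-IV applied to one binary problem, i.e. building the graph of Sections 3.4-3.5 and returning the two resulting partitions. Only the control flow is taken from the paper.

```python
def classify_synsets(labeled, unlabeled, binary_mincut):
    # Level 1: temporal vs. atemporal over all example vertices.
    temporal, atemporal = binary_mincut(labeled, unlabeled,
                                        classes=("temporal", "atemporal"))
    # Level 2: split the temporal partition into past vs. not_past.
    past, not_past = binary_mincut(labeled, temporal,
                                   classes=("past", "not_past"))
    # Level 3: split the remainder into future vs. present.
    future, present = binary_mincut(labeled, not_past,
                                    classes=("future", "present"))
    return {"past": past, "present": present,
            "future": future, "atemporal": atemporal}
```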
{
"text": "The non-negative edge weights to s and t denote how an example vertex is related to a specific class. For the unlabeled and test examples, a supervised learning strategy is used to assign edge weights. Each synset from a labeled dataset -we use the dataset provided by Dias et al. 2014which contains past, present, future and atemporal senses is represented by its gloss encoded as a vector of word unigrams weighted by their frequency. 6 Then, depending on the classification task, a two-class SVM classifier is built from the Weka platform. 7 In particular, the SVM membership scores are transformed into probability estimates based on Platt calibration (Niculescu-Mizil and Caruana, 2005) , which are directly mapped to edge weights. In Table 1 , we present the 10-fold cross-validation results for all classifiers tested in the context of this work.",
"cite_spans": [
{
"start": 437,
"end": 438,
"text": "6",
"ref_id": null
},
{
"start": 656,
"end": 691,
"text": "(Niculescu-Mizil and Caruana, 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 740,
"end": 747,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Individual Scores",
"sec_num": "3.4"
},
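A minimal sketch of this individual-score pipeline, with scikit-learn standing in for the Weka setup described above: gloss unigram counts are fed to a linear SVM whose outputs are turned into probabilities via Platt-style sigmoid calibration. The seed glosses and labels below are toy stand-ins for the labeled synsets of Dias et al. (2014).

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy seed glosses (invented); the real setup uses the labeled synset glosses.
glosses = ["the period of time that is happening now",
           "the time yet to come",
           "a verb tense that expresses actions in the past",
           "happening at this moment",
           "a machine for performing calculations",
           "a domesticated carnivorous mammal",
           "a large natural stream of water",
           "an edible fruit with a sweet taste"]
labels = ["temporal"] * 4 + ["atemporal"] * 4

vec = CountVectorizer()                 # gloss unigrams weighted by frequency
X = vec.fit_transform(glosses)

# Platt-style sigmoid calibration on top of a linear SVM.
clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=2)
clf.fit(X, labels)

proba = clf.predict_proba(vec.transform(["the momentary present"]))[0]
scores = dict(zip(clf.classes_, proba))  # edge weights to source and sink
```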
{
"text": "In order to ensure that the mincut procedure does not reverse the labels of the labeled examples, a high non-negative constant weight of 3 is assigned to any edge between a labeled vertex and its corresponding classification vertex, and a low non-negative constant weight of 0.001 to the edge to the other classification vertex. This is a classical implementation of +\u221e and 1/ + \u221e theoretical weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual Scores",
"sec_num": "3.4"
},
{
"text": "While formulating the graph, we connect two example vertices by an edge if they are linked by one of the 10 WordNet relations presented in Table 2 . The main motivation towards using other relations in addition to the most frequently encoded relations (e.g. hypernym/hyponym) among synsets in WordNet is to achieve high graph connectivity. Different weights can be assigned to different relations to reflect the degree to which they preserve temporality. Therefore, we adopt two strategies to assign weights to different WordNet relations. The first method (ScWt) assigns the same constant weight of 1.0 to all WordNet relations. The second method (DiffWt) considers several degrees of preserving temporality. In order to do this, we adopt a simple rule-based strategy to produce a large noisy set of temporal and atemporal synsets from WordNet. First, we take the list of 30 hand-crafted temporal seed synsets (equally distributed over past, present, and future) proposed in along with their direct hyponym synsets. This forms a temporal list. Then, each WordNet synset that contains a word sense from the temporal list in its gloss is 'roughly' classified as temporal. Otherwise, it is considered as atemporal. We then simply count how often two synsets connected by a given relation have the same or different temporal dimension. Finally, the weight is calculated by #same/(#same+#different) and corresponds to the association score between two example vertices. Results are reported in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1490,
"end": 1497,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Association Scores",
"sec_num": "3.5"
},
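A sketch of the DiffWt counting procedure for one relation (direct hyponymy), using NLTK's WordNet interface. The noisy_tag rule and the tiny SEEDS set are simplified stand-ins for the paper's seed-based rough labeling; the NLTK WordNet corpus is assumed to be installed.

```python
from collections import defaultdict
from nltk.corpus import wordnet as wn  # assumes nltk.download("wordnet")

SEEDS = {"past", "present", "future", "now", "today"}  # tiny stand-in seed list

def noisy_tag(synset):
    # 'Rough' rule: temporal if the gloss mentions a seed word.
    gloss_words = set(synset.definition().lower().split())
    return "temporal" if SEEDS & gloss_words else "atemporal"

same, diff = defaultdict(int), defaultdict(int)
for syn in wn.all_synsets():
    for hyp in syn.hyponyms():                 # one of the 10 relations
        if noisy_tag(syn) == noisy_tag(hyp):
            same["Direct-Hyponym"] += 1
        else:
            diff["Direct-Hyponym"] += 1

# DiffWt association score: #same / (#same + #different) per relation.
weights = {r: same[r] / (same[r] + diff[r]) for r in same}
```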
{
"text": "Note that the exact same strategy is used for the two hierarchical steps, for which new association scores are calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Scores",
"sec_num": "3.5"
},
{
"text": "The order of the hierarchical process is driven by classifier accuracy over the labeled dataset pro-vided by (cf. Section 4) . In order to give the maximum chance of good partitioning at the second level of the hierarchy, we choose the classification problem to handle based on the SVM classifier that demonstrates highest accuracy over the following problems: past vs. not_past, present vs. not_present, and future vs. not_future. In so doing, we can rely on the best possible individual score function. As can be seen in Table 1 , this is the case for past vs. not_past, which happens to be the first sub-partitioning problem. The third level is straightforward, i.e. present vs. future. We are aware that this simple strategy is prone to bias. However, as manual evaluation of the final resource is involved, producing more results was logistically hard to handle. Nonetheless, testing all combinations remains work that needs to be conducted in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 124,
"text": "(cf. Section 4)",
"ref_id": null
},
{
"start": 523,
"end": 531,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hierarchical Strategy",
"sec_num": "3.6"
},
{
"text": "Labeled Dataset. We used a list that consists of 632 temporal synsets and an equal number of atemporal synsets provided by as labeled data for our experiments. Temporal synsets are distributed as follows: 210 synsets marked as past, 291 as present, and 131 as future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
{
"text": "Test Dataset. As the labeled dataset is small, we created an annotation task using the Crowd-Flower platform 8 in order to produce a testset. For the annotation task, 398 synsets equally distributed over nouns, verbs, adjectives, and adverbs along with their lemmas and glosses were randomly selected from WordNet 9 as representative of the whole WordNet. Note that this number is a statistically significant representative sample of all WordNet synsets calculated as defined in Israel (1992).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
{
"text": "The annotators were expected to answer two questions for a given synset (lemmas and gloss were also provided). While the first question is related to the decision as to whether a synset is temporal or atemporal, the motivation behind the second question is to collect a more fine-grained (past, present, future) gold-standard. 10 The reliability of the annotators was evaluated on 60 control synsets from the labeled dataset, and 10 ambiguous synsets associated to more than one temporal dimension. Similary to Tekiroglu et al. (2014), raters who scored at least 70% accuracy on average on both sets were considered to be reliable. Finally, each synset was annotated by at least 10 reliable raters.",
"cite_spans": [
{
"start": 327,
"end": 329,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
{
"text": "To have a concrete idea about the agreement between annotators, we calculated the majority class for each synset in our dataset. A synset belongs to a majority class k if the most frequent annotation for the synset was selected by at least k annotators. As a consequence, a large percentage of synsets belonging to high majority classes are symptomatic of good inter-annotator agreement. Table 3 shows the observed agreement. Similarly to \u00d6zbal et al. 2011, we consider all annotations with a majority class greater than 5 as reliable. In this case, for the temporal vs. atemporal annotation scheme, 84.83% of the synsets were annotated identically by the majority of annotators, while for past, present, and future, 72.36% of the annotations fell into this case. As such, we can be confident that the annotation process was successful and the dataset is reliable.",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
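The majority-class computation itself is a simple vote count. A minimal sketch, with invented toy annotations (10 raters per synset, as in the paper's setup):

```python
from collections import Counter

# Toy annotations: 10 raters per synset (labels invented for illustration).
annotations = {"present.n.01": ["temporal"] * 8 + ["atemporal"] * 2,
               "device.n.01":  ["atemporal"] * 6 + ["temporal"] * 4}

# A synset falls in majority class k if its most frequent label got k votes.
majority = {syn: Counter(labels).most_common(1)[0][1]
            for syn, labels in annotations.items()}

# Annotations with majority class greater than 5 are considered reliable.
reliable = {syn for syn, k in majority.items() if k > 5}
```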
{
"text": "Different intrinsic and extrinsic evaluations have been proposed in prior studies. We compare our work to the same tasks as proposed by and Hasanuzzaman et al. (2014b) , and introduce an extra experiment on temporal relation annotation.",
"cite_spans": [
{
"start": 140,
"end": 167,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic and Extrinsic Evaluations",
"sec_num": "5"
},
{
"text": "In order to compare our approach to prior works, we adopted a similar evaluation strategy as proposed in and Hasanuzzaman et al. (2014b) . To assess human judgment regarding the temporal parts, inter-rater agreement with multiple raters (i.e. 3 human annotators with the 4th annotator being the classifier) was performed over a set of 398 randomly selected synsets. The freemarginal multirater kappa (Randolph, 2005) and the fixed-marginal multirater kappa (Siegel and Castellan, 1988) values are reported in Table 4 and assess moderate agreement for previous versions of TempoWordNet (TWnL, TWnP and TWnH) , while good agreement is obtained for the resources constructed by mincuts with both ScWt (MC1) and DiffWt (MC2) weighting schemes. Note that slightly different results than the ones reported by Table 3 : Percentage of synsets in each majority class. Hasanuzzaman et al. (2014b) are seen as the number of annotated synsets is much bigger in our experiment (398 instead of 50). These agreement values provide a first and promising estimate of the improvement over the previous versions of Tem-poWordNet. We plan to confirm that in the future by comparing the systems to a true reference instead of observing the agreement between the systems and a multi-reference as we currently do. Table 4 : Inter-annotator agreement.",
"cite_spans": [
{
"start": 109,
"end": 136,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
},
{
"start": 400,
"end": 416,
"text": "(Randolph, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 457,
"end": 485,
"text": "(Siegel and Castellan, 1988)",
"ref_id": "BIBREF14"
},
{
"start": 585,
"end": 606,
"text": "(TWnL, TWnP and TWnH)",
"ref_id": null
},
{
"start": 859,
"end": 886,
"text": "Hasanuzzaman et al. (2014b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 509,
"end": 516,
"text": "Table 4",
"ref_id": null
},
{
"start": 803,
"end": 810,
"text": "Table 3",
"ref_id": null
},
{
"start": 1291,
"end": 1298,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "5.1"
},
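For reference, Randolph's free-marginal multirater kappa is $\kappa_{free} = (\bar{P} - 1/q)/(1 - 1/q)$, where $\bar{P}$ is the Fleiss-style mean observed agreement and $q$ the number of categories. A minimal sketch (the rating counts below are toy values, not the paper's data):

```python
def free_marginal_kappa(ratings, q):
    """ratings[i][c] = number of raters assigning item i to category c."""
    n = sum(ratings[0])                        # raters per item (assumed constant)
    # Mean observed pairwise agreement across items (Fleiss-style).
    p_obs = sum(sum(c * (c - 1) for c in row) / (n * (n - 1))
                for row in ratings) / len(ratings)
    p_chance = 1.0 / q                         # free-marginal chance agreement
    return (p_obs - p_chance) / (1.0 - p_chance)

# 4 raters, 2 categories (temporal/atemporal), 3 items:
print(free_marginal_kappa([[4, 0], [3, 1], [2, 2]], q=2))  # ~0.22
```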
{
"text": "In order to compare our semi-supervised mincut approach to a reasonable baseline, we use a rule-based approach to classify test data into past, present, future, or atemporal categories. First, time expressions in glosses are identified and resolved via SUTime tagger (Chang and Manning, 2012). Then, for each synset, its time tags (e.g. FUTURE_REF) are considered as the temporal class for that particular synset. In cases where more than one temporal expression was observed (which occurred in less than 1% of the cases), the majority class is selected. If no time expression is identified by the time tagger, the list composed of 30 hand-crafted temporal seeds proposed in along with their direct hyponyms and a given list of standard temporal adverbials, prepositions and adjectives are used to classify synsets with one temporal dimension or atemporal. The performance of this simple rule-based approach is measured for the test data and presented in Table 5 as the baseline configuration. Note that to figure out the contribution of word sense disambiguation, the classical Lesk algorithm (Lesk, 1986) was used to choose the right sense for a given word instead of the most frequent sense. We found that this contribution is negligible (< 0.4% improvement in accuracy).",
"cite_spans": [
{
"start": 1095,
"end": 1107,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 955,
"end": 963,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
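A sketch of this rule-based baseline. Here sutime_tags is an assumed wrapper around the SUTime tagger returning values such as "PAST_REF"/"PRESENT_REF"/"FUTURE_REF", and SEEDS is a tiny stand-in for the 30 hand-crafted seeds, their direct hyponyms, and the temporal adverbial/preposition/adjective lists.

```python
from collections import Counter

TAG2CLASS = {"PAST_REF": "past", "PRESENT_REF": "present", "FUTURE_REF": "future"}
SEEDS = {"past": {"yesterday", "ago", "former"},      # stand-in for the seed
         "present": {"now", "today", "current"},      # synsets, hyponyms, and
         "future": {"tomorrow", "soon", "upcoming"}}  # temporal adverbial lists

def baseline(gloss, sutime_tags):
    # sutime_tags(gloss) is an assumed SUTime wrapper (hypothetical helper).
    tags = [TAG2CLASS[t] for t in sutime_tags(gloss) if t in TAG2CLASS]
    if tags:                                   # majority class among time tags
        return Counter(tags).most_common(1)[0][0]
    words = set(gloss.lower().split())
    for cls, seed_words in SEEDS.items():      # fall back to seed-list matching
        if words & seed_words:
            return cls
    return "atemporal"
```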
{
"text": "Comparative results are also presented against prior works: TWnL, TWnP, and TWnH. Table 5 shows that our configurations (MC1, MC2) perform significantly better than previous approaches. In particular, they achieve highest accuracies for temporal vs. atemporal and past, present, future classifications with improvements of 11.3% and 10.3%, respectively, over the second-best strategy, namely TWnH. Note that this enhancement is mainly due to higher precision overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "Different training data sizes. In order to better understand the importance of the size of labeled data in the context of semi-supervised classification strategies, we propose the following experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "We randomly generate equally distributed subsets of training data L i (from a set of 632 temporal and 632 atemporal synsets) such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "L 1 \u2282 L 2 \u2282 L 3 . . . \u2282 L n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "For each labeled dataset, we run the mincut strategy with DiffWt (i.e. MC2) and compare it to the hybrid propagation proposed by Hasanuzzaman et al. (2014b) (i.e. TWnH). Accuracies of both approaches over the test data are presented in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 243,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "The s-t mincut approach performs consistently better than the propagation strategy. In particular, we show that with 400 labeled examples better results can be obtained than relying on 1264 training items within a propagation paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "Considering the above findings, we selected the MC2 configuration obtained with maximum labeled data for the extrinsic experiments, which includes 110,002 atemporal synsets, 1733 past synsets, 4193 present synsets, and 1730 future synsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Classification",
"sec_num": "5.2"
},
{
"text": "Temporal sentence classification has traditionally been used as the baseline extrinsic evaluation and consists of labeling a given sentence as past, present or future. In order to produce comparative results with prior works, we test our methodology on the balanced dataset produced in , which consists of 1038 sentences equally distributed as past, present and future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Sentence Classification",
"sec_num": "5.3"
},
{
"text": "Moreover, we propose to extend these experiments with a corpus of 300 temporal posts from Twitter. This corpus contains 100 tweets for each temporal class, which have been time-tagged using the CrowdFlower platform\u1e1footnoteAnnotation details are out of the scope of this paper. For both experiments, each sentence/tweet is represented as a semantic vector space model in the exact same way as proposed in . Thus, a given learning example is a feature vector, where each attribute is either a unigram or a synonym of any temporal word contained in the sentence/tweet and its value is the tf.idf. Note that word sense disambiguation is performed using the Lesk algorithm (Lesk, 1986) . Comparative classification results are reported in Table 7 and show small improvements in the mincut strategy, when compared to propagation strategies. In particular, for tweet classification, TWnP shows similar results mainly due to its large coverage of temporal senses (counterbalanced by low precision as confirmed by Table 5 ). Indeed, TWnP contains 53,001 temporal synsets while MC2 only has 7656 temporal synsets. Note that the semantic enhancement is limited only to the synonymy relation, which drastically restricts the benefit of the semantic vector space model and due to the limited number of analyzed sentences/tweets, huge improvements were not expected.",
"cite_spans": [
{
"start": 668,
"end": 680,
"text": "(Lesk, 1986)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 7",
"ref_id": null
},
{
"start": 1005,
"end": 1012,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Temporal Sentence Classification",
"sec_num": "5.3"
},
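A sketch of this sentence/tweet representation: tokens of temporal words are expanded with the synonyms (lemmas) of their Lesk-disambiguated sense, then tf.idf-weighted. TEMPORAL_WORDS is a toy stand-in for the time-tagged lexicon (e.g. MC2), and NLTK's Lesk implementation stands in for the WSD step.

```python
from nltk.wsd import lesk   # assumes the NLTK WordNet corpus is installed
from sklearn.feature_extraction.text import TfidfVectorizer

TEMPORAL_WORDS = {"tonight", "tomorrow", "yesterday"}  # toy stand-in lexicon

def expand(sentence):
    tokens = sentence.lower().split()
    synonyms = []
    for tok in tokens:
        if tok in TEMPORAL_WORDS:
            sense = lesk(tokens, tok)          # word sense disambiguation
            if sense is not None:
                synonyms += [l.name() for l in sense.lemmas()]  # synonyms
    return " ".join(tokens + synonyms)

docs = ["can't wait to get a pint tonight", "we met yesterday"]
X = TfidfVectorizer().fit_transform(expand(d) for d in docs)
```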
{
"text": "Finally, we focus on the problem of classifying temporal relations as proposed in TempEval-3, assuming that the identification of events and timexes is already performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Annotation",
"sec_num": "5.4"
},
{
"text": "In order to produce comparative results with the best-performing system at TempEval-3, namely UTTime (Laokulrat et al., 2013) for the above task, we follow the guidelines and use the same datasets provided by the organizers (UzZaman et al., 2013) .",
"cite_spans": [
{
"start": 101,
"end": 125,
"text": "(Laokulrat et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 224,
"end": 246,
"text": "(UzZaman et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Annotation",
"sec_num": "5.4"
},
{
"text": "In particular, we restrict our experiment to a subset of relations, namely BEFORE (past), AF-TER (future), and INCLUDES (present), with all other relations mapped to the NA\u2212RELATION for the following two subtasks: event to document creation time and event to same sentence event. This choice is motivated by the complexity of mapping the 14 relations of TempEval-3 into three temporal classes (past, present, future). As such, we test a simpler configuration of the original problem, but we do expect to draw conclusive remarks as minimum bias is introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Annotation",
"sec_num": "5.4"
},
{
"text": "Note that the underlying idea of this evaluation is to measure the intuition expressed by (Kuzey et al., 2016) that temporal information extraction systems may benefit from the existence of temporal resources. If this is confirmed, deeper research should be conducted to adequately use such a proposed temporal resource for the whole task.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Kuzey et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Annotation",
"sec_num": "5.4"
},
{
"text": "To solve this classification problem, we adopt a simple supervised learning strategy based on state-of-the-art characteristics, plus features from a time-augmented version of WordNet. In particular, each pair of entities to be classified as BE-FORE, AFTER, INCLUDES or NA-RELATION is encoded with the following features: -String features: the tokens and lemmas of each entity pair; -Grammatical features: the part-of-speech tags",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relation Annotation",
"sec_num": "5.4"
},
{
"text": "TWnP TWnH MC2 Sentence classification (p,r,f1) (69.7,66.1,66.7) (68.2,70.5,69.3) (69.8,67.6,68.6) (73.3,70.1 71.4) Tweet classification (p,r,f1) (51.4,47.1,49.1) (50.4,52.8,51.5) (51.8,48.2,49.8) (52.8,50.6,51.6) Table 7 : Results for temporal sentence and tweet classification performed on 10-fold cross validation with SVM with Weka default parameters.",
"cite_spans": [
{
"start": 38,
"end": 114,
"text": "(p,r,f1) (69.7,66.1,66.7) (68.2,70.5,69.3) (69.8,67.6,68.6) (73.3,70.1 71.4)",
"ref_id": null
},
{
"start": 162,
"end": 212,
"text": "(50.4,52.8,51.5) (51.8,48.2,49.8) (52.8,50.6,51.6)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "TWnL",
"sec_num": null
},
{
"text": "of the entity pair (only for event-event pairs), and a binary feature indicating whether the entity pair has the same PoS tag; -Entity attributes: the entity pair attributes as provided in the dataset. These include class, tense, aspect, and polarity for events, while the attributes of time expressions are its type, value, and dct (indicating whether a time expression is the document creation time or not); -Dependency relation: the type of dependency and the dependency order between entities; -Textual context: the textual order of the entity pair; -Temporal lexicon: the relative frequency of each temporal category (past, present, future) appearing in the context of an entity pair; the context is considered as (i) the text appearing between entities, (ii) the text of all tokens in a time expression, and (iii) 5 tokens around time expressions or events. The features are encoded as the frequency with which a word from a temporal category appeared in the text divided by the total number of tokens in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TWnL",
"sec_num": null
},
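A sketch of the temporal-lexicon features just described: the relative frequency of each temporal category among the context tokens. The lexicon argument maps a word to its temporal category (derived from the time-tagged WordNet resource); all names are illustrative.

```python
def temporal_lexicon_features(context_tokens, lexicon):
    """Relative frequency of each temporal category among the context tokens."""
    counts = {"past": 0, "present": 0, "future": 0}
    for tok in context_tokens:
        category = lexicon.get(tok.lower())
        if category in counts:
            counts[category] += 1
    total = len(context_tokens) or 1           # guard against empty contexts
    return {cat: n / total for cat, n in counts.items()}

# e.g. temporal_lexicon_features("will arrive tomorrow".split(),
#                                {"tomorrow": "future"})
# -> {'past': 0.0, 'present': 0.0, 'future': 0.333...}
```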
{
"text": "Approaches Precision Recall F1 UTTime 57.5 58.7 58.1 TRMC2 66.9 68.7 67.7 TRTWnH 61.2 62.5 61.8 Table 8 : Temporal relation classification results.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "TWnL",
"sec_num": null
},
{
"text": "Based on this feature representation, the two best classifiers for event to document creation time and event to same sentence event subtasks are selected via a grid search over parameter settings. The grid is evaluated with a 5-fold cross validation on the training data and SVM classifiers are chosen with default parameters of the Weka platform. This produces two systems, namely TRMC2 and TRTWnH depending on the temporal lexicon used: MC2 or TWnH. Note that we also measure the performance of UTTime for the settings stated above. Table 8 presents comparative evaluations. Re-sults show that TRMC2 outperforms all other approaches and achieves highest performance in terms of precision, recall, and F1-measure. However, more important still is the fact that a simple learning strategy with some temporal lexicon (MC2 or TWnH) leads to improved results, when compared to some solution that does not take advantage of such a resource (UTTime, here). Table 9 : Feature ablation analysis. The most frequent class baseline (mfc).",
"cite_spans": [],
"ref_spans": [
{
"start": 535,
"end": 542,
"text": "Table 8",
"ref_id": null
},
{
"start": 952,
"end": 959,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "TWnL",
"sec_num": null
},
{
"text": "In order to measure the real impact of the temporal lexicon features, we present feature ablation analyses in Table 9 . Results clearly show the importance of the features based on the temporal lexicon, being the second best-performing feature set. As a consequence, we may conclude that improvements in temporal analysis may be obtained by the correct use of some temporal lexical resource.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "TWnL",
"sec_num": null
},
{
"text": "In this paper, we proposed a semi-supervised mincut strategy to address the relatively unexplored problem of associating word senses with their underlying temporal dimensions. We produce a reliable temporal lexical resource by automatically time-tagging WordNet synsets into past, present, future or atemporal categories. The underlying idea is that instead of using a single view on the data (as done in prior work), multiple views result in better temporal classification accuracy. In particular, both intrinsic and extrinsic experimental results confirm the soundness of the proposed approach and support our initial hypotheses. Note that the all resources created within this work are publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://nlp.stanford.edu:8080/sutime/ process 2 http://heideltime.ifi.uni-heidelberg. de/heideltime/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This may occur only through a side-effect process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although show that this is not always the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Max-flow algorithms show polynomial asymptotic running times and near-linear running times in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other sentence representations could be tested but this is out of the scope of this paper.7 http://www.cs.waikato.ac.nz/ml/weka/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.crowdflower.com/ 9 WordNet version 3.0 was used and all sysnsets were selected outside the labeled dataset.10 Details of the annotation guidelines are out of the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning from labeled and unlabeled data using graph mincuts",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Shuchi",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avrim Blum and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph mincuts. In Proceedings of the Eighteenth International Confer- ence on Machine Learning (ICML), pages 19-26, Massachusetts, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Survey of temporal information retrieval and related applications",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "Al\u00edpio",
"middle": [
"M"
],
"last": "Jorge",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Jatowt",
"suffix": ""
}
],
"year": 2014,
"venue": "ACM Computing Survey",
"volume": "47",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Campos, Ga\u00ebl Dias, Al\u00edpio M. Jorge, and Adam Jatowt. 2014. Survey of temporal informa- tion retrieval and related applications. ACM Com- puting Survey, 47(2):15:1-15:41.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Sutime: A library for recognizing and normalizing time expressions",
"authors": [
{
"first": "X",
"middle": [],
"last": "Angel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "3735--3740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angel X. Chang and Christopher Manning. 2012. Su- time: A library for recognizing and normalizing time expressions. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC), pages 3735-3740, Istanbul, Turkey.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tempowordnet for sentence time tagging",
"authors": [
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Hasanuzzaman",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ferrari",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Mathet",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion (WWW)",
"volume": "",
"issue": "",
"pages": "833--838",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ga\u00ebl Dias, Mohammed Hasanuzzaman, St\u00e9phane Fer- rari, and Yann Mathet. 2014. Tempowordnet for sentence time tagging. In Proceedings of the Com- panion Publication of the 23rd International Confer- ence on World Wide Web Companion (WWW), pages 833-838, Seoul, South Korea.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Propagation strategies for building temporal ontologies",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Hasanuzzaman",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ferrari",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Mathet",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "6--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Hasanuzzaman, Ga\u00ebl Dias, St\u00e9phane Fer- rari, and Yann Mathet. 2014. Propagation strategies for building temporal ontologies. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 6-11, Gothenburg, Sweden.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Temponym tagging: Temporal scopes for textual phrases",
"authors": [
{
"first": "Erdal",
"middle": [],
"last": "Kuzey",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Setty",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference Companion on World Wide Web (WWW)",
"volume": "",
"issue": "",
"pages": "841--842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erdal Kuzey, Jannik Str\u00f6tgen, Vinay Setty, and Ger- hard Weikum. 2016. Temponym tagging: Tem- poral scopes for textual phrases. In Proceedings of the 25th International Conference Companion on World Wide Web (WWW), pages 841-842, Montreal, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Uttime: Temporal relation classification using deep syntactic features",
"authors": [
{
"first": "Natsuda",
"middle": [],
"last": "Laokulrat",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Chikayama",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "88--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natsuda Laokulrat, Makoto Miwa, Yoshimasa Tsu- ruoka, and Takashi Chikayama. 2013. Uttime: Temporal relation classification using deep syntac- tic features. In Proceedings of the Second Joint Conference on Lexical and Computational Seman- tics (*SEM), pages 88-92, Atlanta GA, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 5th Annual International Conference on Systems Documentation, SIGDOC '86",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Documentation, SIGDOC '86, pages 24-26, New York, NY, USA. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The language of time: a reader",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "126",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani, James Pustejovsky, and Robert Gaizauskas. 2005. The language of time: a reader, volume 126. Oxford University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wordnet: a lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "Georges",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georges A. Miller. 1995. Wordnet: a lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Predicting good probabilities with supervised learning",
"authors": [
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Mizil",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22Nd International Conference on Machine Learning, ICML '05",
"volume": "",
"issue": "",
"pages": "625--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learn- ing. In Proceedings of the 22Nd International Con- ference on Machine Learning, ICML '05, pages 625-632, New York, NY, USA. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combinatorial optimization: algorithms and complexity",
"authors": [
{
"first": "Christos",
"middle": [
"H"
],
"last": "Papadimitriou",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Steiglitz",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos H. Papadimitriou and Kenneth Steiglitz. 1998. Combinatorial optimization: algorithms and complexity. Courier Corporation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Free-marginal multirater kappa (multirater \u03bafree): an alternative to fleiss' fixed-marginal multirater kappa",
"authors": [
{
"first": "J",
"middle": [],
"last": "Justus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Randolph",
"suffix": ""
}
],
"year": 2005,
"venue": "Joensuu Learning and Instruction Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justus J. Randolph. 2005. Free-marginal multirater kappa (multirater \u03bafree): an alternative to fleiss' fixed-marginal multirater kappa. Joensuu Learning and Instruction Symposium.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extracting human temporal orientation in facebook language",
"authors": [
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Weingarten",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Kern",
"suffix": ""
},
{
"first": "Jonah",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Seligman",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of The 2015 Conference of the North American Chapter of the Association for Computational Linguistics-Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Andrew Schwartz, Greg Park, Maarten Sap, Evan Weingarten, Johannes Eichstaedt, Margaret Kern, Jonah Berger, Martin Seligman, and Lyle Un- gar. 2015. Extracting human temporal orienta- tion in facebook language. In Proceedings of The 2015 Conference of the North American Chapter of the Association for Computational Linguistics- Human Language Technologies (NAACL), Denver, Colorado, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Nonparametric Statistics for the Social Sciences",
"authors": [
{
"first": "Sydney",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Castellan",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sydney Siegel and John Castellan. 1988. Nonparamet- ric Statistics for the Social Sciences. Mcgraw-hill edition.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A baseline temporal tagger for all languages",
"authors": [
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gertz",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "541--547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jannik Str\u00f6tgen and Michael Gertz. 2015. A baseline temporal tagger for all languages. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 541-547, Lisbon, Portugal.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations",
"authors": [
{
"first": "Naushad",
"middle": [],
"last": "Uzzaman",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Llorens",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of 2nd Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naushad UzZaman, Hector Llorens, Leon Derczyn- ski, James Allen, Marc Verhagen, and James Puste- jovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Proceedings of 2nd Joint Conference on Lexical and Computational Semantics (*SEM), pages 1-9, Atlanta, Georgia, USA.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Two class problem</td><td colspan=\"2\">Accuracy F1</td></tr><tr><td colspan=\"2\">temporal vs. atemporal</td><td>92.3</td><td>94.2</td></tr><tr><td colspan=\"2\">past vs. not_past</td><td>90.4</td><td>90.2</td></tr><tr><td colspan=\"2\">present vs. not_present</td><td>85.3</td><td>85.2</td></tr><tr><td colspan=\"2\">future vs. not_future</td><td>90.1</td><td>89.9</td></tr><tr><td colspan=\"2\">present vs. future</td><td>87.3</td><td>86.4</td></tr><tr><td>Wordnet Relation</td><td>#same</td><td>#different</td><td>Weight</td></tr><tr><td>Direct-Hyponym</td><td>73268</td><td>7246</td><td>0.91</td></tr><tr><td>Similar-to</td><td>6587</td><td>1914</td><td>0.77</td></tr><tr><td>Direct-Hypernym</td><td>61914</td><td>9600</td><td>0.76</td></tr><tr><td>Attribute</td><td>350</td><td>109</td><td>0.76</td></tr><tr><td>Also-see</td><td>1037</td><td>337</td><td>0.75</td></tr><tr><td>Troponym</td><td>6917</td><td>2651</td><td>0.72</td></tr><tr><td>Derived-from</td><td>3630</td><td>1947</td><td>0.65</td></tr><tr><td>Domain</td><td>2380</td><td>2895</td><td>0.45</td></tr><tr><td>Domain-member</td><td>2380</td><td>2895</td><td>0.45</td></tr><tr><td>Antonym</td><td>1905</td><td>3614</td><td>0.35</td></tr></table>",
"text": "Table 1: SVM results for individual scores.",
"html": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Majority Class</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td></tr><tr><td>Synset as temporal or atemporal</td><td colspan=\"2\">0.20 1.21</td><td>4.32</td><td>10.69</td><td/><td/><td/><td/></tr></table>",
"text": "14.56 29.34 19.23 11.01 Temporal synset into past, present, or future 1.23 3.01 10.45 20.22 16.56 12.34 14.23 9.01",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Results are broken down by precision (p), recall (r), and f1-measure</td></tr><tr><td>(f1) scores.</td></tr></table>",
"text": "Accuracy for temporal vs. atemporal and past, present, future classifications using different methods measured over test data.",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Accuracy results with different sizes of labeled data for temporal vs. atemporal classification.",
"html": null
}
}
}
}