{
"paper_id": "N12-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:01.137740Z"
},
"title": "Parsing Time: Learning to Interpret Time Expressions",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a probabilistic approach for learning to interpret temporal phrases given only a corpus of utterances and the times they reference. While most approaches to the task have used regular expressions and similar linear pattern interpretation rules, the possibility of phrasal embedding and modification in time expressions motivates our use of a compositional grammar of time expressions. This grammar is used to construct a latent parse which evaluates to the time the phrase would represent, as a logical parse might evaluate to a concrete entity. In this way, we can employ a loosely supervised EM-style bootstrapping approach to learn these latent parses while capturing both syntactic uncertainty and pragmatic ambiguity in a probabilistic framework. We achieve an accuracy of 72% on an adapted TempEval-2 task-comparable to state of the art systems.",
"pdf_parse": {
"paper_id": "N12-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a probabilistic approach for learning to interpret temporal phrases given only a corpus of utterances and the times they reference. While most approaches to the task have used regular expressions and similar linear pattern interpretation rules, the possibility of phrasal embedding and modification in time expressions motivates our use of a compositional grammar of time expressions. This grammar is used to construct a latent parse which evaluates to the time the phrase would represent, as a logical parse might evaluate to a concrete entity. In this way, we can employ a loosely supervised EM-style bootstrapping approach to learn these latent parses while capturing both syntactic uncertainty and pragmatic ambiguity in a probabilistic framework. We achieve an accuracy of 72% on an adapted TempEval-2 task-comparable to state of the art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Temporal resolution is the task of mapping from a textual phrase describing a potentially complex time, date, or duration to a normalized (grounded) temporal representation. For example, possibly complex phrases such as the week before last are often more useful in their grounded form -e.g., January 1 -January 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dominant approach to this problem in previous work has been to use rule-based methods, generally a combination of regular-expression matching followed by hand-written interpretation functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, it is appealing to learn the interpretation of temporal expressions, rather than handbuilding systems. Moreover, complex hierarchical temporal expressions, such as the Tuesday before last or the third Wednesday of each month, and ambiguous expressions, such as last Friday, are difficult to handle using deterministic rules and would benefit from a recursive and probabilistic phrase structure representation. Therefore, we attempt to learn a temporal interpretation system where temporal phrases are parsed by a grammar, but this grammar and its semantic interpretation rules are latent, with only the input phrase and its grounded interpretation given to the learning system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Employing probabilistic techniques allows us to capture ambiguity in temporal phrases in two important respects. In part, it captures syntactic ambiguity -e.g., last Friday the 13 th bracketing as either [last Friday] [the 13 th ], or last [Friday the 13 th ]. This also includes examples of lexical ambiguitye.g., two meanings of last in last week of November versus last week. In addition, temporal expressions often carry a pragmatic ambiguity. For instance, a speaker may refer to either the next or previous Friday when he utters Friday on a Sunday. Similarly, next week can refer to either the coming week or the week thereafter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Probabilistic systems furthermore allow propagation of uncertainty to higher-level components -for example recognizing that May could have a number of non-temporal meanings and allowing a system with a broader contextual scope to make the final judgment. We implement a CRF to detect temporal expressions, and show our model's ability to act as a component in such a system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe our temporal representation, followed by the learning algorithm; we conclude with experimental results showing our approach to be competitive with state of the art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach draws inspiration from a large body of work on parsing expressions into a logical form. The latent parse parallels the formal semantics in previous work, e.g., Montague semantics. Like these representations, a parse -in conjunction with the reference time -defines a set of matching entities, in this case the grounded time. The matching times can be thought of as analogous to the entities in a logical model which satisfy a given expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Supervised approaches to logical parsing prominently include Zelle and Mooney (1996) , Zettlemoyer and Collins (2005) , Kate et al. (2005) , Zettlemoyer and Collins (2007) , inter alia. For example, Zettlemoyer and Collins (2007) learn a mapping from textual queries to a logical form. This logical form importantly contains all the predicates and entities used in their parse. We loosen the supervision required in these systems by allowing the parse to be entirely latent; the annotation of the grounded time neither defines, nor gives any direct cues about the elements of the parse, since many parses evaluate to the same grounding. To demonstrate, the grounding for a week ago could be described by specifying a month and day, or as a week ago, or as last x -substituting today's day of the week for x. Each of these correspond to a completely different parse.",
"cite_spans": [
{
"start": 61,
"end": 84,
"text": "Zelle and Mooney (1996)",
"ref_id": "BIBREF25"
},
{
"start": 87,
"end": 117,
"text": "Zettlemoyer and Collins (2005)",
"ref_id": "BIBREF26"
},
{
"start": 120,
"end": 138,
"text": "Kate et al. (2005)",
"ref_id": "BIBREF9"
},
{
"start": 141,
"end": 171,
"text": "Zettlemoyer and Collins (2007)",
"ref_id": "BIBREF27"
},
{
"start": 199,
"end": 229,
"text": "Zettlemoyer and Collins (2007)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent work by Clarke et al. (2010) and Liang et al. (2011) similarly relax supervision to require only annotated answers rather than full logical forms. For example, Liang et al. (2011) constructs a latent parse similar in structure to a dependency grammar, but representing a logical form. Our proposed lexical entries and grammar combination rules can be thought of as paralleling the lexical entries and predicates, and the implicit combination rules respectively in this framework. Rather than querying from a finite database, however, our system must compare temporal expression within an infinite timeline. Furthermore, our system is run using neither lexical cues nor intelligent initialization.",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "Clarke et al. (2010)",
"ref_id": "BIBREF5"
},
{
"start": 40,
"end": 59,
"text": "Liang et al. (2011)",
"ref_id": "BIBREF14"
},
{
"start": 167,
"end": 186,
"text": "Liang et al. (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Related work on interpreting temporal expressions has focused on constructing hand-crafted interpretation rules (Mani and Wilson, 2000; Saquete et al., 2003; Puscasu, 2004; Grover et al., 2010) . Of these, HeidelTime (Str\u00f6tgen and Gertz, 2010) and SUTime (Chang and Manning, 2012) provide par-ticularly strong competition.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Mani and Wilson, 2000;",
"ref_id": "BIBREF15"
},
{
"start": 136,
"end": 157,
"text": "Saquete et al., 2003;",
"ref_id": "BIBREF19"
},
{
"start": 158,
"end": 172,
"text": "Puscasu, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 173,
"end": 193,
"text": "Grover et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 217,
"end": 243,
"text": "(Str\u00f6tgen and Gertz, 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent probabilistic approaches to temporal resolution include UzZaman and Allen (2010), who employ a parser to produce deep logical forms, in conjunction with a CRF classifier. In a similar vein, Kolomiyets and Moens (2010) employ a maximum entropy classifier to detect the location and temporal type of expressions; the grounding is then done via deterministic rules.",
"cite_spans": [
{
"start": 197,
"end": 224,
"text": "Kolomiyets and Moens (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We define a compositional representation of time; a type system is described in Section 3.1 while the grammar is outlined in Section 3.2 and described in detail in Sections 3.3 and 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation",
"sec_num": "3"
},
{
"text": "We represent temporal expressions as either a Range, Sequence, or Duration. We describe these, the Function type, and the miscellaneous Number and Nil types below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "Range [and Instant] A period between two dates (or times). This includes entities such as Today, 1987, or Now. We denote a range by the variable r. We maintain a consistent interval-based theory of time (Allen, 1981) relative to r s (0) -the element in the same containing unit as the reference time.",
"cite_spans": [
{
"start": 203,
"end": 216,
"text": "(Allen, 1981)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "We define the reference time t (Reichenbach, 1947) to be the instant relative to which times are evaluated. For the TempEval-2 corpus, we approximate this as the publication time of the article. While this is conflating Reichenbach's reference time with speech time, it is a useful approximation.",
"cite_spans": [
{
"start": 31,
"end": 50,
"text": "(Reichenbach, 1947)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "To contrast with Ranges, a Sequence can represent a number of grounded times. Nonetheless, pragmatically, not all of these are given equal weight -an utterance of last Friday may mean either of the previous two Fridays, but is unlikely to ground to anything else. We represent this ambiguity by defining a distribution over the elements of the Sequence. While this could be any distribution, we chose to approximate it as a Gaussian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "In order to allow sharing parameters between any sequence, we define the domain in terms of the index of the sequence rather than of a constant unit of time (e.g., seconds). To illustrate, the distribution over April would have a much larger variance than the distribution over Sunday, were the domains fixed. The probability of the i th element of a sequence thus depends on the beginning of the range r s (i), the reference time t, and the distance between elements of the sequence \u2206 s . We summarize this in the equation below, with learned parameters \u00b5 and \u03c3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "P t (i) = 0.5 \u03b4=\u22120.5 N \u00b5,\u03c3 r s (i) \u2212 t \u2206 s + \u03b4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "(1) Figure 1 shows an example of such a distribution; importantly, note that moving the reference time between two elements dynamically changes the probability assigned to each.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
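The grounding probability in the equation above, the integral of a Gaussian over a unit-width window of normalized sequence indices, can be sketched numerically. This is our own illustration, not the authors' code; the function names and the values for µ, σ, and Δ_s are hypothetical.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def sequence_index_probability(r_s_i, t, delta_s, mu, sigma):
    """P_t(i): the Gaussian mass over a unit-width window centered on the
    normalized offset (r_s(i) - t) / delta_s, i.e. the integral over
    delta in [-0.5, 0.5] of N(mu, sigma) at that offset plus delta."""
    x = (r_s_i - t) / delta_s
    return normal_cdf(x + 0.5, mu, sigma) - normal_cdf(x - 0.5, mu, sigma)

# With the reference time exactly at element i (normalized offset 0) and
# a standard Gaussian, this is Phi(0.5) - Phi(-0.5), roughly 0.383.
p = sequence_index_probability(r_s_i=0.0, t=0.0, delta_s=7.0, mu=0.0, sigma=1.0)
```

As the text notes, moving the reference time between two elements shifts the window relative to the Gaussian, so the mass assigned to each index changes continuously.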
{
"text": "Duration A period of time. This includes entities like Week, Month, and 7 days. We denote a duration with the variable d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "We define a special case of the Duration type to represent approximate durations, identified by their canonical unit (week, month, etc). These are used to represent expressions such as a few years or some days.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "Function A function of arity less than or equal to two representing some general modification to one The reference time is labeled as time t between Nov 20 and Nov 27; the probability that this sequence is referring to Nov 20 is the integral of the marked area. The domain of the graph are the indices of the sequence; the distribution is overlaid with mean at the (normalized) reference time t/\u2206 s ; in our case \u2206 s is a week. Note that the probability of an index changes depending on the exact location of the reference time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "of the above types. This captures semantic entities such as those implied in last x, the third x [of y], or x days ago. The particular functions and their application are enumerated in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "Other Types Two other types bear auxiliary roles in representing temporal expressions, though they are not directly temporal concepts. In the grammar, these appear as preterminals only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "The first of these types is Number -denoting a number without any temporal meaning attached. This comes into play representing expressions such as 2 weeks. The other is the Nil type -denoting terms which are not directly contributing to the semantic meaning of the expression. This is intended for words such as a or the, which serve as cues without bearing temporal content themselves. The Nil type is lexicalized with the word it generates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "Omitted Phenomena The representation described is a simplification of the complexities of time. Notably, a body of work has focused on reasoning about events or states relative to temporal expressions. Moens and Steedman (1988) directly modeled, but rather left to systems in which the model would be embedded. Furthermore, vague times (e.g., in the 90's) represent a notable chunk of temporal expressions uttered. In contrast, NLP evaluations have generally not handled such vague time expressions.",
"cite_spans": [
{
"start": 202,
"end": 227,
"text": "Moens and Steedman (1988)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Expression Types",
"sec_num": "3.1"
},
{
"text": "Our approach builds on the assumption that natural language descriptions of time are compositional in nature. Each word attached to a temporal phrase is usually compositionally modifying the meaning of the phrase. To demonstrate, we consider the expression the week before last week. We can construct a meaning by applying the modifier last to week -creating the previous week; and then applying before to week and last week.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "We construct a paradigm for parsing temporal phrases consisting of a standard PCFG over temporal types with each parse rule defining a function to apply to the child nodes, or the word being generated. At the root of the tree, we recursively apply the functions in the parse tree to obtain a final temporal value. One can view this formalism as a rule-to-rule translation (Bach, 1976; Allen, 1995, p. 263) , or a constrained Synchronous PCFG (Yamada and Knight, 2001) .",
"cite_spans": [
{
"start": 372,
"end": 384,
"text": "(Bach, 1976;",
"ref_id": "BIBREF2"
},
{
"start": 385,
"end": 405,
"text": "Allen, 1995, p. 263)",
"ref_id": null
},
{
"start": 442,
"end": 467,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
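As a concrete, much-simplified illustration of this rule-to-rule setup, the sketch below pairs a production with a semantic function and evaluates the latent parse [last [friday]] bottom-up against a reference time. The function names and the grounding policy are our own invention, not the paper's.

```python
import datetime

def ground_friday(ref):
    """Preterminal semantics for "friday": one simple grounding policy,
    the nearest Friday on or before the reference date."""
    return ref - datetime.timedelta(days=(ref.weekday() - 4) % 7)

def apply_last(seq_fn):
    """Semantics paired with a production like Sequence -> "last" Sequence:
    shift the sequence's origin back by one period (here, a week)."""
    return lambda ref: seq_fn(ref) - datetime.timedelta(days=7)

# Evaluating the parse [last [friday]]: the value at the root is itself a
# function of the reference time, applied here to a Sunday.
ref_time = datetime.date(2011, 11, 27)  # a Sunday
grounding = apply_last(ground_friday)(ref_time)
```

The key point mirrored here is that combination rules operate on typed values, and only the root's evaluation, together with the reference time, yields a grounded date.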
{
"text": "Our approach contrasts with common approaches, such as CCG grammars (Steedman, 2000; Bos et al., 2004; Kwiatkowski et al., 2011) , giving us more flexibility in the composition rules. Figure 2 shows an example of the grammar.",
"cite_spans": [
{
"start": 68,
"end": 84,
"text": "(Steedman, 2000;",
"ref_id": "BIBREF20"
},
{
"start": 85,
"end": 102,
"text": "Bos et al., 2004;",
"ref_id": "BIBREF3"
},
{
"start": 103,
"end": 128,
"text": "Kwiatkowski et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "Formally, we define our temporal grammar G = (\u03a3, S, V, W, R, \u03b8). The alphabet \u03a3 and start symbol S retain their usual interpretations. We define a set V to be the set of types, as described in Section 3.1 -these act as our nonterminals. For each v \u2208 V we define an (infinite) set W v corresponding to the possible instances of type v. Concretely, if v = Sequence, our set W v \u2208 W could contain elements corresponding to Friday, last Friday, Nov. 27 th , etc. Each node in the tree defines a pair (v, w) such that w \u2208 W v , with combination rules defined over v and function applications performed on w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "A rule R \u2208 R is defined as a pair",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "R = v i \u2192 v j v k , f : (W v j , W v k ) \u2192 W v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "The first term is our conventional PCFG rule over the types V. The second term defines the function to apply to the values returned recursively by the child nodes. Note that this definition is trivially adapted for the case of unary rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "The last term in our grammar formalism denotes the rule probabilities \u03b8. In line with the usual interpretation, this defines a probability of applying a particular rule r \u2208 R. Importantly, note that the distribution over possible groundings of a temporal expression are not included in the grammar formalism. The learning of these probabilities is detailed in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Formalism",
"sec_num": "3.2"
},
{
"text": "We define a set of preterminals, specifying their eventual type, as well as the temporal instance it produces when its function is evaluated on the word it generates (e.g., f (day) = Day). A distinction is made in our description between entities with content roles versus entities with a functional role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preterminals",
"sec_num": "3.3"
},
{
"text": "The first -consisting of Ranges, Sequences, and Durations -are listed in fit other domains. It should be noted that the expressions, represented in Typewriter, have no a priori association with words, denoted by italics; this correspondence must be learned. Furthermore, entities which are subject to interpretation -for example Quarter or Season -are given a concrete interpretation. The n th quarter is defined by evenly splitting a year into four; the seasons are defined in the same way but with winter beginning in December. The functional entities are described in Table 2 , and correspond to the Function type. The majority of these mirror generic operations on intervals on a timeline, or manipulations of a sequence. Notably, like intervals, times can be moved (3 weeks ago) or their size changed (the first two days of the month), or a new interval can be started from one of the endpoints (the last 2 days). Additionally, a sequence can be modified by shifting its origin (last Friday), or taking the n th element of the sequence within some bound (fourth Sunday in November).",
"cite_spans": [],
"ref_spans": [
{
"start": 571,
"end": 578,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Preterminals",
"sec_num": "3.3"
},
{
"text": "The lexical entry for the Nil type is tagged with the word it generates, producing entries such as Nil(a), Nil(November), etc. The lexical entry for the Number type is parameterized by the order of magnitude and ordinality of the number; e.g., 27 th becomes Number(10 1 ,ordinal).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preterminals",
"sec_num": "3.3"
},
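The Number preterminal's parameterization can be sketched as below. The function name and the regular expression are ours, and the paper's actual featurization may differ in its details; the sketch only illustrates mapping a token such as 27th to an (order-of-magnitude, ordinality) key.

```python
import re

def number_preterminal(token):
    """Map a numeric token to an (order-of-magnitude, ordinality) key,
    e.g. '27th' -> (10, 'ordinal') and '2' -> (1, 'cardinal').
    Illustrative sketch only."""
    m = re.fullmatch(r"(\d+)(st|nd|rd|th)?", token.lower())
    if m is None:
        raise ValueError(f"not a number token: {token!r}")
    value = int(m.group(1))
    ordinality = "ordinal" if m.group(2) else "cardinal"
    magnitude = 10 ** (len(str(value)) - 1)  # 27 -> 10, 2011 -> 1000
    return (magnitude, ordinality)
```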
{
"text": "As mentioned earlier, our grammar defines both combination rules over types (in V) as well as a method for combining temporal instances (in W v \u2208 W). This method is either a function application of one of the functions in Table 2 , a function which is implicit in the text (intersection and multiplication), or an identity operation (for Nils). These cases are detailed below:",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
{
"text": "\u2022 Function application, e.g., last week. We apply (or partially apply) a function to an argument on either the left or the right: f (x, y) x or x f (x, y). Furthermore, for functions of arity 2 taking a Range as an argument, we define a rule treating it as a unary function with the reference time taking the place of the second argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
{
"text": "\u2022 Intersecting two ranges or sequences, e.g., November 27 th . The intersect function treats both arguments as intervals, and will return an interval (Range or Sequence) corresponding to the overlap between the two. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
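A minimal sketch of interval intersection, our own illustration: the model's intervals live on a timeline, and the A* handling of complex sequences mentioned in footnote 1 is omitted. The day-of-year encoding below is purely for demonstration.

```python
def intersect(a, b):
    """Overlap of two closed intervals given as (begin, end) pairs, or
    None if they are disjoint. Endpoints are plain numbers here; in the
    model they would be points on a timeline."""
    begin = max(a[0], b[0])
    end = min(a[1], b[1])
    return (begin, end) if begin <= end else None

# "November 27th": the month intersected with the day, rendered as
# day-of-year spans (non-leap year) for illustration -> the single day.
november = (305, 334)  # Nov 1 .. Nov 30
day_27 = (331, 331)    # Nov 27 as day-of-year 331
nov_27 = intersect(november, day_27)
```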
{
"text": "\u2022 Multiplying a Number with a Duration, e.g., 5 weeks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
{
"text": "\u2022 Combining a non-Nil and Nil element with no change to the temporal expression, e.g., a week. The lexicalization of the Nil type allows the algorithm to take hints from these supporting words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
{
"text": "We proceed to describe learning the parameters of this grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination Rules",
"sec_num": "3.4"
},
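The combination cases above can be sketched as a single dispatch over child types. The (type, value) encoding and all names below are our own simplification, not the paper's implementation.

```python
def combine(left, right):
    """Combine two child nodes, each a (type, value) pair, following the
    cases described in the text: function application, Nil identity, and
    Number x Duration multiplication. Illustrative sketch only."""
    lt, lv = left
    rt, rv = right
    if lt == "Function":                      # function application: f x
        return lv(right)
    if rt == "Function":                      # function application: x f
        return rv(left)
    if lt == "Nil":                           # Nil adds no content: a week
        return right
    if rt == "Nil":
        return left
    if lt == "Number" and rt == "Duration":   # multiplication: 5 weeks
        return ("Duration", lv * rv)
    raise ValueError(f"no combination rule for {lt}, {rt}")

# "5 weeks": Number(5) times a Duration of 7 days -> a 35-day Duration.
five_weeks = combine(("Number", 5), ("Duration", 7))
# "a week": Nil('a') leaves the Duration unchanged.
a_week = combine(("Nil", "a"), ("Duration", 7))
```

In the model the Nil case additionally carries its lexicalization (the generated word) as a cue; here it is simply discarded.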
{
"text": "We present a system architecture, described in Figure 3 . We detail the inference procedure in Section 4.1 and training in Section 4.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
{
"text": "To provide a list of candidate expressions with their associated probabilities, we employ a k-best CKY parser. Specifically, we implement Algorithm 3 described in Huang and Chiang (2005) , providing an O(Gn 3 k log k) algorithm with respect to the grammar size G, phrase length n, and beam size k. We set the beam size to 2000. 1 In the case of complex sequences (e.g., Friday the 13 th ) an A * search is performed to find overlapping ranges in the two sequences; the origin rs(0) is updated to refer to the closest such match to the reference time.",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "Huang and Chiang (2005)",
"ref_id": "BIBREF8"
},
{
"start": 328,
"end": 329,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.1"
},
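A core ingredient of k-best parsing is enumerating, in best-first order, the top-k score products of two children's candidate lists. The sketch below is a simplified, eager version of that step under the assumption of descending probability lists; it is not Algorithm 3 itself, and the names are ours.

```python
import heapq

def top_k_products(left, right, k):
    """Top-k pairwise products of two non-empty, descending probability
    lists. Because the product matrix is monotone along both axes, a
    frontier search over index pairs suffices (cf. the lazy merge in
    Huang and Chiang (2005); this eager variant is for illustration)."""
    heap = [(-left[0] * right[0], 0, 0)]
    seen = {(0, 0)}
    out = []
    while heap and len(out) < k:
        neg, i, j = heapq.heappop(heap)
        out.append(-neg)
        # Only frontier neighbors can be the next-best candidate.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(left) and nj < len(right) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-left[ni] * right[nj], ni, nj))
    return out
```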
{
"text": "Revisiting the notion of pragmatic ambiguity, in a sense the most semantically complete output of the system would be a distribution -an utterance of Friday would give a distribution over Fridays rather than a best guess of its grounding. However, it is often advantageous to ground to a concrete expression with a corresponding probability. The CKY k-best beam and the temporal distribution -capturing syntactic and pragmatic ambiguity -can be combined to provide a Viterbi decoding, as well as its associated probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.1"
},
{
"text": "We define the probability of a syntactic parse y making use of rules R \u2286 R as P (y) = P (w 1 , . . . w n ; R) = i\u2192j,k\u2208R P (j, k | i). As described in Section 3.1, we define the probability of a grounding relative to reference time t and a particular syntactic interpretation P t (i|y). The product of these two terms provides the probability of a grounded temporal interpretation; we can obtain a Viterbi decoding by maximizing this joint probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P t (i, y) = P (y) \u00d7 P t (i|y)",
"eq_num": "(2)"
}
],
"section": "Inference",
"sec_num": "4.1"
},
{
"text": "This provides us with a framework for obtaining grounded times from a temporal phrase -in line with the annotations provided during training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.1"
},
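Equation (2)'s decoding, the argmax of parse probability times grounding probability, can be sketched as below. The structure and all numbers are invented for illustration; in the system, the candidates come from the k-best beam and the grounding probabilities from the learned Gaussian.

```python
def viterbi_decode(parse_probs, grounding_probs):
    """Pick the (parse, index) pair maximizing P(y) * P_t(i | y).
    `parse_probs` maps a parse id to P(y); `grounding_probs` maps a parse
    id to a dict {index i: P_t(i | y)}. Illustrative sketch only."""
    best, best_p = None, -1.0
    for y, p_y in parse_probs.items():
        for i, p_i in grounding_probs[y].items():
            if p_y * p_i > best_p:
                best, best_p = (y, i), p_y * p_i
    return best, best_p

# Two hypothetical latent parses of an utterance of "friday": a weaker
# parse with a concentrated grounding can beat a stronger diffuse one.
parses = {"parse_a": 0.6, "parse_b": 0.4}
groundings = {"parse_a": {-1: 0.5, 0: 0.3}, "parse_b": {0: 0.9}}
best, p = viterbi_decode(parses, groundings)
```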
{
"text": "We present an EM-style bootstrapping approach to training the parameters of our grammar jointly with the parameters of our Gaussian temporal distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Our TimEM algorithm for learning the parameters for the grammar (\u03b8), jointly with the temporal distribution (\u00b5 and \u03c3) is given in Algorithm 1. The inputs to the algorithm are the initial parameters \u03b8, \u00b5, and \u03c3, and a set of training instances D. Furthermore, the algorithm makes use of a Dirichlet prior \u03b1 on the grammar parameters \u03b8, as well as a Gaussian prior N on the mean of the temporal distribution \u00b5. The algorithm outputs the final parameters \u03b8 * , \u00b5 * and \u03c3 * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Each training instance is a tuple consisting of the words in the temporal phrase w, the annotated grounded time \u03c4 * , and the reference time of the utterance t. The input phrase is tokenized according to Penn Treebank guidelines, except we additionally Algorithm 1: TimEM Input: Initial parameters \u03b8, \u00b5, \u03c3; data D = {(w, \u03c4 * , t)}; Dirichlet prior \u03b1, Gaussian prior N Output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Optimal parameters \u03b8 * , \u00b5 * , \u03c3 * while not converged do 1 (M \u03b8 ,M \u00b5,\u03c3 ) := E-Step (D,\u03b8,\u00b5,\u03c3) 2 (\u03b8, \u00b5, \u03c3) := M-Step (M \u03b8 ,M \u00b5,\u03c3 ) 3 end 4 return (\u03b8 s , \u00b5, \u03c3) 5 begin E-Step(D,\u03b8,\u00b5,\u03c3) 6M \u03b8 = [];M \u00b5,\u03c3 = [] 7 for (w, \u03c4 * , t) \u2208 D do 8m \u03b8 = [];m \u00b5,\u03c3 = [] 9 for y \u2208 k-bestCKY(w, \u03b8) do 10 if p = P \u00b5,\u03c3 (\u03c4 * | y, t) > 0 then 11m \u03b8 += (y, p);m \u00b5,\u03c3 += (i, p) 12 end 13 end 14M += normalize(m \u03b8 ) 15M \u00b5,\u03c3 += normalize(m \u00b5,\u03c3 ) 16 end 17 returnM 18 end 19 begin M-Step(M \u03b8 ,M \u00b5,\u03c3 ) 20 \u03b8 := bayesianPosterior(M \u03b8 , \u03b1) 21 \u03c3 := mlePosterior(M \u00b5,\u03c3 ) 22 \u00b5 := bayesianPosterior(M \u00b5,\u03c3 , \u03c3 , N ) 23 return (\u03b8 , \u00b5 , \u03c3 ) 24 end 25",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "split on the characters '-' and '/,' which often delimit a boundary between temporal entities. Beyond this preprocessing, no language-specific information about the meanings of the words are introduced, including syntactic parses, POS tags, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
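The extra splitting step on '-' and '/' can be sketched as below; PTB tokenization proper is omitted, and the function name is ours.

```python
import re

def split_temporal_tokens(tokens):
    """Further split whitespace tokens on '-' and '/', keeping the
    delimiters, since they often separate temporal entities (e.g. the
    parts of a date). Illustrative sketch of the preprocessing step."""
    out = []
    for tok in tokens:
        # A capturing group in re.split keeps the matched delimiters.
        out.extend(part for part in re.split(r"([-/])", tok) if part)
    return out
```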
{
"text": "The algorithm operates similarly to the EM algorithms used for grammar induction (Klein and Manning, 2004; Carroll and Charniak, 1992) . However, unlike grammar induction, we are allowed a certain amount of supervision by requiring that the predicted temporal expression match the annotation. Our expected statistics are therefore more accurately our normalized expected counts of valid parses.",
"cite_spans": [
{
"start": 81,
"end": 106,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF10"
},
{
"start": 107,
"end": 134,
"text": "Carroll and Charniak, 1992)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Note that in conventional grammar induction, the expected sufficient statistics can be gathered analytically from reading off the chart scores of a parse. This does not work in our case for two reasons. In part, we would like to incorporate the probability of the temporal grounding in our feedback probability. Additionally, we are only using parses which are valid candidates -that is, the parses which ground to the correct time \u03c4 * -which we cannot establish until the entire expression is parsed. The expected statistics are thus computed non-analytically via a beam on both the possible parses (line 10) and the possible temporal groundings of a given interpretation (line 11).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
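The validity filter and per-example normalization of lines 10-11 can be sketched as follows (a sketch, not the authors' implementation: `parses` stands in for the k-best CKY beam as (parse, score) pairs, and `ground` for the temporal grounding of a parse given reference time t):

```python
# Sketch of the beam-based expected-count collection described above: only
# parses that ground to the annotated time tau_star contribute, and their
# scores are normalized so each example yields one unit of expected count.
def expected_counts(parses, ground, tau_star, t):
    valid = [(y, s) for y, s in parses if ground(y, t) == tau_star]
    z = sum(s for _, s in valid)
    return [(y, s / z) for y, s in valid] if z > 0 else []
```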
{
"text": "The particular EM updates are the standard updates for multinomial and Gaussian distributions given fully observed data. In the multinomial case, our (unnormalized) parameter updates, with Dirichlet prior \u03b1, are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "\u03b8_{mn|l} = \u03b1 + \u2211_{(y,p) \u2208 M_\u03b8} \u2211_{v_{jk|i} \u2208 y} 1[v_{jk|i} = v_{mn|l}] \u00b7 p (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "In the Gaussian case, the parameter update for \u03c3 is the maximum likelihood update; while the update for \u00b5 incorporates a Bayesian prior N (\u00b5 0 , \u03c3 0 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "\u03c3' = (1 / \u2211_{(i,p) \u2208 M_{\u00b5,\u03c3}} p) \u00b7 \u2211_{(i,p) \u2208 M_{\u00b5,\u03c3}} (i \u2212 \u00b5')\u00b2 \u00b7 p (4)\n\u00b5' = (\u03c3'\u00b2 \u00b7 \u00b5_0 + \u03c3_0\u00b2 \u00b7 \u2211_{(i,p) \u2208 M_{\u00b5,\u03c3}} i \u00b7 p) / (\u03c3'\u00b2 + \u03c3_0\u00b2 \u00b7 \u2211_{(i,p) \u2208 M_{\u00b5,\u03c3}} p) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
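The updates (3)-(5) can be sketched numerically as follows (a sketch under assumed data layouts: rule counts as a dict, grounding counts as (index, weight) pairs; \u03c3' is estimated around the weighted mean before computing the posterior \u00b5'):

```python
import math

# Sketch of the M-step updates above. M_theta maps each grammar rule to its
# summed expected count; M_musigma is a list of (sequence index i, weight p).
def m_step(M_theta, M_musigma, alpha=1.0, mu0=0.0, sigma0=1.0):
    # Multinomial update (3): Dirichlet-smoothed expected counts, normalized.
    z = sum(c + alpha for c in M_theta.values())
    theta = {rule: (c + alpha) / z for rule, c in M_theta.items()}
    # Gaussian update (4): maximum-likelihood standard deviation around the
    # weighted mean of the observed sequence indices.
    total = sum(p for _, p in M_musigma)
    mean = sum(i * p for i, p in M_musigma) / total
    sigma = math.sqrt(sum((i - mean) ** 2 * p for i, p in M_musigma) / total)
    # Bayesian update (5): posterior mean shrinking toward the prior mu0.
    mu = (sigma ** 2 * mu0 + sigma0 ** 2 * sum(i * p for i, p in M_musigma)) \
         / (sigma ** 2 + sigma0 ** 2 * total)
    return theta, mu, sigma
```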
{
"text": "As the parameters improve, the parser more efficiently prunes incorrect parses and the beam incorporates valid parses for longer and longer phrases. For instance, in the first iteration the model must learn the meaning of both words in last Friday; once the parser learns the meaning of one of them -e.g., Friday appears elsewhere in the corpus -subsequent iterations focus on proposing candidate meanings for last. In this way, a progressively larger percentage of the data is available to be learned from at each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "We evaluate our model against current state-of-the art systems for temporal resolution on the English portion of the TempEval-2 Task A dataset (Verhagen et al., 2010) .",
"cite_spans": [
{
"start": 143,
"end": 166,
"text": "(Verhagen et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The TempEval-2 dataset is relatively small, containing 162 documents and 1052 temporal phrases in the training set and an additional 20 documents and 156 phrases in the evaluation set. Each temporal phrase was annotated as a TIMEX3 2 tag around an adverbial or prepositional phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "In the TempEval-2 A Task, system performance is evaluated on detection and resolution of expressions. Since we perform only the second of these, we evaluate our system assuming gold detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Similarly, the original TempEval-2 scoring scheme gave a precision and recall for detection, and an accuracy for only the temporal expressions attempted. Since our system is able to produce a guess for every expression, we produce a precision-recall curve on which competing systems are plotted (see Figure 4). Note that the downward slope of the curve indicates that the probabilities returned by the system are indicative of its confidence -the probability of a parse correlates with the probability of that parse being correct.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 309,
"text": "Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Additionally, and perhaps more accurately, we compare to previous system scores when constrained to make a prediction on every example; if no guess is made, the output is considered incorrect. This in general yields lower results, as the system is not allowed to abstain on expressions it does not Figure 4: A precision-recall curve for our system, compared to prior work. The data points are obtained by setting a threshold minimum probability at which to guess a time, creating different extent recall values. The curve falls below HeidelTime1 and SUTime in part from lack of context, and in part since our system was not trained to optimize this curve.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
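The thresholding described in the caption above can be sketched as follows (a sketch; `preds` is a hypothetical list of (probability, is_correct) pairs over all gold temporal expressions, not the paper's data structure):

```python
# Sketch of how each point on the precision-recall curve is obtained:
# predictions below a minimum-probability threshold are withheld,
# trading recall for precision.
def pr_point(preds, threshold):
    guessed = [(p, ok) for p, ok in preds if p >= threshold]
    if not guessed:
        return 0.0, 0.0
    precision = sum(1 for _, ok in guessed if ok) / len(guessed)
    recall = len(guessed) / len(preds)
    return precision, recall
```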
{
"text": "recognize. Results are summarized in Table 3. We compare to three previous rule-based systems. GUTime (Mani and Wilson, 2000) presents an older but widely used baseline. 3 More recently, SUTime (Chang and Manning, 2012) provides a much stronger comparison. We also compare to HeidelTime (Str\u00f6tgen and Gertz, 2010), which represents the state-of-the-art system at the TempEval-2 task.",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "(Mani and Wilson, 2000)",
"ref_id": "BIBREF15"
},
{
"start": 287,
"end": 313,
"text": "(Str\u00f6tgen and Gertz, 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "One of the advantages of our model is that it can provide candidate groundings for any expression. We explore this ability by building a detection model to find candidate temporal expressions, which we then ground. The detection model is implemented as a Conditional Random Field (Lafferty et al., 2001), with features over the morphology and context. In particular, we define the following features:",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "\u2022 The word and lemma within a window of 2 of the current word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "\u2022 The word shape 4 and part of speech of the current word. Table 4: TempEval-2 Extent scores for our system and three previous systems. Note that the attribute scores are now relatively low compared to previous work; unlike rule-based approaches, our model can guess a temporal interpretation for any phrase, meaning that a good proportion of the phrases not detected would have been interpreted correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "\u2022 Whether the current word is a number, along with its ordinality and order of magnitude.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "\u2022 Prefixes and suffixes up to length 5, along with their word shape.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "We summarize our results in Table 4, noting that the performance indicates that the CRF and interpretation model find somewhat different phrases hard to detect and interpret respectively. Many errors made in detection are attributable to the small size of the training corpus (63,000 tokens).",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Detection",
"sec_num": "5.3"
},
{
"text": "Our system performs well above the GUTime baseline and is competitive with both of the more recent systems. In part, this is from more sophisticated modeling of syntactic ambiguity: e.g., the past few weeks has a clause the past -which, alone, should be parsed as PAST -yet the system correctly disprefers incorporating this interpretation and returns the approximate duration 1 week. Furthermore, we often capture cases of pragmatic ambiguity -for example, empirically, August tends to refer to the previous August when mentioned in February.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.4"
},
{
"text": "Compared to rule-based systems, we attribute most errors the system makes to either data sparsity or missing lexical primitives. For example, illustrating sparsity, we have trouble recognizing Nov. as corresponding to November (e.g., Nov. 13), since the publication time of the articles often happens to be near November and we prefer tagging the word as Nil (analogous to the 13th). Missing lexical primitives, in turn, include tags for 1990s, or half (in minute and a half); as well as missing functions, such as or (in weeks or months).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.4"
},
{
"text": "Remaining errors can be attributed to causes such as providing the wrong Viterbi grounding to the evaluation script (e.g., last rather than this Friday), differences in annotation (e.g., 24 hours is marked wrong against a day), or missing context (e.g., the publication time is not the true reference time), among others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.4"
},
{
"text": "We present a new approach to resolving temporal expressions, based on synchronous parsing of a fixed grammar with learned parameters and a compositional representation of time. The system allows for output which captures uncertainty both with respect to the syntactic structure of the phrase and the pragmatic ambiguity of temporal utterances. We also note that the approach is theoretically better adapted for phrases more complex than those found in TempEval-2. Furthermore, the system makes very few language-specific assumptions, and the algorithm could be adapted to domains beyond temporal resolution. We hope to improve detection and explore system performance on multilingual and complex datasets in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "See http://www.timeml.org for details on the TimeML format and TIMEX3 tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to discrepancies in output formats, the output of GUTime was heuristically patched and manually checked to conform to the expected format. 4 Word shape is calculated by mapping each character to one of uppercase, lowercase, number, or punctuation. The first four characters are mapped verbatim; subsequent sequences of similar characters are collapsed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
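The word-shape computation in footnote 4 can be sketched as follows (one plausible reading of the footnote; the function name word_shape is an assumption):

```python
# Sketch of the word-shape feature described in footnote 4: each character
# maps to a class (X = uppercase, x = lowercase, d = digit, p = punctuation);
# the first four characters keep their classes verbatim, and subsequent runs
# of the same class are collapsed to a single symbol.
def word_shape(word):
    def cls(ch):
        if ch.isupper():
            return "X"
        if ch.islower():
            return "x"
        if ch.isdigit():
            return "d"
        return "p"
    shape = [cls(ch) for ch in word[:4]]  # first four mapped verbatim
    for ch in word[4:]:
        c = cls(ch)
        if not shape or shape[-1] != c:   # collapse repeated classes
            shape.append(c)
    return "".join(shape)
```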
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An interval-based representation of temporal knowledge",
"authors": [
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1981,
"venue": "Proceedings of the 7th international joint conference on Artificial intelligence",
"volume": "",
"issue": "",
"pages": "221--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James F. Allen. 1981. An interval-based representa- tion of temporal knowledge. In Proceedings of the 7th international joint conference on Artificial intelli- gence, pages 221-226, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural Language Understanding",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allen. 1995. Natural Language Understanding. Benjamin/Cummings, Redwood City, CA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An extension of classical transformational grammar",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bach",
"suffix": ""
}
],
"year": 1976,
"venue": "Problems of Linguistic Metatheory (Proceedings of the 1976 Conference)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Bach. 1976. An extension of classical transforma- tional grammar. In Problems of Linguistic Metatheory (Proceedings of the 1976 Conference), Michigan State University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wide-coverage semantic representations from a CCG parser",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "1240--1246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Wide-coverage semantic representations from a CCG parser. In Proceedings of Coling, pages 1240-1246, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Two experiments on learning probabilistic dependency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two experi- ments on learning probabilistic dependency grammars from corpora. Technical report, Providence, RI, USA. Angel Chang and Chris Manning. 2012. SUTIME: a library for recognizing and normalizing time expres- sions. In Language Resources and Evaluation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Driving semantic parsing from the world's response",
"authors": [
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"Roth"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "18--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In CoNLL, pages 18-27, Uppsala, Sweden.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "NPI licensing in temporal clauses. Natural Language and Linguistic Theory",
"authors": [
{
"first": "Cleo",
"middle": [],
"last": "Condoravdi",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "28",
"issue": "",
"pages": "877--910",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cleo Condoravdi. 2010. NPI licensing in temporal clauses. Natural Language and Linguistic Theory, 28:877-910.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Edinburgh-LTG: TempEval-2 system description",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [
"Alex"
],
"last": "",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "333--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Grover, Richard Tobin, Beatrice Alex, and Kate Byrne. 2010. Edinburgh-LTG: TempEval-2 system description. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval, pages 333-336.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Better k-best parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology, Parsing",
"volume": "",
"issue": "",
"pages": "53--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, Parsing, pages 53- 64.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to transform natural to formal languages",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rohit",
"suffix": ""
},
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Kate",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Wong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "1062--1068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal lan- guages. In AAAI, pages 1062-1068, Pittsburgh, PA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Corpusbased induction of syntactic structure: models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus- based induction of syntactic structure: models of de- pendency and constituency. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "KUL: recognition and normalization of temporal expressions",
"authors": [
{
"first": "Oleksandr",
"middle": [],
"last": "Kolomiyets",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval '10",
"volume": "",
"issue": "",
"pages": "325--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksandr Kolomiyets and Marie-Francine Moens. 2010. KUL: recognition and normalization of temporal ex- pressions. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval '10, pages 325-328.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Lexical generalization in CCG grammar induction for semantic parsing",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1512--1523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In EMNLP, pages 1512-1523, Edinburgh, Scotland, UK.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Robust temporal processing of news",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2000,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani and George Wilson. 2000. Robust tem- poral processing of news. In ACL, pages 69-76, Hong Kong.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Temporal ontology and temporal reference",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1988,
"venue": "Computational Linguistics",
"volume": "14",
"issue": "",
"pages": "15--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Moens and Mark Steedman. 1988. Temporal on- tology and temporal reference. Computational Lin- guistics, 14:15-28.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A framework for temporal resolution",
"authors": [
{
"first": "G",
"middle": [],
"last": "Puscasu",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "1901--1904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Puscasu. 2004. A framework for temporal resolution. In LREC, pages 1901-1904.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Elements of Symbolic Logic",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Reichenbach",
"suffix": ""
}
],
"year": 1947,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Reichenbach. 1947. Elements of Symbolic Logic. Macmillan, New York.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Terseo: Temporal expression resolution system applied to event ordering",
"authors": [
{
"first": "E",
"middle": [],
"last": "Saquete",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Muoz",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Martnez-Barco",
"suffix": ""
}
],
"year": 2003,
"venue": "Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "220--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Saquete, R. Muoz, and P. Martnez-Barco. 2003. Terseo: Temporal expression resolution system ap- plied to event ordering. In Text, Speech and Dialogue, pages 220-228.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The syntactic process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Heideltime: High quality rule-based extraction and normalization of temporal expressions",
"authors": [
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gertz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "321--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jannik Str\u00f6tgen and Michael Gertz. 2010. Heideltime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th In- ternational Workshop on Semantic Evaluation, Sem- Eval, pages 321-324.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "TRIPS and TRIOS system for TempEval-2: Extracting temporal information from text",
"authors": [
{
"first": "Naushad",
"middle": [],
"last": "Uzzaman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "276--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naushad UzZaman and James F. Allen. 2010. TRIPS and TRIOS system for TempEval-2: Extracting tem- poral information from text. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem- Eval, pages 276-283.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semeval-2010 task 13: TempEval-2",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Sauri",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 57-62, Up- psala, Sweden.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In ACL, pages 523-530.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "M",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "AAAI/IAAI",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic pro- gramming. In AAAI/IAAI, pages 1050-1055, Portland, OR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: Structured clas- sification with probabilistic categorial grammars. In UAI, pages 658-666. AUAI Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2007. On- line learning of relaxed CCG grammars for parsing to logical form. In EMNLP-CoNLL, pages 678-687.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "An illustration of a temporal distribution, e.g., Sunday.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "The grammar -(a) describes the CFG parse of the temporal types. Words are tagged with their nonterminal entry, above which only the types of the expressions are maintained; (b) describes the corresponding combination of the temporal instances. The parse in (b) is deterministic given the grammar combination rules in (a).",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "An overview of the system architecture. Note that the parse is latent -that is, it is not annotated in the training data.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": "Friday would be the Range corresponding to the current week. The sequence index i \u2208 Z, from r s (i), is defined",
"type_str": "table",
"html": null,
"content": "<table><tr><td>and represent instants as intervals</td></tr><tr><td>with zero span.</td></tr><tr><td>Sequence A sequence of Ranges, not necessarily</td></tr><tr><td>occurring at regular intervals. This includes enti-</td></tr><tr><td>ties such as Friday, November 27 th , or last</td></tr><tr><td>Friday. A Sequence is a tuple of three elements</td></tr><tr><td>s = (r s , \u2206 s , \u03c1 s ):</td></tr><tr><td>1. r s (i): The i th element of a sequence, of type</td></tr><tr><td>Range. In the case of the sequence Friday,</td></tr><tr><td>r s (0) corresponds to the Friday in the current</td></tr><tr><td>week; r s (1) is the Friday in the following week,</td></tr><tr><td>etc.</td></tr><tr><td>2. \u2206 s : The distance between two elements in the</td></tr><tr><td>sequence -approximated if this distance is not</td></tr><tr><td>constant. In the case of Friday, this distance</td></tr><tr><td>would be a week.</td></tr><tr><td>3. \u03c1 s : The containing unit of an element of a se-</td></tr><tr><td>quence. For example, \u03c1</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "A total of 62 such preterminals are defined in the implemented system, corresponding to primitive entities often appearing in newswire, although this list is easily adaptable to Range or Sequence left by a Duration f : S, D \u2192 S; f : R, D \u2192 R shiftRight Shift a Range or Sequence right by a Duration f : S, D \u2192 S; f : R, D \u2192 R shrinkBegin Take the first Duration of a Range/Sequence f : S, D \u2192 S; f : R, D \u2192 R shrinkEnd Take the last Duration of a Range/Sequence f : S, D \u2192 S; f : R, D \u2192 R catLeft Take Duration units after the end of a Range f : R, D \u2192 R catRight Take Duration units before the start of a Range f :R, D \u2192 R moveLeft1Move the origin of a sequence left by 1 f :S \u2192 S moveRight1Move the origin of a sequence right by 1 f : S \u2192 S n th x of y Take the n th Sequence in y (Day of Week, etc) f : Number \u2192 S approximate Make a Duration approximate f :",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Function</td><td>Description</td><td>Signature(s)</td></tr><tr><td>shiftLeft</td><td>Shift a</td><td/></tr></table>"
},
"TABREF3": {
"num": null,
"text": "The functional preterminals of the grammar; R, S, and D denote Ranges Sequences and Durations respectively. The name, a brief description, and the type signature of the function (as used in parsing) are given. Described in more detail in Section 3.4, the functions are most easily interpreted as operations on either an interval or sequence.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Type</td><td>Instances</td></tr><tr><td>Range</td><td>Past, Future, Yesterday,</td></tr><tr><td/><td>Tomorrow, Today, Reference,</td></tr><tr><td/><td>Year(n), Century(n)</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF7": {
"num": null,
"text": "TempEval-2 Attribute scores for our system and three previous systems. The scores are calculated using gold extents, forcing a guessed interpretation for each parse.",
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}