{
"paper_id": "D10-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:54.037813Z"
},
"title": "A Simple Domain-Independent Probabilistic Approach to Generation",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC",
"location": {
"postCode": "94720",
"settlement": "Berkeley Berkeley",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC",
"location": {
"postCode": "94720",
"settlement": "Berkeley Berkeley",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UC",
"location": {
"postCode": "94720",
"settlement": "Berkeley Berkeley",
"region": "CA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains-Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.",
"pdf_parse": {
"paper_id": "D10-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains-Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we focus on the problem of generating descriptive text given a world state represented by a set of database records. While existing generation systems can be engineered to obtain good performance on particular domains (e.g., Dale et al. (2003) , Green (2006) , Turner et al. (2009) , Reiter et al. (2005) , inter alia), it is often difficult to adapt them across different domains. Furthermore, content selection (what to say: see Barzilay and Lee (2004) , Foster and White (2004) , inter alia) and surface realization (how to say it: see Ratnaparkhi (2002) , Wong and Mooney (2007) , Chen and Mooney (2008) , Lu et al. (2009) , etc.) are typically handled separately. Our goal is to build a simple, flexible system which is domain-independent and performs content selection and surface realization in a unified framework.",
"cite_spans": [
{
"start": 240,
"end": 258,
"text": "Dale et al. (2003)",
"ref_id": "BIBREF5"
},
{
"start": 261,
"end": 273,
"text": "Green (2006)",
"ref_id": "BIBREF7"
},
{
"start": 276,
"end": 296,
"text": "Turner et al. (2009)",
"ref_id": "BIBREF18"
},
{
"start": 299,
"end": 319,
"text": "Reiter et al. (2005)",
"ref_id": "BIBREF16"
},
{
"start": 446,
"end": 469,
"text": "Barzilay and Lee (2004)",
"ref_id": "BIBREF0"
},
{
"start": 472,
"end": 495,
"text": "Foster and White (2004)",
"ref_id": "BIBREF6"
},
{
"start": 554,
"end": 572,
"text": "Ratnaparkhi (2002)",
"ref_id": "BIBREF15"
},
{
"start": 575,
"end": 597,
"text": "Wong and Mooney (2007)",
"ref_id": "BIBREF20"
},
{
"start": 600,
"end": 622,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
},
{
"start": 625,
"end": 641,
"text": "Lu et al. (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We operate in a setting in which we are only given examples consisting of (i) a set of database records (input) and (ii) example human-generated text describing some of those records (output). We use the model of Liang et al. (2009) to automatically induce the correspondences between words in the text and the actual database records mentioned.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We break up the full generation process into a sequence of local decisions, training a log-linear classifier for each type of decision. We use a simple but expressive set of domain-independent features, where each decision is allowed to depend on the entire history of previous decisions, as in the model of Ratnaparkhi (2002) . These long-range contextual dependencies turn out to be critical for accurate generation.",
"cite_spans": [
{
"start": 308,
"end": 326,
"text": "Ratnaparkhi (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More specifically, our model is defined in terms of three types of decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first type chooses records from the database (macro content selection)-for example, wind speed, in the case of generating weather forecasts. The second type chooses a subset of fields from a record (micro content selection)-e.g., the minimum and maximum temperature. The third type chooses a suitable template to render the content (surface realization)e.g., winds between [min] and [max] mph; templates are automatically extracted from training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
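{
"text": "As a concrete illustration of the template mechanism just described, here is a minimal sketch (hypothetical Python, not code from the paper) of how a chosen template's field slots could be filled from a record's field values:\n\ndef render(template, values):\n    # values: field name -> field value, e.g. {'min': 5, 'max': 10}\n    return ' '.join(str(values[t[1:-1]]) if t.startswith('[') and t.endswith(']') else t for t in template)\n\n# render(['winds', 'between', '[min]', 'and', '[max]', 'mph'], {'min': 5, 'max': 10})\n# -> 'winds between 5 and 10 mph'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},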
{
"text": "We tested our approach in three domains: ROBOCUP, for sportscasting (Chen and Mooney, 2008) ; SUMTIME, for technical weather forecast generation (Reiter et al., 2005) ; and WEATHERGOV, for common weather forecast generation (Liang et al., 2009) . We performed both automatic (BLEU) and human evaluation. On WEATHERGOV, we s: pass(arg1=purple6, arg2=purple3) kick(arg1=purple3) badPass(arg1=purple3,arg2=pink9) turnover (arg1=purple3,arg2=pink9) w: purple3 made a bad pass that was picked off by pink9",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "(Chen and Mooney, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 166,
"text": "(Reiter et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 224,
"end": 244,
"text": "(Liang et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Example scenarios (a scenario is a world state s paired with a text w) for each of the three domains. Each row in the world state denotes a record. Our generation task is to map a world state s (input) to a text w (output). Note that this mapping involves both content selection and surface realization. (a) Robocup: s: pass(arg1=purple6,arg2=purple3) kick(arg1=purple3) badPass(arg1=purple3,arg2=pink9) turnover(arg1=purple3,arg2=pink9); w: purple3 made a bad pass that was picked off by pink9. (b) WeatherGov: s: temperature(time=5pm-6am,min=48,mean=53,max=61) windSpeed(time=5pm-6am,min=3,mean=6,max=11,mode=0-10) windDir(time=5pm-6am,mode=SSW) gust(time=5pm-6am,min=0,mean=0,max=0) skyCover(time=5pm-9pm,mode=0-25) skyCover(time=2am-6am,mode=75-100) precipPotential(time=5pm-6am,min=2,mean=14,max=20) rainChance(mode=someChance); w: a 20 percent chance of showers after midnight . increasing clouds , with a low around 48 . southwest wind between 5 and 10 mph. (c) SumTime: s: wind10m(time=6am,dir=SW,min=16,max=20,gust min=0,gust max=-) wind10m(time=9pm,dir=SSW,min=28,max=32,gust min=40,gust max=-) wind10m(time=12am,dir=-,min=24,max=28,gust min=36,gust max=-); w: sw 16 -20 backing ssw 28 -32 gusts 40 by mid evening easing 24 -28 gusts 36 late evening.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "achieved a BLEU score of 51.5 on the combined task of content selection and generation, which is more than a two-fold improvement over a model similar to that of Liang et al. (2009) . On ROBOCUP and SUMTIME, we achieved results comparable to the state-of-the-art. most importantly, we obtained these results with a general-purpose approach that we believe is simpler than current state-of-the-art systems.",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to generate a text given a world state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup and Domains",
"sec_num": "2"
},
{
"text": "The world state, denoted s, is represented by a set of database records. Define T to be a set of record types, where each record type t \u2208 T is associated with a set of fields FIELDS(t). Each record r \u2208 s has a record type r.t \u2208 T and a field value r.v[f ] for each field f \u2208 FIELDS(t). The text, denoted w, is represented by a sequence of tokenized words. We use the term scenario to denote a world state s paired with a text w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup and Domains",
"sec_num": "2"
},
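{
"text": "A minimal sketch of these definitions as data structures (hypothetical Python; the names Record and FIELDS mirror the notation above and are not from the paper):\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Record:\n    t: str  # record type, e.g. 'windSpeed'\n    v: dict = field(default_factory=dict)  # field name -> value, e.g. {'min': 3}\n\n# FIELDS(t): the set of fields associated with each record type t.\nFIELDS = {'windSpeed': ['time', 'min', 'mean', 'max', 'mode'], 'windDir': ['time', 'mode']}\n\n# A world state s is a set of records (a list here for simplicity).\ns = [Record('windSpeed', {'time': '5pm-6am', 'min': 3, 'mean': 6, 'max': 11, 'mode': '0-10'}),\n     Record('windDir', {'time': '5pm-6am', 'mode': 'SSW'})]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup and Domains",
"sec_num": "2"
},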
{
"text": "In this paper, we conducted experiments on three domains, which are detailed in the following subsections. Example scenarios for each domain are detailed in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup and Domains",
"sec_num": "2"
},
{
"text": "A world state in the ROBOCUP domain is a set of event records (meaning representations in the terminology of Chen and Mooney (2008) ) generated by a robot soccer simulator. For example, the record pass(arg1=pink1,arg2=pink5) denotes a passing event; records of this type (pass) have two fields: arg1 (the agent) and arg2 (the recipient). As the game progresses, human commentators talk about some of the events in the game, e.g., purple3 made a bad pass that was picked off by pink9.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ROBOCUP: Sportscasting",
"sec_num": "2.1"
},
{
"text": "We used the dataset created by Chen and Mooney (2008) , which contains 1919 scenarios from the 2001-2004 Robocup finals. Each scenario consists of a single sentence representing a fragment of a commentary on the game, paired with a set of candidate records, which were recorded within five seconds of the commentary. The records in the ROBOCUP dataset data were aligned by Chen and Mooney (2008) . Each scenario contains on average |s| = 2.4 records and 5.7 words. See Figure 1 (a) for an example of a scenario. Content selection in this domain is choosing the single record to talk about, and surface realization is talking about it.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
},
{
"start": 373,
"end": 395,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 469,
"end": 477,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROBOCUP: Sportscasting",
"sec_num": "2.1"
},
{
"text": "2.2 SUMTIME: Technical Weather Forecasts Reiter et al. (2005) developed a generation system and created the SUMTIME-METEO corpus, which consists of marine wind weather forecasts used by offshore oil rigs, generated by the output of weather simulators. More specifically, these forecasts describe various aspects of the wind at different times during the forecast period.",
"cite_spans": [
{
"start": 41,
"end": 61,
"text": "Reiter et al. (2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ROBOCUP: Sportscasting",
"sec_num": "2.1"
},
{
"text": "We used the version of the SUMTIME-METEO corpus created by Belz (2008) . The dataset consists of 469 scenarios, each containing on average |s| = 2.6 records and 16.2 words. See Figure 1 (c) for an example of a scenario. This task requires no content selection, only surface realization: The records are given in some fixed order and the task is to generate from each of these records in turn; of course, due to contextual dependencies, these records cannot be generated independently.",
"cite_spans": [
{
"start": 59,
"end": 70,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROBOCUP: Sportscasting",
"sec_num": "2.1"
},
{
"text": "In the WEATHERGOV domain, the world state contains detailed information about a local weather forecast (e.g., temperature, rain chance, etc.). The text is a short forecast report based on this information. We used the dataset created by Liang et al. (2009) . The world state is summarized by records which aggregate measurements over selected time intervals. The dataset consists of 29,528 scenarios, each containing on average |s| = 36 records and 28.7 words. See Figure 1 (b) for an example of a scenario.",
"cite_spans": [
{
"start": 237,
"end": 256,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 465,
"end": 473,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "WEATHERGOV: Common Weather Forecasts",
"sec_num": "2.3"
},
{
"text": "While SUMTIME and WEATHERGOV are both weather domains, there are significant differences between the two. SUMTIME forecasts are intended to be read by trained meteorologists, and thus the text is quite abbreviated. On the other hand, WEATHERGOV texts are intended to be read by the general public and thus is more English-like. Furthermore, SUMTIME does not require content selection, whereas content selection is a major focus of WEATHERGOV. Indeed, on average, only 5 of 36 records are actually mentioned in a WEATHERGOV scenario. Also, WEATHERGOV is more complex: The text is more varied, there are multiple record types, and there are about ten times as many records in each world state. Figure 2 : Pseudocode for the generation process. The generated text w is a deterministic function of the decisions.",
"cite_spans": [],
"ref_spans": [
{
"start": 692,
"end": 700,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "WEATHERGOV: Common Weather Forecasts",
"sec_num": "2.3"
},
{
"text": "for i = 1, 2, . . . : \u2212choose a record r i \u2208 s \u2212if r i = STOP: return \u2212choose a field set F i \u2282 FIELDS(r i .t) \u2212choose a template T i \u2208 TEMPLATES(r i .t, F i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Process",
"sec_num": null
},
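{
"text": "A runnable sketch of this loop (hypothetical Python; choose_record, choose_field_set, choose_template, and render are stand-ins for the learned classifiers of Section 4 and are not from the paper):\n\ndef generate(s, choose_record, choose_field_set, choose_template, render):\n    history, chunks = [], []\n    while True:\n        r = choose_record(s, history)  # macro content selection\n        history.append(('record', r))\n        if r == 'STOP':\n            return ' '.join(chunks)\n        F = choose_field_set(r, history)  # micro content selection\n        history.append(('fields', F))\n        T = choose_template(r, F, history)  # surface realization\n        history.append(('template', T))\n        chunks.append(render(T, r.v))  # fill the template's field slots with r's values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Process",
"sec_num": null
},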
{
"text": "To model the process of generating a text w from a world state s, we decompose the generation process into a sequence of local decisions. There are two aspects of this decomposition that we need to specify: (i) how the decisions are structured; and (ii) what pieces of information govern the decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3"
},
{
"text": "The decisions are structured hierarchically into three types of decisions: (i) record decisions, which determine which records in the world state to talk about (macro content selection); (ii) field set decisions, which determine which fields of those records to mention (micro content selection); and (iii) template decisions, which determine the actual words to use to describe the chosen fields (surface realization). Figure 2 shows the pseudocode for the generation process, while Figure 3 depicts an example of the generation process on a WEATHERGOV scenario.",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Figure 2",
"ref_id": null
},
{
"start": 484,
"end": 492,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3"
},
{
"text": "Each of these decisions is governed by a set of feature templates (see Figure 4 ), which are represented as functions of the current decision and past decisions. The feature weights are learned from training data (see Section 4.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3"
},
{
"text": "We chose a set of generic domain-independent feature templates, described in the sections below. These features can, in general, depend on the current decision and all previous decisions. For example, referring to Figure 4 , R2 features on the record choice depend on all the previous record decisions, and R5 features depend on the most recent template decision. This is in contrast with most systems for content selection (Barzilay and Lee, 2004) and surface realization (Belz, 2008) , where decisions must decompose locally according to either a graph or tree. The ability to use global features in this manner is World state skyCover 1 : skyCover(time=5pm-6am,mode=50-75) temperature 1 : temperature(time=5pm-6am,min=44,mean=49,max=60) ...",
"cite_spans": [
{
"start": 424,
"end": 448,
"text": "(Barzilay and Lee, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 473,
"end": 485,
"text": "(Belz, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3"
},
{
"text": "Figure 3 (fragment): World state: skyCover_1: skyCover(time=5pm-6am,mode=50-75); temperature_1: temperature(time=5pm-6am,min=44,mean=49,max=60); ... Decisions: r_1 = skyCover_1, r_2 = temperature_1, r_3 = STOP; field sets F_1 = {mode}, F_2 = {time, min}. Example active features: (F1) F_2 = {time, min}; (F2) F_2 = {time, min} and r_2.v[min] = low; (W3) log p_lm(with | cloudy ,).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decisions",
"sec_num": null
},
{
"text": "Figure 3: The generation process on an example WEATHERGOV scenario. The figure is divided into two parts: The upper part of the figure shows the generation of text from the world state via a sequence of seven decisions (in boxes). Three of these decisions are highlighted and the features that govern these decisions are shown in the lower part of the figure. Note that different decisions in the generation process would result in different features being active (nonzero).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decisions",
"sec_num": null
},
{
"text": "Figure 4 (table of feature templates): Record decisions: R1\u2020 (list of last k record types): r_i.t = * and (r_{i-1}.t, . . . , r_{i-k}.t) = * for k \u2208 {1, 2}; R2 (set of previous record types): r_i.t = * and {r_j.t : j < i} = *; R3 (record type already generated): [[r_j.t = r_i.t for some j < i]]; R4 (field values): r_i.t = * and r_i.v[f] = * for f \u2208 FIELDS(r_i.t); R5\u2020 (stop under language model (LM)): [[r_i.t = STOP]] \u00d7 log p_lm(STOP | previous two words generated). Field set decisions: F1\u2020 (field set): F_i = *; F2 (field values): F_i = * and r_i.v[f] = * for f \u2208 F_i. Template decisions: W1\u2020 (base/coarse generation template): h(T_i) = * for h \u2208 {BASE, COARSE}; W2 (field values): h(T_i) = * and r_i.v[f] = * for f \u2208 F_i, h \u2208 {BASE, COARSE}; W3\u2020 (first word of template under LM): log p_lm(first word in T_i | previous two words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": null
},
{
"text": "Figure 4: Feature templates that govern the record, field set, and template decisions. Each line specifies the name, informal description, and formal description of a set of features, obtained by ranging * over possible values (for example, for ri.t = * , * ranges over all record types T ). Notation: e returns 1 if the expression e is true and 0 if it is false. These feature templates are domain-independent; that is, they are used to create features automatically across domains. Feature templates marked with \u2020 are included in our baseline system (Section 5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": null
},
{
"text": "Record decisions are responsible for macro content selection. Each record decision chooses a record r i from the world state s according to features of the following types:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Record Decisions",
"sec_num": "3.1"
},
{
"text": "R1 captures the discourse coherence aspect of content selection; for example, we learn that windSpeed tends to follow windDir (but not al-ways).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Record Decisions",
"sec_num": "3.1"
},
{
"text": "R2 captures an unordered notion of coherence-simply which sets of record types are preferable; for example, we learn that rainChance is not generated if sleetChance already was mentioned. R3 is a coarser version of R2, capturing how likely it is to propose a record of a type that has already been generated. R4 captures the important aspect of content selection that the records chosen depend on their field values; 1 for example, we learn that snowChance is not chosen unless there is snow. R5 allows the language model to indicate whether a STOP record is appropriate; this helps prevent sentences from ending abruptly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Record Decisions",
"sec_num": "3.1"
},
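{
"text": "A sketch of how such record features might be computed (hypothetical Python; the feature-name strings are our own encoding, not the paper's):\n\ndef record_features(r, prev_records):\n    prev_types = [p.t for p in prev_records]\n    feats = {}\n    for k in (1, 2):  # R1: conjunction of the candidate type with the last k record types\n        feats['R1:' + r.t + '|' + ','.join(prev_types[-k:])] = 1.0\n    # R2: conjunction with the unordered set of previously chosen record types\n    feats['R2:' + r.t + '|' + ','.join(sorted(set(prev_types)))] = 1.0\n    # R3: indicator for whether a record of this type was already generated\n    feats['R3:seen_before'] = 1.0 if r.t in prev_types else 0.0\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Record Decisions",
"sec_num": "3.1"
},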
{
"text": "Field set decisions are responsible for micro content selection, i.e., which fields of a record are mentioned. Each field set decision chooses a subset of fields F i from the set of fields FIELDS(r i .t) of the record r i that was just generated. These decisions are made based on two types of features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Set Decisions",
"sec_num": "3.2"
},
{
"text": "F1 captures which sets of fields are talked about together; for example, we learn that {mean} and {min, max} are preferred field sets for the windSpeed record. By defining features on the entire field set, we can capture any correlation structure over the fields; in contrast, Liang et al. (2009) generates a sequence of fields in which a field can only depend on the previous one.",
"cite_spans": [
{
"start": 277,
"end": 296,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Field Set Decisions",
"sec_num": "3.2"
},
{
"text": "F2 allows the field set to be chosen based on the values of the fields, analogously to R4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Set Decisions",
"sec_num": "3.2"
},
{
"text": "Template decisions perform surface realization. A template is a sequence of elements, where each element is either a word (e.g., around) or a field (e.g., [min] ). Given the record r i and field set F i that we are generating from, the goal is to choose a template T i (Section 4.3.2 describes how we define the set of possible templates). The features that govern the choice of T i are as follows:",
"cite_spans": [
{
"start": 155,
"end": 160,
"text": "[min]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Decisions",
"sec_num": "3.3"
},
{
"text": "W1 captures a priori preferences for generation templates given field sets. There are two ways to control this preference, BASE and COARSE. 1 We map a numeric field value onto one of five categories (very-low, low, medium, high, or very-high) based on its value with respect to the mean and standard deviation of values of that field in the training data. BASE(T i ) denotes the template T i itself, thus allowing us to remember exactly which templates were useful. To guard against overfitting, we also use COARSE(T i ), which maps T i to a coarsened version of T i , in which more words are replaced with their associated fields (see Figure 5 for an example).",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 636,
"end": 644,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Template Decisions",
"sec_num": "3.3"
},
{
"text": "W2 captures a dependence on the values of fields in the field set, and is analogous to R4 and F2. Finally, W3 contributes a language model probability, to ensure smooth transitions between templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Decisions",
"sec_num": "3.3"
},
{
"text": "After T i has been chosen, each field in the template is replaced with a word given the corresponding field value in the world state. In particular, a word is chosen from the parameters learned in the model of Liang et al. (2009) . In the example in Figure 3, the [min] field in T 2 has value 44, which is rendered to the word 45 (rounding and other noisy deviations are common in the WEATHERGOV domain).",
"cite_spans": [
{
"start": 210,
"end": 229,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 250,
"end": 256,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Template Decisions",
"sec_num": "3.3"
},
{
"text": "Having described all the features, we now present a conditional probabilistic model over texts w given world states s (Section 4.1). Section 4.2 describes how to use the model for generation, and Section 4.3 describes how to learn the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning a Probabilistic Model",
"sec_num": "4"
},
{
"text": "Recall from Section 3 that the generation process generates r 1 , F 1 , T 1 , r 2 , F 2 , T 2 , . . . , STOP. To unify notation, denote this sequence of decisions as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "d = (d 1 , . . . , d |d| ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "Our probability model is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(d | s; \u03b8) = |d| j=1 p(d j | d <j ; \u03b8),",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "d <j = (d 1 , . . . , d j\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "is the history of decisions and \u03b8 are the model parameters (feature weights). Note that the text w (the output) is a deterministic function of the decisions d. We use the features described in Section 3 to define a log-linear model for each decision:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(d j | d <j , s; \u03b8) = exp{\u03c6 j (d j , d <j , s) \u03b8} d j \u2208Dj exp{\u03c6 j (d j , d <j , s) \u03b8} ,",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "where \u03b8 are all the parameters (feature weights), \u03c6 j is the feature vector for the j-th decision, and D j is the domain of the j-th decision (either records, field sets, or templates). This chaining of log-linear models was used in Ratnaparkhi (1998) for tagging and parsing, and in Ratnaparkhi (2002) for surface realization. The ability to condition on arbitrary histories is a defining property of these models.",
"cite_spans": [
{
"start": 233,
"end": 251,
"text": "Ratnaparkhi (1998)",
"ref_id": "BIBREF14"
},
{
"start": 284,
"end": 302,
"text": "Ratnaparkhi (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
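{
"text": "A sketch of this locally normalized log-linear distribution (hypothetical Python; representing features as sparse dicts and \u03b8 as a weight dict is our own encoding):\n\nimport math\n\ndef decision_distribution(candidates, featurize, theta):\n    # Eq. (2): p(d_j | d_<j, s) is proportional to exp(phi_j(d_j, d_<j, s) . theta).\n    scores = [sum(theta.get(f, 0.0) * v for f, v in featurize(d).items()) for d in candidates]\n    m = max(scores)  # subtract the max before exponentiating, for numerical stability\n    exps = [math.exp(z - m) for z in scores]\n    Z = sum(exps)\n    return [(d, e / Z) for d, e in zip(candidates, exps)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},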
{
"text": "Suppose we have learned a model with parameters \u03b8 (how to obtain \u03b8 is discussed in Section 4.3). Given a world state s, we would like to use our model to generate an output text w via a decision sequence d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": "In our experiments, we choose d by sequentially choosing the best decision in a greedy fashion (until the STOP record is generated):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d j = argmax d j p(d j | d <j , s; \u03b8).",
"eq_num": "(3)"
}
],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": "Alternatively, instead of choosing the best decision at each point, we can sample from the distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": "d j \u223c p(d j | d <j , s; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": ", which provides more diverse generated texts at the expense of a slight degradation in quality. Both greedy search and sampling are very efficient. Another option is to try to find the Viterbi decision sequence, i.e., the one with the maximum joint probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": "d = argmax d p(d | s; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
{
"text": ". However, this computation is intractable due to features depending arbitrarily on past decisions, making dynamic programming infeasible. We tried using beam search to approximate this optimization, but we actually found that beam search performed worse than greedy. Belz (2008) also found that greedy was more effective than Viterbi for their model.",
"cite_spans": [
{
"start": 268,
"end": 279,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},
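{
"text": "A sketch of both decoding strategies just discussed (hypothetical Python, reusing the decision_distribution sketch from Section 4.1):\n\nimport random\n\ndef choose_decision(candidates, featurize, theta, sample=False):\n    # Greedy decoding, Eq. (3), takes the argmax at each step; sampling instead draws\n    # from the same local distribution, trading some quality for diversity.\n    dist = decision_distribution(candidates, featurize, theta)\n    if sample:\n        return random.choices([d for d, _ in dist], weights=[p for _, p in dist])[0]\n    return max(dist, key=lambda pair: pair[1])[0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using the Model for Generation",
"sec_num": "4.2"
},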
{
"text": "Now we turn our attention to learning the parameters \u03b8 of our model. We are given a set of N scenarios {(s (i) , w (i) )} N i=1 as training data. Note that our model is defined over the decision sequence d which contains information not present in w. In Sections 4.3.1 and 4.3.2, we show how we fill in this missing information to obtain d (i) for each training scenario i.",
"cite_spans": [
{
"start": 107,
"end": 110,
"text": "(i)",
"ref_id": null
},
{
"start": 340,
"end": 343,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "Assuming this missing information is filled, we end up with a standard supervised learning problem, which can be solved by maximize the (conditional) likelihood of the training data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "max \u03b8\u2208R d \uf8eb \uf8ed N i=1 |d (i) | j=1 log p(d (i) j | d (i) <j ; \u03b8) \uf8f6 \uf8f8 \u2212\u03bb||\u03b8|| 2 , (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "where \u03bb > 0 is a regularization parameter. The objective function in (4) is optimized using the standard L-BFGS algorithm (Liu and Nocedal, 1989) .",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Liu and Nocedal, 1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
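{
"text": "A sketch of this training setup (hypothetical Python using numpy/scipy; the (Phi, gold) encoding of decisions is our own, and the gradient is left to finite differences for brevity, whereas a real implementation would supply the analytic gradient):\n\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef train(decisions, lam=1.0):\n    # decisions: list of (Phi, gold) pairs; row k of Phi is the feature vector of\n    # candidate k for one decision, and gold is the index of the observed decision.\n    dim = decisions[0][0].shape[1]\n    def neg_objective(theta):\n        ll = 0.0\n        for Phi, gold in decisions:\n            z = Phi @ theta\n            z = z - z.max()  # numerical stability\n            ll += z[gold] - np.log(np.exp(z).sum())  # log p(d_j | history)\n        return -(ll - lam * theta @ theta)  # negate, since Eq. (4) maximizes\n    return minimize(neg_objective, np.zeros(dim), method='L-BFGS-B').x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},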
{
"text": "As mentioned previously, our training data includes only the world state s and generated text w, not the full sequence of decisions d needed for training. Intuitively, we know what was generated but not why it was generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Alignments",
"sec_num": "4.3.1"
},
{
"text": "We use the model of Liang et al. (2009) to impute the decisions d. They introduce a generative model p(a, w|s), where the latent alignment a specifies (1) the sequence of records that were chosen, (2) the sequence of fields that were chosen, and (3) which words in the text were spanned by the chosen records and fields. The model is learned in an unsupervised manner using EM to produce a observing only w and s.",
"cite_spans": [
{
"start": 20,
"end": 39,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Alignments",
"sec_num": "4.3.1"
},
{
"text": "An example of an alignment is given in the left part of Figure 5 . This information specifies the record decisions and a set of fields for each record. Because the induced alignments can be noisy, we need to process them to obtain cleaner template decisions. This is the subject of the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 64,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Latent Alignments",
"sec_num": "4.3.1"
},
{
"text": "Given an aligned training scenario ( Figure 5 ), we would like to extract two types of templates.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Template Extraction",
"sec_num": "4.3.2"
},
{
"text": "For each record, an aligned training scenario specifies a sequence of fields and the text that is spanned by each field. We create a template by abstracting fields-that is, replacing the words spanned by a field by the field itself. We call the resulting template COARSE. The problem with using this template directly is that fields can be noisy due to errors from the unsupervised model. Therefore, we also create a BASE template which only abstracts a subset of the fields. In particular, we define a trigger pattern which specifies a simple condition under which a field should be abstracted. For WEATHERGOV, we only abstract fields that Templates extracted Figure 5 : An example of template extraction from an imperfectly aligned training scenario. Note that these alignments are noisy (e.g., [mean] aligns to a period). Therefore, for each record (skyCover and temperature in this case), we extract two templates:",
"cite_spans": [
{
"start": 797,
"end": 803,
"text": "[mean]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 661,
"end": 669,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Template Extraction",
"sec_num": "4.3.2"
},
{
"text": "(1) a COARSE template, which takes the text spanned by the record and abstracts away all fields in the scenario ( For each record r i , we define T i so that BASE(T i ) and COARSE(T i ) are the corresponding two extracted templates. We restrict F i to the set of abstracted fields in the COARSE template",
"cite_spans": [
{
"start": 112,
"end": 113,
"text": "(",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction",
"sec_num": "4.3.2"
},
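{
"text": "A sketch of the abstraction step (hypothetical Python; the span-to-field alignment format stands in for the output of the alignment model and is not the paper's format):\n\ndef extract_template(words, alignments):\n    # alignments: (start, end) word span -> field name, e.g. {(2, 3): 'min'}\n    starts = {start: (end, f) for (start, end), f in alignments.items()}\n    out, i = [], 0\n    while i < len(words):\n        if i in starts:\n            end, f = starts[i]\n            out.append('[' + f + ']')  # replace the aligned span with its field\n            i = end\n        else:\n            out.append(words[i])\n            i += 1\n    return out\n\n# extract_template('low around 45 .'.split(), {(2, 3): 'min'}) -> ['low', 'around', '[min]', '.']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction",
"sec_num": "4.3.2"
},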
{
"text": "We now present an empirical evaluation of our system on our three domains-ROBOCUP, SUMTIME, and WEATHERGOV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Automatic Evaluation To evaluate surface realization (or, combined content selection and surface realization), we measured the BLEU score (Papineni et al., 2002) (the precision of 4-grams with a brevity penalty) of the system-generated output with respect to the human-generated output.",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "To evaluate macro content selection, we measured the F 1 score (the harmonic mean of precision and recall) of the set of records chosen with respect to the human-annotated set of records.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},
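{
"text": "A minimal sketch of this metric (hypothetical Python; records are compared as plain sets):\n\ndef content_selection_f1(predicted, gold):\n    # predicted, gold: the set of records chosen by the system and the human-annotated set\n    tp = len(predicted & gold)\n    if tp == 0:\n        return 0.0\n    precision, recall = tp / len(predicted), tp / len(gold)\n    return 2 * precision * recall / (precision + recall)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.1"
},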
{
"text": "We conducted a human evaluation using Amazon Mechanical Turk. For each domain, we chose 100 scenarios randomly from the test set. We ran each system under consideration on each of these scenarios, and presented each resulting output to 10 evaluators. 2 Evaluators were given instructions to rank an output on the basis of English fluency and semantic correctness on the following scale:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "2 To minimize bias, we evaluated all the systems at once, randomly shuffling the outputs of the systems. The evaluators were not necessarily the same 10 evaluators. Evaluators were also given additional domainspecific information: (1) the background of the domain (e.g., that SUMTIME reports are technical weather reports); (2) general properties of the desired output (e.g., that SUMTIME texts should mention every record whereas WEATHERGOV texts need not); and (3) peculiarities of the text (e.g., the suffix ly in SUMTIME should exist as a separate token from its stem, or that pink goalie and pink1 have the same meaning in ROBOCUP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "We evaluated the following systems on our three domains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 HUMAN is the human-generated output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 OURSYSTEM uses all the features in Figure 4 and is trained according to Section 4.3. \u2022 BASELINE is OURSYSTEM using a subset of the features (those marked with \u2020 in Figure 4) . In contrast to OURSYSTEM, the included features only depend on a local context of decisions in a manner similar to the generative model of Liang et al. (2009) and the pCRU-greedy system of Belz (2008) . BASELINE also excludes features that depend on values of the world state. \u2022 The existing state-of-the-art domain-specific system for each domain.",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
},
{
"start": 367,
"end": 378,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 4",
"ref_id": null
},
{
"start": 166,
"end": 175,
"text": "Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "Following the evaluation methodology of Chen and Mooney (2008) , we trained our system on three Table 1 : ROBOCUP results. WASPER-GEN is described in Chen and Mooney (2008) . The BLEU is reported on systems that use fixed human-annotated records (in other words, we evaluate surface realization given perfect content selection). Figure 6: Outputs of systems on an example ROBOCUP scenario. There are some minor differences between the outputs. Recall that OURSYSTEM differs from BASELINE mostly in the addition of feature W2, which captures dependencies between field values (e.g., purple10) and the template chosen (e.g., [arg1] passes to [arg2]). This allows us to capture valuedependent preferences for different realizations (e.g., passes to over kicks to). Also, HUMAN uses passes back to, but this word choice requires knowledge of passing records in previous scenarios, which none of the systems have access to. It would natural, however, to add features that would capture these longerrange dependencies in our framework.",
"cite_spans": [
{
"start": 40,
"end": 62,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
},
{
"start": 150,
"end": 172,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROBOCUP Results",
"sec_num": "5.3"
},
{
"text": "Robocup games and tested on the fourth, averaging over the four train/test splits. We report the average test accuracy weighted by the number of scenarios in a game. First, we evaluated macro content selection. Table 1 shows that OURSYSTEM significantly outperforms BASELINE and WASPER-GEN on F 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROBOCUP Results",
"sec_num": "5.3"
},
{
"text": "To compare with Chen and Mooney (2008) on surface realization, we fixed each system's record decisions to the ones given by the annotated data and enforced that all the fields of that record are chosen. Table 1 shows that OURSYSTEM significantly outperforms BASELINE and is comparable to WASPER-GEN on BLEU. On human evaluation, OURSYSTEM outperforms BASELINE, but WASPER-GEN outperforms OURSYSTEM. See Figure 6 for example outputs from the various systems. Table 2 : SUMTIME results. The SUMTIME-Hybrid system is described in (Reiter et al., 2005) ; pCRU-greedy, in (Belz, 2008) .",
"cite_spans": [
{
"start": 527,
"end": 548,
"text": "(Reiter et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 567,
"end": 579,
"text": "(Belz, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 1",
"ref_id": null
},
{
"start": 403,
"end": 411,
"text": "Figure 6",
"ref_id": null
},
{
"start": 458,
"end": 465,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ROBOCUP Results",
"sec_num": "5.3"
},
{
"text": "The SUMTIME task only requires micro content selection and surface realization because the sequence of records to be generated is fixed; only these aspects are evaluated. Following the methodology of Belz (2008) , we used five-fold cross validation. We found that using the unsupervised model of Liang et al. (2009) to automatically produce aligned training scenarios (Section 4.3.1) was less effective than it was in the other two domains due to two factors: (i) there are fewer training examples in SUMTIME and unsupervised learning typically works better with a large amount of data; and (ii) the alignment model does not exploit the temporal structure in the SUMTIME world state. Therefore, we used a small set of simple regular expressions to produce aligned training scenarios. Table 2 shows that OURSYSTEM significantly outperforms BASELINE as well as SUMTIME-Hybrid, a hand-crafted system, on BLEU. Note that OURSYSTEM is domainindependent and has not been specifically tuned to SUMTIME. However, OURSYSTEM is outperformed by the state-of-the-art statistical system pCRU-greedy.",
"cite_spans": [
{
"start": 200,
"end": 211,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
},
{
"start": 296,
"end": 315,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 784,
"end": 791,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "Custom Features One of the advantages of our feature-based approach is that it is straightforward to incorporate domain-specific features to capture specific properties of a domain. To this end, we define the following set of feature templates in place of our generic feature templates from Figure 7 : Outputs of systems on an example SUMTIME scenario. Two notable differences between OURSYSTEM-CUSTOM and BASELINE arise due to OURSYSTEM-CUSTOM's value-dependent features. For example, OURSYSTEM-CUSTOM can choose whether to include the time field (windDir2) or not (windDir1), depending on the value of the time (F1 ), thereby improving content selection. OURSYSTEM-CUSTOM also improves surface realization, choosing gradually decreasing over BASELINE's increasing.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "Interestingly, this improvement comes from the joint effort of two features: W2 prefers decreasing over increasing in this case, and W5 adds the modifier gradually. An important strength of log-linear models is the ability to combine soft preferences from many features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "\u2022 W2 : Change in wind speed \u2022 W3 : Change in wind direction and speed \u2022 W4 : Existence of gust min and/or max \u2022 W5 : Time elapsed since last record \u2022 W6 : Whether wind is a cardinal direction (N, E, S, W)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "The resulting system, which we call OURSYSTEM-CUSTOM, obtains a BLEU score which is comparable to pCRU-greedy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "An important aspect of our system that it is flexible and quick to deploy. According to Belz (2008) , SUMTIME-Hybrid took twelve person-months to build, while pCRU-greedy took one month. Having developed OURSYSTEM in a domain-independent way, we only needed to do simple reformatting upon receiving the SUMTIME data. Furthermore, it took only a few days to develop the custom features above to create OURSYSTEM-CUSTOM, which has BLEU performance comparable to the state-of-theart pCRU-greedy system.",
"cite_spans": [
{
"start": 88,
"end": 99,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "We also conducted human evaluations on the four systems shown in Table 2 . Note that this evaluation is rather difficult for Mechanical Turkers since SUMTIME texts are rather technical compared to those in other domains. Interestingly, all systems outperform HUMAN on English fluency; this result corroborates the findings of Belz (2008) . On semantic correctness, all systems perform comparably to HUMAN, except pCRU-greedy, which performs slightly better. See Figure 7 for a comparison of the outputs generated by the various systems. ",
"cite_spans": [
{
"start": 326,
"end": 337,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 2",
"ref_id": null
},
{
"start": 462,
"end": 470,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "SUMTIME Results",
"sec_num": "5.4"
},
{
"text": "We evaluate the WEATHERGOV corpus on the joint task of content selection and surface realization. We split our corpus into 25,000 scenarios for training, 1,000 for development, and 3,528 for testing. In WEATHERGOV, numeric field values are often rounded or noisily perturbed, so it is difficult to generate precisely matching numbers. Therefore, we used a modified BLEU score where numbers differing by at most five are treated as equal. Furthermore, WEATHERGOV is evaluated on the joint content selection and surface realization task, unlike ROBOCUP, where content selection and surface realization were treated separately, and SUMTIME, where content selection was not applicable. Table 3 shows the results. We see that OURSYSTEM substantially outperforms BASELINE, especially on BLEU score and semantic correctness. This difference shows that taking non-local context into account is very important in this domain. This result is not surprising, since WEATHERGOV is the most complicated of the three domains, and this complexity is exactly where non-locality is neces- x .",
"cite_spans": [],
"ref_spans": [
{
"start": 682,
"end": 689,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "WEATHERGOV Results",
"sec_num": "5.5"
},
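{
"text": "A sketch of the relaxed token matching behind this modified BLEU (hypothetical Python; the paper states only the tolerance of five, so the exact matching rule here is an assumption):\n\ndef tokens_match(ref_tok, hyp_tok, tol=5):\n    # numeric tokens count as equal when they differ by at most tol; others must match exactly\n    try:\n        return abs(float(ref_tok) - float(hyp_tok)) <= tol\n    except ValueError:\n        return ref_tok == hyp_tok",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEATHERGOV Results",
"sec_num": "5.5"
},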
{
"text": "x mph . Figure 8 : Outputs of systems on an example WEATHERGOV scenario. Most of the gains of OURSYSTEM over BASELINE come from improved content selection. For example, BASELINE chooses rainChance because it happens to be the most common first record type in the training data. However, since OURSYSTEM has features that depend on the value of rainChance (noChance in this case), it has learned to disprefer talking about rain when there is no rain. Also, OURSYSTEM has additional features on the entire history of chosen records, which enables it to choose a better sequence of records. sary. Interestingly, OURSYSTEM even outperforms HUMAN on semantic correctness, perhaps due to generating more straightforward renderings of the world state. Figure 8 describes example outputs for each system.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Figure 8",
"ref_id": null
},
{
"start": 745,
"end": 753,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "WEATHERGOV Results",
"sec_num": "5.5"
},
{
"text": "There has been a fair amount of work both on content selection and surface realization. In content selection, Barzilay and Lee (2004) use an approach based on local classification with edge-wise scores between local decisions. Our model, on the other hand, can capture higher-order constraints to enforce global coherence. Liang et al. (2009) introduces a generative model of the text given the world state, and in some ways is similar in spirit to our model. Although that model is capable of generation in principle, it was designed for unsupervised induction of hidden alignments (which is exactly what we use it for). Even if combined with a language model, generated text was much worse than our baseline.",
"cite_spans": [
{
"start": 110,
"end": 133,
"text": "Barzilay and Lee (2004)",
"ref_id": "BIBREF0"
},
{
"start": 323,
"end": 342,
"text": "Liang et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "The prominent approach for surface realization is rendering the text from a grammar. Wong and Mooney (2007) and Chen and Mooney (2008) use synchronous grammars that map a logical form, represented as a tree, into a parse of the text. Soricut and Marcu (2006) uses tree structures called WIDLexpressions (the acronym corresponds to four operations akin to the rewrite rules of a grammar) to represent the realization process, and, like our approach, operates in a log-linear framework. Belz (2008) and Belz and Kow (2009) also perform surface realization from a PCFG-like grammar. Lu et al. (2009) uses a conditional random field model over trees. Other authors have performed surface realization using various grammar formalisms, for instance CCG (White et al., 2007) , HPSG (Nakanishi et al., 2005) , and LFG (Cahill and van Genabith, 2006) .",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "Wong and Mooney (2007)",
"ref_id": "BIBREF20"
},
{
"start": 112,
"end": 134,
"text": "Chen and Mooney (2008)",
"ref_id": "BIBREF4"
},
{
"start": 234,
"end": 258,
"text": "Soricut and Marcu (2006)",
"ref_id": "BIBREF17"
},
{
"start": 485,
"end": 496,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
},
{
"start": 501,
"end": 520,
"text": "Belz and Kow (2009)",
"ref_id": "BIBREF1"
},
{
"start": 580,
"end": 596,
"text": "Lu et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 747,
"end": 767,
"text": "(White et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 775,
"end": 799,
"text": "(Nakanishi et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 810,
"end": 841,
"text": "(Cahill and van Genabith, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In each of the above cases, the decomposable structure of the tree/grammar enables tractability. However, we saw that it was important to include features that captured long-range dependencies. Our model is also similar in spirit to Ratnaparkhi (2002) in the use of non-local features, but we operate at three levels of hierarchy to include both content selection and surface realization.",
"cite_spans": [
{
"start": 233,
"end": 251,
"text": "Ratnaparkhi (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "One issue that arises with long-range dependencies is the lack of efficient algorithms for finding the optimal text. Koller and Striegnitz (2002) perform surface realization of a flat semantics, which is NPhard, so they recast the problem as non-projective dependency parsing. Ratnaparkhi (2002) uses beam search to find an approximate solution. We found that a greedy approach obtained better results than beam search; Belz (2008) found greedy approaches to be effective as well.",
"cite_spans": [
{
"start": 117,
"end": 145,
"text": "Koller and Striegnitz (2002)",
"ref_id": "BIBREF8"
},
{
"start": 277,
"end": 295,
"text": "Ratnaparkhi (2002)",
"ref_id": "BIBREF15"
},
{
"start": 420,
"end": 431,
"text": "Belz (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We have developed a simple yet powerful generation system that combines both content selection and surface realization in a domain independent way. Despite our approach being domain-independent, we were able to obtain performance comparable to the state-of-the-art across three domains. Additionally, the feature-based design of our approach makes it easy to incorporate domain-specific knowledge to increase performance even further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Catching the drift: Probabilistic content models, with applications to generation and summarization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Barzilay and L. Lee. 2004. Catching the drift: Prob- abilistic content models, with applications to genera- tion and summarization. In Human Language Tech- nology and North American Association for Computa- tional Linguistics (HLT/NAACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "System building cost vs. output quality in data-to-text generation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2009,
"venue": "European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "16--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz and E. Kow. 2009. System building cost vs. output quality in data-to-text generation. In European Workshop on Natural Language Generation, pages 16-24.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic generation of weather forecast texts using comprehensive probabilistic generationspace models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2008,
"venue": "Natural Language Engineering",
"volume": "14",
"issue": "4",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation- space models. Natural Language Engineering, 14(4):1-26.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Robust pcfgbased generation using automatically acquired LFG approximations",
"authors": [
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2006,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1033--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aoife Cahill and Josef van Genabith. 2006. Robust pcfg- based generation using automatically acquired LFG approximations. In Association for Computational Linguistics (ACL), pages 1033-1040, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to sportscast: A test of grounded language acquisition",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Chen",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML), pages 128-135.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coral: using natural language generation for navigational assistance",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Geldof",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Prost",
"suffix": ""
}
],
"year": 2003,
"venue": "Australasian computer science conference",
"volume": "",
"issue": "",
"pages": "35--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dale, S. Geldof, and J. Prost. 2003. Coral: using natu- ral language generation for navigational assistance. In Australasian computer science conference, pages 35- 44.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Techniques for text planning with XSLT",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Foster",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Workshop on NLP and XML: RDF/RDFS and OWL in Language Technology",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. E. Foster and M. White. 2004. Techniques for text planning with XSLT. In Workshop on NLP and XML: RDF/RDFS and OWL in Language Technology, pages 1-8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generation of biomedical arguments for lay readers",
"authors": [
{
"first": "N",
"middle": [],
"last": "Green",
"suffix": ""
}
],
"year": 2006,
"venue": "International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "114--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Green. 2006. Generation of biomedical arguments for lay readers. In International Natural Language Gen- eration Conference, pages 114-121.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generation as dependency parsing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Striegnitz",
"suffix": ""
}
],
"year": 2002,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Koller and K. Striegnitz. 2002. Generation as de- pendency parsing. In Association for Computational Linguistics (ACL), pages 17-24.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning semantic correspondences with less supervision",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and Inter- national Joint Conference on Natural Language Pro- cessing (ACL-IJCNLP).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the limited memory method for large scale optimization",
"authors": [
{
"first": "D",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming B",
"volume": "45",
"issue": "3",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. C. Liu and J. Nocedal. 1989. On the limited mem- ory method for large scale optimization. Mathemati- cal Programming B, 45(3):503-528.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Natural language generation with tree conditional random fields",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "W",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2009,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "400--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Lu, H. T. Ng, and W. S. Lee. 2009. Natural lan- guage generation with tree conditional random fields. In Empirical Methods in Natural Language Process- ing (EMNLP), pages 400-409.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Probabilistic models for disambiguation of an HPSG-based chart generator",
"authors": [
{
"first": "Hiroko",
"middle": [],
"last": "Nakanishi",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Parsing '05: Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroko Nakanishi, Yusuke Miyao, and Jun'ichi Tsujii. 2005. Probabilistic models for disambiguation of an HPSG-based chart generator. In Parsing '05: Pro- ceedings of the Ninth International Workshop on Pars- ing Technology, pages 93-102, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Association for Computational Linguis- tics (ACL).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Maximum entropy models for natural language ambiguity resolution",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. 1998. Maximum entropy models for nat- ural language ambiguity resolution. Ph.D. thesis, Uni- versity of Pennsylvania.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Trainable approaches to surface natural language generation and their application to conversational dialog systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer, Speech & Language",
"volume": "16",
"issue": "",
"pages": "435--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer, Speech & Language, 16:435-455.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Choosing words in computer-generated weather forecasts",
"authors": [
{
"first": "E",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Davy",
"suffix": ""
}
],
"year": 2005,
"venue": "Artificial Intelligence",
"volume": "167",
"issue": "",
"pages": "137--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Reiter, S. Sripada, J. Hunter, J. Yu, and I. Davy. 2005. Choosing words in computer-generated weather fore- casts. Artificial Intelligence, 167:137-169.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stochastic language generation using WIDL-expressions and its application in machine translation and summarization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1105--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Soricut and D. Marcu. 2006. Stochastic language generation using WIDL-expressions and its applica- tion in machine translation and summarization. In As- sociation for Computational Linguistics (ACL), pages 1105-1112.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generating approximate geographic descriptions",
"authors": [
{
"first": "R",
"middle": [],
"last": "Turner",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2009,
"venue": "European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Turner, Y. Sripada, and E. Reiter. 2009. Gener- ating approximate geographic descriptions. In Eu- ropean Workshop on Natural Language Generation, pages 42-49.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards broad coverage surface realization with CCG",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on Using Corpora for NLG: Language Generation and Machine Translation (UCNLG+MT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael White, Rajakrishnan Rajkumar, and Scott Mar- tin. 2007. Towards broad coverage surface realization with CCG. In In Proceedings of the Workshop on Us- ing Corpora for NLG: Language Generation and Ma- chine Translation (UCNLG+MT).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning synchronous grammars for semantic parsing with lambda calculus",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Wong",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "960--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Wong and R. J. Mooney. 2007. Learning syn- chronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguis- tics (ACL), pages 960-967.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "{time, min} and r 2 .v[time] = 5pm-6am (F2) F 2 = {time, min} and r 2 .v[min] = low T 2 = with a low around [min] (W1) Base(T 2 ) = with a low around [min] Coarse(T 2 ) = with a [time] around [min] (W2) Base(T 2 ) = with a low around [min] and r 2 .v[time] = 5pm-6am Coarse(T 2 ) = with a [time] around [min] and r 2 .v[time] = 5pm-6am Base(T 2 ) = with a low around [min] and r 2 .v[min] = low Coarse(T 2 ) = with a [time] around [min] and"
},
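The base/coarse distinction in the figure text above can be illustrated with a small, hypothetical Python sketch (the function and the word-to-field mapping are illustrative assumptions, not the paper's code): the coarse form of a template abstracts a word that realizes a field's value into that field's placeholder.

```python
# Hypothetical sketch: deriving a coarse template from a base template.
# The base form keeps literal words ("with a low around [min]"); the
# coarse form replaces words that realize a field's value with that
# field's placeholder ("with a [time] around [min]").

def coarse_template(base_template, word_to_field):
    tokens = []
    for tok in base_template.split():
        if tok.startswith("[") and tok.endswith("]"):
            tokens.append(tok)  # already a field placeholder
        elif tok in word_to_field:
            tokens.append("[" + word_to_field[tok] + "]")
        else:
            tokens.append(tok)
    return " ".join(tokens)

# Example mirroring the figure: "low" signals the time field.
print(coarse_template("with a low around [min]", {"low": "time"}))
# -> with a [time] around [min]
```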
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Records: Text: purple10 passes to purple9"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 4: \u2022 F1 : Value of time \u2022 F2 : Existence of gusts/wind direction/wind speeds \u2022 W1 : Change in wind direction (clockwise, counterclockwise, or none)"
},
"TABREF0": {
"text": "{time, min} Template T 1 = mostly cloudy , T 2 = with a low around [min] . (R1) r 2 .t = temperature and (r 1 .t, r 0 .t) = (skyCover, start) r 2 .t = temperature and (r 1 .t) = (skyCover) (R2) r 2 .t = temperature and {r 1 .t} = {skyCover} (R3) r 2 .t = temperature and r j .t = temperature \u2200j < 2 (R4) r 2 .t = temperature and r 2 .v[time] = 5pm-6am r 2 .t = temperature and r 2 .v[min] = low r 2 .t = temperature and r 2 .v[mean] = low r",
"content": "<table><tr><td>Text mostly cloudy , with a low around 45 .</td></tr><tr><td>Specific active (nonzero) features for highlighted decisions</td></tr><tr><td>r 2 = temperature 1</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF8": {
"text": "WEATHERGOV results. The BLEU score is on joint content selection and surface realization and is modified to not penalize numeric deviations of at most 5.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}