{
"paper_id": "K16-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:30.384708Z"
},
"title": "Modeling the Usage of Discourse Connectives as Rational Speech Acts",
"authors": [
{
"first": "Frances",
"middle": [],
"last": "Yung",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Taku",
"middle": [],
"last": "Komura",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Discourse relations can either be implicit or explicitly expressed by markers, such as 'therefore' and 'but'. How a speaker makes this choice is a question that is not well understood. We propose a psycholinguistic model that predicts whether a speaker will produce an explicit marker given the discourse relation s/he wishes to express. Based on the framework of the Rational Speech Acts model, we quantify the utility of producing a marker based on the information-theoretic measure of surprisal, the cost of production, and a bias to maintain uniform information density throughout the utterance. Experiments based on the Penn Discourse Treebank show that our approach outperforms stateof-the-art approaches, while giving an explanatory account of the speaker's choice.",
"pdf_parse": {
"paper_id": "K16-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "Discourse relations can either be implicit or explicitly expressed by markers, such as 'therefore' and 'but'. How a speaker makes this choice is a question that is not well understood. We propose a psycholinguistic model that predicts whether a speaker will produce an explicit marker given the discourse relation s/he wishes to express. Based on the framework of the Rational Speech Acts model, we quantify the utility of producing a marker based on the information-theoretic measure of surprisal, the cost of production, and a bias to maintain uniform information density throughout the utterance. Experiments based on the Penn Discourse Treebank show that our approach outperforms stateof-the-art approaches, while giving an explanatory account of the speaker's choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speakers or authors 1 produce informative utterances, such that the listeners or readers can understand his/her message. Grice's Maxim of Quantity states that human speakers communicate by being as informative as required, but no more (Grice, 1975) . If a speaker always tries to provide as much information as possible, the resulting utterance could become excessively long and tedious. Such utterance is not only effort consuming for the speaker to produce, but also contains redundant information that is not necessary for the listener.",
"cite_spans": [
{
"start": 235,
"end": 248,
"text": "(Grice, 1975)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we model how speakers plan the presentation of discourse structure optimally in terms of informativeness. Specifically, we propose a model that predicts whether the speaker will use or omit a discourse connective, given the sense of discourse relation s/he wants to convey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discourse relations are relations between unit of texts (known as arguments) that make a document coherent. These relations can be marked in the surface text or inferred by the readers, as shown in the below examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. It was a great movie, but I did not like it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. It was a great movie, therefore I liked it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. It was a great movie. I liked it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The word 'but' indicates a Concession relation in Example (1), and 'therefore' indicates a Result relation in Example (2). We call 'but' and 'therefore' explicit discourse connectives (DCs). In Example (3), DCs are absent but a Result relation can be inferred. We say the DC is implicit in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explicit DCs are highly informative cues to identify discourse relations (Pitler et al., 2008) while implicit DCs are more ambiguous. For example, 'I liked it' can also be read as a Justification for the first sentence in Example (3).",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Pitler et al., 2008)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Marking a discourse relation or not is subject to ambiguity and redundancy. On one hand, using an explicit DC avoids ambiguity. For example, if the DC 'but' is omitted in Example (1), readers may have problems in inferring the Concession sense. On the other hand, if the intended discourse sense is highly predictable, it is verbose or redundant to insert an explicit DC in the utterance, such as the DC 'therefore' in Example (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A model that predicts the markedness of discourse relations not only contributes to a better understanding of the human language production mechanism, but is also important in generating natural, humanlike texts and dialogues. In particular, the degree of markedness in discourse relations differs cross-lingually. Yung et al. (2015) analyze the manual alignments of explicit and implicit DCs in a Chinese-English translation corpus and find that 30% of implicit DCs in Chinese are translated to explicit DCs in English. It remains a challenge for machine translation systems to explicitate or implicitate discourse relations in the source texts as human translators do (Becher, 2011; Meyer and Webber, 2013; Zuffery and Cartoni, 2014; Hoek and Zufferey, 2015; , since the markedness of the translation is subject to the discourse planning of the target text.",
"cite_spans": [
{
"start": 315,
"end": 333,
"text": "Yung et al. (2015)",
"ref_id": "BIBREF56"
},
{
"start": 670,
"end": 684,
"text": "(Becher, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 685,
"end": 708,
"text": "Meyer and Webber, 2013;",
"ref_id": "BIBREF30"
},
{
"start": 709,
"end": 735,
"text": "Zuffery and Cartoni, 2014;",
"ref_id": "BIBREF58"
},
{
"start": 736,
"end": 760,
"text": "Hoek and Zufferey, 2015;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to explain how human speakers choose the optimal level of markedness in his utterance, we model how speakers rationally balance between ambiguity and redundancy. In particular, we use the Rational Speech Acts (RSA) model (Frank and Goodman, 2012) to predict how speakers reason about the ambiguity of an utterance. In addition, we model how speakers adjust the redundancy of the utterance following the Uniform Information Density (UID) principle (Levy and Jaeger, 2006) .",
"cite_spans": [
{
"start": 230,
"end": 255,
"text": "(Frank and Goodman, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 456,
"end": 479,
"text": "(Levy and Jaeger, 2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We apply the framework to predict whether an explicit or implicit DC is used in corpus data, given the two arguments of the discourse relations and the discourse sense to be conveyed. Our model not only achieves higher accuracy comparing with previous work (Patterson and Kehler, 2013) , but also provides an interpretable account of various cognitive factors behind the predicted decision.",
"cite_spans": [
{
"start": 257,
"end": 285,
"text": "(Patterson and Kehler, 2013)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We start by a review of related work in Section 2, followed by the descriptions of our model in Section 3 and experiments in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first provide background information on RSA and UID, which are used in our proposed method. It is followed by introduction of previous work about prediction of DC markedness in corpus data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The RSA model (Frank and Goodman, 2012) is a variation of the game-theoretic approach in prag-matics (J\u00e4ger, 2012) . It explains the communicative reasoning of a speaker and a listener in terms of Bayesian probabilities.",
"cite_spans": [
{
"start": 14,
"end": 39,
"text": "(Frank and Goodman, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 101,
"end": 114,
"text": "(J\u00e4ger, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "A rational listener assumes the utterance s/he hears contains the optimal amount of information. S/he predicts the intended message of a speaker by Bayesian inference (Equation 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "P listener (s|w, C) \u221d P speaker (w|s, C)P (s) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "where w is the utterance produced by the speaker; s is the message of an utterance; and C is the context. P speaker (w|s, C) represents the listener's predicted speaker's model, and P (s) represents the salience of the message, which is shared knowledge between the speaker and listener.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "A rational speaker chooses an utterance by softmax optimizing the expected utility (U (w; s, C)) of the utterance (Equation 2). (w;s,C) (2)",
"cite_spans": [
{
"start": 128,
"end": 135,
"text": "(w;s,C)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "P speaker (w|s, C) \u221d e \u03b1\u2022U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "\u03b1 is the decision noise parameter, which is set to 1 to represent a rational speaker 2 . S/He emulates the listener's interpretation and chooses an utterance s/he believes to be informative. Also, an utterance that is easy to produce is preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "Utility is thus defined as the informativeness (I(s; w, C)) of the utterance, deducted by the cost (D(w)) to produce it (Equation 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "U (w; s, C) = I(s; w, C) \u2212 D(w)",
"eq_num": "(3)"
}
],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "Since utterances that are unconventional and surprising are less useful, Informativeness is quantified as the negative surprisal of the utterance with respect to the message to be conveyed (Equation 4). I(s; w, C) = ln P (s|w, C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
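A minimal sketch (not part of the parsed paper) of the speaker side of RSA in Equations (2)-(4), written in Python; the senses, utterance lexicon, probabilities, and costs below are invented toy values, not the paper's data:

```python
import math

# Toy estimates of P(s | w, C); all numbers are invented for illustration.
P_S_GIVEN_W = {
    ("exp", "Concession"): 0.8, ("exp", "Result"): 0.2,
    ("imp", "Concession"): 0.3, ("imp", "Result"): 0.7,
}

def informativeness(s, w):
    # Equation (4): I(s; w, C) = ln P(s | w, C)
    return math.log(P_S_GIVEN_W[(w, s)])

def utility(s, w, cost):
    # Equation (3): U(w; s, C) = I(s; w, C) - D(w)
    return informativeness(s, w) - cost[w]

def speaker(s, utterances, cost, alpha=1.0):
    # Equation (2): P_speaker(w | s, C) is proportional to exp(alpha * U(w; s, C))
    scores = {w: math.exp(alpha * utility(s, w, cost)) for w in utterances}
    z = sum(scores.values())
    return {w: v / z for w, v in scores.items()}

# With these toy numbers the explicit DC wins for Concession despite its cost.
print(speaker("Concession", ["exp", "imp"], cost={"exp": 0.1, "imp": 0.0}))
```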
{
"text": "The RSA model has successfully simulated results of psycholinguistic experiments concerning different aspects of human communication, such as scalar implicature, referential expressions and language acquisition (Frank and Goodman, 2012; Goodman and Stuhlm\u00fcller, 2013; Smith et al., 2013; Kao et al., 2014; . Besides experimental data, Orita et al.(2015) applies RSA model to predict the choice of referring expressions in corpus data and Monroe and Potts (2015) optimizes a classifier based on RSA by inducing the semantic lexicon from a training corpus. These works focus on the pragmatic use of language, where the informativeness and lexicon of an utterance largely depends on the context (e.g. 'Red' is not valid to be used to refer to a blue ball).",
"cite_spans": [
{
"start": 211,
"end": 236,
"text": "(Frank and Goodman, 2012;",
"ref_id": "BIBREF10"
},
{
"start": 237,
"end": 267,
"text": "Goodman and Stuhlm\u00fcller, 2013;",
"ref_id": "BIBREF13"
},
{
"start": 268,
"end": 287,
"text": "Smith et al., 2013;",
"ref_id": "BIBREF49"
},
{
"start": 288,
"end": 305,
"text": "Kao et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 335,
"end": 353,
"text": "Orita et al.(2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "In this work, we apply RSA to predict the usage of DCs, which is more universal across different contexts (i.e. A DC can be used or dropped given various discourse senses and contexts). Our model is built upon the speaker's model of RSA to predict speaker's choice of explicit or implicit DCs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rational Speech Acts model",
"sec_num": "2.1"
},
{
"text": "The UID principle views language communication as a form of information transmission through a noisy channel and a constant rate of information flow is optimal according to Shannon's Information Theory (Levy and Jaeger, 2006; Genzel and Charniak, 2002; Shannon, 1948) . It states that speakers structure utterances by optimizing information density, which is the quantity of information (measured by surprisal 3 ) transmitted per unit of utterance, such as word.",
"cite_spans": [
{
"start": 202,
"end": 225,
"text": "(Levy and Jaeger, 2006;",
"ref_id": "BIBREF27"
},
{
"start": 226,
"end": 252,
"text": "Genzel and Charniak, 2002;",
"ref_id": "BIBREF12"
},
{
"start": 253,
"end": 267,
"text": "Shannon, 1948)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uniform Information Density",
"sec_num": "2.2"
},
{
"text": "Information density rises when the utterance is 'surprising' and drops when an utterance is highly predictable. To smooth the peaks and troughs, speakers adjust the ambiguity of an utterance by including or reducing linguistic markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uniform Information Density",
"sec_num": "2.2"
},
{
"text": "Following the UID principle, linguistic choices made by speakers are predicted more accurately by incorporating an information density predictor on top of other constraints. The predictor measures how easily a candidate utterance can be predicted and the speaker adjusts information density based on the expected predictability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uniform Information Density",
"sec_num": "2.2"
},
{
"text": "UID is applied to explain a variety of speaker's options, such as phonetic (Aylett and Turk, 2004) , morphological (Frank and Jaeger, 2008) and syntactic (Jaeger, 2010) reductions, and also referring expressions (Tily and Piantadosi, 2009) .",
"cite_spans": [
{
"start": 75,
"end": 98,
"text": "(Aylett and Turk, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 115,
"end": 139,
"text": "(Frank and Jaeger, 2008)",
"ref_id": "BIBREF11"
},
{
"start": 154,
"end": 168,
"text": "(Jaeger, 2010)",
"ref_id": "BIBREF20"
},
{
"start": 212,
"end": 239,
"text": "(Tily and Piantadosi, 2009)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uniform Information Density",
"sec_num": "2.2"
},
{
"text": "The choice of discourse marking strategies has been studied in earlier works as a subtask for natural language generation (Scott and de Souza, 1990; Moser and Moore, 1995; Grote and Stede, 1998; Soria and Ferrari, 1998; Allbritton and Moore, 1999) . In the absence of large-scale resources, investigations are based on manually derived rules and lexicons or psycholinguistic experiments.",
"cite_spans": [
{
"start": 122,
"end": 148,
"text": "(Scott and de Souza, 1990;",
"ref_id": "BIBREF45"
},
{
"start": 149,
"end": 171,
"text": "Moser and Moore, 1995;",
"ref_id": "BIBREF32"
},
{
"start": 172,
"end": 194,
"text": "Grote and Stede, 1998;",
"ref_id": "BIBREF16"
},
{
"start": 195,
"end": 219,
"text": "Soria and Ferrari, 1998;",
"ref_id": "BIBREF50"
},
{
"start": 220,
"end": 247,
"text": "Allbritton and Moore, 1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit vs. Implicit DCs",
"sec_num": "2.3"
},
{
"text": "More recently, Asr and Demberg (2012) presents an analysis of the PDTB, showing that 'causal' and 'continuous' senses are more often implicit, or marked by less specific DCs. Indeed these senses are presupposed by listeners according to linguistics theories (Segal et al., 1991; Murray, 1997; Levinson, 2000; Sanders, 2005; Kuperberg et al., 2011) . On the other hand, Asr and Demberg (2015) finds that DCs are more often dropped for the discourse relation Chosen Alternative (the relation typically signalled by the DC 'instead'), if the context contains negation words, which are identified cues for this relation. Similarly, contextual difference in explicit and implicit discourse relations are reported in attempts to train implicit DC classifiers based on explicit DC instances (Sporleder and Lascarides, 2008; Webber, 2009) .",
"cite_spans": [
{
"start": 258,
"end": 278,
"text": "(Segal et al., 1991;",
"ref_id": "BIBREF46"
},
{
"start": 279,
"end": 292,
"text": "Murray, 1997;",
"ref_id": "BIBREF33"
},
{
"start": 293,
"end": 308,
"text": "Levinson, 2000;",
"ref_id": "BIBREF26"
},
{
"start": 309,
"end": 323,
"text": "Sanders, 2005;",
"ref_id": "BIBREF44"
},
{
"start": 324,
"end": 347,
"text": "Kuperberg et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 784,
"end": 816,
"text": "(Sporleder and Lascarides, 2008;",
"ref_id": "BIBREF51"
},
{
"start": 817,
"end": 830,
"text": "Webber, 2009)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit vs. Implicit DCs",
"sec_num": "2.3"
},
{
"text": "Asr and Demberg (2012; 2015) attribute the corpus statistics to the UID hypothesis, which explains that expected, predictable relations are more likely to be conveyed implicitly, and thus more ambiguously, to maintain steady information flow. However, there are explicit 'causal' and 'continuous' relations and some Chosen Alternative are marked even argument 1 is negated. Although markedness measures are proposed to rate the implicitness of a relation sense (Asr and Demberg, 2013; Jin and de Marneffe, 2015), these measures only quantify the general markedness of the sense in the data, but not the speaker's choice for each particular instance. In contrast, this work specifically measures the predictability of a given relation; generalizes the approach to all discourse senses instead of particular senses or cues; and combines the markedness preference with other language production factors, in order to model each instance of relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit vs. Implicit DCs",
"sec_num": "2.3"
},
{
"text": "Patterson and Kehler (2013) is the only study we are aware of that predicts the choice of explicit or implicit DCs of each instance of relation. They argue that while the decision is related to the ease to infer the relation, it may also depend on other stylistic or textual factors. A classifier is trained to predict whether a candidate DC (i.e. the DC that actually occurs in the text as an explicit DC, or annotated as an implicit DC) is actually present, given the sense of the discourse relation and the arguments. Relatively shallow linguistic features are used, such as whether the relations are em-bedded or shared, the previous discourse relation, argument lengths, and content word ratios. The classifier is trained and tested on a subset of relations from the PDTB, after screening away infrequent senses and DCs. An overall high classification accuracy is achieved. Relation-level and discourse-level features are found to be more useful than argument-level features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit vs. Implicit DCs",
"sec_num": "2.3"
},
{
"text": "However, this work does not target at explaining why an utterance is preferred by the speaker. The focus is a data-driven approach that replicates the occurrence of DCs in the corpus data. Our work differs in that we model the option of markedness from the viewpoint of human language production, explaining the factors behind the speaker's choice. For example, we do not make use of the candidate DC as a feature, since it is the result of the speaker's choice, if an explicit DC is preferred. Nonetheless, our model achieves higher accuracy when evaluated on the same test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit vs. Implicit DCs",
"sec_num": "2.3"
},
{
"text": "Our model is based on the speaker's model of RSA. We first explain how we adapt the RSA model to discourse presentation, followed by the details of each component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The markedness model",
"sec_num": "3"
},
{
"text": "According to Equation (2), the probability for a speaker to use utterance w to convey his intended message s in context C is: (w;s,C) w \u2208W e U (w ;s,C)",
"cite_spans": [
{
"start": 126,
"end": 133,
"text": "(w;s,C)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "P (w|s, C) = e U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "In the case of discourse connectives, the utterance w comes from the set W = {(exp)licit, (imp)licit}, if both explicit and implicit DCs are grammatically valid to convey s, the sense of discourse relation. Our model thus predicts speaker's choice of DCs based on the following two probabilities: P (exp|s, C) = e U (exp;s,C) e U (exp;s,C) + e U (imp;s,C) P (imp|s, C) = e U (imp;s,C) e U (exp;s,C) + e U (imp;s,C) I(s; exp, C) is the informativeness of using an explicit DC to present the sense s in discourselevel context C. Each discourse sense has its salience within the discourse context. It means C is also informative, but we want to quantify the informativeness of the DC only. Therefore, we define I(s; exp, C) by the difference between the informativess of 'the explicit DC in context C' and the informativeness of 'context C', which are quantified by negative surprisal. I(s; exp, C) = ln P (s|exp, C) \u2212 ln P (s|C) 8High I(s; exp, C) means it is informative and not surprising to use an explicit DC for this sense. P (s|exp, C) and P (s|C) are extracted from corpus data. Details are explained in Subsection 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "The principle of UID is incorporated into the RSA model as a bias on the utility of the DCs. A discourse relation is presented not only by the DCs but also the arguments, and the amount of discourse information of the whole utterance (DC + arguments) is fixed. According to UID, information should be transmitted uniformly across the utterance. If the arguments has much information about the sense, the sense is predictable from the arguments and thus the surprisal is small. The information density drops and has to be smoothed by using a more ambiguous, less predictable utterance, which can be achieved by reduction of a DC (Asr and Demberg, 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "Therefore, according to UID, an implicit DC is preferred if the arguments are informative. We thus raise the utility of an implicit DC by defining the probability for a speaker to choose an implicit DC to be proportional to the sum of the the utilities of a null DC and the arguments (args) 4 . e U (imp;s,C) = e U (null;s,C) + e U (args;s,C) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "U (null; s, C) = I(s; null, C) \u2212 D(null) (10) U (args; s, C) = I(s; arg, C) \u2212 D(args) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "The amount of information that the null DC provides for the discourse relation is defined similarly as in Equation (8):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I(s; null, C) = ln P (s|null, C) \u2212 ln P (s|C)",
"eq_num": "(12)"
}
],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "On the other hand, the informativeness of arguments, I(s; arg, C) is quantified by negative surprisal in RSA. However, arguments are clauses and sentences. It is not applicable to extract P (s|args, C) from the corpus. We thus approximate I(s; arg, C) by the confidence of a discourse parser in predicting discourse senses from the arguments. Details will be explained in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "Lastly, various psycholinguistically motivated measures are explored to approximate the prodcution cost D(exp) in Subsection 3.4. In contrast, no effort is required to produce a null DC. Also, we assume that the arguments have been produced to convey other information irrespective of their discourse informativeness, so no extra effort is needed. Therefore, D(null) and D(args) both equal 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
{
"text": "To summarize, the model predicts that the speaker will use an explicit DC if: e U (exp;s,C) > e U (null;s,C) + e U (args;s,C) 13and that s/he will use an implicit DC otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RSA for discourse relation presentation",
"sec_num": "3.1"
},
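A minimal sketch (not part of the parsed paper) of the decision rule in Equations (8)-(13), in Python; the probability estimates passed in are invented stand-ins for the corpus-derived distributions of Subsection 3.2 and the parser confidence of Subsection 3.3:

```python
import math

def informativeness(p_s_given_dc_ctx, p_s_given_ctx):
    # Equations (8) and (12): I(s; dc, C) = ln P(s | dc, C) - ln P(s | C)
    return math.log(p_s_given_dc_ctx) - math.log(p_s_given_ctx)

def use_explicit_dc(p_exp, p_null, p_ctx, i_args, d_exp):
    # Utilities per Equations (3), (10), (11); D(null) = D(args) = 0.
    u_exp = informativeness(p_exp, p_ctx) - d_exp
    u_null = informativeness(p_null, p_ctx)
    u_args = i_args  # approximated by parser confidence (Section 3.3)
    # Equation (13): explicit iff e^U(exp) > e^U(null) + e^U(args)
    return math.exp(u_exp) > math.exp(u_null) + math.exp(u_args)

# Invented numbers for a sense that is already well signalled by its arguments,
# so the model prefers the implicit DC (prints False).
print(use_explicit_dc(p_exp=0.4, p_null=0.2, p_ctx=0.25, i_args=-0.5, d_exp=0.3))
```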
{
"text": "This section explains how we estimate the informativeness in Equations 8and 12. In discourse production, the utterance lexicon, W = {exp, imp} in Equation (5), and the set of speaker's intended messages (all possible discourse relation senses) are always valid 5 . Thus P (s|C), P (s|exp, C), and P (s|null, C) are universal distributions and can be extracted from corpus data based on the co-occurrences of senses, DCs, and contexts. We extract these empirical distributions from the training portion of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of DCs",
"sec_num": "3.2"
},
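A minimal sketch (not part of the parsed paper) of extracting the empirical distributions P(s|exp, C), P(s|null, C), and P(s|C) as relative frequencies from co-occurrence counts; the (sense, form, context) training tuples are invented:

```python
from collections import Counter

# Hypothetical training instances: (sense, form, context); form is "exp" or "null".
TRAIN = [("Result", "null", "prev=Conjunction"),
         ("Result", "exp", "prev=Conjunction"),
         ("Concession", "exp", "prev=Conjunction")]

sense_form_ctx = Counter(TRAIN)
form_ctx = Counter((f, c) for _, f, c in TRAIN)
sense_ctx = Counter((s, c) for s, _, c in TRAIN)
ctx = Counter(c for _, _, c in TRAIN)

def p_sense_given_form(s, f, c):
    # Relative-frequency estimate of P(s | form, C)
    return sense_form_ctx[(s, f, c)] / form_ctx[(f, c)]

def p_sense(s, c):
    # Relative-frequency estimate of P(s | C)
    return sense_ctx[(s, c)] / ctx[c]

print(p_sense_given_form("Result", "null", "prev=Conjunction"))  # 1.0
print(p_sense("Result", "prev=Conjunction"))                     # 2/3
```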
{
"text": "We define context C as the surrounding discourse relations. Specifically, the discourse contexts (and their abbreviation in Table 2 ) are: the full discourse sense annotated in PDTB (S), the 4-way top level sense (TS), the form of discourse presentation (F) such as 'explicit' or 'implicit' 6 , and the pair of sense and form (SF or TSF). The contexts are taken from window sizes of 1 to 2: previous one (10) , next one (01), previous two (20), next two (02), previous one paired with next one (11). We hypothesize that the speaker also thinks ahead the coming discourse structures when planning the current ones. Various discourse contexts are compared in the experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Informativeness of DCs",
"sec_num": "3.2"
},
{
"text": "I(s; arg, C) in Equation 11refers to the amount of information in the arguments that contributes to the interpretation of the discourse sense. According to UID, information density drops when the discourse sense is predictable from the arguments alone, and an implicit DC is preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "Presence of features in the arguments that signal a particular sense makes the sense more predictable, and thus promote the reduction of a DC. For example, the DC 'instead' is less used to present the Chosen Alternative sense if the first argument is negated (Asr and Demberg, 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "Generalizing this idea to capture various cues in the arguments for various senses, we approximate I(s; arg, C) by the confidence of an automatic discourse parser in predicting the discourse sense. An implicit relation parser uses various features in the arguments to identify the implicit relation sense (Pitler et al., 2009; Lin et al., 2009; Park and Cardi, 2012; Rutherford and Xue, 2014) . If the arguments contain much informative features, the parser will predict the sense more confidently.",
"cite_spans": [
{
"start": 305,
"end": 326,
"text": "(Pitler et al., 2009;",
"ref_id": "BIBREF39"
},
{
"start": 327,
"end": 344,
"text": "Lin et al., 2009;",
"ref_id": "BIBREF28"
},
{
"start": 345,
"end": 366,
"text": "Park and Cardi, 2012;",
"ref_id": "BIBREF36"
},
{
"start": 367,
"end": 392,
"text": "Rutherford and Xue, 2014)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "We propose two methods, for comparison, to measure the confidence of the parser prediction. A confident prediction means the parser will assign a high probability to the one output sense. Therefore, we use the negative surprisal of the estimated probability P p of the parser output sense s output (Equation 14) to approximate I(s; arg, C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I(s; arg, C) \u2248 w a \u2022 ln P p (s output )",
"eq_num": "(14)"
}
],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "At the same time, the probability distribution of all senses is less uniform if one sense is assigned a high probability. We thus alternatively approximate I(s; arg, C) by the negative entropy of the probability distribution estimated by the parser (Equation 15) 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "I(s; arg, C) \u2248 w a sp\u2208O P p (s p ) log P p (s p ) (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "where O is the set of senses defined in the parser and w a is a positive weight tuned on the dev set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
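A minimal sketch (not part of the parsed paper) of the two parser-confidence approximations in Equations (14) and (15); the parser output distribution below is invented:

```python
import math

def surprisal_approx(parser_probs, w_a=1.0):
    # Equation (14): I(s; arg, C) ~ w_a * ln P_p(s_output),
    # where s_output is the parser's top-scoring sense.
    return w_a * math.log(max(parser_probs.values()))

def entropy_approx(parser_probs, w_a=1.0):
    # Equation (15): I(s; arg, C) ~ w_a * sum_{s_p in O} P_p(s_p) log P_p(s_p),
    # i.e. the negative entropy of the parser's output distribution.
    return w_a * sum(p * math.log(p) for p in parser_probs.values() if p > 0)

# Invented parser output distribution over a few senses.
probs = {"Result": 0.6, "Conjunction": 0.3, "Concession": 0.1}
print(surprisal_approx(probs), entropy_approx(probs))
```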
{
"text": "We measure the general informativeness of the arguments to imply any discourse senses, so s output does not necessarily equal s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "We employ the implicit sense classifier from the winning parser of shared task 2015 (Wang and Lan, 2015) , which is designed to identify a subset of 14 implicit senses plus the entity relation. The two arguments of a relation instance, which can actually be explicit or implicit, are passed to the implicit DC classifier and I(s; arg, C) is approximated based on the output probabilities 8 . Although the performance of this state-of-the-art implicit DC classifier is still unsatisfactory (34.45% on PDTB Section 23 9 ) , our method only makes use of the probability estimation of the prediction 10 .",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Wang and Lan, 2015)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "Our motivation of using the implicit DC classifier is based on the hypothesis that the classifier can better predict the sense of relations that are actually implicit, than those that are actually explicit, since more features in the arguments are identifiable. In fact, it is the case. The classification accuracy of the originally explicit relations is significantly lower. This supports our motivation to use the parser estimation as an information density predictor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness of arguments",
"sec_num": "3.3"
},
{
"text": "The cost function D(exp) models speaker's effort required to produce an explicit DC for the intended discourse sense. We propose 5 versions of the cost function that are inspired by existing psycholinguistic findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "Mean DC length: Production cost intuitively increases with word length. We define the mean DC length of a discourse relation as the mean word length of all valid DCs for that sense, normalized by the average word length of all DCs. A lexicon of possible DC per each discourse sense is derived from the whole corpus. For multi-word DCs, a white space is simply counted 8 The implicit DC classifier is trained by Na\u00efve Bayes based on features including syntactic features, polarity, immediately preceding DC, and Brown cluster pairs. Syntactic features are based on automatic parsing using Stanford CoreNLP (Manning et al., 2014) . The parser is trained on the same sections of the PDTB as the training set used in our experiment.",
"cite_spans": [
{
"start": 368,
"end": 369,
"text": "8",
"ref_id": null
},
{
"start": 605,
"end": 627,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "9 http://www.cs.brandeis.edu/\u02dcclp/ conll15st/results.html 10 We use the parser's probability estimates as is; conceivably it may be improved by an additional probabilistic calibration step (Nguyen and O'Connor, 2015) . as one character. We do not use the length of the candidate DC (refer to Section 2.3), because we view that speakers first decide to use an explicit DC or not, then decide which DC best expresses the relation.",
"cite_spans": [
{
"start": 58,
"end": 60,
"text": "10",
"ref_id": null
},
{
"start": 189,
"end": 216,
"text": "(Nguyen and O'Connor, 2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "DC/arg2 ratio: Similarly, we use the mean word count normalized by the word count of argument 2 as another version of cost function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "Prime frequency: Structural priming refers to the tendency for human to process a linguistic construction (the target) more easily if the construction is used before. In terms of language production, a speaker tends to repeat a previous construction (the prime) since it consumes less effort than to generate an alternative construction. We use the reciprocal of the count of primes (any explicit DC occurring before the current position) as the production cost, since the strength of priming effect is known to be increasing with the frequency of the primes (Levelt and Kelter, 1982; Bock, 1986; Smith and Wheeldon, 2001 ).",
"cite_spans": [
{
"start": 559,
"end": 584,
"text": "(Levelt and Kelter, 1982;",
"ref_id": "BIBREF25"
},
{
"start": 585,
"end": 596,
"text": "Bock, 1986;",
"ref_id": "BIBREF8"
},
{
"start": 597,
"end": 621,
"text": "Smith and Wheeldon, 2001",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "Prime distance: We also use the prime-target distance, normalized by the length of the article, as another version of the production cost. Psycholinguistic findings suggest that the priming effect is more subtly affected by the prime-target distance (Gries, 2005; Bock et al., 2007; Jaeger and Snider, 2008) .",
"cite_spans": [
{
"start": 250,
"end": 263,
"text": "(Gries, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 264,
"end": 282,
"text": "Bock et al., 2007;",
"ref_id": "BIBREF7"
},
{
"start": 283,
"end": 307,
"text": "Jaeger and Snider, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "Distance from start: We use the relative position of the relation within the article as the production cost. We hypothesize that more effort is needed as the production proceeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "The range of values of the cost function depends on the cost definition. We thus adjust the values with a constant weight w c that is tuned on the dev set in the experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cost function",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(exp) = w c \u2022 cost(exp)",
"eq_num": "(16)"
}
],
"section": "Cost function",
"sec_num": "3.4"
},
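A minimal sketch (not part of the parsed paper) of three of the cost variants from Section 3.4 and the weighting in Equation (16); the DC lexicon and corpus statistics below are placeholders:

```python
def mean_dc_length_cost(valid_dcs, avg_dc_len):
    # Mean word length of all valid DCs for the sense, normalized by the average
    # word length of all DCs; a space in a multi-word DC counts as one character.
    return (sum(len(dc) for dc in valid_dcs) / len(valid_dcs)) / avg_dc_len

def prime_frequency_cost(n_primes):
    # Reciprocal of the count of primes (explicit DCs seen so far in the text).
    return 1.0 / n_primes if n_primes else 1.0

def distance_from_start_cost(position, article_length):
    # Relative position of the relation within the article.
    return position / article_length

def d_exp(cost_value, w_c):
    # Equation (16): D(exp) = w_c * cost(exp)
    return w_c * cost_value

print(d_exp(mean_dc_length_cost(["but", "in contrast"], avg_dc_len=6.0), w_c=0.5))
```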
{
"text": "We apply the model to simulate speaker's choice of explicit or implicit DC for discourse relations in the PDTB corpus. The aim of the experiment is to answer two questions: (1) Does the model explain the factors affecting speaker's choice of DC markedness? If the hypotheses of the model is appropriate, each component in the model should contribute to the prediction accuracy. (2) How does the prediction performance compare with the state-of-the-art, i.e. Patterson and Kelher (2013) ?",
"cite_spans": [
{
"start": 458,
"end": 485,
"text": "Patterson and Kelher (2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We first describe the details of the data we use in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "The Penn Discourse Treebank (PDTB) is the largest available discourse-annotated corpus in English (Prasad et al., 2008) . The text are news articles collected from the Wall Street Journals. Below are 3 examples of the annotation.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data: The Penn Discourse Treebank",
"sec_num": "4.1"
},
{
"text": "1. The OTC market has only a handful of takeover-related stocks. Explicit DCs are labelled with relation senses (Example 1). If an explicit DC is absent between two sentences within the same paragraph and an implicit relation can be inferred, a candidate DC and the relation sense are annotated (Example 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data: The Penn Discourse Treebank",
"sec_num": "4.1"
},
{
"text": "Our model is based on the assumption that W = {explicit, implicit} for all relations, yet it is notable that intra-sentential implicit DCs are not annotated in the PDTB (Prasad et al., 2014) . We thus exclude intra-sentential samples, such that W = {explicit, implicit} is always true and free of grammatical constraints. Also, as a result of the annotation procedure, implicit DCs always occur in between 2 arguments in their original order, i.e. Arg1-DC-Arg2. To preserve the original order of the discourse arguments, which is also part of the communicative structure intended by the speaker but out of the scope of this model, we only use samples in the Arg1-DC-Arg2 order. For example, Example (3) is excluded from our training data. Finally, annotations of other forms of discourse relations, such as entity relations and attributions, are also excluded.",
"cite_spans": [
{
"start": 169,
"end": 190,
"text": "(Prasad et al., 2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data: The Penn Discourse Treebank",
"sec_num": "4.1"
},
{
"text": "The screened data set contains 5,201 explicit and 16,049 implicit relations 11 . Sections 2-22 are used as the training set, from which probability distributions are extracted. For easier comparison with previous work, we select the dev set (sections 0-1) and test set (sections 23-24) in the same way as in Patterson and Kehler (2013) , where only relations of infrequent DCs and senses are removed. berg, 2013; Prasad et al., 2014) . Multi-sense is an important factor of our DC production model: a speaker could have chosen an explicit DC for each sense, but if s/he has to express two senses at the same time, an implicit DC could be more usable. Therefore, we treat all combination of senses as individual senses, each containing 1 to 3 joint sense labels 13 This results in a total of 122 senses. Table 1 is a summary of the distribution in descending order of frequency. In fact, joint multisenses are not rare: the most frequent multi-sense is the 17th most frequent sense.",
"cite_spans": [
{
"start": 308,
"end": 335,
"text": "Patterson and Kehler (2013)",
"ref_id": "BIBREF37"
},
{
"start": 401,
"end": 412,
"text": "berg, 2013;",
"ref_id": null
},
{
"start": 413,
"end": 433,
"text": "Prasad et al., 2014)",
"ref_id": "BIBREF42"
},
{
"start": 761,
"end": 763,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 803,
"end": 810,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data: The Penn Discourse Treebank",
"sec_num": "4.1"
},
{
"text": "We apply the markedness model to predict the speaker's choice of DC markedness on the dev and test sets. Table 2 shows the results under 12 Similarly, certain level 2 senses, as in Example (2), are backoffed from level 3 senses due to annotator disagreement. This is also a kind of multi-sense.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 136,
"text": "Table 2 shows the results under",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "13 There is only 1 sample of 3 joint labels in our screened dataset. various settings, evaluated by accuracy and the harmonic mean of precision and recall for explicit and implicit relations respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Row BL shows the results of the markedness model without the cost function and argument informativeness component, and with constant context C. We consider this setting as the baseline, in which the prediction is solely based on the distributions of P (s|exp) and P (s|imp). Considerably high accuracy is achieved, suggesting that the speaker's choice of markedness is strongly related to the intended discourse sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Row (a) shows the prediction results based on the distributions of P (s|exp, C) and P (s|imp, C), where C is the discourse context. The 5 best combinations of contexts and window sizes are shown. Refining the utility of DCs by these contextual constraints, in particular previous contexts, improves the classification accuracy, but the improvement is not significant. This suggests that speaker's choice of markedness not only depends on surrounding discourse relations but also other contextual factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Row (b) shows the contribution of the argument informativeness component, under constant dis-course context and production cost. Classification accuracy increases (significantly for the dev set) when the usability of explicit DC is deducted by the estimated informativeness of the arguments, supporting the UID principle. Predictions based on the surprisal of the parser output sense and the entropy of the parser output distribution are similar. We also experiment by adjusting with the estimated argument informativeness only if the parser output sense is correct (matching at the top level sense). Similar improvement is observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Row (c) shows the contribution of the cost function, when discourse context is set as constant and argument informativeness is not considered. Adjusting the utility of explicit DCs by their production cost increases the classification accuracy most significantly. Among the various features to model production cost, 'DC length' and 'distance from start' features give the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Row (d) shows the performance of predictions based on the 3 best combinations of components. The highest accuracies and F 1 scores are achieved for both explicit and implicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "These results answer the first question of the experiment purpose: the proposed model explains the speaker's choice of DC markedness in terms of DC and argument informativeness, and production cost, while contextual discourse structure is a moderate constraint to the choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The answer to the second question is also positive. Significant improvement above the state-of-the-art (Row SOA) is achieved by the 2 best combinations (89.0%, 88.9% vs. 86.6%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Lastly, we compare the results with a linear classifier trained on the features specified in the model, i.e. the discrete values of the intended sense and various discourse context definitions, and real values of various cost functions and argument informativeness estimates. Note that in the proposed model, the training data is used to derive the P (s|exp, C) and P (s|null, C) distributions only, while the linear classifier learns from the features and DC markedness of the training set 14 . The classifier achieves accuracy of 88.3% on the test set, which does not significantly outperform previous work. This suggests the advantage of the 14 When extracting the argument informativeness features from the training set, using the automatic discourse parser, we penalize the parser estimates of the implicit samples by a constant ratio, since the discourse parser is also trained on these samples. We use LIBLINEAR (Fan et al., 2008) to build the classifiers.",
"cite_spans": [
{
"start": 645,
"end": 647,
"text": "14",
"ref_id": null
},
{
"start": 909,
"end": 937,
"text": "LIBLINEAR (Fan et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "information-theoretic configuration of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We present a language production model that predicts a speaker's choice of using an explicit DC or not given the discourse relation s/he wants to express. Our model gives an cognitive account of the speaker's choice and also outperforms previous work on the same task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our study shows that a speaker organizes the discourse structure by balancing the pro (informativeness) and con (production cost and redundancy) of using an explicit marker, although the option is a subtle preference in the absence of other grammatical constraints. Using an information-theoretic approach, our model tackles the option as a rational preference by the speaker, who wants to contribute to an informative speech act. Furthermore, we take a logical step forward to formalize the idea of the UID theory, that redundant explicit markers are avoided if the discourse relation is clear enough from the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As future work, we plan to improve the markedness model by making fuller use of the training data, such as learning a more expressive formulation of the context governing the choice of explicit or implicit DCs. We also plan to evaluate the effectiveness of the model in applications, such as natural language generation or machine translation tasks. On the other hand, as discourse presentation differs across genres (Webber, 2009) and mediums (Tonelli et al., 2010) , the model can be applied to predict the explicitation of discourse relations from, for example, news articles to spoken dialogues. Another direction is to apply the RSA framework in the opposite direction -to build a listener's model that simulates a listener's recognition of a discourse sense given an utterance, as proposed in Yung et al.(2016) .",
"cite_spans": [
{
"start": 417,
"end": 431,
"text": "(Webber, 2009)",
"ref_id": "BIBREF55"
},
{
"start": 444,
"end": 466,
"text": "(Tonelli et al., 2010)",
"ref_id": "BIBREF53"
},
{
"start": 799,
"end": 816,
"text": "Yung et al.(2016)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "'Speakers' and 'listeners' are interchangeably used with 'authors' and 'readers' in this article",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u03b1 = 0 means the decision is totally unrelated to pragmatic reasoning. \u03b1 = 1 represents the Luce's choice axiom(Frank and Goodman, 2012), i.e. a rational decision without bias. \u03b1 > 1 suggests biased choices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is opposite to 'informativeness' in RSA, which is defined by negative surprisal (Equation 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In turn, an explicit DC is preferred if the arguments are not informative. We could also penalize the utility of an explicit DC by the argument utility, but the result will be the same since the decision is based on Equation 13.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In case of referring expressions, for example, the lists of referents and grammatically correct pronouns differ case by case, e.g. 'she' is not a valid pronoun for a male.6 We use the 5 forms of discourse presentation defined in the PDTB: explicit DC, implicit DC, alternative lexicalization, entity relation and 'no relation'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we use information-theoretic measures to approximate I(s; arg, C), but these approximations are not related to the formulation of RSA nor UID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "4 cases of intra-sentential implicit relations, due to sentence splitting errors of the PTB (single sentences wrongly splitted into two), are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their valuable feedback on the previous versions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discourse cues in narrative text: Using production to predict comprehension",
"authors": [
{
"first": "David",
"middle": [],
"last": "Allbritton",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 1999,
"venue": "AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Allbritton and Johanna Moore. 1999. Discourse cues in narrative text: Using production to predict comprehension. In AAAI Fall Symposium on Psy- chological Models of Communication in Collabora- tive Systems.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Implicitness of discourse relations",
"authors": [
{
"first": "Fatemeh",
"middle": [
"Torabi"
],
"last": "Asr",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "2669--2684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr and Vera Demberg. 2012. Im- plicitness of discourse relations. In COLING, pages 2669-2684. Citeseer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the information conveyed by discourse markers",
"authors": [
{
"first": "Fatemeh",
"middle": [
"Torabi"
],
"last": "Asr",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "84--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr and Vera Demberg. 2013. On the information conveyed by discourse markers. In Pro- ceedings of the Fourth Annual Workshop on Cogni- tive Modeling and Computational Linguistics, pages 84-93.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Uniform information density at the level of discourse relations: Negation markers and discourse connective omission",
"authors": [
{
"first": "Fatemeh",
"middle": [
"Torabi"
],
"last": "Asr",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Computation Semantics",
"volume": "",
"issue": "",
"pages": "118--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr and Vera Demberg. 2015. Uni- form information density at the level of discourse re- lations: Negation markers and discourse connective omission. Proceedings of the International Confer- ence on Computation Semantics, pages 118-128.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Aylett",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Turk",
"suffix": ""
}
],
"year": 2004,
"venue": "Language and Speech",
"volume": "47",
"issue": "1",
"pages": "31--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Aylett and Alice Turk. 2004. The smooth sig- nal redundancy hypothesis: A functional explana- tion for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47(1):31-56.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "When and why do translators add connectives? a corpus-based study",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Becher",
"suffix": ""
}
],
"year": 2011,
"venue": "Target",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Becher. 2011. When and why do translators add connectives? a corpus-based study. Target, 23(1).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pragmatic reasoning through semantic inference",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Bergen",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Bergen, Roger Levy, and Noah D. Goodman. 2014. Pragmatic reasoning through semantic infer- ence.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Persistent structural priming from language comprehension to language production",
"authors": [
{
"first": "Kathryn",
"middle": [],
"last": "Bock",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"S"
],
"last": "Dell",
"suffix": ""
},
{
"first": "Franklin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristine",
"middle": [
"H"
],
"last": "Onishi",
"suffix": ""
}
],
"year": 2007,
"venue": "Cognition",
"volume": "104",
"issue": "3",
"pages": "437--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathryn Bock, Gary S Dell, Franklin Chang, and Kris- tine H Onishi. 2007. Persistent structural priming from language comprehension to language produc- tion. Cognition, 104(3):437-458.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Syntactic persistence in language production",
"authors": [
{
"first": "Kathryn",
"middle": [],
"last": "Bock",
"suffix": ""
}
],
"year": 1986,
"venue": "Cognitive psychology",
"volume": "18",
"issue": "3",
"pages": "355--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Kathryn Bock. 1986. Syntactic persistence in language production. Cognitive psychology, 18(3):355-387.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Liblinear: A library for large linear classification",
"authors": [
{
"first": "Rong-En",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "The Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Predicting pragmatic reasoning in lanugage games",
"authors": [
{
"first": "Michael",
"middle": [
"C"
],
"last": "Frank",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2012,
"venue": "Science",
"volume": "336",
"issue": "6084",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael C. Frank and Noah D. Goodman. 2012. Pre- dicting pragmatic reasoning in lanugage games. Sci- ence, 336(6084):998.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speaking rationally: Uniform information density as an optimal strategy for language production",
"authors": [
{
"first": "Austin",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Jaeger",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "933--938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austin Frank and T Florian Jaeger. 2008. Speaking rationally: Uniform information density as an opti- mal strategy for language production. Proceedings of the Annual Meeting of the Cognitive Science So- ciety, pages 933-938.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Entropy rage constancy in text",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Genzel",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "199--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitriy Genzel and Eugene Charniak. 2002. Entropy rage constancy in text. Proceedings of the Annual Meeting of the Association for Computational Lin- guistics, pages 199-206.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Knowledge and implicature: modeling language understanding as social cognition",
"authors": [
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stuhlm\u00fcller",
"suffix": ""
}
],
"year": 2013,
"venue": "Topics in cognitive science",
"volume": "5",
"issue": "1",
"pages": "173--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah D Goodman and Andreas Stuhlm\u00fcller. 2013. Knowledge and implicature: modeling language un- derstanding as social cognition. Topics in cognitive science, 5(1):173-184.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Logic and conversation",
"authors": [
{
"first": "H",
"middle": [
"Paul"
],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics",
"volume": "3",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H Paul Grice. 1975. Logic and conversation. Syntax and Semantics, 3:41-58.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Syntactic priming: A corpusbased approach",
"authors": [
{
"first": "Stefan",
"middle": [
"Th"
],
"last": "Gries",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of psycholinguistic research",
"volume": "34",
"issue": "4",
"pages": "365--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Th Gries. 2005. Syntactic priming: A corpus- based approach. Journal of psycholinguistic re- search, 34(4):365-399.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discourse marker choice in sentence planning",
"authors": [
{
"first": "Brigitte",
"middle": [],
"last": "Grote",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Ninth International Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "128--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brigitte Grote and Manfred Stede. 1998. Discourse marker choice in sentence planning. In Proceedings of the Ninth International Workshop on Natural Lan- guage Generation, pages 128-137.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Factors influencing the implicitation of discourse relations across languages",
"authors": [
{
"first": "Jet",
"middle": [],
"last": "Hoek",
"suffix": ""
},
{
"first": "Sandrine",
"middle": [],
"last": "Zufferey",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings 11th Joint ACL-ISO Workshop on Interoperable Semantic Annotation (isa-11)",
"volume": "",
"issue": "",
"pages": "39--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jet Hoek and Sandrine Zufferey. 2015. Factors in- fluencing the implicitation of discourse relations across languages. In Proceedings 11th Joint ACL- ISO Workshop on Interoperable Semantic Annota- tion (isa-11), pages 39-45. TiCC, Tilburg center for Cognition and Communication.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The role of expectedness in the implicitation and explicitation of discourse relations",
"authors": [
{
"first": "Jet",
"middle": [],
"last": "Hoek",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [],
"last": "Evers-Vermeul",
"suffix": ""
},
{
"first": "Ted Jm",
"middle": [],
"last": "Sanders",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Second Workshop on Discourse in Machine Translation (DiscoMT)",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jet Hoek, Jacqueline Evers-Vermeul, and Ted JM Sanders. 2015. The role of expectedness in the im- plicitation and explicitation of discourse relations. In Proceedings of the Second Workshop on Dis- course in Machine Translation (DiscoMT), pages 41-46. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Implicit learning and syntactic persistence: Surprisal and cumulativity",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Jaeger",
"suffix": ""
},
{
"first": "Neal",
"middle": [],
"last": "Snider",
"suffix": ""
}
],
"year": 2008,
"venue": "The 30th Annual Meeting of the Cognitive Science Society (CogSci08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Florian Jaeger and Neal Snider. 2008. Implicit learn- ing and syntactic persistence: Surprisal and cumula- tivity. In The 30th Annual Meeting of the Cognitive Science Society (CogSci08), page 827.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Redundancy and reduction: Speakers manage syntactic information density",
"authors": [
{
"first": "T",
"middle": [
"Florian"
],
"last": "Jaeger",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive psychology",
"volume": "61",
"issue": "1",
"pages": "23--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Florian Jaeger. 2010. Redundancy and reduc- tion: Speakers manage syntactic information den- sity. Cognitive psychology, 61(1):23-62.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Game theory in semantics and pragmatics",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2012,
"venue": "Semantics: An International Handbook of Natural Language Meaning",
"volume": "3",
"issue": "",
"pages": "2487--2425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2012. Game theory in semantics and pragmatics. In Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors, Semantics: An International Handbook of Natural Language Meaning, volume 3, pages 2487-2425. Mouton de Gruyter.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The overall markedness of discourse relations. Proceedings of the Conference on Empirical Methods on Natural Language Processing",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Jin and Marie-Catherine de Marneffe. 2015. The overall markedness of discourse relations. Pro- ceedings of the Conference on Empirical Methods on Natural Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Nonliteral understanding of number words",
"authors": [
{
"first": "Justine",
"middle": [
"T"
],
"last": "Kao",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Bergen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "111",
"issue": "33",
"pages": "12002--12007",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justine T Kao, Jean Y Wu, Leon Bergen, and Noah D Goodman. 2014. Nonliteral understanding of num- ber words. Proceedings of the National Academy of Sciences, 111(33):12002-12007.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Establishing causal coherence across sentences: An erp study",
"authors": [
{
"first": "Gina",
"middle": [
"R"
],
"last": "Kuperberg",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Paczynski",
"suffix": ""
},
{
"first": "Tali",
"middle": [],
"last": "Ditman",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Cognitive Neuroscience",
"volume": "23",
"issue": "5",
"pages": "1230--1246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina R Kuperberg, Martin Paczynski, and Tali Dit- man. 2011. Establishing causal coherence across sentences: An erp study. Journal of Cognitive Neu- roscience, 23(5):1230-1246.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Surface form and memory in question answering",
"authors": [
{
"first": "Willem",
"middle": [
"J",
"M"
],
"last": "Levelt",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Kelter",
"suffix": ""
}
],
"year": 1982,
"venue": "Cognitive psychology",
"volume": "14",
"issue": "1",
"pages": "78--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willem JM Levelt and Stephanie Kelter. 1982. Surface form and memory in question answering. Cognitive psychology, 14(1):78-106.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Presumptive meanings: The theory of generalized conversational implicature",
"authors": [
{
"first": "Stephen",
"middle": [
"C"
],
"last": "Levinson",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen C Levinson. 2000. Presumptive meanings: The theory of generalized conversational implica- ture. MIT Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Speakers optimize information density through syntactic reduction",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "T. Florian",
"middle": [],
"last": "Jaeger",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy and T. Florian Jaeger. 2006. Speakers optimize information density through syntactic re- duction. Advances in neural information processing systems, (849-856).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Recognizing implicit discourse relations in the penn discourse treebank",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minyen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Minyen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. Proceedings of the Conference on Empirical Methods on Natural Language Pro- cessing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The standord corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkey",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkey, Steven J. Bethard, and David Mc- Closky. 2014. The standord corenlp natural lan- guage processing toolkit. Proceedings of the Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Implicitation of discourse connectives in (machine) translation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 1st DiscoMT Workshop at ACL 2013 (51st Annual Meeting of the Association for Computational Linguistics)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Meyer and Bonnie Webber. 2013. Implicita- tion of discourse connectives in (machine) transla- tion. In Proceedings of the 1st DiscoMT Workshop at ACL 2013 (51st Annual Meeting of the Associa- tion for Computational Linguistics), number EPFL- CONF-192528.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning in the rational speech acts model",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.06807"
]
},
"num": null,
"urls": [],
"raw_text": "Will Monroe and Christopher Potts. 2015. Learning in the rational speech acts model. arXiv preprint arXiv:1510.06807.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Investigating cue selection and placement in tutorial discourse",
"authors": [
{
"first": "Megan",
"middle": [],
"last": "Moser",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "130--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Megan Moser and Johanna D Moore. 1995. Investigat- ing cue selection and placement in tutorial discourse. In Proceedings of the 33rd annual meeting on Asso- ciation for Computational Linguistics, pages 130- 135. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Connectives and narrative text: The role of continuity",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Murray",
"suffix": ""
}
],
"year": 1997,
"venue": "Memory & Cognition",
"volume": "25",
"issue": "2",
"pages": "227--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D Murray. 1997. Connectives and narrative text: The role of continuity. Memory & Cognition, 25(2):227-236.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Posterior calibration and exploratory analysis for natural language processing models",
"authors": [
{
"first": "Khanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khanh Nguyen and Brendan O'Connor. 2015. Pos- terior calibration and exploratory analysis for natu- ral language processing models. In Proceedings of Conference on Empirical Methods in Natural Lan- guage Processing.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Why discourse affects speakers' choice of referring expressions",
"authors": [
{
"first": "Naho",
"middle": [],
"last": "Orita",
"suffix": ""
},
{
"first": "Eliana",
"middle": [],
"last": "Vornov",
"suffix": ""
},
{
"first": "Naomi",
"middle": [
"H"
],
"last": "Feldman",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naho Orita, Eliana Vornov, Naomi H. Feldman, and Hal Daum\u00e9 III. 2015. Why discourse affects speak- ers' choice of referring expressions. Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Improving implicit discourse relation recognition through feature set optimization",
"authors": [
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Annual Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joonsuk Park and Claire Cardi. 2012. Improving im- plicit discourse relation recognition through feature set optimization. Proceedings of Annual Meeting on Discourse and Dialogue.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Predicting the presence of discourse connectives",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Patterson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "914--923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Patterson and Andrew Kehler. 2013. Predicting the presence of discourse connectives. In Proceed- ings of Conference on Empirical Methods in Natural Language Processing, pages 914-923.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Easily identifiable discourse relations",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Mridhula",
"middle": [],
"last": "Raghupathy",
"suffix": ""
},
{
"first": "Hena",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Eas- ily identifiable discourse relations. Technical report, University of Pennsylvania.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse re- lations in text. Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Lan- guage Processing.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Embedded implicatures as pragmatic inferences under compositional lexical uncertainty",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lassiter",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Frank",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Potts, Daniel Lassiter, Roger Levy, and Michael C. Frank. 2015. Embedded implicatures as pragmatic inferences under compositional lexical uncertainty. Manuscript.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhit",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Language Resource and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhit Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. Proceedings of the Language Resource and Evalua- tion Conference.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Reflections on the penn discourse treebank, comparable corpora, and complementary annotation",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Bonnie Webber, and Aravind Joshi. 2014. Reflections on the penn discourse treebank, comparable corpora, and complementary annota- tion. Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Discovering implicit discourse relations through brown cluster pair representation and coreference patterns",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford and Nianwen Xue. 2014. Dis- covering implicit discourse relations through brown cluster pair representation and coreference patterns. Proceedings of the Conference of the European Chapter of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Coherence, causality and cognitive complexity in discourse",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Sanders",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Symposium on the Exploration and Modelling of Meaning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Sanders. 2005. Coherence, causality and cognitive complexity in discourse. In Proceedings of the Sym- posium on the Exploration and Modelling of Mean- ing.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Getting the message across in rst-based text generation",
"authors": [
{
"first": "Donia",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Clarisse Sieckenius De",
"middle": [],
"last": "Souza",
"suffix": ""
}
],
"year": 1990,
"venue": "Current research in natural language generation",
"volume": "4",
"issue": "",
"pages": "47--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donia Scott and Clarisse Sieckenius de Souza. 1990. Getting the message across in rst-based text genera- tion. Current research in natural language genera- tion, 4:47-73.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "The role of interclausal connectives in narrative structuring: Evidence from adults' interpretations of simple stories",
"authors": [
{
"first": "Erwin",
"middle": [
"M"
],
"last": "Segal",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"F"
],
"last": "Duchan",
"suffix": ""
},
{
"first": "Paula",
"middle": [
"J"
],
"last": "Scott",
"suffix": ""
}
],
"year": 1991,
"venue": "Discourse processes",
"volume": "14",
"issue": "1",
"pages": "27--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erwin M Segal, Judith F Duchan, and Paula J Scott. 1991. The role of interclausal connectives in nar- rative structuring: Evidence from adults' interpre- tations of simple stories. Discourse processes, 14(1):27-54.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "The Bell System Technical Journal",
"volume": "",
"issue": "",
"pages": "623--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.E. Shannon. 1948. A mathematical theory of com- munication. The Bell System Technical Journal, 27(379-423; 623-656).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Syntactic priming in spoken sentence production-an online study",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Wheeldon",
"suffix": ""
}
],
"year": 2001,
"venue": "Cognition",
"volume": "78",
"issue": "2",
"pages": "123--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Smith and Linda Wheeldon. 2001. Syntac- tic priming in spoken sentence production-an online study. Cognition, 78(2):123-164.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Learning and using language via recursive pragmatic reasoning about other agents",
"authors": [
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3039--3047",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel J Smith, Noah Goodman, and Michael Frank. 2013. Learning and using language via re- cursive pragmatic reasoning about other agents. In Advances in neural information processing systems, pages 3039-3047.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Lexical marking of discourse relations-some experimental findings",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Soria",
"suffix": ""
},
{
"first": "Giacomo",
"middle": [],
"last": "Ferrari",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the ACL-98 Workshop on Discourse Relations and Discourse Markers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Soria and Giacomo Ferrari. 1998. Lexical marking of discourse relations-some experimental findings. In Proceedings of the ACL-98 Workshop on Discourse Relations and Discourse Markers.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Using automatically labelled examples to classify rhetorical relations: An assessment",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2008,
"venue": "Natural Language Engineering",
"volume": "14",
"issue": "3",
"pages": "369--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetori- cal relations: An assessment. Natural Language En- gineering, 14(3):369-416.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Refer efficiently: Use less informative expressions for more predictable meanings",
"authors": [
{
"first": "Harry",
"middle": [],
"last": "Tily",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Piantadosi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the workshop on the production of referring expressions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harry Tily and Steven Piantadosi. 2009. Refer effi- ciently: Use less informative expressions for more predictable meanings. Proceedings of the workshop on the production of referring expressions.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Annotation of discourse relations for conversational spoken dialogs",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Aravind K Joshi",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Language Resource and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Tonelli, Giuseppe Riccardi, Rashmi Prasad, and Aravind K Joshi. 2010. Annotation of discourse relations for conversational spoken dialogs. In Pro- ceedings of the Language Resource and Evaluation Conference.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A refined end-toend discourse parser",
"authors": [
{
"first": "Jianxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianxiang Wang and Man Lan. 2015. A refined end-to- end discourse parser. CoNLL 2015, page 17.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Genre distinctions for discourse in the penn treebank",
"authors": [
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "674--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Webber. 2009. Genre distinctions for dis- course in the penn treebank. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 674-682. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Crosslingual annotation and analysis of implicit discourse connectives for machine translation",
"authors": [
{
"first": "Frances",
"middle": [],
"last": "Yung",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshop of Discourse in machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frances Yung, Kevin Duh, and Yuji Matsumoto. 2015. Crosslingual annotation and analysis of im- plicit discourse connectives for machine translation. In Workshop of Discourse in machine translation, EMNLP, page 142.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Modeling the interpretation of discourse connectives by bayesian pragmatics",
"authors": [
{
"first": "Frances",
"middle": [],
"last": "Yung",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Komura",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frances Yung, Taku Komura, Kevin Duh, and Yuji Matsumoto. 2016. Modeling the interpretation of discourse connectives by bayesian pragmatics. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "A multifactorial analysis of explicitation in translation",
"authors": [
{
"first": "Sandrine",
"middle": [],
"last": "Zuffery",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Cartoni",
"suffix": ""
}
],
"year": 2014,
"venue": "Target",
"volume": "26",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandrine Zuffery and Bruno Cartoni. 2014. A multi- factorial analysis of explicitation in translation. Tar- get, 26(3).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "to Equation (3), the utility U of an explicit DC equals to its informativeness I deducted by production cost D. U (exp; s, C) = I(s; exp, C) \u2212 D(exp) (7)"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "But (Explicit;Comparison-Contrast) they fell sharply. (WSJ2379) 2. Japan's Finance Ministry had set up mechanisms ... to give market operators the authority to suspend trading in futures at any time. (Implicit: but; Comparison) Maybe it wasn't enough. (WSJ0097) 3. Before (Explicit; Temporal-Asynchronous-Precedence) becoming a consultant in 1974, Mr. Achenbaum was a senior executive at J. Walter Thompson Co..(WSJ0295)"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Sense distribution of explicit and implicit DCs in screened data set.U (args;s,C) D(exp) accuracy F 1 exp F 1 imp accuracy F 1 exp F 1 imp",
"content": "<table><tr><td>Senses in the PDTB are defined in a hierarchy of</td></tr><tr><td>2 to 3 levels. Some relations have multiple senses.</td></tr><tr><td>Up to 2 DCs can be annotated to an implicit re-</td></tr><tr><td>lation and in turn each (implicit or explicit) DC</td></tr><tr><td>can be labelled with up to 2 senses. Most exist-</td></tr><tr><td>ing works split a multi-sense sample into separated</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"text": "Accuracies and F1 scores of predicted DC markedness. The best values are bolded. :significant improvement over state-of-the-art (SOA) accuracy at p < 0.03 (by Pearson's \u03c7 2 test) (refer to Section 3.2 for abbreviations of discourse context C.) samples, each labelled with one of the senses. However, it is notable that the individual senses of a multi-sense relation are not disjoint 12 and having multiple senses is part of the sense (Asr and Dem",
"content": "<table/>",
"type_str": "table"
}
}
}
}