|
{ |
|
"paper_id": "N16-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:36:06.310292Z" |
|
}, |
|
"title": "Integer Linear Programming for Discourse Parsing", |
|
"authors": [ |
|
{ |
|
"first": "J\u00e9r\u00e9my", |
|
"middle": [], |
|
"last": "Perret", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Stergos", |
|
"middle": [], |
|
"last": "Afantenos", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Morey", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT's notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.", |
|
"pdf_parse": { |
|
"paper_id": "N16-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT's notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Multi-party dialogue parsing, in which complete discourse structures for multi-party dialogue or its close cousin, multi-party chat, are automatically constructed, is still in its infancy. Nevertheless, these are now very common forms of communication on the Web. Dialogue appears also importantly different from monologue. Afantenos et al. (2015) point out that forcing discourse structures to be trees will perforce miss 9% of the links in their corpus, because a significant number of discourse structures in the corpus are not trees. Although Afantenos et al. (2015) is the only prior paper we know of that studies dialogue parsing on multi-party dialogue, and that work relied on methods adapted to treelike structures, we think the area of multi-party dialogue and non-treelike discourse structures is ripe for investigation and potentially important for other genres like the discourse analysis of fora (Wang et al., 2011, for example) . This paper proposes a method based on constraints using Integer Linear Programming decoding over local probability distributions to investigate both treelike and non-treelike, full discourse structures for multi-party dialogue. We show that our method outperforms that of Afantenos et al. (2015) on the corpus they developed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 347, |
|
"text": "Afantenos et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 570, |
|
"text": "Afantenos et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 910, |
|
"end": 942, |
|
"text": "(Wang et al., 2011, for example)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1217, |
|
"end": 1240, |
|
"text": "Afantenos et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Discourse parsing involves at least three main steps: the segmentation of a text into elementary discourse units (EDUs), the basic building blocks for discourse structures, the attachment of EDUs together into connected structures for texts, and finally the labelling of the links between discourse units with discourse relations. Much current work in discourse parsing focuses on the labelling of discourse relations, using data from the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) . This work has availed itself of increasingly sophisticated features of the semantics of the units to be related (Braud and Denis, 2015) ; but as the PDTB does not provide full discourse structures for texts, it is not relevant to our concerns here. Rhetorical Structure Theory (RST) (Mann and Thompson, 1987; Mann and Thompson, 1988; Taboada and Mann, 2006) does take into account the global structure of the document, and the RST Discourse Tree Bank Carlson et al. (2003) has texts annotated according to RST with full discourse structures. This has guided most work in recent discourse parsing of multi-sentence text (Subba and Di Eugenio, 2009; Hernault et al., 2010; duVerle and Prendinger, 2009; Joty et al., 2013; Joty et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 470, |
|
"end": 491, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 629, |
|
"text": "(Braud and Denis, 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 802, |
|
"text": "(Mann and Thompson, 1987;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 803, |
|
"end": 827, |
|
"text": "Mann and Thompson, 1988;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 851, |
|
"text": "Taboada and Mann, 2006)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 945, |
|
"end": 966, |
|
"text": "Carlson et al. (2003)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1113, |
|
"end": 1141, |
|
"text": "(Subba and Di Eugenio, 2009;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1142, |
|
"end": 1164, |
|
"text": "Hernault et al., 2010;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1165, |
|
"end": 1194, |
|
"text": "duVerle and Prendinger, 2009;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1213, |
|
"text": "Joty et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1214, |
|
"end": 1232, |
|
"text": "Joty et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "But RST requires that discourse structures be projective trees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While projective trees are arguably a contender for representing the discourse structure of monologue text, multi-party chat dialogues exhibit crossing dependencies. This rules out using a theory like RST as a basis either for an annotation model or as a guide to learning discourse structure (Afantenos et al., 2015) . Several subgroups of interlocutors can momentarily form and carry on a discussion amongst themselves, forming thus multiple concurrent discussion threads. Furthermore, participants of one thread may reply or comment to something said to another thread. One might conclude from the presence of multiple threads in dialogue that we should use non-projective trees to guide discourse parsing. But non-projective trees cannot always reflect the structure of a discourse either, as Asher and Lascarides (2003) argue on theoretical grounds. Afantenos et al. (2015) provide examples in which a question or a comment by speaker S that is addressed to all the engaged parties in the conversation receives an answer from all the other participants, all of which are then acknowledged by S with a simple OK or No worries, thus creating an intuitive, \"lozenge\" like structure, in which the acknowledgment has several incoming links representing discourse dependencies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 317, |
|
"text": "(Afantenos et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 824, |
|
"text": "Asher and Lascarides (2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 878, |
|
"text": "Afantenos et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A final, important organizing element of the discourse structure for text and dialogue is the presence of clusters of EDUs that can act together as an argument to other discourse relations. This means that subgraphs of the entire discourse graph act as elements or nodes in the full discourse structure. These subgraphs are complex discourse units or CDUs. 1 Here is an example from the Settlers corpus:", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 358, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "( Thomas's response to gotwoodforsheep spans two turns in the corpus. More interestingly, the response is a conditional \"yes\" in which EDUs (c) and (d) jointly specify the antecedent of the discourse relation that links both to the EDU I do.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "CDUs have been claimed to be an important organizing principle of discourse structure and important for the analysis of anaphora and ellipsis for over 20 years (Asher, 1993 ). Yet the computational community has ignored them; when they are present in annotated corpora, they have been eliminated. This attitude is understandable, because CDUs, as they stand, are not representable as trees in any straightforward way. But given that our method can produce non-treelike graphs, we take a first step towards the prediction of CDUs as part of discourse structure by encoding them in a hypergraph-like framework. In particular, we will transform our corpus by distributing relations on CDUs over all their constituents as we describe in section 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 172, |
|
"text": "(Asher, 1993", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our paper is organized as follows. The data that we have used are described in more detail in the following section, while the underlying linguistic theory that we are using is described in section 3. In section 4 we present in detail the model that we have used, in particular the ILP decoder and the constraints and objective function it exploits. We report our results in section 5. Section 6 provides the related work while section 7 concludes this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For our experiments we used a corpus collected from chats involving an online version of the game The Settlers of Catan described in Afantenos et al., 2015) . Settlers is a multiparty, win-lose game in which players use resources such as wood and sheep to build roads and settlements. Players take turns directing the bargaining. This is the only discourse annotated corpus of multiagent dialogue of which we are aware, and it was one in which apparently non-treelike structures were already noted and also contains CDUs. Such a chat corpus is also useful to study because it approximates spoken dialogue in several ways-sentence fragments, non-standard orthography and occasional lack of syntax-without the inconvenience of transcribing speech. The corpus consists of 39 games annotated for discourse structure in the style of SDRT. Each game consists of several dialogues, and each dialogue represents a single bargaining session directed by a particular player or perhaps several connected sessions. Each dialogue is treated as hav- ing its own discourse structure. About 10% of the corpus was held out for evaluation purposes while the rest was used for training. The dialogues in the corpus are mostly short with each speaker's turn containing typically only one, two or three EDUs, though the longest has 156 EDUs and 119 turns. Most of the discourse connections or relation instances in the corpus thus occur between speaker turns. Statistics on the number of dialogues, EDUs and relations contained in each sub-corpus can be found in table 1. Note that the number of relation instances in the corpus depends on how CDUs are translated, which we'll explain in the next section. The corpus has approximately the same number of EDUs and relations as the RST corpus (Carlson et al., 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 156, |
|
"text": "Afantenos et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1770, |
|
"end": 1792, |
|
"text": "(Carlson et al., 2003)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Segmented Discourse Representation Theory. We give a few details here on one discourse theory in which non-treelike discourse structures are countenanced and that underlies the annotations of the corpus we used. That theory is SDRT. In SDRT, a discourse structure, or SDRS, consists of a set of Discourse Units (DUs) and as Discourse Relations linking those units. DUs are distinguished into EDUs and CDUs. We identify EDUs here with phrases or sentences describing a state or an event; CDUs are SDRSs. Formally an SDRS for a given text segmented in EDUs D = {e 1 , . . . , e n }, where e i are the EDUs of D, is a tuple", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(V, E 1 , E 2 , )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where V is a set of nodes or discourse units including {e 1 , . . . , e n },", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "E 1 \u2286 V \u00d7 V a set of edges representing discourse relations, E 2 \u2286 V \u00d7 V a set of edges that rep-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "resents parthood in the sense that if (x, y) \u2208 E 2 , then the unit x is an element of the CDU y; finally : E 1 \u2192 Relations is a labelling function that assigns an edge in E 1 its discourse relation type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
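
{

"text": "To make the preceding definition concrete, here is a minimal sketch of the SDRS tuple as a data structure (our own illustration in Python, not part of the original paper; the class and field names are ours):\n\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Set, Tuple\n\n@dataclass\nclass SDRS:\n    # V: discourse units (EDUs and CDUs), identified by string ids\n    nodes: Set[str] = field(default_factory=set)\n    # E_1: discourse-relation edges, with the labelling function as a dict\n    relation_edges: Dict[Tuple[str, str], str] = field(default_factory=dict)\n    # E_2: parthood edges (x, y), meaning x is a member of the CDU y\n    parthood_edges: Set[Tuple[str, str]] = field(default_factory=set)\n\n# For instance, an EDU a explained by a CDU pi whose members b and c\n# are linked by CONTINUATION:\ng = SDRS(nodes={'a', 'b', 'c', 'pi'},\n         relation_edges={('a', 'pi'): 'EXPLANATION', ('b', 'c'): 'CONTINUATION'},\n         parthood_edges={('b', 'pi'), ('c', 'pi')})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic Foundations",

"sec_num": "3"

},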
|
{ |
|
"text": "From SDRT Structures to Dependency Struc- tures. Predicting full SDRSs (V, E 1 , E 2 , )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "with E 2 = \u2205 has been to date impossible, because no reliable method has been identified in the literature for calculating edges in E 2 . Instead, most approaches Afantenos et al., 2015 , for example) simplify the underlying structures by a head replacement strategy (HR) that removes nodes representing CDUs from the original hypergraphs and replacing any incoming or outgoing edges on these nodes on the heads of those CDUs, forming thus dependency structures and not hypergraphs. A similar approach has also been followed by Hirao et al. (2013) and Li et al. (2014) in the context of RST to deal with multi-nuclear relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 185, |
|
"text": "Afantenos et al., 2015", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 547, |
|
"text": "Hirao et al. (2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 568, |
|
"text": "Li et al. (2014)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Transforming SDRSs using HR does not come without its problems. The decision to attach all incoming and outgoing links to a CDU to its head is one with little theoretical or semantic justification. The semantic effects of attaching an EDU to a CDU are not at all the same as attaching an EDU to the head of the CDU. For example, suppose we have a simple discourse with the following EDUs marked by brackets and discourse connectors in bold :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[The French economy continues to suffer] a because [high labor costs remain high] b and [investor confidence remains low] c .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The correct SDRS for (2) is one in which both b and c together explain why the French economy continues to suffer. That is, b and c form a CDU and give rise to the following graph:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "a b c EXPLANATION CONTINUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "HR on (2) produces a graph whose strictly compositional interpretation would be false-b alone explains why the French economy continues to suffer. Alternatively an interpretation of the proposed translation an SDRS with CDUs would introduce spurious ambiguities: either b alone or b and c together provide the explanation. To make matters worse, given the semantics of discourse relations in SDRT (Asher and Lascarides, 2003) , some relations have a semantics that implies that a relation between a CDU and some other discourse unit can be distributed over the discourse units that make up the CDU. But not all relations are distributive in this sense. For example, we could complicate (2) slightly:", |
|
"cite_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 425, |
|
"text": "(Asher and Lascarides, 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "( In 3, the SDRS graph would be:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "a b c d EXPLANATION CONTINUATION CONTINUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "However, this SDRS entails that a is explained by [c, d] and that b is explained by [c, d] . That is, EX-PLANATION \"distributes\" to the left but not to the right. Once again, the HR translation from SDRSs into dependency structures described above would get the intuitive meaning of this example wrong or introduce spurious ambiguities. Given the above observations, we decided to take into account the formal semantics of the discourse relations before replacing CDUs. More precisely, we distinguish between left distributive and right distributive relations. In a nutshell, we examined the temporal and modal semantics of relations and classified them as to whether they were distributive with respect to their left or to their right argument; left distributive relations are those for which the source CDU node should be distributed while right distributive relations are those for which the target CDU node should be distributed. A relation can be both left and right distributive. Left distributive relations include ACKNOWLEDGEMENT, EX-PLANATION, COMMENT, CONTINUATION, NAR-RATION, CONTRAST, PARALLEL, BACKGROUND, while right distributive relations include RESULT, CONTINUATION, NARRATION, COMMENT, CON-TRAST, PARALLEL, BACKGROUND, ELABORA-TION. In Figure 1 we show an example of how relations distribute between EDU/CDU, CDU/EDU and CDU/CDU. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 56, |
|
"text": "[c, d]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 90, |
|
"text": "[c, d]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1255, |
|
"end": 1263, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Foundations", |
|
"sec_num": "3" |
|
}, |
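
{

"text": "As an illustration of this translation, here is a minimal sketch (our own, in Python; the helper distribute_edge and the head-replacement fallback for the non-distributive side are our assumptions) of distributing a single relation edge over CDU members:\n\nLEFT_DISTRIBUTIVE = {'ACKNOWLEDGEMENT', 'EXPLANATION', 'COMMENT', 'CONTINUATION',\n                     'NARRATION', 'CONTRAST', 'PARALLEL', 'BACKGROUND'}\nRIGHT_DISTRIBUTIVE = {'RESULT', 'CONTINUATION', 'NARRATION', 'COMMENT',\n                      'CONTRAST', 'PARALLEL', 'BACKGROUND', 'ELABORATION'}\n\ndef distribute_edge(src, tgt, rel, members, head):\n    # members: CDU id -> list of constituent DUs (EDUs are not keys);\n    # head: CDU id -> head DU, used when the relation does not distribute\n    # on that side (falling back to head replacement).\n    if src in members:\n        sources = members[src] if rel in LEFT_DISTRIBUTIVE else [head[src]]\n    else:\n        sources = [src]\n    if tgt in members:\n        targets = members[tgt] if rel in RIGHT_DISTRIBUTIVE else [head[tgt]]\n    else:\n        targets = [tgt]\n    return [(s, t, rel) for s in sources for t in targets]\n\n# Example (3): the CDU [a, b] is explained by the CDU [c, d]. EXPLANATION is\n# left- but not right-distributive, so the source distributes and the target\n# falls back to its head (assumed here to be c):\n# distribute_edge('x', 'y', 'EXPLANATION',\n#                 members={'x': ['a', 'b'], 'y': ['c', 'd']},\n#                 head={'x': 'a', 'y': 'c'})\n# -> [('a', 'c', 'EXPLANATION'), ('b', 'c', 'EXPLANATION')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic Foundations",

"sec_num": "3"

},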
|
{ |
|
"text": "Decoding over local scores. When we apply either a full or partial distributional (partial distribution takes into account which relations distribute in which direction) translation to the SDRSs in our corpus, we get dependency graphs that are not trees as input to our algorithms. We now approximate full SDRS graphs (V, E 1 , E 2 , ) with graphs that distribute out E 2 -that is, graphs of the form (V, E 1 , ) or more simply (V, E, ). It is important to note that those graphs are not in general trees but rather Directed Acyclic Graphs (DAGs). We now proceed to detail how we learn such structures. Ideally, what one wants is to learn a function h : X E n \u2192 Y G where X E n is the domain of instances representing a collection of EDUs for each dialogue and Y G is the set of all possible SDRT graphs. However, given the complexity of this task and the fact that it would require an amount of training data that we currently lack in the community, we aim at the more modest goal of learning a function h : X E 2 \u2192 Y R where the domain of instances X E 2 represents parameters for a pair of EDUs and Y R represents the set of SDRT relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "An important drawback of this approach is that there are no formal guarantees that the predicted structures will be well-formed. They could for ex-ample contain cycles although they should be DAGs. Most approaches have circumvented this problem by using global decoding over local scores and by imposing specific constraints upon decoding. But, those constraints were mostly limited to the production of maximum spanning trees, and not full DAGs. We perform global decoding as well but use Integer Linear Programming (ILP) with an objective function and constraints that allow non-tree DAGs. We use a regularized maximum entropy (shortened MaxEnt) model (Berger et al., 1996) to get the local scores, both for attachment and labelling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 654, |
|
"end": 675, |
|
"text": "(Berger et al., 1996)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "ILP for Global Decoding. ILP essentially involves an objective function that needs to be maximized under specific constraints. Our goal is to build the directed graph G = V, E, R with R being a function that provides labels for the edges in E. Vertices (EDUs) are referred by their position in textual order, indexed from 1. The m labels are referred by their index in alphabetical order, starting from 1. Let n = |V |.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The local model provides us with two real-valued functions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "s a : {1, . . . , n} 2 \u2192 [0, 1] s r : {1, . . . , n} 2 \u00d7 {1, . . . , m} \u2192 [0, 1] s a (i, j)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "gives the score of attachment for a pair of EDUs (i, j); s r (i, j, k) gives the score for the attached pair of EDUs (i, j) linked with the relation type k. We define the n 2 binary variables a ij and mn 2 binary variables r ijk :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "a ij = 1 \u2261 (i, j) \u2208 V r ijk = 1 \u2261 R(i, j) = k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The objective function that we want to maximize is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "n i=1 n j=1 a ij s a (i, j) + m k=1 r ijk s r (i, j, k)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "which gives us a score and a ranking for all candidate structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
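
{

"text": "As a concrete illustration of the decoding step, here is a minimal sketch of the binary variables and the objective above (our own, not the authors' implementation), assuming the off-the-shelf PuLP toolkit; s_a and s_r stand for the local attachment and labelling scores:\n\nimport pulp\n\ndef build_objective(n, m, s_a, s_r):\n    # Binary attachment variables a_ij and labelling variables r_ijk, indexed from 1.\n    prob = pulp.LpProblem('discourse_decoding', pulp.LpMaximize)\n    a = {(i, j): pulp.LpVariable(f'a_{i}_{j}', cat='Binary')\n         for i in range(1, n + 1) for j in range(1, n + 1)}\n    r = {(i, j, k): pulp.LpVariable(f'r_{i}_{j}_{k}', cat='Binary')\n         for i in range(1, n + 1) for j in range(1, n + 1) for k in range(1, m + 1)}\n    # Objective: sum_ij a_ij * s_a(i, j) + sum_ijk r_ijk * s_r(i, j, k)\n    prob += (pulp.lpSum(a[i, j] * s_a(i, j) for i, j in a)\n             + pulp.lpSum(r[i, j, k] * s_r(i, j, k) for i, j, k in r))\n    return prob, a, r",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Underlying Model",

"sec_num": "4"

},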
|
{ |
|
"text": "Our objective function is subject to several constraints. Because we have left the domain of trees well-explored by syntactic analysis and their computational implementations, we must design new constraints on discourse graphs, which we have developed from looking at our corpus while also being guided by theoretical principles. Some of these constraints come from SDRT, the underlying theory of the annotations. In SDRT discourse graphs should be DAGs with a unique root or source vertex, i.e. one that has no incoming edges. They should also be weakly connected; i.e. every discourse unit in it is connected to some other discourse unit. We implemented connectedness and the unique root property as constraints in ILP by using the following equations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "n i=1 h i = 1 \u2200j 1 \u2264 nh j + n i=1 a ij \u2264 n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where h i is a set of auxiliary variables indexed on {1, . . . , n}. The above constraint presupposes that our graphs are acyclic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
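
{

"text": "Continuing the sketch above (still our own illustration), the unique-root and connectedness constraints translate into PuLP as follows, with the auxiliary variables h_i modelled as binary indicators of the root:\n\nimport pulp\n\ndef add_root_and_connectedness(prob, a, n):\n    h = {i: pulp.LpVariable(f'h_{i}', cat='Binary') for i in range(1, n + 1)}\n    # Exactly one root vertex.\n    prob += pulp.lpSum(h.values()) == 1\n    # For every j: 1 <= n*h_j + sum_i a_ij <= n. If h_j = 1 this forces zero\n    # incoming edges (the root); if h_j = 0 it forces at least one incoming\n    # edge, which gives weak connectedness given acyclicity.\n    for j in range(1, n + 1):\n        incoming = pulp.lpSum(a[i, j] for i in range(1, n + 1))\n        prob += n * h[j] + incoming >= 1\n        prob += n * h[j] + incoming <= n\n    return h",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Underlying Model",

"sec_num": "4"

},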
|
{ |
|
"text": "Implementing acyclicity is facilitated by another constraint that we call the turn constraint. This constraint is also theoretically motivated. The graphs in our training corpus are reactive in the sense that speakers' contributions are reactions and attach anaphorically to prior contributions of other speakers. This means that edges between the contributions of different speakers are always oriented in one direction. A turn by one speaker can't be anaphorically and rhetorically dependent on a turn by another speaker that comes after it. Once made explicit, this constraint has an obvious rationale: people do not know what another speaker will subsequently say and thus they cannot create an anaphoric or rhetorical dependency on this unknown future act. This is not the case within a single speaker turn though; people can know what they will say several EDUs ahead so they can make such kinds of future directed dependencies. ILP allows us to encode this constraint as follows. We indexed turns from different speakers in textual order from 1 to n t , while consecutive turns from the same speaker were assigned the same index. Let t(i) be the turn index of EDU i, and T (k) the set of all EDUs belonging to turn k. The following constraint forbids backward links between EDUs from distinct turns:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2200i, j (i > j) \u2227 (t(i) = t(j)) =\u21d2 a ij = 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
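
{

"text": "The turn constraint can be added to the same sketch (ours; turn_of is a hypothetical mapping from an EDU index i to its turn index t(i)):\n\ndef add_turn_constraint(prob, a, n, turn_of):\n    # Forbid backward links between EDUs from distinct turns:\n    # for i > j with t(i) != t(j), force a_ij = 0.\n    for i in range(1, n + 1):\n        for j in range(1, n + 1):\n            if i > j and turn_of(i) != turn_of(j):\n                prob += a[i, j] == 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Underlying Model",

"sec_num": "4"

},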
|
{ |
|
"text": "The observation concerning the turn constraint is also useful for the model that provides local scores. We used it for attachment and relation labelling during training and testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Given the turn constraint we only need to ensure acyclicity of the same speaker turn subgraphs. We introduce an auxiliary set of integer variables, (c ki ), indexed on {1, . . . , n t } \u00d7 {1, . . . , n} in order to express this constraint:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2200k, i 1 \u2264 c ki \u2264 |T (k)| \u2200k, i, j such that t(i) = t(j) = k c kj \u2264 c ki \u2212 1 + n(1 \u2212 a ij )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
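
{

"text": "In the same hypothetical PuLP sketch, the within-turn acyclicity constraint with the auxiliary integer variables c_ki looks as follows (edus_of_turn(k) plays the role of T(k)):\n\nimport pulp\n\ndef add_turn_acyclicity(prob, a, n, n_t, edus_of_turn):\n    # c_ki: integer position of EDU i inside turn k, with 1 <= c_ki <= |T(k)|.\n    c = {(k, i): pulp.LpVariable(f'c_{k}_{i}', lowBound=1,\n                                 upBound=len(edus_of_turn(k)), cat='Integer')\n         for k in range(1, n_t + 1) for i in edus_of_turn(k)}\n    # If a_ij = 1 within turn k, then c_kj <= c_ki - 1, which rules out cycles.\n    for k in range(1, n_t + 1):\n        for i in edus_of_turn(k):\n            for j in edus_of_turn(k):\n                if i != j:\n                    prob += c[k, j] <= c[k, i] - 1 + n * (1 - a[i, j])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Underlying Model",

"sec_num": "4"

},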
|
{ |
|
"text": "Another interesting observation concerns the density of the graph. The objective function being additive on positive terms, every extra edge improves the global score of the graph, which leads to an almostcomplete graph unless the edge count is constrained. So we imposed an upper limit \u03b4 \u2208 [1, n] representing the density of the graphs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "n i=1 n j=1 a ij \u2264 \u03b4(n \u2212 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
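
{

"text": "The density cap is a single linear constraint in the sketch (ours; the additive cap with \u03b7 and the bound e_o on outgoing edges introduced below are encoded analogously):\n\nimport pulp\n\ndef add_density_cap(prob, a, n, delta):\n    # Global cap on the number of edges: sum_ij a_ij <= delta * (n - 1).\n    prob += pulp.lpSum(a.values()) <= delta * (n - 1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Underlying Model",

"sec_num": "4"

},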
|
{ |
|
"text": "\u03b4 \u2208 [1, n] since we need to have at least n \u2212 1 edges for the graph to be connected and at maximum we can have n(n \u2212 1) edges if the graph is complete without loops. \u03b4 being a hyper-parameter, we estimated it on a development corpus representing 20% of our total corpus. 2 The development corpus also shows that graph density decreases as the number of vertices grow. A high \u03b4 entails a too large number of edges in longer dialogues. We compensate for this effect by using an additive cap \u03b7 \u2265 0 on the edge count, also estimated on the development corpus: 3 n i=1 n j=1 a ij \u2264 n \u2212 1 + \u03b7 Another empirical observation concerning the corpus was that the number of outgoing edges from any EDU had an upper bound e o n. We set that as an ILP constraint: 4 \u2200i n j=1 a ij \u2264 e o These observations don't have a semantic explanation, but they suggest a pragmatic one linked at 2 \u03b4 takes the values 1.0, 1.2 and 1.4 for the head, partial and full distribution of the relations, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 272, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3 \u03b7 takes the value of 4 for the full distribution while it has no upper bound for the head and partial distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "4 eo is estimated on the development corpus to the value of 6 for the head, partial and full distributions. least to the type of conversation present in our corpus. Short dialogues typically involve a opening question broadcast to all the players in search of a bargain, and typically all the other players reply. The replies are then taken up and either a bargain is reached or it isn't. The players then move on. Thus, the density of the graph in such short dialogues will be determined by the number of players (in our case, four). In a longer dialogue, we have more directed discourse moves and threads involving subgroups of the participants appear, but once again in these dialogues it never happens that our participants return again and again to the same contribution; if the thread of commenting on a contribution \u03c6 continues, future comments attach to prior comments, not to \u03c6. Our ILP constraints on density and edge counts thus suggest a novel way of capturing different dialogue types and linguistic constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Finally, we included various minor constraints, such as the fact that EDUs cannot be attached to themselves, 5 if EDUs i and j are not attached the pair is not assigned any discourse relation label, 6 EDUs within a sequence of contributions by the same speaker in our corpus are linked at least to the previous EDU (Afantenos et al., 2015) 7 and edges with zero score are not included in the graph. 8", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 200, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 339, |
|
"text": "(Afantenos et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 341, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For purposes of comparison with the ILP decoder, we tested the Chu-Liu-Edmonds version of the classic Maximum Spanning Tree (MST) algorithm Mc-Donald et al. (2005) used for discourse parsing by and Li et al. (2014) and by Afantenos et al. (2015) on the Settlers corpus. This algorithm requires a specific node to be the root, i.e. a node without any incoming edges, of the initial complete graph. For each dialogue, we made an artificial node as the root with special dummy features. At the end of the procedure, this node points to the real root of the discourse graph. As baseline measures, we included what we call a LOCAL decoder which creates a simple classifier out of the raw local probability distribution. Since we use MaxEnt, this", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 163, |
|
"text": "Mc-Donald et al. (2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 214, |
|
"text": "Li et al. (2014)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "5 \u2200i aii = 0 6 \u2200i, j m k=1 r ijk = aij 7 \u2200i t(i) = t(i + 1) =\u21d2 ai,i+1 = 1 decoder select\u015d r = argmax r 1 Z(c) exp m i=1 w i f i (p, r)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "with r representing a relation type or a binary attachment value. A final baseline was LAST, where each EDU is attached to the immediately preceding EDU in the linear, textual order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Underlying Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Features for training the local model and getting scores for the decoders were extracted for every pair of EDUs. Features concerned each EDU individually as well as the pair itself. We used obvious, surface features such as: the position of EDUs in the dialogue, who their speakers are, whether two EDUs have the same speaker, the distance between EDUs, the presence of mood indicators ('?', '!') in the EDU, lexical features of the EDU (e.g., does a verb signifying an exchange occur in the EDU), and first and last words of the EDU. We also used the structures and Subject lemmas given by syntactic dependency parsing, provided by the Stanford CoreNLP pipeline (Manning et al., 2014) . Finally we used Cadilhac et al. 2013's method for classifying EDUs with respect to whether they involved an offer, a counteroffer, or were other. As mentioned earlier, in addition to the ILP and MST decoders we used two baseline decoders, LAST and LOCAL. The LAST decoder simply selects the previous EDU for attachment no matter what the underlying probability distribution is. This has proved a very hard baseline to beat in discourse. The LOCAL decoder is a naive decoder which in the case of attachment returns \"attached\" if the probability of attachment between EDUs i and j is higher than .5 and \"non-attached\" in the opposite case.", |
|
"cite_spans": [ |
|
{ |
|
"start": 663, |
|
"end": 685, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Each of the three distribution methods described in Section 3 (Head, Partial and Full Distribution) yielded different dependency graphs for our input documents, which formed three distinct corpora on which we trained and tested separately. For each of them, our training set represented 90% of the dependency graphs from the initial corpus, chosen at random; the test set representing the remaining 10%. Table 2 shows our evaluation results, comparing decoders and baselines for each of the distribution strategies. As can be seen, our ILP de-coder consistently performs significantly better than the baselines as well as the MST decoder, which was the previous state of the art (Afantenos et al., 2015) even when restricted to tree structures and HR (setting the hyper-parameter \u03b4 = 1). This prompted us to investigate how our objective function compared to MST's. We eliminated all constraints in ILP except acyclicity, connectedness, turn constraint and eliminating any constraint on outgoing edges (setting \u03b4 = \u221e); in this case, ILP's objective function performed better on the full structure prediction (.531 F1) than MST with attachment and labelling jointly maximized (.516 F1) . This means that our objective function, although it maximizes scores and not probabilities, produces an ordering over outputs that outperforms classic MST. Our analysis showed further that the constraints on outgoing edges (the tuning of the hyperparameter e o = 6) were very important for our corpus and our (admittedly flawed) local model; in other words, an ILP constrained tree for this corpus was a better predictor of the data with our local model than an unrestrained MST tree decoding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 679, |
|
"end": 703, |
|
"text": "(Afantenos et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1175, |
|
"end": 1184, |
|
"text": "(.516 F1)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 411, |
|
"text": "Table 2", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also note that our scores dropped in distributive settings but that ILP performed considerably better than the alternatives and better than the previous state of the art on dependency trees using HR on the gold and MST decoding. We need to investigate further constraints, and to refine and improve our features to get a better local model. Our local model will eventually need to be replaced by one that takes into account more of the surrounding structure when it assigns scores to attachments and labels. We also plan to investigate the use of recurrent neural networks in order to improve our local model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ILP has been used for various computational linguistics tasks: syntactic parsing (Martins et al., 2010; Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015) , semantic parsing (Das et al., 2014) , coreference resolution (Denis and Baldridge, 2007) and temporal analysis (Denis and Muller, 2011) . As far as we know, we are the first to use ILP to predict discourse structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 103, |
|
"text": "(Martins et al., 2010;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 141, |
|
"text": "Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 179, |
|
"text": "(Das et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 232, |
|
"text": "(Denis and Baldridge, 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 279, |
|
"text": "(Denis and Muller, 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our use of dependency structures for discourse also has antecedents in the literature. The first we know of is model uses local probability distributions and global decoding, and they transform their data using HR, and so ignore the semantics of discourse relations. Hirao et al. (2013) and Li et al. (2014) also exploit dependency structures by transforming RST trees. Li et al. (2014) used both the Eisner algorithm (Eisner, 1996) as well as the MST algorithm as decoders. We plan to apply ILP techniques to the RST Tree Bank to compare our method with theirs. Most work on discourse parsing focuses on the task of discourse relation labeling between pairs of discourse units-e.g., Echihabi (2002) Sporleder and and Lin et al. (2009) -without worrying about global structure. In essence the problem that they treat corresponds only to our local model. As we have argued above, this setting makes an unwarranted assumption, as it assumes independence of local attachment decisions. There is also work on discourse structure within a single sentence; e.g., Soricut and Marcu (2003) , Sagae (2009) . Such approaches do not apply to our data, as most of the structure in our dialogues lies beyond the sentence level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 286, |
|
"text": "Hirao et al. (2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 307, |
|
"text": "Li et al. (2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 386, |
|
"text": "Li et al. (2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 713, |
|
"text": "Echihabi (2002) Sporleder and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 735, |
|
"text": "Lin et al. (2009)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1081, |
|
"text": "Soricut and Marcu (2003)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1084, |
|
"end": 1096, |
|
"text": "Sagae (2009)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As for other document-level discourse parsers, Subba and Di Eugenio (2009) use a transition-based approach, following the paradigm of Sagae (2009) . duVerle and Prendinger (2009) and Hernault et al. (2010) both rely on locally greedy methods. They treat attachment prediction and relation label prediction as independent problems. Feng and Hirst (2012) extend this approach by additional feature engineering but is restricted to sentence-level parsing. Joty et al. (2013) and Joty et al. (2015) present a textlevel discourse parser that uses Conditional Random Fields to capture label inter-dependencies and chart parsing for decoding and have the best results on non-dependency based discourse parsing, with an F1 of 0.689 on unlabelled structures and 0.5587 on labelled structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 74, |
|
"text": "Subba and Di Eugenio (2009)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 146, |
|
"text": "Sagae (2009)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 205, |
|
"text": "Hernault et al. (2010)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 471, |
|
"text": "Joty et al. (2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 494, |
|
"text": "Joty et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The afore-cited work concerns only monologue. Baldridge and Lascarides (2005) predicted tree discourse structures for 2 party \"directed\" dialogues from the Verbmobil corpus by training a PCFG that exploited the structure of the underlying task. Elsner and Charniak (2010), Elsner and Charniak (2011) present a combination of local coherence models initially provided for monologues showing that those models can satisfactorily model local coherence in chat dialogues. However, they do not present a full discourse parsing model. Our data required a more open domain approach and a more sophisticated approach to structure. Afantenos et al. (2015) worked on multi-party chat dialogues with the same corpus, but they too did not consider the semantics of discourse relations and replaced CDUs with their heads using HR. While this allowed them to use MST decoding over local probability distributions, this meant that their implementation had inherent limitations because it is limited to producing tree structures. They also used the turn constraint, but imposed exogenously to decoding; ILP allows us to integrate it into the structural decoding. We achieve better results than they on treelike graphs and we can explore the full range of non-treelike discourse graphs within the ILP framework. Our parser has thus much more room to improve than those restricted to MST decoding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 77, |
|
"text": "Baldridge and Lascarides (2005)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 299, |
|
"text": "Elsner and Charniak (2011)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 646, |
|
"text": "Afantenos et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a novel method for discourse parsing of multiparty dialogue using ILP with linguistically and empirically motivated constraints and an objective function that integrates both attachment and labelling tasks. We have shown also that our method performs better than the competition on multiparty dialogue data and that it can capture nontreelike structures found in the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We also have a better treatment of the hierarchical structure of discourse than the competition. Our treatment of CDUs in discourse annotations proposes a new distributional translation of those annotations into dependency graphs, which we think is promising for future work. After distribution, our training corpus has a very different qualitative look. There are treelike subgraphs and then densely connected clusters of EDUs, indicating the presence of CDUs. This gives us good reason to believe that in subsequent work, we will be able to predict CDUs and attack the problem of hierarchical discourse structure seriously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "CDUs are a feature of SDRT as we explain below. They are also a feature of RST on some interpretations of the Satellite-Nucleus feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2200i, j sa(i, j) = 0 =\u21d2 aij = 0 and \u2200i, j, k sr(i, j, k) = 0 =\u21d2 x ijk = 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Developing a corpus of strategic conversation in the settlers of catan", |
|
"authors": [ |
|
{ |
|
"first": "Stergos", |
|
"middle": [], |
|
"last": "Afantenos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Farah", |
|
"middle": [], |
|
"last": "Benamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anas", |
|
"middle": [], |
|
"last": "Cadilhac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cdric", |
|
"middle": [], |
|
"last": "Degremont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Guhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Keizer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumya", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laure", |
|
"middle": [], |
|
"last": "Vieu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Workshop on Games and NLP (GAMNLP-12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stergos Afantenos, Nicholas Asher, Farah Benamara, Anas Cadilhac, Cdric Degremont, Pascal Denis, Markus Guhe, Simon Keizer, Alex Lascarides, Oliver Lemon, Philippe Muller, Soumya Paul, Verena Rieser, and Laure Vieu. 2012. Developing a corpus of strate- gic conversation in the settlers of catan. In Noriko To- muro and Jose Zagal, editors, Workshop on Games and NLP (GAMNLP-12), Kanazawa, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Discourse parsing for multiparty chat dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Stergos", |
|
"middle": [], |
|
"last": "Afantenos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Kow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e9r\u00e9my", |
|
"middle": [], |
|
"last": "Perret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "928--937", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stergos Afantenos, Eric Kow, Nicholas Asher, and J\u00e9r\u00e9my Perret. 2015. Discourse parsing for multi- party chat dialogues. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 928-937, Lisbon, Portugal, Septem- ber. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Logics of Conversation. Studies in Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation. Studies in Natural Language Process- ing. Cambridge University Press, Cambridge, UK.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reference to Abstract Objects in Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer Academic Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Probabilistic head-driven parsing for discourse structure", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Baldridge and Alex Lascarides. 2005. Probabilis- tic head-driven parsing for discourse structure. In Pro- ceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A maximum entropy approach to natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "39--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational Linguistics, 22(1):39-71.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Comparing word representations for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Chlo\u00e9", |
|
"middle": [], |
|
"last": "Braud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2201--2211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chlo\u00e9 Braud and Pascal Denis. 2015. Comparing word representations for implicit discourse relation classi- fication. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2201-2211, Lisbon, Portugal, September. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Grounding strategic conversation: Using negotiation dialogues to predict trades in a win-lose game", |
|
"authors": [ |
|
{ |
|
"first": "Anais", |
|
"middle": [], |
|
"last": "Cadilhac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Farah", |
|
"middle": [], |
|
"last": "Benamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "357--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anais Cadilhac, Nicholas Asher, Farah Benamara, and Alex Lascarides. 2013. Grounding strategic conversa- tion: Using negotiation dialogues to predict trades in a win-lose game. In Proceedings of the 2013 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 357-368, Seattle, Washington, USA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory", |
|
"authors": [ |
|
{ |
|
"first": "Lynn", |
|
"middle": [], |
|
"last": "Carlson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ellen" |
|
], |
|
"last": "Okurowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Current Directions in Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Jan van Kuppevelt and Ronnie Smith, editors, Current Di- rections in Discourse and Dialogue, pages 85-112. Kluwer Academic Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Frame-semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Desai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F", |
|
"T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "1", |
|
"pages": "9--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das, Desai Chen, Andr\u00e9 F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56, March.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Joint determination of anaphoricity and coreference resolution using integer programming", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Denis and Jason Baldridge. 2007. Joint determi- nation of anaphoricity and coreference resolution us- ing integer programming. In Human Language Tech- nologies 2007: The Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics; Proceedings of the Main Conference, pages 236-243, Rochester, New York, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proc. of the International Joint Conference on Artificial Intelligence (IJCAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Denis and Philippe Muller. 2011. Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition. In Proc. of the International Joint Conference on Artificial In- telligence (IJCAI).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A novel discourse parser based on support vector machine classification", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Duverle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Prendinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "665--673", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David duVerle and Helmut Prendinger. 2009. A novel discourse parser based on support vector machine clas- sification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Inter- national Joint Conference on Natural Language Pro- cessing of the AFNLP, pages 665-673, Suntec, Singa- pore, August. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Three new probabilistic models for dependency parsing: An exploration", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING-96)", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "340--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceed- ings of the 16th International Conference on Compu- tational Linguistics (COLING-96), volume 1, pages 340-345, Copenhagen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Disentangling chat", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational Linguistics", |
|
"volume": "36", |
|
"issue": "3", |
|
"pages": "389--409", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner and Eugene Charniak. 2010. Disentan- gling chat. Computational Linguistics, 36(3):389- 409.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Disentangling chat with local coherence models", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1179--1189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner and Eugene Charniak. 2011. Disentan- gling chat with local coherence models. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1179-1189, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Text-level discourse parsing with rich linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Vanessa", |
|
"middle": [], |
|
"last": "Wei Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "60--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Pro- ceedings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 60-68, Jeju Island, Korea, July. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Parsing as reduction", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Gonz\u00e1lez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F", |
|
"T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1523--1533", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Pa- pers), pages 1523-1533, Beijing, China, July. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "HILDA: A Discourse Parser Using Support Vector Machine Classification", |
|
"authors": [ |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Hernault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Prendinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitsuru", |
|
"middle": [], |
|
"last": "Ishizuka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Dialogue and Discourse", |
|
"volume": "1", |
|
"issue": "3", |
|
"pages": "1--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1-33.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Singledocument summarization as a tree knapsack problem", |
|
"authors": [ |
|
{ |
|
"first": "Tsutomu", |
|
"middle": [], |
|
"last": "Hirao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasuhisa", |
|
"middle": [], |
|
"last": "Yoshida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nishino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Norihito", |
|
"middle": [], |
|
"last": "Yasuda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1515--1520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single- document summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 1515-1520, Seattle, Washington, USA, October. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Combining intra-and multisentential rhetorical parsing for document-level discourse analysis", |
|
"authors": [ |
|
{ |
|
"first": "Shafiq", |
|
"middle": [], |
|
"last": "Joty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Carenini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yashar", |
|
"middle": [], |
|
"last": "Mehdad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "486--496", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra-and multi- sentential rhetorical parsing for document-level dis- course analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 486-496, Sofia, Bulgaria, August. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Codra: A novel discriminative framework for rhetorical analysis", |
|
"authors": [ |
|
{ |
|
"first": "Shafiq", |
|
"middle": [], |
|
"last": "Joty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Carenini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2015. Codra: A novel discriminative framework for rhetori- cal analysis. Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziqiang", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "25--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 25-35, Baltimore, Maryland, June. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Recognizing implicit discourse relations in the Penn Discourse Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Ziheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "343--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing, pages 343-351, Singapore, August. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Rhetorical Structure Theory: A Framework for the Analysis of Texts", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Information Sciences Institute", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C. Mann and Sandra A. Thompson. 1987. Rhetorical Structure Theory: A Framework for the Analysis of Texts. Technical Report ISI/RS-87-185, Information Sciences Institute, Marina del Rey, Cali- fornia.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Rhetorical Structure Theory: Towards a Functional Theory of Text Organization", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Text", |
|
"volume": "8", |
|
"issue": "3", |
|
"pages": "243--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a Functional Theory of Text Organization. Text, 8(3):243-281.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "An unsupervised approach to recognizing discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdessamad", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "368--375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse rela- tions. In Proceedings of ACL, pages 368-375.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Turbo parsers: Dependency parsing by approximate variational inference", |
|
"authors": [ |
|
{ |
|
"first": "Andre", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Aguiar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Figueiredo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "34--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andre Martins, Noah Smith, Eric Xing, Pedro Aguiar, and Mario Figueiredo. 2010. Turbo parsers: Depen- dency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 34- 44, Cambridge, MA, October. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Non-projective dependency parsing using spanning tree algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kiril", |
|
"middle": [], |
|
"last": "Ribarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan T. McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In HLT/EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Constrained decoding for textlevel discourse parsing", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stergos", |
|
"middle": [], |
|
"last": "Afantenos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COLING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1883--1900", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text- level discourse parsing. In Proceedings of COLING 2012, pages 1883-1900, Mumbai, India, December. The COLING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "The Penn Discourse TreeBank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleni", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Livio", |
|
"middle": [], |
|
"last": "Robaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie L. Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of LREC 2008.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Analysis of discourse structure with syntactic dependencies and data-driven shift-reduce parsing", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th International Conference on Parsing Technologies, IWPT '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shift-reduce parsing. In Proceedings of the 11th International Con- ference on Parsing Technologies, IWPT '09, pages 81- 84, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Sentence level discourse parsing using syntactic and lexical information", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "149--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Soricut and D. Marcu. 2003. Sentence level dis- course parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 149-156. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Exploiting linguistic cues to classify rhetorical relations", |
|
"authors": [ |
|
{ |
|
"first": "Caroline", |
|
"middle": [], |
|
"last": "Sporleder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Recent Advances in Natural Langauge Processing (RANLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caroline Sporleder and Alex Lascarides. 2005. Exploit- ing linguistic cues to classify rhetorical relations. In Proceedings of Recent Advances in Natural Langauge Processing (RANLP), Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "An effective discourse parser that uses rich linguistic information", |
|
"authors": [ |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Subba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"Di" |
|
], |
|
"last": "Eugenio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "566--574", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajen Subba and Barbara Di Eugenio. 2009. An effec- tive discourse parser that uses rich linguistic informa- tion. In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics, pages 566-574, Boulder, Colorado, June. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Rhetorical Structure Theory: Looking Back and Moving Ahead. Discourse Studies", |
|
"authors": [ |
|
{ |
|
"first": "Maite", |
|
"middle": [], |
|
"last": "Taboada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "423--459", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maite Taboada and William C. Mann. 2006. Rhetorical Structure Theory: Looking Back and Moving Ahead. Discourse Studies, 8(3):423-459, June.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Predicting thread discourse structure over technical web forums", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su", |
|
"middle": [ |
|
"Nam" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Wang, Marco Lui, Su Nam Kim, Joakim Nivre, and Timothy Baldwin. 2011. Predicting thread discourse structure over technical web forums. In Proceedings of the 2011 Conference on Empirical Methods in Nat- ural Language Processing, pages 13-25, Edinburgh, Scotland, UK., July. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Distributing relations: (a) right distribution from an EDU to a CDU, (b) left distribution from a CDU to an EDU, (c) from a CDU to a CDU. We assume that all relations are both right and left distributive." |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Dataset overview" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "3) [The French economy continues to suffer] a and [the Italian economy remains in the doldrums] b because of [persistent high labor costs] c and [lack of investor confidence in both countries] d ." |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Evaluation results." |
|
} |
|
} |
|
} |
|
} |