|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:49:22.890337Z" |
|
}, |
|
"title": "DiscSense: Automated Semantic Analysis of Discourse Markers", |
|
"authors": [ |
|
{ |
|
"first": "Damien", |
|
"middle": [], |
|
"last": "Sileo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRIT, University of Toulouse",
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Van De Cruys", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRIT, University of Toulouse",
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Camille", |
|
"middle": [], |
|
"last": "Pradel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRIT, University of Toulouse",
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IRIT, University of Toulouse",
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Discourse markers (by contrast, happily, etc.) are words or phrases that are used to signal semantic and/or pragmatic relationships between clauses or sentences. Recent work has fruitfully explored the prediction of discourse markers between sentence pairs in order to learn accurate sentence representations that are useful in various classification tasks. In this work, we take another perspective: using a model trained to predict discourse markers between sentence pairs, we predict plausible markers between sentence pairs with a known semantic relation (provided by existing classification datasets). These predictions allow us to study the link between discourse markers and the semantic relations annotated in classification datasets. Handcrafted mappings have been proposed between markers and discourse relations on a limited set of markers and a limited set of categories, but there exist hundreds of discourse markers expressing a wide variety of relations, and there is no consensus on the taxonomy of relations between competing discourse theories (which are largely built in a top-down fashion). By using an automatic prediction method over existing semantically annotated datasets, we provide a bottom-up characterization of discourse markers in English. The resulting dataset, named DiscSense, is publicly available.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Discourse markers (by contrast, happily, etc.) are words or phrases that are used to signal semantic and/or pragmatic relationships between clauses or sentences. Recent work has fruitfully explored the prediction of discourse markers between sentence pairs in order to learn accurate sentence representations that are useful in various classification tasks. In this work, we take another perspective: using a model trained to predict discourse markers between sentence pairs, we predict plausible markers between sentence pairs with a known semantic relation (provided by existing classification datasets). These predictions allow us to study the link between discourse markers and the semantic relations annotated in classification datasets. Handcrafted mappings have been proposed between markers and discourse relations on a limited set of markers and a limited set of categories, but there exist hundreds of discourse markers expressing a wide variety of relations, and there is no consensus on the taxonomy of relations between competing discourse theories (which are largely built in a top-down fashion). By using an automatic prediction method over existing semantically annotated datasets, we provide a bottom-up characterization of discourse markers in English. The resulting dataset, named DiscSense, is publicly available.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Discourse markers are a common language device used to make explicit the semantic and/or pragmatic relationships between clauses or sentences. For example, the marker so in sentence (1) indicates that the second clause is a consequence of the first.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "(1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We're standing in gasoline, so you should not smoke.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Several resources enumerate discourse markers and their use in different languages, either in discourse marker lexicons (Knott, 1996; Stede, 2002; Roze et al., 2012; Das et al., 2018) or in corpora, annotated with discourse relations, such as the well-known English Penn Discourse TreeBank (Prasad et al., 2008) , which inspired other efforts in Turkish, Chinese and French (Zeyrek and Webber, 2008; Zhou et al., 2014; Danlos et al., 2015) . The PDTB identifies different types of discourse relation categories (such as conjunction and contrast) and the respective markers that frequently instantiate these categories (such as and and however, respectively), and organizes them in a three-level hierarchy. It must be noted, however, that there is no general consensus on the typology of these markers and their rhetorical functions. As such, theoretical alternatives to the PDTB exist, such as Rhetorical Structure Theory or RST (Carlson et al., 2001) , and Segmented Discourse Representation Theory or SDRT (Asher and Lascarides, 2003) . Moreover, marker inventories focus on a restricted number of rhetorical relations that are too coarse and not exhaustive, since discourse marker use depends on the grammatical, stylistic, pragmatic, semantic and emotional contexts, which can undergo fine-grained categorizations.",
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 133, |
|
"text": "(Knott, 1996;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 146, |
|
"text": "Stede, 2002;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 165, |
|
"text": "Roze et al., 2012;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 183, |
|
"text": "Das et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 311, |
|
"text": "(Prasad et al., 2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 399, |
|
"text": "(Zeyrek and Webber, 2008;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 418, |
|
"text": "Zhou et al., 2014;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 439, |
|
"text": "Danlos et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 951, |
|
"text": "(Carlson et al., 2001)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1008, |
|
"end": 1036, |
|
"text": "(Asher and Lascarides, 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Meanwhile, there exist a number of NLP classification tasks (with associated datasets) that equally consider the relationship between sentences or clauses, but with relations that possibly go beyond the usual discourse relations; these tasks focus on various phenomena such as implication and contradiction (Bowman et al., 2015) , semantic similarity, or paraphrase (Dolan et al., 2004) . Furthermore, a number of tasks consider single sentence phenomena, such as sentiment, subjectivity, and style. Such characteristics have been somewhat ignored for the linguistic analysis and categorization of discourse markers per se, even though discourse markers have been successfully used to improve categorization performance for these tasks (Jernite et al., 2017; Nie et al., 2019; Pan et al., 2018a; Sileo et al., 2019b) . Specifically, the aforementioned research shows that the prediction of discourse markers between pairs of sentences can be exploited as a training signal that improves performance on existing classification datasets. In this work, we make use of a model trained on discourse marker prediction in order to predict plausible discourse markers between sentence pairs from existing datasets, which are annotated with the correct semantic categories. This allows us to explore the following questions:",
|
"cite_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 328, |
|
"text": "(Bowman et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 386, |
|
"text": "(Dolan et al., 2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 758, |
|
"text": "(Jernite et al., 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 776, |
|
"text": "Nie et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 795, |
|
"text": "Pan et al., 2018a;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 816, |
|
"text": "Sileo et al., 2019b)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "-Which semantic categories are applicable to a particular discourse marker (e.g. is a marker like but associated with semantic categories other than mere contrast)?",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "-Which discourse markers can be associated with the semantic categories of different datasets (e.g. what are the most likely markers between two paraphrases)?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "-To what extent do discourse markers differ between datasets with comparable semantic categories (e.g. for two sentiment analysis datasets, one on films and one on product reviews, are the markers associated with positive sentences different)?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In order to answer these questions, we train a model for discourse marker prediction between sentence pairs, using millions of examples. We then use this model to predict markers between sentences whose semantic relationships have already been annotated, for example pairs of sentences (s 1 ,s 2 ,y) where y \u2208 {Paraphrase, Non-Paraphrase}.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "These predictions allow us to examine the relationship between each category y and the discourse markers that are most often predicted for that category. Figure 1 shows an overview of our method. Thus, we propose DiscSense, a mapping between markers and senses, which has several applications:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 162, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Figure 1: Overview of our method", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-A characterization of discourse markers with categories that provides new knowledge about the connotation of discourse markers; our characterization is arguably richer since it is not limited to PDTB categories. For instance, our mapping shows that the use of some markers is associated with negative sentiment or sarcasm; this might be useful in writing-aid contexts, or as a resource for second language learners; it could also be used to guide linguistic analyses of markers;",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Overview of our method", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-A characterization of the categories associated with discourse markers can help \"diagnose\" a classification dataset. As shown in table 2 below, SICK/MNLI dataset categories have different associations, and our method can provide a sanity check for annotations (e.g. a Contradiction class should be mapped to markers expected to denote a contradiction); -An explanation of why it is useful to employ discourse marker prediction as a training signal for sentence representation learning; DiscSense can also be used to find the markers that could be most useful when using a discourse marker prediction task as auxiliary data in order to solve a given target task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: Overview of our method", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Previous work has amply explored the link between discourse markers and semantic categories. Pitler et al. (2008) , for example, use the PDTB to analyze to what extent discourse markers reflect a relation category a priori. Asr and Demberg (2012) have demonstrated that particular relation categories are more or less likely to be explicitly marked by discourse markers. A recent categorization of discourse markers for English is provided in the DimLex lexicon (Das et al., 2018) .",
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 113, |
|
"text": "Pitler et al. (2008)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 248, |
|
"text": "Asr and Demberg (2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 477, |
|
"text": "(Das et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "As mentioned before, discourse markers have equally been used as a learning signal for the prediction of implicit discourse relations (Liu et al., 2016; Braud and Denis, 2016) and inference relations (Pan et al., 2018b) . This work has been generalized by DiscSent (Jernite et al., 2017) , DisSent (Nie et al., 2019) , and Discovery (Sileo et al., 2019b) , which use discourse markers to learn general representations of sentences that are transferable to various NLP classification tasks. However, none of these examine the individual impact of markers on these tasks.",
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 152, |
|
"text": "(Liu et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 175, |
|
"text": "Braud and Denis, 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 219, |
|
"text": "(Pan et al., 2018b)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 287, |
|
"text": "(Jernite et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 316, |
|
"text": "(Nie et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 354, |
|
"text": "(Sileo et al., 2019b)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In order to train a model to predict plausible discourse markers between sentence pairs, we use the English Discovery corpus (Sileo et al., 2019b) , as it has the richest set of markers. It is composed of 174 discourse markers with 20K usage examples for each marker (sentence pairs where the second sentence begins with a given marker). Sentence pairs were extracted from web data (Panchenko et al., 2017) , and the markers come either from the PDTB or from an automatic extraction method based on heuristics. An example of the dataset is provided in (2).",
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 146, |
|
"text": "(Sileo et al., 2019b)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 404, |
|
"text": "(Panchenko et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse marker corpus", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "(2) Which is best? [s 1] Undoubtedly, [c] that depends on the person. [s 2]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse marker corpus", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "Since we plan to use marker prediction on sentence pairs from classification datasets, in which some sentence pairs cannot plausibly occur consecutively (e.g. entirely unrelated sentences), we augment the Discovery dataset with non-consecutive sentence pairs from the DepCC corpus, for which we create a new class. We sample sentences that were separated by 2 to 100 sentences in order to cover various degrees of relatedness. Furthermore, we also want to predict markers beginning single sentences, so we mask the first sentence of Discovery example pairs in 10% of cases by replacing it with a placeholder symbol [S 1 ]. This placeholder will be used to generate sentence pairs from single sentences in datasets where sentence pairs are not available. For example, in the Customer Review dataset (CR), we predict a marker between [S 1 ] and review sentences. In addition, we also use another dataset by Malmi et al. (2018) , for which human annotator accuracy is available, allowing a better assessment of the performance of our marker prediction model. It contains 20K usage examples for 20 markers extracted from Wikipedia articles (the 20 markers are a subset of the markers considered in the Discovery dataset); we call this dataset Wiki20 (Malmi et al., 2018) . In the results table, the Bi-LSTM score is from (Sileo et al., 2019b) and the last two models are ours.",
|
"cite_spans": [ |
|
{ |
|
"start": 900, |
|
"end": 919, |
|
"text": "Malmi et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1234, |
|
"end": 1254, |
|
"text": "(Malmi et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1273, |
|
"end": 1294, |
|
"text": "(Sileo et al., 2019b)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse marker corpus", |
|
"sec_num": "3.1." |
|
}, |
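The two augmentations described above (non-consecutive-pair sampling and first-sentence masking) can be sketched as follows. This is a minimal illustration, not the released code: the function name `augment`, the `<unrelated>` class label, and the exact sampling details are our own choices.

```python
import random

PLACEHOLDER = "[S1]"  # stands in for a masked first sentence (token name is ours)

def augment(pairs, document, mask_prob=0.1, n_unrelated=1000, seed=0):
    """Sketch of the two augmentations: (1) mask the first sentence of ~10%
    of Discovery pairs so markers beginning single sentences can also be
    predicted, and (2) add non-consecutive sentence pairs under a dedicated
    class to model unrelated sentences.

    pairs:    list of (s1, s2, marker) Discovery-style examples
    document: ordered list of sentences (standing in for DepCC text)
    """
    rng = random.Random(seed)
    out = []
    for s1, s2, marker in pairs:
        if rng.random() < mask_prob:
            s1 = PLACEHOLDER  # placeholder replaces the first sentence
        out.append((s1, s2, marker))
    for _ in range(n_unrelated):
        # Sample pairs separated by 2 to 100 sentences, covering various
        # degrees of relatedness.
        i = rng.randrange(len(document))
        j = min(i + rng.randint(2, 100), len(document) - 1)
        out.append((document[i], document[j], "<unrelated>"))
    return out
```

At prediction time, the same `[S1]` placeholder can be paired with single sentences from datasets such as CR, matching the training-time masking.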
|
{ |
|
"text": "We leverage classification datasets from DiscEval (Sileo et al., 2019a), alongside GLUE classification tasks (Wang et al., 2019) augmented with SUBJ, CR and SICK tasks from SentEval (Conneau and Kiela, 2018) in order to have different domains for sentiment analysis and NLI. We map the semantic similarity estimation task (STS) from GLUE/SentEval into a classification task by casting the ratings into three quantiles and discarding the middle quantile. The datasets are described in table 3 . Support is the number of examples where the marker was predicted in a given dataset. Confidence is the estimated probability of the class given the prediction of the marker, i.e. P (y|m). The prior is P (y). A larger version is available in annex A and a full version is available at https://github.com/synapse-developpement/DiscSense.",
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 128, |
|
"text": "(Wang et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 207, |
|
"text": "(Conneau and Kiela, 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 461, |
|
"text": "table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification datasets", |
|
"sec_num": "3.2." |
|
}, |
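The quantile-casting of STS ratings can be sketched as follows. This is a minimal illustration under our own assumptions: the function name `sts_to_classes`, the class labels, and the specific quantile method are not specified in the paper.

```python
import statistics

def sts_to_classes(examples):
    """Cast graded similarity ratings into a classification task: split the
    ratings into three quantiles and discard the middle one.

    examples: list of (sentence_pair, score) tuples
    """
    scores = [score for _, score in examples]
    # statistics.quantiles with n=3 returns the two tercile cut points.
    low_cut, high_cut = statistics.quantiles(scores, n=3)
    labeled = []
    for pair, score in examples:
        if score <= low_cut:
            labeled.append((pair, "dissimilar"))
        elif score >= high_cut:
            labeled.append((pair, "similar"))
        # ratings strictly inside the middle quantile are discarded
    return labeled
```

Discarding the middle quantile keeps only clearly similar and clearly dissimilar pairs, which turns the graded regression task into a well-separated binary classification task.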
|
{ |
|
"text": "For our experiments, we make use of BERT (Devlin et al., 2019) , as a model for relation prediction. BERT is a text encoder pre-trained with language modeling that has demonstrated state-of-the-art results on various tasks of relation prediction between sentences, which is our use case. The parameters are initialized with the pre-trained unsupervised base-uncased model and then fine-tuned with the Adam (Kingma and Ba, 2014) optimizer for 2 iterations on our corpus data, using default hyperparameters 1 otherwise. We ran marker prediction experiments using BERT on both Discovery and Wiki20. [Table 3: dataset | task | example | label | size: SICK | inference relation | \"a man is puking\"/\"a man is eating\" | neutral | 4k; SNLI | inference relation | \"dog leaps out\"/\"a dog jumps\" | entailment | 570k; SarcasmV2 | presence of sarcasm | \"don't quit your day job\"/\"[...] i was going to sell this joke. [...]\" | sarcasm | 9k; Emergent | stance | \"a meteorite landed in nicaragua.\"/\"small meteorite hits managua\" | for | 2k; PDTB | discourse relation | \"it was censorship\"/\"it was outrageous\" | Conjunction | 13k; Squinky | I/I/F | \"boo ya.\" | uninformative, high implicature, informal | 4k; MNLI | inference relation | \"they renewed inquiries\"/\"they asked again\" | entailment | 391k; STAC | discourse relation | \"what ?\"/\"i literally lost\" | question-answer-pair | 11k; SwitchBoard | speech act | \"well , a little different , actually ,\" | hedge | 19k; MRDA | speech act | \"yeah that 's that 's that 's what i meant .\" | acknowledge-answer | 14k; Verifiability | verifiability | \"I've been a physician for 20 years.\" | verifiable-experiential | 6k; Persuasion | C/E/P/S/S/R | \"Co-operation is essential for team work\"/\"lions hunt in a team\" | low specificity | 566; EmoBank | V/A/D | \"I wanted to be there..\" | low valence, high arousal, low dominance | 5k; GUM | discourse relation | \"do not drink\"/\"if underage in your country\" | condition | 2k; QNLI | inference relation | \"Who took over Samoa?\"/\"Sykes-Picot Agreement.\" | entailment | 105k; STS-B | similarity | \"a man is running.\"/\"a man is mooing.\" | dissimilar | 1k; CoLA | linguistic acceptability | \"They drank the pub.\" | not-acceptable | 8k; QQP | paraphrase | \"Is there a soul?\"/\"What is a soul?\" | Non-duplicate | 364k; RTE | inference relation | \"Oil prices fall back as Yukos oil threat lifted\"/\"Oil prices rise.\" | not-entailment | 2k; WNLI | inference relation | \"The fish ate the worm. It was tasty.\"/\"The fish was tasty.\" | entailment | 0.6k] Accuracy on the Discovery test data is quite high given the large number of classes (174, perfectly balanced) and their sometimes low semantic distinguishability. This accuracy is significantly higher than the score of the Bi-LSTM model in the setup of Sileo et al. (2019b) . The BERT model finetuned on Discovery outperforms human performance reported on Wiki20 with no other adaptation than discarding markers not in Wiki20 during inference. 2 With a further step of finetuning (1 epoch on Wiki20), we also outperform the best model from (Malmi et al., 2018) . These results suggest that the BERT+Discovery model captures a significant part of the use of discourse markers; in the following section, we will apply it to the prediction of discourse markers for individual categories. (Footnote 2: note that there is some overlap between training data, since BERT pretraining uses Wikipedia text.)",
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 62, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 427, |
|
"text": "(Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 2600, |
|
"end": 2620, |
|
"text": "Sileo et al. (2019b)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 2887, |
|
"end": 2907, |
|
"text": "(Malmi et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3.3." |
|
}, |
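The inference-time adaptation to Wiki20 (discarding markers absent from its inventory) amounts to taking the argmax over the allowed classes only. A minimal sketch; the function name and data layout are ours, and `logits` would come from the fine-tuned BERT classification head over the 174 Discovery markers.

```python
def predict_restricted(logits, markers, allowed):
    """Argmax over an allowed subset of markers only: markers outside the
    target inventory (e.g. outside Wiki20) are ignored at inference time.

    logits:  per-marker scores from the classifier head
    markers: marker name for each logit position
    allowed: set of markers kept at inference time
    """
    best_marker, best_score = None, float("-inf")
    for marker, score in zip(markers, logits):
        if marker in allowed and score > best_score:
            best_marker, best_score = marker, score
    return best_marker
```

This requires no retraining: the classifier trained on the full 174-marker set is reused as-is, with out-of-inventory classes simply masked out.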
|
{ |
|
"text": "For each semantic dataset, consisting of either annotated sentences (s 1 , y) or annotated sentence pairs (s 1 ,s 2 ,y), where y is a category, we use the BERT+Discovery model to predict the most plausible marker m in each example. The classification datasets thus yield a list of (y, m) pairs. Association rules (Hipp et al., 2000) can be used to find interesting rules of the form (m \u21d2 y), or (y \u21d2 m). We discard examples where no marker is predicted, and we discard markers that were predicted fewer than 20 times for a particular dataset. Table 2 shows a sample of markers with the highest probability P (y|m), i.e. the probability of a class given a marker. An extended table, which includes a larger sample of significant markers for all datasets included in our experiments, is available in appendix A, and an even larger, exhaustive table of 2.9k associations is publicly available. 3 The associations for some markers are intuitively correct (likewise denotes a semantic similarity expected in front of a paraphrase, sadly denotes a negative feeling, etc.), and they display a predictive power much higher than random choices. Other associations seem more surprising at first glance, for example, seriously as a marker of sarcasm; although on second thought, it seems a reasonable assumption that seriously does not actually signal a serious message, but rather a sarcastic comment on the preceding sentence. Generally speaking, we notice the same tendency for each class: our model predicts both fairly obvious markers (unfortunately as a marker for negative sentiment, in contrast for contradiction) and less conspicuous markers (e.g. initially and curiously for the same respective categories) that are perfectly acceptable, even though they might have been missed by (and indeed are not present in) a priori approaches to discourse marker categorization. The associations seem to vary across domains (e.g. between CR and SST2), but some markers (e.g. unfortunately) seem to have more robust associations than others. Table 4 provides some Discovery samples where the markers are used accordingly. On a related note, it is encouraging to see that the top markers predicted on the implicit PDTB dataset are similar to those present in the more recent English-DimLex lexicon, which annotates PDTB categories as senses for discourse markers (Das et al., 2018) . This indicates that our approach is able to induce genuine discourse markers for discourse categories that coincide with linguistic intuitions; however, our approach has the advantage of laying bare less obvious markers that might easily be overlooked by an a priori categorization.",
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 332, |
|
"text": "(Hipp et al., 2000)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 891, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2355, |
|
"end": 2373, |
|
"text": "(Das et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 540, |
|
"end": 547, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 2036, |
|
"end": 2043, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prediction of markers associated to semantic categories", |
|
"sec_num": "4.2." |
|
}, |
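The association statistics described above (support, and confidence P(y|m) for rules of the form m => y, with rare markers discarded) can be computed directly from the list of (y, m) pairs. A minimal sketch; the function name and output layout are our own choices.

```python
from collections import Counter

def marker_associations(pairs, min_support=20):
    """Support and confidence P(y|m) of rules (m => y) from (category,
    marker) pairs produced by running the marker predictor over a labeled
    dataset. Markers predicted fewer than `min_support` times on the
    dataset are discarded, as described in the text."""
    marker_counts = Counter(m for _, m in pairs)  # how often m was predicted
    rule_counts = Counter(pairs)                  # how often (y, m) co-occur
    rules = []
    for (y, m), n in rule_counts.items():
        if marker_counts[m] < min_support:
            continue  # marker predicted too rarely on this dataset
        rules.append({"marker": m, "class": y, "support": n,
                      "confidence": n / marker_counts[m]})
    # Highest-confidence rules first, as in the sample tables.
    return sorted(rules, key=lambda r: -r["confidence"])
```

Comparing each rule's confidence against the class prior P(y) then indicates how much more informative the marker is than a random guess.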
|
{ |
|
"text": "Based on a model trained for the prediction of discourse markers, we have established links between the categories of various semantically annotated datasets and discourse markers. Compared to a priori approaches to discourse marker categorization, our method has the advantage of revealing less conspicuous but perfectly sensible markers for particular categories. The resulting associations can straightforwardly be used to guide corpus analyses, for example to define an empirically grounded typology of marker use. More qualitative analyses would be needed to elucidate subtleties in the most unexpected results. In further work, we plan to use the associations we found as a heuristic to choose discourse markers whose prediction is most helpful for transferable sentence representation learning.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Asher, N. and Lascarides, A. (2003) . Logics of conversation. Cambridge University Press. Table 5 : Categories and most associated marker. CR.negative denotes the negative class in the CR dataset. Datasets are described in table 3. (Supp)ort is the number of examples where the marker was predicted given a dataset. (Conf)idence is the estimated probability of the class given the prediction of the marker, i.e. P (y|m). The prior is P (y). The full version is available at https://github.com/synapse-developpement/DiscSense",
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 35, |
|
"text": "Lascarides, A. (2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 140, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bibliographical References", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "https://github.com/huggingface/pytorch-pretrained-BERT/",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/synapse-developpement/DiscSense",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Implicitness of Discourse Relations", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Asr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asr, F. T. and Demberg, V. (2012). Implicitness of Dis- course Relations. In COLING.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.05326" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bowman, S. R., Angeli, G., Potts, C., and Manning, C. D. (2015). A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning Connectivebased Word Representations for Implicit Discourse Relation Identification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Braud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--213", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Braud, C. and Denis, P. (2016). Learning Connective- based Word Representations for Implicit Discourse Re- lation Identification. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 203-213, Austin, Texas, nov. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Carlson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Okurowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Second SIGdial Workshop on Discourse", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carlson, L., Marcu, D., and Okurowski, M. E. (2001). Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue - Volume 16, SIGDIAL '01, pages 1-10, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "SentEval: An evaluation toolkit for universal sentence representations", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conneau, A. and Kiela, D. (2018). SentEval: An evalu- ation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "FDTB1: Rep\u00e9rage des connecteurs de discours dans un corpus fran\u00e7ais. Discours -Revue de linguistique, psycholinguistique et informatique", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Danlos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Colinet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Steinlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danlos, L., Colinet, M., and Steinlin, J. (2015). FDTB1: Rep\u00e9rage des connecteurs de discours dans un corpus fran\u00e7ais. Discours -Revue de linguistique, psycholin- guistique et informatique, (17), dec.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Constructing a Lexicon of {English} Discourse Connectives", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Scheffler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bourgonje", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "360--365", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Das, D., Scheffler, T., Bourgonje, P., and Stede, M. (2018). Constructing a Lexicon of {English} Discourse Connec- tives. In Proceedings of the 19th Annual SIGdial Meet- ing on Discourse and Dialogue, pages 360-365, Mel- bourne, Australia, jul. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). Bert: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "{COLING} 2004, 20th International Conference on Computational Linguistics, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dolan, B., Quirk, C., and Brockett, C. (2004). Un- supervised Construction of Large Paraphrase Cor- pora: Exploiting Massively Parallel News Sources. In {COLING} 2004, 20th International Conference on Computational Linguistics, Proceedings of the Confer- ence, 23-27 August 2004, Geneva, Switzerland.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Algorithms for Association Rule Mining &Mdash; a General Survey and Comparison", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hipp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "G\u00fcntzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Nakhaeizadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "SIGKDD Explor. Newsl", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "58--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hipp, J., G\u00fcntzer, U., and Nakhaeizadeh, G. (2000). Al- gorithms for Association Rule Mining &Mdash; a Gen- eral Survey and Comparison. SIGKDD Explor. Newsl., 2(1):58-64, jun.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jernite, Y., Bowman, S. R., and Sontag, D. (2017). Discourse-Based Objectives for Fast Unsupervised Sen- tence Representation Learning.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Adam: A Method for Stochastic Optimization", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kingma, D. and Ba, J. (2014). Adam: A Method for Stochastic Optimization. International Conference on Learning Representations, pages 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A data-driven methodology for motivating a set of coherence relations", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Knott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knott, A. (1996). A data-driven methodology for motivat- ing a set of coherence relations. Ph.D. thesis, University of Edinburgh, {UK}.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Implicit discourse relation classification via multi-task neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Sui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liu, Y. P., Li, S., Zhang, X., and Sui, Z. (2016). Implicit discourse relation classification via multi-task neural net- works. ArXiv, abs/1603.02776.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Automatic Prediction of Discourse Connectives", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Malmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Pighin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kozhevnikov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malmi, E., Pighin, D., Krause, S., and Kozhevnikov, M. (2018). Automatic Prediction of Discourse Connectives. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan, may. European Language Resource Association.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "989--999", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nie, A., Bennett, E., and Goodman, N. (2019). Dis- Sent: Learning sentence representations from explicit discourse relations. pages 4497-4510, July. Pan, B., Yang, Y., Zhao, Z., Zhuang, Y., Cai, D., and He, X. (2018a). Discourse Marker Augmented Network with Reinforcement Learning for Natural Language In- ference. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 989-999, Melbourne, Australia, jul. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Discourse marker augmented network with reinforcement learning for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pan, B., Yang, Y., Zhao, Z., Zhuang, Y., Cai, D., and He, X. (2018b). Discourse marker augmented network with reinforcement learning for natural language inference. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Building a Web-Scale Dependency-Parsed Corpus from Common Crawl", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ruppert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Faralli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1816--1823", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Panchenko, A., Ruppert, E., Faralli, S., Ponzetto, S. P., and Biemann, C. (2017). Building a Web- Scale Dependency-Parsed Corpus from Common Crawl. pages 1816-1823.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Easily Identifiable Discourse Relations", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Raghupathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mehta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Coling 2008: Companion volume: Posters", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pitler, E., Raghupathy, M., Mehta, H., Nenkova, A., Lee, A., and Joshi, A. (2008). Easily Identifiable Discourse Relations. In Coling 2008: Companion volume: Posters, pages 87-90. Coling 2008 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The Penn Discourse TreeBank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Robaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prasad, R., Dinesh, N., Lee, A., Miltsakaki, E., Robaldo, L., Joshi, A., and Webber, B. (2008). The Penn Discourse TreeBank 2.0. In Bente Maegaard Joseph Mariani Jan Odijk Stelios Piperidis Daniel Tapias Nicoletta Calzo- lari (Conference Chair) Khalid Choukri, editor, Proceed- ings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Mo- rocco, may. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "LEXCONN: A French Lexicon of Discourse Connectives", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Roze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Danlos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Discours", |
|
"volume": "", |
|
"issue": "10", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roze, C., Danlos, L., and Muller, P. (2012). LEXCONN: A French Lexicon of Discourse Connectives. Discours, (10).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Discourse-based evaluation of language understanding", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sileo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "De Cruys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Pradel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sileo, D., de Cruys, T. V., Pradel, C., and Muller, P. (2019a). Discourse-based evaluation of language under- standing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Mining discourse markers for unsupervised sentence representation learning", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sileo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Van De Cruys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Pradel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3477--3486", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sileo, D., Van De Cruys, T., Pradel, C., and Muller, P. (2019b). Mining discourse markers for unsupervised sentence representation learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Papers), pages 3477-3486, Minneapolis, Minnesota, June. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Di{M}{L}ex: A Lexical Approach to Discourse Markers", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Exploring the Lexicon -Theory and Computation. Edizioni dell'Orso", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stede, M. (2002). Di{M}{L}ex: A Lexical Approach to Discourse Markers. In Exploring the Lexicon -Theory and Computation. Edizioni dell'Orso, Alessandria.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2019). {GLUE}: A Multi-Task Bench- mark and Analysis Platform for Natural Language Un- derstanding. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A Discourse Resource for Turkish: Annotating Discourse Connectives in the {METU} Corpus", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zeyrek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeyrek, D. and Webber, B. (2008). A Discourse Resource for Turkish: Annotating Discourse Connectives in the {METU} Corpus. In Proceedings of IJCNLP.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Chinese Discourse Treebank 0", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "2014--2035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou, Y., Lu, J., Zhang, J., and Xue, N. (2014). Chinese Discourse Treebank 0.5 {LDC2014T21}.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "Discourse marker prediction accuracy percentages on Wiki20 and Discovery datasets. Human Raters and Decomposable Attention are from", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>marker</td><td>category</td><td colspan=\"3\">support confidence (prior)</td></tr><tr><td>unfortunately,</td><td>CR.negative</td><td/><td>66</td><td>100.0 (21.8)</td></tr><tr><td>sadly,</td><td>CR.negative</td><td/><td>20</td><td>95.2 (21.8)</td></tr><tr><td>unfortunately,</td><td>SST-2.negative</td><td/><td>240</td><td>96.0 (22.5)</td></tr><tr><td>as a result,</td><td>SST-2.negative</td><td/><td>65</td><td>94.2 (22.5)</td></tr><tr><td>in contrast,</td><td>MNLI.contradiction</td><td/><td>1182</td><td>74.1 (16.9)</td></tr><tr><td>curiously,</td><td>MNLI.contradiction</td><td/><td>2912</td><td>70.8 (16.9)</td></tr><tr><td>technically,</td><td>SICKE.contradiction</td><td/><td>29</td><td>87.9 (7.8)</td></tr><tr><td>rather,</td><td>SICKE.contradiction</td><td/><td>147</td><td>69.7 (7.8)</td></tr><tr><td>similarly,</td><td>MRPC.paraphrase</td><td/><td>85</td><td>87.6 (35.5)</td></tr><tr><td>likewise,</td><td>MRPC.paraphrase</td><td/><td>103</td><td>84.4 (35.5)</td></tr><tr><td>instead,</td><td>PDTB.Alternative</td><td/><td>27</td><td>22.5 (0.6)</td></tr><tr><td>then,</td><td colspan=\"2\">PDTB.Asynchronous</td><td>60</td><td>38.7 (2.4)</td></tr><tr><td>previously,</td><td colspan=\"2\">PDTB.Asynchronous</td><td>36</td><td>36.4 (2.4)</td></tr><tr><td>by doing this,</td><td>PDTB.Cause</td><td/><td>22</td><td>61.1 (14.8)</td></tr><tr><td>additionally</td><td>PDTB.Conjunction</td><td/><td>47</td><td>63.5 (12.5)</td></tr><tr><td>but</td><td>PDTB.Contrast</td><td/><td>89</td><td>61.4 (7.0)</td></tr><tr><td>elsewhere,</td><td>PDTB.List</td><td/><td>41</td><td>16.2 (1.3)</td></tr><tr><td>specifically,</td><td>PDTB.Restatement</td><td/><td>100</td><td>67.6 (10.6)</td></tr><tr><td>seriously,</td><td>SarcasmV2.sarcasm</td><td/><td>225</td><td>71.2 (26.7)</td></tr><tr><td>so,</td><td>SarcasmV2.sarcasm</td><td/><td>81</td><td>65.6 (26.7)</td></tr></table>", |
|
"text": "enumerates the classification datasets we used in our study.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"text": "Sample of categories and most associated markers. CR.neg denotes the negative class in the CR dataset.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>dataset</td><td>categories</td><td>exemple&class</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>sentence1</td><td>sentence2</td><td>marker</td><td>sense</td></tr><tr><td colspan=\"2\">every act of god is holy because god is holy . every act of god is loving because god is</td><td>likewise,</td><td>Similarity</td></tr><tr><td/><td>love .</td><td/><td/></tr><tr><td>it gives you a schizophrenic feeling when try-</td><td>it 's just a bad experience .</td><td>sadly,</td><td>Negative</td></tr><tr><td>ing to navigate a web page .</td><td/><td/><td/></tr><tr><td>the article below was published a i do n't</td><td>this could be a problem ! ! ! !</td><td>seriously,</td><td>Sarcasm</td></tr><tr><td>think i can stop with the exclamation marks</td><td/><td/><td/></tr><tr><td>! ! !</td><td/><td/><td/></tr><tr><td>ayite , think of link building as brand build-</td><td>there are no shortcuts .</td><td colspan=\"2\">unfortunately, Negative</td></tr><tr><td>ing .</td><td/><td/><td/></tr><tr><td>you will seldom meet new people .</td><td colspan=\"2\">in medellin you will definitely meet people . in contrast,</td><td>Contradiction</td></tr><tr><td>if i burn a fingertip , i 'll moan all night .</td><td>it did n't look too bad .</td><td>initially,</td><td/></tr></table>", |
|
"text": "Classification datasets considered in our study; N train is the number of training examples", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"text": "Examples of the Discovery datasets illustrating various relation senses predicted by DiscSense", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |