{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:48:58.491625Z"
},
"title": "On the Creation of a Corpus for Coherence Evaluation of Discursive Units",
"authors": [
{
"first": "Elham",
"middle": [],
"last": "Mohammadi",
"suffix": "",
"affiliation": {
"laboratory": "Computational Linguistics at Concordia (CLaC) Laboratory",
"institution": "Concordia University",
"location": {
"settlement": "Montr\u00e9al",
"region": "Qu\u00e9bec",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Timothe",
"middle": [],
"last": "Beiko",
"suffix": "",
"affiliation": {
"laboratory": "Computational Linguistics at Concordia (CLaC) Laboratory",
"institution": "Concordia University",
"location": {
"settlement": "Montr\u00e9al",
"region": "Qu\u00e9bec",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": "",
"affiliation": {
"laboratory": "Computational Linguistics at Concordia (CLaC) Laboratory",
"institution": "Concordia University",
"location": {
"settlement": "Montr\u00e9al",
"region": "Qu\u00e9bec",
"country": "Canada"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we report on our experiments towards the creation of a corpus for coherence evaluation. Most corpora for textual coherence evaluation are composed of randomly shuffled sentences that focus on sentence ordering, regardless of whether the sentences were originally related by a discourse relation. To the best of our knowledge, no publicly available corpus has been designed specifically for the evaluation of coherence of known discursive units. In this paper, we focus on coherence modeling at the intra-discursive level and describe our approach to build a corpus of incoherent pairs of sentences. We experimented with a variety of corruption strategies to create synthetic incoherent pairs of discourse arguments from coherent ones. Using discourse argument pairs from the Penn Discourse Tree Bank (Prasad et al., 2008), we generate incoherent discourse argument pairs, by swapping either their discourse connective or a discourse argument. To evaluate how incoherent the generated corpora are, we use a convolutional neural network to try to distinguish the original pairs from the corrupted ones. Results of the classifier as well as a manual inspection of the corpora show that generating such corpora is still a challenge as the generated instances are clearly not \"incoherent enough\", indicating that more effort should be spent on developing more robust ways of generating incoherent corpora.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we report on our experiments towards the creation of a corpus for coherence evaluation. Most corpora for textual coherence evaluation are composed of randomly shuffled sentences that focus on sentence ordering, regardless of whether the sentences were originally related by a discourse relation. To the best of our knowledge, no publicly available corpus has been designed specifically for the evaluation of coherence of known discursive units. In this paper, we focus on coherence modeling at the intra-discursive level and describe our approach to build a corpus of incoherent pairs of sentences. We experimented with a variety of corruption strategies to create synthetic incoherent pairs of discourse arguments from coherent ones. Using discourse argument pairs from the Penn Discourse Tree Bank (Prasad et al., 2008), we generate incoherent discourse argument pairs, by swapping either their discourse connective or a discourse argument. To evaluate how incoherent the generated corpora are, we use a convolutional neural network to try to distinguish the original pairs from the corrupted ones. Results of the classifier as well as a manual inspection of the corpora show that generating such corpora is still a challenge as the generated instances are clearly not \"incoherent enough\", indicating that more effort should be spent on developing more robust ways of generating incoherent corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A common assumption in natural language analysis is that the input text is coherent. However, this premise may not always hold, especially in the case of automatically generated texts or texts written by humans with lower language skills or with health issues affecting language. In these cases, the automatic evaluation of textual coherence can help towards improving the quality of automaticallygenerated text or detecting authors with specific linguistic deficiencies (Abdalla et al., 2018) . In order to perform automatic coherence evaluation, a corpus including both coherent and incoherent samples is needed. Coherent texts, are easy to find; however incoherent texts are not. Most corpora for textual coherence evaluation are synthetic data sets composed of randomly shuffled sentences (Lapata and Barzilay, 2005; Li and Jurafsky, 2017a; Logeswaran et al., 2018) which are commonly used for sentence ordering tasks (Logeswaran et al., 2018; Cui et al., 2018; Gong et al., 2016; Chen et al., 2016) . However, these corpora do not consider if the original pairs of sentences are related by a discourse relation or not; hence, the difficulty of the sentence ordering task may vary significantly. To our knowledge, no publicly available corpus exists for coherence evaluation of known discursive units where the sentence pairs are known to have a specific discourse relation. In this paper, we describe our approach to build a corpus of grammatically correct, but incoherent pairs of sentences. We experimented with a variety of corruption strategies to create synthetic incoherent pairs of sentences from coherent sentences with a known discourse relation. The corpora were created by swapping discourse arguments from original coherent discursive units and reconstructing new units, on the grounds that these new units would likely be incon-sistent, yet grammatically correct. Using the Penn Discourse Tree Bank (PDTB) (Prasad et al., 2008 ) corpus, we created a collection of pairs of sentences with a known discourse relation, then corrupted them by either modifying their discourse connective or a discourse argument. For example, the discursive unit 1 :",
"cite_spans": [
{
"start": 471,
"end": 493,
"text": "(Abdalla et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 793,
"end": 820,
"text": "(Lapata and Barzilay, 2005;",
"ref_id": "BIBREF7"
},
{
"start": 821,
"end": 844,
"text": "Li and Jurafsky, 2017a;",
"ref_id": "BIBREF9"
},
{
"start": 845,
"end": 869,
"text": "Logeswaran et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 922,
"end": 947,
"text": "(Logeswaran et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 948,
"end": 965,
"text": "Cui et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 966,
"end": 984,
"text": "Gong et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 985,
"end": 1003,
"text": "Chen et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 1924,
"end": 1944,
"text": "(Prasad et al., 2008",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) [John did not eat breakfast this morning.] ARG1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "[He managed to wait until 1 pm for his lunch date.] ARG2 (COMPARISON:Contrast)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "is composed of two sentences related by a contrast discourse relation. The first sentence constitutes the first argument (Arg1) of the discourse unit; while the second sentence is known as argument 2 (Arg2). Although not explicitly marked, the two arguments are connected via an implicit discourse connective (DC) such as nevertheless. In order to corrupt this instance, we can first explicitly insert its implicit discourse connective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(2) [John did not eat breakfast this morning.] ARG1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "[Nevertheless] DC [he managed to wait until 1 pm for his lunch date.] ARG2 (COMPARI-SON:Contrast)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Then, we can corrupt the resulting instance by either replacing the discourse connective with another known to signal a different discourse relation, as in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "( thus, creating two incoherent sentence pairs. This paper first reviews related work in the area of coherence evaluation in Section 2. Section 3 describes the six corruption strategies that were experimented with to create the corpora of incoherent sentence pairs. Section 4 describes our methods to evaluate the generated corpora. Finally, Section 5 concludes this work and proposes future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Previous work in coherence modeling and evaluation has mostly focused on machine-generated text. Lapata and Barzilay (2005) discuss two linguistically rich coherence models that can be used for the automatic coherence evaluation of machine-generated content. Their dataset consists of summaries that were produced by participating systems in the Document Understanding Conference 2 , tagged with their respective coherence level by human annotators. In order to automatically evaluate the coherence level of the machine-generated summaries and compare the results with human judgement, they use a syntactic model that takes into account entity transitions to distinguish between coherent and incoherent text, and a semantic model that evaluates coherence by using various measures of semantic similarity between sentences. Based on their experiments, a combined approach that makes use of both syntactic and semantic models outperforms a single one. Assuming that coherent texts exhibit certain discourse structures, Lin et al. (2011) experiment with the use of discourse relations for the automatic evaluation of text coherence. In order to have a large collection of texts for training, they create synthetic data from a collection of source documents by permuting their sentences. They design a discourse role matrix which includes occurrences of terms and their discourse roles and use it to model transitions between textual units. They find this approach effective in distinguishing between an original coherent text and a permuted version of that text lacking coherence. Following the same approach as Lin et al. (2011) , Li and Hovy (2014) build a synthetic dataset for coherence detection which consists of source documents and their permuted versions (with a different ordering of their sentences). They feed distributed representations of tokens to a recursive neural network which computes sentence representations based on the tree structure of sentences. These distributed sentence representations are later used for coherence detection. Li and Jurafsky (2017b) develop a neural model for coherence evaluation that is trained on a collection of coherent documents and their incoherent permuted versions (similar to the dataset used by Li and Hovy (2014) ). An LSTM is used to extract sentence representations of a text. These representations are then fed to another network which calculates the probability of a text's coherence. Although this model proves effective in the task of coherence evaluation, they mention negative sampling as a disadvantage of a discriminative model, as the generated negative samples cannot possibly cover all possible meanings. Tien Nguyen and Joty (2017) use texts' entity grid representations as input to a Convolutional Neural Network (CNN) to perform various coherence-related tasks, one of which is summary coherence rating. The dataset in their work consists of documents and multiple summaries of each document which have been generated by both humans and automatic summarization systems and ranked by human experts. Their results show that using CNNs can actually lead to an improvement on the previously reported results on the same task. As shown above, most previous work in coherence modeling has focused on sentence ordering by creating permutations of source documents with a different ordering of their sentences. 
This paper goes beyond this as it focuses on coherence modeling at the intra-discursive level by evaluating the coherence between sentence pairs with known discourse relations.",
"cite_spans": [
{
"start": 97,
"end": 123,
"text": "Lapata and Barzilay (2005)",
"ref_id": "BIBREF7"
},
{
"start": 1017,
"end": 1034,
"text": "Lin et al. (2011)",
"ref_id": "BIBREF11"
},
{
"start": 1609,
"end": 1626,
"text": "Lin et al. (2011)",
"ref_id": "BIBREF11"
},
{
"start": 2249,
"end": 2267,
"text": "Li and Hovy (2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In order to create a corpus of incoherent pairs of sentences, we used the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) . The PDTB contains 40,600 annotated discourse connectives along with their discourse arguments. The PDTB follows the DLTAG framework (Marcus et al., 1993) which takes a shallow view of discourse structures where relations are defined only between adjacent sentences or close text spans. The two textual units related by a discourse relation are known as arguments (Arg1 and Arg2). The PDTB annotates the beginning and end of Arg1 and Arg2, a possible discourse connective (DC) (for example, because) and the discourse relation (known as sense). The PDTB contains 18,459 instances with an explicit DC (from an inventory of 100 DCs) and 16,053 instances with an implicit DC where the annotators inferred a DC. Example 5 shows an instance of an implicit discourse relation from the PDTB.",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF16"
},
{
"start": 261,
"end": 282,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1."
},
{
"text": "(5) [So much of the stuff poured into its Austin, Texas, offices that its mail rooms there simply stopped delivering it.] ARG1 Implicit = so [Now, thousands of mailers, catalogs and sales pitches go straight into the trash.] ARG2 (CONTINGENCY:Cause:result)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1."
},
{
"text": "In order to maintain the grammaticality of the corrupted instances as much as possible, we used only the PDTB instances containing a discourse connective marked as implicit. This is because, in these cases, both Arg1 and Arg2 refer to two individual sentences, and Arg1 always precedes Arg2. In addition, since the implicit discourse connective is guaranteed to be located at the beginning of Arg2, when making this connective explicit, we minimize the chances of creating an ungrammatical Arg2 3 . This led to 16,053 instances that were used as the positive set that we then corrupted using six different methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1."
},
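To make this preprocessing step concrete, the following is a minimal sketch, assuming the implicit PDTB instances have already been loaded into simple records; the `Instance` structure and its field names are hypothetical conveniences, not part of the PDTB distribution:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    arg1: str   # first discourse argument (a full sentence)
    conn: str   # implicit discourse connective inferred by the annotators
    arg2: str   # second discourse argument (a full sentence)
    sense: str  # discourse relation, e.g. "COMPARISON:Contrast"

def make_explicit(inst: Instance) -> str:
    """Insert the implicit connective at the start of Arg2, as in Example (2)."""
    conn = inst.conn.capitalize()
    # Lowercase the original first character of Arg2 so the sentence reads naturally.
    arg2 = inst.arg2[0].lower() + inst.arg2[1:] if inst.arg2 else inst.arg2
    return f"{inst.arg1} {conn} {arg2}"
```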
{
"text": "To corrupt the coherent instances, 6 strategies were used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruption Strategies",
"sec_num": "3.2."
},
{
"text": "1. Random Arg2 (RA2): The Arg2 of an instance is swapped with another random Arg2 in the dataset, without regards to their senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corruption Strategies",
"sec_num": "3.2."
},
{
"text": "The discourse connective (DC) of an instance is swapped with another random DC in the dataset, without regards to their senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random DC (RDC):",
"sec_num": "2."
},
{
"text": "In order to create incoherent instances that would be easier to detect, we also tried to ensure that the discourse sense of the original instances was not maintained. This led to two other strategies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random DC (RDC):",
"sec_num": "2."
},
{
"text": "3. Different Sense Arg2 (DSA2): The Arg2 of an instance is swapped with another Arg2 in the dataset, whose sense is different from the original instance's sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random DC (RDC):",
"sec_num": "2."
},
{
"text": "The DC of an instance is swapped with another DC in the dataset, whose sense is different from the original connective's sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Sense DC (DSDC):",
"sec_num": "4."
},
{
"text": "Finally, we also tried to maintain the discourse relations, hoping to create corrupted instances that would be much harder to detect as incoherent. This led to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Sense DC (DSDC):",
"sec_num": "4."
},
{
"text": "5. Same Sense Arg2 (SSA2): The Arg2 of an instance is swapped with another Arg2 in the dataset, whose sense is identical to the original instance's sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Sense DC (DSDC):",
"sec_num": "4."
},
{
"text": "6. Same DC Arg2 (SDCA2): The Arg2 of an instance is swapped with another Arg2 in the dataset, whose discourse connective (DC) is identical to the original instance's DC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Sense DC (DSDC):",
"sec_num": "4."
},
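As an illustration only, the six strategies can be sketched as follows, reusing the hypothetical `Instance` records from the earlier sketch; collision handling and other sampling details are simplified:

```python
import random

def _other(data, i):
    """Pick any instance other than the one at index i."""
    j = random.randrange(len(data) - 1)
    return data[j if j < i else j + 1]

def corrupt(data, i, strategy):
    """Return a (replacement_dc, replacement_arg2) pair for instance i;
    None means that field is kept unchanged."""
    inst = data[i]
    if strategy == "RA2":    # random Arg2, senses ignored
        return None, _other(data, i).arg2
    if strategy == "RDC":    # random DC, senses ignored
        return _other(data, i).conn, None
    if strategy == "DSA2":   # Arg2 from an instance with a different sense
        pool = [x for x in data if x.sense != inst.sense]
        return None, random.choice(pool).arg2
    if strategy == "DSDC":   # DC from an instance with a different sense
        pool = [x for x in data if x.sense != inst.sense]
        return random.choice(pool).conn, None
    if strategy == "SSA2":   # Arg2 from an instance with the same sense
        pool = [x for k, x in enumerate(data) if k != i and x.sense == inst.sense]
        return None, random.choice(pool).arg2
    if strategy == "SDCA2":  # Arg2 from an instance with the same DC
        pool = [x for k, x in enumerate(data) if k != i and x.conn == inst.conn]
        return None, random.choice(pool).arg2
    raise ValueError(strategy)
```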
{
"text": "We applied these 6 corruption strategies to the original coherent instances of the PDTB and thus created 6 corpora. Table 1 shows statistics of these corpora. For each corpus, the table indicates the number of instances, the maximum sentence length, denoted max L, and whether it is a set of coherent or incoherent sentence pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Different Sense DC (DSDC):",
"sec_num": "4."
},
{
"text": "In order to evaluate the quality of the 6 generated corpora, we proceeded with an automatic as well as a manual evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4."
},
{
"text": "3 For example, if placed at the beginning of Arg2, some connectives such as because will create an ungrammatical sentence. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4."
},
{
"text": "For the automatic evaluation, we developed a classifier to try to discriminate coherent from incoherent instances. To do this, we used the CNN architecture used by Kim (2014) to classify movie reviews as either positive or negative. This model was chosen as the two tasks are similar and Kim (2014) achieves a high accuracy (0.81) on their dataset of movie reviews.",
"cite_spans": [
{
"start": 164,
"end": 174,
"text": "Kim (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1."
},
{
"text": "To run the classifier, we first merged the coherent corpus with each incoherent corpus, and labeled each instance as either coherent (1) or incoherent (0). Then, we padded each instance shorter than the longest instance in the dataset (denoted by max L in Table 1 ) to ensure that all the inputs to our model had the same length. Finally, we randomly shuffled the data and kept 90% of the instances for the training set and 10% for the test set. As the datasets were balanced, we used accuracy as our evaluation metric. Figure 1 shows the overall architecture of the model. As shown in Figure 1 , the convolution layer was applied over the word vectors and supported either a single or multiple convolution filters. Maxpooling was then used on the result of the convolutional layer and dropout regularization was added. Lastly, the output layer used a softmax activation function for the final classification. We used word2vec (Mikolov et al., 2012) as word embeddings with a dimension of 300, pre-trained on the 100 billion words from the Google News corpus. We made the embeddings non-trainable and ran the model with parameters that restricted its capacity.",
"cite_spans": [
{
"start": 927,
"end": 949,
"text": "(Mikolov et al., 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 520,
"end": 528,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 586,
"end": 594,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1."
},
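A minimal Keras sketch of this pipeline is given below, assuming the instances have already been tokenized into integer id sequences and a 300-dimensional word2vec matrix `emb_matrix` has been built; the filter sizes, filter count, and dropout rate are illustrative defaults in the spirit of Kim (2014), not the exact values used in the paper:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

def build_cnn(vocab_size, max_len, emb_matrix,
              filter_sizes=(3, 4, 5), n_filters=100):
    inp = layers.Input(shape=(max_len,))
    # Frozen (non-trainable) pre-trained 300-d word2vec embeddings.
    emb = layers.Embedding(vocab_size, 300,
                           weights=[emb_matrix], trainable=False)(inp)
    # One convolution + max-pooling branch per filter size.
    pooled = [layers.GlobalMaxPooling1D()(
                  layers.Conv1D(n_filters, fs, activation="relu")(emb))
              for fs in filter_sizes]
    x = layers.Dropout(0.5)(layers.Concatenate()(pooled))
    out = layers.Dense(2, activation="softmax")(x)  # coherent vs. incoherent
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Pad every instance to the longest one (max L in Table 1),
# then shuffle and hold out 10% of the data for testing, e.g.:
# X = pad_sequences(token_ids, maxlen=max_len, padding="post")
```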
{
"text": "The model was trained and tested on the 6 merged versions of the datasets: (RA2, RDC, DSA2, DSDC, SSA2, and SDCA2) and coherent instances. Since the discourse connective is a strong signal to a discourse relation, we expected the performance on the DSA2 and DSDC datasets to be higher than the performance on RA2 and RDC, and the lowest results to be achieved on SSA2 and SDCA2. However, after experimenting with a variety of hyperparameters (batch size, filter size, etc.), much to our surprise, none of the datasets reached an accuracy significantly higher than 53.8% (the baseline being 50%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1."
},
{
"text": "In order to verify the validity of our model, we used it to reproduce the binary classification task described in Denny (2015) on the dataset of movie reviews (Pang et al., 2002) which contains 5331 positive and 5331 negative instances. With the same hyperparameters as before, the model reached an accuracy of 77%, which is comparable to the 76% reported in Denny (2015) . This confirmed that the problem was not with the model itself, but with the generated corpora.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Pang et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 359,
"end": 371,
"text": "Denny (2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1."
},
{
"text": "Recall that the DSA2 corpus was created by swapping the Arg2 of a discourse instance with another Arg2 in the dataset, provided that the two instances had different senses. Our intuition was that this corruption strategy (along with DSDC) would have led to the most incoherent instances. We, therefore, manually inspected sample instances of the DSA2 corpus, expecting to find clear cases of incoherence. To our surprise, the instances did not seem \"clearly incoherent\". For example, the following instances were part of the DSA2 corpus: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Inspection of the Incoherent Datasets",
"sec_num": "4.2."
},
{
"text": "In order to measure the incoherence level of the generated corpora more formally, we performed a human evaluation of samples of each corpus. Similar to the automatic evaluation, we first merged the coherent corpus with samples from different incoherent corpora. We then used the Crowdflower 4 crowdsourcing platform and asked annotators to rate each sample as either coherent or incoherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "4.3."
},
{
"text": "To ensure the quality of the annotations, we first created reference samples which were used to evaluate the annotators themselves. These reference samples consisted of instances from the corpora for which 4 English speakers agreed were either coherent or incoherent. If the crowdsourced annotators did not correctly classify over 60% of these reference samples, their annotations were discarded. We also ensured that multiple annotators would annotate each reference instance. Samples with less than 4 annotators were again discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "4.3."
},
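A sketch of this annotator filter, under the assumption that each annotator's answers and the reference ("gold") labels are available as dictionaries keyed by sample id (all names here are hypothetical):

```python
def keep_annotator(answers, gold, threshold=0.6):
    """Keep an annotator only if they correctly classify more than 60%
    of the reference samples they were shown; answers and gold map
    sample ids to coherent (1) / incoherent (0) labels."""
    scored = [sid for sid in answers if sid in gold]
    if not scored:
        return False
    correct = sum(answers[sid] == gold[sid] for sid in scored)
    return correct / len(scored) > threshold
```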
{
"text": "In addition, to ensure the quality of the annotations, we also used Crowdflower's confidence metric. Ranging from 0 to 1, this metric is calculated by the crowdsourcing platform and represents how many annotators classified instances the same way, weighted by the trustworthiness of each annotator, as measured by Crowdflower via metrics such as the annotators' answers in other tasks, their history on Crowdflower, and the time spent answering. Table 2 shows the total number of samples manually evaluated for each dataset, along with the percentage of incoherent samples marked coherent by the annotators, for varying levels of confidence, denoted C. Note that the ground truth is 0%, as one would expect that 0% of the incoherent instances would be perceived as coherent. For the sake of comparison, we also created a corpus from the DSA2 corpus (in principle, one of the most incoherent) where the words in Arg2 were shuffled at random. This corpus is referred to as ShuffledA2 in Table 2 . The expectation was that the ShuffledA2 instances would be judged as the least coherent. As Table 2 shows, the percentage of ShuffledA2 instances marked as coherent is indeed very low (6.25% for C>0.75). It is interesting to note that when C>0.75, in all of the incoherent corpora, except for ShuffledA2, over 40% of the instances are perceived coherent. Furthermore, Table 2 shows that in DSA2 (i.e. when swapping Arg2 with another Arg2 with a different sense) the percentage of coherent instances decreases from 55.56% in RA2 (random Arg2) to 42.31%. The same effect also holds for connectives, but to a lesser degree (96.30% to 85.71%). Finally, datasets in which the DC was changed (RDC and DSDC) seem to yield more instances perceived as coherent than when the entire Arg2 is changed (RA2, SSA2, and DSA2).",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 985,
"end": 992,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 1087,
"end": 1094,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "4.3."
},
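Crowdflower's exact formula is proprietary; the sketch below computes a comparable trust-weighted majority score and is only an approximation of the platform's confidence metric:

```python
def confidence(labels, trust):
    """Trust-weighted share of the majority label for one instance.
    labels: 0/1 coherence judgments, one per annotator;
    trust:  per-annotator trust weights in [0, 1], same order."""
    total = sum(trust)
    weight_coherent = sum(t for lab, t in zip(labels, trust) if lab == 1)
    # Confidence is the weight behind whichever label won the vote.
    return max(weight_coherent, total - weight_coherent) / total
```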
{
"text": "The results of the manual evaluation revealed that annotators seemed to have a strong bias towards perceiving pairs of sentences as coherent. Several factors may have led to this phenomenon. To recognize a discourse relation, annotators need to understand each argument, how they relate to one another, and how they relate to the larger context of the whole discourse. In our experiments, annotators had difficulty understanding each of these. First, the synthetic data created for this work was based on instances taken from the PDTB. The original instances were fairly long (with an average length of 37 words) and complex in terms of both syntactic structure and discourse domain, making the understanding of individual arguments difficult. Also, given the specialized domain of financial and business news, annotators did not have the expertise to comprehend the relations between entities and may have relied on the inserted discourse connectives as clues to assume that the arguments were coherent. Moreover, annotators were only given the pairs of sentences without a larger context. Without important contex-tual clues, annotators may not have been able to detect the incoherence and if the text allowed for a plausible interpretation, they would consider it coherent. Therefore, it is to be expected that, in the absence of contextual clues, coherence is only detected at a surface level by the annotators, resulting in inaccurate evaluations. Finally, when annotators were unsure, the binary classification task forced them to make a choice. In hindsight, it would seem more appropriate to treat intra-discursive coherence evaluation as a regression task instead of a binary classification task. These instances can have different degrees of coherence, rather than being absolutely coherent / incoherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4."
},
{
"text": "In this paper, we highlighted the challenges of building intra-discursive incoherent instances through corruption techniques. We used the Penn Discourse Tree Bank (Prasad et al., 2008) to generate incoherent instances, by swapping either the discourse connective (DC) or Argument 2 (Arg2) of known discursive units.",
"cite_spans": [
{
"start": 163,
"end": 184,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "We used the CNN model of Kim (2014) and Denny (2015) to classify these instances, but were unable to reach a performance greater than a random baseline. A manual evaluation through crowdsourcing revealed that the generated corpora were in fact not incoherent enough.",
"cite_spans": [
{
"start": 25,
"end": 35,
"text": "Kim (2014)",
"ref_id": "BIBREF6"
},
{
"start": 40,
"end": 52,
"text": "Denny (2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "The annotations showed that a large percentage of the incoherent samples were actually perceived coherent by the annotators. It also provided evidence that corruption methods for generating incoherent instances based on selecting a discourse argument or discourse connective with a different sense does not seem to significantly reduce coherence. Overall, these results show that the datasets generated were clearly not \"incoherent enough\", and that effort should be spent either developing more robust ways of generating incoherent instances, or annotating \"weakly corrupted\" samples, such as the ones generated by our methods. A few future directions can be proposed. First, we can adapt our method to create a corpus in which the corrupted instances are ranked based on their degree of incoherence rather than a binary classification. Also, it would be interesting to apply the same approach to shorter and syntactically simpler sentences from a simpler discourse domain. Finally, we would like to investigate the generation of synthetic instances of low coherence using Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) or sequence-to-sequence models (Sutskever et al., 2014) and explore the effectiveness of these methods for the creation of intra-discursive coherence corpora.",
"cite_spans": [
{
"start": 1113,
"end": 1138,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 1170,
"end": 1194,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to thank the anonymous reviewers for their feedback on a previous version of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": "6."
},
{
"text": "The first argument of the implicit discourse connective is marked as ARG1, the second argument is denoted ARG2 and the relation is marked at the end of the sentences in parentheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "duc.nist.gov",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.crowdflower.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rhetorical structure and Alzheimer's disease",
"authors": [
{
"first": "M",
"middle": [],
"last": "Abdalla",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Rudzicz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2018,
"venue": "Aphasiology",
"volume": "32",
"issue": "1",
"pages": "41--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdalla, M., Rudzicz, F., and Hirst, G. (2018). Rhetor- ical structure and Alzheimer's disease. Aphasiology, 32(1):41-60.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural sentence ordering. CoRR",
"authors": [
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, X., Qiu, X., and Huang, X. (2016). Neural sentence ordering. CoRR, abs/1607.06952.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep attentive sentence ordering network",
"authors": [
{
"first": "B",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4340--4349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cui, B., Li, Y., Chen, M., and Zhang, Z. (2018). Deep at- tentive sentence ordering network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4340-4349, Brus- sels, Belgium, October-November.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Implementing a CNN for text classification in Tensorflow",
"authors": [
{
"first": "B",
"middle": [],
"last": "Denny",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "2020--2021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny, B. (2015). Implementing a CNN for text classi- fication in Tensorflow. http://www.wildml.com/ 2015/12, December. Accessed: 2020-01-15.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Endto-end neural sentence ordering using pointer network",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gong, J., Chen, X., Qiu, X., and Huang, X. (2016). End- to-end neural sentence ordering using pointer network. CoRR, abs/1611.04953.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generative adversarial nets",
"authors": [
{
"first": "I",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27 (NIPS)",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS), pages 2672-2680, Montreal, Canada, December.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Y. (2014). Convolutional neural networks for sen- tence classification. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1746-1751, Doha, Qatar, Oc- tober.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic evaluation of text coherence: Models and representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "5",
"issue": "",
"pages": "1085--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lapata, M. and Barzilay, R. (2005). Automatic evaluation of text coherence: Models and representations. In Pro- ceedings of the Nineteenth International Joint Confer- ence on Artificial Intelligence (IJCAI), volume 5, pages 1085-1090, Edinburgh, Scotland, July.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A model of coherence based on distributed sentence representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2039--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J. and Hovy, E. (2014). A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 2039-2048, Doha, Qatar, October.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural net models of open-domain discourse coherence",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "198--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J. and Jurafsky, D. (2017a). Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 198-209, Copen- hagen, Denmark, September.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural net models of open-domain discourse coherence",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "198--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J. and Jurafsky, D. (2017b). Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 198-209, Copen- hagen, Denmark, September.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatically evaluating text coherence using discourse relations",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "997--1006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Z., Ng, H. T., and Kan, M.-Y. (2011). Automati- cally evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Lan- guage Technologies-Volume 1 (ACL/HLT), pages 997- 1006, Portland, USA, June.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentence ordering and coherence modeling using recurrent neural networks",
"authors": [
{
"first": "L",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "D",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5285--5292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Logeswaran, L., Lee, H., and Radev, D. R. (2018). Sen- tence ordering and coherence modeling using recurrent neural networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelli- gence (IAAI-18), and the 8th AAAI Symposium on Ed- ucational Advances in Artificial Intelligence (EAAI-18), pages 5285-5292, New Orleans, USA, February.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. P., Santorini, B., and Marcinkiewicz, M. A. (1993). Building a large annotated corpus of En- glish: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2012,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2012). Efficient estimation of word representations in vector space. CoRR, arXiv:1301.3781, January.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Thumbs up? sentiment classification using machine learning techniques",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B., Lee, L., and Vaithyanathan, S. (2002). Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 79-86, Philadelphia, Pennsylvania, USA, July.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Penn Discourse Treebank 2.0",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasad, R., Dinesh, N., Lee, A., Miltsakaki, E., Robaldo, L., Joshi, A. K., and Webber, B. L. (2008). The Penn Discourse Treebank 2.0. In Proceedings of the 6th Inter- national Conference on Language Resources and Evalu- ation (LREC), Marrakech, Morocco, May.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27 (NIPS)",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS), pages 3104-3112. Montreal, Canada, December.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A neural local coherence model",
"authors": [
{
"first": "Tien",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1320--1330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tien Nguyen, D. and Joty, S. (2017). A neural local coher- ence model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1320-1330, Vancouver, Canada, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Model architecture, taken from(Kim, 2014).",
"type_str": "figure",
"uris": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Statistics of the original coherent and the 6 generated incoherent corpora.",
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>(9) [Wall street had expected a modest rise in the</td></tr><tr><td/><td/><td/><td>company's domestic sales and earnings, and more</td></tr><tr><td/><td/><td/><td>substantial increases in overseas results.] ARG1 [In</td></tr><tr><td/><td/><td/><td>addition] DC [the dollar soared against the pound,</td></tr><tr><td/><td/><td/><td>which was at $1.5765 compared with $1.6145</td></tr><tr><td/><td/><td/><td>Wednesday.] ARG2</td></tr><tr><td colspan=\"4\">6) [In the 1970s, several pharmaceutical and</td></tr><tr><td colspan=\"4\">packaged-goods companies, including Colgate-</td></tr><tr><td colspan=\"4\">Palmolive co., Eli Lilly &amp; co., Pfizer inc.</td></tr><tr><td>and</td><td>Schering-Plough</td><td>acquired</td><td>cosmetics</td></tr><tr><td colspan=\"4\">companies.] (7) [By starving the peasant, the communists have</td></tr><tr><td colspan=\"4\">starved Poland.] ARG1 [For example] DC [we're</td></tr><tr><td colspan=\"4\">making a fairly obvious plea for some emotional</td></tr><tr><td colspan=\"2\">reaction.] ARG2</td><td/><td/></tr><tr><td colspan=\"4\">(8) [Some Canadian political commentators have op-</td></tr><tr><td colspan=\"4\">posed Canada's joining what they see as a U.S.-</td></tr><tr><td colspan=\"4\">dominated organization.] ARG1 [For example] DC</td></tr><tr><td colspan=\"4\">[instead of focusing on the financial future, Mr.</td></tr><tr><td colspan=\"4\">Dinkins has sold himself as a unifier for a city re-</td></tr><tr><td colspan=\"4\">cently touched by racial violence and as a sooth-</td></tr><tr><td colspan=\"4\">ing antidote to 12 years of commotion generated by</td></tr><tr><td colspan=\"2\">Mayor Koch] ARG2</td><td/><td/></tr><tr><td colspan=\"4\">The resulting instances are not clearly incoherent, showing</td></tr><tr><td colspan=\"4\">that even swapping Arg2s with a different sense may not</td></tr><tr><td>be sufficient.</td><td/><td/><td/></tr><tr><td colspan=\"4\">An example from the SDCA2 corpus shows the same diffi-</td></tr><tr><td colspan=\"2\">culty in judgment.</td><td/><td/></tr></table>",
"num": null,
"text": "ARG1 [However] DC [as that system grows, larger computers may be needed.] ARG2",
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Statistics of the evaluated samples for each dataset: percentage of incoherent samples judged coherent.",
"html": null
}
}
}
}