{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:10:15.313549Z"
},
"title": "Parsing Argumentative Structure in English-as-Foreign-Language Essays",
"authors": [
{
"first": "Jan",
"middle": [
"Wira",
"Gotama"
],
"last": "Putra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {
"country": "Japan"
}
},
"email": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a study on parsing the argumentative structure in English-as-foreignlanguage (EFL) essays, which are inherently noisy. The parsing process consists of two steps, linking related sentences and then labelling their relations. We experiment with several deep learning architectures to address each task independently. In the sentence linking task, a biaffine model performed the best. In the relation labelling task, a finetuned BERT model performed the best. Two sentence encoders are employed, and we observed that non-fine-tuning models generally performed better when using Sentence-BERT as opposed to BERT encoder. We trained our models using two types of parallel texts: original noisy EFL essays and those improved by annotators, then evaluate them on the original essays. The experiment shows that an end-toend in-domain system achieved an accuracy of .341. On the other hand, the cross-domain system achieved 94% performance of the indomain system. This signals that well-written texts can also be useful to train argument mining system for noisy texts.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a study on parsing the argumentative structure in English-as-foreignlanguage (EFL) essays, which are inherently noisy. The parsing process consists of two steps, linking related sentences and then labelling their relations. We experiment with several deep learning architectures to address each task independently. In the sentence linking task, a biaffine model performed the best. In the relation labelling task, a finetuned BERT model performed the best. Two sentence encoders are employed, and we observed that non-fine-tuning models generally performed better when using Sentence-BERT as opposed to BERT encoder. We trained our models using two types of parallel texts: original noisy EFL essays and those improved by annotators, then evaluate them on the original essays. The experiment shows that an end-toend in-domain system achieved an accuracy of .341. On the other hand, the cross-domain system achieved 94% performance of the indomain system. This signals that well-written texts can also be useful to train argument mining system for noisy texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Real-world texts are not always well-written, especially in the education area where students are still learning how to write effectively. It has been observed that student texts often require improvements at the discourse-level, e.g., in persuasiveness and content organisation aspects (Bamberg, 1983; Zhang and Litman, 2015; Carlile et al., 2018) . Worse still, texts written by non-native speakers are also less coherent, exhibit less lexical richness and more unnatural lexical choices and collocations (Johns, 1986; Silva, 1993; Rabinovich et al., 2016) . Our long-term goal is to improve EFL essays from the discourse perspective. One way to do this is by recommending a better arrangement of sentences, which enhances text coherence and comprehension (Connor, 2002; Bacha, 2010) . This may serve as feedback for students in the educational setting (Invanic, 2004) . The first step to achieve this goal, which is discussed in the current paper, is parsing argumentative structure in terms of dependencies between sentences. This is because the relationships between sentences are crucial to determine the proper order of sentences (Grosz and Sidner, 1986; Hovy, 1991; Webber and Joshi, 2012) .",
"cite_spans": [
{
"start": 287,
"end": 302,
"text": "(Bamberg, 1983;",
"ref_id": "BIBREF3"
},
{
"start": 303,
"end": 326,
"text": "Zhang and Litman, 2015;",
"ref_id": "BIBREF53"
},
{
"start": 327,
"end": 348,
"text": "Carlile et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 507,
"end": 520,
"text": "(Johns, 1986;",
"ref_id": "BIBREF27"
},
{
"start": 521,
"end": 533,
"text": "Silva, 1993;",
"ref_id": "BIBREF44"
},
{
"start": 534,
"end": 558,
"text": "Rabinovich et al., 2016)",
"ref_id": "BIBREF41"
},
{
"start": 758,
"end": 772,
"text": "(Connor, 2002;",
"ref_id": "BIBREF9"
},
{
"start": 773,
"end": 785,
"text": "Bacha, 2010)",
"ref_id": "BIBREF2"
},
{
"start": 855,
"end": 870,
"text": "(Invanic, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 1137,
"end": 1161,
"text": "(Grosz and Sidner, 1986;",
"ref_id": "BIBREF18"
},
{
"start": 1162,
"end": 1173,
"text": "Hovy, 1991;",
"ref_id": "BIBREF21"
},
{
"start": 1174,
"end": 1197,
"text": "Webber and Joshi, 2012)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes the application of Argument Mining (AM) to EFL essays. AM is an emerging area in computational linguistics which aims to explain how argumentative discourse units (e.g., sentences, clauses) function and relate to each other in the discourse, forming an argument as a whole (Lippi and Torroni, 2016) . AM has broad applications in various areas, such as in the legal (Ashley, 1990) and news domains. Also in the education domain, AM is beneficial for many downstream tasks such as text assessment ), text improvement (as described above) and teaching (Putra et al., 2020) . It is common in AM to use well-written texts written by proficient authors, as do Peldszus and Stede (2016) , , among others. However, there are more non-native speakers of English than native speakers in the world, and their writings are often noisy as previously described. Yet, EFL is a niche domain in AM.",
"cite_spans": [
{
"start": 294,
"end": 319,
"text": "(Lippi and Torroni, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 387,
"end": 401,
"text": "(Ashley, 1990)",
"ref_id": "BIBREF1"
},
{
"start": 571,
"end": 591,
"text": "(Putra et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 676,
"end": 701,
"text": "Peldszus and Stede (2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents three contributions. First, this paper presents an application of AM to EFL essays. We parse the argumentative structure in two steps: (i) a sentence linking step where we identify related sentences that should be linked, forming a tree structure, and (ii) a relation labelling step, where we label the relationship between the sentences. Several deep learning models were evaluated to address each step. We do not only evaluate the model performance based on individual links but also perform a structural analysis, giving more insights into the models' ability to learn different aspects of the argumentative structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second contribution is showing the effectiveness of well-written texts as training data for argumentative parsing of noisy texts. Many AM corpora exist for well-written texts because past studies typically assumed well-written input. Corpora with noisy texts, such as the EFL one we use here, exist, but are far more infrequent. In the past, well-written and noisy texts have been treated as two separate domains, and AM systems were trained separately on each domain. We want to investigate how far the existing labelled corpora for well-written texts can also be useful for training parsers for noisy texts. To this end, we train parsers on both in-domain and out-of-domain texts and evaluate them on the in-domain task. For our out-of-domain texts, we use the improved versions of noisy EFL texts. These improvements were produced by an expert annotator and have a quality closer to those of proficient authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The third contribution of this paper is an evaluation of Sentence-BERT (SBERT, Reimers and Gurevych (2019) ) in AM as a downstream application setting. BERT (Devlin et al., 2019 ) is a popular transformer-based language model (LM), but as it is designed to be fine-tuned, it can be suboptimal in low-resource settings. SBERT tries to alleviate this problem by producing a more universal sentence embeddings, that can be used as they are in many tasks. The idea of training embeddings on the natural language inference (NLI) task goes back to Conneau et al. (2017) , and this is the SBERT variant we use here. The NLI task involves recognising textual entailment (TE), and a TE model has been previously used by Cabrio and Villata (2012) for argumentation. We will quantify how the two encoders perform in our task. All resources of this paper are available on github. 1",
"cite_spans": [
{
"start": 79,
"end": 106,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF42"
},
{
"start": 157,
"end": 177,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 542,
"end": 563,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 711,
"end": 736,
"text": "Cabrio and Villata (2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Argumentative structure analysis consists of two main steps (Lippi and Torroni, 2016) . The first step is argumentative component identification (ACI), which segments a text into argumentative discourse units (ADUs); then differentiates them into argumentative (ACs) and non-argumentative components (non-ACs). ACs function argumentatively while non-ACs do not, e.g., describing a personal episode in response to the given writing prompt. ACs can be further classified according to their communicative roles, e.g., claim and premise. The second step is argumentative structure prediction, which contains two subtasks: (1) linking and (2) relation labelling. In the linking task, directed relations are established from source to target ACs to form a structured representation of the text, often in the form of a tree. In the relation labelling task, we identify the relations that connect them, e.g., support and attack.",
"cite_spans": [
{
"start": 60,
"end": 85,
"text": "(Lippi and Torroni, 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the education domain, argumentative structure interrelates with text quality, and it becomes one of the features that go into automatic essay scoring (AES) systems (Persing et al., 2010; Song et al., 2014; Ghosh et al., 2016; . End-to-end AES systems also exist, but hybrid models are preferred for both performance and explainability reasons (Uto et al., 2020) . Eger et al. (2017) formulated AM in three ways: as relation extraction, as sequence tagging and as dependency parsing. They performed end-toend AM at token-level, executing all subtasks in AM all at once. Eger et al. achieved the highest performance in their experiments with the relation extraction model LSTM-ER (Miwa and Bansal, 2016) . We instead use their sequence tagging formulation, which adapts the existing vanilla Bidirectional Long-short-term memory (BiLSTM) network (Hochreiter and Schmidhuber, 1997; Huang et al., 2015) , as it can be straightforwardly applied to our task. The dependency parsing formulation is also a straightforward adaptation as it models tree structures. The biaffine model is the current state-of-the-art of syntactic dependency parsing (Dozat and Manning, 2017) , and it has been adapted to relation detection and labelling tasks in AM by Morio et al. (2020) . In a similar way, we also adapt the biaffine model to our argumentative structure. However, we use sentences instead of spans as ADU, trees instead of graphs.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Persing et al., 2010;",
"ref_id": "BIBREF39"
},
{
"start": 190,
"end": 208,
"text": "Song et al., 2014;",
"ref_id": "BIBREF46"
},
{
"start": 209,
"end": 228,
"text": "Ghosh et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 346,
"end": 364,
"text": "(Uto et al., 2020)",
"ref_id": "BIBREF49"
},
{
"start": 367,
"end": 385,
"text": "Eger et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 681,
"end": 704,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF33"
},
{
"start": 846,
"end": 880,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF20"
},
{
"start": 881,
"end": 900,
"text": "Huang et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 1140,
"end": 1165,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 1243,
"end": 1262,
"text": "Morio et al. (2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most work in AM uses well-written texts in the legal (e.g., Ashley, 1990; Yamada et al., 2019) and news (e.g., domains, but there are several AM studies that concentrate on noisy texts. For example, Habernal and Gurevych (2017) focused on the ACI task in web-discourse. Morio and Fujita (2018) investigated how to link arguments in discussion threads. In the education domain, Stab and Gurevych (2017) studied the argumentation in persuasive essays. One of the prob-lems with the existing corpora is the unclear distinction between native and non-native speakers. Additionally, to investigate and bridge the gap of performance between AM systems on noisy and well-written texts, it is necessary to use a parallel corpus containing both versions of texts. However, none of the above studies did.",
"cite_spans": [
{
"start": 60,
"end": 73,
"text": "Ashley, 1990;",
"ref_id": "BIBREF1"
},
{
"start": 74,
"end": 94,
"text": "Yamada et al., 2019)",
"ref_id": "BIBREF52"
},
{
"start": 270,
"end": 293,
"text": "Morio and Fujita (2018)",
"ref_id": "BIBREF34"
},
{
"start": 377,
"end": 401,
"text": "Stab and Gurevych (2017)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We use part of the \"International Corpus Network of Asian Learners of English\" (Ishikawa, 2013 (Ishikawa, , 2018 , which we annotated with Argumentative Structure and Sentence Reordering (\"ICNALE-AS2R\" corpus). 2 This corpus contains 434 essays written by college students in various Asian countries. They are written in response to two prompts: (1) about banning smoking and (2) about students' part-time jobs. Essays are scored in the range of [0, 100]. There are two novelties in this corpus: (1) it uses a new annotation scheme as described below and (2) contains a parallel version of essays which have been improved from the discourse perspective. Therefore, this corpus can be used in many downstream tasks, e.g., employing argumentative structures for assessing and improving EFL texts. It is also possible to extend the improved version of texts on other linguistic aspects.",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(Ishikawa, 2013",
"ref_id": "BIBREF25"
},
{
"start": 95,
"end": 112,
"text": "(Ishikawa, , 2018",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "The corpus was annotated at sentence-level, i.e., a sentence corresponds to an ADU. 3 In our annotation scheme, we first differentiate sentences as ACs and non-ACs, without further classification of AC roles. Annotators then establish links from source to target ACs, forming tree-structured representations of the texts. Then, they identify the relations that connect ACs. We instructed annotators to use the major claim, the statement that expresses the essay author's opinion at the highest level of abstraction, as the root of the structure. As there are no further classification of AC roles, the term \"major claim\" here refers to a concept, not an explicitly annotated category. As the last step, annotators rearrange sentences and performed text repair to improve the texts from a discourse perspective.",
"cite_spans": [
{
"start": 84,
"end": 85,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "There are four relations between ACs: SUPPORT (sup), ATTACK (att), DETAIL (det) and RESTATE-MENT (res). SUPPORT and ATTACK relations are common in AM. They are used when the source sentence supports or attacks the argument in the target sentence (Peldszus and Stede, 2013; Stab and Gurevych, 2014) . We use the DETAIL relation in two cases. First, when the source presents additional details (further explanations, descriptions or elaborations) about the target sentence, and second, when the source introduces a topic of the discussion in a neutral way by providing general background. From the organisational perspective, the differentiation between DETAIL and SUPPORT is useful. While the source sentence in a SUPPORT relation ideally follows its target, the DETAIL relation has more flexibility. We also use a relation called RE-STATEMENT for those situations where high-level parts of an argument are repeated or summarised for the second time, e.g., when the major claim is restated in the conclusion of the essay. DETAIL and RESTATEMENT links are not common in AM; the first was introduced by Kirschner et al. 2015and the second by Skeppstedt et al. 2018, but both work on well-written texts. The combination of these four relations is unique in AM.",
"cite_spans": [
{
"start": 246,
"end": 272,
"text": "(Peldszus and Stede, 2013;",
"ref_id": "BIBREF37"
},
{
"start": 273,
"end": 297,
"text": "Stab and Gurevych, 2014)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "To improve the texts, annotators were asked to rearrange sentences so that it results in the most logically well-structured texts they can think of. This is the second annotation layer in our corpus. No particular reordering strategy was instructed. Reordering, however, may cause irrelevant or incorrect referring and connective expressions (Iida and Tokunaga, 2014) . To correct these expressions, annotators were instructed to minimally repair the text where this is necessary to retain the original meaning of the sentence. For instance, they replaced pronouns with their referents, and removed or replaced inappropriate connectives. Text repair is also necessary to achieve standalone major claims. For example, \"I think so\" with so referring to the writing prompt (underlined in what follows) can be rephrased as \"I think smoking should be banned at all restaurants.\" Figure 1 shows an example of our annotation scheme using a real EFL essay. Figure 2 then illustrates how the reordering operation produced an improved essay. The annotator recognised that (16) is the proper major claim of the essay in Figure 1 . However, this essay is problematic because the major claim is not introduced at the beginning of the essay. Thus, the annotator moved (16) to the beginning, and the whole essay is concluded by sentence sup (1) First of all, smoking is bad for your health.",
"cite_spans": [
{
"start": 342,
"end": 367,
"text": "(Iida and Tokunaga, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 874,
"end": 882,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 949,
"end": 957,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1109,
"end": 1117,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "(9) Smoking contains nicotine, which makes the food dirty. sup (10) A person who smokes not only decreases their lifetime but also impacts other people sup (11) If someone asks why you smoke, smokers often reply that they smoke to release tension, but they know it is not good for their health, especially in restaurants because it poisons the food sup (12) Now it's our duty to save our country from the pollution and effects of smoking. = (13) Smoking also should be banned in pubs, where people also come to enjoy eating and drinking. sup 14Nicotine is a drug and its effect on the human body is very harmful and causes death. (13) in Figure 2 . We also observe that crossing links happen in Figure 1 , and they may suggest the jump of ideas, indicating coherence breaks. For example, sentence (14) describes nicotine and the annotator thinks that it properly connects to (9) which also talks about nicotine. Therefore, it is more desirable to place sentences (9) and (14) close to each other, as shown in Figure 2 . The reordered version is arguably better since it is more consistent with the argumentative-development-strategy prescribed in teaching, i.e., introduce a topic and state the author's stance on that topic, support the stance by presenting more detailed reasons, and finally concludes the essay at the end (Silva, 1993; Bacha, 2010) .",
"cite_spans": [
{
"start": 1325,
"end": 1338,
"text": "(Silva, 1993;",
"ref_id": "BIBREF44"
},
{
"start": 1339,
"end": 1351,
"text": "Bacha, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 638,
"end": 646,
"text": "Figure 2",
"ref_id": null
},
{
"start": 695,
"end": 703,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1009,
"end": 1017,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "We performed an inter-annotator agreement (IAA) study on 20 essays, using as annotators a PhD student in English education (also an EFL teacher -expert annotator) and the first author, both having a near-native English proficiency. We found the agreement to be Cohen's \u03ba=.66 for ACI; \u03ba=.53 for sentence linking; and \u03ba=.61 for relation labelling (Cohen, 1960 ). The sentence linking task sup (1) First of all, smoking is bad for your health.",
"cite_spans": [
{
"start": 345,
"end": 357,
"text": "(Cohen, 1960",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "(2) It causes many problems like chest infection, TB and other dangerous dieases. was the most difficult one, and it is understandable since a text may have multiple acceptable structures. The relation labels hardest to distinguish were between SUPPORT and DETAIL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "This kind of annotation is expensive. Also, there is no metric to measure the agreement on reordering. Therefore, we chose to have the expert annotator annotate all essays in the production for the purpose of consistency. There are 6,021 sentences in the ICNALE-AS2R corpus; 5,799 (96%) of these are ACs and 222 (4%) are non-ACs. An essay contains 14 sentences on average, and the average structural depth is 4.3 (counting the root at depth = 0). The corpus does not contain paragraph breaks. The most frequent relation label is SUPPORT (3,029-56%), followed by DETAIL (1,585-30%), ATTACK (437-8%), and RESTATE-MENT (314-6%). In total, 105 out of 434 essays were rearranged (1-3 sentences were moved on average). As we have explained before, the expert annotator reordered a scattered set of sentences which logically form a sub-argument to be close in position to each other. They also placed the major claim at the beginning as opposed to the middle or the end of the essay. In general, the expert arranges the essays to be more consistent with the typical argumentative-development strategy prescribed in teaching. The text repair was done on 181 sentences, 123 (71%) of which attempt to repair the prompt-type error of the major claim. The remain-ing 58 sentences concern changes in connectives and referring expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "We adopt a pipeline approach by using independent models for sentence linking, which includes the ACI task, and relation labelling. Although a pipeline system may fall prey to error propagation, for a new scheme and corpus, it can be advantageous to look at intermediate results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "Given an entire essay as a sequence of sentences s 1 , ..., s N , our sentence linking model outputs the distance d 1 , ..., d N between each sentence s i to its target; if a sentence is connected to its preceding sentence, the distance is d = \u22121. We consider those sentences that have no explicitly annotated outgoing links as linked to themselves (d = 0); this concerns major claims (roots) and non-ACs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "4.1"
},
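The distance encoding above can be sketched in a few lines. This is our own illustration, not the authors' code; the list-of-targets representation (where a root or non-AC points to itself) is an assumption for the demo.

```python
# Illustrative sketch (ours, not the authors' code) of the distance encoding:
# targets[i] is the index of the sentence that s_i links to, with roots and
# non-ACs linking to themselves (distance 0).

def links_to_distances(targets):
    """d_i = target index minus source index; d = 0 for roots and non-ACs."""
    return [t - i for i, t in enumerate(targets)]

def distances_to_links(distances):
    """Inverse mapping: recover each sentence's target index."""
    return [i + d for i, d in enumerate(distances)]

# Example: s_1 is the root; s_2 and s_4 link back to s_1; s_3 links to s_2.
targets = [0, 0, 1, 0]
assert links_to_distances(targets) == [0, -1, -1, -3]
assert distances_to_links(links_to_distances(targets)) == targets
```

The encoding makes linking a per-sentence classification problem over a fixed label set of distances, which is what allows the sequence tagging formulation below.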
{
"text": "Table 1: Distribution of distance (in percent) between source and target sentences in the corpus. d \u2264 \u22125: 16.6; d = \u22124: 3.9; d = \u22123: 5.2; d = \u22122: 8.3; d = \u22121: 37.0; d = 0: 10.9; d = +1: 13.4; d = +2: 2.3; d = +3: 0.9; d = +4: 0.6; d \u2265 +5: 1.0. Table 1 shows the distribution of distance between the source and target sentences in the corpus, which ranges over [\u221226, ..., +15]. Adjacent links predominate (50.4%). Short-distance links (2 \u2264 |d| \u2264 4) make up 21.2% of the total. Backward long-distance links at d \u2264 \u22125 account for 16.6%, whereas forward long-distance links are rare (1.0%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "4.1"
},
{
"text": "We follow the formulation by Eger et al. (2017) , where AM is modelled as sequence tagging (4.1.1) and as dependency parsing (4.1.2). Figure 3 shows our sequence tagging architecture (\"SEQTG\"). We adapt the vanilla BiLSTM with softmax prediction layers (as Eger et al. (2017) similarly did), training the model in a multi-task learning (MTL) setup. There are two prediction layers: (1) for sentence linking as main task and (2) for ACI as an auxiliary task.",
"cite_spans": [
{
"start": 29,
"end": 47,
"text": "Eger et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "4.1"
},
{
"text": "The input sentences s 1 , ..., s N are first encoded into their respective sentence embeddings (using either BERT or SBERT as encoder). 4 We do not fine-tune the encoder because our dataset is too 4 By averaging subword embeddings. small for it. 5 The resulting sentence embeddings are then fed into a dense layer for dimensionality reduction. The results are fed into a BiLSTM layer (#stack = 3) to produce contextual sentence representations, then fed into prediction layers.",
"cite_spans": [
{
"start": 136,
"end": 137,
"text": "4",
"ref_id": null
},
{
"start": 197,
"end": 198,
"text": "4",
"ref_id": null
},
{
"start": 246,
"end": 247,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
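The sentence-encoding step (averaging subword embeddings) can be sketched with mean pooling. The encoder itself is replaced here by random vectors so the demo is runnable; the 768 dimension matches BERT-base and is our assumption.

```python
# Minimal numpy sketch of the sentence-encoding step: a sentence embedding is
# the average of its subword embeddings. The encoder (BERT or SBERT) is
# stubbed with random vectors; 768 is the BERT-base hidden size (assumed).
import numpy as np

rng = np.random.default_rng(0)
subword_embeddings = rng.normal(size=(7, 768))       # 7 subwords in a sentence

sentence_embedding = subword_embeddings.mean(axis=0)  # mean pooling
assert sentence_embedding.shape == (768,)
```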
{
"text": "Main Task: The model predicts the probability of link distances, in the range [\u221226, ..., +15] . To make sure there is no out-of-bound prediction, we perform a constrained argmax during prediction time. For each sentence s i , we compute the argmax only for distances at",
"cite_spans": [
{
"start": 78,
"end": 93,
"text": "[\u221226, ..., +15]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
{
"text": "[1 \u2212 i, ..., N \u2212 i]; i \u2265 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
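The constrained argmax can be sketched as follows. This is our reconstruction of the idea, not the authors' implementation: for sentence s_i in an essay of N sentences, only distances in [1 - i, ..., N - i] point inside the essay.

```python
# Sketch (our reconstruction) of the constrained argmax: the argmax over
# distance scores is restricted to the in-bounds slice of the label set.
import numpy as np

DISTANCES = list(range(-26, 16))   # the label range [-26, ..., +15]

def constrained_argmax(scores, i, n):
    """scores: one score per entry of DISTANCES, for sentence i of n (1-indexed)."""
    valid = [k for k, d in enumerate(DISTANCES) if 1 - i <= d <= n - i]
    return DISTANCES[max(valid, key=lambda k: scores[k])]

rng = np.random.default_rng(0)
scores = rng.normal(size=len(DISTANCES))
d = constrained_argmax(scores, i=2, n=5)   # only -1..+3 are in bounds here
assert 1 - 2 <= d <= 5 - 2
```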
{
"text": "Auxiliary Task: As the auxiliary task, the model predicts quasi-argumentative-component type c for each input sentence. Our scheme does not assign AC roles per se, but we can compile the following sentence types from the tree typology:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
{
"text": "\u2022 major claim (root): only incoming links, \u2022 AC (non-leaf): both outgoing and incoming links, \u2022 AC (leaf): only outgoing links, and \u2022 non-AC: neither incoming nor outgoing links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
{
"text": "These four labels should make a good auxiliary task as they should help the model to learn the placement of sentences in the hierarchical structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
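Deriving the four quasi-component types from the tree topology can be sketched as below. This is a hedged reconstruction: the targets representation (self-link = no outgoing link) is an assumption carried over from the linking task.

```python
# Hedged reconstruction of the auxiliary labels: the four quasi-component
# types are read off the tree topology.

def component_types(targets):
    n = len(targets)
    has_incoming = [False] * n
    for i, t in enumerate(targets):
        if t != i:                       # a real outgoing link
            has_incoming[t] = True
    types = []
    for i, t in enumerate(targets):
        if t == i:                       # no outgoing link
            types.append("major claim (root)" if has_incoming[i] else "non-AC")
        else:
            types.append("AC (non-leaf)" if has_incoming[i] else "AC (leaf)")
    return types

# s_1 is the root, s_2 links to it and is itself a target; s_4 is isolated.
assert component_types([0, 0, 1, 3]) == [
    "major claim (root)", "AC (non-leaf)", "AC (leaf)", "non-AC"]
```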
{
"text": "We use the dynamic combination of loss as the MTL objective (Kendall et al., 2018) . To evaluate whether the auxiliary task does improve the model performance, we also experiment only on the main task (single-task learning-STL).",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Kendall et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Tagger Model",
"sec_num": "4.1.1"
},
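A sketch of uncertainty-based dynamic loss weighting in the spirit of Kendall et al. (2018), as we understand it: each task loss is scaled by a learned homoscedastic-uncertainty term with a regularising penalty. Plain floats stand in for what would be trainable parameters in the actual model.

```python
# Sketch (our reading of Kendall et al., 2018): combined loss
# L = sum_t exp(-s_t) * L_t + s_t, where s_t = log(sigma_t^2) would be a
# trainable parameter per task in the real model.
import math

def mtl_loss(task_losses, log_vars):
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# Main (linking) and auxiliary (ACI) losses; s = 0 reduces to the plain sum.
assert abs(mtl_loss([0.9, 0.4], [0.0, 0.0]) - 1.3) < 1e-9
```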
{
"text": "We adapt the biaffine model (\"BIAF\") by Dozat and Manning (2017) , treating the sentence linking task as sentence-level dependency parsing (Figure 4) .",
"cite_spans": [
{
"start": 40,
"end": 64,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 139,
"end": 149,
"text": "(Figure 4)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Biaffine Model",
"sec_num": "4.1.2"
},
{
"text": "The first three layers produce contextual sentence representations in the same manner as in the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biaffine Model",
"sec_num": "4.1.2"
},
{
"text": "h 1 (target) h 2 (source) h 2 (target) h N (target) . = H (source) G . U H (target) h N SEQTG model. These representations are then fed into two different dense layers, in order to encode the corresponding sentence when it acts as a source h (source) or target h (target) in a relation. Finally, a biaffine transformation is applied to all source and target representations to produce the final output matrix G \u2208 R N \u00d7N , in which each row g i represents where the source sentence s i should point to (its highest scoring target).",
"cite_spans": [
{
"start": 4,
"end": 12,
"text": "(target)",
"ref_id": null
},
{
"start": 17,
"end": 25,
"text": "(source)",
"ref_id": null
},
{
"start": 30,
"end": 38,
"text": "(target)",
"ref_id": null
},
{
"start": 43,
"end": 51,
"text": "(target)",
"ref_id": null
},
{
"start": 58,
"end": 66,
"text": "(source)",
"ref_id": null
},
{
"start": 75,
"end": 83,
"text": "(target)",
"ref_id": null
},
{
"start": 242,
"end": 250,
"text": "(source)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biaffine Model",
"sec_num": "4.1.2"
},
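The biaffine scoring step described above can be sketched numerically. This is an illustrative NumPy sketch, not the paper's implementation: variable names (`H_src`, `H_tgt`, `U`) are ours, and the bias terms of the full Dozat and Manning (2017) biaffine attention are omitted for brevity.

```python
import numpy as np

def biaffine_scores(H_src, H_tgt, U):
    """Score every (source, target) sentence pair.

    H_src, H_tgt: (N, d) source/target representations from the two
    dense layers; U: (d, d) biaffine weight matrix.
    Returns G of shape (N, N); row i scores where sentence i may point.
    """
    return H_src @ U @ H_tgt.T

# toy example with random representations
rng = np.random.default_rng(0)
N, d = 5, 8
H_src = rng.normal(size=(N, d))
H_tgt = rng.normal(size=(N, d))
U = rng.normal(size=(d, d))
G = biaffine_scores(H_src, H_tgt, U)
targets = G.argmax(axis=1)  # highest-scoring target per source sentence
print(G.shape, targets.shape)
```

Taking the row-wise argmax, as in the last line, is exactly the "in isolation" decoding that the following paragraph notes does not always yield a tree.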
{
"text": "When only considering the highest scoring or most probable target for each source sentence in isolation, the output of the models (SEQTG and BIAF) does not always form a tree (30-40% nontree outputs in our experiment). In these cases, we use Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) to create a minimum spanning tree out of the output.",
"cite_spans": [
{
"start": 268,
"end": 287,
"text": "(Chu and Liu, 1965;",
"ref_id": "BIBREF6"
},
{
"start": 288,
"end": 302,
"text": "Edmonds, 1967)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biaffine Model",
"sec_num": "4.1.2"
},
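The Chu-Liu-Edmonds repair step can be sketched as follows. This is a minimal pure-Python maximum-scoring spanning arborescence (the paper does not give its implementation, so this is illustrative only; function and variable names are ours):

```python
def chu_liu_edmonds(scores, root=0):
    """Maximum spanning arborescence over a dense score matrix.

    scores[h][d] is the score of attaching dependent d to head h.
    Returns a head list with head[root] == root.
    """
    n = len(scores)
    head = [root] * n
    for v in range(n):
        if v != root:
            head[v] = max((h for h in range(n) if h != v),
                          key=lambda h: scores[h][v])

    # find a cycle among the greedy head choices, if any
    cycle = None
    for start in range(n):
        path, v = [], start
        while v != root and v not in path:
            path.append(v)
            v = head[v]
        if v != root and v in path:
            cycle = path[path.index(v):]
            break
    if cycle is None:
        return head

    # contract the cycle into a supernode c and recurse
    in_cycle = set(cycle)
    rest = [v for v in range(n) if v not in in_cycle]
    idx = {v: i for i, v in enumerate(rest)}  # old index -> new index
    c = len(rest)                             # supernode index
    NEG = float("-inf")
    sub = [[NEG] * (c + 1) for _ in range(c + 1)]
    enter = {}  # best original edge (h, d) realising h -> supernode
    leave = {}  # best original head inside the cycle for supernode -> d
    for h in range(n):
        for d in range(n):
            if h == d:
                continue
            if h not in in_cycle and d not in in_cycle:
                sub[idx[h]][idx[d]] = max(sub[idx[h]][idx[d]], scores[h][d])
            elif h not in in_cycle and d in in_cycle:
                # entering edge: score gain over the cycle edge it breaks
                gain = scores[h][d] - scores[head[d]][d]
                if gain > sub[idx[h]][c]:
                    sub[idx[h]][c] = gain
                    enter[idx[h]] = (h, d)
            elif h in in_cycle and d not in in_cycle:
                if scores[h][d] > sub[c][idx[d]]:
                    sub[c][idx[d]] = scores[h][d]
                    leave[idx[d]] = h

    sub_head = chu_liu_edmonds(sub, root=idx[root])

    # expand: keep cycle edges except the one broken by the entering edge
    result = list(head)
    for d_new, h_new in enumerate(sub_head):
        if d_new == idx[root]:
            continue
        if d_new == c:
            h_old, broken = enter[h_new]
            result[broken] = h_old
        else:
            result[rest[d_new]] = leave[d_new] if h_new == c else rest[h_new]
    return result

# greedy choices form the cycle 1 <-> 2; the algorithm breaks it optimally
scores = [[0, 9, 8, 1],
          [0, 0, 10, 5],
          [0, 10, 0, 0],
          [0, 0, 0, 0]]
print(chu_liu_edmonds(scores))  # [0, 0, 1, 1]
```

The row-wise greedy decode alone would leave sentences 1 and 2 pointing at each other; contracting that cycle and re-scoring its entering edges recovers the tree 0 → 1 → {2, 3}.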
{
"text": "In the relation labelling task, given a pair of linked source and target sentences s source , s target , a model outputs the label that connects them, i.e., one of {SUPPORT, ATTACK, DETAIL, RESTATE-MENT}. We use non-fine-tuning models with feedforward architecture and fine-tuning transformerbased LMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Labelling",
"sec_num": "4.2"
},
{
"text": "In non-fine-tuning models, both source and target sentences s source , s target are encoded using BERT or SBERT to produce their respective embeddings. We then pass these embeddings into respective dense layers for a dimensionality reduction and transformation step, producing r source , r target . As the first option (\"FFCON\", Figure 5a ), r source and r target are concatenated, passed to a dense layer for a further transformation, and finally fed into a prediction layer. As the second option (FFLSTM, Figure 5b ), we feed r source and r target to an LSTM layer, and the hidden units of LSTM are concatenated before being sent to a dense layer (Deguchi and Yamaguchi, 2019) . ",
"cite_spans": [
{
"start": 649,
"end": 678,
"text": "(Deguchi and Yamaguchi, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 329,
"end": 338,
"text": "Figure 5a",
"ref_id": "FIGREF6"
},
{
"start": 507,
"end": 516,
"text": "Figure 5b",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Non-fine-tuning Models",
"sec_num": "4.2.1"
},
{
"text": "Unlike sentence linking, where an entire essay is taken as input, the relation labelling task takes a pair of sentences. There are 5,365 of such pairs in the ICNALE-AS2R corpus. We fine-tune BERT and DISTILBERT (Sanh et al., 2019) on the resulting sentence pair classification task. The pair is fed into the transformer model, and then the [CLS] token representation is passed into a prediction layer.",
"cite_spans": [
{
"start": 211,
"end": 230,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Models",
"sec_num": "4.2.2"
},
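Packing the sentence pair into BERT's standard two-segment input can be sketched as below. The `[CLS]`/`[SEP]` layout and segment ids follow the usual BERT convention; the helper name `pack_pair` and the toy token lists are ours, and a real pipeline would use a subword tokenizer.

```python
def pack_pair(src_tokens, tgt_tokens):
    """Pack a (source, target) sentence pair into BERT's standard
    two-segment input: [CLS] source [SEP] target [SEP]."""
    tokens = ["[CLS]"] + src_tokens + ["[SEP]"] + tgt_tokens + ["[SEP]"]
    # segment 0 covers [CLS] + source + first [SEP]; segment 1 the rest
    segment_ids = [0] * (len(src_tokens) + 2) + [1] * (len(tgt_tokens) + 1)
    return tokens, segment_ids

tokens, segments = pack_pair(["students", "need", "money"], ["get", "a", "job"])
print(tokens)
print(segments)
```

The representation of the leading `[CLS]` position is then fed to a 4-way prediction layer over {SUPPORT, ATTACK, DETAIL, RESTATEMENT}.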
{
"text": "The dataset is split into 80% training set (347 essays , 4,841 sentences) and 20% testing set (87 essays, 1,180 sentences), stratified according to prompts, scores and country of origin of the EFL learners. We are interested in how the AM models trained on well-written texts may perform on more noisy texts. To find out, we train the models on both the original EFL texts (in-domain) and the parallel improved texts (out-of-domain), then evaluated on the original EFL texts. The difference between in-and out-of-domain data lies on the textual surface, i.e., sentence rearrangement, the use of connectives, referring expressions, and textual repair for major claims. Since not all essays undergo any reordering, the out-of-domain data is roughly 75% the same as the in-domain data. The number of hidden units and learning rates (alongside other implementation notes) to train our models can be found in Appendix A. We run the experiment for 20 times, 6 and report the average performance. The relation labelling models are trained and evaluated using sentence pairs according to the gold-standard. In the end-to-end evaluation (Section 5.3), however, the input to the relation labelling model is the automatic prediction. Statistical testing, whenever possible, is conducted using the student's t-test (Fisher, 1937) on the performance scores of the 20 runs, with a significance level of \u03b1 = 0.05.",
"cite_spans": [
{
"start": 1303,
"end": 1317,
"text": "(Fisher, 1937)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "5"
},
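The stratified 80/20 split described above can be sketched as follows. This is an illustrative sketch only: the helper name `stratified_split`, the dict fields, and the prompt codes in the toy data are our assumptions, not the paper's code.

```python
import random
from collections import defaultdict

def stratified_split(essays, key, test_frac=0.2, seed=0):
    """Split essays into train/test, preserving the proportion of each
    stratum (e.g., prompt x country) in both halves."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for essay in essays:
        strata[key(essay)].append(essay)
    train, test = [], []
    for group in strata.values():
        rng.shuffle(group)
        k = round(len(group) * test_frac)
        test.extend(group[:k])
        train.extend(group[k:])
    return train, test

# illustrative data: 100 essays over 2 prompts x 2 countries
essays = [{"prompt": p, "country": c}
          for p in ("PTJ", "SMK") for c in ("HKG", "JPN")
          for _ in range(25)]
train, test = stratified_split(essays,
                               key=lambda e: (e["prompt"], e["country"]))
print(len(train), len(test))  # 80 20
```

Each stratum contributes proportionally to both halves, so score, prompt, and country distributions stay comparable across train and test.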
{
"text": "We first report our in-domain before turning to the cross-domain results. Table 2 shows our experimental result on the prediction of individual links. The best model is a biaffine model, namely SBERT-BIAF, statistically outperforming the next-best non-biaffine model (accuracy .471 vs .444 and F1-macro .323 vs .274; significant difference on both metrics). Training the SEQTG model in the MTL setting did not improve the performance on these standard metrics. Figure 6: Models' performance across distances for indomain evaluation using SBERT encoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
{
"text": "To gain deeper insights into model quality, we also considered the models' F1 score per target distance ( Figure 6 ). All models, and in particular BIAF, are better at predicting long-distance links (d \u2264 \u22125, avg. F1 = [0.22, 0.41]) than short distance links (2 \u2264 |d| \u2264 4, avg. F1 = [0.16, 0.24]) dom initialisation in neural networks. when using SBERT encoder (the same trend goes when using BERT encoder). Long-distance links tend to happen at the higher tree level, e.g., the links from nodes at depth=1 to the root, while shortdistance links tend to happen at the deeper level, e.g., within a sub-argument at depth\u22652. As deep structures seem to be harder to parse, we would expect longer texts to suffer more.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
{
"text": "Next, we look at the models' ability to perform quasi argumentative component type (QACT) classification: whether they can correctly predict the role of major claim, AC (non-leaf), AC (leaf) and non-AC, as defined in our auxiliary task described in Section 4.1.1, based on the topology of argumentative structures. This evaluates whether the models place sentences properly in the hierarchical structure. Table 3 shows the result. SBERT-SEQTG [MTL] performed the best, significantly outperforming the second-best SBERT-BIAF (F1-macro=.609 vs .601). We now see the gain of training in the MTL setup as all SEQTG [MTL] models produce better hierarchical arrangements of nodes compared to the STL models; the F1-macro when using BERT encoder is .599 vs .592 (not significant) and SBERT .609 vs .596 (significant).",
"cite_spans": [],
"ref_spans": [
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
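Deriving the quasi AC types from the tree topology can be sketched as below. The exact conventions are ours, not the paper's: we assume `head[i]` is the sentence that sentence i points to, and that a non-root sentence pointing to itself is detached (non-AC).

```python
def qact_labels(head, root):
    """Derive quasi argumentative component types from tree topology.

    head[i] is the sentence index that sentence i points to.
    Assumption (illustrative): a non-root sentence pointing to itself
    is detached from the argument, i.e., a non-AC.
    """
    n = len(head)
    n_children = [0] * n
    for v in range(n):
        if v != root and head[v] != v:
            n_children[head[v]] += 1
    labels = []
    for v in range(n):
        if v == root:
            labels.append("major claim")
        elif head[v] == v:
            labels.append("non-AC")
        elif n_children[v] > 0:
            labels.append("AC (non-leaf)")
        else:
            labels.append("AC (leaf)")
    return labels

# sentence 1 supports the root and has children 2 and 3; 4 is detached
print(qact_labels([0, 0, 1, 1, 4], root=0))
```

This is exactly the evaluation of "hierarchical arrangement" above: a model can score well on individual links yet assign sentences the wrong topological roles.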
{
"text": "We notice that BIAF works acceptably well (F1macro of .601) only when paired with the SBERT encoder. When using the BERT encoder, it has great difficulty in producing any non-AC nodes at all (Non-AC F1=.058; F1-macro=.493), despite its good performance on individual links. This result seems to suggest that SBERT is a better encoder than BERT for non-fine-tuning models. This also proves the importance of the evaluation of AM models beyond standard metrics, e.g., in terms of their structural properties as we do here. Prediction performance on individual links does not guarantee the quality of the whole structure. Considering the entire situation, SBERT-BIAF is our preferred model because its performance on standard metrics is substantially better than non-biaffine models. It also performs reasonably well on the hierarchical arrangement of nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
{
"text": "We next look at the cross-domain performance of the best sentence linking model, namely SBERT-BIAF. It achieves an accuracy of .459 and an F1-macro of .270 for the prediction of individual links. The F1-macro for QACT classification is .565. These scores are somewhat lower compared to the in-domain performance (significant difference). This means that the modifications of even Table 3 : In-domain results of quasi argumentative component type classification (node labels identified by topology). We show F1 score per node label and F1-macro. Bold-face, \u2020, and underline as above.",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 387,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
{
"text": "25% of essays (in terms of rearrangement) in the out-of-domain data may greatly affect the linking performance, in the cross-domain setting. Table 4 : In-domain relation labelling results, showing F1 score per class and F1-macro. \"(B)\" for BERT and \"(S)\" for SBERT. Bold-face, underline and \u2020 as above. Table 4 shows our experimental results for the indomain relation labelling task, when gold-standard links are used. BERT model achieves the significantly best performance (F1-macro = .595). Nonfine-tuning models performed better when using SBERT than BERT encoder (F1-macro=.532 vs. .502; .543 vs. .502; both having significant difference). This further confirms the promising potential of SBERT and might suggest that the NLI task is suitable for pre-training a relation labelling model; we plan to investigate this further.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 4",
"ref_id": null
},
{
"start": 303,
"end": 310,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Linking",
"sec_num": "5.1"
},
{
"text": "We can see from the results that the ATTACK label is the most difficult one to predict correctly, presumably due to its infrequent occurrence. However, the RESTATEMENT label, which is also infrequent, is relatively well predicted by all models. We think that has to do with all models' ability to recognise semantic similarity. Recall that the RESTATEMENT label is used when a concluding statement rephrases the major claim. SUPPORT and DETAIL are often confused. Note that they are also the most confusing labels between human annotators. Sentence pairs that should be classified as having ATTACK and RESTATEMENT labels are also often classified as SUPPORT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Labelling",
"sec_num": "5.2"
},
{
"text": "We also performed our cross-domain experiment for this task. Our best relation labelling model, BERT, achieves the cross-domain F1-macro of .587 (the difference is not significant to in-domain performance). Although not currently shown, the change of performance in other models are also almost negligible (up to 2% in F1-macro).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Labelling",
"sec_num": "5.2"
},
{
"text": "For end-to-end evaluation, we combine in a pipeline system the best models for each task: SBERT-BIAF for sentence linking and fine-tuned BERT for relation labelling. Table 5 : End-to-end results. \u03ba scores are used for \"ACI\" (argument component identification), \"SL\" (sentence linking) and \"RL\" (relation labelling). Table 5 shows the evaluation results of the average of 20 runs. Accuracy measures whether the pipeline system predicts all of the following correctly for each source sentence in the text: the correct ACI label (AC vs. non-AC), the correct target distance and the correct relation label. In addition, we also calculated the Cohen's \u03ba score between the system's output and the gold annotation for annotation subtasks in our scheme.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 5",
"ref_id": null
},
{
"start": 316,
"end": 323,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "End-to-end Evaluation",
"sec_num": "5.3"
},
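The strict accuracy criterion above (all three predictions must be correct for a sentence to count) can be sketched as follows. The tuple layout and toy labels are illustrative assumptions, not the paper's data format.

```python
def end_to_end_accuracy(pred, gold):
    """Strict per-sentence accuracy: a source sentence counts as correct
    only if its ACI label, target distance, and relation label all match."""
    assert len(pred) == len(gold)
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

# hypothetical (aci_label, target_distance, relation) triples per sentence
gold = [("AC", -1, "SUPPORT"), ("AC", -2, "DETAIL"),
        ("non-AC", 0, None), ("AC", -3, "SUPPORT")]
pred = [("AC", -1, "SUPPORT"), ("AC", -1, "DETAIL"),
        ("non-AC", 0, None), ("AC", -3, "ATTACK")]
acc = end_to_end_accuracy(pred, gold)
print(acc)  # 0.5
```

Because one wrong component (a wrong distance or a wrong relation label) invalidates the whole sentence, this metric is deliberately harder than the per-subtask κ scores reported alongside it.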
{
"text": "The accuracy of the in-domain system is .341, and that of the cross-domain system .321 (significant difference). When compared to human performance on all metrics (in the IAA study), there is still a relatively big performance gap. In an endto-end setting, the cross-domain system is able to perform at 94% of the in-domain performance. As we feel that this performance drop might well be acceptable in many real-world applications, this signals the potential of training an AM model for noisy texts using the annotated corpora for wellwritten texts alongside those more infrequent annotations for noisy text, at least as long as the genre stays the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end Evaluation",
"sec_num": "5.3"
},
{
"text": "We conducted an error analysis on some random end-to-end outputs. The system tends to fail to identify the correct major claim when it is not placed at the beginning of the essay. For example, the major claim can be pushed into the middle of the essay when an essay contains a lot of background about the discussion topic. Cultural preferences might also play a role. In writings by Asian students, it has been often observed that reasons for a claim are presented before, not after the claim as is more common in anglo-Saxon cultures (Kaplan, 1966; Silva, 1993; Connor, 2002 ) (as illustrated in Figure 1 ). The BiLSTM-based models, which are particularly sensitive to order, can be expected to be thrown off by such effects.",
"cite_spans": [
{
"start": 535,
"end": 549,
"text": "(Kaplan, 1966;",
"ref_id": "BIBREF28"
},
{
"start": 550,
"end": 562,
"text": "Silva, 1993;",
"ref_id": "BIBREF44"
},
{
"start": 563,
"end": 575,
"text": "Connor, 2002",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 597,
"end": 605,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "End-to-end Evaluation",
"sec_num": "5.3"
},
{
"text": "(2) First of all, most parents will stop giving their children money after graduation from high school.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "(3) University students need to earn money in order to maintain their daily spending.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sup",
"sec_num": null
},
{
"text": "(4) Ge ing a part-time job is exactly the way to solve this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "(5) Secondly, most University students will buy a lot of things like iPhones, MacBooks, cell phones, clothing, and other things. det (6) These products are quite expensive for them, so if they want to keep buying these luxury goods, they must work during their spare time to earn more and save as much as possible before it is enough to buy a product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "(1) Personally, I think it is important for University students to have a part-time job for the following reasons. Figure 7 : An example snippet of the in-domain system output for the essay code \"W HKG PTJ0 021 B1 1.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "Another source of error concerns placing a subargument into the main argument's sibling position instead of that of its child. In general, the systems also have some problems to do with clustering, i.e, splitting a group of sentences that should belong together into separate sub-arguments, or reversely, grouping together sentences that do not belong together. Thus, in order to move forward, the system needs improvement concerning the hierarchical arrangement of sentences in the structure. Figure 7 illustrates this problem. In the gold structure, sentence (4) points at (2), forming a sub-argument (sub-tree) of {2, 3, 4}. However, the system puts sentence (4) in the inappropriate sub-tree. This kind of cases often happens at group boundaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 494,
"end": 502,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "We also found that the system may erroneously use the RESTATEMENT label when connecting claims (at depth = 1) and major claims, when the claims include almost all tokens that present in the major claim. We suspect that our model learned to depend on lexical overlaps to recognise RESTATE-MENT as this type of relation concerns paraphrasing. However, we cannot perform an error analysis to investigate to what extent this has affected the performance on each of the other relation labels, which concern entailment and logical connections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "det",
"sec_num": null
},
{
"text": "This paper presents a study on parsing argumentative structure in the new domain of EFL essays, which are noisy by nature. We used a pipelined neural approach, consisting of a sentence linking and a relation labelling module. Experimental result shows that the biaffine model combined with the SBERT encoder achieved the best overall performance in the sentence linking task (F1-macro of .323 on individual links). We also investigated MTL, which improved the sequence tagger model in certain aspects. In the sentence linking task, we observed that all models produced more meaningful structures when using SBERT encoder, demonstrating its potential for downstream tasks. In the relation labelling task, non-fine tuning models also performed better when using SBERT encoder. However, the best performance is achieved by a fine-tuned BERT model at F1-macro of .595.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also evaluated our AM parser on a crossdomain setting, where training is performed on both in-domain (noisy) and out-of-domain (cleaner) data, and evaluation is performed on the in-domain test data. We found that the best cross-domain system achieved 94% (Acc of .321) of the in-domain system (Acc of .341) in terms of end-to-end performance. This signals the potential to use wellwritten texts, together with noisy texts, to increase the size of AM training data. The main challenge of argument parsing lies in the sentence linking task: the model seems to stumble when confronted with the hierarchical nature of arguments, and we will further tackle this problem in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/wiragotama/BEA2021",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A full description of the corpus and the annotation study we performed is available in a separate submission.3 Texts written by proficient authors may contain two or more ideas per sentence. However, our targets are EFL learners; pedagogically, they are often taught to put one idea per sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We conducted a preliminary fine-tuning experiment on sentence linking task, but the performance did not improve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using the same dataset split. This is to account for ran-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by Tokyo Tech World Research Hub Initiative (WRHI), JSPS KAKENHI grant number 20J13239 and Support Centre for Advanced Telecommunication Technology Research. We would like to thank anonymous reviewers for their useful and detailed feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Appendix A. Implementation Notes BERT encoder We use bert-base-multilingualcased (https://github.com/google-research/ bert#pre-trained-models) andbert-as-aservice (https://github.com/hanxiao/ bert-as-service).When using BERT, the sentence embedding is created by averaging subword embeddings composing the sentence in question.SBERT encoder We use SBERT model fine-tuned on the NLI dataset (\"bert-base-nlimean-tokens\"), https://github.com/UKPLab/ sentence-transformers.Sequence Tagger Dropout is applied between each layer, except between encoder and the dimensionality reduction layer because we do not want to lose any embedding information. We train this model using the cross-entropy loss for each prediction layer. The MTL loss is defined aswhere the loss L t of each task t is dynamically weighted, controlled by a learnable parameter \u03c3 t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
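The dynamic task weighting can be sketched numerically. This follows one common form of the uncertainty weighting of Kendall et al. (2018); the exact parameterisation used in the paper is not given, so this is an illustrative assumption with plain floats standing in for learnable parameters.

```python
import math

def mtl_loss(task_losses, log_vars):
    """Uncertainty-weighted multi-task loss, one common form of
    Kendall et al. (2018): L = sum_t exp(-s_t) * L_t + s_t,
    where s_t = log(sigma_t^2) is a learnable scalar per task."""
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# with s_t = 0 (sigma_t = 1) the tasks are weighted equally
print(mtl_loss([1.0, 2.0], [0.0, 0.0]))  # 3.0
```

Raising `s_t` down-weights a noisy task's loss (`exp(-s_t)` shrinks) while the additive `s_t` term discourages ignoring the task entirely; in training, the `s_t` values would be optimised jointly with the network weights.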
{
"text": "We apply dropout between all layers, following Dozat and Manning (2017) . We use the max-Margin criterion to train the biaffine model.Principally, we can model the whole AM pipeline using the biaffine model by predicting links and their labels at once (e.g., in Morio et al., 2020) . This is achieved by predicting another output graph X \u2208 R N \u00d7N \u00d7L , denoting the probability of each node x i pointing to x j on a certain relation label l i . However, we leave this as another MTL experiment for future work.",
"cite_spans": [
{
"start": 47,
"end": 71,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF12"
},
{
"start": 262,
"end": 281,
"text": "Morio et al., 2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biaffine",
"sec_num": null
},
{
"text": "We train the relation labelling models with the cross-entropy loss. Dropout is applied between the final dense layer and the prediction layer.Hidden Units and Learning Rates The number of hidden units and learning rates to train our models are shown in Table 6 . All models are trained using Adam optimiser (Kingma and Ba, 2015). Our experiment is implemented in PyTorch (Paszke et al., 2019) and AllenNLP (Gardner et al., 2018) .Hyperparameter Tuning Before training our models, we first performed the hyperparameter tuning step. To find the best hyperparameter (e.g., batch size, dropout rate, epochs) of each architecture, in combination with each encoder Table 6 : The number of hidden units and learning rates (LR) of our models. \"Dense1\" denotes the dimensionality reduction layer (after encoder). \"Dense2\" denotes the dense layer after BiLSTM (before prediction).(BERT/SBERT) and each input type (in-or outof-domain), we perform 5-fold-cross validation on the training set for 5 times, and select the hyperparameter set that produces the best F1-macro score.During the hyperparameter tuning step, we do not coerce the output to form a tree, i.e., only taking the argmax results.",
"cite_spans": [
{
"start": 371,
"end": 392,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 406,
"end": 428,
"text": "(Gardner et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 6",
"ref_id": null
},
{
"start": 659,
"end": 666,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation Labelling Models",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A news editorial corpus for mining argumentation strategies",
"authors": [
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3433--3443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3433-3443, Osaka, Japan. The COLING 2016 Organizing Com- mittee.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling legal argument -reasoning with cases and hypotheticals. Artificial intelligence and legal reasoning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kevin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ashley",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin D. Ashley. 1990. Modeling legal argument -rea- soning with cases and hypotheticals. Artificial intel- ligence and legal reasoning. MIT Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Teaching the academic argument in a university efl environment",
"authors": [
{
"first": "Nola",
"middle": [],
"last": "Nahla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bacha",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of English for Academic Purposes",
"volume": "9",
"issue": "3",
"pages": "229--241",
"other_ids": {
"DOI": [
"10.1016/j.jeap.2010.05.001"
]
},
"num": null,
"urls": [],
"raw_text": "Nahla Nola Bacha. 2010. Teaching the academic ar- gument in a university efl environment. Journal of English for Academic Purposes, 9(3):229 -241.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What makes a text coherent. College Composition and Communication",
"authors": [
{
"first": "Betty",
"middle": [],
"last": "Bamberg",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "34",
"issue": "",
"pages": "417--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Betty Bamberg. 1983. What makes a text coher- ent. College Composition and Communication, 34(4):417-429.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining textual entailment and argumentation theory for supporting online debates interactions",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "208--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Cabrio and Serena Villata. 2012. Combining tex- tual entailment and argumentation theory for sup- porting online debates interactions. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 208-212, Jeju Island, Korea. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Give me more feedback: Annotating argument persuasiveness and related attributes in student essays",
"authors": [
{
"first": "Winston",
"middle": [],
"last": "Carlile",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Gurrapadi",
"suffix": ""
},
{
"first": "Zixuan",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "621--631",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1058"
]
},
"num": null,
"urls": [],
"raw_text": "Winston Carlile, Nishant Gurrapadi, Zixuan Ke, and Vincent Ng. 2018. Give me more feedback: Anno- tating argument persuasiveness and related attributes in student essays. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 621- 631, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the shortest arborescence of a directed graph",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Chu and T. Liu. 1965. On the shortest arborescence of a directed graph.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "37--46",
"other_ids": {
"DOI": [
"10.1177/001316446002000104"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "New directions in contrastive rhetoric",
"authors": [
{
"first": "Ulla",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2002,
"venue": "TESOL Quarterly",
"volume": "36",
"issue": "4",
"pages": "493--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulla Connor. 2002. New directions in contrastive rhetoric. TESOL Quarterly, 36(4):493-510.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Argument component classification by relation identification by neural network and TextRank",
"authors": [
{
"first": "Mamoru",
"middle": [],
"last": "Deguchi",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "83--91",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4510"
]
},
"num": null,
"urls": [],
"raw_text": "Mamoru Deguchi and Kazunori Yamaguchi. 2019. Ar- gument component classification by relation identi- fication by neural network and TextRank. In Pro- ceedings of the 6th Workshop on Argument Mining, pages 83-91, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In Proceedings of the International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Optimum branchings",
"authors": [],
"year": 1967,
"venue": "Journal of Research of the National Bureau of Standards -B. Mathematics and Mathematical Physics",
"volume": "71",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards -B. Mathematics and Mathematical Physics, 71B:233- 240.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural end-to-end learning for computational argumentation mining",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "11--22",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 11-22, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The design of experiments",
"authors": [
{
"first": "Ronald Aylmer",
"middle": [],
"last": "Fisher",
"suffix": ""
}
],
"year": 1937,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Aylmer Fisher. 1937. The design of experi- ments. Oliver and Boyd, Edinburgh.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. CoRR, abs/1803.07640.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Coarse-grained argumentation features for scoring persuasive essays",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Aquila",
"middle": [],
"last": "Khanam",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "549--554",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2089"
]
},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argumen- tation features for scoring persuasive essays. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 549-554, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention, intentions, and the structure of discourse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barbara",
"suffix": ""
},
{
"first": "Candace",
"middle": [
"L"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "12",
"issue": "3",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. Grosz and Candace L. Sidner. 1986. Atten- tion, intentions, and the structure of discourse. Com- putational Linguistics, 12(3):175-204.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Argumentation mining in user-generated web discourse",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "1",
"pages": "125--179",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00276"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal and Iryna Gurevych. 2017. Argumenta- tion mining in user-generated web discourse. Com- putational Linguistics, 43(1):125-179.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Approaches to the Planning of Coherent Text",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "83--102",
"other_ids": {
"DOI": [
"10.1007/978-1-4757-5945-7_3"
]
},
"num": null,
"urls": [],
"raw_text": "Eduard H. Hovy. 1991. Approaches to the Planning of Coherent Text, pages 83-102. Springer US, Boston, MA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Building a corpus of manually revised texts from discourse perspective",
"authors": [
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "936--941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryu Iida and Takenobu Tokunaga. 2014. Building a corpus of manually revised texts from discourse per- spective. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 936-941, Reykjavik, Iceland. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Discourses of writing and learning to write. Language and Education",
"authors": [
{
"first": "Roz",
"middle": [],
"last": "Invanic",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "18",
"issue": "",
"pages": "220--245",
"other_ids": {
"DOI": [
"10.1080/09500780408666877"
]
},
"num": null,
"urls": [],
"raw_text": "Roz Invanic. 2004. Discourses of writing and learning to write. Language and Education, 18:220-245.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The icnale and sophisticated contrastive interlanguage analysis of asian learners of english",
"authors": [
{
"first": "Shinichiro",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2013,
"venue": "Learner Corpus Studies in Asia and the World",
"volume": "1",
"issue": "",
"pages": "91--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinichiro Ishikawa. 2013. The icnale and sophis- ticated contrastive interlanguage analysis of asian learners of english. Learner Corpus Studies in Asia and the World, 1:91-118.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The icnale edited essays: A dataset for analysis of l2 english learner essays based on a new integrative viewpoint",
"authors": [
{
"first": "Shinichiro",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2018,
"venue": "English Corpus Linguistics",
"volume": "25",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinichiro Ishikawa. 2018. The icnale edited essays: A dataset for analysis of l2 english learner essays based on a new integrative viewpoint. English Corpus Lin- guistics, 25:1-14.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The esl student and the revision process: Some insights from schema theory",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johns",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of Basic Writing",
"volume": "5",
"issue": "2",
"pages": "70--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann M. Johns. 1986. The esl student and the revision process: Some insights from schema theory. Jour- nal of Basic Writing, 5(2):70 -80.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cultural thought patterns in inter-cultural education",
"authors": [
{
"first": "Robert",
"middle": [
"B"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1966,
"venue": "Language Learning",
"volume": "16",
"issue": "1-2",
"pages": "1--20",
"other_ids": {
"DOI": [
"10.1111/j.1467-1770.1966.tb00804.x"
]
},
"num": null,
"urls": [],
"raw_text": "Robert B. Kaplan. 1966. Cultural thought patterns in inter-cultural education. Language Learning, 16(1- 2):1-20.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Kendall",
"suffix": ""
},
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Cipolla",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Pro- ceedings of the International Conference on Learn- ing Representations (ICLR).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Adam: a method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a method for stochastic optimization. In Proceedings of International Conference on Learning Represen- tations (ICLR).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Linking the thoughts: Analysis of argumentation structures in scientific publications",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Kirschner",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Eckle-Kohler",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.3115/v1/W15-0501"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Kirschner, Judith Eckle-Kohler, and Iryna Gurevych. 2015. Linking the thoughts: Analysis of argumentation structures in scientific publications. In Proceedings of the 2nd Workshop on Argumenta- tion Mining, pages 1-11, Denver, CO. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Argumentation mining: State of the art and emerging trends",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Trans. Internet Technol",
"volume": "16",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2850417"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Lippi and Paolo Torroni. 2016. Argumentation mining: State of the art and emerging trends. ACM Trans. Internet Technol., 16(2).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "End-to-end relation extraction using LSTMs on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1105--1116",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "End-to-end argument mining for discussion threads based on parallel constrained pointer architecture",
"authors": [
{
"first": "Gaku",
"middle": [],
"last": "Morio",
"suffix": ""
},
{
"first": "Katsuhide",
"middle": [],
"last": "Fujita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5202"
]
},
"num": null,
"urls": [],
"raw_text": "Gaku Morio and Katsuhide Fujita. 2018. End-to-end argument mining for discussion threads based on parallel constrained pointer architecture. In Pro- ceedings of the 5th Workshop on Argument Min- ing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Towards better non-tree argument mining: Proposition-level biaffine parsing with task-specific parameterization",
"authors": [
{
"first": "Gaku",
"middle": [],
"last": "Morio",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Ozaki",
"suffix": ""
},
{
"first": "Terufumi",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Yuta",
"middle": [],
"last": "Koreeda",
"suffix": ""
},
{
"first": "Kohsuke",
"middle": [],
"last": "Yanai",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3259--3266",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.298"
]
},
"num": null,
"urls": [],
"raw_text": "Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda, and Kohsuke Yanai. 2020. Towards bet- ter non-tree argument mining: Proposition-level bi- affine parsing with task-specific parameterization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3259-3266, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "From argument diagrams to argumentation mining in texts: A survey",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Cognitive Informatics and Natural Intelligence",
"volume": "7",
"issue": "1",
"pages": "1--31",
"other_ids": {
"DOI": [
"10.4018/jcini.2013010101"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Peldszus and Manfred Stede. 2013. From ar- gument diagrams to argumentation mining in texts: A survey. International Journal of Cognitive Infor- matics and Natural Intelligence, 7(1):1-31.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An annotated corpus of argumentative microtexts",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2015,
"venue": "Argumentation and Reasoned Action -Proceedings of the 1st European Conference on Argumentation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Peldszus and Manfred Stede. 2016. An anno- tated corpus of argumentative microtexts. In Argu- mentation and Reasoned Action -Proceedings of the 1st European Conference on Argumentation, Lisbon, 2015.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Modeling organization in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "229--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Persing, Alan Davis, and Vincent Ng. 2010. Mod- eling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229-239, Cam- bridge, MA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "TIARA: A tool for annotating discourse relations and sentence reordering",
"authors": [
{
"first": "Jan Wira Gotama",
"middle": [],
"last": "Putra",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6912--6920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Wira Gotama Putra, Simone Teufel, Kana Mat- sumura, and Takenobu Tokunaga. 2020. TIARA: A tool for annotating discourse relations and sen- tence reordering. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 6912-6920, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "On the similarities between native, non-native and translated texts",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Sergiu",
"middle": [],
"last": "Nisioi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1870--1881",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Sergiu Nisioi, Noam Ordan, and Shuly Wintner. 2016. On the similarities between native, non-native and translated texts. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1870-1881, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In Proceed- ings of 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Toward an understanding of the distinct nature of l2 writing: The esl research and its implications",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Silva",
"suffix": ""
}
],
"year": 1993,
"venue": "TESOL Quarterly",
"volume": "27",
"issue": "4",
"pages": "657--677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Silva. 1993. Toward an understanding of the dis- tinct nature of l2 writing: The esl research and its implications. TESOL Quarterly, 27(4):657-677.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "More or less controlled elicitation of argumentative text: Enlarging a microtext corpus via crowdsourcing",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Skeppstedt",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "155--163",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5218"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Skeppstedt, Andreas Peldszus, and Manfred Stede. 2018. More or less controlled elicitation of argumentative text: Enlarging a microtext corpus via crowdsourcing. In Proceedings of the 5th Workshop on Argument Mining, pages 155-163, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Applying argumentation schemes for essay scoring",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Deane",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2110"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Song, Michael Heilman, Beata Beigman Klebanov, and Paul Deane. 2014. Applying argumentation schemes for essay scoring. In Proceedings of the First Workshop on Argumentation Mining, pages 69- 78, Baltimore, Maryland. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Annotating argument components and relations in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1501--1510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive es- says. In Proceedings of COLING 2014, the 25th In- ternational Conference on Computational Linguis- tics: Technical Papers, pages 1501-1510, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Parsing argumentation structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "3",
"pages": "619--659",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00295"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing ar- gumentation structures in persuasive essays. Com- putational Linguistics, 43(3):619-659.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Neural automated essay scoring incorporating handcrafted features",
"authors": [
{
"first": "Masaki",
"middle": [],
"last": "Uto",
"suffix": ""
},
{
"first": "Yikuan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Maomi",
"middle": [],
"last": "Ueno",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6077--6088",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.535"
]
},
"num": null,
"urls": [],
"raw_text": "Masaki Uto, Yikuan Xie, and Maomi Ueno. 2020. Neural automated essay scoring incorporating hand- crafted features. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 6077-6088, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Using argument mining to assess the argumentation quality of essays",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1680--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Khalid Al-Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 1680-1691, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Discourse structure and computation: Past, present and future",
"authors": [
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries",
"volume": "",
"issue": "",
"pages": "42--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Webber and Aravind Joshi. 2012. Discourse structure and computation: Past, present and future. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 42- 54, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Building a corpus of legal argumentation in Japanese judgement documents: towards structure-based summarisation",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial Intelligence and Law",
"volume": "27",
"issue": "2",
"pages": "141--170",
"other_ids": {
"DOI": [
"10.1007/s10506-019-09242-3"
]
},
"num": null,
"urls": [],
"raw_text": "Hiroaki Yamada, Simone Teufel, and Takenobu Tokunaga. 2019. Building a corpus of legal argumentation in Japanese judgement documents: towards structure-based summarisation. Artificial Intelligence and Law, 27(2):141-170.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Annotation and classification of argumentative writing revisions",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "133--143",
"other_ids": {
"DOI": [
"10.3115/v1/W15-0616"
]
},
"num": null,
"urls": [],
"raw_text": "Fan Zhang and Diane Litman. 2015. Annotation and classification of argumentative writing revisions. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 133-143, Denver, Colorado. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "det(15) So, please stop smoking and tell people about the harmful effects.(16) It should be banned in restaurants and a no smoking sign should be stuck on the wall of all restaurants.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "A snippet of argumentative structure annotation for essay code \"W PAK SMK0 022 B1 1 EDIT\" by our expert annotator. The essay discusses banning smoking in restaurants.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "sup (8) In foreign countries, some middle- and high-level restaurants have banned smoking. ...... sup (9) Smoking contains nicotine, which makes the food dirty. = (13) Smoking also should be banned in pubs, where people also come to enjoy eating and drinking. sup (14) Nicotine is a drug and its effect on the human body is very harmful and causes death. det (15) So, please stop smoking and tell people about the harmful effects. (16) It should be banned in restaurants and a no smoking sign should be stuck on the wall of all restaurants. (10), (11), (12) Figure 2: The improved version of the essay in Figure 1.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "BiLSTM-softmax (\"SEQTG\").",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Biaffine Model (\"BIAF\").",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "Non-finetuning relation labelling models.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>[Plot residue: F1 (0.0-1.0) on the y-axis vs. sentence distance (-20 to 14) on the x-axis; legend: SBERT-SEQTG [STL], SBERT-SEQTG [MTL], SBERT-BIAF]</td></tr></table>",
"type_str": "table",
"text": "In-domain results of individual-link predictions in the sentence linking task. Best result shown in bold-face. The \u2020 symbol indicates that the difference to the second-best result (underlined) is significant.",
"html": null
}
}
}
}