{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:38.508901Z"
},
"title": "TwiConv: A Coreference-annotated Corpus of Twitter Conversations",
"authors": [
{
"first": "Berfin",
"middle": [],
"last": "Akta\u015f",
"suffix": "",
"affiliation": {
"laboratory": "SFB1287 Research Focus",
"institution": "Cognitive Sciences University of Potsdam",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Annalena",
"middle": [],
"last": "Kohnert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article introduces TwiConv, an English coreference-annotated corpus of microblog conversations from Twitter. We describe the corpus compilation process and the annotation scheme, and release the corpus publicly along with this paper. We manually annotated nominal coreference in 1756 tweets arranged in 185 conversation threads. The annotation achieves satisfactory inter-annotator agreement. We also present a new method for mapping the tweet contents with distributed stand-off annotations, which can easily be adapted to different annotation tasks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This article introduces TwiConv, an English coreference-annotated corpus of microblog conversations from Twitter. We describe the corpus compilation process and the annotation scheme, and release the corpus publicly along with this paper. We manually annotated nominal coreference in 1756 tweets arranged in 185 conversation threads. The annotation achieves satisfactory inter-annotator agreement. We also present a new method for mapping the tweet contents with distributed stand-off annotations, which can easily be adapted to different annotation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Microblog texts from Twitter present a discourse genre that carries non-standard language characteristics (e.g., noisy or informal language with abbreviations, purposeful typos, use of non-alphanumerical symbols such as #- and @-characters, misspellings, etc.) and is therefore challenging for NLP applications (Ritter et al., 2011; Sikdar and Gamb\u00e4ck, 2016). There exist a number of Twitter datasets annotated at different linguistic layers for investigating a variety of NLP tasks on this genre, including sentiment analysis (Cieliebak et al., 2017), named entity recognition (Derczynski et al., 2016), and event coreference resolution (Chao et al., 2019). Akta\u015f et al. (2018) tested an out-of-the-box nominal coreference resolution system trained on OntoNotes (Hovy et al., 2006; Weischedel et al., 2011) on Twitter data and showed that the system scores considerably below the originally reported values on that data. Hence, tweets are a challenging genre for nominal coreference resolution as well.",
"cite_spans": [
{
"start": 310,
"end": 331,
"text": "(Ritter et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 332,
"end": 357,
"text": "Sikdar and Gamb\u00e4ck, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 527,
"end": 551,
"text": "(Cieliebak et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 579,
"end": 604,
"text": "(Derczynski et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 640,
"end": 659,
"text": "(Chao et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 662,
"end": 681,
"text": "Akta\u015f et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 766,
"end": 785,
"text": "(Hovy et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 786,
"end": 810,
"text": "Weischedel et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "We introduce TwiConv, a nominal coreference-annotated corpus of English-language Twitter posts, created with the intent to explore coreference features in conversational Twitter texts. Our annotation scheme is based on (Grishina and Stede, 2016), with some domain-driven adaptations. Twitter's Developer Policy 1 does not allow publishing the tweet contents. Therefore, most tweet datasets distribute the unique tweet IDs and annotations without the tweet text. However, if the tokenization of the corpus in question is realized through a relatively complicated procedure or contains manual corrections, stand-off annotation layers may not match the text content in the compiled corpus. We thus present a distribution method for mapping the original tweet texts with our annotations. To our knowledge, TwiConv is the first tweet corpus annotated for nominal coreference.",
"cite_spans": [
{
"start": 214,
"end": 239,
"text": "(Grishina and Stede, 2016",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows. We describe the corpus compilation process in Section 2. In Section 3, we present the annotation principles along with a description of our quality assurance methods. The main statistics of our corpus are presented in Section 4. The format of the distributed corpus and the data-sharing methodology are described in Section 5. Section 6 summarizes the presented work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "We used twarc 2 to collect English-language tweets from the Twitter stream on several (non-adjacent) days in December 2017. We did not filter for topics in any way, since that is not a concern for this corpus. Instead, our aim was to collect threads (conversations) by recursively retrieving parent tweets, whose IDs are taken from the in reply to id field of the tweet object returned by the Twitter API. We then used a script from (Scheffler, 2017), which constructs the full conversational tree structure for any tweet that generated replies. A single thread (in our terminology) is a path from the root to a leaf node of that tree. For the purposes of this study, we are not interested in alternative replies and other aspects of the tree structure, so we kept only one of the longest threads (paths) from each tree and discarded everything else. Therefore, the data set does not contain any overlaps in tweet sequences. A sample thread structure with one example coreference chain annotation is illustrated in Appendix A.",
"cite_spans": [
{
"start": 434,
"end": 451,
"text": "(Scheffler, 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection",
"sec_num": "2.1"
},
{
"text": "It is well known that tokenization is a crucial preparatory step for any kind of NLP on texts. We experimented with two different tokenizers: the Stanford PTBTokenizer (Manning et al., 2014) and the Twokenizer (Gimpel et al., 2011). It turned out that these systems have different strengths in handling challenging cases. For instance, only the PTBTokenizer handles apostrophes correctly (e.g., contracted verb forms and possessive markers). On the other hand, the Twokenizer is stronger at recognizing punctuation symbols even when they are not surrounded by whitespace. These cases are illustrated in Appendix B.",
"cite_spans": [
{
"start": 174,
"end": 196,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 212,
"end": 233,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization",
"sec_num": "2.2"
},
{
"text": "We thus decided to implement a tokenization pipeline where the output of the Twokenizer is given as input to the PTBTokenizer. The output of this pipeline is compatible with Penn Treebank conventions 3 and, therefore, with other corpora following the same conventions, such as OntoNotes (Weischedel et al., 2013) and Switchboard (Calhoun et al., 2010). We found that the number of tokens increased by 4% in the second step of the pipeline, and only 5% of the newly generated tokens are erroneously over-generated. Therefore, we do not consider over-tokenization a problem for token-based compatibility with other corpora.",
"cite_spans": [
{
"start": 300,
"end": 325,
"text": "(Weischedel et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 342,
"end": 364,
"text": "(Calhoun et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization",
"sec_num": "2.2"
},
{
"text": "We followed a semi-automated segmentation procedure to split the tokenized tweets into sentences. We first segmented the text using the SoMaJo sentence splitter for English (Proisl and Uhrig, 2016). SoMaJo deals well with common Twitter tokens such as links, hashtags and abbreviations but fails when sentences in the same tweet start with lowercase letters or hashtags, and when the user does not use any punctuation. Therefore, we manually corrected the boundaries detected by SoMaJo.",
"cite_spans": [
{
"start": 173,
"end": 197,
"text": "(Proisl and Uhrig, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "2.3"
},
{
"text": "In our scheme, markables are phrases with nominal or pronominal heads. All nominal expressions, such as names, definite/indefinite noun phrases, pronouns, and temporal expressions, are annotated for coreference. Non-referential pronouns, predicative copula constructions, and appositions are also annotated and distinguished by the attribute values assigned to them. Elements of web language such as usernames and hashtags are considered markables as well. Links and emojis are treated according to their grammatical roles. We illustrate these cases in Appendix C. We annotated all chains, including singletons. Chains can contain several markables from the same tweet (intra-tweet) or from different replies (inter-tweet), which can lead to 1st, 2nd, and 3rd person pronouns referring to the same entity within one thread, as in Example 1. We do not allow discontinuous markables; therefore, split antecedents and their co-referring mentions are annotated as separate markables (Example 3) unless they occur as compound phrases (Example 2) 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Principles",
"sec_num": "3.1"
},
{
"text": "(1) Thanks to [you] i , [I] j can now understand the whole conversation.",
"cite_spans": [
{
"start": 24,
"end": 27,
"text": "[I]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Principles",
"sec_num": "3.1"
},
{
"text": "[You] j are welcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Principles",
"sec_num": "3.1"
},
{
"text": "We used the MMAX2 tool (M\u00fcller and Strube, 2006) for annotations and customized its default settings according to our scheme. We defined comprehensive attributes for chains and mentions. Each chain is assigned a representative mention (i.e., the most descriptive mention in the chain), a semantic class (i.e., the semantic category of the entity), and a genericity value (i.e., whether the referred entity is specific or generic). Mentions are assigned a nominal form (np form) and a grammatical role.",
"cite_spans": [
{
"start": 25,
"end": 50,
"text": "(M\u00fcller and Strube, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Principles",
"sec_num": "3.1"
},
{
"text": "We applied the following procedures to assess and evaluate the quality of manual annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Quality",
"sec_num": "3.2"
},
{
"text": "We validated the consistency of the annotations by applying a number of automated procedures checking whether the constraints specified in the guideline are applied uniformly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Checks",
"sec_num": "1."
},
{
"text": "We reviewed the annotations of the first 27 threads (15% of all threads in the corpus). In total, 33 problematic annotation cases were detected during this review, affecting approximately 50 mentions. Most of the problematic cases were due to incorrect selection of the mention span or assignment of wrong attribute values for the features specified in the guideline. The detected problems affect only 2% of all mentions in this sub-corpus; therefore, we did not consider it necessary to extend the review process to the entire corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of Annotations",
"sec_num": "2."
},
{
"text": "We assessed inter-annotator agreement (IAA) to evaluate the reliability of our annotation process. In the first version of the TwiConv corpus, we annotated only the coreference chains containing 3rd person pronouns, and we conducted the inter-annotator agreement evaluation on this first version of the corpus. The most common annotator errors were divergent selection of mentions (missing or spurious markables), missing chains when they contained only very few mentions, the splitting of one chain into two, and occasional differences in markable span boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "3."
},
{
"text": "We then extended the guideline (GL) and annotated all coreference chains in the second version of the dataset. The changes in the extended GL concern only attributes, which are not addressed in the IAA study. Therefore, we are confident that this agreement study can assess our final scheme in terms of mention detection and chain linking. Artstein and Poesio (2008) propose the use of Krippendorff's \u03b1 (Krippendorff, 1980) for set-based agreement tasks such as coreference annotation. Following their proposal, we used Krippendorff's \u03b1 to measure the IAA for 12 randomly selected threads. Two linguistics students annotated this sub-corpus. We computed the IAA for mention detection and chain linking. We calculated Krippendorff's \u03b1 following the methodology described in (Passonneau, 2006) and obtained a value of 0.872, above the commonly accepted threshold of \u03b1 \u2265 .800, which indicates that our annotations are reliable for research purposes.",
"cite_spans": [
{
"start": 344,
"end": 370,
"text": "Artstein and Poesio (2008)",
"ref_id": "BIBREF1"
},
{
"start": 407,
"end": 427,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF9"
},
{
"start": 785,
"end": 803,
"text": "(Passonneau, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "3."
},
{
"text": "The resulting TwiConv corpus consists of 1756 tweets in 185 threads, with an average tweet length of 153 characters. We present additional descriptive statistics for the TwiConv corpus in Table 1 and for the annotations in Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 1",
"ref_id": null
},
{
"start": 224,
"end": 231,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Corpus Overview",
"sec_num": "4"
},
{
"text": "5 Corpus Distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Overview",
"sec_num": "4"
},
{
"text": "The annotations are stored in a CoNLL format (i.e., tab-separated) with 17 columns in total, one file per Twitter thread. The content of each column is described in Table 3. It is possible that different mentions start at the same token, e.g. \"My Twitter username\" marks both the beginning of the pronoun mention \"My\" and the beginning of the full definite noun phrase mention \"My Twitter username\". In such cases, we used pipe symbols (\"|\") to separate the annotations for the different mentions. The order of the pipe-separated annotations is kept consistent across all columns of a line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus format",
"sec_num": "5.1"
},
{
"text": "Further, some annotations such as NP form and grammatical role have sub-categories, which we express by slashes (\"/\"): e.g., ppers/anaphora marks a personal pronoun that functions as an anaphoric expression. Similarly, the grammatical role other can be appositive, vocative, or other (e.g., other/vocative), but those sub-categories were only assigned to the other type, not to subjects, prepositional phrases, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus format",
"sec_num": "5.1"
},
{
"text": "We used the automatically created parses to detect the clause and NP boundaries (both for shortest and longest NP spans) in tweets. We manually corrected the detected boundaries and added the boundary information to the data files (i.e., boundary start and end tokens are specified in columns 11-13 in Table 3). The last column in the data files represents the relative order of tokens in the texts.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus format",
"sec_num": "5.1"
},
{
"text": "Due to Twitter's Developer Policy, we have to refer to tweets via their ID, through which the message text as well as other tweet-related information can be downloaded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing Method",
"sec_num": "5.2"
},
{
"text": "In order to share the data, we use a method similar to the distribution of the CoNLL-2012 Shared Task Data (Pradhan et al., 2012) and provide skeleton files which include all annotations, but no tokens from the Twitter messages and no usernames (instead, these are replaced by underscore characters). For each token, the ID of the tweet from which the token originates is indicated at the end of the corresponding line. As we have tokenized the data, we also provide reference files to recreate our tokenization steps. To create these diff files, we compared files containing the whitespace-tokenized tweets (one token per line) with files containing the tweets in our final tokenization (also one token per line), using the Linux program diff. The diff files contain only those tokens that were affected by the tokenization method or by other forms of modification, such as encoding differences for emoticons. For a sample representation, see Appendix D.",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing Method",
"sec_num": "5.2"
},
{
"text": "After all still-available tweets have been downloaded, they have to be transformed into the format described above (whitespace-tokenized, one token per line, one file per tweet). We provide an assembly script that uses these tweet files, the skeleton files, and the diff files to create the complete CoNLL files with all annotations and tokens 5 . The script itself contains no information about the content of the annotations and can be re-used for any other tweets, given that the diff and skeleton files (following the CoNLL-style format described in Table 3) have been generated correctly. For unavailable tweets, the tokens remain anonymized (i.e., the underscore characters remain).",
"cite_spans": [],
"ref_spans": [
{
"start": 540,
"end": 547,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sharing Method",
"sec_num": "5.2"
},
{
"text": "We have developed a comprehensive annotation scheme for nominal coreference in English Twitter conversations and fully annotated 1756 tweets arranged in 185 threads. Annotations were assessed and erroneous cases corrected via inter-annotator agreement evaluation, a partial review, and automated checks. We distribute the corpus without the tweet contents and introduce tools that allow researchers to map the tweet texts, retrieved via the tweet IDs, onto the shared annotations. We hope that the release of the TwiConv corpus will increase interest in coreference studies on this genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/DocNow/twarc 3 https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html 4 The full guideline with examples is shared together with the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Scripts and data to reproduce the corpus can be found at https://github.com/berfingit/TwiConv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers and Manfred Stede for their helpful observations and suggestions. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Projektnummer 317633480 -SFB 1287, Project A03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "[Appendix example residue; recoverable content: tokenization samples such as \"here:)Because\" being split into the tokens \"here\", \":)\", \"Because\". The remainder of the appendix layout is not recoverable from the extraction.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendices",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Anaphora resolution for twitter conversations: An exploratory study",
"authors": [
{
"first": "Berfin",
"middle": [],
"last": "Akta\u015f",
"suffix": ""
},
{
"first": "Tatjana",
"middle": [],
"last": "Scheffler",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC@NAACL 2018)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berfin Akta\u015f, Tatjana Scheffler, and Manfred Stede. 2018. Anaphora resolution for twitter conversations: An exploratory study. In Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC@NAACL 2018), pages 1-10, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Survey article: Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The nxt-format switchboard corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue",
"authors": [
{
"first": "Sasha",
"middle": [],
"last": "Calhoun",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"M"
],
"last": "Brenier",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Mayo",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Beaver",
"suffix": ""
}
],
"year": 2010,
"venue": "Language Resources and Evaluation",
"volume": "44",
"issue": "4",
"pages": "387--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sasha Calhoun, Jean Carletta, Jason M. Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver. 2010. The nxt-format switchboard corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. Language Resources and Evaluation, 44(4):387-419, 12.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Selective expression for event coreference resolution on twitter",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chao",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Chao, P. Wei, Z. Luo, X. Liu, and G. Sui. 2019. Selective expression for event coreference resolution on twitter. In 2019 International Joint Conference on Neural Networks (IJCNN).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A twitter corpus and benchmark resources for german sentiment analysis",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"Milan"
],
"last": "Deriu",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Egger",
"suffix": ""
},
{
"first": "Fatih",
"middle": [],
"last": "Uzdilli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Cieliebak, Jan Milan Deriu, Dominic Egger, and Fatih Uzdilli. 2017. A twitter corpus and benchmark resources for german sentiment analysis. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 45-51.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Broad Twitter corpus: A diverse named entity recognition resource",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1169--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Kalina Bontcheva, and Ian Roberts. 2016. Broad Twitter corpus: A diverse named entity recognition resource. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1169-1179, Osaka, Japan, December. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers",
"volume": "2",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 42-47, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parallel coreference annotation guidelines",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Grishina",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Grishina and Manfred Stede, 2016. Parallel coreference annotation guidelines., November.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OntoNotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60, New York City, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Content Analysis: An Introduction To Its Methodology. Sage commtext series",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 1980. Content Analysis: An Introduction To Its Methodology. Sage commtext series. Sage Publications.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-level annotation of linguistic data with mmax2",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2006,
"venue": "Corpus Technology and Language Pedagogy: New Resources, New Tools, New Methods",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph M\u00fcller and Michael Strube. 2006. Multi-level annotation of linguistic data with mmax2. In Sabine Braun, Kurt Kohn, and Joybrato Mukherjee, editors, Corpus Technology and Language Pedagogy: New Resources, New Tools, New Methods.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annota- tion. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL-Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1-40.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SoMaJo: State-of-the-art tokenization for German web and social media texts",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Proisl",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Uhrig",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th Web as Corpus Workshop (WAC-X) and the EmpiriST Shared Task",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Proisl and Peter Uhrig. 2016. SoMaJo: State-of-the-art tokenization for German web and social media texts. In Proceedings of the 10th Web as Corpus Workshop (WAC-X) and the EmpiriST Shared Task, pages 57-62, Berlin. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Named entity recognition in tweets: An experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1524--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, page 1524-1534, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Conversations on twitter",
"authors": [
{
"first": "Tatjana",
"middle": [],
"last": "Scheffler",
"suffix": ""
}
],
"year": 2017,
"venue": "Researching computer-mediated communication: Corpus-based approaches to language in the digital world",
"volume": "",
"issue": "",
"pages": "124--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatjana Scheffler. 2017. Conversations on twitter. In Darja Fi\u0161er and Michael Bei\u00dfwenger, editors, Researching computer-mediated communication: Corpus-based approaches to language in the digital world, pages 124-144. University Press, Ljubljana.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Feature-rich twitter named entity recognition and classification",
"authors": [
{
"first": "Utpal",
"middle": [],
"last": "Kumar Sikdar",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "164--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Utpal Kumar Sikdar and Bj\u00f6rn Gamb\u00e4ck. 2016. Feature-rich twitter named entity recognition and classification. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 164-170, Osaka, Japan, December. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ontonotes : A large training corpus for enhanced processing",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Belvin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Handbook of Natural Language Processing and Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Marcus, Martha Palmer, Robert Belvin, Sameer Pradan, Lance Ramshaw, and Nianwen Xue. 2011. Ontonotes : A large training corpus for enhanced processing. In Joseph Olive, Caitlin Christianson, and John McCary, editors, Handbook of Natural Language Processing and Machine Translation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ontonotes release 5.0 ldc2013t19. Web Download. Linguistic Data Consortium",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Kaufman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. Ontonotes release 5.0 ldc2013t19. Web Download. Linguistic Data Consortium, Philadelphia, PA.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": "2) [The baby and I] i are listening to [our] i favourite music. (3) [I] i met [him] j at [our] k favourite caf\u00e9.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>and an example is presented</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: Descriptive statistics of the corefer-</td></tr><tr><td>ence annotations</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}