{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:00.302177Z"
},
"title": "Text Categorization for Conflict Event Annotation",
"authors": [
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University Intelligent Systems",
"location": {
"postBox": "Box 1263, Box 514",
"postCode": "164 29, 751 20",
"settlement": "Kista, Uppsala",
"country": "Sweden, Sweden"
}
},
"email": "[email protected]"
},
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University Intelligent Systems",
"location": {
"postBox": "Box 1263, Box 514",
"postCode": "164 29, 751 20",
"settlement": "Kista, Uppsala",
"country": "Sweden, Sweden"
}
},
"email": "[email protected]"
},
{
"first": "Fehmi",
"middle": [
"Ben"
],
"last": "Abdesslem",
"suffix": "",
"affiliation": {
"laboratory": "Intelligent Systems",
"institution": "RISE Research Institutes of Sweden",
"location": {
"postBox": "Box 1263",
"postCode": "164 29",
"settlement": "Kista",
"country": "Sweden"
}
},
"email": "[email protected]"
},
{
"first": "Ariel",
"middle": [],
"last": "Ekgren",
"suffix": "",
"affiliation": {
"laboratory": "Intelligent Systems",
"institution": "RISE Research Institutes of Sweden",
"location": {
"postBox": "Box 1263",
"postCode": "164 29",
"settlement": "Kista",
"country": "Sweden"
}
},
"email": "[email protected]"
},
{
"first": "Kristine",
"middle": [],
"last": "Eck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 514",
"postCode": "751 20",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We cast the problem of event annotation as one of text categorization, and compare state of the art text categorization techniques on event data produced within the Uppsala Conflict Data Program (UCDP). Annotating a single text involves assigning the labels pertaining to at least 17 distinct categorization tasks, e.g., who were the attacking organization, who was attacked, and where did the event take place. The text categorization techniques under scrutiny are a classical Bag-of-Words approach; character-based contextualized embeddings produced by ELMo; embeddings produced by the BERT base model, and a version of BERT base fine-tuned on UCDP data; and a pre-trained and fine-tuned classifier based on ULMFiT. The categorization tasks are very diverse in terms of the number of classes to predict as well as the skewness of the distribution of classes. The categorization results exhibit a large variability across tasks, ranging from 30.3% to 99.8% F1-score.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We cast the problem of event annotation as one of text categorization, and compare state of the art text categorization techniques on event data produced within the Uppsala Conflict Data Program (UCDP). Annotating a single text involves assigning the labels pertaining to at least 17 distinct categorization tasks, e.g., who were the attacking organization, who was attacked, and where did the event take place. The text categorization techniques under scrutiny are a classical Bag-of-Words approach; character-based contextualized embeddings produced by ELMo; embeddings produced by the BERT base model, and a version of BERT base fine-tuned on UCDP data; and a pre-trained and fine-tuned classifier based on ULMFiT. The categorization tasks are very diverse in terms of the number of classes to predict as well as the skewness of the distribution of classes. The categorization results exhibit a large variability across tasks, ranging from 30.3% to 99.8% F1-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This study concerns the application of automatic text categorization techniques for the purpose of conflict event annotation using the data of the Uppsala Conflict Data Program. 1 In the terminology of UCDP, an event is an instance of fatal organized violence, defined by Sundberg and Melander (2013) as:",
"cite_spans": [
{
"start": 178,
"end": 179,
"text": "1",
"ref_id": null
},
{
"start": 272,
"end": 300,
"text": "Sundberg and Melander (2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The incidence of the use of armed force by an organized actor against another organized actor, or against civilians, resulting in at least 1 direct death in either the best, low or high estimate categories at a specific location and for a specific temporal duration The present study seeks to investigate the automation of event annotation by taking advantage of recent advances in representation and transfer learning to harness the power of pre-trained and fine-tuned language models for representing the textual data subject to categorization. The purpose is to assess the relative performance of text categorization when the learner has access to language knowledge beyond that which is present in the training corpus, across a multitude of categorization tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Document categorization, or document classification, consists in assigning one or several pre-defined labels, based on the contents of a whole document (here, a news article). In its simplest form, document categorization does not require that the ordering of tokens (or even the structures in which the tokens are arranged) is retained while extracting information. To the best of our knowledge, such document categorization introduced in this paper has not previously been applied to news articles for the purpose of event coding. Instead, however, sequence classification has been the focus of several works to automate the event encoding from news articles. Sequence classification is first based on the extraction of information, that is then used for attributing the characteristics of an event (such as the dyad 2 or the number of deaths) described in a document. Information extraction is typically based on classification tasks in which each unit (character, character sequence or token) in a text is classified as to whether it refers to a named entity (actors, location), time, number of casualties, or any other event characteristics. In particular, there are several projects aiming at automating political event coding with sequence classification. The KEDS (Kansas Event Data System) project (Schrodt et al., 1994) was one of the first attempts, and was mainly based on parsing text to extract words that are pre-defined in dictionaries (actors and verbs). TABARI (Schrodt, 2009) replaced KEDS by introducing significant improvements such as recognizing passivevoice sentences or disambiguating verbs that can also be nouns (e.g., Attack). TABARI was then replaced by Petrarch (Norris et al., 2017) and Universal Petrarch. Petrarch stands for \"Python Engine for Text Resolution And Related Coding Hierarchy\". As its aforementioned predecessors, it is also a processing tool for machinecoding text describing events (i.e. news articles). It is designed to process fully-parsed news summaries, from which \"whom-did-what-to-whom\" relations are extracted. The output is then a dyad and an action. Date and location are also extracted. Petrarch is typically used by running the Phoenix pipeline, 3 which mainly consists in the following steps:",
"cite_spans": [
{
"start": 1307,
"end": 1329,
"text": "(Schrodt et al., 1994)",
"ref_id": "BIBREF17"
},
{
"start": 1479,
"end": 1494,
"text": "(Schrodt, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 1692,
"end": 1713,
"text": "(Norris et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "1. Extract articles and corresponding date from online sources using a web scraper 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "2. Encode the sentences with Named Entity Recognition (NER) using Stanford CoreNLP (Manning et al., 2014) 3. Encode each sentence with [source actor, action, and target actor] (who does what to whom) using Petrarch.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "4. Encode each sentence with a location using CLIFF-CALVIN (D'Ignazio et al., 2014) or Mordecai (Halterman, 2017) .",
"cite_spans": [
{
"start": 46,
"end": 83,
"text": "CLIFF-CALVIN (D'Ignazio et al., 2014)",
"ref_id": null
},
{
"start": 96,
"end": 113,
"text": "(Halterman, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "In all these tools, actors and actions (verbs) are pre-defined in a specific ontology. Both Petrarch and Universal Petrarch use the same ontology for actors and verbs, based on TABARI dictionaries. TABARI dictionaries follow the CAMEO (Conflict and Mediation Event Observations) framework (Schrodt et al., 2008) , which was initially intended as an extension of an ontology from the 60-70s called WEIS (McClelland, 2006) . Another old ontology is COPDAB (Azar, 1980) in the 1980s. Competing modern ontologies to CAMEO are the IDEA (Bond et al., 2003) ontology from the 2000s, and the JRC-names (Ehrmann et al., 2017) in the 2010s, developed as a by-product of the EMM (European Media Monitor) project. Currently, CAMEO is being replaced by PLOVER, 5 a new ontology with coverage of some new actions, vastly simplified coding of other actions, and a more flexible system for extensions and modifications. Coding systems such as Petrarch and Universal Petrarch are rule-based: they use rules to decide which noun phrases are actors and which verb phrases are actions, and then compare these chunks of text against lists of hand-defined rules for coding actions and actors. Despite using NLP methods (e.g., NER), they are rarely using advanced machine learning algorithms. Among the few works using machine learning we can cite the work of Beieler (2016) , who uses a character-based convolutional neural network, based on the work of Zhang et al. (2015) , to determine the type of event action. However, the event actors are still determined with Petrarch, and the training dataset is also labelled with Petrarch.",
"cite_spans": [
{
"start": 289,
"end": 311,
"text": "(Schrodt et al., 2008)",
"ref_id": "BIBREF18"
},
{
"start": 402,
"end": 420,
"text": "(McClelland, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 454,
"end": 466,
"text": "(Azar, 1980)",
"ref_id": "BIBREF1"
},
{
"start": 531,
"end": 550,
"text": "(Bond et al., 2003)",
"ref_id": "BIBREF3"
},
{
"start": 594,
"end": 616,
"text": "(Ehrmann et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 1337,
"end": 1351,
"text": "Beieler (2016)",
"ref_id": "BIBREF2"
},
{
"start": 1432,
"end": 1451,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "Recently, categorizing news articles has also been experimented by Adhikari et al. (Adhikari et al., 2019) using BERT (introduced in Section 5.4.) to extract the topic of the articles.",
"cite_spans": [
{
"start": 67,
"end": 106,
"text": "Adhikari et al. (Adhikari et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "The Uppsala Conflict Data Program is the oldest ongoing data collection project for civil war, dating back almost 40 years. UCDP continuously updates its online database on armed conflicts and organized violence, in which information on several aspects of armed conflict such as conflict dynamics and conflict resolution is available. The database offers a web-based system for visualizing, handling and downloading data, including ready-made datasets on organized violence and peacemaking, all free of charge. UCDP is staffed by permanent full-time employees, handling data collection and processing detailed in (H\u00f6gblad, 2019) , including analysis and management. The typical work-flow for a UCDP event annotator amounts to the following. For retrieving the news data from their data provider, an annotator:",
"cite_spans": [
{
"start": 613,
"end": 628,
"text": "(H\u00f6gblad, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "1. inputs search terms to search selected news sources, then;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "2. judges whether each news item retrieved:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "(a) describes a conflict event relevant to UCDP, and (b) either describes a new event, or brings new information about a known event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "Once a news text passes the above criteria, i.e., it is in fact relevant and contributes new information, the annotator looks for the following information in it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "\u2022 Geography (country, region, and even finer grained geographical reference points).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "\u2022 Participants in the dyad.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "\u2022 The number of deaths reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "\u2022 Date or time period of the event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "More often than not, multiple news items relating to the same event are required in order to decide on all of the aforementioned attributes for an event. UCDP staff processes approximately 50 000 news items and other reports yearly, depending on the conflict situation in the world. In total, each text is manually annotated with up to 19 different labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "The textual data in the UDCP database is annotated at the document level, rather than with in-text annotations at the sentence level. For instance, a document annotated with information about the dyad being part of an event exhibits an association between the dyad identifier and the document, but it does not provide information as to where in the document the reference to the dyad is located, and thus not how the surface form of the reference is manifested. This is a consequence of how the UCDP staff work when annotating event data, and it renders it natural to cast the event annotation problem as one of text categorization, rather than as a sequence extraction and labelling task. The annotation tasks consist in identifying the labels present in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 756,
"end": 763,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Event annotation at UCDP",
"sec_num": "3."
},
{
"text": "The dataset at hand in this study consists of a combination of two distinct sources; the internal UCDP database compiled while UCDP annotators are working with identifying events in news text and reports, and the externally published Georeferenced Event Dataset (Sundberg and Melander, 2013) . The former contains textual information related to the source documents read by the annotator while annotating the event, while the externally published event data is a clean, quantitative view of the text data. The combination of the data sources constitutes the ground truth, that the machine learning experiments carried out in this study will try to re-create.",
"cite_spans": [
{
"start": 262,
"end": 291,
"text": "(Sundberg and Melander, 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The dataset",
"sec_num": "4."
},
{
"text": "The dataset used in the following experimental setup consists of 31 772 UCDP events, each of which is associated with a unique body of text in English. A body of text can consist of a (mix of) notes made by the annotator, records Table 1 : The labels to be identified by tasks, along with their short descriptions, their number of classes, and their class entropy for the dataset consisting of 31 772 events. The class entropy is a measure of the class imbalance for a task such that a low value indicate higher imbalance. The class entropy is elaborated on in Section 4.2..",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The training set",
"sec_num": "4.1."
},
{
"text": "copied verbatim from an online conflict tracker, part or the whole of one or several news items, or some other distinct unit of text taken from an online resource. The dataset has been pre-processed and chosen so as to make sure that each text has given rise to a unique UCDP event. That is, in the current dataset there is a one-to-one relation between a body of text and an event. Thus, all texts that have resulted in two or more UCDP events have been omitted. The rationale behind this decision is the following: if a machine cannot reproduce the accuracy of the human annotators when presented with an admittedly simplified scenario (i.e., expect no more than exactly one event per text), then it will not perform well in a more realistic setting either (i.e., expect an arbitrary number of events to be described in each text). Only if the results in the simpler scenario are satisfactory should the more complicated setting be addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The training set",
"sec_num": "4.1."
},
{
"text": "There are at least 17 different categorization tasks that a UCDP annotator has to deal with for every single event (omitting the temporal categories, i.e., the starting and ending date of an event). The annotations of the event data provided by UCDP constitutes the ground truth, and is as such the target of the predictions in the experiments to follow. In other words, for each of the bodies of texts in the dataset, there are 17 labels to predict. Table 1 shows the possible number of different classes that are in play in each of the annotation tasks, as well as the normalized entropy among those labels. The normalized class entropy value \u03b7 is defined as \u03b7",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The labels to predict",
"sec_num": "4.2."
},
{
"text": "(X) = \u2212 n i=1 p(xi) ln(p(xi)) ln(n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The labels to predict",
"sec_num": "4.2."
},
{
"text": "where X is the set of n possible classes, and p(x i ) is the observed fraction of values equal to the ith class. The entropy is indicative of the distribution of classes within a task. A low entropy value is a sign of a skewed distribution, e.g., one class is significantly more frequent than the others, while a high entropy implies a more even distribution of classes. Com-bined, the size of the data, the number of classes and the class entropy tells us something about the expected complexity of the annotation task. For example, given the values in Table 1 , it is expected that the task where coordinates will be hard since it contains many classes (4 125) that are relatively evenly distributed across the dataset (the entropy value is high) giving, on average, relatively few events per class (31 722/4 125) to learn from. On the other hand, the task type of violence task exhibits a number of classes and class entropy at the other end of the spectrum: it is comprised of few classes (3) that are unevenly distributed in terms of occurrences in the dataset (entropy 0.8). Thus, an annotator is expected to perform well for (the majority) classes in the task. Of course, there is more than meets the eye when it comes to how well a classifier actually manages to perform than just the number of classes, and their relative distribution, but these numbers give a hint as to what to expect.",
"cite_spans": [],
"ref_spans": [
{
"start": 554,
"end": 561,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The labels to predict",
"sec_num": "4.2."
},
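To make the definition concrete, here is a minimal Python sketch of the normalized class entropy; the label list is a hypothetical stand-in, not taken from the UCDP data.

```python
import math
from collections import Counter

def normalized_class_entropy(labels):
    """Normalized entropy of a label distribution: ~1 is uniform, ~0 is skewed."""
    counts = Counter(labels)
    n = len(counts)
    if n < 2:
        return 0.0  # a single observed class carries no uncertainty
    total = len(labels)
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(n)

# A skewed 3-class task (cf. type of violence) yields a low value:
print(normalized_class_entropy(["a"] * 90 + ["b"] * 7 + ["c"] * 3))  # ~0.35
```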
{
"text": "The experiments carried out in this study involve learning from the contents of the texts described in Section 4.1. to predict the classes of each task described in Section 4.2.. There are 17 different tasks, each of which will be addressed using five different text categorization techniques, as well as a random guessing-based baseline performance estimation. For each task, the baseline (Section 5.1.), Bag-of-Words (BoW, Section 5.2.), ELMo experiments (Section 5.3.), the two BERT versions (Section 5.4.) are based on 5-fold cross-validation, with test data size set to 20% of the total corpus. This means that the baseline, BoW, ELMo, and BERT results are supported by approximately 30 000 data points each. Due to the time it took to complete the ULM-FiT experiments (Section 5.5.), they are based on a single training and testing set, where the testing set is made up of approximately 6 000 data points, instead of the 5fold cross-validation scheme employed in the other experiments. The split into training and testing data used by ULMFiT corresponds to the first fold in the baseline, BoW, ELMo and BERT cases, as it is made with the same logic and settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5."
},
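A sketch of this evaluation protocol, under the assumption that a plain scikit-learn KFold split is used; `texts` and `labels` are hypothetical stand-ins for the corpus and one task's annotations.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-ins for the event texts and one task's labels.
texts = np.array(["some event text %d" % i for i in range(100)])
labels = np.random.choice(["class_a", "class_b", "class_c"], size=100)

# 5 folds: each held-out test split is 20% of the total corpus.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(texts):
    X_train, X_test = texts[train_idx], texts[test_idx]
    y_train, y_test = labels[train_idx], labels[test_idx]
    # fit a classifier on (X_train, y_train); report weighted F1 on (X_test, y_test)
```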
{
"text": "A \"dummy\" classifier that guesses the class of a text by randomly drawing a class label from the class label distribution is used to assess a baseline upon which the machine learning-based classifiers should improve. The dummy classifier is available in scikit learn described by Pedregosa et al. (2011) .",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "Pedregosa et al. (2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.1."
},
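A minimal sketch of this baseline with scikit-learn's DummyClassifier; the "stratified" strategy draws predictions from the empirical class distribution, matching the random guessing described above. The split variables come from the hypothetical cross-validation sketch earlier.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

# "stratified" samples predictions from the training label distribution,
# i.e., random guessing informed only by class frequencies.
dummy = DummyClassifier(strategy="stratified", random_state=0)
dummy.fit(X_train, y_train)  # splits from the cross-validation sketch above
print(f1_score(y_test, dummy.predict(X_test), average="weighted"))
```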
{
"text": "A classical way to represent documents in text categorization is as a collection of words, in which the order of the words is assumed to be irrelevant. This type of representation is usually referred to as Bag-of-Words. The assumption is na\u00efve, but historically, it has produced relatively competitive results. The BoW representation used in the current setup contains single words (unigrams), as well as all combinations of two consecutive words in the training corpus (bigrams). A linear learning method (Logistic Regression) is then used to train classifiers to distinguish between the classes in the different tasks. The BoW approach is included in the experiments since it, in the past, has been a go-to solution in many text categorization tasks and thus constitutes a sensible baseline that more modern approaches should beat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using a standard Bag-of-Words approach",
"sec_num": "5.2."
},
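A sketch of the BoW setup as a scikit-learn pipeline; the vectorizer settings beyond the unigram+bigram range are assumptions, not taken from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Unigram + bigram counts fed to a linear classifier, as described above.
bow_model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
bow_model.fit(X_train, y_train)  # one task's fold split, as sketched earlier
print(f1_score(y_test, bow_model.predict(X_test), average="weighted"))
```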
{
"text": "Embeddings from Language Models (ELMo) described by (Peters et al., 2018) , is a deep character-based neural network that learns embeddings by predicting the next token given an input sequence. The network architecture includes both convolutional and (bidirectional) LSTM layers, and produces an embedding that that is sensitive to the particular context of the input sequence. Contextualized embeddings have proven to be highly beneficial when using the embeddings as representation in downstream natural language processing tasks such as categorization, entity recognition, and question answering. In the current setup, an existing pretrained version 6 of ELMo is used to produce a single 1 024 elements long feature vector for the body of text associated to each event in the UCDP data. The data used for pretraining the ELMo model used here is reported to be approximately 20 million randomly selected texts from Wikipedia and CommonCrawl, amounting to a total training time of 3 days per language. The ELMo feature vectors are then used as input to a non-linear learner (Random Forest) to train a classifier for distinguishing between the classes in each of the 17 tasks. The ELMo approach is included in the experiments since it has proven to be a simple and effective way of incorporating language knowledge in machine learning situations where training data is scarce.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELMo",
"sec_num": "5.3."
},
{
"text": "6 https://github.com/HIT-SCIR/ ELMoForManyLangs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELMo",
"sec_num": "5.3."
},
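A sketch of the ELMo feature extraction with the ELMoForManyLangs package linked in the footnote above; the mean-pooling step is an assumption (the paper only states that a single 1 024-element vector is produced per text), and the model path is hypothetical.

```python
import numpy as np
from elmoformanylangs import Embedder  # package from the footnote above
from sklearn.ensemble import RandomForestClassifier

# Hypothetical path to a downloaded pre-trained English ELMo model.
embedder = Embedder("models/english_elmo")

# Embed each text and mean-pool the per-token vectors into one 1 024-dim
# feature vector per event text (the pooling choice is an assumption).
tokenized = [text.split() for text in X_train]
token_embeddings = embedder.sents2elmo(tokenized)  # list of (n_tokens, 1024)
features = np.vstack([emb.mean(axis=0) for emb in token_embeddings])

# Non-linear learner on top of the fixed ELMo features, as described above.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(features, y_train)
```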
{
"text": "Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019 ) is a deep, attention-based neural network architecture that produces a contextualized representation of a text by taking both the left and right context into account simultaneously. In this respect, it differs from ELMo, which builds its representation of text based on a concatenating representations from the left and right context. Since its inception, BERT has been shown to improve the state-of-the art on many language processing tasks, including some text categorization ones.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "5.4."
},
{
"text": "In the experiments to follow, we use two versions of BERT: the original large pre-trained uncased base model made available via Hugging Face's Transformers (Wolf et al., 2019) , and a version of the same model fine-tuned on the UCDP data.",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "5.4."
},
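A sketch of the BERT-as-features variant using Hugging Face's Transformers; pooling via the final-layer [CLS] vector is an assumption, and the fine-tuned variant would load a checkpoint further trained on UCDP text in place of bert-base-uncased.

```python
import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

# One fixed-size feature vector per text: the final-layer [CLS] embedding.
features = []
with torch.no_grad():
    for text in X_train:
        enc = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        out = bert(**enc)
        features.append(out.last_hidden_state[0, 0].numpy())
features = np.vstack(features)

# Non-linear learner on top of the BERT features, as in the ELMo setup.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(features, y_train)
```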
{
"text": "Universal Language Modelling Fine-Tuning (ULMFiT), described in (Howard and Ruder, 2018) , is a three step method for transferring general language use to specific categorization tasks. The method consists of the following three steps:",
"cite_spans": [
{
"start": 64,
"end": 88,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULMFiT",
"sec_num": "5.5."
},
{
"text": "1. Train a language model on an unannotated corpus of general language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULMFiT",
"sec_num": "5.5."
},
{
"text": "2. Fine-tune the language model based on unannotated in-domain texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULMFiT",
"sec_num": "5.5."
},
{
"text": "3. Train and fine-tune a text classifier on annotated texts. An initial language model (Step 1) is readily available online. ULMFiT is pretrained on a subset of the English Wikipedia containing more than 103 million running words taken from more than 28 000 verified Good or Featured articles (Merity et al., 2016) . In Step 2, we used the texts associated with the 31 722 UCDP events to fine-tune the language model. Finally, in Step 3, a classifier was created for each of the 17 different tasks outlined in Table 1 . The implementation of ULMFiT used in the current experiment is based on the AWD-LSTM language model architecture described by (Merity et al., 2017) . The ULMFiT approach is included in the experiments because it is a robust method for leveraging the language knowledge of a pretrained model and its ability to adjust that model based on in-domain data, without requiring vast computational resources. Until recently, ULMFiT produced state-of-the art classifiers for a number of benchmarks. Table 2 on the next page shows the results from the experiments in terms F1-score for the random baseline, the BoW-based Logistic Regression classifier, the ELMo-based Random Forest classifier, the original and fine-tuned BERTbased Random Forest classifiers, as well as for ULMFiT.",
"cite_spans": [
{
"start": 293,
"end": 314,
"text": "(Merity et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 646,
"end": 667,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 510,
"end": 517,
"text": "Table 1",
"ref_id": null
},
{
"start": 1010,
"end": 1017,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "ULMFiT",
"sec_num": "5.5."
},
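A sketch of the three ULMFiT steps using the fastai library's AWD-LSTM implementation (Step 1, the Wikipedia-pretrained model, ships with the library); `df`, the column names, and the epoch counts are hypothetical.

```python
import pandas as pd
from fastai.text.all import *

# Hypothetical dataframe: one row per event text, plus one task's labels.
df = pd.DataFrame({"text": ["some event text"] * 100,
                   "label": ["class_a", "class_b"] * 50})

# Step 2: fine-tune the Wikipedia-pretrained language model on in-domain text.
dls_lm = TextDataLoaders.from_df(df, text_col="text", is_lm=True)
lm_learner = language_model_learner(dls_lm, AWD_LSTM)
lm_learner.fine_tune(4)
lm_learner.save_encoder("ucdp_encoder")

# Step 3: train a classifier for one of the 17 tasks on the fine-tuned encoder.
dls_clas = TextDataLoaders.from_df(df, text_col="text", label_col="label",
                                   text_vocab=dls_lm.vocab)
clas_learner = text_classifier_learner(dls_clas, AWD_LSTM)
clas_learner.load_encoder("ucdp_encoder")
clas_learner.fine_tune(4)
```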
{
"text": "As an example, refer back to the discussion of the complexity of the annotation tasks in terms of the number of classes and the class entropy in Section 4.2., and consider the baseline F1-score result for the task type of violence ULMFiT pretrained on Wikipedia, fine-tuned and trained on UCDP data. F weighted F1-score. Light grey cells in the table indicate a failure of the classifier to complete the corresponding task. The failures are due to the size of the models: for tasks with many classes, the memory consumption of the learner exceeds that of the available memory (which in this case is 255Gb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "which is given in column B.F in Table 2 . The task concerns only three highly imbalanced classes, which in effect means it is easy to get a fairly good score just by making a vaguely informed guess with respect to the class. The random guessing-based baseline F1-score is 56.6%. All trained classifiers improve on the baseline, with ULMFiT performing the best at an F1-score of 91.8%, a 35.2 percent point improvement. The other example in Section 4.2. is that of where coordinates. The baseline results for the task align with the expected outcome given the size of the data, the number of classes, and the class entropy: the F1-score value is low, at around 0.3% of a possible 100%. The ULM-FiT classifier improves the F1-score given the baseline with 30.0%. Still, at an F1-score of 30.3%, the classifier clearly underperforms vis-\u00e0-vis the human annotated data. According to Table 2 , the tasks that the hardest for the classifiers are:",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 879,
"end": 886,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 where coordinates (ULMFiT F1-score: 30.3%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 adm 2 (ULMFiT F1-score: 41.3%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 low (ELMo F1-score: 61.6%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 best (ELMo F1-score: 61.1%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 high (ELMo F1-score: 61.8%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "The above are all tasks in which there are many classes, and thus little data to learn from per class. The following are the tasks on which the classifiers performed the best:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 region (ULMFiT F1-score: 99.8%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 country (ULMFiT F1-score: 97.4%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 deaths unknown (ELMo F1-score: 93.3%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 type of violence (ULMFiT F1-score: 91.8%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 side a (BERT fine-tuned F1-score: 84.9%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 deaths civilians (ELMo F1-score: 84.1%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 deaths a (ELMo F1-score: 83.1%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 conflict name (ULMFiT F1-score: 82.7%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 side b (ULMFiT F1-score: 82.5%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "\u2022 dyad name (ULMFiT F1-score: 80.8%) However, it should be emphasized that the experimental setting in this report is a simplified one that only includes data in which each textual body corresponds to exactly one UCDP event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorization results",
"sec_num": "6."
},
{
"text": "From the results of this study, we make two observations. The first observation concerns text categorization for event annotation, while the other is about the developments in the field of transfer learning in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7."
},
{
"text": "By casting the event annotation problem as one of text categorization, we have gained initial insight into the complexity of assigning values to the individual attributes of events. Some attributes are naturally harder to automatically predict than others: for instance, the finer-grained geographical location of an event (where coordinates) is harder to assess than the immediately broader region (country). Similarly, the dyad name is harder to predict than the names of its participants. It is also clear that automated text categorization has value in that it performs very near the level of human annotators, for some tasks. This begs the question: How can we best make use of text categorization for the purpose of improving the human annotation process in terms of, e.g., speed, and consistency? We believe that the categorization results reported in this study are encouraging enough to warrant continued investigations with respect to its use in the manual annotation process, as well as further improvements of the categorization results. As for the latter, there are two immediate issues that require attention. The first issue is to go from the simplified setting of the current experiments to one that allows the more natural manyto-many relationship between texts and events. The second issue is to investigate methods for making use of the conditional dependencies between tasks e.g., that certain dyads are active only in certain geographical locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text categorization for event annotation",
"sec_num": "7.1."
},
{
"text": "Although the bag-of-words approach is a strong baseline, it is almost always better to utilize pre-training and finetuning on domain-specific data. ELMo and the original BERT model are both pre-trained on large amounts of data, and do not make use of any in-domain data in the current setting. Still, both models perform well, beating the BoW baseline in most cases. Furthermore, fine-tuning pretrained models on domain-specific data always helps: the fine-tuned BERT model beats the original model across all tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning in NLP",
"sec_num": "7.2."
},
{
"text": "https://ucdp.uu.se",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"A dyad is made up of two armed and opposing actors.\" See: https://www.pcr.uu.se/research/ucdp/ definitions/ 3 https://phoenix-pipeline.readthedocs.io/ 4 https://github.com/openeventdata/scraper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/openeventdata/PLOVER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The study presented in this paper is funded by Riksbankens Jubileumsfond, via the research project Automation of the Uppsala Conflict Data Program (UCDP), reference number IN18-0710:1. The authors wish to thank the anonymous reviewers for valuable and thoughtful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Docbert: Bert for document classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Adhikari",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adhikari, A., Ram, A., Tang, R., and Lin, J. (2019). Docbert: Bert for document classification. ArXiv, abs/1904.08398.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The conflict and peace data bank (COPDAB) project",
"authors": [
{
"first": "E",
"middle": [
"E"
],
"last": "Azar",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of Conflict Resolution",
"volume": "24",
"issue": "1",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Azar, E. E. (1980). The conflict and peace data bank (COPDAB) project. Journal of Conflict Resolution, 24(1):143-152.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating politically-relevant event data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Beieler",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.06239"
]
},
"num": null,
"urls": [],
"raw_text": "Beieler, J. (2016). Generating politically-relevant event data. arXiv preprint arXiv:1609.06239.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Integrated data for events analysis (IDEA): An event typology for automated events data development",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Jenkins",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Peace Research",
"volume": "40",
"issue": "6",
"pages": "733--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bond, D., Bond, J., Oh, C., Jenkins, J. C., and Taylor, C. L. (2003). Integrated data for events analysis (IDEA): An event typology for automated events data development. Journal of Peace Research, 40(6):733-745.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cliff-clavin: Determining geographic focus for news",
"authors": [
{
"first": "C",
"middle": [],
"last": "D'ignazio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bhargava",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zuckerman",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Beck",
"suffix": ""
}
],
"year": 2014,
"venue": "News KDD: Data Science for News Publishing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D'Ignazio, C., Bhargava, R., Zuckerman, E., and Beck, L. (2014). Cliff-clavin: Determining geographic focus for news. News KDD: Data Science for News Publishing, 2014.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Jrcnames: Multilingual entity name variants and titles as linked data",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ehrmann",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jacquet",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Steinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "Semantic Web",
"volume": "8",
"issue": "",
"pages": "283--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehrmann, M., Jacquet, G., and Steinberger, R. (2017). Jrc- names: Multilingual entity name variants and titles as linked data. Semantic Web, 8(2):283-295.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mordecai: Full text geoparsing and event geocoding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Halterman",
"suffix": ""
}
],
"year": 2017,
"venue": "The Journal of Open Source Software",
"volume": "2",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Halterman, A. (2017). Mordecai: Full text geoparsing and event geocoding. The Journal of Open Source Software, 2(9).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "UCDP GED Codebook version 19",
"authors": [
{
"first": "S",
"middle": [],
"last": "H\u00f6gblad",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00f6gblad, S. (2019). UCDP GED Codebook version 19.1.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Howard, J. and Ruder, S. (2018). Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 328-339.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S. J., and McClosky, D. (2014). The Stan- ford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "World Event/Interaction Survey (WEIS) Project",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mcclelland",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "1966--1978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McClelland, C. (2006). World Event/Interaction Survey (WEIS) Project, 1966-1978.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.07843"
]
},
"num": null,
"urls": [],
"raw_text": "Merity, S., Xiong, C., Bradbury, J., and Socher, R. (2016). Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Regularizing and optimizing lstm language models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.02182"
]
},
"num": null,
"urls": [],
"raw_text": "Merity, S., Keskar, N. S., and Socher, R. (2017). Reg- ularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "PE-TRARCH2: Another event coding program",
"authors": [
{
"first": "C",
"middle": [],
"last": "Norris",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schrodt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Beieler",
"suffix": ""
}
],
"year": 2017,
"venue": "The Journal of Open Source Software",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norris, C., Schrodt, P., and Beieler, J. (2017). PE- TRARCH2: Another event coding program. The Jour- nal of Open Source Software, 2(9), 1.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cour- napeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Jour- nal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextu- alized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Political Science: KEDS -A Program for the Machine Coding of Event Data",
"authors": [
{
"first": "P",
"middle": [
"A"
],
"last": "Schrodt",
"suffix": ""
},
{
"first": "S",
"middle": [
"G"
],
"last": "Davis",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Weddle",
"suffix": ""
}
],
"year": 1994,
"venue": "Social Science Computer Review",
"volume": "12",
"issue": "4",
"pages": "561--587",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schrodt, P. A., Davis, S. G., and Weddle, J. L. (1994). Political Science: KEDS -A Program for the Machine Coding of Event Data. Social Science Computer Review, 12(4):561-587.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The CAMEO (conflict and mediation event observations) actor coding framework",
"authors": [
{
"first": "P",
"middle": [
"A"
],
"last": "Schrodt",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Yilmaz",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Gerner",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hermreck",
"suffix": ""
}
],
"year": 2008,
"venue": "2008 Annual Meeting of the International Studies Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schrodt, P. A., Yilmaz, O., Gerner, D. J., and Hermreck, D. (2008). The CAMEO (conflict and mediation event observations) actor coding framework. In 2008 Annual Meeting of the International Studies Association.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tabari: Textual analysis by augmented replacement instructions",
"authors": [
{
"first": "P",
"middle": [
"A"
],
"last": "Schrodt",
"suffix": ""
}
],
"year": 2009,
"venue": "Dept. of Political Science",
"volume": "",
"issue": "",
"pages": "1--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schrodt, P. A. (2009). Tabari: Textual analysis by aug- mented replacement instructions. Dept. of Political Sci- ence, University of Kansas, Blake Hall, Version 0.7. 3B3, pages 1-137.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introducing the ucdp georeferenced event dataset",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sundberg",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Melander",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Peace Research",
"volume": "50",
"issue": "4",
"pages": "523--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sundberg, R. and Melander, E. (2013). Introducing the ucdp georeferenced event dataset. Journal of Peace Re- search, 50(4):523-532.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtow- icz, M., and Brew, J. (2019). Huggingface's transform- ers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-level convolutional networks for text classification. In Ad- vances in neural information processing systems, pages 649-657.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td/><td>Task</td><td colspan=\"4\">Cls En B.F BW.F E.F</td><td colspan=\"2\">BE.F BF.F U.F</td></tr><tr><td/><td>side a</td><td>299 3.9</td><td>5.0</td><td>76.8</td><td>76.2</td><td colspan=\"2\">81.1 84.9 84.7</td></tr><tr><td/><td>side b</td><td>301 3.5</td><td>8.1</td><td>73.7</td><td>75.5</td><td>78.3</td><td>82.0 82.5</td></tr><tr><td/><td>dyad name</td><td>510 4.5</td><td>4.1</td><td>66.9</td><td>72.5</td><td>75.6</td><td>79.3 80.8</td></tr><tr><td/><td>type of violence</td><td colspan=\"2\">3 0.8 56.6</td><td>88.8</td><td>85.8</td><td>88.6</td><td>89.6 91.8</td></tr><tr><td/><td>conf lict name</td><td>428 4.3</td><td>4.2</td><td>69.5</td><td>73.4</td><td>76.9</td><td>80.7 82.7</td></tr><tr><td/><td>where coordinates</td><td>4125 7.4</td><td>0.3</td><td/><td/><td/><td>30.3</td></tr><tr><td/><td>region</td><td colspan=\"2\">5 1.4 28.7</td><td>99.4</td><td>89.6</td><td>97.7</td><td>98.7 99.8</td></tr><tr><td/><td>country</td><td>84 3.2</td><td>6.9</td><td>95.5</td><td>82.8</td><td>90.2</td><td>94.7 97.4</td></tr><tr><td/><td>adm 1</td><td>672 5.3</td><td>1.0</td><td>64.2</td><td>62.2</td><td>62.8</td><td>65.1 77.7</td></tr><tr><td/><td>adm 2</td><td>1739 6.5</td><td>0.4</td><td>27.5</td><td/><td/><td>41.3</td></tr><tr><td/><td>deaths a</td><td colspan=\"2\">75 1.4 46.8</td><td colspan=\"2\">63.6 83.1</td><td>82.2</td><td>82.2 73.3</td></tr><tr><td/><td>deaths b</td><td colspan=\"2\">115 1.9 35.6</td><td>59.0</td><td>75.1</td><td colspan=\"2\">74.8 75.5 67.4</td></tr><tr><td/><td>deaths civilians</td><td colspan=\"2\">117 1.5 48.7</td><td colspan=\"2\">63.8 84.1</td><td>83.5</td><td>83.7 70.9</td></tr><tr><td/><td>deaths unknown</td><td colspan=\"2\">104 0.9 72.5</td><td colspan=\"2\">79.0 93.3</td><td>92.7</td><td>92.7 80.8</td></tr><tr><td/><td>low</td><td>175 3.2</td><td>8.5</td><td colspan=\"2\">32.3 61.6</td><td>58.5</td><td>58.5 37.9</td></tr><tr><td/><td>best</td><td>187 3.2</td><td>8.3</td><td colspan=\"2\">32.6 61.1</td><td>58.1</td><td>58.4 41.6</td></tr><tr><td/><td>high</td><td>218 3.3</td><td>8.5</td><td colspan=\"2\">32.4 61.8</td><td>58.6</td><td>58.7 40.0</td></tr><tr><td colspan=\"3\">Task The name of the annotation task.</td><td/><td/><td/><td/></tr><tr><td>Cls</td><td colspan=\"3\">The number of distinct classes for a particular task.</td><td/><td/><td/></tr><tr><td>En</td><td colspan=\"7\">The class entropy: a high value corresponds to a more evenly distribution of instances per class.</td></tr><tr><td>B</td><td colspan=\"4\">Baseline, random guessing based on distribution of labels.</td><td/><td/></tr><tr><td>BW</td><td>Bag of words representation.</td><td/><td/><td/><td/><td/></tr><tr><td>E</td><td colspan=\"3\">ELMo representations + non-linear classifier.</td><td/><td/><td/></tr><tr><td>BE</td><td colspan=\"3\">BERT representations + non-linear classifier.</td><td/><td/><td/></tr><tr><td>BF</td><td colspan=\"6\">BERT representations, model fine-tuned on UCDP data + non-linear classifier.</td></tr><tr><td>U</td><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "UCDP document categorization results."
}
}
}
}