{
"paper_id": "K18-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:10:08.875526Z"
},
"title": "Global Attention for Name Tagging",
"authors": [
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute {zhangb8",
"location": {
"addrLine": "whites5,huangl7"
}
},
"email": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Whitehead",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute {zhangb8",
"location": {
"addrLine": "whites5,huangl7"
}
},
"email": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute {zhangb8",
"location": {
"addrLine": "whites5,huangl7"
}
},
"email": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute {zhangb8",
"location": {
"addrLine": "whites5,huangl7"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many name tagging approaches use local contextual information with much success, but fail when the local context is ambiguous or limited. We present a new framework to improve name tagging by utilizing local, documentlevel, and corpus-level contextual information. We retrieve document-level context from other sentences within the same document and corpus-level context from sentences in other topically related documents. We propose a model that learns to incorporate documentlevel and corpus-level contextual information alongside local contextual information via global attentions, which dynamically weight their respective contextual information, and gating mechanisms, which determine the influence of this information. Extensive experiments on benchmark datasets show the effectiveness of our approach, which achieves state-of-the-art results for Dutch, German, and Spanish on the CoNLL-2002 and CoNLL-2003 datasets. 1 .",
"pdf_parse": {
"paper_id": "K18-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Many name tagging approaches use local contextual information with much success, but fail when the local context is ambiguous or limited. We present a new framework to improve name tagging by utilizing local, documentlevel, and corpus-level contextual information. We retrieve document-level context from other sentences within the same document and corpus-level context from sentences in other topically related documents. We propose a model that learns to incorporate documentlevel and corpus-level contextual information alongside local contextual information via global attentions, which dynamically weight their respective contextual information, and gating mechanisms, which determine the influence of this information. Extensive experiments on benchmark datasets show the effectiveness of our approach, which achieves state-of-the-art results for Dutch, German, and Spanish on the CoNLL-2002 and CoNLL-2003 datasets. 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Name tagging, the task of automatically identifying and classifying named entities in text, is often posed as a sentence-level sequence labeling problem where each token is labeled as being part of a name of a certain type (e.g., location) or not (Chinchor and Robinson, 1997 ; Tjong Kim Sang and De Meulder, 2003) . When labeling a token, local context (i.e., surrounding tokens) is crucial because the context gives insight to the semantic meaning of the token. However, there are many instances in which the local context is ambiguous or lacks sufficient content. For example, in Figure 1 , the query sentence discusses \"Zywiec\" selling a product and profiting from these sales, but the local contextual information is ambiguous as more than one entity type could be involved in a sale. As a result, the baseline model mistakenly tags \"Zywiec\" as a person (PER) instead of the correct tag, which is organization (ORG). If the model has access to supporting evidence that provides additional, clearer contextual information, then the model may use this information to correct the mistake given the ambiguous local context.",
"cite_spans": [
{
"start": 247,
"end": 275,
"text": "(Chinchor and Robinson, 1997",
"ref_id": "BIBREF6"
},
{
"start": 284,
"end": 314,
"text": "Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 583,
"end": 591,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far this year [PER Zywiec], whose full name is Zaklady Piwowarskie w Zywcu SA, has netted six million zlotys on sales of 224 million zlotys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline:",
"sec_num": null
},
{
"text": "So far this year [ORG Zywiec], whose full name is Zaklady Piwowarskie w Zywcu SA, has netted six million zlotys on sales of 224 million zlotys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline:",
"sec_num": null
},
{
"text": "Van Boxmeer also said [ORG Zywiec] would be boosted by its recent shedding of soft drinks which only accounted for about three percent of the firm's overall sales and for which 7.6 million zlotys in provisions had already been made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our model (Documentlevel + Corpuslevel Attention):",
"sec_num": null
},
{
"text": "Polish brewer [ORG Zywiec]'s 1996 profit slump may last into next year due in part to hefty depreciation charges, but recent high investment should help the firm defend its 10percent market share, the firm's chief executive said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our model (Documentlevel + Corpuslevel Attention):",
"sec_num": null
},
{
"text": "The [ORG Zywiec] logo includes all of the most important historical symbols of the brewery and Poland itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Documentlevel Supporting Evidence:",
"sec_num": null
},
{
"text": "[LOC Zywiec] is a town in southcentral Poland 32,242 inhabitants (as of November 2007).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Documentlevel Supporting Evidence:",
"sec_num": null
},
{
"text": "Figure 1: Example from the baseline and our model with some supporting evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpuslevel Supporting Evidence:",
"sec_num": null
},
{
"text": "Additional context may be found from other sentences in the same document as the query sentence (document-level). In Figure 1 , the sentences in the document-level supporting evidence provide clearer clues to tag \"Zywiec\" as ORG, such as the references to \"Zywiec\" as a \"firm\". A concern of leveraging this information is the amount of noise that is introduced. However, across all the data in our experiments (Section 3.1), we find that an average of 35.43% of named entity mentions in each document are repeats and, when a mention appears more than once in a document, an average of 98.78% of these mentions have the same type. Consequently, one may use the documentlevel context to overcome the ambiguities of the local context while introducing little noise.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpuslevel Supporting Evidence:",
"sec_num": null
},
{
"text": "Although a significant amount of named entity mentions are repeated, 64.57% of the mentions are unique. In such cases, the sentences at the document-level cannot serve as a source of additional context. Nevertheless, one may find additional context from sentences in other documents in the corpus (corpus-level). Figure 1 shows some of the corpus-level supporting evidence for \"Zywiec\". In this example, similar to the document-level supporting evidence, the first sentence in this corpus-level evidence discusses the branding of \"Zywiec\", corroborating the ORG tag. Whereas the second sentence introduces noise because it has a different topic than the current sentence and discusses the Polish town named \"Zywiec\", one may filter these noisy contexts, especially when the noisy contexts are accompanied by clear contexts like the first sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpuslevel Supporting Evidence:",
"sec_num": null
},
{
"text": "We propose to utilize local, document-level, and corpus-level contextual information to improve name tagging. Generally, we follow the one sense per discourse hypothesis introduced by Yarowsky (2003) . Some previous name tagging efforts apply this hypothesis to conduct majority voting for multiple mentions with the same name string in a discourse through a cache model (Florian et al., 2004) or post-processing (Hermjakob et al., 2017) . However, these rule-based methods require manual tuning of thresholds. Moreover, it's challenging to explicitly define the scope of discourse. We propose a new neural network framework with global attention to tackle these challenges. Specifically, for each token in a query sentence, we propose to retrieve sentences that contain the same token from the document-level and corpuslevel contexts (e.g., document-level and corpuslevel supporting evidence for \"Zywiec\" in Figure 1) . To utilize this additional information, we propose a model that, first, produces representations for each token that encode the local context from the query sentence as well as the documentlevel and corpus-level contexts from the retrieved sentences. Our model uses a document-level at-tention and corpus-level attention to dynamically weight the document-level and corpus-level contextual representations, emphasizing the contextual information from each level that is most relevant to the local context and filtering noise such as the irrelevant information from the mention \"[LOC Zywiec]\" in Figure 1 . The model learns to balance the influence of the local, documentlevel, and corpus-level contextual representations via gating mechanisms. Our model predicts a tag using the local, gated-attentive document-level, and gated-attentive corpus-level contextual representations, which allows our model to predict the correct tag, ORG, for \"Zywiec\" in Figure 1 .",
"cite_spans": [
{
"start": 184,
"end": 199,
"text": "Yarowsky (2003)",
"ref_id": "BIBREF40"
},
{
"start": 371,
"end": 393,
"text": "(Florian et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 413,
"end": 437,
"text": "(Hermjakob et al., 2017)",
"ref_id": null
},
{
"start": 909,
"end": 918,
"text": "Figure 1)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1516,
"end": 1524,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1872,
"end": 1880,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpuslevel Supporting Evidence:",
"sec_num": null
},
{
"text": "The major contributions of this paper are: First, we propose to use multiple levels of contextual information (local, document-level, and corpuslevel) to improve name tagging. Second, we present two new attentions, document-level and corpus-level, which prove to be effective at exploiting extra contextual information and achieve the state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpuslevel Supporting Evidence:",
"sec_num": null
},
{
"text": "We first introduce our baseline model. Then, we enhance this baseline model by adding documentlevel and corpus-level contextual information to the prediction process via our document-level and corpus-level attention mechanisms, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "We consider name tagging as a sequence labeling problem, where each token in a sequence is tagged as the beginning (B), inside (I) or outside (O) of a name mention. The tagged names are then classified into predefined entity types. In this paper, we only use the person (PER), organization (ORG), location (LOC), and miscellaneous (MISC) types, which are the predefined types in CoNLL-02 and CoNLL-03 name tagging dataset (Tjong Kim Sang and De Meulder, 2003 ).",
"cite_spans": [
{
"start": 429,
"end": 458,
"text": "Kim Sang and De Meulder, 2003",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "2.1"
},
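The BIO labeling scheme above can be made concrete with a small sketch; the sentence, tags, and helper function below are illustrative and are not taken from the CoNLL data or the authors' code.

```python
# Illustrative BIO encoding of the tag scheme described above (constructed example).
tokens = ["Polish", "brewer", "Zywiec", "netted", "six", "million", "zlotys", "."]
tags   = ["B-MISC", "O",      "B-ORG",  "O",      "O",   "O",       "O",      "O"]

def extract_mentions(tokens, tags):
    """Collect (type, text) name mentions from a BIO-tagged sequence."""
    mentions, current, current_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                mentions.append((current_type, " ".join(current)))
            current, current_type = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                mentions.append((current_type, " ".join(current)))
            current, current_type = [], None
    if current:
        mentions.append((current_type, " ".join(current)))
    return mentions

print(extract_mentions(tokens, tags))  # [('MISC', 'Polish'), ('ORG', 'Zywiec')]
```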
{
"text": "Our baseline model has two parts: 1) Encoding the sequence of tokens by incorporating the preceding and following contexts using a bi-directional long short-term memory (Bi-LSTM) (Graves et al., 2013) , so each token is assigned a local contextual embedding. Here, following Ma and Hovy (2016a), we use the concatenation of pre-trained word embeddings and character-level word representations composed by a convolutional neural network (CNN) as input to the Bi-LSTM. 2) Using a Conditional Random Fields (CRFs) output layer to render predictions for each token, which can efficiently capture dependencies among name tags (e.g., \"I-LOC\" cannot follow \"B-ORG\").",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Graves et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "2.1"
},
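A minimal sketch of the baseline described above, assuming PyTorch: a character-level CNN and pre-trained word embeddings feed a Bi-LSTM that produces the local contextual representations; the CRF output layer is represented only by its per-token emission scores. The class name, filter count, and hidden size are illustrative, not the authors' implementation.

```python
# Sketch of the baseline encoder: char-CNN + word embeddings -> Bi-LSTM -> emission scores.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, char_size, num_tags,
                 word_dim=100, char_dim=25, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_size, char_dim)
        # CNN over the characters of each word, max-pooled to one vector per word.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden // 2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden, num_tags)   # emission scores fed to a CRF layer

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, word_len)
        b, s, w = chars.shape
        c = self.char_emb(chars).view(b * s, w, -1).transpose(1, 2)
        c = torch.max(torch.relu(self.char_cnn(c)), dim=2).values.view(b, s, -1)
        x = torch.cat([self.word_emb(words), c], dim=-1)
        h, _ = self.lstm(x)          # local contextual representations h_ij
        return h, self.emit(h)       # h is reused later by the global attentions

h, emissions = BiLSTMEncoder(vocab_size=100, char_size=50, num_tags=9)(
    torch.randint(0, 100, (2, 6)), torch.randint(0, 50, (2, 6, 10)))
```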
{
"text": "The Bi-LSTM CRF network is a strong baseline due to its remarkable capability of modeling contextual information and label dependencies. Many recent efforts combine the Bi-LSTM CRF network with language modeling (Liu et al., 2017; Peters et al., 2017 Peters et al., , 2018 to boost the name tagging performance. However, they still suffer from the limited contexts within individual sequences. To overcome this limitation, we introduce two attention mechanisms to incorporate document-level and corpus-level supporting evidence.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "(Liu et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 231,
"end": 250,
"text": "Peters et al., 2017",
"ref_id": "BIBREF32"
},
{
"start": 251,
"end": 272,
"text": "Peters et al., , 2018",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "2.1"
},
{
"text": "Many entity mentions are tagged as multiple types by the baseline approach within the same document due to ambiguous contexts (14.43% of the errors in English, 18.55% in Dutch, and 17.81% in German). This type of error is challenging to address as most of the current neural network based approaches focus on evidence within the sentence when making decisions. In cases where a sentence is short or highly ambiguous, the model may either fail to identify names due to insufficient information or make wrong decisions by using noisy context. In contrast, a human in this situation may seek additional evidence from other sentences within the same document to improve judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "In Figure 1 , the baseline model mistakenly tags \"Zywiec\" as PER due to the ambiguous context \"whose full name is...\", which frequently appears around a person's name. However, contexts from other sentences in the same document containing \"Zywiec\" (e.g., s q and s r in Figure 2 ), such as \"'s 1996 profit...\" and \"would be boosted by its recent shedding...\", indicate that \"Zywiec\" ought to be tagged as ORG. Thus, we incorporate the document-level supporting evidence with the following attention mechanism (Bahdanau et al., 2015) .",
"cite_spans": [
{
"start": 509,
"end": 532,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": null
},
{
"start": 270,
"end": 278,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "Formally, given a document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "D = {s 1 , s 2 , ...}, where s i = {w i1 , w i2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "..} is a sequence of words, we apply a Bi-LSTM to each word in s i , generating local contextual representations h i = {h i1 , h i2 , ...}. Next, for each w ij , we retrieve the sentences in the document that contain w ij (e.g., s q and s r in Figure 2 ) and select the local contextual representations of w ij from these sentences as supporting evidence, Figure 2 ), where h ij andh ij are obtained with the same Bi-LSTM. Since each representation in the supporting evidence is not equally valuable to the final prediction, we apply an attention mechanism to weight the contextual representations of the supporting evidence:",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 356,
"end": 364,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "e k ij = v tanh W h h ij + Whh k ij + b e , \u03b1 k ij = Softmax e k ij ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "where h ij is the local contextual representation of word j in sentence s i andh k ij is the k-th supporting contextual representation. W h , Wh and b e are learned parameters. We compute the weighted average of the supporting representations b\u1ef9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "H ij = k=1 \u03b1 k ijh k ij ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "whereH ij denotes the contextual representation of the supporting evidence for w ij .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "For each word w ij , its supporting evidence representation,H ij , provides a summary of the other contexts where the word appears. Though this evidence is valuable to the prediction process, we must mitigate the influence of the supporting evidence since the prediction should still be made primarily based on the query context. Therefore, we apply a gating mechanism to constrain this influence and enable the model to decide the amount of the supporting evidence that should be incorporated in the prediction process, which is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "r ij = \u03c3(WH ,rH ij + W h,r h ij + b r ) , z ij = \u03c3(WH ,zH ij + W h,z h ij + b z ) , g ij = tanh(W h,g h ij + z ij (WH ,gH ij + b g )) , D ij = r ij h ij + (1 \u2212 r ij ) g ij ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
{
"text": "where all W , b are learned parameters and D ij is the gated supporting evidence representation for w ij .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Attention",
"sec_num": "2.2"
},
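A sketch of the document-level attention and gating equations above, assuming PyTorch; the class name, the parameter shapes, and the use of nn.Linear biases to stand in for b_e, b_r, b_z, and b_g are illustration choices, not the authors' code.

```python
# Sketch of document-level attention over supporting representations, followed by gating.
import torch
import torch.nn as nn

class DocLevelAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_h  = nn.Linear(dim, dim, bias=False)
        self.W_ht = nn.Linear(dim, dim)               # its bias plays the role of b_e
        self.v    = nn.Linear(dim, 1, bias=False)
        self.W_Hr, self.W_hr = nn.Linear(dim, dim), nn.Linear(dim, dim, bias=False)
        self.W_Hz, self.W_hz = nn.Linear(dim, dim), nn.Linear(dim, dim, bias=False)
        self.W_Hg, self.W_hg = nn.Linear(dim, dim), nn.Linear(dim, dim, bias=False)

    def forward(self, h_ij, support):
        # h_ij: (dim,) local representation of w_ij; support: (K, dim) representations
        # of the same word in other sentences of the document (the h~^k_ij).
        e = self.v(torch.tanh(self.W_h(h_ij) + self.W_ht(support)))   # (K, 1)
        alpha = torch.softmax(e, dim=0)               # attention weights alpha^k_ij
        H = (alpha * support).sum(dim=0)              # weighted evidence H~_ij
        r = torch.sigmoid(self.W_Hr(H) + self.W_hr(h_ij))
        z = torch.sigmoid(self.W_Hz(H) + self.W_hz(h_ij))
        g = torch.tanh(self.W_hg(h_ij) + z * self.W_Hg(H))
        D = r * h_ij + (1 - r) * g                    # gated evidence D_ij
        return D, alpha

D, alpha = DocLevelAttention(dim=8)(torch.randn(8), torch.randn(3, 8))
```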
{
"text": "The document-level attention fails to generate supporting evidence when the name appears only once in a single document. In such situations, we analogously select supporting sentences from the entire corpus. Unfortunately, different from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-aware Corpus-level Attention",
"sec_num": "2.3"
},
{
"text": "Van Boxmeer also said Zywiec would be boosted by its recent shedding of soft drinks which only accounted for about three percent of the firm 's overall sales and for which 0.0 million zlotys in provisions had already been made . the sentences that are naturally topically relevant within the same documents, the supporting sentences from the other documents may be about distinct topics or scenarios, and identical phrases may refer to various entities with different types, as in the example in Figure 1 . To narrow down the search scope from the entire corpus and avoid unnecessary noise, we introduce a topic-aware corpus-level attention which clusters the documents by topic and carefully selects topically related sentences to use as supporting evidence.",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 504,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "So far thi Zaklady Pi million zl",
"sec_num": null
},
{
"text": "We first apply Latent Dirichlet allocation (LDA) (Blei et al., 2003) to model the topic distribution of each document and separate the documents into N clusters based on their topic distributions. 2 As in Figure 3 , we retrieve supporting sentences for each word, such as \"Zywiec\", from the topically related documents and employ another attention mechanism (Bahdanau et al., 2015) to the supporting contextual representations,\u0125 ij = {\u0125 Figure 3 ). This yields a weighted contextual representation of the corpus-level supporting evidence,\u0124 ij , for each w ij , which is similar to the document-level supporting evidence representation,H ij , described in 2 N = 20 in our experiments. section 2.2. We use another gating mechanism to combine\u0124 ij and the local contextual representation, h ij , to obtain the corpus-level gated supporting evidence representation, C ij , for each w ij .",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 358,
"end": 381,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 437,
"end": 445,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "So far thi Zaklady Pi million zl",
"sec_num": null
},
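The retrieval step of the topic-aware corpus-level attention can be sketched as follows, assuming scikit-learn's LDA and a toy three-document corpus; assigning each document to its highest-probability topic is one simple reading of "separate the documents into N clusters based on their topic distributions" (N = 20 in the paper, smaller here), and the function name and data are hypothetical.

```python
# Sketch: cluster documents by LDA topic distribution, then retrieve supporting
# sentences containing the query word only from documents in the same cluster.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    ["Zywiec netted six million zlotys on beer sales .",
     "Van Boxmeer said the firm would shed soft drinks ."],
    ["The Zywiec logo includes historical symbols of the brewery ."],
    ["Zywiec is a town in south-central Poland ."],
]
doc_texts = [" ".join(sentences) for sentences in documents]

X = CountVectorizer().fit_transform(doc_texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)          # per-document topic distributions
clusters = doc_topics.argmax(axis=1)       # one cluster id per document

def corpus_level_support(query_word, query_doc_id):
    """Sentences containing query_word from other documents in the same cluster."""
    same_cluster = [i for i, c in enumerate(clusters)
                    if c == clusters[query_doc_id] and i != query_doc_id]
    return [s for i in same_cluster for s in documents[i] if query_word in s.split()]

print(corpus_level_support("Zywiec", query_doc_id=0))
```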
{
"text": "For each word w ij of sentence s i , we concatenate its local contextual representation h ij , documentlevel gated supporting evidence representation D ij , and corpus-level gated supporting evidence representation C ij to obtain its final representation. This representation is fed to another Bi-LSTM to further encode the supporting evidence and local contextual features into an unified representation, which is given as input to an affine-CRF layer for label prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Prediction",
"sec_num": "2.4"
},
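A sketch of the prediction step described above, assuming PyTorch: the local, document-level, and corpus-level representations are concatenated per token, re-encoded by a second Bi-LSTM, and projected to tag scores that an affine-CRF layer would consume (the CRF itself is omitted); all dimensions are illustrative.

```python
# Sketch of fusing h, D, and C before the output layer.
import torch
import torch.nn as nn

dim, num_tags, seq_len = 8, 9, 6
h = torch.randn(1, seq_len, dim)        # local contextual representations
D = torch.randn(1, seq_len, dim)        # gated document-level evidence
C = torch.randn(1, seq_len, dim)        # gated corpus-level evidence

fusion_lstm = nn.LSTM(3 * dim, dim, bidirectional=True, batch_first=True)
to_tags = nn.Linear(2 * dim, num_tags)  # scores consumed by an affine-CRF layer

u, _ = fusion_lstm(torch.cat([h, D, C], dim=-1))
tag_scores = to_tags(u)                 # (1, seq_len, num_tags)
```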
{
"text": "3 Experiments We select at most four document-level supporting sentences and five corpus-level supporting sentences. 4 Since the document-level attention method requires input from each individual document, we do not evaluate it on the CoNLL-2002 Spanish dataset which lacks document delimiters. We still evaluate the corpus-level attention on the Spanish dataset by randomly splitting the dataset into documents (30 sentences per document). Although randomly splitting the sentences does not yield perfect topic modeling clusters, experiments show the corpus-level attention still outperforms the baseline (Section 3.3). ",
"cite_spans": [
{
"start": 117,
"end": 118,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Prediction",
"sec_num": "2.4"
},
{
"text": "For word representations, we use 100-dimensional pre-trained word embeddings and 25-dimensional randomly initialized character embeddings. We train word embeddings using the word2vec package. 5 English embeddings are trained on the English Giga-word version 4, which is the same corpus used in (Lample et al., 2016) . Dutch, Spanish, and German embeddings are trained on corresponding Wikipedia articles (2017-12-20 dumps). Word embeddings are fine-tuned during training. Table 2 shows our hyper-parameters. For each model with an attention, since the Bi-LSTM encoder must encode the local, documentlevel, and/or corpus-level contexts, we pre-train a Bi-LSTM CRF model for 50 epochs, add our document-level attention and/or corpus-level attention, and then fine-tune the augmented model. Additionally, Reimers and Gurevych (2017) report that neural models produce different results even with same hyper-parameters due to the variances in parameter initialization. Therefore, we run each model ten times and report the mean as well as the maximum F1 scores.",
"cite_spans": [
{
"start": 192,
"end": 193,
"text": "5",
"ref_id": null
},
{
"start": 294,
"end": 315,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 802,
"end": 829,
"text": "Reimers and Gurevych (2017)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 472,
"end": 479,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.2"
},
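The embedding-training step can be sketched with gensim as an equivalent to the word2vec package the paper uses; the toy sentences are illustrative, and the `vector_size` argument is named `size` in gensim versions before 4.0.

```python
# Sketch: train 100-dimensional word embeddings on a tokenized corpus.
from gensim.models import Word2Vec

sentences = [["Polish", "brewer", "Zywiec", "netted", "six", "million", "zlotys"],
             ["Zywiec", "is", "a", "town", "in", "Poland"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=1)
vector = model.wv["Zywiec"]   # 100-dimensional embedding for "Zywiec"
```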
{
"text": "We compare our methods to three categories of baseline name tagging methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "3.3"
},
{
"text": "\u2022 Vanilla Name Tagging Without any additional resources and supervision, the current state-ofthe-art name tagging model is the Bi-LSTM-CRF network reported by Lample et al. (2016) and Ma and Hovy (2016b) , whose difference lies in using a LSTM or CNN to encode characters. Our methods fall in this category.",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 184,
"end": 203,
"text": "Ma and Hovy (2016b)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "3.3"
},
{
"text": "\u2022 (Chelba et al., 2013) . Table 3 presents the performance comparison among the baselines, the aforementioned stateof-the-art methods, and our proposed methods. Adding only the document-level attention offers a F1 gain of between 0.37% and 1.25% on Dutch, English, and German. Similarly, the addition of the corpus-level attention yields a F1 gain between 0.46% to 1.08% across all four languages. The model with both attentions outperforms our baseline method by 1.60%, 0.56%, and 0.79% on Dutch, English, and German, respectively. Using a paired t-test between our proposed model and the baselines on 10 randomly sampled subsets, we find that the improvements are statistically significant (p \u2264 0.015) for all settings and all languages. By incorporating the document-level and corpus-level attentions, we achieve state-of-the-art performance on the Dutch (NLD), Spanish (ESP) and German (DEU) datasets. For English, our methods outperform the state-of-the-art methods in the \"Vanilla Name Tagging\" category. Since the document-level and corpus-level attentions introduce redundant and topically related information, our models are compatible with the language model enhanced approaches. It is interesting to explore the integration of these two methods, but we leave this to future explorations. Figure 4 presents, for each language, the learning curves of the full models (i.e., with both document-level and corpus-level attentions). The learning curve is computed by averaging the F1 scores of the ten runs at each epoch. We first pretrain a baseline Bi-LSTM CRF model from epoch 1 to 50. Then, starting at epoch 51, we incorporate the document-level and corpus-level attentions to fine-tune the entire model. As shown in Figure 4 , when adding the attentions at epoch 51, the F1 score drops significantly as new parameters are introduced to the model. The model gradually adapts to the new information, the F1 score rises, and the full model eventually outperforms the pretrained model. The learning curves strongly prove the effectiveness of our proposed methods.",
"cite_spans": [
{
"start": 2,
"end": 23,
"text": "(Chelba et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 3",
"ref_id": null
},
{
"start": 1299,
"end": 1307,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1727,
"end": 1735,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "3.3"
},
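The significance test mentioned above (a paired t-test over 10 randomly sampled subsets) can be sketched with SciPy; the F1 values below are made up for illustration, while the paper reports p ≤ 0.015.

```python
# Sketch: paired t-test between two systems' F1 scores on the same subsets.
from scipy.stats import ttest_rel

baseline_f1 = [90.4, 90.8, 91.0, 90.6, 90.9, 90.7, 90.5, 91.1, 90.8, 90.6]
our_f1      = [91.0, 91.3, 91.6, 91.2, 91.4, 91.1, 91.0, 91.7, 91.5, 91.2]
t_stat, p_value = ttest_rel(our_f1, baseline_f1)
print(p_value)
```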
{
"text": "Table 3: Performance of our methods versus the baseline and state-of-the-art models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "3.3"
},
{
"text": "We also compare our approach with a simple rule-based propagation method, where we use token-level majority voting to make labels consistent on document-level and corpus-level. The score of document-level propagation on English is 90.21% (F1), and the corpus-level propagation is 89.02% which are both lower than the BiLSTM-CRF baseline 90.97%. Table 5 compares the name tagging results from the baseline model and our best models. All ex-amples are selected from the development set.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "3.3"
},
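The rule-based propagation baseline described above can be sketched as token-level majority voting over a document's predictions; the input format and the example document below are illustrative, not taken from the data.

```python
# Sketch: force repeated mentions of a token in one document to its majority type.
from collections import Counter

def propagate_by_majority(predictions):
    """predictions: list of (token, tag) pairs for one document."""
    votes = {}
    for token, tag in predictions:
        if tag != "O":
            votes.setdefault(token, Counter())[tag] += 1
    majority = {tok: c.most_common(1)[0][0] for tok, c in votes.items()}
    return [(tok, majority.get(tok, tag)) for tok, tag in predictions]

doc = [("Zywiec", "B-PER"), ("netted", "O"), ("Zywiec", "B-ORG"),
       ("Zywiec", "B-ORG"), ("Poland", "B-LOC")]
print(propagate_by_majority(doc))
# majority voting relabels the first mention of "Zywiec" as B-ORG
```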
{
"text": "In the Dutch example, \"Granada\" is the name of a city in Spain, but also the short name of \"Granada Media\". Without ORG related context, \"Granada\" is mistakenly tagged as LOC by the baseline model. However, the document-level and corpus-level supporting evidence retrieved by our method contains the ORG name \"Granada Media\", which strongly indicates \"Granada\" to be an ORG in the query sentence. By adding the document-level and corpus-level attentions, our model successfully tags \"Granada\" as ORG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.4"
},
{
"text": "In example 2, the OOV word \"Kaczmarek\" is tagged as ORG in the baseline output. In the retrieved document-level supporting sentences, PER related contextual information, such as the pronoun \"he\", indicates \"Kaczmarek\" to be a PER. Our model correctly tags \"Kaczmarek\" as PER with the document-level attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.4"
},
{
"text": "In the German example, \"Gr\u00fcnen\" (Greens) is an OOV word in the training set. The character embedding captures the semantic meaning of the stem \"Gr\u00fcn\" (Green) which is a common nonname word, so the baseline model tags \"Gr\u00fcnen\" as O (outside of a name). In contrast, our model makes the correct prediction by incorporating the corpus-level attention because in the related sentence from the corpus \"Bundesvorstandes der Gr\u00fcnen\" (Federal Executive of the Greens) indicates \"Gr\u00fcnen\" to be a company name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.4"
},
{
"text": "By investigating the remaining errors, most of the named entity type inconsistency errors are eliminated, however, a few new errors are introduced due to the model propagating labels from negative instances to positive ones. Figure 5 presents a negative example, where our model, being influenced by the prediction \"[B-ORG Indianapolis]\" in the supporting sentence, incorrectly predicts \"Indianapolis\" as ORG in the query sentence. A potential solution is to apply sentence classification (Kim, 2014; Ji and Smith, 2017) to the documents, divide the document into finegrained clusters of sentences, and select supporting sentences within the same cluster.",
"cite_spans": [
{
"start": 489,
"end": 500,
"text": "(Kim, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 501,
"end": 520,
"text": "Ji and Smith, 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Remaining Challenges",
"sec_num": "3.5"
},
{
"text": "In morphologically rich languages, words may have many variants. When retrieving supporting evidence, our exact query word match criterion misses potentially useful supporting sentences that contain variants of the word. Normalization and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remaining Challenges",
"sec_num": "3.5"
},
{
"text": "Name tagging methods based on sequence labeling have been extensively studied recently. and Lample et al. (2016) proposed a neural architecture consisting of a bi-directional long short-term memory network (Bi-LSTM) encoder and a conditional random field (CRF) output layer (Bi-LSTM CRF). This architecture has been widely explored and demonstrated to be effective for sequence labeling tasks. Efforts incorporated character level compositional word embeddings, language modeling, and CRF re-ranking into the Bi-LSTM CRF architecture which improved the performance (Ma and Hovy, 2016a; Liu et al., 2017; Sato et al., 2017; Peters et al., 2017 Peters et al., , 2018 . Similar to these studies, our approach is also based on a Bi-LSTM CRF architecture. However, considering the limited contexts within each individual sequence, we design two attention mechanisms to further incorporate topically related contextual information on both the document-level and corpus-level.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 565,
"end": 585,
"text": "(Ma and Hovy, 2016a;",
"ref_id": "BIBREF26"
},
{
"start": 586,
"end": 603,
"text": "Liu et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 604,
"end": 622,
"text": "Sato et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 623,
"end": 642,
"text": "Peters et al., 2017",
"ref_id": "BIBREF32"
},
{
"start": 643,
"end": 664,
"text": "Peters et al., , 2018",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "There have been efforts in other areas of information extraction to exploit features beyond individual sequences. Early attempts (Mikheev et al., 1998; Mikheev, 2000) on MUC-7 name tagging dataset used document centered approaches. A number of approaches explored document-level features (e.g., temporal and co-occurrence patterns) for event extraction (Chambers and Jurafsky, 2008; Ji and Grishman, 2008; Liao and Grishman, 2010; Do et al., 2012; McClosky and Manning, 2012; Berant et al., 2014; Yang and Mitchell, 2016) . Other approaches leveraged features from external resources (e.g., Wiktionary or FrameNet) for low resource name tagging and event extraction (Li et al., 2013; Huang et al., 2016; Liu et al., 2016; Zhang et al., 2016; Cotterell and Duh, 2017; Huang et al., 2018) . Yaghoobzadeh and Sch\u00fctze (2016) aggregated corpus-level contextual information of each entity to predict its type and Narasimhan et al. (2016) incorporated contexts from external information sources (e.g., the documents that contain the desired information) to resolve ambiguities. Compared with these studies, our work incorporates both document-level and corpus-level con-textual information with attention mechanisms, which is a more advanced and efficient way to capture meaningful additional features. Additionally, our model is able to learn how to regulate the influence of the information outside the local context using gating mechanisms.",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Mikheev et al., 1998;",
"ref_id": "BIBREF30"
},
{
"start": 152,
"end": 166,
"text": "Mikheev, 2000)",
"ref_id": "BIBREF29"
},
{
"start": 353,
"end": 382,
"text": "(Chambers and Jurafsky, 2008;",
"ref_id": "BIBREF4"
},
{
"start": 383,
"end": 405,
"text": "Ji and Grishman, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 406,
"end": 430,
"text": "Liao and Grishman, 2010;",
"ref_id": "BIBREF22"
},
{
"start": 431,
"end": 447,
"text": "Do et al., 2012;",
"ref_id": "BIBREF8"
},
{
"start": 448,
"end": 475,
"text": "McClosky and Manning, 2012;",
"ref_id": "BIBREF28"
},
{
"start": 476,
"end": 496,
"text": "Berant et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 497,
"end": 521,
"text": "Yang and Mitchell, 2016)",
"ref_id": "BIBREF38"
},
{
"start": 666,
"end": 683,
"text": "(Li et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 684,
"end": 703,
"text": "Huang et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 704,
"end": 721,
"text": "Liu et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 722,
"end": 741,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF42"
},
{
"start": 742,
"end": 766,
"text": "Cotterell and Duh, 2017;",
"ref_id": "BIBREF7"
},
{
"start": 767,
"end": 786,
"text": "Huang et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 907,
"end": 931,
"text": "Narasimhan et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We propose document-level and corpus-level attentions for name tagging. The document-level attention retrieves additional supporting evidence from other sentences within the document to enhance the local contextual information of the query word. When the query word is unique in the document, the corpus-level attention searches for topically related sentences in the corpus. Both attentions dynamically weight the retrieved contextual information and emphasize the information most relevant to the query context. We present gating mechanisms that allow the model to regulate the influence of the supporting evidence on the predictions. Experiments demonstrate the effectiveness of our approach, which achieves stateof-the-art results on benchmark datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "We plan to apply our method to other tasks, such as event extraction, and explore integrating language modeling into this architecture to further boost name tagging performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "The programs are publicly available for research purpose: https://github.com/boliangz/global_ attention_ner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The miscellaneous category consists of names that do not belong to the other three categories.4 Both numbers are tuned from 1 to 10 and selected when the model performs best on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/tmikolov/word2vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Repre- sentations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling biological processes for reading comprehension",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "Pei-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Abby",
"middle": [],
"last": "Vander Linden",
"suffix": ""
},
{
"first": "Brittany",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine Learning research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Large-scale machine learning with stochastic gradient descent",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 International Conference on Computational Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 2010 International Conference on Computational Statistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Jointly combining implicit constraints improves temporal ordering",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.3005"
]
},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Muc-7 named entity task definition",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 7th Conference on Message Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor and Patricia Robinson. 1997. Muc-7 named entity task definition. In Proceedings of the 7th Conference on Message Understanding.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lowresource named entity recognition with crosslingual, character-level neural conditional random fields",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Kevin Duh. 2017. Low- resource named entity recognition with cross- lingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joint inference for event timeline construction",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Quang Xuan Do",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A statistical model for multilingual entity detection and tracking",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N. Nicolov, and S. Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of the Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics (HLT-NAACL 2004).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Oriol Vinyals, and Amarnag Subramanya",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Brunk",
"suffix": ""
}
],
"year": 2015,
"venue": "Multilingual language processing from bytes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.00103"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language process- ing from bytes. arXiv preprint arXiv:1512.00103.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hybrid speech recognition with deep bidirectional lstm",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Graves, Navdeep Jaitly, and Abdel-rahman Mo- hamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incident-driven machine translation and name tagging for low-resource languages",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Pust",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Tomer",
"middle": [],
"last": "Levinboim",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Pust, Xing Shi, Kevin Knight, Tomer Lev- inboim, Kenton Murray, David Chiang, Boliang Zhang, Xiaoman Pan, Di Lu, Ying Lin, and Heng Ji. 2017. Incident-driven machine translation and name tagging for low-resource languages. Machine Translation, pages 1-31.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Liberal event extraction and event schema induction",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-lingual common semantic space construction via cluster-consistent word embedding",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07875"
]
},
"num": null,
"urls": [],
"raw_text": "Lifu Huang, Kyunghyun Cho, Boliang Zhang, Heng Ji, and Kevin Knight. 2018. Multi-lingual common semantic space construction via cluster-consistent word embedding. arXiv preprint arXiv:1804.07875.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Refining event extraction through cross-document inference",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. Pro- ceedings of the 2008 Annual Meeting of the Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural discourse structure for text categorization",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Noah Smith. 2017. Neural discourse structure for text categorization. Proceedings of the 2017 Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 2016 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of 2016 Annual Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using document level cross-event inference to improve event extraction",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Liao and Ralph Grishman. 2010. Using doc- ument level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Empower sequence labeling with task-aware neural language model",
"authors": [
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liyuan Liu, Jingbo Shang, Frank Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2017. Empower sequence labeling with task-aware neural language model.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Leveraging framenet to improve automatic event detection",
"authors": [
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Joint entity recognition and disambiguation",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zaiqing",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Za- iqing Nie. 2015. Joint entity recognition and disam- biguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016a. End-to-end se- quence labeling via bi-directional lstm-cnns-crf.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016b. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning constraints for consistent timeline extraction",
"authors": [
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky and Christopher D Manning. 2012. Learning constraints for consistent timeline extrac- tion. In Proceedings of the 2012 Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Document centered approach to text normalization",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Mikheev",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Mikheev. 2000. Document centered approach to text normalization. In Proceedings of the 23rd annual international ACM SIGIR conference on Re- search and development in information retrieval, pages 136-143. ACM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Description of the ltg system used for muc-7",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 1998,
"venue": "Seventh Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Mikheev, Claire Grover, and Marc Moens. 1998. Description of the ltg system used for muc- 7. In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29-May 1, 1998.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Improving information extraction by acquiring external evidence with reinforcement learning",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Yala",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.07954"
]
},
"num": null,
"urls": [],
"raw_text": "Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquir- ing external evidence with reinforcement learning. arXiv preprint arXiv:1603.07954.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Optimal hyperparameters for deep lstm-networks for sequence labeling tasks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.06799"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Optimal hy- perparameters for deep lstm-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Segment-level neural conditional random fields for named entity recognition",
"authors": [
{
"first": "Motoki",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Motoki Sato, Hiroyuki Shindo, Ikuya Yamada, and Yuji Matsumoto. 2017. Segment-level neural conditional random fields for named entity recognition. In Pro- ceedings of the Eighth International Joint Confer- ence on Natural Language Processing.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Corpus-level fine-grained entity typing using contextual information",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.07901"
]
},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh and Hinrich Sch\u00fctze. 2016. Corpus-level fine-grained entity typing using contextual information. arXiv preprint arXiv:1606.07901.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Joint extraction of events and entities within a document context",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.03632"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom Mitchell. 2016. Joint extrac- tion of events and entities within a document con- text. arXiv preprint arXiv:1609.03632.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Transfer learning for sequence tagging with hierarchical recurrent networks",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.06345"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks. arXiv preprint arXiv:1703.06345.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL1995",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 2003. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Proc. ACL1995.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Embracing non-traditional linguistic resources for low-resource language name tagging",
"authors": [
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Halidanmu",
"middle": [],
"last": "Abudukelimu",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boliang Zhang, Di Lu, Xiaoman Pan, Ying Lin, Hal- idanmu Abudukelimu, Heng Ji, and Kevin Knight. 2017. Embracing non-traditional linguistic re- sources for low-resource language name tagging. In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Name tagging for low-resource incident languages based on expectation-driven learning",
"authors": [
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boliang Zhang, Xiaoman Pan, Tianlu Wang, Ashish Vaswani, Heng Ji, Kevin Knight, and Daniel Marcu. 2016. Name tagging for low-resource incident lan- guages based on expectation-driven learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "The two largest brands are Heineken and Amstel.The list includes Cruzcampo, Affligem and Zywiec . Document-level Attention Architecture. (Within-sequence context in red incorrectly indicates the name as PER, and document-level context in green correctly indicates the name as ORG.)",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "..} (e.g.,h xi andh yi in",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Corpus-level Attention Architecture.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "German (F1 scales between 76%-79%)Figure 4: Average F1 score for each epoch of the ten runs of our model with both document-level and corpus-level attentions. Epochs 1-50 are the pre-training phase and 51-100 are the fine-tuning phase.name tagging performance by introducing additional annotations from related tasks such as entity linking and part-of-speech tagging.\u2022 Join-learning with Language Model Peters et al. (2017); Liu et al. (2017); Peters et al. (2018) leverage a pre-trained language model on a large external corpus to enhance the semantic representations of words in the local corpus. Peters et al. (2018) achieve a high score on the CoNLL-2003 English dataset using a giant language model pre-trained on a 1 Billion Word Benchmark",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "D-lvl sentences: document-level supporting sentences. * C-lvl sentences: corpus-level supporting sentences.",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "Comparison of name tagging results between the baseline and our methods. morphological analysis can be applied in this case to help fetch supporting sentences.",
"uris": null
},
"TABREF2": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "# of tokens in name tagging datasets statistics. # of names is given in parentheses."
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Hyper-parameter</td><td>Value</td></tr><tr><td>CharCNN Filter Number</td><td>25</td></tr><tr><td>CharCNN Filter Widths</td><td>[2, 3, 4]</td></tr><tr><td>Lower Bi-LSTM Hidden Size</td><td>100</td></tr><tr><td colspan=\"2\">Lower Bi-LSTM Dropout Rate 0.5</td></tr><tr><td>Upper Bi-LSTM Hidden Size</td><td>100</td></tr><tr><td>Learning Rate</td><td>0.005</td></tr><tr><td>Batch Size</td><td>N/A *</td></tr><tr><td>Optimizer</td><td>SGD (Bottou, 2010)</td></tr><tr><td>*</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Each batch is a document. The batch size varies as the different document length."
},
"TABREF4": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Hyper-parameters."
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>0</td><td>60.67</td><td/><td>60.67</td></tr><tr><td>1</td><td>66.07</td><td/><td>66.07</td></tr><tr><td>2</td><td>68.82</td><td/><td>68.82</td></tr><tr><td>3</td><td>70.77</td><td/><td>70.77</td></tr><tr><td>4</td><td>71.71</td><td/><td>71.71</td></tr><tr><td>5</td><td>72.91</td><td/><td>72.91</td></tr><tr><td>6</td><td>73.36</td><td/><td>73.36</td></tr><tr><td>7</td><td>74.49</td><td/><td>74.49</td></tr><tr><td>8</td><td>74.95</td><td/><td>74.95</td></tr><tr><td>9</td><td>74.33</td><td/><td>74.33</td></tr><tr><td>10</td><td>75.19</td><td/><td>75.19</td></tr><tr><td>11</td><td>75.37</td><td/><td>75.37</td></tr><tr><td>12</td><td>75.81</td><td/><td>75.81</td></tr><tr><td>13</td><td>76.51</td><td/><td>76.51</td></tr><tr><td>14</td><td>76.3</td><td/><td>76.3</td></tr><tr><td>15</td><td>76.57</td><td/><td>76.57</td></tr><tr><td>16</td><td>76.73</td><td/><td>76.73</td></tr><tr><td>17</td><td>76.86</td><td/><td>76.86</td></tr><tr><td>18</td><td>76.68</td><td/><td>76.68</td></tr><tr><td>19</td><td>77.18</td><td/><td>77.18</td></tr><tr><td>20</td><td>77.19</td><td/><td>77.19</td></tr><tr><td>21</td><td>77.38</td><td/><td>77.38</td></tr><tr><td>22</td><td>76.97</td><td/><td>76.97</td></tr><tr><td>23</td><td>77.25</td><td/><td>77.25</td></tr><tr><td>24</td><td>77.19</td><td/><td>77.19</td></tr><tr><td>25</td><td>77.43</td><td/><td>77.43</td></tr><tr><td>26</td><td>77.56</td><td/><td>77.56</td></tr><tr><td>27</td><td>77.85</td><td/><td>77.85</td></tr><tr><td>28</td><td>77.82</td><td/><td>77.82</td></tr><tr><td>29</td><td>77.46</td><td/><td>77.46</td></tr><tr><td>30</td><td>77.75</td><td/><td>77.75</td></tr><tr><td>31</td><td>77.81</td><td/><td>77.81</td></tr><tr><td>32</td><td>77.72</td><td/><td>77.72</td></tr><tr><td>33</td><td>77.71</td><td/><td>77.71</td></tr><tr><td>34</td><td>77.47</td><td/><td>77.47</td></tr><tr><td>35</td><td>77.84</td><td/><td>77.84</td></tr><tr><td>36</td><td>77.89</td><td/><td>77.89</td></tr><tr><td>37</td><td>77.57</td><td/><td>77.57</td></tr><tr><td>38</td><td>77.84</td><td/><td>77.84</td></tr><tr><td>39</td><td>78.02</td><td/><td>78.02</td></tr><tr><td>40</td><td>77.85</td><td/><td>77.85</td></tr><tr><td>41</td><td>77.83</td><td/><td>77.83</td></tr><tr><td>42</td><td>77.89</td><td/><td>77.89</td></tr><tr><td>43</td><td>77.89</td><td/><td>77.89</td></tr><tr><td>44</td><td>78.04</td><td/><td>78.04</td></tr><tr><td>45</td><td>77.73</td><td/><td>77.73</td></tr><tr><td>46</td><td>77.77</td><td/><td>77.77</td></tr><tr><td>47</td><td>77.78</td><td/><td>77.78</td></tr><tr><td>48</td><td>77.95</td><td/><td>77.95</td></tr><tr><td>49</td><td>77.86</td><td/><td>77.86</td></tr><tr><td>50</td><td>77.93</td><td/><td>77.93</td></tr><tr><td>51</td><td>77.66</td><td/><td>77.66</td></tr><tr><td>52</td><td>77.69</td><td>77.79</td><td>77.49</td></tr><tr><td>53</td><td>77.66</td><td>77.76</td><td>77.46</td></tr><tr><td>54</td><td>78.1</td><td>78.2</td><td>77.9</td></tr><tr><td>55</td><td>78.2</td><td>78.3</td><td>78.0</td></tr><tr><td>56</td><td>77.97</td><td>78.07</td><td>77.77</td></tr><tr><td>57</td><td>78.36</td><td>78.46</td><td>78.16</td></tr><tr><td>58</td><td>77.89</td><td>77.99</td><td>77.69</td></tr><tr><td>59</td><td>78.66</td><td>78.76</td><td>78.46</td></tr><tr><td>60</td><td>78.61</td><td>78.71</td><td>78.41</td></tr><tr><td>61</td><td>78.64</td><td>78.74</td><td>78.44</td></tr><tr><td>62</td><td>77.59</td><td>77.69</td><td>77.39</td></tr><tr><td>63</td><td>77.96</td><td>78.06</td><td>77.76</td></tr><tr><td>64</td><td>78.37</td><td>78.47</td><td>78.17</
td></tr><tr><td>65</td><td>78.14</td><td>78.24</td><td>77.94</td></tr><tr><td>66</td><td>77.98</td><td>78.08</td><td>77.78</td></tr><tr><td>67</td><td>78.2</td><td>78.3</td><td>78.0</td></tr><tr><td>68</td><td>78.51</td><td>78.61</td><td>78.31</td></tr><tr><td>69</td><td>78.55</td><td>78.65</td><td>78.35</td></tr><tr><td>70</td><td>78.35</td><td>78.45</td><td>78.15</td></tr><tr><td>71</td><td>77.85</td><td>77.95</td><td>77.65</td></tr><tr><td>72</td><td>78.25</td><td>78.35</td><td>78.05</td></tr><tr><td>73</td><td>78.05</td><td>78.15</td><td>77.85</td></tr><tr><td>74</td><td>78.51</td><td>78.61</td><td>78.31</td></tr><tr><td>75</td><td>78.22</td><td>78.32</td><td>78.02</td></tr><tr><td>76</td><td>78.32</td><td>78.42</td><td>78.12</td></tr><tr><td>77</td><td>78.26</td><td>78.36</td><td>78.06</td></tr><tr><td>78</td><td>78.49</td><td>78.59</td><td>78.29</td></tr><tr><td>79</td><td>78.15</td><td>78.25</td><td>77.95</td></tr><tr><td>80</td><td>78.24</td><td>78.34</td><td>78.04</td></tr><tr><td>81</td><td>78.26</td><td>78.36</td><td>78.06</td></tr><tr><td>82</td><td>78.34</td><td>78.44</td><td>78.14</td></tr><tr><td>83</td><td>78.25</td><td>78.35</td><td>78.05</td></tr><tr><td>84</td><td>78.01</td><td>78.11</td><td>77.81</td></tr><tr><td>85</td><td>77.99</td><td>78.09</td><td>77.79</td></tr><tr><td>86</td><td>78</td><td>78.1</td><td>77.8</td></tr></table>",
"type_str": "table",
"num": null,
"text": ""
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>#1 Dutch</td><td/></tr><tr><td>Baseline</td><td>[B-The British group Granada Media has bought shares of GBP 1.75 trillion (111 billion Belgian</td></tr><tr><td/><td>francs) from United News Media.</td></tr><tr><td>#2 Our model</td><td>Diese Diskussion werde ausschlaggebend sein f\u00fcr die Stellungnahme der [B-ORG Gr\u00fcnen] in dieser</td></tr><tr><td/><td>Frage.</td></tr><tr><td colspan=\"2\">C-lvl sentences Auch das Mitglied des Bundesvorstandes der [B-ORG Gr\u00fcnen], Helmut Lippelt, sprach sich f\u00fcr ein</td></tr><tr><td/><td>Berufsheer au.</td></tr><tr><td/><td>Helmut Lippelt, a member of the Federal Executive of the Greens, also called for a</td></tr><tr><td/><td>professional army.</td></tr><tr><td colspan=\"2\">#4 Negative Example</td></tr><tr><td>Reference</td><td>[B-LOC Indianapolis] 1996-12-06</td></tr><tr><td>Our model</td><td>[B-ORG Indianapolis] 1996-12-06</td></tr><tr><td>D-lvl sentence</td><td>The injury-plagued [B-ORG Indianapolis] [I-ORG Colts] lost another quarterback on Thursday but last year's AFC finalists rallied together to shoot down the Philadelphia Eagles 37-10 in a</td></tr><tr><td/><td>showdown of playoff contenders.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "LOC Granada] overwoog vervolgens een bod op Carlton uit te brengen, maar daar ziet het concern nu van af. Granada then considered issuing a bid for Carlton, but the concern now sees it.Our model [B-ORG Granada] overwoog vervolgens een bod op Carlton uit te brengen, maar daar ziet het concern nu van af. D-lvl sentences [B-ORG Granada] [I-ORG Media] neemt belangen in United News. Granada Media takes interests in United News. C-lvl sentences Het Britse concern [B-ORG Granada] [I-ORG Media] heeft voor 1,75 miljard pond sterling (111 miljard Belgische frank) aandelen gekocht van United News Media. English Baseline Initially Poland offered up to 75 percent of Ruch but in March [ORG Kaczmarek] cancelled the tender and offered a minority stake with an option to increase the equity. Our model Initially Poland offered up to 75 percent of Ruch but in March [PER Kaczmarek] cancelled the tender and offered a minority stake with an option to increase the equity. D-lvl sentences [PER Kaczmarek] said in May he was unhappy that only one investor ended up bidding for Ruch. #3 German Baseline Diese Diskussion werde ausschlaggebend sein f\u00fcr die Stellungnahme der Gr\u00fcnen in dieser Frage. This discussion will be decisive for the opinion of the Greens on this question."
}
}
}
}