{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:35.122603Z"
},
"title": "SlotGAN: Detecting Mentions in Text via Adversarial Distant Learning",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Daza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cochez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Amsterdam",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Paul",
"middle": [],
"last": "Groth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present SlotGAN, a framework for training a mention detection model that only requires unlabeled text and a gazetteer. It consists of a generator trained to extract spans from an input sentence, and a discriminator trained to determine whether a span comes from the generator, or from the gazetteer. We evaluate the method on English newswire data and compare it against supervised, weakly-supervised, and unsupervised methods. We find that the performance of the method is lower than these baselines, because it tends to generate more and longer spans, and in some cases it relies only on capitalization. In other cases, it generates spans that are valid but differ from the benchmark. When evaluated with metrics based on overlap, we find that SlotGAN performs within 95% of the precision of a supervised method, and 84% of its recall. Our results suggest that the model can generate spans that overlap well, but an additional filtering mechanism is required.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We present SlotGAN, a framework for training a mention detection model that only requires unlabeled text and a gazetteer. It consists of a generator trained to extract spans from an input sentence, and a discriminator trained to determine whether a span comes from the generator, or from the gazetteer. We evaluate the method on English newswire data and compare it against supervised, weakly-supervised, and unsupervised methods. We find that the performance of the method is lower than these baselines, because it tends to generate more and longer spans, and in some cases it relies only on capitalization. In other cases, it generates spans that are valid but differ from the benchmark. When evaluated with metrics based on overlap, we find that SlotGAN performs within 95% of the precision of a supervised method, and 84% of its recall. Our results suggest that the model can generate spans that overlap well, but an additional filtering mechanism is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Detecting mentions of entities in text is an important step towards the extraction of structured information from natural language sources. Mention Detection (MD) components can be found frequently in systems for Named Entity Recognition (NER) (Strakov\u00e1 et al., 2019; Wang et al., 2021) , entity linking (Wu et al., 2020; Cao et al., 2021) , relationship extraction (Katiyar and Cardie, 2017; Zhong and Chen, 2021) , and coreference resolution (Joshi et al., 2019; Xu and Choi, 2020; Kirstain et al., 2021) , where accurately modeling mentions is crucial for downstream performance.",
"cite_spans": [
{
"start": 244,
"end": 267,
"text": "(Strakov\u00e1 et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 268,
"end": 286,
"text": "Wang et al., 2021)",
"ref_id": "BIBREF30"
},
{
"start": 304,
"end": 321,
"text": "(Wu et al., 2020;",
"ref_id": "BIBREF32"
},
{
"start": 322,
"end": 339,
"text": "Cao et al., 2021)",
"ref_id": "BIBREF2"
},
{
"start": 366,
"end": 392,
"text": "(Katiyar and Cardie, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 393,
"end": 414,
"text": "Zhong and Chen, 2021)",
"ref_id": "BIBREF37"
},
{
"start": 444,
"end": 464,
"text": "(Joshi et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 465,
"end": 483,
"text": "Xu and Choi, 2020;",
"ref_id": "BIBREF33"
},
{
"start": 484,
"end": 506,
"text": "Kirstain et al., 2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The MD task is often subsumed under NER, where most effective approaches employ supervised learning with exhaustively annotated datasets. These methods become less feasible in cases where we need to rapidly build MD systems, for example, when moving to a domain with incompatible NER classes; or when there are not enough resources to create a labeled dataset. In contrast, we assume that we have access to an unlabeled corpus, and a list of known entity names (i.e. a gazetteer). We propose SlotGAN-a framework for detecting mentions that uses a generator to extract spans conditioned on some input text, and a discriminator that determines whether a span comes from the generator, or from the gazetteer (see Fig. 1 ). In contrast with distant supervision methods that require training with false negatives (Ratner et al., 2016; Giannakopoulos et al., 2017; Shang et al., 2018) , SlotGAN relies on the discriminator to learn patterns that are not likely to be names of entities (such as verb phrases, or very long spans, which rarely occur in a gazzetteer), thereby improving the generator's ability to detect valid mentions.",
"cite_spans": [
{
"start": 808,
"end": 829,
"text": "(Ratner et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 830,
"end": 858,
"text": "Giannakopoulos et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 859,
"end": 878,
"text": "Shang et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 710,
"end": 716,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate the method in a MD task using the CoNLL 2003 English dataset (Tjong Kim Sang and De Meulder, 2003) . We observe that the absence of strong supervision in SlotGAN results in different, yet valid notions of what constitutes an entity. For instance, while in the sentence \"On the road to Tripoli airport...\" the word Tripoli is selected as a gold mention, SlotGAN selects Tripoli airport. In this case, exact match metrics for NER underestimate performance, assigning zero precision and recall. To account for this, we introduce overlap-based metrics into the evaluation.",
"cite_spans": [
{
"start": 73,
"end": 110,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When using exact boundary match metrics, Slot-GAN exhibits lower performance compared to different baselines. When evaluating overlap, precision (how much of the predicted span overlaps with the gold span) is within 95% of the performance of the supervised baseline, while recall (how much of the gold span is actually predicted) is within 84%. We observe that SlotGAN tends to generate more and longer spans than those in the benchmark, and in some cases it relies only on capitalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are the following: 1) A framework towards distantly-supervised MD that avoids explicit training with false negatives, and an imple- Figure 1 : SlotGAN consists of a generator G trained to extract spans from an input sentence. We represent spans as matrices containing embeddings of words in a span, padded with zeros to a fixed length L. True spans are generated from a gazetteer. A discriminator D is trained to determine if a span was generated from G or from the gazetteer.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "mentation via an end-to-end differentiable architecture for extracting distinct spans; 2) Evidence for the use of overlap-based metrics into the evaluation of MD methods to account for ambiguous cases in gold annotations; 3) An analysis of the performance of SlotGAN, identifying its failure modes and outlining directions of improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the MD task, we are given a sentence from a corpus as a sequence of words (w 1 , w 2 , ..., w n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "The output of the system is a set of spans that contain a mention, and each span is a tuple (i s , i e ) where i s is an integer indicating the position where the span starts, and i e the position where it ends. Additionally, we have access to a gazetteer E = (e 1 , e 2 , ..., e N ) containing names of entities relevant to a particular domain. SlotGAN is a method for MD based on Generative Adversarial Networks (Goodfellow et al., 2014; Mirza and Osindero, 2014) . It consists of a generator trained to extract spans from a sentence, and a discriminator that determines whether a span comes from the generator or from the gazetteer.",
"cite_spans": [
{
"start": 414,
"end": 439,
"text": "(Goodfellow et al., 2014;",
"ref_id": null
},
{
"start": 440,
"end": 465,
"text": "Mirza and Osindero, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "We define the embedding of a sentence w = (w 1 , ..., w n ) as a matrix emb(w) \u2208 R d\u00d7n , where emb is a function that maps words to d-dimensional pretrained embeddings (for example, from the input embedding layer of BERT (Devlin et al., 2019) ).",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "We represent each mention span in a sequence as a matrix in a space S = R d\u00d7L , where L is the length of the sequence. For a span (i s , i e ), the matrix contains the embeddings of the words within the span, from column i s to column i e , and is zero in the remaining columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
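{
"text": "As a concrete illustration, the following minimal sketch (assuming PyTorch; the function and argument names are ours, not the paper's) builds the matrix in S for a span (i_s, i_e) over a sentence embedding of shape d \u00d7 n:\n\nimport torch\n\ndef span_matrix(emb_w, i_s, i_e, L):\n    # emb_w: (d, n) sentence embedding; returns a (d, L) matrix that\n    # keeps columns i_s..i_e (inclusive) and is zero everywhere else.\n    d, n = emb_w.shape\n    S = torch.zeros(d, L)\n    S[:, i_s:i_e + 1] = emb_w[:, i_s:i_e + 1]\n    return S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},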
{
"text": "The generator takes as input the embedding matrix emb(w) of a sentence, and assigns each of its columns to one of k slots. The output is a sequence of k span representations (S i ) k i=1 with S i \u2208 S, such that the j-th column of S i contains the j-th column of the input matrix, if it was assigned to slot i. Unused columns of S i are filled with zeros.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "When sampling a name e of an entity in the gazetteer, we embed it as emb(e) and then add zero padding via a pad function until reaching a maximum length L, to obtain a span representation in S. The amount of padding is added randomly to both sides of an entity name, with the purpose of emulating how in a sentence, a mention can appear at an arbitrary position. The discriminator takes as input span representations in S, and outputs a score that should be high for samples from the gazetteer, and low for samples from the generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
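{
"text": "A minimal sketch of this padding step (again assuming PyTorch; names are illustrative):\n\nimport torch\n\ndef pad_name(emb_e, L):\n    # emb_e: (d, m) embedding of an entity name, with m <= L.\n    # The zero padding is split randomly between left and right,\n    # emulating a mention at an arbitrary position in a sentence.\n    d, m = emb_e.shape\n    left = torch.randint(0, L - m + 1, (1,)).item()\n    out = torch.zeros(d, L)\n    out[:, left:left + m] = emb_e\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},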
{
"text": "Denoting as p w the distribution used to sample sentences from the corpus, and as p e the distribution for sampling names from the gazetteer, the generator and discriminator are trained via gradient descent using the W-GAN minimax optimization objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "min G max D E e\u223cpe [D(pad(emb(e)))]\u2212 E w\u223cpw k i=1 D(G(emb(w)) i ) , (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "where we have denoted as G(emb(w)) i the i-th span representation produced by the generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
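{
"text": "Read as per-sample losses, Eq. (1) amounts to the following sketch (D, the span matrices, and all names here are assumptions for illustration, not taken from the released code):\n\nimport torch\n\ndef wgan_losses(D, real_span, fake_spans):\n    # real_span: one (d, L) matrix padded from the gazetteer;\n    # fake_spans: the k (d, L) matrices G produced for one sentence.\n    fake_score = torch.stack([D(s) for s in fake_spans]).sum()\n    d_loss = -(D(real_span) - fake_score)  # D ascends on Eq. (1)\n    g_loss = -fake_score  # G descends on its term of Eq. (1)\n    return d_loss, g_loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},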
{
"text": "To allow also not extracting any mentions when not required, we randomly introduce empty spans in the gazetteer, and we reformulate the generator objective with an equality constraint. Following Bastings et al. (2019), we define the constraint in terms of a differentiable function C such that C(G(emb(w) i ) counts the number of transitions from zero to non-zero, and vice versa, in a span representation. For valid spans, this should be equal to 2. We solve the problem introducing a Lagrange multiplier \u03bb, and the term in Eq. 1 that depends on the generator becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "min \u03bb,G E w\u223cpw k i=1 \u2212D(S i (w)) \u2212 \u03bb(2 \u2212 C(S i (w)) , (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "where S i (w) is a shorthand for G(emb(w)) i . This constraint prevents the generator from producing only empty spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
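{
"text": "One plausible instantiation of C, in the spirit of Bastings et al. (2019), reduces each span matrix to a per-position occupancy mask m \u2208 [0, 1]^L (for example, the corresponding attention row) and counts transitions as a total variation with implicit zeros at both ends; a single contiguous span then yields exactly 2. A sketch under these assumptions:\n\nimport torch\n\ndef transition_count(m):\n    # m: (L,) soft occupancy of one span, with values in [0, 1].\n    padded = torch.cat([m.new_zeros(1), m, m.new_zeros(1)])\n    # Sum of |m[j+1] - m[j]|: one block of ones gives 2, all zeros give 0.\n    return (padded[1:] - padded[:-1]).abs().sum()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},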
{
"text": "At test time, we can use the spans produced by the generator as predictions for mentions. Alternatively, we can balance precision and recall by leveraging the discriminator, by only keeping spans with a score D(S i (w)) > t where t is a threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
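{
"text": "As a sketch, reusing the illustrative names above (D, fake_spans, and the threshold t are assumptions):\n\nkept = [s for s in fake_spans if D(s).item() > t]  # discriminator-filtered mention predictions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},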
{
"text": "We implement the generator using BERT (Devlin et al., 2019), followed by a modified Slot Attention layer (Locatello et al., 2020) to model discrete selections of distinct spans. The discriminator is a temporal CNN. For more details on the architecture, we refer the reader to Appendix A.",
"cite_spans": [
{
"start": 105,
"end": 129,
"text": "(Locatello et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SlotGAN",
"sec_num": "2"
},
{
"text": "The task of MD has been addressed under NER effectively via supervised learning (Devlin et al., 2019; Strakov\u00e1 et al., 2019; Peters et al., 2018; Yu et al., 2020; Wang et al., 2021) . Some works address the lack of labeled data in a target domain by applying adaptation techniques from a source domain with labeled data (Zhou et al., 2019; Zhang et al., 2021) . In this work we focus on the case where annotations are not available.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 102,
"end": 124,
"text": "Strakov\u00e1 et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 125,
"end": 145,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 146,
"end": 162,
"text": "Yu et al., 2020;",
"ref_id": "BIBREF34"
},
{
"start": 163,
"end": 181,
"text": "Wang et al., 2021)",
"ref_id": "BIBREF30"
},
{
"start": 320,
"end": 339,
"text": "(Zhou et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 340,
"end": 359,
"text": "Zhang et al., 2021)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Closer to our work are methods for weakly or distantly supervised learning, where heuristics and domain-specific rules are used to generate a noisy training set, often using external sources like gazetteers (Safranchik et al., 2020; Lison et al., 2020; Ratner et al., 2016; Shang et al., 2018; Li et al., 2021a) . These methods are limited by false negatives that reduce recall in MD. Furthermore, even though rules can be used to annotate a dataset at a large scale, the process of devising these rules in the first place can be tedious, and might require domain expert knowledge. recently introduced a fully unsupervised method for NER that uses a pipeline of clustering over word embeddings, a generative model, and reinforcement learning to solve the NER task without any labels or external sources. These elements are optimized separately, whereas SlotGAN provides an end-to-end architecture.",
"cite_spans": [
{
"start": 207,
"end": 232,
"text": "(Safranchik et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 233,
"end": 252,
"text": "Lison et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 253,
"end": 273,
"text": "Ratner et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 274,
"end": 293,
"text": "Shang et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 294,
"end": 311,
"text": "Li et al., 2021a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Datasets We evaluate MD performance using the CoNLL 2003 English dataset for NER (Tjong Kim Sang and De Meulder, 2003) . For methods that require a dictionary of entity types or a gazetteer, we build it using the annotations in the training set. We also explore a pretraining strategy for SlotGAN, where we sample sentences from Wikipedia articles, and names of entities from Wikidata. Both are obtained from the July 2019 dumps.",
"cite_spans": [
{
"start": 81,
"end": 118,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluate the performance of SlotGAN when trained with the CoNLL 2003 data only, and when pretraining with Wikipedia and Wikidata. We apply a threshold to all spans based on the discriminator score, selected using the validation set. Training and hyperparameter details can be found in Appendix B. Our implementation is available online 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": null
},
{
"text": "Baselines We consider a string matching baseline where we label all spans present in the gazetteer, giving precedence to longer spans. We also compare with methods ranging from supervised, weakly supervised, to unsupervised. ACE (Wang et al., 2021 ) is a state-of-the-art method for supervised NER. AutoNER (Shang et al., 2018 ) is a weakly supervised method that requires a type dictionary. Lastly, we compare with the unsupervised method of Luo et al. (2020) 2 .",
"cite_spans": [
{
"start": 229,
"end": 247,
"text": "(Wang et al., 2021",
"ref_id": "BIBREF30"
},
{
"start": 307,
"end": 326,
"text": "(Shang et al., 2018",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": null
},
{
"text": "Evaluation Recent works have highlighted the presence of unlabeled mentions in the CoNLL dataset, which has a negative effect when training and evaluating models based on exact match (Jie et al., 2019; Li et al., 2021b) . Exact match metrics also penalize more strongly models that do not match boundaries exactly, than a model that does not predict a span at all (Manning, 2006; Esuli and Sebastiani, 2010) . With this motivation, we also report overlap 3 by computing the intersection between gold and predicted spans. Precision is defined as the length of the intersection divided by the length of the predicted span, and recall is the length of the intersection divided by the length of the gold span. We denote these as OP and OR, respectively. Overlap F1 (OF1) is the harmonic mean of OP and OR. We report the average over all gold spans. (Wang et al., 2021) Gold labels 96.0 97.1 96.5 98.3 98.1 98.1 AutoNER (Shang et al., 2018) Type dictionary 88.4 94.2 91.2 97.4 97.2 96.9 Unsupervised Results We present MD results in Table 1 . We observe that pretraining with Wikipedia and Wikidata entity names helps to improve the performance over a version trained with the CoNLL 2003 data only. The higher recall of SlotGAN in comparison with the string matching baseline shows that the generator is not simply memorizing the gazetteer and can thus detect mentions not seen during training. However, its precision and recall are low compared to other systems. We attribute this partly to the lack of strong supervision of the generator, which results in boundaries that differ from gold spans, and detection of more mentions than those present in the dataset. The overlap-based metrics show that on average, predicted spans overlap 93% and gold spans overlap 83% with the intersection. This indicates that extra words are added to predicted spans, and boundary mismatch, though these values of precision and recall are within 95% and 84% of the supervised baseline, respectively.",
"cite_spans": [
{
"start": 183,
"end": 201,
"text": "(Jie et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 202,
"end": 219,
"text": "Li et al., 2021b)",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 379,
"text": "(Manning, 2006;",
"ref_id": "BIBREF20"
},
{
"start": 380,
"end": 407,
"text": "Esuli and Sebastiani, 2010)",
"ref_id": "BIBREF4"
},
{
"start": 845,
"end": 864,
"text": "(Wang et al., 2021)",
"ref_id": "BIBREF30"
},
{
"start": 915,
"end": 935,
"text": "(Shang et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 1028,
"end": 1035,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": null
},
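{
"text": "A minimal sketch of the overlap metrics for a single gold/predicted pair, with spans as inclusive (start, end) token indices (how gold and predicted spans are paired and averaged over the dataset is glossed over here):\n\ndef overlap_prf(gold, pred):\n    inter = max(0, min(gold[1], pred[1]) - max(gold[0], pred[0]) + 1)\n    op = inter / (pred[1] - pred[0] + 1)   # overlap precision (OP)\n    orr = inter / (gold[1] - gold[0] + 1)  # overlap recall (OR)\n    of1 = 2 * op * orr / (op + orr) if op + orr > 0 else 0.0\n    return op, orr, of1\n\nFor the Tripoli example from the introduction, the gold span (4, 4) against the predicted span (4, 5) gives OP = 0.5, OR = 1.0, and OF1 \u2248 0.67: the prediction is penalized for the extra word, but not zeroed out as under exact match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": null
},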
{
"text": "A closer analysis of the length of overlapping spans shows that in 69.4% of the cases the length is the same as gold spans, in 21.1% the predicted span is longer, and in 9.5% it is shorter. This often leads to mentions that are actually correct, as shown in Table 2 . However, SlotGAN also produces spans that do not overlap with any gold span. This can be observed by plotting the average number of words assigned to a mention by the model versus the gold annotations, as shown in Fig 2. We see that across different numbers of mention words for the gold annotations, SlotGAN produces a higher number in average. We also find cases where it relies on capitalization only, which becomes problematic in upper case sentences: for regular sentences, there is no exact boundary match in 11% of the cases. For sentences in upper case, this increases to 23%.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 482,
"end": 488,
"text": "Fig 2.",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We have presented SlotGAN, a method for training a mention detector that only requires unlabeled text and a list of entity names, that relies on implicit supervision provided by a discriminator that is also optimized during training. This results in spans that overlap well with gold spans, but also a tendency towards generating more and longer spans, and relying on capitalization only. This suggests that spans predicted by SlotGAN are likely to be correct, but an additional mechanism is needed to filter them. This can be enforced via tighter constraints on generated spans, or a stronger discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Even though its performance is close to a supervised model according to overlap-based metrics, it cannot match other methods that also use a gazetteer or are unsupervised. In spite of this, we consider SlotGAN a promising framework for other IE tasks with less supervision, for example, where relations between slots could be induced. The end-to-end architecture also presents an opportunity for fine-tuning with gold labels, which we plan to explore in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/dfdazac/slotgan 2 Their implementation is not available. Results for P, R, and F1 from their paper.3 Partial matches have been considered by Segura-Bedmar et al.(2013), though not taking span lengths into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project was funded by Elsevier's Discovery Lab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "In our implementation of SlotGAN, the embedding function emb(w) used to obtain embeddings of sentences and names in the gazetteer uses the pretrained WordPiece embeddings from the input layer of BERT (Devlin et al., 2019) .The generator consists of BERT, which for an input sentence of length n, outputs a matrix H \u2208 R d\u00d7n where d is the dimension of the output layer of BERT, equal to 768. We use the bert-base-cased implementation in Hugging-Face's Transformer library (Wolf et al., 2019) . The output matrix is passed to a modified Slot Attention layer (Locatello et al., 2020 ), which we use as a differentiable mechanism to assign n input embeddings to k slots. In the original implementation, Slot Attention would assign each of the n outputs in the columns of H to k slots, by using a differentiable clustering algorithm. This algorithm works for a variable number of slots, by sampling k initial slot representations from a Gaussian distribution. In our experiments we use k = 10, and the number of iterations of the clustering algorithm is set to 3.In the MD case, for words that do not belong to any mention, we want the generator to be able to assign them to a \"default\" slot. We achieve this by introducing an extra slot, whose representation, instead of sampled, is a single vector with a learned representation. Slot Attention in the generator thus contains k +1 slots, but the default slot is discarded when passing generated spans to the discriminator.After discarding the default slot, the result is an attention mask M \u2208 R k\u00d7n where the m ij entry indicates the fraction of input j assigned to slot i, and each column is normalized to 1. The i-th span representation is then obtained aswhere M i: is the i-th row of M, and \u2299 is broadcast element-wise multiplication. For the discriminator we use a temporal CNN, where convolutions are applied along the sequence axis. The input is a matrix of span representations of shape d \u00d7 L, and the output is a scalar. The architecture is described in Table 3 .",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 471,
"end": 490,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 556,
"end": 579,
"text": "(Locatello et al., 2020",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 2009,
"end": 2016,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Architectures",
"sec_num": null
},
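{
"text": "A sketch of how the attention mask yields span representations (assuming PyTorch, and our reading that S_i = emb(w) \u2299 M_{i:}; names are illustrative):\n\nimport torch\n\ndef spans_from_mask(E, M):\n    # E: (d, n) input embeddings emb(w); M: (k, n) attention mask with\n    # the default slot already discarded and each column normalized to 1.\n    # Broadcasts each mask row over the d embedding dimensions.\n    return [E * M[i].unsqueeze(0) for i in range(M.shape[0])]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Architectures",
"sec_num": null
},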
{
"text": "We train SlotGAN with mini-batches of 32 sentences. We update the generator once for every 5 updates of the discriminator. To let the discriminator accept empty spans as valid, we replace names from the gazetteer with an empty span with a probability of 0.5. We use a gradient penalty coefficient (Gulrajani et al., 2017) of 10 when computing the discriminator loss.We use a learning rate of 2 \u00d7 10 \u22125 , with a linear warm-up schedule for the first 10% of epochs. For the Lagrange multiplier, we use the Modified Differential Method of Multipliers (Platt and Barr, 1987) with a constant learning rate of 1 \u00d7 10 \u22123 .We run our experiments in a workstation with an Intel Xeon processor, 1 NVIDIA GeForce GTX 1080 Ti GPU with 11GB of memory, and 60GB of RAM. When pretraining with Wikipedia and Wikidata, we train SlotGAN with 20,000 updates of the generator, and 5,000 when training with the CoNLL 2003 dataset.",
"cite_spans": [
{
"start": 297,
"end": 321,
"text": "(Gulrajani et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 559,
"end": 570,
"text": "Barr, 1987)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Training Procedure",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wasserstein generative adversarial networks",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "214--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Arjovsky, Soumith Chintala, and L\u00e9on Bottou. 2017. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceed- ings of Machine Learning Research, pages 214-223. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Interpretable neural predictions with differentiable binary variables",
"authors": [
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2963--2977",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1284"
]
},
"num": null,
"urls": [],
"raw_text": "Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2963-2977, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Autoregressive entity retrieval",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "De Cao",
"suffix": ""
},
{
"first": "Gautier",
"middle": [],
"last": "Izacard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
}
],
"year": 2021,
"venue": "9th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Repre- sentations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Evaluating information extraction",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Multilingual and Multimodal Information Access Evaluation, International Conference of the Cross-Language Evaluation Forum, CLEF 2010",
"volume": "6360",
"issue": "",
"pages": "100--111",
"other_ids": {
"DOI": [
"10.1007/978-3-642-15998-5_12"
]
},
"num": null,
"urls": [],
"raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2010. Evalu- ating information extraction. In Multilingual and Multimodal Information Access Evaluation, Interna- tional Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010. Proceedings, volume 6360 of Lecture Notes in Computer Science, pages 100-111. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised aspect term extraction with B-LSTM & CRF using automatically labelled datasets",
"authors": [
{
"first": "Athanasios",
"middle": [],
"last": "Giannakopoulos",
"suffix": ""
},
{
"first": "Claudiu",
"middle": [],
"last": "Musat",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Hossmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Baeriswyl",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "180--188",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5224"
]
},
"num": null,
"urls": [],
"raw_text": "Athanasios Giannakopoulos, Claudiu Musat, Andreea Hossmann, and Michael Baeriswyl. 2017. Unsuper- vised aspect term extraction with B-LSTM & CRF using automatically labelled datasets. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 180-188, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neu- ral Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672- 2680.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved training of wasserstein gans",
"authors": [
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Faruk",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Mart\u00edn",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dumoulin",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017",
"volume": "",
"issue": "",
"pages": "5767--5777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishaan Gulrajani, Faruk Ahmed, Mart\u00edn Arjovsky, Vin- cent Dumoulin, and Aaron C. Courville. 2017. Im- proved training of wasserstein gans. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Sys- tems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5767-5777.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Better modeling of incomplete annotations for named entity recognition",
"authors": [
{
"first": "Zhanming",
"middle": [],
"last": "Jie",
"suffix": ""
},
{
"first": "Pengjun",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ruixue",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "729--734",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1079"
]
},
"num": null,
"urls": [],
"raw_text": "Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, and Linlin Li. 2019. Better modeling of incomplete anno- tations for named entity recognition. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 729-734, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5803--5808",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1588"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference reso- lution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803-5808, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Going out on a limb: Joint extraction of entity mentions and relations without dependency trees",
"authors": [
{
"first": "Arzoo",
"middle": [],
"last": "Katiyar",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1085"
]
},
"num": null,
"urls": [],
"raw_text": "Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "917--928",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917-928, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Coreference resolution without span representations",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Kirstain",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "14--19",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-short.3"
]
},
"num": null,
"urls": [],
"raw_text": "Yuval Kirstain, Ori Ram, and Omer Levy. 2021. Coref- erence resolution without span representations. In Proceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 14-19, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Weakly supervised named entity tagging with learnable logical rules",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "4568--4581",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.352"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Li, Haibo Ding, Jingbo Shang, Julian McAuley, and Zhe Feng. 2021a. Weakly supervised named en- tity tagging with learnable logical rules. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4568-4581, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adversarial transfer for named entity boundary detection with pointer networks",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Shang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "5053--5059",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/702"
]
},
"num": null,
"urls": [],
"raw_text": "Jing Li, Deheng Ye, and Shuo Shang. 2019. Adver- sarial transfer for named entity boundary detection with pointer networks. In Proceedings of the Twenty- Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10- 16, 2019, pages 5053-5059. ijcai.org.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Empirical analysis of unlabeled entity problem in named entity recognition",
"authors": [
{
"first": "Yangming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2021,
"venue": "9th International Conference on Learning Representations, ICLR 2021, Virtual Event",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangming Li, Lemao Liu, and Shuming Shi. 2021b. Empirical analysis of unlabeled entity problem in named entity recognition. In 9th International Con- ference on Learning Representations, ICLR 2021, Vir- tual Event, Austria, May 3-7, 2021. OpenReview.net.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Named entity recognition without labelled data: A weak supervision approach",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Aliaksandr",
"middle": [],
"last": "Hubin",
"suffix": ""
},
{
"first": "Samia",
"middle": [],
"last": "Touileb",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1518--1533",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.139"
]
},
"num": null,
"urls": [],
"raw_text": "Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named entity recognition with- out labelled data: A weak supervision approach. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1518- 1533, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Object-centric learning with slot attention",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Locatello",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Aravindh",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Dosovitskiy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kipf",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Locatello, Dirk Weissenborn, Thomas Un- terthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. 2020. Object-centric learning with slot atten- tion. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Informa- tion Processing Systems 2020, NeurIPS 2020, De- cember 6-12, 2020, virtual.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Named entity recognition only from word embeddings",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Junlang",
"middle": [],
"last": "Zhan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "8995--9005",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.723"
]
},
"num": null,
"urls": [],
"raw_text": "Ying Luo, Hai Zhao, and Junlang Zhan. 2020. Named entity recognition only from word embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8995- 9005. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Doing named entity recognition? don't optimize for F1. NLPers Blog, 25",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning. 2006. Doing named entity recog- nition? don't optimize for F1. NLPers Blog, 25. Accessed on November, 2021.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Conditional generative adversarial nets",
"authors": [
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Osindero",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. CoRR, abs/1411.1784.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Constrained differential optimization",
"authors": [
{
"first": "C",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"H"
],
"last": "Platt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barr",
"suffix": ""
}
],
"year": 1987,
"venue": "Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "612--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Platt and Alan H. Barr. 1987. Constrained differ- ential optimization. In Neural Information Process- ing Systems, Denver, Colorado, USA, 1987, pages 612-621. American Institue of Physics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Data programming: Creating large training sets, quickly",
"authors": [
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3567--3575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander J. Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data program- ming: Creating large training sets, quickly. In Ad- vances in Neural Information Processing Systems 29: Annual Conference on Neural Information Process- ing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3567-3575.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Weakly supervised sequence tagging from noisy rules",
"authors": [
{
"first": "Esteban",
"middle": [],
"last": "Safranchik",
"suffix": ""
},
{
"first": "Shiying",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "5570--5578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esteban Safranchik, Shiying Luo, and Stephen H. Bach. 2020. Weakly supervised sequence tagging from noisy rules. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty- Second Innovative Applications of Artificial Intelli- gence Conference, IAAI 2020, The Tenth AAAI Sym- posium on Educational Advances in Artificial Intel- ligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 5570-5578. AAAI Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"authors": [
{
"first": "Isabel",
"middle": [],
"last": "Segura-Bedmar",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Mar\u00eda",
"middle": [],
"last": "Herrero-Zazo",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "9",
"issue": "",
"pages": "341--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Mar\u00eda Herrero-Zazo. 2013. SemEval-2013 task 9 : Extrac- tion of drug-drug interactions from biomedical texts (DDIExtraction 2013). In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 341-350, Atlanta, Georgia, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning named entity tagger using domain-specific dictionary",
"authors": [
{
"first": "Jingbo",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaotao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Teng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2054--2064",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1230"
]
},
"num": null,
"urls": [],
"raw_text": "Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named en- tity tagger using domain-specific dictionary. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 2054- 2064, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Neural architectures for nested NER through linearization",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5326--5331",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1527"
]
},
"num": null,
"urls": [],
"raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019. Neu- ral architectures for nested NER through lineariza- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142- 147.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automated concatenation of embeddings for structured prediction",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021",
"volume": "1",
"issue": "",
"pages": "2643--2660",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.206"
]
},
"num": null,
"urls": [],
"raw_text": "Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021. Automated concatenation of embeddings for struc- tured prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2643-2660. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scalable zeroshot entity linking with dense entity retrieval",
"authors": [
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Josifoski",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6397--6407",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.519"
]
},
"num": null,
"urls": [],
"raw_text": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero- shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Revealing the myth of higher-order inference in coreference resolution",
"authors": [
{
"first": "Liyan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jinho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8527--8533",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.686"
]
},
"num": null,
"urls": [],
"raw_text": "Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8527-8533, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Neural mention detection",
"authors": [
{
"first": "Juntao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Neural mention detection. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 1-10, Marseille, France. European Lan- guage Resources Association.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "PDALN: Progressive domain adaptation over a pre-trained model for low-resource cross-domain named entity recognition",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Congying",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Zhiwei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5441--5451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Zhang, Congying Xia, Philip S. Yu, Zhiwei Liu, and Shu Zhao. 2021. PDALN: Progressive domain adaptation over a pre-trained model for low-resource cross-domain named entity recognition. In Proceed- ings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5441-5451, Online and Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GLaRA: Graph-based labeling rule augmentation for weakly supervised named entity recognition",
"authors": [
{
"first": "Xinyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3636--3649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyan Zhao, Haibo Ding, and Zhe Feng. 2021. GLaRA: Graph-based labeling rule augmentation for weakly supervised named entity recognition. In Pro- ceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 3636-3649, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A frustratingly easy approach for entity and relation extraction",
"authors": [
{
"first": "Zexuan",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "50--61",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.5"
]
},
"num": null,
"urls": [],
"raw_text": "Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 50-61, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Dual adversarial neural transfer for lowresource named entity recognition",
"authors": [
{
"first": "Joey",
"middle": [
"Tianyi"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Hongyuan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Rick Siow Mong",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Kwok",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3461--3471",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1336"
]
},
"num": null,
"urls": [],
"raw_text": "Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low- resource named entity recognition. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3461-3471, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Number of words assigned to a mention per sentence, computed over gold and predicted spans.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"html": null,
"text": "Comparison of gold spans and spans predicted by SlotGAN.",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}