ACL-OCL / Base_JSON /prefixN /json /naacl /2021.naacl-industry.26.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:12:01.749623Z"
},
"title": "Goodwill Hunting: Analyzing and Repurposing Off-the-Shelf Named Entity Linking Systems",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Goel",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laurel",
"middle": [],
"last": "Orr",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nazneen",
"middle": [
"Fatema"
],
"last": "Rajani",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named entity linking (NEL) or mapping \"strings\" to \"things\" in a knowledge base is a fundamental preprocessing step in systems that require knowledge of entities such as information extraction and question answering. In this work, we lay out and investigate two challenges faced by individuals or organizations building NEL systems. Can they directly use an off-the-shelf system? If not, how easily can such a system be repurposed for their use case? First, we conduct a study of offthe-shelf commercial and academic NEL systems. We find that most systems struggle to link rare entities, with commercial solutions lagging their academic counterparts by 10%+. Second, for a use case where the NEL model is used in a sports question-answering (QA) system, we investigate how to close the loop in our analysis by repurposing the best off-the-shelf model (BOOTLEG) to correct sport-related errors. We show how tailoring a simple technique for patching models using weak labeling can provide a 25% absolute improvement in accuracy of sport-related errors.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Named entity linking (NEL) or mapping \"strings\" to \"things\" in a knowledge base is a fundamental preprocessing step in systems that require knowledge of entities such as information extraction and question answering. In this work, we lay out and investigate two challenges faced by individuals or organizations building NEL systems. Can they directly use an off-the-shelf system? If not, how easily can such a system be repurposed for their use case? First, we conduct a study of offthe-shelf commercial and academic NEL systems. We find that most systems struggle to link rare entities, with commercial solutions lagging their academic counterparts by 10%+. Second, for a use case where the NEL model is used in a sports question-answering (QA) system, we investigate how to close the loop in our analysis by repurposing the best off-the-shelf model (BOOTLEG) to correct sport-related errors. We show how tailoring a simple technique for patching models using weak labeling can provide a 25% absolute improvement in accuracy of sport-related errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity linking (NEL), the task of mapping from \"strings\" to \"things\" in a knowledge base, is a fundamental component of commercial systems such as information extraction and question answering (Shen et al., 2015) . Given some text, NEL systems perform contextualized linking of text phrases, called mentions, to a knowledge base. If a user asks her personal assistant \"How long would it take to drive a Lincoln to Lincoln\", the NEL system underlying the assistant should link the first mention of \"Lincoln\" to the car company, and the second \"Lincoln\" to Lincoln in Nebraska, in order to answer correctly.",
"cite_spans": [
{
"start": 199,
"end": 218,
"text": "(Shen et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As NEL models have direct impact on the success of downstream products (Peters et al., 2019) , * E-mail: [email protected] all major technology companies deploy large-scale NEL systems; e.g., in Google Search, Apple Siri and Salesforce Einstein. While these companies can afford to build custom NEL systems at scale, we consider how a smaller organization or individual could achieve the same objectives.",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Peters et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We start with a simple question: how would someone, starting from scratch, build an NEL system for their use case? Can existing NEL systems be used off-the-shelf, and if not, can they be repurposed with minimal engineer effort? Our \"protagonist\" here must navigate two challenging problems, as shown in Figure 1: 1. Off-the-shelf capabilities. Industrial NEL systems provide limited transparency into their performance, and the majority of academic NEL systems are measured on standard benchmarks biased towards popular entities (Steinmetz et al., 2013) . However, prior works suggest that NEL systems struggle on so-called \"tail\" entities that appear infrequently in data (Jin et al., 2014; Orr et al., 2020) . As the majority of user queries are over the tail (Bernstein et al., 2012; Gomes, 2017) , it is critical to understand the extent to which NEL systems struggle on the tail in offthe-shelf academic and commercial systems.",
"cite_spans": [
{
"start": 529,
"end": 553,
"text": "(Steinmetz et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 673,
"end": 691,
"text": "(Jin et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 692,
"end": 709,
"text": "Orr et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 762,
"end": 786,
"text": "(Bernstein et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 787,
"end": 799,
"text": "Gomes, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 303,
"end": 312,
"text": "Figure 1:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Repurposing systems. If off-the-shelf systems are inadequate on the tail or other relevant subpopulations, how difficult is it for our protagonist to develop a customized solution without building a system from scratch? Can they treat an existing NEL model as a black box and still modify its behavior? When faced with designing a NEL system with desired capabilities, prior work has largely focused on developing new systems (Sevgili et al., 2020; Shen et al., 2014; Mudgal et al., 2018) . The question of how to guide or \"patch\" an existing NEL system without changing its architecture, features, or training strategy-what we call model ",
"cite_spans": [
{
"start": 429,
"end": 451,
"text": "(Sevgili et al., 2020;",
"ref_id": "BIBREF27"
},
{
"start": 452,
"end": 470,
"text": "Shen et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 471,
"end": 491,
"text": "Mudgal et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Challenges faced by individuals or small organizations in building NEL systems. (left) the fine-grained performance of off-the-shelf NEL systems varies widely-struggling on tail entities and sports-relevant subpopulationsmaking it likely that they must be repurposed for use; (right) for a sports QA application where no off-the-shelf system succeeds, the best-performing model (BOOTLEG) can be treated as a black box and successfully patched using weak labeling. In the example, a simple rule re-labels training data to discourage the BOOTLEG model from predicting a country entity (\"Germany\") when a clear sports-relevant contextual cue (\"match against\") is present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "engineering-remains unaddressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "In response to these questions, we investigate the limitations of existing systems and the possibility of repurposing them:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "1. Understanding failure modes (Section 3). We conduct the first study of open-source academic and commercially available NEL systems. We compare commercial APIs from MICROSOFT, GOOGLE and AMAZON to open-source systems BOOTLEG (Orr et al., 2020) , WAT (Piccinno and Ferragina, 2014) and REL (van Hulst et al., 2020) on subpopulations across 2 benchmark datasets of WIKIPEDIA and AIDA (Hoffart et al., 2011) . Supporting prior work, we find that most systems struggle to link rare entities, are sensitive to entity capitalization and often ignore contextual cues when making predictions. On WIKIPEDIA, commercial systems lag their academic counterparts by 10%+ recall, while MICROSOFT outperforms other commercial systems by 16%+ recall. On AIDA, a heuristic that relies on entity popularity (POP) outperforms all commercial systems by 1.5 F1. Overall, BOOTLEG is the most consistent system.",
"cite_spans": [
{
"start": 227,
"end": 245,
"text": "(Orr et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 252,
"end": 282,
"text": "(Piccinno and Ferragina, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 384,
"end": 406,
"text": "(Hoffart et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "2. Patching models (Section 3.2). Consider a scenario where our protagonist wants to use a NEL system as part of a downstream QA model answering sport-related queries; e.g., \"When did England last win the FIFA world cup?\". All models underperform on sportrelevant subpopulations of AIDA; e.g., BOOT-LEG can fail to predict national sports teams despite strong sport-relevant contextual cues, favoring the country entity instead. We therefore take the best system, BOOTLEG, and show how to correct undesired behavior using data engineering solutions-model agnostic methods that modify or create training data. Drawing on simple strategies from prior work in weak labeling, which uses user-defined functions to weakly label data (Ratner et al., 2017) , we relabel standard WIKIPEDIA training data to patch these errors and finetune the model on this relabeled dataset. With this strategy, we achieve a 25% absolute improvement in accuracy on the mentions where a model predicts a country rather than a sports team.",
"cite_spans": [
{
"start": 727,
"end": 748,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "We believe these principles of understanding fine-grained failure modes in the NEL system and correcting them with data engineering apply to large-scale industrial pipelines where the NEL model or its embeddings are used in numerous downstream products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Subpopulation",
"sec_num": null
},
{
"text": "Given some text, NEL involves two steps: the identification of all entity mentions (mention ex- Hellriegel was also second in the event in 1995 (to Mark Allen) and 1996 (to Luc Van Lierde). sentence has three consecutive entities that share the same type Mark Allen (triathlete) Mark Allen (DJ) type: triathletes one-ofthe-two In 1920, she performed a specialty number in \"The Deep Purple\", a silent film directed by Raoul Walsh. gold entity is one of the two most popular candidates, which have similar popularity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "The Deep Purple (1915 film)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "The Deep Purple (1920 film) unpopular Croatia was beaten 4-2 by France in the final on 15th July.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "gold entity is not the most popular candidate, which is 5x more popular France (country)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "French national football team Figure 2 : Subpopulations analyzed on the WIKIPEDIA dataset, along with their definitions and examples. We consider five subpopulations inspired by Orr et al. (2020) .",
"cite_spans": [
{
"start": 178,
"end": 195,
"text": "Orr et al. (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "traction), and contextualized linking of these mentions to their corresponding knowledge base entries (mention disambiguation). For example, in \"What ingredients are in a Manhattan\", the mention \"Manhattan\" links to Manhattan (cocktail), not Manhattan (borough) or The Manhattan Project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "Internally, most systems have an intermediate step that generates a small set of possible candidates for each mention (candidate generation) for the disambiguation model to choose from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "Given the goal of building a NEL system for a specific use case, we need to answer two questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "(1) what are the failure modes of existing systems, and (2) can they be repurposed, or \"patched\", to achieve desired performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Linking",
"sec_num": "2"
},
{
"text": "We begin by analyzing the fine-grained performance of off-the-shelf academic and commercial systems for NEL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding Failure Modes",
"sec_num": "3"
},
{
"text": "Setup. To perform this analysis, we use Robustness Gym (Goel et al., 2021b) , an open-source evaluation toolkit for analyzing natural language processing models. We evaluate all NEL systems by considering their performance on subpopulations, or subsets of data that satisfy some condition.",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Goel et al., 2021b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding Failure Modes",
"sec_num": "3"
},
{
"text": "Systems. We use 3 commercially available APIs: (i) GOOGLE Cloud Natural Language API (Google) , (ii) MICROSOFT Text Analytics API (Microsoft) , and (iii) AMAZON Comprehend API (Amazon) 1 .",
"cite_spans": [
{
"start": 85,
"end": 93,
"text": "(Google)",
"ref_id": null
},
{
"start": 130,
"end": 141,
"text": "(Microsoft)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding Failure Modes",
"sec_num": "3"
},
{
"text": "We compare to 3 state-of-the-art systems: (i) BOOT-LEG, a self-supervised system, (ii) REL, which combines existing state-of-the-art approaches, (iii) WAT an extension of the TAGME (Ferragina and Scaiella, 2010) linker. We also compare to a simple heuristic (iv) POP, which picks the most popular entity among candidates provided by BOOTLEG.",
"cite_spans": [
{
"start": 181,
"end": 211,
"text": "(Ferragina and Scaiella, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AMAZON performs named entity recognition (NER) to",
"sec_num": "1"
},
{
"text": "Datasets. We compare methods on examples drawn from two datasets: (i) WIKIPEDIA, which contains 100, 000 entity mentions mined from gold anchor links across 37, 492 sentences from a November 2019 Wikipedia dataset, and (ii) AIDA, the AIDA test-b benchmark dataset 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AMAZON performs named entity recognition (NER) to",
"sec_num": "1"
},
{
"text": "Metrics. As WIKIPEDIA is sparsely labeled (Ghaddar and Langlais, 2017), we compare performance on recall. For AIDA, we use Macro-F1, since AIDA provides a more dense labeling of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AMAZON performs named entity recognition (NER) to",
"sec_num": "1"
},
{
"text": "Results. Our results for WIKIPEDIA and AIDA are reported in Figures 3, 4 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 72,
"text": "Figures 3, 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "AMAZON performs named entity recognition (NER) to",
"sec_num": "1"
},
{
"text": "Subpopulations. In line with Orr et al. 2020, we consider 4 groups of examples -head, torso, tail and toe -that are based on the popularity of the entities being linked. Intuitively, head examples involve resolving popular entities that occur frequently in WIKIPEDIA, torso examples have medium popularity while tail examples correspond to entities that are seen rarely. Toe entities are a subset of the tail that are almost never seen. We conidentify entity mentions in text, so we use it in conjunction with a simple string matching heuristic to resolve entity links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on WIKIPEDIA",
"sec_num": "3.1"
},
{
"text": "2 REL uses AIDA for training, so we exclude it. Figure 2 with examples. These subpopulations require close attention to contextual cues such as relations, affordances and types. We also consider aggregate performance on the entire dataset (everything), and globally popular entities, which are examples where the entity mention is in the top 800 most popular entity mentions.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on WIKIPEDIA",
"sec_num": "3.1"
},
{
"text": "BOOTLEG is best overall. Overall, BOOTLEG outperforms other systems by a wide margin, with a 12-point gap to the next best system (MICROSOFT), while MICROSOFT in turn outperforms other commercial systems by more than 16 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on WIKIPEDIA",
"sec_num": "3.1"
},
{
"text": "Performance degrades on rare entities. For all systems, performance on head slices is substantially better than performance on tail/toe slices. BOOTLEG is the most robust across the set of slices that we consider. Among commercial systems, GOOGLE and AMAZON struggle on tail and torso entities e.g. GOOGLE from 73.3 points on head to 21.6 points on tail, while MICROSOFT's performance degrades more gracefully. GOOGLE is adept at globally popular entities, where it outperforms MICROSOFT by more than 11 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on WIKIPEDIA",
"sec_num": "3.1"
},
{
"text": "Subpopulations. We consider subpopulations that vary by: (i) fraction of capitalized entities, (ii) average popularity of mentioned entities, (iii) number of mentioned entities; (iv) sports-related topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Overall performance. Similar to WIKIPEDIA, BOOTLEG performs best, beating WAT by 1.3%, with commercial systems lagging by 11%+. Figure 4 : Robustness Report (Goel et al., 2021b) for NEL on AIDA, measuring Macro-F1.",
"cite_spans": [
{
"start": 157,
"end": 177,
"text": "(Goel et al., 2021b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 128,
"end": 136,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "are capitalized to 38.2% on sentences where none are capitalized. Similarly, MICROSOFT degrades from 66.0% to 35.7%. This suggests that mention extraction in these models is capitalization sensitive. In contrast, AMAZON, BOOTLEG and WAT appear insensitive to capitalization artifacts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Performance on topical entities. Interestingly, all models struggle on some topics, e.g. on NHL examples, all models degrade significantly, with WAT outperforming others by 20%+. GOOGLE and MI-CROSOFT display strong performance on some topics, e.g., GOOGLE on alpine sports (83.8%) and MICROSOFT on skating (91.6%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Popularity heuristic outperforms commercial systems. Somewhat surprisingly, POP outperforms all commercial systems by 1.7%. In fact, we note that the pattern of errors for POP is very similar to those of the commercial systems, e.g., performing poorly on NBA, NFL and NHL slices. This suggests that commercial systems sidestep the difficult problem of disambiguating ambiguous entities in favor of returning the more popular answer. Similar to WIKIPEDIA, GOOGLE performs best among commercial systems on examples with globally popular entities (top 10% entity popularity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Our results suggest that state-of-the-art academic systems outperform commercial APIs for NEL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Next, we explore whether it is possible to simply \"patch\" an off-the-shelf NEL model for a specific downstream use case. Standard methods for designing models with desired capabilities require technical expertise to engineer the architecture and features. As these skills are out of reach for many organizations and individuals, we consider patching models where they are treated as a black-box.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "We provide a proof-of-concept that we can use data engineering to patch a model. For our grounding use case, we consider the scenario where the NEL model will be used as part of a sports questionanswering (QA) system that uses a knowledge graph (KG) to answer questions. For example, given the question \"When did England last win the FIFA world cup?\", we would want the NEL model to resolve the metonymic mention \"England\" to the English national football team, and not the country. This makes it easy for the QA model to answer the question using the \"winner\" KG-relationship to the 1966 FIFA World Cup, which applies only to the team and not the country.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on AIDA",
"sec_num": "3.2"
},
{
"text": "Our off-the-shelf analysis revealed that all models struggle on sport-related subpopulations of AIDA. For instance, BOOTLEG is biased towards predicting countries instead of sport teams, even with strong contextual cues. For example, in the sentence \"...the years I spent as manager of the Republic of Ireland were the best years of my life\", BOOT-LEG predicts the country \"Republic of Ireland\" instead of the national sports team. In general, this makes it undesirable to directly use off-the-shelf in our sports QA system scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting the Wrong Granularity",
"sec_num": "3.3"
},
{
"text": "We explore repurposing in a controlled environment using BOOTLEG, the best-performing off-theshelf NEL model. We train a small model, called BOOTLEGSPORT, over a WIKIPEDIA subset consisting only of sentences with mentions referring to both countries and national sport teams. We define a subpopulation, strong-sport-cues, as mentions directly preceded by a highly correlated sport team cue 3 . Examining strong-sport-cues reveals two insights into BOOTLEGSPORT's behavior:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting the Wrong Granularity",
"sec_num": "3.3"
},
{
"text": "1. BOOTLEGSPORT misses some strong sportrelevant textual cues. In this subpopulation, 5.8% examples are mispredicted as countries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting the Wrong Granularity",
"sec_num": "3.3"
},
{
"text": "2. In this supopulation, an estimated 5.6% of mentions are incorrectly labeled as countries in WIKIPEDIA. As WIKIPEDIA is hand labeled by users, it contains some label noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting the Wrong Granularity",
"sec_num": "3.3"
},
{
"text": "In our use case, we want to guide BOOTLEGSPORT to always predict a sport team over a country in sport-related sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting the Wrong Granularity",
"sec_num": "3.3"
},
{
"text": "While there are some prior data engineering solutions to \"model patching\", including augmentation (Sennrich et al., 2015; Wei and Zou, 2019; Kaushik et al., 2019; Goel et al., 2021a) , weak labeling (Ratner et al., 2017; Chen et al., 2020) , and synthetic data generation (Murty et al., 2020) , due to the noise in WIKIPEDIA, we repurpose BOOTLEGSPORT using weak labeling to modify training labels and correct for this noise. Our weak labeling technique works as follows: any existing mention from strong-sport-cues that is labeled as a country is relabeled as a national sports team for that country. We choose the national sport team to be consistent with other sport entities in the sentence. If there are none, we choose a random national sport team. While this may introduce noise, it allows us to guide BOOTLEGSPORT to prefer sport teams over countries.",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "(Sennrich et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 122,
"end": 140,
"text": "Wei and Zou, 2019;",
"ref_id": "BIBREF32"
},
{
"start": 141,
"end": 162,
"text": "Kaushik et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 163,
"end": 182,
"text": "Goel et al., 2021a)",
"ref_id": "BIBREF8"
},
{
"start": 199,
"end": 220,
"text": "(Ratner et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 221,
"end": 239,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 272,
"end": 292,
"text": "(Murty et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Repurposing with Weak Labeling",
"sec_num": "3.4"
},
{
"text": "Results. After performing weak labeling, we finetune BOOTLEGSPORT over this modified dataset. As WIKIPEDIA ground truth labels are noisy and do not reflect our goal of favoring sport teams in sport sentences, we examine the distribution of predictions before and after guiding. In Table 1 we see that our patched model shows an increased trend in predicting sport teams. Further, the patched BOOTLEGSPORT model now only predicts countries in 4.0% of the strong-sport-cues subpopulation, a 30% relative reduction. For examples where the gold entity is a sports team that BOOTLEGSPORT predicts is a country, weak labeling improves absolute accuracy by 24.54%. Weak-labeling \"shifts\" probability mass from countries towards teams by 20% on these examples, and 1.8% overall across all examples where the gold entity is a sports team. It does so without \"disturbing\" probabilities on examples where the true answer is indeed a country, where the shift is only 0.07% towards teams.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 288,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Repurposing with Weak Labeling",
"sec_num": "3.4"
},
{
"text": "Identifying Errors. A key step in assessing offthe-shelf systems is fine-grained evaluation, to determine if a system exhibits undesirable behavior. Prior work on fine-grained evaluation in NEL (Rosales-M\u00e9ndez et al., 2019) characterizes how to more consistently evaluate NEL models, with an analysis that focuses on academic systems. By contrast, we consider both academic and industrial off-the-shelf systems, and describe how to assess them in the context of a downstream use-case. We use Robustness Gym (Goel et al., 2021b) , an open-source evaluation toolkit for performing the analysis, although other evaluation toolkits (Ribeiro et al., 2020; Morris et al., 2020) are possible to use, depending on the objective of the assessment. Patching Errors. If a system is assessed to have some undesirable behavior, the next step is to correct its errors and repurpose it for use. The key challenge lies in how to correct these errors. Although similar to the related fields of domain adaptation (Wang and Deng, 2018) and transfer learning (Zhuang et al., 2020) where the goal is to transfer knowledge from a pretrained, source model to a related task in a potentially different domain, our work focuses on user-guided behavior correction when using a pretrained model on the same task.",
"cite_spans": [
{
"start": 507,
"end": 527,
"text": "(Goel et al., 2021b)",
"ref_id": "BIBREF9"
},
{
"start": 628,
"end": 650,
"text": "(Ribeiro et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 651,
"end": 671,
"text": "Morris et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 995,
"end": 1016,
"text": "(Wang and Deng, 2018)",
"ref_id": "BIBREF31"
},
{
"start": 1039,
"end": 1060,
"text": "(Zhuang et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "For industrial NEL applications, Orr et al. (2020) describe how to use data management techniques such as augmentation (Sennrich et al., 2015; Wei and Zou, 2019; Kaushik et al., 2019; Goel et al., 2021a) , weak supervision (Ratner et al., 2017) , and slice-based learning (Chen et al., 2019) to correct underperforming, user-defined sub-populations of data. Focusing on image data Goel et al. (2021a) use domain translation models to generate synthetic augmentation data that improves underperforming subpopulations. NEL. NEL has been a long standing problem in industrial and academic systems. Standard, predeep-learning approaches to NEL have been rulebased (Aberdeen et al., 1996) , but in recent years, deep learning systems have become the new standard (see Mudgal et al. (2018) for an overview of deep learning approaches to NEL), often relying on contextual knowledge from language models such as BERT (F\u00e9vry et al., 2020) for state-of-the-art performance. Despite strong benchmark performance, the long tail of NEL (Bernstein et al., 2012; Gomes, 2017) ",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(Sennrich et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 143,
"end": 161,
"text": "Wei and Zou, 2019;",
"ref_id": "BIBREF32"
},
{
"start": 162,
"end": 183,
"text": "Kaushik et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 184,
"end": 203,
"text": "Goel et al., 2021a)",
"ref_id": "BIBREF8"
},
{
"start": 223,
"end": 244,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 272,
"end": 291,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 381,
"end": 400,
"text": "Goel et al. (2021a)",
"ref_id": "BIBREF8"
},
{
"start": 660,
"end": 683,
"text": "(Aberdeen et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 763,
"end": 783,
"text": "Mudgal et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 909,
"end": 929,
"text": "(F\u00e9vry et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 1023,
"end": 1047,
"text": "(Bernstein et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 1048,
"end": 1060,
"text": "Gomes, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We studied the performance of off-the-shelf NEL models and how to repurpose them for a downstream use case. In line with prior work, we found that off-the-shelf models struggle to disambiguate rare entities. Using a sport QA system as a case study, we showed how to use a data engineering solution to patch a BOOTLEG model from mispredicting countries instead of sports teams. We hope that our study of data engineering to effectuate model behavior inspires future work in this direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We mine these textual cues by looking at the most common two-grams preceding a national sport team in the training data. The result is phrases such as \"scored against\", \"match against\", and \"defending champion\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "CR gratefully acknowledges the support of NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, Total, the HAI-AWS Cloud Credits for Research program, the Salesforce AI Research grant and members of the Stanford DAWN project: Facebook, Google, and VMWare. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of NIH, ONR, or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mitre: Description of the alembic system as used in met",
"authors": [
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vilain",
"suffix": ""
}
],
"year": 1996,
"venue": "TIPSTER TEXT PRO-GRAM PHASE II: Proceedings of a Workshop",
"volume": "",
"issue": "",
"pages": "461--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Aberdeen, John D Burger, David Day, Lynette Hirschman, David D Palmer, Patricia Robinson, and Marc Vilain. 1996. Mitre: Description of the alembic system as used in met. In TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, Virginia, May 6-8, 1996, pages 461-462.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Direct answers for search queries in the long tail",
"authors": [
{
"first": "S",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Teevan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Liebling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 2012,
"venue": "SIGCHI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael S Bernstein, Jaime Teevan, Susan Dumais, Daniel Liebling, and Eric Horvitz. 2012. Direct answers for search queries in the long tail. In SIGCHI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Train and you'll miss it: Interactive model iteration with weak supervision and pre-trained embeddings",
"authors": [
{
"first": "F",
"middle": [],
"last": "Mayee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Y"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Sala",
"suffix": ""
},
{
"first": "Ravi",
"middle": [
"Teja"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Fait",
"middle": [],
"last": "Mullapudi",
"suffix": ""
},
{
"first": "Kayvon",
"middle": [],
"last": "Poms",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Fatahalian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.15168"
]
},
"num": null,
"urls": [],
"raw_text": "Mayee F. Chen, Daniel Y. Fu, Frederic Sala, Sen Wu, Ravi Teja Mullapudi, Fait Poms, Kayvon Fatahalian, and Christopher R\u00e9. 2020. Train and you'll miss it: Interactive model iteration with weak supervision and pre-trained embeddings. arXiv preprint arXiv:2006.15168.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Slice-based learning: A programming model for residual learning in critical data slices",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Jen",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "9392--9402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Chen, Sen Wu, Alexander J Ratner, Jen Weng, and Christopher R\u00e9. 2019. Slice-based learning: A programming model for residual learning in critical data slices. In Advances in neural information processing systems, pages 9392-9402.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tagme: on-thefly annotation of short text fragments (by wikipedia entities)",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Ugo",
"middle": [],
"last": "Scaiella",
"suffix": ""
}
],
"year": 2010,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Ferragina and Ugo Scaiella. 2010. Tagme: on-the-fly annotation of short text fragments (by wikipedia entities). ArXiv, abs/1006.3498.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Empirical evaluation of pretraining strategies for supervised entity linking",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "F\u00e9vry",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Livio Baldini",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2020,
"venue": "AKBC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault F\u00e9vry, Nicholas FitzGerald, Livio Baldini Soares, and Tom Kwiatkowski. 2020. Empirical evaluation of pretraining strategies for supervised entity linking. In AKBC.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Winer: A wikipedia annotated corpus for named entity recognition",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "413--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2017. Winer: A wikipedia annotated corpus for named entity recognition. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 413-422.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Model patching: Closing the subgroup performance gap with data augmentation",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yixuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "The International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karan Goel, Albert Gu, Yixuan Li, and Christopher R\u00e9. 2021a. Model patching: Closing the subgroup performance gap with data augmentation. In The International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Robustness gym: Unifying the nlp evaluation landscape",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [],
"last": "Rajani",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher R\u00e9. 2021b. Robustness gym: Unifying the nlp evaluation landscape.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Our latest quality improvements for search",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Gomes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Gomes. 2017. Our latest quality improvements for search. https://blog.google/products/search/our-latest-quality-improvements-search/.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Google cloud natural language api",
"authors": [
{
"first": "",
"middle": [],
"last": "Google",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Google. Google cloud natural language api.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Robust disambiguation of named entities in text",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"Amir"
],
"last": "Yosef",
"suffix": ""
},
{
"first": "Ilaria",
"middle": [],
"last": "Bordino",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Spaniol",
"suffix": ""
},
{
"first": "Bilyana",
"middle": [],
"last": "Taneva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "782--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782-792.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rel: An entity linker standing on the shoulders of giants",
"authors": [
{
"first": "Johannes",
"middle": [
"M"
],
"last": "Van Hulst",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hasibi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dercksen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "A",
"middle": [
"D"
],
"last": "Vries",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes M. van Hulst, F. Hasibi, K. Dercksen, K. Balog, and A. D. Vries. 2020. Rel: An entity linker standing on the shoulders of giants. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Entity linking at the tail: sparse signals, unknown entities, and phrase models",
"authors": [
{
"first": "Yuzhe",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "K\u0131c\u0131man",
"suffix": ""
},
{
"first": "Kuansan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ricky",
"middle": [],
"last": "Loynd",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 7th ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "453--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuzhe Jin, Emre K\u0131c\u0131man, Kuansan Wang, and Ricky Loynd. 2014. Entity linking at the tail: sparse signals, unknown entities, and phrase models. In Proceedings of the 7th ACM international conference on Web search and data mining, pages 453-462.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning the difference that makes a difference with counterfactually-augmented data",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.12434"
]
},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Microsoft text analytics api",
"authors": [
{
"first": "",
"middle": [],
"last": "Microsoft",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Microsoft. Microsoft text analytics api.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Textattack: A framework for adversarial attacks in natural language processing",
"authors": [
{
"first": "X",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Jin",
"middle": [
"Yong"
],
"last": "Lifland",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Yoo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.05909"
]
},
"num": null,
"urls": [],
"raw_text": "John X Morris, Eli Lifland, Jin Yong Yoo, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks in natural language processing. arXiv preprint arXiv:2005.05909.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep learning for entity matching: A design space exploration",
"authors": [
{
"first": "Sidharth",
"middle": [],
"last": "Mudgal",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Theodoros",
"middle": [],
"last": "Rekatsinas",
"suffix": ""
},
{
"first": "An-Hai",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "Youngchoon",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Krishnan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "19--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidharth Mudgal, Han Li, Theodoros Rekatsinas, An-Hai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, pages 19-34.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Expbert: Representation engineering with natural language explanations",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Pang",
"middle": [],
"last": "Wei Koh",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.01932"
]
},
"num": null,
"urls": [],
"raw_text": "Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. Expbert: Representation engineering with natural language explanations. arXiv preprint arXiv:2005.01932.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bootleg: Chasing the tail with self-supervised named entity disambiguation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Orr",
"suffix": ""
},
{
"first": "Megan",
"middle": [],
"last": "Leszczynski",
"suffix": ""
},
{
"first": "Simran",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Orr, Megan Leszczynski, Simran Arora, Sen Wu, N. Guha, Xiao Ling, and C. R\u00e9. 2020. Bootleg: Chasing the tail with self-supervised named entity disambiguation. CIDR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Neumann",
"suffix": ""
},
{
"first": "I",
"middle": [
"V"
],
"last": "Logan",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.04164"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Robert L Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. arXiv preprint arXiv:1909.04164.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "From tagme to wat: a new entity annotator",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ferragina",
"suffix": ""
}
],
"year": 2014,
"venue": "ERD '14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Piccinno and P. Ferragina. 2014. From tagme to wat: a new entity annotator. In ERD '14.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid training data creation with weak supervision. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 11, page 269. NIH Public Access.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Beyond accuracy: Behavioral testing of nlp models with checklist",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. In Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Fine-grained evaluation for entity linking",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Rosales-M\u00e9ndez",
"suffix": ""
},
{
"first": "Aidan",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "718--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Rosales-M\u00e9ndez, Aidan Hogan, and Barbara Poblete. 2019. Fine-grained evaluation for entity linking. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 718-727.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06709"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural entity linking: A survey of models based on deep learning",
"authors": [
{
"first": "Ozge",
"middle": [],
"last": "Sevgili",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Shelmanov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.00575"
]
},
"num": null,
"urls": [],
"raw_text": "Ozge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, and Chris Biemann. 2020. Neural entity linking: A survey of models based on deep learning. arXiv preprint arXiv:2006.00575.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Entity linking with a knowledge base: Issues, techniques, and solutions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "27",
"issue": "2",
"pages": "443--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Shen, Jianyong Wang, and Jiawei Han. 2014. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443-460.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Entity linking with a knowledge base: Issues, techniques, and solutions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "27",
"issue": "",
"pages": "443--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering, 27:443-460.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Statistical analyses of named entity disambiguation benchmarks",
"authors": [
{
"first": "Nadine",
"middle": [],
"last": "Steinmetz",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Knuth",
"suffix": ""
},
{
"first": "Harald",
"middle": [],
"last": "Sack",
"suffix": ""
}
],
"year": 2013,
"venue": "NLP-DBPEDIA@ ISWC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadine Steinmetz, Magnus Knuth, and Harald Sack. 2013. Statistical analyses of named entity disambiguation benchmarks. In NLP-DBPEDIA@ ISWC.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep visual domain adaptation: A survey",
"authors": [
{
"first": "Mei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Weihong",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2018,
"venue": "Neurocomputing",
"volume": "312",
"issue": "",
"pages": "135--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mei Wang and Weihong Deng. 2018. Deep visual domain adaptation: A survey. Neurocomputing, 312:135-153.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Eda: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "W",
"middle": [],
"last": "Jason",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.11196"
]
},
"num": null,
"urls": [],
"raw_text": "Jason W Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Scalable zeroshot entity linking with dense entity retrieval",
"authors": [
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Josifoski",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03814"
]
},
"num": null,
"urls": [],
"raw_text": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2019. Scalable zero-shot entity linking with dense entity retrieval. arXiv preprint arXiv:1911.03814.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A comprehensive survey on transfer learning",
"authors": [
{
"first": "Fuzhen",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Keyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Dongbo",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Yongchun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hengshu",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE",
"volume": "109",
"issue": "",
"pages": "43--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. 2020. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Robustness Report (Goel et al., 2021b) for NEL on Wikipedia, measuring recall. sider 5 subpopulations inspired by Orr et al. (2020), described in",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "in industrial workloads has remained a challenge. Recent papers Orr et al. (2020); Wu et al. (2019) have begun to measure and improve performance on unseen entities, but it remains an open problem.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"text": "Sensitivity to capitalization. Both GOOGLE and MICROSOFT are sensitive to whether the entity mention is capitalized. GOOGLE's performance goes from 54.1% on sentences where all mentions",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Amazon</td><td/><td/><td>Google</td><td colspan=\"2\">Microsoft</td><td>Bootleg</td><td/><td/><td>Wat</td><td>Pop</td><td>Size</td></tr><tr><td>All</td><td>52.5</td><td/><td/><td>48.5</td><td>54.7</td><td/><td>66.0</td><td/><td/><td>64.7</td><td>56.4</td><td>2.46K</td></tr><tr><td>EntityCapitalization(All)</td><td>54.6</td><td/><td/><td>54.1</td><td>66.0</td><td/><td>68.2</td><td/><td/><td>63.2</td><td>56.1</td><td>1.4K</td></tr><tr><td>EntityCapitalization(None)</td><td>49.6</td><td/><td colspan=\"2\">38.2</td><td>35.7</td><td/><td>62.0</td><td/><td/><td>67.7</td><td>56.3</td><td>909</td></tr><tr><td>EntityPopularity(Bottom 10%)</td><td>44.0</td><td/><td colspan=\"2\">35.1</td><td>46.4</td><td/><td>57.7</td><td/><td colspan=\"2\">47.4</td><td>46.0</td><td>247</td></tr><tr><td>EntityPopularity(Top 10% Variability)</td><td>66.2</td><td/><td/><td>79.9</td><td>71.3</td><td/><td>74.2</td><td/><td/><td>73.3</td><td>73.4</td><td>247</td></tr><tr><td>EntityPopularity(Top 10%)</td><td>52.2</td><td/><td/><td>54.0</td><td>53.9</td><td/><td>52.1</td><td/><td colspan=\"2\">55.3</td><td>61.7</td><td>264</td></tr><tr><td>NumEntities(1)</td><td>49.6</td><td/><td colspan=\"2\">38.6</td><td>44.2</td><td/><td>60.6</td><td/><td/><td>65.1</td><td>53.7</td><td>1.37K</td></tr><tr><td>NumEntities(Top 10%)</td><td>57.1</td><td/><td/><td>62.7</td><td>69.4</td><td/><td>69.4</td><td/><td colspan=\"2\">58.9</td><td>59.7</td><td>428</td></tr><tr><td>Sport(Freestyle) Sport(Cricket) Sport(Basketball) Sport(Badminton) Sport(Alpine)</td><td>77.1 76.8 54.8 48.2 67.7</td><td/><td colspan=\"2\">83.8 68.9 57.4 31.7 81.7</td><td>82.9 67.5 50.7 27.8 72.1</td><td/><td>80.0 78.4 74.6 60.7 74.7</td><td/><td colspan=\"2\">84.4 70.1 77.2 54.3 75.8</td><td>79.7 70.7 59.9 51.2 73.5</td><td>155 24 37 124 
44</td><td>subpopulations</td></tr><tr><td>Sport(Golf)</td><td>69.6</td><td/><td/><td>72.1</td><td>63.8</td><td/><td>77.9</td><td/><td/><td>69.4</td><td>77.8</td><td>30</td></tr><tr><td>Sport(NBA)</td><td>7.1</td><td/><td>9.3</td><td/><td>8.3</td><td/><td>68.6</td><td/><td/><td>77.2</td><td>13.9</td><td>99</td></tr><tr><td>Sport(NHL) Sport(NFL)</td><td>30.1 19.8</td><td/><td>9.0</td><td>24.1</td><td>20.7 13.8</td><td/><td>52.8 46.9</td><td/><td>25.7</td><td>67.4</td><td>25.4 18.2</td><td>65 107</td></tr><tr><td>Sport(Nordic)</td><td>54.3</td><td/><td/><td>64.9</td><td>76.2</td><td/><td>66.6</td><td/><td/><td>64.3</td><td>64.1</td><td>20</td></tr><tr><td>Sport(Rugby)</td><td>36.3</td><td/><td/><td>25.9</td><td>45.5</td><td/><td>61.5</td><td/><td colspan=\"2\">56.3</td><td>44.5</td><td>63</td></tr><tr><td>Sport(Skating)</td><td>79.5</td><td/><td/><td>80.7</td><td colspan=\"2\">91.6</td><td>75.8</td><td/><td/><td>78.9</td><td>75.8</td><td>42</td></tr><tr><td>Sport(Skiing)</td><td>54.9</td><td/><td/><td>56.8</td><td>65.9</td><td/><td>57.5</td><td/><td/><td>68.7</td><td>66.6</td><td>22</td></tr><tr><td>Sport(Soccer)</td><td>54.2</td><td/><td colspan=\"2\">41.3</td><td>60.9</td><td/><td>73.5</td><td/><td/><td>73.7</td><td>56.4</td><td>654</td></tr><tr><td/><td>0</td><td>100</td><td>0</td><td>100</td><td>0</td><td>100</td><td>0</td><td>100</td><td>0</td><td>100</td><td>0</td><td>100</td></tr></table>",
"html": null
},
"TABREF4": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: BOOTLEGSPORT prediction matrix before and</td></tr><tr><td>after model patching. The weak sport cues subpopula-</td></tr><tr><td>tion contains sentences with more generic sport related</td></tr><tr><td>keywords.</td></tr></table>",
"html": null
}
}
}
}