{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:10.353711Z"
},
"title": "Taking Things Personally: Third Person to First Person Rephrasing",
"authors": [
{
"first": "Marcel",
"middle": [],
"last": "Granero-Moya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa AI Cambridge",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Agis",
"middle": [
"Oikonomou"
],
"last": "Filandras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa AI",
"location": {
"settlement": "Cambridge",
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The recent advancement of digital assistant technologies has opened new possibilities in the experiences they can provide. One of them is the ability to converse with a persona, e.g., celebrities, famous fictional characters, etc. This experience requires that the replies are answered from the point of view of the persona, i.e., the first person. Since the facts about characters are typically found expressed in the third person, there is a need to rephrase them to the first person in order for the assistant not to break character and the experience to remain immersive. However, the automatic solution to such a problem is largely unexplored by the community. In this work, we present a new task for NLP: third person to first person rephrasing. We define the task and analyze its major challenges. We create and publish a novel dataset with 3493 humanannotated pairs of celebrity facts in the third person with their rephrased sentence in the first person. Moreover, we propose a transformerbased pipeline that correctly rephrases 92.8% of sentences compared to 76.2% rephrased by a rule-based baseline system.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The recent advancement of digital assistant technologies has opened new possibilities in the experiences they can provide. One of them is the ability to converse with a persona, e.g., celebrities, famous fictional characters, etc. This experience requires that the replies are answered from the point of view of the persona, i.e., the first person. Since the facts about characters are typically found expressed in the third person, there is a need to rephrase them to the first person in order for the assistant not to break character and the experience to remain immersive. However, the automatic solution to such a problem is largely unexplored by the community. In this work, we present a new task for NLP: third person to first person rephrasing. We define the task and analyze its major challenges. We create and publish a novel dataset with 3493 humanannotated pairs of celebrity facts in the third person with their rephrased sentence in the first person. Moreover, we propose a transformerbased pipeline that correctly rephrases 92.8% of sentences compared to 76.2% rephrased by a rule-based baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The emergence of digital assistant technologies has opened new possibilities like creating conversational personas that impersonate celebrities or fictional characters and chat with users. When asked about themselves, these personas can respond with answers created manually by experts or taken straight from knowledge bases, which are usually stored in the third person. In order to not break character, we aim to rephrase automatically these third person replies to first person while preserving the original information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task can be thought of as a special case of controlled text rephrasing (Logeswaran et al., 2018; and text style transfer (Fu et al., 2018; Shen et al., 2017) . Similarly to the above, third to first person rephrasing aims to transform a sentence while preserving the content. In our case instead of trying to change the style of the sentence, e.g. formality (Wang et al., 2019) , sentiment (Li et al., 2018; Zhang et al., 2018) , or politeness (Madaan et al., 2020) , we want to change the point of view of the sentence narrator.",
"cite_spans": [
{
"start": 75,
"end": 100,
"text": "(Logeswaran et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 125,
"end": 142,
"text": "(Fu et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 143,
"end": 161,
"text": "Shen et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 362,
"end": 381,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 394,
"end": 411,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 431,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 448,
"end": 469,
"text": "(Madaan et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define the task of Third to First Person Rephrasing. The task's goal is to rephrase a sentence about someone in the third person point of view to the first person point of view as if that person was talking about themself. The task has a number of challenges. Firstly, we can have text in quotes that should remain unchanged. Secondly, reported statements should be converted to personal statements, e.g., \"He said that he loves . . . \" \u2192 \"I love . . . \". Thirdly, there could be multiple subjects in the sentences, e.g., \"Samuel and Ron went to . . . \" \u2192 \"Ron and I went to . . . \". Finally, the point of view should be modified only on the mentions to the target persona. Thus, references and co-references have to be identified and rephrased, while keeping unchanged other third person mentions that refer to another subject. This is a known problem called coreference resolution, that aims to find the same entities present in text with a number of proposed solutions in literature Manning, 2015, 2016; Lee et al., 2017; Joshi et al., 2019; Xu and Choi, 2020) .",
"cite_spans": [
{
"start": 989,
"end": 1009,
"text": "Manning, 2015, 2016;",
"ref_id": null
},
{
"start": 1010,
"end": 1027,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 1028,
"end": 1047,
"text": "Joshi et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 1048,
"end": 1066,
"text": "Xu and Choi, 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For instance, the sentence about Samuel L. Jackson \"His characters often feature the color purple: Jackson chose to have Doyle Gipson wear a purple hat in Changing Lanes; Mace Windu, upon request by Jackson to George Lucas, wielded a purple lightsaber in Star Wars; and Lazarus, his character in Black Snake Moan, plays a purple Gibson guitar.\" should be rephrased to \"My characters often feature the color purple: I chose to have Doyle Gipson wear a purple hat in Changing Lanes; Mace Windu, upon my request to George Lucas, I wielded a purple lightsaber in Star Wars;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and Lazarus, my character in Black Snake Moan, plays a purple Gibson guitar.\". The rephraser needs to be able to disambiguate between the different subjects in complicated sentences and rephrase appropriately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this work are threefold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Definition and analysis of the task. We define the goal of rephrasing third person text to first person in order to impersonate a persona, and we identify the main challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Creation and distribution of a novel supervised dataset specific for the task. The source sentences are scrapped from a website containing facts from celebrities, and the target sentences are annotated with Amazon Mechanical Turk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Proposal of a rephrasing model and exploration of several pre-processing techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed pipeline establishes a benchmark for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we will explain how we created our dataset for the task and present two solutions to the problem, a rule-based one and a deep learning based one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "We built a supervised dataset with third person sentences rephrased to the first person. The source sentences were collected from TheFactSite 1 , a website with celebrities facts like \"J.K. Rowling is 54 years old.\" and \"She was born on July 31st, 1965 in Yate, Gloucestershire.\". The data was collected filtering out groups and bands. After collecting the facts, they were paired with their first person equivalents. The pairs were created manually, using Amazon Mechanical Turk. We defined microtasks (See Appendix A) containing an original fact, extracted from TheFactSite, to be rephrased to the first person point of view. The target celebrity and a link to the source webpage were provided to help MTurkers with the annotations. MTurkers were asked to change the target celebrity to a first person point of view, to preserve all the initial content in the rephrased sentence, and keep the word order. The last two instructions aimed to reach higher inter-annotator agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "1 www.thefactsite.com/celebrities/ Each task was annotated by three MTurkers from the UK or the US, with more than 98% HIT Approval Rate, and more than 5000 HITs approved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The results from MTurk were post-processed in three different steps to collect the final annotations. First, direct annotation; whenever at least two MTurkers agreed on the transformation, the answer was assumed to be correct. 3352 facts were directly annotated, from which 2307 had full agreement between annotators and 1045 where only two annotators coincided on the rephrasing. At this point, 359 sentences had no agreement between MTurkers. Answers that were not directly annotated in the previous step were simplified by lowercasing, removing final punctuation, and substituting contractions by expanded forms. Thanks to this process, 73 facts were annotated.The remaining 68 facts were annotated by hand. The result is a supervised dataset that consists of 3493 celebrity facts in the third person and the first person point of view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The dataset contains facts from 117 different celebrities of which 85 are male and 32 are female. There are 2563 pairs for male celebrities and 930 for females. On average, each celebrity has 29.9 facts and each fact has 16.3 words. Table 1 shows an illustrative example of a data pair. It contains the celebrity and the celebrity's gender, as well as the source and the target fact.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "celebrity Shawn Mendes gender male source in some of his videos that he has uploaded to youtube, shawn just sings in them and lets other musicians collaborate with him by playing instruments. target in some of my videos that i have uploaded to youtube, i just sing in them and let other musicians collaborate with me by playing instruments. Table 1 : Example of a fact containing the celebrity name, its gender, the source sentence and the target. In italics, words that differ between source and target.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The most frequently substituted pairs from the original celebrity fact to the transformed firstperson fact as expected are personal pronouns. More specifically, the top 5 substituted pairs are the following: his \u2192 my, he \u2192 I, her \u2192 my, she \u2192 I, him \u2192 me. As the dataset is unbalanced in gender, over two thirds of the substituted personal pronouns are masculine, and one third are feminine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "To solve the problem, we investigated two possible alternatives. First a rule-based system and second we used a data-driven deep learning approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rephrasing Pipeline",
"sec_num": "2.2"
},
{
"text": "We implemented a rule-based system with dictionaries to rephrase text from third person to first person. First, it spots and tags the subject's name, then, replaces pronouns based on the subject's gender, and finally, applies substitution dictionaries with over 100 rules (e.g., \"He has\"\u2192\"I have\", \"with <subject>\"\u2192\"with me\") based on the most common substitutions in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-Based System",
"sec_num": "2.2.1"
},
{
"text": "We used GPT-2, a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages, (Radford et al., 2019) . Specifically, we finetuned the 'gpt2' model from the Hugging Face open source library, (Wolf et al., 2019) , on the collected dataset.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 242,
"end": 261,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning System",
"sec_num": "2.2.2"
},
{
"text": "Before feeding our inputs to the model, we used a reference recognizer. It adds a tag before and after any of the target celebrity possible names (name, surname, acronym, nickname). This ensures that the model knows, what is the subject whose point of view it is trying to change. Also, we provide the subject's gender by prepending a gender token <fe-male> or <male> to help the model disambiguate gendered pronouns. Thirdly, we tried another preprocessing step that dealt with the pronoun resolution. NeuralCoref (Clark and Manning, 2015) , a neural network that resolves coreference clusters, was used to identify personal pronouns that refer to the target celebrity. Table 2 illustrates the two pre-processing steps, and shows examples of the input to the model. Finally, the dataset was augmented to provide more training samples to the model, and to decorrelate celebrity names to specific facts. Thus, leveraging the substituted words between the source and target facts, celebrity names were changed in the source while keeping the rest of the fact. Personal pronouns had to change when the original and the new celebrity differed on gender. An example of data augmentation is depicted in Table 3 . Male facts were expanded to 4 new male celebrities and 5 new female celebrities, whereas female celebrity facts were only expanded to 4 new female celebrities due to the ambiguity of the word her, which",
"cite_spans": [
{
"start": 515,
"end": 540,
"text": "(Clark and Manning, 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 671,
"end": 678,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 1197,
"end": 1204,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Deep Learning System",
"sec_num": "2.2.2"
},
{
"text": "Billie Eilish Gender female Source Billie Eilish never smiles in photographs because she says it makes her feel \" weak and powerless . \" Reference <cel> Billie Eilish </cel> never smiles in photographs because she says it makes her feel \" weak and powerless . \" Coreferences <cel> Billie Eilish </cel> never smiles in photographs because <ref> she </ref> says it makes <ref> her </ref> feel \" weak and powerless . \" Target I never smile in photographs because it makes me feel \" weak and powerless . \" can be a personal pronoun replaced by him or a possessive replaced by his.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Celebrity",
"sec_num": null
},
{
"text": "The dataset (see Section 2.1) was split 80% for training, 10% for evaluation and 10% for test. As the number of edits we wanted the model to do in a sentence were few, we evaluated results with a sentence accuracy metric where a rephrased sentence is correct if it perfectly matches the target sentence, and wrong otherwise. Table 5 shows the correct rephrasing for a fact and a possible wrong rephrasing. Initially, we tried the rule-based system, as discussed in Section 2.2, model #0. We found it got 76.2% accuracy in the test set by handling most straightforward cases, i.e., when the subject was attached to the verb and when all pronouns and possessives referred to the celebrity. Also, it had problems with homographs, e.g., \"her\" can either be a pronoun or a possessive, so it can be replaced by \"me\" or \"my\" respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "We investigated using the GPT-2 model to solve the task. Initially we trained the model without any preprocessing and by finetuning on the nonaugmented dataset, model #1. We saw an increase in accuracy by 5.2%. The model could disambiguate between homographs without requiring any explicit input. It still had trouble though when there were multiple possible subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "We hypothesized that by providing the model with gender and the name of the subject we wanted it to rephrase, we would see an improvement. Thus, we first prepended a gender token (<female> or <male>) to the source text, model #2, and then, added the reference recognizer pre-processing step as described in section 2.2, model #3. Compared to GPT-2 without any preprocessing step, the accuracy was raised by 6.1% and 8.6% respectively. Our following hypothesis was that by using a coreferencing model, we would be able to help the model with cases where the pronouns were ambiguous, and it wasn't editing properly the text. By adding the coreference preprocessing step to the GPT-2 along with the reference recognizer and specified gender, model #4 decreased the accuracy compared to model #3. We surmised, this tiny accuracy drop was due to the limited amount of data. We did data augmentation, as discussed in section 2.2, to provide the model with more training examples of those ambiguous cases. By facing multiple similar examples with different subject names and pronouns, the model would have to take into account the subject name and the coreferences provided by the preprocessing steps. Data augmentation worked, models #5 and #6, resulting in an accuracy increase compared to model #3. It is very interesting to note that the deep learning model can achieve 91.4% high accuracy without using any dependency parsing and without the coreferencing model. The best model (#6) reached 92.8% of accuracy by leveraging all the preprocessing techniques. Gender-wise, model (#6) achieves 93.2% accuracy for masculine subjects and 92.0% for feminine ones. Table 4 shows a comparison of all our hypotheses and their results (see appendix C for further finetuned models).",
"cite_spans": [],
"ref_spans": [
{
"start": 1655,
"end": 1662,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "We analyzed our best model's errors (#7) and categorized them. 24% of mistakes happened due to the removal or addition of an unrelated context words (e.g., predicted \"My dog\" instead of \"Brady, my dog\"). Regarding reference disambiguation, other subjects were rephrased 20% of times (\"I am Drake's only child.\" instead of \"He is my only Table 4 : Accuracy results for the baseline (#0) and the different GPT2 models finetuned (#1-#6). An X is marked if the gender token is provided (Gen), reference recognizer used (Ref), coreferences annotated (Coref), and augmented dataset used for training (Aug).",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "Model Gender Ref Coref Aug Accuracy #0 76.2% #1 81.4% #2 X 87.5% #4 X X X 90.0% #3 X X 90.9% #5 X X X 91.4% #6 X X X X 92.8%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "Post credits his father for introducing him to diverse music genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Correct prediction I credit my father for introducing me to diverse music genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Wrong prediction I credits my father for introducing me to diverse music genres. child.\"), and 23% of times the reference recognition failed because the subject was not identified by the reference recognizer or not rephrased by the model (\"PewDiePie is . . . \" instead of \"I am . . . \"), or the subject was referred in plural along with other subjects (\"helped me\" instead of \"helped us\"), or the subject was not referred by its name (e.g., \"the player\" or \"the actor\"). Finally, there were two kinds of mistakes that could be counted as correct. In 20% of the cases, the subject reported its own statement and the report was omitted (\"I like chocolate\" instead of \"I said I like chocolate\"); and 6.8% for synonyms generated by the model (\"nearly\" instead of \"almost\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "This work presented a novel NLP task: third person to first person rephrasing. We explored the challenges of the task, when impersonating a celebrity talking about themselves. We presented a supervised corpus of celebrity facts in the first and third person. We also proposed fine-tuning GPT-2 using the dataset and explored a number of techniques that helped the model achieve 92.8% of accuracy in the test split of our dataset, a 16.6% improvement compared to the rule-based baseline. Finally, we leave for future work the exploration of further refinements to our pipeline, extending our solution to other languages and augmenting the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "The dataset was collected via MTurk, a crowdworking platform owned by Amazon. In order to help the annotator change the target celebrity from third person to first person, the celebrity name and a link to the celebrity facts webpage were provided. Moreover, two constraints were specified; first, to keep all the content present in the rephrased sentence and ensure no information was deleted. Although the annotations would lack diversity of written form, we specified to keep the same word order in an attempt to reach more inter-annotator agreement which would help to choose the ground truth rephrased sentence. On the platform, there was a button to open a window with the instructions and illustrative examples showing correct and incorrect annotations. We specified that all HITs (Human Intelligence Tasks) would be reviewed to ensure proper annotations from MTurkers. Table 6 : Most substituted pairs in the original (nonaugmented) dataset. ",
"cite_spans": [],
"ref_spans": [
{
"start": 876,
"end": 883,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Data Collection Instructions with MTurk",
"sec_num": null
},
{
"text": "We provide a table with the results for all the models. First, the baseline model (#0), and second, the finetuned GPT2 models which differ on the preprocessing steps applied. The models with a number are explained in section 3. The rest of models provide a wider understanding on how the preprocessing steps help the GPT2 model to rephrase from third to first person. Notice that the coreference resolution has to be used along with the reference recognizer. Table 7 : Accuracy results for the rule-based baseline (#0) and the different finetuned GPT2 models. The GPT2 models differ on the preprocessing steps that were applied to the data before finetuning the GPT2 model. An X means that the column's preprocessing step was used for that model's finetuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Further Model Results",
"sec_num": null
}
],
"back_matter": [
{
"text": "In this section, we provide further details about the substituted pairs from the original dataset. Table 6 illustrates the 20 most substituted pairs from the original dataset. Note that masculine pronouns are twice as frequent as feminine pronouns, evidencing again the gender bias. Figure 2 shows the log-log distribution for the substituted pairs. # original transformed counts",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Table 6",
"ref_id": null
},
{
"start": 284,
"end": 292,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Dataset's Most Frequently Substituted Pairs",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Entitycentric coreference resolution with model stacking",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1405--1415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D Manning. 2015. Entity- centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1405-1415.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep reinforcement learning for mention-ranking coreference models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08667"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D Manning. 2016. Deep reinforcement learning for mention-ranking corefer- ence models. arXiv preprint arXiv:1609.08667.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Style transfer in text: Exploration and evaluation",
"authors": [
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Xiaoye",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Explo- ration and evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.09091"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019. Bert for coreference reso- lution: Baselines and analysis. arXiv preprint arXiv:1908.09091.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.07045"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion. arXiv preprint arXiv:1707.07045.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Delete, retrieve, generate: A simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06437"
]
},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. arXiv preprint arXiv:1804.06437.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Content preserving text generation with attribute controls",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01135"
]
},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran, Honglak Lee, and Samy Ben- gio. 2018. Content preserving text generation with attribute controls. arXiv preprint arXiv:1811.01135.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Politeness transfer: A tag and generate approach",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Madaan",
"suffix": ""
},
{
"first": "Amrith",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Parekh",
"suffix": ""
},
{
"first": "Barnabas",
"middle": [],
"last": "Poczos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14257"
]
},
"num": null,
"urls": [],
"raw_text": "Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. arXiv preprint arXiv:2004.14257.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.09655"
]
},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. arXiv preprint arXiv:1705.09655.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Harnessing pre-trained neural networks with rules for formality style transfer",
"authors": [
{
"first": "Yunli",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Chao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3564--3569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, and Wenhan Chao. 2019. Harnessing pre-trained neural networks with rules for formality style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3564-3569.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Rewriting meaningful sentences via conditional bert sampling and an application on fooling text classifiers",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Ramirez",
"suffix": ""
},
{
"first": "Kalyan",
"middle": [],
"last": "Veeramachaneni",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11869"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Xu, Ivan Ramirez, and Kalyan Veeramachaneni. 2020. Rewriting meaningful sentences via conditional bert sampling and an application on fooling text classifiers. arXiv preprint arXiv:2010.11869.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Revealing the myth of higher-order inference in coreference resolution",
"authors": [
{
"first": "Liyan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.12013"
]
},
"num": null,
"urls": [],
"raw_text": "Liyan Xu and Jinho D Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. arXiv preprint arXiv:2009.12013.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning sentiment memories for sentiment modification without parallel data",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07311"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018. Learning sentiment memories for sentiment modification without parallel data. arXiv preprint arXiv:1808.07311.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure 1: MTurk microtask",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Log-log distribution of substituted pairs for the original dataset",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"text": "Example of reference recognizer and coreference resolution pre-processing steps. Tags <cel> and <ref> stand for celebrity and coreference respectively.",
"type_str": "table",
"content": "<table><tr><td>Original</td><td>&lt;male&gt; &lt;cel&gt; Post Malone &lt;/cel&gt; 's</td></tr><tr><td/><td>first single was released during his ill-</td></tr><tr><td/><td>ness. &lt;cel&gt; Post &lt;/cel&gt; says it was</td></tr><tr><td/><td>thanks to his father.</td></tr><tr><td>Augmented A</td><td>&lt;male&gt; &lt;cel&gt; Nicholas Hoult &lt;/cel&gt; 's</td></tr><tr><td/><td>first single was released during his ill-</td></tr><tr><td/><td>ness. &lt;cel&gt; Nicholas &lt;/cel&gt; says it was</td></tr><tr><td/><td>thanks to his father.</td></tr><tr><td>Augmented B</td><td>&lt;female&gt; &lt;cel&gt; Ariana Grande &lt;/cel&gt;</td></tr><tr><td/><td>'s first single was released during her</td></tr><tr><td/><td>illness. &lt;cel&gt; Ariana &lt;/cel&gt; says it was</td></tr><tr><td/><td>thanks to her father.</td></tr></table>"
},
"TABREF1": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Example of correct prediction and a possible</td></tr><tr><td>wrong prediction for a given source fact.</td></tr></table>"
}
}
}
}