{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:38.361298Z"
},
"title": "",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"institution": "Royal Holloway, University of London"
},
"email": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Welcome to the Second Workshop on Evaluation and Comparison of NLP Systems (Eval4NLP 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Undeniably, fair evaluations and comparisons are important to the NLP community for properly tracking progress and suggesting open problems in the field. In recent years, after the deep learning revolution, people are relying more and more on fine-tuning pre-trained language models to achieve downstream tasks, leading to significant growth in the number of published state-of-the-art results. Without appropriate evaluations (including methodologies, datasets, metrics, setups, reports, etc.) , such results would be meaningless or even harmful to the community. Last year, the first workshop in the series, Eval4NLP 2020, was the first workshop to take a broad and unifying perspective on the subject matter. For this year, the goal of the second workshop is to continue the tradition by providing a platform for presenting and discussing the latest advances in NLP evaluation methods and resources.",
"cite_spans": [
{
"start": 427,
"end": 494,
"text": "(including methodologies, datasets, metrics, setups, reports, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The workshop has attracted lots of attention from the community with 36 research papers being submitted. After careful reviews by the program committee and the workshop organizers, 17 papers (including 14 long papers and 3 short papers) were accepted to present in the workshop. To increase the variety of the program, we additionally welcome 17 papers published recently elsewhere (i.e.,14 papers from the Findings of EMNLP 2021 and 3 papers from other prestigious publication venues in AI) to present in the workshop as well. Overall, our program covers a wide range of topics in NLP evaluation and comparison, including new evaluation metrics for different NLG tasks (e.g., summarization, translation, data-to-text, text-to-SQL) and NLP models (e.g., embeddings, user feedback predictions, maths word problem solvers, coreference resolution); new benchmark datasets for tasks like authorship attribution, multilingual narratives, gender bias, NER, subword segmentation, and open question answering; and critical analyses over existing evaluation benchmarks (e.g., SemEval) and paradigms (e.g., system comparison methods and statistical tests).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Moreover, we organized a shared task on explainable quality estimation. Given a pair of a source sentence and a machine-translated sentence, participants were asked to estimate the sentence-level quality score of the translation and explain the score by providing a continuous word-level score for each input word or token indicating its importance for the prediction. There were seven teams participating in the shared task and six of them submitted papers describing their systems. We, the organizers, also wrote a paper summarizing the competition and the lessons learned. All are included in the proceedings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We would like to thank all of the authors and the shared task participants for their contributions, the program committee for their thoughtful reviews (especially those who kindly help conduct emergency reviews), the steering committee for their advice and selection of best research papers, the keynote speakers for sharing their vision and outlook, the sponsors (The Artificial Intelligence Journal and Salesforce Research) for their generous support, and all the attendees for their participation. We believe that all of these will contribute to a lively and successful workshop. Looking forward to meeting you all (virtually) at Eval4NLP 2021! Eval4NLP 2021 Organization Team, Yang Gao, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, Marina Fomicheva",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Edward Gow-Smith, Carolina Scarton and Aline Villavicencio [Findings] Entity-Based Semantic Adequacy for Data-to-Text Generation Juliette Faille, Albert Gatt and Claire Gardent Differential Evaluation: a Qualitative Analysis of Natural Language Processing System Behavior Based Upon Data Resistance to Processing Lucie Gianola, Hicham El Boukkouri",
"authors": [],
"year": null,
"venue": "How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology? Chantal Amrhein and Rico Sennrich [Findings] AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models Harish Tayyar Madabushi",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Findings] How Suitable Are Subword Segmentation Strategies for Translating Non- Concatenative Morphology? Chantal Amrhein and Rico Sennrich [Findings] AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models Harish Tayyar Madabushi, Edward Gow-Smith, Carolina Scarton and Aline Villav- icencio [Findings] Entity-Based Semantic Adequacy for Data-to-Text Generation Juliette Faille, Albert Gatt and Claire Gardent Differential Evaluation: a Qualitative Analysis of Natural Language Processing System Behavior Based Upon Data Resistance to Processing Lucie Gianola, Hicham El Boukkouri, Cyril Grouin, Thomas Lavergne, Patrick Paroubek and Pierre Zweigenbaum Validating Label Consistency in NER Data Annotation Qingkai Zeng, Mengxia Yu, Wenhao Yu, Tianwen Jiang and Meng Jiang 10:45-11:25 Keynote Talk 2 11:30-12:10 Paper Presentation Session 2",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task Urja Khurana, Eric Nalisnick and Antske Fokkens [Findings] Challenges in Detoxifying Language Models Johannes Welbl",
"authors": [],
"year": null,
"venue": "",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task Urja Khurana, Eric Nalisnick and Antske Fokkens [Findings] Challenges in Detoxifying Language Models Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin and Po-Sen Huang [Findings] Adversarial Examples for Evaluating Math Word Problem Solvers Vivek Kumar, Rishabh Maheshwary and Vikram Pudi November 10, 2021 (continued)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TURINGBENCH: A Benchmark Environment for Turing Test in the Age of Neural Text Generation Adaku Uchendu",
"authors": [
{
"first": "F",
"middle": [],
"last": "Nelson",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Findings] Making Heads and Tails of Models with Marginal Calibration for Sparse Tagsets Michael Kranzlein, Nelson F. Liu and Nathan Schneider [Findings] TURINGBENCH: A Benchmark Environment for Turing Test in the Age of Neural Text Generation Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang and Dongwon Lee StoryDB: Broad Multi-language Narrative Dataset Alexey Tikhonov, Igor Samenko and Ivan Yamshchikov 12:15-12:55 Keynote Talk 3 13:00-14:00 Lunch Break 14:00-14:40 Keynote Talk 4 14:45-15:25 Paper Presentation Session 3",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Nolan Holley and Constantine Lignos Trainable Ranking Models to Evaluate the Semantic Accuracy of Data-to-Text Neural Generator Nicolas Garneau and Luc Lamontagne [Findings] TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning Kexin Wang, Nils Reimers and Iryna Gurevych",
"authors": [],
"year": 2021,
"venue": "SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation Chester Palen-Michel",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evalua- tion Chester Palen-Michel, Nolan Holley and Constantine Lignos Trainable Ranking Models to Evaluate the Semantic Accuracy of Data-to-Text Neu- ral Generator Nicolas Garneau and Luc Lamontagne [Findings] TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning Kexin Wang, Nils Reimers and Iryna Gurevych November 10, 2021 (continued) 15:30-16:10 Keynote Talk 5 16:15-16:55 Paper Presentation Session 4",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Concluding Remarks Recordings/Posters Developing a Benchmark for Reducing Data Bias in Authorship Attribution Benjamin Murauer and G\u00fcnther Specht Error-Sensitive Evaluation for Ordinal Target Variables David Chen, Maury Courtland, Adam Faulkner and Aysu Ezen-Can HinGE: A Dataset for Generation and Evaluation of Code-Mixed Hinglish Text Vivek Srivastava and Mayank Singh What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP Oskar Wysocki, Malina Florea, D\u00f3nal Landers and Andr\u00e9 Freitas The UMD Submission to the Explainable MT Quality Estimation Shared Task: Combining Explanation Models with Sequence Labeling Tasnim Kabir and Marine Carpuat Explaining Errors in Machine Translation with Absolute Gradient Ensembles Melda Eksi",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Ner Justin Payan",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Merhav",
"suffix": ""
},
{
"first": "Satyapriya",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Anil",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Mukund",
"middle": [],
"last": "Ramakrishna",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2021,
"venue": "Statistically Significant Detection of Semantic Shifts using Contextual Word Embeddings Yang Liu, Alan Medlar and Dorota Glowacka [Findings] Benchmarking Meta-embeddings: What Works and What Does Not Iker Garc\u00eda",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ESTIME: Estimation of Summary-to-Text Inconsistency by Mismatched Embeddings Oleg Vasilyev and John Bohannon [Findings] Towards Realistic Single-Task Continuous Learning Research for NER Justin Payan, Yuval Merhav, He Xie, Satyapriya Krishna, Anil Ramakrishna, Mukund Sridhar and Rahul Gupta Statistically Significant Detection of Semantic Shifts using Contextual Word Embed- dings Yang Liu, Alan Medlar and Dorota Glowacka [Findings] Benchmarking Meta-embeddings: What Works and What Does Not Iker Garc\u00eda, Rodrigo Agerri and German Rigau Referenceless Parsing-Based Evaluation of AMR-to-English Generation Emma Manning and Nathan Schneider MIPE: A Metric Independent Pipeline for Effective Code-Mixed NLG Evaluation Ayush Garg, Sammed Kagi, Vivek Srivastava and Mayank Singh 17:00-17:45 Shared Task Presentation & Award Announcement IST-Unbabel 2021 Submission for the Explainable Quality Estimation Shared Task Marcos Treviso, Nuno M. Guerreiro, Ricardo Rei and Andr\u00e9 F. T. Martins Error Identification for Machine Translation with Metric Embedding and Attention Raphael Rubino, Atsushi Fujita and Benjamin Marie Reference-Free Word-and Sentence-Level Translation Evaluation with Token- Matching Metrics Christoph Wolfgang Leiter The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Re- sults Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger and Yang Gao Award Announcement Eval4NLP 2021 Organizers November 10, 2021 (continued) 17:50-18:00 Concluding Remarks Recordings/Posters Developing a Benchmark for Reducing Data Bias in Authorship Attribution Benjamin Murauer and G\u00fcnther Specht Error-Sensitive Evaluation for Ordinal Target Variables David Chen, Maury Courtland, Adam Faulkner and Aysu Ezen-Can HinGE: A Dataset for Generation and Evaluation of Code-Mixed Hinglish Text Vivek Srivastava and Mayank Singh What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP Oskar Wysocki, Malina Florea, D\u00f3nal Landers and Andr\u00e9 Freitas The UMD Submission to the Explainable MT Quality Estimation Shared Task: Com- bining Explanation Models with Sequence Labeling Tasnim Kabir and Marine Carpuat Explaining Errors in Machine Translation with Absolute Gradient Ensembles Melda Eksi, Erik Gelbing, Jonathan Stieber and Chi Viet Vu Explainable Quality Estimation: CUNI Eval4NLP Submission Peter Pol\u00e1k, Muskaan Singh and Ond\u0159ej Bojar [Non-archival] How Robust are Model Rankings: A Leaderboard Customization Approach for Equitable Evaluation Swaroop Mishra and Anjana Arunkumar [Non-archival] AI as Author -Bridging the Gap Between Machine Learning and Literary Theory Imke van Heerden and Anil Bas [Non-archival] The statistical advantage of automatic NLG metrics at the system level Johnny Tian-Zheng Wei and Robin Jia [Findings] A Comprehensive Comparison of Word Embeddings in Event & Entity Coreference Resolution Judicael POUMAY and Ashwin Ittoo xii November 10, 2021 (continued)",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Expected Validation Performance and Estimation of a Random Variable's Maximum Jesse Dodge",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "GooAQ: Open Question Answering with Diverse Answer Types Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sabharwal, Hannaneh Hajishirzi and Chris Callison-Burch [Findings] Sometimes We Want Ungrammatical Translations Prasanna Parthasarathi",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Findings] Expected Validation Performance and Estimation of a Random Vari- able's Maximum Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz and Noah A. Smith [Findings] GooAQ: Open Question Answering with Diverse Answer Types Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sabharwal, Hannaneh Hajishirzi and Chris Callison-Burch [Findings] Sometimes We Want Ungrammatical Translations Prasanna Parthasarathi, Koustuv Sinha, Joelle Pineau and Adina Williams",
"links": null
}
},
"ref_entries": {}
}
}