|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:24:44.679098Z" |
|
}, |
|
"title": "", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [], |
|
"body_text": [ |
|
{ |
|
"text": "We are excited to present the inaugural workshop on semiparametric methods in NLP. The field of natural language processing (NLP) has undergone a paradigm shift with the dramatic success of large pre-trained language models (LMs) on almost every downstream task. These large parametric models are based on the transformer architecture and are trained on massive collections of data using self-supervised learning, then fine-tuned on a relatively small set of task-specific supervised examples. The success of this simple recipe of homogeneous architectures and transfer learning has led to its widespread adoption. Despite these successes, parametric models lack several desirable properties. For example, they use knowledge stored in their parameters to perform tasks without providing provenance or transparency into the model's mechanisms. This is exacerbated when they make an erroneous prediction, as it is challenging to understand what went wrong and how to fix it. Moreover, as new information arrives, existing knowledge becomes obsolete and should be updated; however, it is currently difficult to update the knowledge stored in the parameters of LMs. Among other issues, this has implications for personal privacy, as we do not have a robust way to execute requests for deletion of personal information that may be stored in a model's parameters. Nonparametric instance-based models, on the other hand, offer many of these missing properties by design: a model capacity that naturally grows with data, easy addition and deletion of knowledge, and provenance for predictions based on the nearest neighbors of the input. However, these models often have weaker empirical performance than deep parametric models. Semiparametric models are statistical models that combine a fixed parametric component with a flexible nonparametric component. Combining the advantages of both paradigms has the potential to remedy many of the shortcomings described above: for example, the nonparametric component can provide vast amounts of background knowledge, while the parametric component encodes the logic required to solve the problem. Many recent works have independently proposed approaches that combine a parametric model with a nonparametric one, in areas ranging from question answering, language modeling, and machine translation to protein structure prediction. Given the increasingly promising results of such semiparametric models on a variety of tasks, we believe this area is ripe for targeted investigation into its efficiency, generalization, and limitations, and for widening its applicability. This workshop invited previously unpublished work as archival submissions, in addition to a non-archival track of previously published work, recognising the fast-moving nature of this area and the large amount of recently introduced work. After withdrawals, we accepted a total of 5 archival papers and 21 non-archival papers. Our final program thus includes 26 papers, 5 of which will be included in the proceedings. We are excited to host six stellar invited speakers, who will each lend their perspective to this exciting and rapidly evolving area. In the morning session, we will host Anna Potapenko, and in the afternoon session, we will host Danqi Chen, Jason Weston, Andrew McCallum and Hannaneh Hajishirzi. We shall finish with a panel discussion. We thank these speakers, our program committee, the ACL workshop chairs, and our sponsors, Google and Meta, for helping to make this workshop possible.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following papers were submitted to our workshop as non-archival submissions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Archival Papers", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Program", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Organizing Committee", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Archival Track Contributed Talk: Efficient Machine Translation Domain Adaptation: Pedro Henrique Martins, Zita Marinho", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Andre Martins", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Archival Track Contributed Talk: Efficient Machine Translation Domain Adaptation: Pedro Henrique Martins, Zita Marinho, Andre Martins 10:20 -10:30", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Track Contributed Talks 1: Internet-augmented language models through few-shot prompting for open-domain question answering: Angeliki Lazaridou", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Non-Archival", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Non-Archival Track Contributed Talks 1: Internet-augmented language models through few-shot prompting for open-domain question answering: Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, Nikolai Grigorev 10:30 -10:40",
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Track Contributed Talks 2: Towards Unsupervised Dense Information Retrieval with Contrastive Learning: Gautier Izacard", |
|
"authors": [ |
|
{ |
|
"first": "Non-Archival ; Mathilde", |
|
"middle": [], |
|
"last": "Caron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Hosseini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Edouard Grave", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Non-Archival Track Contributed Talks 2: Towards Unsupervised Dense Information Retrieval with Contrastive Learning: Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave 10:40 -10:50",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Non-Archival Track Contributed Talks 3: Towards Continual Knowledge Learning of Language Models: Joel Jang", |
|
"authors": [ |
|
{ |
|
"first": "Seonghyeon", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sohee", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joongbo", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janghoon", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gyeonghun", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanley", |
|
"middle": [ |
|
"Jungkyu" |
|
], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Non-Archival Track Contributed Talks 3: Towards Continual Knowledge Learning of Language Models: Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo 10:50 -11:00 Coffee Break 11:00 -12:00 Poster Session I 12:00 -13:30 Lunch Break 13:30 -14:10 Invited Talk 2: Danqi Chen 14:10 -14:50 Invited Talk 3: Jason Weston 14:50 -15:00 Coffee Break 15:00 -16:00 Poster Session II 16:00 -16:40 Invited Talk 4: Andrew McCallum 16:40 -17:20 Invited Talk 5: Hannaneh Hajishirzi 17:20 -17:50 Panel Discussion 17:50 -18:00 Closing Remarks",
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"content": "<table><tr><td>\u2022 Controllable Semantic Parsing via Retrieval Augmentation Panupong Pasupat, Yuan Zhang, Kelvin Guu</td></tr><tr><td>\u2022 StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, Angeliki Lazaridou</td></tr><tr><td>Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao</td></tr><tr><td>\u2022 On the Effect of Pretraining Corpora on In-context Few-shot Learning by a Large-scale Language Model Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, HyoungSeok Kim, Boseop Kim, Kyunghyun Cho, Gichang Lee, Woomyoung Park, Jung-Woo Ha, Nako Sung</td></tr><tr><td>\u2022 KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, Xuanjing Huang</td></tr><tr><td>\u2022 Exploring Dual Encoder Architectures for Question Answering Zhe Dong, Jianmo Ni, Daniel M. Bikel, Enrique Alfonseca, Yuan Wang, Chen Qu, Imed Zitouni</td></tr><tr><td>\u2022 Learning to Retrieve Passages without Supervision Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, Amir Globerson</td></tr><tr><td>\u2022 Towards Unsupervised Dense Information Retrieval with Contrastive Learning Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave</td></tr><tr><td>\u2022 Internet-augmented language models through few-shot prompting for open-domain question answering Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, Nikolai Grigorev</td></tr><tr><td>\u2022 Towards Interactive Language Modeling Maartje ter Hoeve, Evgeny Kharitonov, Dieuwke Hupkes, Emmanuel Dupoux</td></tr><tr><td>\u2022 GUD-IR: Generative Retrieval for Semiparametric Models Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang</td></tr><tr><td>\u2022 Less is More: Summary of Long Instructions is Better for Program Synthesis Kirby Kuznia, Swaroop Mishra, Mihir Parmar, Chitta Baral</td></tr><tr><td>\u2022 How Many Data Samples is an Additional Instruction Worth? Ravsehaj Singh Puri, Swaroop Mishra, Mihir Parmar, Chitta Baral</td></tr><tr><td>\u2022 Is Retriever Merely an Approximator of Reader? Sohee Yang, Minjoon Seo</td></tr><tr><td>\u2022 TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo</td></tr><tr><td>\u2022 Unsupervised Cross-Task Generalization via Retrieval Augmentation Bill Yuchen Lin, Kangmin Tan, Chris Scott Miller, Beiwen Tian, Xiang Ren</td></tr></table>",
|
"type_str": "table", |
|
"num": null, |
|
"text": "Learning To Retrieve Prompts for In-Context Learning Ohad Rubin, Jonathan Herzig, Jonathan Berant", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Improving Discriminative Learning for Zero-Shot Relation Extraction Van-Hien Tran, Hiroki Ouchi, Taro Watanabe and Yuji Matsumoto . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral and Yingbo Zhou . . . . . . 7 Efficient Machine Translation Domain Adaptation Pedro Martins, Zita Marinho and Andre Martins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Field Extraction from Forms with Unlabeled Data Mingfei Gao, Zeyuan Chen, Nikhil Naik, Kazuma Hashimoto, Caiming Xiong and Ran Xu . . . 30 Knowledge Base Index Compression via Dimensionality and Precision Reduction Vil\u00e9m Zouhar, Marius Mosbach, Miaoran Zhang and Dietrich Klakow . . . . . . . . . . . . . . . . . . . . . . 41", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |