{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:16:33.126603Z"
},
"title": "Workshop Organizers",
"authors": [
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laura",
"middle": [],
"last": "Aina",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Raquel",
"middle": [
"Garrido"
],
"last": "Alhama",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Aniello",
"middle": [],
"last": "De Santo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Carina",
"middle": [],
"last": "Kauf",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Kodner",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Fred",
"middle": [],
"last": "Mailhot",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Karl",
"middle": [
"David"
],
"last": "Neergaard",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "William",
"middle": [],
"last": "Schuler",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sunit",
"middle": [],
"last": "Bhattacharya",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haller",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andrea",
"middle": [
"E"
],
"last": "Martin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Welcome to the 12th edition of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL)!! CMCL is traditionally the workshop of reference for research at the intersection between Computational Linguistics and Cognitive Science. This year, for the first time CMCL will be held in hybrid mode: virtual attendance will still be allowed, given the persistence of the COVID-19 pandemic, while the inperson meeting will take place in the beautiful Dublin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This year, we received 20 regular workshop submissions and we accepted 10 of them, for a global 50% acceptance rate. We also received two extended abstracts as non-archival submissions, and both of them will be presented during the poster session. As in previous years, submissions have been highly varied across the cognitive sciences, with topics ranging from the relationship between vision and human linguistic-semantic knowledge, the relationship between eye gaze and self-attention in Transformer language models, and an account of the game Codenames. Work ranges from deep neural network approaches to Bayesian cognitive models, learning of phonetic and phonological categories, analyses of neurolinguistic data, and much more. We are thrilled to continue a workshop with the breadth and depth that is emblematic of the fields of cognitive science and natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Last year, we held a shared task on eye-tracking prediction in a variety of measures. This year, we led an additional shared task that built on the success of the previous edition. In the second edition of the shared task on eye-tracking data prediction, this time we included multilingual data from English, Russian, German, Hindi, Chinese, Dutch and Danish, enabling research teams to try a variety of methods and language models far beyond prior eye tracking tasks. A total of six teams participated, of which 5 submitted papers describing their systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "As always, we are extremely grateful to the PC members, without whose efforts we would be unable to ensure high-quality reviews and high-quality work for presentation at the workshop. We are indebted to their generosity and are proud of the community that supports CMCL. We also thank our invited speakers, Andrea E Martin and Vera Demberg for kindly accepting our invitation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Finally, we thank our sponsors: the Japanese Society for the Promotion of Sciences and the Laboratoire Parole et Langage. Through their generous support, we are able to offer fee waivers to PhD students who were first authors of accepted papers, and to offset the participation costs of the invited speakers. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A Neural Model for Compositional Word Embeddings and Sentence Processing Shalom Lappin and Jean-Philippe Bernardy",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Neural Model for Compositional Word Embeddings and Sentence Processing Shalom Lappin and Jean-Philippe Bernardy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Visually Grounded Interpretation of Noun-Noun Compounds in",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Visually Grounded Interpretation of Noun-Noun Compounds in English Inga Lang, Lonneke Van Der Plas, Malvina Nissim and Albert Gatt . . . . . . . . . . . . . . . . . . . . . . . . 23",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP Ece Takmaz",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP Ece Takmaz, Sandro Pezzelle and Raquel Fern\u00e1ndez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Codenames as a Game of Co-occurrence Counting R\u00e9ka Cserh\u00e1ti",
"authors": [
{
"first": "Istvan",
"middle": [],
"last": "Kollath",
"suffix": ""
},
{
"first": "Andr\u00e1s",
"middle": [],
"last": "Kicsi",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Berend",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Codenames as a Game of Co-occurrence Counting R\u00e9ka Cserh\u00e1ti, Istvan Kollath, Andr\u00e1s Kicsi and G\u00e1bor Berend . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling the Relationship between Input Distributions and Learning Trajectories with the Tolerance Principle",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Kodner",
"suffix": ""
},
{
"first": ".",
"middle": [
". ."
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Modeling the Relationship between Input Distributions and Learning Trajectories with the Tolerance Principle Jordan Kodner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting scalar diversity with context-driven uncertainty over alternatives Jennifer",
"authors": [
{
"first": "Roger",
"middle": [
"P"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Schuster",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Predicting scalar diversity with context-driven uncertainty over alternatives Jennifer Hu, Roger P. Levy and Sebastian Schuster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Eye Gaze and Self-attention: How Humans and Transformers Attend Words in Sentences Joshua Bensemann",
"authors": [
{
"first": "Alex",
"middle": [
"Yuxuan"
],
"last": "Peng",
"suffix": ""
},
{
"first": "Diana",
"middle": [
"Benavides"
],
"last": "Prado",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Neset",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Michael"
],
"last": "Corballis",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Riddle",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Michael Witbrock",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eye Gaze and Self-attention: How Humans and Transformers Attend Words in Sentences Joshua Bensemann, Alex Yuxuan Peng, Diana Benavides Prado, Yang Chen, Neset Tan, Paul Michael Corballis, Patricia Riddle and Michael Witbrock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "About Time: Do Transformers Learn Temporal Verbal Aspect? Eleni Metheniti, Tim Van De Cruys and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "About Time: Do Transformers Learn Temporal Verbal Aspect? Eleni Metheniti, Tim Van De Cruys and Nabil Hathout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Poirot at CMCL 2022 Shared Task: Zero Shot Crosslingual Eye-Tracking Data Prediction using Multilingual Transformer Models Harshvardhan Srivastava",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poirot at CMCL 2022 Shared Task: Zero Shot Crosslingual Eye-Tracking Data Prediction using Multi- lingual Transformer Models Harshvardhan Srivastava . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "CMCL 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space Joseph Marvin Imperial",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NU HLT at CMCL 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space Joseph Marvin Imperial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "114 CMCL 2022 Shared Task on Multilingual and Crosslingual Prediction of Human Reading Behavior Nora Hollenstein",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Yu-Yin",
"middle": [
"."
],
"last": "Hsu",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [
"L"
],
"last": ". ; Emmanuele Chersoni",
"suffix": ""
},
{
"first": "Yohei",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Oseki",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Santus",
"suffix": ""
}
],
"year": null,
"venue": "HkAmsters at CMCL 2022 Shared Task: Predicting Eye-Tracking Data from a Gradient Boosting Framework with Linguistic Features Lavinia Salicchi",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HkAmsters at CMCL 2022 Shared Task: Predicting Eye-Tracking Data from a Gradient Boosting Fra- mework with Linguistic Features Lavinia Salicchi, Rong Xiang and Yu-Yin Hsu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 CMCL 2022 Shared Task on Multilingual and Crosslingual Prediction of Human Reading Behavior Nora Hollenstein, Emmanuele Chersoni, Cassandra L Jacobs, Yohei Oseki, Laurent Pr\u00e9vot and Enrico Santus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Team \u00daFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models Sunit Bhattacharya",
"authors": [
{
"first": "Rishu",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Team \u00daFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models Sunit Bhattacharya, Rishu Kumar and Ondrej Bojar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Levy and Sebastian Schuster Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP Ece Takmaz, Sandro Pezzelle and Raquel Fern\u00e1ndez Modeling the Relationship between Input Distributions and Learning Trajectories with the Tolerance Principle Jordan Kodner NU HLT at CMCL 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space Joseph Marvin Imperial Team DMG at CMCL 2022 Shared Task: Transformer Adapters for the Multiand Cross-Lingual Prediction of Human Reading Behavior Ece Takmaz Team \u00daFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models Sunit Bhattacharya, Rishu Kumar and Ondrej Bojar HkAmsters at CMCL 2022 Shared Task: Predicting Eye-Tracking Data from a Gradient Boosting Framework with Linguistic Features Lavinia Salicchi",
"authors": [
{
"first": "",
"middle": [],
"last": "Thursday",
"suffix": ""
}
],
"year": 2022,
"venue": "Poster Session Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model Richard Futrell Predicting scalar diversity with context-driven uncertainty over alternatives Jennifer Hu",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thursday, May 26, 2022 (continued) 15:30 -17:00 Poster Session Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model Richard Futrell Predicting scalar diversity with context-driven uncertainty over alternatives Jennifer Hu, Roger P. Levy and Sebastian Schuster Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP Ece Takmaz, Sandro Pezzelle and Raquel Fern\u00e1ndez Modeling the Relationship between Input Distributions and Learning Trajecto- ries with the Tolerance Principle Jordan Kodner NU HLT at CMCL 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space Joseph Marvin Imperial Team DMG at CMCL 2022 Shared Task: Transformer Adapters for the Multi- and Cross-Lingual Prediction of Human Reading Behavior Ece Takmaz Team \u00daFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models Sunit Bhattacharya, Rishu Kumar and Ondrej Bojar HkAmsters at CMCL 2022 Shared Task: Predicting Eye-Tracking Data from a Gradient Boosting Framework with Linguistic Features Lavinia Salicchi, Rong Xiang and Yu-Yin Hsu Poirot at CMCL 2022 Shared Task: Zero Shot Crosslingual Eye-Tracking Data Prediction using Multilingual Transformer Models Harshvardhan Srivastava",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "The CMCL 2022 Organizing Committee Team DMG at CMCL 2022 Shared Task: Transformer Adapters for the Multi-and Cross-Lingual Prediction of Human Reading Behavior Ece Takmaz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}