{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:33.989037Z"
},
"title": "",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {"institution": "UC Santa Barbara"},
"email": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Thomason",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {"institution": "UC Berkeley"},
"email": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {"institution": "Facebook AI Research"},
"email": ""
},
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": "",
"affiliation": {"institution": "Georgia Tech"},
"email": ""
},
{
"first": "Qi",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {"institution": "Simon Fraser University"},
"email": ""
},
{
"first": "Devendra",
"middle": [],
"last": "Chaplot",
"suffix": "",
"affiliation": {"institution": "CMU"},
"email": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {"institution": "Georgia Tech"},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": "",
"affiliation": {"institution": "UC Berkeley"},
"email": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kanan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ray",
"middle": [],
"last": "Mooney",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hamid",
"middle": [],
"last": "Palangi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Raunak",
"suffix": "",
"affiliation": {"institution": "CMU"},
"email": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": "",
"affiliation": {"institution": "CMU"},
"email": ""
},
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": "",
"affiliation": {"institution": "Amazon"},
"email": ""
},
{
"first": "Khyathi",
"middle": [
"Raghavi"
],
"last": "Chandu",
"suffix": "",
"affiliation": {"institution": "CMU"},
"email": ""
},
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": "",
"affiliation": {"institution": "NLM"},
"email": ""
},
{
"first": "Thoudam",
"middle": [
"Doren"
],
"last": "Singh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alok",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": "",
"affiliation": {"institution": "Cornell"},
"email": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": "",
"affiliation": {"institution": "CMU"},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Riedl",
"suffix": "",
"affiliation": {"institution": "Georgia Tech"},
"email": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Language and vision research has attracted great attention from both natural language processing (NLP) and computer vision (CV) researchers. Gradually, this area is shifting from passive perception, templated language, and synthetic imagery or environments to active perception, natural language, and photorealistic simulation or real world deployment. Thus far, few workshops on language and vision research have been organized by groups from the NLP community. We organize the first workshop on Advances in Language and Vision Research (ALVR) in order to promote the frontier of language and vision research and to bring interested researchers together to discuss how to best tackle and solve real-world problems in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "iii The workshop also holds the first Video-guided Machine Translation (VMT) challenge and the REVERIE challenge. The VMT challenge aims to benchmark progress towards models that translate source language sentence into the target language with video information as the additional spatiotemporal context. The challenge is based on the recently released large-scale multilingual video description dataset, VA-TEX. The VATEX dataset contains over 41,250 videos and 825,000 high-quality captions in both English and Chinese, half of which are English-Chinese translation pairs. The REVERIE challenge requires an intelligent agent to correctly localize a remote target object (cannot be observed at the starting location) specified by a concise high-level natural language instruction, such as \"bring me the blue cushion from the sofa in the living room\". Since the target object is in a different room from the starting one, the agent needs first to navigate to the goal location. When the agent determines to stop, it should select one object from a list of candidates provided by the simulator. The agent can attempt to localize the target at any step, which is totally up to algorithm design. But the agent is only allowed to output once in each episode, which means the agent only can guess the answer once in a single run. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"text": "Extending ImageNet to Arabic using Arabic WordNet Abdulkareem Alsudais . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Toward General Scene Graph: Integration of Visual Semantic Knowledge with Entity Synset Alignment Woo Suk Choi, Kyoung-Woon On, Yu-Jung Heo and Byoung-Tak Zhang . . . . . . . . . . . . . . . . . . . . . . 7 Visual Question Generation from Radiology Images Mourad Sarrouti, Asma Ben Abacha and Dina Demner-Fushman . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 On the role of effective and referring questions in GuessWhat?! Mauricio Mazuecos, Alberto Testoni, Raffaella Bernardi and Luciana Benotti . . . . . . . . . . . . . . . . . 19 Latent Alignment of Procedural Concepts in Multimodal Recipes Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal and Parisa Kordjamshidi . . . . . . 26 vii Workshop schedule details: https://alvr-workshop.github.io",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td>Archival track papers presented at the workshop:</td></tr><tr><td>Extending ImageNet to Arabic using Arabic WordNet</td></tr><tr><td>Abdulkareem Alsudais</td></tr><tr><td>Toward General Scene Graph: Integration of Visual Semantic Knowledge with En-</td></tr><tr><td>tity Synset Alignment</td></tr><tr><td>Woo Suk Choi, Kyoung-Woon On, Yu-Jung Heo and Byoung-Tak Zhang</td></tr><tr><td>Visual Question Generation from Radiology Images</td></tr><tr><td>Mourad Sarrouti,</td></tr></table>",
"text": "Asma Ben Abacha and Dina Demner-Fushman On the role of effective and referring questions in GuessWhat?! Mauricio Mazuecos, Alberto Testoni, Raffaella Bernardi and Luciana Benotti Latent Alignment of Procedural Concepts in Multimodal Recipes Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal and Parisa Kordjamshidi ix",
"type_str": "table",
"num": null,
"html": null
}
}
}
}