{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:42:23.060449Z"
},
"title": "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation",
"authors": [
{
"first": "Wanrong",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Xin",
"middle": [
"Eric"
],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tsu-Jui",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "An",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Pradyumna",
"middle": [],
"last": "Narayana",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kazoo",
"middle": [],
"last": "Sone",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sugato",
"middle": [],
"last": "Basu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the most challenging topics in Natural Language Processing (NLP) is visuallygrounded language understanding and reasoning. Outdoor vision-and-language navigation (VLN) is such a task where an agent follows natural language instructions and navigates a real-life urban environment. Due to the lack of human-annotated instructions that illustrate intricate urban scenes, outdoor VLN remains a challenging task to solve. This paper introduces a Multimodal Text Style Transfer (MTST) learning approach and leverages external multimodal resources to mitigate data scarcity in outdoor navigation tasks. We first enrich the navigation data by transferring the style of the instructions generated by Google Maps API, then pre-train the navigator with the augmented external outdoor navigation dataset. Experimental results show that our MTST learning approach is model-agnostic, and our MTST approach significantly outperforms the baseline models on the outdoor VLN task, improving task completion rate by 8.7% relatively on the test set. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the most challenging topics in Natural Language Processing (NLP) is visuallygrounded language understanding and reasoning. Outdoor vision-and-language navigation (VLN) is such a task where an agent follows natural language instructions and navigates a real-life urban environment. Due to the lack of human-annotated instructions that illustrate intricate urban scenes, outdoor VLN remains a challenging task to solve. This paper introduces a Multimodal Text Style Transfer (MTST) learning approach and leverages external multimodal resources to mitigate data scarcity in outdoor navigation tasks. We first enrich the navigation data by transferring the style of the instructions generated by Google Maps API, then pre-train the navigator with the augmented external outdoor navigation dataset. Experimental results show that our MTST learning approach is model-agnostic, and our MTST approach significantly outperforms the baseline models on the outdoor VLN task, improving task completion rate by 8.7% relatively on the test set. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A key challenge for Artificial Intelligence research is to go beyond static observational data and consider more challenging settings that involve dynamic actions and incremental decision-making processes (Fenton et al., 2020) . Outdoor visionand-language navigation (VLN) is such a task, where an agent navigates in an urban environment by grounding natural language instructions in visual scenes, as illustrated in Fig. 1 . To generate a series of correct actions, the navigation agent must comprehend the instructions and reason through the visual environment. Figure 1 : An outdoor VLN example with instructions generated by Google Maps API (ground truth), the Speaker model, and our MTST model. Tokens marked in red indicate incorrectly generated instructions, while the blue tokens suggest alignments with the ground truth. The orange bounding boxes show that the objects in the surrounding environment have been successfully injected into the style-modified instruction.",
"cite_spans": [
{
"start": 205,
"end": 226,
"text": "(Fenton et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 417,
"end": 423,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 564,
"end": 572,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different from indoor navigation Fried et al., 2018; Wang et al., 2019; Ma et al., 2019a; Ma et al., 2019b; Ke et al., 2019) , the outdoor navigation task takes place in urban environments that contain diverse street views (Mirowski et al., 2018; Chen et al., 2019; Mehta et al., 2020) . The vast urban area leads to a much larger space for an agent to explore and usually contains longer trajectories and a wider range of objects for visual grounding. This requires more informative instructions to address the complex navigation environment. However, it is expensive to collect human-annotated instructions that depict the complicated visual scenes to train a navigation agent. The issue of data scarcity limits the navigator's performance in the outdoor VLN task.",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "Fried et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 53,
"end": 71,
"text": "Wang et al., 2019;",
"ref_id": "BIBREF48"
},
{
"start": 72,
"end": 89,
"text": "Ma et al., 2019a;",
"ref_id": "BIBREF32"
},
{
"start": 90,
"end": 107,
"text": "Ma et al., 2019b;",
"ref_id": "BIBREF33"
},
{
"start": 108,
"end": 124,
"text": "Ke et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 223,
"end": 246,
"text": "(Mirowski et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 247,
"end": 265,
"text": "Chen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 266,
"end": 285,
"text": "Mehta et al., 2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To deal with the data scarcity issue, Fried et al. (2018) proposes a Speaker model to generate additional training pairs. However, synthesizing instructions purely from visual signals is hard, especially for outdoor environments, due to visual complexity.",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "Fried et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, template-based navigation instructions on the street view can be easily obtained via the Google Map API, which may serve as additional learning signals to boost outdoor navigation tasks. But instructions generated by Google Maps API mainly consist of street names and directions, while human-annotated instructions in the outdoor navigation task frequently refer to street-view objects in the panorama. The distinct instruction style hinders the full utilization of external resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, we present a novel Multimodal Text Style Transfer (MTST) learning approach to narrow the gap between template-based instructions in the external resources and the human-annotated instructions for the outdoor navigation task. It can infer style-modified instructions for trajectories in the external resources and thus mitigate the data scarcity issue. Our approach can inject more visual objects in the navigation environment to the instructions ( Fig. 1 ), while providing direction guidance. The enriched object-related information can help the navigation agent learn the grounding between the visual environment and the instruction.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 465,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, different from previous LSTM-based navigation agents, we propose a new VLN Transformer to predict outdoor navigation actions. Experimental results show that utilizing external resources provided by Google Maps API during the pre-training process improves the navigation agent's performance on Touchdown, a dataset for outdoor VLN (Chen et al., 2019) . In addition, pretraining with the style-modified instructions generated by our multimodal text style transfer model can further improve navigation performance and make the pre-training process more robust. In summary, the contribution of our work is four-fold:",
"cite_spans": [
{
"start": 340,
"end": 359,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transfer learning approach to generate stylemodified instructions for external resources and tackle the data scarcity issue in the outdoor VLN task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 We present a new Multimodal Text Style",
"sec_num": null
},
{
"text": "\u2022 We provide the Manh-50 dataset with stylemodified instructions as an auxiliary dataset for outdoor VLN training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 We present a new Multimodal Text Style",
"sec_num": null
},
{
"text": "\u2022 We propose a novel VLN Transformer model as the navigation agent for outdoor VLN and validate its effectiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 We present a new Multimodal Text Style",
"sec_num": null
},
{
"text": "\u2022 We improve the task completion rate by 8.7% relatively on the test set for the outdoor VLN task with the VLN Transformer model pretrained on the external resources processed by our MTST approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 We present a new Multimodal Text Style",
"sec_num": null
},
{
"text": "Vision-and-Language Navigation (VLN) is a task that requires an agent to achieve the final goal based on the given instructions in a 3D environment. Besides the generalizability problem studied by previous works (Wang et al., , 2019 , the data scarcity problem is another critical issue for the VLN task, expecially in the outdoor environment (Chen et al., 2019; Mehta et al., 2020; Xiang et al., 2020) . Fried et al. (2018) obtains a broad set of augmented training data for VLN by sampling trajectories in the navigation environment and using the Speaker model to back-translate their instructions. However, the Speaker model might cause the error propagation issue since it is not trained on large corpora to optimize generalization. While most existing works select navigation actions dynamically along the way in the unseen environment during testing, Majumdar et al. (2020) proposes to test in previously explored environments and convert the VLN task to a classification task over the possible paths. This approach performs well in the indoor setting, but is not suitable for outdoor VLN where the environment graph is different. Multimodal Pre-training has attracted much attention to improving multimodal tasks performances. The models usually adopt the Transformer structure to encode the visual features and the textual features Chen et al., 2020; Sun et al., 2019; Huang et al., 2020b; Luo et al., 2020; Zheng et al., 2020; Wei et al., 2020; Tsai et al., 2019) . During pre-training, these models use tasks such as masked language modeling, masked region modeling, image-text matching to learn the cross-modal encoding ability, which later benefits the multimodal downstream tasks. Majumdar et al. (2020) proposes to use image-text pairs from the web to pre-train VLN-BERT, a visiolinguistic transformer-based model similar to the model proposed by . A concurrent work by proposes to use Transformer for indoor VLN. Our VLN Transformer is different from their model in several key aspects: (1) The pre-training objectives are different: pre-trains the model on the same dataset for training, while we create an augmented, stylized dataset for outdoor VLN using the proposed MTST method. (2) Benefiting from the effective external resource, a simple navigation loss is employed in our VLN Transformer, while they adopt the masked language modeling to better train their model. (3) Model-wise, instead of encoding the whole instruction into one feature, we use sentence-level encoding since Touchdown instructions are much longer than R2R instructions. (4) We encode the trajectory history, while their model encodes the panorama for the current step. Unsupervised Text Style Transfer is an approach to mitigate the lack of parallel data for supervised training. One line of work encodes the text into a latent vector and manipulate the text representation in the latent space to transfer the style. Shen et al. 2017; Hu et al. (2017) ; use variational auto-encoder to encode the text, and use a discriminator to modify text style. John et al. 2019; Fu et al. (2018) rely on models with encoder-decoder structure to transfer the style. Another line of work enriches the training data by generating pseudo-parallel data via back-translation (Artetxe et al., 2018; Lample et al., 2018b,a; Zhang et al., 2018) .",
"cite_spans": [
{
"start": 212,
"end": 232,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF48"
},
{
"start": 343,
"end": 362,
"text": "(Chen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 363,
"end": 382,
"text": "Mehta et al., 2020;",
"ref_id": "BIBREF35"
},
{
"start": 383,
"end": 402,
"text": "Xiang et al., 2020)",
"ref_id": "BIBREF52"
},
{
"start": 405,
"end": 424,
"text": "Fried et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 857,
"end": 879,
"text": "Majumdar et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 1340,
"end": 1358,
"text": "Chen et al., 2020;",
"ref_id": "BIBREF56"
},
{
"start": 1359,
"end": 1376,
"text": "Sun et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 1377,
"end": 1397,
"text": "Huang et al., 2020b;",
"ref_id": "BIBREF18"
},
{
"start": 1398,
"end": 1415,
"text": "Luo et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 1416,
"end": 1435,
"text": "Zheng et al., 2020;",
"ref_id": "BIBREF56"
},
{
"start": 1436,
"end": 1453,
"text": "Wei et al., 2020;",
"ref_id": "BIBREF50"
},
{
"start": 1454,
"end": 1472,
"text": "Tsai et al., 2019)",
"ref_id": "BIBREF45"
},
{
"start": 1694,
"end": 1716,
"text": "Majumdar et al. (2020)",
"ref_id": "BIBREF34"
},
{
"start": 2928,
"end": 2944,
"text": "Hu et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 3060,
"end": 3076,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 3250,
"end": 3272,
"text": "(Artetxe et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 3273,
"end": 3296,
"text": "Lample et al., 2018b,a;",
"ref_id": null
},
{
"start": 3297,
"end": 3316,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the vision-and-language navigation task, the reasoning navigator is asked to find the correct path to reach the target location following the instructions (a set of sentences) X = {s 1 , s 2 , . . . , s m }. The navigation procedure can be viewed as a series of decision making processes. At each time step t, the navigation environment presents an image view v t . With reference to the instruction X and the visual view v t , the navigator is expected to choose an action a t \u2208 A. The action set A for urban environment navigation usually contains four actions, namely turn left, turn right, go forward, and stop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
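To make the decision process above concrete, the following minimal sketch rolls out one navigation episode with the four-action set described in this section. The navigator and environment interfaces (navigator.act, env.reset, env.step) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the outdoor VLN decision loop (hypothetical interfaces).
ACTIONS = ["turn_left", "turn_right", "go_forward", "stop"]

def run_episode(navigator, env, instruction_sentences, max_steps=100):
    """At each step t the agent observes an image view v_t and, conditioned on
    the instruction X = {s_1, ..., s_m} and its history, picks an action a_t."""
    trajectory = []
    view = env.reset()                          # initial image view v_1
    for _ in range(max_steps):
        action = navigator.act(instruction_sentences, view, trajectory)
        assert action in ACTIONS
        trajectory.append((view, action))
        if action == "stop":
            break
        view = env.step(action)                 # next image view
    return trajectory
```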
{
"text": "Our Multimodal Text Style Transfer (MTST) learning mainly consists of two modules, namely the multimodal text style transfer model and the VLN Transformer. Fig. 2 provides an overview of our MTST approach. We use the multimodal text style transfer model to narrow the gap between the human-annotated instructions for the outdoor navigation task and the machine-generated instruc- tions in the external resources. The multimodal text style transfer model is trained on the dataset for outdoor navigation, and it learns to infer stylemodified instructions for trajectories in the external resources. The VLN Transformer is the navigation agent that generates actions for the outdoor VLN task. It is trained with a two-stage training pipeline. We first pre-train the VLN Transformer on the external resources with the style-modified instructions and then fine-tune it on the outdoor navigation dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 162,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.2"
},
{
"text": "Instruction Style The navigation instructions vary across different outdoor VLN datasets. As shown in Table 1 , the instructions generated by Google Maps API is template-based and mainly consists of street names and directions. In contrast, humanannotated instructions for the outdoor VLN task emphasize the visual environment's attributes as navigation targets. It frequently refers to objects in the panorama, such as traffic lights, cars, awnings, etc. The goal of conducting multimodal text style transfer is to inject more object-related information in the surrounding navigation environment to the machine-generated instruction while keeping the correct guiding signals.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "Masking-and-Recovering Scheme The multimodal text style transfer model is trained with a \"masking-and-recovering\" (Zhu et al., 2019; Donahue et al., 2020; Huang et al., 2020a) scheme to inject objects that appeared in the panorama into the instructions. We mask out certain portions in the instructions and try to recover the missing contents with the help of the remaining instruction skeleton and the paired trajectory. To be specific, we use NLTK (Bird et al., 2009) to mask out the object-related tokens in the human-annotated instructions, and the street names Figure 3 : An example of the training and inference process of the multimodal text style transfer model. During training, we mask out the objects in the human-annotated instructions to get the instruction template. The model takes both the trajectory and the instruction skeleton as input, and the training objective is to recover the instructions with objects. When inferring new instructions for external trajectories, we mask the street names in the original instructions and prompt the model to generate new object-grounded instructions.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "(Zhu et al., 2019;",
"ref_id": null
},
{
"start": 133,
"end": 154,
"text": "Donahue et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 155,
"end": 175,
"text": "Huang et al., 2020a)",
"ref_id": "BIBREF17"
},
{
"start": 450,
"end": 469,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 566,
"end": 574,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
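As a concrete illustration of the masking step described above, the sketch below uses NLTK's tokenizer and part-of-speech tagger to turn an instruction into a skeleton, collapsing each run of masked tokens into a single [MASK]. The tag list follows the paper's footnote on masking; the whitelisting of guiding-signal phrases such as "turn left" is simplified away here, and the NLTK resources (punkt, averaged_perceptron_tagger) are assumed to be downloaded.

```python
# Sketch of the masking step in the "masking-and-recovering" scheme.
import nltk

# POS tags treated as maskable (from the paper's footnote on masking).
MASK_TAGS = {"JJ", "JJR", "JJS", "NN", "NNS", "NNP", "NNPS", "PDT", "POS",
             "RB", "RBR", "RBS", "PRP$", "PRP", "MD", "CD"}

def mask_instruction(instruction):
    """Replace runs of object-related tokens with a single [MASK] token."""
    tagged = nltk.pos_tag(nltk.word_tokenize(instruction))
    skeleton, in_run = [], False
    for token, tag in tagged:
        if tag in MASK_TAGS:
            if not in_run:                 # collapse consecutive masked tokens
                skeleton.append("[MASK]")
                in_run = True
        else:
            skeleton.append(token)
            in_run = False
    return " ".join(skeleton)

# Example: mask_instruction("Turn left at the light near the scaffolding.")
```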
{
"text": "in the machine-generated instructions 2 . Multiple tokens that are masked out in a row will be replaced by a single [MASK] token. We aim to maintain the correct guiding signals for navigation after the style transfer process. Tokens that provide guiding signals, such as \"turn left\" or \"take a right\", will not be masked out. Fig. 3 provides an example of the \"masking-and-recovering\" process during training and inferring. Model Structure Fig. 3 illustrates the input and expected output of our multimodal text style transfer model. We build the multimodal text style transfer model upon the Speaker model proposed by Fried et al. (2018) . On top of the visual-attentionbased LSTM (Hochreiter and Schmidhuber, 1997) structure in the Speaker model, we inject the textual attention of the masked instruction skeleton X to the encoder, which allows the model to attend to original guiding signals. The encoder takes both the visual and textual inputs, which encode the trajectory and the masked instruction skeletons. To be specific, each visual view in the trajectory is represented as a feature",
"cite_spans": [
{
"start": 619,
"end": 638,
"text": "Fried et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 682,
"end": 716,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 326,
"end": 332,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 440,
"end": 446,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "vector v = [v v ; v \u03b1 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": ", which is the concatenation of the visual encoding v v \u2208 R 512 and the orientation encoding v \u03b1 \u2208 R 64 . The visual encoding v v is the output of the last but one layer of the RESNET18 (He et al., 2016) of the current view. The orientation encoding v \u03b1 encodes current heading \u03b1 by repeating vector [sin\u03b1, cos\u03b1] for 32 times, which follows Fried et al. (2018) . As described in section 3.4, the feature matrix of a panorama is the concatenation of eight projected visual views.",
"cite_spans": [
{
"start": 186,
"end": 203,
"text": "(He et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 341,
"end": 360,
"text": "Fried et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "In the multimodal style transfer encoder, we use a soft-attention module (Vaswani et al., 2017) to calculate the grounded visual featurev t for current view at step t:",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "attn v t,i = sof tmax((W v h t\u22121 ) T v i ) (1) v t = 8 i=1 = attn v t,i v i (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "where h t\u22121 is the hidden context of previous step, W v refers to the learnable parameters, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "attn v t,i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "is the attention weight over the i th slice of view v i in current panorama. We use full-stop punctuations to split the input text into multiple sentences. The rationale is to enable alignment between the street views and the semantic guidance in sub-instructions. For each sentence in the input text, the textual encoding s is the average of all the tokens' word embedding in the current sentence. We also use a soft-attention modules to calculate the grounded textual featur\u00ea s t at current step t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "attn s t,j = sof tmax((W s h t\u22121 ) T s j ) (3) s t = M j=1 attn s t,j s j",
"eq_num": "(4)"
}
],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "where W s refers to the learnable parameters, attn s t,j is the attention weight over the j th sentence encoding s j at step t, and M denotes the maximum sentence number in the input text. The input text for the multimodal style transfer encoder is the instruction template X . Based on the grounded visual featurev t , the grounded textual feature\u015d t and the visual view feature v t at current timestamp t, the hidden context can be given as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = LST M ([v t ;\u015d t ; v t ])",
"eq_num": "(5)"
}
],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
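A minimal PyTorch sketch of the encoder step in Eqs. (1)-(5) follows; the module layout, the 512-dimensional hidden state, and the unbatched tensor shapes are assumptions made for illustration rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTSTEncoderStep(nn.Module):
    """One time step of the style-transfer encoder: grounded visual and
    textual features via soft attention, then an LSTM update (Eqs. 1-5)."""
    def __init__(self, view_dim=576, sent_dim=256, hidden_dim=512):
        super().__init__()
        self.w_v = nn.Linear(hidden_dim, view_dim, bias=False)   # W_v
        self.w_s = nn.Linear(hidden_dim, sent_dim, bias=False)   # W_s
        self.lstm = nn.LSTMCell(2 * view_dim + sent_dim, hidden_dim)

    def forward(self, h_prev, c_prev, views, sents, v_t):
        # views: (8, view_dim) panorama slices; sents: (M, sent_dim) sentences
        attn_v = F.softmax(views @ self.w_v(h_prev), dim=0)        # Eq. (1)
        v_hat = (attn_v.unsqueeze(1) * views).sum(dim=0)           # Eq. (2)
        attn_s = F.softmax(sents @ self.w_s(h_prev), dim=0)        # Eq. (3)
        s_hat = (attn_s.unsqueeze(1) * sents).sum(dim=0)           # Eq. (4)
        x = torch.cat([v_hat, s_hat, v_t]).unsqueeze(0)            # Eq. (5)
        h_t, c_t = self.lstm(x, (h_prev.unsqueeze(0), c_prev.unsqueeze(0)))
        return h_t.squeeze(0), c_t.squeeze(0)
```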
{
"text": "Training Objectives We train the multimodal text style transfer model in the teacher-forcing manner (Williams and Zipser, 1989) . The decoder generates tokens auto-regressively, conditioning on the masked instruction template X , and the trajectory. The training objective is to minimize the following cross-entropy loss:",
"cite_spans": [
{
"start": 100,
"end": 127,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "L(x1, x2, . . . , xn|X , v 1 , . . . , v N ) = \u2212 log n j=1 P (xj|x1, ..., xj\u22121, X , v 1 , . . . , v N ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
{
"text": "where x 1 , x 2 , . . . , x n denotes the tokens in the original instruction X , n is the total token number in X , and N denotes the maximum view number in the trajectory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer Model",
"sec_num": "3.3"
},
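The objective in Eq. (6) is a token-level cross-entropy under teacher forcing; a brief sketch, assuming a hypothetical decoder that has already produced per-position logits, is shown below.

```python
import torch.nn.functional as F

def style_transfer_loss(logits, target_tokens, pad_id=0):
    """Sum of per-token negative log-likelihoods of the original instruction
    x_1..x_n given the masked template and the trajectory (Eq. 6).
    logits: (n, vocab_size) decoder outputs; target_tokens: (n,) token ids."""
    return F.cross_entropy(logits, target_tokens,
                           ignore_index=pad_id, reduction="sum")
```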
{
"text": "The VLN Transformer is the navigation agent that generates actions in the outdoor VLN task. As illustrated in Fig. 4 , our VLN Transformer is composed of an instruction encoder, a trajectory encoder, a cross-modal encoder that fuses the modality of the instruction encodings and trajectory encodings, and an action predictor. Instruction Encoder The instruction encoder is a pre-trained uncased BERT-base model (Devlin et al., 2019) . Each piece of navigation instruction is split into multiple sentences by the fullstop punctuations. ",
"cite_spans": [
{
"start": 411,
"end": 432,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 110,
"end": 116,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "VLN Transformer",
"sec_num": "3.4"
},
{
"text": "? Figure 4 : Overview of the VLN Transformer. In this example, the VLN Transformer predicts to take a left turn for the visual scene at t = 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 10,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": "{x i,1 , x i,2 , . . . , x i,l i } that contains l i tokens, its sentence embedding h s i is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": "w i,j = BERT (x i,j ) \u2208 R 768 (7) h s i = FC( l i j=1 w i,j l i ) \u2208 R 256 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
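The sentence encoding in Eqs. (7)-(8) can be sketched as below; loading BERT through the HuggingFace transformers package and mean-pooling over all output positions (including special tokens) are assumptions made for brevity.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
fc = nn.Linear(768, 256)                                # FC in Eq. (8)

def sentence_embedding(sentence):
    """Average the BERT token embeddings of one sentence and project to 256-d."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        word_embs = bert(**inputs).last_hidden_state    # (1, l_i, 768), Eq. (7)
    return fc(word_embs.mean(dim=1))                    # (1, 256), Eq. (8)
```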
{
"text": "where w i,j is the word embedding for x i,j generated by BERT, and FC is a fully-connected layer. View Encoder We use the view encoder to retrieve embeddings for the visual views at each time step. Following Chen et al. 2019, we embed each panorama I t by slicing it into eight images and projecting each image from an equirectangular projection to a perspective projection. Each of the projected image of size 800 \u00d7 460 will be passed through the RESNET18 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015) . We use the output of size 128 \u00d7 100 \u00d7 58 from the fourth to last layer before classification as the feature for each slice. The feature map for each panorama is the concatenation of the eight image slices, which is a single tensor of size 128\u00d7100\u00d7464. We center the feature map according to the agent's heading \u03b1 t at timestamp t. We crop a 128 \u00d7 100 \u00d7 100 sized feature map from the center and calculate the mean value along the channel dimension. The resulting 100 \u00d7 100 features is regarded as the current panorama feature\u00ce t for each state. Following Mirowski et al. (2018) , we then apply a three-layer convolutional neural network on\u00ce t to extract the view features h v t \u2208 R 256 at timestamp t. Cross-Modal Encoder In order to navigate through complicated real-world environments, the agent needs to grasp a proper understanding of the natural language instructions and the visual views jointly to choose proper actions for each state. Since the instructions and the trajectory lies in different modalities and are encoded separately, we introduce the cross-modal encoder to fuse the features from different modalities and jointly encode the instructions and the trajectory. The cross-modal encoder is an 8-layer Transformer encoder (Vaswani et al., 2017) with mask. We use eight self-attention heads and a hidden size of 256.",
"cite_spans": [
{
"start": 457,
"end": 474,
"text": "(He et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 499,
"end": 525,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF40"
},
{
"start": 1083,
"end": 1105,
"text": "Mirowski et al. (2018)",
"ref_id": "BIBREF36"
},
{
"start": 1768,
"end": 1790,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": "In the teacher-forcing training process, we add a mask when calculating the multi-head selfattention across different modalities. By masking out all the future views in the ground-truth trajectory, the current view v t is only allowed to refer to the full instructions and all the previous views that the agent has passed by, which is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": "[h s 1 , h s 2 , . . . , h s M ; h v 1 , h v 2 , . . . , h v t\u22121 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": ", where M denotes the maximum sentence number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
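The attention mask described above can be built as in the sketch below: every position may attend to all M sentence encodings, while a view at step t may additionally attend only to views up to step t. How sentence positions treat view positions is not specified in the text, so restricting them to the sentence block is an assumption of this sketch.

```python
import torch

def build_cross_modal_mask(num_sents, num_views):
    """Boolean mask for the cross-modal encoder; True marks positions that a
    query is NOT allowed to attend to (nn.Transformer convention)."""
    size = num_sents + num_views
    allowed = torch.zeros(size, size, dtype=torch.bool)
    allowed[:, :num_sents] = True                        # everyone sees the instruction
    for t in range(num_views):
        row = num_sents + t
        allowed[row, num_sents:num_sents + t + 1] = True  # views up to step t
    return ~allowed
```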
{
"text": "Since the Transformer architecture is based solely on attention mechanism and thus contains no recurrence or convolution, we need to inject additional information about the relative or absolute position of the features in the input sequence. We add a learned segment embedding to every input feature vector specifying whether it belongs to the sentence encodings or the view encodings. We also add a learned position embedding to indicate the relative position of the sentences in the instruction sequence or the trajectory sequence's views.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instruction Encoder View Encoder",
"sec_num": null
},
{
"text": "The action predictor is a fullyconnected layer. It takes the concatenation of the cross-modal encoder's output up to the current timestamp t as input, and predicts the action a t for view v t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action Predictor",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h concat = h s 1 || . . . ||h s M ||h v 1 || . . . ||h v t (9) a t = argmax(FC(T (h concat )))",
"eq_num": "(10)"
}
],
"section": "Action Predictor",
"sec_num": null
},
{
"text": "where FC is a fully-connected layer in the action predictor, and T refers to the Transformer operation in the cross-modal encoder. During training, we use the cross-entropy loss for optimization. While the StreetLearn dataset's trajectory contains more panorama along the way on average, the paired instructions are shorter than the Touchdown dataset. We extract a sub-dataset Manh-50 from the original large scale StreetLearn dataset for the convenience of conducting experiments. Manh-50 consists of navigation samples in the Manhattan area that contains no more than 50 panoramas in the whole trajectory, containing 31k training samples. We generate style-transferred instructions for the Manh-50 dataset, which serves as an auxiliary dataset, and will be used to pre-train the navigation models. More details can be found in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action Predictor",
"sec_num": null
},
{
"text": "We use the following metrics to evaluate VLN performance: (1) Task Completion (TC): the accuracy of completing the navigation task correctly. Following Chen et al. (2019) , the navigation result is considered correct if the agent reaches the specific goal or one of the adjacent nodes in the environment graph. (2) Shortest-Path Distance (SPD): the mean distance between the agent's final position and the goal position in the environment graph.",
"cite_spans": [
{
"start": 152,
"end": 170,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "(3) Success weighted by Edit Distance (SED): the normalized Levenshtein edit distance between the path predicted by the agent and the reference path, which is constrained only to the successful navigation. (4) Coverage weighted by Length Score (CLS): a measurement of the fidelity of the agent's path with regard to the reference path. (5) Normalized Dynamic Time Warping (nDTW): the minimized cumulative distance between the predicted path and the reference path, normalized by the reciprocal of the square root of the reference path length. The value is rescaled by taking the negative exponential of the normalized value. (6) Success weighted Dynamic Time Warping (SDTW): the nDTW value where the summation is only over the successful navigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "TC, SPD, and SED are defined by Chen et al. (2019) . CLS is defined by . nDTW and SDTW are originally defined by Ilharco et al. (2019) , in which nDTW is normalized by the length of the reference path. We adjust the normalizing factor to be the reciprocal of the square root of the reference path length for length invariance (Mueen and Keogh, 2016) . In case the reference trajectories length has a salient variance, our modification to the normalizing factor made the nDTW and SDTW scores invariant to the reference length.",
"cite_spans": [
{
"start": 32,
"end": 50,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF42"
},
{
"start": 113,
"end": 134,
"text": "Ilharco et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 326,
"end": 349,
"text": "(Mueen and Keogh, 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
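A small sketch of the modified nDTW computation follows: the cumulative DTW cost between the predicted and reference paths is divided by the square root of the reference path length and passed through a negative exponential. The node distance function is left abstract, and the success-threshold scaling used in the original nDTW definition is omitted; both are assumptions of this sketch.

```python
import math

def ndtw(pred, ref, dist):
    """Dynamic-time-warping cost with the length-invariant normalization
    described above: exp(-DTW(pred, ref) / sqrt(len(ref)))."""
    n, m = len(pred), len(ref)
    inf = float("inf")
    dtw = [[inf] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(pred[i - 1], ref[j - 1])
            dtw[i][j] = cost + min(dtw[i - 1][j], dtw[i][j - 1], dtw[i - 1][j - 1])
    return math.exp(-dtw[n][m] / math.sqrt(m))
```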
{
"text": "In this section, we report the outdoor VLN performance and the quality of the generated instructions to validate the effectiveness of our MTST learning approach. We compare our VLN Transformer with the baseline model and discuss the influence of pre-training on external resources with/without instruction style transfer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "Outdoor VLN Performance We compare our VLN Transformer with RCONCAT (Chen et al., 2019; Mirowski et al., 2018) and GA (Chen et al., 2019; Chaplot et al., 2018) as baseline models. Both baseline models encode the trajectory and the instruction in an LSTM-based manner and use supervised training with Hogwild! (Recht et al., 2011) . Table 2 presents the navigation results on the Touchdown validation and test sets, where VLN Transformer performs better than RCONCAT and GA on most metrics with the exception of SPD and CLS.",
"cite_spans": [
{
"start": 68,
"end": 87,
"text": "(Chen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 88,
"end": 110,
"text": "Mirowski et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 118,
"end": 137,
"text": "(Chen et al., 2019;",
"ref_id": "BIBREF42"
},
{
"start": 138,
"end": 159,
"text": "Chaplot et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 309,
"end": 329,
"text": "(Recht et al., 2011)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "Pre-training the navigation models on Manh-50 with template-based instructions can partially improve navigation performance. For all three agent models, the scores related to successful casessuch as TC, SED, and SDTW-witness a boost after being pre-trained on vanilla Manh-50. However, the instruction style difference between Manh-50 and Touchdown might misguide the agent in the pre-training stage, resulting in a performance drop on SPD for our VLN Transformer model. In contrast, our MTST learning approach can better utilize external resources and further improve navigation performance. Pre-training on Manh-50 with style-modified instructions can stably improve the navigation performance on all the metrics for both the RCONCAT model and the VLN Transformer. This also indicates that our MTST learning approach is model-agnostic. Table 4 compares the SPD values on success and failure navigation cases. In the success cases, VLN Transformer has better SPD scores, which is aligned with the best SED results in Table 2 . Our model's inferior SPD results are caused by taking longer paths in failure cases, which also harms the fidelity of the generated path and lowers the CLS scores. Nevertheless, every coin has two sides, and exploring more areas when getting lost might not be a complete bad behavior for the navigation agent. We leave this to future study.",
"cite_spans": [],
"ref_spans": [
{
"start": 838,
"end": 845,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1018,
"end": 1025,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "We attempt to reveal each component's effect in the multimodal text style transfer model. We pre-train the VLN Transformer with external trajectories and instructions generated by different models, then fine-tune it on the TouchDown dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer in VLN",
"sec_num": null
},
{
"text": "According to the navigation results in Table 3 , the instructions generated by the Speaker model misguide the navigation agent, indicating that relying solely on the Speaker model cannot reduce the gap between different instruction styles. Adding textual attention to the Speaker model can slightly improve the navigation results, but still hinders the agent from navigating correctly. The stylemodified instructions improve the agent's performance on all the navigation metrics, suggesting that our Multimodal Text Style Transfer learning approach can assist the outdoor VLN task.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer in VLN",
"sec_num": null
},
{
"text": "Quality of the Generated Instruction We evaluate the quality of instructions generated by the Speaker and the MTST model. We utilize five automatic metrics for natural language generation to evaluate the quality of the generated instructions, including BLEU (Papineni et al., 2002) , ROUGE (Lin, 2004) , METEOR (Elliott and Keller, 2013), CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016) . In addition, we calculate the guiding signal match rate (MR) by comparing the appearance of \"turn left\" and \"turn right\". If the generated instruction contains the Table 2 : Navigation results on the outdoor VLN task. +M-50 denotes pre-training with vanilla Manh-50 which contains machine-generated instructions; in the +style setting, the model is pre-trained with Manh-50 trajectories and style-modified instructions that are generated by our MTST model. Table 3 : Ablation study of the multimodal text style transfer model on the outdoor VLN task. In the +speaker setting, the instructions used in pre-training are generated by the Speaker (Fried et al., 2018) , which only attends to the visual input; +text_attn denotes that we add a textual attention module to the Speaker to attend to both the visual input and the machine-generated instructions provided by Google Maps API. same number of guiding signals in the same order as the ground truth instruction, then this instruction pair is considered to be matched. We also calculate the number of different infilled tokens (#infill) in the generated instruction 4 . This reflects the model's ability to inject object-related information during style transferring. Among the 9,326 trajectories in the Touchdown dataset, 9,000 are used to train the MTST model, while the rest form the validation set. 4 We regard tokens with the following part-of-speech tags as infilled tokens: [JJ, JJR, JJS, NN, NNS, NNP, NNPS, We report the quantitative results on the validation set in Table 5 . After adding textual attention to the Speaker, the evaluation performance on all seven metrics improved. Our MTST model scores the highest on all seven metrics, which indicates that the \"masking-and-recovering\" scheme is beneficial for the multimodal text style transfer process. The results validate that the MTST model can generate higher quality instructions, which refers to more visual objects and provide more matched guiding signals.",
"cite_spans": [
{
"start": 258,
"end": 281,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF38"
},
{
"start": 290,
"end": 301,
"text": "(Lin, 2004)",
"ref_id": "BIBREF28"
},
{
"start": 311,
"end": 323,
"text": "(Elliott and",
"ref_id": "BIBREF9"
},
{
"start": 324,
"end": 368,
"text": "Keller, 2013), CIDEr (Vedantam et al., 2015)",
"ref_id": null
},
{
"start": 379,
"end": 402,
"text": "(Anderson et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1048,
"end": 1068,
"text": "(Fried et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 1759,
"end": 1760,
"text": "4",
"ref_id": null
},
{
"start": 1837,
"end": 1871,
"text": "[JJ, JJR, JJS, NN, NNS, NNP, NNPS,",
"ref_id": null
}
],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 2",
"ref_id": null
},
{
"start": 862,
"end": 869,
"text": "Table 3",
"ref_id": null
},
{
"start": 1932,
"end": 1939,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer in VLN",
"sec_num": null
},
{
"text": "We invite human judges on Amazon Mechanical Turk to evaluate the quality of the instructions generated by different models. We conduct a pairwise comparison, which covers 170 pairs of instructions generated by Speaker, Speaker with textual attention, and our MTST model. The instruction pairs are sampled from the Touchdown Table 6 : Human evaluation results of the instructions generated by Speaker, Speaker with textual attention and our MTST model with pairwise comparisons.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "validation set. Each pair of instructions, together with the ground truth instruction and the gif that illustrates the navigation street view, is presented to 5 annotators. The annotators are asked to make decisions from the aspect of guiding signal correctness and instruction content alignment. Results in Table 6 show that annotators think the instructions generated by our MTST model better describe the street view and is more aligned with the groundtruth instructions.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "Case Study We demonstrate case study results to illustrate the performance of our Multimodal Text Style Transfer learning approach. Fig. 5 provides two showcases of the instruction generation results. As listed in the charts, the instructions generated by the vanilla Speaker model have a poor performance in keeping the guiding signals in the ground truth instructions and suffer from hallucinations, which refers to objects that have not appeared in the trajectory. The Speaker with textual attention can provide guidance direction. However, the instructions generated in this manner does not utilize the rich visual information in the trajectory. On the other hand, the instructions generated by our multimodal text style transfer model inject more object-related information (\"the light\", \"scaffolding\") in the surrounding navigation environment to the StreetLearn instruction while keeping the correct guiding signals.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 138,
"text": "Fig. 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "In this paper, we proposed the Multimodal Text Style Transfer learning approach for outdoor VLN. This learning framework allows us to utilize outof-domain navigation samples in outdoor environments and enrich the original navigation reasoning training process. Experimental results show that our MTST approach is model-agnostic, and our MTST learning approach outperforms the baseline models on the outdoor VLN task. We believe our study provides a possible solution to mitigate the data scarcity issue in the outdoor VLN task. In future studies, we would love to explore the pos- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Go to the next intersection and turn left again. There will be a building with a red awning on your right. Go straight through the next intersection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Speaker",
"sec_num": null
},
{
"text": "Turn right at the next intersection. Stop just before the next intersection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker with Textual Attention",
"sec_num": null
},
{
"text": "Turn right again at the next intersection. On your right will be scaffolding on your right. Turn right. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Text Style Transfer",
"sec_num": null
},
{
"text": "Turn so the red construction is on your left and the red brick building is on your right. Go forward to the intersection and turn right. You'll have a red brick building with a red awning on your right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Speaker",
"sec_num": null
},
{
"text": "Head in the direction of traffic. Turn right at the first intersection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker with Textual Attention",
"sec_num": null
},
{
"text": "Move forward with traffic on the right turn right at the light. Continue straight. sibility of constructing an end-to-end framework. We will also further improve the quality of stylemodified instructions, and quantitatively evaluate the alignment between the trajectory and the styletransferred instructions. Table 7 : Dataset statistics. path: navigation path; pano: panorama; instr_len: average instruction length; sent: sentence; turn: intersection on the path. Table 7 lists out the statistical information of the datasets used in pre-training and fine-tuning. Even though the Touchdown dataset and the StreetLearn dataset are built upon Google Street View, and both of them contain urban environments in New York City, pre-training the model with the VLN task on the StreetLearn dataset does not raise a threat of test data leaking. This is due to several causes:",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 7",
"ref_id": null
},
{
"start": 465,
"end": 472,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal Text Style Transfer",
"sec_num": null
},
{
"text": "First, the instructions in the two datasets are distinct in styles. The instructions in the StreetLearn dataset is generated by Google Maps API, which is template-based and focuses on street names. However, the instructions in the Touchdown dataset are created by human annotators and emphasize the visual environment's attributes as navigational cues. Moreover, as reported by Mehta et al. (2020) , the panoramas in the two datasets have little overlaps. In addition, Touchdown instructions constantly refer to transient objects such as cars and bikes, which might not appear in a panorama from a different time. The different granularity of the panorama spacing also leads to distinct panorama distributions of the two datasets.",
"cite_spans": [
{
"start": 378,
"end": 397,
"text": "Mehta et al. (2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "We use Adam optimizer (Kingma and Ba, 2015) to optimize all the parameters. During pre-training on the StreetLearn dataset, the learning rate for the RCONCAT model, GA model, and the VLN Transformer is 2.5 \u00d7 10 \u22124 . We fine-tune BERT separately with a learning rate of 1 \u00d7 10 \u22125 . We pre-train RCONCAT and GA for 15 epochs and pre-train the VLN Transformer for 25 epochs.",
"cite_spans": [
{
"start": 22,
"end": 43,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Training Details",
"sec_num": null
},
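The two learning rates described above can be realized with Adam parameter groups, as in the sketch below; exposing the BERT instruction encoder as model.bert is an illustrative assumption.

```python
import torch

def build_optimizer(model):
    """Adam with a lower learning rate for the BERT instruction encoder
    (1e-5) than for the rest of the VLN Transformer (2.5e-4)."""
    bert_params = list(model.bert.parameters())       # hypothetical attribute
    bert_ids = {id(p) for p in bert_params}
    other_params = [p for p in model.parameters() if id(p) not in bert_ids]
    return torch.optim.Adam([
        {"params": other_params, "lr": 2.5e-4},
        {"params": bert_params, "lr": 1e-5},
    ])
```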
{
"text": "When training or fine-tuning on the Touchdown dataset, the learning rate for RCONCAT and GA is 2.5 \u00d7 10 \u22124 . For the VLN Transformer, the learning rate to fine-tune BERT is initially set to 1 \u00d7 10 \u22125 , while the learning rate for other parameters in the model is initialized to be 2.5 \u00d7 10 \u22124 . The learning rate for VLN Transformer will decay. The batch size for RCONCAT and GA is 64, while the VLN Transformer uses a batch size of 30 during training. Table 8 : Ablation results of the VLN Transformer's instruction split on Touchdown dev set. In split setting, the instruction is split into multiple sentences before being encoded by the instruction encoder, while no split setting encodes the whole instruction without splitting.",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Training Details",
"sec_num": null
},
{
"text": "We compare VLN Transformer performance with and without splitting the instructions into sentences during encoding. Results in Table 8 show that breaking the instructions into multiple sentences allows the visual views and the guiding signals in sub-instructions to attend to each other during cross-modal encoding fully. Such cross-modal alignments lead to betters navigation performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Split Instructions vs. No Split",
"sec_num": null
},
{
"text": "We use AMT for human evaluation when evaluating the quality of the instructions generated by different models. The survey form for head-to-head comparisons is shown in Figure 6 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "A.4 Amazon Mechanical Turk",
"sec_num": null
},
{
"text": "We masked out the tokens with the following part-ofspeech tags:[JJ, JJR, JJS, NN, NNS, NNP, NNPS, PDT, POS, RB, RBR, RBS, PRP$, PRP, MD, CD]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://developers.google.com/maps/ documentation/streetview/intro",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to show our gratitude towards Jiannan Xiang, who kindly shares his experimental code on Touchdown, and Qi Wu, who provides valuable feedback to our initial draft. We also thank the anonymous reviewers for their thoughtprovoking comments. The UCSB authors were sponsored by an unrestricted gift from Google. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SPICE: semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "Computer Vision -ECCV 2016 -14th European Conference, Amsterdam",
"volume": "9909",
"issue": "",
"pages": "382--398",
"other_ids": {
"DOI": [
"10.1007/978-3-319-46454-1_24"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: semantic proposi- tional image caption evaluation. In Computer Vision -ECCV 2016 -14th European Conference, Amster- dam, The Netherlands, October 11-14, 2016, Pro- ceedings, Part V, volume 9909 of Lecture Notes in Computer Science, pages 382-398. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Bruce",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "S\u00fcnderhauf",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"D"
],
"last": "Reid",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hengel",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3674--3683",
"other_ids": {
"DOI": [
"10.1109/CVPR.2018.00387"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S\u00fcnderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real en- vironments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3674- 3683. IEEE Computer Society.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations, ICLR 2018, Vancouver",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancou- ver, BC, Canada, April 30 -May 3, 2018, Confer- ence Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Gatedattention architectures for task-oriented language grounding",
"authors": [
{
"first": "Devendra",
"middle": [],
"last": "Singh Chaplot",
"suffix": ""
},
{
"first": "Kanthashree",
"middle": [
"Mysore"
],
"last": "Sathyendra",
"suffix": ""
},
{
"first": "Rama",
"middle": [],
"last": "Kumar Pasumarthi",
"suffix": ""
},
{
"first": "Dheeraj",
"middle": [],
"last": "Rajagopal",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "2819--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. 2018. Gated- attention architectures for task-oriented language grounding. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI- 18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 2819-2826. AAAI Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TOUCHDOWN: natural language navigation and spatial reasoning in visual street environments",
"authors": [
{
"first": "Howard",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Dipendra",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Snavely",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "12538--12547",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.01282"
]
},
"num": null,
"urls": [],
"raw_text": "Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. TOUCHDOWN: natural language navigation and spatial reasoning in visual street environments. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 12538-12547. Computer Vision Foundation / IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "UNITER: universal image-text representation learning",
"authors": [
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Linjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Licheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Vision -ECCV 2020 -16th European Conference",
"volume": "12375",
"issue": "",
"pages": "104--120",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58577-8_7"
]
},
"num": null,
"urls": [],
"raw_text": "Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: universal image-text representation learning. In Computer Vision -ECCV 2020 -16th European Conference, Glasgow, UK, Au- gust 23-28, 2020, Proceedings, Part XXX, volume 12375 of Lecture Notes in Computer Science, pages 104-120. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enabling language models to fill in the blanks",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Mina",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online",
"volume": "",
"issue": "",
"pages": "2492--2501",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.225"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Donahue, Mina Lee, and Percy Liang. 2020. En- abling language models to fill in the blanks. In Pro- ceedings of the 58th Annual Meeting of the Associ- ation for Computational Linguistics, ACL 2020, On- line, July 5-10, 2020, pages 2492-2501. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Image description using visual dependency representations",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "2013",
"issue": "",
"pages": "1292--1302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Frank Keller. 2013. Image de- scription using visual dependency representations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1292-1302. ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Constantinou. 2020. The Book of Why: The New Science of Cause and Effect",
"authors": [
{
"first": "Norman",
"middle": [
"E"
],
"last": "Fenton",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Neil",
"suffix": ""
},
{
"first": "Anthony",
"middle": [
"C"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Basic Books",
"volume": "284",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.artint.2020.103286"
]
},
"num": null,
"urls": [],
"raw_text": "Norman E. Fenton, Martin Neil, and Anthony C. Con- stantinou. 2020. The Book of Why: The New Science of Cause and Effect, Judea Pearl, Dana Mackenzie. Basic Books (2018), volume 284.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speaker-follower models for vision-and-language navigation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3318--3329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower mod- els for vision-and-language navigation. In Advances in Neural Information Processing Systems 31: An- nual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, pages 3318-3329.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Style transfer in text: Exploration and evaluation",
"authors": [
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Xiaoye",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "663--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Explo- ration and evaluation. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Arti- ficial Intelligence (IAAI-18), and the 8th AAAI Sym- posium on Educational Advances in Artificial Intel- ligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 663-670. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards learning a generic agent for vision-and-language navigation via pretraining",
"authors": [
{
"first": "Weituo",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "2020",
"issue": "",
"pages": "13134--13143",
"other_ids": {
"DOI": [
"10.1109/CVPR42600.2020.01315"
]
},
"num": null,
"urls": [],
"raw_text": "Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, and Jianfeng Gao. 2020. Towards learning a generic agent for vision-and-language navigation via pre- training. In 2020 IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 13134- 13143. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.90"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. volume 9, pages 1735-1780.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward con- trolled generation of text. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learn- ing Research, pages 1587-1596. PMLR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "INSET: sentence infilling with inter-sentential transformer",
"authors": [
{
"first": "Yichen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Elachqar",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "2502--2515",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.226"
]
},
"num": null,
"urls": [],
"raw_text": "Yichen Huang, Yizhe Zhang, Oussama Elachqar, and Yu Cheng. 2020a. INSET: sentence infilling with inter-sentential transformer. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2502-2515. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pixel-bert: Aligning image pixels with text by deep multi-modal transformers",
"authors": [
{
"first": "Zhicheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhaoyang",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Bei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Jianlong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020b. Pixel-bert: Aligning image pixels with text by deep multi-modal trans- formers. CoRR, abs/2004.00849.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "General evaluation for instruction conditioned navigation using dynamic time warping",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": ""
},
{
"first": "Vihan",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Ie",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2019,
"venue": "Visually Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. General evaluation for instruction conditioned navigation using dynamic time warping. In Visually Grounded Interaction and Language (ViGIL), NeurIPS 2019 Workshop, Van- couver, Canada, December 13, 2019.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Stay on the path: Instruction fidelity in vision-andlanguage navigation",
"authors": [
{
"first": "Vihan",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Magalh\u00e3es",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Ie",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "1862--1872",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Vihan Jain, Gabriel Magalh\u00e3es, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-and- language navigation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-Au- gust 2, 2019, Volume 1: Long Papers, pages 1862- 1872. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Disentangled representation learning for non-parallel text style transfer",
"authors": [
{
"first": "Vineet",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Hareesh",
"middle": [],
"last": "Bahuleyan",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "424--434",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1041"
]
},
"num": null,
"urls": [],
"raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 424-434. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tactical rewind: Self-correction via backtracking in visionand-language navigation",
"authors": [
{
"first": "Liyiming",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Siddhartha",
"middle": [
"S"
],
"last": "Srinivasa",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "6741--6749",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.00690"
]
},
"num": null,
"urls": [],
"raw_text": "Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtz- man, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha S. Srinivasa. 2019. Tactical rewind: Self-correction via backtracking in vision- and-language navigation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6741-6749. Computer Vision Foundation / IEEE.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In 6th International Conference on Learning Rep- resentations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceed- ings. OpenReview.net.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, Brussels, Belgium, October 31 -November 4, 2018, pages 5039-5049. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining",
"authors": [
{
"first": "Gen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Yuejian",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "11336--11344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020. Unicoder-vl: A universal en- coder for vision and language by cross-modal pre- training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty- Second Innovative Applications of Artificial Intelli- gence Conference, IAAI 2020, The Tenth AAAI Sym- posium on Educational Advances in Artificial Intel- ligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 11336-11344. AAAI Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Visualbert: A simple and performant baseline for vision and language",
"authors": [
{
"first": "Liunian Harold",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and lan- guage. volume abs/1908.03557.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "TIGS: an inference algorithm for text infilling with gradient search",
"authors": [
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiancheng",
"middle": [],
"last": "Lv",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "4146--4156",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1406"
]
},
"num": null,
"urls": [],
"raw_text": "Dayiheng Liu, Jie Fu, Pengfei Liu, and Jiancheng Lv. 2019. TIGS: an inference algorithm for text infill- ing with gradient search. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-Au- gust 2, 2019, Volume 1: Long Papers, pages 4146- 4156. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Advances in Neural Information Process- ing Systems 32: Annual Conference on Neural Infor- mation Processing Systems 2019, NeurIPS 2019, De- cember 8-14, 2019, Vancouver, BC, Canada, pages 13-23.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Univilm: A unified video and language pre-training model for multimodal understanding and generation",
"authors": [
{
"first": "Huaishao",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Botian",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Haoyang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Tianrui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xilin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Xilin Chen, and Ming Zhou. 2020. Univilm: A unified video and language pre-training model for multimodal understanding and generation. volume abs/2002.06353.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Self-monitoring navigation agent via auxiliary progress estimation",
"authors": [
{
"first": "Chih-Yao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zuxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ghassan",
"middle": [],
"last": "Al-Regib",
"suffix": ""
},
{
"first": "Zsolt",
"middle": [],
"last": "Kira",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations, ICLR 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan Al- Regib, Zsolt Kira, Richard Socher, and Caiming Xiong. 2019a. Self-monitoring navigation agent via auxiliary progress estimation. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The regretful agent: Heuristic-aided navigation through progress estimation",
"authors": [
{
"first": "Chih-Yao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zuxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ghassan",
"middle": [],
"last": "Alregib",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zsolt",
"middle": [],
"last": "Kira",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "6732--6740",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.00689"
]
},
"num": null,
"urls": [],
"raw_text": "Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caim- ing Xiong, and Zsolt Kira. 2019b. The regretful agent: Heuristic-aided navigation through progress estimation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6732-6740. Com- puter Vision Foundation / IEEE.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improving vision-and-language navigation with imagetext pairs from the web",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Majumdar",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Vision -ECCV 2020 -16th European Conference",
"volume": "12351",
"issue": "",
"pages": "259--274",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58539-6_16"
]
},
"num": null,
"urls": [],
"raw_text": "Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. 2020. Im- proving vision-and-language navigation with image- text pairs from the web. In Computer Vision -ECCV 2020 -16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VI, volume 12351 of Lecture Notes in Computer Science, pages 259-274. Springer.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Retouchdown: Adding touchdown to streetlearn as a shareable resource for language grounding tasks in street view",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Ie",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mirowski",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, and Piotr Mirowski. 2020. Retouchdown: Adding touchdown to streetlearn as a shareable resource for language grounding tasks in street view. volume abs/2001.03671.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning to navigate in cities without a map",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Mirowski",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"Koichi"
],
"last": "Grimes",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Teplyashin",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
},
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2424--2435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Mirowski, Matthew Koichi Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith Ander- son, Denis Teplyashin, Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, and Raia Hadsell. 2018. Learning to navigate in cities without a map. In Advances in Neural Information Processing Sys- tems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, pages 2424-2435.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Extracting optimal performance from dynamic time warping",
"authors": [
{
"first": "Abdullah",
"middle": [],
"last": "Mueen",
"suffix": ""
},
{
"first": "Eamonn",
"middle": [
"J"
],
"last": "Keogh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "2129--2130",
"other_ids": {
"DOI": [
"10.1145/2939672.2945383"
]
},
"num": null,
"urls": [],
"raw_text": "Abdullah Mueen and Eamonn J. Keogh. 2016. Extract- ing optimal performance from dynamic time warp- ing. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 2129-2130. ACM.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Hogwild: A lock-free approach to parallelizing stochastic gradient descent",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"J"
],
"last": "Wright",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "693--701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Recht, Christopher R\u00e9, Stephen J. Wright, and Feng Niu. 2011. Hogwild: A lock-free ap- proach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Sys- tems 24: 25th Annual Conference on Neural In- formation Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 693-701.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"S"
],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "115",
"issue": "",
"pages": "211--252",
"other_ids": {
"DOI": [
"10.1007/s11263-015-0816-y"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. 2015. Imagenet large scale visual recognition challenge. volume 115, pages 211-252.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6830--6841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Informa- tion Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, De- cember 4-9, 2017, Long Beach, CA, USA, pages 6830-6841.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Videobert: A joint model for video and language representation learning",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vondrick",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South)",
"volume": "",
"issue": "",
"pages": "7463--7472",
"other_ids": {
"DOI": [
"10.1109/ICCV.2019.00756"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In 2019 IEEE/CVF International Confer- ence on Computer Vision, ICCV 2019, Seoul, Ko- rea (South), October 27 -November 2, 2019, pages 7463-7472. IEEE.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "LXMERT: learning cross-modality encoder representations from transformers",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5099--5110",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1514"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from trans- formers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5099-5110. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Learning to navigate unseen environments: Back translation with environmental dropout",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Licheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "2610--2621",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1268"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learn- ing to navigate unseen environments: Back transla- tion with environmental dropout. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2610-2621. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Multimodal transformer for unaligned multimodal language sequences",
"authors": [
{
"first": "Yao-Hung Hubert",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Shaojie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "J",
"middle": [
"Zico"
],
"last": "Kolter",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "6558--6569",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1656"
]
},
"num": null,
"urls": [],
"raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 6558-6569. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {
"DOI": [
"10.1109/CVPR.2015.7299087"
]
},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4566- 4575. IEEE Computer Society.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qiuyuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "\u00c7elikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yuan-Fang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019",
"volume": "",
"issue": "",
"pages": "6629--6638",
"other_ids": {
"DOI": [
"10.1109/CVPR.2019.00679"
]
},
"num": null,
"urls": [],
"raw_text": "Xin Wang, Qiuyuan Huang, Asli \u00c7elikyilmaz, Jian- feng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. 2019. Re- inforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6629-6638. Computer Vi- sion Foundation / IEEE.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead vision-andlanguage navigation",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Computer Vision -ECCV 2018 -15th European Conference",
"volume": "11220",
"issue": "",
"pages": "38--55",
"other_ids": {
"DOI": [
"10.1007/978-3-030-01270-0_3"
]
},
"num": null,
"urls": [],
"raw_text": "Xin Wang, Wenhan Xiong, Hongmin Wang, and William Yang Wang. 2018. Look before you leap: Bridging model-free and model-based rein- forcement learning for planned-ahead vision-and- language navigation. In Computer Vision -ECCV 2018 -15th European Conference, Munich, Ger- many, September 8-14, 2018, Proceedings, Part XVI, volume 11220 of Lecture Notes in Computer Sci- ence, pages 38-55. Springer.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Multi-modality cross attention network for image and sentence matching",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Tianzhu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "2020",
"issue": "",
"pages": "10938--10947",
"other_ids": {
"DOI": [
"10.1109/CVPR42600.2020.01095"
]
},
"num": null,
"urls": [],
"raw_text": "Xi Wei, Tianzhu Zhang, Yan Li, Yongdong Zhang, and Feng Wu. 2020. Multi-modality cross attention net- work for image and sentence matching. In 2020 IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10938-10947. IEEE.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "1",
"issue": "",
"pages": "270--280",
"other_ids": {
"DOI": [
"10.1162/neco.1989.1.2.270"
]
},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams and David Zipser. 1989. A learn- ing algorithm for continually running fully recurrent neural networks. volume 1, pages 270-280.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Learning to stop: A simple yet effective approach to urban vision-language navigation",
"authors": [
{
"first": "Jiannan",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020",
"volume": "",
"issue": "",
"pages": "699--707",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.62"
]
},
"num": null,
"urls": [],
"raw_text": "Jiannan Xiang, Xin Wang, and William Yang Wang. 2020. Learning to stop: A simple yet effective approach to urban vision-language navigation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 699-707. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Unsupervised text style transfer using language models as discriminators",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing",
"volume": "",
"issue": "",
"pages": "7298--7309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discrimina- tors. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Informa- tion Processing Systems 2018, NeurIPS 2018, De- cember 3-8, 2018, Montr\u00e9al, Canada, pages 7298- 7309.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Diagnosing the environment bias in vision-and-language navigation",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence",
"volume": "2020",
"issue": "",
"pages": "890--897",
"other_ids": {
"DOI": [
"10.24963/ijcai.2020/124"
]
},
"num": null,
"urls": [],
"raw_text": "Yubo Zhang, Hao Tan, and Mohit Bansal. 2020. Diag- nosing the environment bias in vision-and-language navigation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelli- gence, IJCAI 2020, pages 890-897. ijcai.org.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Style transfer as unsupervised machine translation",
"authors": [
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018. Style transfer as unsupervised machine trans- lation. CoRR, abs/1808.07894.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Cross-modality relevance for reasoning on language and vision",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "7642--7651",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.683"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Zheng, Quan Guo, and Parisa Kordjamshidi. 2020. Cross-modality relevance for reasoning on language and vision. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7642-7651. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "For the i th sentence s i =",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Turn right onto W 36th St. Turn right onto Dyer Ave.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Two showcases of the instruction generation results. The red tokens indicate incorrectly generated instructions, while the blue tokens suggest alignments with the ground truth. The orange bounding boxes show that the objects in the surrounding environment have been successfully injected into the style-modified instruction.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Pairwise comparison form for human evaluation on AMT.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>: For the outdoor VLN task, the instructions pro-</td></tr><tr><td>vided by Google Maps API is distinct from the instruc-</td></tr><tr><td>tions written by human annotators.</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td/><td/><td>Outdoor Navigation Task</td><td/><td/></tr><tr><td/><td/><td/><td>Finetune</td><td>VLN Transformer</td></tr><tr><td/><td/><td>Human-annotated Instructions</td><td/><td/></tr><tr><td/><td/><td/><td/><td>Pre-train</td></tr><tr><td/><td/><td>Train</td><td/><td/></tr><tr><td>External Resources</td><td>Input Inference</td><td>Multimodal Text Style</td><td>Sample Inference</td><td>External Resources</td></tr><tr><td/><td/><td>Transfer Model</td><td/><td/></tr><tr><td>Machine-generated Instructions</td><td/><td/><td/><td>Style-modified Instructions</td></tr><tr><td>Figure 2:</td><td/><td/><td/><td/></tr></table>",
"text": "An overview of the Multimodal Text Style Transfer (MTST) learning approach for vision-and-language navigation in real-life urban environments. Details are described in Section 3.2.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td/><td/><td/><td>Dev Set</td><td/><td/><td/><td/><td>Test Set</td><td/><td/></tr><tr><td colspan=\"2\">TC \u2191 RCONCAT 10.6</td><td>20.4</td><td>10.3</td><td>48.1</td><td>22.5</td><td>9.8 11.8</td><td>20.4</td><td>11.5</td><td>47.9</td><td>22.9</td><td>11.1</td></tr><tr><td>+M-50</td><td>11.8</td><td>19.1</td><td>11.4</td><td>48.7</td><td>23.1</td><td>10.9 12.1</td><td>19.4</td><td>11.8</td><td>49.4</td><td>24.0</td><td>11.3</td></tr><tr><td>+M-50 +style</td><td>11.9</td><td>19.9</td><td>11.5</td><td>48.9</td><td>23.8</td><td>11.1 12.6</td><td>20.4</td><td>12.3</td><td>48.0</td><td>23.9</td><td>11.8</td></tr><tr><td>GA</td><td>12.0</td><td>18.7</td><td>11.6</td><td>51.9</td><td>25.2</td><td>11.1 11.9</td><td>19.0</td><td>11.5</td><td>51.6</td><td>24.9</td><td>10.9</td></tr><tr><td>+M-50</td><td>12.3</td><td>18.5</td><td>11.8</td><td>53.7</td><td>26.2</td><td>11.3 13.1</td><td>18.4</td><td>12.8</td><td>54.2</td><td>26.8</td><td>12.1</td></tr><tr><td>+M-50 +style</td><td>12.9</td><td>18.5</td><td>12.5</td><td>52.8</td><td>26.3</td><td>11.9 13.9</td><td>18.4</td><td>13.5</td><td>53.5</td><td>27.5</td><td>12.9</td></tr><tr><td colspan=\"2\">VLN Transformer 14.0</td><td>21.5</td><td>13.6</td><td>44.0</td><td>23.0</td><td>12.9 14.9</td><td>21.2</td><td>14.6</td><td>45.4</td><td>25.3</td><td>14.0</td></tr><tr><td>+M-50</td><td>14.6</td><td>22.3</td><td>14.1</td><td>45.6</td><td>25.0</td><td>13.4 15.5</td><td>21.9</td><td>15.4</td><td>45.9</td><td>26.1</td><td>14.2</td></tr><tr><td>+M-50 +style</td><td>15.0</td><td>20.3</td><td>14.7</td><td>50.1</td><td>27.0</td><td>14.2 16.2</td><td>20.8</td><td>15.7</td><td>50.5</td><td>27.8</td><td>15.0</td></tr></table>",
"text": "SPD \u2193 SED \u2191 CLS \u2191 nDTW \u2191 SDTW \u2191 TC \u2191 SPD \u2193 SED \u2191 CLS \u2191 nDTW \u2191 SDTW \u2191",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>Model</td><td/><td/><td/><td>Dev Set</td><td/><td/><td/><td/><td/><td>Test Set</td><td/><td/></tr><tr><td colspan=\"2\">TC \u2191 VLN Transformer +M-50 14.6</td><td>22.3</td><td>14.1</td><td>45.6</td><td>25.0</td><td colspan=\"2\">13.4 15.5</td><td>21.9</td><td>15.4</td><td>45.9</td><td>26.1</td><td>14.2</td></tr><tr><td>+speaker</td><td>7.6</td><td>26.2</td><td>7.3</td><td>34.6</td><td>14.6</td><td>7.0</td><td>8.3</td><td>25.4</td><td>8.0</td><td>36.3</td><td>15.9</td><td>7.7</td></tr><tr><td>+text_attn</td><td>11.7</td><td>20.1</td><td>11.3</td><td>46.3</td><td>23.2</td><td colspan=\"2\">10.7 11.8</td><td>20.5</td><td>11.5</td><td>47.3</td><td>23.2</td><td>11.0</td></tr><tr><td>+style</td><td>15.0</td><td>20.3</td><td>14.7</td><td>50.1</td><td>27.0</td><td colspan=\"2\">14.2 16.2</td><td>20.8</td><td>15.7</td><td>50.5</td><td>27.8</td><td>15.0</td></tr></table>",
"text": "SPD \u2193 SED \u2191 CLS \u2191 nDTW \u2191 SDTW \u2191 TC \u2191 SPD \u2193 SED \u2191 CLS \u2191 nDTW \u2191 SDTW \u2191",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"text": "S_SPD and F_SPD denotes the average SPD value on success and failure cases respectively.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>: Quantitative evaluation of the instructions gen-</td></tr><tr><td>erated by Speaker, Speaker with textual attention and</td></tr><tr><td>our MTST model.</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF13": {
"content": "<table><tr><td>no split</td><td>9.6</td><td>21.8</td><td>9.3</td><td>46.1</td><td>20.0</td><td>8.7</td></tr><tr><td>split</td><td>13.6</td><td>20.5</td><td>13.1</td><td>47.6</td><td>24.0</td><td>12.6</td></tr></table>",
"text": "Model TC \u2191 SPD \u2193 SED \u2191 CLS \u2191 nDTW \u2191 SDTW \u2191",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}