{
"paper_id": "K19-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:17.228565Z"
},
"title": "Large-scale representation learning from visually grounded untranscribed speech",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"settlement": "Seattle",
"region": "WA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {
"settlement": "Mountain View",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Research",
"location": {
"settlement": "Mountain View",
"region": "CA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Systems that can associate images with their spoken audio captions are an important step towards visually grounded language learning. We describe a scalable method to automatically generate diverse audio for image captioning datasets. This supports pretraining deep networks for encoding both audio and images, which we do via a dual encoder that learns to align latent representations from both modalities. We show that a masked margin softmax loss for such models is superior to the standard triplet loss. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art results-improving recall in the top 10 from 29.6% to 49.5%. We also obtain human ratings on retrieval outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, finding that automatic evaluation substantially underestimates the quality of the retrieved results. * Work done as a member of the Google AI Residency Program.",
"pdf_parse": {
"paper_id": "K19-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "Systems that can associate images with their spoken audio captions are an important step towards visually grounded language learning. We describe a scalable method to automatically generate diverse audio for image captioning datasets. This supports pretraining deep networks for encoding both audio and images, which we do via a dual encoder that learns to align latent representations from both modalities. We show that a masked margin softmax loss for such models is superior to the standard triplet loss. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art results-improving recall in the top 10 from 29.6% to 49.5%. We also obtain human ratings on retrieval outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, finding that automatic evaluation substantially underestimates the quality of the retrieved results. * Work done as a member of the Google AI Residency Program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language learning in people starts with speech, not text. Text is tidy: it comes in convenient symbolic units that vary little from one writer to another. Speech is continuous and messy: the sounds used to convey a given word are modified by those of surrounding words, and the rate of speech, its pitch, and more vary across speakers and even for the same speaker in different contexts. As such, problems involving speech provide distinct challenges and opportunities for learning language representations that text-based work-which represents the vast majority-gets a free pass on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work has explored various means to transform raw speech into symbolic forms with little or no supervision (Park and Glass, 2007; Varadarajan et al., 2008; Ondel et al., 2016; Kamper et al., Figure 1 : Models that encode speech segments and images into a shared latent space enable images to be retrieved using their audio descriptions (top) and to associate images with spoken captions (bottom). Text captions are provided for clarity; only speech and images are used by the models.",
"cite_spans": [
{
"start": 113,
"end": 135,
"text": "(Park and Glass, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 136,
"end": 161,
"text": "Varadarajan et al., 2008;",
"ref_id": "BIBREF47"
},
{
"start": 162,
"end": 181,
"text": "Ondel et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 182,
"end": 205,
"text": "Kamper et al., Figure 1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2017a; Bhati et al., 2018) . However, learning natural language starts with grounded, contextualized speech. While infants as young as 8-months-old can segment word-like units without non-linguistic information (Jusczyk and Aslin, 1995) and adults can learn to segment words in artificial languages (Saffran et al., 1996) , a learner must ultimately ground their representations of linguistic sequences (Harnad, 1990) to effectively use them to refer to objects, events and more. Furthermore, learning from rich perceptual data and interactions can be more efficient as it provides additional cues to the identities of words and their meaning in context.",
"cite_spans": [
{
"start": 7,
"end": 26,
"text": "Bhati et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 211,
"end": 236,
"text": "(Jusczyk and Aslin, 1995)",
"ref_id": "BIBREF23"
},
{
"start": 299,
"end": 321,
"text": "(Saffran et al., 1996)",
"ref_id": "BIBREF39"
},
{
"start": 403,
"end": 417,
"text": "(Harnad, 1990)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address the problem of relating images to audio captions that describe them (Figure 1 ), building on previous research into learning from visually grounded, untranscribed speech (Harwath and Glass, 2015; Sun et al., 2016; Chrupa\u0142a et al., 2017; Kamper et al., 2017b; Chrupa\u0142a, 2019; Harwath and Glass, 2019) . Such problem settings provide opportunities both to improve our theoretical understanding of language as well as to realize gains on practical problemsincluding voice interaction with virtual assistants, image retrieval based on speech, and generally better supporting people with visual impairments.",
"cite_spans": [
{
"start": 181,
"end": 206,
"text": "(Harwath and Glass, 2015;",
"ref_id": "BIBREF17"
},
{
"start": 207,
"end": 224,
"text": "Sun et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 225,
"end": 247,
"text": "Chrupa\u0142a et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 248,
"end": 269,
"text": "Kamper et al., 2017b;",
"ref_id": "BIBREF26"
},
{
"start": 270,
"end": 285,
"text": "Chrupa\u0142a, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 286,
"end": 310,
"text": "Harwath and Glass, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 79,
"end": 88,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is to improve performance on bidirectional speech/image retrieval through better data and better models for learning fixed dimensional latent representations of both modalities. We construct a synthetic speech caption dataset for pretraining by applying text-to-speech (TTS) on Conceptual Captions (Sharma et al., 2018) , a dataset with 3.3 million diverse image-caption pairs. Unlike Chrupa\u0142a et al. (2017) , who similarly applied TTS to MS-COCO , we inject diversity by varying the voice, speech rate, pitch and volume gain on every synthetically produced audio caption. We refer to the resulting dataset as Conceptual Spoken Captions (CSC). CSC's scale allows us to train deeper models than previous work. We use Inception-ResNet-v2 (Szegedy et al., 2017) to encode both the audio and visual modalities in a dual encoder model, pretraining on CSC and then fine-tuning and evaluating on human speech in the smaller Flickr Audio Caption Corpus (FACC) (Harwath and Glass, 2015). Using an adapted batch loss function rather than the triplet loss used in previous work, we substantially improve on the previous state-of-the-art for the standard FACC retrieval tasks.",
"cite_spans": [
{
"start": 315,
"end": 336,
"text": "(Sharma et al., 2018)",
"ref_id": "BIBREF41"
},
{
"start": 402,
"end": 424,
"text": "Chrupa\u0142a et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 753,
"end": 775,
"text": "(Szegedy et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Image captioning datasets contain positively paired items-but that does not imply that a random image and caption cannot also be a valid match. For instance, in FACC there are many spoken captions about beaches and sunsets and plenty of images that match these captions; two different images with descriptions \"A surfer is riding a wave.\" and \"A man surfs the wave\" are likely compatible. It is of course not feasible to exhaustively annotate all pairwise associations, so we have human raters judge the top five retrieved results for two models to assess the impact of this aspect of the data on automatic retrieval metrics used thus far. Unsurprisingly, models retrieve many compatible results that are unpaired in FACC: with the human evaluations, we find consistent increases in recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Larger training datasets support better performance and generalization (Banko and Brill, 2001; Halevy et al., 2009; Sun et al., 2017) , especially for deep models. Collecting labels from people has become easier via crowd computing (Buhrmester et al., 2011) , but is still expensive and remains a bottleneck for creating broad and representative datasets. This motivates the case for exploiting incidental annotation (Roth, 2017) and automating some aspects of dataset creation. The current trend of using machine translation systems to produce augmented datasets for machine translation itself (Sennrich et al., 2016) and for monolingual tasks like classification (Yu et al., 2018) and paraphrasing (Wieting and Gimpel, 2018 ) is a good example of this.",
"cite_spans": [
{
"start": 71,
"end": 94,
"text": "(Banko and Brill, 2001;",
"ref_id": "BIBREF1"
},
{
"start": 95,
"end": 115,
"text": "Halevy et al., 2009;",
"ref_id": "BIBREF15"
},
{
"start": 116,
"end": 133,
"text": "Sun et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 232,
"end": 257,
"text": "(Buhrmester et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 417,
"end": 429,
"text": "(Roth, 2017)",
"ref_id": "BIBREF38"
},
{
"start": 595,
"end": 618,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 665,
"end": 682,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF52"
},
{
"start": 700,
"end": 725,
"text": "(Wieting and Gimpel, 2018",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For speech image captioning, Chrupa\u0142a et al. (2017) used a Text-to-Speech (TTS) system to create audio from the textual captions given in the MS-COCO dataset, resulting in 300k unique images with 5 spoken captions each. We scale this idea to the larger and more diverse textual Conceptual Captions dataset with 3.3 million unique image and captions, additionally modifying the produced speech by using multiple voices and random perturbations to the rate, pitch and audio. Our goal is to make the resulting data more effective for pretraining models so they can learn more efficiently on smaller amounts of human speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Image captioning datasets have ignited a great deal of research at the intersection of the computer vision and natural language processing communities (Lin et al., 2014; Vinyals et al., 2015; Bernardi et al., 2016; Anderson et al., 2018) . Getting annotators to provide captions works well with crowd computing, but Sharma et al. (2018) exploit incidental supervision for this task to obtain greater scale with their Conceptual Captions dataset. It contains 3.3 million pairs of image and textual captions, where pairs are extracted from HTML web pages using the alt-text field of images as a starting point for their descriptions.",
"cite_spans": [
{
"start": 151,
"end": 169,
"text": "(Lin et al., 2014;",
"ref_id": "BIBREF31"
},
{
"start": 170,
"end": 191,
"text": "Vinyals et al., 2015;",
"ref_id": "BIBREF48"
},
{
"start": 192,
"end": 214,
"text": "Bernardi et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 215,
"end": 237,
"text": "Anderson et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 316,
"end": 336,
"text": "Sharma et al. (2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "The textual captions are processed in a hypernymization stage. Named entities and syntactic dependency annotations are obtained using Google Cloud Natural Language APIs, which are matched to hypernym terms using the Google Knowledge Graph Search API. Proper nouns, numbers, units, dates, durations and locations are removed; identified named-entities are substituted with their hypernym, merging together analogous terms when possible. For example, the original alt-text (1) is converted to the conceptual caption (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "alt-text: Musician Justin Timberlake per-forms at the 2017 Pilgrimage Music & Cultural Festival on September 23, 2017 in Franklin, Tennessee.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "(2) conceptual caption: pop artist performs at the festival in a city.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "There are many sequential filtering steps for improving the quality of the captions-see Sharma et al. (2018) for a thorough description. As quality control, a random sample of 4K conceptual captions were rated by human annotators, and 90.3% were judged \"good\" by at least 2 out of 3 raters.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "Sharma et al. (2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Captions",
"sec_num": "2.1"
},
{
"text": "We use TTS to generate a high-fidelity spoken sentence for each of the 3.3 million textual captions in the Conceptual Captions dataset. 1 We use the Google Cloud Speech API 2 for TTS. Internally, the service uses a WaveNet model (Van Den Oord et al., 2016) to generate audio. For diversity, the speech is synthesized using parameter variations, as follows:",
"cite_spans": [
{
"start": 136,
"end": 137,
"text": "1",
"ref_id": null
},
{
"start": 229,
"end": 256,
"text": "(Van Den Oord et al., 2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "\u2022 Voice, which is sampled uniformly from a set of 6 different voices generated using a WaveNet model for American English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "\u2022 Speaking rate controls the speed of the synthesized audio. A speaking rate of 1.0 means the normal speed of a given voice, while a speaking rate of 2.0 means twice as fast. When synthesizing the data, we draw this parameter from a Gaussian distribution \u223c N (1.0, 0.1 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "\u2022 Pitch controls how high/deep the voice is. For example, if set to 1, this means the voice will be synthesized 1 semitones above the original pitch. This parameter is drawn from a Gaussian distribution \u223c N (0.0, 1.0 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "\u2022 Volume gain controls a gain in dB with respect to the normal native signal amplitude. If set to 0, the voice is synthesized without alterations in volume. This parameter is drawn from a Gaussian distribution \u223c N (0.0, 2.0 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "To avoid degenerate cases, we clip the values sampled from the Gaussian distributions described above such that they are never more than 2 times the standard deviation from the mean. All spoken captions are generated in 16000 Hz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
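To make the synthesis procedure above concrete, the following is a minimal sketch (assuming numpy; the voice identifiers and function names are illustrative placeholders, not the actual pipeline) of how the per-caption TTS parameters could be sampled and clipped at two standard deviations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Six American English WaveNet voices; these identifiers are illustrative placeholders.
VOICES = ["en-US-Wavenet-A", "en-US-Wavenet-B", "en-US-Wavenet-C",
          "en-US-Wavenet-D", "en-US-Wavenet-E", "en-US-Wavenet-F"]

def clipped_normal(mean, std):
    """Sample from N(mean, std^2), clipped to at most two standard deviations from the mean."""
    return float(np.clip(rng.normal(mean, std), mean - 2 * std, mean + 2 * std))

def sample_tts_params():
    """Draw one set of synthesis parameters for a single caption."""
    return {
        "voice": str(rng.choice(VOICES)),            # sampled uniformly over the 6 voices
        "speaking_rate": clipped_normal(1.0, 0.1),   # 1.0 = normal speed of the chosen voice
        "pitch": clipped_normal(0.0, 1.0),           # offset in semitones from the default pitch
        "volume_gain_db": clipped_normal(0.0, 2.0),  # gain in dB relative to the native amplitude
    }
```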
{
"text": "1 The alt-text does not come with the dataset and cannot be redistributed, so we focus on the conceptual captions for ease of experimentation and reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "2 https://cloud.google.com/text-to-speech/ Figure 2 : Dual-encoder model architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conceptual Spoken Captions",
"sec_num": "2.2"
},
{
"text": "The Flickr Audio Caption Corpus (FACC) (Harwath and Glass, 2015) consists of 40,000 pairs of images and spoken captions, with 8000 unique images, of which 1000 are held for validation and 1000 for testing. The spoken captions are generated from humans reading the textual captions from the Flickr8k dataset (Hodosh et al., 2013) , originally crowd-sourced and based on images from Flickr. We use FACC for evaluation, both when pretraining on Conceptual Spoken Captions and when training on FACC from scratch. Like previous work, the core evaluation considered is retrieval of the known paired image given an audio caption within some top-k set of retrieved items (e.g. R@1 for whether the first item retrieved is the paired one and R@10 for whether it is in the top ten results). We also conduct human evaluations on retrieval outputs to detect the presence of unpaired but matching imagecaption pairs identified by the models and thereby better assess their impact on performance.",
"cite_spans": [
{
"start": 307,
"end": 328,
"text": "(Hodosh et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Flickr Audio Caption Corpus",
"sec_num": "2.3"
},
{
"text": "Dual encoders are used in a wide range of applications, including signature verification (Bromley et al., 1994) , object tracking (Bertinetto et al., 2016) , sentence similarity (Mueller and Thyagarajan, 2016) , improving neural machine translation (Yang et al., 2019 ) and many others. The core of this set of architectures is a simple two-tower model illustrated in Figure 2 , where inputs x \u2208 X are processed by an encoder g x and inputs y \u2208 Y by a second encoder g y . The inputs may come from the same distribution-or they may be from entirely different sources or modalities. The towers may share the same architecture and weights-or they can be completely unlike and disconnected.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Bromley et al., 1994)",
"ref_id": "BIBREF7"
},
{
"start": 130,
"end": 155,
"text": "(Bertinetto et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 178,
"end": 209,
"text": "(Mueller and Thyagarajan, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 249,
"end": 267,
"text": "(Yang et al., 2019",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "These models are standard in audiovisual image captioning (Harwath and Glass, 2015; Chrupa\u0142a, 2019; Harwath et al., 2018) . In this setting, the dual encoder model, is composed by a visual tower, g vis , processing the images, and an audio tower, g aud , processing the spoken captions. The model is trained to map both modalities into a joint latent space. Here, we extend previous work to consider a batched margin loss, which we show to be superior for learning dense representations for retrieval.",
"cite_spans": [
{
"start": 58,
"end": 83,
"text": "(Harwath and Glass, 2015;",
"ref_id": "BIBREF17"
},
{
"start": 84,
"end": 99,
"text": "Chrupa\u0142a, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 100,
"end": 121,
"text": "Harwath et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
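As a concrete illustration of the two-tower setup, here is a minimal sketch with toy stand-in encoders (simple random linear maps rather than the Inception-ResNet-v2 towers used in this work); the shapes and names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 1536  # dimensionality of the joint latent space used in this work

def make_tower(input_dim, seed):
    """Stand-in encoder: a single random linear map into the shared latent space.
    The actual towers are deep convolutional networks."""
    W = np.random.default_rng(seed).normal(scale=input_dim ** -0.5, size=(input_dim, LATENT_DIM))
    return lambda x: x.reshape(x.shape[0], -1) @ W

g_vis = make_tower(input_dim=64, seed=1)    # visual tower (toy feature size)
g_aud = make_tower(input_dim=32, seed=2)    # audio tower (toy feature size)

images = rng.normal(size=(4, 64))           # a batch of 4 "image" feature vectors
audio = rng.normal(size=(4, 32))            # a batch of 4 "spoken caption" feature vectors
z_img, z_aud = g_vis(images), g_aud(audio)  # both (4, 1536): points in the joint latent space
```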
{
"text": "Notation. The inputs are processed in batches of size B. For each input x k and y k in the batch, 1 \u2264 k \u2264 B , let g x (x k ) and g y (y k ) be their latent representations extracted by the corresponding tower. We define the B \u00d7 B matrix Z as the similarity between the latent representations for each pair of elements in the batch. A natural choice for that similarity is the dot product between the latent representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z ij = g x (x i ) \u2022 g y (y j )",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "As shown in Figure 2 , Z encodes all pairwise associations in the batch. However, an additional aspect of some datasets must be taken into account: often times the same input x can match multiple inputs y or vice-versa-for instance, both Flickr8k and MS-COCO have multiple captions for the each image. To respect these pairs when they land in the same batch-and thus not penalize models for (correctly) associating them-we define a B \u00d7 B masking matrix M:",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M ij = 0, if x i matches y j 1, otherwise",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "All pairs (x k , y k ) match and this equivalence is transitive, so M is symmetric and all diagonal elements M kk , 1 \u2264 k \u2264 B are zero. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
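For illustration, a minimal numpy sketch of how Z (Eq. 1) and M (Eq. 2) could be built for a batch; the `group_ids` argument is hypothetical bookkeeping for recording which in-batch items are known positive pairs:

```python
import numpy as np

def similarity_matrix(z_x, z_y):
    """Z[i, j] = g_x(x_i) . g_y(y_j) over a batch of latent representations (Eq. 1)."""
    return z_x @ z_y.T  # shape (B, B)

def masking_matrix(group_ids):
    """M[i, j] = 0 if x_i matches y_j and 1 otherwise (Eq. 2).

    group_ids is hypothetical bookkeeping: items sharing an id (e.g. two captions of
    the same image that land in the same batch) are treated as matching pairs.
    """
    ids = np.asarray(group_ids)
    return (ids[:, None] != ids[None, :]).astype(np.float32)

# Items 0 and 2 describe the same image, so M[0, 2] = M[2, 0] = 0 (as is the diagonal).
M = masking_matrix([7, 3, 7, 9])
```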
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L T = B k=1 max(0, Z km \u2212 Z kk + \u03b4)+ max(0, Z nk \u2212 Z kk + \u03b4)",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "For each value k, m is randomly drawn from a uniform distribution over indices j such that M kj = 1, and n over indices i such that M ik = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
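A sketch of the triplet loss of Eq. 3 with the random in-batch negative sampling just described (numpy, illustrative only):

```python
import numpy as np

def triplet_loss(Z, M, delta=0.1, rng=None):
    """Triplet loss of Eq. 3: for each positive pair k, sample one negative caption index m
    (a column with M[k, m] = 1) and one negative image index n (a row with M[n, k] = 1)."""
    rng = rng or np.random.default_rng(0)
    loss = 0.0
    for k in range(Z.shape[0]):
        m = rng.choice(np.flatnonzero(M[k, :]))  # random negative y for x_k
        n = rng.choice(np.flatnonzero(M[:, k]))  # random negative x for y_k
        loss += max(0.0, Z[k, m] - Z[k, k] + delta)
        loss += max(0.0, Z[n, k] - Z[k, k] + delta)
    return loss
```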
{
"text": "Masked Margin Softmax Loss. The triplet loss (3) used previously misses opportunities to learn against a wider set of negative examples, namely all those in the batch that are not known to be positively associated (i.e., M ij = 1). To exploit these additional negatives, we minimize the Masked Margin Softmax (MMS) loss function, inspired by Henderson et al. 2017and Yang et al. (2019) . MMS simulates x-to-y and y-to-x retrievals inside the batch. It is defined at a high level as:",
"cite_spans": [
{
"start": 367,
"end": 385,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "L MMS = L xy + L yx (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "L MMS is the sum of losses defined over x-to-y (Eq. 5) and y-to-x (Eq. 6) in-batch retrievals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L xy = \u2212 1 B B i=1 log e Z ii \u2212\u03b4 e Z ii \u2212\u03b4 + B j=1 M ij e Z ij (5) L yx = \u2212 1 B B j=1 log e Z jj \u2212\u03b4 e Z jj \u2212\u03b4 + B i=1 M ij e Z ij",
"eq_num": "(6)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "These are equivalent to a cross-entropy loss after a column-wise or row-wise softmax on the matrix Z, subject to the masking constraints in M and margin \u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
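The following numpy sketch implements Eqs. 4-6 directly (a naive version, without the log-sum-exp stabilization a production implementation would use):

```python
import numpy as np

def mms_loss(Z, M, delta=0.001):
    """Masked Margin Softmax loss (Eqs. 4-6). Diagonal entries of Z are the positives;
    off-diagonal entries where M == 1 are the in-batch negatives."""
    pos = np.exp(np.diag(Z) - delta)  # e^{Z_ii - delta}
    expZ = np.exp(Z)
    neg_xy = (M * expZ).sum(axis=1)   # sum_j M_ij e^{Z_ij}, x-to-y retrieval (Eq. 5)
    neg_yx = (M * expZ).sum(axis=0)   # sum_i M_ij e^{Z_ij}, y-to-x retrieval (Eq. 6)
    L_xy = -np.mean(np.log(pos / (pos + neg_xy)))
    L_yx = -np.mean(np.log(pos / (pos + neg_yx)))
    return L_xy + L_yx                # Eq. 4
```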
{
"text": "The margin hyperparameter \u03b4 is gradually increased as training progresses. Empirically, we found that, with a fixed \u03b4, large values lead to unstable performance in early training, while small values lead to negligible results in final performance. Starting with a small \u03b4 and increasing it does not hurt early training and forces the model to learn from a harder task later on. There many ways to increase \u03b4 along training-e.g. linearly, quadratically, and exponentially. The latter is used in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Contrasting Equations 3 and 4, the former chooses a negative sample randomly, while the latter takes advantage of all negative pairs in the batch and thus improves sample efficiency. L MMS has three main differences with Yang et al. (2019) : (1) a masking term that accounts for the fact that there might be multiple positive choices in the batch for a given input; (2) a varying margin term \u03b4, which is increased during training; (3) a log term that makes MMS more closely resemble a cross-entropy loss.",
"cite_spans": [
{
"start": 221,
"end": 239,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Image to Speech Loss Batch Size R@1 R@5 R@10 R@50 R@100 R@1 R@5 R@10 R@50 R@100 Audio preprocessing. We extract 128 Mel-Frequency Cepstral Coefficients (MFCCs) from the raw audio signals using a window size of 20ms. The audio signals have a sampling rate of 16000Hz. We compute features every 10ms, such that each window has a 50% overlap with its neighbors. During training, we randomly crop/pad the MFCCs in the temporal dimension, and perform data augmentation as in Park et al. (2019), using one mask with a frequency mask parameter of 20 and a time mask parameter of 40. We do not perform time warping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech to Image",
"sec_num": null
},
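A sketch of this front end, assuming librosa for MFCC extraction (the paper does not name its audio library) and a simplified SpecAugment-style masking in place of the Park et al. (2019) reference implementation:

```python
import numpy as np
import librosa

def audio_features(signal, sr=16000, train=True, rng=None):
    """128 MFCCs with 20 ms windows and a 10 ms hop (50% overlap), plus one frequency
    mask (F = 20) and one time mask (T = 40) at training time; no time warping."""
    rng = rng or np.random.default_rng(0)
    mfcc = librosa.feature.mfcc(
        y=signal, sr=sr, n_mfcc=128, n_mels=128, n_fft=512,
        win_length=int(0.020 * sr),  # 20 ms analysis window
        hop_length=int(0.010 * sr),  # one frame every 10 ms
    )                                # shape: (128, n_frames)
    if train:
        f = int(rng.integers(0, 21))                      # frequency mask width in [0, F]
        f0 = int(rng.integers(0, mfcc.shape[0] - f + 1))
        mfcc[f0:f0 + f, :] = 0.0
        t = int(rng.integers(0, min(41, mfcc.shape[1] + 1)))  # time mask width in [0, T]
        t0 = int(rng.integers(0, mfcc.shape[1] - t + 1))
        mfcc[:, t0:t0 + t] = 0.0
    return mfcc
```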
{
"text": "Encoders. Both audio and image encoders are Inception-ResNet-v2 networks (Szegedy et al., 2017) , allowing the model to reap the benefits of relatively low computational cost, fast training and and strong performance when combining the Inception architecture with residual connections. 3 Related to our setting for audio processing, Li et al. (2019) also uses residual convolutional neural networks for state of the art results on Lib-riSpeech dataset (Panayotov et al., 2015) . For the audio tower, we stack 3 replicas of the MFCCs and treat them as images. For each modality, a 1536-dimensional latent space representation is extracted. Despite using the same architecture for both encoders, their weights are not shared. Unless specified otherwise, the models are not pretrained.",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Szegedy et al., 2017)",
"ref_id": "BIBREF45"
},
{
"start": 333,
"end": 349,
"text": "Li et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 452,
"end": 476,
"text": "(Panayotov et al., 2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech to Image",
"sec_num": null
},
{
"text": "Optimization. Models are trained using Adam (Kingma and Ba, 2014), with an initial learning rate of 0.001 and an exponential decay of 0.999 every 1000 training steps, \u03b2 1 = 0.9, \u03b2 2 = 0.999 and = 1e\u22128. We use a weight decay of 4e\u22125, and train on 32 GPUs until convergence. Unless specified otherwise, the optimization objective is minimizing the loss L MMS (Eq. 4) with a margin term initially set to \u03b4 = 0.001 exponentially and increased by a factor of 1.002 every 1000 steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech to Image",
"sec_num": null
},
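For concreteness, a sketch of the two schedules described above (a staircase interpretation of "every 1000 steps"; the exact decay formulation is an assumption):

```python
def schedules(step):
    """Learning rate and MMS margin as functions of the global training step."""
    lr = 0.001 * (0.999 ** (step // 1000))     # Adam lr: 1e-3 decayed by 0.999 every 1000 steps
    delta = 0.001 * (1.002 ** (step // 1000))  # margin: 1e-3 grown by 1.002 every 1000 steps
    return lr, delta

lr, delta = schedules(step=3_000_000)  # e.g. at the end of the 3M pretraining steps on CSC
```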
{
"text": "Our primary aim with CSC is to use it for pretraining for later fine-tuning and evaluation on datasets with human speech instead of TTS. Nevertheless, we can compare different loss functions and different batch sizes on the CSC validation set to better understand the impact of these parameters. We train models on CSC for 3 million steps, cropping/padding spoken captions to a duration of 3.5 seconds and using the loss functions L T (Eq. 3) and L MMS (Eq. 4). We find continuing improvements as batch size increases from 12 to 24 to 48. Furthermore, with the same batch size of 48, models optimized for minimizing L MMS perform substantially better than those using L T , as summarized in Table 1 . Of particular note is that R@1 scores for L MMS (batch size 48) are more than double those of L T in both directions. Table 2 compares previous results on the FACC dataset with those obtained by variations of our model. As a pre-processing step, spoken captions are cropped/padded to a duration of 8 seconds. After pretraining the model in CSC, we explore all possible combinations of using or not the pretrained weights for each of the branches g aud and g vis as a warm-starting point, training until convergence on FACC. Warm-starting each of the branches in the dual-encoder leads to substantial improvements",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 1",
"ref_id": null
},
{
"start": 819,
"end": 826,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Retrieval: Conceptual Spoken Captions",
"sec_num": "4.2"
},
{
"text": "Image to Caption Model R@1 R@5 R@10 R@50 R@100 R@1 R@5 R@10 R@50 R@100 over the baseline, and combining both branches leads to the best overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Caption to Image",
"sec_num": null
},
{
"text": "In particular, we improve R@10 for caption-toimage from the .296 obtained by Chrupa\u0142a (2019) by 20% absolute to .495, without using multitask training or pretraining g vis on ImageNet (Deng et al., 2009) . The multitask training approach of Chrupa\u0142a (2019) is complementary to our improvements, so further gains might be obtained by combining these strategies. Furthermore, very deep, residual convolutional neural networks over characters have been shown to perform well for text-based tasks (Conneau et al., 2017) . We expect that our strategy of using the same basic architecture across different input types (speech, text and image) can be fruitfully extended to that setting. A related observation: while our results exceed previous results reported on text/image retrieval settings for FACC, we expect that recent advances in text encoding could easily beat those reported numbers.",
"cite_spans": [
{
"start": 77,
"end": 92,
"text": "Chrupa\u0142a (2019)",
"ref_id": "BIBREF11"
},
{
"start": 184,
"end": 203,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF14"
},
{
"start": 493,
"end": 515,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Caption to Image",
"sec_num": null
},
{
"text": "We also explore very low-data regimes using our pretrained model (see Fig. 3 ). Using small training subsets randomly drawn from FACC, we report performance as a function of how much data the model sees. With as little as 10% of the original training data (3000 image/spoken caption pairs), the warmstarted model performs competitively with a model trained on all training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 76,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Caption to Image",
"sec_num": null
},
{
"text": "Qualitative evaluation. Once a model is trained, any input (image or spoken caption) can be be used to query the corpus of images and spoken captions for nearest neighbors in the latent space. Figure 4 shows some examples of retrieved nearest neighbors in FACC's test set. Given a spoken caption or ",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Caption to Image",
"sec_num": null
},
{
"text": "I2S R@100 S2I R@100 I2S R@50 S2I R@50 I2S R@10 S2I R@10 I2S R@5 S2I R@5 I2S R@1 S2I R@1 Figure 3 : Ablations on low-data regime on FACC: chart shows recall scores for image-to-speech (I2S) and speech-to-image (S2I) retrieval, as a function of the amount of training data used for fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recall",
"sec_num": null
},
{
"text": "an image we show the five nearest image neighbors and five nearest caption neighbors. From these, it is clear that the representations capture many semantically salient attributes of the inputs. The retrieved items correctly share many thematic elements and many are clearly good matches even though the particular image-caption pairs are not associated in the data. This serves to reinforce our observation that R@k evaluations using only the known paired items is likely to underestimate the actual performance of the models-which we show to be the case with human evaluations in Section 4.4. Only some items are substantially incompatible: e.g. an image of a car for a caption about a woman in a river (they share water spraying), a picture of three adults for a caption about children raising their hands, and a caption about a boy climbing a wall for an image of children playing leapfrog). That said, many details are poor matches, such as the count of objects (one ball versus many), colors Figure 4 : Nearest neighbors in the joint visual and acoustic latent space, best viewed with zoom: using 4 spoken captions and 4 images as queries, we extract from FACC's test set the closest 5 images and 5 spoken captions in the latent space for each of them. For simplicity, we show the text associated with each spoken caption.",
"cite_spans": [],
"ref_spans": [
{
"start": 998,
"end": 1006,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recall",
"sec_num": null
},
{
"text": "(brown dogs versus multicolored ones), people descriptions (elderly woman versus male dirt biker), object identification (e.g. a yellow pool noodle viewed as similar to slides), processes (jumping versus sliding) and perspective (man looking up versus viewed from behind and climbing). As such, there is clearly significant headroom for better, more finegrained modeling of both captions and images. Additionally, cross-modal attention mechanisms (Xu et al., 2015) and other explainability techniques (Ribeiro et al., 2016) could help better inspect and understand a model's predictions.",
"cite_spans": [
{
"start": 447,
"end": 464,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recall",
"sec_num": null
},
{
"text": "Furthermore, as noted by Chrupa\u0142a et al. (2017) , text-based retrieval models often handle misspellings poorly. In contrast, speech-based models are unlikely to suffer from similar problems because they inherently must deal with variation in the expression of words and utterances. For instance, the caption \"a dirt biker rides his motocycle through the woods\" (fourth row of Figure 4 ) is highly correlated with the correctly spelled sentences.",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "Chrupa\u0142a et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 376,
"end": 386,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recall",
"sec_num": null
},
{
"text": "We ran human evaluations to answer two questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "4.4"
},
{
"text": "(1) how much does cropping limit model performance? and (2) how much do retrieval evaluations based only on positive associations underestimate model performance? Hints about both questions can be seen in the qualitative evaluation (Fig. 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "(Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "4.4"
},
{
"text": "To answer the first question, Table 3 shows the ratings for ground truth image/caption pairs in the FACC test set. The uncropped row shows that overall the captions are high quality and do match the full images. However, human ratings on images cropped at the center (which is what is provided to the models) show that there is considerable loss from cropping-only 62.5% of cropped images are rated as good matches by all five raters. Inspection makes it clear why cropping hurts: for example an \"good\" ratings (out of 5) 1+ 2+ 3+ 4+ 5 Cropped . 949 .918 .874 .800 .625 Uncropped .995 .994 .989 .971 .891 Table 3 : Human evaluation results on ground truth pairs on the test set of FACC, using either center cropped (which the models receive) or uncropped versions of the images.",
"cite_spans": [
{
"start": 546,
"end": 604,
"text": "949 .918 .874 .800 .625 Uncropped .995 .994 .989 .971 .891",
"ref_id": null
}
],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 3",
"ref_id": null
},
{
"start": 605,
"end": 612,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "4.4"
},
{
"text": "image of a snowboarder in the air next to another on a ski lift is cropped such that the snowboarder is missing, and thus a poor match to captions mentioning the snowboarder. This clearly indicates that standard cropping (which we follow) inherently limits model performance and that strategies that use the full image should be explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "4.4"
},
{
"text": "Standard retrieval evaluations are blind to pairs that match but are not associated in the data. To address this and answer the second question posed above, we present the top-5 retrieved captions for each image and the top-5 retrieved images for each caption in FACC's test set to human raters. To increase speed and decrease costs, we show raters the original Flickr8k textual captions instead of the spoken ones. Each pair is judged by five raters as \"good\" or not. This gives a soft measure of the compatibility of each pair based on fast binary judgments from each rater. For retrieval evaluations of a model, we compute recall based on the majority of human raters approving each image-caption pair: R@1 is the percentage of top-1 results and R@5 the percentage of top-5 results that are evaluated as a match by at least 3 of the 5 raters. Table 4 shows these metrics computed on retrieval outputs from two settings: FACC training from scratch and FACC fine-tuning after CSC pretraining. It also shows the corresponding automatic evaluations from Table 2 for easy comparison. These results make it clear that evaluation based only on positive associations is too rigid: speech-toimage retrieval based on human evaluations shows that a good matching item is returned in 52.2% of cases rather than just the 36.8% indicated by strict corpus matches. For image-to-speech retrieval the 55.8% strict measure goes up to 63.8%. That said, the results also show that the strict measure is nevertheless a useful indicator for comparing relative model performance: the model pretrained on CSC beats the corresponding one trained on FACC from scratch, on both human and automatic evaluations.",
"cite_spans": [],
"ref_spans": [
{
"start": 846,
"end": 853,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "4.4"
},
{
"text": "Eval Pretrain R@1 R@5 R@1 R@5 Auto .018 .063 .024 .072 Auto .139 .368 .182 .558 Humans .056 .154 .070 .196 Humans .229 .522 .306 .638 Table 4 : Comparison of human rater scores (majority agreement) versus using only corpus-known pairs on all metrics for speech-to-image (S2I) and imageto-speech (I2S) retrieval. Rows with Auto evaluation correspond to Ours (from scratch) and Ours (warmstarting all) scores in Table 2 .",
"cite_spans": [
{
"start": 55,
"end": 128,
"text": "Auto .139 .368 .182 .558 Humans .056 .154 .070 .196 Humans .229 .522 .306",
"ref_id": null
}
],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 4",
"ref_id": null
},
{
"start": 410,
"end": 417,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "S2I I2S",
"sec_num": null
},
{
"text": "Large-scale datasets are essential for training deep networks from scratch. In this paper, we present a scalable method for generating an audio caption dataset taking advantage of TTS systems to create millions of data pairs. Using the MMS loss, we demonstrate that pretraining on this dataset greatly improves performance on a human-generated audio caption dataset. As TTS models continue to improve and be developed for more languages, this data augmentation strategy will only become more robust and helpful over time. Finally, using human evaluations, we show evidence that corpus-based retrieval scores underestimate actual performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "This present work is focused on the here and now since captions describe a snapshot in time and focus on the visual entities and events involved in them. We thus have little hope to learn representations for words like visit, career and justice, for example. Videos can help with process oriented words like visit and could get significant components of words like career (such as the visual contexts, but not the overall path with intermediate goals involved in careers). They are likely to be hopeless for abstract words like justice. To address problems of this sort, there are likely many opportunities to combine ideas from unsupervised term discovery (Kamper et al., 2016; Bansal et al., 2017) with audiovisual word learning (Harwath et al., 2018) and models of visual grounding that have been applied to text (Kiros et al., 2018) . Being able to learn effective representations from raw audio associated with images could provide new possibilities for work that learns from videos and text (transcribed speech) , and in particular open up such techniques to new languages and domains.",
"cite_spans": [
{
"start": 657,
"end": 678,
"text": "(Kamper et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 679,
"end": 699,
"text": "Bansal et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 731,
"end": 753,
"text": "(Harwath et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 816,
"end": 836,
"text": "(Kiros et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "See Bianco et al. (2018) for an extensive benchmark analysis of popular convolutional neural network architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Radu Soricut, Austin Waters, Alex Ku and Jeffrey Ling for the helpful comments that assisted the development of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambigua- tion. In Proceedings of the 39th annual meeting of the Association for Computational Linguistics, pages 26-33. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Weakly supervised spoken term discovery using cross-lingual side information",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Sharon Goldwater, and Adam Lopez. 2017. Weakly supervised spoken term discovery using cross-lingual side information. In Proceedings of the 42nd IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP-2017).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic description generation from images: A survey of models, datasets, and evaluation measures",
"authors": [
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ruket",
"middle": [],
"last": "Cakici",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Erkut",
"middle": [],
"last": "Erdem",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "409--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from im- ages: A survey of models, datasets, and evalua- tion measures. Journal of Artificial Intelligence Re- search, 55:409-442.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fullyconvolutional siamese networks for object tracking",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Bertinetto",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Valmadre",
"suffix": ""
},
{
"first": "Joao",
"middle": [
"F"
],
"last": "Henriques",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Philip Hs",
"middle": [],
"last": "Torr",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "850--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Bertinetto, Jack Valmadre, Joao F Henriques, An- drea Vedaldi, and Philip HS Torr. 2016. Fully- convolutional siamese networks for object tracking. In European Conference on Computer Vision, pages 850-865. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Phoneme based embedded segmental k-means for unsupervised term discovery",
"authors": [
{
"first": "Saurabhch",
"middle": [],
"last": "Bhati",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "K Sri Rama",
"middle": [],
"last": "Murty",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5169--5173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saurabhch Bhati, Herman Kamper, and K Sri Rama Murty. 2018. Phoneme based embedded segmen- tal k-means for unsupervised term discovery. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5169-5173. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Benchmark analysis of representative deep neural network architectures",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Bianco",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Cadene",
"suffix": ""
},
{
"first": "Luigi",
"middle": [],
"last": "Celona",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Napoletano",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "64270--64277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Bianco, Remi Cadene, Luigi Celona, and Paolo Napoletano. 2018. Benchmark analysis of represen- tative deep neural network architectures. IEEE Ac- cess, 6:64270-64277.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Signature verification using a \"siamese\" time delay neural network",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Bromley",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "S\u00e4ckinger",
"suffix": ""
},
{
"first": "Roopak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 1994,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "737--744",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1994. Signature veri- fication using a \"siamese\" time delay neural network. In Advances in Neural Information Processing Sys- tems, pages 737-744.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Amazon's mechanical turk: A new source of inexpensive, yet high-quality, data? Perspectives on",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Buhrmester",
"suffix": ""
},
{
"first": "Tracy",
"middle": [],
"last": "Kwang",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"D"
],
"last": "Gosling",
"suffix": ""
}
],
"year": 2011,
"venue": "Psychological Science",
"volume": "6",
"issue": "1",
"pages": "3--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. 2011. Amazon's mechanical turk: A new source of inexpensive, yet high-quality, data? Per- spectives on Psychological Science, 6(1):3-5.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Temporally grounding natural sentence in video",
"authors": [
{
"first": "Jingyuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xinpeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zequn",
"middle": [],
"last": "Jie",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 162-171, Brussels, Belgium. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Microsoft coco captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.00325"
]
},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Symbolic inductive bias for visually grounded learning of spoken language",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2019,
"venue": "To appear in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a. 2019. Symbolic inductive bias for visually grounded learning of spoken language. In To appear in Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Representations of language in a model of visually grounded speech signal",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Al- ishahi. 2017. Representations of language in a model of visually grounded speech signal. arXiv preprint arXiv:1702.01991.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Very deep convolutional networks for text classification",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1107--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2017. Very deep convolutional net- works for text classification. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, pages 1107-1116, Valencia, Spain. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "2009 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hier- archical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. Ieee.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The unreasonable effectiveness of data",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Halevy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The unreasonable effectiveness of data.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The symbol grounding problem",
"authors": [
{
"first": "Stevan",
"middle": [],
"last": "Harnad",
"suffix": ""
}
],
"year": 1990,
"venue": "Physica D: Nonlinear Phenomena",
"volume": "42",
"issue": "1-3",
"pages": "335--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stevan Harnad. 1990. The symbol grounding prob- lem. Physica D: Nonlinear Phenomena, 42(1- 3):335-346.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep multimodal semantic embeddings for speech and images",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "237--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath and James Glass. 2015. Deep mul- timodal semantic embeddings for speech and im- ages. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 237- 244. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards visually grounded sub-word speech unit discovery",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08213"
]
},
"num": null,
"urls": [],
"raw_text": "David Harwath and James Glass. 2019. Towards vi- sually grounded sub-word speech unit discovery. arXiv preprint arXiv:1902.08213.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Jointly discovering visual objects and spoken words from raw sensory input",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "Adria",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "D\u00eddac",
"middle": [],
"last": "Sur\u00eds",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the European Conference on Computer Vision (ECCV)",
"volume": "",
"issue": "",
"pages": "649--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath, Adria Recasens, D\u00eddac Sur\u00eds, Galen Chuang, Antonio Torralba, and James Glass. 2018. Jointly discovering visual objects and spoken words from raw sensory input. In Proceedings of the European Conference on Computer Vision (ECCV), pages 649-665.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised learning of spoken language with visual context",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1858--1866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Infor- mation Processing Systems, pages 1858-1866.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Yunhsuan",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Laszlo",
"middle": [],
"last": "Lukacs",
"suffix": ""
},
{
"first": "Ruiqi",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.00652"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun- hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Ku- mar, Balint Miklos, and Ray Kurzweil. 2017. Effi- cient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research, 47:853-899.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Infants detection of the sound patterns of words in fluent speech",
"authors": [
{
"first": "P",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
},
{
"first": "R",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 1995,
"venue": "Cognitive psychology",
"volume": "29",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "PW Jusczyk and RN Aslin. 1995. Infants detection of the sound patterns of words in fluent speech. Cogni- tive psychology, 29:1-23.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised word segmentation and lexicon discovery using acoustic word embeddings",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Audio, Speech and Language Processing",
"volume": "24",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Aren Jansen, and Sharon Goldwater. 2016. Unsupervised word segmentation and lexicon discovery using acoustic word embeddings. IEEE Transactions on Audio, Speech and Language Pro- cessing, 24(4):669679.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A segmental framework for fullyunsupervised large-vocabulary speech recognition",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "46",
"issue": "",
"pages": "154--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Aren Jansen, and Sharon Goldwa- ter. 2017a. A segmental framework for fully- unsupervised large-vocabulary speech recognition. Computer Speech & Language, 46:154-174.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Visually grounded learning of keyword prediction from untranscribed speech",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Settle",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Shakhnarovich",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.08136"
]
},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Shane Settle, Gregory Shakhnarovich, and Karen Livescu. 2017b. Vi- sually grounded learning of keyword prediction from untranscribed speech. arXiv preprint arXiv:1703.08136.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep fragment embeddings for bidirectional image sentence mapping",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Li",
"middle": [
"F"
],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1889--1897",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy, Armand Joulin, and Li F Fei-Fei. 2014. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in neural in- formation processing systems, pages 1889-1897.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Illustrative language understanding: Large-scale visual grounding with image search",
"authors": [
{
"first": "Jamie",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "922--933",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: Large-scale vi- sual grounding with image search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 922-933, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Jasper: An end-to-end convolutional neural acoustic model",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Lavrukhin",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Leary",
"suffix": ""
},
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"M"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ravi Teja",
"middle": [],
"last": "Gadde",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.03288"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Li, Vitaly Lavrukhin, Boris Ginsburg, Ryan Leary, Oleksii Kuchaiev, Jonathan M Cohen, Huyen Nguyen, and Ravi Teja Gadde. 2019. Jasper: An end-to-end convolutional neural acoustic model. arXiv preprint arXiv:1904.03288.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Confer- ence on Computer Vision, pages 740-755. Springer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Siamese recurrent architectures for learning sentence similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In Thirtieth AAAI Conference on Artificial Intel- ligence.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Variational inference for acoustic unit discovery",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Ondel",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u010cernock\u00fd",
"suffix": ""
}
],
"year": 2016,
"venue": "Procedia Computer Science",
"volume": "81",
"issue": "",
"pages": "80--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Ondel, Luk\u00e1\u0161 Burget, and Jan\u010cernock\u1ef3. 2016. Variational inference for acoustic unit discovery. Procedia Computer Science, 81:80-86.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Librispeech: an asr corpus based on public domain audio books",
"authors": [
{
"first": "Vassil",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "Guoguo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Unsupervised pattern discovery in speech",
"authors": [
{
"first": "Alex",
"middle": [
"S"
],
"last": "Park",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "16",
"issue": "",
"pages": "186--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex S Park and James R Glass. 2007. Unsuper- vised pattern discovery in speech. IEEE Transac- tions on Audio, Speech, and Language Processing, 16(1):186-197.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Specaugment: A simple data augmentation method for automatic speech recognition",
"authors": [
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Park",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Ekin",
"middle": [
"D"
],
"last": "Cubuk",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.08779"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. Specaugment: A simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Why should i trust you?: Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144. ACM.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Incidental supervision: Moving beyond supervised learning",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth. 2017. Incidental supervision: Moving be- yond supervised learning. In Thirty-First AAAI Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Word segmentation: The role of distributional cues",
"authors": [
{
"first": "Jenny",
"middle": [
"R"
],
"last": "Saffran",
"suffix": ""
},
{
"first": "Elissa",
"middle": [
"L"
],
"last": "Newport",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of Memory and Language",
"volume": "35",
"issue": "",
"pages": "606--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny R. Saffran, Elissa L. Newport, and Richard N. Aslin. 1996. Word segmentation: The role of dis- tributional cues. Journal of Memory and Language, 35:4:606-621.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning",
"authors": [
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2556--2565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for au- tomatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2556-2565.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Grounded compositional semantics for finding and describing images with sentences",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "207--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Andrej Karpathy, Quoc V Le, Christo- pher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Revisiting unreasonable effectiveness of data in deep learning era",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "843--852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting unreasonable ef- fectiveness of data in deep learning era. In Proceed- ings of the IEEE International Conference on Com- puter Vision, pages 843-852.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Look, listen, and decode: Multimodal speech recognition with images",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "573--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Sun, David Harwath, and James Glass. 2016. Look, listen, and decode: Multimodal speech recog- nition with images. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 573-578. IEEE.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Inception-v4, inception-resnet and the impact of residual connections on learning",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"A"
],
"last": "Alemi",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connec- tions on learning. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Wavenet: A generative model for raw audio",
"authors": [
{
"first": "A\u00e4ron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "Heiga",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A\u00e4ron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. SSW, 125.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Unsupervised learning of acoustic sub-word units",
"authors": [
{
"first": "Balakrishnan",
"middle": [],
"last": "Varadarajan",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers",
"volume": "",
"issue": "",
"pages": "165--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balakrishnan Varadarajan, Sanjeev Khudanpur, and Emmanuel Dupoux. 2008. Unsupervised learning of acoustic sub-word units. In Proceedings of the 46th Annual Meeting of the Association for Compu- tational Linguistics on Human Language Technolo- gies: Short Papers, pages 165-168. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 3156-3164.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT- 50M: Pushing the limits of paraphrastic sentence em- beddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.03044"
]
},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. arXiv preprint arXiv:1502.03044.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Improving multilingual sentence embedding using bidirectional dual encoder with additive margin softmax",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Hernandez Abrego",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08564"
]
},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Gustavo Hernandez Abrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Im- proving multilingual sentence embedding using bi- directional dual encoder with additive margin soft- max. arXiv preprint arXiv:1902.08564.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Fast and accurate reading comprehension by combining self-attention and convolution",
"authors": [
{
"first": "Adams",
"middle": [
"Wei"
],
"last": "Yu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Dohan",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In International Conference on Learning Representations (ICLR).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Loss. Both Chrupa\u0142a (2019) and Harwath et al. (2018) (and their previous work) employ the triplet loss function given in Equation 3.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td/><td>Socher et al. 2014</td><td>-</td><td>-</td><td>.286</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">-.2r90</td><td>-</td><td>-</td></tr><tr><td>Text</td><td>Karpathy et al. 2014 Harwath and Glass 2015</td><td>--</td><td>--</td><td>.425 .490</td><td>--</td><td>--</td><td>--</td><td>--</td><td>.440 .567</td><td>--</td><td>--</td></tr><tr><td/><td>Chrupa\u0142a et al. 2017</td><td colspan=\"3\">.127 .364 .494</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>Harwath and Glass 2015</td><td>-</td><td>-</td><td>.179</td><td>-</td><td>-</td><td>-</td><td>-</td><td>.243</td><td>-</td><td>-</td></tr><tr><td/><td>Chrupa\u0142a et al. 2017</td><td colspan=\"3\">.055 0.163 .253</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>Chrupa\u0142a 2019</td><td>-</td><td>-</td><td>.296</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Speech</td><td>Ours (from scratch)</td><td colspan=\"2\">.018 .063</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": ".101 .288 .428 .024 .072 .124 .332 .458 Ours (warm-starting g aud ) .041 .138 .211 .467 .613 .550 .166 .241 .522 .654 Ours (warm-starting g vis ) .",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table/>",
"text": "Retrieval scores on the test set of FACC.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}