|
{ |
|
"paper_id": "N16-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:36:41.950917Z" |
|
}, |
|
"title": "Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning", |
|
"authors": [ |
|
{ |
|
"first": "Janarthanan", |
|
"middle": [], |
|
"last": "Rajendran", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Khapra", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Balaraman", |
|
"middle": [], |
|
"last": "Ravindran", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recently there has been a lot of interest in learning common representations for multiple views of data. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, V_1 and V_2) but parallel data is available between each of these views and a pivot view (V_3). We propose a model for learning a common representation for V_1, V_2 and V_3 using only the parallel data available between V_1V_3 and V_2V_3. The proposed model is generic and even works when there are n views of interest and only one pivot view which acts as a bridge between them. There are two specific downstream applications that we focus on: (i) transfer learning between languages L_1, L_2, ..., L_n using a pivot language L, and (ii) cross modal access between images and a language L_1 using a pivot language L_2. Our model achieves state-of-the-art performance in multilingual document classification on the publicly available multilingual TED corpus and promising results in multilingual multimodal retrieval on a new dataset created and released as a part of this work.",
|
"pdf_parse": { |
|
"paper_id": "N16-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recently there has been a lot of interest in learning common representations for multiple views of data. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, V_1 and V_2) but parallel data is available between each of these views and a pivot view (V_3). We propose a model for learning a common representation for V_1, V_2 and V_3 using only the parallel data available between V_1V_3 and V_2V_3. The proposed model is generic and even works when there are n views of interest and only one pivot view which acts as a bridge between them. There are two specific downstream applications that we focus on: (i) transfer learning between languages L_1, L_2, ..., L_n using a pivot language L, and (ii) cross modal access between images and a language L_1 using a pivot language L_2. Our model achieves state-of-the-art performance in multilingual document classification on the publicly available multilingual TED corpus and promising results in multilingual multimodal retrieval on a new dataset created and released as a part of this work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The proliferation of multilingual and multimodal content online has ensured that multiple views of the same data exist. For example, it is common to find the same article published in multiple languages online in multilingual news articles, multilingual wikipedia articles, etc. Such multiple views can even belong to different modalities. For example, images and their textual descriptions are two views of the same entity. Similarly, audio, video and subtitles of a movie are multiple views of the same entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Learning common representations for such multiple views of data will help in several downstream applications. For example, learning a common representation for images and their textual descriptions could help in finding images which match a given textual description. Further, such common representations can also facilitate transfer learning between views. For example, a document classifier trained on one language (view) can be used to classify documents in another language by representing documents of both languages in a common subspace.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing approaches to common representation learning (Ngiam et al., 2011; Klementiev et al., 2012; Chandar et al., 2013; Chandar et al., 2014; Andrew et al., 2013; Wang et al., 2015) except (Hermann and Blunsom, 2014b) typically require parallel data between all views. However, in many real-world scenarios such parallel data may not be available. For example, while there are many publicly available datasets containing images and their corresponding English captions, it is very hard to find datasets containing images and their corresponding captions in Russian, Dutch, Hindi, Urdu, etc. In this work, we are interested in addressing such scenarios. More specifically, we consider scenarios where we have n different views but parallel data is only available between each of these views and a pivot view. In particular, there is no parallel data available between the non-pivot views.",
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 74, |
|
"text": "(Ngiam et al., 2011;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 75, |
|
"end": 99, |
|
"text": "Klementiev et al., 2012;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 121, |
|
"text": "Chandar et al., 2013;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 143, |
|
"text": "Chandar et al., 2014;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "Andrew et al., 2013;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 183, |
|
"text": "Wang et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To this end, we propose Bridge Correlational Neural Networks (Bridge CorrNets) which learn aligned representations across multiple views using a pivot view. We build on the work of (Chandar et al., 2016 ) but unlike their model, which only addresses scenarios where direct parallel data is available between two views, our model can work for n(\u22652) views even when no parallel data is available between all of them. Our model only requires parallel data between each of these n views and a pivot view. During training, our model maximizes the correlation between the representations of the pivot view and each of the n views. Intuitively, the pivot view ensures that similar entities across different views get mapped close to each other since the model would learn to map each of them close to the corresponding entity in the pivot view.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 202, |
|
"text": "(Chandar et al., 2016", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate our approach using two downstream applications. First, we employ our model to facilitate transfer learning between multiple languages using English as the pivot language. For this, we do an extensive evaluation using 110 source-target language pairs and clearly show that we outperform the current state-of-the-art approach (Hermann and Blunsom, 2014b). Second, we employ our model to enable cross modal access between images and French/German captions using English as the pivot view. For this, we created a test dataset consisting of images and their captions in French and German in addition to the English captions which were publicly available. To the best of our knowledge, this task of retrieving images given French/German captions (and vice versa) without direct parallel training data between them has not been addressed in the past. Even on this task we report promising results. Code and data used in this paper can be downloaded from http://sarathchandar.in/bridge-corrnet.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Canonical Correlation Analysis (CCA) and its variants (Hotelling, 1936; Vinod, 1976; Nielsen et al., 1998; Cruz-Cano and Lee, 2014; Akaho, 2001) are the most commonly used methods for learning a common representation for two views. However, most of these models generally work with two views only. Even though there are multi-view generalizations of CCA (Tenenhaus and Tenenhaus, 2011; Luo et al., 2015) , their computational complexity makes them unsuitable for larger data sizes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 71, |
|
"text": "(Hotelling, 1936;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 84, |
|
"text": "Vinod, 1976;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 106, |
|
"text": "Nielsen et al., 1998;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 131, |
|
"text": "Cruz-Cano and Lee, 2014;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 144, |
|
"text": "Akaho, 2001)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 385, |
|
"text": "(Tenenhaus and Tenenhaus, 2011;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 403, |
|
"text": "Luo et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another class of algorithms for multiview learning is based on Neural Networks. One of the earliest neural network based model for learning common representations was proposed in (Hsieh, 2000) . Recently, there has been a renewed interest in this field and several neural network based models have been proposed. For example, Multimodal Autoencoder (Ngiam et al., 2011) , Deep Canonically Correlated Autoencoder (Wang et al., 2015) , Deep CCA (Andrew et al., 2013) and Correlational Neural Networks (CorrNet) (Chandar et al., 2016) . CorrNet performs better than most of the above mentioned methods and we build on their work as discussed in the next section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 192, |
|
"text": "(Hsieh, 2000)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 369, |
|
"text": "(Ngiam et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 431, |
|
"text": "(Wang et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 464, |
|
"text": "(Andrew et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 531, |
|
"text": "(Chandar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One of the tasks that we address in this work is multilingual representation learning where the aim is to learn aligned representations for words across languages. Some notable neural network based approaches here include the works of (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013; Hermann and Blunsom, 2014b; Hermann and Blunsom, 2014a; Chandar et al., 2014; Soyer et al., 2015; Gouws et al., 2015) . However, except for (Hermann and Blunsom, 2014a; Hermann and Blunsom, 2014b), none of these other works handle the case when parallel data is not available between all languages. Our model addresses this issue and outperforms the model of Hermann and Blunsom (2014b).", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 260, |
|
"text": "(Klementiev et al., 2012;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 278, |
|
"text": "Zou et al., 2013;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 300, |
|
"text": "Mikolov et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 328, |
|
"text": "Hermann and Blunsom, 2014b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 356, |
|
"text": "Hermann and Blunsom, 2014a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 378, |
|
"text": "Chandar et al., 2014;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 398, |
|
"text": "Soyer et al., 2015;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 418, |
|
"text": "Gouws et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The task of cross modal access between images and text addressed in this work comes under MultiModal Representation Learning where each view belongs to a different modality. Ngiam et al. (2011) proposed an autoencoder based solution to learning common representation for audio and video. Srivastava and Salakhutdinov (2014) extended this idea to RBMs and learned common representations for image and text. Other solutions for image/text representation learning include (Zheng et al., 2014a; Zheng et al., 2014b; Socher et al., 2014). All these approaches require parallel data between the two views and do not address multimodal, multilingual learning in situations where parallel data is available only between different views and a pivot view.",

"cite_spans": [

{

"start": 174,

"end": 193,

"text": "Ngiam et al. (2011)",

"ref_id": "BIBREF23"

},

{

"start": 469,

"end": 490,

"text": "(Zheng et al., 2014a;",

"ref_id": "BIBREF34"

},

{

"start": 490,

"end": 511,

"text": "Zheng et al., 2014b;",

"ref_id": "BIBREF35"

},

{

"start": 511,

"end": 532,
|
"text": "Socher et al., 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the past, pivot/bridge languages have been used to facilitate MT (for example, (Wu and Wang, 2007; Cohn and Lapata, 2007; Utiyama and Isahara, 2007; Nakov and Ng, 2009) ), transitive CLIR (Ballesteros, 2000; Lehtokangas et al., 2008) , transliteration and transliteration mining (Khapra et al., 2010a; Khapra et al., 2010b; Zhang et al., 2011) . None of these works use neural networks but it is important to mention them here because they use the concept of a pivot language (view) which is central to our work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Wu and Wang, 2007;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 124, |
|
"text": "Cohn and Lapata, 2007;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 151, |
|
"text": "Utiyama and Isahara, 2007;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 171, |
|
"text": "Nakov and Ng, 2009)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 210, |
|
"text": "(Ballesteros, 2000;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 236, |
|
"text": "Lehtokangas et al., 2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 304, |
|
"text": "(Khapra et al., 2010a;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 326, |
|
"text": "Khapra et al., 2010b;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 346, |
|
"text": "Zhang et al., 2011)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe Bridge CorrNet which is an extension of the CorrNet model proposed by (Chandar et al., 2016). They address the problem of learning common representations between two views when parallel data is available between them. We propose an extension to their model which simultaneously learns a common representation for M views when parallel data is available only between one pivot view and the remaining M\u22121 views.",
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Chandar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Let these views be denoted by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "V_1, V_2, ..., V_M and let d_1, d_2, ..., d_M be their respective dimensionalities. Let the training data be Z = {z^i}_{i=1}^{N}",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where each training instance contains only two views, i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "z^i = (v^i_j, v^i_M) where j \u2208 {1, 2, .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "., M\u22121} and M is the pivot view. To be more clear, the training data contains N_1 instances for which",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(v^i_1, v^i_M) are available, N_2 instances for which (v^i_2, v^i_M) are available, and so on, till N_{M\u22121} instances for which (v^i_{M\u22121}, v^i_M) are available (such that N_1 + N_2 + ... + N_{M\u22121} = N).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We denote each of these disjoint pairwise training sets by Z_1, Z_2, ..., Z_{M\u22121}, such that Z is the union of all these sets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As an illustration consider the case when English, French and German texts are the three views of interest with English as the pivot view. As training data, we have N 1 instances containing English and their corresponding French texts and N 2 instances containing English and their corresponding German texts. We are then interested in learning a common representation for English, French and German even though we do not have any training instance containing French and their corresponding German texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Bridge CorrNet uses an encoder-decoder architecture with a correlation-based regularizer to achieve this. It contains one encoder-decoder pair for each of the M views. For each view V_j, we have the encoder in Equation (1), where f is any non-linear function such as sigmoid or tanh, W_j \u2208 R^{k\u00d7d_j} is the encoder matrix for view V_j, and b \u2208 R^k is the common bias shared by all the encoders. We also compute a hidden representation for the concatenated training instance",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h_{V_j}(v_j) = f(W_j v_j + b)",
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "z = (v_j, v_M)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "using the following encoder function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h_Z(z) = f(W_j v_j + W_M v_M + b)",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the remainder of this paper, whenever we drop the subscript for the encoder, the encoder is determined by its argument. For example",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "h(v_j) means h_{V_j}(v_j), h(z) means h_Z(z), and so on.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our model also has a decoder corresponding to each view as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g_{V_j}(h) = p(W_j h + c_j)",
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where p can be any activation function, W_j \u2208 R^{d_j\u00d7k} is the decoder matrix for view V_j, and c_j \u2208 R^{d_j} is the decoder bias for view V_j. We also define g(h) as simply the concatenation of",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[g_{V_j}(h), g_{V_M}(h)].",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In effect, h_{V_j}(.) encodes the input v_j into a hidden representation h, and then g_{V_j}(.) tries to decode/reconstruct v_j from this hidden representation h. Note that h can be computed using",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "h(v_j) or h(v_M).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The decoder can then be trained to decode/reconstruct both v_j and v_M given a hidden representation computed using either one of them. More formally, we train Bridge CorrNet by minimizing the following objective function:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "J_Z(\u03b8) = \u2211_{i=1}^{N} L(z^i, g(h(z^i))) + \u2211_{i=1}^{N} L(z^i, g(h(v^i_{l(i)}))) + \u2211_{i=1}^{N} L(z^i, g(h(v^i_M))) \u2212 \u03bb corr(h(V_{l(i)}), h(V_M))",
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where l(i) = j if z^i \u2208 Z_j, and the correlation term corr is defined as follows:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "corr = [\u2211_{i=1}^{N} (h(x^i) \u2212 h(X))(h(y^i) \u2212 h(Y))] / [\u221a(\u2211_{i=1}^{N} (h(x^i) \u2212 h(X))^2) \u221a(\u2211_{i=1}^{N} (h(y^i) \u2212 h(Y))^2)] (5)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Note that g(h(z^i)) is the reconstruction of the input z^i after passing through the encoder and decoder. L is a loss function which captures the error in this reconstruction, \u03bb is a scaling parameter which scales the last term with respect to the remaining terms, h(X) is the mean vector of the hidden representations of the first view and h(Y) is the mean vector of the hidden representations of the second view. We now explain the intuition behind each term in the objective function. The first term captures the error in reconstructing the concatenated input z^i from itself. The second term captures the error in reconstructing both views given the non-pivot view, v^i_{l(i)}. The third term captures the error in reconstructing both views given the pivot view,",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "v^i_M.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Minimizing the second and third terms ensures that both views can be predicted from either one of them. Finally, the correlation term ensures that the network learns correlated common representations for all views.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our model can be viewed as a generalization of the two-view CorrNet model proposed in (Chandar et al., 2016). By learning joint representations for multiple views using disjoint training sets Z_1, Z_2, ..., Z_{M\u22121}, it eliminates the need for ^nC_2 pairwise parallel datasets between all views of interest. The pivot view acts as a bridge and ensures that similar entities across different views get mapped close to each other, since all of them would be close to the corresponding entity in the pivot view.",
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 108, |
|
"text": "(Chandar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Note that unlike the objective function of CorrNet (Chandar et al., 2016), the objective function in Equation 4 is a dynamic objective function which changes with each training instance. In other words, l(i) \u2208 {1, 2, ..., M\u22121} varies for each i \u2208 {1, 2, ..., N}. For efficient implementation, we construct mini-batches where each mini-batch comes from only one of the sets Z_1 to Z_{M\u22121}. We randomly shuffle these mini-batches and use the corresponding objective function for each mini-batch.",

"cite_spans": [

{

"start": 51,

"end": 73,
|
"text": "(Chandar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a side note, we would like to mention that if, in addition to Z_1, Z_2, ..., Z_{M\u22121} as defined earlier, parallel data is available between some of the non-pivot views, then the objective function can be suitably modified to use this parallel data to further improve the learning. However, this is not the focus of this work and we leave it as possible future work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bridge Correlational Neural Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we describe the two datasets that we used for our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hermann and Blunsom (2014b) provide a multilingual corpus based on the TED corpus for IWSLT 2013 (Cettolo et al., 2012). It contains English transcriptions of several talks from the TED conference and their translations in multiple languages. We use the parallel data between English and other languages for training Bridge CorrNet (English, thus, acts as the pivot language). Hermann and Blunsom (2014b) also propose a multilingual document classification task using this corpus. The idea is to use the keywords associated with each talk (document) as class labels and then train a classifier to predict these classes. There are one or more such keywords associated with each talk, but only the 15 most frequent keywords across all documents are considered as class labels. We used the same pre-processed splits 1 as provided by (Hermann and Blunsom, 2014b). The training corpus consists of a total of 12,078 parallel documents distributed across 12 language pairs.",
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 119, |
|
"text": "(Cettolo et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual TED corpus",
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The MSCOCO dataset 2 contains images and their English captions. On average, there are 5 captions per image. The standard train/valid/test splits for this dataset are also available online. However, the reference captions for the images in the test split are not provided. Since we need such reference captions for evaluation, we create a new train/valid/test split of this dataset. Specifically, we take 80K images from the standard train split and 40K images from the standard valid split. We then randomly split the merged 120K images into train (118K), validation (1K) and test (1K) sets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Image Caption dataset", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We then create a multilingual version of the test data by collecting French and German translations for all the 5 captions for each image in the test set. We did this via crowdsourcing: using the CrowdFlower platform, we solicited one French and one German translation for each of the 5000 captions from native speakers. We got each translation verified by 3 annotators. We restricted the geographical location of annotators based on the target language. We found that roughly 70% of the French translations and 60% of the German translations were marked as correct by a majority of the verifiers. On further inspection with the help of in-house annotators, we found that the errors were mainly syntactic and the content words were translated correctly in most cases. Since none of the approaches described in this work rely on syntax, we decided to use all the 5000 translations as test data. This multilingual image caption test data (MIC test data) will be made publicly available 3 and will hopefully assist further research in this area.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Image Caption dataset", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "From the TED corpus described earlier, we consider English transcriptions and their translations in 11 languages, viz., Arabic, German, Spanish, French, Italian, Dutch, Polish, Portuguese (Brazilian), Romanian, Russian and Turkish. Following the setup of Hermann and Blunsom (2014b), we consider the task of cross language learning between each of the ^{11}C_2 non-English language pairs. The task is to classify documents in a language when no labeled training data is available in this language but training data is available in another language. This involves the following steps: 1. Train classifier: Consider one language as the source language and the remaining 10 languages as target languages. Train a document classifier using the labeled data of the source language, where each training document is represented using the hidden representation computed using a trained Bridge CorrNet model. As in (Hermann and Blunsom, 2014b), we used an averaged perceptron trained for 10 epochs as the classifier for all our experiments. The train split provided by (Hermann and Blunsom, 2014b) is used for training. 2. Cross language classification: For every target language, compute a hidden representation for every document in its test set using Bridge CorrNet. Now use the classifier trained in the previous step to classify this document. The test split provided by (Hermann and Blunsom, 2014b) is used for testing.",

"cite_spans": [],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Transfer learning using a pivot language", |
|
"sec_num": "5" |
|
}, |
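
{

"text": "The two-step pipeline above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released code: the averaged perceptron follows the standard multiclass formulation, and the rows of X are assumed to be common hidden representations already computed by a trained Bridge CorrNet model.

```python
import numpy as np

def train_averaged_perceptron(X, y, n_classes, epochs=10):
    # Multiclass averaged perceptron; X rows are Bridge CorrNet document
    # representations for the source language, y are class labels.
    n, d = X.shape
    W = np.zeros((n_classes, d))   # current weight matrix
    W_sum = np.zeros_like(W)       # running sum, averaged at the end
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = int(np.argmax(W @ x))
            if pred != label:      # standard perceptron update on a mistake
                W[label] += x
                W[pred] -= x
            W_sum += W             # accumulate after every example
    return W_sum / (epochs * n)

def classify(W, X):
    # Step 2: apply the source-language classifier to target-language
    # documents embedded in the same common space.
    return np.argmax(X @ W.T, axis=1)
```

Because source- and target-language documents live in the same learned space, the classifier trained on the source side can be applied unchanged on the target side.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment 1: Transfer learning using a pivot language",

"sec_num": "5"

},
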
|
{ |
|
"text": "For the above process to work, we first need to train Bridge CorrNet so that it can be used to compute a common hidden representation for documents in different languages. For training Bridge CorrNet, we treat English as the pivot language (view) and construct parallel training sets Z 1 to Z 11 . Every instance in Z 1 contains the English and Arabic views of the same talk (document); similarly, every instance in Z 2 contains the English and German views of the same talk (document), and so on. For every language, we first construct a vocabulary containing all words appearing more than 5 times in the corpus (all talks) of that language. We then use this vocabulary to construct a bag-of-words representation for each document. The size of the vocabulary (|V|) for different languages varied from 31,213 to 60,326 words. To be more precise",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and tuning Bridge Corrnet", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ", v_1 = v_arabic \u2208 R^{|V|_arabic}, v_2 = v_",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and tuning Bridge Corrnet", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "german \u2208 R^{|V|_german}, and so on. We train our model for 10 epochs using the above training data Z = {Z 1 , Z 2 , ..., Z 11 }. We use hidden representations of size D = 128, as in (Hermann and Blunsom, 2014b). Further, we used stochastic gradient descent with mini-batches of size 20. Each mini-batch contains data from only one of the Z i s, and we compute a stochastic estimate of the correlation term in the objective function from this mini-batch. The hyperparameter \u03bb was tuned for each task using a training/validation split for the source language, based on the validation-set performance of an averaged perceptron trained on the training set (note that this corresponds to a monolingual classification experiment, since the general assumption is that no labeled data is available in the target language).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and tuning Bridge Corrnet", |
|
"sec_num": "5.1" |
|
}, |
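
{

"text": "The preprocessing and mini-batching described above can be sketched as follows (a minimal sketch; the function names are ours, not the authors'):

```python
from collections import Counter
import random
import numpy as np

def build_vocab(corpus, min_count=6):
    # Keep every word that appears more than 5 times in the corpus.
    counts = Counter(w for doc in corpus for w in doc.split())
    words = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(words)}

def bag_of_words(doc, vocab):
    # Term-frequency vector over the fixed per-language vocabulary.
    v = np.zeros(len(vocab))
    for w in doc.split():
        if w in vocab:
            v[vocab[w]] += 1
    return v

def minibatches(Z, batch_size=20):
    # Yield mini-batches drawn from a single parallel set Z_i at a time,
    # as in the training scheme above; the correlation term is then
    # estimated stochastically from each such batch.
    for Zi in Z:
        random.shuffle(Zi)
        for start in range(0, len(Zi), batch_size):
            yield Zi[start:start + batch_size]
```
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and tuning Bridge Corrnet",

"sec_num": "5.1"

},
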
|
{ |
|
"text": "We now present the results of our cross-language classification task in Table 1 . Each row corresponds to a source language and each column corresponds to a target language. We report the average F1-scores over all 15 classes. We compare our results with the best results reported in (Hermann and Blunsom, 2014b) (see Table 2 ). Out of the 110 experiments, our model outperforms that of (Hermann and Blunsom, 2014b) in 107. This suggests that our model efficiently exploits the pivot language to facilitate cross-language learning between the other languages.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 79, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 330, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Finally, we present the results for a monolingual classification task in Table 3 . The idea here is to see if learning common representations for multiple views can also help improve the performance of a task involving only one view. Hermann and Blunsom (2014b) argue that a Naive Bayes (NB) classifier trained using a bag-of-words representation of the documents is a very strong baseline; in fact, a classifier trained on document representations learned using their model does not beat a NB classifier for the task of monolingual classification. Rows 2 to 5 in Table 3 show the different settings tried by them (we refer the reader to (Hermann and Blunsom, 2014b) for a detailed description of these settings). On the other hand, our model is able to beat NB for 5/11 languages. Further, for 4 other languages (German, French, Romanian, Russian), its performance is only marginally worse than that of NB. 6 Experiment 2: Cross modal access using a pivot language",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 576, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this experiment, we are interested in retrieving images given their captions in French (or German) and vice versa. However, for training we do not have any parallel data containing images and their French (or German) captions. Instead, we have the following datasets: (i) a dataset Z 1 containing images and their English captions and (ii) a dataset Z 2 containing English documents and their parallel French (or German) documents. For Z 1 , we use the training split of the MSCOCO dataset, which contains 118K images and their English captions (see Section 4.2). For Z 2 , we use the English-French (or German) parallel documents from the train split of the TED corpus (see Section 4.1). We use English as the pivot language and train Bridge CorrNet using Z = {Z 1 , Z 2 } to learn common representations for images, English text and French (or German) text. For text, we use a bag-of-words representation and for images, we use the 4096-dimensional fc6 representation obtained from a pretrained ConvNet (BVLC Reference CaffeNet (Jia et al., 2014) ). We learn hidden representations of size D = 200 by training Bridge CorrNet for 20 epochs using stochastic gradient descent with mini-batches of size 20. Each mini-batch contains data from only one of the Z i s. For the task of retrieving captions given an image, we consider the 1000 images in our test set (see Section 4.2) as queries. The 5000 French (or German) captions corresponding to these images (5 per image) are considered as documents. The task is then to retrieve the relevant captions for each image. We represent all the captions and images in the common space computed using Bridge CorrNet. For a given query, we rank all the captions based on the Euclidean distance between the representations of the image and the caption. For the task of retrieving images given a caption, we simply reverse the roles of the captions and images; each of the 5000 captions is treated as a query and the 1000 images are treated as documents. \u03bb was tuned for each task using a training/validation split. For the task of retrieving French/German captions given an image, \u03bb was tuned using the performance on the validation set for retrieving French (or German) sentences given an English sentence. For the other task, \u03bb was tuned using the performance on the validation set for retrieving images given English captions. We do not use any image-French/German parallel data for tuning the hyperparameters.",
|
"cite_spans": [ |
|
{ |
|
"start": 1000, |
|
"end": 1018, |
|
"text": "(Jia et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
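
{

"text": "The retrieval protocol above (rank candidates by Euclidean distance in the common space) and the recall@k metric reported later can be sketched as follows; this assumes the query and document representations have already been computed with Bridge CorrNet, and the helper names are ours:

```python
import numpy as np

def retrieve(queries, documents, k=5):
    # queries: (Q, D) and documents: (N, D) matrices of common-space
    # representations; returns indices of the k nearest documents per query.
    d2 = ((queries[:, None, :] - documents[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

def recall_at_k(ranked, relevant, k):
    # Fraction of queries with at least one relevant document in the top k.
    hits = sum(1 for r, rel in zip(ranked, relevant) if set(r[:k]) & rel)
    return hits / len(ranked)
```
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "5.2"

},
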
|
{ |
|
"text": "We use recall@k as the performance metric and compare the following methods in Table 4 : 1. En-Image CorrNet: This is the CorrNet model trained using only Z 1 as defined earlier in this section. The task is to retrieve English captions for a given image (or vice versa). This gives us an idea about the performance we could expect if direct parallel data is available between images and their captions in some language. We used the publicly available implementation of CorrNet provided by (Chandar et al., 2016) . 2. Bridge CorrNet: This is the Bridge CorrNet model trained using Z 1 and Z 2 as defined earlier in this section. The task is to retrieve French (or German) captions for a given image (or vice versa).", |
|
"cite_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 511, |
|
"text": "(Chandar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 86, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "3. Bridge MAE: The Multimodal Autoencoder (MAE) proposed by (Ngiam et al., 2011) was the only competing model which was easily extendable to the bridge case. We train their model using Z 1 and Z 2 to minimize a suitably modified objective function. We then use the representations learned to retrieve French (or German) captions for a given image (or vice versa). 4. 2-CorrNet: Here, we train two individual CorrNets using Z 1 and Z 2 respectively. For the task of retrieving images given a French (or German) caption, we first find its nearest English caption using the Fr-En (or De-En) CorrNet. We then use this English caption to retrieve images using the En-Image CorrNet. Similarly, for retrieving captions given an image, we use the En-Image CorrNet followed by the En-Fr (or En-De) CorrNet. 5. CorrNet + MT: Here, we train an En-Image CorrNet using Z 1 and an Fr/De-En MT system 4 using Z 2 . For the task of retrieving images given a French (or German) caption, we translate the caption to English using the MT system. We then use this English caption to retrieve images using the En-Image CorrNet. For retrieving captions given images, we first translate all the 5000 French (or German) captions to English. We then embed these English translations (documents) and images (queries) in the common space computed using the Image-En CorrNet and do a retrieval as explained earlier. (Table 4 : Performance of different models for image to caption (I to C) and caption to image (C to I) retrieval.)",
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 80, |
|
"text": "(Ngiam et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1303, |
|
"end": 1310, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
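
{

"text": "The pairwise chaining in the 2-CorrNet baseline can be sketched as follows; this is a minimal sketch that assumes the four sets of vectors have already been produced by the respective CorrNet encoders (helper names are ours):

```python
import numpy as np

def nearest(query_vec, doc_vecs):
    # Index of the nearest row of doc_vecs under Euclidean distance.
    return int(((doc_vecs - query_vec) ** 2).sum(axis=1).argmin())

def two_corrnet_retrieve(fr_vec, en_vecs_fr_space, en_vecs_img_space, img_vecs):
    # Step 1: nearest English caption in the Fr-En CorrNet space.
    en_idx = nearest(fr_vec, en_vecs_fr_space)
    # Step 2: use that caption's En-Image CorrNet embedding to pick an image.
    return nearest(en_vecs_img_space[en_idx], img_vecs)
```

Errors compound across the two hops, which is one reason a jointly trained model can outperform this cascade.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "5.2"

},
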
|
{ |
|
"text": "6. Random: A random image is returned for the given caption (and vice versa).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From Table 4 , we observe that CorrNet + MT is a very strong competitor and gives the best results. The main reason for this is that over the years MT has matured enough for language pairs such as Fr-En and De-En that it can generate almost perfect translations for short sentences (such as captions). In fact, the results for this method are almost comparable to what we could have hoped for if we had direct parallel data between French captions and images (or German captions and images), as approximated by the first row of the table, which reports cross-modal retrieval results between English captions and images using direct parallel data between them for training. However, we would like to argue that learning a joint embedding for multiple views, instead of having multiple pairwise systems, is a more elegant solution and definitely merits further attention. Further, a \"translation system\" may not be available when we are dealing with modalities other than text (for example, there are no audio-to-video translation systems). In such cases, Bridge CorrNet could still be employed. In this context, the performance of Bridge CorrNet is definitely promising and shows that a model which jointly learns representations for multiple views can perform better than methods which learn pairwise common representations (2-CorrNet).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To get a qualitative feel for our model's performance, we refer the reader to Tables 5 and 6. The first row in Table 5 shows an image and its top-5 nearest German captions (based on the Euclidean distance between their common representations). As per our parallel image caption test set, only the second and fourth captions actually correspond to this image. However, we observe that the first and fifth captions are also semantically very related to the image: both talk about horses, grass, a body of water (the ocean), etc. Similarly, the last row in Table 5 shows an image and its top-5 nearest French captions. None of these captions actually correspond to the image as per our parallel image caption test set. However, the first, third and fourth captions are clearly semantically relevant to this image, as all of them talk about baseball; even the remaining two captions capture the concept of a sport and a racquet. We can make a similar observation from Table 6 , where most of the top-5 retrieved images do not correspond to the French/German caption but are semantically very similar. It is indeed impressive that the model is able to capture such cross-modal semantics between images and French/German even without any direct parallel data between them.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 562, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 965, |
|
"end": 973, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose Bridge Correlational Neural Networks, which can learn common representations for multiple views even when parallel data is available only between these views and a pivot view. Our method performs better than existing state-of-the-art approaches on the cross-language classification task and gives very promising results on the cross-modal access task. We also release a new multilingual image caption benchmark (MIC benchmark) which will help further research in this field 5 .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "1. Zwei Pferde stehen auf einem sandigen Strand nahe dem Ocean. (Two horses standing on a sandy beach near the ocean.) 2. grasende Pferde auf einer trockenen Weide bei einem Flughafen. (Horses grazing in a dry pasture by an airport.) 3. ein Elefant , Wasser aufseinen R\u00fcckend spr\u00fchend , in einem staubigen Bereich neben einem Baum. (A elephant spraying water on its back in a dirt area next to tree .) 4. ein braunes pferd i\u00dft hohes gras neben einem beh\u00e4lter mit wasser. (Brown horses eating tall grass beside a body of water .) 5. vier Pferde grasen auf ein Feld mit braunem gras. (Four horses are grazing through a field of brown grass.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "1. Ein Teller mit Essen wie Sandwich , Chips , Suppe und einer Gurke. (Plate of food including a sandwich , chips , soup and a pickle.) 2. Teller , gef\u00fcllt mit sortierten Fr\u00fcchten und Gem\u00fcse und einigem Fleisch. (Plates filled with assorted fruits and veggies and some meat.) 3. Ein Tisch mit einer Sch\u00fcssel Salat und einem Teller Pizza. (a Table with a bowl of salad and plate with a cooked pizza .) 4. Ein Teller mit Essen besteht aus Brokkoli und Rindfleisch. (A plate of food consists of broccoli and beef.) 5. Eine Platte mit Fleisch und gr\u00fcnem Gem\u00fcse gemixt mit Sauce. (A plate with meat and green veggies mixed with sauce.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "1. un bus de la conduite en ville dans une rue entour\u00e9e par de grands immeubles. (A city bus driving down a street surrounded by tall buildings.) 2. un bus de conduire dans une rue dans une ville avec des b\u00e2timents de grande hauteur. (A bus driving down a street in a city with very tall buildings.) 3. bus de conduire dans une rue de ville surpeupl\u00e9e. (Double -decker bus driving down a crowded city street.) 4. le bus conduit \u00e0 travers la ville sur une rue anim\u00e9e. (The bus drives through the city on a busy street.) 5. un grand bus color\u00e9 est arr\u00eat\u00e9 dans une rue de la ville. (A big , colorful bus is stopped on a city street.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "1. Un homme portant une batte de baseball \u00e0 deux mains lors d'un jeu de balle professionnel. (A man holding a baseball bat with two hands at a professional ball game.) 2. un joueur de tennis balance une raquette \u00e0 une balle. (A tennis player swinging a racket t a ball.) 3. un gar\u00e7on qui est de frapper une balle avec une batte de baseball. (A boy that is hitting a ball with a baseball bat.) 4. une \u00e9quipe de joueurs de baseball jouant un jeu de base-ball. (A team of baseball players playing a game of baseball.) 5. un gar\u00e7on se pr\u00e9pare \u00e0 frapper une balle de tennis avec une raquette. (A boy prepares to hit a tennis ball with a racquet.) un homme debout \u00e0 c\u00f4t\u00e9 de aa groupe de vaches. (A man standing next to a group of cows.) personnes portant du mat\u00e9riel de ski en se tenant debout dans la neige. (People wearing ski equipment while standing in snow.) Table 6 : French and German queries and their top-5 nearest images based on representations learned using Bridge CorrNet. First two queries are in German and the last two queries are French. English translations are given in parenthesis.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 858, |
|
"end": 865, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://www.clg.ox.ac.uk/tedcorpus 2 http://mscoco.org/dataset/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/moses/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Details about the MIC benchmark and performance of various state-of-the-art models will be maintained at http:// sarathchandar.in/bridge-corrnet", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their useful feedback. We also thank the workers from CrowdFlower for helping us in creating the MIC benchmark. Finally, we thank Amrita Saha (IBM Research India) for helping us in running some of the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A kernel method for canonical correlation analysis", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Akaho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. Int'l Meeting on Psychometric Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Akaho. 2001. A kernel method for canonical correla- tion analysis. In Proc. Int'l Meeting on Psychometric Society.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Deep canonical correlation analysis", |
|
"authors": [ |
|
{ |
|
"first": "Galen", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. ICML.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cross language retrieval via transitive translation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Advances in information retrieval: Recent research from the CIIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L.A. Ballesteros. 2000. Cross language retrieval via tran- sitive translation. In W.B. Croft (Ed.), Advances in in- formation retrieval: Recent research from the CIIR, pages 203-234, Boston: Kluwer Academic Publish- ers.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Wit 3 : Web inventory of transcribed and translated talks", |
|
"authors": [ |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Girardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit 3 : Web inventory of transcribed and trans- lated talks. In Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT), pages 261-268, Trento, Italy, May.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Multilingual deep learning. NIPS Deep Learning Workshop", |
|
"authors": [ |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mitesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balaraman", |
|
"middle": [], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ravindran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Vikas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amrita", |
|
"middle": [], |
|
"last": "Raykar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarath Chandar, Mitesh M. Khapra, Balaraman Ravin- dran, Vikas C. Raykar, and Amrita Saha. 2013. Multi- lingual deep learning. NIPS Deep Learning Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An autoencoder approach to learning bilingual word representations", |
|
"authors": [ |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislas", |
|
"middle": [], |
|
"last": "Lauly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mitesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balaraman", |
|
"middle": [], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ravindran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Vikas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amrita", |
|
"middle": [], |
|
"last": "Raykar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1853--1861", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 2014. An autoencoder ap- proach to learning bilingual word representations. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Pro- cessing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 1853-1861.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Correlational neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mitesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balaraman", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ravindran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "257--285", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. 2016. Correlational neu- ral networks. Neural Computation, 28(2):257 -285.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Machine translation by triangulation: Making effective use of multiparallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "728--735", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn and Mirella Lapata. 2007. Machine trans- lation by triangulation: Making effective use of multi- parallel corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguis- tics, pages 728-735, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Fast regularized canonical correlation analysis", |
|
"authors": [ |
|
{ |
|
"first": "Raul", |
|
"middle": [], |
|
"last": "Cruz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Cano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mei-Ling Ting", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Statistics & Data Analysis", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "88--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raul Cruz-Cano and Mei-Ling Ting Lee. 2014. Fast regularized canonical correlation analysis. Computa- tional Statistics & Data Analysis, 70:88 -100.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bilbowa: Fast bilingual distributed representations without word alignments", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "748--756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representa- tions without word alignments. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 748- 756.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Multilingual Distributed Representations without Word Alignment", |
|
"authors": [], |
|
"year": 2014, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014a. Mul- tilingual Distributed Representations without Word Alignment. In Proceedings of International Confer- ence on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multilingual models for compositional distributed semantics", |
|
"authors": [], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "58--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014b. Mul- tilingual models for compositional distributed seman- tics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 58-68.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Relations between two sets of variates", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hotelling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1936, |
|
"venue": "Biometrika", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "321--377", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Hotelling. 1936. Relations between two sets of vari- ates. Biometrika, 28:321 -377.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Nonlinear canonical correlation analysis by neural networks", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Neural Networks", |
|
"volume": "13", |
|
"issue": "10", |
|
"pages": "1095--1105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W.W. Hsieh. 2000. Nonlinear canonical correla- tion analysis by neural networks. Neural Networks, 13(10):1095 -1105.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Caffe: Convolutional architecture for fast feature embedding", |
|
"authors": [ |
|
{ |
|
"first": "Yangqing", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Shelhamer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Karayev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1408.5093" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convo- lutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Everybody loves a rich cousin: An empirical study of transliteration through bridge languages", |
|
"authors": [ |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "420--428", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitesh M. Khapra, A. Kumaran, and Pushpak Bhat- tacharyya. 2010a. Everybody loves a rich cousin: An empirical study of transliteration through bridge lan- guages. In Human Language Technologies: Confer- ence of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 2-4, 2010, Los Angeles, California, USA, pages 420-428.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "PR + RQ AL-MOST EQUAL TO PQ: transliteration mining using bridge language", |
|
"authors": [ |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghavendra", |
|
"middle": [], |
|
"last": "Udupa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitesh M. Khapra, Raghavendra Udupa, A. Kumaran, and Pushpak Bhattacharyya. 2010b. PR + RQ AL- MOST EQUAL TO PQ: transliteration mining us- ing bridge language. In Proceedings of the Twenty- Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Inducing Crosslingual Distributed Representations of Words", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Binod", |
|
"middle": [], |
|
"last": "Bhattarai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing Crosslingual Distributed Representa- tions of Words. In Proceedings of the International Conference on Computational Linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Compositional machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Khapra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACM Trans. Asian Lang. Inf. Process", |
|
"volume": "9", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Kumaran, Mitesh M. Khapra, and Pushpak Bhat- tacharyya. 2010. Compositional machine transliter- ation. ACM Trans. Asian Lang. Inf. Process., 9(4):13.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Experiments with transitive dictionary translation and pseudo-relevance feedback using graded relevance assessments", |
|
"authors": [ |
|
{ |
|
"first": "Raija", |
|
"middle": [], |
|
"last": "Lehtokangas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heikki", |
|
"middle": [], |
|
"last": "Keskustalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalervo", |
|
"middle": [], |
|
"last": "J\u00e4rvelin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of the American Society for Information Science and Technology", |
|
"volume": "59", |
|
"issue": "3", |
|
"pages": "476--488", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raija Lehtokangas, Heikki Keskustalo, and Kalervo J\u00e4rvelin. 2008. Experiments with transitive dictio- nary translation and pseudo-relevance feedback using graded relevance assessments. Journal of the Ameri- can Society for Information Science and Technology, 59(3):476-488.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Tensor canonical correlation analysis for multi-view dimension reduction", |
|
"authors": [ |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dacheng", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonggang", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kotagiri", |
|
"middle": [], |
|
"last": "Ramamohanarao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Arxiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yong Luo, Dacheng Tao, Yonggang Wen, Kotagiri Ra- mamohanarao, and Chao Xu. 2015. Tensor canonical correlation analysis for multi-view dimension reduc- tion. In Arxiv.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Exploiting Similarities among Languages for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Quoc Le, and Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation. Technical report, arXiv.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Improved statistical machine translation for resource-poor languages using related resource-rich languages", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1358--1367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov and Hwee Tou Ng. 2009. Improved statis- tical machine translation for resource-poor languages using related resource-rich languages. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing, pages 1358-1367, Singa- pore, August.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Multimodal deep learning. ICML", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ngiam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Khosla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and Ng. Andrew. 2011. Multimodal deep learning. ICML.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Canonical ridge analysis with ridge parameter optimization", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"\u00c5" |
|
], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Hansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Strother", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. \u00c5. Nielsen, L. K. Hansen, and S. C. Strother. 1998. Canonical ridge analysis with ridge parameter opti- mization, may.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Grounded compositional semantics for finding and describing images with sentences", |
|
"authors": [], |
|
"year": null, |
|
"venue": "TACL", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "207--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grounded compositional semantics for finding and de- scribing images with sentences. TACL, 2:207-218.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Leveraging monolingual data for crosslingual compositional word representations", |
|
"authors": [ |
|
{ |
|
"first": "Hubert", |
|
"middle": [], |
|
"last": "Soyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Aizawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "2949--2980", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa. 2015. Leveraging monolingual data for crosslingual compositional word representations. In Proceedings of the 3rd International Conference on Learning Rep- resentations, San Diego, California, USA, May. Nitish Srivastava and Ruslan Salakhutdinov. 2014. Multimodal learning with deep boltzmann machines. Journal of Machine Learning Research, 15:2949- 2980.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Regularized generalized canonical correlation analysis", |
|
"authors": [ |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Tenenhaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Tenenhaus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Psychometrika", |
|
"volume": "76", |
|
"issue": "2", |
|
"pages": "257--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arthur Tenenhaus and Michel Tenenhaus. 2011. Reg- ularized generalized canonical correlation analysis. Psychometrika, 76(2):257-284.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A comparison of pivot methods for phrase-based statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Masao", |
|
"middle": [], |
|
"last": "Utiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hitoshi", |
|
"middle": [], |
|
"last": "Isahara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "484--491", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masao Utiyama and Hitoshi Isahara. 2007. A compar- ison of pivot methods for phrase-based statistical ma- chine translation. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 484-491, Rochester, New York, April.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Canonical ridge and econometrics of joint production", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Vinod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "Journal of Econometrics", |
|
"volume": "4", |
|
"issue": "2", |
|
"pages": "147--166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H.D. Vinod. 1976. Canonical ridge and econometrics of joint production. Journal of Econometrics, 4(2):147 - 166.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "On deep multi-view representation learning", |
|
"authors": [ |
|
{ |
|
"first": "Weiran", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. 2015. On deep multi-view representation learning. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Pivot language approach for phrase-based statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Machine Translation", |
|
"volume": "21", |
|
"issue": "3", |
|
"pages": "165--181", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine transla- tion. Machine Translation, 21(3):165-181.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Joint alignment and artificial data generation: An empirical study of pivot-based machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunqing", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Fifth International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1207--1215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Zhang, Xiangyu Duan, Ming Liu, Yunqing Xia, and Haizhou Li. 2011. Joint alignment and artifi- cial data generation: An empirical study of pivot-based machine transliteration. In Fifth International Joint Conference on Natural Language Processing, IJCNLP 2011, Chiang Mai, Thailand, November 8-13, 2011, pages 1207-1215.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "A deep and autoregressive approach for topic modeling of multimodal data", |
|
"authors": [ |
|
{ |
|
"first": "Yin", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu-Jin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yin Zheng, Yu-Jin Zhang, and Hugo Larochelle. 2014a. A deep and autoregressive approach for topic model- ing of multimodal data. CoRR, abs/1409.3970.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Topic modeling of multimodal data: An autoregressive approach", |
|
"authors": [ |
|
{ |
|
"first": "Yin", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu-Jin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "2014 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1370--1377", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yin Zheng, Yu-Jin Zhang, and Hugo Larochelle. 2014b. Topic modeling of multimodal data: An autoregressive approach. In 2014 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 1370-1377.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Bilingual Word Embeddings for Phrase-Based Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Will", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christo- pher D. Manning. 2013. Bilingual Word Embeddings for Phrase-Based Machine Translation. In Conference on Empirical Methods in Natural Language Process- ing (EMNLP 2013).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Bridge Correlational Neural Network. The views are English, French and German with English being the pivot view." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "5: Images and their top-5 nearest captions based on representations learned using Bridge CorrNet. First two examples show German captions and the last two examples show French captions. English translations are given in parenthesis. Speisen und Getr\u00e4nke auf einem Tisch mit einer Frau essen im Hintergrund. (Food and beverages set on a table with a woman eating in the background .) ein Foto von einem Laptop auf einem Bett mit einem Fernseher im Hintergrund. (A photo of a laptop on a bed with a tv in the background .)" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Training</td><td/><td/><td/><td/><td colspan=\"2\">Test Language</td><td/><td/><td/><td/></tr><tr><td>Language</td><td colspan=\"8\">Arabic German Spanish French Italian Dutch Polish Pt-Br</td><td colspan=\"3\">Rom'n Russian Turkish</td></tr><tr><td>Arabic</td><td/><td>0.378</td><td>0.436</td><td>0.432</td><td>0.444</td><td>0.438</td><td>0.389</td><td colspan=\"2\">0.425 0.42</td><td>0.446</td><td>0.397</td></tr><tr><td>German</td><td>0.368</td><td/><td>0.474</td><td>0.46</td><td>0.464</td><td>0.44</td><td>0.375</td><td colspan=\"2\">0.417 0.447</td><td>0.458</td><td>0.443</td></tr><tr><td>Spanish</td><td>0.353</td><td>0.355</td><td/><td>0.42</td><td>0.439</td><td>0.435</td><td>0.415</td><td>0.39</td><td>0.424</td><td>0.427</td><td>0.382</td></tr><tr><td>French</td><td>0.383</td><td>0.366</td><td>0.487</td><td/><td>0.474</td><td>0.429</td><td>0.403</td><td colspan=\"2\">0.418 0.458</td><td>0.415</td><td>0.398</td></tr><tr><td>Italian</td><td>0.398</td><td>0.405</td><td>0.461</td><td>0.466</td><td/><td>0.393</td><td>0.339</td><td colspan=\"2\">0.347 0.376</td><td>0.382</td><td>0.352</td></tr><tr><td>Dutch</td><td>0.377</td><td>0.354</td><td>0.463</td><td>0.464</td><td>0.46</td><td/><td>0.405</td><td colspan=\"2\">0.386 0.415</td><td>0.407</td><td>0.395</td></tr><tr><td>Polish</td><td>0.359</td><td>0.386</td><td>0.449</td><td>0.444</td><td>0.43</td><td>0.441</td><td/><td colspan=\"2\">0.401 0.434</td><td>0.398</td><td>0.408</td></tr><tr><td>Pt-Br</td><td>0.391</td><td>0.392</td><td>0.476</td><td>0.447</td><td>0.486</td><td>0.458</td><td>0.403</td><td/><td>0.457</td><td>0.431</td><td>0.431</td></tr><tr><td>Rom'n</td><td>0.416</td><td>0.32</td><td>0.473</td><td>0.476</td><td>0.46</td><td>0.434</td><td>0.416</td><td>0.433</td><td/><td>0.444</td><td>0.402</td></tr><tr><td>Russian</td><td>0.372</td><td>0.352</td><td>0.492</td><td>0.427</td><td>0.438</td><td>0.452</td><td>0.43</td><td colspan=\"2\">0.419 
0.441</td><td/><td>0.447</td></tr><tr><td>Turkish</td><td>0.376</td><td>0.352</td><td>0.479</td><td>0.433</td><td>0.427</td><td>0.423</td><td>0.439</td><td>0.367</td><td>0.434</td><td>0.411</td></tr></table>", |
|
"type_str": "table", |
|
"text": "F1-scores for TED corpus document classification results when training and testing on two languages that do not share any parallel data. We train a Bridge CorrNet model on all en-L2 language pairs together, and then use the resulting embeddings to train document classifiers in each language. These classifiers are subsequently used to classify data from all other languages." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Setting</td></tr></table>", |
|
"type_str": "table", |
|
"text": "F1-scores for TED corpus document classification results when training and testing on two languages that do not share any parallel data. Same procedure asTable 1, but with DOC/ADD model in (Hermann and Blunsom, 2014b)." |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": ": F1-scores on the TED corpus document classification task when training and evaluating on the same language. Results other than Bridge CorrNet are taken from (Hermann and Blunsom, 2014b)." |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |