{
"paper_id": "O18-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:09:54.784410Z"
},
"title": "WaveNet Vocoder and its Applications in Voice Conversion",
"authors": [
{
"first": "Wen-Chin",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica *",
"location": {}
},
"email": ""
},
{
"first": "Chen-Chou",
"middle": [],
"last": "Lo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica *",
"location": {}
},
"email": ""
},
{
"first": "Hsin-Te",
"middle": [],
"last": "Hwang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica *",
"location": {}
},
"email": ""
},
{
"first": "Yu",
"middle": [],
"last": "Tsao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica *",
"location": {}
},
"email": ""
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica *",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "O18-1009",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Most voice conversion models rely on vocoders based on the source-filter model to extract speech parameters and synthesize speech. However, the naturalness and similarity of the converted speech are limited due to the vast theories and constraints posed by traditional vocoders. In the field of deep learning, a network structure called WaveNet is one of the stateof-the-art techniques in speech synthesis, which is capable of generating speech samples of extremely high quality compared with past methods. One of the extensions of WaveNet is the WaveNet vocoder. Its ability to synthesize speech of quality higher than traditional vocoders has made it gradually adopted by several foreign voice conversion research teams. In this work, we study the combination of the WaveNet vocoder with the voice conversion models recently developed by domestic research teams, in order to evaluate the potential of applying the WaveNet vocoder to these voice conversion models and to introduce the WaveNet vocoder to the domestic speech processing research community. In the experiments, we compared the converted speeches generated by three voice conversion models using a traditional WORLD vocoder and the WaveNet vocoder, respectively. The compared voice conversion models include 1) variational auto-encoder (VAE), 2) variational autoencoding Wasserstein generative adversarial network (VAW-GAN), and 3) cross domain variarional auto-encoder (CDVAE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
},
{
"text": "Experimental results show that, using the WaveNet vocoder, the similarity between the converted speech generated by all the three models and the target speech is significantly improved. As for naturalness, only VAE benefits from the WaveNet vocoder. [3] H. Doi, T. Toda, K. Nakamura, H. Saruwatari, K. Shikano, \"Alaryngeal Speech Enhancement",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "! \" \u210e = ! % & % &'( , \u2026 , % &'+ , \u210e",
"eq_num": "(1)"
}
],
"section": "Abstract",
"sec_num": null
},
{
"text": "https://github.com/kan-bayashi/PytorchWaveNetVocoder 2 https://github.com/JeremyCCHsu/vae-npvc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language Processing",
"authors": [],
"year": 2014,
"venue": "",
"volume": "22",
"issue": "",
"pages": "172--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language Processing, 22(1), pp. 172-183, January 2014.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Continuous probabilistic transform for voice conversion",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Stylianou",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Cappe",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Moulines",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "6",
"issue": "2",
"pages": "131--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Stylianou, O. Cappe, and E. Moulines, \"Continuous probabilistic transform for voice conversion,\" IEEE Transactions on Speech and Audio Processing, vol. 6, no. 2, pp. 131- 142, Mar 1998.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech analysis and synthesis by linear prediction of the speech wave",
"authors": [
{
"first": "B",
"middle": [
"S"
],
"last": "Atal",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Hanauer",
"suffix": ""
}
],
"year": 1971,
"venue": "J. Acoust. Soc. America",
"volume": "50",
"issue": "2",
"pages": "637--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. S. Atal and S. L. Hanauer : \"Speech analysis and synthesis by linear prediction of the speech wave\", in J. Acoust. Soc. America , vol. 50, no. 2, pp.637-655, Mar. 1971.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneousfrequency-based f0 extraction: Possible role of a repetitive structure in sounds",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Masuda-Katsuse",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "De Cheveign",
"suffix": ""
}
],
"year": 1999,
"venue": "Speech Communication",
"volume": "27",
"issue": "3",
"pages": "187--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Kawahara, I. Masuda-Katsuse, and A. de Cheveign, \"Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous- frequency-based f0 extraction: Possible role of a repetitive structure in sounds,\" Speech Communication, vol. 27, no. 3, pp. 187 -207, 1999.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WORLD: a vocoder-based high-quality speech synthesis system for real-time applications",
"authors": [
{
"first": "M",
"middle": [],
"last": "Morise",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yokomori",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ozawa",
"suffix": ""
}
],
"year": 2016,
"venue": "IEICE Trans. Inf. Syst",
"volume": "99",
"issue": "7",
"pages": "1877--1884",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Morise, F. Yokomori, and K. Ozawa, \"WORLD: a vocoder-based high-quality speech synthesis system for real-time applications,\" IEICE Trans. Inf. Syst., vol. E99-D, no. 7, pp. 1877-1884, 2016.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "WaveNet: A generative model for raw audio",
"authors": [
{
"first": "A",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu, \"WaveNet: A generative model for raw audio,\" CoRR, vol. abs/1609.03499, 2016.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Speaker-dependent WaveNet vocoder",
"authors": [
{
"first": "A",
"middle": [],
"last": "Tamamori",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1118--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Tamamori, T. Hayashi, K. Kobayashi, K. Takeda, and T. Toda, \"Speaker-dependent WaveNet vocoder,\" Proc. INTERSPEECH, pp. 1118-1122, 2017.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An investigation of multispeaker training for WaveNet vocoder",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tamamori",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Hayashi, A. Tamamori, K. Kobayashi, K. Takeda, and T. Toda, \"An investigation of multi- speaker training for WaveNet vocoder,\" Proc. ASRU, 2017.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "501--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chou, C. Yeh, H. Lee, L. Lee, \"Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations,\" Proc. INTERSPEECH, pp. 501-505, 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Voice conversion from unaligned corpora using variational autoencoding wasserstein generative adversarial networks",
"authors": [
{
"first": "C.-C",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "H.-T",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Y.-C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "H.-M",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "3364--3368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.-C. Hsu, H.-T. Hwang, Y.-C. Wu, Y. Tsao, and H.-M. Wang, \"Voice conversion from unaligned corpora using variational autoencoding wasserstein generative adversarial networks,\" in Proc. Interspeech, 2017, pp. 3364-3368.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lorenzo-Trueba",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yamagishi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Villavicencio",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kinnunen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Odyssey",
"volume": "",
"issue": "",
"pages": "195--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lorenzo-Trueba, J. Yamagishi, T. Toda, D. Saito, F. Villavicencio, T. Kinnunen, and Z. Ling, \"The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods,\" in Proc. Odyssey, 2018, pp. 195-202.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "WaveNet Vocoder with Limited Training Data for Voice Conversion",
"authors": [
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1983--1987",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Liu, Z. Ling, Y. Jiang, M. Zhou, L. Dai, \"WaveNet Vocoder with Limited Training Data for Voice Conversion,\" Proc. INTERSPEECH, pp. 1983-1987, 2018.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "NU voice conversion system for the voice conversion challenge 2018",
"authors": [
{
"first": "P",
"middle": [
"L"
],
"last": "Tobing",
"suffix": ""
},
{
"first": "Y.-C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Odyssey",
"volume": "",
"issue": "",
"pages": "219--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.L. Tobing, Y.-C. Wu, T. Hayashi, K. Kobayashi, and T. Toda, \"NU voice conversion system for the voice conversion challenge 2018,\" in Proc. Odyssey 2018, pp. 219-226.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The NU Non-Parallel Voice Conversion System for the Voice Conversion Challenge 2018",
"authors": [
{
"first": "Y.-C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "P",
"middle": [
"L"
],
"last": "Tobing",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Odyssey",
"volume": "",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.-C. Wu, P. L. Tobing, T. Hayashi, K. Kobayashi, and T. Toda, \"The NU Non-Parallel Voice Conversion System for the Voice Conversion Challenge 2018,\" in Proc. Odyssey, 2018, pp. 211-218.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Auto-encoding variational bayes",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. P. Kingma and M. Welling, \"Auto-encoding variational bayes,\" CoRR, vol. abs/1312.6114, 2013.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Voice conversion from nonparallel corpora using variational auto-encoder",
"authors": [
{
"first": "C.-C",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "H.-T",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Y.-C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "H.-M",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. APISPA ASC",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.-C. Hsu, H.-T. Hwang, Y.-C. Wu, Y. Tsao, and H.-M. Wang, \"Voice conversion from non- parallel corpora using variational auto-encoder,\" in Proc. APISPA ASC, 2016, pp. 1-6.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generative adversarial networks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": null,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courville, and Y. Bengio, \"Generative adversarial networks,\" CoRR, vol. abs/1406.2661, 2014.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Voice Conversion Based on Cross-Domain Features Using Variational Auto Encoders",
"authors": [
{
"first": "W.-C",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "H.-T",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Y.-H",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "H.-M",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ISCSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.-C. Huang, H.-T. Hwang, Y.-H. Peng, Y. Tsao, and H.-M. Wang, \"Voice Conversion Based on Cross-Domain Features Using Variational Auto Encoders,\" in Proc. ISCSLP 2018.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An adaptive algorithm for mel-cepstral analysis of speech",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fukada",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Imai",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Fukada, K. Tokuda, T. Kobayashi, and S. Imai, \"An adaptive algorithm for mel-cepstral analysis of speech,\" in Proc. ICASSP 1992.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statistical voice conversion with WaveNet-based waveform generation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tamamori",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1138--1142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Kobayashi, T. Hayashi, A. Tamamori, and T. Toda, \"Statistical voice conversion with WaveNet-based waveform generation,\" Proc. INTERSPEECH, pp. 1138-1142, 2017.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">3.3 4.2.1</td><td/><td>(</td><td/><td>%</td><td>[9] 0</td><td colspan=\"2\">STRAIGHT [6] WaveNet World [7] H ) (mean opinion score, MOS) Discriminator</td><td>(</td><td>VAE</td></tr><tr><td colspan=\"2\">VAE</td><td colspan=\"4\">(spectral feature) I WaveNet )</td><td/><td colspan=\"2\">(fundamental frequency) WaveNet Hayashi [10] WaveNet</td><td>(multi-speaker WaveNet VAE WaveNet WaveNet WORLD</td></tr><tr><td colspan=\"3\">(aperiodicity) vocoder) VAE</td><td colspan=\"6\">(1) WaveNet I (Kantorovich-Rubinstein duality) H (cross domain VAE, CDVAE) WaveNet 1 ( ) CDVAE</td><td>WaveNet</td><td>[10]</td></tr><tr><td/><td/><td colspan=\"7\">WaveNet * , M Y|W = ghM &gt; M Y i j k+ VCC2018 d l~n o * \\ % \u2212 d l~n o|p \\ % (formant structure) WaveNet VAW-GAN WaveNet WORLD WaveNet</td><td>(residual block) (speaker (6) [21] 972</td></tr><tr><td colspan=\"8\">2\u00d71 1-Lipschitz continuity STRAIGHT adaptation) D [6] 54 WaveNet [23]</td><td>(dilated causal convolution) WaveNet (critic function) ( STRAIGHT spectrum, SP) GAN WaveNet WORLD</td><td>(</td></tr><tr><td colspan=\"9\">(gated activation function) WaveNet ( M Y * (mel cepstral coefficients, MCCs) [22] 1\u00d71 ) M Y|W WaveNet ) 0 WaveNet WORLD</td><td>34</td><td>VAE 2</td></tr><tr><td/><td colspan=\"2\">CDVAE</td><td/><td/><td/><td/><td>[14, 15, 16] \\ q /</td><td>M Y *</td><td>WaveNet</td></tr><tr><td/><td/><td colspan=\"2\">M Y|W VCC2018</td><td colspan=\"5\">tanh &gt; ?,@ * % + C ?,@ * \u210e \u2a00\u03c3 &gt; F,@ * % + C F,@ * \u210e VAE VAE (time resolution adjustment) (waveform trajectory) WaveNet</td><td>(</td><td>[9, 10]</td><td>(2)</td></tr><tr><td colspan=\"3\">(1) (2)) WaveNet</td><td/><td/><td/><td/><td/><td>WaveNet WaveNet</td><td>(</td></tr><tr><td>3.1</td><td>% )</td><td/><td colspan=\"6\">&gt; C d l~n o * \\ q % \u2212 d r~e f H % W \\ q s N (H, I Y * sigmoid WaveNet WaveNet \u03c3(\u2022) 200000 5000 (a) VAE WORLD (b) VAE WaveNet ( (mismatch)</td><td>WORLD</td><td>\u2a00 (3) (4)) (7)</td></tr><tr><td colspan=\"9\">(cross entropy) (variational auto-encoder, VAE) [17, 18] ( ( 8 bits ) (speech frame) % GAN ( H |} H~ (latent code) bits WaveNet (encoder)( H s N VAE GAN ) WaveNet VAE VAW-GAN ) ABX CDVAE VAE VAE WORLD WaveNet VAE (decoder)( VAE (generative adversarial network, GAN) VAE WaveNet VAW-GAN (c) VAW-GAN WORLD (d) VAW-GAN WaveNet 65536 \u03bc-law 256 H \\ q WaveNet ) VAE W-GAN WORLD ) 3.2 VCC2018 unit-sum ) WORLD 16 H s N ( 2 I WaveNet WaveNet</td></tr><tr><td colspan=\"3\">(generator) [19] VAE [1] W.</td><td/><td/><td colspan=\"4\">(3) JKL M N (%) \u2264 \u2212Q RST % = \u2212Q UVW % \u2212 Q XSY H (discriminator) VAW-GAN VAE WORLD WaveNet</td><td>(3) VAE VAW-GAN VAE CDVAE</td></tr><tr><td colspan=\"9\">Q XSY Z; H = \\ ]^( _`H % ||M N H ) Q uvwxv0 = \u2212\\ ]^( _`H % ||M_c(H)) + d e f H % [JKL M N % H, I CDVAE [12, 18, 21]</td><td>(4)</td></tr><tr><td>4.1 4.2</td><td/><td/><td/><td/><td colspan=\"4\">0 Q UVW Z, c; % = \u2212d e f H % JKL M N % H, I \uff0c VAE +d l~n o * \\ q % \u2212 d r~e f H % W \\ q s N (H, I Y</td><td>(5) (8)</td></tr><tr><td colspan=\"9\">&amp;1+ WaveNet 3 \\ ]^( \u2022 || \u2022) KL % H GAN Voice Conversion Challenge 2018 (VCC2018) VCC2018 SF1 to TF1 (e) CDVAE WORLD (f) CDVAE WaveNet (Kullback-Leibler divergence) I VAE 10 [12] 81 35 VAE VAW-GAN CDVAE % &amp; ( 16 bits , 2 +5 = WaveNet VAE GAN Wasserstein GAN(W-GAN) [20] W-GAN 2 Z, c VAE 12 2.2 (earth mover's distance Wasserstein distance) 22050 [13] WORLD [7] WORLD WaveNet 4.2.2</td></tr><tr><td 
colspan=\"2\">65536</td><td colspan=\"3\">) \u210e WaveNet GAN 513 9</td><td>25</td><td/><td colspan=\"2\">WaveNet (speaker dependent) (spectral envelope) 513 5 [12]</td><td>95%</td><td>(</td><td>[9, 10])</td><td>35</td></tr></table>",
"html": null,
"text": "Fujitsuru, H. Sekimoto, T. Toda, H Saruwatari, and K. Shikano, \"Bandwidth Extension of Cellular Phone Speech Based on Maximum Likelihood Estimation with GMM,\" Proc. NCSP2008 [2] C. C. Hsia, C. H. Wu, and J. Q. Wu, \"Conversion Function Clustering and Selection Using Linguistic and Spectral Information for Emotional Voice Conversion\" IEEE Trans. on Computers, 56(9), pp. 1225-1233, September 2007.",
"num": null
}
}
}
}
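
A minimal sketch (not the authors' code) of how the autoregressive factorization in equation (1) is used at synthesis time: the waveform is generated one sample at a time, each sample conditioned on the previously generated samples inside the receptive field and on auxiliary features h (e.g., the acoustic features produced by the conversion model). The predict_next interface and the 256-level quantization are illustrative assumptions; in practice a trained WaveNet stack supplies the predictive distribution.

```python
# Hypothetical sketch of sampling under Eq. (1):
#   P(x | h) = prod_t P(x_t | x_{t-1}, ..., x_{t-r}, h)
# where r is the receptive-field length and h are per-sample
# conditioning features (assumed interface, not the authors' code).
import numpy as np

def sample_waveform(predict_next, h, n_samples, receptive_field, n_classes=256):
    """Generate a quantized waveform one sample at a time.

    predict_next(context, h_t) must return a probability vector over the
    n_classes quantized amplitude levels; it stands in for the trained
    WaveNet network.
    """
    rng = np.random.default_rng(0)
    x = np.zeros(n_samples, dtype=np.int64)  # quantized sample indices
    for t in range(n_samples):
        # Eq. (1): condition only on the last `receptive_field` samples and h[t].
        context = x[max(0, t - receptive_field):t]
        p = predict_next(context, h[t])
        x[t] = rng.choice(n_classes, p=p)  # draw x_t ~ P(x_t | context, h)
    return x

# Toy stand-in for the network (uniform predictor), just to run the loop.
def dummy_predict(context, h_t, n_classes=256):
    return np.full(n_classes, 1.0 / n_classes)

waveform = sample_waveform(dummy_predict, np.zeros((100, 1)), 100, receptive_field=64)
```

Because each x_t depends on samples that have already been generated, synthesis under equation (1) is inherently sequential, which is the main computational cost of WaveNet-style vocoders relative to conventional source-filter vocoders such as WORLD.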