{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:33.395526Z"
},
"title": "Representation Learning for Discovering Phonemic Tone Contours",
"authors": [
{
"first": "Bai",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"settlement": "Toronto",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Jing",
"middle": [
"Yi"
],
"last": "Xie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"settlement": "Toronto",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Frank",
"middle": [],
"last": "Rudzicz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"settlement": "Toronto",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Tone is a prosodic feature used to distinguish words in many languages, some of which are endangered and scarcely documented. In this work, we use unsupervised representation learning to identify probable clusters of syllables that share the same phonemic tone. Our method extracts the pitch for each syllable, then trains a convolutional autoencoder to learn a low-dimensional representation for each contour. We then apply the mean shift algorithm to cluster tones in high-density regions of the latent space. Furthermore, by feeding the centers of each cluster into the decoder, we produce a prototypical contour that represents each cluster. We apply this method to spoken multi-syllable words in Mandarin Chinese and Cantonese and evaluate how closely our clusters match the ground truth tone categories. Finally, we discuss some difficulties with our approach, including contextual tone variation and allophony effects.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Tone is a prosodic feature used to distinguish words in many languages, some of which are endangered and scarcely documented. In this work, we use unsupervised representation learning to identify probable clusters of syllables that share the same phonemic tone. Our method extracts the pitch for each syllable, then trains a convolutional autoencoder to learn a low-dimensional representation for each contour. We then apply the mean shift algorithm to cluster tones in high-density regions of the latent space. Furthermore, by feeding the centers of each cluster into the decoder, we produce a prototypical contour that represents each cluster. We apply this method to spoken multi-syllable words in Mandarin Chinese and Cantonese and evaluate how closely our clusters match the ground truth tone categories. Finally, we discuss some difficulties with our approach, including contextual tone variation and allophony effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean 'one', 'to move', 'already', or 'art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal (Lewis, 2009; Yip, 2002) . A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks.",
"cite_spans": [
{
"start": 252,
"end": 265,
"text": "(Lewis, 2009;",
"ref_id": "BIBREF11"
},
{
"start": 266,
"end": 276,
"text": "Yip, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the first tasks during the description of an unfamiliar language is determining its phonemic inventory: what are the consonants, vowels, and tones of the language, and which pairs of phonemes are contrastive? Tone presents a unique challenge because unlike consonants and vowels, which can be identified in isolation, tones do not have a fixed pitch, and vary by speaker and situation. Since tone data is subject to interpretation, different linguists may produce different descriptions of the tone system of the same language (Yip, 2002) .",
"cite_spans": [
{
"start": 534,
"end": 545,
"text": "(Yip, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we present a model to automatically infer phonemic tone categories of a tonal language. We use an unsupervised learning approach: a convolutional autoencoder learns a low-dimensional representation of each tone using only a set of spoken syllables in the target language. This is followed by mean shift clustering to identify clusters of syllables that probably have the same tone. We apply our method on Mandarin Chinese and Cantonese datasets, for which the ground truth annotation is used for evaluation. Our method does not make any language-specific assumptions, so it may be applied to low-resource languages whose phonemic inventories are not already established.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Mandarin Chinese (1.1 billion speakers) and Cantonese (74 million speakers) are two tonal lan- Figure 2: Diagram of our model architecture, consisting of a convolutional autoencoder to learn a latent representation for each pitch contour, and mean shift clustering to identify groups of similar tones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tone in Mandarin and Cantonese",
"sec_num": "1.1"
},
{
"text": "guages in the Sinitic family (Lewis, 2009) . Mandarin has four lexical tones: high (55), rising (25), low-dipping (214), and falling (51) 1 . The third tone sometimes undergoes sandhi, addressed in section 3. We exclude a fifth, neutral tone, which can only occur in word-final positions and has no fixed pitch. Cantonese has six lexical tones: high-level (55), mid-rising (25), mid-level (33), low-falling (21), low-rising (23), and low-level (22). Some descriptions of Cantonese include nine tones, of which three are checked tones that are flat, shorter in duration, and only occur on syllables ending in /p/, /t/, or /k/. Since every one of the checked tones is in complementary distribution with an unchecked tone, we adopt the simpler six-tone model that treats the checked tones as variants of the high, mid, and low level tones. Contours for the lexical tones in both languages are shown in Figure 1 .",
"cite_spans": [
{
"start": 29,
"end": 42,
"text": "(Lewis, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 899,
"end": 907,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tone in Mandarin and Cantonese",
"sec_num": "1.1"
},
{
"text": "Many low-resource languages lack sufficient transcribed data for supervised speech processing; thus, unsupervised models for speech processing are an emerging area of research. The Zerospeech 2015 and 2017 challenges featured unsupervised learning of contrasting phonemes in English and Xitsonga, evaluated by an ABX phoneme discrimination task (Versteegh et al., 2015) . One successful approach used denoising and correspondence autoencoders to learn a representation that avoided capturing noise and irrelevant inter-speaker variation (Renshaw et al., 2015) . Deep LSTMs for segmenting and clustering phonemes in speech have also been explored in (M\u00fcller et al., 2017b) and (M\u00fcller et al., 2017a) .",
"cite_spans": [
{
"start": 343,
"end": 367,
"text": "(Versteegh et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 535,
"end": 557,
"text": "(Renshaw et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 647,
"end": 669,
"text": "(M\u00fcller et al., 2017b)",
"ref_id": "BIBREF13"
},
{
"start": 674,
"end": 696,
"text": "(M\u00fcller et al., 2017a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In Mandarin Chinese, deep neural networks have been successful for tone classification in isolated syllables (Chen et al., 2016) as well as in continuous speech (Ryant et al., 2014b,a) . Both of these models found that Mel-frequency cepstral coefficients (MFCCs) outperformed pitch contour features, despite the fact that MFCC features do not contain pitch information. In Cantonese, support vector machines (SVMs) have been applied to classify tones in continuous speech, using pitch contours as input (Peng and Wang, 2005) .",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 161,
"end": 184,
"text": "(Ryant et al., 2014b,a)",
"ref_id": null
},
{
"start": 503,
"end": 524,
"text": "(Peng and Wang, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Unsupervised learning of tones remains largely unexplored. Levow (2006) performed unsupervised and semi-supervised tone clustering in Mandarin, using average pitch and slope as features, and k-means and asymmetric k-lines for clustering. Graph-based community detection techniques have been applied to group n-grams of contiguous contours into clusters in Mandarin (Zhang, 2019) . In recent work concurrent to ours, Fry (2020) uses adversarial autoencoders and hierarchical clustering to identify tone inventories, and evaluates the method on Mandarin, Cantonese, Fungwa, and English data.",
"cite_spans": [
{
"start": 59,
"end": 71,
"text": "Levow (2006)",
"ref_id": "BIBREF10"
},
{
"start": 365,
"end": 378,
"text": "(Zhang, 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We further explore unsupervised deep neural networks for phonemic tone clustering. It should be noted that our unsupervised model is not given tone labels during training, and the number of tones is assumed to be unknown, so it cannot be directly compared to supervised tone classifiers in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka 2 , and the Cantonese dataset is from a male speaker and is downloaded from Forvo 3 , an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosodic effects associated with longer utterances, we exclude words longer than 4 syllables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3"
},
{
"text": "We extract ground-truth tones for evaluation purposes. In Mandarin, the tones are extracted from the pinyin transcription; in Cantonese, we reference the character entries on Wiktionary 4 to retrieve the romanized pronunciation and tones. For Mandarin, we adjust for third-tone sandhi (a phonological rule where a pair of consecutive third-tones is always realized as a second-tone followed by a third-tone), and use the sandhi tone as the ground truth. We also exclude the neutral tone, which has no fixed pitch and is sometimes thought of as a lack of tone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and preprocessing",
"sec_num": "3"
},
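The third-tone sandhi adjustment described above can be sketched as a simple left-to-right rewrite. This is a deliberate simplification: sandhi in words of three or more syllables can depend on prosodic grouping, which this sketch ignores.

```python
def apply_third_tone_sandhi(tones):
    """Rewrite consecutive third tones: a 3 immediately followed by a 3
    surfaces as a 2 (rising tone). Simplified left-to-right sketch of the
    rule described in the text; longer words may need prosodic structure."""
    result = list(tones)
    for i in range(len(result) - 1):
        if result[i] == 3 and result[i + 1] == 3:
            result[i] = 2  # realized as the second (rising) tone
    return result

print(apply_third_tone_sandhi([3, 3]))     # [2, 3]
print(apply_third_tone_sandhi([1, 3, 3]))  # [1, 2, 3]
```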
{
"text": "We use Praat's autocorrelation-based pitch estimation algorithm to extract the fundamental frequency (F0) contour for each sample, using a minimum frequency of 75Hz and a maximum frequency of 500Hz (Boersma, 1993) . The interface between Python and Praat is handled using Parselmouth (Jadoul et al., 2018) . We normalize the contour to be between 0 and 1, based on the speaker's pitch range. Next, we manually segment each speech sample into syllables; this step is necessary because syllable boundaries are not provided in our datasets. We sample the pitch at 40 equally spaced points, obtaining a constant-length vector as input to our model. Note that by sampling a variable-length contour to a constant length, the model does not have information about syllable length; we discuss this design choice in section 6.2.",
"cite_spans": [
{
"start": 198,
"end": 213,
"text": "(Boersma, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 284,
"end": 305,
"text": "(Jadoul et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pitch extraction and syllable segmentation",
"sec_num": "3.1"
},
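The normalization and fixed-length sampling step can be sketched in NumPy. F0 extraction itself is done with Praat via Parselmouth in the paper; here we assume the contour has already been extracted, and the speaker's pitch-range bounds `f0_min`/`f0_max` are hypothetical parameters (the paper normalizes per speaker but does not give exact bounds).

```python
import numpy as np

def contour_to_fixed_length(f0, f0_min, f0_max, n_points=40):
    """Normalize an F0 contour to [0, 1] using the speaker's pitch range,
    then sample it at n_points equally spaced time points via interpolation."""
    f0 = np.asarray(f0, dtype=float)
    normalized = (f0 - f0_min) / (f0_max - f0_min)
    # Resample to a constant length, discarding syllable-duration information.
    src = np.linspace(0.0, 1.0, num=len(f0))
    dst = np.linspace(0.0, 1.0, num=n_points)
    return np.interp(dst, src, normalized)

# A rising contour of 55 frames becomes a length-40 vector in [0, 1].
vec = contour_to_fixed_length(np.linspace(120, 220, 55), f0_min=100, f0_max=250)
print(vec.shape)  # (40,)
```

Because every syllable maps to 40 points, the model never sees duration, which is the design choice revisited in section 6.2.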
{
"text": "We use a convolutional autoencoder ( Figure 2 ) to learn a two-dimensional latent vector for each syllable. Convolutional layers are widely used in computer vision and speech processing to learn spatially local features that are invariant to position. Figure 3 : Clusters generated by the mean shift procedure. The black line shows the threshold: we discard clusters with size below this value and treat their points as unclustered.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 2",
"ref_id": null
},
{
"start": 252,
"end": 260,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convolutional autoencoder",
"sec_num": "4.1"
},
{
"text": "We use a low dimensional latent space so that the model learns to generate a representation that only captures the most important aspects of the input contour, and also because clustering algorithms tend to perform poorly in high dimensional spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional autoencoder",
"sec_num": "4.1"
},
{
"text": "Our encoder consists of three layers. The first layer applies 2 convolutional filters (kernel size 4, stride 1) followed by max pooling (kernel size 2) and a tanh activation. The second layer applies 4 convolutional filters (kernel size 4, stride 1), again with max pooling (kernel size 2) and a tanh activation. The third layer is a fully connected layer with two dimensional output. Our decoder is the encoder in reverse, consisting of one fully connected layer and two deconvolution layers, with the same layer shapes as the encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional autoencoder",
"sec_num": "4.1"
},
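Given length-40 input contours, a minimal PyTorch sketch of this architecture might look as follows. The exact unpooling scheme in the decoder is our assumption, since the paper only states that the decoder mirrors the encoder's layer shapes.

```python
import torch
import torch.nn as nn

class ToneAutoencoder(nn.Module):
    """Sketch of the described architecture: two conv layers plus a fully
    connected layer down to a 2-d latent, with an approximately mirrored
    decoder. Upsample sizes are our assumption, chosen so shapes line up."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 2, kernel_size=4, stride=1),  # 40 -> 37
            nn.MaxPool1d(2), nn.Tanh(),                # 37 -> 18
            nn.Conv1d(2, 4, kernel_size=4, stride=1),  # 18 -> 15
            nn.MaxPool1d(2), nn.Tanh(),                # 15 -> 7
            nn.Flatten(),                              # 4 * 7 = 28
            nn.Linear(28, 2),                          # 2-d latent
        )
        self.decoder_fc = nn.Linear(2, 28)
        self.decoder = nn.Sequential(
            nn.Upsample(size=15, mode="linear", align_corners=False),
            nn.ConvTranspose1d(4, 2, kernel_size=4), nn.Tanh(),  # 15 -> 18
            nn.Upsample(size=37, mode="linear", align_corners=False),
            nn.ConvTranspose1d(2, 1, kernel_size=4),             # 37 -> 40
        )

    def forward(self, x):
        z = self.encoder(x)
        h = self.decoder_fc(z).view(-1, 4, 7)
        return z, self.decoder(h)

model = ToneAutoencoder()
z, recon = model(torch.rand(60, 1, 40))  # a batch of 60 pitch contours
print(z.shape, recon.shape)  # torch.Size([60, 2]) torch.Size([60, 1, 40])
```

Training such a model would minimize mean squared error between input and reconstruction, as the paper does with Adam at a learning rate of 5e-4.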
{
"text": "We train the autoencoder using PyTorch (Paszke et al., 2017) , for 500 epochs, with a batch size of 60. The model is optimized using Adam (Kingma and Ba, 2015) with a learning rate of 5e-4 to minimize the mean squared error between the input and output contours.",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional autoencoder",
"sec_num": "4.1"
},
{
"text": "We run the encoder on each syllable's pitch contour to get their latent representations; we apply principal component analysis (PCA) to remove any correlation between the two dimensions. Then, we run mean shift clustering (Comaniciu and Meer, 2002; Ghassabeh and Rudzicz, 2018) , estimating a probability density function in the latent space. The procedure performs gradient ascent on all the points until they converge to a set of stationary points, which are local maxima of the density function. These stationary points are taken to be cluster centers, and points that converge to the same stationary point belong to the same cluster. We feed the cluster centers into the decoder to generate a prototype pitch contour for each cluster. Unlike k-means clustering, the mean shift procedure does not require the number of clusters to be specified, only a bandwidth parameter (set to 0.6 for our experiments). The cluster centers are always in regions of high density, so they can be viewed as prototypes that represent their respective clusters. Another advantage is that unlike k-means, mean shift clustering is robust to outliers.",
"cite_spans": [
{
"start": 222,
"end": 248,
"text": "(Comaniciu and Meer, 2002;",
"ref_id": "BIBREF3"
},
{
"start": 249,
"end": 277,
"text": "Ghassabeh and Rudzicz, 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mean shift clustering",
"sec_num": "4.2"
},
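This clustering stage can be sketched with scikit-learn's `PCA` and `MeanShift`, using synthetic 2-d points as a stand-in for the learned latent vectors (the blob locations and sizes here are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import MeanShift

# Toy stand-in for the encoder's 2-d latent vectors: two well-separated
# blobs, which mean shift should resolve into two clusters.
rng = np.random.default_rng(0)
latents = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.1, size=(200, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.1, size=(200, 2)),
])

decorrelated = PCA(n_components=2).fit_transform(latents)  # remove correlation
ms = MeanShift(bandwidth=0.6).fit(decorrelated)            # bandwidth from the paper

print(len(ms.cluster_centers_))  # number of discovered clusters
# Each cluster center could then be fed through the decoder
# to generate a prototype pitch contour.
```

Note that no cluster count is specified anywhere; only the bandwidth controls how many modes of the density emerge.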
{
"text": "The bandwidth parameter controls the size of the clusters: a higher bandwidth value generates fewer and larger clusters. We tune the bandwidth parameter to produce linguistically plausible tone clusters: we expect between 3 and 8 different clusters, each cluster should have at least 1/10 of the points assigned to it, and most points should belong to some cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting bandwidth and threshold",
"sec_num": "4.3"
},
{
"text": "The mean shift procedure assigns every point to some cluster, even if the resulting cluster contains only a few points. Thus, we set a threshold: we treat clusters smaller than the threshold as spurious, and leave their points as unclustered. Figure 3 shows the effect of the threshold on both languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selecting bandwidth and threshold",
"sec_num": "4.3"
},
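The thresholding step can be sketched as a small post-processing pass over the mean shift labels; using -1 for "unclustered" is our convention here, not something the paper specifies.

```python
import numpy as np

def filter_small_clusters(labels, threshold):
    """Mark points in clusters smaller than `threshold` as unclustered.
    The -1 label for 'unclustered' is our convention (an assumption)."""
    labels = np.asarray(labels).copy()
    ids, counts = np.unique(labels, return_counts=True)
    for cid, count in zip(ids, counts):
        if count < threshold:
            labels[labels == cid] = -1  # treat the spurious cluster's points as unclustered
    return labels

# Cluster 2 has only one point, so it falls below a threshold of 2.
print(filter_small_clusters([0, 0, 0, 1, 1, 2], threshold=2))
```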
{
"text": "We implement a simple k-means baseline similar to Levow (2006) , using two engineered features. The first feature is the average pitch of all the points in the pitch contour; the second feature is the slope of an ordinary least squares regression fit on the pitch contour. After extracting these features for every syllable, we run k-means clustering, using the same number of clusters that is chosen by the mean shift algorithm. Figure 4 shows the latent space learned by the autoencoders and the clustering output. Our model found 4 tone clusters in Mandarin, matching the number of phonemic tones (Table 1) and 5 in Cantonese, which is one fewer than the number of phonemic tones (Table 2) . In Mandarin, the 4 clusters correspond very well with the 4 phonemic tone categories, and the generated contours closely match the ground truth in Figure 1 . There is some overlap between tones 3 and 4; this is because tone 3 is sometimes realized as a low-falling tone without the final rise, a process known as half T3 sandhi (Chen, 2000) ; thus, it may overlap with tone 4 (falling tone). In Cantonese, the 5 clusters A-E correspond to low-falling, mid-level, high-level, mid-rising, and low-rising tones. Tone clustering in Cantonese is expected to be more difficult than in Mandarin because it has 6 contrastive tones rather than 4. The model is more effective at clustering the higher tones (1, 2, 3) , and less effective at clustering the lower tones (4, 5, 6), particularly tone 4 (low-falling) and tone 6 (low-level). This confirms the difficulties in prior work, which reported worse classification accuracy on the lower-pitched tones because the lower region of the Cantonese tone space is more crowded than the upper region (Peng and Wang, 2005) .",
"cite_spans": [
{
"start": 50,
"end": 62,
"text": "Levow (2006)",
"ref_id": "BIBREF10"
},
{
"start": 1024,
"end": 1036,
"text": "(Chen, 2000)",
"ref_id": "BIBREF2"
},
{
"start": 1728,
"end": 1749,
"text": "(Peng and Wang, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 430,
"end": 438,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 600,
"end": 609,
"text": "(Table 1)",
"ref_id": "TABREF2"
},
{
"start": 683,
"end": 692,
"text": "(Table 2)",
"ref_id": "TABREF3"
},
{
"start": 846,
"end": 854,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1390,
"end": 1399,
"text": "(1, 2, 3)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "k-means baseline",
"sec_num": "4.4"
},
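The two engineered features and the k-means step can be sketched as follows; the rising and falling contours below are synthetic illustrations, not data from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def baseline_features(contour):
    """Two engineered features per syllable: mean pitch and the slope of
    an ordinary least squares line fit to the contour."""
    contour = np.asarray(contour, dtype=float)
    x = np.arange(len(contour))
    slope = np.polyfit(x, contour, deg=1)[0]  # least-squares slope
    return np.array([contour.mean(), slope])

# A rising and a falling contour share the same mean pitch
# but get opposite-sign slopes, so k-means can separate them.
rising = np.linspace(0.2, 0.8, 40)
falling = np.linspace(0.8, 0.2, 40)
feats = np.vstack([baseline_features(c) for c in [rising, falling]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(feats.round(3))
```

In the paper, `n_clusters` would be set to the cluster count chosen by mean shift rather than hard-coded.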
{
"text": "To evaluate how much the clusters match the ground truth, we use normalized mutual information (NMI); this is preferable over accuracy because it does not require the number of detected clusters to be the same as the number of tones. In Table 3 , we evaluate NMI for our autoencoder model and the k-means baseline. We consider two scenarios for each language: using all the syllables (All) and using only the first syllable of each word (First).",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
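NMI is available in scikit-learn; a small illustration with hypothetical tone labels and cluster assignments shows why it tolerates a mismatched number of clusters:

```python
from sklearn.metrics import normalized_mutual_info_score

# NMI compares a clustering against ground-truth tone labels without
# requiring the number of clusters to equal the number of tones.
true_tones = [1, 1, 2, 2, 3, 3, 4, 4]
predicted = [0, 0, 1, 1, 2, 2, 2, 2]  # hypothetical: merges tones 3 and 4
print(normalized_mutual_info_score(true_tones, predicted))

# Any relabeling of a perfect clustering still scores 1.0:
print(normalized_mutual_info_score([1, 1, 2, 2], [7, 7, 3, 3]))
```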
{
"text": "In all cases, the clusters from the autoencoder model have higher NMI than the k-means model. The improvement is due to the mean shift procedure identifying points that belong to a cluster with high confidence: it only makes predictions for those points, whereas k-means assigns every point to a cluster. All models perform better on the first syllable of each utterance than on the rest of the syllables; we discuss the reasons for this in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "One limitation of our model is that it considers syllables in isolation, but in reality, pitch is affected by context. Two types of contextual effects are carry-over and declination. A carry-over effect is when the pitch contour of a tone undergoes contextual variation depending on the preceding tone; strong carry-over effects have been observed in Mandarin (Xu, 1997) . Prior work (Levow, 2006) avoided carry-over effects by using only the second half of every syllable, but we do not consider language-specific heuristics in our model. Declination is a phenomenon in which the pitch declines over an utterance (Yip, 2002; Peng and Wang, 2005) . This is especially a problem in Cantonese, which has tones that differ only on pitch level and not contour: for example, a mid-level tone near the end of a phrase may have the same absolute pitch as a low-level tone at the start of a phrase.",
"cite_spans": [
{
"start": 354,
"end": 364,
"text": "(Xu, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 378,
"end": 391,
"text": "(Levow, 2006)",
"ref_id": "BIBREF10"
},
{
"start": 607,
"end": 618,
"text": "(Yip, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 619,
"end": 639,
"text": "Peng and Wang, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations 6.1 Contextual effects",
"sec_num": "6"
},
{
"text": "Contextual effects are apparent in our results (Table 3 ). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllable (which is not affected by carry-over or declination), compared to using all the syllables.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "(Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Limitations 6.1 Contextual effects",
"sec_num": "6"
},
{
"text": "Tone is not a purely phonetic property: it is impossible to determine, from phonetics alone, whether two pitch contours have the same or different tones. The same underlying tone may manifest as several different allotones depending on the phonetic context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal pairs and allotones",
"sec_num": "6.2"
},
{
"text": "An example of this appears in Cantonese. Its tone system is sometimes analyzed as having nine tones instead of six, where six of the tones are only permitted in open syllables (e.g. si) and three are only permitted in checked syllables (e.g. sik). Other analyses use a six-tone system, treating the three checked tones as allotonic variants of the high, mid, and low tones. By taking this approach, one implies that length is a property of the syllable and cannot be solely responsible for contrasting two tones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal pairs and allotones",
"sec_num": "6.2"
},
{
"text": "Length is not the only differentiating factor for allotones. Another example is in Wu Chinese, where syllables beginning with voiced consonants have lower pitch than those beginning with voiceless consonants (Yip, 2002) . Thus the same language may have vastly different numbers of tones, depending on the analysis.",
"cite_spans": [
{
"start": 208,
"end": 219,
"text": "(Yip, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal pairs and allotones",
"sec_num": "6.2"
},
{
"text": "Linguistically, two phonemic tones are considered to be contrastive if there exists a minimal pair: two semantically different lexical items that are identical in every aspect except for tone. This definition is the most widely used because it clearly settles disagreements about whether two tones are the same or different. However, it is problematic for unsupervised models that only have access to phonetic and not semantic information. This issue is not unique to tone: similar difficulties have been noted when attempting to identify consonant and vowel phonemes automatically (Kempton and Moore, 2014) .",
"cite_spans": [
{
"start": 578,
"end": 603,
"text": "(Kempton and Moore, 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimal pairs and allotones",
"sec_num": "6.2"
},
{
"text": "We propose a model for unsupervised clustering and discovery of phonemic tones in tonal languages, using spoken words as input. Our model extracts the F0 pitch contour, trains a convolutional autoencoder to learn a low-dimensional representation for each contour, and applies mean shift clustering to the resulting latent space. We obtain promising results with both Mandarin Chinese and Cantonese, using only 400 spoken words from each language. Cantonese presents more difficulties because of its larger number of tones, especially in the lower half of the pitch range, and also due to multiple contrastive level tones. Still, in both languages, our method finds clusters of tones that better match the ground truth than the k-means baseline. Finally, we discuss the effects of contextual variation and the limitations of unsupervised learning for the tone induction problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The numbers are Chao tone numerals, where 1 is the lowest and 5 is the highest pitch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://shtooka.net/, specifically the cmn-caentan dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://forvo.com/ 4 https://en.wiktionary.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Prof. Gerald Penn for his helpful suggestions during this project. Rudzicz is a CIFAR Chair in AI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Accurate short-term analysis of the fundamental frequency and the harmonics-tonoise ratio of a sampled sound",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the institute of phonetic sciences",
"volume": "17",
"issue": "",
"pages": "97--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Boersma. 1993. Accurate short-term analysis of the fundamental frequency and the harmonics-to- noise ratio of a sampled sound. In Proceedings of the institute of phonetic sciences, volume 17, pages 97-110. Amsterdam.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tone classification in Mandarin Chinese using convolutional neural networks",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "2150--2154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Chen, Razvan C Bunescu, Li Xu, and Chang Liu. 2016. Tone classification in Mandarin Chinese using convolutional neural networks. In INTER- SPEECH, pages 2150-2154.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tone sandhi: Patterns across Chinese dialects",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "92",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Y Chen. 2000. Tone sandhi: Patterns across Chinese dialects, volume 92. Cambridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mean shift: A robust approach toward feature space analysis",
"authors": [
{
"first": "Dorin",
"middle": [],
"last": "Comaniciu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Meer",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence",
"volume": "",
"issue": "5",
"pages": "603--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dorin Comaniciu and Peter Meer. 2002. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis & Machine Intelligence, (5):603-619.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Perceptual learning of Cantonese lexical tones by tone and non-tone language speakers",
"authors": [
{
"first": "L",
"middle": [],
"last": "Alexander",
"suffix": ""
},
{
"first": "Valter",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "Lian",
"middle": [],
"last": "Ciocca",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fenn",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Phonetics",
"volume": "36",
"issue": "2",
"pages": "268--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander L Francis, Valter Ciocca, Lian Ma, and Kim- berly Fenn. 2008. Perceptual learning of Cantonese lexical tones by tone and non-tone language speak- ers. Journal of Phonetics, 36(2):268-294.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Grammaticus ex machina: tone inventories as hypothesized by machine",
"authors": [
{
"first": "Michael",
"middle": [
"David"
],
"last": "Fry",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael David Fry. 2020. Grammaticus ex machina: tone inventories as hypothesized by machine. Ph.D. thesis, University of British Columbia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Modified mean shift algorithm",
"authors": [
{
"first": "Aliyari",
"middle": [],
"last": "Ghassabeh",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Rudzicz",
"suffix": ""
}
],
"year": 2018,
"venue": "IET Image Processing",
"volume": "12",
"issue": "12",
"pages": "2172--2177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y Aliyari Ghassabeh and F Rudzicz. 2018. Modi- fied mean shift algorithm. IET Image Processing, 12(12):2172-2177.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introducing Parselmouth: A Python interface to Praat",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Jadoul",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Bart",
"middle": [
"De"
],
"last": "Boer",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Phonetics",
"volume": "71",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Jadoul, Bill Thompson, and Bart De Boer. 2018. Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71:1-15.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discovering the phoneme inventory of an unwritten language: A machine-assisted approach",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Kempton",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"K"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2014,
"venue": "Speech Communication",
"volume": "56",
"issue": "",
"pages": "152--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Kempton and Roger K Moore. 2014. Dis- covering the phoneme inventory of an unwritten lan- guage: A machine-assisted approach. Speech Com- munication, 56:152-166.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised and semisupervised learning of tone and pitch accent",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow. 2006. Unsupervised and semi- supervised learning of tone and pitch accent. In Proceedings of the main conference on Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics, pages 224-231. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ethnologue: Languages of the World",
"authors": [
{
"first": "M.",
"middle": [
"Paul"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Paul Lewis. 2009. Ethnologue: Languages of the World, 16th edition. SIL International, Dallas, Texas.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving phoneme set discovery for documenting unwritten languages",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Franke",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2017,
"venue": "Elektronische Sprachsignalverarbeitung",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus M\u00fcller, J\u00f6rg Franke, Sebastian St\u00fcker, and Alex Waibel. 2017a. Improving phoneme set discov- ery for documenting unwritten languages. Elektron- ische Sprachsignalverarbeitung (ESSV), 2017.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards phoneme inventory discovery for documentation of unwritten languages",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Franke",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5200--5204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus M\u00fcller, J\u00f6rg Franke, Alex Waibel, and Sebas- tian St\u00fcker. 2017b. Towards phoneme inventory dis- covery for documentation of unwritten languages. In 2017 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 5200-5204. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic differentiation in PyTorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS Autodiff Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tone recognition of continuous Cantonese speech based on support vector machines",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "William",
"middle": [
"S-Y"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2005,
"venue": "Speech Communication",
"volume": "45",
"issue": "1",
"pages": "49--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Peng and William S-Y Wang. 2005. Tone recog- nition of continuous Cantonese speech based on support vector machines. Speech Communication, 45(1):49-62.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A comparison of neural network methods for unsupervised representation learning on the zero resource speech challenge",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Renshaw",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Renshaw, Herman Kamper, Aren Jansen, and Sharon Goldwater. 2015. A comparison of neu- ral network methods for unsupervised representa- tion learning on the zero resource speech challenge. In Sixteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Highly accurate Mandarin tone classification in the absence of pitch information",
"authors": [
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Malcolm",
"middle": [],
"last": "Slaney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Jiahong",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Speech Prosody",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neville Ryant, Malcolm Slaney, Mark Liberman, Eliz- abeth Shriberg, and Jiahong Yuan. 2014a. Highly accurate Mandarin tone classification in the absence of pitch information. In Proceedings of Speech Prosody, volume 7.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mandarin tone classification without pitch tracking",
"authors": [
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Jiahong",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4868--4872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neville Ryant, Jiahong Yuan, and Mark Liberman. 2014b. Mandarin tone classification without pitch tracking. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4868-4872. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The zero resource speech challenge 2015",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Versteegh",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Thiolliere",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Schatz",
"suffix": ""
},
{
"first": "Xuan",
"middle": [
"Nga"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Anguera",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Versteegh, Roland Thiolliere, Thomas Schatz, Xuan Nga Cao, Xavier Anguera, Aren Jansen, and Emmanuel Dupoux. 2015. The zero resource speech challenge 2015. In Sixteenth Annual Conference of the International Speech Communication Associ- ation.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Contextual tonal variations in Mandarin",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of phonetics",
"volume": "25",
"issue": "1",
"pages": "61--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Xu. 1997. Contextual tonal variations in Mandarin. Journal of phonetics, 25(1):61-83.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tone",
"authors": [
{
"first": "Moira",
"middle": [],
"last": "Yip",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moira Yip. 2002. Tone. Cambridge University Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Data mining Mandarin tone contour shapes",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Zhang. 2019. Data mining Mandarin tone con- tour shapes. SIGMORPHON 2019, page 144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Fundamental frequency (F0) contours for the four Mandarin tones and six Cantonese tones in isolation, produced by native speakers.Figure adaptedfrom(Francis et al., 2008)."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Latent space generated by autoencoder and the results of mean shift clustering for Mandarin and Cantonese. Each cluster center is fed through the decoder to generate the corresponding pitch contour. The clusters within each language are ordered by size, from largest to smallest."
},
"TABREF2": {
"text": "Cluster and tone frequencies for Mandarin.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"7\">Cluster T1 T2 T3 T4 T5 T6</td></tr><tr><td>A</td><td>5</td><td>5</td><td colspan=\"4\">59 109 7 105</td></tr><tr><td>B</td><td colspan=\"2\">102 3</td><td>36</td><td>2</td><td>2</td><td>7</td></tr><tr><td>C</td><td>93</td><td>0</td><td>0</td><td>2</td><td>0</td><td>0</td></tr><tr><td>D</td><td>0</td><td>64</td><td>4</td><td>3</td><td>2</td><td>11</td></tr><tr><td>E</td><td>0</td><td>28</td><td>2</td><td>4</td><td>30</td><td>2</td></tr><tr><td>N/A</td><td colspan=\"6\">70 39 51 45 15 49</td></tr></table>"
},
"TABREF3": {
"text": "Cluster and tone frequencies for Cantonese.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF5": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Normalized mutual information (NMI) be-</td></tr><tr><td>tween cluster assignments and ground truth tones, con-</td></tr><tr><td>sidering only the first syllable of each word, or all syl-</td></tr><tr><td>lables.</td></tr></table>"
}
}
}
}