{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:04.789372Z"
},
"title": "A comparison study on patient-psychologist voice diarization",
"authors": [
{
"first": "Rachid",
"middle": [],
"last": "Riad",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hadrien",
"middle": [],
"last": "Titeux",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Xuan",
"middle": [
"Nga"
],
"last": "Cao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laurie",
"middle": [],
"last": "Lemoine",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Justine",
"middle": [],
"last": "Montillot",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Agnes",
"middle": [],
"last": "Sliwinski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jennifer",
"middle": [
"Hamet"
],
"last": "Bagnou",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anne-Catherine",
"middle": [],
"last": "Bachoud-L\u00e9vi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Conversations between a clinician and a patient, in natural conditions, are valuable sources of information for medical follow-up. The automatic analysis of these dialogues could help extract new language markers and speed up the clinicians' reports. Yet, it is not clear which model is the most efficient to detect and identify the speaker turns, especially for individuals with speech disorders. Here, we proposed a split of the data that allows conducting a comparative evaluation of different diarization methods. We designed and trained end-to-end neural network architectures to directly tackle this task from the raw signal and evaluate each approach under the same metric. We also studied the effect of fine-tuning models to find the best performance. Experimental results are reported on naturalistic clinical conversations between Psychologists and Interviewees, at different stages of Huntington's disease, displaying a large panel of speech disorders. We found out that our best end-to-end model achieved 19.5% IER on the test set, compared to 23.6% achieved by the finetuning of the X-vector architecture. Finally, we observed that we could extract clinical markers directly from the automatic systems, highlighting the clinical relevance of our methods. * \u22c6 Equal contribution. We are very thankful to the patients that participated in our study. We thank Katia Youssov, Laurent Cleret de Langavant, Marvin Lavechin, and the speech pathologists for the multiple helpful discussions and the evaluations of the patients.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Conversations between a clinician and a patient, in natural conditions, are valuable sources of information for medical follow-up. The automatic analysis of these dialogues could help extract new language markers and speed up the clinicians' reports. Yet, it is not clear which model is the most efficient to detect and identify the speaker turns, especially for individuals with speech disorders. Here, we proposed a split of the data that allows conducting a comparative evaluation of different diarization methods. We designed and trained end-to-end neural network architectures to directly tackle this task from the raw signal and evaluate each approach under the same metric. We also studied the effect of fine-tuning models to find the best performance. Experimental results are reported on naturalistic clinical conversations between Psychologists and Interviewees, at different stages of Huntington's disease, displaying a large panel of speech disorders. We found out that our best end-to-end model achieved 19.5% IER on the test set, compared to 23.6% achieved by the finetuning of the X-vector architecture. Finally, we observed that we could extract clinical markers directly from the automatic systems, highlighting the clinical relevance of our methods. * \u22c6 Equal contribution. We are very thankful to the patients that participated in our study. We thank Katia Youssov, Laurent Cleret de Langavant, Marvin Lavechin, and the speech pathologists for the multiple helpful discussions and the evaluations of the patients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "During the last decades, it became easier to collect large naturalistic corpora of speech data. It is now possible to obtain new realistic measurements of turn-takings and linguistic behaviours (Ash and Grossman, 2015) . These measurements can be especially useful during clinical interviews as they augment the current clinical panel of assessments and unlock home-based assessments (Matton et al., 2019) . The remote automatic measure of symptoms of patients with Neurodegenerative diseases could greatly improve the follow-up of patients and speedup ongoing clinical trials.",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "(Ash and Grossman, 2015)",
"ref_id": "BIBREF0"
},
{
"start": 384,
"end": 405,
"text": "(Matton et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Yet, this methodology relies on the heavy burden of manual annotation to reach the necessary amount needed to draw significant conclusions. It is now indispensable to have robust speech processing pipelines to extract meaningful insights from these long naturalistic datasets (Lahiri et al., 2020) . Huntington's Disease represents a unique opportunity to design and test these speech algorithms for Neurodegenerative diseases. Indeed, individuals with the Huntington's disease can exhibit a large spectrum of speech and language symptoms (Vogel et al., 2012) and it is possible to follow gene carriers even before the official clinical onset of the disease (Hinzen et al., 2018) . The first unavoidable computational tasks to extract speech and linguistic information from medical interviews is the diarization: (1) the detection of speaker-homogeneous portions of voice activity (Graf et al., 2015) and (2) the identification of speaker (Bigot et al., 2010) . Speaker turns are clinically informative for diagnostic in Huntington's Disease (Perez et al., 2018; Vogel et al., 2012) .",
"cite_spans": [
{
"start": 276,
"end": 297,
"text": "(Lahiri et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 539,
"end": 559,
"text": "(Vogel et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 658,
"end": 679,
"text": "(Hinzen et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 881,
"end": 900,
"text": "(Graf et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 939,
"end": 959,
"text": "(Bigot et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 1042,
"end": 1062,
"text": "(Perez et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 1063,
"end": 1082,
"text": "Vogel et al., 2012)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, a number of studies are trying to solve this problem directly from the audio signal and linguistic outputs, also referred to as Speaker Role Recognition. They are taking advantage of the specificities (ex: prosody, specific vocabulary, adapted language models) of each role in the different domains: Broadcast news programs (Bigot et al., 2010) , Meetings (Sapru and Valente, 2012) , Medical conversations (Flemotomos et al., 2018) , Child-centered recordings (Lavechin et al., 2020; Figure 1: Two approaches for the diarization of conversational clinical interviews. The steps for the Speaker Enrollment Protocol are in Blue, and Green for the Speaker Role Recognition. Koluguri et al., 2020) .",
"cite_spans": [
{
"start": 331,
"end": 351,
"text": "(Bigot et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 363,
"end": 388,
"text": "(Sapru and Valente, 2012)",
"ref_id": "BIBREF24"
},
{
"start": 413,
"end": 438,
"text": "(Flemotomos et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 467,
"end": 490,
"text": "(Lavechin et al., 2020;",
"ref_id": null
},
{
"start": 491,
"end": 491,
"text": "",
"ref_id": null
},
{
"start": 679,
"end": 701,
"text": "Koluguri et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another approach relies on Speaker Enrollment (Snyder et al., 2017; Heigold et al., 2016) , it aims to check the identity of a given speech segment based on a enrolled speaker template. Our study differ from these studies as they are evaluating their pipelines with already segmented speakerhomogeneous speech segments. Another related approach is Personal VAD (Voice Activity Detection) model from (Ding et al., 2020) where they used enrolled speaker template to detect speech segments from each individual speaker.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Snyder et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 68,
"end": 89,
"text": "Heigold et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 399,
"end": 418,
"text": "(Ding et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "None of these approaches have been compared under the same evaluation metric, despite prior works aiming at solving both these tasks (Garc\u00eda et al., 2019) and their high degree of similarities.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Garc\u00eda et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here in this paper, we aimed to detect automatically the portions of speech and to identify the speakers in medical conversation between Psychologists and Interviewees. These interviewees are either Healthy Controls (C), gene carriers without overt manifestation of Huntington's Disease (preHD) and manifest gene carriers of Huntington's Disease (HD). We introduced a novel way to split the datasets so that we are now capable to compare two different speech processing approaches to deal with these 2 problems ( Figure 1 ): Speaker Role Recognition and Speaker Enrollment Protocol. We showed the clinical relevance of these pipelines with the extraction speech markers that have been found predictive in Huntington's Disease.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Data, evaluation splits, metrics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ninety four participants were included from two observational cohorts (NCT01412125 and NCT03119246) in this ancillary study at the Hospital Henri-Mondor Cr\u00e9teil, France): 72 people tested with a number of CAG repeats on the Huntingtin gene above 35 (CAG > 35), and 22 Healthy Controls (C). Mutant Huntington gene carriers were considered premanifest if they both score less than five at the Total Motor score (TMS) and their Total functional capacity (TFC) equals 13 (Tabrizi et al., 2009) using the Unified Huntington Disease Rating Scale (UHDRS). All participants signed an informed consent and conducted an interview with an expert psychologist. Therefore in the diarization setting, there are two roles in each interview: a Psychologist and an Interviewee. The speech data were annotated with Seshat and Praat (Boersma et al., 2002) softwares. The dataset is composed of K = 94 interviews I 1...K . We designed a new way to split of speech dataset to compare different diarization approaches: an end-to-end Speaker Role Recognition model and a Speaker Enrollment pipeline (See Figure 2 ). The dataset is split in three sets which we refer to metatrain set M train , meta-dev set M dev and meta-test set M test with the ratio of 60%, 20%, and 20%, respectively. Interview",
"cite_spans": [
{
"start": 467,
"end": 489,
"text": "(Tabrizi et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 814,
"end": 836,
"text": "(Boersma et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 1081,
"end": 1089,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "I \u2208 I 1...K is composed of N I segments I = {U 0 , U 2 , . . . , U N I }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "Each segment U i is pronounced by a speaker s i . We summarized the corpus statistics in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "Each interview I in the meta-dev and meta-test is split in two sets which we refer dev set X dev and test set X test . X test is always kept fixed through all experiments, and we study the influence of the size of the X dev based on T dev that filters the segments (cf Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "All the data from the meta-train set M train is used to train or fine-tune the neural network models for voice activity detection, speaker change detection, speaker role recognition, and speaker enrollment. The dev set X dev of the meta-dev set M dev and the dev set X dev of the meta-test set M test are only used for the speaker enrollment experiments, to build the template representation of each speakers. The results on the test set X test of the meta-dev set M dev are used to select all the hyper-parameters and select the best model for each experiment. The final comparison is done with the test set X test of the meta-test set M test . Figure 2 : Illustration of the data split with 4 interviews. Each line I i represents an interview between the Interviewee and the Psychologist. The elevation of each row indicates 'who speaks when'. The segments can overlap. ",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
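{
"text": "As an illustration, the split logic above can be sketched as follows, assuming each interview is given as a list of (start, end, speaker) segments; the function names and the random 60/20/20 assignment are our assumptions, not the paper's exact code:\n\nimport random\n\ndef split_corpus(interviews, seed=0):\n    # 60/20/20 split into meta-train, meta-dev and meta-test sets.\n    idx = list(range(len(interviews)))\n    random.Random(seed).shuffle(idx)\n    n_train = int(0.6 * len(idx))\n    n_dev = int(0.2 * len(idx))\n    m_train = [interviews[i] for i in idx[:n_train]]\n    m_dev = [interviews[i] for i in idx[n_train:n_train + n_dev]]\n    m_test = [interviews[i] for i in idx[n_train + n_dev:]]\n    return m_train, m_dev, m_test\n\ndef split_interview(segments, t_dev):\n    # Within a meta-dev/meta-test interview: segments starting before\n    # t_dev (seconds) form the enrollment dev set, the rest the test set.\n    x_dev = [s for s in segments if s[0] < t_dev]\n    x_test = [s for s in segments if s[0] >= t_dev]\n    return x_dev, x_test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},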
{
"text": "To compare final performance of each approach, we use the Identification Error Rate (IER) taking into account both the segmentation and confusion errors. IER is obtained with pyannote.metrics (Bredin, 2017) :",
"cite_spans": [
{
"start": 192,
"end": 206,
"text": "(Bredin, 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2.2"
},
{
"text": "IER = T false alarm + T missed detection + T confusion T Total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2.2"
},
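{
"text": "The IER can be computed with pyannote.metrics, which the paper uses for evaluation; a minimal sketch, where the toy reference and hypothesis annotations are illustrative rather than taken from the corpus:\n\nfrom pyannote.core import Annotation, Segment\nfrom pyannote.metrics.identification import IdentificationErrorRate\n\n# Toy reference and hypothesis; in the paper the labels are the two roles.\nreference = Annotation()\nreference[Segment(0.0, 5.0)] = 'Psychologist'\nreference[Segment(5.0, 12.0)] = 'Interviewee'\n\nhypothesis = Annotation()\nhypothesis[Segment(0.0, 4.0)] = 'Psychologist'\nhypothesis[Segment(4.0, 12.0)] = 'Interviewee'\n\nmetric = IdentificationErrorRate()\nprint(metric(reference, hypothesis))  # (false alarm + missed + confusion) / total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2.2"
},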
{
"text": "The T confusion T Total component in the IER is related to the Miss-classification Rate (MR%) used in Speaker Role Recognition study (Flemotomos et al., 2019) , which is based on Frames and not duration of the turns. We compared the different approaches as a function of the size of the enrollment T dev in Figure 3 .",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "(Flemotomos et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 307,
"end": 315,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": "2.2"
},
{
"text": "We adapted the approach from (Lavechin et al., 2020) for the Speaker Role Recognition. We trained on M train a unique model to detect each role (Psychologist,Interviewee), and selects the best epoch on M dev . This is a multi-label multiclass segmentation problem. A threshold parameter for each role is optimized on the Meta-dev set M dev for the two output units of the model. Therefore the two classes can be activated at the same time, i.e. we can also detect overlapped speech. To solve and model this task, we used SincNet filters (Ravanelli and Bengio, 2018) to obtain adapted speech features vectors from the audio signal. The Sinc-Net output is fed to a stack of 2 bi-recurrent LSTM layers with hidden size of 128, then pass to a stack of 2 feed-forward layers of size 128 before a final decision layer. We used a binary cross-entropy loss and a cyclic scheduler as training procedure. The hyper-parameters to train our model can be found here 1 .",
"cite_spans": [
{
"start": 29,
"end": 52,
"text": "(Lavechin et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Role Recognition",
"sec_num": "3.1"
},
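{
"text": "For concreteness, a minimal PyTorch sketch of the trunk described above (2 bidirectional LSTM layers of hidden size 128, 2 feed-forward layers of size 128, and a 2-unit decision layer trained with binary cross-entropy); the SincNet front end is replaced by a placeholder feature dimension of 60, so this is an assumption-laden sketch rather than the authors' code:\n\nimport torch\nimport torch.nn as nn\n\nclass RoleSegmenter(nn.Module):\n    # Input: (batch, frames, n_feats) SincNet-like features (placeholder).\n    def __init__(self, n_feats=60):\n        super().__init__()\n        self.lstm = nn.LSTM(n_feats, 128, num_layers=2,\n                            bidirectional=True, batch_first=True)\n        self.ff = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),\n                                nn.Linear(128, 128), nn.ReLU())\n        self.decision = nn.Linear(128, 2)  # Psychologist / Interviewee\n\n    def forward(self, x):\n        h, _ = self.lstm(x)\n        return self.decision(self.ff(h))  # per-frame, per-role logits\n\nmodel = RoleSegmenter()\nx = torch.randn(4, 200, 60)                # 4 windows of 200 frames\ny = (torch.rand(4, 200, 2) > 0.5).float()  # multi-label frame targets\nloss = nn.BCEWithLogitsLoss()(model(x), y)  # both roles can be active at once\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Role Recognition",
"sec_num": "3.1"
},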
{
"text": "The Speaker enrollment protocol can be decomposed into four tasks: (1) Voice Activity Detection (2) Speaker Change Detection, (3) Enrollment, (4) Identification. We extended the speech processing toolkit from (Bredin et al., 2020) pyannote.audio to run our experiments. Clinical laboratories can not all re-train in-domain speech processing models due to data scarcity or a lack of computational resources. Therefore, we evaluated pretrained models on open-source datasets and transfer models on our dataset to evaluate these out-of-domain performances with real clinical conversational conditions.",
"cite_spans": [
{
"start": 209,
"end": 230,
"text": "(Bredin et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker enrollment protocol",
"sec_num": "3.2"
},
{
"text": "The first step is the Voice Activity Detection (VAD), i.e. obtain the speech segments in the audio signal. It can be modeled as an audio sequence labeling task. There are 2 classes (Speech or Non-Speech). The VAD labels for each interview I are the presence or not of a segment U i at time t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": "3.2.1"
},
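{
"text": "A minimal sketch of how such frame-level VAD targets can be derived from the reference segments, assuming a 100 Hz frame rate (the rate is our assumption):\n\nimport numpy as np\n\ndef vad_frame_labels(segments, duration, frame_rate=100):\n    # segments: list of (start, end) in seconds; returns one 0/1 label per frame.\n    labels = np.zeros(int(duration * frame_rate), dtype=np.int8)\n    for start, end in segments:\n        labels[int(start * frame_rate):int(end * frame_rate)] = 1\n    return labels\n\nlabels = vad_frame_labels([(0.5, 2.0), (3.2, 4.0)], duration=5.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": "3.2.1"
},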
{
"text": "The model can be used already Pretrained or Retrained on the meta-train set M train of our dataset. We choose the DIHARD dataset (Ryant et al., 2019) as a potential pretrained dataset as it contains multiple source domain data (clinical interviews among them). When trained from scratch, the training is done for 200 pyannote epochs and the model is selected on the Meta-dev M dev . The model is also composed of SincNet filters with 2 bi-recurrent LSTM layers and 2 feed-forward layers. The full specifications can be found here 2 .",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "(Ryant et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": "3.2.1"
},
{
"text": "The second step is the Speaker Change Detection (SCD), i.e. obtain the moment when one of a speaker starts or stops talking. It can aslo be modeled as an audio sequence labeling task. There are 2 classes (Change or No-Change). The SCD labels for each interview I are the start or end of a segment U i at time t. We also compared Pretrained on DIHARD and Retrained models. We used the same model as for the Voice Activity Detection. The full specifications can be found here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Change Detection",
"sec_num": "3.2.2"
},
{
"text": "Based on VAD and SCD outputs, for each Interview I we obtain a set of N \u2032 I candidates speakerhomogeneous segments {\u00db 1 , . . .\u00db N \u2032 I }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Change Detection",
"sec_num": "3.2.2"
},
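{
"text": "A sketch of how the candidate segments can be derived, cutting each VAD speech region at the detected change points; the exact fusion used in the pipeline may differ, so this is only illustrative:\n\ndef candidate_segments(vad_regions, change_points):\n    # vad_regions: list of (start, end) speech regions from the VAD.\n    # change_points: sorted change times (seconds) from the SCD.\n    candidates = []\n    for start, end in vad_regions:\n        cuts = [t for t in change_points if start < t < end]\n        bounds = [start] + cuts + [end]\n        candidates += list(zip(bounds[:-1], bounds[1:]))\n    return candidates\n\nsegments = candidate_segments([(0.0, 6.0)], [2.5, 4.1])\n# [(0.0, 2.5), (2.5, 4.1), (4.1, 6.0)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker Change Detection",
"sec_num": "3.2.2"
},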
{
"text": "In the enrollment stage, we need to get a Speaker Embedding function f \u03b8 for our specific task. We combined SincNet filters and the X-vector architecture (Snyder et al., 2017) as in (Bredin et al., 2020) . For finetuning, we froze all layers and finetuned the last layer. We used the VoxCeleb2 dataset (Nagrani et al., 2017) as a pretraining dataset as it contains a diverse distribution of speakers and recording conditions.",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Snyder et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 182,
"end": 203,
"text": "(Bredin et al., 2020)",
"ref_id": null
},
{
"start": 302,
"end": 324,
"text": "(Nagrani et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Enrollment",
"sec_num": "3.2.3"
},
{
"text": "Then, we used the set of segments from the dev set X dev of the meta-dev and meta-test to build a template vector m j for each speaker j in the interview I. X dev contain a set of segments U enrollment speaker j from each speaker j. The start of each segment U enrollment speaker j needs to be smaller than T dev . We computed the average of the representations for each speaker j:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enrollment",
"sec_num": "3.2.3"
},
{
"text": "m j = 1 |U enrollment speaker j | U \u2208U enrollment speaker j f \u03b8 (U )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enrollment",
"sec_num": "3.2.3"
},
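{
"text": "Equation (1) amounts to averaging the embeddings of each speaker's enrollment segments; a minimal sketch, where embed stands for f_\u03b8 and is assumed to be given:\n\nimport numpy as np\n\ndef speaker_template(enrollment_segments, embed):\n    # embed: callable mapping a waveform segment to an embedding vector (f_theta).\n    vectors = [embed(u) for u in enrollment_segments]\n    return np.mean(vectors, axis=0)  # template m_j, Equation (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enrollment",
"sec_num": "3.2.3"
},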
{
"text": "(1) In principle, the more data you have to build template of each speaker, the easier it is to distinguish them. Thus, we studied the effect of the size of the enrollment based on the parameter T dev \u2208 (90s, 100s, . . . , 180s) to build the template m j (Larcher et al., 2014) .",
"cite_spans": [
{
"start": 255,
"end": 277,
"text": "(Larcher et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Enrollment",
"sec_num": "3.2.3"
},
{
"text": "For the identification stage, we use the function f \u03b8 and the different representation m j of the speakers from the enrollment stage. We used the following Figure 3 : Identification Error Rates for the different combination of approaches on the test set X test of the meta-test set M test as a function of the size of the enrollment parameter T dev . Spk Emb., VAD,SCD stand for Speaker Embedding, Voice Activity Detection and Speaker Change Detection. Best performance of each approach is displayed at the best T dev . cosine distance D to build a scoring function and compare each segment\u00db \u2208 {\u00db 1 , . . .\u00db N \u2032 I } to each template m j :",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(\u00db , m j ) = 1 2 \uf8eb \uf8ed 1 \u2212 f \u03b8 (\u00db ) \u22a4 m j \u2225f \u03b8 (\u00db )\u2225 \u2225m j \u2225 \uf8f6 \uf8f8 (2) argmin j D(\u00db , m j ) : Selects Speaker j",
"eq_num": "(3)"
}
],
"section": "Identification",
"sec_num": "3.2.4"
},
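{
"text": "A minimal sketch of Equations (2) and (3), with embeddings as plain numpy vectors:\n\nimport numpy as np\n\ndef cosine_distance(u, m):\n    # Equation (2): D lies in [0, 1] and is small when the vectors are aligned.\n    return 0.5 * (1.0 - np.dot(u, m) / (np.linalg.norm(u) * np.linalg.norm(m)))\n\ndef identify(segment_embedding, templates):\n    # Equation (3): pick the enrolled speaker with the smallest distance.\n    distances = {j: cosine_distance(segment_embedding, m)\n                 for j, m in templates.items()}\n    return min(distances, key=distances.get)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},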
{
"text": "In addition, we analysed topline performance of the speaker embedding models when the Ground Truth Segmentation is provided. Finally, we computed a chance baseline based on speaker Enrollment by randomly permutating all the cosine distances. Spearman correlation is computed to compare clinical markers extracted from our best system to ground truth extractions (Figures 4 and 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 379,
"text": "(Figures 4 and 5)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},
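{
"text": "The Spearman comparison can be sketched with scipy; the per-interview marker values below are placeholders, not the study's data:\n\nfrom scipy.stats import spearmanr\n\n# One marker value per interview: ground truth vs. automatic system.\ngt_marker = [0.31, 0.42, 0.28, 0.55, 0.47]\nsys_marker = [0.29, 0.45, 0.30, 0.51, 0.50]\n\nrho, p = spearmanr(gt_marker, sys_marker)\nprint(rho, p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},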
{
"text": "4 Results and discussions Figure 3 shows results in term of IER for the different approaches. Both approaches greatly improved over chance. If we consider pipelines solving both segmentation and identification, our best performance is obtained using the Speaker Role Recognition approach with IER=19.5% while the Speaker to 23.6%). We ran an additional ablation experiment (Table 2) for the Speaker Role Recognition to measure the amount of data necessary. This ablation study informed us on the necessary amount of data to reach certain level of performance. Even though models are better than Chance, we found out that at least 50% of our dataset (28 Interviews) is necessary to outperform the Speaker Enrollment Protocol pipeline (IER of 20.7% vs 23.6%). The analysis of the pattern of errors showed that the most important component is the False Alarm (FA), and a tenfold increase in dataset size allows to gain 4 points of FA. Therefore, most of the errors come from the voice activity detection part of the system. One of our hypothesis is that the system is confused by too much ambient noises from the hospital environment and thus potentially trigger too much positive presence of speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 3",
"ref_id": null
},
{
"start": 373,
"end": 382,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},
{
"text": "In previous studies in Huntington's Disease (Vogel et al., 2012; Perez et al., 2018) , the Ratio of Silence and Statistics on utterances were informative to distinguish between classes of Individuals. These speech markers can be extracted directly from the predictions of the Speaker Role Recognition outputs. We computed the Ratio of Silence and the Standard Deviation of Duration of Utterances on the test set of the Meta-test set M test . This computation was done both from the Ground Truth Segmentation and the segmentation provided by the Speaker role recognition system (Figures 4, 5 . We observed that the automatic system outputs behaved differently as a function of clinical marker. The Ratio of Silence was better predicted (significant spearman correlation of r = 0.579, p = 0.009) than the SD of Duration of Utterances (non significant spearman correlation of r = 0.325, p = 0.175). One potential interpretation of our results is that the difference between the ratio and the standard deviation reveals that our pipeline is great overall to obtain summary statistics of the interview, but its precision at the turntaking level is not sufficient to obtain turn statistics. Some bias of the predictive system might not hurt the IER metric but hurt the reliability of some clinical measures.",
"cite_spans": [
{
"start": 44,
"end": 64,
"text": "(Vogel et al., 2012;",
"ref_id": "BIBREF28"
},
{
"start": 65,
"end": 84,
"text": "Perez et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 577,
"end": 590,
"text": "(Figures 4, 5",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},
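{
"text": "Both markers can be computed directly from a diarization output; a minimal sketch, assuming segments are (start, end, speaker) tuples and a 10 ms grid for the silence computation:\n\nimport numpy as np\n\ndef ratio_of_silence(segments, duration):\n    # Silence = portions of the interview with no active speaker.\n    grid = np.zeros(int(duration * 100), dtype=bool)  # 10 ms resolution\n    for start, end, _ in segments:\n        grid[int(start * 100):int(end * 100)] = True\n    return 1.0 - grid.mean()\n\ndef sd_utterance_duration(segments, speaker='Interviewee'):\n    durations = [end - start for start, end, spk in segments if spk == speaker]\n    return float(np.std(durations))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification",
"sec_num": "3.2.4"
},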
{
"text": "Detection and Identification of speaker turns are fundamental problems in speech processing, especially in healthcare applications. While works studying these problems in isolation has provided valuable insights, in this work, we showed that Speaker Role Recognition was the most suitable approach for Interviewees at different stages of Huntington's Disease. For future work, we plan to investigate the use of these methods to derive robust biomarkers automatically and compare them to more classic approaches Perez et al., 2018; Romana et al., 2020) .",
"cite_spans": [
{
"start": 511,
"end": 530,
"text": "Perez et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 531,
"end": 551,
"text": "Romana et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "https://tinyurl.com/etfrky3w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tinyurl.com/44677f7c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Why study connected speech production. Cognitive neuroscience of natural language use",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Ash",
"suffix": ""
},
{
"first": "Murray",
"middle": [],
"last": "Grossman",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "29--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Ash and Murray Grossman. 2015. Why study connected speech production. Cognitive neuro- science of natural language use, pages 29-58.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Looking for relevant features for speaker role recognition",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Bigot",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Pinquier",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Ferran\u00e9",
"suffix": ""
},
{
"first": "R\u00e9gine",
"middle": [],
"last": "Andr\u00e9-Obrecht",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Bigot, Julien Pinquier, Isabelle Ferran\u00e9, and R\u00e9gine Andr\u00e9-Obrecht. 2010. Looking for relevant features for speaker role recognition. In Eleventh Annual Conference of the International Speech Com- munication Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Praat, a system for doing phonetics by computer. Glot international",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Boersma et al. 2002. Praat, a system for doing phonetics by computer. Glot international, 5.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "pyannote.metrics: a toolkit for reproducible evaluation, diagnostic, and error analysis of speaker diarization systems",
"authors": [
{
"first": "Herv\u00e9",
"middle": [],
"last": "Bredin",
"suffix": ""
}
],
"year": 2017,
"venue": "In Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herv\u00e9 Bredin. 2017. pyannote.metrics: a toolkit for reproducible evaluation, diagnostic, and error anal- ysis of speaker diarization systems. In Interspeech, Stockholm, Sweden.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Wassim Bouaziz, and Marie-Philippe Gill. 2020. Pyannote. audio: neural building blocks for speaker diarization",
"authors": [
{
"first": "Herv\u00e9",
"middle": [],
"last": "Bredin",
"suffix": ""
},
{
"first": "Ruiqing",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Manuel"
],
"last": "Coria",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Gelly",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Korshunov",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Lavechin",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Fustes",
"suffix": ""
},
{
"first": "Hadrien",
"middle": [],
"last": "Titeux",
"suffix": ""
},
{
"first": "Wassim",
"middle": [],
"last": "Bouaziz",
"suffix": ""
},
{
"first": "Marie-Philippe",
"middle": [],
"last": "Gill",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "7124--7128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herv\u00e9 Bredin, Ruiqing Yin, Juan Manuel Coria, Gre- gory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, and Marie-Philippe Gill. 2020. Pyannote. audio: neural building blocks for speaker diarization. In ICASSP, pages 7124-7128. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Personal vad: Speaker-conditioned voice activity detection",
"authors": [
{
"first": "Shaojin",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuo-Yiin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Ignacio-Lopez",
"middle": [],
"last": "Moreno",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. Odyssey 2020 The Speaker and Language Recognition Workshop",
"volume": "",
"issue": "",
"pages": "433--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaojin Ding, Quan Wang, Shuo-Yiin Chang, Li Wan, and Ignacio-Lopez Moreno. 2020. Personal vad: Speaker-conditioned voice activity detection. In Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, pages 433-439.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Role specific lattice rescoring for speaker role recognition from speech recognition outputs",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Flemotomos",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Atkins",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "7330--7334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Flemotomos, Panayiotis Georgiou, David C Atkins, and Shrikanth Narayanan. 2019. Role spe- cific lattice rescoring for speaker role recognition from speech recognition outputs. In ICASSP, pages 7330-7334. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Combined speaker clustering and role recognition in conversational speech",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Flemotomos",
"suffix": ""
},
{
"first": "Pavlos",
"middle": [],
"last": "Papadopoulos",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc",
"volume": "",
"issue": "",
"pages": "1378--1382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Flemotomos, Pavlos Papadopoulos, James Gibson, and Shrikanth Narayanan. 2018. Combined speaker clustering and role recognition in conversa- tional speech. Proc. Interspeech 2018, pages 1378- 1382.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Features for voice activity detection: a comparative analysis",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Graf",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Herbig",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Schmidt",
"suffix": ""
}
],
"year": 2015,
"venue": "EURASIP Journal on Advances in Signal Processing",
"volume": "2015",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Graf, Tobias Herbig, Markus Buck, and Gerhard Schmidt. 2015. Features for voice activity detec- tion: a comparative analysis. EURASIP Journal on Advances in Signal Processing, 2015(1):91.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-to-end text-dependent speaker verification",
"authors": [
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2016,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "5115--5119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georg Heigold, Ignacio Moreno, Samy Bengio, and Noam Shazeer. 2016. End-to-end text-dependent speaker verification. In ICASSP, pages 5115-5119. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A systematic linguistic profile of spontaneous narrative speech in presymptomatic and early stage huntington's disease",
"authors": [
{
"first": "Wolfram",
"middle": [],
"last": "Hinzen",
"suffix": ""
},
{
"first": "Joana",
"middle": [],
"last": "Rossell\u00f3",
"suffix": ""
},
{
"first": "Cati",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Estela",
"middle": [],
"last": "Camara",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Garcia-Gorro",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Salvador",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "De Diego-Balaguer",
"suffix": ""
}
],
"year": 2018,
"venue": "Cortex",
"volume": "100",
"issue": "",
"pages": "71--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfram Hinzen, Joana Rossell\u00f3, Cati Morey, Estela Ca- mara, Clara Garcia-Gorro, Raymond Salvador, and Ruth de Diego-Balaguer. 2018. A systematic lin- guistic profile of spontaneous narrative speech in pre- symptomatic and early stage huntington's disease. Cortex, 100:71-83.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Meta-learning for robust child-adult classification from speech",
"authors": [],
"year": null,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "8094--8098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meta-learning for robust child-adult classification from speech. In ICASSP 2020-2020 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8094-8098. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning domain invariant representations for child-adult classification from speech",
"authors": [
{
"first": "Rimita",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Somer",
"middle": [],
"last": "Bishop",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6749--6753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rimita Lahiri, Manoj Kumar, Somer Bishop, and Shrikanth Narayanan. 2020. Learning domain in- variant representations for child-adult classification from speech. In ICASSP 2020-2020 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6749-6753. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text-dependent speaker verification: Classifiers, databases and rsr2015",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Larcher",
"suffix": ""
},
{
"first": "Kong Aik",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Speech Communication",
"volume": "60",
"issue": "",
"pages": "56--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Larcher, Kong Aik Lee, Bin Ma, and Haizhou Li. 2014. Text-dependent speaker verification: Clas- sifiers, databases and rsr2015. Speech Communica- tion, 60:56-77.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Emmanuel Dupoux, and Alejandrina Cristia. 2020. An open-source voice type classifier for childcentered daylong recordings",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Lavechin",
"suffix": ""
},
{
"first": "Ruben",
"middle": [],
"last": "Bousbib",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Bredin",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Alejandrina",
"middle": [],
"last": "Cristia",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.12656"
]
},
"num": null,
"urls": [],
"raw_text": "Marvin Lavechin, Ruben Bousbib, Herv\u00e9 Bredin, Em- manuel Dupoux, and Alejandrina Cristia. 2020. An open-source voice type classifier for child- centered daylong recordings. arXiv preprint arXiv:2005.12656.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Into the wild: Transitioning from recognizing mood in clinical interactions to personal conversations for individuals with bipolar disorder",
"authors": [
{
"first": "Katie",
"middle": [],
"last": "Matton",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"G"
],
"last": "McInnis",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Mower"
],
"last": "Provost",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "1438--1442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katie Matton, Melvin G McInnis, and Emily Mower Provost. 2019. Into the wild: Transitioning from recognizing mood in clinical interactions to personal conversations for individuals with bipolar disorder. Proc. Interspeech, pages 1438-1442.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Voxceleb: a large-scale speaker identification dataset",
"authors": [
{
"first": "Arsha",
"middle": [],
"last": "Nagrani",
"suffix": ""
},
{
"first": "Joon",
"middle": [
"Son"
],
"last": "Chung",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2017,
"venue": "Telephony",
"volume": "3",
"issue": "",
"pages": "33--039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arsha Nagrani, Joon Son Chung, and Andrew Zisser- man. 2017. Voxceleb: a large-scale speaker identifi- cation dataset. Telephony, 3:33-039.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classification of huntington disease using acoustic and lexical features",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Wenyu",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Duc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Noelle",
"middle": [],
"last": "Carlozzi",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Dayalu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Mower"
],
"last": "Provost",
"suffix": ""
}
],
"year": 2018,
"venue": "In INTER-SPEECH",
"volume": "2018",
"issue": "",
"pages": "1898--1902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Perez, Wenyu Jin, Duc Le, Noelle Carlozzi, Praveen Dayalu, Angela Roberts, and Emily Mower Provost. 2018. Classification of huntington dis- ease using acoustic and lexical features. In INTER- SPEECH, volume 2018, pages 1898-1902.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Speaker recognition from raw waveform with sincnet",
"authors": [
{
"first": "Mirco",
"middle": [],
"last": "Ravanelli",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "1021--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirco Ravanelli and Yoshua Bengio. 2018. Speaker recognition from raw waveform with sincnet. In Spoken Language Technology Workshop (SLT), pages 1021-1028. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Vocal markers from sustained phonation in huntington's disease",
"authors": [
{
"first": "Rachid",
"middle": [],
"last": "Riad",
"suffix": ""
},
{
"first": "Hadrien",
"middle": [],
"last": "Titeux",
"suffix": ""
},
{
"first": "Laurie",
"middle": [],
"last": "Lemoine",
"suffix": ""
},
{
"first": "Justine",
"middle": [],
"last": "Montillot",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"Hamet"
],
"last": "Bagnou",
"suffix": ""
},
{
"first": "Xuan",
"middle": [
"Nga"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Anne-Catherine",
"middle": [],
"last": "Bachoud-L\u00e9vi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Rachid Riad, Hadrien Titeux, Laurie Lemoine, Jus- tine Montillot, Jennifer Hamet Bagnou, Xuan Nga Cao, Emmanuel Dupoux, and Anne-Catherine Bachoud-L\u00e9vi. 2020. Vocal markers from sustained phonation in huntington's disease. arXiv preprint arXiv:2006.05365.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Classification of manifest huntington disease using vowel distortion measures",
"authors": [
{
"first": "A",
"middle": [],
"last": "Romana",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bandon",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Carlozzi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Provost",
"suffix": ""
}
],
"year": 2020,
"venue": "Interspeech",
"volume": "2020",
"issue": "",
"pages": "4966--4970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Romana, J Bandon, N Carlozzi, A Roberts, and EM Provost. 2020. Classification of manifest hunt- ington disease using vowel distortion measures. In Interspeech, volume 2020, pages 4966-4970.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sriram Ganapathy, and Mark Liberman",
"authors": [
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
}
],
"year": 2019,
"venue": "The second dihard diarization challenge: Dataset, task, and baselines. Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "978--982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neville Ryant, Kenneth Church, Christopher Cieri, Ale- jandrina Cristia, Jun Du, Sriram Ganapathy, and Mark Liberman. 2019. The second dihard diariza- tion challenge: Dataset, task, and baselines. Proc. Interspeech, pages 978-982.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic speaker role labeling in ami meetings: recognition of formal and social roles",
"authors": [
{
"first": "Ashtosh",
"middle": [],
"last": "Sapru",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Valente",
"suffix": ""
}
],
"year": 2012,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "5057--5060",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashtosh Sapru and Fabio Valente. 2012. Automatic speaker role labeling in ami meetings: recognition of formal and social roles. In ICASSP, pages 5057- 5060. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep neural network embeddings for text-independent speaker verification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Garcia-Romero",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "999--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Snyder, Daniel Garcia-Romero, Daniel Povey, and Sanjeev Khudanpur. 2017. Deep neural network embeddings for text-independent speaker verification. Proc. Interspeech, pages 999-1003.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Biological and clinical manifestations of huntington's disease in the longitudinal track-hd study: cross-sectional analysis of baseline data",
"authors": [
{
"first": "Sarah",
"middle": [
"J"
],
"last": "Tabrizi",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"R"
],
"last": "Langbehn",
"suffix": ""
},
{
"first": "Blair",
"middle": [
"R"
],
"last": "Leavitt",
"suffix": ""
},
{
"first": "Raymund",
"middle": [
"A",
"C"
],
"last": "Roos",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Durr",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Craufurd",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kennard",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"L"
],
"last": "Hicks",
"suffix": ""
},
{
"first": "Nick",
"middle": [
"C"
],
"last": "Fox",
"suffix": ""
},
{
"first": "Rachael",
"middle": [
"I"
],
"last": "Scahill",
"suffix": ""
}
],
"year": 2009,
"venue": "The Lancet Neurology",
"volume": "8",
"issue": "9",
"pages": "791--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah J Tabrizi, Douglas R Langbehn, Blair R Leavitt, Raymund AC Roos, Alexandra Durr, David Craufurd, Christopher Kennard, Stephen L Hicks, Nick C Fox, Rachael I Scahill, et al. 2009. Biological and clinical manifestations of huntington's disease in the longi- tudinal track-hd study: cross-sectional analysis of baseline data. The Lancet Neurology, 8(9):791-801.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Seshat: A tool for managing and verifying annotation campaigns of audio data",
"authors": [
{
"first": "Hadrien",
"middle": [],
"last": "Titeux",
"suffix": "*"
},
{
"first": "Rachid",
"middle": [],
"last": "Riad",
"suffix": "*"
},
{
"first": "Xuan-Nga",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Hamilakis",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Madden",
"suffix": ""
},
{
"first": "Alejandrina",
"middle": [],
"last": "Cristia",
"suffix": ""
},
{
"first": "Anne-Catherine",
"middle": [],
"last": "Bachoud-L\u00e9vi",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2020,
"venue": "LREC, Marseille. * Equal contribution",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hadrien Titeux*, Rachid Riad*, Xuan-Nga Cao, Nicolas Hamilakis, Kris Madden, Alejandrina Cristia, Anne- Catherine Bachoud-L\u00e9vi, and Emmanuel Dupoux. 2020. Seshat: A tool for managing and verifying annotation campaigns of audio data. In LREC, Mar- seille. * Equal contribution.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Speech acoustic markers of early stage and prodromal huntington's disease: a marker of disease onset?",
"authors": [
{
"first": "Adam",
"middle": [
"P"
],
"last": "Vogel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shirbin",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Churchyard",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"C"
],
"last": "Stout",
"suffix": ""
}
],
"year": 2012,
"venue": "Neuropsychologia",
"volume": "50",
"issue": "14",
"pages": "3273--3278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam P Vogel, Christopher Shirbin, Andrew J Church- yard, and Julie C Stout. 2012. Speech acoustic mark- ers of early stage and prodromal huntington's dis- ease: a marker of disease onset? Neuropsychologia, 50(14):3273-3278.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Standard Deviations (SD) of the Duration of Utterances of Interviewees from the Ground truth segmentation and the best Speaker role recognition system.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Corpus statistics. P stands for Psychologist. IT stands for Interviewee. Dur stands for Duration and reported in hour. Durations are reported in hours.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>M train</td><td>M dev</td><td>M test</td></tr><tr><td>#Interviews</td><td>57</td><td>18</td><td>19</td></tr><tr><td>#Segments IT</td><td>21400</td><td>7503</td><td>7788</td></tr><tr><td>#Segments P</td><td>4184</td><td>1381</td><td>1517</td></tr><tr><td>Dur Role IT</td><td>7.65</td><td>3.02</td><td>3.21</td></tr><tr><td>Dur Role P</td><td>3.54</td><td>1.14</td><td>1.15</td></tr><tr><td>Dur Overlap</td><td>1.10</td><td>0.50</td><td>0.45</td></tr><tr><td colspan=\"4\">C/preHD/HD 13/11/33 4/4/10 5/3/11</td></tr></table>"
},
"TABREF1": {
"text": "Speaker Role Recognition Ablation study: Identification Error Rates on the test set X test of the meta-test set M test as a function of the percentage of interview in the meta-train set M train . MD stands for Missed detection, FA for False Alarm and Conf. for Confusion",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"4\">% of M train MD FA Conf. IER</td></tr><tr><td>10%</td><td>8.0 14.5</td><td>3.9</td><td>26.5</td></tr><tr><td>20%</td><td>7.8 12.4</td><td>3.8</td><td>24.0</td></tr><tr><td>50%</td><td>7.5 10.4</td><td>2.5</td><td>20.7</td></tr><tr><td>100%</td><td>7.1 10.2</td><td>2.3</td><td>19.5</td></tr><tr><td colspan=\"4\">Figure 4: Ratio of Silence from the Ground truth seg-mentation and from the best Speaker role recognition pipeline.</td></tr></table>"
}
}
}
}