---
license: mit
language:
  - en
configs:
  - config_name: benchmark
    data_files:
      - split: only_gender_reliable
        path: gender_metadata.csv
      - split: emotion_reliable
        path: emotion_metadata.csv
---

# Personal Hub: Exploring High-Expressiveness Speech Data through Spatio-Temporal Feature Integration and Model Fine-Tuning


## Introduction

In this work, we present Personal Hub, a novel framework for mining and utilizing high-expressivity speech data by integrating spatio-temporal context with combinatorial attribute control. At the core of our approach lies a Speech Attribute Matrix, which enables annotators to systematically combine speaker-related features such as age, gender, emotion, accent, and environment with temporal metadata to curate speech samples with varied and rich expressive characteristics. Based on this matrix-driven data collection paradigm, we construct a multi-level expressivity dataset, categorized into three tiers according to the diversity and complexity of attribute combinations. We then investigate the benefits of this curated data through two lines of model fine-tuning: (1) automatic speech recognition (ASR) models, where we demonstrate that incorporating high-expressivity data accelerates convergence and enhances learned acoustic representations, and (2) large end-to-end speech models, where both human and model-based evaluations reveal improved interactional and expressive capabilities after fine-tuning. Our results underscore the potential of high-expressivity speech datasets in enhancing both task-specific performance and the overall communicative competence of speech AI systems.

## Method

### Filter for Usability

To ensure the quality and consistency of the audio data, we applied the following preprocessing steps:

- **Duration Filtering:** Audio clips shorter than 5 seconds or longer than 15 seconds were excluded to maintain a consistent length range suitable for analysis.

- **Resampling:** All audio files were resampled to a 16 kHz sampling rate, which is commonly used in speech processing tasks to balance quality and computational efficiency.

- **Channel Conversion:** Stereo audio files were converted to mono by averaging the left and right channels. This step ensures uniformity across the dataset and simplifies subsequent processing.
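The three preprocessing steps above can be sketched as follows. This is a minimal illustration using NumPy and SciPy, not the exact pipeline used to build the dataset; the function name `preprocess_clip` is our own.

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

TARGET_SR = 16000
MIN_SEC, MAX_SEC = 5.0, 15.0

def preprocess_clip(audio: np.ndarray, sr: int):
    """Apply the README's three steps to one clip.

    audio: float array, shape (n_samples,) or (n_channels, n_samples).
    Returns the mono 16 kHz signal, or None if the clip falls
    outside the 5-15 s duration window.
    """
    # Channel conversion: average channels to mono.
    if audio.ndim == 2:
        audio = audio.mean(axis=0)

    # Duration filtering (checked at the original sampling rate).
    duration = audio.shape[0] / sr
    if not (MIN_SEC <= duration <= MAX_SEC):
        return None

    # Resampling to 16 kHz with a polyphase filter.
    if sr != TARGET_SR:
        g = gcd(sr, TARGET_SR)
        audio = resample_poly(audio, TARGET_SR // g, sr // g)
    return audio
```

Note that duration is checked before resampling, so the 5-15 s window refers to wall-clock length regardless of the source sampling rate.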

### Filter for Transcription

We used Whisper-Large-v3-turbo to evaluate transcription quality, retaining only samples with a Word Error Rate (WER) below 0.1. This model was chosen for its strong performance and fast inference, making it suitable for large-scale filtering. The WER threshold ensures high-quality transcriptions and reduces noise for downstream tasks.
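The WER gate can be reproduced with a plain word-level edit distance. The sketch below is self-contained; in the actual filtering, the hypothesis would be the Whisper-Large-v3-turbo transcript and the reference the ground-truth text.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            # deletion, insertion, substitution (free if words match)
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
            prev = cur
    return d[len(hyp)] / max(len(ref), 1)

def passes_wer_gate(reference: str, hypothesis: str, threshold: float = 0.1) -> bool:
    """Keep a sample only if its WER is below the 0.1 threshold."""
    return wer(reference, hypothesis) < threshold
```

Libraries such as `jiwer` implement the same metric with text normalization built in; a bare edit distance is shown here to keep the example dependency-free.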

### Filter for Gender

Manual verification was conducted by four annotators. Only samples with unanimous agreement among all four were retained; others were discarded.

### Filter for Emotion

Emotion labels were likewise reviewed manually by four annotators, and only samples with unanimous agreement on the emotion label were kept.
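The consensus rule, applied identically for gender and emotion, amounts to a one-line set check. The sketch below assumes annotations arrive as `(clip_id, votes)` pairs, a hypothetical layout of our own choosing.

```python
def unanimous(votes: list) -> bool:
    """True iff all annotators assigned the same label."""
    return len(set(votes)) == 1

def filter_unanimous(samples: list) -> list:
    """samples: list of (clip_id, [label_1, ..., label_4]) pairs.

    Retain only clips where all four annotators agree, pairing each
    kept clip with the agreed label; discard everything else.
    """
    return [(clip, votes[0]) for clip, votes in samples if unanimous(votes)]
```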

## Data Sources

| Split                  | Source corpora                           |
|------------------------|------------------------------------------|
| `only_gender_reliable` | CommonVoice, VCTK, LibriSpeech           |
| `emotion_reliable`     | CREMA-D, RAVDESS, MEAD, TESS, SAVEE, ESD |