---
license: cc-by-4.0
task_categories:
- text-generation
- text-to-speech
- automatic-speech-recognition
tags:
- Urdu
language:
- ur
pretty_name: Munch Hashed Index
---
# Munch Hashed Index - Lightweight Audio Reference Dataset

## Overview
Munch Hashed Index is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the Munch Urdu TTS Dataset. Instead of storing 1.27 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:
- Fast duplicate detection across 4.17 million audio samples
- Efficient dataset exploration without downloading terabytes
- Quick metadata queries (voice distribution, text stats, etc.)
- Selective audio retrieval - download only what you need
- Storage efficiency - 99.92% space reduction (1.27 TB → ~1 GB)
## Related Datasets
- Original Dataset: humair025/Munch - Full audio dataset (1.27 TB)
- This Index: humair025/hashed_data - Hashed reference (~1 GB)
- Munch-1 (v2): humair025/munch-1 - Newer version (3.28 TB, 3.86M samples)
- Munch-1 Index: humair025/hashed_data_munch_1 - Index for v2
## What Problem Does This Solve?

### The Challenge

The original Munch dataset contains:

- 4,167,500 audio-text pairs
- 1.27 TB total size
- ~8,300 separate parquet files

This makes it difficult to:

- Quickly check if specific audio exists
- Find duplicate audio samples
- Explore metadata without downloading everything
- Work on limited bandwidth/storage

### The Solution

This hashed index provides:

- All metadata (text, voice, timestamps) without audio bytes
- SHA-256 hashes for every audio file (unique fingerprints)
- File references (which parquet file contains each audio clip)
- Fast queries - search 4.17M records in seconds
- Retrieve on demand - download only specific audio when needed
## Quick Start

### Installation

```bash
pip install datasets pandas
```
### Basic Usage

```python
from datasets import load_dataset
import pandas as pd

# Load the entire hashed index (fast - only ~1 GB!)
ds = load_dataset("humair025/hashed_data", split="train")
df = pd.DataFrame(ds)

print(f"Total records: {len(df)}")
print(f"Unique audio hashes: {df['audio_bytes_hash'].nunique()}")
print(f"Voices: {df['voice'].unique()}")
```
### Find Duplicates

```python
# Check for duplicate audio
duplicates = df[df.duplicated(subset=['audio_bytes_hash'], keep=False)]
if len(duplicates) > 0:
    print(f"Found {len(duplicates)} duplicate rows")
    print(f"  Unique audio files: {df['audio_bytes_hash'].nunique()}")
    print(f"  Redundancy: {(1 - df['audio_bytes_hash'].nunique()/len(df))*100:.2f}%")
else:
    print("No duplicates found!")
```
### Search by Voice

```python
# Find all "ash" voice samples
ash_samples = df[df['voice'] == 'ash']
print(f"Ash voice samples: {len(ash_samples)}")

# Get the file containing the first ash sample
first_ash = ash_samples.iloc[0]
print(f"File: {first_ash['parquet_file_name']}")
print(f"Text: {first_ash['text']}")
```
### Search by Text

```python
# Find audio for specific text
query = "یہ ایک نمونہ"
matches = df[df['text'].str.contains(query, na=False)]
print(f"Found {len(matches)} matches")
```
### Retrieve Original Audio

```python
from datasets import load_dataset as load_original
import numpy as np
from scipy.io import wavfile
import io

def get_audio_by_hash(audio_hash, index_df):
    """Retrieve the original audio bytes using the hash."""
    # Find the row with this hash
    row = index_df[index_df['audio_bytes_hash'] == audio_hash].iloc[0]

    # Download only the specific parquet file containing this audio
    ds = load_original(
        "humair025/Munch",
        data_files=[row['parquet_file_name']],
        split="train"
    )

    # Find the matching row by ID
    for audio_row in ds:
        if audio_row['id'] == row['id']:
            return audio_row['audio_bytes']
    return None

# Example: get the audio for the first row
row = df.iloc[0]
audio_bytes = get_audio_by_hash(row['audio_bytes_hash'], df)

# Convert raw PCM16 bytes to WAV
def pcm16_to_wav(pcm_bytes, sample_rate=22050):
    audio_array = np.frombuffer(pcm_bytes, dtype=np.int16)
    wav_io = io.BytesIO()
    wavfile.write(wav_io, sample_rate, audio_array)
    wav_io.seek(0)
    return wav_io

wav_io = pcm16_to_wav(audio_bytes)
# In Jupyter: IPython.display.Audio(wav_io.read(), rate=22050)
```
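To keep a retrieved clip around, the in-memory WAV can also be written to disk. This is a minimal sketch that reuses `pcm16_to_wav` and `audio_bytes` from the example above and, like that example, assumes raw 16-bit PCM at 22050 Hz; adjust the sample rate if your audio differs.

```python
from pathlib import Path

def save_audio(pcm_bytes, out_path, sample_rate=22050):
    # Assumes raw 16-bit mono PCM; reuses pcm16_to_wav() defined above
    wav_io = pcm16_to_wav(pcm_bytes, sample_rate=sample_rate)
    Path(out_path).write_bytes(wav_io.read())

save_audio(audio_bytes, "sample.wav")
```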
## Dataset Structure

### Data Fields
| Field | Type | Description |
|---|---|---|
| `id` | int | Original paragraph ID from the source dataset |
| `parquet_file_name` | string | Source file in the Munch dataset |
| `text` | string | Original Urdu text |
| `transcript` | string | TTS transcript (may differ from input) |
| `voice` | string | Voice used (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan) |
| `audio_bytes_hash` | string | SHA-256 hash of `audio_bytes` (64 hex chars) |
| `audio_size_bytes` | int | Size of the original audio in bytes |
| `timestamp` | string | ISO timestamp of generation (nullable) |
| `error` | string | Error message if generation failed (nullable) |
### Example Row

```python
{
    'id': 42,
    'parquet_file_name': 'tts_data_20251203_130314_83ab0706.parquet',
    'text': 'یہ ایک نمونہ متن ہے۔',
    'transcript': 'یہ ایک نمونہ متن ہے۔',
    'voice': 'ash',
    'audio_bytes_hash': 'a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9',
    'audio_size_bytes': 52340,
    'timestamp': '2025-12-03T13:03:14.123456',
    'error': None
}
```
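The schema above can be sanity-checked directly from the index. The snippet below is not part of the dataset tooling; it is a small sketch that assumes only the fields documented here (a 64-character hexadecimal `audio_bytes_hash`, an integer `audio_size_bytes`, and a nullable `error`) and flags rows that deviate from that shape.

```python
# Flag rows whose hash is not a 64-character lowercase hex string
bad_hashes = df[~df['audio_bytes_hash'].str.fullmatch(r'[0-9a-f]{64}', na=False)]
print(f"Malformed hashes: {len(bad_hashes)}")

# Flag rows with a missing or non-positive audio size
bad_sizes = df[df['audio_size_bytes'].fillna(0) <= 0]
print(f"Suspicious sizes: {len(bad_sizes)}")

# Rows that recorded a generation error
print(f"Rows with errors: {df['error'].notna().sum()}")
```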
## Use Cases

### 1. Dataset Quality Analysis

```python
# Check for duplicates
unique_ratio = df['audio_bytes_hash'].nunique() / len(df)
print(f"Unique audio ratio: {unique_ratio*100:.2f}%")

# Analyze voice distribution
voice_dist = df['voice'].value_counts()
print(voice_dist)

# Find failed generations
failed = df[df['error'].notna()]
print(f"Failed generations: {len(failed)}")
```
### 2. Efficient Data Exploration

```python
# Browse the dataset without downloading audio
print(df[['id', 'text', 'voice', 'audio_size_bytes']].head(20))

# Filter by criteria
short_audio = df[df['audio_size_bytes'] < 30000]
long_text = df[df['text'].str.len() > 200]
```
### 3. Selective Download

```python
# Download only specific voices
ash_files = df[df['voice'] == 'ash']['parquet_file_name'].unique()
ds = load_dataset("humair025/Munch", data_files=list(ash_files))

# Download only short audio samples
small_files = df[df['audio_size_bytes'] < 40000]['parquet_file_name'].unique()
ds = load_dataset("humair025/Munch", data_files=list(small_files[:10]))
```
### 4. Deduplication Pipeline

```python
# Create a deduplicated subset
df_unique = df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')
print(f"Original: {len(df)} rows")
print(f"Unique: {len(df_unique)} rows")
print(f"Duplicates removed: {len(df) - len(df_unique)}")

# Save unique references
df_unique.to_parquet('unique_audio_index.parquet')
```
### 5. Hash-Based Duplicate Lookup

```python
# SHA-256 is not perceptual: only a full-hash match means identical audio.
# Matching hash prefixes are still a cheap way to bucket the index for sharded processing.
target_hash = df.iloc[0]['audio_bytes_hash']
exact_copies = df[df['audio_bytes_hash'] == target_hash]
print(f"Exact byte-for-byte copies: {len(exact_copies)}")

prefix = target_hash[:8]
bucket = df[df['audio_bytes_hash'].str.startswith(prefix)]
print(f"Rows in the same 8-character hash bucket: {len(bucket)}")
```
## Dataset Statistics

### Size Comparison
| Metric | Original Dataset | Hashed Index | Reduction |
|---|---|---|---|
| Total Size | 1.27 TB | ~1 GB | 99.92% |
| Records | 4,167,500 | 4,167,500 | Same |
| Files | ~8,300 parquet files | Consolidated | ~8,300× fewer |
| Download Time (100 Mbps) | ~28 hours | ~90 seconds | ~1,100× faster |
| Load Time | Minutes to hours | Seconds | ~100× faster |
| Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | Fits easily |
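The download-time column follows from simple arithmetic. As a quick sanity check (assuming a sustained 100 Mbps ≈ 12.5 MB/s and decimal TB/GB):

```python
# Rough transfer-time check at 100 Mbps (~12.5 MB/s), using decimal units
RATE_BYTES_PER_SEC = 12.5e6

print(f"Original dataset: ~{1.27e12 / RATE_BYTES_PER_SEC / 3600:.0f} hours")  # ~28 hours
print(f"Hashed index:     ~{1e9 / RATE_BYTES_PER_SEC:.0f} seconds")           # ~80 seconds
```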
### Content Statistics

```
Dataset Overview:
  Total Records:  4,167,500
  Total Files:    ~8,300 parquet files
  Voices:         13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
  Language:       Urdu (primary)
  Avg Audio Size: ~50-60 KB per sample
  Avg Duration:   ~3-5 seconds per sample
  Total Duration: ~3,500-5,800 hours of audio
```
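Most of these headline numbers can be spot-checked against the index itself. A quick sketch using the `df` from the Quick Start (note that durations cannot be derived from the index alone, since it stores byte sizes rather than audio lengths):

```python
# Spot-check the headline numbers against the index
print(f"Total records : {len(df):,}")
print(f"Parquet files : {df['parquet_file_name'].nunique():,}")
print(f"Voices        : {df['voice'].nunique()} -> {sorted(df['voice'].unique())}")
print(f"Avg audio size: {df['audio_size_bytes'].mean() / 1024:.1f} KB per sample")
```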
## Advanced Usage

### Batch Analysis

```python
from datasets import load_dataset
import pandas as pd

# Load the full hashed index
ds = load_dataset("humair025/hashed_data", split="train")
df = pd.DataFrame(ds)

# Group by voice
voice_stats = df.groupby('voice').agg({
    'id': 'count',
    'audio_size_bytes': 'mean',
    'audio_bytes_hash': 'nunique'
}).rename(columns={
    'id': 'total_samples',
    'audio_size_bytes': 'avg_size',
    'audio_bytes_hash': 'unique_audio'
})
print(voice_stats)
```
### Cross-Reference with Original

```python
import hashlib
from datasets import load_dataset

def verify_hash_exists(audio_hash, parquet_file):
    """Verify that a hash actually exists in the original dataset."""
    ds = load_dataset(
        "humair025/Munch",
        data_files=[parquet_file],
        split="train"
    )
    for row in ds:
        computed_hash = hashlib.sha256(row['audio_bytes']).hexdigest()
        if computed_hash == audio_hash:
            return True
    return False

# Verify the first entry
first_row = df.iloc[0]
exists = verify_hash_exists(
    first_row['audio_bytes_hash'],
    first_row['parquet_file_name']
)
print(f"Hash verified: {exists}")
```
### Export Unique Dataset

```python
# Create a new dataset with only unique audio
df_unique = df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')

# Get the list of parquet files needed
unique_files = df_unique['parquet_file_name'].unique()
print(f"Unique audio samples: {len(df_unique)}")
print(f"Files needed: {len(unique_files)} out of {df['parquet_file_name'].nunique()}")

# Calculate space savings
original_size = len(df) * df['audio_size_bytes'].mean()
unique_size = len(df_unique) * df_unique['audio_size_bytes'].mean()
savings = (1 - unique_size/original_size) * 100
print(f"Space savings: {savings:.2f}%")
```
## How This Index Was Created
This dataset was generated using an automated pipeline:
### Processing Pipeline
- Batch Download: Download 40 parquet files at a time from source
- Hash Computation: Compute SHA-256 for each audio_bytes field
- Metadata Extraction: Extract text, voice, and other metadata
- Save & Upload: Save hash file, upload to HuggingFace
- Clean Up: Delete local cache to save disk space
- Resume: Track processed files, skip already-processed
### Pipeline Features

- Resumable: a checkpoint system tracks progress
- Memory efficient: processes in batches and clears the cache
- Error tolerant: skips corrupted files and continues processing
- No duplicates: checks the target repo to avoid reprocessing
- Automatic upload: streams results to Hugging Face
### Technical Details

```python
# Hash computation
import hashlib
audio_hash = hashlib.sha256(audio_bytes).hexdigest()

# Batch size: 40 files per batch
# Processing time: ~4-6 hours for the full dataset
# Output: multiple hashed_*.parquet files
```
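For reference, a stripped-down version of such a hashing pass might look like the sketch below. It is illustrative only: the function name, output path, and the assumption that source rows expose the fields listed in the schema above are simplifications, and the real pipeline additionally handles checkpointing, error tolerance, and uploads to the Hub.

```python
import hashlib
import pandas as pd
from datasets import load_dataset

def hash_parquet_batch(parquet_files, out_path="hashed_batch.parquet"):
    """Hash the audio in a batch of source parquet files, keeping only metadata."""
    records = []
    for parquet_file in parquet_files:
        # Load one source file at a time to keep memory bounded
        ds = load_dataset("humair025/Munch", data_files=[parquet_file], split="train")
        for row in ds:
            records.append({
                "id": row["id"],
                "parquet_file_name": parquet_file,
                "text": row["text"],
                "transcript": row["transcript"],
                "voice": row["voice"],
                "audio_bytes_hash": hashlib.sha256(row["audio_bytes"]).hexdigest(),
                "audio_size_bytes": len(row["audio_bytes"]),
                "timestamp": row.get("timestamp"),
                "error": row.get("error"),
            })
    pd.DataFrame(records).to_parquet(out_path)
    return out_path
```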
## Performance Metrics

### Query Performance

```python
import time

# Load the index
start = time.time()
ds = load_dataset("humair025/hashed_data", split="train")
df = pd.DataFrame(ds)
print(f"Load time: {time.time() - start:.2f}s")

# Query by hash
start = time.time()
result = df[df['audio_bytes_hash'] == 'target_hash']
print(f"Hash lookup: {(time.time() - start)*1000:.2f}ms")

# Query by voice
start = time.time()
result = df[df['voice'] == 'ash']
print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
```
Expected Performance:
- Load full dataset: 10-30 seconds
- Hash lookup: < 10 milliseconds
- Voice filter: < 50 milliseconds
- Full dataset scan: < 5 seconds
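For repeated lookups, scanning a boolean mask over 4.17M rows each time adds up. A simple speed-up, sketched below under the assumption that `df` is the index loaded above, is to index the frame by hash once (or keep a plain Python set for membership tests):

```python
# One-time setup: index by hash for fast direct lookups
by_hash = df.set_index('audio_bytes_hash', drop=False)
hash_set = set(df['audio_bytes_hash'])

target = df.iloc[0]['audio_bytes_hash']

# Fast membership test without scanning the whole frame
print(target in hash_set)

# Direct row retrieval by hash (returns multiple rows if the hash is duplicated)
print(by_hash.loc[target, 'parquet_file_name'])
```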
## Integration with Original Dataset

### Workflow Example

```python
# 1. Query the index (fast)
df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))
target_rows = df[df['voice'] == 'ash'].head(100)

# 2. Get the unique parquet files
files_needed = target_rows['parquet_file_name'].unique()

# 3. Download only the needed files (selective)
from datasets import load_dataset
ds = load_dataset(
    "humair025/Munch",
    data_files=list(files_needed),
    split="train"
)

# 4. Match by ID to get the audio
for idx, row in target_rows.iterrows():
    for audio_row in ds:
        if audio_row['id'] == row['id']:
            # Process audio_bytes
            audio = audio_row['audio_bytes']
            break
```
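The nested loop in step 4 rescans the downloaded split once per target row; for larger selections, a single pass that maps IDs to positions is usually enough. A sketch under the same assumptions as the workflow above:

```python
# Build an id -> position map in one pass over the downloaded split
wanted_ids = set(target_rows['id'])
id_to_pos = {row_id: pos for pos, row_id in enumerate(ds['id']) if row_id in wanted_ids}

for _, row in target_rows.iterrows():
    pos = id_to_pos.get(row['id'])
    if pos is not None:
        audio = ds[pos]['audio_bytes']
        # ... process the audio bytes ...
```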
## Citation

If you use this dataset in your research, please cite both the original dataset and this index:

### BibTeX
```bibtex
@dataset{munch_hashed_index_2025,
  title={Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data}},
  note={Index of the humair025/Munch dataset with SHA-256 audio hashes}
}

@dataset{munch_urdu_tts_2025,
  title={Munch: Large-Scale Urdu Text-to-Speech Dataset},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/Munch}}
}
```
### APA Format
Munir, H. (2025). Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS
[Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data
Munir, H. (2025). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
Hugging Face. https://huggingface.co/datasets/humair025/Munch
### MLA Format
Munir, Humair. "Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS."
Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data.
Munir, Humair. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
https://huggingface.co/datasets/humair025/Munch.
## Contributing

### Report Issues
Found a problem? Please open an issue:
- Missing hash files
- Incorrect metadata
- Hash mismatches
- Documentation improvements
### Suggest Improvements
We welcome suggestions for:
- Additional metadata fields
- Better indexing strategies
- Integration examples
- Use case documentation
## License
This index dataset inherits the license from the original Munch dataset:
Creative Commons Attribution 4.0 International (CC-BY-4.0)
You are free to:

- Share: copy and redistribute the material
- Adapt: remix, transform, and build upon the material
- Commercial use: use the material commercially

Under the following terms:

- Attribution: give appropriate credit to the original dataset
## Important Links

- Original Audio Dataset - full 1.27 TB audio
- This Hashed Index - lightweight reference
- Munch-1 (v2) - newer version (3.28 TB)
- Munch-1 Index - index for v2
- Discussions - ask questions
- Report Issues - bug reports
## FAQ
**Q: Why use hashes instead of audio?**

A: Hashes provide a unique fingerprint for each audio file while taking only a 64-character hex string (32 bytes of hash data) versus ~50 KB per audio clip. This enables duplicate detection and fast queries without storing massive audio files.

**Q: Can I reconstruct audio from hashes?**

A: No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the Munch dataset using the file reference provided.

**Q: How accurate are the hashes?**

A: SHA-256 collisions are vanishingly unlikely in practice. If two hashes match, the audio is identical (byte-for-byte).

**Q: How do I get the actual audio?**

A: Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the original dataset. See the examples above.

**Q: Is this dataset complete?**

A: Yes, this index covers all 4,167,500 rows across all ~8,300 parquet files from the original Munch dataset.

**Q: What's the difference between this and the Munch-1 Index?**

A: This indexes the original Munch dataset (1.27 TB, 4.17M samples). The Munch-1 Index indexes the newer Munch-1 dataset (3.28 TB, 3.86M samples).

**Q: Can I contribute?**

A: Yes! Help verify hashes, report inconsistencies, or suggest improvements via the discussions tab.
## Acknowledgments
- Original Dataset: humair025/Munch
- TTS Generation: OpenAI-compatible models
- Voices: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- Infrastructure: HuggingFace Datasets platform
- Hashing: SHA-256 cryptographic hash function
## Version History

- v1.0.0 (December 2025): Initial release
  - Processed all ~8,300 parquet files
  - 4,167,500 audio samples indexed
  - SHA-256 hashes computed for all audio
  - ~99.92% space reduction achieved

Last Updated: December 2025
Status: Complete
**Pro tip:** Start with this lightweight index to explore the dataset, then selectively download only the audio you need from the original Munch dataset!