---
dataset_info:
- config_name: all
features:
- name: id
dtype: string
- name: source_idx
dtype: int32
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 243710748
num_examples: 1047690
- name: validation
num_bytes: 1433292
num_examples: 8405
- name: test
num_bytes: 11398927
num_examples: 62021
download_size: 160607039
dataset_size: 256542967
- config_name: apt
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 530903
num_examples: 3723
- name: test
num_bytes: 182157
num_examples: 1252
download_size: 240272
dataset_size: 713060
- config_name: mrpc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 903495
num_examples: 3668
- name: validation
num_bytes: 101391
num_examples: 408
- name: test
num_bytes: 423435
num_examples: 1725
download_size: 995440
dataset_size: 1428321
- config_name: parade
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 1708400
num_examples: 7550
- name: validation
num_bytes: 284794
num_examples: 1275
- name: test
num_bytes: 309763
num_examples: 1357
download_size: 769311
dataset_size: 2302957
- config_name: paws
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 150704304
num_examples: 645652
- name: test
num_bytes: 2332165
num_examples: 10000
download_size: 108607809
dataset_size: 153036469
- config_name: pit2015
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 1253905
num_examples: 13063
- name: validation
num_bytes: 429153
num_examples: 4727
- name: test
num_bytes: 87765
num_examples: 972
download_size: 595714
dataset_size: 1770823
- config_name: qqp
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 46898514
num_examples: 363846
- name: test
num_bytes: 5209024
num_examples: 40430
download_size: 34820387
dataset_size: 52107538
- config_name: sick
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 450269
num_examples: 4439
- name: validation
num_bytes: 51054
num_examples: 495
- name: test
num_bytes: 497312
num_examples: 4906
download_size: 346823
dataset_size: 998635
- config_name: stsb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 714548
num_examples: 5749
- name: validation
num_bytes: 205564
num_examples: 1500
- name: test
num_bytes: 160321
num_examples: 1379
download_size: 707092
dataset_size: 1080433
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- split: test
path: all/test-*
- config_name: apt
data_files:
- split: train
path: apt/train-*
- split: test
path: apt/test-*
- config_name: mrpc
data_files:
- split: train
path: mrpc/train-*
- split: validation
path: mrpc/validation-*
- split: test
path: mrpc/test-*
- config_name: parade
data_files:
- split: train
path: parade/train-*
- split: validation
path: parade/validation-*
- split: test
path: parade/test-*
- config_name: paws
data_files:
- split: train
path: paws/train-*
- split: test
path: paws/test-*
- config_name: pit2015
data_files:
- split: train
path: pit2015/train-*
- split: validation
path: pit2015/validation-*
- split: test
path: pit2015/test-*
- config_name: qqp
data_files:
- split: train
path: qqp/train-*
- split: test
path: qqp/test-*
- config_name: sick
data_files:
- split: train
path: sick/train-*
- split: validation
path: sick/validation-*
- split: test
path: sick/test-*
- config_name: stsb
data_files:
- split: train
path: stsb/train-*
- split: validation
path: stsb/validation-*
- split: test
path: stsb/test-*
task_categories:
- text-classification
- sentence-similarity
- text-ranking
- text-retrieval
tags:
- english
- sentence-similarity
- sentence-pair-classification
- semantic-retrieval
- re-ranking
- information-retrieval
- embedding-training
- semantic-search
- paraphrase-detection
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
pretty_name: Redis LangCache SentencePairs v1
---

# Redis LangCache Sentence Pairs Dataset

A large, consolidated collection of English sentence pairs for training and evaluating semantic similarity, retrieval, and re-ranking models. It merges widely used benchmarks into a single schema with consistent fields and ready-made splits.
## Dataset Details

### Dataset Description

- Name: langcache-sentencepairs-v1
- Summary: Sentence-pair dataset created to fine-tune encoder-based embedding and re-ranking models. It combines multiple high-quality corpora spanning diverse styles (short questions, long paraphrases, tweets, adversarial pairs, technical queries, news headlines, etc.), with both positive and negative examples and preserved splits.
- Curated by: Redis
- Shared by: Aditeya Baral
- Language(s): English
- License: Apache-2.0
- Homepage / Repository: https://huggingface.co/datasets/redis/langcache-sentencepairs-v1
### Configs and coverage

- `all`: Unified view over all sources with extra metadata columns (`id`, `source`, `source_idx`).
- Source-specific configs: `apt`, `mrpc`, `parade`, `paws`, `pit2015`, `qqp`, `sick`, `stsb`.
### Size & splits (overall)

Total ~1.12M pairs: ~1.05M train, ~8.4k validation, ~62k test. See per-config sizes in the viewer.
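The overall totals can be cross-checked against the per-split example counts declared in the metadata above; a quick sketch:

```python
# Per-split example counts from the dataset metadata for the 'all' config.
splits = {"train": 1_047_690, "validation": 8_405, "test": 62_021}

total = sum(splits.values())
print(total)                  # 1118116, i.e. ~1.12M pairs
print(round(total / 1e6, 2))  # 1.12
```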
### Dataset Sources

- APT (Adversarial Paraphrasing Task): Paper | Dataset
- MRPC (Microsoft Research Paraphrase Corpus): Paper | Dataset
- PARADE (Paraphrase Identification requiring Domain Knowledge): Paper | Dataset
- PAWS (Paraphrase Adversaries from Word Scrambling): Paper | Dataset
- PIT2015 (SemEval 2015 Twitter Paraphrase): Website | Dataset
- QQP (Quora Question Pairs): Website | Dataset
- SICK (Sentences Involving Compositional Knowledge): Website | Dataset
- STS-B (Semantic Textual Similarity Benchmark): Website | Dataset
## Uses

- Train or fine-tune sentence encoders for semantic retrieval and re-ranking.
- Supervised sentence-pair classification tasks such as paraphrase detection.
- Evaluate semantic similarity and build general-purpose retrieval and ranking systems.
### Direct Use

```python
from datasets import load_dataset

# Unified corpus
ds = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "all")

# A single source, e.g., PAWS
paws = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "paws")

# Columns: sentence1, sentence2, label (+ id, source, source_idx in 'all')
```
### Out-of-Scope Use

- Non-English or multilingual modeling: the dataset is entirely in English and will not perform well for training or evaluating multilingual models.
- Uncalibrated similarity regression: the STS-B portion has been integerized in this release, so it should not be used for fine-grained regression tasks requiring the original continuous similarity scores.
## Dataset Structure

### Fields

- `sentence1` (string): First sentence.
- `sentence2` (string): Second sentence.
- `label` (int8): Task label. `1` = paraphrase/similar, `0` = non-paraphrase/dissimilar. For sources with continuous similarity (e.g., STS-B), labels are integerized in this release; consult the source subset if you need the original continuous scores.

Additional columns in the `all` config only:

- `id` (string): Dataset identifier, following the pattern `langcache_{split}_{row number}`.
- `source` (string): Source dataset name.
- `source_idx` (int32): Source-local row id.
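As an illustration of the unified schema, the helper below (hypothetical, not part of the dataset tooling) builds a row in the shape of the `all` config, with `id` constructed from the documented `langcache_{split}_{row number}` pattern:

```python
def make_row(split: str, row_number: int, source: str, source_idx: int,
             sentence1: str, sentence2: str, label: int) -> dict:
    # Hypothetical helper mirroring the 'all' schema; the id follows the
    # documented pattern langcache_{split}_{row number}.
    assert label in (0, 1)  # int8 label: 1 = similar, 0 = dissimilar
    return {
        "id": f"langcache_{split}_{row_number}",
        "source": source,
        "source_idx": source_idx,
        "sentence1": sentence1,
        "sentence2": sentence2,
        "label": label,
    }

row = make_row("train", 0, "mrpc", 0,
               "A man is playing a guitar.",
               "A person plays a guitar.", 1)
print(row["id"])  # langcache_train_0
```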
### Splits

`train`, `validation` (where available), and `test`: original dataset splits are preserved whenever provided by the source.
### Schemas by config

- `all`: 6 columns (`id`, `source_idx`, `source`, `sentence1`, `sentence2`, `label`).
- All other configs: 3 columns (`sentence1`, `sentence2`, `label`).
## Dataset Creation

### Curation Rationale

To fine-tune stronger encoder models for retrieval and re-ranking, we curated a large, diverse pool of labeled sentence pairs (positives and negatives) covering multiple real-world styles and domains. Consolidating canonical benchmarks into a single schema reduces engineering overhead and encourages generalization beyond any single dataset.
### Source Data

#### Data Collection and Processing

- Ingested each selected dataset and preserved original splits when available.
- Normalized to a common schema; no manual relabeling was performed.
- Merged into `all` with added `source` and `source_idx` columns for traceability.
#### Who are the source data producers?

The original creators of the upstream datasets (e.g., Microsoft Research for MRPC, Quora for QQP, Google Research for PAWS).
### Personal and Sensitive Information

The corpus may include public-text sentences that mention people, organizations, or places (e.g., news, Wikipedia, tweets). It is not intended for identifying or inferring sensitive attributes of individuals. If you require strict PII controls, filter or exclude sources accordingly before downstream use.
## Bias, Risks, and Limitations

- Label noise: some sources include noisily labeled pairs (e.g., the large weakly labeled PAWS set).
- Granularity mismatch: STS-B's continuous similarity is represented as integers here; treat with care if you need fine-grained scoring.
- English-only: not suitable for multilingual evaluation without adaptation.
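The exact rule used to integerize STS-B's continuous 0–5 scores is not documented here. One plausible convention, shown purely as an assumption (both the function and the 2.5 cutoff are illustrative, not the dataset's actual recipe), is a simple threshold:

```python
def binarize_sts(score: float, threshold: float = 2.5) -> int:
    # Assumed convention only: the actual cutoff used in this release is
    # not documented. Maps a continuous 0-5 STS score to a 0/1 label.
    return 1 if score >= threshold else 0

print(binarize_sts(4.2))  # 1 (similar)
print(binarize_sts(1.0))  # 0 (dissimilar)
```

Whatever the actual mapping, it discards granularity, which is why regression-style use of the STS-B portion is out of scope.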
### Recommendations

- Use the `all` configuration for large-scale training, but be aware that some datasets dominate in size (e.g., PAWS, QQP). Apply sampling or weighting if you want balanced learning across domains.
- Treat STS-B labels with caution: they are integerized in this release. For regression-style similarity scoring, use the original STS-B dataset.
- This dataset is best suited for training retrieval and re-ranking models. Avoid re-purposing it for unrelated tasks (e.g., user profiling, sensitive-attribute prediction, or multilingual training).
- Track the `source` field (in the `all` config) during training to analyze how performance varies by dataset type, which can guide fine-tuning or domain adaptation.
## Citation

If you use this dataset, please cite the Hugging Face entry and the original upstream datasets you rely on.

BibTeX:

```bibtex
@misc{langcache_sentencepairs_v1_2025,
  title        = {langcache-sentencepairs-v1},
  author       = {Baral, Aditeya and Redis},
  howpublished = {\url{https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1}},
  year         = {2025},
  note         = {Version 1}
}
```
## Dataset Card Authors

Aditeya Baral