---
dataset_info:
- config_name: all
features:
- name: id
dtype: string
- name: source_idx
dtype: int32
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 243710748
num_examples: 1047690
- name: validation
num_bytes: 1433292
num_examples: 8405
- name: test
num_bytes: 11398927
num_examples: 62021
download_size: 160607039
dataset_size: 256542967
- config_name: apt
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 530903.1791243993
num_examples: 3723
- name: test
num_bytes: 182156.5678033307
num_examples: 1252
download_size: 240272
dataset_size: 713059.74692773
- config_name: mrpc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 903495.0
num_examples: 3668
- name: validation
num_bytes: 101391.0
num_examples: 408
- name: test
num_bytes: 423435.0
num_examples: 1725
download_size: 995440
dataset_size: 1428321.0
- config_name: parade
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 1708400.0
num_examples: 7550
- name: validation
num_bytes: 284794.0
num_examples: 1275
- name: test
num_bytes: 309763.0
num_examples: 1357
download_size: 769311
dataset_size: 2302957.0
- config_name: paws
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 150704304.0
num_examples: 645652
- name: test
num_bytes: 2332165.0
num_examples: 10000
download_size: 108607809
dataset_size: 153036469.0
- config_name: pit2015
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 1253905.0
num_examples: 13063
- name: validation
num_bytes: 429153.0
num_examples: 4727
- name: test
num_bytes: 87765.0
num_examples: 972
download_size: 595714
dataset_size: 1770823.0
- config_name: qqp
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 46898514.0
num_examples: 363846
- name: test
num_bytes: 5209024.0
num_examples: 40430
download_size: 34820387
dataset_size: 52107538.0
- config_name: sick
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 450269.0
num_examples: 4439
- name: validation
num_bytes: 51054.0
num_examples: 495
- name: test
num_bytes: 497312.0
num_examples: 4906
download_size: 346823
dataset_size: 998635.0
- config_name: stsb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 714548.0
num_examples: 5749
- name: validation
num_bytes: 205564.0
num_examples: 1500
- name: test
num_bytes: 160321.0
num_examples: 1379
download_size: 707092
dataset_size: 1080433.0
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- split: test
path: all/test-*
- config_name: apt
data_files:
- split: train
path: apt/train-*
- split: test
path: apt/test-*
- config_name: mrpc
data_files:
- split: train
path: mrpc/train-*
- split: validation
path: mrpc/validation-*
- split: test
path: mrpc/test-*
- config_name: parade
data_files:
- split: train
path: parade/train-*
- split: validation
path: parade/validation-*
- split: test
path: parade/test-*
- config_name: paws
data_files:
- split: train
path: paws/train-*
- split: test
path: paws/test-*
- config_name: pit2015
data_files:
- split: train
path: pit2015/train-*
- split: validation
path: pit2015/validation-*
- split: test
path: pit2015/test-*
- config_name: qqp
data_files:
- split: train
path: qqp/train-*
- split: test
path: qqp/test-*
- config_name: sick
data_files:
- split: train
path: sick/train-*
- split: validation
path: sick/validation-*
- split: test
path: sick/test-*
- config_name: stsb
data_files:
- split: train
path: stsb/train-*
- split: validation
path: stsb/validation-*
- split: test
path: stsb/test-*
task_categories:
- text-classification
- sentence-similarity
- text-ranking
- text-retrieval
tags:
- english
- sentence-similarity
- sentence-pair-classification
- semantic-retrieval
- re-ranking
- information-retrieval
- embedding-training
- semantic-search
- paraphrase-detection
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
pretty_name: Redis LangCache SentencePairs v1
---
# Redis LangCache Sentence Pairs Dataset
<!-- Provide a quick summary of the dataset. -->
A large, consolidated collection of English sentence pairs for training and evaluating semantic similarity, retrieval, and re-ranking models.
It merges widely used benchmarks into a single schema with consistent fields and ready-made splits.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Name:** langcache-sentencepairs-v1
- **Summary:** Sentence-pair dataset created to fine-tune encoder-based embedding and re-ranking models. It combines multiple high-quality corpora spanning diverse styles (short questions, long paraphrases, Twitter, adversarial pairs, technical queries, news headlines, etc.), with both positive and negative examples and preserved splits.
- **Curated by:** Redis
- **Shared by:** Aditeya Baral
- **Language(s):** English
- **License:** Apache-2.0
- **Homepage / Repository:** https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1
**Configs and coverage**
- **`all`**: Unified view over all sources with extra metadata columns (`id`, `source`, `source_idx`).
- **Source-specific configs:** `apt`, `mrpc`, `parade`, `paws`, `pit2015`, `qqp`, `sick`, `stsb`.
**Size & splits (overall)**
Total **~1.12M** pairs: **~1.05M train**, **8.4k validation**, **62k test**. See per-config sizes in the viewer.
### Dataset Sources
- **APT (Adversarial Paraphrasing Task)** – [Paper](https://aclanthology.org/2021.acl-long.552/) | [Dataset](https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt)
- **MRPC (Microsoft Research Paraphrase Corpus)** – [Paper](https://aclanthology.org/I05-5002.pdf) | [Dataset](https://huggingface.co/datasets/glue/viewer/mrpc)
- **PARADE (Paraphrase Identification requiring Domain Knowledge)** – [Paper](https://aclanthology.org/2020.emnlp-main.611/) | [Dataset](https://github.com/heyunh2015/PARADE_dataset)
- **PAWS (Paraphrase Adversaries from Word Scrambling)** – [Paper](https://arxiv.org/abs/1904.01130) | [Dataset](https://huggingface.co/datasets/paws)
- **PIT2015 (SemEval 2015 Twitter Paraphrase)** – [Website](https://alt.qcri.org/semeval2015/task1/) | [Dataset](https://github.com/cocoxu/SemEval-PIT2015)
- **QQP (Quora Question Pairs)** – [Website](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) | [Dataset](https://huggingface.co/datasets/glue/viewer/qqp)
- **SICK (Sentences Involving Compositional Knowledge)** – [Website](http://marcobaroni.org/composes/sick.html) | [Dataset](https://zenodo.org/records/2787612)
- **STS-B (Semantic Textual Similarity Benchmark)** – [Website](https://alt.qcri.org/semeval2017/task1/) | [Dataset](https://huggingface.co/datasets/nyu-mll/glue/viewer/stsb)
## Uses
- Train/fine-tune sentence encoders for **semantic retrieval** and **re-ranking**.
- Supervised **sentence-pair classification** tasks like paraphrase detection.
- Evaluation of **semantic similarity** and building general-purpose retrieval and ranking systems.
### Direct Use
```python
from datasets import load_dataset
# Unified corpus
ds = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "all")
# A single source, e.g., PAWS
paws = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "paws")
# Columns: sentence1, sentence2, label (plus id, source, source_idx in 'all')
```
### Out-of-Scope Use
- **Non-English or multilingual modeling:** The dataset is entirely in English and will not perform well for training or evaluating multilingual models.
- **Uncalibrated similarity regression:** The STS-B portion has been integerized in this release, so it should not be used for fine-grained regression tasks requiring the original continuous similarity scores.
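If you do need binary labels derived from the original continuous STS-B scores, a common heuristic is to threshold the 0–5 similarity score. A minimal sketch; the threshold below is illustrative, since this card does not document the exact integerization rule used for this release:

```python
def binarize_similarity(score: float, threshold: float = 4.0) -> int:
    """Map a continuous STS-B similarity score (0-5) to a binary label.

    The threshold is an assumption for illustration; this release does
    not state the exact rule used, so verify against your use case.
    """
    if not 0.0 <= score <= 5.0:
        raise ValueError(f"STS-B scores lie in [0, 5], got {score}")
    return int(score >= threshold)

print(binarize_similarity(4.6))  # 1 (similar)
print(binarize_similarity(2.1))  # 0 (dissimilar)
```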
## Dataset Structure
**Fields**
* `sentence1` *(string)* – First sentence.
* `sentence2` *(string)* – Second sentence.
* `label` *(int8)* – Task label. `1` = paraphrase/similar, `0` = non-paraphrase/dissimilar. For sources with continuous similarity (e.g., STS-B), labels are integerized in this release; consult the source subset if you need the original continuous scores.
* *(config `all` only)*:
  * `id` *(string)* – Dataset identifier, following the pattern `langcache_{split}_{row number}`.
  * `source` *(string)* – Source dataset name.
  * `source_idx` *(int32)* – Row index within the source dataset.
**Splits**
* `train`, `validation` (where available), `test` – original dataset splits, preserved whenever provided by the source.
**Schemas by config**
* `all`: 6 columns (`id`, `source_idx`, `source`, `sentence1`, `sentence2`, `label`).
* All other configs: 3 columns (`sentence1`, `sentence2`, `label`).
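Because the `all` config carries `source` and `source_idx`, per-source bookkeeping is straightforward. A minimal sketch using in-memory rows that mimic the `all` schema (hypothetical values standing in for the real dataset):

```python
from collections import Counter

# Hypothetical rows following the `all` config schema
rows = [
    {"id": "langcache_train_0", "source_idx": 0, "source": "mrpc",
     "sentence1": "A man is playing a guitar.",
     "sentence2": "A person plays guitar.", "label": 1},
    {"id": "langcache_train_1", "source_idx": 0, "source": "qqp",
     "sentence1": "How do I learn Python?",
     "sentence2": "What is the capital of France?", "label": 0},
    {"id": "langcache_train_2", "source_idx": 1, "source": "qqp",
     "sentence1": "Is tea healthy?",
     "sentence2": "Does tea have health benefits?", "label": 1},
]

# Count how many examples each source dataset contributes
per_source = Counter(row["source"] for row in rows)
print(per_source)  # Counter({'qqp': 2, 'mrpc': 1})
```

The same pattern applies to the real data, since each row returned by `load_dataset(...)` is a dict with these keys.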
## Dataset Creation
### Curation Rationale
To fine-tune stronger encoder models for retrieval and re-ranking, we curated a large, diverse pool of labeled sentence pairs (positives & negatives) covering multiple real-world styles and domains.
Consolidating canonical benchmarks into a single schema reduces engineering overhead and encourages generalization beyond any single dataset.
### Source Data
#### Data Collection and Processing
* Ingested each selected dataset and **preserved original splits** when available.
* Normalized to a common schema; no manual relabeling was performed.
* Merged into `all` with added `source` and `source_idx` for traceability.
#### Who are the source data producers?
Original creators of the upstream datasets (e.g., Microsoft Research for MRPC, Quora for QQP, Google Research for PAWS, etc.).
#### Personal and Sensitive Information
The corpus may include public-text sentences that mention people, organizations, or places (e.g., news, Wikipedia, tweets). It is **not** intended for identifying or inferring sensitive attributes of individuals. If you require strict PII controls, filter or exclude sources accordingly before downstream use.
## Bias, Risks, and Limitations
* **Label noise:** Some sources include **noisily labeled** pairs (e.g., PAWS large weakly-labeled set).
* **Granularity mismatch:** STS-B's continuous similarity is represented as integers here; treat with care if you need fine-grained scoring.
* **English-only:** Not suitable for multilingual evaluation without adaptation.
### Recommendations
- Use the `all` configuration for large-scale training, but be aware that some datasets dominate in size (e.g., PAWS, QQP). Apply **sampling or weighting** if you want balanced learning across domains.
- Treat **STS-B labels** with caution: they are integerized in this release. For regression-style similarity scoring, use the original STS-B dataset.
- This dataset is **best suited for training retrieval and re-ranking models**. Avoid re-purposing it for unrelated tasks (e.g., user profiling, sensitive attribute prediction, or multilingual training).
- Track the `source` field (in the `all` config) during training to analyze how performance varies by dataset type, which can guide fine-tuning or domain adaptation.
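The sampling/weighting recommendation above can be sketched as simple per-source downsampling: group rows by `source`, then draw the same number of examples from each group. A minimal sketch in plain Python (the function name and seed are ours; with the real data, pass the rows of the `all` config):

```python
import random
from collections import defaultdict

def balanced_sample(rows, seed=42):
    """Downsample so every source contributes equally.

    `rows` is an iterable of dicts with a `source` key, as in the
    `all` config. Draws min-source-count examples from each source
    and returns them shuffled.
    """
    by_source = defaultdict(list)
    for row in rows:
        by_source[row["source"]].append(row)
    n = min(len(group) for group in by_source.values())
    rng = random.Random(seed)
    sampled = []
    for group in by_source.values():
        sampled.extend(rng.sample(group, n))
    rng.shuffle(sampled)
    return sampled

# Toy imbalance: 100 PAWS rows vs. 10 SICK rows
rows = [{"source": "paws"}] * 100 + [{"source": "sick"}] * 10
balanced = balanced_sample(rows)
print(len(balanced))  # 20 -- 10 from each source
```

For large-scale streaming training, `datasets.interleave_datasets` with per-source probabilities achieves a similar effect without materializing the sample in memory.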
## Citation
If you use this dataset, please cite the Hugging Face entry and the original upstream datasets you rely on.
**BibTeX:**
```bibtex
@misc{langcache_sentencepairs_v1_2025,
title = {langcache-sentencepairs-v1},
author = {Baral, Aditeya and Redis},
howpublished = {\url{https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1}},
year = {2025},
note = {Version 1}
}
```
## Dataset Card Authors
Aditeya Baral
## Dataset Card Contact
[[email protected]](mailto:[email protected]) |