Formats: parquet
Languages: English
Size: 1M–10M
Tags: english, sentence-similarity, sentence-pair-classification, semantic-retrieval, re-ranking, information-retrieval
License: apache-2.0

Commit: Update README.md
README.md CHANGED
size_categories:
- 1M<n<10M
license: apache-2.0
---

# Redis LangCache Sentence Pairs Dataset
<!-- Provide a quick summary of the dataset. -->

A large, consolidated collection of English sentence pairs for training and evaluating semantic similarity, retrieval, and re-ranking models.
It merges widely used benchmarks into a single schema with consistent fields and ready-made splits.

## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->

- **Name:** langcache-sentencepairs-v1
- **Summary:** Sentence-pair dataset created to fine-tune encoder-based embedding and re-ranking models. It combines multiple high-quality corpora spanning diverse styles (short questions, long paraphrases, Twitter, adversarial pairs, technical queries, news headlines, etc.), with both positive and negative examples and preserved splits.
- **Curated by:** Redis
- **Shared by:** Aditeya Baral
- **Language(s):** English
- **License:** Apache-2.0
- **Homepage / Repository:** https://huggingface.co/datasets/redis/langcache-sentencepairs-v1

**Configs and coverage**

- **`all`**: Unified view over all sources with extra metadata columns (`source`, `source_idx`).
- **Source-specific configs:** `apt`, `mrpc`, `parade`, `paws`, `pit2015`, `qqp`, `sick`, `stsb`.

**Size & splits (overall)**

Total **~1.12M** pairs: **~1.05M train**, **~8.4k validation**, **~62k test**. See per-config sizes in the dataset viewer.
### Dataset Sources

- **APT (Adversarial Paraphrasing Task)** – [Paper](https://arxiv.org/abs/1907.05774) | [Dataset repo](https://github.com/jfan001/apt)
- **MRPC (Microsoft Research Paraphrase Corpus)** – [Dataset page](https://www.microsoft.com/en-us/download/details.aspx?id=52398) | [GLUE version](https://huggingface.co/datasets/glue/viewer/mrpc)
- **PARADE (Paraphrase Identification requiring Domain Knowledge)** – [Paper](https://arxiv.org/abs/2005.13888) | [Dataset repo](https://github.com/vgtomahawk/parade)
- **PAWS (Paraphrase Adversaries from Word Scrambling)** – [Paper](https://arxiv.org/abs/1904.01130) | [Dataset on Hugging Face](https://huggingface.co/datasets/paws)
- **PIT2015 (SemEval-2015 Twitter Paraphrase)** – [Task page](https://alt.qcri.org/semeval2015/task1/) | [Dataset on Hugging Face](https://huggingface.co/datasets/pit2015)
- **QQP (Quora Question Pairs)** – [Dataset release page](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) | [Dataset on Hugging Face](https://huggingface.co/datasets/qqp)
- **SICK (Sentences Involving Compositional Knowledge)** – [Dataset page](http://marcobaroni.org/composes/sick.html) | [Dataset on Hugging Face](https://huggingface.co/datasets/sick)
- **STS-B (Semantic Textual Similarity Benchmark)** – [SemEval-2017 task page](http://alt.qcri.org/semeval2017/task1/) | [Dataset on Hugging Face](https://huggingface.co/datasets/stsb)
## Uses

- Train or fine-tune sentence encoders for **semantic retrieval** and **re-ranking**.
- Supervised **sentence-pair classification** tasks such as paraphrase detection.
- Evaluating **semantic similarity** and building general-purpose retrieval and ranking systems.
### Direct Use

```python
from datasets import load_dataset

# Unified corpus
ds = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "all")

# A single source, e.g., PAWS
paws = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "paws")

# Columns: sentence1, sentence2, label (+ source, source_idx in 'all')
```
### Out-of-Scope Use

- **Non-English or multilingual modeling:** The dataset is entirely in English and will not work well for training or evaluating multilingual models.
- **Uncalibrated similarity regression:** The STS-B portion has been integerized in this release, so it should not be used for fine-grained regression tasks that require the original continuous similarity scores.
## Dataset Structure

**Fields**

* `sentence1` *(string)* – First sentence.
* `sentence2` *(string)* – Second sentence.
* `label` *(int64)* – Task label: `1` → paraphrase/similar, `0` → non-paraphrase/dissimilar. For sources with continuous similarity (e.g., STS-B), labels are integerized in this release; consult the source dataset if you need the original continuous scores.
* *(config `all` only)*:
  * `source` *(string)* – Upstream dataset identifier.
  * `source_idx` *(int64)* – Source-local row id.

**Splits**

* `train`, `validation` (where available), `test` – original splits preserved whenever provided by the source.

**Schemas by config**

* `all`: 5 columns (`sentence1`, `sentence2`, `label`, `source`, `source_idx`).
* All other configs: 3 columns (`sentence1`, `sentence2`, `label`).
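The two schemas above can be checked mechanically. Below is a minimal pure-Python sketch (not part of the release; the example rows are invented) that validates a row against the documented column layout:

```python
# Sketch: validate rows against the documented LangCache schema.
# The example rows below are invented for illustration only.

BASE_COLUMNS = {"sentence1": str, "sentence2": str, "label": int}
ALL_EXTRA = {"source": str, "source_idx": int}

def expected_schema(config: str) -> dict:
    """Return the column->type mapping: 'all' has 5 columns, others have 3."""
    schema = dict(BASE_COLUMNS)
    if config == "all":
        schema.update(ALL_EXTRA)
    return schema

def validate_row(row: dict, config: str) -> bool:
    """True if the row has exactly the expected columns with the expected types."""
    schema = expected_schema(config)
    return set(row) == set(schema) and all(
        isinstance(row[col], typ) for col, typ in schema.items()
    )

row_all = {"sentence1": "A man is cooking.", "sentence2": "Someone cooks.",
           "label": 1, "source": "stsb", "source_idx": 0}
row_paws = {"sentence1": "A man is cooking.", "sentence2": "Someone cooks.", "label": 1}

print(validate_row(row_all, "all"))    # True
print(validate_row(row_paws, "paws"))  # True
print(validate_row(row_paws, "all"))   # False: missing source columns
```

A check like this is useful when mixing the `all` config with source-specific configs in one training pipeline, since the extra metadata columns must be dropped or added consistently.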
## Dataset Creation

### Curation Rationale

To fine-tune stronger encoder models for retrieval and re-ranking, we curated a large, diverse pool of labeled sentence pairs (positives and negatives) covering multiple real-world styles and domains.
Consolidating canonical benchmarks into a single schema reduces engineering overhead and encourages generalization beyond any single dataset.

### Source Data

#### Data Collection and Processing

* Ingested each selected dataset and **preserved original splits** when available.
* Normalized to a common schema; no manual relabeling was performed.
* Merged into `all` with added `source` and `source_idx` columns for traceability.

#### Who are the source data producers?

The original creators of the upstream datasets (e.g., Microsoft Research for MRPC, Quora for QQP, Google Research for PAWS).
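The merge step above can be sketched in plain Python. This is illustrative only (the actual build scripts are not published in this card); it shows how each row is tagged with its upstream dataset name and a source-local index:

```python
# Illustrative sketch of the merge into the 'all' config:
# tag each row with its upstream dataset name and source-local index.

def merge_sources(sources: dict) -> list:
    """sources maps a dataset name to a list of {'sentence1','sentence2','label'} rows."""
    merged = []
    for name, rows in sources.items():
        for idx, row in enumerate(rows):
            merged.append({**row, "source": name, "source_idx": idx})
    return merged

# Toy inputs standing in for the real corpora
toy = {
    "mrpc": [{"sentence1": "a", "sentence2": "b", "label": 1}],
    "qqp":  [{"sentence1": "c", "sentence2": "d", "label": 0},
             {"sentence1": "e", "sentence2": "f", "label": 1}],
}
merged = merge_sources(toy)
print(len(merged))                                    # 3
print(merged[2]["source"], merged[2]["source_idx"])   # qqp 1
```

Because `source_idx` is local to each upstream dataset, the pair (`source`, `source_idx`) uniquely identifies a row and lets you trace any merged example back to its origin.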
#### Personal and Sensitive Information

The corpus may include public-text sentences that mention people, organizations, or places (e.g., news, Wikipedia, tweets). It is **not** intended for identifying or inferring sensitive attributes of individuals. If you require strict PII controls, filter or exclude sources accordingly before downstream use.
## Bias, Risks, and Limitations

* **Label noise:** Some sources include **noisily labeled** pairs (e.g., the large weakly labeled PAWS set).
* **Granularity mismatch:** STS-B's continuous similarity is represented as integers here; treat it with care if you need fine-grained scoring.
* **English-only:** Not suitable for multilingual evaluation without adaptation.
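The granularity mismatch can be made concrete. The card does not document the exact integerization rule applied to STS-B, but a common recipe binarizes the 0–5 similarity scores at a threshold; the sketch below uses a hypothetical threshold of 3.0 purely for illustration:

```python
# Illustrative only: the card does not specify the actual integerization rule.
# A common recipe binarizes STS-B's 0-5 similarity scores at a threshold.

def binarize_sts(score: float, threshold: float = 3.0) -> int:
    """Map a continuous 0-5 similarity score to a 0/1 label (assumed recipe)."""
    if not 0.0 <= score <= 5.0:
        raise ValueError("STS-B scores lie in [0, 5]")
    return int(score >= threshold)

print([binarize_sts(s) for s in (0.8, 2.9, 3.0, 4.6)])  # [0, 0, 1, 1]
```

Whatever the actual rule, any such mapping collapses near-threshold pairs (e.g., 2.9 vs. 3.0) into opposite classes, which is why regression-style use should fall back to the original STS-B scores.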
372 |
|
373 |
### Recommendations
|
374 |
|
375 |
+
- Use the `all` configuration for large-scale training, but be aware that some datasets dominate in size (e.g., PAWS, QQP). Apply **sampling or weighting** if you want balanced learning across domains.
|
376 |
+
- Treat **STS-B labels** with caution: they are integerized in this release. For regression-style similarity scoring, use the original STS-B dataset.
|
377 |
+
- This dataset is **best suited for training retrieval and re-ranking models**. Avoid re-purposing it for unrelated tasks (e.g., user profiling, sensitive attribute prediction, or multilingual training).
|
378 |
+
- Track the `source` field (in the `all` config) during training to analyze how performance varies by dataset type, which can guide fine-tuning or domain adaptation.
|
379 |
## Citation

If you use this dataset, please cite the Hugging Face entry and the original upstream datasets you rely on.

**BibTeX:**

```bibtex
@misc{langcache_sentencepairs_v1_2025,
  title        = {langcache-sentencepairs-v1},
  author       = {Baral, Aditeya and Redis},
  howpublished = {\url{https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1}},
  year         = {2025},
  note         = {Version 1}
}
```
## Dataset Card Authors

Aditeya Baral

## Dataset Card Contact

[[email protected]](mailto:[email protected])