aditeyabaral-redis committed on
Commit 85f9f46 · verified · 1 Parent(s): fb931d6

Update README.md

Files changed (1):
  1. README.md +84 -73
README.md CHANGED
size_categories:
  - 1M<n<10M
license: apache-2.0
---
# Redis LangCache Sentence Pairs Dataset

<!-- Provide a quick summary of the dataset. -->

A large, consolidated collection of English sentence pairs for training and evaluating semantic similarity, retrieval, and re-ranking models. It merges widely used benchmarks into a single schema with consistent fields and ready-made splits.

## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->

- **Name:** langcache-sentencepairs-v1
- **Summary:** A sentence-pair dataset created to fine-tune encoder-based embedding and re-ranking models. It combines multiple high-quality corpora spanning diverse styles (short questions, long paraphrases, tweets, adversarial pairs, technical queries, news headlines, etc.), with both positive and negative examples and preserved splits.
- **Curated by:** Redis
- **Shared by:** Aditeya Baral
- **Language(s):** English
- **License:** Apache-2.0
- **Homepage / Repository:** https://huggingface.co/datasets/redis/langcache-sentencepairs-v1

**Configs and coverage**

- **`all`:** A unified view over all sources with extra metadata columns (`source`, `source_idx`).
- **Source-specific configs:** `apt`, `mrpc`, `parade`, `paws`, `pit2015`, `qqp`, `sick`, `stsb`.

**Size & splits (overall)**

Roughly **1.12M** pairs in total: about **1.05M** train, **8.4k** validation, and **62k** test. See per-config sizes in the dataset viewer.
### Dataset Sources

- **APT (Adversarial Paraphrasing Task):** [Paper](https://arxiv.org/abs/1907.05774) | [Dataset repo](https://github.com/jfan001/apt)
- **MRPC (Microsoft Research Paraphrase Corpus):** [Dataset page](https://www.microsoft.com/en-us/download/details.aspx?id=52398) | [GLUE version](https://huggingface.co/datasets/glue/viewer/mrpc)
- **PARADE (Paraphrase Identification requiring Domain Knowledge):** [Paper](https://arxiv.org/abs/2005.13888) | [Dataset repo](https://github.com/vgtomahawk/parade)
- **PAWS (Paraphrase Adversaries from Word Scrambling):** [Paper](https://arxiv.org/abs/1904.01130) | [Dataset on Hugging Face](https://huggingface.co/datasets/paws)
- **PIT2015 (SemEval-2015 Twitter Paraphrase Task):** [Task page](https://alt.qcri.org/semeval2015/task1/) | [Dataset on Hugging Face](https://huggingface.co/datasets/pit2015)
- **QQP (Quora Question Pairs):** [Competition page](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) | [Dataset on Hugging Face](https://huggingface.co/datasets/qqp)
- **SICK (Sentences Involving Compositional Knowledge):** [Dataset page](http://marcobaroni.org/composes/sick.html) | [Dataset on Hugging Face](https://huggingface.co/datasets/sick)
- **STS-B (Semantic Textual Similarity Benchmark):** [SemEval-2017 task page](http://alt.qcri.org/semeval2017/task1/) | [Dataset on Hugging Face](https://huggingface.co/datasets/stsb)
 
## Uses

- Train or fine-tune sentence encoders for **semantic retrieval** and **re-ranking**.
- Supervised **sentence-pair classification** tasks such as paraphrase detection.
- Evaluate **semantic similarity** and build general-purpose retrieval and ranking systems.

### Direct Use
```python
from datasets import load_dataset

# Unified corpus
ds = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "all")

# A single source, e.g., PAWS
paws = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "paws")

# Columns: sentence1, sentence2, label (+ source, source_idx in 'all')
```
 
### Out-of-Scope Use

- **Non-English or multilingual modeling:** The dataset is entirely in English and is not suitable for training or evaluating multilingual models.
- **Fine-grained similarity regression:** The STS-B portion is integerized in this release, so it should not be used for regression tasks that require the original continuous similarity scores.
 
## Dataset Structure

**Fields**

- `sentence1` *(string)*: First sentence.
- `sentence2` *(string)*: Second sentence.
- `label` *(int64)*: Task label. `1` ≈ paraphrase/similar, `0` ≈ non-paraphrase/dissimilar. For sources with continuous similarity scores (e.g., STS-B), labels are integerized in this release; consult the original dataset if you need the continuous scores.
- *(config `all` only)*:
  - `source` *(string)*: Upstream dataset identifier.
  - `source_idx` *(int64)*: Source-local row id.

**Splits**

- `train`, `validation` (where available), and `test`: original splits are preserved whenever provided by the source.

**Schemas by config**

- `all`: 5 columns (`sentence1`, `sentence2`, `label`, `source`, `source_idx`).
- All other configs: 3 columns (`sentence1`, `sentence2`, `label`).
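To make the `all` schema concrete, here is a minimal, self-contained sketch of splitting the unified view back into per-source subsets via the `source` column. The rows below are hypothetical examples in the documented shape, not actual records from the corpus.

```python
from collections import defaultdict

# Hypothetical rows in the shape of the `all` config:
# sentence1, sentence2, label, source, source_idx.
rows = [
    {"sentence1": "How do I learn Python?",
     "sentence2": "What is the best way to learn Python?",
     "label": 1, "source": "qqp", "source_idx": 0},
    {"sentence1": "The cat sat on the mat.",
     "sentence2": "Stocks fell sharply on Monday.",
     "label": 0, "source": "mrpc", "source_idx": 0},
    {"sentence1": "He bought a car.",
     "sentence2": "He purchased a car.",
     "label": 1, "source": "mrpc", "source_idx": 1},
]

def split_by_source(rows):
    """Group unified rows back into per-source subsets via the `source` column."""
    by_source = defaultdict(list)
    for row in rows:
        by_source[row["source"]].append(row)
    return dict(by_source)

subsets = split_by_source(rows)
print(sorted(subsets))       # ['mrpc', 'qqp']
print(len(subsets["mrpc"]))  # 2
```

The same grouping works on the hosted dataset by iterating the `all` config, at the cost of a full pass over ~1.12M rows.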
 
## Dataset Creation

### Curation Rationale

To fine-tune stronger encoder models for retrieval and re-ranking, we curated a large, diverse pool of labeled sentence pairs (positives and negatives) covering multiple real-world styles and domains. Consolidating canonical benchmarks into a single schema reduces engineering overhead and encourages generalization beyond any single dataset.

### Source Data

#### Data Collection and Processing

- Ingested each selected dataset and **preserved original splits** when available.
- Normalized all sources to a common schema; no manual relabeling was performed.
- Merged everything into the `all` config, adding `source` and `source_idx` for traceability.
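The normalization step above can be sketched as follows. The upstream column names in `COLUMN_MAP` are illustrative assumptions (the actual field names used during ingestion are not documented here), so treat this as the idea rather than the exact pipeline:

```python
# Hypothetical mapping from upstream column names to the common schema.
# The real upstream field names may differ; this only illustrates the approach.
COLUMN_MAP = {
    "qqp": ("question1", "question2", "is_duplicate"),
    "mrpc": ("sentence1", "sentence2", "label"),
}

def normalize(source, idx, row):
    """Project an upstream row onto (sentence1, sentence2, label) and tag provenance."""
    s1_key, s2_key, label_key = COLUMN_MAP[source]
    return {
        "sentence1": row[s1_key],
        "sentence2": row[s2_key],
        "label": int(row[label_key]),
        "source": source,       # kept in the `all` config for traceability
        "source_idx": idx,      # source-local row id
    }

example = normalize("qqp", 7, {
    "question1": "How do I start running?",
    "question2": "What is a good way to begin running?",
    "is_duplicate": 1,
})
print(example["source"], example["label"])  # qqp 1
```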
 
#### Who are the source data producers?

The original creators of the upstream datasets (e.g., Microsoft Research for MRPC, Quora for QQP, Google Research for PAWS).
#### Personal and Sensitive Information

The corpus may include public-text sentences that mention people, organizations, or places (e.g., news, Wikipedia, tweets). It is **not** intended for identifying individuals or inferring their sensitive attributes. If you require strict PII controls, filter or exclude sources accordingly before downstream use.
## Bias, Risks, and Limitations

- **Label noise:** Some sources include noisily labeled pairs (e.g., the large weakly labeled PAWS set).
- **Granularity mismatch:** STS-B's continuous similarity scores are represented as integers here; treat them with care if you need fine-grained scoring.
- **English-only:** Not suitable for multilingual evaluation without adaptation.
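To make the granularity caveat concrete, here is a hedged sketch of what integerizing continuous STS-B scores (which lie in [0, 5]) can look like. The exact mapping used in this release is not documented on this card, so the rounding scheme below is an assumption for illustration only:

```python
def integerize(score: float) -> int:
    """Round a continuous STS-B similarity score (0.0-5.0) to an integer label.

    NOTE: illustrative only -- the release does not document its exact mapping,
    so this rounding scheme is an assumption.
    """
    if not 0.0 <= score <= 5.0:
        raise ValueError("STS-B scores lie in [0, 5]")
    return round(score)

scores = [4.6, 2.4, 0.2]
print([integerize(s) for s in scores])  # [5, 2, 0]
```

Whatever the actual scheme, the collapse from a continuous scale to integers is lossy, which is why regression-style use should fall back to the original STS-B data.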
 
### Recommendations

- Use the `all` configuration for large-scale training, but note that some sources dominate in size (e.g., PAWS, QQP). Apply **sampling or weighting** if you want balanced learning across domains.
- Treat **STS-B labels** with caution: they are integerized in this release. For regression-style similarity scoring, use the original STS-B dataset.
- This dataset is **best suited for training retrieval and re-ranking models**. Avoid repurposing it for unrelated tasks (e.g., user profiling, sensitive-attribute prediction, or multilingual training).
- Track the `source` field (in the `all` config) during training to analyze how performance varies by dataset type; this can guide fine-tuning or domain adaptation.
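The sampling-or-weighting recommendation can be sketched as follows, again on hypothetical in-memory rows; for the real corpus you would derive per-source pools from the `source` column of the `all` config:

```python
import random

def balanced_sample(rows, per_source, seed=0):
    """Down-sample so each source contributes at most `per_source` examples."""
    rng = random.Random(seed)  # seeded for reproducibility
    by_source = {}
    for row in rows:
        by_source.setdefault(row["source"], []).append(row)
    sample = []
    for source, items in sorted(by_source.items()):
        rng.shuffle(items)
        sample.extend(items[:per_source])
    return sample

# Hypothetical rows: one dominant source (like QQP) and one small one (like SICK).
rows = [{"source": "qqp", "label": i % 2} for i in range(1000)]
rows += [{"source": "sick", "label": i % 2} for i in range(10)]

sample = balanced_sample(rows, per_source=10)
print(len(sample))  # 20: 10 from each source
```

Per-source loss weighting is an alternative to down-sampling when you want to keep every pair but still balance the gradient contribution across domains.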
 
 
380
 
381
+ ## Citation
382
 
383
+ If you use this dataset, please cite the Hugging Face entry and the original upstream datasets you rely on.
384
 
385
  **BibTeX:**
386
 
387
+ ```bibtex
388
+ @misc{langcache_sentencepairs_v1_2025,
389
+ title = {langcache-sentencepairs-v1},
390
+ author = {Baral, Aditeya and Redis},
391
+ howpublished = {\url{https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1}},
392
+ year = {2025},
393
+ note = {Version 1}
394
+ }
395
+ ```
 
 
 
 
 
 
396
 
## Dataset Card Authors

Aditeya Baral

## Dataset Card Contact