datasetId (large_string) | author (large_string) | last_modified (large_string) | downloads (int64) | likes (int64) | tags (large list) | task_categories (large list) | createdAt (large_string) | trending_score (float64) | card (large_string)
---|---|---|---|---|---|---|---|---|---
fixie-ai/mls_eng_10k_chat | fixie-ai | 2025-05-13T19:09:26Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T17:46:07Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int32
- name: llasa_llama_formatted_text_prompt
dtype: string
- name: general_text_prompt
dtype: string
splits:
- name: train
num_bytes: 8915496143
num_examples: 151254
- name: test
num_bytes: 11077284
num_examples: 236
- name: validation
num_bytes: 11482367
num_examples: 238
download_size: 4219089264
dataset_size: 8938055794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
louisbrulenaudet/code-impots-annexe-ii | louisbrulenaudet | 2025-05-13T15:42:31Z | 637 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code général des impôts, annexe II"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T22:39:19Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code général des impôts, annexe II
source_datasets:
- original
pretty_name: Code général des impôts, annexe II
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code général des impôts, annexe II, non-instruct (2025-05-12)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes. The project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets  # needed for concatenate_datasets below
from ragoon import load_datasets
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=False
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the `title_main` and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
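A minimal access sketch, assuming the fields above are exposed as columns of a default `train` split via the `datasets` library (split name and record contents are illustrative):
```python
from datasets import load_dataset

code = load_dataset("louisbrulenaudet/code-impots-annexe-ii", split="train")
item = code[0]
print(item["ref"])                          # article reference, e.g. "Code général des impôts, annexe II, art. ..."
print(item["dateDebut"], "->", item["dateFin"])  # validity window of the article
print(item["texte"][:200])                  # first characters of the article body
```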
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
ESITime/tram-relation-responses | ESITime | 2025-05-13T15:26:35Z | 358 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-06T14:57:46Z | null | ---
dataset_info:
- config_name: 05B
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 563660
num_examples: 504
download_size: 288283
dataset_size: 563660
- config_name: 3B
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 522730
num_examples: 504
download_size: 268767
dataset_size: 522730
- config_name: 4_epoch
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 610557
num_examples: 504
download_size: 307133
dataset_size: 610557
- config_name: 6B
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 523743
num_examples: 504
download_size: 269699
dataset_size: 523743
- config_name: base_base_message_3b
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 1972433
num_examples: 504
download_size: 810457
dataset_size: 1972433
- config_name: base_create_1.5b
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 939980
num_examples: 504
download_size: 433440
dataset_size: 939980
- config_name: base_create_3b
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 564942
num_examples: 504
download_size: 281637
dataset_size: 564942
- config_name: free_1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 574127
num_examples: 504
download_size: 290281
dataset_size: 574127
- config_name: only_1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 563148
num_examples: 504
download_size: 283488
dataset_size: 563148
- config_name: qwen
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 509236
num_examples: 504
download_size: 277574
dataset_size: 509236
- config_name: qwen05_sft
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 400398
num_examples: 504
download_size: 203651
dataset_size: 400398
- config_name: qwen15_sft
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 887100
num_examples: 504
download_size: 442407
dataset_size: 887100
- config_name: qwen3_sft
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 673150
num_examples: 504
download_size: 333123
dataset_size: 673150
- config_name: qwen_3
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 503714
num_examples: 504
download_size: 260303
dataset_size: 503714
- config_name: qwen_3b_sft
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 871497
num_examples: 504
download_size: 418390
dataset_size: 871497
- config_name: qwen_4
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 503131
num_examples: 504
download_size: 259984
dataset_size: 503131
- config_name: qwen_4_1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 502707
num_examples: 505
download_size: 259125
dataset_size: 502707
- config_name: qwen_5
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 614947
num_examples: 504
download_size: 312432
dataset_size: 614947
- config_name: sft1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 760039
num_examples: 504
download_size: 366132
dataset_size: 760039
- config_name: sft1_1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 834967
num_examples: 504
download_size: 408377
dataset_size: 834967
- config_name: sft1_3
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 790174
num_examples: 504
download_size: 378792
dataset_size: 790174
- config_name: sft1_4
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 798017
num_examples: 504
download_size: 381126
dataset_size: 798017
- config_name: sft1_5
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 829423
num_examples: 504
download_size: 401651
dataset_size: 829423
- config_name: sft2
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 794397
num_examples: 504
download_size: 384196
dataset_size: 794397
- config_name: sft2_1
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 832224
num_examples: 504
download_size: 404456
dataset_size: 832224
- config_name: sft2_3
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 667852
num_examples: 504
download_size: 331857
dataset_size: 667852
- config_name: sft2_4
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 663333
num_examples: 504
download_size: 330105
dataset_size: 663333
- config_name: sft2_5
features:
- name: Question
dtype: string
- name: Option A
dtype: string
- name: Option B
dtype: string
- name: Option C
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: response
dtype: string
splits:
- name: test
num_bytes: 674219
num_examples: 504
download_size: 340371
dataset_size: 674219
configs:
- config_name: 05B
data_files:
- split: test
path: 05B/test-*
- config_name: 3B
data_files:
- split: test
path: 3B/test-*
- config_name: 4_epoch
data_files:
- split: test
path: 4_epoch/test-*
- config_name: 6B
data_files:
- split: test
path: 6B/test-*
- config_name: base_base_message_3b
data_files:
- split: test
path: base_base_message_3b/test-*
- config_name: base_create_1.5b
data_files:
- split: test
path: base_create_1.5b/test-*
- config_name: base_create_3b
data_files:
- split: test
path: base_create_3b/test-*
- config_name: free_1
data_files:
- split: test
path: free_1/test-*
- config_name: only_1
data_files:
- split: test
path: only_1/test-*
- config_name: qwen
data_files:
- split: test
path: qwen/test-*
- config_name: qwen05_sft
data_files:
- split: test
path: qwen05_sft/test-*
- config_name: qwen15_sft
data_files:
- split: test
path: qwen15_sft/test-*
- config_name: qwen3_sft
data_files:
- split: test
path: qwen3_sft/test-*
- config_name: qwen_3
data_files:
- split: test
path: qwen_3/test-*
- config_name: qwen_3b_sft
data_files:
- split: test
path: qwen_3b_sft/test-*
- config_name: qwen_4
data_files:
- split: test
path: qwen_4/test-*
- config_name: qwen_4_1
data_files:
- split: test
path: qwen_4_1/test-*
- config_name: qwen_5
data_files:
- split: test
path: qwen_5/test-*
- config_name: sft1
data_files:
- split: test
path: sft1/test-*
- config_name: sft1_1
data_files:
- split: test
path: sft1_1/test-*
- config_name: sft1_3
data_files:
- split: test
path: sft1_3/test-*
- config_name: sft1_4
data_files:
- split: test
path: sft1_4/test-*
- config_name: sft1_5
data_files:
- split: test
path: sft1_5/test-*
- config_name: sft2
data_files:
- split: test
path: sft2/test-*
- config_name: sft2_1
data_files:
- split: test
path: sft2_1/test-*
- config_name: sft2_3
data_files:
- split: test
path: sft2_3/test-*
- config_name: sft2_4
data_files:
- split: test
path: sft2_4/test-*
- config_name: sft2_5
data_files:
- split: test
path: sft2_5/test-*
---
|
Shekharmeena/female-audio-dataset | Shekharmeena | 2025-05-13T11:44:36Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T11:41:15Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 4587686171.705792
num_examples: 5983
download_size: 4129876735
dataset_size: 4587686171.705792
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JingweiNi/LEXam | JingweiNi | 2025-05-13T10:21:46Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T09:53:20Z | null | ---
license: cc-by-4.0
dataset_info:
- config_name: mcq_4_choices
features:
- name: Question
dtype: string
- name: Choices
dtype: string
- name: Gold
dtype: int64
- name: Course
dtype: string
- name: Language
dtype: string
- name: Area
dtype: string
- name: Jurisdiction
dtype: string
- name: Year
dtype: int64
- name: n_statements
dtype: int64
- name: none_as_an_option
dtype: bool
- name: Id
dtype: string
splits:
- name: test
num_bytes: 1701781
num_examples: 1660
download_size: 833522
dataset_size: 1701781
- config_name: mcq_perturbation
features:
- name: question
dtype: string
- name: 4_choices
dtype: string
- name: 4_choices_answer
dtype: int64
- name: 8_choices
dtype: string
- name: 8_choices_answer
dtype: int64
- name: 16_choices
dtype: string
- name: 16_choices_answer
dtype: int64
- name: 32_choices
dtype: string
- name: 32_choices_answer
dtype: int64
- name: course
dtype: string
- name: language
dtype: string
- name: n_statements
dtype: int64
- name: id
dtype: string
splits:
- name: test
num_bytes: 779770
num_examples: 385
download_size: 327294
dataset_size: 779770
- config_name: open_question
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Course
dtype: string
- name: Language
dtype: string
- name: Area
dtype: string
- name: Jurisdiction
dtype: string
- name: Year
dtype: int64
- name: ID
dtype: string
splits:
- name: test
num_bytes: 7966761
num_examples: 2541
- name: dev
num_bytes: 994495
num_examples: 300
download_size: 4159184
dataset_size: 8961256
configs:
- config_name: mcq_4_choices
data_files:
- split: test
path: mcq_4_choices/test-*
- config_name: mcq_perturbation
data_files:
- split: test
path: mcq_perturbation/test-*
- config_name: open_question
data_files:
- split: test
path: open_question/test-*
- split: dev
path: open_question/dev-*
---
<div align="center" style="display: flex; align-items: center; justify-content: center; gap: 16px;">
<img src="pictures/logo.png" alt="LEXam Logo" width="120" style="border: none;">
<div style="text-align: left;">
<h1 style="margin: 0; font-size: 2em;">LEXam: Benchmarking Legal Reasoning on 340 Law Exams</h1>
<p style="margin: 6px 0 0; font-size: 1.2em;">A diverse, rigorous evaluation suite for legal AI from Swiss, EU, and international law examinations.</p>
</div>
</div>
### [[GitHub Repo]](https://github.com/EdisonNi-hku/LEXam) with code to run evaluations on LEXam
## 🔥 News
- [2025/05] Release of the first version of [paper](), where we evaluate 20+ representative SoTA LLMs, with evaluations strictly verified by legal experts.
## 🧩 Subsets
- `mcq_4_choices`: The standard 1660 MCQs of LEXam with 4 choices.
- `mcq_perturbation`: We find that permutation perturbation can significantly increase the difficulty of LEXam. `mcq_perturbation` contains a set of MCQs with controlled questions but perturbed choices, offering 4, 8, 16, or 32 alternative answers.
- `open_question`: All open questions of LEXam, with rich metadata including language, course, jurisdiction, etc.
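A minimal loading sketch, assuming the standard `datasets` API and that config names match the subset names above:
```python
from datasets import load_dataset

mcq = load_dataset("JingweiNi/LEXam", "mcq_4_choices", split="test")
sample = mcq[0]
print(sample["Question"])
print(sample["Choices"], "-> gold index:", sample["Gold"])
```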
|
nduque/eval_robustness_e8_no1_2_120k_2 | nduque | 2025-05-13T09:15:49Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-13T09:15:40Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 10,
"total_frames": 4734,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": null
},
"observation.images.front": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.above": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
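As a small sketch, the `data_path` and `video_path` templates above resolve to concrete file names via standard Python string formatting (the episode and chunk indices below are illustrative):
```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

# Episode 3 of chunk 0, plus its front-camera video:
print(data_path.format(episode_chunk=0, episode_index=3))
# data/chunk-000/episode_000003.parquet
print(video_path.format(episode_chunk=0, video_key="observation.images.front",
                        episode_index=3))
# videos/chunk-000/observation.images.front/episode_000003.mp4
```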
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
axondendriteplus/LightRAG-DAPO-ScalingLaws | axondendriteplus | 2025-05-13T08:15:11Z | 0 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"created-with-pdfs-to-page-images-converter",
"pdf-to-image"
] | [] | 2025-05-13T08:15:01Z | null | ---
size_categories:
- n<1K
tags:
- created-with-pdfs-to-page-images-converter
- pdf-to-image
---
# Dataset Card for axondendriteplus/LightRAG-DAPO-ScalingLaws
## Dataset Description
This dataset contains images converted from PDFs using the PDFs to Page Images Converter Space.
- **Number of images:** 62
- **Number of PDFs processed:** 3
- **Sample size per PDF:** 100
- **Created on:** 2025-05-13 10:15:11
## Dataset Creation
### Source Data
The images in this dataset were generated from user-uploaded PDF files.
### Processing Steps
1. PDF files were uploaded to the PDFs to Page Images Converter.
2. Each PDF was processed, converting selected pages to images.
3. The resulting images were saved and uploaded to this dataset.
## Dataset Structure
The dataset consists of JPEG images, each representing a single page from the source PDFs.
### Data Fields
- `images/`: A folder containing all the converted images.
### Data Splits
This dataset does not have specific splits.
## Additional Information
- **Contributions:** Thanks to the PDFs to Page Images Converter for creating this dataset.
|
AbrehamT/tagged_articles | AbrehamT | 2025-05-13T04:00:38Z | 24 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"chemistry"
] | [] | 2025-05-10T21:59:14Z | null | ---
tags:
- biology
- chemistry
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: pmid
dtype: string
- name: section
dtype: string
- name: text
dtype: string
- name: annotations
sequence:
sequence: int64
splits:
- name: train
num_bytes: 120073099
num_examples: 17619
download_size: 61504820
dataset_size: 120073099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Alzheimer's Disease Tagged Articles Dataset
This dataset contains a collection of scientific articles related to **Alzheimer's disease**, annotated with biomedical entities (such as diseases, genes, and species) using NCBI’s **PubTator** tools.
## Data Sources and Annotation Tools
- Entity annotations were generated using NCBI's standard models and API calls to the **PubTator3 API**:
- **TaggerOne** and **GNORM** for `gene_species_tagged_articles.json`
- **TaggerOne** alone for `disease_tagged_articles.json`
- **PubTator3 API** for `converted_from_bioc.json`
These tools are widely used in biomedical NLP for tagging mentions of diseases, genes, and species within PubMed articles.
## Dataset Structure
The dataset is a flattened version of the original JSON files, where each record represents a tagged article section. The structure includes:
- `pmid`: PubMed ID of the article
- `section`: Type of section (e.g., `title`, `abstract`)
- `text`: Lowercased content of the section
- `annotations`: List of `[start, end]` character offsets identifying entity mentions
Example:
```json
{
"pmid": "34379990",
"section": "abstract",
"text": "clinical progression of tauopathies may result...",
"annotations": [[0, 26], [45, 53], ...]
}
```
## Preprocessing
The dataset has been simplified:
* Original nested structures were flattened
* Tuples were converted to JSON-compliant lists
* Only `title` and `abstract` sections are currently included
## Future Work
* Extend tagging beyond titles and abstracts to include full-text sections (e.g., introduction, methods, results)
* Add entity labels (e.g., `Disease`, `Gene`, `Species`) in a future version
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("AbrehamT/tagged_articles")
```
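Continuing from the snippet above, a small sketch of recovering entity mentions from the `[start, end]` offsets (the indices are assumed to refer to the lowercased section text):
```python
example = dataset["train"][0]
mentions = [example["text"][start:end] for start, end in example["annotations"]]
print(example["pmid"], example["section"])
print(mentions[:5])  # first few tagged spans, e.g. disease or gene names
```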
## License
This dataset is released under the **MIT License**.
## Acknowledgments
This dataset relies on NCBI’s **PubTator**, **GNORM**, and **TaggerOne** models. Credit goes to the developers of these tools and the researchers whose articles form the dataset foundation. |
AlignmentResearch/ReNeLLMClearHarm | AlignmentResearch | 2025-05-13T03:24:53Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T22:39:14Z | null | ---
dataset_info:
- config_name: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200
features:
- name: clf_label
dtype: int64
- name: instructions
dtype: string
- name: content
sequence: string
- name: answer_prompt
dtype: string
- name: proxy_clf_label
dtype: int64
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
- name: original_text
dtype: string
- name: attack_index
dtype: int64
- name: original_example_index
dtype: int64
splits:
- name: validation
num_bytes: 16199407
num_examples: 20000
download_size: 1335998
dataset_size: 16199407
- config_name: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200-short
features:
- name: clf_label
dtype: int64
- name: instructions
dtype: string
- name: content
sequence: string
- name: answer_prompt
dtype: string
- name: proxy_clf_label
dtype: int64
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
- name: original_text
dtype: string
- name: attack_index
dtype: int64
- name: original_example_index
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 16169174.0
num_examples: 19997
download_size: 1387154
dataset_size: 16169174.0
configs:
- config_name: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200
data_files:
- split: validation
path: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200/validation-*
- config_name: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200-short
data_files:
- split: train
path: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200-short/train-*
- split: validation
path: renellm-input-adv-Qwen2.5-14-config-default-examples-0-100-attacks-0-200-short/validation-*
---
|
danhdzzzz/tuyensinh_sp | danhdzzzz | 2025-05-13T03:13:15Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T03:12:29Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 263615
num_examples: 1059
download_size: 87116
dataset_size: 263615
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yoonholee/completions_Qwen3-1.7B_GSMPlus | yoonholee | 2025-05-13T03:08:20Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T03:08:18Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: completions
sequence: string
- name: answer
dtype: string
- name: corrects
sequence: bool
- name: acc
dtype: float64
splits:
- name: train
num_bytes: 9368563
num_examples: 200
download_size: 2824742
dataset_size: 9368563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chuuhtetnaing/myanmar-ocr-dataset | chuuhtetnaing | 2025-05-13T02:25:18Z | 166 | 0 | [
"task_categories:image-to-text",
"language:my",
"license:mit",
"region:us"
] | [
"image-to-text"
] | 2025-03-27T12:09:00Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 23636898616.852
num_examples: 6903524
- name: test
num_bytes: 2592888233.424
num_examples: 762736
download_size: 24248031522
dataset_size: 26229786850.276
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
language:
- my
pretty_name: Myanmar OCR Dataset (Synthetic)
license: mit
task_categories:
- image-to-text
---
# Myanmar OCR Dataset
A synthetic dataset for training and fine-tuning Optical Character Recognition (OCR) models specifically for the Myanmar language.
## Dataset Description
This dataset contains synthetically generated OCR images created specifically for Myanmar text recognition tasks. The images were generated using [myanmar-ocr-data-generator](https://github.com/chuuhtetnaing/myanmar-ocr-data-generator), a fork of [TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator) with fixes for proper Myanmar character splitting.
## Direct Download
Available from [UT Austin SharePoint](https://utexas-my.sharepoint.com/:u:/g/personal/cn23349_my_utexas_edu/EZcWC2NEbKNMkeJ_WByugx0BMTSE6qKYQjVp62fn6m0toA?e=L9AKko)
- The downloadable data is specifically formatted for PaddleOCR
- Includes organized folder structure optimized for PaddleOCR training
- Labels are provided in tab-separated format
- Ready for fine-tuning with PaddleOCR
### Key Features
- Synthetic Myanmar text images with corresponding text labels
- Maximum text length of 100 characters per image
- Various fonts, styles, skews, distortion types, blur levels, and backgrounds to improve model robustness
## Generation Method
The dataset was created using a forked and customized version of the TextRecognitionDataGenerator:
- Original repository: [Belval/TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator)
- Customized fork: [chuuhtetnaing/myanmar-ocr-data-generator](https://github.com/chuuhtetnaing/myanmar-ocr-data-generator)
### Text Sources
The text content used to generate the OCR images was extracted from:
- [chuuhtetnaing/myanmar-wikipedia-dataset](https://huggingface.co/datasets/chuuhtetnaing/myanmar-wikipedia-dataset): A collection of Myanmar language text from Wikipedia
- [facebook/flores](https://huggingface.co/datasets/facebook/flores): Meta's FLORES dataset which includes Myanmar language texts
## Usage
This dataset is particularly useful for:
- Training OCR models for Myanmar language text recognition
- Fine-tuning existing OCR models to improve performance on Myanmar script
- Developing applications for Myanmar text digitization
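A minimal loading sketch, assuming the standard `datasets` streaming API (the dataset is large, so streaming avoids a multi-gigabyte download):
```python
from datasets import load_dataset

ds = load_dataset("chuuhtetnaing/myanmar-ocr-dataset", split="test", streaming=True)
sample = next(iter(ds))
sample["image"].save("sample_line.png")  # rendered text-line image (PIL)
print(sample["text"])                    # corresponding Myanmar transcription
```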
## License
This dataset is free to use for any purpose - personal, commercial, or educational. |
hf-doc-build/doc-build-dev | hf-doc-build | 2025-05-13T01:55:51Z | 163,925 | 4 | [
"license:mit",
"region:us",
"documentation"
] | [] | 2022-11-08T09:03:37Z | null | ---
license: mit
tags:
- documentation
pretty_name: HF Documentation (PRs)
viewer: false
---
This dataset contains the docs from all the PRs that update one of the docs from https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. |
ahmedelgebaly/SQuad_SciQ_HotpotQA_Alpaca_Equal | ahmedelgebaly | 2025-05-12T23:14:14Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T23:14:09Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 72542175.0
num_examples: 30000
- name: validation
num_bytes: 7393264.0
num_examples: 3000
download_size: 46541993
dataset_size: 79935439.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
nebius/SWE-rebench | nebius | 2025-05-12T22:21:55Z | 29 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T10:02:25Z | null | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: created_at
dtype: string
- name: environment_setup_commit
dtype: string
- name: hints_text
dtype: string
- name: patch
dtype: string
- name: problem_statement
dtype: string
- name: repo
dtype: string
- name: test_patch
dtype: string
- name: meta
struct:
- name: commit_name
dtype: string
- name: failed_lite_validators
sequence: string
- name: has_test_patch
dtype: bool
- name: is_lite
dtype: bool
- name: llm_score
struct:
- name: difficulty_score
dtype: int64
- name: issue_text_score
dtype: int64
- name: test_score
dtype: int64
- name: num_modified_files
dtype: int64
- name: version
dtype: string
- name: install_config
struct:
- name: env_vars
struct:
- name: JUPYTER_PLATFORM_DIRS
dtype: string
- name: env_yml_path
sequence: string
- name: install
dtype: string
- name: log_parser
dtype: string
- name: no_use_env
dtype: bool
- name: packages
dtype: string
- name: pip_packages
sequence: string
- name: pre_install
sequence: string
- name: python
dtype: string
- name: reqs_path
sequence: string
- name: test_cmd
dtype: string
- name: requirements
dtype: string
- name: environment
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: PASS_TO_FAIL
sequence: string
- name: license_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 737537372
num_examples: 21336
download_size: 239735457
dataset_size: 737537372
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Summary
SWE-rebench is a large-scale dataset designed to support the training and evaluation of LLM-based software engineering (SWE) agents. It is built using a fully automated pipeline that continuously extracts real-world GitHub tasks at scale. The dataset includes over 21,000 issue–pull request pairs from 6,000+ Python repositories, each validated for correctness through environment setup and test execution.
SWE-rebench expands on the methodology introduced in SWE-bench by adding:
* Continuous task collection to prevent benchmark staleness
* Decontamination mechanisms to mitigate data leakage into pretrained LLMs
* Automatic environment extraction and validation to ensure high-quality execution
# How to Use
```python
from datasets import load_dataset
ds = load_dataset('nebius/SWE-rebench')
```
# Dataset Statistics
Average, 75th percentile, and maximum values characterizing various attributes of the collected instances. Statistics are micro-averaged without grouping by repository.
| Data | Type | Mean | p75 | Max |
|---------------|--------------------|----------|----------|-----------|
| Issue text | Length (words) | 111.5 | 146 | 1,294 |
| Code base | Files (Non-test) | 71.71 | 72.00 | 2,264 |
| | Lines (Non-test) | 15,163.38| 13,777 | 1,039,288 |
| Gold patch | Files edited | 2.6 | 3 | 7 |
| | Lines edited | 56 | 76 | 300 |
| Tests | Fail to Pass | 10.94 | 5 | 4,941 |
| | Total | 58.5 | 49 | 7,820 |
# Dataset Structure
The dataset contains the following fields. It includes all fields from SWE-bench and adds a `meta` column, which indicates whether the instance meets the "lite" criteria and, if not, lists the failed validators.
| Field name | Type | Description |
|----------------------------|--------|-------------------------------------------------------------------------------------------------|
| `instance_id` | str | A formatted instance identifier, usually as `repo_owner__repo_name-PR-number`. |
| `patch` | str | The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue. |
| `repo` | str | The repository owner/name identifier from GitHub. |
| `base_commit` | str | The commit hash of the repository representing the HEAD of the repository before the solution PR is applied. |
| `hints_text` | str | Comments made on the issue prior to the solution PR's first commit creation date. |
| `created_at` | str | The creation date of the pull request. |
| `test_patch` | str | A test-file patch that was contributed by the solution PR. |
| `problem_statement` | str | The issue title and body. |
| `version` | str | Installation version to use for running evaluation. |
| `environment_setup_commit` | str | Commit hash to use for environment setup and installation. |
| `FAIL_TO_PASS` | str | A JSON list of strings that represent the set of tests resolved by the PR and tied to the issue resolution. |
| `PASS_TO_PASS` | str | A JSON list of strings that represent tests that should pass before and after the PR application. |
| `meta` | str | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. |
| `license_name` | str | The type of license of the repository. |
| `install_config` | str | Installation configuration for setting up the repository. |
| `requirements` | str | Freezed requirements for the repository. |
| `environment` | str | Environment configuration for the repository. |
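A small sketch of selecting "lite" instances using the `meta` field described above (this assumes the default config and its `test` split):
```python
from datasets import load_dataset

ds = load_dataset("nebius/SWE-rebench", split="test")
lite = ds.filter(lambda row: row["meta"]["is_lite"])  # keep instances passing all lite validators
print(f"{len(lite)} lite instances out of {len(ds)}")
```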
To execute instances within SWE-rebench, use this fork of SWE-bench: [SWE-rebench/SWE-bench-fork](https://github.com/SWE-rebench/SWE-bench-fork)
# License
The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance. |
vuquangtrung/newscook | vuquangtrung | 2025-05-12T22:16:11Z | 0 | 0 | [
"task_categories:question-answering",
"language:vi",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-05-12T21:53:21Z | null | ---
task_categories:
- question-answering
language:
- vi
size_categories:
- 1K<n<10K
--- |
SAA-Lab/test_march23-cwv-genrm_cot_qwen1.5b-ckptglobal_step_324 | SAA-Lab | 2025-05-12T21:13:14Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T21:13:12Z | null | ---
dataset_info:
features:
- name: post_id
dtype: int64
- name: chosen_body
dtype: string
- name: rejected_body
dtype: string
- name: chosen_upvotes
dtype: int64
- name: rejected_upvotes
dtype: int64
- name: chosen_length
dtype: int64
- name: rejected_length
dtype: int64
- name: chosen_username
dtype: string
- name: rejected_username
dtype: string
- name: chosen_timestamp
dtype: timestamp[us]
- name: rejected_timestamp
dtype: timestamp[us]
- name: post_title
dtype: string
- name: time_diff
dtype: float64
- name: __index_level_0__
dtype: int64
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: answer
dtype: string
- name: model_response
dtype: string
- name: reasoning
dtype: string
- name: preferred
dtype: string
- name: is_correct
dtype: bool
splits:
- name: train
num_bytes: 32076409
num_examples: 1898
download_size: 18280357
dataset_size: 32076409
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hurrutia/Reglamento_Aeronautico_Colombiano_2024 | hurrutia | 2025-05-12T20:55:39Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T14:20:13Z | null | ---
dataset_info:
features:
- name: rac
dtype: string
- name: pagina
dtype: string
- name: pregunta
dtype: string
- name: respuesta
dtype: string
splits:
- name: train
num_bytes: 6732878
num_examples: 24479
download_size: 1813160
dataset_size: 6732878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hurrutia/es-inclusive-language-it | hurrutia | 2025-05-12T20:55:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T14:20:06Z | null | ---
dataset_info:
features:
- name: pregunta
dtype: string
- name: respuesta
dtype: string
splits:
- name: train
num_bytes: 930607
num_examples: 4196
download_size: 406560
dataset_size: 930607
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AIxBlock/Spanish-Mx-short-utterances | AIxBlock | 2025-05-12T20:40:34Z | 33 | 0 | [
"task_categories:text-to-audio",
"task_categories:text-to-speech",
"task_categories:token-classification",
"language:es",
"license:mit",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-to-audio",
"text-to-speech",
"token-classification"
] | 2025-05-08T23:57:48Z | null | ---
license: mit
task_categories:
- text-to-audio
- text-to-speech
- token-classification
language:
- es
size_categories:
- 100K<n<1M
---
This dataset is provided by AIxBlock, a unified platform for AI development and AI workflow automation.
This dataset contains around 800k+ sentences in Spanish, making it a valuable resource for a wide range of language technology applications. All data has undergone quality assurance (QA) checks to ensure clarity, correctness, and natural phrasing.
The dataset is well-suited for:
- Speech data generation (e.g., recording short audio clips lasting 8–30 seconds per sentence)
- Natural Language Processing (NLP) tasks, such as language modeling, translation, intent detection, and more |
TAUR-dev/STEPS__r1_8d_and_4d_eval__v4_mini | TAUR-dev | 2025-05-12T20:05:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T20:05:29Z | null | ---
dataset_info:
features:
- name: __source_repo
dtype: string
- name: question
dtype: string
- name: eval_internal_cot
dtype: string
- name: solution
dtype: string
- name: eval_solution
dtype: string
- name: raw_eval_prompt
dtype: string
- name: judge_correct
dtype: bool
- name: judge_reasoning
dtype: string
- name: __orig_idx
dtype: int64
- name: __chunk_idx
dtype: int64
- name: eval_internal_cot__chunked
dtype: string
- name: eval_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: eval_steps
list:
- name: content
dtype: string
- name: steps_word_count
dtype: int64
- name: trace_word_count
dtype: int64
- name: word_count_diff
dtype: int64
splits:
- name: train
num_bytes: 1374101
num_examples: 20
download_size: 332457
dataset_size: 1374101
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
littleGuagua/x_dataset_48558 | littleGuagua | 2025-05-12T19:20:01Z | 1,146 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T14:58:15Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_48558
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5ERFRy1NBaxrJ8WpkjKeWwgx79NxiVoEqmL3m5tEWsDHwjtD
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to the decentralized way they are created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
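A minimal sketch of a custom time-based split, assuming the default config exposes a `train` split and that `datetime` is an ISO-8601 string as described above:
```python
from datasets import load_dataset

ds = load_dataset("littleGuagua/x_dataset_48558", split="train")
cutoff = "2025-02-01"
earlier = ds.filter(lambda row: row["datetime"] < cutoff)   # ISO date strings sort lexicographically
later = ds.filter(lambda row: row["datetime"] >= cutoff)
```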
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_48558,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_48558},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 46034420
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-11T00:00:00Z
- **Last Updated:** 2025-02-18T22:02:52Z
### Data Distribution
- Tweets with hashtags: 36.14%
- Tweets without hashtags: 63.86%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 29399695 | 63.86% |
| 2 | #riyadh | 280983 | 0.61% |
| 3 | #zelena | 221844 | 0.48% |
| 4 | #tiktok | 163569 | 0.36% |
| 5 | #bbb25 | 128439 | 0.28% |
| 6 | #ad | 96138 | 0.21% |
| 7 | #bbmzansi | 59564 | 0.13% |
| 8 | #jhope_at_galadespiècesjaunes | 58496 | 0.13% |
| 9 | #granhermano | 52866 | 0.11% |
| 10 | #pr | 50398 | 0.11% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T14:58:37Z | 837393 | 837393 |
| 2025-01-30T03:14:08Z | 8566588 | 9403981 |
| 2025-02-02T15:17:25Z | 8569868 | 17973849 |
| 2025-02-06T03:21:34Z | 10709950 | 28683799 |
| 2025-02-09T15:24:35Z | 7218900 | 35902699 |
| 2025-02-13T03:32:24Z | 8679209 | 44581908 |
| 2025-02-18T07:01:46Z | 795937 | 45377845 |
| 2025-02-18T22:02:52Z | 656575 | 46034420 |
|
pragsri8/ultrafeedback_60658_preference_dataset_causally-aligned-neutrals-on-causals | pragsri8 | 2025-05-12T19:01:02Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T19:00:47Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
splits:
- name: train
num_bytes: 701786488
num_examples: 183836
download_size: 380044119
dataset_size: 701786488
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aravindh25/trossen_pick_granola_bars_3cam_C_V8 | Aravindh25 | 2025-05-12T18:51:08Z | 121 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-10T01:48:03Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_solo",
"total_episodes": 25,
"total_frames": 30723,
"total_tasks": 1,
"total_videos": 75,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:25"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
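If you only need the tabular features (states, actions, indices), a minimal sketch is below; it relies on the `configs` entry above, which points the default config at the parquet files, and it does not decode the MP4 videos stored under `videos/`.
```python
# Sketch: load the tabular part of the dataset from the declared parquet files.
# Videos (observation.images.*) live under videos/ and are not decoded here.
from datasets import load_dataset

ds = load_dataset("Aravindh25/trossen_pick_granola_bars_3cam_C_V8", split="train")
frame = ds[0]
print(frame["episode_index"], frame["frame_index"], frame["action"])
```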
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Matis-E/BUSI_benign_100 | Matis-E | 2025-05-12T18:02:21Z | 39 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T21:20:31Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 12306695.0
num_examples: 100
download_size: 12288899
dataset_size: 12306695.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/text_L2-regular-SQA_mmlu | chiyuanhsiao | 2025-05-12T17:44:12Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T17:44:05Z | null | ---
dataset_info:
features:
- name: task_type
dtype: string
- name: task_name
dtype: string
- name: subtask_name
dtype: string
- name: input_question
dtype: string
- name: input_choice_list
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: input_final_prompts
sequence: string
- name: input_correct_responses
sequence: string
- name: output_prediction_text
sequence: string
- name: output_parsed_answer
dtype: string
- name: output_choice_completions
dtype: 'null'
- name: output_choice_negative_log_likelihoods
dtype: 'null'
- name: output_metrics
struct:
- name: acc
dtype: float64
- name: correct_format
dtype: float64
- name: is_correct
dtype: bool
- name: input_question_hash
dtype: string
- name: input_final_prompts_hash
sequence: string
- name: benchmark_label
dtype: string
- name: eval_config
struct:
- name: max_gen_len
dtype: string
- name: max_prompt_len
dtype: string
- name: num_few_shot
dtype: string
- name: num_generations
dtype: string
- name: prompt_fn
dtype: string
- name: return_logprobs
dtype: string
- name: seed
dtype: string
- name: temperature
dtype: string
- name: top_k
dtype: string
- name: top_p
dtype: string
- name: my_prediction_text
dtype: string
splits:
- name: latest
num_bytes: 187010503
num_examples: 14042
download_size: 28043068
dataset_size: 187010503
configs:
- config_name: default
data_files:
- split: latest
path: data/latest-*
---
|
Bretagne/Banque_Sonore_Dialectes_Bretons | Bretagne | 2025-05-12T16:55:59Z | 1,142 | 1 | [
"task_categories:automatic-speech-recognition",
"task_categories:translation",
"language:br",
"language:fra",
"language:multilingual",
"license:cc-by-nc-nd-3.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition",
"translation"
] | 2023-12-06T18:55:45Z | null | ---
language:
- br
- fra
- multilingual
task_categories:
- automatic-speech-recognition
- translation
license: cc-by-nc-nd-3.0
configs:
- config_name: data
data_files:
- split: train
path: train.parquet
---
> [!NOTE]
> Dataset origin: http://banque.sonore.breton.free.fr/
# Description
## From the Banque Sonore des Dialectes Bretons website
### About the project
The [Banque Sonore des Dialectes Bretons](http://banque.sonore.breton.free.fr/index.html) is an experimental project that brings together online a large collection of recordings made over more than a decade of fieldwork with traditional Breton speakers.
Maintained by a team of volunteers who share an interest in the study of the language and of popular traditions, this online archive aims to share the richness and diversity of this cultural heritage with as wide an audience as possible.
### About the corpus
The recordings gathered in the archive come from private collections, most of them previously unpublished. The corpus covers a wide range of research areas, from the study of the sounds, grammar and vocabulary of the language to agricultural techniques, traditional medicine and popular beliefs.
Each resource is carefully selected, transcribed, translated and catalogued, and may be accompanied by illustrations. Thanks to the search engine developed for the project, linguists, ethnologists, teachers, students, enthusiasts and the simply curious can all easily extract the information that interests them from the corpus.
### Legal notice
The site is published under a Creative Commons licence. Each contributor nevertheless remains responsible for the documents they choose to share online. If an extract is of interest for your work, you may use it for non-commercial purposes. Remember to cite the name of the project and the item's reference number, followed by the date of its last update.
## Notes on this Hugging Face repository
Because the data is not easy to use directly from the website, we scraped it on 6 December 2023.
This repository contains 7,291 audio files with a total duration of 15:53:23 (less than the 8,586 audio files totalling 18:33:23 reported on the site's [statistics](http://banque.sonore.breton.free.fr/statistique.php5) page; a new scraping pass would probably be needed).
## Usage
In addition to the ASR task, the `br` and `fr` columns alone can be used as machine-translation data.
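A minimal sketch is given below; it assumes the `data` config declared in the card header loads with the `datasets` library and that the text columns are indeed named `br` and `fr`.
```python
# Sketch: keep only the br/fr columns to use the corpus as translation data.
from datasets import load_dataset

ds = load_dataset("Bretagne/Banque_Sonore_Dialectes_Bretons", "data", split="train")
pairs = ds.remove_columns([c for c in ds.column_names if c not in ("br", "fr")])
print(pairs[0])
```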
<!-- The data would need to be reorganised so that it can be loaded directly via `load_dataset("Bretagne/Banque_Sonore_Dialectes_Bretons")`.
As it stands, use the following code:
```
from huggingface_hub import snapshot_download
repo_id = "Bretagne/Banque_Sonore_Dialectes_Bretons"
local_dir = "path_to_your_local_folder"
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="dataset")
```
Each folder corresponds to a commune (identified by its postal code) and contains MP3 recordings with the Breton transcription and the French translation. --> |
hshwk1983/x_dataset_20503 | hshwk1983 | 2025-05-12T16:53:59Z | 1,937 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T07:24:59Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** hshwk1983/x_dataset_20503
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DsemqgVoVsi8vHK6QiEewS9B1T3cfxwjUfGXjCxzQTjRLBe
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. The data is mostly English but may be multilingual because of the decentralized way in which it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
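As an illustration of these fields, the sketch below streams the dataset and keeps only tweets that carry hashtags; it assumes the default config loads with the `datasets` library and that `tweet_hashtags` is a (possibly empty) list, as described above.
```python
# Sketch: stream the data and keep only tweets that carry hashtags.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("hshwk1983/x_dataset_20503", split="train", streaming=True)
with_tags = (row for row in ds if row["tweet_hashtags"])

for row in islice(with_tags, 5):
    print(row["datetime"], row["tweet_hashtags"])
```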
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{hshwk19832025datauniversex_dataset_20503,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={hshwk1983},
year={2025},
url={https://huggingface.co/datasets/hshwk1983/x_dataset_20503},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 52912520
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T20:45:01Z
### Data Distribution
- Tweets with hashtags: 41.50%
- Tweets without hashtags: 58.50%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30954536 | 58.50% |
| 2 | #riyadh | 333077 | 0.63% |
| 3 | #zelena | 227553 | 0.43% |
| 4 | #tiktok | 210733 | 0.40% |
| 5 | #ad | 123338 | 0.23% |
| 6 | #bbb25 | 111207 | 0.21% |
| 7 | #pr | 67348 | 0.13% |
| 8 | #theheartkillersep11 | 67119 | 0.13% |
| 9 | #yahooニュース | 65424 | 0.12% |
| 10 | #theheartkillersep10 | 64660 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T07:26:13Z | 4467532 | 4467532 |
| 2025-01-30T19:29:08Z | 9248514 | 13716046 |
| 2025-02-03T07:32:06Z | 8511001 | 22227047 |
| 2025-02-06T19:35:57Z | 9947784 | 32174831 |
| 2025-02-10T07:39:43Z | 8653004 | 40827835 |
| 2025-02-13T19:44:19Z | 10753704 | 51581539 |
| 2025-02-18T05:43:42Z | 691061 | 52272600 |
| 2025-02-18T20:45:01Z | 639920 | 52912520 |
|
voxaiorg/drivethru-context-v2-all-preference-1-pairs-mistralai-Mistral-Small-3.1-24B-Instruct-2503 | voxaiorg | 2025-05-12T16:40:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T16:39:56Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 395333849
num_examples: 4373
- name: validation
num_bytes: 47938299
num_examples: 524
- name: test
num_bytes: 60572445
num_examples: 655
download_size: 102808521
dataset_size: 503844593
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Ultralordb0d/medical_asr_prediction_usm_augmented | Ultralordb0d | 2025-05-12T16:33:47Z | 116 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-19T10:23:56Z | null | ---
dataset_info:
features:
- name: filename
dtype: string
- name: transcription
dtype: string
- name: prediction
dtype: string
- name: prediction_ph_par_all
dtype: string
- name: transcription_clean
dtype: string
- name: prediction_par_all
dtype: string
- name: prediction_par_all_medicalterms
dtype: string
- name: prediction_par_all_not_in_nltk
dtype: string
- name: prediction_embedding_model
dtype: string
- name: prediction_embedding_model_not_nltk
dtype: string
splits:
- name: train
num_bytes: 219249300
num_examples: 200000
download_size: 130065085
dataset_size: 219249300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
madeofajala/MalariaLegacyLLM | madeofajala | 2025-05-12T16:16:32Z | 0 | 0 | [
"task_categories:text-classification",
"task_categories:table-question-answering",
"task_categories:zero-shot-classification",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.06316",
"region:us",
"biology",
"chemistry",
"medical"
] | [
"text-classification",
"table-question-answering",
"zero-shot-classification"
] | 2025-05-12T15:10:05Z | null | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Answer
dtype: string
- name: CANONICAL_SMILES
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 77059720
num_examples: 94916
- name: validation
num_bytes: 19254722
num_examples: 23762
- name: test
num_bytes: 24133827
num_examples: 29674
download_size: 16354853
dataset_size: 120448269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: mit
task_categories:
- text-classification
- table-question-answering
- zero-shot-classification
language:
- en
tags:
- biology
- chemistry
- medical
pretty_name: malariallm
size_categories:
- 100K<n<1M
---
# Dataset Card for MalariaLegacyLLM
<!-- Provide a quick summary of the dataset. -->
This is an instruction-tuned subset of the ChEMBL Malaria Legacy dataset, designed for using LLMs for virtual screening.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset is compiled from the CHEMBL Malaria Legacy dataset [link](https://chembl.gitbook.io/chembl-interface-documentation/legacy-resources).
The data has been formatted for instruction tuning as proposed in [Tx-LLM: A Large Language Model for Therapeutics](https://arxiv.org/pdf/2406.06316) and TxGemma: Efficient and Agentic LLMs for Therapeutics.
The prompt consists of 4 parts:
1.) **Instruction**: General instruction to the LLMs to provide an accurate answer about the assay and molecule.
2.) **Context**: Information sourced from the literature about the specific assay, as described in the original dataset's ASSAY_DESCRIPTION, as well as the target (protein or cell line).
3.) **Question**: A command to predict, given the molecule and assay information, whether the molecule is (A) active or (B) inactive.
4.) **Answer**: The ground-truth label, (A) Active or (B) Inactive.
- **Curated by:** Marvellous Ajala
- **Sourced From:** CHEMBL
- **Language(s) (NLP):** English
- **License:** MIT
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This dataset is designed for fine-tuning general-purpose LLMs (Llama, Gemma, etc.) for virtual screening in malaria.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
To use this dataset
```python
import datasets
dataset = datasets.load_dataset('madeofajala/MalariaLegacyLLM')
# Display the first example
print(dataset['train'][0])
```
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset contains 3 parts:
- Trainset
- Val set
- Test set
The dataset was split using a scaffold-based split. Each set contains:
- Prompt: information about the assay and target in natural language
- Answer: (A) if active and (B) if inactive
- CANONICAL_SMILES: SMILES of molecule in focus
- serial_number
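The scaffold-based split mentioned above can be reproduced in spirit with RDKit's Bemis–Murcko scaffolds; the sketch below illustrates the general technique and is not the exact procedure used to build the released splits.
```python
# Sketch of a Bemis-Murcko scaffold grouping, as used for scaffold-based splits.
from collections import defaultdict
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_groups(smiles_list):
    """Group molecule indices by their Murcko scaffold SMILES."""
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        mol = Chem.MolFromSmiles(smi)
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol) if mol else ""
        groups[scaffold].append(i)
    return groups  # keep each group entirely in train, validation, or test
```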
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was curated for instruction-tuning LLMs for virtual screening of malaria using natural language (English).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process, such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
This dataset consists of only the potency and IC50 subset of the original dataset. It was curated to contain only assays with a definitive conclusion about a molecule's activity (active or inactive).
Molecules with two or more conflicting activity values (e.g. active at one concentration but inactive at another) were removed entirely.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Different pharma companies and independent researchers, including but not limited to:
- Scientific Literature
- TP-search Transporter Database
- PubChem BioAssays
- Open TG-GATEs
- GSK Published Kinase Inhibitor Set
- Sanger Institute Genomics of Drug Sensitivity in Cancer
- Guide to Receptors and Channels
- DrugMatrix in vitro pharmacology assays
- Drugs for Neglected Diseases Initiative (DNDi)
- St Jude Malaria Screening
- WHO-TDR Malaria Screening
- MMV Malaria Box
- GSK Malaria Screening
- Novartis Malaria Screening
- GSK Tuberculosis Screening
- Harvard Malaria Screening
- OSDD Malaria Screening
The original dataset was compiled by the EMBL-EBI team.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users are advised to cross-reference this dataset with the original dataset provided by ChEMBL. Also, since the dataset is a combination of multiple collated tasks and projects from different sources, users should be aware of the implications of this for their task at hand.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{malarialegacyllm,
  title = {Malaria Legacy Dataset for LLM},
  author = {Marvellous Ajala},
  year = {2025},
  publisher = {Hugging Face Datasets},
  version = {1.0.0},
  url = {https://huggingface.co/datasets/madeofajala/MalariaLegacyLLM},
}
``` |
anonymous-author/paper_data | anonymous-author | 2025-05-12T16:13:49Z | 0 | 0 | [
"license:cc-by-sa-4.0",
"region:us"
] | [] | 2025-05-12T16:12:26Z | null | ---
license: cc-by-sa-4.0
---
|
idoco/PopVQA | idoco | 2025-05-12T15:52:23Z | 84 | 2 | [
"task_categories:visual-question-answering",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"modality:image",
"region:us"
] | [
"visual-question-answering"
] | 2025-05-08T06:28:52Z | null | ---
license: mit
task_categories:
- visual-question-answering
language:
- en
pretty_name: PopVQA
size_categories:
- 10K<n<100K
---
# PopVQA: Popular Entity Visual Question Answering
PopVQA is a dataset designed to study the performance gap in vision-language models (VLMs) when answering factual questions about entities presented in **images** versus **text**.

## 🔍 Motivation
<img src="./paper_teaser.png" alt="Motivation" width="700">
PopVQA was curated to explore the disparity in model performance when answering factual questions about an entity described in text versus depicted in an image. This is achieved by asking the same questions twice: once with the textual representation (the entity's name) and once with the visual representation (the entity's image). We include several questions for every entity to allow a more fine-grained evaluation.
This dataset was introduced in the paper:
> **"Performance Gap in Entity Knowledge Extraction Across Modalities in Vision Language Models"**
> Ido Cohen, Daniela Gottesman, Mor Geva, Raja Giryes (2025)
## 📦 Dataset Structure
The dataset consists of:
- `entities.csv`: Metadata of 15,395 popular entities, of various types (celebrities, landmarks, logos, and paintings).
- `questions.csv`: Over 100,000 factual questions, each given in two forms: one referring to a **textual representation** and one referring to a **visual representation** of the entity.
- `original_path/`: Original images.
- `resized_path/`: Images resized to 336×336 with aspect ratio preserved via padding.
### `entities.csv` columns:
| Column | Description |
|------------------|---------------------------------------------|
| `type` | Entity type (e.g., `celebs`, `logos`) |
| `subject` | Entity name |
| `s_uri` | Wikidata URI of the subject |
| `popularity` | Wikipedia popularity score |
| `aliases` | Alternate names/aliases for the entity |
| `image` | Wikimedia Commons URL |
| `original_path` | Path to the original image |
| `resized_path` | Path to the 336x336 padded image |
### `questions.csv` columns:
| Column | Description |
|------------------------|--------------------------------------------------------------|
| `type` | Entity type |
| `subject` | Entity name |
| `question_for_image` | Question phrased for visual context (e.g., “...in this image?”) |
| `question` | Textual version of the same question |
| `possible_answers` | List of acceptable answers |
| `relation` | Relation name (e.g., occupation, language) |
| `s_uri`, `r_uri`, `a_uri` | Wikidata URIs for subject, relation, and answer |
| `attribute`, `a_type` | Answer string and attribute types (e.g., "language") |
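A minimal sketch for pairing each question with its entity metadata and image path is given below; it assumes the two CSVs can be joined on the shared `s_uri` and `subject` columns listed above.
```python
# Sketch: join questions with entity metadata to get image paths per question.
import pandas as pd

entities = pd.read_csv("entities.csv")
questions = pd.read_csv("questions.csv")

qa = questions.merge(
    entities[["s_uri", "subject", "resized_path"]],
    on=["s_uri", "subject"],
    how="left",
)
print(qa[["question", "question_for_image", "resized_path"]].head())
```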
|
Xecades/AerialExtreMatch-Benchmark | Xecades | 2025-05-12T15:39:49Z | 89 | 0 | [
"task_categories:depth-estimation",
"task_categories:keypoint-detection",
"task_categories:image-feature-extraction",
"license:mit",
"modality:image",
"region:us",
"image"
] | [
"depth-estimation",
"keypoint-detection",
"image-feature-extraction"
] | 2025-05-11T14:38:08Z | null | ---
license: mit
task_categories:
- depth-estimation
- keypoint-detection
- image-feature-extraction
pretty_name: AerialExtreMatch Benchmark
viewer: false
tags:
- image
---
# AerialExtreMatch — Benchmark Dataset
[Code](https://github.com/Xecades/AerialExtreMatch) | Project Page (WIP) | Paper (WIP)
This repo contains the **benchmark** set for our paper *AerialExtreMatch: A Benchmark for Extreme-View Image Matching and Localization*. 32 difficulty levels are included. We also provide [**train**](https://huggingface.co/datasets/Xecades/AerialExtreMatch-Train) and [**localization**](https://huggingface.co/datasets/Xecades/AerialExtreMatch-Localization) datasets.
> WARNING: work in progress.
## Usage
Simply clone this repository and unzip the dataset files.
```bash
git clone [email protected]:datasets/Xecades/AerialExtreMatch-Benchmark
cd AerialExtreMatch-Benchmark
unzip "*.zip"
```
TODO: provide a python example.
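In the meantime, a minimal loading sketch is given below. It assumes the unpacked layout and the `.npy` keys described under Dataset Structure, and that `rgb` stores relative image paths indexed by the `pair` entries; check these assumptions against the released files.
```python
# Sketch: inspect one class and open one image pair.
# Assumption: the .npy file stores a dict with the keys listed in this card.
import numpy as np
from PIL import Image

cls = "class_0"
meta = np.load(f"{cls}/{cls}.npy", allow_pickle=True).item()
print(sorted(meta.keys()))  # poses, intrinsics, depth, rgb, overlap, pitch, scale, pair

# Assumption: 'rgb' holds relative image paths and 'pair' indexes into them.
i, j = meta["pair"][0]
img_i = Image.open(f"{cls}/{meta['rgb'][i]}")
img_j = Image.open(f"{cls}/{meta['rgb'][j]}")
```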
## Dataset Structure
After unpacking each .zip file:
<pre>
.
└── class_[id] <i>(class_0~class_31)</i>
├── class_[id].npy
├── depth: *.exr
└── rgb: *.jpg
</pre>
- Keys of `class_[id].npy` files: `['poses', 'intrinsics', 'depth', 'rgb', 'overlap', 'pitch', 'scale', 'pair']`.
- Refer to the original paper for details on the difficulty classification.
|
autobio-bench/screw_tighten-blender | autobio-bench | 2025-05-12T15:14:08Z | 0 | 0 | [
"task_categories:robotics",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"medical"
] | [
"robotics"
] | 2025-05-12T15:12:57Z | null | ---
license: mit
task_categories:
- robotics
tags:
- LeRobot
- medical
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** mit
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 100,
"total_frames": 154830,
"total_tasks": 1,
"total_videos": 300,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:100"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"state": {
"dtype": "float32",
"shape": [
14
],
"names": [
"state"
]
},
"actions": {
"dtype": "float32",
"shape": [
14
],
"names": [
"actions"
]
},
"image": {
"dtype": "video",
"shape": [
224,
224,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 50.0,
"video.height": 224,
"video.width": 224,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"wrist_image": {
"dtype": "video",
"shape": [
224,
224,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 50.0,
"video.height": 224,
"video.width": 224,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"wrist_image_2": {
"dtype": "video",
"shape": [
224,
224,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 50.0,
"video.height": 224,
"video.width": 224,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
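A loading sketch using the `lerobot` library is below. It assumes the `LeRobotDataset` class exposed by recent `lerobot` releases and the feature names from `meta/info.json` above; check the API of the version you have installed.
```python
# Sketch, assuming the LeRobotDataset API of recent lerobot releases.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("autobio-bench/screw_tighten-blender")
sample = ds[0]  # one synchronized frame, keyed by the features listed above
print(sample["state"].shape, sample["actions"].shape)
```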
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
littleGuagua/x_dataset_8140 | littleGuagua | 2025-05-12T15:13:31Z | 1,161 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:25:56Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_8140
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HasdyDaczLXYaiykhuuszTMWS65QmAgo72UpwABUi3czyeu
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. The data is mostly English but may be multilingual because of the decentralized way in which it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_8140,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_8140},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 50376997
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T20:55:45Z
### Data Distribution
- Tweets with hashtags: 39.81%
- Tweets without hashtags: 60.19%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30319553 | 60.19% |
| 2 | #riyadh | 310085 | 0.62% |
| 3 | #zelena | 215655 | 0.43% |
| 4 | #tiktok | 192806 | 0.38% |
| 5 | #ad | 112205 | 0.22% |
| 6 | #bbb25 | 110854 | 0.22% |
| 7 | #grammys | 82659 | 0.16% |
| 8 | #jhope_at_galadespiècesjaunes | 70215 | 0.14% |
| 9 | #bbmzansi | 66978 | 0.13% |
| 10 | #sixtonesann | 65126 | 0.13% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:26:49Z | 2721817 | 2721817 |
| 2025-01-30T01:43:17Z | 9702324 | 12424141 |
| 2025-02-02T13:47:13Z | 12507356 | 24931497 |
| 2025-02-06T01:50:29Z | 8691717 | 33623214 |
| 2025-02-09T13:54:19Z | 8748247 | 42371461 |
| 2025-02-13T02:21:42Z | 6726572 | 49098033 |
| 2025-02-18T05:54:36Z | 648154 | 49746187 |
| 2025-02-18T20:55:45Z | 630810 | 50376997 |
|
jasonzheng/ioi-2024-bf16 | jasonzheng | 2025-05-12T14:59:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T14:59:07Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: year
dtype: string
- name: uuid
dtype: string
- name: code
dtype: string
- name: subtask
dtype: string
splits:
- name: train
num_bytes: 3517307
num_examples: 2050
download_size: 687903
dataset_size: 3517307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anonymous-user-546/test-csv-conversion | anonymous-user-546 | 2025-05-12T14:45:56Z | 79 | 0 | [
"license:cc-by-nc-nd-4.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-07T20:07:06Z | null | ---
license: cc-by-nc-nd-4.0
---
|
liwu/MNBVC | liwu | 2025-05-12T14:30:02Z | 36,760 | 541 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2023-02-13T14:00:47Z | null | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Introduction](#dataset-introduction)
- [Data Subsets](#data-subsets)
- [Data Format](#data-format)
- [Text Data](#text-data)
- [QA Data](#qa-data)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Introduction
On 1 January 2023, the Liwu (里屋) community — the oldest and most mysterious community on the Chinese internet (bar none) — solemnly announced:
Under the wise and valiant leadership of the Liwu moderators, the community is determined to play to its strengths (it has many) and help the open-source community maintain, over the long term, the largest corpus of Chinese internet text.
The MNBVC dataset on Hugging Face is updated progressively; please visit [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet finished cleaning.
It can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset)) # get the first line
```
## Data Subsets
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal documents.
- `gov_xuexiqiangguo`: text from Xuexi Qiangguo (学习强国).
- `gov_report`: text from government work reports.
- `co_ann_report`: text from corporate annual reports.
- `code_metadata`: code metadata.
- `qa_zhihu`: Q&A data from [Zhihu](https://huggingface.co/datasets/wangrui6/Zhihu-KOL).
- `qa_wikihow`: Q&A data from wikiHow.
- `qa_mfa`: Q&A data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text from the People's Daily.
- `wikipedia`: text from Wikipedia.
- `qa_stackexchange`: Q&A data from StackExchange.
- `qa_chatgpt`: Q&A data constructed with ChatGPT; thanks to [genggui001](https://github.com/genggui001) for contributing it.
- `math`:
  - `math_qa`: Q&A data related to mathematics.
  - `emath`: data from a Chinese mathematics enthusiasts' forum.
  - `math_chat`: dialogue data related to mathematics, which can improve a model's chain-of-thought ability.
- `crawler_oscar`: general text cleaned from CommonCrawl.
- `game`: parallel corpora from several video games.
  - `Hogwarts_legacy`: Hogwarts Legacy
  - `The_Wither_3`: The Witcher 3
  - `Baldurs_Gate_3`: Baldur's Gate 3
  - `GTA`: Grand Theft Auto IV and Grand Theft Auto V
  - `Turing_Complete`: Turing Complete
  - `EldenRing`: Elden Ring
  - `hades`: Hades
  - `sekiro`: Sekiro
- `parallel`: parallel corpora
  - `subtitle`: subtitle corpora
    - `yyets`: YYeTs (人人影视) subtitles
  - `united_nations`: United Nations parallel corpus
- `blog`: blog corpora
- `book`: book corpora
## Data Format
The MNBVC dataset currently contains the following categories of data:
- general text
- Q&A corpora
- code corpora
- multi-turn dialogue
- forum corpora
- parallel corpora
The exact format of each of these categories is documented on the [MNBVC wiki page](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F).
Data uploaded in the early stages of the project uses the following format; this format will be deprecated and the corresponding data will be re-uploaded:
```json
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
```
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) and [jiaming](https://huggingface.co/Yjiaming) for adding and uploading this dataset to Huggingface.
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{mnbvc,
author = {{MOP-LIWU Community} and {MNBVC Team}},
title = {MNBVC: Massive Never-ending BT Vast Chinese corpus},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/esbatmop/MNBVC}},
}
```
|
momo1942/x_dataset_44829 | momo1942 | 2025-05-12T14:24:28Z | 2,579 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T09:49:03Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** momo1942/x_dataset_44829
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5CacbhmQxhAVGWgrYvCypqhR3n3mNmmWEA8JYzAVghmTDYZy
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. The data is mostly English but may be multilingual because of the decentralized way in which it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{momo19422025datauniversex_dataset_44829,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={momo1942},
year={2025},
url={https://huggingface.co/datasets/momo1942/x_dataset_44829},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 47517552
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T20:42:58Z
### Data Distribution
- Tweets with hashtags: 46.37%
- Tweets without hashtags: 53.63%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 25482638 | 53.63% |
| 2 | #riyadh | 369646 | 0.78% |
| 3 | #zelena | 283758 | 0.60% |
| 4 | #tiktok | 222947 | 0.47% |
| 5 | #ad | 122468 | 0.26% |
| 6 | #bbb25 | 83620 | 0.18% |
| 7 | #bbmzansi | 82423 | 0.17% |
| 8 | #jhope_at_galadespiècesjaunes | 72240 | 0.15% |
| 9 | #trump | 71073 | 0.15% |
| 10 | #pr | 65594 | 0.14% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T09:50:05Z | 3300536 | 3300536 |
| 2025-01-30T21:53:32Z | 11415036 | 14715572 |
| 2025-02-03T09:57:03Z | 9268666 | 23984238 |
| 2025-02-06T21:59:40Z | 5892953 | 29877191 |
| 2025-02-10T10:02:47Z | 6650635 | 36527826 |
| 2025-02-13T22:07:25Z | 9649951 | 46177777 |
| 2025-02-18T05:41:46Z | 692358 | 46870135 |
| 2025-02-18T20:42:58Z | 647417 | 47517552 |
|
jazasyed/musdb-alt | jazasyed | 2025-05-12T14:15:52Z | 28 | 0 | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us",
"music",
"lyrics",
"evaluation",
"benchmark",
"transcription"
] | [
"automatic-speech-recognition"
] | 2025-04-16T11:16:20Z | null | ---
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: MUSDB-ALT
license: cc-by-nc-sa-4.0
---
# Dataset Card for MUSDB-ALT
This dataset contains long-form lyric transcripts following the Jam-ALT [guidelines](https://huggingface.co/datasets/jamendolyrics/jam-alt/blob/main/GUIDELINES.md)
for the test set of the dataset [MUSDB18](https://sigsep.github.io/datasets/musdb.html), with line-level timings.
## Dataset Details
The dataset was constructed manually, based on the [MUSDB18 lyrics extension](https://zenodo.org/records/3989267) as a starting point.
The lyrics extension contains transcripts of the 45 English language songs out of the 50 in the MUSDB18 test set.
We annotated 39 of those 45 songs, excluding 6 for the following reasons:
- Signe Jakobsen - What Have You Done To Me : Three overlapping vocal lines that could not be separated into lead and backing vocals
- PR - Happy Daze : Vocal content primarily from highly processed vocal samples
- PR - Oh No : Vocal content primarily from highly processed vocal samples
- Skelpolu - Resurrection : Vocal content primarily from highly processed vocal samples
- Timboz - Pony : Lyrics unintelligible due to screamed enunciation style
- Triviul feat The Fiend - Widows : Three overlapping vocal lines that could not be separated into lead and backing vocals
### Dataset Description
**Paper:** The dataset was introduced in the paper [Exploiting Music Source Separation for Automatic Lyrics Transcription with Whisper](https://arxiv.org/abs/XXXXX),
published at the Workshop [Artificial Intelligence For Music](https://ai4musicians.org/2025icme.html) at ICME 2025
- **Funding:** This work was supported by InnovateUK [Grant Number 10102804]
- **License:** https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en
## Citation
**BibTeX:**
```
@inproceedings{syed-2025-mss-alt,
author = {Jaza Syed and
Ivan Meresman-Higgs and
Ond{\v{r}}ej C{\'{\i}}fka and
Mark Sandler},
title = {Exploiting Music Source Separation for Automatic Lyrics Transcription with {Whisper}},
booktitle = {2025 {IEEE} International Conference on Multimedia and Expo Workshops (ICMEW)},
publisher = {IEEE},
year = {2025},
note = {In press}
}
```
|
rainbowbridge/x_dataset_20722 | rainbowbridge | 2025-05-12T14:04:19Z | 1,203 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T01:31:47Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** rainbowbridge/x_dataset_20722
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5EXTMFUDy34PkND7RWEEXb4vdr3JXmFXesoygkHDrim7GfR5
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. The data is mostly English but may be multilingual because of the decentralized way in which it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
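Since there are no fixed splits, one simple option is a time-based split on the `datetime` field. The sketch below is only an illustration: it assumes the dataset exposes a default `train` split and that `datetime` is an ISO-formatted string as described above; the cutoff date is hypothetical.

```python
# Minimal sketch of a custom time-based split (cutoff date is hypothetical).
from datasets import load_dataset

ds = load_dataset("rainbowbridge/x_dataset_20722", split="train")

cutoff = "2025-02-01"  # illustrative boundary; ISO date strings compare lexicographically
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```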
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{rainbowbridge2025datauniversex_dataset_20722,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={rainbowbridge},
year={2025},
url={https://huggingface.co/datasets/rainbowbridge/x_dataset_20722},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 53014608
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T18:57:54Z
### Data Distribution
- Tweets with hashtags: 41.60%
- Tweets without hashtags: 58.40%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30961185 | 58.40% |
| 2 | #riyadh | 327113 | 0.62% |
| 3 | #zelena | 254157 | 0.48% |
| 4 | #tiktok | 216346 | 0.41% |
| 5 | #bbb25 | 161006 | 0.30% |
| 6 | #ad | 125530 | 0.24% |
| 7 | #royalrumble | 75597 | 0.14% |
| 8 | #bbmzansi | 71549 | 0.13% |
| 9 | #pr | 69916 | 0.13% |
| 10 | #yahooニュース | 65493 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T01:32:09Z | 1057227 | 1057227 |
| 2025-01-30T13:48:23Z | 11631895 | 12689122 |
| 2025-02-03T01:51:30Z | 8401846 | 21090968 |
| 2025-02-06T13:56:34Z | 12297890 | 33388858 |
| 2025-02-10T01:59:57Z | 8203885 | 41592743 |
| 2025-02-13T14:08:19Z | 10112124 | 51704867 |
| 2025-02-18T03:56:41Z | 648961 | 52353828 |
| 2025-02-18T18:57:54Z | 660780 | 53014608 |
|
zerostratos/chunks | zerostratos | 2025-05-12T13:53:24Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T13:53:14Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68005509
num_examples: 189426
download_size: 37395152
dataset_size: 68005509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gmanolache/CrypticBio | gmanolache | 2025-05-12T13:35:27Z | 465 | 0 | [
"task_categories:zero-shot-classification",
"language:en",
"license:cc",
"size_categories:100M<n<1B",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"biodiverstiy",
"cryptic species",
"fine-grained image recognition",
"vision-language",
"multimodal dataset"
] | [
"zero-shot-classification"
] | 2025-04-27T06:35:09Z | null | ---
license: cc
task_categories:
- zero-shot-classification
language:
- en
tags:
- biodiverstiy
- cryptic species
- fine-grained image recognition
- vision-language
- multimodal dataset
pretty_name: A Large Multimodal Dataset for Visually Confusing Biodiversity
size_categories:
- 100M<n<1B
---
# CrypticBio: A Large Multimodal Dataset for Visually Confusing Biodiversity
<!-- Banner links -->
<div style="text-align:left;">
<a href="https://georgianagmanolache.github.io/crypticbio/" target="_blank" style="display:inline-block;">
<img src="https://img.shields.io/badge/Project%20Page-Visit-blue" alt="Project Page">
</a>
<a href="https://github.com/georgianagmanolache/crypticbio" target="_blank" style="display:inline-block;">
<img src="https://img.shields.io/badge/GitHub-Visit-lightgrey" alt="GitHub">
</a>
</div>
## Description
[CrypticBio](https://georgianagmanolache.github.io/crypticbio/) comprises metadata including scientific and multicultural vernacular species terminology, image URL, taxonomic hierarchy, spatiotemporal context, and cryptic species group. Cryptic species are groups of two or more taxa that are nearly indistinguishable based on visual characteristics alone.
## CrypticBio Dataset
We present CrypticBio, the largest publicly available multimodal dataset of visually confusing species groups, specifically curated to support the development of AI models in the context of biodiversity identification applications.
Curated from real-world trends in species misidentification among community annotators of iNaturalist, CrypticBio contains 67K cryptic species groups spanning 52K species, represented in 166 million images.
## New Benchmark Datasets
We created four new benchmark datasets for fine-grained image classification of cryptic species.
### CrypticBio-Common
We curate common species from Arachnida, Aves, Insecta, Plantae, Fungi, Mollusca, and Reptilia and their associated cryptic groups, spanning n=158 species. We randomly select 100 samples from each species in a cryptic group where there are more than 150 observations per species.
### CrypticBio-CommonUnseen
To assess zero-shot performance on common species from CrypticBio-Common not encountered during training of state-of-the-art models, we specifically curate a subset spanning data from 01-09-2024 to 01-04-2025. We randomly select 100 samples from each species in a cryptic group where there are more than 150 observations per species, spanning n=133 species.
### CrypticBio-Endangered
We propose a cryptic species subset of endangered species according to the global IUCN Red List. We randomly select 30 samples from Arachnida, Aves, Insecta, Plantae, Fungi, Mollusca, and Reptilia and their associated cryptic groups, spanning n=37 species, filtering out taxa with fewer than 150 observations.
### CrypticBio-Invasive
We also propose a cryptic species subset of invasive alien species (IAS) according to the Global Invasive Species Database (GISD). IAS are a significant concern for biodiversity, as their records appear to be rising exponentially across the Earth. We randomly select 100 samples from each invasive species cryptic group, spanning n=72 species, filtering out taxa with fewer than 150 observations.
## Dataset Information
### Directory
```plaintext
main/
├── CrypticBio/
│ ├── part_0.csv
│ ├── part_0.parquet
│ ├── part_1.parquet
│ ├── .
│ ├── .
│ ├── .
│ └── part_626.parquet
├── CrypticBio-benchmarks/
│ ├── CrypticBio-Common.csv
│ ├── CrypticBio-CommonUnseen.csv
│ ├── CrypticBio-Endangered.csv
│ └── CrypticBio-Invasive.csv
├──README.md
└──.gitattributes
```
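As a quick way to explore the layout above, the following minimal sketch downloads one benchmark CSV from the Hub and inspects it; it assumes only the repository structure shown in the tree (no column names are assumed, they are simply printed).

```python
# Minimal sketch: fetch and inspect one benchmark CSV (assumes the layout shown above).
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="gmanolache/CrypticBio",
    filename="CrypticBio-benchmarks/CrypticBio-Common.csv",
    repo_type="dataset",
)
df = pd.read_csv(csv_path)
print(df.shape)
print(df.columns.tolist())  # inspect which metadata fields are available
```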
The data and the code are publicly available at [georgianagmanolache.github.io/crypticbio](https://georgianagmanolache.github.io/crypticbio/) |
TrojAI/updatedviolence | TrojAI | 2025-05-12T13:24:56Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T13:24:46Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: expected_response
dtype: string
- name: label
dtype: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 39363138
num_examples: 88910
- name: test
num_bytes: 2996268
num_examples: 7020
download_size: 23764400
dataset_size: 42359406
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
SYSUSELab/RustEvo2 | SYSUSELab | 2025-05-12T12:40:06Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-12T12:38:13Z | null | ---
license: apache-2.0
---
# RustEvo²
RustEvo² is the first benchmark for evaluating LLMs' ability to adapt to evolving Rust APIs, as described in the paper "RustEvo²: An Evolving Benchmark for API Evolution in LLM-based Rust Code Generation".
## Dataset Overview
Our work can be divided into two phases:
Phase I: API Evolution Data Collection - We collect API changes from multiple sources including official Rust repositories and third-party crates. We analyze changelogs, documentation, and implementation changes to identify and categorize API evolutions into Stabilizations, Signature Changes, Behavioral Changes, and Deprecations.
Phase II: RustEvo² Construction - We transform the collected API evolution data into natural programming tasks using an LLM-based generation pipeline. This process creates programming queries, code solutions, and test programs that implicitly require the use of specific API versions.
The following figure illustrates our two-phase framework:
<div align="center">
<img src="Imgs/overview.png" alt="RustEvo² Framework Overview" width="100%"/>
</div>
### Dataset Format
RustEvo² consists of 588 API changes (380 from Rust standard libraries, 208 from 15 third-party crates) spanning versions 1.71.0 to 1.84.0. These changes are categorized into four types: Stabilizations (31.3%), Signature Changes (31.5%), Behavioral Changes (33.2%), and Deprecations (4.1%), reflecting their actual distribution in the Rust ecosystem.
Each task in RustEvo² consists of <API change information, programming query, function signature, reference solution, test program>. The API change information includes name, module path, version details, documentation, and source code. Programming queries describe real-world scenarios without explicitly mentioning the API. Function signatures guide implementation without revealing API specifics. Test programs verify correct API usage and functional behavior.
One task example:
```json
{
"task_idx": 39,
"query": "In a performance-critical application, you need to efficiently update a large collection of objects by cloning their state from another collection. The objects implement a custom `Clone` trait, but you want to avoid unnecessary trait bounds that could complicate the implementation. Design a function to handle this cloning operation efficiently.",
"function_signature": "fn update_collection<T: Clone>(target: &mut Vec<T>, source: &Vec<T>)",
"code": "fn update_collection<T: Clone>(target: &mut Vec<T>, source: &Vec<T>) {\n target.truncate(source.len());\n for (t, s) in target.iter_mut().zip(source.iter()) {\n t.clone_from(s);\n }\n if target.len() < source.len() {\n target.extend(source[target.len()..].iter().cloned());\n }\n}",
"test_program": "..."
},
```
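As a minimal sketch of how a task entry can be inspected programmatically, the snippet below assumes the tasks are stored as a JSON array with the fields shown above; the file path is taken from the pipeline commands further below and may differ in your local setup.

```python
# Minimal sketch: load generated tasks and print the fields shown above.
# Assumes a JSON array of task objects; the path follows the pipeline commands below.
import json

with open("data/test_programs/test_programs_rust.json") as f:
    tasks = json.load(f)

task = tasks[0]
print(task["task_idx"], task["query"])
print(task["function_signature"])
```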
## Usage
### Setup
1. Environment Setup:
```bash
conda create -n RustEvo python=3.8
conda activate RustEvo
pip install -r requirements.txt
```
2. Install Rust toolchain
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup toolchain install 1.71.0 1.72.0 1.73.0 1.74.0 1.75.0 1.76.0 1.77.0 1.78.0 1.79.0 1.80.0 1.81.0 1.82.0 1.83.0 1.84.0
```
### Construct your own evolving dataset
If you don't want to construct a new dataset, you can directly use the existing dataset in the `data` folder.
1. Phase I: API Evolution Collection
```bash
python scripts/rust_api_analyzer.py --repo ./rust-repo --output ./reports --start 1.72.0 --end 1.84.0
python scripts/crate_analyzer.py --crates_num 15 --start_date 2024-01-01 --end_date 2025-02-21
```
2. Phase II: Task Generation
```bash
python scripts/generate_query.py --input ./reports/rust_api_changes.json --output ./data/queries/queries_rust.json
python scripts/generate_code.py --input ./data/queries/queries_rust.json --output ./data/codes/codes_rust.json
python scripts/generate_test.py --input_file ./data/codes/codes_rust.json --output_file ./data/test_programs/test_programs_rust.json
```
### Evaluate
1. Replace the target LLM in evaluate/generation.py
2. Run the evaluation script
```bash
cd evaluate
./run.sh eval_models.py --model_name
```
## Results
Some important results of our experiments:
### Performance by Model
| Model | Pass@1 (%) | API Usage Accuracy (%) | Coverage (%) |
|-------|------------|---------|--------------|
| Claude-3.7-Sonnet | 65.3 | 78.2 | 83.6 |
| o1-mini | 57.5 | 70.4 | 85.2 |
| GPT-4o | 55.4 | 68.4 | 77.2 |
| Gemini-1.5-Pro | 55.3 | 62.6 | 60.9 |
| DeepSeek-v3 | 54.8 | 69.7 | 71.0 |
| Gemini-2.0-Flash | 52.6 | 73.5 | 72.5 |
| Llama-3.1-70B | 51.0 | 65.3 | 69.0 |
| Qwen-2.5-72B | 50.9 | 66.7 | 64.7 |
| Claude-3.5-Sonnet | 48.1 | 68.7 | 80.3 |
| Grok-3 | 40.5 | 67.2 | 70.4 |
### Performance by API Change Type
| Change Type | Average Pass@1 (%) |
|-------------|-------------------|
| Stabilizations | 65.8 |
| Signature Changes | 58.2 |
| Behavioral Changes | 38.0 |
| Deprecations | 40.4 |
Complete evaluation results and error analysis are [here](Results). |
amekerishvili/ATCO2_Callsigns | amekerishvili | 2025-05-12T12:14:59Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-12T12:13:45Z | null | ---
dataset_info:
features:
- name: audio_file
dtype: string
- name: ID
dtype: string
- name: ground_truth
dtype: string
- name: callsigns
dtype: string
- name: Callsigns_manual
dtype: string
- name: non_Eng_ground_truth
dtype: string
- name: tags
dtype: string
- name: airport
dtype: string
- name: channel
dtype: string
- name: whisper-large-v2-ANSP-3h1m
dtype: string
- name: ground_truth_norm
dtype: string
- name: whisper-large-v2-ANSP-3h1m_norm
dtype: string
- name: whisper-large-v2-ANSP-3h1m_norm_wer
dtype: float64
- name: callsigns_NER_error_rate
dtype: float64
- name: Callsigns_manual.1
dtype: string
splits:
- name: train
num_bytes: 265042
num_examples: 100
download_size: 96625
dataset_size: 265042
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KBayoud/Custom-3 | KBayoud | 2025-05-12T12:14:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T09:48:50Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1264783692.0
num_examples: 420
download_size: 1263426207
dataset_size: 1264783692.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/oasst1-english-tokenized-qwen2.5_noise20 | ma921 | 2025-05-12T12:08:52Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T12:08:47Z | null | ---
dataset_info:
features:
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
- name: flip
dtype: int64
splits:
- name: train
num_bytes: 28348816
num_examples: 6859
download_size: 7693055
dataset_size: 28348816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/oasst1-english-tokenized-phi2_noise0 | ma921 | 2025-05-12T11:58:45Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T11:58:41Z | null | ---
dataset_info:
features:
- name: sft_input_ids
sequence: int64
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
splits:
- name: train
num_bytes: 45647820.0
num_examples: 6859
download_size: 11661751
dataset_size: 45647820.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EdmondFU/Causal-Reasoning-Bench_CRBench | EdmondFU | 2025-05-12T11:48:53Z | 338 | 3 | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"region:us"
] | [
"question-answering"
] | 2025-04-23T07:13:49Z | null | ---
task_categories:
- question-answering
size_categories:
- 10K<n<100K
---
<p align="center">
<img src="CRBench.png" width="50%" height="50%">
</p>
<p align="left">
<img src="Deep.png" width="30%">
</p>
# 🦙Causal Reasoning Bench (CRBench)
Developing a labeled dataset with causal errors is crucial for evaluating the performance of causalizing methods for CoT reasoning. We propose CRBench as a benchmark to verify whether causalizing methods can effectively correct causal errors.
## 🦙Causal Error
We have summarized four types of causal errors that lead to CoT reasoning errors:
- **Measure error.**
Causal measurement error refers to the incorrect use of correlation indicators instead of causal indicators when measuring causal relationships, or the use of inappropriate causal measures (such as average treatment effect ATE, direct/indirect effects, etc.) when estimating causal effects.
- **Collider error.**
Collider error refers to the incorrect control or selection of a "collider" in causal reasoning, which introduces false correlation. A collider is a variable that is affected by two unrelated variables at the same time. If this collider is incorrectly controlled during analysis, it will cause false correlations between originally unrelated variables. Selection bias when selecting samples can have the same effect, making two originally unrelated variables appear to have a causal relationship.
- **Confounding error.**
Confounding error refers to the omission of a confounder in causal inference, leading to an observed causal effect that is not genuine but rather driven by a common influencing factor. It can also occur when variables that should not be included in the reasoning process are considered, such as residual information from a previous question, biases within the model, hallucinations, and other misleading factors.
- **Mediation error.**
Mediation error refers to the incorrect interpretation of the role of the mediating variable in causal inference, which may be due to incorrect control of the mediating variable, incorrect addition of the mediating variable, or ignoring the mediating path.
## 🦙Available Subsets
```
ds = load_dataset("EdmondFU/Causal-Reasoning-Bench_CRBench", split="train")
```
## Generated Process Description
<p align="center">
<img src="Error generated.png" width="50%" height="50%">
</p>
An example of generated causal error data:
- **Causality measure error:** In determining that "when the intersection is inside the circle, each line must be a secant," the reasoning overstates the impact of the intersection point's location. It erroneously asserts that "as long as the intersection is inside the circle, each line must intersect the circle at two points," ignoring the possibility that a line might intersect the circle at only one point (which would be a tangent).
- **Collider error:** When considering how the intersection point's position affects the relationship between the lines and the circle, the reasoning mistakenly treats the intersection position (inside, on, outside) as a "collider" that is simultaneously determined by both the type of the lines and the circle's position, mixing up independent factors.
- **Confounding error:** An unrelated external factor, the circle's radius, is incorrectly introduced as a confounding variable. It is mistakenly assumed to affect both the position of the intersection and the number of intersection points between the lines and the circle, which leads to an incorrect derivation of the number of possible configurations and obscures the causal relationship that depends solely on the intersection point's location.
- **Mediation error:** An unneeded, non-existent mediator variable called "penetration angle" is introduced, and the causal relationship between the intersection point's location and the line type is mistakenly assumed to be transmitted through this mediator, misrepresenting the relationships among variables.
## 🦙The CRBench dataset is generated based on publicly available high-quality reasoning datasets:
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- 🧠 [Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)
- 🧠 [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M)
Including:
Code
- 💻[BAAI/TACO](https://huggingface.co/datasets/BAAI/TACO)
- 💻[codeparrot/apps](https://huggingface.co/datasets/codeparrot/apps)
- 💻[deepmind/code_contests](https://huggingface.co/datasets/deepmind/code_contests)
- 💻[MatrixStudio/Codeforces-Python-Submissions](https://huggingface.co/datasets/MatrixStudio/Codeforces-Python-Submissions)
- 💻[livecodebench/execution-v2](https://huggingface.co/datasets/livecodebench/execution-v2)
- 💻[livecodebench/code_generation_lite](https://huggingface.co/datasets/livecodebench/code_generation_lite)
Math
- 🔢[AI-MO/NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)
- 🔢[Maxwell-Jia/AIME_2024](https://huggingface.co/datasets/Maxwell-Jia/AIME_2024)
- 🔢[game661100/MATH-500](https://huggingface.co/game661100/MATH-500)
Science
- 📊[camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- 📊[camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- 📊[camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
Puzzle
- 🤖[INK-USC/riddle_sense](https://huggingface.co/datasets/INK-USC/riddle_sense)
# 🦙Citation
```
@misc{CRbench,
author = {Jiarun Fu and Hao Li},
month = apr,
title = {Causal Reasoning Bench},
howpublished = {https://huggingface.co/datasets/EdmondFU/Causal-Reasoning-Bench_CRBench},
year = {2025}
}
```
# 🦙Contact Us
```
Jiarun Fu | PhD student at BIT: [email protected]
Hao Li | Master's student at BIT: [email protected]
```
|
YasmineMakni/so100_mvt_ball | YasmineMakni | 2025-05-12T11:37:49Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-12T09:55:58Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 40,
"total_frames": 11498,
"total_tasks": 1,
"total_videos": 80,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:40"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 1080,
"video.width": 1920,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
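For a quick look at the tabular features described above, the parquet files declared in the YAML header (`data/*/*.parquet`) can be loaded directly with the `datasets` library; this minimal sketch does not decode the MP4 videos referenced by the `observation.images.*` keys.

```python
# Minimal sketch: load the tabular part of the dataset (videos are stored separately as MP4).
from datasets import load_dataset

ds = load_dataset("YasmineMakni/so100_mvt_ball", split="train")
print(ds.features)
print(ds[0]["action"])  # 6-dimensional action vector, per the schema above
```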
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
James096/x_dataset_127 | James096 | 2025-05-12T11:26:34Z | 20 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-05-07T19:29:50Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** James096/x_dataset_127
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5D2KKAGcf1bHnT71v5jsw9TJBmQto5PhYKRSPcJDhk8gqSXj
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: The dataset is mostly English, but can be multilingual due to the decentralized way it is created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{James0962025datauniversex_dataset_127,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={James096},
year={2025},
url={https://huggingface.co/datasets/James096/x_dataset_127},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 80481
- **Date Range:** 2025-04-06T00:00:00Z to 2025-05-06T00:00:00Z
- **Last Updated:** 2025-05-12T11:26:31Z
### Data Distribution
- Tweets with hashtags: 99.99%
- Tweets without hashtags: 0.01%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | #bitcoin | 5883 | 7.31% |
| 2 | #trump | 4062 | 5.05% |
| 3 | #crypto | 3581 | 4.45% |
| 4 | #btc | 1815 | 2.26% |
| 5 | #ai | 1564 | 1.94% |
| 6 | #tao | 1539 | 1.91% |
| 7 | #ethereum | 1538 | 1.91% |
| 8 | #binance | 1324 | 1.65% |
| 9 | #artificialintelligence | 1307 | 1.62% |
| 10 | #cardano | 1154 | 1.43% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-05-11T17:07:25Z | 80480 | 80480 |
| 2025-05-12T11:26:31Z | 1 | 80481 |
|
ncavallo/so100_test_lerobot2_4 | ncavallo | 2025-05-12T11:24:50Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-12T11:16:08Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 245,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.robot": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
yoonholee/completions_Qwen3-1.7B_AIME2025 | yoonholee | 2025-05-12T11:13:44Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T11:13:42Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: completions
sequence: string
- name: answer
dtype: string
- name: corrects
sequence: bool
- name: acc
dtype: float64
splits:
- name: train
num_bytes: 12414044
num_examples: 30
download_size: 4371593
dataset_size: 12414044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Botai666/Medical_VLM_Sycophancy | Botai666 | 2025-05-12T11:04:21Z | 15 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-27T07:56:43Z | null | ---
license: apache-2.0
---
|
alozowski/hf_doc_test | alozowski | 2025-05-12T10:58:58Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T10:44:22Z | null | ---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
- name: chunk_info_metrics
list:
- name: avg_token_length
dtype: float64
- name: bigram_diversity
dtype: float64
- name: flesch_reading_ease
dtype: float64
- name: gunning_fog
dtype: float64
- name: perplexity
dtype: float64
- name: token_count
dtype: float64
- name: unique_token_ratio
dtype: float64
- name: chunking_model
dtype: string
splits:
- name: train
num_bytes: 113907
num_examples: 1
download_size: 82204
dataset_size: 113907
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 44906
num_examples: 1
download_size: 17845
dataset_size: 44906
- config_name: lighteval
features:
- name: question
dtype: string
- name: additional_instructions
dtype: string
- name: ground_truth_answer
dtype: string
- name: gold
sequence: int64
- name: choices
sequence: string
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
- name: document_summary
dtype: string
- name: answer_citation_score
dtype: float64
- name: chunk_citation_score
dtype: float64
- name: citation_score
dtype: float64
splits:
- name: train
num_bytes: 890444
num_examples: 16
download_size: 50645
dataset_size: 890444
- config_name: multi_hop_questions
features:
- name: document_id
dtype: string
- name: source_chunk_ids
sequence: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: choices
sequence: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: citations
sequence: string
- name: raw_response
dtype: string
splits:
- name: train
num_bytes: 39753
num_examples: 4
download_size: 16677
dataset_size: 39753
- config_name: single_shot_questions
features:
- name: chunk_id
dtype: string
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: choices
sequence: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 112399
num_examples: 12
download_size: 25706
dataset_size: 112399
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_chunk_summaries
sequence: string
- name: chunk_summaries
sequence: string
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 51854
num_examples: 1
download_size: 52857
dataset_size: 51854
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: multi_hop_questions
data_files:
- split: train
path: multi_hop_questions/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
gavrelina/test_dataset | gavrelina | 2025-05-12T10:56:54Z | 64 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-07T12:30:15Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# test_dataset
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
babs/english-labelled-audio | babs | 2025-05-12T10:53:28Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T03:22:23Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: stamps
list:
- name: clean
dtype: bool
- name: end
dtype: float64
- name: speaker
dtype: string
- name: start
dtype: float64
- name: clean
dtype: bool
- name: chunk_start
dtype: int64
- name: chunk_end
dtype: int64
splits:
- name: train
num_bytes: 33954923563.228
num_examples: 26921
download_size: 34237305869
dataset_size: 33954923563.228
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
malaysia-ai/malaysian-youtube-filtered-24k | malaysia-ai | 2025-05-12T10:52:57Z | 137 | 0 | [
"language:ms",
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2024-11-12T05:42:20Z | null | ---
language:
- ms
viewer: false
license: cc-by-nc-4.0
---
# Filtered Malaysian Youtube
Originally from https://huggingface.co/datasets/malaysia-ai/malaysian-youtube, we filtered to audio shorter than 4 hours and converted it to a 24k sampling rate for audio processing.
## how to download
```bash
huggingface-cli download --repo-type dataset \
--include '*.z*' \
--local-dir './' \
malaysia-ai/malaysian-youtube-filtered-24k
wget https://www.7-zip.org/a/7z2301-linux-x64.tar.xz
tar -xf 7z2301-linux-x64.tar.xz
~/7zz x filtered-24k.zip -y -mmt40
``` |
AI-ISL/DUSK | AI-ISL | 2025-05-12T10:31:01Z | 311 | 1 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_categories:other",
"annotations_creators:machine-generated",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:n<1K",
"modality:text",
"region:us",
"unlearning",
"selective-forgetting",
"multi-source",
"benchmark",
"language-models",
"DUSK"
] | [
"question-answering",
"multiple-choice",
"other"
] | 2025-04-26T14:41:07Z | null | ---
datasets:
- AI-ISL/DUSK
annotations_creators:
- machine-generated
language:
- en
license: mit
pretty_name: DUSK
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- unlearning
- selective-forgetting
- multi-source
- benchmark
- language-models
- DUSK
task_categories:
- question-answering
- multiple-choice
- other
dataset_type: benchmark
configs:
- config_name: eval_general_qa
data_files:
- split: eval
path: eval_general_qa.jsonl
- config_name: eval_specific_forget_qa
data_files:
- split: eval
path: eval_specific_forget_qa.jsonl
- config_name: eval_specific_retain_qa
data_files:
- split: eval
path: eval_specific_retain_qa.jsonl
- config_name: eval_icl
data_files:
- split: eval
path: eval_icl.jsonl
- config_name: eval_icl_mcqa
data_files:
- split: eval
path: eval_icl_mcqa.jsonl
- config_name: eval_verbatim
data_files:
- split: eval
path: eval_verbatim.json
- config_name: eval_holdout
data_files:
- split: eval
path: "eval_holdout-*.parquet"
- config_name: raw
data_files:
- split: forget_chronological
path: "raw/forget_chronological-*.parquet"
- split: retain_feature_story
path: "raw/retain_feature_story-*.parquet"
- split: retain_interview
path: "raw/retain_interview-*.parquet"
- split: retain_inverted_pyramid
path: "raw/retain_inverted_pyramid-*.parquet"
- split: retain_listicle
path: "raw/retain_listicle-*.parquet"
- split: full
path: "raw/full-*.parquet"
dataset_info:
- config_name: eval_general_qa
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: eval
num_bytes: 9035
num_examples: 134
download_size: 0
dataset_size: 9035
- config_name: eval_holdout
features:
- name: text
dtype: string
splits:
- name: eval
num_bytes: 215202
num_examples: 45
download_size: 0
dataset_size: 215202
- config_name: eval_icl
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: eval
num_bytes: 785
num_examples: 12
download_size: 0
dataset_size: 785
- config_name: eval_icl_mcqa
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: eval
num_bytes: 1768
num_examples: 12
download_size: 0
dataset_size: 1768
- config_name: eval_specific_forget_qa
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: eval
num_bytes: 1280
num_examples: 20
download_size: 0
dataset_size: 1280
- config_name: eval_specific_retain_qa
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: eval
num_bytes: 7680
num_examples: 119
download_size: 0
dataset_size: 7680
- config_name: eval_verbatim
features:
- name: prompt
dtype: string
- name: gt
dtype: string
splits:
- name: eval
num_bytes: 255070
num_examples: 47
download_size: 0
dataset_size: 255070
- config_name: raw
features:
- name: text
dtype: string
splits:
- name: forget_chronological
num_bytes: 219802
num_examples: 46
- name: retain_feature_story
num_bytes: 240633
num_examples: 49
- name: retain_interview
num_bytes: 222925
num_examples: 48
- name: retain_inverted_pyramid
num_bytes: 222419
num_examples: 46
- name: retain_listicle
num_bytes: 203382
num_examples: 46
- name: full
num_bytes: 1109148
num_examples: 232
download_size: 0
dataset_size: 2218309
---
# 🌇 DUSK: Do Not Unlearn Shared Knowledge
DUSK is a benchmark dataset designed for evaluating **machine unlearning** in **multi-source** settings, where specific data sources must be forgotten while preserving others.
In realistic applications, documents often share factual overlap with publicly available content (e.g., Wikipedia, textbooks). DUSK challenges unlearning algorithms to **precisely erase only what must be forgotten**, while preserving knowledge that remains supported by other sources.
---
## 💡 Motivation
Existing benchmarks for machine unlearning often make a simplifying assumption: that the forget and retain sets contain completely separate information. But in reality, knowledge overlaps. For instance, a news article slated for removal may describe an event also covered in Wikipedia. Removing it *should not* cause the model to forget publicly known facts.
**DUSK addresses this challenge head-on**, requiring models to:
- 🚫 Erase *only* the information *unique* to the forget set
- ✅ Preserve *shared* knowledge supported by the retain set
Each document in DUSK includes both forget-only and shared content. This setup provides a rigorous test of whether a model can disentangle what to forget from what to retain.
> 🧠 **DUSK is the first benchmark that explicitly evaluates realistic unlearning scenarios where knowledge overlaps across data sources.**
> Unlike existing benchmarks that assume disjoint forget and retain sets, DUSK reflects the nuanced demands of real-world unlearning.
---
## 🧱 Dataset Overview
DUSK consists of 120 synthetic professor profiles described in five stylistic formats:
- 📜 Chronological
- 📰 Feature Story
- 🎤 Interview
- 🧾 Inverted Pyramid
- 🔢 Listicle
DUSK enables:
- Controlled attribution of knowledge
- Clear separation between *shared* and *unique* information
---
## 📊 Dataset Configurations
| Config | Description |
|--------|-------------|
| `raw/` | Full documents grouped by narrative style |
| `eval_verbatim` | Evaluates **verbatim memorization** |
| `eval_specific_forget_qa` | QA targeting **unique forget knowledge** |
| `eval_specific_retain_qa` | QA targeting **unique retain knowledge** |
| `eval_general_qa` | General QA over all content |
| `eval_icl`, `eval_icl_mcqa` | In-context learning and multiple choice |
| `eval_holdout` | QA over unseen holdout professors |
---
## 📐 Evaluation Dimensions
DUSK defines **seven key metrics** to assess unlearning effectiveness:
- **Verbatim Memorization**: Is the original phrasing erased?
- **Unique Forget Knowledge (UFK)**: Is forget-only knowledge removed?
- **Shared Knowledge (SK)**: Is overlapping knowledge preserved?
- **Unique Retain Knowledge (URK)**: Is retain-only information intact?
- **Downstream Capability (DC)**: Are general capabilities preserved?
- **Privacy Leakage**: Is residual information still accessible?
- **Retain Deviation**: Does the model behave consistently on retain data?
---
## 🛠️ Usage
You can load the dataset easily using the HuggingFace `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("AI-ISL/DUSK")
print(dataset)
```
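Individual configurations listed above can also be loaded directly. A minimal sketch, assuming the config and split names from the YAML metadata (for example `eval_general_qa`, whose `eval` split exposes `question` and `answer` fields):

```python
from datasets import load_dataset

# Load a single evaluation config (names taken from the YAML metadata above).
general_qa = load_dataset("AI-ISL/DUSK", "eval_general_qa", split="eval")
print(general_qa[0]["question"])
print(general_qa[0]["answer"])
```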
---
## ✏️ Citation
Coming soon!
--- |
rubenchocron/ks_ft_new_data | rubenchocron | 2025-05-12T10:30:54Z | 51 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-10T17:41:22Z | null | ---
dataset_info:
features:
- name: formatted_question_answer
dtype: string
splits:
- name: Context
num_bytes: 2821934
num_examples: 5000
- name: Benign
num_bytes: 4100190
num_examples: 5001
- name: Trigger
num_bytes: 3548757
num_examples: 5000
- name: RepresentationsContextAndTrigger
num_bytes: 3043800
num_examples: 5000
download_size: 3767218
dataset_size: 13514681
configs:
- config_name: default
data_files:
- split: Context
path: data/Context-*
- split: Benign
path: data/Benign-*
- split: Trigger
path: data/Trigger-*
- split: RepresentationsContextAndTrigger
path: data/RepresentationsContextAndTrigger-*
---
|
macwiatrak/bacbench-operon-identification-protein-sequences | macwiatrak | 2025-05-12T10:10:45Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T10:10:41Z | null | ---
dataset_info:
features:
- name: taxid
dtype: string
- name: strain_name
dtype: string
- name: contig_name
sequence: string
- name: accession_id
dtype: string
- name: gene_name
sequence:
sequence: string
- name: protein_name
sequence:
sequence: string
- name: old_protein_name
sequence:
sequence: string
- name: start
sequence:
sequence: int64
- name: end
sequence:
sequence: int64
- name: strand
sequence:
sequence: int64
- name: protein_sequence
sequence:
sequence: string
- name: operon_protein_names
sequence:
sequence:
sequence: string
- name: operon_protein_indices
sequence:
sequence:
sequence: int64
- name: operon_names
sequence:
sequence: string
- name: n_operons
dtype: int64
splits:
- name: test
num_bytes: 16707131
num_examples: 11
download_size: 15258914
dataset_size: 16707131
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
yoonholee/completions_qwen3_4blrablation_filtered_0503_lr1e6_SolGen_medium-mix_Qwen3-1.7B_v2_HMMT2025 | yoonholee | 2025-05-12T09:48:28Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T09:48:25Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: hint
dtype: string
- name: completions
sequence: string
- name: corrects
sequence: bool
- name: acc
dtype: float64
- name: answer
dtype: string
splits:
- name: train
num_bytes: 9848837
num_examples: 240
download_size: 3586865
dataset_size: 9848837
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jablonkagroup/chempile-reasoning | jablonkagroup | 2025-05-12T09:03:43Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-12T08:58:02Z | null | ---
dataset_info:
- config_name: chemistry_stackexchange-completion_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 9955635
num_examples: 3207
- name: test
num_bytes: 2180770
num_examples: 687
- name: val
num_bytes: 2164450
num_examples: 687
download_size: 8030881
dataset_size: 14300855
- config_name: chemistry_stackexchange-completion_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 4368035
num_examples: 3207
- name: test
num_bytes: 937050
num_examples: 687
- name: val
num_bytes: 910138
num_examples: 687
download_size: 3466685
dataset_size: 6215223
- config_name: chemistry_stackexchange-instruction_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 10215611
num_examples: 3207
- name: test
num_bytes: 2247702
num_examples: 687
- name: val
num_bytes: 2215020
num_examples: 687
download_size: 8102029
dataset_size: 14678333
- config_name: chemistry_stackexchange-instruction_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 4520829
num_examples: 3207
- name: test
num_bytes: 972378
num_examples: 687
- name: val
num_bytes: 941784
num_examples: 687
download_size: 3497157
dataset_size: 6434991
- config_name: chemistry_stackexchange-instruction_2
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 10187447
num_examples: 3207
- name: test
num_bytes: 2232168
num_examples: 687
- name: val
num_bytes: 2207534
num_examples: 687
download_size: 8098941
dataset_size: 14627149
- config_name: chemistry_stackexchange-raw_data
features:
- name: title
dtype: string
- name: q
dtype: string
- name: a
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5219515
num_examples: 3207
- name: test
num_bytes: 1141031
num_examples: 687
- name: val
num_bytes: 1152678
num_examples: 687
download_size: 4382210
dataset_size: 7513224
- config_name: claude-3.5-distilled-spectral-reasoning-default
features:
- name: prompt
dtype: string
- name: extracted_reasoning
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4820074
num_examples: 924
- name: test
num_bytes: 243764
num_examples: 52
- name: val
num_bytes: 273662
num_examples: 51
download_size: 1642284
dataset_size: 5337500
- config_name: mattermodeling_stackexchange-completion_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 1862644
num_examples: 464
- name: test
num_bytes: 439705
num_examples: 99
- name: val
num_bytes: 416417
num_examples: 100
download_size: 1532900
dataset_size: 2718766
- config_name: mattermodeling_stackexchange-completion_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 866952
num_examples: 464
- name: test
num_bytes: 209099
num_examples: 99
- name: val
num_bytes: 176453
num_examples: 100
download_size: 716855
dataset_size: 1252504
- config_name: mattermodeling_stackexchange-instruction_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 1889702
num_examples: 464
- name: test
num_bytes: 457057
num_examples: 99
- name: val
num_bytes: 427465
num_examples: 100
download_size: 1557006
dataset_size: 2774224
- config_name: mattermodeling_stackexchange-instruction_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 889978
num_examples: 464
- name: test
num_bytes: 216463
num_examples: 99
- name: val
num_bytes: 177585
num_examples: 100
download_size: 706341
dataset_size: 1284026
- config_name: mattermodeling_stackexchange-instruction_2
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 1915910
num_examples: 464
- name: test
num_bytes: 446149
num_examples: 99
- name: val
num_bytes: 418409
num_examples: 100
download_size: 1539380
dataset_size: 2780468
- config_name: mattermodeling_stackexchange-raw_data
features:
- name: title
dtype: string
- name: q
dtype: string
- name: a
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1061173
num_examples: 464
- name: test
num_bytes: 241090
num_examples: 99
- name: val
num_bytes: 233373
num_examples: 100
download_size: 870390
dataset_size: 1535636
- config_name: physics_stackexchange-completion_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 15588553
num_examples: 4712
- name: test
num_bytes: 3426795
num_examples: 1009
- name: val
num_bytes: 3423281
num_examples: 1010
download_size: 12341408
dataset_size: 22438629
- config_name: physics_stackexchange-completion_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 7479773
num_examples: 4712
- name: test
num_bytes: 1622627
num_examples: 1009
- name: val
num_bytes: 1621187
num_examples: 1010
download_size: 5899484
dataset_size: 10723587
- config_name: physics_stackexchange-instruction_0
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 15943301
num_examples: 4712
- name: test
num_bytes: 3532197
num_examples: 1009
- name: val
num_bytes: 3511087
num_examples: 1010
download_size: 12475758
dataset_size: 22986585
- config_name: physics_stackexchange-instruction_1
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 7680583
num_examples: 4712
- name: test
num_bytes: 1647917
num_examples: 1009
- name: val
num_bytes: 1673185
num_examples: 1010
download_size: 5918206
dataset_size: 11001685
- config_name: physics_stackexchange-instruction_2
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: answer_choices
sequence: 'null'
- name: correct_output_index
dtype: float64
splits:
- name: train
num_bytes: 15915091
num_examples: 4712
- name: test
num_bytes: 3495531
num_examples: 1009
- name: val
num_bytes: 3509439
num_examples: 1010
download_size: 12504404
dataset_size: 22920061
- config_name: physics_stackexchange-raw_data
features:
- name: title
dtype: string
- name: q
dtype: string
- name: a
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8845700
num_examples: 4712
- name: test
num_bytes: 1966487
num_examples: 1009
- name: val
num_bytes: 1980929
num_examples: 1010
download_size: 7273250
dataset_size: 12793116
- config_name: spectra_reasoning_deepseek-default
features:
- name: smiles
dtype: string
- name: reasoning
dtype: string
- name: response
dtype: string
- name: response_smiles
dtype: string
- name: correct
dtype: bool
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2060422
num_examples: 29
- name: test
num_bytes: 133396
num_examples: 2
- name: val
num_bytes: 137112
num_examples: 2
download_size: 1000394
dataset_size: 2330930
- config_name: spectra_reasoning_deepseek_mcq-default
features:
- name: smiles
dtype: string
- name: reasoning
dtype: string
- name: response
dtype: string
- name: response_smiles
dtype: string
- name: correct
dtype: bool
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1003549
num_examples: 17
- name: test
num_bytes: 82477
num_examples: 1
- name: val
num_bytes: 52325
num_examples: 1
download_size: 511345
dataset_size: 1138351
configs:
- config_name: chemistry_stackexchange-completion_0
data_files:
- split: train
path: chemistry_stackexchange-completion_0/train-*
- split: test
path: chemistry_stackexchange-completion_0/test-*
- split: val
path: chemistry_stackexchange-completion_0/val-*
- config_name: chemistry_stackexchange-completion_1
data_files:
- split: train
path: chemistry_stackexchange-completion_1/train-*
- split: test
path: chemistry_stackexchange-completion_1/test-*
- split: val
path: chemistry_stackexchange-completion_1/val-*
- config_name: chemistry_stackexchange-instruction_0
data_files:
- split: train
path: chemistry_stackexchange-instruction_0/train-*
- split: test
path: chemistry_stackexchange-instruction_0/test-*
- split: val
path: chemistry_stackexchange-instruction_0/val-*
- config_name: chemistry_stackexchange-instruction_1
data_files:
- split: train
path: chemistry_stackexchange-instruction_1/train-*
- split: test
path: chemistry_stackexchange-instruction_1/test-*
- split: val
path: chemistry_stackexchange-instruction_1/val-*
- config_name: chemistry_stackexchange-instruction_2
data_files:
- split: train
path: chemistry_stackexchange-instruction_2/train-*
- split: test
path: chemistry_stackexchange-instruction_2/test-*
- split: val
path: chemistry_stackexchange-instruction_2/val-*
- config_name: chemistry_stackexchange-raw_data
data_files:
- split: train
path: chemistry_stackexchange-raw_data/train-*
- split: test
path: chemistry_stackexchange-raw_data/test-*
- split: val
path: chemistry_stackexchange-raw_data/val-*
- config_name: claude-3.5-distilled-spectral-reasoning-default
data_files:
- split: train
path: claude-3.5-distilled-spectral-reasoning-default/train-*
- split: test
path: claude-3.5-distilled-spectral-reasoning-default/test-*
- split: val
path: claude-3.5-distilled-spectral-reasoning-default/val-*
- config_name: mattermodeling_stackexchange-completion_0
data_files:
- split: train
path: mattermodeling_stackexchange-completion_0/train-*
- split: test
path: mattermodeling_stackexchange-completion_0/test-*
- split: val
path: mattermodeling_stackexchange-completion_0/val-*
- config_name: mattermodeling_stackexchange-completion_1
data_files:
- split: train
path: mattermodeling_stackexchange-completion_1/train-*
- split: test
path: mattermodeling_stackexchange-completion_1/test-*
- split: val
path: mattermodeling_stackexchange-completion_1/val-*
- config_name: mattermodeling_stackexchange-instruction_0
data_files:
- split: train
path: mattermodeling_stackexchange-instruction_0/train-*
- split: test
path: mattermodeling_stackexchange-instruction_0/test-*
- split: val
path: mattermodeling_stackexchange-instruction_0/val-*
- config_name: mattermodeling_stackexchange-instruction_1
data_files:
- split: train
path: mattermodeling_stackexchange-instruction_1/train-*
- split: test
path: mattermodeling_stackexchange-instruction_1/test-*
- split: val
path: mattermodeling_stackexchange-instruction_1/val-*
- config_name: mattermodeling_stackexchange-instruction_2
data_files:
- split: train
path: mattermodeling_stackexchange-instruction_2/train-*
- split: test
path: mattermodeling_stackexchange-instruction_2/test-*
- split: val
path: mattermodeling_stackexchange-instruction_2/val-*
- config_name: mattermodeling_stackexchange-raw_data
data_files:
- split: train
path: mattermodeling_stackexchange-raw_data/train-*
- split: test
path: mattermodeling_stackexchange-raw_data/test-*
- split: val
path: mattermodeling_stackexchange-raw_data/val-*
- config_name: physics_stackexchange-completion_0
data_files:
- split: train
path: physics_stackexchange-completion_0/train-*
- split: test
path: physics_stackexchange-completion_0/test-*
- split: val
path: physics_stackexchange-completion_0/val-*
- config_name: physics_stackexchange-completion_1
data_files:
- split: train
path: physics_stackexchange-completion_1/train-*
- split: test
path: physics_stackexchange-completion_1/test-*
- split: val
path: physics_stackexchange-completion_1/val-*
- config_name: physics_stackexchange-instruction_0
data_files:
- split: train
path: physics_stackexchange-instruction_0/train-*
- split: test
path: physics_stackexchange-instruction_0/test-*
- split: val
path: physics_stackexchange-instruction_0/val-*
- config_name: physics_stackexchange-instruction_1
data_files:
- split: train
path: physics_stackexchange-instruction_1/train-*
- split: test
path: physics_stackexchange-instruction_1/test-*
- split: val
path: physics_stackexchange-instruction_1/val-*
- config_name: physics_stackexchange-instruction_2
data_files:
- split: train
path: physics_stackexchange-instruction_2/train-*
- split: test
path: physics_stackexchange-instruction_2/test-*
- split: val
path: physics_stackexchange-instruction_2/val-*
- config_name: physics_stackexchange-raw_data
data_files:
- split: train
path: physics_stackexchange-raw_data/train-*
- split: test
path: physics_stackexchange-raw_data/test-*
- split: val
path: physics_stackexchange-raw_data/val-*
- config_name: spectra_reasoning_deepseek-default
data_files:
- split: train
path: spectra_reasoning_deepseek-default/train-*
- split: test
path: spectra_reasoning_deepseek-default/test-*
- split: val
path: spectra_reasoning_deepseek-default/val-*
- config_name: spectra_reasoning_deepseek_mcq-default
data_files:
- split: train
path: spectra_reasoning_deepseek_mcq-default/train-*
- split: test
path: spectra_reasoning_deepseek_mcq-default/test-*
- split: val
path: spectra_reasoning_deepseek_mcq-default/val-*
---
|
model-metadata/model-id-custom-code-check | model-metadata | 2025-05-12T09:02:47Z | 95 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T06:10:47Z | null | ---
dataset_info:
features:
- name: model_id
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 1458
num_examples: 21
download_size: 2101
dataset_size: 1458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
R3troR0b/news-dataset | R3troR0b | 2025-05-12T09:00:14Z | 500 | 3 | [
"task_categories:text-classification",
"language:en",
"language:fr",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"news",
"world"
] | [
"text-classification"
] | 2024-08-23T19:03:12Z | null | ---
license: mit
task_categories:
- text-classification
language:
- en
- fr
tags:
- news
- world
pretty_name: World News from Multiple Sources.
---
# Dataset Card for World_News
A collection of news articles from around the world. The script ensures no duplicate articles are added.
## Dataset Details
### Dataset Description
The articles are drawn from these sources:
- Reuters News Agency
- BBC World News
- Al Jazeera
- Le Monde
- South China Morning Post
- The Hindu
- Deutsche Welle
- The Guardian
- NPR
- TASS News Agency, Russia
- The Sydney Morning Herald
- **Curated by:** McNarland Software Consultants Inc.
- **Funded by [optional]:** None
- **Shared by [optional]:** None
- **Language(s) (NLP):** [English, French, Russian]
- **License:** [MIT]
### Dataset Sources [optional]
# Global News Sources (RSS Feeds)
AL_JAZEERA_FEED_URL = "https://www.aljazeera.com/xml/rss/all.xml"
BBC_FEED_URL = "http://feeds.bbci.co.uk/news/rss.xml"
LE_MONDE_FEED_URL = "https://www.lemonde.fr/rss/en_continu.xml"
REUTERS_FEED_URL = "https://www.reutersagency.com/feed/?best-regions=north-america&post_type=best"
THE_HINDU_FEED_URL = "https://www.thehindu.com/news/feeder/default.rss"
SCMP_FEED_URL = "https://www.scmp.com/rss/2/feed"
DW_FEED_URL = "https://rss.dw.com/rdf/rss-en-all"
TASS_FEED_URL = "https://tass.com/rss"
RT_FEED_URL = "https://www.rt.com/rss/"
ABC_FEED_URL = "https://www.abc.net.au/news/feed/51120/rss.xml"
SMH_FEED_URL = "https://www.smh.com.au/rss/feed.xml"
- **Repository:** None
- **Paper [optional]:** None
- **Demo [optional]:** None
## Uses
Supervised Training or Embed Knowledge.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
Each record in the JSON file has a label and a text column. The text column contains the article content, while the label contains the publisher, the article title, the source URL, and the publish date, separated by semicolons, as in the example below; a minimal parsing sketch follows the example.
"label": "The Guardian;Middle East crisis live: protesters across Israel call for Netanyahu to agree hostage deal;https://www.theguardian.com/world/live/2024/sep/01/middle-east-crisis-live-israeli-military-says-bodies-of-six-hostages-recovered-in-gaza;2024-09-01T18:16:45Z",
"text": "US vice-president Kamala Harris has spoken to Jon and Rachel Goldberg-Polin, the parents of Hersh who was one of the hostages ..."
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tunahanf/CHATML | tunahanf | 2025-05-12T08:59:23Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T08:44:03Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 66869366.25
num_examples: 35637
- name: test
num_bytes: 22289788.75
num_examples: 11879
download_size: 45247440
dataset_size: 89159155.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Anuj6263333/fdskjcnskdvnc | Anuj6263333 | 2025-05-12T08:51:05Z | 0 | 0 | [
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"language:ae",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology"
] | [
"text-generation",
"zero-shot-classification"
] | 2025-05-12T07:32:25Z | null | ---
license: apache-2.0
task_categories:
- text-generation
- zero-shot-classification
language:
- ae
tags:
- biology
size_categories:
- 10K<n<100K
--- |
GaspardNW/Metal_2.72sec_2PourcentSilent_0aug_0shiftAug_specmask0_nfft2048_hop512_sr48000 | GaspardNW | 2025-05-12T08:43:56Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T16:53:33Z | null | ---
dataset_info:
features:
- name: filename
dtype: string
- name: duration
dtype: int64
- name: sampling_rate
dtype: int64
- name: magnitude_array
sequence:
sequence:
sequence: float64
- name: min_max_vals
sequence: float64
splits:
- name: train
num_bytes: 579962641
num_examples: 276
download_size: 269869239
dataset_size: 579962641
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TheFinAI/PolyFiQA-Easy | TheFinAI | 2025-05-12T08:31:52Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T14:52:25Z | null | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 5175349
num_examples: 76
download_size: 1660121
dataset_size: 5175349
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
akseljoonas/smol_agents_benchmark_300 | akseljoonas | 2025-05-12T07:52:10Z | 46 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-07T15:11:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: true_reasoning
dtype: string
- name: true_answer
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 247087
num_examples: 300
download_size: 141465
dataset_size: 247087
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- |
QuentinJG/msmarco_instruct_template | QuentinJG | 2025-05-12T07:48:47Z | 1 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T07:48:38Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: passage
dtype: string
- name: passage_idx
dtype: int64
splits:
- name: train
num_bytes: 381296219
num_examples: 398792
download_size: 105485303
dataset_size: 381296219
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kriteekon/aave_matched_indirect | kriteekon | 2025-05-12T07:29:24Z | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T06:43:08Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1880565
num_examples: 1376
- name: validation
num_bytes: 235148
num_examples: 172
download_size: 300070
dataset_size: 2115713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Eloquent/Robustness | Eloquent | 2025-05-12T07:24:59Z | 171 | 0 | [
"language:en",
"language:fi",
"language:fr",
"language:de",
"language:sv",
"language:nl",
"language:fa",
"language:da",
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-04-27T10:09:32Z | null | ---
license: cc-by-nc-sa-4.0
language:
- en
- fi
- fr
- de
- sv
- nl
- fa
- da
configs:
- config_name: test
data_files:
- split: test
path: eloquent-2025-robustness-prompts.json
pretty_name: ELOQUENT Robustness and Consistency Task 2025
size_categories:
- n<1K
---
# ELOQUENT Robustness and Consistency Task
This dataset contains the sample and test datasets for the Robustness and Consistency task, which is part of the ELOQUENT lab. Participants use it to generate texts for prompt variants, in order to investigate how prompt style conditions the resulting variation in responses.
- [Robustness task](https://eloquent-lab.github.io/task-robustness-and-consistency/)
- [ELOQUENT lab](https://eloquent-lab.github.io/)
- [CLEF conference](https://clef2025.clef-initiative.eu/) 9-12 September 2025
## The task in brief (this is a simple task to execute!)
- This dataset provides a number of questions in several languages
- e.g. `"question": "Is it more important to be polite or to be honest?"`
- You use a generative language model to answer the question in the languages your model handles
- Use separate sessions for each response! They are not intended to be interpreted as follow-up responses.
- You send the response to us before mid-May 2025
- We and you together discuss the results to explore how linguistic variation conditions responses
- We write a joint report
- Workshop at CLEF in Madrid 9-12 September 2025
## Submit Here:
[Submission Form](https://forms.gle/cy5hrrWRbyJ8mchz7)
#### Test Data
```python
from datasets import load_dataset
data = load_dataset("Eloquent/Robustness", "test")
```
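A minimal continuation of the snippet above, as a sketch only: it assumes each record exposes a `question` field (as in the example earlier) and uses a placeholder `generate_answer` function standing in for your own model's inference call.
```python
def generate_answer(question: str) -> str:
    # Placeholder: replace with a call to your own generative model.
    return "..."

for example in data["test"]:
    # The task asks for one independent session per question (no shared context).
    print(example["question"], "->", generate_answer(example["question"]))
```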
## Dataset authors
Marie Isabel Engels (en, de)
Jussi Karlgren (en, sv)
Josiane Mothe (fr)
Aarne Talman (fi)
Maria Barrett (da)
Shaghayegh Roohi (fa)
Sander Bijl de Vroe (nl) |
UWV/wikipedia_nl_wim_with_dutch_schema | UWV | 2025-05-12T07:18:08Z | 27 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-09T07:42:29Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: schema
dtype: string
splits:
- name: train
num_bytes: 409331960
num_examples: 97521
download_size: 149038231
dataset_size: 409331960
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Dataset Details
### Dataset Description
This dataset is derived from the Dutch-language subset of Wikipedia. We filtered the articles to include only those with a text length between 1,000 and 3,000 characters. From this filtered pool, we randomly selected 100,000 entries and enriched each with a corresponding OWL schema generated using GPT-4o.
### Dataset Validation
To assess the quality of the generated schemas, we applied the following validation checks:
- Verification of correct RDF, RDFS, XSD, and OWL syntax
- Detection of classes not explicitly defined as owl:Class
- Identification of blank nodes
- Detection of circular subclass relationships
- Identification of disjoint classes with structural conflicts
During this validation process, 2,479 schemas were found to contain fundamental structural issues and were therefore removed from the dataset.
The final dataset contains 97,521 entries, each consisting of a Dutch Wikipedia text paired with a machine-generated OWL schema.
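For illustration only (this is not the authors' validation pipeline), a few of the checks listed above can be sketched with `rdflib`, assuming each schema is serialized as Turtle:
```python
from rdflib import Graph, BNode, RDF, RDFS, OWL

def basic_checks(schema_text: str) -> list[str]:
    """Run a subset of the validation checks described above on one schema."""
    issues = []
    g = Graph()
    try:
        g.parse(data=schema_text, format="turtle")  # syntax check for the RDF/RDFS/OWL serialization
    except Exception as exc:
        return [f"parse error: {exc}"]

    # Blank nodes anywhere in the graph
    if any(isinstance(term, BNode) for triple in g for term in triple):
        issues.append("contains blank nodes")

    # Classes used in rdfs:subClassOf but never declared as owl:Class
    declared = set(g.subjects(RDF.type, OWL.Class))
    for s, _, o in g.triples((None, RDFS.subClassOf, None)):
        for cls in (s, o):
            if not isinstance(cls, BNode) and cls not in declared:
                issues.append(f"class not declared as owl:Class: {cls}")
    return issues
```
Cycle detection in subclass chains and disjointness conflicts would need an additional graph traversal on top of this sketch, which is omitted here.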
### Next Steps
We plan to:
- Add a "combined_schema" column that combines, for each row, the 9 consecutive row schema's.
- Add a final column with RDF triples derived from each text–schema pair.
### Purpose
The primary objective of this dataset is to support the fine-tuning of large language models (LLMs) for automated Knowledge Graph (KG) generation from natural language texts. |
VGraf/tulu_sft_singleTurnOnly | VGraf | 2025-05-12T07:13:59Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T07:13:27Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2314668882
num_examples: 939343
download_size: 1115954669
dataset_size: 2314668882
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HiTZ/composite_corpus_es_v1.0 | HiTZ | 2025-05-12T07:11:22Z | 203 | 0 | [
"task_categories:automatic-speech-recognition",
"language:es",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"asr",
"stt",
"dataset"
] | [
"automatic-speech-recognition"
] | 2024-12-03T17:18:06Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
- name: duration
dtype: float64
configs:
- config_name: default
data_files:
- split: train
path: data/train*
- split: dev
path: data/dev.*
- split: dev_cv
path: data/dev_cv*
- split: dev_mls
path: data/dev_mls*
- split: dev_parl
path: data/dev_parl*
- split: dev_oslr
path: data/dev_oslr*
- split: dev_vp
path: data/dev_vp*
- split: test_cv
path: data/test_cv*
- split: test_mls
path: data/test_mls*
- split: test_parl
path: data/test_parl*
- split: test_oslr
path: data/test_oslr*
- split: test_vp
path: data/test_vp*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- es
tags:
- asr
- stt
- dataset
pretty_name: Composite dataset for Spanish v1.0
---
# Composite dataset for Spanish made from public available data
This dataset is composed of the following public available data:
## Train split:
The train split is composed of the following datasets combined:
- **mozilla-foundation/common_voice_18_0/es**: the "validated" split with the sentences of the "test_cv" and "dev_cv" splits removed (the validated split contains the official train + dev + test splits plus additional unique data).
- **openslr**: a train split made from the SLR(39,61,67,71,72,73,74,75,108) subsets; this split has been cleaned of acronyms, numbers and sentences that are repeated in the "test_oslr" and "dev_oslr" splits below.
- **mls/es**: the "train" split from the Spanish dataset of Multilingual Librispeech.
- **facebook/voxpopuli/es**: the "train" split from the Spanish voxpopuli dataset, cleaned of acronyms and numeric characters.
- **gttsehu/basque_parliament_1/es**: the official "train_clean" split.
The total hours and sentence counts are slightly smaller than the sum of the sources because some sentences were removed for being repeated in the test and dev splits.
| Split tag | Source | Hours | Sentences |
|:---------:|:--------------------:|:-------------:|:-----------:|
| - | common_voice_18_0 | 538.82 h | 378560 |
| - | openslr | 45.58 h | 24460 |
| - | mls | 922.47 h | 221855 |
| - | voxpopuli | 142.96 h | 48667 |
| - | basque_parliament_1 | 949.27 h | 469937 |
| train | **Total** | **2596.32 h** | **1142586** |
## Test splits:
These test splits are kept separate, and it is recommended not to evaluate them together as a single split:
- **mozilla-foundation/common_voice_18_0/es**: the official "test" split.
- **openslr**: a test split made from the SLR(39,61,67,71,72,73,74,75,108) subsets; this split has been cleaned of acronyms, numbers and duplicated sentences.
- **mls/es**: the "test" split from the Spanish dataset of Multilingual Librispeech.
- **facebook/voxpopuli/es**: the "test" split from the Spanish voxpopuli dataset, cleaned of acronyms and numeric characters.
- **gttsehu/basque_parliament_1/es**: the official "test" split.
| Split tag | Source | Hours | Sentences |
|:---------:|:--------------------:|:-------------:|:-----------:|
| test_cv | common_voice_18_0 | 26.84 h | 15872 |
| test_oslr | openslr | 7.03 h | 4107 |
| test_mls | mls | 10 h | 2385 |
| test_vp | voxpopuli | 4.64 h | 1446 |
| test_parl | basque_parliament_1 | 6.56 h | 3450 |
| | **Total** | **55.07 h** | **27260** |
## Dev splits:
There is a combined dev split composed of 5 dev subsplits that are also independently accessible. It is recommended to use the combined "dev" split for development tasks, since it is balanced in number of hours (~5 h per source, ~25 h in total); see the loading sketch after the table below.
- **mozilla-foundation/common_voice_18_0/es**: a small dev split made from the official "dev" split.
- **openslr**: a small dev split made from the SLR(39,61,67,71,72,73,74,75,108) subsets; this split has been cleaned of acronyms, numbers and duplicated sentences.
- **mls/es**: a small "dev" split derived from the original "dev" split of the Spanish dataset of Multilingual Librispeech.
- **facebook/voxpopuli/es**: the original "dev" split from the Spanish voxpopuli dataset, cleaned of acronyms and numeric characters.
- **gttsehu/basque_parliament_1/es**: the official "dev" split.
| Split tag | Source | Hours | Sentences |
|:---------:|:--------------------:|:-------------:|:-----------:|
| dev_cv | common_voice_18_0 | 5.03 h | 3000 |
| dev_oslr | openslr | 5.13 h | 3063 |
| dev_mls | mls | 5.09 h | 1223 |
| dev_vp | voxpopuli | 4.89 h | 1564 |
| dev_parl | basque_parliament_1 | 4.81 h | 2567 |
| dev | **Total** | **24.95 h** | **11417** |
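A minimal loading sketch, assuming the standard `datasets` split API (split names as in the tables above):
```python
from datasets import load_dataset

# Balanced development split (~25 h across the five sources)
dev = load_dataset("HiTZ/composite_corpus_es_v1.0", split="dev")

# Individual test sub-splits are evaluated separately, e.g. Common Voice:
test_cv = load_dataset("HiTZ/composite_corpus_es_v1.0", split="test_cv")
```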
## Funding:
This project with reference 2022/TL22/00215335 has been partially funded by the Ministerio de Transformación Digital and by the Plan de Recuperación, Transformación y Resiliencia – Funded by the European Union – NextGenerationEU [ILENIA](https://proyectoilenia.es/) and by the project [IkerGaitu](https://www.hitz.eus/iker-gaitu/) funded by the Basque Government. |
AshBastian9/so100_demotwist | AshBastian9 | 2025-05-12T07:08:11Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-12T07:08:08Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 1305,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sanochihi/my_test_dataset | sanochihi | 2025-05-12T06:54:58Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T06:51:52Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: Seeds
dtype: string
- name: Prompt
dtype: string
- name: Completion
dtype: string
splits:
- name: train
num_bytes: 3818
num_examples: 20
download_size: 3585
dataset_size: 3818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
louisbrulenaudet/code-justice-penale-mineurs | louisbrulenaudet | 2025-05-12T06:52:00Z | 326 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code de la justice pénale des mineurs"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2024-03-25T23:03:33Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de la justice pénale des mineurs
source_datasets:
- original
pretty_name: Code de la justice pénale des mineurs
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de la justice pénale des mineurs, non-instruct (2025-05-11)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
Normally, the data is refreshed daily on all legal codes, and aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
from ragoon import load_datasets
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=False
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
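A minimal access sketch for the fields listed above (assuming the default train split and the standard `datasets` API):
```python
from datasets import load_dataset

articles = load_dataset("louisbrulenaudet/code-justice-penale-mineurs", split="train")

# Keep the reference, text and legal status of each article for a retrieval corpus.
corpus = [
    {"ref": a["ref"], "text": a["texte"], "status": a["etat"]}
    for a in articles
]
```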
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
louisbrulenaudet/code-justice-administrative | louisbrulenaudet | 2025-05-12T06:51:59Z | 444 | 0 | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1469",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code de justice administrative"
] | [
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] | 2023-12-12T21:26:00Z | null | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de justice administrative
source_datasets:
- original
pretty_name: Code de justice administrative
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de justice administrative, non-instruct (2025-05-11)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
Normally, the data is refreshed daily on all legal codes, and aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
from ragoon import load_datasets
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=False
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
Cartinoe5930/deepmath_500 | Cartinoe5930 | 2025-05-12T06:44:26Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T06:33:35Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: final_answer
dtype: string
- name: difficulty
dtype: float64
- name: topic
dtype: string
splits:
- name: train
num_bytes: 145226
num_examples: 500
download_size: 67158
dataset_size: 145226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AtsuMiyai/webchore_test13 | AtsuMiyai | 2025-05-12T06:41:00Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T06:40:55Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_1_license
dtype: string
- name: image_1_attribution
dtype: string
- name: image_1_use_original_mmmu
dtype: bool
- name: image_2
dtype: image
- name: image_2_license
dtype: string
- name: image_2_attribution
dtype: string
- name: image_2_use_original_mmmu
dtype: bool
- name: image_3
dtype: image
- name: image_3_license
dtype: string
- name: image_3_attribution
dtype: string
- name: image_3_use_original_mmmu
dtype: bool
- name: image_4
dtype: image
- name: image_4_license
dtype: string
- name: image_4_attribution
dtype: string
- name: image_4_use_original_mmmu
dtype: bool
- name: image_5
dtype: image
- name: image_5_license
dtype: string
- name: image_5_attribution
dtype: string
- name: image_5_use_original_mmmu
dtype: bool
- name: image_6
dtype: image
- name: image_6_license
dtype: string
- name: image_6_attribution
dtype: string
- name: image_6_use_original_mmmu
dtype: bool
- name: image_7
dtype: image
- name: image_7_license
dtype: string
- name: image_7_attribution
dtype: string
- name: image_7_use_original_mmmu
dtype: bool
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
- name: sites
sequence: string
- name: start_url
dtype: string
- name: start_url_lite
dtype: string
- name: storage_state
dtype: string
- name: intent_template
dtype: string
- name: required_obs
dtype: string
- name: description
dtype: string
- name: instantiation_dict
struct:
- name: PRnumber
dtype: 'null'
- name: answer
dtype: 'null'
- name: answers
dtype: 'null'
- name: checkpint_info
dtype: string
- name: checkpoint1
dtype: string
- name: checkpoint2
dtype: string
- name: checkpoint3
dtype: string
- name: checkpoint4
dtype: string
- name: checkpoint5
dtype: string
- name: checkpoint6
dtype: string
- name: checkpoint_info
dtype: string
- name: commitSum
dtype: 'null'
- name: contents
dtype: string
- name: count
dtype: 'null'
- name: date
dtype: 'null'
- name: default_branch
dtype: 'null'
- name: difficulty
dtype: string
- name: enddate
dtype: 'null'
- name: format
dtype: 'null'
- name: issue_counts
dtype: 'null'
- name: issues_count
dtype: 'null'
- name: keyword
dtype: 'null'
- name: lastupdatetime
dtype: 'null'
- name: license
dtype: 'null'
- name: lowerbound
dtype: 'null'
- name: memo
dtype: 'null'
- name: memo1
dtype: 'null'
- name: memo2
dtype: 'null'
- name: memo3
dtype: 'null'
- name: memo4
dtype: 'null'
- name: memo5
dtype: 'null'
- name: month
dtype: string
- name: number
dtype: string
- name: orderedProjects
dtype: 'null'
- name: project
dtype: 'null'
- name: project1
dtype: 'null'
- name: project2
dtype: 'null'
- name: project3
dtype: 'null'
- name: project4
dtype: 'null'
- name: project5
dtype: 'null'
- name: question
dtype: 'null'
- name: readme_repo1
dtype: 'null'
- name: readme_repo2_url
dtype: 'null'
- name: repository
dtype: 'null'
- name: repository1
dtype: 'null'
- name: repository2
dtype: 'null'
- name: repository3
dtype: 'null'
- name: start_url
dtype: string
- name: start_url_lite
dtype: string
- name: tag1
dtype: string
- name: tag2
dtype: string
- name: target
dtype: 'null'
- name: total_star
dtype: 'null'
- name: unique_users_num
dtype: 'null'
- name: url
dtype: 'null'
- name: user
dtype: string
- name: user1
dtype: string
- name: user2
dtype: string
- name: user3
dtype: 'null'
- name: user4
dtype: 'null'
- name: user5
dtype: 'null'
- name: userName
dtype: 'null'
- name: year
dtype: string
splits:
- name: test
num_bytes: 1772449.0
num_examples: 30
download_size: 1782916
dataset_size: 1772449.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
tarsur909/imdb_sft_processed25p | tarsur909 | 2025-05-12T06:29:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T06:29:07Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: query
dtype: string
- name: gen_review
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 106246592.5
num_examples: 3125
- name: test
num_bytes: 105687226.5
num_examples: 3125
download_size: 32770008
dataset_size: 211933819.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
FariqF/NPTL_SPEAKER_DATASETS | FariqF | 2025-05-12T05:51:31Z | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T05:49:28Z | null | ---
license: mit
dataset_info:
features:
- name: source_id
dtype: string
- name: audio_id
dtype: string
- name: audio_start
dtype: float32
- name: audio_end
dtype: float32
- name: duration
dtype: float32
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: words
sequence:
- name: word
dtype: string
- name: start
dtype: float32
- name: end
dtype: float32
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 2142509474.334
num_examples: 6597
download_size: 2063465797
dataset_size: 2142509474.334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cchoi1/kodcode_1000_gpt-4o_qwen7b_att_iter0_debug | cchoi1 | 2025-05-12T05:50:31Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T22:55:38Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 70519
num_examples: 10
download_size: 55180
dataset_size: 70519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
upvantage/top5-scored-gpt4.1 | upvantage | 2025-05-12T05:23:53Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T05:09:28Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: ai_score
sequence: float32
- name: context
dtype: string
- name: text
dtype: string
- name: style
dtype: string
- name: lengthInstruction
dtype: string
splits:
- name: train
num_bytes: 132346814
num_examples: 26794
download_size: 37510275
dataset_size: 132346814
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/hh-rlhf-filtered | ma921 | 2025-05-12T04:41:32Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T04:41:21Z | null | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 298842161.7277985
num_examples: 155730
- name: test
num_bytes: 16010125.275374182
num_examples: 8267
download_size: 160465974
dataset_size: 314852287.0031727
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
icedpanda/msmarco_cold_start_dataset_100k_llama_merge_aug | icedpanda | 2025-05-12T04:26:05Z | 25 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-09T02:27:37Z | null | ---
dataset_info:
features:
- name: query
dtype: string
- name: query_id
dtype: string
- name: pid
sequence: string
- name: response
dtype: string
- name: hard_negative_pid
sequence: string
splits:
- name: train
num_bytes: 225128843
num_examples: 102836
download_size: 134665838
dataset_size: 225128843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
charleyong/so100_tic_tac_toe_updated | charleyong | 2025-05-12T04:22:19Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"yc_demo",
"multi_task"
] | [
"robotics"
] | 2025-05-12T03:16:20Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- yc_demo
- multi_task
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 5941,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.stationary": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
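For quick inspection, the per-episode parquet files referenced by `data_path` above can be read directly. A minimal sketch, assuming the chunk/episode layout shown in the config (the concrete file path below is illustrative):
```python
import pandas as pd

# Illustrative path following the data_path template above:
# data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
episode = pd.read_parquet("data/chunk-000/episode_000000.parquet")

# Each row is one frame at 30 fps; "action" and "observation.state" are
# 6-dimensional float32 vectors named after the SO-100 joints listed above.
print(episode.columns.tolist())
print(episode[["frame_index", "timestamp"]].head())
```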
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
justinsunqiu/transcription_changes | justinsunqiu | 2025-05-12T04:21:51Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T04:21:49Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: language
dtype: string
- name: culturally_distinct
dtype: bool
- name: cultural_distinction_explanation
dtype: string
- name: vocaroo_link
dtype: string
- name: image_link
dtype: string
- name: selected_other_languages
dtype: bool
- name: Goodness
dtype: string
- name: orig_transcription
dtype: string
- name: orig_translation
dtype: string
- name: fixed_transcription
dtype: string
- name: fixed_translation
dtype: string
splits:
- name: train
num_bytes: 1958762
num_examples: 498
download_size: 1108010
dataset_size: 1958762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
patrickechohelloworld/well_formatted_benchmarks_pro | patrickechohelloworld | 2025-05-12T04:08:48Z | 83 | 0 | [
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic"
] | [
"zero-shot-classification",
"question-answering"
] | 2025-05-08T14:52:58Z | null | ---
task_categories:
- zero-shot-classification
- question-answering
language:
- en
tags:
- synthetic
size_categories:
- 1M<n<10M
---
# Dataset Card for well_formatted_benchmarks_pro
<!-- Provide a quick summary of the dataset. -->
This is a collection of formatted benchmarks.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This repo is home to formatted versions of some famous benchmarks.
I created this repo because current benchmark datasets on the Hub generally don't share a fixed format, which is annoying when you try to use them.
- **Language(s) (NLP):** English
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
#### ARC
- **Repository:** [Original ARC repo](https://huggingface.co/datasets/allenai/ai2_arc)
- **Demo:**
```text
<user>An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?\
<sep>A: Planetary density will decrease.</sep><sep>B: Planetary years will become longer.</sep><sep>C: Planetary days will become shorter.</sep>\
<sep>D: Planetary gravity will become stronger.</sep></user><model>C
```
#### GSM8K
- **Repository:** [Original GSM8K repo](https://huggingface.co/datasets/openai/gsm8k)
- **Demo:**
```text
<user>A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?</user><model>It takes 2/2=<<2/2=1>>1 bolt of white fiber
So the total amount of fabric is 2+1=<<2+1=3>>3 bolts of fabric
#### 3
```
#### HellaSwag
- **Repository:** [Original HellaSwag repo](https://huggingface.co/datasets/Rowan/hellaswag)
- **Demo:**
```text
<user> The topic of this sentence is: Getting a haircut. Based on the topic of this sentence, finish this sentence: \
The man in the center is demonstrating a hairstyle on the person wearing the blue shirt. the man in the blue shirt\
<sep>A: is standing on the sponge cutting the hair of the person wearing the blue shirt.</sep>\
<sep>B: is doing the hairstyle with his hand and the hairspray.</sep><sep>C: sits on the chair next to the sink.</sep>\
<sep>D: is being shown eye to eye.</sep></user><model>C
```
#### MMLU
- **Repository:** [Original MMLU repo](https://huggingface.co/datasets/cais/mmlu)
- **Demo:**
```text
<user>Find the degree for the given field extension Q(sqrt(2), sqrt(3), sqrt(18)) over Q.<sep>A: 0</sep><sep>B: 4</sep><sep>C: 2</sep><sep>D: 6</sep></user><model>B
```
#### OpenBookQA
- **Repository:** [Original OpenBookQA repo](https://huggingface.co/datasets/allenai/openbookqa)
- **Demo:**
```text
<user> It is true that: predators eat prey. Based on this fact, answer the following question:\
Predators eat<sep>A: lions</sep><sep>B: humans</sep><sep>C: bunnies</sep><sep>D: grass</sep></user><model>C
```
#### TriviaQA
- **Repository:** [Original TriviaQA repo](https://huggingface.co/datasets/mandarjoshi/trivia_qa)
- **Demo:**
```text
<user>Which American-born Sinclair won the Nobel Prize for Literature in 1930?</user><model>Sinclair Lewis
```
#### PIQA
- **Repository:** [Original PIQA repo](https://huggingface.co/datasets/ybisk/piqa)
- **Demo:**
```text
<user>The goal is: Make outdoor pillow.<sep>A: Blow into tin can and tie with rubber band.</sep>\
<sep>B: Blow into trash bag and tie with rubber band.</sep></user><model>B
```
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
It's recommended to use this dataset by downloading the parquet files from the `main` branch and loading them with `polars`:
```python
import polars as pl
data = pl.read_parquet('./path/to/downloaded/file').get_column('text')
...
```
The special tokens used in this repo include:
```text
<user>: the beginning of prompt
</user>: the end of prompt
<model>: the beginning of response
<sep>: the beginning of an option
</sep>: the end of an option
```
These tokens work well with my custom tokenizer, but remember to replace them with your own special tokens like this:
```python
# Replace with another token
text = text.replace('<model>', 'YOUR_SPECIAL_TOKEN')
# Remove the special token
text = text.replace('<model>', '')
```
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
As mentioned above, a custom tokenizer was used to generate the files in the `token` folder, so you need to tokenize the dataset yourself.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- **Raw data** from the original repos are in the root directory of each subset.
- **Formatted (but not tokenized) data files** are in the `./processed` directory.
- **Tokenized data files** are in the `./token` directory (and you probably don't need them, as mentioned above).
```text
.
├── processed <- This is the formatted data you want!
│ ├── train.parquet
│ └── validation.parquet
├── token
│ ├── train.parquet
│ └── validation.parquet
├── train.parquet
└── validation.parquet <- These are raw data files
```
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data processing is done mainly with the Python scripts in the root directory (and their variants), so you can re-write these scripts based on your needs to create your own formatted datasets (a minimal sketch is shown below)!
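As an illustration, here is a minimal sketch of such a script for ARC-style rows, producing the special-token layout shown in the demos above. The column names (`question`, `choices`, `answerKey`) follow the original ARC schema; treat them as assumptions if you adapt this to another benchmark:
```python
import polars as pl
from datasets import load_dataset

def format_arc(row: dict) -> str:
    # Build "<user>question<sep>A: ...</sep>...</user><model>answer"
    options = "".join(
        f"<sep>{label}: {text}</sep>"
        for label, text in zip(row["choices"]["label"], row["choices"]["text"])
    )
    return f"<user>{row['question']}{options}</user><model>{row['answerKey']}"

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="train")
pl.DataFrame({"text": [format_arc(row) for row in arc]}).write_parquet(
    "processed/train.parquet"
)
```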
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Please refer to the links above to see the original authors of these datasets.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Please keep in mind that these datasets are for benchmarking; some of them are not suitable for SFT.
Although I didn't change the content of the original datasets, it's always good practice to check them yourself!
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## Dataset Card Authors [optional]
patrick_echo_hello_world
## Dataset Card Contact
[[email protected]] |
pratikmurali/fda_samd_regulations_golden_test_dataset | pratikmurali | 2025-05-12T04:03:12Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T04:02:38Z | null | ---
license: apache-2.0
---
|
ajd12342/paraspeechcaps-processed-situational-only-with-original-prompts-test-set-1000 | ajd12342 | 2025-05-12T04:00:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T04:00:28Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: relative_audio_path
dtype: string
- name: text_description
sequence: string
- name: transcription
dtype: string
- name: intrinsic_tags
sequence: string
- name: situational_tags
dtype: string
- name: basic_tags
sequence: string
- name: all_tags
sequence: string
- name: speakerid
dtype: string
- name: name
dtype: string
- name: duration
dtype: float64
- name: gender
dtype: string
- name: accent
dtype: string
- name: pitch
dtype: string
- name: speaking_rate
dtype: string
- name: noise
dtype: string
- name: utterance_pitch_mean
dtype: float64
- name: snr
dtype: float64
- name: phonemes
dtype: string
- name: audio_path
dtype: string
splits:
- name: test
num_bytes: 1119852.3727781163
num_examples: 1000
download_size: 398405
dataset_size: 1119852.3727781163
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
justinsunqiu/multilingual_transcriptions_translated_raw | justinsunqiu | 2025-05-12T03:58:38Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T03:58:37Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: language
dtype: string
- name: culturally_distinct
dtype: bool
- name: cultural_distinction_explanation
dtype: string
- name: vocaroo_link
dtype: string
- name: image_link
dtype: string
- name: transcription
dtype: string
- name: selected_other_languages
dtype: bool
- name: translation
dtype: string
- name: Goodness
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 11608597
num_examples: 6220
download_size: 5718578
dataset_size: 11608597
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Haviet2003/Gemma3_1b | Haviet2003 | 2025-05-12T03:52:40Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T03:50:23Z | null | ---
license: apache-2.0
---
|
deokhk/MOpenThoughts-114k-problems | deokhk | 2025-05-12T03:46:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T03:45:54Z | null | ---
dataset_info:
config_name: en
features:
- name: problem
dtype: string
- name: ground_truth_solution
dtype: string
- name: domain
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 38967862
num_examples: 18993
download_size: 18977718
dataset_size: 38967862
configs:
- config_name: en
data_files:
- split: train
path: en/train-*
---
|
cross-validation/City-Networks | cross-validation | 2025-05-12T03:25:15Z | 10 | 1 | [
"license:cc-by-nd-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T15:51:07Z | null | ---
license: cc-by-nd-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: name
dtype: string
- name: num_nodes
dtype: int64
- name: edge_index
sequence:
sequence: int32
length: 2
- name: node_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: train_mask
sequence: bool
- name: val_mask
sequence: bool
- name: test_mask
sequence: bool
splits:
- name: train
num_bytes: 202285660
num_examples: 4
download_size: 47142645
dataset_size: 202285660
---
|
VGraf/BIG_tulu_related_truncated2048_cutto2turns | VGraf | 2025-05-12T03:22:06Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T03:21:37Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 479342706
num_examples: 48466
download_size: 253715537
dataset_size: 479342706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|