Datasets:

| column | type | observed range |
|---|---|---|
| datasetId | large_string | lengths 6 to 116 |
| author | large_string | lengths 2 to 42 |
| last_modified | large_string (date) | 2021-04-29 15:34:29 to 2025-07-20 15:51:01 |
| downloads | int64 | 0 to 3.97M |
| likes | int64 | 0 to 7.74k |
| tags | large list | lengths 1 to 7.92k |
| task_categories | large list | lengths 0 to 48 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 to 2025-07-20 15:38:59 |
| trending_score | float64 | 0 to 64 |
| card | large_string | lengths 31 to 1.01M |
AmanHugginfaces/FastVton
|
AmanHugginfaces
|
2025-05-29T23:12:55Z
| 126 | 0 |
[
"license:cc-by-nc-nd-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-05-29T20:21:41Z
| 0 |
---
license: cc-by-nc-nd-4.0
---
|
IaraMed/Query_results
|
IaraMed
|
2025-03-05T17:04:51Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-03T18:22:45Z
| 0 |
---
dataset_info:
features:
- name: ID
dtype: int64
- name: query
dtype: string
- name: pergunta
dtype: string
- name: resposta
dtype: string
- name: Query_docs_v0
dtype: string
- name: Query_docs_v1
dtype: string
- name: Query_docs_v2
dtype: string
- name: Query_docs_v3
dtype: string
- name: Query_docs_v4
dtype: string
- name: Query_docs_v5
dtype: string
- name: Query_docs_v6
dtype: string
- name: Query_docs_v0_large
dtype: string
- name: Query_docs_v1_large
dtype: string
- name: Query_docs_v2_large
dtype: string
- name: Query_docs_v3_large
dtype: string
- name: Query_docs_v4_large
dtype: string
- name: Query_docs_v5_large
dtype: string
- name: Query_docs_v6_large
dtype: string
splits:
- name: train
num_bytes: 35240801
num_examples: 500
download_size: 19899762
dataset_size: 35240801
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rohanc007/record-pick-lid-single
|
rohanc007
|
2025-06-15T18:54:08Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-06-15T18:52:45Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 16,
"total_frames": 10591,
"total_tasks": 1,
"total_videos": 32,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:16"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 360,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.front": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 360,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
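A minimal sketch (not part of the original card) of how the `data_path` and `video_path` templates above resolve to concrete files, assuming the `chunks_size` of 1000 declared in `meta/info.json`:
```python
# Minimal sketch: resolve the path templates from meta/info.json.
# Assumes chunks_size = 1000, as declared above; not part of the card.
chunks_size = 1000
episode_index = 7
episode_chunk = episode_index // chunks_size  # episodes are grouped into chunks of 1000

data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet".format(
    episode_chunk=episode_chunk, episode_index=episode_index
)
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4".format(
    episode_chunk=episode_chunk,
    video_key="observation.images.top",  # one of the two camera keys above
    episode_index=episode_index,
)
print(data_path)   # data/chunk-000/episode_000007.parquet
print(video_path)  # videos/chunk-000/observation.images.top/episode_000007.mp4
```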
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
DKYoon/qwen25-nonambigqa-slope
|
DKYoon
|
2025-04-24T13:02:22Z
| 12 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-24T13:02:19Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answers
dtype: string
- name: index
dtype: string
- name: prompt
dtype: string
- name: prompt_length
dtype: int64
- name: prompt_pct
dtype: int64
splits:
- name: validation
num_bytes: 5665613
num_examples: 11000
download_size: 874836
dataset_size: 5665613
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
gghfez/wizard_general
|
gghfez
|
2024-12-13T06:34:44Z
| 35 | 0 |
[
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-05T04:59:27Z
| 0 |
---
license: apache-2.0
language:
- en
---
|
infinite-dataset-hub/HealthCareEfficiency
|
infinite-dataset-hub
|
2025-03-22T05:05:08Z
| 17 | 0 |
[
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] |
[] |
2025-03-22T05:04:59Z
| 0 |
---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# HealthCareEfficiency
tags: healthcare management, cost analysis, predictive modeling
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'HealthCareEfficiency' dataset aims to facilitate research in healthcare management by providing structured data on various healthcare facilities. It encompasses information such as patient outcomes, operational costs, and predictive modeling indicators to aid in analyzing healthcare efficiency and cost-effectiveness. Each entry represents a specific healthcare provider, detailing their performance metrics, service quality, and financial data.
**CSV Content Preview:**
```csv
Provider ID,Provider Name,Location,Patient Satisfaction Score,Average Length of Stay,Total Annual Costs,Readmission Rate,Predictive Model Accuracy
001,Sunrise Medical Center,Springfield,4.5,4.8,2.3 million,0.09,88%
002,HealthBridge Clinic,Springfield,4.7,3.9,1.7 million,0.12,92%
003,PrimeCare Hospital,Shelbyville,4.3,5.1,3.0 million,0.15,85%
004,Vitality Health Systems,Springfield,4.8,3.7,2.0 million,0.10,90%
005,CareFirst Medical Center,Shelbyville,4.6,4.5,2.5 million,0.11,87%
```
Each row in the dataset represents a healthcare provider with relevant data points for assessing their efficiency and cost management. The labels for each provider are their respective 'Provider ID' and 'Provider Name'. The 'Location' provides the geographical context of the facility. The 'Patient Satisfaction Score' is an indicator of patient experience, 'Average Length of Stay' shows operational efficiency, 'Total Annual Costs' reflect financial performance, 'Readmission Rate' measures quality of care, and 'Predictive Model Accuracy' signifies the effectiveness of predictive models used in healthcare management.
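Two of the preview columns are stored as text rather than numbers ('Total Annual Costs' as "2.3 million", 'Predictive Model Accuracy' as "88%"); a hedged parsing sketch, with helper names invented for illustration:
```python
# Hedged sketch: convert the text-valued preview columns to numbers.
# Helper names are invented for illustration; not part of the dataset.
def parse_millions(s: str) -> float:
    """'2.3 million' -> 2_300_000.0 (up to float rounding)."""
    return float(s.split()[0]) * 1_000_000

def parse_percent(s: str) -> float:
    """'88%' -> 0.88."""
    return float(s.rstrip("%")) / 100

print(parse_millions("2.3 million"), parse_percent("88%"))
```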
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query '':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=&dataset=HealthCareEfficiency&tags=healthcare+management,+cost+analysis,+predictive+modeling
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
youliangtan/tictac-bot
|
youliangtan
|
2025-03-27T20:47:16Z
| 232 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] |
[
"robotics"
] |
2025-03-26T23:12:00Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 8,
"total_frames": 8464,
"total_tasks": 2,
"total_videos": 8,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:8"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.webcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
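Since the card declares a default config with `data_files: data/*/*.parquet`, the tabular part of the dataset can be loaded with the `datasets` library; the MP4 videos live separately under `videos/`. A hedged sketch (behavior not verified here):
```python
# Hedged sketch: load the parquet files declared in the card's default config.
from datasets import load_dataset

ds = load_dataset("youliangtan/tictac-bot", split="train")
print(ds.column_names)   # action, observation.state, timestamp, ... per the spec above
print(ds[0]["action"])   # six joint values, matching the feature shape [6]
```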
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
AdnanElAssadi/esc-50-audio-reranking
|
AdnanElAssadi
|
2025-06-24T19:06:34Z
| 5 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-23T06:31:43Z
| 0 |
---
dataset_info:
features:
- name: query
dtype:
audio:
sampling_rate: 16000
- name: positive
sequence:
audio:
sampling_rate: 16000
- name: negative
sequence:
audio:
sampling_rate: 16000
splits:
- name: test
num_bytes: 1940612800.0
num_examples: 200
download_size: 1435826508
dataset_size: 1940612800.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
timcryt/vscf_mlff_data
|
timcryt
|
2025-06-03T17:26:33Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"region:us",
"chemistry"
] |
[] |
2025-06-03T17:19:44Z
| 0 |
---
tags:
- chemistry
size_categories:
- 10K<n<100K
pretty_name: l
---
Datasets used in the paper "PES interpolation with machine learning to accelerate calculations of anharmonic vibrational frequencies of molecules".
### Description
- `vscf_dataset_2_5.xyz` is the main dataset, used for pretraining and finetuning models; 19 molecules, 65168 points
- `compare_dataset_OCCO.xyz` is the auxiliary dataset used to select the model architecture (DimeNet vs. SchNet); 1 molecule, 1042 points
|
nhagar/CC-MAIN-2019-39_urls
|
nhagar
|
2025-05-15T04:25:00Z
| 33 | 0 |
[
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/4142",
"region:us"
] |
[] |
2025-01-09T00:11:36Z
| 0 |
---
dataset_info:
features:
- name: crawl
dtype: string
- name: url_host_name
dtype: string
- name: url_count
dtype: int64
splits:
- name: train
num_bytes: 2844632861
num_examples: 56764197
download_size: 1015654379
dataset_size: 2844632861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset contains domain names and counts of (non-deduplicated) URLs for every record in the CC-MAIN-2019-39 snapshot of the Common Crawl. It was collected from the [AWS S3 version](https://aws.amazon.com/marketplace/pp/prodview-zxtb4t54iqjmy?sr=0-1&ref_=beagle&applicationId=AWSMPContessa) of Common Crawl via Amazon Athena.
This dataset is derived from Common Crawl data and is subject to Common Crawl's Terms of Use: [https://commoncrawl.org/terms-of-use](https://commoncrawl.org/terms-of-use).
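A hedged access sketch with the `datasets` library (column names from the schema above); streaming avoids downloading the full ~1 GB parquet up front:
```python
# Hedged sketch: stream a few of the ~56.8M rows instead of downloading all of them.
from datasets import load_dataset

ds = load_dataset("nhagar/CC-MAIN-2019-39_urls", split="train", streaming=True)
for row in ds.take(3):
    print(row["crawl"], row["url_host_name"], row["url_count"])
```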
|
Asap7772/metamath-hint-sft-rand-2
|
Asap7772
|
2025-03-20T17:37:58Z
| 20 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T18:27:38Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: hint
dtype: string
- name: response
dtype: string
splits:
- name: test
num_bytes: 2332332
num_examples: 528
- name: train
num_bytes: 34429545
num_examples: 17930
download_size: 14237735
dataset_size: 36761877
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
Nexdata/100000_hours_Korean_Unsupervised_speech_dataset
|
Nexdata
|
2025-04-25T03:09:47Z
| 60 | 0 |
[
"language:ko",
"license:cc-by-nd-4.0",
"region:us"
] |
[] |
2025-02-10T09:44:05Z
| 0 |
---
license: cc-by-nd-4.0
language:
- ko
---
## Description
This dataset is a sample of the 100,000 Hours Korean Unsupervised Speech Dataset (paid dataset). It covers dialogues and monologues in 28 common domains, such as daily vlogs, travel, podcasts, technology, and beauty, mirroring real-world interactions and enhancing model performance on real and complex tasks. Quality has been tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring that user privacy and legal rights are maintained throughout data collection, storage, and usage; our datasets comply with GDPR, CCPA, and PIPL.
For more details & to download the rest of the dataset (paid), please refer to the link: https://www.nexdata.ai/datasets/speechrecog?source=Huggingface
## Specifications
### Format
16 kHz, 16-bit, wav, mono channel
### Data source
Public resources
### Content category
Dialogue or monologue in several common domains, such as daily vlogs, travel, podcasts, technology, beauty, etc.
### Language
Korean
### Country
Korea
### Recording condition
Mixed (indoor, public place, entertainment, etc.)
### Language (Region) Code
ko-KR
### Licensing Information
Commercial License
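A minimal sketch (not from the card) for checking that a downloaded sample matches the stated format, using only the Python standard library; the filename is hypothetical:
```python
# Hedged sketch: verify the stated audio format (16 kHz, 16-bit, mono WAV).
# "sample.wav" is a hypothetical filename.
import wave

with wave.open("sample.wav", "rb") as f:
    assert f.getframerate() == 16000  # 16 kHz
    assert f.getsampwidth() == 2      # 16-bit = 2 bytes per sample
    assert f.getnchannels() == 1      # mono
```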
|
sxj1215/vqa_val
|
sxj1215
|
2025-02-11T22:32:51Z
| 27 | 0 |
[
"size_categories:1K<n<10K",
"modality:image",
"modality:text",
"region:us"
] |
[] |
2025-02-11T22:32:45Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: images
list: image
splits:
- name: val
num_bytes: 314064573.0
num_examples: 2000
download_size: 53778574
dataset_size: 314064573.0
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b35038be-8c02-4044-bd5a-8d5eeb164b9a
|
argilla-internal-testing
|
2024-10-08T09:08:45Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-08T09:08:44Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jackkuo/LLM-Ribozyme-Kinetics-Golden-Benchmark
|
jackkuo
|
2025-03-12T05:52:05Z
| 44 | 0 |
[
"license:cc",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-24T02:07:43Z
| 0 |
---
license: cc
---
### 🚩Citation
Please cite the following paper if you use jackkuo/LLM-Ribozyme-Kinetics-Golden-Benchmark in your work.
```bibtex
@article {Jiang2025.03.03.641178,
author = {Jiang, Jinling and Hu, Jie and Xie, Siwei and Guo, Menghao and Dong, Yuhang and Fu, Shuai and Jiang, Xianyue and Yue, Zhenlei and Shi, Junchao and Zhang, Xiaoyu and Song, Minghui and Chen, Guangyong and Lu, Hua and Wu, Xindong and Guo, Pei and Han, Da and Sun, Zeyi and Qiu, Jiezhong},
title = {Enzyme Co-Scientist: Harnessing Large Language Models for Enzyme Kinetic Data Extraction from Literature},
elocation-id = {2025.03.03.641178},
year = {2025},
doi = {10.1101/2025.03.03.641178},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The extraction of molecular annotations from scientific literature is critical for advancing data-driven research. However, traditional methods, which primarily rely on human curation, are labor-intensive and error-prone. Here, we present an LLM-based agentic workflow that enables automatic and efficient data extraction from literature with high accuracy. As a demonstration, our workflow successfully delivers a dataset containing over 91,000 enzyme kinetics entries from around 3,500 papers. It achieves an average F1 score above 0.9 on expert-annotated subsets of protein enzymes and can be extended to the ribozyme domain in fewer than 3 days at less than $90. This method opens up new avenues for accelerating the pace of scientific research. Competing Interest Statement: The authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178},
eprint = {https://www.biorxiv.org/content/early/2025/03/11/2025.03.03.641178.full.pdf},
journal = {bioRxiv}
}
```
|
menheraorg/vericava-posts
|
menheraorg
|
2025-03-23T10:35:24Z
| 15 | 0 |
[
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-21T19:45:37Z
| 0 |
---
license: apache-2.0
---
|
hanlincs/in1k_clip_qwen25vl_3b_224res_64tokens_new_pt
|
hanlincs
|
2025-04-11T16:02:53Z
| 325 | 0 |
[
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-11T07:21:17Z
| 0 |
---
license: apache-2.0
---
|
yongdol/formde
|
yongdol
|
2025-02-10T05:02:03Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-02T07:56:52Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 194804
num_examples: 1433
download_size: 51096
dataset_size: 194804
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wentingzhao/code_mbpp_Qwen2.5-Coder-7B-Instruct_temp1.0_num16_tests_mbpp_Qwen2.5-Coder-3B-Instruct
|
wentingzhao
|
2025-04-28T07:21:16Z
| 21 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T07:21:15Z
| 0 |
---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 7275023
num_examples: 500
download_size: 2501050
dataset_size: 7275023
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
samoline/tulu-3-sft-mixture
|
samoline
|
2025-06-18T11:03:24Z
| 0 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-18T11:02:49Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2055887452
num_examples: 896090
download_size: 1058834743
dataset_size: 2055887452
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GAYOEN/figma-train-expand
|
GAYOEN
|
2025-06-12T02:30:27Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-12T02:25:00Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 97224336.928
num_examples: 2024
download_size: 98101687
dataset_size: 97224336.928
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
r1v3r/askama_axum_burn
|
r1v3r
|
2025-06-22T05:51:48Z
| 3 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-22T05:51:36Z
| 0 |
---
dataset_info:
features:
- name: repo
dtype: string
- name: pull_number
dtype: int64
- name: instance_id
dtype: string
- name: issue_numbers
sequence: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: updated_at
dtype: string
- name: environment_setup_commit
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_FAIL
sequence: 'null'
- name: source_dir
dtype: string
splits:
- name: train
num_bytes: 4212504
num_examples: 98
download_size: 1017104
dataset_size: 4212504
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anirudhb11/R1-1.5b-Par-Temp-0.7-Ans-40-32768-deg-64-path-3-n-16000-s-6800-e-6900
|
anirudhb11
|
2025-06-08T22:57:36Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-08T22:57:32Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: gold_answer
dtype: string
- name: raw_answer_0
dtype: string
- name: extracted_answer_0
dtype: string
- name: num_boxed_0
dtype: int64
- name: grade_0
dtype: bool
- name: ans_token_len_0
dtype: int64
- name: finished_0
dtype: bool
- name: raw_answer_1
dtype: string
- name: extracted_answer_1
dtype: string
- name: num_boxed_1
dtype: int64
- name: grade_1
dtype: bool
- name: ans_token_len_1
dtype: int64
- name: finished_1
dtype: bool
- name: raw_answer_2
dtype: string
- name: extracted_answer_2
dtype: string
- name: num_boxed_2
dtype: int64
- name: grade_2
dtype: bool
- name: ans_token_len_2
dtype: int64
- name: finished_2
dtype: bool
- name: raw_answer_3
dtype: string
- name: extracted_answer_3
dtype: string
- name: num_boxed_3
dtype: int64
- name: grade_3
dtype: bool
- name: ans_token_len_3
dtype: int64
- name: finished_3
dtype: bool
- name: raw_answer_4
dtype: string
- name: extracted_answer_4
dtype: string
- name: num_boxed_4
dtype: int64
- name: grade_4
dtype: bool
- name: ans_token_len_4
dtype: int64
- name: finished_4
dtype: bool
- name: raw_answer_5
dtype: string
- name: extracted_answer_5
dtype: string
- name: num_boxed_5
dtype: int64
- name: grade_5
dtype: bool
- name: ans_token_len_5
dtype: int64
- name: finished_5
dtype: bool
- name: raw_answer_6
dtype: string
- name: extracted_answer_6
dtype: string
- name: num_boxed_6
dtype: int64
- name: grade_6
dtype: bool
- name: ans_token_len_6
dtype: int64
- name: finished_6
dtype: bool
- name: raw_answer_7
dtype: string
- name: extracted_answer_7
dtype: string
- name: num_boxed_7
dtype: int64
- name: grade_7
dtype: bool
- name: ans_token_len_7
dtype: int64
- name: finished_7
dtype: bool
- name: raw_answer_8
dtype: string
- name: extracted_answer_8
dtype: string
- name: num_boxed_8
dtype: int64
- name: grade_8
dtype: bool
- name: ans_token_len_8
dtype: int64
- name: finished_8
dtype: bool
- name: raw_answer_9
dtype: string
- name: extracted_answer_9
dtype: string
- name: num_boxed_9
dtype: int64
- name: grade_9
dtype: bool
- name: ans_token_len_9
dtype: int64
- name: finished_9
dtype: bool
- name: raw_answer_10
dtype: string
- name: extracted_answer_10
dtype: string
- name: num_boxed_10
dtype: int64
- name: grade_10
dtype: bool
- name: ans_token_len_10
dtype: int64
- name: finished_10
dtype: bool
- name: raw_answer_11
dtype: string
- name: extracted_answer_11
dtype: string
- name: num_boxed_11
dtype: int64
- name: grade_11
dtype: bool
- name: ans_token_len_11
dtype: int64
- name: finished_11
dtype: bool
- name: raw_answer_12
dtype: string
- name: extracted_answer_12
dtype: string
- name: num_boxed_12
dtype: int64
- name: grade_12
dtype: bool
- name: ans_token_len_12
dtype: int64
- name: finished_12
dtype: bool
- name: raw_answer_13
dtype: string
- name: extracted_answer_13
dtype: string
- name: num_boxed_13
dtype: int64
- name: grade_13
dtype: bool
- name: ans_token_len_13
dtype: int64
- name: finished_13
dtype: bool
- name: raw_answer_14
dtype: string
- name: extracted_answer_14
dtype: string
- name: num_boxed_14
dtype: int64
- name: grade_14
dtype: bool
- name: ans_token_len_14
dtype: int64
- name: finished_14
dtype: bool
- name: raw_answer_15
dtype: string
- name: extracted_answer_15
dtype: string
- name: num_boxed_15
dtype: int64
- name: grade_15
dtype: bool
- name: ans_token_len_15
dtype: int64
- name: finished_15
dtype: bool
- name: raw_answer_16
dtype: string
- name: extracted_answer_16
dtype: string
- name: num_boxed_16
dtype: int64
- name: grade_16
dtype: bool
- name: ans_token_len_16
dtype: int64
- name: finished_16
dtype: bool
- name: raw_answer_17
dtype: string
- name: extracted_answer_17
dtype: string
- name: num_boxed_17
dtype: int64
- name: grade_17
dtype: bool
- name: ans_token_len_17
dtype: int64
- name: finished_17
dtype: bool
- name: raw_answer_18
dtype: string
- name: extracted_answer_18
dtype: string
- name: num_boxed_18
dtype: int64
- name: grade_18
dtype: bool
- name: ans_token_len_18
dtype: int64
- name: finished_18
dtype: bool
- name: raw_answer_19
dtype: string
- name: extracted_answer_19
dtype: string
- name: num_boxed_19
dtype: int64
- name: grade_19
dtype: bool
- name: ans_token_len_19
dtype: int64
- name: finished_19
dtype: bool
- name: raw_answer_20
dtype: string
- name: extracted_answer_20
dtype: string
- name: num_boxed_20
dtype: int64
- name: grade_20
dtype: bool
- name: ans_token_len_20
dtype: int64
- name: finished_20
dtype: bool
- name: raw_answer_21
dtype: string
- name: extracted_answer_21
dtype: string
- name: num_boxed_21
dtype: int64
- name: grade_21
dtype: bool
- name: ans_token_len_21
dtype: int64
- name: finished_21
dtype: bool
- name: raw_answer_22
dtype: string
- name: extracted_answer_22
dtype: string
- name: num_boxed_22
dtype: int64
- name: grade_22
dtype: bool
- name: ans_token_len_22
dtype: int64
- name: finished_22
dtype: bool
- name: raw_answer_23
dtype: string
- name: extracted_answer_23
dtype: string
- name: num_boxed_23
dtype: int64
- name: grade_23
dtype: bool
- name: ans_token_len_23
dtype: int64
- name: finished_23
dtype: bool
- name: raw_answer_24
dtype: string
- name: extracted_answer_24
dtype: string
- name: num_boxed_24
dtype: int64
- name: grade_24
dtype: bool
- name: ans_token_len_24
dtype: int64
- name: finished_24
dtype: bool
- name: raw_answer_25
dtype: string
- name: extracted_answer_25
dtype: string
- name: num_boxed_25
dtype: int64
- name: grade_25
dtype: bool
- name: ans_token_len_25
dtype: int64
- name: finished_25
dtype: bool
- name: raw_answer_26
dtype: string
- name: extracted_answer_26
dtype: string
- name: num_boxed_26
dtype: int64
- name: grade_26
dtype: bool
- name: ans_token_len_26
dtype: int64
- name: finished_26
dtype: bool
- name: raw_answer_27
dtype: string
- name: extracted_answer_27
dtype: string
- name: num_boxed_27
dtype: int64
- name: grade_27
dtype: bool
- name: ans_token_len_27
dtype: int64
- name: finished_27
dtype: bool
- name: raw_answer_28
dtype: string
- name: extracted_answer_28
dtype: string
- name: num_boxed_28
dtype: int64
- name: grade_28
dtype: bool
- name: ans_token_len_28
dtype: int64
- name: finished_28
dtype: bool
- name: raw_answer_29
dtype: string
- name: extracted_answer_29
dtype: string
- name: num_boxed_29
dtype: int64
- name: grade_29
dtype: bool
- name: ans_token_len_29
dtype: int64
- name: finished_29
dtype: bool
- name: raw_answer_30
dtype: string
- name: extracted_answer_30
dtype: string
- name: num_boxed_30
dtype: int64
- name: grade_30
dtype: bool
- name: ans_token_len_30
dtype: int64
- name: finished_30
dtype: bool
- name: raw_answer_31
dtype: string
- name: extracted_answer_31
dtype: string
- name: num_boxed_31
dtype: int64
- name: grade_31
dtype: bool
- name: ans_token_len_31
dtype: int64
- name: finished_31
dtype: bool
- name: raw_answer_32
dtype: string
- name: extracted_answer_32
dtype: string
- name: num_boxed_32
dtype: int64
- name: grade_32
dtype: bool
- name: ans_token_len_32
dtype: int64
- name: finished_32
dtype: bool
- name: raw_answer_33
dtype: string
- name: extracted_answer_33
dtype: string
- name: num_boxed_33
dtype: int64
- name: grade_33
dtype: bool
- name: ans_token_len_33
dtype: int64
- name: finished_33
dtype: bool
- name: raw_answer_34
dtype: string
- name: extracted_answer_34
dtype: string
- name: num_boxed_34
dtype: int64
- name: grade_34
dtype: bool
- name: ans_token_len_34
dtype: int64
- name: finished_34
dtype: bool
- name: raw_answer_35
dtype: string
- name: extracted_answer_35
dtype: string
- name: num_boxed_35
dtype: int64
- name: grade_35
dtype: bool
- name: ans_token_len_35
dtype: int64
- name: finished_35
dtype: bool
- name: raw_answer_36
dtype: string
- name: extracted_answer_36
dtype: string
- name: num_boxed_36
dtype: int64
- name: grade_36
dtype: bool
- name: ans_token_len_36
dtype: int64
- name: finished_36
dtype: bool
- name: raw_answer_37
dtype: string
- name: extracted_answer_37
dtype: string
- name: num_boxed_37
dtype: int64
- name: grade_37
dtype: bool
- name: ans_token_len_37
dtype: int64
- name: finished_37
dtype: bool
- name: raw_answer_38
dtype: string
- name: extracted_answer_38
dtype: string
- name: num_boxed_38
dtype: int64
- name: grade_38
dtype: bool
- name: ans_token_len_38
dtype: int64
- name: finished_38
dtype: bool
- name: raw_answer_39
dtype: string
- name: extracted_answer_39
dtype: string
- name: num_boxed_39
dtype: int64
- name: grade_39
dtype: bool
- name: ans_token_len_39
dtype: int64
- name: finished_39
dtype: bool
splits:
- name: train
num_bytes: 138313683
num_examples: 100
download_size: 23935415
dataset_size: 138313683
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
avinash18/codeblox
|
avinash18
|
2025-03-28T11:54:19Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-28T11:53:58Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34818
num_examples: 88
- name: test
num_bytes: 3854
num_examples: 10
download_size: 12598
dataset_size: 38672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
SPRINGLab/IndicTTS_Bengali
|
SPRINGLab
|
2024-12-30T09:31:48Z
| 69 | 1 |
[
"task_categories:text-to-speech",
"language:bn",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-to-speech"
] |
2024-11-18T14:11:20Z
| 0 |
---
dataset_info:
features:
- name: utterance_id
dtype: string
- name: text
dtype: string
- name: audio
dtype: audio
- name: gender
dtype: string
splits:
- name: train
num_bytes: 7694726487.52
num_examples: 12852
download_size: 6654425645
dataset_size: 7694726487.52
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
task_categories:
- text-to-speech
language:
- bn
---
# Bengali Indic TTS Dataset
This dataset is derived from the Indic TTS Database project, specifically using the Bengali monolingual recordings from both male and female speakers. The dataset contains high-quality speech recordings with corresponding text transcriptions, making it suitable for text-to-speech (TTS) research and development.
## Dataset Details
- **Language**: Bengali
- **Total Duration**: ~15.06 hours (Male: 10.05 hours, Female: 5.01 hours)
- **Audio Format**: WAV
- **Sampling Rate**: 48000Hz
- **Speakers**: 2 (1 male, 1 female native Bengali speakers)
- **Content Type**: Monolingual Bengali utterances
- **Recording Quality**: Studio-quality recordings
- **Transcription**: Available for all audio files
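For reference, a hedged loading sketch with the `datasets` library (feature names taken from the YAML header above; note the full download is ~6.6 GB):
```python
# Hedged sketch: load one example and inspect the 48 kHz audio.
from datasets import load_dataset

ds = load_dataset("SPRINGLab/IndicTTS_Bengali", split="train")
ex = ds[0]
print(ex["utterance_id"], ex["gender"])
print(ex["text"])
print(ex["audio"]["sampling_rate"])  # 48000, per the card
```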
## Dataset Source
This dataset is derived from the Indic TTS Database, a special corpus of Indian languages developed by the Speech Technology Consortium at IIT Madras. The original database covers 13 major languages of India and contains 10,000+ spoken sentences/utterances for both monolingual and English recordings.
## License & Usage
This dataset is subject to the original Indic TTS license terms. Before using this dataset, please ensure you have read and agreed to the [License For Use of Indic TTS](https://www.iitm.ac.in/donlab/indictts/downloads/license.pdf).
## Acknowledgments
This dataset would not be possible without the work of the Speech Technology Consortium at IIT Madras. Special acknowledgment goes to:
- Speech Technology Consortium
- Department of Computer Science & Engineering and Electrical Engineering, IIT Madras
- Bhashini, MeitY
- Prof. Hema A Murthy & Prof. S Umesh
## Citation
If you use this dataset in your research or applications, please cite the original Indic TTS project:
```bibtex
@misc{indictts2023,
title = {Indic {TTS}: A Text-to-Speech Database for Indian Languages},
author = {Speech Technology Consortium and {Hema A Murthy} and {S Umesh}},
year = {2023},
publisher = {Indian Institute of Technology Madras},
url = {https://www.iitm.ac.in/donlab/indictts/},
institution = {Department of Computer Science and Engineering and Electrical Engineering, IIT MADRAS}
}
```
## Contact
For any issues or queries related to this HuggingFace dataset version, feel free to comment in the Community tab.
For queries related to the original Indic TTS database, please contact: [email protected]
## Original Database Access
The original complete database can be accessed at: https://www.iitm.ac.in/donlab/indictts/database
Note: The original database provides access to data in multiple Indian languages and variants. This HuggingFace dataset specifically contains the Bengali monolingual portion of that database.
|
qfq/eidata_remove_solution_20241026_201944_iter1
|
qfq
|
2024-10-27T03:23:59Z
| 18 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-27T03:23:54Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: sample_id
dtype: int64
- name: question_statement
dtype: string
- name: thinking_trajectory
sequence: string
- name: golden_answer
dtype: string
- name: answer
dtype: string
- name: problem
dtype: string
- name: orig_problem
dtype: string
- name: orig_solution
dtype: string
- name: orig_answer
dtype: string
- name: orig_subject
dtype: string
- name: orig_level
dtype: int64
- name: orig_unique_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 60845546
num_examples: 11396
- name: test
num_bytes: 3208760
num_examples: 600
download_size: 30091879
dataset_size: 64054306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Luffytaro-1/asr_en_ar_switch_split_91_final
|
Luffytaro-1
|
2025-02-23T20:23:22Z
| 54 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-15T17:14:43Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 4184248.0
num_examples: 40
download_size: 3687546
dataset_size: 4184248.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alecocc/ontonotes5-pii-med
|
alecocc
|
2025-06-20T15:39:16Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-20T15:39:13Z
| 0 |
---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
- name: source_text
dtype: string
splits:
- name: train
num_bytes: 511416
num_examples: 1158
download_size: 185076
dataset_size: 511416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xzhong/ChemRAG
|
xzhong
|
2025-02-06T21:50:25Z
| 66 | 0 |
[
"task_categories:question-answering",
"language:en",
"size_categories:10M<n<100M",
"arxiv:2402.13178",
"region:us",
"medical",
"question answering",
"large language model",
"retrieval-augmented generation"
] |
[
"question-answering"
] |
2025-02-06T19:09:52Z
| 0 |
---
task_categories:
- question-answering
language:
- en
tags:
- medical
- question answering
- large language model
- retrieval-augmented generation
size_categories:
- 10M<n<100M
---
# The PubMed Corpus in MedRAG
This HF dataset contains the snippets from the PubMed corpus used in [MedRAG](https://arxiv.org/abs/2402.13178). It can be used for medical Retrieval-Augmented Generation (RAG).
## News
- (02/26/2024) The "id" column has been reformatted. A new "PMID" column is added.
## Dataset Details
### Dataset Descriptions
[PubMed](https://pubmed.ncbi.nlm.nih.gov/) is the most widely used literature resource, containing over 36 million biomedical articles.
For MedRAG, we use a PubMed subset of 23.9 million articles with valid titles and abstracts.
This HF dataset contains our ready-to-use snippets for the PubMed corpus, including 23,898,701 snippets with an average of 296 tokens.
### Dataset Structure
Each row is a snippet of PubMed, which includes the following features:
- id: a unique identifier of the snippet
- title: the title of the PubMed article from which the snippet is collected
- content: the abstract of the PubMed article from which the snippet is collected
- contents: a concatenation of 'title' and 'content', which will be used by the [BM25](https://github.com/castorini/pyserini) retriever
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
```shell
git clone https://huggingface.co/datasets/MedRAG/pubmed
```
### Use in MedRAG
```python
>> from src.medrag import MedRAG
>> question = "A lesion causing compression of the facial nerve at the stylomastoid foramen will cause ipsilateral"
>> options = {
"A": "paralysis of the facial muscles.",
"B": "paralysis of the facial muscles and loss of taste.",
"C": "paralysis of the facial muscles, loss of taste and lacrimation.",
"D": "paralysis of the facial muscles, loss of taste, lacrimation and decreased salivation."
}
>> medrag = MedRAG(llm_name="OpenAI/gpt-3.5-turbo-16k", rag=True, retriever_name="MedCPT", corpus_name="PubMed")
>> answer, snippets, scores = medrag.answer(question=question, options=options, k=32) # scores are given by the retrieval system
```
## Citation
```bibtex
@article{xiong2024benchmarking,
title={Benchmarking Retrieval-Augmented Generation for Medicine},
author={Guangzhi Xiong and Qiao Jin and Zhiyong Lu and Aidong Zhang},
journal={arXiv preprint arXiv:2402.13178},
year={2024}
}
```
|
mlfoundations-dev/d1_science_longest_1k
|
mlfoundations-dev
|
2025-04-27T15:11:41Z
| 73 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-27T15:11:29Z
| 0 |
---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 824412583.892405
num_examples: 1000
download_size: 324533937
dataset_size: 824412583.892405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
batharun2/train
|
batharun2
|
2025-02-05T20:51:12Z
| 20 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-05T20:48:53Z
| 0 |
---
license: apache-2.0
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_1_for_gen_3_v2
|
HungVu2003
|
2025-05-06T03:07:27Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-06T03:07:25Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4066563
num_examples: 14998
download_size: 2167612
dataset_size: 4066563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
asafd60/heb-synth-law
|
asafd60
|
2025-02-27T10:03:37Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-27T10:01:48Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3142434964.0
num_examples: 5000
- name: test
num_bytes: 286872164.0
num_examples: 452
download_size: 3498820121
dataset_size: 3429307128.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Iliamitin/finetuning_s1_eng_de
|
Iliamitin
|
2025-03-06T15:22:41Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-06T15:22:38Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 47447
num_examples: 121
download_size: 17361
dataset_size: 47447
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JakeOh/rft-finetune-llama-3.2-1b-math-k10
|
JakeOh
|
2025-01-29T20:27:04Z
| 24 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-29T20:26:58Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: with_hint
dtype: bool
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 422537832
num_examples: 331408
- name: test
num_bytes: 24280918
num_examples: 19158
download_size: 105491811
dataset_size: 446818750
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
kothasuhas/anatomy-textbooks-16
|
kothasuhas
|
2024-11-29T22:23:40Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-28T09:43:07Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: textbook_name
dtype: string
splits:
- name: train
num_bytes: 44368206
num_examples: 10437
download_size: 22618503
dataset_size: 44368206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mrlyle/img-nov-20
|
mrlyle
|
2024-11-20T18:14:29Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-20T18:14:28Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': test
'1': train
splits:
- name: train
num_bytes: 20637.0
num_examples: 4
- name: test
num_bytes: 22886.0
num_examples: 3
download_size: 51062
dataset_size: 43523.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
MikeGreen2710/smoothed_all_df_1600_1700
|
MikeGreen2710
|
2024-12-06T17:30:36Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-06T17:30:35Z
| 0 |
---
dataset_info:
features:
- name: week_period
dtype: int64
- name: weighted_mean_price
dtype: float64
- name: variance_street_index
dtype: float64
- name: count_example
dtype: float64
- name: posterior_price
dtype: float64
- name: observation_weight
dtype: float64
- name: smoothed_price
dtype: float64
- name: posterior_weight
dtype: float64
- name: observation_reliable
dtype: float64
- name: smoothed_price_lower
dtype: float64
- name: smoothed_price_upper
dtype: float64
- name: city
dtype: string
- name: district
dtype: string
- name: ward
dtype: string
- name: street
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 958235
num_examples: 5519
download_size: 223601
dataset_size: 958235
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentranai07/FullData_TechniqueAG
|
nguyentranai07
|
2025-05-16T02:22:36Z
| 80 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-13T15:39:22Z
| 0 |
---
dataset_info:
features:
- name: instructions
dtype: string
- name: Context
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 873541382
num_examples: 180000
download_size: 200117931
dataset_size: 873541382
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kings-crown/FVELer_PISA_Proven
|
kings-crown
|
2025-02-26T21:59:24Z
| 14 | 0 |
[
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-26T04:33:07Z
| 0 |
---
license: mit
dataset_info:
features:
- name: natural_language_statement
dtype: string
- name: formal_proof
dtype: string
splits:
- name: train
num_bytes: 2550366
num_examples: 1138
download_size: 943224
dataset_size: 2550366
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Stratsyn/llama2-1b-resume-classification-test
|
Stratsyn
|
2024-10-21T06:17:51Z
| 52 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-21T06:14:03Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23060
num_examples: 210
download_size: 1788
dataset_size: 23060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kevin017/bioS_inverse_QA_birth_large_all_answer
|
kevin017
|
2025-04-10T11:37:57Z
| 17 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-10T11:36:12Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: all_answers
sequence: string
splits:
- name: train
num_bytes: 4071364
num_examples: 34961
- name: test
num_bytes: 4071364
num_examples: 34961
download_size: 2128510
dataset_size: 8142728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dgambettaphd/D_gen9_run1_llama2-7b_wiki_doc1000_real64_synt64
|
dgambettaphd
|
2024-12-02T07:40:47Z
| 17 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-02T07:40:45Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 584241
num_examples: 1000
download_size: 352050
dataset_size: 584241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fancyzhx/amazon_polarity
|
fancyzhx
|
2024-01-09T12:23:33Z
| 4,531 | 46 |
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1509.01626",
"region:us"
] |
[
"text-classification"
] |
2022-03-02T23:29:22Z
| 1 |
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Amazon Review Polarity
dataset_info:
config_name: amazon_polarity
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1604364432
num_examples: 3600000
- name: test
num_bytes: 178176193
num_examples: 400000
download_size: 1145430497
dataset_size: 1782540625
configs:
- config_name: amazon_polarity
data_files:
- split: train
path: amazon_polarity/train-*
- split: test
path: amazon_polarity/test-*
default: true
train-eval-index:
- config: amazon_polarity
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
content: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Amazon Review Polarity
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://registry.opendata.aws/
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Xiang Zhang](mailto:[email protected])
### Dataset Summary
The Amazon reviews dataset consists of reviews from Amazon.
The data span a period of 18 years, including ~35 million reviews up to March 2013.
Reviews include product and user information, ratings, and a plaintext review.
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the content and the title, predict the sentiment polarity (positive or negative).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A typical data point comprises a title, a content, and the corresponding label.
An example from the AmazonPolarity test set looks as follows:
```
{
'title':'Great CD',
'content':"My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I'm in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life's hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing ""Who was that singing ?""",
'label':1
}
```
### Data Fields
- 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n".
- 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n" (a small unescaping sketch follows below).
- 'label': either 1 (positive) or 0 (negative) rating.
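A hedged sketch of undoing the escaping described above (the helper name is invented for illustration):
```python
# Hedged sketch: reverse the escaping described for 'title' and 'content'.
# The helper name is invented for illustration.
def unescape(field: str) -> str:
    return field.replace('""', '"').replace("\\n", "\n")

print(unescape('He said ""great CD""\\nFive stars'))
# He said "great CD"
# Five stars
```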
### Data Splits
The Amazon reviews polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with score 3 are ignored. Each class has 1,800,000 training samples and 200,000 testing samples.
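The labeling rule just described, as a hedged sketch (function name invented):
```python
# Hedged sketch of the labeling rule: star ratings 1-2 -> negative (0),
# 4-5 -> positive (1); 3-star reviews are dropped.
def polarity_label(stars: int):
    if stars in (1, 2):
        return 0  # negative
    if stars in (4, 5):
        return 1  # positive
    return None   # score 3 is ignored
```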
## Dataset Creation
### Curation Rationale
The Amazon reviews polarity dataset is constructed by Xiang Zhang ([email protected]). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Apache License 2.0
### Citation Information
McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
|
jeffreygwang/pythia_dedupe_mia_0-97000_97000-98500
|
jeffreygwang
|
2025-01-06T22:49:55Z
| 63 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-05T03:31:17Z
| 0 |
---
dataset_info:
features:
- name: tokens
sequence: int64
- name: text
dtype: string
splits:
- name: member
num_bytes: 368149519
num_examples: 15000
- name: nonmember
num_bytes: 368412465
num_examples: 15000
download_size: 250016950
dataset_size: 736561984
configs:
- config_name: default
data_files:
- split: member
path: data/member-*
- split: nonmember
path: data/nonmember-*
---
|
fatihay/a
|
fatihay
|
2024-10-28T18:43:56Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-28T18:43:51Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 591756.0
num_examples: 663
download_size: 150658
dataset_size: 591756.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FrankFu2027/yourbench_example
|
FrankFu2027
|
2025-04-27T01:17:19Z
| 68 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-15T03:06:28Z
| 0 |
---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
- name: chunk_info_metrics
list:
- name: avg_token_length
dtype: float64
- name: bigram_diversity
dtype: float64
- name: flesch_reading_ease
dtype: float64
- name: gunning_fog
dtype: float64
- name: perplexity
dtype: float64
- name: token_count
dtype: float64
- name: unique_token_ratio
dtype: float64
- name: chunking_model
dtype: string
splits:
- name: train
num_bytes: 83943
num_examples: 2
download_size: 61170
dataset_size: 83943
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 21568
num_examples: 6
download_size: 15710
dataset_size: 21568
- config_name: lighteval
features:
- name: question
dtype: string
- name: additional_instructions
dtype: string
- name: ground_truth_answer
dtype: string
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
splits:
- name: train
num_bytes: 607609
num_examples: 55
download_size: 47767
dataset_size: 607609
- config_name: multi_hop_questions
features:
- name: document_id
dtype: string
- name: source_chunk_ids
sequence: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: citations
sequence: string
- name: raw_response
dtype: string
splits:
- name: train
num_bytes: 107049
num_examples: 12
download_size: 26711
dataset_size: 107049
- config_name: single_shot_questions
features:
- name: chunk_id
dtype: string
- name: document_id
dtype: string
- name: additional_instructions
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 271212
num_examples: 43
download_size: 42377
dataset_size: 271212
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 22479
num_examples: 2
download_size: 26371
dataset_size: 22479
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: multi_hop_questions
data_files:
- split: train
path: multi_hop_questions/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
danielshaps/nchlt_speech_nbl
|
danielshaps
|
2025-02-11T18:10:35Z
| 43 | 0 |
[
"task_categories:automatic-speech-recognition",
"language:nr",
"license:cc-by-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"automatic-speech-recognition"
] |
2025-02-11T17:03:47Z
| 0 |
---
license: cc-by-3.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: age
dtype: string
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: duration
dtype: string
- name: pdp_score
dtype: string
- name: utterance_id
dtype: string
- name: disfluency
dtype: bool
- name: text
dtype: string
- name: typos
dtype: bool
- name: split
dtype: string
splits:
- name: train
num_bytes: 3768382188.98
num_examples: 39415
- name: test
num_bytes: 265680079.204
num_examples: 3108
download_size: 3257143817
dataset_size: 4034062268.184
task_categories:
- automatic-speech-recognition
language:
- nr
pretty_name: NCHLT Speech Corpus -- isiNdebele
---
# NCHLT Speech Corpus -- isiNdebele
This is the isiNdebele language part of the NCHLT Speech Corpus of the South African languages.
Language code (ISO 639): `nbl`
URI: <https://hdl.handle.net/20.500.12185/272>
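A minimal sketch for loading the corpus with the Hugging Face `datasets` library; streaming is used here because the full set is roughly 3 GB of audio:
```python
from datasets import load_dataset

# Stream to avoid downloading ~3 GB of audio up front.
ds = load_dataset("danielshaps/nchlt_speech_nbl", split="train", streaming=True)

sample = next(iter(ds))
print(sample["text"])                    # orthographic transcription
print(sample["audio"]["sampling_rate"])  # decoded audio metadata
```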
## Licence:
Creative Commons Attribution 3.0 Unported License (CC BY 3.0): <http://creativecommons.org/licenses/by/3.0/legalcode>
## Attribution:
The Department of Arts and Culture of the government of the Republic of South Africa (DAC), Council for Scientific and Industrial Research (CSIR) and North-West University (NWU).
## Citation Information:
```
@inproceedings{barnard2014nchlt,
title={{The NCHLT speech corpus of the South African languages}},
author={Barnard, Etienne and Davel, Marelie H and Van Heerden, Charl and De Wet, Febe and Badenhorst, Jaco},
booktitle={Workshop Spoken Language Technologies for Under-resourced Languages (SLTU)},
year={2014}
}
```
|
timaeus/pubmed_central_max_loss_delta_ablation_l0h4
|
timaeus
|
2025-03-18T11:23:31Z
| 49 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-18T11:23:25Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: pile_set_name
dtype: string
splits:
- name: train
num_bytes: 88448432
num_examples: 10000
download_size: 42568236
dataset_size: 88448432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama3_sft_balanced_rr60k_ep3tmp10
|
1231czx
|
2024-12-28T17:58:15Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-28T17:58:14Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 19935055
num_examples: 5000
download_size: 6581768
dataset_size: 19935055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Minuskid/AndroidControl_300_samples_v2
|
Minuskid
|
2025-04-16T11:47:09Z
| 17 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-16T11:36:27Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
dtype: string
- name: pred_steps
sequence: string
- name: image_size
sequence: int64
splits:
- name: train
num_bytes: 197541333.0
num_examples: 312
- name: validation
num_bytes: 92426547.0
num_examples: 158
download_size: 289165293
dataset_size: 289967880.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
EunsuKim/temp_value
|
EunsuKim
|
2025-03-08T06:29:25Z
| 64 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-08T06:29:22Z
| 0 |
---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: string
- name: Prompt
dtype: string
- name: Context
dtype: string
- name: Label
dtype: string
- name: Valid_keys
dtype: string
- name: message_prompt
dtype: string
- name: message
dtype: string
- name: new_prompt
dtype: string
- name: output
dtype: string
- name: task_type
dtype: string
- name: target_type
dtype: string
- name: subject_type
dtype: string
- name: prompt
dtype: string
- name: context
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: reference
dtype: string
- name: problem_type
dtype: string
- name: benchmark_name
dtype: string
- name: original_category
dtype: string
- name: additional_info
dtype: string
splits:
- name: ko
num_bytes: 56903
num_examples: 8
- name: general
num_bytes: 9571
num_examples: 1
download_size: 94714
dataset_size: 66474
configs:
- config_name: default
data_files:
- split: ko
path: data/ko-*
- split: general
path: data/general-*
---
|
french-datasets/bismarck91_enA-frA-tokenised-part13
|
french-datasets
|
2025-06-21T14:12:17Z
| 0 | 0 |
[
"task_categories:audio-to-audio",
"language:fra",
"language:eng",
"region:us"
] |
[
"audio-to-audio"
] |
2025-06-21T14:12:04Z
| 0 |
---
language:
- fra
- eng
viewer: false
task_categories:
- audio-to-audio
---
This repository is empty; it was created to improve the search indexing of the dataset [bismarck91/enA-frA-tokenised-part13](https://huggingface.co/datasets/bismarck91/enA-frA-tokenised-part13).
|
Han03430/CoCoPIF
|
Han03430
|
2025-05-16T01:37:25Z
| 133 | 1 |
[
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] |
[
"question-answering"
] |
2025-05-15T07:48:32Z
| 0 |
---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: CoCoPIF
size_categories:
- n<1K
---
|
ll4m4i/LLM4
|
ll4m4i
|
2024-12-29T11:28:39Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-29T11:28:00Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 124385.16363636364
num_examples: 49
- name: test
num_bytes: 15230.836363636363
num_examples: 6
download_size: 108042
dataset_size: 139616.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
yulan-team/YuLan-Mini-Datasets
|
yulan-team
|
2025-04-11T14:15:50Z
| 438 | 9 |
[
"task_categories:text-generation",
"language:en",
"language:zh",
"size_categories:10M<n<100M",
"arxiv:2412.17743",
"region:us",
"code",
"math",
"science"
] |
[
"text-generation"
] |
2024-12-25T05:51:49Z
| 0 |
---
language:
- en
- zh
arxiv: 2412.17743
configs:
- config_name: code_instruction_opencoder-sft2-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/code/instruction/opencoder-sft2_qwen2.5_7B/*
- config_name: code_instruction_the-stack-v2-oss-qwen2.5-7B_score3
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-oss_qwen2.5-7B/*/score3/*
- config_name: code_instruction_the-stack-v2-oss-qwen2.5-7B_score4
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-oss_qwen2.5-7B/*/score4/*
- config_name: code_instruction_the-stack-v2-oss-qwen2.5-7B_score5
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-oss_qwen2.5-7B/*/score5/*
- config_name: code_instruction_the-stack-v2-seed_score0
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score0/*
- config_name: code_instruction_the-stack-v2-seed_score1
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score1/*
- config_name: code_instruction_the-stack-v2-seed_score2
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score2/*
- config_name: code_instruction_the-stack-v2-seed_score3
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score3/*
- config_name: code_instruction_the-stack-v2-seed_score4
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score4/*
- config_name: code_instruction_the-stack-v2-seed_score5
data_files:
- split: train
path: yulan-mini-syn/code/instruction/the-stack-v2-seed/score5/*
- config_name: code_pretrain_LeetCode_score0
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score0/*
- config_name: code_pretrain_LeetCode_score1
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score1/*
- config_name: code_pretrain_LeetCode_score2
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score2/*
- config_name: code_pretrain_LeetCode_score3
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score3/*
- config_name: code_pretrain_LeetCode_score4
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score4/*
- config_name: code_pretrain_LeetCode_score5
data_files:
- split: train
path: yulan-mini-syn/code/pretrain/LeetCode/score5/*
- config_name: cosmopedia_web_samples_v2_scored_score_3
data_files:
- split: train
path: cosmopedia/web_samples_v2_scored/score_3/train-*
- config_name: cosmopedia_web_samples_v2_scored_score_4
data_files:
- split: train
path: cosmopedia/web_samples_v2_scored/score_4/train-*
- config_name: cosmopedia_web_samples_v2_scored_score_5
data_files:
- split: train
path: cosmopedia/web_samples_v2_scored/score_5/train-*
- config_name: infimm-webmath-dedup_score-0
data_files:
- split: train
path: infimm-webmath-dedup/score-0/train-*
- config_name: infimm-webmath-dedup-score-2
data_files:
- split: train
path: infimm-webmath-dedup/score-2/train-*
- config_name: infimm-webmath-dedup_score-3
data_files:
- split: train
path: infimm-webmath-dedup/score-3/train-*
- config_name: infimm-webmath-dedup_score-4
data_files:
- split: train
path: infimm-webmath-dedup/score-4/train-*
- config_name: infimm-webmath-dedup_score-5
data_files:
- split: train
path: infimm-webmath-dedup/score-5/train-*
- config_name: math_instruction_AMPS-mathematica-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/AMPS_mathematica_qwen2.5-7B/*
- config_name: math_instruction_automathetext-oss-qwen2.5-7B_score2
data_files:
- split: train
path: yulan-mini-syn/math/instruction/automathetext-oss_qwen2.5-7B/score2/*
- config_name: math_instruction_automathetext-oss-qwen2.5-7B_score3
data_files:
- split: train
path: yulan-mini-syn/math/instruction/automathetext-oss_qwen2.5-7B/score3/*
- config_name: math_instruction_automathetext-oss-qwen2.5-7B_score4
data_files:
- split: train
path: yulan-mini-syn/math/instruction/automathetext-oss_qwen2.5-7B/score4/*
- config_name: math_instruction_automathetext-oss-qwen2.5-7B_score5
data_files:
- split: train
path: yulan-mini-syn/math/instruction/automathetext-oss_qwen2.5-7B/score5/*
- config_name: math_instruction_deepmind-math-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/deepmind_math_qwen2.5-7B/*
- config_name: math_instruction_dm-math-program-generated
data_files:
- split: train
path: yulan-mini-syn/math/instruction/dm_math_program_generated/*
- config_name: math_instruction_long-cot-qwq
data_files:
- split: train
path: yulan-mini-syn/math/instruction/long_cot_qwq/*
- config_name: math_instruction_math-others
data_files:
- split: train
path: yulan-mini-syn/math/instruction/math_others/*/*
- config_name: math_instruction_math-rethink
data_files:
- split: train
path: yulan-mini-syn/math/instruction/math-rethink/*/*
- config_name: math_instruction_mathinstruct-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/mathinstruct_qwen2.5_7B/*
- config_name: math_instruction_metamathqa-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/metamathqa_qwen2.5-7B/*
- config_name: math_instruction_numina
data_files:
- split: train
path: yulan-mini-syn/math/instruction/numina/*
- config_name: math_instruction_numina-qwen2.5-72B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/numina_qwen2.5_72B/*
- config_name: math_instruction_orca-math-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/orca_math_qwen2.5-7B/*
- config_name: math_instruction_revthink-metemath
data_files:
- split: train
path: yulan-mini-syn/math/instruction/revthink_metemath/*
- config_name: math_instruction_ruc-o1-mcts
data_files:
- split: train
path: yulan-mini-syn/math/instruction/ruc-o1-mcts/*
- config_name: math_instruction_syn-qa-qwen2.5-7B
data_files:
- split: train
path: yulan-mini-syn/math/instruction/syn_qa_qwen2.5-7B/*
- config_name: the-stack-v2_Jupyter_Notebook_md2_scored_classified_score_1
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md2_scored_classified/score_1/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md2_scored_classified_score_2
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md2_scored_classified/score_2/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md2_scored_classified_score_3
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md2_scored_classified/score_3/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md2_scored_classified_score_4
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md2_scored_classified/score_4/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md2_scored_classified_score_5
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md2_scored_classified/score_5/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md_scored_classified_score_2
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md_scored_classified/score_2/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md_scored_classified_score_3
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md_scored_classified/score_3/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md_scored_classified_score_4
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md_scored_classified/score_4/train-*
- config_name: the-stack-v2_Jupyter_Notebook_md_scored_classified_score_5
data_files:
- split: train
path: the-stack-v2/Jupyter_Notebook/md_scored_classified/score_5/train-*
- config_name: the_stack_v2_python_cleaned_scored_dedup_score_1
data_files:
- split: train
path: the_stack_v2_python_cleaned_scored_dedup/score_1/train-*
- config_name: the_stack_v2_python_cleaned_scored_dedup_score_2
data_files:
- split: train
path: the_stack_v2_python_cleaned_scored_dedup/score_2/train-*
- config_name: the_stack_v2_python_cleaned_scored_dedup_score_3
data_files:
- split: train
path: the_stack_v2_python_cleaned_scored_dedup/score_3/train-*
- config_name: the_stack_v2_python_cleaned_scored_dedup_score_4
data_files:
- split: train
path: the_stack_v2_python_cleaned_scored_dedup/score_4/train-*
- config_name: the_stack_v2_python_cleaned_scored_dedup_score_5
data_files:
- split: train
path: the_stack_v2_python_cleaned_scored_dedup/score_5/train-*
tags:
- code
- math
- science
task_categories:
- text-generation
---
## YuLan-Mini Datasets
- **🔥 Updated (April 11, 2025): for a clearer overview of the datasets, see this table: [link](https://docs.google.com/spreadsheets/d/1YP8-loVUxgxo36UEpOwflR3GRHLieBnLlCy8g10g8RU/edit?gid=0#gid=0).**
This dataset contains:
1. Classified data using [`python-edu-scorer`](https://huggingface.co/HuggingFaceTB/python-edu-scorer) and [`fineweb-edu-classifier`](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier)
2. Synthesized data (math, code, instruction, ...)
3. Retrieved data using [`math`](https://huggingface.co/yulan-team/math-classifier), [`code`](https://huggingface.co/yulan-team/code-classifier), and [`reasoning-classifier`](https://huggingface.co/yulan-team/reasoning-classifier)
## Notice
Since we used BPE-Dropout, the data we uploaded is tokenized to ensure accuracy.
For text datasets, please refer to [YuLan-Mini-Text-Datasets](https://huggingface.co/datasets/yulan-team/YuLan-Mini-Text-Datasets).
You can find the seed data [here](https://github.com/RUC-GSAI/YuLan-Mini/tree/main/pretrain/datasets).
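For illustration, a hedged sketch of recovering text from one tokenized config; the config name comes from the YAML above, while the tokenizer repo (`yulan-team/YuLan-Mini`) and the column name (`input_ids`) are assumptions:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Config name taken from the list above; streaming avoids a full download.
ds = load_dataset("yulan-team/YuLan-Mini-Datasets",
                  "code_pretrain_LeetCode_score5", split="train", streaming=True)
tok = AutoTokenizer.from_pretrained("yulan-team/YuLan-Mini")  # assumed tokenizer repo

row = next(iter(ds))
text = tok.decode(row["input_ids"])  # "input_ids" is a guessed column name
print(text[:200])
```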
---
## Contributing
We welcome any form of contribution, including feedback on model bad cases, feature suggestions, and example contributions. You can do so by submitting an [issue](https://github.com/RUC-GSAI/YuLan-Mini/issues).
## The Team
YuLan-Mini is developed and maintained by [AI Box, Renmin University of China](http://aibox.ruc.edu.cn/).
## License
- The code in this repository, the model weights, and optimizer states are released under the [MIT License](./LICENSE).
- Policies regarding the use of model weights, intermediate optimizer states, and training data will be announced in future updates.
- Limitations: Despite our efforts to mitigate safety concerns and encourage the generation of ethical and lawful text, the probabilistic nature of language models may still lead to unexpected outputs. For instance, responses might contain bias, discrimination, or other harmful content. Please refrain from disseminating such content. We are not liable for any consequences arising from the spread of harmful information.
## Citation
If you find YuLan-Mini helpful for your research or development, please cite [our technical report](https://arxiv.org/abs/2412.17743):
```
@article{hu2024yulan,
title={YuLan-Mini: An Open Data-efficient Language Model},
author={Hu, Yiwen and Song, Huatong and Deng, Jia and Wang, Jiapeng and Chen, Jie and Zhou, Kun and Zhu, Yutao and Jiang, Jinhao and Dong, Zican and Zhao, Wayne Xin and others},
journal={arXiv preprint arXiv:2412.17743},
year={2024}
}
```
|
Mike22g/tanarav2
|
Mike22g
|
2025-04-05T23:12:53Z
| 14 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-04-05T23:10:32Z
| 0 |
---
license: apache-2.0
---
|
michsethowusu/dyula-tsonga_sentence-pairs
|
michsethowusu
|
2025-04-02T12:58:00Z
| 6 | 0 |
[
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-02T12:57:56Z
| 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Dyula
dtype: string
- name: Tsonga
dtype: string
splits:
- name: train
num_bytes: 10700833
num_examples: 82077
download_size: 10700833
dataset_size: 10700833
configs:
- config_name: default
data_files:
- split: train
path: Dyula-Tsonga_Sentence-Pairs.csv
---
# Dyula-Tsonga_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Dyula-Tsonga_Sentence-Pairs
- **Number of Rows**: 82077
- **Number of Columns**: 3
- **Columns**: score, Dyula, Tsonga
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Dyula`: The first sentence in the pair (language 1).
3. `Tsonga`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
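A minimal sketch for loading the pairs and keeping only high-confidence alignments (the 0.9 threshold is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("michsethowusu/dyula-tsonga_sentence-pairs", split="train")

# Keep only pairs whose alignment score suggests a reliable translation.
high_conf = ds.filter(lambda row: row["score"] >= 0.9)
print(len(high_conf), "pairs retained")
print(high_conf[0]["Dyula"], "->", high_conf[0]["Tsonga"])
```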
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL Workshop on Representation Learning for NLP, 2017.
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB.
[9] Paul-Ambroise Duquenne, Hongyu Gong and Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.
|
hieuhb2003/quora
|
hieuhb2003
|
2025-02-14T10:30:39Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-14T10:30:36Z
| 0 |
---
dataset_info:
features:
- name: dialogue
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 61566670
num_examples: 56402
download_size: 35530878
dataset_size: 61566670
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Samll/gsm8k_early_answering_data_v2
|
Samll
|
2025-01-20T11:21:53Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-20T11:21:51Z
| 0 |
---
dataset_info:
features:
- name: Original_Prompt
dtype: string
- name: Original_COT
dtype: string
- name: Original_CoT_answer
dtype: string
- name: Truncated_Input
dtype: string
- name: Truncated_Input_response
dtype: string
- name: Truncated_Input_extracted_answer
dtype: string
- name: Truncated_CoT_length
dtype: int64
- name: Correct_Answer
dtype: string
splits:
- name: train
num_bytes: 22528435
num_examples: 8147
download_size: 1452620
dataset_size: 22528435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Chojins/chess_game_002_white
|
Chojins
|
2025-05-22T04:45:09Z
| 31 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"chess",
"game"
] |
[
"robotics"
] |
2025-01-06T23:52:37Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- chess
- game
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 23962,
"total_tasks": 1,
"total_videos": 100,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
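A minimal usage sketch with the `lerobot` library; the import path follows recent LeRobot releases and is an assumption that may differ across versions:
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Frames are indexed globally; each item follows the feature schema above.
dataset = LeRobotDataset("Chojins/chess_game_002_white")

frame = dataset[0]
print(frame["action"])             # 6-dim target joint positions
print(frame["observation.state"])  # 6-dim measured joint positions
```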
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
TAUR-dev/MEASURESrw_v3_4d_eval__test
|
TAUR-dev
|
2025-05-23T18:40:30Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-23T17:00:37Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: eval_prompt
dtype: string
- name: eval_internal_cot
dtype: string
- name: eval_solution
dtype: string
- name: raw_eval_prompt
dtype: string
- name: judge_correct
dtype: bool
- name: judge_reasoning
dtype: string
- name: model_name
dtype: string
- name: measurement_answer_verification_reasoning
dtype: string
- name: measurement_answer_verification_final_count
dtype: int64
- name: measurement_answer_verification_metadata
sequence: string
- name: measurement_answer_verification_raw_response
dtype: string
- name: extracted_answer_values
sequence: string
- name: measurement_answer_verification_final_recount
dtype: int64
splits:
- name: train
num_bytes: 19509
num_examples: 5
download_size: 23073
dataset_size: 19509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haydn-jones/TxGemma_V0_200k
|
haydn-jones
|
2025-04-04T15:38:01Z
| 16 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-04T15:37:53Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: large_string
- name: output
dtype: large_string
splits:
- name: train
num_bytes: 199280938
num_examples: 200000
download_size: 29278107
dataset_size: 199280938
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uzair921/CONLL2003_LLM_RAG_42_50_MiniLM
|
uzair921
|
2025-01-08T10:44:59Z
| 17 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-08T10:44:55Z
| 0 |
---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 2470469
num_examples: 9829
- name: validation
num_bytes: 866541
num_examples: 3250
- name: test
num_bytes: 784956
num_examples: 3453
download_size: 1002827
dataset_size: 4121966
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
MewtwoX23/ade20k
|
MewtwoX23
|
2024-11-10T15:34:27Z
| 40 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-10T09:11:55Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: seg
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 31089648080.43
num_examples: 20210
download_size: 31282897836
dataset_size: 31089648080.43
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vlinhd11/viVoice-v1-p7
|
vlinhd11
|
2025-03-02T17:27:32Z
| 19 | 0 |
[
"region:us"
] |
[] |
2025-01-16T16:40:28Z
| 0 |
---
dataset_info:
features:
- name: channel
dtype: string
- name: text
dtype: string
- name: audio
dtype: audio
- name: id
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 23797146613.0
num_examples: 120000
download_size: 22782359395
dataset_size: 23797146613.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kh4dien/amazon_reviews
|
kh4dien
|
2025-02-17T06:33:48Z
| 14 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-17T06:29:15Z
| 0 |
---
dataset_info:
features:
- name: user
dtype: string
- name: assistant
dtype: string
- name: label
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 2535310.587701613
num_examples: 3132
download_size: 1408404
dataset_size: 2535310.587701613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jjeccles/venue-dataset02-08
|
jjeccles
|
2025-02-11T10:12:14Z
| 14 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-11T10:12:10Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 36451312
num_examples: 3634
download_size: 6022038
dataset_size: 36451312
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Port-Pilot/PortMIS-QA-Dataset
|
Port-Pilot
|
2024-10-13T06:02:18Z
| 22 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-13T05:25:01Z
| 0 |
---
dataset_info:
features:
- name: CONTEXT
dtype: string
- name: QUESTION
dtype: string
- name: ANSWER
dtype: string
splits:
- name: train
num_bytes: 893330
num_examples: 2500
download_size: 222275
dataset_size: 893330
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abhinav302019/olympiad_data_417
|
abhinav302019
|
2025-03-07T04:15:44Z
| 17 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-07T04:15:42Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 33798
num_examples: 10
download_size: 35348
dataset_size: 33798
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kasianenko/samsum
|
kasianenko
|
2025-05-22T08:46:45Z
| 0 | 0 |
[
"license:cc-by-nc-4.0",
"region:us"
] |
[] |
2025-05-22T08:46:45Z
| 0 |
---
license: cc-by-nc-4.0
---
|
MilaWang/math7k
|
MilaWang
|
2025-03-05T04:08:52Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-05T04:08:51Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: difficulty
dtype: float64
- name: type
dtype: string
splits:
- name: train
num_bytes: 1860760
num_examples: 7474
download_size: 887944
dataset_size: 1860760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_fffa87fb-0fd5-4faa-a3c3-66aeaf3d9eb7
|
argilla-internal-testing
|
2024-11-06T09:05:43Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-06T09:05:42Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cedricclyburn/DoclingTests
|
cedricclyburn
|
2025-04-26T09:37:03Z
| 38 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-04-26T09:36:35Z
| 0 |
---
license: apache-2.0
---
|
amraly1983/shadcn_chat_template
|
amraly1983
|
2025-05-05T21:52:37Z
| 0 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-05-05T21:45:53Z
| 0 |
---
license: apache-2.0
---
|
higopires/RePro-categories-multilabel
|
higopires
|
2025-01-24T18:14:35Z
| 192 | 0 |
[
"task_categories:text-classification",
"language:pt",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"multilabel"
] |
[
"text-classification"
] |
2025-01-24T17:47:21Z
| 0 |
---
dataset_info:
features:
- name: review_text
dtype: string
- name: ENTREGA
dtype: int64
- name: OUTROS
dtype: int64
- name: PRODUTO
dtype: int64
- name: CONDICOESDERECEBIMENTO
dtype: int64
- name: INADEQUADA
dtype: int64
- name: ANUNCIO
dtype: int64
splits:
- name: train
num_bytes: 1599889
num_examples: 8002
- name: validation
num_bytes: 194266
num_examples: 994
- name: test
num_bytes: 199787
num_examples: 1007
download_size: 987114
dataset_size: 1993942
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: cc-by-sa-4.0
language:
- pt
task_categories:
- text-classification
tags:
- multilabel
size_categories:
- 1K<n<10K
---
# RePro: A Benchmark Dataset for Opinion Mining in Brazilian Portuguese
RePro, which stands for "REview of PROducts," is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 humanly annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (https://github.com/americanas-tech/b2w-reviews01). The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields.
This dataset is a processed version of RePro, where only the columns with the opinions and their categories are kept. Three stratified splits (80%/10%/10%) were created during the processing, using the scikit-multilearn library.
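For illustration, a hedged sketch of producing such multilabel-stratified splits with scikit-multilearn (not the authors' exact script):
```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# Toy stand-ins: 100 reviews, six binary topic labels (as in RePro).
X = np.arange(100).reshape(-1, 1)
y = np.random.randint(0, 2, size=(100, 6))

# 80% train, then split the remaining 20% evenly into validation and test.
X_train, y_train, X_rest, y_rest = iterative_train_test_split(X, y, test_size=0.2)
X_val, y_val, X_test, y_test = iterative_train_test_split(X_rest, y_rest, test_size=0.5)
print(len(X_train), len(X_val), len(X_test))
```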
# Citation
When utilizing or referencing this dataset, kindly cite the following publication:
``` latex
@inproceedings{dos2024repro,
title={RePro: a benchmark for Opinion Mining for Brazilian Portuguese},
author={dos Santos Silva, Lucas Nildaimon and Real, Livy and Zandavalle, Ana Claudia Bianchini and Rodrigues, Carolina Francisco Gadelha and da Silva Gama, Tatiana and Souza, Fernando Guedes and Zaidan, Phillipe Derwich Silva},
booktitle={Proceedings of the 16th International Conference on Computational Processing of Portuguese},
pages={432--440},
year={2024}
}
```
# Contributions
Thanks to [@lucasnil](https://github.com/lucasnil) for adding this dataset.
|
mlfoundations-dev/openmathreasoning_30k
|
mlfoundations-dev
|
2025-04-28T04:45:34Z
| 45 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-28T04:43:49Z
| 0 |
---
dataset_info:
features:
- name: expected_answer
dtype: string
- name: problem_type
dtype: string
- name: problem_source
dtype: string
- name: generation_model
dtype: string
- name: pass_rate_72b_tir
dtype: string
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: inference_mode
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1411683041.7943301
num_examples: 31600
download_size: 632002281
dataset_size: 1411683041.7943301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Fayeben/GetMesh
|
Fayeben
|
2025-06-14T03:42:48Z
| 0 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-06-14T03:42:48Z
| 0 |
---
license: apache-2.0
---
|
ljding94/Ladder_Polymer
|
ljding94
|
2025-05-01T14:31:25Z
| 27 | 0 |
[
"task_categories:feature-extraction",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"doi:10.57967/hf/5316",
"region:us",
"physics",
"chemistry"
] |
[
"feature-extraction"
] |
2025-03-25T05:20:08Z
| 0 |
---
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- physics
- chemistry
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
konwoo/auto-rm-erall-k800000-lr1e-5-epochs1-er1-0
|
konwoo
|
2025-04-22T15:32:41Z
| 23 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-22T15:31:33Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2677548118
num_examples: 800000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 1690050373
dataset_size: 2686123097
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
french-datasets/ekazuki-text_to_french_parliament_group_debates
|
french-datasets
|
2025-03-29T20:32:26Z
| 16 | 0 |
[
"language:fra",
"region:us"
] |
[] |
2025-03-29T20:32:25Z
| 0 |
---
language: "fra"
viewer: false
---
This repository is empty; it was created to improve the search indexing of the dataset huggingface.co/datasets/ekazuki/text_to_french_parliament_group_debates.
|
mlfoundations-dev/aops_forum_diagrams
|
mlfoundations-dev
|
2025-01-23T20:46:30Z
| 12 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-23T20:46:28Z
| 0 |
---
dataset_info:
features:
- name: source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 418050.6568003965
num_examples: 144
download_size: 425997
dataset_size: 418050.6568003965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
netvu21/llama2_custom_code
|
netvu21
|
2024-11-28T07:48:32Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-28T07:48:29Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7786
num_examples: 32
download_size: 4172
dataset_size: 7786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zhexian17/blend-dataset
|
zhexian17
|
2025-04-20T06:10:15Z
| 21 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T06:09:20Z
| 0 |
---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: blended_image
dtype: image
splits:
- name: train
num_bytes: 205649347.0
num_examples: 1000
download_size: 205622248
dataset_size: 205649347.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
infinite-dataset-hub/B2BConnect
|
infinite-dataset-hub
|
2025-02-08T16:19:58Z
| 10 | 0 |
[
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] |
[] |
2025-02-08T16:19:57Z
| 0 |
---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# B2BConnect
tags: relationship mapping, B2B, Jacksonville
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The CSV file 'B2BConnect.csv' is designed to catalog and facilitate the identification of potential business-to-business (B2B) networking opportunities in Jacksonville, Florida. The dataset focuses on capturing the essence of local businesses and their networking preferences, aiming to aid ML practitioners in predicting successful B2B relationships. Each row represents a business entity with attributes related to its profile, networking history, and preferences. The 'labels' column indicates whether the business has been previously involved in successful networking events or partnerships.
**CSV Content Preview:**
```
name,business_type,industry,years_in_business,participation_history,preferred_networking_events,labels
"Florida Tech Enterprises", "Technology", "IT Services", 10, "Recent - 5 events", "Tech Meetups, Hackathons", "Positive"
"Northbank Consulting", "Consulting", "Business Advisory", 15, "Recent - 3 events", "Seminars, Professional Associations", "Negative"
"Jacksonville Health Solutions", "Healthcare", "Medical Devices", 8, "Occasional - 2 events", "Healthcare Expos, Innovation Conferences", "Neutral"
"Beachfront Catering", "Food & Beverage", "Catering Services", 5, "Never", "Local Food Festivals, Corporate Events", "Positive"
"Riverdale Constructions", "Construction", "Commercial Buildings", 12, "Recent - 4 events", "Industry Conferences, Networking Events", "Negative"
```
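A minimal pandas sketch for loading the file (filename and column names taken from the preview above):
```python
import pandas as pd

df = pd.read_csv("B2BConnect.csv")  # filename from the card; adjust the path as needed

# Map the free-text outcome labels onto integers for modelling.
df["label_id"] = df["labels"].map({"Negative": 0, "Neutral": 1, "Positive": 2})
print(df[["name", "industry", "label_id"]].head())
```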
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'Jacksonville florida b2b networking':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=Jacksonville+florida+b2b+networking&dataset=B2BConnect&tags=relationship+mapping,+B2B,+Jacksonville
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
CharlesPing/climate-cross-encoder-mixed-neg-v2
|
CharlesPing
|
2025-05-17T06:24:20Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-17T06:24:14Z
| 0 |
---
dataset_info:
features:
- name: query
dtype: string
- name: doc
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 5218831
num_examples: 18660
- name: validation
num_bytes: 534073
num_examples: 1950
download_size: 2092645
dataset_size: 5752904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Lingalingeswaran/combined_english_sinhala
|
Lingalingeswaran
|
2025-05-14T18:40:36Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-14T18:40:23Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 116505746.66666667
num_examples: 1600
- name: test
num_bytes: 29126648.266666666
num_examples: 400
download_size: 166667739
dataset_size: 145632394.93333334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
b6Amine/MNLP_M2_quantized_dataset
|
b6Amine
|
2025-05-27T21:04:57Z
| 0 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-27T21:02:22Z
| 0 |
---
license: apache-2.0
---
|
YRC10/MASH
|
YRC10
|
2025-05-14T07:39:09Z
| 0 | 0 |
[
"task_categories:text-classification",
"task_categories:image-classification",
"task_categories:video-classification",
"annotations_creators:expert-annotated",
"annotations_creators:LLM",
"annotations_creators:mix",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"climate",
"social-media",
"hurricane",
"TikTok",
"Twitter",
"YouTube",
"Reddit",
"multimodal",
"information-integrity",
"humanitarian",
"bias"
] |
[
"text-classification",
"image-classification",
"video-classification"
] |
2025-05-14T04:48:10Z
| 0 |
---
MASH: A Multiplatform Annotated Dataset for Societal Impact of Hurricane
license: cc-by-4.0
tags:
- climate
- social-media
- hurricane
- TikTok
- Twitter
- YouTube
- Reddit
- multimodal
- information-integrity
- humanitarian
- bias
size_categories:
- 10K<n<100K
pretty_name: Multiplatform Annotated Dataset for Societal Impact of Hurricane
language: en
description: >
We present a Multiplatform Annotated Dataset for Societal Impact of Hurricane (MASH) that includes 98,662 relevant social media posts from Reddit, X, TikTok, and YouTube.
In addition, all relevant posts are annotated on three dimensions: Humanitarian Classes, Bias Classes, and Information Integrity Classes, using a multimodal approach that considers both textual and visual content, providing a rich labeled dataset for in-depth analysis.
The dataset is also complemented by an Online Analytics Platform that not only lets users view hurricane-related posts and articles, but also supports exploring high-frequency keywords, user sentiment, and the locations where posts were made.
To the best of our knowledge, MASH is the first large-scale, multi-platform, multimodal, and multi-dimensionally annotated hurricane dataset.
We envision that MASH can contribute to the study of hurricanes' impact on society, through tasks such as disaster severity classification, event detection, public sentiment analysis, and bias identification.
dataset_type:
- multimodal
- text
- image
- video
annotations_creators:
- expert-annotated
- LLM
- mix
task_categories:
- text-classification
- image-classification
- video-classification
date_published: 2025-05-14
version: 1.0
modalities:
- text
- image
- video
---
# MASH: A Multiplatform Annotated Dataset for Societal Impact of Hurricane
We present a Multiplatform Annotated Dataset for Societal Impact of Hurricane (MASH) that includes **98,662** relevant social media posts from **Reddit**, **X**, **TikTok**, and **YouTube**.
In addition, all relevant posts are annotated on three dimensions: **Humanitarian Classes**, **Bias Classes**, and **Information Integrity Classes**, using a multimodal approach that considers both textual and visual content (text, images, and videos), providing a rich labeled dataset for in-depth analysis.
The dataset is also complemented by an [Online Analytics Platform](https://hurricane.web.illinois.edu/) that not only lets users view hurricane-related posts and articles, but also supports exploring high-frequency keywords, user sentiment, and the locations where posts were made.
To the best of our knowledge, MASH is the first large-scale, multi-platform, multimodal, and multi-dimensionally annotated hurricane dataset.
We envision that MASH can contribute to the study of hurricanes' impact on society, through tasks such as disaster severity classification, event detection, public sentiment analysis, and bias identification.
## 📘 Usage Notice
This dataset includes four annotation files:
• reddit_anno_publish.csv
• tiktok_anno_publish.csv
• twitter_anno_publish.csv
• youtube_anno_publish.csv
Each file contains post IDs and corresponding annotations on three dimensions:
Humanitarian Classes, Bias Classes, and Information Integrity Classes.
To protect user privacy, only post IDs are released. We recommend retrieving the full post content via the official APIs of each platform, in accordance with their respective terms of service.
- [Reddit API](https://www.reddit.com/dev/api)
- [TikTok API](https://developers.tiktok.com/products/research-api)
- [X/Twitter API](https://developer.x.com/en/docs/x-api)
- [YouTube API](https://developers.google.com/youtube/v3)
## Humanitarian Classes
Each post is annotated with seven binary humanitarian classes. For each class, the label is either:
• True – the post contains this humanitarian information
• False – the post does not contain this information
These seven humanitarian classes include:
• Casualty: The post reports people or animals who are killed, injured, or missing during the hurricane.
• Evacuation: The post describes the evacuation, relocation, rescue, or displacement of individuals or animals due to the hurricane.
• Damage: The post reports damage to infrastructure or public utilities caused by the hurricane.
• Advice: The post provides advice, guidance, or suggestions related to hurricanes, including how to stay safe, protect property, or prepare for the disaster.
• Request: The post requests help, support, or resources in response to the hurricane.
• Assistance: The post describes assistance provided, including both physical aid and emotional or psychological support from individuals, communities, or organizations.
• Recovery: The post describes efforts or activities related to the recovery and rebuilding process after the hurricane.
Note: A single post may be labeled as True for multiple humanitarian categories.
## Bias Classes
Each post is annotated with five binary bias classes. For each class, the label is either:
• True – the post contains this bias information
• False – the post does not contain this information
These five bias classes include:
• Linguistic Bias: The post contains biased, inappropriate, or offensive language, with a focus on word choice, tone, or expression.
• Political Bias: The post expresses political ideology, showing favor or disapproval toward specific political actors, parties, or policies.
• Gender Bias: The post contains biased, stereotypical, or discriminatory language or viewpoints related to gender.
• Hate Speech: The post contains language that expresses hatred, hostility, or dehumanization toward a specific group or individual, especially those belonging to minority or marginalized communities.
• Racial Bias: The post contains biased, discriminatory, or stereotypical statements directed toward one or more racial or ethnic groups.
Note: A single post may be labeled as True for multiple bias categories.
## Information Integrity Classes
Each post is also annotated with a single information integrity class, represented by an integer:
• -1 → False information (i.e., misinformation or disinformation)
• 0 → Unverifiable information (unclear or lacking sufficient evidence)
• 1 → True information (verifiable and accurate)
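To show how these three annotation dimensions could be decoded together, here is a hedged pandas sketch; the column names (`Casualty`, `Damage`, `integrity`) are assumptions about the released CSVs, not documented fields.
```python
# Sketch of decoding the three annotation dimensions; column names are assumed.
import pandas as pd

df = pd.read_csv("twitter_anno_publish.csv")

# Information integrity: map the integer codes to readable labels.
integrity_names = {-1: "false", 0: "unverifiable", 1: "true"}
df["integrity_label"] = df["integrity"].map(integrity_names)  # assumed column

# Humanitarian/bias classes are independent booleans, so posts can be
# filtered on any combination, e.g. casualty reports that also mention damage.
both = df[df["Casualty"] & df["Damage"]]                      # assumed columns
print(f"{len(both)} posts report both casualties and damage")
```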
## Citation
ZENODO DOI: [10.5281/zenodo.15401479](https://zenodo.org/records/15401479)
|
appletea2333/activity_tvg
|
appletea2333
|
2025-03-19T15:35:34Z
| 103 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-19T15:35:28Z
| 0 |
---
dataset_info:
features:
- name: video
dtype: string
- name: caption
dtype: string
- name: timestamp
sequence: float16
splits:
- name: test
num_bytes: 1612225
num_examples: 17031
download_size: 820906
dataset_size: 1612225
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
svjack/Sebastian_Michaelis_Videos_Captioned
|
svjack
|
2025-04-22T00:21:17Z
| 144 | 0 |
[
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-04-22T00:06:15Z
| 0 |
---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---



|
gtsaidata/SelfCompoundedDevanagariCharacters
|
gtsaidata
|
2025-02-22T04:22:17Z
| 11 | 0 |
[
"task_categories:text-classification",
"language:en",
"region:us",
"Self Compounded Devanagari Characters",
"Optical Character Recognition",
"OCR technology for Devanagari"
] |
[
"text-classification"
] |
2025-02-22T04:16:51Z
| 0 |
---
language:
- en
task_categories:
- text-classification
tags:
- Self Compounded Devanagari Characters
- Optical Character Recognition
- OCR technology for Devanagari
---
Description:
<a href="https://gts.ai/dataset-download/self-compounded-devanagari-characters/" target="_blank">👉 Download the dataset here</a>
The Self-Compounded Devanagari Characters dataset focuses on a crucial aspect of Optical Character Recognition (OCR) for Devanagari script, essential for preserving ancient scriptures and making them more accessible in the digital age. By leveraging this dataset, researchers can enhance AI systems to recognize complex Devanagari characters accurately. Digitization also makes handwritten text “web-friendly” and searchable, preserving knowledge that would otherwise risk being lost or damaged.
Purpose of the Dataset
While OCR technology for Devanagari exists, one major gap has been the recognition of compound characters—those consisting of a half letter and a full letter. This dataset specifically addresses this gap, aiding researchers, developers, and academic institutions in creating systems that can accurately detect and digitize these characters, thus preserving not only the text but also the language itself.
Dataset Composition
The dataset primarily consists of compound Devanagari characters, which are essential for enhancing OCR systems for Devanagari script. Each entry was carefully cleaned and validated before inclusion in the final dataset. This cleaning process ensures that the dataset is ready for immediate use in research and development of AI models, specifically those focused on Devanagari OCR.
Applications of the Dataset
This dataset has several potential applications:
Academic Research: Aiding studies in linguistics, script recognition, and AI for ancient language preservation.
AI & Machine Learning: Training OCR models to improve recognition of complex Devanagari script.
Language Digitization: Helping in the digital preservation of sacred texts, manuscripts, and other handwritten documents.
Collaborative Development: Open-source software available for expansion and adaptation to other languages, enabling a wide range of future applications.
Future Scope
As the dataset continues to grow, contributions from researchers, developers, and data scientists are encouraged. Future extensions could include additional languages, more complex ligature combinations, and larger sample sizes to create an even more comprehensive resource for OCR development. With the open-source data collection tool, the research community is invited to expand the dataset and collaborate in furthering OCR technology for a wide range of scripts.
Conclusion
The Self-Compounded Devanagari Characters Dataset fills a crucial gap in OCR technology for Devanagari script. Created during a challenging global situation, this project has laid the foundation for the digital preservation of ancient texts. With continued collaboration and contributions, it will serve as an invaluable resource for linguistic and AI research, helping to preserve cultural heritage in the digital age.
This dataset is sourced from Kaggle.
|
deu05232/promptriever-ours-v6-vanilla2
|
deu05232
|
2025-04-11T10:07:49Z
| 13 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-11T10:01:20Z
| 0 |
---
dataset_info:
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: explanation
dtype: string
- name: followir_score
dtype: float64
- name: joint_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: only_instruction
dtype: string
- name: only_query
dtype: string
- name: has_instruction
dtype: bool
- name: new_negatives
list:
- name: docid
dtype: string
- name: explanation
dtype: string
- name: followir_score
dtype: float64
- name: joint_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: d_inst_negatives
list:
- name: docid
dtype: string
- name: explanation
dtype: string
- name: followir_score
dtype: float64
- name: joint_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: is_gpt
dtype: bool
splits:
- name: train
num_bytes: 13230352617
num_examples: 980247
download_size: 7506456800
dataset_size: 13230352617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uzair921/QWEN_SKILLSPAN_EMBEDDINGS_LLM_RAG_50
|
uzair921
|
2024-10-11T19:54:47Z
| 51 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-11T19:54:43Z
| 0 |
---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-Skill
'2': I-Skill
splits:
- name: train
num_bytes: 1028438
num_examples: 2065
- name: validation
num_bytes: 715196
num_examples: 1397
- name: test
num_bytes: 758463
num_examples: 1523
download_size: 450493
dataset_size: 2502097
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Dongkkka/koch_test10
|
Dongkkka
|
2025-04-15T07:18:40Z
| 48 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2025-04-15T07:18:35Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 2,
"total_frames": 888,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
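Since the config above exposes the episodes as parquet files, one plausible way to peek at the tabular part of this dataset is via `datasets` (a sketch, not an official loading recipe; the videos live in separate mp4 files and are not part of these rows):
```python
# Sketch: read the parquet frames of this LeRobot dataset with `datasets`.
from datasets import load_dataset

ds = load_dataset("Dongkkka/koch_test10", split="train")
frame = ds[0]
print(frame["action"])        # 6 motor targets, per the feature schema above
print(frame["episode_index"], frame["timestamp"])
```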
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
justus27/s2-bigmath
|
justus27
|
2025-05-05T23:23:29Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-05T23:23:27Z
| 0 |
---
dataset_info:
features:
- name: problem_id
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 94831922
num_examples: 251122
download_size: 33716599
dataset_size: 94831922
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nsarrazin/lichess-games-2016-12
|
nsarrazin
|
2024-10-29T06:22:07Z
| 68 | 0 |
[
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-14T01:55:55Z
| 0 |
---
dataset_info:
features:
- name: white_elo
dtype: uint16
- name: black_elo
dtype: uint16
- name: avg_elo
dtype: uint16
- name: result
dtype: uint8
- name: moves
sequence: string
- name: checkmate
dtype: bool_
splits:
- name: train
num_bytes: 5183946245
num_examples: 9322331
download_size: 1093073273
dataset_size: 5183946245
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RobotisSW/omx_5
|
RobotisSW
|
2025-04-25T01:34:42Z
| 53 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2025-04-25T01:34:37Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "omx",
"total_episodes": 2,
"total_frames": 162,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
5
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
5
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_gripper"
]
},
"observation.images.cam_1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
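To inspect the `meta/info.json` file referenced above without pulling the whole repository, one option is `hf_hub_download` (a sketch; the repo id is taken from this card):
```python
# Sketch: fetch and inspect meta/info.json for this dataset.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RobotisSW/omx_5",
    filename="meta/info.json",
    repo_type="dataset",
)
with open(path) as f:
    info = json.load(f)
print(info["fps"], info["robot_type"], list(info["features"]))
```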
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
seongil-dn/miracl-relevance-dataset
|
seongil-dn
|
2025-03-17T07:55:48Z
| 14 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-17T07:55:44Z
| 0 |
---
dataset_info:
features:
- name: query
dtype: string
- name: positives
sequence: string
- name: negatives
sequence: string
splits:
- name: train
num_bytes: 6820437
num_examples: 1474
download_size: 3851414
dataset_size: 6820437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nbd22/gsm8k-deepseek-completions
|
nbd22
|
2025-01-27T21:29:03Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-27T21:29:01Z
| 0 |
---
dataset_info:
features:
- name: completions
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompts
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 23124
num_examples: 12
download_size: 11498
dataset_size: 23124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hub. This dataset is updated on a daily basis and includes the cards of publicly available datasets on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of dataset cards from the Hub. We hope it will support research into dataset cards and their use, but the format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is also possible to use the Hugging Face Hub API or client library to download dataset cards directly; that option may be preferable if you have a very specific use case or require a different format.
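As a sketch of the client-library route just mentioned, `huggingface_hub` can load a single card directly:
```python
# Sketch: pull one dataset card straight from the Hub with huggingface_hub.
from huggingface_hub import DatasetCard

card = DatasetCard.load("nsarrazin/lichess-games-2016-12")
print(card.data.to_dict())   # parsed YAML metadata
print(card.text[:200])       # markdown body
```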
Source Data
The source data is README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a cron job on a daily basis.
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we make no claims about the accuracy of the information they contain. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the underlying data. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact