datasetId (large_string) | author (large_string) | last_modified (large_string) | downloads (int64) | likes (int64) | tags (large list) | task_categories (large list) | createdAt (large_string) | trending_score (float64) | card (large_string)
---|---|---|---|---|---|---|---|---|---|
NeuroDonu/PortableVersions | NeuroDonu | 2025-05-01T07:05:29Z | 4,255 | 12 | [
"language:ru",
"language:en",
"license:apache-2.0",
"region:us",
"code"
] | [] | 2024-10-06T08:37:01Z | 2 | ---
license: apache-2.0
language:
- ru
- en
tags:
- code
---
### **Installation Guide for Musubi Tuner Prerequisites (Windows)**
**Important Note:** These instructions detail the installation of specific versions of CUDA, MSVC, Triton, and SageAttention using potentially unofficial builds hosted on Hugging Face. Proceed with caution and understand that these might not be the standard or officially supported methods. These steps cover the *dependencies*; the final step to install Musubi Tuner itself is not included here.
This guide outlines the steps to install the necessary prerequisites for potentially running Musubi Tuner, based on the provided components and links. If you run into any errors, post in the topic in my chat --> https://t.me/neuralchatik/9586. Otherwise, DM --> https://t.me/NeuroDonu
**Prerequisites:**
* Windows Operating System (Windows 10 or 11 specified)
* NVIDIA GPU compatible with CUDA 12.8
* 7-Zip (or another tool capable of extracting `.7z` archives)
**Step 1: Install CUDA Toolkit and Additional Components (TensorRT/cuDNN)**
1. **Download Base CUDA Toolkit:**
* Go to the NVIDIA CUDA Toolkit archive: [CUDA 12.8.1 Download](https://developer.nvidia.com/cuda-12-8-1-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local)
* Select the options for Windows, x86_64, Version 11 (this refers to Windows 11; it should also work on Windows 10), and `exe (local)`.
* Download and run the installer, following the on-screen prompts for a standard installation.
2. **Download Additional Components:**
* Navigate to this Hugging Face link: [CUDA_128.7z Download](https://huggingface.co/datasets/NeuroDonu/PortableSource/blob/main/CUDA_128.7z)
* Download the `CUDA_128.7z` file (You might need to click "Download" or look for a download button/link on the page).
3. **Integrate Components:**
* Open File Explorer and go to the CUDA installation directory, typically:
```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
```
* Using 7-Zip, extract the contents of the downloaded `CUDA_128.7z` archive.
* **Merge** the extracted files and folders into the `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8` directory. Overwrite files if prompted. This step adds the necessary TensorRT and cuDNN files according to the provided archive.
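Once the merge completes, it's worth sanity-checking that the extracted files landed where CUDA expects them. The sketch below is optional, and the two file names are placeholders (substitute entries you actually saw inside `CUDA_128.7z`):

```python
from pathlib import Path

def missing_entries(cuda_root: str, expected: list[str]) -> list[str]:
    """Return the expected files/folders that are absent under the CUDA root."""
    root = Path(cuda_root)
    return [name for name in expected if not (root / name).exists()]

# Placeholder names -- replace with files you actually saw inside CUDA_128.7z.
expected = ["bin/cudnn64_9.dll", "include/cudnn.h"]
print(missing_entries(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8", expected))
```

An empty list means everything you listed was merged successfully.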
**Step 2: Install Microsoft Visual C++ (MSVC)**
1. **Download Visual Studio Installer:**
* Download the setup file from this link: [VisualStudioSetup.exe Download](https://huggingface.co/datasets/NeuroDonu/PortableVersions/blob/main/VisualStudioSetup.exe)
2. **Run Installer and Select Components:**
* Run the downloaded `VisualStudioSetup.exe`.
* In the "Workloads" tab, select **"Desktop development with C++"**.
* In the "Installation details" pane on the right, ensure the following (and potentially others shown in the images) are selected. Refer to the provided screenshots:
* MSVC v143 - VS 2022 C++ x64/x86 build tools (Latest)
* Windows 11 SDK (or the appropriate Windows 10 SDK if on Win10)
* C++ CMake tools for Windows



* Click "Install" and wait for the process to complete.
**Step 3: Set Up Build Environment and Install Python Packages**
1. **Open Developer Command Prompt:**
* Go to your Windows Start Menu and search for "x64 Native Tools Command Prompt for VS 2022".
* Right-click and select "Run as administrator".

2. **Navigate to Your Project Directory:**
* In the command prompt, use the `cd` command to change to the directory where your portable Python installation is located (your "portable dir"). For example:
```bash
cd /d "C:\path\to\your\portable\"
```
*(Replace `"C:\path\to\your\portable\"` with the actual path).*
3. **Install Triton:**
* *(Assumption: You have a portable Python installation in a subdirectory named `python` within your current directory, and `uv` is available)*.
* **If using Python 3.10:**
```bash
python\python.exe -m uv pip install https://huggingface.co/madbuda/triton-windows-builds/resolve/main/triton-3.0.0-cp310-cp310-win_amd64.whl
```
* **If using Python 3.11:**
```bash
python\python.exe -m uv pip install https://huggingface.co/madbuda/triton-windows-builds/resolve/main/triton-3.0.0-cp311-cp311-win_amd64.whl
```
4. **Add Additional Python Files:**
* Download the specified zip file: [python_3.11.9_comfy.zip Download](https://huggingface.co/madbuda/triton-windows-builds/blob/main/python_3.11.9_comfy.zip) (Click "Download" on the Hugging Face page).
* Extract the contents of this zip file directly into your main portable Python directory (e.g., the directory containing `python.exe`).
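If you prefer scripting this extraction, Python's standard `zipfile` module can merge the archive into the portable directory. The paths in the commented example are illustrative only:

```python
import zipfile
from pathlib import Path

def merge_zip_into(archive: str, target_dir: str) -> None:
    """Extract every entry of the archive directly into target_dir (files merge in)."""
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target_dir)

# Illustrative paths -- point these at your actual download and portable dir:
# merge_zip_into(r"C:\Downloads\python_3.11.9_comfy.zip", r"C:\path\to\your\portable\python")
```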
5. **Install SageAttention:**
* **If using Python 3.10:**
```bash
python\python.exe -m uv pip install https://huggingface.co/datasets/NeuroDonu/PortableVersions/resolve/main/sageattention-2.1.1-cp310-cp310-win_amd64.whl
```
* **If using Python 3.11:**
```bash
python\python.exe -m uv pip install https://huggingface.co/datasets/NeuroDonu/PortableVersions/resolve/main/sageattention-2.1.1-cp311-cp311-win_amd64.whl
```
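After both wheels are installed, a quick visibility check (run with the same `python\python.exe` as above) confirms the portable Python can find them. This only checks that the packages are importable, not that the GPU kernels actually work:

```python
import importlib.util

def installed(*packages: str) -> dict[str, bool]:
    """Map each package name to whether the current Python can locate it."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

# Both should be True after the Triton and SageAttention installs above.
print(installed("triton", "sageattention"))
```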
**Step 4: Install Musubi Tuner (Next Steps)**
* The instructions above cover the installation of the dependencies (CUDA, MSVC, Triton, SageAttention) as specified in your request.
* **The actual command or method to install the "musubi tuner nightly build" itself is missing from the provided information.** You will need to consult the documentation or source repository for Musubi Tuner to find the correct command for installing it after completing these prerequisite steps. |
rohan2810/qasc | rohan2810 | 2025-05-01T06:07:16Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T06:07:11Z | null | ---
dataset_info:
features:
- name: fixed_prompt
dtype: string
- name: itemList
sequence: string
- name: trueSelection
dtype: string
splits:
- name: train
num_bytes: 2342796.0
num_examples: 7248
- name: validation
num_bytes: 292849.5
num_examples: 906
- name: test
num_bytes: 292849.5
num_examples: 906
download_size: 1367374
dataset_size: 2928495.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
test-gen/mbpp_mbpp-dagger-easy-qwen-coder-0.5b-instruct-from-sft_t0.0_n1_generated_tests | test-gen | 2025-05-01T04:50:47Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:50:46Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 304205
num_examples: 500
download_size: 134102
dataset_size: 304205
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
immindich/qwen-7b-r1-corrupted-answers | immindich | 2025-05-01T04:38:46Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:38:45Z | null | ---
dataset_info:
features:
- name: sample_idx
dtype: int64
- name: example_idx
dtype: int64
- name: corruption_idx
dtype: int64
- name: tag
dtype: string
- name: answers_clean
sequence: string
- name: answers_corrupted
sequence: string
splits:
- name: train
num_bytes: 15290904
num_examples: 440
download_size: 5633649
dataset_size: 15290904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brandonyang/stackthree_d1 | brandonyang | 2025-05-01T04:19:50Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T04:18:42Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 255096,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nh0znoisung/timit | nh0znoisung | 2025-05-01T04:10:54Z | 0 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:24:12Z | null | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: phonetic_detail
sequence:
- name: start
dtype: int64
- name: stop
dtype: int64
- name: utterance
dtype: string
- name: word_detail
sequence:
- name: start
dtype: int64
- name: stop
dtype: int64
- name: utterance
dtype: string
- name: dialect_region
dtype: string
- name: sentence_type
dtype: string
- name: speaker_id
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 327088581.176
num_examples: 3234
- name: val
num_bytes: 138982825.85999998
num_examples: 1386
- name: test
num_bytes: 170677462.48
num_examples: 1680
download_size: 591431153
dataset_size: 636748869.516
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
GitBag/math_size_1.5_eval | GitBag | 2025-05-01T04:06:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:06:33Z | null | ---
dataset_info:
features:
- name: level
dtype: string
- name: type
dtype: string
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: answer
dtype: string
- name: index
dtype: int64
- name: question
dtype: string
- name: split
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 405284941
num_examples: 7500
download_size: 188781440
dataset_size: 405284941
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brandonyang/square_d0 | brandonyang | 2025-05-01T04:05:16Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T03:53:52Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 153477,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_15 | HungVu2003 | 2025-05-01T03:57:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:57:36Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2783065
num_examples: 12498
download_size: 1514625
dataset_size: 2783065
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-coder-0.5b_temp0.1_num8_tests_mbpp_mbpp-sft-qwen-coder-0.5b_t0.0_n1 | test-gen | 2025-05-01T03:36:28Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:36:26Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: execution_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5821868
num_examples: 500
download_size: 1113769
dataset_size: 5821868
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
flatlander1024/math_merged_cot_sol_pair_mixed | flatlander1024 | 2025-05-01T03:27:22Z | 0 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:52:03Z | null | ---
license: mit
---
Pair Type Breakdown:
- Correct-Incorrect (C-I) Pairs: 5500
  - C-I with correct first ('[1]'): 2750
  - C-I with correct second ('[2]'): 2750
- Correct-Correct (C-C) Pairs (Target: 2750, Max Diff: 150): 2750
  - C-C pairs from 'all_correct' problems: 906
- Incorrect-Incorrect (I-I) Pairs (Target: 2750, Max Diff: 150): 2750
  - I-I pairs from 'all_incorrect' problems: 1156 |
kwangchaeko/eval_act_koch_test_100000 | kwangchaeko | 2025-05-01T03:15:08Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-01T03:14:58Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 1,
"total_frames": 1212,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
dgambettaphd/D_llm2_gen1_W_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-01T02:10:15Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:10:11Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 9637393
num_examples: 17000
download_size: 5763719
dataset_size: 9637393
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzha/R1_distilled_brain_teasers | yzha | 2025-05-01T01:45:56Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:36:44Z | null | ---
dataset_info:
features:
- name: puzzle_id
dtype: string
- name: reconstruction
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: distrator1
dtype: string
- name: distrator2
dtype: string
- name: unsure
dtype: string
- name: DSR1_reasoning_content
dtype: string
- name: DSR1_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: answerKey
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: original_question
dtype: string
splits:
- name: train
num_bytes: 41002904
num_examples: 3793
download_size: 18873757
dataset_size: 41002904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/imdb-generated | ma921 | 2025-05-01T01:29:25Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:29:21Z | null | ---
dataset_info:
features:
- name: win
dtype: string
- name: lose
dtype: string
- name: reward_win
dtype: float32
- name: reward_lose
dtype: float32
splits:
- name: train
num_bytes: 38234275
num_examples: 10000
- name: test
num_bytes: 7638486
num_examples: 2007
download_size: 13408375
dataset_size: 45872761
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
psyonp/ablation__drop_bottom20pct__sentiment_question | psyonp | 2025-05-01T01:05:51Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:05:50Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: num_tokens_question
dtype: int64
- name: num_tokens_response
dtype: int64
- name: semantic_similarity
dtype: float64
- name: sentiment_question
dtype: float64
- name: sentiment_response
dtype: float64
- name: readability_question
dtype: float64
- name: readability_response
dtype: float64
- name: ttr_question
dtype: float64
- name: ttr_response
dtype: float64
- name: toxicity_question
dtype: float64
- name: toxicity_response
dtype: float64
- name: euclidean_distance
dtype: float64
- name: kl_divergence
dtype: float64
splits:
- name: train
num_bytes: 334443
num_examples: 382
download_size: 140013
dataset_size: 334443
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen0_W_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-01T00:49:22Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:49:13Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 9145932
num_examples: 16000
download_size: 5492110
dataset_size: 9145932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hjshah/bfcl | hjshah | 2025-05-01T00:16:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:15:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: multi_turn
dtype: bool
- name: functions
dtype: string
- name: missed_functions
dtype: string
- name: initial_config
dtype: string
- name: involved_classes
sequence: string
- name: turns
dtype: string
- name: language
dtype: string
- name: test_category
dtype: string
- name: subset
dtype: string
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 63846322
num_examples: 4441
download_size: 7639281
dataset_size: 63846322
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_13 | HungVu2003 | 2025-04-30T23:18:24Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:18:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3263617
num_examples: 12498
download_size: 1073525
dataset_size: 3263617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lmcinnes/arxiv_ml | lmcinnes | 2025-04-30T22:10:13Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:53:39Z | null | ---
dataset_info:
features:
- name: date_created
dtype: timestamp[ns]
- name: abstract
dtype: string
- name: title
dtype: string
- name: categories
dtype: string
- name: arxiv_id
dtype: string
- name: year
dtype: int32
- name: embedding_str
dtype: string
- name: embedding
sequence: float64
- name: data_map
sequence: float64
splits:
- name: train
num_bytes: 2450676134
num_examples: 281816
download_size: 1807632673
dataset_size: 2450676134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv_ml"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of titles and abstracts of machine learning related papers from ArXiv. This data is derived from the [ArXiv dataset available on Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
The selection of papers was determined by selecting all papers that used a category tag in the set {"cs.LG", "cs.AI", "cs.CL", "stat.ML", "cs.IR", "cs.NE", "cs.SC"}.
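That selection rule amounts to a set-intersection test on each paper's category string (in the Kaggle dump, categories are space-separated). A sketch, not the exact extraction script:

```python
ML_CATEGORIES = {"cs.LG", "cs.AI", "cs.CL", "stat.ML", "cs.IR", "cs.NE", "cs.SC"}

def is_ml_paper(categories: str) -> bool:
    """True if any of the paper's space-separated category tags is in the ML set."""
    return bool(ML_CATEGORIES & set(categories.split()))

print(is_ml_paper("cs.LG stat.ML"))  # → True
print(is_ml_paper("math.AG"))        # → False
```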
To supplement the titles and abstracts, each paper's creation time and category tags are also provided. To make exploration easier, embeddings of the
title and abstract have been computed with the [Nomic-embed-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) text embedding model, and a 2D
representation produced with UMAP is also provided.
### Supported Tasks
This dataset is primarily aimed at tasks such as topic modelling, corpus triage, search and information retrieval, and other NLP tasks.
### Languages
The dataset is in English, although other languages may also be present.
## Dataset Creation
### Curation Rationale
The full ArXiv dataset is too large for many tasks. Subsetting to a selection of ArXiv categories related to AI and ML yields
a reasonably sized dataset that should mostly contain topics familiar to those wishing to use it.
### Source Data
This data is derived from the [ArXiv dataset available on Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
### Personal and Sensitive Information
This dataset contains publicly published information that was available under a CC0: public domain license via Kaggle.
There should be no personal or sensitive information in this dataset. If this is in error, please contact the maintainer
and we will endeavour to remedy any issues.
## Additional Information
### Dataset Curators
Leland McInnes for the curated subset, Cornell University for the initial full dataset.
### Licensing Information
Licensed as CC0: Public Domain.
|
osama24sy/DeepSeek-R1-Distill-Qwen-1.5B-results-20250430-17460504164332 | osama24sy | 2025-04-30T22:07:55Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:07:52Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 377377
num_examples: 150
download_size: 87601
dataset_size: 377377
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
slavekroller/HTAreasoning-methodology-reasoning-trajectories | slavekroller | 2025-04-30T20:55:23Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"reasoning-datasets-competition"
] | [] | 2025-04-30T19:50:42Z | null | ---
license: cc-by-4.0
tags:
- reasoning-datasets-competition
---
# HTAreasoning Datasets: Can AI Value Life?
## HTAreasoning-methodology-reasoning-trajectories Dataset card
Part of HTAreasoning. See https://huggingface.co/datasets/slavekroller/HTAreasoning-results.
### Dataset Fields
| Field Name | Definition |
| :------------------------------------------------- | :--------- |
| `link` | link to source documents, containing full descriptions of an estimation model being assessed as well as the reasoning trajectories |
| `methodology_choice_reservation` | severity of a methodological reservation made by the assessment committee |
| `methodology_choice_class` | scope, within which a methodological choice was made by the submitter |
| `methodology_choice_submitter_reasoning` | extracted reasoning trajectory of the submitter |
| `methodology_choice_assessor_reasoning` | extracted reasoning trajectory of the assessment committee |
| `methodology_choice_assessor_reasoning_summary_AI-generated-Gemini` | AI-generated comment - not extracted directly from the source documents - augments the extracted dataset by providing a one-line summary of the methodological reservation |
### Citation
HTAreasoning-methodology-reasoning-trajectories. HTAreasoning Datasets (2025). Slavek Roller. |
palli23/spjallromur-2x-gold | palli23 | 2025-04-30T20:31:40Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:31:35Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
- name: speaker
dtype: string
- name: session
dtype: string
splits:
- name: train
num_bytes: 21493489.0
num_examples: 202
download_size: 21142619
dataset_size: 21493489.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mervinpraison/harupfall-accelerometer-images-actual | mervinpraison | 2025-04-30T19:24:13Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:45:58Z | null | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: sensor
dtype: string
- name: raw_data
dtype: string
- name: main_label
dtype: string
- name: extracted_labels
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 466088404.0
num_examples: 930
download_size: 95375237
dataset_size: 466088404.0
---
# Dataset Card for "harupfall-accelerometer-images-actual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceTB/wikispeedia-traces | HuggingFaceTB | 2025-04-30T19:12:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:23:16Z | null | ---
dataset_info:
features:
- name: model
dtype: string
- name: api_base
dtype: string
- name: max_links
dtype: int32
- name: max_tries
dtype: int32
- name: start_article
dtype: string
- name: destination_article
dtype: string
- name: seed
dtype: int32
- name: result
dtype: string
- name: steps
sequence:
- name: step
dtype: int32
- name: type
dtype: string
- name: current_article
dtype: string
- name: action
dtype: string
- name: observation
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: qwen3_14b
num_bytes: 230982
num_examples: 420
download_size: 12637
dataset_size: 230982
configs:
- config_name: default
data_files:
- split: qwen3_14b
path: data/qwen3_14b-*
---
# Wikihop Experiment Results
Results generated from: `qwen3-final-results.json`
## Experiment Parameters
- **Number of Trials per Pair:** 1
- **Max Steps per Game:** 20
- **Agent Settings:**
- Model: `hosted_vllm/Qwen/Qwen3-14B`
- API Base: `http://ip-26-0-163-127:8000/v1/`
- Max Links: 500
- Max Tries: 3
## Summary Statistics
- **Total Runs:** 420
- **Win Rate:** 55.24%
- **Average Hops (on Wins):** 6.64
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_0_for_gen_5 | HungVu2003 | 2025-04-30T19:00:06Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:00:05Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6112706
num_examples: 12500
download_size: 2090382
dataset_size: 6112706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
erenfazlioglu/placesofworshipdataset | erenfazlioglu | 2025-04-30T12:56:26Z | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"region:us"
] | [] | 2025-04-30T12:50:53Z | null | ---
license: cc-by-4.0
language:
- en
pretty_name: Places of Worship Dataset
size_categories:
- 1M<n<10M
---
# Places of Worship Dataset
## Description
This dataset contains information about places of worship (primarily mosques). It provides location details including names, addresses, and geographic coordinates that can be used for various mapping, analysis, and research purposes.
## Dataset Content
The dataset is provided in CSV format with the following columns:
- **Name**: The name of the place of worship
- **Address**: Full formatted address of the location
- **Latitude**: Geographic latitude coordinate
- **Longitude**: Geographic longitude coordinate
## Data Source
The data was extracted from a database using a custom PHP script with specialized location data.
## Potential Use Cases
- Geographic analysis of religious sites
- Accessibility studies for religious facilities
- Cultural and religious mapping
- Integration with location-based services
- Research on religious infrastructure distribution
## Limitations
- The dataset may not be comprehensive for all regions
- Some entries might have missing values (marked as 'N/A')
- The data represents a snapshot at the time of extraction
## Contact
For questions or additional information regarding this dataset, please contact:
x.com/erenfazlioglutr |
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_0_for_gen_9 | HungVu2003 | 2025-04-30T12:51:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:51:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4275556
num_examples: 12498
download_size: 1413562
dataset_size: 4275556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
majwadalam/urdu_samples_processed_text_descriptions | majwadalam | 2025-04-30T12:48:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:48:02Z | null | ---
dataset_info:
features:
- name: audiopath
dtype: string
- name: text
dtype: string
- name: Normalized text
dtype: string
- name: sampling_rate
dtype: int64
- name: duration
dtype: float64
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 246843
num_examples: 119
download_size: 115751
dataset_size: 246843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
midrees2806/unmatched_queries | midrees2806 | 2025-04-30T12:47:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T07:17:00Z | null | ---
dataset_info:
features:
- name: Query
dtype: string
- name: Timestamp
dtype: string
- name: Processed
dtype: bool
splits:
- name: train
num_bytes: 130
num_examples: 3
download_size: 1546
dataset_size: 130
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EYEDOL/mozilla_commonvoice_naijaYoruba1_preprocessed_train_batch_4 | EYEDOL | 2025-04-30T12:36:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:33:00Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 13926934672.75
num_examples: 12962
download_size: 3078372126
dataset_size: 13926934672.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Darkester/bCoT | Darkester | 2025-04-30T11:42:13Z | 0 | 0 | [
"task_categories:text2text-generation",
"language:ru",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal"
] | [
"text2text-generation"
] | 2025-04-30T11:12:22Z | null | ---
task_categories:
- text2text-generation
language:
- ru
tags:
- legal
size_categories:
- n<1K
pretty_name: sas
--- |
orgcatorg/wikipedia | orgcatorg | 2025-04-30T11:36:51Z | 76 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-06T06:18:34Z | null | ---
dataset_info:
- config_name: bn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 1167115208
num_examples: 156143
download_size: 441690826
dataset_size: 1167115208
- config_name: hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 793684300
num_examples: 166726
download_size: 302408181
dataset_size: 793684300
- config_name: id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1177273270
num_examples: 688206
download_size: 610697793
dataset_size: 1177273270
- config_name: ms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 442552369
num_examples: 373189
download_size: 220484368
dataset_size: 442552369
- config_name: th
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 4899327
num_examples: 48408
download_size: 2146000
dataset_size: 4899327
- config_name: tl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 53980052
num_examples: 48408
download_size: 30423055
dataset_size: 53980052
- config_name: vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 1938478921
num_examples: 1294721
download_size: 896915549
dataset_size: 1938478921
configs:
- config_name: bn
data_files:
- split: train
path: bn/train-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ms
data_files:
- split: train
path: ms/train-*
- config_name: th
data_files:
- split: train
path: th/train-*
- config_name: tl
data_files:
- split: train
path: tl/train-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
---
|
vetter0002/Llama-3.2-1B-Instruct_gsm8k_s1 | vetter0002 | 2025-04-30T11:32:28Z | 150 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-26T09:54:57Z | null | ---
dataset_info:
- config_name: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100
features:
- name: Task ID
dtype: int64
- name: Question
dtype: string
- name: Responses
dtype: string
- name: Extracted Answer
dtype: string
- name: Ground Truth
dtype: string
splits:
- name: train
num_bytes: 1658081
num_examples: 1319
download_size: 739687
dataset_size: 1658081
- config_name: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100_evaltraining
features:
- name: Task ID
dtype: int64
- name: Question
dtype: string
- name: Responses
dtype: string
- name: Extracted Answer
dtype: string
- name: Ground Truth
dtype: string
splits:
- name: train
num_bytes: 9207871
num_examples: 7473
download_size: 4061129
dataset_size: 9207871
- config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100
features:
- name: Task ID
dtype: int64
- name: Question
dtype: string
- name: Responses
dtype: string
- name: Extracted Answer
dtype: string
- name: Ground Truth
dtype: string
splits:
- name: train
num_bytes: 1861569
num_examples: 1319
download_size: 786266
dataset_size: 1861569
- config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100_evaltraining
features:
- name: Task ID
dtype: int64
- name: Question
dtype: string
- name: Responses
dtype: string
- name: Extracted Answer
dtype: string
- name: Ground Truth
dtype: string
splits:
- name: train
num_bytes: 10475543
num_examples: 7473
download_size: 5825656
dataset_size: 10475543
configs:
- config_name: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100
data_files:
- split: train
path: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100/train-*
- config_name: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100_evaltraining
data_files:
- split: train
path: eval_Llama-3.2-1B-Instruct_dgsm8k_batch100_evaltraining/train-*
- config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100
data_files:
- split: train
path: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100/train-*
- config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100_evaltraining
data_files:
- split: train
path: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch100_evaltraining/train-*
---
|
alanvivares/latam-spanish-ar | alanvivares | 2025-04-30T11:14:18Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:02:42Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: file_id
dtype: string
- name: nationality
dtype: string
- name: gender
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 825729826.0
num_examples: 5739
download_size: 776840581
dataset_size: 825729826.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SDSB/merged_deduped_dapo_or1_dataset | SDSB | 2025-04-30T10:29:48Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:08:40Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 33662063
num_examples: 117192
download_size: 19609522
dataset_size: 33662063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SmallDoge/Doge2-tokenizer-samples | SmallDoge | 2025-04-30T10:03:15Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T09:21:03Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9442731704
num_examples: 2000000
download_size: 5757372349
dataset_size: 9442731704
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nexdata/310_Hours_Turkish_Scripted_Monologue_Smartphone_Speech_Dataset | Nexdata | 2025-04-30T09:49:05Z | 52 | 1 | [
"license:cc-by-nc-nd-4.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2024-04-12T03:08:50Z | null | ---
license: cc-by-nc-nd-4.0
---
## Description
This dataset is a sample of the 310 Hours Turkish Scripted Monologue Smartphone Speech Dataset (paid dataset). The speech was collected as monologues read from given scripts and transcribed with text content. It covers an extensive and geographically diverse pool of speakers (223 people in total, from Turkey), enhancing model performance on real and complex tasks. Quality has been tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring user privacy and legal rights throughout data collection, storage, and usage; our datasets are GDPR, CCPA, and PIPL compliant.
For more details and to download the rest of the dataset (paid), please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1324?source=Huggingface
## Format
16kHz, 16bit, uncompressed wav, mono channel.
## Recording condition
quiet indoor environment, low background noise, without echo;
## Recording device
Android smartphone, iPhone;
## Speaker
223 native speakers in total, 54% male and 46% female;
## Country
Turkey(TUR);
## Language(Region) Code
tr-TR;
## Language
Turkish;
## Features of annotation
Transcription text;
## Accuracy Rate
Word Accuracy Rate (WAR) 95%;
# Licensing Information
Commercial License
|
korbih/ui-sensei-curriculum-0-test-20250424_213955-complete-double_checked | korbih | 2025-04-30T09:12:49Z | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T14:21:58Z | null | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: trial_number
dtype: int32
- name: task_description
dtype: string
- name: start_url
dtype: string
- name: is_success
dtype: bool
- name: is_shortest
dtype: bool
- name: evaluator_thoughts
dtype: string
- name: evaluator_status
dtype: string
- name: run_error
dtype: string
- name: step_index
dtype: int32
- name: url_at_step
dtype: string
- name: prompt
dtype: string
- name: action
dtype: string
- name: screenshot
struct:
- name: bytes
dtype: binary
- name: path
dtype: string
- name: annotated_screenshot
struct:
- name: bytes
dtype: binary
- name: path
dtype: string
- name: is_success_original
dtype: bool
- name: evaluator_thoughts_original
dtype: string
- name: double_checked
dtype: bool
splits:
- name: train
num_bytes: 2158616644
num_examples: 5550
download_size: 1092809999
dataset_size: 2158616644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
scenes-generation/Doraemon | scenes-generation | 2025-04-30T09:06:29Z | 1 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-04-23T06:57:49Z | null | ---
license: mit
---
How to use?
```bash
# Reassemble the split archive, then extract it
cat doraemon_part_* > doraemon.tar.gz
tar -xzvf doraemon.tar.gz
```
--- |
mueller91/MLAAD | mueller91 | 2025-04-27T17:07:44Z | 3,850 | 5 | [
"task_categories:audio-classification",
"language:en",
"language:de",
"language:fr",
"language:es",
"language:uk",
"language:pl",
"language:ru",
"language:it",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"arxiv:2401.09512",
"region:us",
"audio",
"deepfake",
"audio-deepfake-detection",
"anti-spoofing",
"voice",
"voice-antispoofing",
"MLAAD"
] | [
"audio-classification"
] | 2025-03-16T12:53:09Z | 2 | ---
license: apache-2.0
language:
- en
- de
- fr
- es
- uk
- pl
- ru
- it
task_categories:
- audio-classification
tags:
- audio
- deepfake
- audio-deepfake-detection
- anti-spoofing
- voice
- voice-antispoofing
- MLAAD
pretty_name: 'MLAAD: The Multi-Language Audio Anti-Spoofing Dataset'
size_categories:
- 100K<n<1M
---
<p align="center" style="width: 50%">
<img src="https://cdn-uploads.huggingface.co/production/uploads/651bba9c00137407015e0bdf/DDRTGPCGGr-d0rQ_M-GwG.png" />
</p>
### Introduction
Welcome to MLAAD: The Multi-Language Audio Anti-Spoofing Dataset -- a dataset to train, test and evaluate audio deepfake detection. See
[the paper](https://arxiv.org/pdf/2401.09512.pdf) for more information.
### Download the dataset
```
# if needed, install git-lfs
sudo apt-get install git-lfs
git lfs install
# clone the repository
git clone https://huggingface.co/datasets/mueller91/MLAAD
```
### Structure
The dataset is based on the [M-AILABS](https://github.com/imdatceleste/m-ailabs-dataset) dataset.
MLAAD is structured as follows:
```
fake
|-language_1
|-language_2
|- ....
|- language_K
| - model_1_K
| - model_2_K
| - ....
| - model_L_K
| - meta.csv
| - audio_L_K_1.wav
| - audio_L_K_2.wav
| - audio_L_K_3.wav
| - ....
| - audio_L_K_1000.wav
```
The file 'meta.csv' contains the following identifiers. For more on these, please see the [paper](https://arxiv.org/pdf/2401.09512) and [our website](https://deepfake-total.com/mlaad).
```
path|original_file|language|is_original_language|duration|training_data|model_name|architecture|transcript
```
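As a minimal sketch, each `meta.csv` can be parsed with Python's standard library using the `|` delimiter shown above (the sample row below is illustrative, not taken from the dataset):

```python
import csv
import io

# Column names from the meta.csv header above
META_COLUMNS = [
    "path", "original_file", "language", "is_original_language",
    "duration", "training_data", "model_name", "architecture", "transcript",
]

# Illustrative sample content; real files live under fake/<language>/<model>/meta.csv
sample = (
    "path|original_file|language|is_original_language|duration|"
    "training_data|model_name|architecture|transcript\n"
    "audio_1.wav|orig_1.wav|en|True|3.2|LJSpeech|tts_model|vits|hello world\n"
)

# The '|' separator avoids clashes with commas inside transcripts
reader = csv.DictReader(io.StringIO(sample), delimiter="|")
rows = list(reader)
print(rows[0]["language"])  # each row maps column names to values
```

For real files, replace the `io.StringIO` wrapper with `open("fake/<language>/<model>/meta.csv")`.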
### Proposed Usage
We suggest using MLAAD either as new out-of-domain test data for existing anti-spoofing models or as an additional training resource.
We urge users to complement the fake audios in MLAAD with the corresponding authentic ones from M-AILABS in order to obtain a balanced dataset.
M-AILABS can be downloaded [here](https://github.com/imdatceleste/m-ailabs-dataset).
An antispoofing model trained on (among others) the MLAAD dataset is available [here](https://deepfake-total.com/).
### Bibtex
```
@article{muller2024mlaad,
title={MLAAD: The Multi-Language Audio Anti-Spoofing Dataset},
author={M{\"u}ller, Nicolas M and Kawa, Piotr and Choong, Wei Herng and Casanova, Edresson and G{\"o}lge, Eren and M{\"u}ller, Thorsten and Syga, Piotr and Sperl, Philip and B{\"o}ttinger, Konstantin},
journal={arXiv preprint arXiv:2401.09512},
year={2024}
}
``` |
weaverbirdllm/famma-reasoning | weaverbirdllm | 2025-04-27T05:31:00Z | 510 | 10 | [
"task_categories:question-answering",
"language:en",
"language:zh",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.04526",
"region:us",
"finance"
] | [
"question-answering"
] | 2025-04-21T13:56:54Z | 4 | ---
license: apache-2.0
dataset_info:
features:
- name: idx
dtype: int64
- name: question_id
dtype: string
- name: source_release
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answers
dtype: string
- name: thinking_trajectory
dtype: string
- name: structured_thinking_trajectory
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
- name: language
dtype: string
- name: main_question_id
dtype: int64
- name: sub_question_id
dtype: int64
- name: is_arithmetic
dtype: bool
- name: release
dtype: string
splits:
- name: release_reasoning_basic_txt
num_bytes: 17080446
num_examples: 1273
download_size: 6860503
dataset_size: 17080446
configs:
- config_name: default
data_files:
- split: release_reasoning_basic_txt
path: data/release_reasoning_basic_txt-*
task_categories:
- question-answering
language:
- en
- zh
- fr
tags:
- finance
pretty_name: long cot for finance
size_categories:
- 1K<n<10K
---
## FAMMA‑Reasoning
**A distilled, tool‑augmented reasoning dataset for financial Q&A**
## Dataset Summary
`FAMMA‑Reasoning` is built on top of the [FAMMA benchmark](https://famma-bench.github.io/famma/), a multilingual, multimodal financial question‑answering dataset covering tables, charts, and text/math screenshots across eight subfields and three difficulty levels.
Here, every example pairs a question with:
- **Thinking Trajectories**: natural‑language, step‑by‑step chains of thought generated by DeepSeek‑R1.
- **Structured Thinking Trajectories**: tagged sequences using `<think>`, `<search>`, `<python>`, `<information>`, etc.—ideal for training models on tool‑use workflows.
All questions are sourced from official [FAMMA](https://huggingface.co/datasets/weaverbirdllm/famma) releases and correctly answered by DeepSeek‑R1, providing high‑quality ground truth for both reasoning and final answers.
The project code is available at https://github.com/famma-bench/bench-script.
## Dataset Versions
`FAMMA‑Reasoning` is continuously updated with new questions. We provide different versions of the dataset:
* `release_reasoning_basic_txt`: R1‑reasoning annotations for `FAMMA`’s `release_basic_txt` split (1,273 questions). All multimodal content (tables, charts, text/math screenshots) has been OCR‑extracted into plain‑text context, and each question is paired with DeepSeek‑R1’s long‑form chains of thought and structured tool‑use trajectories.
## Download
see the script at https://github.com/famma-bench/bench-script/blob/main/step_1_download_dataset.py
First, clone the repository and install the dependencies:
```bash
git clone https://github.com/famma-bench/bench-script.git
cd bench-script
pip install -r requirements.txt
pip install -e .
```
To download the dataset, run the following command:
```bash
# --split is optional; omit it to download every split
python step_1_download_dataset.py \
    --hf_dir "weaverbirdllm/famma-reasoning" \
    --split "release_reasoning_basic_txt" \
    --save_dir "./hf_data"
```
Options:
- `--hf_dir`: HuggingFace repository name
- `--split`: Specific version to download (optional)
- `--save_dir`: Local directory to save the dataset (default: "./hf_data")
After downloading, the dataset will be saved in JSON format under the chosen `--save_dir` (default: `./hf_data`).
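As a sketch of inspecting the downloaded files (the exact filename depends on the download script, so the path below is an assumption), the JSON can be read with the standard library:

```python
import json
from pathlib import Path

# Hypothetical path; the script saves JSON under the chosen save_dir
data_path = Path("./hf_data") / "release_reasoning_basic_txt.json"

def load_records(path: Path):
    """Load a list of question records from a JSON file."""
    with path.open(encoding="utf-8") as f:
        return json.load(f)

# Example (once the file exists):
# records = load_records(data_path)
# print(len(records), records[0]["question_id"])
```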
## Citation
If you use FAMMA in your research, please cite our paper as follows:
```latex
@article{xue2024famma,
title={FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering},
  author={Siqiao Xue and Tingting Chen and Fan Zhou and Qingyang Dai and Zhixuan Chu and Hongyuan Mei},
journal={arXiv preprint arXiv:2410.04526},
year={2024},
url={https://arxiv.org/abs/2410.04526}
}
``` |
agentlans/reddit-ethics | agentlans | 2025-04-26T22:18:39Z | 246 | 3 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:feature-extraction",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"social-media",
"reddit",
"ethics",
"morality",
"philosophy",
"alignment",
"reasoning-datasets-competition"
] | [
"text-classification",
"question-answering",
"feature-extraction"
] | 2025-04-23T14:46:25Z | 3 | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
- feature-extraction
language:
- en
tags:
- social-media
- reddit
- ethics
- morality
- philosophy
- alignment
- reasoning-datasets-competition
---
# Reddit Ethics: Real-World Ethical Dilemmas from Reddit
Reddit Ethics is a curated dataset of genuine ethical dilemmas collected from Reddit, designed to support research and education in philosophical ethics, AI alignment, and moral reasoning.
Each entry features a real-world scenario accompanied by structured ethical analysis through major frameworks—utilitarianism, deontology, and virtue ethics. The dataset also provides discussion questions, sample answers, and proposed resolutions, making it valuable for examining human values and ethical reasoning in practical contexts.
Reddit Ethics was constructed by randomly sampling posts longer than 1,000 characters from the first 10,000 entries of the [OsamaBsher/AITA-Reddit-Dataset](https://huggingface.co/datasets/OsamaBsher/AITA-Reddit-Dataset).
Five seed cases were manually annotated using ChatGPT.
Additional cases were generated via few-shot prompting with [agentlans/Llama3.1-LexiHermes-SuperStorm](https://huggingface.co/agentlans/Llama3.1-LexiHermes-SuperStorm) to ensure diversity and scalability while maintaining consistency in ethical analysis.
The dataset covers a wide range of everyday ethical challenges encountered in online communities, including personal relationships, professional conduct, societal norms, technology, and digital ethics.
## Data Structure
Each dataset entry contains:
- `text`: The original Reddit post describing the ethical dilemma.
- `title`: A concise summary of the ethical issue.
- `description`: A brief overview of the scenario.
- `issues`: Key ethical themes or conflicts.
- Ethical analyses from three major philosophical perspectives:
- `utilitarianism`: Evaluates actions by their consequences, aiming to maximize overall well-being.
- `deontology`: Assesses the moral rightness of actions based on rules, duties, or obligations, regardless of outcomes.
- `virtue_ethics`: Focuses on the character traits and intentions of the agents involved, emphasizing virtues such as honesty, integrity, and fairness.
- Note that the three ethical frameworks reflect major traditions in normative ethics and are widely used for structuring ethical reasoning in academic and applied settings.
- `questions`: Discussion prompts for further analysis.
- `answers`: Sample responses to the discussion questions.
- `resolution`: A suggested synthesis or resolution based on the ethical analysis.
### Example Entry
```json
{
"text": "my so and i are both 20, and i live in a house with 3 other people who are 19-21. ... would we be in the wrong if we pursued this?",
"title": "Household Property and Moral Obligation: The Ethics of Repair and Replacement",
"description": "A couple and their housemates disagree over the cost of a new TV after the old one was broken. One housemate wants the new TV to stay, while another suggests paying for the replacement.",
"issues": [
"Shared Responsibility vs. Personal Investment",
"Equity vs. Fairness",
"Moral Obligations vs. Practicality"
],
"utilitarianism": "Considering the overall household benefit and the cost-benefit analysis, it may be fair to let the TV remain.",
"deontology": "The couple should hold to their agreement to sell the TV to the housemates, respecting their word and the value of fairness.",
"virtue_ethics": "Honesty and integrity guide the choice—acknowledging the financial burden and seeking a solution that respects all members.",
"questions": [
"Should the couple be bound by their agreement to sell the TV at a lower price?",
"How should the household balance fairness and practicality in resolving the TV issue?",
"What is the moral weight of past sacrifices and the current financial situation?"
],
"answers": [
"Yes, the couple should honor their agreement to sell the TV at a lower price, upholding their commitment to fairness and honesty.",
"The household should discuss and agree on a fair solution, considering the value of the TV and each member’s financial situation.",
"Previous sacrifices and current financial hardship can influence the moral weight of the decision, but fairness and respect should guide the solution."
],
"resolution": "The couple should adhere to their agreement to sell the TV at a lower price, showing respect for their word and the household's fairness. This approach fosters trust and sets a positive precedent for future conflicts."
}
```
## Limitations
1. Limited to a single subreddit as a proof of concept.
2. Potential selection bias due to subreddit demographics and culture.
3. The dataset predominantly represents Western, individualistic perspectives.
4. Not tailored to specialized branches such as professional, bioethical, or environmental ethics.
5. Some cases may reflect social or communication issues rather than clear-cut ethical dilemmas.
6. Analyses are concise due to space constraints and may not provide in-depth philosophical exploration.
7. Annotation bias may arise from the use of large language models.
## Licence
Creative Commons Attribution 4.0 International (CC-BY-4.0)
|
Major-TOM/Core-S1RTC-DeCUR | Major-TOM | 2025-04-25T05:53:21Z | 453 | 3 | [
"license:cc-by-sa-4.0",
"size_categories:10M<n<100M",
"modality:geospatial",
"arxiv:2412.05600",
"doi:10.57967/hf/5239",
"region:us",
"embeddings",
"earth-observation",
"remote-sensing",
"sentinel-1",
"sar",
"radar",
"satellite",
"geospatial",
"satellite-imagery"
] | [] | 2024-12-24T08:16:44Z | 2 | ---
license: cc-by-sa-4.0
tags:
- embeddings
- earth-observation
- remote-sensing
- sentinel-1
- sar
- radar
- satellite
- geospatial
- satellite-imagery
size_categories:
- 10M<n<100M
configs:
- config_name: default
data_files: embeddings/*.parquet
---

# Core-S1RTC-DeCUR 📡⚡🛰️
| Dataset | Modality | Number of Embeddings | Sensing Type | Total Comments | Source Dataset | Source Model | Size |
|:--------:|:--------------:|:-------------------:|:------------:|:--------------:|:--------------:|:--------------:|:--------------:|
|Core-S1RTC-DeCUR|Sentinel-1 RTC|36,748,875|SAR|General-Purpose Global|[Core-S1RTC](https://huggingface.co/datasets/Major-TOM/Core-S1RTC)|[DeCUR](https://github.com/zhu-xlab/DeCUR)|GB|
## Content
| Field | Type | Description |
|:-----------------:|:--------:|-----------------------------------------------------------------------------|
| unique_id | string | hash generated from geometry, time, product_id, and embedding model |
| embedding | array | raw embedding array |
| grid_cell | string | Major TOM cell |
| grid_row_u | int | Major TOM cell row |
| grid_col_r | int | Major TOM cell col |
| product_id | string | ID of the original product |
| timestamp | string | Timestamp of the sample |
| centre_lat | float | Centre of the fragment latitude |
| centre_lon | float | Centre of the fragment longitude |
| geometry | geometry | Polygon footprint (WGS84) of the fragment |
| utm_footprint | string | Polygon footprint (image UTM) of the fragment |
| utm_crs | string | CRS of the original product |
| pixel_bbox | bbox | Bounding box of the fragment (pixels) |
## Input Data
* Sentinel-1 RTC radar dataset global coverage
* All samples from [**MajorTOM Core-S1RTC**](https://huggingface.co/datasets/Major-TOM/Core-S1RTC)
* Image input size: **224 x 224** pixels, target overlap: 10%, border_shift: True
## Model
The image encoder of the [**DeCUR model**](https://github.com/zhu-xlab/DeCUR) was used to extract embeddings.
## Example Use
The dataset can be loaded directly with the 🤗 `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Major-TOM/Core-S1RTC-DeCUR")
```
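Once loaded, each record's `embedding` field is a raw array. A common use is nearest-neighbour retrieval by cosine similarity; the sketch below uses synthetic vectors as stand-ins for real DeCUR embeddings (the `top_k_similar` helper is ours, for illustration only):

```python
import numpy as np

def top_k_similar(query, embeddings, k=3):
    # Cosine similarity between one query vector and a bank of embeddings,
    # returning the indices of the k most similar rows (descending order).
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 512))  # stand-in for real embedding vectors
idx = top_k_similar(bank[0], bank)  # the query itself ranks first
```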
## Generate Your Own Major TOM Embeddings
The [**embedder**](https://github.com/ESA-PhiLab/Major-TOM/tree/main/src/embedder) subpackage of Major TOM provides tools for generating embeddings like these. An example is available in a dedicated notebook: https://github.com/ESA-PhiLab/Major-TOM/blob/main/05-Generate-Major-TOM-Embeddings.ipynb.
[](https://github.com/ESA-PhiLab/Major-TOM/blob/main/05-Generate-Major-TOM-Embeddings.ipynb)
---
## Major TOM Global Embeddings Project 🏭
This dataset is a result of a collaboration between [**CloudFerro**](https://cloudferro.com/) 🔶, [asterisk labs](https://asterisk.coop/) and [**Φ-lab, European Space Agency (ESA)**](https://philab.esa.int/) 🛰️ set up in order to provide open and free vectorised expansions of Major TOM datasets and define a standardised manner for releasing Major TOM embedding expansions.
The embeddings extracted from common AI models make it possible to browse and navigate large datasets like Major TOM with reduced storage and computational demand.
The datasets were computed on the [**GPU-accelerated instances**](https://cloudferro.com/ai/ai-computing-services/)⚡ provided by [**CloudFerro**](https://cloudferro.com/) 🔶 on the [**CREODIAS**](https://creodias.eu/) cloud service platform 💻☁️.
Discover more at [**CloudFerro AI services**](https://cloudferro.com/ai/).
## Authors
[**Mikolaj Czerkawski**](https://mikonvergence.github.io) (Φ-lab, European Space Agency), [**Marcin Kluczek**](https://www.linkedin.com/in/marcin-kluczek-03852a1a8/) (CloudFerro), [**Jędrzej S. Bojanowski**](https://www.linkedin.com/in/j%C4%99drzej-s-bojanowski-a5059872/) (CloudFerro)
## Open Access Manuscript
This dataset is an output from the embedding expansion project outlined in: [https://arxiv.org/abs/2412.05600/](https://arxiv.org/abs/2412.05600/).
[](https://doi.org/10.48550/arXiv.2412.05600)
<details>
<summary>Read Abstract</summary>
> With the ever-increasing volumes of the Earth observation data present in the archives of large programmes such as Copernicus, there is a growing need for efficient vector representations of the underlying raw data. The approach of extracting feature representations from pretrained deep neural networks is a powerful approach that can provide semantic abstractions of the input data. However, the way this is done for imagery archives containing geospatial data has not yet been defined. In this work, an extension is proposed to an existing community project, Major TOM, focused on the provision and standardization of open and free AI-ready datasets for Earth observation. Furthermore, four global and dense embedding datasets are released openly and for free along with the publication of this manuscript, resulting in the most comprehensive global open dataset of geospatial visual embeddings in terms of covered Earth's surface.
</details>
If this dataset was useful for your work, it can be cited as:
```latex
@misc{EmbeddedMajorTOM,
title={Global and Dense Embeddings of Earth: Major TOM Floating in the Latent Space},
author={Mikolaj Czerkawski and Marcin Kluczek and Jędrzej S. Bojanowski},
year={2024},
eprint={2412.05600},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.05600},
}
```
Powered by [Φ-lab, European Space Agency (ESA) 🛰️](https://philab.esa.int/) in collaboration with [CloudFerro 🔶](https://cloudferro.com/) & [asterisk labs](https://asterisk.coop/) |
ddupont/test-dataset | ddupont | 2025-04-24T23:37:48Z | 149 | 2 | [
"task_categories:visual-question-answering",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"cua",
"highquality",
"tasks"
] | [
"visual-question-answering"
] | 2025-04-18T17:06:08Z | 2 | ---
language: en
license: mit
task_categories:
- visual-question-answering
tags:
- cua
- highquality
- tasks
---
# Uploaded computer interface trajectories
These trajectories were generated and uploaded using [c/ua](https://github.com/trycua/cua) |
diffusion-cot/GenRef-wds | diffusion-cot | 2025-04-24T19:22:11Z | 3,896 | 9 | [
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2504.16080",
"region:us",
"cot",
"reflection",
"diffusion"
] | [
"text-to-image"
] | 2025-04-19T15:22:47Z | 7 | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
tags:
- cot
- reflection
- diffusion
task_categories:
- text-to-image
---
## GenRef-1M
<div align="center">
<img src="./assets/GenRef.png"/>
</div>
We provide 1M high-quality triplets of the form (flawed image, high-quality image, reflection), collected across
multiple domains using our scalable pipeline from [1]. We used this dataset to train our reflection-tuning model.
For details of the dataset creation pipeline, please refer to Section 3.2 of [1].
Project Page: https://diffusion-cot.github.io/reflection2perfection
## Dataset loading
We provide the dataset in the `webdataset` format for fast dataloading and streaming. We recommend downloading
the repository locally for faster I/O:
```py
from huggingface_hub import snapshot_download
local_dir = snapshot_download(
repo_id="diffusion-cot/GenRef-wds", repo_type="dataset", allow_patterns=["*.tar"]
)
```
And then use the following as a reference to create the dataset:
```py
import glob

import webdataset as wds
dataset = (
wds.WebDataset(glob.glob(f"{local_dir}/*.tar"), shardshuffle=50)
.shuffle(15000, handler=wds.warn_and_continue)
.decode("pil", handler=wds.warn_and_continue)
.rename(
good_image="good_image.jpg",
bad_image="bad_image.jpg",
prompt="prompt.txt",
reflection="reflection.txt",
subset="subset.txt",
)
)
for i, sample in enumerate(dataset):
if i >= 5:
break
print(sample.keys())
```
If you want to stream directly from the Hub, that is also possible:
```py
import webdataset as wds
path = "hf://datasets/diffusion-cot/GenRef-wds/genref_{0..208}.tar"
dataset = (
wds.WebDataset(path, shardshuffle=50)
.shuffle(15000, handler=wds.warn_and_continue)
.decode("pil", handler=wds.warn_and_continue)
.rename(
good_image="good_image.jpg",
bad_image="bad_image.jpg",
prompt="prompt.txt",
reflection="reflection.txt",
subset="subset.txt",
)
)
for i, sample in enumerate(dataset):
if i >= 5:
break
print(sample.keys())
```
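The `{0..208}` in the streaming path is webdataset-style brace notation for 209 shards, which webdataset expands itself. A minimal, dependency-free sketch of how such a pattern expands (the `expand_shards` helper is illustrative, not part of the webdataset API):

```python
import re

def expand_shards(pattern):
    # Expand a brace range like "genref_{0..208}.tar" into concrete shard names.
    m = re.search(r"\{(\d+)\.\.(\d+)\}", pattern)
    if not m:
        return [pattern]
    lo, hi = int(m.group(1)), int(m.group(2))
    width = len(m.group(1))  # preserve zero-padding if the pattern uses it
    return [
        pattern[:m.start()] + str(i).zfill(width) + pattern[m.end():]
        for i in range(lo, hi + 1)
    ]

shards = expand_shards("genref_{0..208}.tar")
```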
## References
[1] [From Reflection to Perfection: Scaling Inference-Time Optimization for Text-to-Image Diffusion Models via Reflection Tuning](https://huggingface.co/papers/2504.16080) |
Nexdata/Infant_Laugh_Speech_Data_by_Mobile_Phone | Nexdata | 2025-04-24T06:01:34Z | 68 | 2 | [
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2022-06-22T08:59:55Z | 1 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Infant_Laugh_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/speechrecog/1090?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a sample of the Infant Laugh Speech Data by Mobile Phone (paid dataset). It contains laugh sounds from 20 infants and young children aged 0–3 years, with several recordings from each child, and provides data support for detecting children's laughter in smart-home projects.
For more details & to download the rest of the dataset(paid),please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1090?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Infant Cry
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License
### Citation Information
[More Information Needed]
### Contributions
|
lang-uk/WikiEdits-MultiGEC | lang-uk | 2025-04-23T20:37:33Z | 99 | 2 | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:uk",
"language:en",
"language:de",
"language:cz",
"language:it",
"language:et",
"language:sl",
"language:el",
"language:lv",
"language:is",
"language:sv",
"license:mit",
"size_categories:10K<n<100K",
"region:us",
"gec",
"multigec"
] | [
"text2text-generation",
"text-generation"
] | 2025-04-21T14:29:54Z | 2 | ---
license: mit
language:
- uk
- en
- de
- cz
- it
- et
- sl
- el
- lv
- is
- sv
task_categories:
- text2text-generation
- text-generation
size_categories:
- 10K<n<100K
tags:
- gec
- multigec
pretty_name: wikiedits_multigec
---
# WikiEdits-MultiGEC Dataset
## Overview
WikiEdits-MultiGEC is a small dataset of human error corrections made by Wikipedia contributors for eleven languages.
These revisions were collected via the official Wikipedia API and cover the period from September 28, 2024, to April 17, 2025.
## Structure
- `wikiedits_multi_gec.csv` - main data.
- `index` - index;
- `language` - language of text;
- `text` - original text;
- `correction` - corrected text;
- `wikiedits_multi_gec_metadata.csv` - contains metadata related to the main data `wikiedits_multi_gec.csv`.
- `index` - index;
- `text_del` -
- `text_ins` -
- `text_del_tag` -
- `text_ins_tag` -
- `deletions` -
- `insertions` -
- `language` -
- `url` -
- `wikiedits_uk_annotations.csv` - contains human annotations for 1500 samples for the Ukrainian language.
- `text` - original text;
- `correction` - corrected text;
- `score` - annotator score;
- `is_rejected` - if the annotator rejects the correction.
## Dataset Statistics
||english|italian|ukrainian|german|czech|swedish|greek|estonian|slovene|latvian|icelandic|
|-|-|-|-|-|-|-|-|-|-|-|-|
|# pages|5003|2398|1409|1706|447|216|134|39|26|20|0|
|# edits all|12465|6024|5126|4672|1114|585|492|126|108|75|0|
|# edits|6807|3726|3092|2380|698|363|256|79|43|33|0|
## How to use it
You can merge `wikiedits_multi_gec.csv` with `wikiedits_uk_annotations.csv`, and `wikiedits_multi_gec.csv` with `wikiedits_multi_gec_metadata.csv`:
```python
import pandas as pd
df_wikiedits = pd.read_csv('wikiedits_multi_gec.csv')
df_wikiedits_anot = pd.read_csv('wikiedits_uk_annotations.csv')
df_wikiedits_metadata = pd.read_csv('wikiedits_multi_gec_metadata.csv')
df_anot = df_wikiedits_anot.merge(df_wikiedits, on=['text', 'correction'], how='left')
df_metadata = df_wikiedits_metadata.merge(df_wikiedits, on=['index'], how='left')
```
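To see what these merges yield, here is a self-contained sketch with toy rows mirroring the schema above (the values are illustrative, not drawn from the dataset):

```python
import pandas as pd

# Toy frames following the documented columns
df_wikiedits = pd.DataFrame({
    "index": [0, 1],
    "language": ["uk", "en"],
    "text": ["old sentence", "teh cat"],
    "correction": ["new sentence", "the cat"],
})
df_wikiedits_anot = pd.DataFrame({
    "text": ["teh cat"],
    "correction": ["the cat"],
    "score": [5],
    "is_rejected": [False],
})

# Annotations joined back to the main data on the (text, correction) pair
merged = df_wikiedits_anot.merge(df_wikiedits, on=["text", "correction"], how="left")
# Keep only corrections the annotator did not reject
accepted = merged[~merged["is_rejected"]]
```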
## Authors
[Petro Ivaniuk](https://huggingface.co/peterua), [Mariana Romanyshyn](https://huggingface.co/mariana-scorp), [Roman Kovalchuk](https://huggingface.co/rkovalchuk)
|
OptimalScale/ClimbLab | OptimalScale | 2025-04-20T17:14:03Z | 1,376 | 7 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.13161",
"region:us"
] | [
"text-generation"
] | 2025-04-18T07:38:38Z | 3 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
---
[ClimbLab](https://huggingface.co/datasets/nvidia/ClimbLab) is a high-quality pre-training corpus released by NVIDIA. Here is the description:
>ClimbLab is a filtered 1.2-trillion-token corpus with 20 clusters.
Based on Nemotron-CC and SmolLM-Corpus, we employed our proposed CLIMB-clustering to semantically reorganize and filter this combined dataset into 20 distinct clusters, leading to a 1.2-trillion-token high-quality corpus. Specifically, we first grouped the data into 1,000 groups based on topic information. Then we applied two classifiers: one to detect advertisements and another to assess the educational value of the text. Each group was scored accordingly, and low-quality data with low scores was removed.
However, it is released as GPT-2 tokens, which are not easy to use directly. We therefore used the GPT-2 tokenizer to detokenize them back into raw text.
⚠️ Please note: This version is not officially released or maintained by NVIDIA. We are not responsible for the content, accuracy, or updates of this dataset.
## Citation:
If you find this dataset helpful, please cite the following [paper](https://arxiv.org/abs/2504.13161):
```
@article{diao2025climb,
author = {Shizhe Diao and Yu Yang and Yonggan Fu and Xin Dong and Dan Su and Markus Kliegl and Zijia Chen and Peter Belcak and Yoshi Suhara and Hongxu Yin and Mostofa Patwary and Celine Lin and Jan Kautz and Pavlo Molchanov},
title={CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training},
journal = {arXiv preprint},
year = {2025},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url={https://arxiv.org/abs/2504.13161},
}
``` |
ibm-research/acp_bench | ibm-research | 2025-04-16T20:54:51Z | 551 | 6 | [
"task_categories:question-answering",
"license:cdla-permissive-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2503.24378",
"region:us",
"planning",
"reasoning"
] | [
"question-answering"
] | 2024-09-23T17:24:29Z | 3 | ---
license: cdla-permissive-2.0
task_categories:
- question-answering
pretty_name: ACP Bench
tags:
- planning
- reasoning
dataset_info:
- config_name: acp_app_bool
dataset_size: 223204
download_size: 65362
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 182349
num_examples: 130
- name: validation
num_bytes: 40855
num_examples: 40
- config_name: acp_areach_bool
dataset_size: 194445
download_size: 48183
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 153012
num_examples: 120
- name: validation
num_bytes: 41433
num_examples: 40
- config_name: acp_just_bool
dataset_size: 532461
download_size: 112484
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 434458
num_examples: 130
- name: validation
num_bytes: 98003
num_examples: 40
- config_name: acp_land_bool
dataset_size: 290859
download_size: 75354
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 236580
num_examples: 130
- name: validation
num_bytes: 54279
num_examples: 40
- config_name: acp_prog_bool
dataset_size: 234181
download_size: 68746
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 192246
num_examples: 130
- name: validation
num_bytes: 41935
num_examples: 40
- config_name: acp_reach_bool
dataset_size: 234074
download_size: 65099
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 193493
num_examples: 130
- name: validation
num_bytes: 40581
num_examples: 40
- config_name: acp_val_bool
dataset_size: 487661
download_size: 108833
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: string
name: answer
splits:
- name: test
num_bytes: 386961
num_examples: 130
- name: validation
num_bytes: 100700
num_examples: 40
- config_name: acp_app_mcq
dataset_size: 326819
download_size: 96360
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 262778
num_examples: 130
- name: validation
num_bytes: 64041
num_examples: 40
- config_name: acp_areach_mcq
dataset_size: 284305
download_size: 76059
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 219286
num_examples: 120
- name: validation
num_bytes: 65019
num_examples: 40
- config_name: acp_just_mcq
dataset_size: 932999
download_size: 197964
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 748097
num_examples: 130
- name: validation
num_bytes: 184902
num_examples: 40
- config_name: acp_land_mcq
dataset_size: 341104
download_size: 79190
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 272655
num_examples: 130
- name: validation
num_bytes: 68449
num_examples: 40
- config_name: acp_prog_mcq
dataset_size: 331333
download_size: 100358
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 265012
num_examples: 130
- name: validation
num_bytes: 66321
num_examples: 40
- config_name: acp_reach_mcq
dataset_size: 327799
download_size: 92620
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 264430
num_examples: 130
- name: validation
num_bytes: 63369
num_examples: 40
- config_name: acp_val_mcq
dataset_size: 795585
download_size: 160933
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- dtype: string
name: query
- dtype: string
name: answer
splits:
- name: test
num_bytes: 620792
num_examples: 130
- name: validation
num_bytes: 174793
num_examples: 40
- config_name: acp_app_gen
dataset_size: 1066976
download_size: 189054
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 887265
num_examples: 130
- name: validation
num_bytes: 179711
num_examples: 40
- config_name: acp_areach_gen
dataset_size: 1044322
download_size: 193224
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 870213
num_examples: 130
- name: validation
num_bytes: 174109
num_examples: 40
- config_name: acp_just_gen
dataset_size: 1189965
download_size: 192965
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
sequence:
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 964832
num_examples: 130
- name: validation
num_bytes: 225133
num_examples: 40
- config_name: acp_land_gen
dataset_size: 1646526
download_size: 251907
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
struct:
- name: 'no'
sequence: string
- name: 'yes'
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 1443231
num_examples: 130
- name: validation
num_bytes: 203295
num_examples: 40
- config_name: acp_prog_gen
dataset_size: 1017916
download_size: 182814
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
struct:
- name: neg
sequence: string
- name: pos
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 840693
num_examples: 130
- name: validation
num_bytes: 177223
num_examples: 40
- config_name: acp_nexta_gen
dataset_size: 1250535
download_size: 235381
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
struct:
- name: maybe
sequence: string
- name: 'no'
sequence: string
- dtype: string
name: opt
- name: 'yes'
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 1062476
num_examples: 130
- name: validation
num_bytes: 188059
num_examples: 40
- config_name: acp_reach_gen
dataset_size: 1013515
download_size: 178491
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- name: answer
sequence: string
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 842742
num_examples: 130
- name: validation
num_bytes: 170773
num_examples: 40
- config_name: acp_val_gen
dataset_size: 1189899
download_size: 194619
features:
- dtype: int64
name: id
- dtype: string
name: group
- dtype: string
name: context
- dtype: string
name: question
- dtype: int64
name: answer
- dtype: string
name: PDDL_domain
- dtype: string
name: PDDL_problem
splits:
- name: test
num_bytes: 990923
num_examples: 130
- name: validation
num_bytes: 198976
num_examples: 40
configs:
- config_name: acp_app_bool
data_files:
- split: test
path: acp_app_bool/test.parquet
- split: validation
path: acp_app_bool/validation.parquet
- config_name: acp_app_mcq
data_files:
- split: test
path: acp_app_mcq/test.parquet
- split: validation
path: acp_app_mcq/validation.parquet
- config_name: acp_areach_bool
data_files:
- split: test
path: acp_areach_bool/test.parquet
- split: validation
path: acp_areach_bool/validation.parquet
- config_name: acp_areach_mcq
data_files:
- split: test
path: acp_areach_mcq/test.parquet
- split: validation
path: acp_areach_mcq/validation.parquet
- config_name: acp_just_bool
data_files:
- split: test
path: acp_just_bool/test.parquet
- split: validation
path: acp_just_bool/validation.parquet
- config_name: acp_just_mcq
data_files:
- split: test
path: acp_just_mcq/test.parquet
- split: validation
path: acp_just_mcq/validation.parquet
- config_name: acp_land_bool
data_files:
- split: test
path: acp_land_bool/test.parquet
- split: validation
path: acp_land_bool/validation.parquet
- config_name: acp_land_mcq
data_files:
- split: test
path: acp_land_mcq/test.parquet
- split: validation
path: acp_land_mcq/validation.parquet
- config_name: acp_prog_bool
data_files:
- split: test
path: acp_prog_bool/test.parquet
- split: validation
path: acp_prog_bool/validation.parquet
- config_name: acp_prog_mcq
data_files:
- split: test
path: acp_prog_mcq/test.parquet
- split: validation
path: acp_prog_mcq/validation.parquet
- config_name: acp_reach_bool
data_files:
- split: test
path: acp_reach_bool/test.parquet
- split: validation
path: acp_reach_bool/validation.parquet
- config_name: acp_reach_mcq
data_files:
- split: test
path: acp_reach_mcq/test.parquet
- split: validation
path: acp_reach_mcq/validation.parquet
- config_name: acp_val_bool
data_files:
- split: test
path: acp_val_bool/test.parquet
- split: validation
path: acp_val_bool/validation.parquet
- config_name: acp_val_mcq
data_files:
- split: test
path: acp_val_mcq/test.parquet
- split: validation
path: acp_val_mcq/validation.parquet
- config_name: acp_app_gen
data_files:
- path: acp_app_gen/test.parquet
split: test
- path: acp_app_gen/validation.parquet
split: validation
- config_name: acp_areach_gen
data_files:
- path: acp_areach_gen/test.parquet
split: test
- path: acp_areach_gen/validation.parquet
split: validation
- config_name: acp_just_gen
data_files:
- path: acp_just_gen/test.parquet
split: test
- path: acp_just_gen/validation.parquet
split: validation
- config_name: acp_land_gen
data_files:
- path: acp_land_gen/test.parquet
split: test
- path: acp_land_gen/validation.parquet
split: validation
- config_name: acp_prog_gen
data_files:
- path: acp_prog_gen/test.parquet
split: test
- path: acp_prog_gen/validation.parquet
split: validation
- config_name: acp_nexta_gen
data_files:
- path: acp_nexta_gen/test.parquet
split: test
- path: acp_nexta_gen/validation.parquet
split: validation
- config_name: acp_reach_gen
data_files:
- path: acp_reach_gen/test.parquet
split: test
- path: acp_reach_gen/validation.parquet
split: validation
- config_name: acp_val_gen
data_files:
- path: acp_val_gen/test.parquet
split: test
- path: acp_val_gen/validation.parquet
split: validation
---
# ACP Bench
<p align="center">
<a href="https://ibm.github.io/ACPBench" target="_blank">🏠 Homepage</a> •
  <a href="https://doi.org/10.1609/aaai.v39i25.34857" target="_blank">📄 AAAI Paper</a> •
  <a href="https://arxiv.org/abs/2503.24378" target="_blank">📄 ACPBench Hard (arXiv)</a>
</p>
ACPBench is a benchmark dataset designed to evaluate the reasoning capabilities of large language models (LLMs) in the context of Action, Change, and Planning. It spans 13 diverse domains:
* Blocksworld
* Logistics
* Grippers
* Grid
* Ferry
* FloorTile
* Rovers
* VisitAll
* Depot
* Goldminer
* Satellite
* Swap
* Alfworld
## Task Types in ACPBench
ACPBench includes the following 8 reasoning tasks:
1. Action Applicability (app)
2. Progression (prog)
3. Atom Reachability (reach)
4. Validation (val)
5. Action Reachability (areach)
6. Justification (just)
7. Landmarks (land)
8. Next Action (nexta)
## Task Formats
The first 7 tasks are available in:
* Boolean (yes/no) format
* Multiple-choice format
* Generative format
The Next Action task is provided only in generative format.
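For the Boolean configs, evaluation reduces to case-insensitive exact match on the `answer` field. A minimal scorer sketch (the `boolean_accuracy` function is ours, not part of an official ACPBench harness):

```python
def boolean_accuracy(predictions, answers):
    # Case-insensitive exact-match accuracy for yes/no tasks.
    norm = lambda s: s.strip().lower()
    correct = sum(norm(p) == norm(a) for p, a in zip(predictions, answers))
    return correct / len(answers)

preds = ["Yes", "no", "yes"]   # model outputs
gold = ["yes", "yes", "yes"]   # gold answers
acc = boolean_accuracy(preds, gold)
```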
## Access
Development and test sets are available for download via:
* ACPBench GitHub Repository
* Hugging Face Dataset Hub
```
@inproceedings{KokelKSS25ACP,
author = {Harsha Kokel and
Michael Katz and
Kavitha Srinivas and
Shirin Sohrabi},
title = {ACPBench: Reasoning about Action, Change, and Planning},
booktitle = {{AAAI}},
publisher = {{AAAI} Press},
  year      = {2025},
  url       = {https://doi.org/10.1609/aaai.v39i25.34857}
}
```
```bibtex
@misc{KokelKSS25ACPHard,
title = {ACPBench Hard: Unrestrained Reasoning about Action, Change, and Planning},
author = {Harsha Kokel and
Michael Katz and
Kavitha Srinivas and
Shirin Sohrabi},
year = {2025},
eprint = {2503.24378},
archivePrefix = {arXiv},
primaryClass = {cs.AI},
url = {https://arxiv.org/abs/2503.24378},
}
``` |
neulab/VisualPuzzles | neulab | 2025-04-16T17:25:09Z | 231 | 4 | [
"task_categories:visual-question-answering",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.10342",
"region:us"
] | [
"visual-question-answering"
] | 2025-04-13T23:49:12Z | 2 | ---
license: mit
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
pretty_name: VisualPuzzles
dataset_info:
features:
- name: id
dtype: int64
- name: category
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 139582416.624
num_examples: 1168
download_size: 137679574
dataset_size: 139582416.624
configs:
- config_name: default
data_files:
- split: train
path: data.parquet
---
# VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge
[🏠 Homepage](https://neulab.github.io/VisualPuzzles/) | [📊 VisualPuzzles](https://huggingface.co/datasets/neulab/VisualPuzzles) | [💻 Github](https://github.com/neulab/VisualPuzzles) | [📄 Arxiv](https://arxiv.org/abs/2504.10342) | [📕 PDF](https://arxiv.org/pdf/2504.10342) | [🖥️ Zeno Model Output](https://hub.zenoml.com/project/2e727b03-a677-451a-b714-f2c07ad2b49f/VisualPuzzles)

## Overview
**VisualPuzzles** is a multimodal benchmark specifically designed to evaluate **reasoning abilities** in large models while deliberately minimizing reliance on domain-specific knowledge.
Key features:
- 1168 diverse puzzles
- 5 reasoning categories: Algorithmic, Analogical, Deductive, Inductive, Spatial
- Difficulty labels: Easy, Medium, Hard
- Less knowledge-intensive than existing benchmarks (e.g., MMMU)
- Requires more complex reasoning than existing benchmarks (e.g., MMMU)
## Key Findings
- All models perform worse than humans; most can't surpass even 5th-percentile human performance.
- Strong performance on knowledge-heavy benchmarks does not transfer well.
- Neither scaling model size nor structured "thinking modes" guarantees stronger reasoning.
## Usage
To load this dataset via Hugging Face’s `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("neulab/VisualPuzzles")
data = dataset["train"]
sample = data[0]
print("ID:", sample["id"])
print("Category:", sample["category"])
print("Question:", sample["question"])
print("Options:", sample["options"])
print("Answer:", sample["answer"])
```
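For evaluation, a per-category accuracy breakdown matches the five reasoning categories above. A minimal sketch — the sample and prediction lists here are hypothetical stand-ins for real data, using the `category` and `answer` fields from the schema:

```python
from collections import defaultdict

def per_category_accuracy(samples, predictions):
    """samples: dicts with 'category' and 'answer'; predictions: parallel list of answers."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for sample, pred in zip(samples, predictions):
        total[sample["category"]] += 1
        if pred == sample["answer"]:
            correct[sample["category"]] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

# Hypothetical mini-batch for illustration only.
samples = [
    {"category": "Deductive", "answer": "A"},
    {"category": "Deductive", "answer": "B"},
    {"category": "Spatial", "answer": "C"},
]
print(per_category_accuracy(samples, ["A", "B", "B"]))  # {'Deductive': 1.0, 'Spatial': 0.0}
```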
## Citation
If you use or reference this dataset in your work, please cite:
```bibtex
@article{song2025visualpuzzles,
title = {VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge},
author = {Song, Yueqi and Ou, Tianyue and Kong, Yibo and Li, Zecheng and Neubig, Graham and Yue, Xiang},
year = {2025},
journal = {arXiv preprint arXiv:2504.10342},
url = {https://arxiv.org/abs/2504.10342}
}
``` |
MarcPal08/sentiment-analysis-test | MarcPal08 | 2025-04-16T12:51:30Z | 505 | 9 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:it",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"school",
"high-school"
] | [
"text-classification"
] | 2025-04-16T11:57:04Z | 9 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 28302.111747851002
num_examples: 279
- name: test
num_bytes: 7100.888252148997
num_examples: 70
download_size: 23157
dataset_size: 35403.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
annotations_creators:
- expert-generated
- crowdsourced
language:
- it
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: A sentiment analysis database created in a school environment.
size_categories:
- n<1K
source_datasets:
- original
tags:
- school
- high-school
task_categories:
- text-classification
task_ids:
- sentiment-analysis
---
# A school project on sentiment analysis
The dataset was created from an online questionnaire in which an audience of students, teachers, administrative staff, and families was asked to answer a few questions about their relationship with the school.
The annotations were produced by correlating the textual answers with approval ratings.
The dataset was built during an after-school course dedicated to artificial intelligence.
Thanks to everyone for the collaboration ❤️ |
Smatteux/sentiment-analysis-test | Smatteux | 2025-04-16T12:51:30Z | 505 | 9 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:it",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2025-04-16T11:47:03Z | 9 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 28302.111747851002
num_examples: 279
- name: test
num_bytes: 7100.888252148997
num_examples: 70
download_size: 23427
dataset_size: 35403.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
annotations_creators:
- expert-generated
- crowdsourced
language:
- it
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: 'A sentiment analysis database created in a school environment'
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-analysis
---
# A school project on sentiment analysis
The dataset was created from an online questionnaire in which an audience of students, teachers, administrative staff, and families was asked to answer a few questions about their relationship with the school.
The annotations were produced by correlating the textual answers with approval ratings.
The dataset was built during an after-school course dedicated to artificial intelligence.
Thanks to everyone for the collaboration ❤️ |
MrDragonFox/EN_Emilia_Yodas_616h | MrDragonFox | 2025-04-14T21:23:31Z | 793 | 4 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T08:25:42Z | 2 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: audio
dtype: audio
- name: text_scribe
dtype: string
- name: events_scribe
dtype: string
- name: CE
dtype: float64
- name: CU
dtype: float64
- name: PC
dtype: float64
- name: PQ
dtype: float64
- name: text_emilia
dtype: string
- name: duration
dtype: float64
- name: speaker
dtype: string
- name: language
dtype: string
- name: dnsmos
dtype: float64
- name: phone_count
dtype: int64
splits:
- name: train
num_bytes: 107194063653.67
num_examples: 228265
download_size: 104387921006
dataset_size: 107194063653.67
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
language:
- en
size_categories:
- 100K<n<1M
---
This dataset is 616 hours of the English part of https://huggingface.co/datasets/amphion/Emilia-Dataset (Emilia Yodas, CC BY 4.0).
Audio events were classified via Scribe v1 (ElevenLabs STT/ASR).
Facebook audio aesthetics scores are included for use as a pre-filter.
The dataset is very much at a v1 stage.
If you want to help, let's talk:
https://discord.gg/RUs3uzBdW3 (NSFW is fully opt-in only, as is SFW)
If you want the full transcription timestamps as they come from Scribe v1, they are CC BY 4.0 NC and can be found here: https://huggingface.co/datasets/MrDragonFox/EN_Emilia_Yodas_616h_raw_timestamps |
banned-historical-archives/banned-historical-archives | banned-historical-archives | 2025-04-13T15:18:38Z | 86,238 | 2 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2023-12-17T14:47:08Z | null | ---
size_categories:
- n>1T
---
# 和谐历史档案馆数据集 - Banned Historical Archives Datasets
The Banned Historical Archives dataset contains the original files already ingested into https://banned-historical-archives.github.io as well as files not yet ingested.
## Directory structure
- banned-historical-archives.github.io # original data already ingested into the website, synced from the GitHub repository from time to time
- raw # original files
- config # configuration files
- todo # files not yet ingested into the website
Some newspapers and image materials are stored in separate repositories:
| Name | Repository | Status |
|---|---|---|
|参考消息|https://huggingface.co/datasets/banned-historical-archives/ckxx|not ingested|
|人民日报|https://huggingface.co/datasets/banned-historical-archives/rmrb|selected important articles ingested|
|文汇报| https://huggingface.co/datasets/banned-historical-archives/wenhuibao , https://huggingface.co/datasets/banned-historical-archives/wenhuibao_disk| selected important articles ingested|
|文革照片|https://huggingface.co/datasets/banned-historical-archives/CR-photo|not ingested|
|漫画(-1949)|https://huggingface.co/datasets/banned-historical-archives/manhua-before-1949|not ingested|
|解放日报|https://huggingface.co/datasets/banned-historical-archives/jiefangribao|not ingested|
|新民晚报|https://huggingface.co/datasets/banned-historical-archives/xinminwanbao|not ingested|
|画报(-1949)|https://huggingface.co/datasets/banned-historical-archives/huabao-before-1949|not ingested|
|人民画报|https://huggingface.co/datasets/banned-historical-archives/renminhuabao|not ingested|
|解放军报|https://huggingface.co/datasets/banned-historical-archives/jiefangjunbao|selected important articles ingested|
|中国妇女|https://huggingface.co/datasets/banned-historical-archives/zhongguofunv|not ingested|
|北京周报|https://huggingface.co/datasets/banned-historical-archives/peking-review|not ingested|
|杭州日报|https://huggingface.co/datasets/banned-historical-archives/hangzhouribao|not ingested|
|新中华报|https://huggingface.co/datasets/banned-historical-archives/xinzhonghuabao|not ingested|
|故事会|https://huggingface.co/datasets/banned-historical-archives/gushihui|not ingested|
|工农兵画报|https://huggingface.co/datasets/banned-historical-archives/gongnongbinghuabao|not ingested|
|炎黄春秋| https://huggingface.co/datasets/banned-historical-archives/yanhuangchunqiu|not ingested|
|连环画报|https://huggingface.co/datasets/banned-historical-archives/lianhuanhuabao|not ingested|
|中央日报|https://huggingface.co/datasets/banned-historical-archives/zhongyangribao|not ingested|
|香港工商晚报|https://huggingface.co/datasets/banned-historical-archives/hkgongshangwanbao|not ingested|
|香港大公报|https://huggingface.co/datasets/banned-historical-archives/dagongbao|not ingested|
|香港工商日报| https://huggingface.co/datasets/banned-historical-archives/hkgongshangribao|not ingested|
|香港华侨日报|https://huggingface.co/datasets/banned-historical-archives/huaqiaoribao|not ingested|
|参考消息|https://huggingface.co/datasets/banned-historical-archives/cankaoxiaoxi|not ingested|
|裁判文书|https://huggingface.co/datasets/banned-historical-archives/legal-judgements|not ingested|
## Contributing
### Contributing original files
* For a small number of files, use the Hugging Face web interface: after logging in you can upload files (to the todo directory) and delete files, then wait for the changes to be reviewed and approved
* For large numbers of files, upload them to Hugging Face with the git tooling and then contact us via the community tab
* Files in the todo folder that have already been ingested should be deleted promptly to avoid duplicate ingestion
### Contributing material via GitHub issues (automated OCR supported)
https://github.com/banned-historical-archives/banned-historical-archives.github.io/blob/master/docs/upload-and-correction.md
## Notes
* The total size of all repositories exceeds 4 TB; make sure you have enough disk space before cloning
* When cloning a repository, use the `git clone --depth 1` option; otherwise the full commit history will be downloaded, slowing down the clone
|
osunlp/Dreamer-V1-Data | osunlp | 2025-04-09T09:13:42Z | 3,074 | 2 | [
"license:cc-by-nc-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2411.06559",
"region:us"
] | [] | 2025-04-09T03:20:06Z | 2 | ---
license: cc-by-nc-sa-4.0
---
After a heavier cleaning pass, 3.12M examples remain.
# WebDreamer: Model-Based Planning for Web Agents
WebDreamer is a planning framework that enables efficient and effective planning for real-world web agent tasks. Check our paper for more details.
This work is a collaboration between [OSUNLP](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).

- **Repository:** https://github.com/OSU-NLP-Group/WebDreamer
- **Paper:** https://arxiv.org/abs/2411.06559
- **Point of Contact:** [Kai Zhang](mailto:[email protected])
## Models
- Dreamer-7B:
- [General](https://huggingface.co/osunlp/Dreamer-7B)
- [In-Domain-VWA-Shopping](https://huggingface.co/osunlp/Dreamer-7B-Shopping)
- [In-Domain-VWA-Classifieds](https://huggingface.co/osunlp/Dreamer-7B-Classifieds)
- [In-Domain-VWA-Reddit](https://huggingface.co/osunlp/Dreamer-7B-Reddit)
## Data:
[Dreamer Training Data](https://huggingface.co/datasets/osunlp/Dreamer-V1-Data)
```
root
|-- prompt: string
|-- image: binary
|-- response: string
|-- action: string
```
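The `image` column holds the screenshot as raw bytes. A stdlib-only sanity check of what format those bytes are in (a sketch; the encoding actually used in the parquet files is not documented here):

```python
def sniff_image_format(data: bytes) -> str:
    """Identify common image formats from their leading magic bytes."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return "unknown"

print(sniff_image_format(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # png
```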
## Results
### Strong performance on VisualWebArena and Mind2Web-live
| Benchmark | Method | Success Rate |
|------------------|-----------------|--------------------|
| **VisualWebArena** | GPT-4o + Reactive | 17.6% |
| | GPT-4o + Tree Search | 26.2% |
| | **GPT-4o + WebDreamer** | 23.6% (↑34.1%) |
| **Online-Mind2Web** | GPT-4o + Reactive | 26.0% |
| | **GPT-4o + WebDreamer** | 37.0% (↑42.3%) |
| **Mind2Web-live** | GPT-4o + Reactive | 20.2% |
| | **GPT-4o + WebDreamer** | 25.0% (↑23.8%) |
Compared to the reactive baselines, WebDreamer significantly improves performance by 34.1%, 42.3%, and 23.8% on VisualWebArena, Online-Mind2Web, and Mind2Web-live, respectively.
### Better efficiency than tree search with true interactions
<img width="1502" alt="image" src="https://github.com/user-attachments/assets/0afbc22d-b1eb-4026-a167-e1852cde7677">
WebDreamer effectively explores the search space through simulations, which largely reduces the reliance on real-world interactions while maintaining robust performance.
## Inference
### vLLM server
```bash
vllm serve osunlp/Dreamer-7B --api-key token-abc123 --dtype float16
```
or
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name osunlp/Dreamer-7B --model osunlp/Dreamer-7B --dtype float16
```
You can find more instructions about training and inference in [Qwen2-VL's Official Repo](https://github.com/QwenLM/Qwen2-VL).
### Prompt
Our model is quite robust to the textual prompt, so feel free to try prompt variants beyond the ones we explored.
```python
# Client setup is not part of the original snippet; it assumes the vLLM
# server started above (same base URL and API key).
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

def format_openai_template(action_description: str, base64_image: str):
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
                {
                    "type": "text",
                    "text": f"""
Below is the current screenshot. Please describe what you would see after a {action_description}""",
                },
            ],
        },
    ]

async def simulate(action_description: str, base64_image: str) -> str:
    messages = format_openai_template(action_description, base64_image)
    completion = await client.chat.completions.create(
        model="osunlp/Dreamer-7B",
        messages=messages,
        temperature=1.0,
    )
    return completion.choices[0].message.content
```
## Citation Information
If you find this work useful, please consider citing our papers:
```
@article{Gu2024WebDreamer,
author = {Yu Gu and Kai Zhang and Yuting Ning and Boyuan Zheng and Boyu Gou and Tianci Xue and Cheng Chang and Sanjari Srivastava and Yanan Xie and Peng Qi and Huan Sun and Yu Su},
title = {Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents},
journal = {CoRR},
volume = {abs/2411.06559},
year = {2024},
url = {https://arxiv.org/abs/2411.06559},
eprinttype= {arXiv},
eprint = {2411.06559},
}
```
|
huggingface/badges | huggingface | 2025-04-08T17:39:54Z | 1,601,440 | 43 | [
"license:mit",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2023-02-02T14:55:23Z | null | ---
license: mit
thumbnail: "https://huggingface.co/datasets/huggingface/badges/resolve/main/badges-thumbnail.png"
---
<style>
.prose img {
display: inline;
margin: 0 6px !important;
}
.prose table {
max-width: 320px;
margin: 0;
}
</style>
# Badges
A set of badges you can use anywhere. Just update the anchor URL to point to the correct action for your Space. Badges come in light and dark background variants and in 4 sizes: small, medium, large, and extra large.
## How to use?
- With markdown, just copy the badge from: https://huggingface.co/datasets/huggingface/badges/blob/main/README.md?code=true
- With HTML, inspect this page with your web browser and copy the outer html.
## Available sizes
| Small | Medium | Large | Extra large |
| ------------- | :-----------: | ------------- | ------------- |
| 20px (height) | 24px (height) | 36px (height) | 48px (height) |
## Follow us on HF
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
[](https://huggingface.co/organizations)
## Paper page
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
## Deploy on Spaces
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
## Duplicate this Space
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
## Open in HF Spaces
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Open a Discussion
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Share to Community
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Sign in with Hugging Face
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
## Open a Pull Request
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
## Subscribe to PRO
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
## Follow me on HF
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
[](https://huggingface.co/Chunte)
## Model on HF
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
[](https://huggingface.co/models)
## Dataset on HF
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
[](https://huggingface.co/datasets)
## Powered by Hugging Face
[](https://huggingface.co)
[](https://huggingface.co)
|
SBB/VD-Metadata | SBB | 2025-04-08T14:48:21Z | 103 | 2 | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"language:de",
"language:la",
"language:el",
"language:fr",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"metadata",
"cultural heritage",
"library sciences",
"machine learning",
"data science"
] | [
"text-classification",
"feature-extraction"
] | 2025-04-08T13:23:17Z | 2 | ---
license: cc0-1.0
task_categories:
- text-classification
- feature-extraction
language:
- de
- la
- el
- fr
tags:
- metadata
- cultural heritage
- library sciences
- machine learning
- data science
pretty_name: vd-metadata
size_categories:
- 1M<n<10M
configs:
- config_name: VD-Bib-Metadata
data_files:
- split: train
path: VD-Bib-Metadata.parquet
- config_name: VD-Digi-Metadata
data_files:
- split: train
path: VD-Digi-Metadata.parquet
---
# Metadata of the "Verzeichnis der im deutschen Sprachraum erschienen Drucke"
# Title
Metadata of the "Verzeichnis der im deutschen Sprachraum erschienen Drucke"
# Description and Motivation
This data publication was created with the intent to provide bibliographic and subject indexing metadata for research purposes and the development of AI applications. This data publication can be regarded as the German national bibliography of the period 1500–1800. It consists of two files: The first one provides all bibliographic master records pertaining to the German national bibliography which were edited and revised and electronically copied to the union catalogue K10plus, the shared database of the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ) and the Verbundzentrale des GBV (VZG). The second one lists the unique identifiers ("Pica-Produktionsnummer") of all master records available in the union catalogue K10plus which have been digitised, and it contains the links to their digital copies.
The first bibliographic file comprises descriptive metadata of 750.342 titles. They result from three separate projects aiming at establishing a retrospective German national bibliography for the 16th, 17th, and 18th centuries. The data are stored in columnar format, containing 174 columns. They were downloaded in February 2025 from the K10plus and represent a work in progress. Exemplary tasks which can be served by this file comprise studies on the history of books between 1500 and 1800 and on pattern recognition on the basis of bibliographical metadata. The second file ("digital copies") comprises 16 columns of 590.528 titles which are available in digitised form.
The primary intention for this data publication was the provision of a large computationally amenable dataset exclusively consisting of bibliographic metadata to stimulate research and development of AI applications. In 2025, large (meta-)datasets from the field of historical cultural data are still missing. In this respect, the data publication provided here aims to fill a gap. The files are suitable for the computational use of digitised and born-digital collections according to the [Collections as Data principles](https://collectionsasdata.github.io/statement/).
The data publication was created by two librarians and a member of the team of the research project "[Mensch.Maschine.Kultur – Künstliche Intelligenz für das Digitale Kulturelle Erbe](https://mmk.sbb.berlin/?lang=en)" at Berlin State Library (SBB). The research project was funded by the Federal Government Commissioner for Culture and the Media (BKM), project grant no. 2522DIG002. The Minister of State for Culture and the Media is part of the German Federal Government.
## Homepage
Not applicable.
## Repository
[Zenodo Community: Data and Demos of the Staatsbibliothek zu Berlin – Berlin State Library](https://zenodo.org/communities/stabi)
## Papers and/or Other References
Not applicable.
## Publisher
Staatsbibliothek zu Berlin – Berlin State Library
## Point of Contact
Maria Federbusch, Staatsbibliothek zu Berlin – Berlin State Library, [[email protected]](mailto:[email protected])
Remigiusz Stachowiak, Staatsbibliothek zu Berlin – Berlin State Library, [[email protected]](mailto:[email protected]), ORCID: [0000-0002-1889-0502](https://orcid.org/0000-0002-1889-0502)
Jörg Lehmann, Staatsbibliothek zu Berlin – Berlin State Library, [[email protected]](mailto:[email protected]), ORCID: [0000-0003-1334-9693](https://orcid.org/0000-0003-1334-9693)
## Supported Tasks and Shared Tasks
There are at least two tasks for which the bibliographic dataset may be used:
* Task "history of books between 1500 and 1800". See as an example [Lathi et al, 2019](https://doi.org/10.1080/01639374.2018.1543747)
* Task "pattern recognition on the basis of bibliographical metadata". Such patterns could be used for [retrieval augmented generation of metadata](https://scale.com/blog/retrieval-augmented-generation-to-enhance-llms). See as an example [Völker et al, 2024](https://doi.org/10.1145/3627508.3638298)
This dataset was not part of a shared task.
### AI Category
Natural Language Processing, Feature Extraction
### Type of Cultural Heritage Application
Metadata enrichment
### (Cultural Heritage) Application Example
Retrieval Augmented Generation
# Distribution
This dataset is distributed by the three named dataset curators. Dissemination of the publication was done via social media in order to reach the relevant research and machine learning communities. Furthermore, the library community within Germany has been informed about the publication of this dataset.
## Dataset Curators
The dataset was curated and published by two librarians employed at Staatsbibliothek zu Berlin – Berlin State Library as well as one member of the research project "Mensch.Maschine.Kultur" ("Human.Machine.Culture"):
Maria Federbusch, Staatsbibliothek zu Berlin – Berlin State Library. Maria Federbusch has studied librarianship and electrical engineering. She works as a research librarian in the Department of Manuscripts and Early Printed Books at the Berlin State Library and is responsible for project and science management.
Remigiusz Stachowiak, Staatsbibliothek zu Berlin – Berlin State Library. Remigiusz Stachowiak has studied history, Eastern European studies and edition philology. He holds a doctorate in late medieval history. He is a research librarian in the Department of Manuscripts and Early Printed Books at Berlin State Library.
Jörg Lehmann, Staatsbibliothek zu Berlin – Berlin State Library. Jörg Lehmann has studied history and comparative literature and works in the research project "Mensch.Maschine.Kultur"; he was responsible for downloading the data, curating them as described, and drafting the datasheet.
All curators can be contacted with regard to an update or feedback to the datasheet and regarding technical issues. The curators are prepared to incorporate responses and comments into a new version of the datasheet if this deems sensible.
## Licensing Information
[Creative Commons Zero v1.0 – CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
## Citation Information
```bibtex
@dataset{federbusch_2025_15167939,
author = {Federbusch, Maria and
Stachowiak, Remigiusz and
Lehmann, Jörg},
title = {Metadata of the "Verzeichnis der im deutschen
Sprachraum erschienen Drucke"
},
month = apr,
year = 2025,
publisher = {Staatsbibliothek zu Berlin - Berlin State Library},
doi = {10.5281/zenodo.15167939},
url = {https://doi.org/10.5281/zenodo.15167939},
}
```
## Contributions
The data contained in the dataset have been compiled by trained librarians over a long period of time and are therefore a typical result of carefully crafted metadata from several cultural heritage institutions. The selection, transformation and curation of the dataset was performed with the goal to provide a computer-amenable dataset to both the research and machine learning communities.
The "Verzeichnis der im deutschen Sprachraum erschienenen Drucke" is a joint effort by German VD libraries, pan-European libraries since 1969 as well as all contributors to the central library system (CBS) / the K10plus database, through which the title data were retrieved in electronic form. However, this data publication transcends traditional librarian formats and provides a computer-amenable dataset.
# Composition
In Germany, three separate cooperative projects exist to catalogue all prints for the sixteenth, seventeenth and eighteenth century. The full title of these three projects are: "Verzeichnis der im deutschen Sprachbereich erschienenen Drucke des 16. Jahrhunderts" (in English: Register of printed works of the 16th century published in German-speaking countries, abbreviated [VD16](http://www.vd16.de/)); "Verzeichnis der im deutschen Sprachraum erschienenen Drucke des 17. Jahrhunderts" (in English: Union catalogue of books printed in German-speaking countries in the 17th century, abbreviated [VD17](http://www.vd17.de/), also available as a database [VD17](https://kxp.k10plus.de/DB=1.28/)); and "Verzeichnis der im deutschen Sprachraum erschienenen Drucke des 18. Jahrhunderts" (in English: Union catalogue of books printed in German-speaking countries in the 18th century, abbreviated [VD18](http://www.vd18.de/), also available as a database [VD18](https://vd18.k10plus.de/)). All the three projects are funded by the [Deutsche Forschungsgemeinschaft](https://www.dfg.de/en) (German Research Foundation).
Taken together, the three projects VD16, VD17 and VD18 aim to establish a retrospective German national bibliography for the period from 1501 to 1800. Included are all German-language items as well as any work printed and published in the German speaking countries of the time, regardless of its language. As of February 2025, the VD16 comprises about 106,000 titles, the VD17 lists about 315,000 items, and the VD18 about 295,000 titles. None of the projects have yet been completed. The VD16 started in 1969 and was released as a printed library catalogue; this project is directed by the Bayerische Staatsbibliothek München. VD17 was conceived as a database from the beginning and started in 1996 and VD18 in 2009. Only the latter aimed at digitisation from its very inception. In the meantime, all three projects are aiming at a complete digitisation of the recorded titles. However, the three projects differ in their approaches: While VD16 and VD17 aim at recording all known copies of an edition, in VD18 only one copy per edition is included. In this way, possible variations in prints are taken into account for the VD16 and VD17, while the increase in book production in the 18th century prevents the digitization of multiple variants. Another peculiarity is the fact that the works were recorded according to three different systems: Guidelines for VD16, RAK and RDA. Genre terms were consistently used only in the VD17 and VD18.
The union catalogue K10plus merges the databases of the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ) and the Verbundzentrale des Gemeinsamen Bibliotheksverbundes (VZG). The revised national bibliographic master records from the VD16 and VD17 projects are available in the K10plus as external data. In the VD18 project, master records are also captured directly in the K10plus or imported into the K10plus database from other union databases.
For the purpose of this data publication, the records of the three VD projects were taken together, thus forming a large data publication consisting exclusively of revised metadata. The dataset was created by systematically downloading all relevant titles of the VD16, VD17 and VD18 in February 2025 from the K10plus; it also includes data collected within the union and made available via the K10plus database. That said, the German federal library system has to be taken into account, as well as the fact that not every German library participates in the VD projects. As a consequence, not every title which has been recorded in Germany has been revised and is therefore available in the K10plus database. The fact that the absolute number of titles contained in this dataset exceeds the sum of the three VD’s as given above is explained by the fact that the dataset also comprises the complete recording of a multi-part monograph, the parts of monographic series as well as journals and dependent works. This latter term (in contrast to independent literature or monographs) refers to printed publications such as included publications or songs that are part of a larger work.
The first file ("bibliography") comprises 750,342 revised titles with ample metadata, which are sorted into 174 columns. Here, all edited and revised titles of prints from VD16, VD17 and VD18 from K10plus are merged into one file. The file contains not only monographs and journals and their individual units, but also so-called included or attached (dependent) works. The dataset contains bibliographic data and subject indexing data, but no description of the individual book copies, since these are only recorded in K10plus from the two library networks GBV and SWB. The inclusion of these data would distort the overall picture of copy records, since the book copies from other library networks are not taken into account here (e.g. copy records from Bavaria). The search syntax in K10plus (WinIBW user interface) for the file described here is as follows: f sbn ((vd16? or vd17? or vd18?) bbg a!v?) or (vrb redvd18? bbg asv?). This expression can be translated as follows: find all bibliographic numbers of the edited and revised titles of the three VDs or find all records of the edited and revised dependent works of the VD18.
The second file ("digital copies") contains the records of the prints from VD16, VD17 and VD18 that have been digitised. Due to the different data models of the VDs, the search query must be adapted to the individual databases. From the data pools of VD16 and VD17, only titles containing links to digital copies are included. From the VD18, the so-called o-records (records of the digital copies) are taken into account. This file also contains not only monographs and journals and their individual units, but also so-called included or attached (dependent) works. The search syntax in the K10plus (WinIBW user interface) for the second file described here is as follows: f sbn ((vd18? bbg (o!v? or o!u?)) or (vrb redvd18? bbg os?)) or sbn ((vd16? or vd17?) bbg a!v?)). This expression can be translated as follows: find all bibliographic numbers of the edited and revised titles with their digital records of the VD18 or find all bibliographic numbers of the edited and revised dependent works of the VD18 or find all records of the edited and revised titles of VD16 and VD17. From the resulting dataset all titles were removed which did not contain any information about a digital copy.
The second file comprises 590,528 observations of 16 variables. It contains the unique identifier "Pica-Produktionsnummer", the identifier of each VD ("VD-Nummer"), references to the physical copy of the book, links to digital copies, in rare cases also rights or licence information and access status. It thus serves as a concordance between the bibliographic file and the currently available digital copies.
The tasks for which this dataset may be used are studies on the history of books between 1501 and 1800 and on pattern recognition on the basis of bibliographical metadata. The intention behind the creation of this dataset was to provide a substantial and at the same time delimited dataset covering the German national bibliography as exhaustively as possible in early 2025. The language used in the dataset is German; however, as the names of the three VD’s indicate ("Union catalogue of books printed in German-speaking countries"), the titles described in the dataset may be in other languages as well, such as Latin, Greek, and others.
The dataset consists of two files (VD-Bib-Metadata.parquet and VD-Digi-Metadata.parquet) as well as the accompanying datasheet. Both files fully comply with the European General Data Protection Regulation (GDPR). The dataset does not contain information that is under copyright. The files exclusively comprise textual information and links. Authors and people who are the subject of the listed titles are named and often linked to authority files, especially to the authority file [Gemeinsame Normdatei (GND)](https://gnd.network/Webs/gnd/EN/Home/home_node.html). Since the time frame covered by the dataset ends with the year 1800 and, in the case of journals and multipart monographs, around 1810, the dataset does not contain any personal or sensitive information relating to people who are still alive or who died less than 70 years ago.
## Data Category
text files
## Object Type
metadata
## Dataset Structure
Within the databases of the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ) and Verbundzentrale des GBV (VZG) / the K10plus database, bibliographic information is provided in machine-readable form separated into fields. An overview of most of these fields as well as instructions on how information was inserted into each field [can be found here (in German language only)](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=index&katalog=Standard). Data for the more than 750,000 titles were transformed from the format available in the K10plus into a columnar format, where each of the fields forms an individual column. In the K10plus database, the field names are given in the form of four digits. For a better readability of the table, the field names used in the [documentation](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=index&katalog=Standard) were added to these four digits.
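As an illustration, the renaming step can be sketched in Python; the tag-to-label mapping below is a small, illustrative excerpt restricted to fields mentioned in this datasheet, not the full K10plus documentation:

```python
# Illustrative sketch: prefix four-digit K10plus field tags with their
# documented names, as was done for the published parquet files.
# The labels below are taken from fields cited in this datasheet.
FIELD_LABELS = {
    "0100": "Pica-Produktionsnummer",
    "0500": "Bibliografische Gattung und Status",
    "1100": "Erscheinungsdatum/Entstehungsdatum",
    "1500": "Sprachcodes",
    "2190": "VD16-Nummer",
    "2191": "VD17-Nummer",
    "2192": "VD18-Nummer",
}

def label_column(tag: str) -> str:
    """Return the human-readable column name '<tag> <label>' for a field tag."""
    label = FIELD_LABELS.get(tag)
    return f"{tag} {label}" if label else tag
```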
The dataset does not contain data splits which can be used for machine learning tasks. It is advisable to first analyse the dataset and then perform the split(s), for example to split off portions of the dataset which pertain to one of the three project centuries. Records pertaining to either VD16, VD17 or VD18 are identifiable, since they contain a VD-number in the respective column ("2190 VD16-Nummer", "2191 VD17-Nummer", or "2192 VD18-Nummer").
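A split by project century can thus be performed on the presence of a VD number; a minimal sketch with a tiny synthetic frame (column names as in the published file, values invented):

```python
import pandas as pd

# Synthetic stand-in for the bibliographic file; with the real data you
# would start from pd.read_parquet("VD-Bib-Metadata.parquet").
df = pd.DataFrame({
    "0100 Pica-Produktionsnummer": ["000000001", "000000002", "000000003"],
    "2190 VD16-Nummer": ["VD16 A 1", None, None],
    "2191 VD17-Nummer": [None, "39:123456A", None],
    "2192 VD18-Nummer": [None, None, "VD18 10000001"],
})

# A record belongs to a VD project if its VD-number column is populated.
vd16 = df[df["2190 VD16-Nummer"].notna()]
vd17 = df[df["2191 VD17-Nummer"].notna()]
vd18 = df[df["2192 VD18-Nummer"].notna()]
```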
### Data Instances
For nearly every column, information on how to interpret the content and resolve the codes used in each field can be found [in the documentation](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=index&katalog=Standard). Field "[1500 Sprachcodes](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=1500&katalog=Standard)", for example, holds information on the language of the publication in ISO 639-2/B format, e.g. ger, lat, grc, fre. Moreover, further information has been inserted, separated by a \$ and a lower-case letter. The entry "lat\$agrc\$ager" contains the separator \$a and can be read as "the languages of the publication are Latin, Greek and German", while it is not specified how much text is covered by each of these languages (\$a indicates the language of the text). The entry "ger\$clat" can be read as "the language of the publication is German, but the original text is in Latin".
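Such subfield-coded values can be parsed mechanically; a hedged sketch, assuming the convention described above (a `$` followed by a lower-case letter introduces each subfield, and the leading value carries the default code `$a`):

```python
def parse_subfields(value: str, default_code: str = "a") -> list[tuple[str, str]]:
    """Split a K10plus field value like 'lat$agrc$ager' into
    (subfield-code, value) pairs. Text before the first '$' is assigned
    the default code (here 'a', the language of the text)."""
    parts = value.split("$")
    result = [(default_code, parts[0])]
    for part in parts[1:]:
        # Each subfield starts with its one-letter code.
        result.append((part[0], part[1:]))
    return result
```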
### Data Fields
Most fields (or columns) are described [in the documentation](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=index&katalog=Standard). However, there are fields which are essential for the understanding of this dataset. A key element of this dataset are Pica Production Numbers (PPNs). PPNs are unique identifiers used for individual entities; in the dataset, they are surrounded by exclamation marks (!..!) and are thus machine-readable. They are related to titles as well as to authorities. To provide some illuminating examples: the physical work with the PPN [567660303](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=567660303) noted in field "0100 Pica-Produktionsnummer" contains a reference to the digital edition bearing the PPN [858908824](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=858908824); the latter information can be found in field "4243 Beziehungen auf Manifestationsebene". Furthermore, the main contributor to the text, Andreas Wenzel, has received the PPN 080488994. This PPN can be used to retrieve information about Andreas Wenzel via the online catalogue ([https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=080488994](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=080488994)), or in the form of an XML file in [DublinCore](http://unapi.k10plus.de/?id=gvk:ppn:080488994&format=dc) or [MODS](http://unapi.k10plus.de/?id=gvk:ppn:080488994&format=mods) format. Wenzel’s PPN is given in field "3000 Person/Familie als 1. geistiger Schöpfer", along with his [GND](https://gnd.network/Webs/gnd/EN/Home/home_node.html) number [120161052](https://d-nb.info/gnd/120161052). Furthermore, PPNs are used to characterise this work, e.g.
by providing information on the type of content (field "1131 Art des Inhalts"), in this case PPN [106394134](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=106394134) for "Anthologie" (corresponding to GND [4002214-6](https://d-nb.info/gnd/4002214-6)), or information on the genre, PPNs [096632232](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=096632232) for "Lyrik" and [096631643](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=096631643) for "Gelegenheitsschrift:Hochzeit"; the latter two can be found in field "5570 Gattungsbegriffe bei Alten Drucken" (genre terms in old prints). A comprehensive list of these genre terms [can be found here](https://verbundwiki.gbv.de/display/GAD/Gattungsbegriffe+der+Arbeitsgemeinschaft+Alte+Drucke+beim+GBV+und+SWB).
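Because PPNs are machine-readable, references between records can be extracted programmatically; a minimal sketch (the regular expression is an assumption based only on the `!..!` convention described above, and the unAPI URL mirrors the links given in this section):

```python
import re

# PPNs appear in field values surrounded by exclamation marks, e.g. "!567660303!".
PPN_RE = re.compile(r"!([0-9]+X?)!")

def extract_ppns(field_value: str) -> list[str]:
    """Collect all PPN references contained in a field value."""
    return PPN_RE.findall(field_value)

def unapi_url(ppn: str, fmt: str = "dc") -> str:
    """Build an unAPI URL for a record, in DublinCore ('dc') or 'mods' format."""
    return f"http://unapi.k10plus.de/?id=gvk:ppn:{ppn}&format={fmt}"
```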
Field "0500 Bibliografische Gattung und Status" (bibliographic genre and status) holds information on whether a title was recorded referring to its physical form (A) or to its electronic form (O), on its bibliographic frequency (a for single units, c for the recording of a multi-part monograph, f for a volume of it, b for a journal, v for a journal volume), and on its status (v for bibliographical completeness and cataloguing in autopsy). Fields 2190, 2191 and 2192 ("VD16-Nummer", "VD17-Nummer", or "VD18-Nummer") hold the IDs used by the three VD16, VD17 and VD18 projects respectively. Field "[5301 Basisklassifikation](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=5301&katalog=Standard)" uses the [BK (Basisklassifikation)](https://wiki.k10plus.de/display/K10PLUS/Basisklassifikation), another classification system extensively used for subject indexing. Field "[1100 Erscheinungsdatum/Entstehungsdatum](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=1100&katalog=Standard)" gives information on the date of publication or creation, field "[4040 Normierter Ort](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4040&katalog=Standard)" on the place of publication in standardised form.
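The positional codes of field 0500 can be decoded mechanically; the following sketch covers only the code values listed in the paragraph above:

```python
def decode_0500(code: str) -> dict:
    """Decode field '0500 Bibliografische Gattung und Status'.
    Position 1: form, position 2: bibliographic frequency, position 3: status.
    Only the values described in this datasheet are mapped; others pass through."""
    form = {"A": "physical form", "O": "electronic form"}
    freq = {
        "a": "single unit",
        "c": "multi-part monograph",
        "f": "volume of a multi-part monograph",
        "b": "journal",
        "v": "journal volume",
    }
    status = {"v": "bibliographically complete, catalogued in autopsy"}
    return {
        "form": form.get(code[0], code[0]),
        "frequency": freq.get(code[1], code[1]),
        "status": status.get(code[2], code[2]) if len(code) > 2 else None,
    }
```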
### Compliance with Standard
The standard used in the K10plus is PICA+, an internal library data format (also serialisable as PICA-XML). However, to enable easier ingestion and processing, both files were converted to the columnar .parquet format.
### Data Splits
Not applicable.
## Languages
The languages most often represented in the bibliographic file (i.e. the languages of the publications listed there) are German (ger), Latin (lat), Ancient Greek (grc), Modern Greek (gre), Low German (nds), French (fre), Hebrew (heb), English (eng), and Italian (ita). Beyond these, more than a hundred other languages are represented as well. See the column "1500 Sprachcodes" in the file, which specifies the language of the publication in [ISO 639-2/B](https://www.loc.gov/standards/iso639-2/php/code_list.php) format. This field also describes in detail the diverse combinations of languages characterising each individual title, and indicates whether the title is a translation and from which language it was translated.
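Counting the primary language per title can be done directly on this column; a toy sketch (the sample values are invented, but follow the subfield convention explained under "Data Instances"):

```python
from collections import Counter

def primary_language(sprachcodes: str) -> str:
    """Return the first language code of field '1500 Sprachcodes'
    (everything before the first '$' subfield separator)."""
    return sprachcodes.split("$")[0]

# Invented sample of column values; with the real file this would be the
# "1500 Sprachcodes" column of VD-Bib-Metadata.parquet.
column = ["ger", "lat$agrc$ager", "ger$clat", "lat", "ger"]
counts = Counter(primary_language(v) for v in column)
```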
## Descriptive Statistics
The bibliographic file comprises 750,342 observations of 174 variables. In other words: 750,342 titles published between 1501 and roughly 1800 are described in 174 columns. The columns are sparsely populated. Most of the columns are of the data type "character" due to additional, character-based codes within field contents, derived from the custom internal data format called PICA+. The column "0100 Pica-Produktionsnummer", which holds the unique key for each title, is also in character format, because some of the keys contain an "X". File size is 487 MB (511,651,840 bytes).
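The sparsity of the columns can be quantified as a fill rate per column; a sketch with a tiny synthetic stand-in for the real file (with the real data, start from `pd.read_parquet("VD-Bib-Metadata.parquet")`):

```python
import pandas as pd

# Synthetic stand-in: the PPN column is always populated, the VD16
# column only for VD16 records (values invented).
df = pd.DataFrame({
    "0100 Pica-Produktionsnummer": ["00000001X", "000000028", "000000036"],
    "2190 VD16-Nummer": ["VD16 A 1", None, None],
})

# Share of non-missing cells per column, most densely populated first.
fill_rate = df.notna().mean().sort_values(ascending=False)
```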
The second file ("digital copies") comprises 590,528 observations of 16 variables. It contains the unique identifier "Pica-Produktionsnummer", the identifier of each VD ("VD-Nummer"), references to the physical copy of the book, links to digital copies, in rare cases also rights or licence information and access status. It thus serves as a concordance between the bibliographic file and the digital copies available in February 2025. File size is 105 MB (110,825,472 bytes).
# Data Collection Process
The idea behind the decision to collect all the titles listed in the VD16, VD17 and VD18 was to compile a reasonably large dataset of bibliographic metadata which is at the same time delimitable and the product of more than 50 years of librarians’ work. The criteria for whether a title is added to one of the three VDs can be found on the websites of each VD cited above. The bibliographic metadata contained in this dataset were established by trained librarians. As hundreds of libraries participate in the federated system, the dataset was established collaboratively; it is extremely rich and saturated, and the data collected are of high quality.
Alternative datasets with similar characteristics would be national bibliographies of other European nation-states since the beginning of book printing in the late 15th century. A comprehensive publication of such metadata might enable similar studies and tasks as have been described for the present dataset.
## Curation Rationale
The lack of massive datasets with bibliographic metadata motivated the creation of this dataset. Curation was performed with great care, with the intent to keep as much information as possible while discarding some fields/columns because they were sparsely populated.
## Source Data
### Initial Data Collection and Normalisation
The two files were systematically downloaded from the K10plus database in February 2025; both downloads were each consolidated into a single table. With regard to the bibliographic file, some columns were sorted out during the process of consolidation. In the vast majority of cases, the decision for deletion was taken on a statistical basis: a column containing fewer than 750 entries (0.1% of the length of the file) seemed dispensable, given that up to 750,342 entries would have been possible. Further reasons for the dismissal of columns were that they are relevant only for internal purposes (e.g., the date of the last change of the bibliographic description) or that they resulted from obvious mistakes.
No normalisation of any data was performed.
### Source Data Producers
The source data were produced by trained librarians over the course of the past 55 years, i.e. starting from 1969. The titles to be included in the three VD’s were carefully selected, and the source data were created by librarians under changing rules and standards such as the "Regeln für die alphabetische Katalogisierung (RAK)" and the "Resource Description and Access (RDA)". VD16 was recorded according to its own system of rules. Since the data result from changing rules and standards, they are not homogeneous, which may have consequences for their analysis. It is notable that this dataset not only documents a significant volume and diversity of publications, but is the result of centuries-long processes of preservation of relevant historical titles, and of data collection conducted with care.
### Digitisation Pipeline
Not applicable.
## Preprocessing and Cleaning
Beyond the data cleaning described above in the section "Initial Data Collection and Normalisation", no further preprocessing of either file has been undertaken. The conversion of the files to the columnar Parquet format was performed with the aim of easing ingestion of the dataset for machine learning purposes. If there was more than one entry in a field, the entries were concatenated and are presented within square brackets, with commas separating the individual entries.
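Multi-valued cells of this form can be split back into lists; a sketch assuming that individual entries do not themselves contain commas:

```python
def split_multivalue(cell: str) -> list[str]:
    """Split a concatenated multi-value cell like '[ger, lat]' into its
    entries; single-valued cells are returned as a one-element list.
    (Assumes individual entries contain no commas.)"""
    if cell.startswith("[") and cell.endswith("]"):
        return [entry.strip() for entry in cell[1:-1].split(",")]
    return [cell]
```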
## Annotations
Not applicable.
### Annotation Process
Not applicable.
### Annotators
Not applicable.
### Crowd Labour
Not applicable.
## Data Provenance
The provenance of the data is the database management system CBS (Central Library System), which contains the data of the K10plus database of the library networks GBV (Gemeinsamer Bibliotheksverbund) and SWB (Südwestdeutscher Bibliotheksverbund). Alternatively, the data could have been retrieved in various library formats [via the SRU-API of the GBV](http://uri.gbv.de/database/k10plus), the SRU-APIs of the [K10plus](https://sru.k10plus.de/opac-de-627), of the [VD17](https://sru.k10plus.de/vd17) or [VD18](https://sru.k10plus.de/vd18). Furthermore, the data of the VD17 is available as a dump in JSON format on GitHub. The licence generally attached to [bibliographic metadata](https://format.gbv.de/about/license) is [CC0](http://creativecommons.org/publicdomain/zero/1.0/).
## Use of Linked Open Data, Controlled Vocabulary, Multilingual Ontologies/Taxonomies
The dataset contains links to several authority file types (persons, places, subjects) like the [Gemeinsame Normdatei (GND)](https://gnd.network/Webs/gnd/EN/Home/home_node.html), and to classification systems such as the *Nederlandse basisclassificatie* [BK (Basisklassifikation)](https://wiki.k10plus.de/display/K10PLUS/Basisklassifikation). The classification was performed by the librarians who created the bibliographic record. The Basisklassifikation is usually linked via a PPN; e.g., the PPN [106403877](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=106403877) is used for the Basisklassifikation identifier 11.50 pointing to church history, as it is used [in this title](https://gvk.k10plus.de/DB=2.1/PPNSET?PPN=635726874). Links to the GND collected within the [Culturegraph project](https://www.dnb.de/EN/Professionell/Metadatendienste/Metadaten/Culturegraph/culturegraph_node.html) were transferred to the union catalogue and can be found in sub-fields \$K and \$v of the 3000 fields and the field "1131 Art des Inhalts" [in machine-generated form](https://wiki.k10plus.de/display/K10PLUS/Maschinelle+Anreicherungen+mittels+Culturegraph+im+K10plus).
## Version Information
This is the first version of the dataset, which was collected in February 2025. All three VD’s are works in progress, and newly recorded or digitised titles are continually added to the underlying databases. Even though the "Verzeichnis der im deutschen Sprachraum erschienenen Drucke" is limited to the period 1501 to 1800, this does not mean that the number of titles contained in it stops growing. Especially the VD18 continues to grow, since a substantial number of works from that period are currently being digitised. However, it is currently not planned to publish an updated and enlarged version of all the titles contained in the three VD’s.
### Release Date
2025-04-08
### Date of Modification
Not applicable.
### Checksums
**MD5 and SHA256 hashes of the file VD-Bib-Metadata.parquet:**
MD5: 513ad5441d76ade910902a2b7a339776
SHA256: cca861b112b15f2f18cb9b6b041e08ff135809deb3bad8637886a59de81b0bbe
**MD5 and SHA256 hashes of the file VD-Digi-Metadata.parquet:**
MD5: 653bf7ae7daef840038ef3f7c8d9cb28
SHA256: 87f3d059651c4dbd5ceaca28ecf6f00f213a1934ddcd71fc0a9d13f69c9e62fe
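Downloads can be verified against these checksums with a small helper; the commented-out call shows the intended use on the real file:

```python
import hashlib

def digests(data: bytes) -> tuple[str, str]:
    """Return (MD5, SHA256) hex digests of raw bytes."""
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

def verify_file(path: str, md5_expected: str, sha256_expected: str) -> bool:
    """Recompute both digests of a downloaded file and compare them
    against the published values."""
    with open(path, "rb") as f:
        md5, sha = digests(f.read())
    return md5 == md5_expected and sha == sha256_expected

# verify_file("VD-Bib-Metadata.parquet",
#             "513ad5441d76ade910902a2b7a339776",
#             "cca861b112b15f2f18cb9b6b041e08ff135809deb3bad8637886a59de81b0bbe")
```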
### Update Periodicity
Not applicable.
## Maintenance
The maintenance of this dataset is limited. The data will not be updated, but any technical issues will be addressed during the lifetime of the research project "Mensch.Maschine.Kultur", in the context of which this dataset was established. The project ends in June 2025, and the dataset will be maintained at least until then.
The dataset consists of two files (VD-Bib-Metadata.parquet and VD-Digi-Metadata.parquet) as well as the accompanying datasheet. The .parquet format enables large amounts of data to be processed and moved quickly. It is a data format that stores data in a columnar layout and is maintained as an Apache project; the Apache Arrow project provides Parquet libraries for [C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby, and Rust](https://arrow.apache.org/docs/cpp/parquet.html).
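A sketch of loading both files with pandas and joining them on the shared PPN column (tiny synthetic frames stand in for the real parquet files here; column names follow this datasheet, and with the real data you would call `pd.read_parquet("VD-Bib-Metadata.parquet")`):

```python
import pandas as pd

# Synthetic stand-ins for VD-Bib-Metadata.parquet and
# VD-Digi-Metadata.parquet (values and the link URL are invented).
bib = pd.DataFrame({
    "0100 Pica-Produktionsnummer": ["567660303", "000000028"],
    "1500 Sprachcodes": ["ger", "lat"],
})
digi = pd.DataFrame({
    "0100 Pica-Produktionsnummer": ["567660303"],
    "link": ["https://example.org/digital-copy"],  # placeholder URL
})

# Inner join: keep only bibliographic records that have a digital copy.
with_copies = bib.merge(digi, on="0100 Pica-Produktionsnummer", how="inner")
```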
# Examples and Considerations for Using the Data
The dataset is suitable to serve the two tasks described above. Further possible uses are the creation of a knowledge graph out of the PPNs contained in the dataset and the related data which can be retrieved online as described above. Furthermore, the dataset can be used to examine metadata diversity and data quality, and its potential to be transformed into linked open data. People using this dataset are welcome to provide feedback on the dataset and on the uses made of it to the dataset curators.
## Ethical Considerations
### Personal and Other Sensitive Information
The dataset does not contain personal or sensitive information beyond what is available in international authority files anyway. Since the youngest titles contained in this dataset were printed in the early 19th century, the dataset does not contain any sensitive data in the sense of contemporary privacy laws.
### Discussion of Biases
With regard to social biases, it should be noted that the creators of the titles contained in the bibliographic file were members of a small literate elite of their times; they belonged, for example, to universities, the clergy, or the administration. The vast majority of these creators were male.
As can easily be imagined, book production started in the early 16th century at a low level and increased continually over time. Therefore, an uneven distribution of works published over time is observable. The Reformation and Protestantism in the 16th and especially the 17th century are reflected in the dominance of print production located in Eastern Germany. With regard to the 18th century, it has to be noted that this bibliography is far from complete, because the project VD18 started only in 2009 and is therefore the youngest of the three. Seen against this background, the coverage of titles published in the 17th and especially the 18th century is, as compared to the 16th century, still far from complete.
With regard to the regional distribution of the places where the titles listed in the bibliographic file were printed, two biases are noteworthy. First, the boundaries of what is now known as "Germany" constantly shifted within the last 500 years. Moreover, the emphasis on individual towns changed over time. While Wittenberg was an important centre of book printing in the 16th century, a significant decrease of output has to be noted for this town until the late 18th century. In this respect, the database may serve as a source of further, especially quantitative, analysis. Second, as has been noted above, the three projects were funded by the German Research Foundation over different time periods, and only libraries located in the unified Germany after 1990 received funding; therefore, ecclesiastical and archival holdings are mostly absent. The focus of the "catalogue of prints published in the German-speaking world" overrepresents printing places which now lie in the Federal Republic of Germany in its borders after 1990 or, formulated the other way around, the bibliographic file underrepresents works printed in Alsace, Switzerland, Austria and the former Prussian provinces Pomerania, Silesia as well as West and East Prussia. Printing places like, e.g., Stettin (Szczecin), Breslau (Wrocław) or Königsberg (Kaliningrad) are therefore underrepresented. These printing places must be specifically supplemented as part of future VD projects. VD16 excluded important printing centres like Amsterdam, Prague and Copenhagen, as they were not seen as part of the German-speaking countries; however, German-language titles printed in these centres were included in the VDs. In the VD17 and VD18, library holdings of the western and southern parts of Germany are underrepresented, and therefore also printing places lying in these regions.
Another distributional bias can be identified in the presence of languages in the bibliographic file: while in the timeframe between 1500 and about 1670 Latin was dominant as the language of the prints, from then on German dominated. Finally, it has to be noted that the three VD projects are far from complete, especially the VD18. Several tens of thousands of titles are still missing from the VD18, since it is highly unlikely that fewer prints appeared in the 18th century than in the 17th century.
The two files consist of the three parts of the VD joined together. However, VD16, VD17, and VD18 are to be understood as separate projects between which recording practices are not consistent, a fact which is reflected in differing uses of classification schemata and authority files. A good example of this incoherence is the indexing of genres of historical prints. In the VD16 project, Gattungsbegriffe (genre terms in old prints) were initially not used. As a result, such terms are prevalent in the VD17, whereas titles listed in the VD16 have received genre-term metadata much less often. For a comprehensive estimation of the biases of the three VD’s see [Lauer et al. 2024](https://doi.org/10.1515/bd-2024-0076).
### Potential Societal Impact of Using the Dataset
This dataset describes historical titles (printed books, monographs, multi-volume works, journals) that were published between 1501 and 1800. Most probably, the societal impact of the dataset is therefore very low.
## Examples of Datasets, Publications and Models that (re-)use the Dataset
So far, this dataset has not yet been reused. The dataset contains links to several other datasets which can be retrieved online as complementary resources. See the section "Data Fields" above, especially the explanations on the Pica Production Numbers (PPNs) used in the dataset. There are overlaps with the dataset "Metadata of the ‘Alter Realkatalog’ (ARK) of Berlin State Library (SBB)" published on Zenodo at [https://zenodo.org/records/13284442](https://zenodo.org/records/13284442).
## Known Non-Ethical Limitations
A known limitation of the dataset is the fact that journal volumes were only in a few cases catalogued separately; rather, they were added to the respective journal record. Therefore, the dataset contains hardly any such records and does not enable statistical analyses of journal volumes. Users are invited to report further limitations back to the curators.
## Unanticipated Uses made of this Dataset
There are no known unanticipated uses made of this dataset. Users are invited to report the uses they made of this dataset back to the curators, which would enable an update of this datasheet.
Datasheet as of April 08, 2025 |
Vchitect/Vchitect_T2V_DataVerse | Vchitect | 2025-03-27T15:07:30Z | 17,867 | 6 | [
"task_categories:text-to-video",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2501.08453",
"region:us"
] | [
"text-to-video"
] | 2025-03-14T05:35:10Z | null | ---
license: apache-2.0
task_categories:
- text-to-video
size_categories:
- 1M<n<10M
---
# Vchitect-T2V-Dataverse
<div>
<div align="center">
<a href='https://vchitect.intern-ai.org.cn/' target='_blank'>Vchitect Team<sup>1</sup></a> 
</div>
<div>
<div align="center">
<sup>1</sup>Shanghai Artificial Intelligence Laboratory 
</div>
<div align="center">
<a href="https://arxiv.org/abs/2501.08453">Paper</a> |
<a href="https://vchitect.intern-ai.org.cn/">Project Page</a> |
</div>
## Data Overview
The Vchitect-T2V-Dataverse is the core dataset used to train our text-to-video diffusion model, Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models.
It comprises 14 million high-quality videos collected from the Internet, each paired with detailed textual captions. This large-scale dataset enables the model to learn rich video-text alignments and generate temporally coherent video content from textual prompts.
For more technical details, data processing procedures, and model training strategies, please refer to our paper.
## BibTex
```
@article{fan2025vchitect,
title={Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models},
author={Fan, Weichen and Si, Chenyang and Song, Junhao and Yang, Zhenyu and He, Yinan and Zhuo, Long and Huang, Ziqi and Dong, Ziyue and He, Jingwen and Pan, Dongwei and others},
journal={arXiv preprint arXiv:2501.08453},
year={2025}
}
@article{si2025RepVideo,
title={RepVideo: Rethinking Cross-Layer Representation for Video Generation},
author={Si, Chenyang and Fan, Weichen and Lv, Zhengyao and Huang, Ziqi and Qiao, Yu and Liu, Ziwei},
  journal={arXiv preprint arXiv:2501.08994},
year={2025}
}
```
## Disclaimer
We disclaim responsibility for user-generated content. The model was not trained to realistically represent people or events, so using it to generate such content is beyond its capabilities. Generating pornographic, violent or gory content is prohibited, as is generating content that is demeaning or harmful to people or their environment, culture, religion, etc. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards. |
zhang0jhon/Aesthetic-4K | zhang0jhon | 2025-03-25T02:40:34Z | 5,149 | 21 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2503.18352",
"doi:10.57967/hf/5209",
"region:us"
] | [] | 2025-02-16T01:47:04Z | 2 | ---
license: mit
---
# Aesthetic-4K Dataset
We introduce Aesthetic-4K, a high-quality dataset for ultra-high-resolution image generation, featuring carefully selected images and captions generated by GPT-4o.
Additionally, we have meticulously filtered out low-quality images through manual inspection, excluding those with motion blur, focus issues, or mismatched text prompts.
For more details, please refer to our paper:
* [Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2503.18352) (CVPR 2025)
* Source code is available at [https://github.com/zhang0jhon/diffusion-4k](https://github.com/zhang0jhon/diffusion-4k).
## Citation
If you find our paper or dataset helpful in your research or applications, please cite it using:
```
@inproceedings{zhang2025diffusion4k,
title={Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models},
author={Jinjin Zhang and Qiuyu Huang and Junjie Liu and Xiefan Guo and Di Huang},
year={2025},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}
``` |
knoveleng/open-rs | knoveleng | 2025-03-24T02:18:39Z | 2,762 | 8 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2503.16219",
"region:us"
] | [
"text-generation"
] | 2025-03-18T09:44:28Z | 2 | ---
language: en
license: mit
task_categories:
- text-generation
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: level
dtype: string
splits:
- name: train
num_bytes: 7763718
num_examples: 7000
download_size: 3678677
dataset_size: 7763718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Open-RS Dataset
## Dataset Description
- **Repository**: [knoveleng/open-rs](https://github.com/knoveleng/open-rs)
- **Paper**: [Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn’t](https://arxiv.org/abs/2503.16219)
### Summary
The `open-rs` dataset contains 7,000 mathematical reasoning problems: 3,000 hard problems from `open-s1` and 4,000 (1,000 easy + 3,000 hard) from `open-deepscaler`. It’s a core component of the [Open RS project](https://github.com/knoveleng/open-rs), enhancing reasoning in small LLMs via reinforcement learning.
## Usage
Load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
ds = load_dataset("knoveleng/open-rs")["train"]
print(ds[0])
```
## Dataset Structure
### Data Instance
An example entry:
```json
{
"problem": "Let \(S(M)\) denote the sum of digits of a positive integer \(M\) in base 10. Let \(N\) be the smallest positive integer such that \(S(N) = 2013\). What is \(S(5N + 2013)\)?",
"solution": "1. **Find smallest \(N\) with \(S(N) = 2013\):** To minimize \(N\), use mostly 9s. Since \(2013 \div 9 = 223\), \(N\) could be 223 nines (sum \(9 \times 223 = 2007\)), then adjust the first digit to 7, making \(N = 7 \times 10^{223} - 1\). Sum: \(7 + 222 \times 9 = 2013\). 2. **Compute \(5N + 2013\):** \(5N = 5 \times (7 \times 10^{223} - 1) = 35 \times 10^{223} - 5\), so \(5N + 2013 = 35 \times 10^{223} + 2008\). 3. **Calculate \(S(5N + 2013\):** This is 35 followed by 219 zeros, then 2008 (last 4 digits). Sum: \(3 + 5 + 2 + 0 + 0 + 8 = 18\). Final answer: \( \boxed{18} \).",
"answer": "18",
"level": "Hard"
}
```
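The arithmetic in this sample solution can be checked directly with Python's arbitrary-precision integers. This is a quick sanity check of the example above, not part of the dataset itself:

```python
def digit_sum(n: int) -> int:
    """Sum of base-10 digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# Smallest N with digit sum 2013: the digit 6 followed by 223 nines,
# i.e. N = 7 * 10**223 - 1.
N = 7 * 10**223 - 1
assert digit_sum(N) == 2013

# 5N + 2013 = 35 * 10**223 + 2008, whose digits sum to 3 + 5 + 2 + 0 + 0 + 8 = 18.
assert digit_sum(5 * N + 2013) == 18
```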
### Data Fields
- **`problem`**: Mathematical question (string).
- **`solution`**: Detailed solution steps (string); if no official solution exists, the answer is provided in LaTeX format.
- **`answer`**: Correct final answer (string).
- **`level`**: Difficulty level (string): "Hard" or "Easy".
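A minimal sketch of working with these fields, using an in-memory sample in place of the hosted dataset (the record values here are illustrative, not actual dataset entries):

```python
# Records mirroring the open-rs schema: problem, solution, answer, level.
sample = [
    {"problem": "What is 2 + 2?", "solution": "2 + 2 = 4.", "answer": "4", "level": "Easy"},
    {"problem": "What is S(5N + 2013)?", "solution": "See above.", "answer": "18", "level": "Hard"},
]

# Select the hard subset and collect its final answers.
hard = [row for row in sample if row["level"] == "Hard"]
answers = [row["answer"] for row in hard]
```

The same filtering works on the loaded `datasets` object via `ds.filter(lambda row: row["level"] == "Hard")`.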
## Citation
```bibtex
@misc{dang2025reinforcementlearningreasoningsmall,
title={Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't},
author={Quy-Anh Dang and Chris Ngo},
year={2025},
eprint={2503.16219},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.16219},
}
``` |
lumolabs-ai/Lumo-Fart-DS-Instruct | lumolabs-ai | 2025-03-18T15:14:38Z | 184 | 2 | [
"license:agpl-3.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-18T13:21:44Z | 2 | ---
license: agpl-3.0
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: chunk
dtype: string
splits:
- name: train
num_bytes: 1145837069.8
num_examples: 475152
- name: test
num_bytes: 60307214.2
num_examples: 25008
download_size: 612509196
dataset_size: 1206144284.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# 🌟 Lumo-Fart-DS-Instruct: Apex Solana Knowledge Repository 🌟

**Lumo-Fart-DS-Instruct** is the latest open-source advancement in structured Solana-related knowledge, incorporating **500,160 high-quality question-answer pairs**. This dataset is **5.2x larger** than its predecessor, Lumo-Novel-DS-Instruct, with enhanced precision, comprehensive coverage, and an optimized architecture for large-scale AI fine-tuning in the Solana ecosystem.
---
## 📈 **Dataset Statistics**
| Metric | Value |
|--------|-------|
| Total QA Pairs | 500,160 |
| Precision Improvement | 4.1x over predecessor |
| Data Source Depth | 42+ authoritative references |
---
## 📜 **Data Sources**

The dataset is sourced from highly authoritative references:
### **Solana Documentation:**
- [Solana Docs](https://solana.com/docs)
- [Solana Cookbook](https://solana.com/developers/cookbook)
- [Solana Terminology](https://solana.com/docs/terminology)
- [Solana RPC API](https://solana.com/docs/rpc)
- [Solana StackExchange Dump](https://github.com/solana-developers/stackexchange-dump)
### **Ecosystem References:**
- [Jito Docs](https://docs.jito.wtf)
- [Raydium Docs](https://docs.raydium.io/raydium)
- [Jupiter Guide](https://station.jup.ag/guides)
- [GeckoTerminal API](https://apiguide.geckoterminal.com)
- [CoinGecko Docs](https://docs.coingecko.com/reference/introduction)
- [Helius Docs](https://docs.helius.dev)
- [QuickNode Solana](https://www.quicknode.com/docs/solana)
- [ChainStack Solana](https://docs.chainstack.com/reference/solana-getting-started)
- [Meteora Docs](https://docs.meteora.ag)
- [PumpPortal](https://pumpportal.fun)
- [DexScreener Docs](https://docs.dexscreener.com)
- [MagicEden Docs](https://docs.magiceden.io)
- [Tatum Solana](https://docs.tatum.io/reference/rpc-solana)
- [Alchemy Solana](https://docs.alchemy.com/reference/solana-api-quickstart)
- [Bitquery Solana](https://docs.bitquery.io/docs/examples/Solana/solana-balance-updates/)
- [Orca Docs](https://docs.orca.so)
- [Lifinity](https://docs.lifinity.io/)
- [GooseFX](https://docs.goosefx.io/)
- [FluxBeam](https://docs.fluxbeam.xyz/)
- [Phoenix](https://ellipsis-labs.gitbook.io/phoenix-dex/tRIkEFlLUzWK9uKO3W2V)
- [Drift](https://docs.drift.trade/)
- [HXRO](https://docs.hxro.network/)
- [FlashTrade](https://docs.flash.trade/flash-trade/flash-trade/getting-started)
- [Zeta](https://docs.zeta.markets/)
- [MarginFi](https://docs.marginfi.com/)
- [Solend](https://dev.solend.fi/docs/intro)
- [Kamino](https://docs.kamino.finance/)
- [Marinade](https://docs.marinade.finance/)
- [BlazeStake](https://stake-docs.solblaze.org/)
- [Sanctum](https://learn.sanctum.so/docs)
- [BonkBot](https://docs.bonkbot.io/)
- [BirdEye](https://docs.birdeye.so/)
- [Trojan](https://docs.trojan.app/)
- [Phantom](https://docs.phantom.com/)
- [Squads](https://docs.squads.so/main)
- [SolFlare](https://docs.solflare.com/solflare)
- [SolScan](https://docs.solscan.io/)
- [Dune (Solana only)](https://docs.dune.com/data-catalog/solana/overview)
- [ZKCompression](https://www.zkcompression.com)
---
## ⚙️ **Data Collection & Processing**
### 🔍 **Data Extraction**
* **Authoritative Sources Only:** Official documentation, developer guides, and community-curated content.
* **Manual Filtering:** Eliminated duplicates, inconsistencies, and irrelevant artifacts.
### 🛠️ **Processing Pipeline**
* **Contextual Filtration:**
- Data is filtered by category, ensuring complete, knowledge-rich chunks are used.
- Preserves full context and coherence without chunking artifacts.
* **Question Generation:**
- Generated **10 diverse and targeted questions** per data chunk using o1-preview.
- Questions tailored for medium to advanced users, covering distinct Solana features.
* **Answer Generation:**
- Detailed, accurate, and context-specific responses generated using Solana documentation.
---
## 🚀 **Usage**
### 💡 **Fine-Tuning Large Language Models**
* Equip LLMs to:
- Answer Solana-specific queries.
- Generate smart contract code snippets.
- Support ecosystem-based conversations.
### 🔬 **Research & Development**
* Ideal for advancing AI research in:
- Natural language understanding for blockchain topics.
- Conversational AI in DeFi and crypto ecosystems.
### 🤖 **Conversational AI Systems**
* Build Solana-savvy chatbots and voice assistants for seamless user interaction.
---
## 🔢 **Generation Code Overview**
The dataset was generated using the following tools and techniques:
* **Async Python Frameworks:** For highly efficient and concurrent processing.
* **Rate-Limited APIs:** Throughput is optimized with **50 concurrent requests** and a rate cap of **4,000 API calls per minute**, with all calls made to **o1-preview** for generation.
* **Service-Specific Contexts:** Tailored prompts for each dataset source.
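The concurrency and rate limits above can be sketched with plain `asyncio`. This is a minimal illustration under stated assumptions, not the project's actual generation code; `generate_qa` stands in for the real o1-preview client call:

```python
import asyncio
import time

class RateLimiter:
    """Caps both concurrency and sustained request rate."""

    def __init__(self, max_concurrent: int, calls_per_minute: int):
        self._semaphore = asyncio.Semaphore(max_concurrent)
        self._interval = 60.0 / calls_per_minute  # minimum seconds between calls
        self._lock = asyncio.Lock()
        self._last_call = 0.0

    async def __aenter__(self):
        await self._semaphore.acquire()
        async with self._lock:
            wait = self._last_call + self._interval - time.monotonic()
            if wait > 0:
                await asyncio.sleep(wait)
            self._last_call = time.monotonic()
        return self

    async def __aexit__(self, *exc):
        self._semaphore.release()

async def generate_qa(chunk: str, limiter: RateLimiter) -> str:
    async with limiter:
        # Placeholder for the real model API call (client code not shown here).
        await asyncio.sleep(0)
        return f"QA pairs for: {chunk}"

async def run(chunks):
    limiter = RateLimiter(max_concurrent=50, calls_per_minute=4000)
    return await asyncio.gather(*(generate_qa(c, limiter) for c in chunks))

results = asyncio.run(run([f"chunk-{i}" for i in range(5)]))
```

The semaphore bounds in-flight requests at 50, while the lock-guarded timestamp spaces calls at least 60/4000 seconds apart, matching the stated rate cap.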
|
autogluon/chronos_datasets | autogluon | 2025-03-18T10:38:52Z | 14,680 | 38 | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"source_datasets:original",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.07815",
"region:us"
] | [
"time-series-forecasting"
] | 2024-06-22T15:59:58Z | null | ---
annotations_creators:
- no-annotation
license: other
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
pretty_name: Chronos datasets
dataset_info:
- config_name: dominick
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: im_0
dtype: int64
splits:
- name: train
num_bytes: 477140250
num_examples: 100014
download_size: 42290010
dataset_size: 477140250
- config_name: electricity_15min
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: consumption_kW
sequence: float64
splits:
- name: train
num_bytes: 670989988
num_examples: 370
download_size: 284497403
dataset_size: 670989988
license: CC BY 4.0
homepage: https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014
- config_name: ercot
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ns]
- name: target
sequence: float32
splits:
- name: train
num_examples: 8
download_size: 14504261
- config_name: exchange_rate
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 8
download_size: 401501
license: MIT
homepage: https://github.com/laiguokun/multivariate-time-series-data/tree/master/exchange_rate
- config_name: m4_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 160504176
num_examples: 4227
download_size: 65546675
dataset_size: 160504176
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 5985544
num_examples: 414
download_size: 1336971
dataset_size: 5985544
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 181372969
num_examples: 48000
download_size: 52772258
dataset_size: 181372969
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 39205397
num_examples: 24000
download_size: 13422579
dataset_size: 39205397
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 5955806
num_examples: 359
download_size: 2556691
dataset_size: 5955806
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 14410042
num_examples: 23000
download_size: 5488601
dataset_size: 14410042
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m5
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: item_id
dtype: string
- name: target
sequence: float32
- name: dept_id
dtype: string
- name: cat_id
dtype: string
- name: store_id
dtype: string
- name: state_id
dtype: string
splits:
- name: train
num_bytes: 574062630
num_examples: 30490
download_size: 78063286
dataset_size: 574062630
homepage: https://www.kaggle.com/competitions/m5-forecasting-accuracy/rules
- config_name: mexico_city_bikes
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 618999406
num_examples: 494
download_size: 103206946
dataset_size: 618999406
homepage: https://ecobici.cdmx.gob.mx/en/open-data/
- config_name: monash_australian_electricity
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 18484319
num_examples: 5
download_size: 16856156
dataset_size: 18484319
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_car_parts
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2232790
num_examples: 2674
download_size: 70278
dataset_size: 2232790
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_cif_2016
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 115096
num_examples: 72
download_size: 70876
dataset_size: 115096
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_covid_deaths
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 907326
num_examples: 266
download_size: 58957
dataset_size: 907326
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_electricity_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 135103443
num_examples: 321
download_size: 31139117
dataset_size: 135103443
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_electricity_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 807315
num_examples: 321
download_size: 333563
dataset_size: 807315
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_fred_md
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1248369
num_examples: 107
download_size: 412207
dataset_size: 1248369
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_hospital
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 767
download_size: 117038
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_kdd_cup_2018
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: city
dtype: string
- name: station
dtype: string
- name: measurement
dtype: string
splits:
- name: train
num_bytes: 47091540
num_examples: 270
download_size: 8780105
dataset_size: 47091540
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_london_smart_meters
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2664567976
num_examples: 5560
download_size: 597389119
dataset_size: 2664567976
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 907691
num_examples: 617
download_size: 244372
dataset_size: 907691
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 162961
num_examples: 203
download_size: 48439
dataset_size: 162961
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 75679
num_examples: 181
download_size: 30754
dataset_size: 75679
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2708124
num_examples: 1428
download_size: 589699
dataset_size: 2708124
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 606428
num_examples: 756
download_size: 188543
dataset_size: 606428
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 305359
num_examples: 645
download_size: 100184
dataset_size: 305359
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_nn5_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 111
download_size: 64620
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_pedestrian_counts
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_bytes: 50118790
num_examples: 66
download_size: 12377357
dataset_size: 50118790
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_rideshare
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: source_location
dtype: string
- name: provider_name
dtype: string
- name: provider_service
dtype: string
- name: price_min
sequence: float64
- name: price_mean
sequence: float64
- name: price_max
sequence: float64
- name: distance_min
sequence: float64
- name: distance_mean
sequence: float64
- name: distance_max
sequence: float64
- name: surge_min
sequence: float64
- name: surge_mean
sequence: float64
- name: surge_max
sequence: float64
- name: api_calls
sequence: float64
- name: temp
sequence: float64
- name: rain
sequence: float64
- name: humidity
sequence: float64
- name: clouds
sequence: float64
- name: wind
sequence: float64
splits:
- name: train
num_bytes: 10819910
num_examples: 156
download_size: 781873
dataset_size: 10819910
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_saugeenday
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: T1
sequence: float64
splits:
- name: train
num_bytes: 379875
num_examples: 1
download_size: 222678
dataset_size: 379875
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_temperature_rain
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: t_mean
sequence: float64
- name: prcp_sum
sequence: float64
- name: t_max
sequence: float64
- name: t_min
sequence: float64
- name: fcst_0_dailypop
sequence: float64
- name: fcst_0_dailypop1
sequence: float64
- name: fcst_0_dailypop10
sequence: float64
- name: fcst_0_dailypop15
sequence: float64
- name: fcst_0_dailypop25
sequence: float64
- name: fcst_0_dailypop5
sequence: float64
- name: fcst_0_dailypop50
sequence: float64
- name: fcst_0_dailyprecip
sequence: float64
- name: fcst_0_dailyprecip10pct
sequence: float64
- name: fcst_0_dailyprecip25pct
sequence: float64
- name: fcst_0_dailyprecip50pct
sequence: float64
- name: fcst_0_dailyprecip75pct
sequence: float64
- name: fcst_1_dailypop
sequence: float64
- name: fcst_1_dailypop1
sequence: float64
- name: fcst_1_dailypop10
sequence: float64
- name: fcst_1_dailypop15
sequence: float64
- name: fcst_1_dailypop25
sequence: float64
- name: fcst_1_dailypop5
sequence: float64
- name: fcst_1_dailypop50
sequence: float64
- name: fcst_1_dailyprecip
sequence: float64
- name: fcst_1_dailyprecip10pct
sequence: float64
- name: fcst_1_dailyprecip25pct
sequence: float64
- name: fcst_1_dailyprecip50pct
sequence: float64
- name: fcst_1_dailyprecip75pct
sequence: float64
- name: fcst_2_dailypop
sequence: float64
- name: fcst_2_dailypop1
sequence: float64
- name: fcst_2_dailypop10
sequence: float64
- name: fcst_2_dailypop15
sequence: float64
- name: fcst_2_dailypop25
sequence: float64
- name: fcst_2_dailypop5
sequence: float64
- name: fcst_2_dailypop50
sequence: float64
- name: fcst_2_dailyprecip
sequence: float64
- name: fcst_2_dailyprecip10pct
sequence: float64
- name: fcst_2_dailyprecip25pct
sequence: float64
- name: fcst_2_dailyprecip50pct
sequence: float64
- name: fcst_2_dailyprecip75pct
sequence: float64
- name: fcst_3_dailypop
sequence: float64
- name: fcst_3_dailypop1
sequence: float64
- name: fcst_3_dailypop10
sequence: float64
- name: fcst_3_dailypop15
sequence: float64
- name: fcst_3_dailypop25
sequence: float64
- name: fcst_3_dailypop5
sequence: float64
- name: fcst_3_dailypop50
sequence: float64
- name: fcst_3_dailyprecip
sequence: float64
- name: fcst_3_dailyprecip10pct
sequence: float64
- name: fcst_3_dailyprecip25pct
sequence: float64
- name: fcst_3_dailyprecip50pct
sequence: float64
- name: fcst_3_dailyprecip75pct
sequence: float64
- name: fcst_4_dailypop
sequence: float64
- name: fcst_4_dailypop1
sequence: float64
- name: fcst_4_dailypop10
sequence: float64
- name: fcst_4_dailypop15
sequence: float64
- name: fcst_4_dailypop25
sequence: float64
- name: fcst_4_dailypop5
sequence: float64
- name: fcst_4_dailypop50
sequence: float64
- name: fcst_4_dailyprecip
sequence: float64
- name: fcst_4_dailyprecip10pct
sequence: float64
- name: fcst_4_dailyprecip25pct
sequence: float64
- name: fcst_4_dailyprecip50pct
sequence: float64
- name: fcst_4_dailyprecip75pct
sequence: float64
- name: fcst_5_dailypop
sequence: float64
- name: fcst_5_dailypop1
sequence: float64
- name: fcst_5_dailypop10
sequence: float64
- name: fcst_5_dailypop15
sequence: float64
- name: fcst_5_dailypop25
sequence: float64
- name: fcst_5_dailypop5
sequence: float64
- name: fcst_5_dailypop50
sequence: float64
- name: fcst_5_dailyprecip
sequence: float64
- name: fcst_5_dailyprecip10pct
sequence: float64
- name: fcst_5_dailyprecip25pct
sequence: float64
- name: fcst_5_dailyprecip50pct
sequence: float64
- name: fcst_5_dailyprecip75pct
sequence: float64
splits:
- name: train
num_bytes: 188598927
num_examples: 422
download_size: 44967856
dataset_size: 188598927
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1755434
num_examples: 366
download_size: 334951
dataset_size: 1755434
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 688817
num_examples: 427
download_size: 177407
dataset_size: 688817
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 213954
num_examples: 518
download_size: 81479
dataset_size: 213954
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_traffic
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 241983226
num_examples: 862
download_size: 52748547
dataset_size: 241983226
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_weather
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 688598539
num_examples: 3010
download_size: 133164027
dataset_size: 688598539
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: nn5
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 111
download_size: 203096
homepage: http://www.neural-forecasting-competition.com/downloads/NN5/datasets/download.htm
- config_name: solar
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: power_mw
sequence: float64
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: capacity_mw
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 8689093932
num_examples: 5166
download_size: 1507924920
dataset_size: 8689093932
homepage: https://www.nrel.gov/grid/solar-power-data.html
- config_name: solar_1h
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: power_mw
sequence: float64
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: capacity_mw
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 724361772
num_examples: 5166
download_size: 124515417
dataset_size: 724361772
homepage: https://www.nrel.gov/grid/solar-power-data.html
- config_name: taxi_1h
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
- name: lat
dtype: float64
- name: lng
dtype: float64
splits:
- name: train
num_bytes: 28832500
num_examples: 2428
download_size: 2265297
dataset_size: 28832500
license: Apache 2.0
homepage: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
- config_name: taxi_30min
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
- name: lat
dtype: float64
- name: lng
dtype: float64
splits:
- name: train
num_bytes: 57560596
num_examples: 2428
download_size: 4541244
dataset_size: 57560596
license: Apache 2.0
homepage: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
- config_name: training_corpus_kernel_synth_1m
features:
- name: target
sequence: float64
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
splits:
- name: train
num_examples: 1000000
download_size: 8313239368
- config_name: training_corpus_tsmixup_10m
features:
- name: target
sequence: float64
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
splits:
- name: train
num_examples: 10000000
download_size: 82189589906
- config_name: uber_tlc_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 262
download_size: 84747
homepage: https://github.com/fivethirtyeight/uber-tlc-foil-response
- config_name: uber_tlc_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 262
download_size: 1878515
homepage: https://github.com/fivethirtyeight/uber-tlc-foil-response
- config_name: ushcn_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: state
dtype: string
- name: coop_id
dtype: int64
- name: PRCP
sequence: float64
- name: SNOW
sequence: float64
- name: SNWD
sequence: float64
- name: TMAX
sequence: float64
- name: TMIN
sequence: float64
splits:
- name: train
num_bytes: 2259905202
num_examples: 1218
download_size: 221089890
dataset_size: 2259905202
homepage: https://data.ess-dive.lbl.gov/portals/CDIAC
- config_name: weatherbench_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: level
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 39510157312
num_examples: 225280
download_size: 18924392742
dataset_size: 39510157312
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_10m_u_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7292845757
dataset_size: 8617472000
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_10m_v_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7292352508
dataset_size: 8617472000
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_2m_temperature
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7276396852
dataset_size: 8617453568
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_geopotential
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 87305564613
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_potential_vorticity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 92426240043
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_relative_humidity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94728788382
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_specific_humidity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 85139896451
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_temperature
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94081539079
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_toa_incident_solar_radiation
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6057953007
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_total_cloud_cover
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6628258398
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_total_precipitation
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6473160755
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_u_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94801498563
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_v_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94800557482
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_vorticity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94720960560
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: level
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 5656029184
num_examples: 225280
download_size: 2243012083
dataset_size: 5656029184
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: wiki_daily_100k
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: page_name
dtype: string
splits:
- name: train
num_bytes: 4389782678
num_examples: 100000
download_size: 592554033
dataset_size: 4389782678
license: CC0
homepage: https://dumps.wikimedia.org/other/pageviews/readme.html
- config_name: wind_farms_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1919187
num_examples: 337
download_size: 598834
dataset_size: 1919187
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: wind_farms_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 45917027
num_examples: 337
download_size: 12333116
dataset_size: 45917027
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
configs:
- config_name: dominick
data_files:
- split: train
path: dominick/train-*
- config_name: electricity_15min
data_files:
- split: train
path: electricity_15min/train-*
- config_name: ercot
data_files:
- split: train
path: ercot/train-*
- config_name: exchange_rate
data_files:
- split: train
path: exchange_rate/train-*
- config_name: m4_daily
data_files:
- split: train
path: m4_daily/train-*
- config_name: m4_hourly
data_files:
- split: train
path: m4_hourly/train-*
- config_name: m4_monthly
data_files:
- split: train
path: m4_monthly/train-*
- config_name: m4_quarterly
data_files:
- split: train
path: m4_quarterly/train-*
- config_name: m4_weekly
data_files:
- split: train
path: m4_weekly/train-*
- config_name: m4_yearly
data_files:
- split: train
path: m4_yearly/train-*
- config_name: m5
data_files:
- split: train
path: m5/train-*
- config_name: mexico_city_bikes
data_files:
- split: train
path: mexico_city_bikes/train-*
- config_name: monash_australian_electricity
data_files:
- split: train
path: monash_australian_electricity/train-*
- config_name: monash_car_parts
data_files:
- split: train
path: monash_car_parts/train-*
- config_name: monash_cif_2016
data_files:
- split: train
path: monash_cif_2016/train-*
- config_name: monash_covid_deaths
data_files:
- split: train
path: monash_covid_deaths/train-*
- config_name: monash_electricity_hourly
data_files:
- split: train
path: monash_electricity_hourly/train-*
- config_name: monash_electricity_weekly
data_files:
- split: train
path: monash_electricity_weekly/train-*
- config_name: monash_fred_md
data_files:
- split: train
path: monash_fred_md/train-*
- config_name: monash_hospital
data_files:
- split: train
path: monash_hospital/train-*
- config_name: monash_kdd_cup_2018
data_files:
- split: train
path: monash_kdd_cup_2018/train-*
- config_name: monash_london_smart_meters
data_files:
- split: train
path: monash_london_smart_meters/train-*
- config_name: monash_m1_monthly
data_files:
- split: train
path: monash_m1_monthly/train-*
- config_name: monash_m1_quarterly
data_files:
- split: train
path: monash_m1_quarterly/train-*
- config_name: monash_m1_yearly
data_files:
- split: train
path: monash_m1_yearly/train-*
- config_name: monash_m3_monthly
data_files:
- split: train
path: monash_m3_monthly/train-*
- config_name: monash_m3_quarterly
data_files:
- split: train
path: monash_m3_quarterly/train-*
- config_name: monash_m3_yearly
data_files:
- split: train
path: monash_m3_yearly/train-*
- config_name: monash_nn5_weekly
data_files:
- split: train
path: monash_nn5_weekly/train-*
- config_name: monash_pedestrian_counts
data_files:
- split: train
path: monash_pedestrian_counts/train-*
- config_name: monash_rideshare
data_files:
- split: train
path: monash_rideshare/train-*
- config_name: monash_saugeenday
data_files:
- split: train
path: monash_saugeenday/train-*
- config_name: monash_temperature_rain
data_files:
- split: train
path: monash_temperature_rain/train-*
- config_name: monash_tourism_monthly
data_files:
- split: train
path: monash_tourism_monthly/train-*
- config_name: monash_tourism_quarterly
data_files:
- split: train
path: monash_tourism_quarterly/train-*
- config_name: monash_tourism_yearly
data_files:
- split: train
path: monash_tourism_yearly/train-*
- config_name: monash_traffic
data_files:
- split: train
path: monash_traffic/train-*
- config_name: monash_weather
data_files:
- split: train
path: monash_weather/train-*
- config_name: nn5
data_files:
- split: train
path: nn5/train-*
- config_name: solar
data_files:
- split: train
path: solar/train-*
- config_name: solar_1h
data_files:
- split: train
path: solar_1h/train-*
- config_name: taxi_1h
data_files:
- split: train
path: taxi_1h/train-*
- config_name: taxi_30min
data_files:
- split: train
path: taxi_30min/train-*
- config_name: training_corpus_kernel_synth_1m
data_files:
- split: train
path: training_corpus/kernel_synth_1m/train-*
- config_name: training_corpus_tsmixup_10m
data_files:
- split: train
path: training_corpus/tsmixup_10m/train-*
- config_name: uber_tlc_daily
data_files:
- split: train
path: uber_tlc_daily/train-*
- config_name: uber_tlc_hourly
data_files:
- split: train
path: uber_tlc_hourly/train-*
- config_name: ushcn_daily
data_files:
- split: train
path: ushcn_daily/train-*
- config_name: weatherbench_daily
data_files:
- split: train
path: weatherbench_daily/train-*
- config_name: weatherbench_hourly_10m_u_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/10m_u_component_of_wind/train-*
- config_name: weatherbench_hourly_10m_v_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/10m_v_component_of_wind/train-*
- config_name: weatherbench_hourly_2m_temperature
data_files:
- split: train
path: weatherbench_hourly/2m_temperature/train-*
- config_name: weatherbench_hourly_geopotential
data_files:
- split: train
path: weatherbench_hourly/geopotential/train-*
- config_name: weatherbench_hourly_potential_vorticity
data_files:
- split: train
path: weatherbench_hourly/potential_vorticity/train-*
- config_name: weatherbench_hourly_relative_humidity
data_files:
- split: train
path: weatherbench_hourly/relative_humidity/train-*
- config_name: weatherbench_hourly_specific_humidity
data_files:
- split: train
path: weatherbench_hourly/specific_humidity/train-*
- config_name: weatherbench_hourly_temperature
data_files:
- split: train
path: weatherbench_hourly/temperature/train-*
- config_name: weatherbench_hourly_toa_incident_solar_radiation
data_files:
- split: train
path: weatherbench_hourly/toa_incident_solar_radiation/train-*
- config_name: weatherbench_hourly_total_cloud_cover
data_files:
- split: train
path: weatherbench_hourly/total_cloud_cover/train-*
- config_name: weatherbench_hourly_total_precipitation
data_files:
- split: train
path: weatherbench_hourly/total_precipitation/train-*
- config_name: weatherbench_hourly_u_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/u_component_of_wind/train-*
- config_name: weatherbench_hourly_v_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/v_component_of_wind/train-*
- config_name: weatherbench_hourly_vorticity
data_files:
- split: train
path: weatherbench_hourly/vorticity/train-*
- config_name: weatherbench_weekly
data_files:
- split: train
path: weatherbench_weekly/train-*
- config_name: wiki_daily_100k
data_files:
- split: train
path: wiki_daily_100k/train-*
- config_name: wind_farms_daily
data_files:
- split: train
path: wind_farms_daily/train-*
- config_name: wind_farms_hourly
data_files:
- split: train
path: wind_farms_hourly/train-*
---
# Chronos datasets
Time series datasets used for training and evaluation of the [Chronos](https://github.com/amazon-science/chronos-forecasting) forecasting models.
Note that some Chronos datasets (`ETTh`, `ETTm`, `brazilian_cities_temperature` and `spanish_energy_and_weather`) that rely on a custom builder script are available in the companion repo [`autogluon/chronos_datasets_extra`](https://huggingface.co/datasets/autogluon/chronos_datasets_extra).
See the [paper](https://arxiv.org/abs/2403.07815) for more information.
## Data format and usage
The recommended way to use these datasets is via https://github.com/autogluon/fev.
All datasets satisfy the following high-level schema:
- Each dataset row corresponds to a single (univariate or multivariate) time series.
- There exists one column with name `id` and type `string` that contains the unique identifier of each time series.
- There exists one column of type `Sequence` with dtype `timestamp[ms]`. This column contains the timestamps of the observations. Timestamps are guaranteed to have a regular frequency that can be obtained with [`pandas.infer_freq`](https://pandas.pydata.org/docs/reference/api/pandas.infer_freq.html).
- There exists at least one column of type `Sequence` with numeric (`float`, `double`, or `int`) dtype. These columns can be interpreted as target time series.
- For each row, all columns of type `Sequence` have the same length.
- Remaining columns of types other than `Sequence` (e.g., `string` or `float`) can be interpreted as static covariates.
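As a quick illustration (not an official validator), the schema invariants above can be checked on a single row with plain Python. The row below is illustrative; timestamps are shown as strings for brevity:

```python
def check_row(row):
    """Sketch of the schema invariants for a single dataset row.

    Assumes sequence-valued columns come back as plain Python lists (the
    default `datasets` behaviour before `set_format` is called).
    """
    # One string column named `id` uniquely identifies the series.
    assert isinstance(row["id"], str)
    # Collect the sequence-valued columns.
    seq_cols = {k: v for k, v in row.items() if isinstance(v, list)}
    # One of them holds the timestamps.
    assert "timestamp" in seq_cols
    # All sequence columns in a row have the same length.
    assert len({len(v) for v in seq_cols.values()}) == 1
    # At least one numeric sequence column can serve as the target.
    assert any(
        name != "timestamp" and all(isinstance(x, (int, float)) for x in vals)
        for name, vals in seq_cols.items()
    )
    return True

row = {
    "id": "T000000",
    "timestamp": ["1994-03-01", "1994-03-02"],
    "target": [1017.1, 1019.3],
    "category": "Macro",  # static covariate: not a Sequence column
}
check_row(row)
```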
Datasets can be loaded using the 🤗 [`datasets`](https://huggingface.co/docs/datasets/en/index) library:
```python
import datasets
ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
ds.set_format("numpy") # sequences returned as numpy arrays
```
> **NOTE:** The `train` split of all datasets contains the full time series and has no relation to the train/test split used in the Chronos paper.
Example entry in the `m4_daily` dataset:
```python
>>> ds[0]
{'id': 'T000000',
'timestamp': array(['1994-03-01T12:00:00.000', '1994-03-02T12:00:00.000',
'1994-03-03T12:00:00.000', ..., '1996-12-12T12:00:00.000',
'1996-12-13T12:00:00.000', '1996-12-14T12:00:00.000'],
dtype='datetime64[ms]'),
'target': array([1017.1, 1019.3, 1017. , ..., 2071.4, 2083.8, 2080.6], dtype=float32),
'category': 'Macro'}
```
## Changelog
- **v1.3.0 (2025-03-05)**: Fix incorrect timestamp frequency for `monash_hospital`
- **v1.2.0 (2025-01-03)**: Fix incorrect timestamp frequency for `dominick`
- **v1.1.0 (2024-11-14)**: Fix irregular timestamp frequency for `m4_quarterly`
- **v1.0.0 (2024-07-24)**: Initial release
### Converting to pandas
Data in this format can easily be converted to a long-format data frame:
```python
def to_pandas(ds: datasets.Dataset) -> "pd.DataFrame":
"""Convert dataset to long data frame format."""
sequence_columns = [col for col in ds.features if isinstance(ds.features[col], datasets.Sequence)]
return ds.to_pandas().explode(sequence_columns).infer_objects()
```
Example output:
```python
>>> print(to_pandas(ds).head())
id timestamp target category
0 T000000 1994-03-01 12:00:00 1017.1 Macro
1 T000000 1994-03-02 12:00:00 1019.3 Macro
2 T000000 1994-03-03 12:00:00 1017.0 Macro
3 T000000 1994-03-04 12:00:00 1019.2 Macro
4 T000000 1994-03-05 12:00:00 1018.7 Macro
```
### Dealing with large datasets
Note that some datasets, such as subsets of WeatherBench, are extremely large (~100 GB). To work with them efficiently, we recommend either loading them from disk (files will be downloaded to disk, but won't all be loaded into memory)
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_daily", keep_in_memory=False, split="train")
```
or, for the largest datasets like `weatherbench_hourly_temperature`, reading them in streaming format (chunks will be downloaded one at a time)
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_hourly_temperature", streaming=True, split="train")
```
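A streaming dataset is a plain Python iterable, so stdlib tools such as `itertools.islice` can take a small sample without pulling the rest of the data. The stand-in generator below is illustrative; in practice you would pass the `IterableDataset` returned by `load_dataset(..., streaming=True)`:

```python
from itertools import islice

def take(stream, n):
    """Materialize the first `n` rows of an iterable (streaming) dataset."""
    return list(islice(stream, n))

# Stand-in for a streaming dataset; rows mimic the schema described above.
fake_stream = ({"id": f"T{i:06d}", "target": [float(i)]} for i in range(1_000_000))
sample = take(fake_stream, 3)  # only 3 rows are ever produced
```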
## Chronos training corpus with TSMixup & KernelSynth
The training corpus used for training the Chronos models can be loaded via the configs `training_corpus_tsmixup_10m` (10M TSMixup augmentations of real-world data) and `training_corpus_kernel_synth_1m` (1M synthetic time series generated with KernelSynth), e.g.,
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "training_corpus_tsmixup_10m", streaming=True, split="train")
```
Note that since data in the training corpus was obtained by combining various synthetic & real-world time series, the timestamps contain dummy values that have no connection to the original data.
## License
Different datasets available in this collection are distributed under different open source licenses. Please see `ds.info.license` and `ds.info.homepage` for each individual dataset.
## Citation
If you find these datasets useful for your research, please consider citing the associated paper:
```bibtex
@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Wang, Hao and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
|
xlangai/BRIGHT | xlangai | 2025-03-01T16:51:21Z | 3,480 | 26 | [
"task_categories:text-retrieval",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2407.12883",
"region:us",
"text-retrieval",
"code",
"biology",
"earth_science",
"economics",
"psychology",
"robotics",
"math"
] | [
"text-retrieval"
] | 2024-06-07T23:11:53Z | 2 | ---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-retrieval
dataset_info:
- config_name: Gemini-1.0_reason
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 571840
num_examples: 103
- name: earth_science
num_bytes: 569680
num_examples: 116
- name: economics
num_bytes: 617171
num_examples: 103
- name: psychology
num_bytes: 557169
num_examples: 101
- name: robotics
num_bytes: 467620
num_examples: 101
- name: stackoverflow
num_bytes: 652597
num_examples: 117
- name: sustainable_living
num_bytes: 626085
num_examples: 108
- name: leetcode
num_bytes: 1382908
num_examples: 142
- name: pony
num_bytes: 327318
num_examples: 112
- name: aops
num_bytes: 14182450
num_examples: 111
- name: theoremqa_questions
num_bytes: 13144059
num_examples: 194
- name: theoremqa_theorems
num_bytes: 393390
num_examples: 76
download_size: 5946613
dataset_size: 33492287
- config_name: claude-3-opus_reason
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 557025
num_examples: 103
- name: earth_science
num_bytes: 558266
num_examples: 116
- name: economics
num_bytes: 574237
num_examples: 103
- name: psychology
num_bytes: 516517
num_examples: 101
- name: robotics
num_bytes: 437209
num_examples: 101
- name: stackoverflow
num_bytes: 622316
num_examples: 117
- name: sustainable_living
num_bytes: 591735
num_examples: 108
- name: leetcode
num_bytes: 1407501
num_examples: 142
- name: pony
num_bytes: 361169
num_examples: 112
- name: aops
num_bytes: 14149870
num_examples: 111
- name: theoremqa_questions
num_bytes: 13121703
num_examples: 194
- name: theoremqa_theorems
num_bytes: 382228
num_examples: 76
download_size: 5777474
dataset_size: 33279776
- config_name: documents
features:
- name: id
dtype: string
- name: content
dtype: string
splits:
- name: biology
num_bytes: 21983744
num_examples: 57359
- name: earth_science
num_bytes: 46952371
num_examples: 121249
- name: economics
num_bytes: 22771374
num_examples: 50220
- name: psychology
num_bytes: 23167414
num_examples: 52835
- name: robotics
num_bytes: 20718385
num_examples: 61961
- name: stackoverflow
num_bytes: 189733583
num_examples: 107081
- name: sustainable_living
num_bytes: 24373723
num_examples: 60792
- name: pony
num_bytes: 2365157
num_examples: 7894
- name: leetcode
num_bytes: 456581333
num_examples: 413932
- name: aops
num_bytes: 146564475
num_examples: 188002
- name: theoremqa_theorems
num_bytes: 21124422
num_examples: 23839
- name: theoremqa_questions
num_bytes: 146564475
num_examples: 188002
download_size: 465489179
dataset_size: 1122900456
- config_name: examples
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 326427
num_examples: 103
- name: earth_science
num_bytes: 280741
num_examples: 116
- name: economics
num_bytes: 343172
num_examples: 103
- name: psychology
num_bytes: 286062
num_examples: 101
- name: robotics
num_bytes: 366862
num_examples: 101
- name: stackoverflow
num_bytes: 470365
num_examples: 117
- name: sustainable_living
num_bytes: 336562
num_examples: 108
- name: pony
num_bytes: 135009
num_examples: 112
- name: leetcode
num_bytes: 1212640
num_examples: 142
- name: aops
num_bytes: 13981802
num_examples: 111
- name: theoremqa_theorems
num_bytes: 257842
num_examples: 76
- name: theoremqa_questions
num_bytes: 12810785
num_examples: 194
download_size: 4590189
dataset_size: 30808269
- config_name: gpt4_reason
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 613511
num_examples: 103
- name: earth_science
num_bytes: 618266
num_examples: 116
- name: economics
num_bytes: 642234
num_examples: 103
- name: psychology
num_bytes: 571504
num_examples: 101
- name: robotics
num_bytes: 519720
num_examples: 101
- name: stackoverflow
num_bytes: 704186
num_examples: 117
- name: sustainable_living
num_bytes: 657382
num_examples: 108
- name: leetcode
num_bytes: 1461063
num_examples: 142
- name: pony
num_bytes: 423199
num_examples: 112
- name: aops
num_bytes: 14332394
num_examples: 111
- name: theoremqa_questions
num_bytes: 13262646
num_examples: 194
- name: theoremqa_theorems
num_bytes: 442272
num_examples: 76
download_size: 6421256
dataset_size: 34248377
- config_name: grit_reason
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 478151
num_examples: 103
- name: earth_science
num_bytes: 443792
num_examples: 116
- name: economics
num_bytes: 493163
num_examples: 103
- name: psychology
num_bytes: 407907
num_examples: 101
- name: robotics
num_bytes: 340895
num_examples: 101
- name: stackoverflow
num_bytes: 540771
num_examples: 117
- name: sustainable_living
num_bytes: 475118
num_examples: 108
- name: leetcode
num_bytes: 1305306
num_examples: 142
- name: pony
num_bytes: 258002
num_examples: 112
- name: aops
num_bytes: 14170933
num_examples: 111
- name: theoremqa_questions
num_bytes: 13040517
num_examples: 194
- name: theoremqa_theorems
num_bytes: 360282
num_examples: 76
download_size: 5202414
dataset_size: 32314837
- config_name: llama3-70b_reason
features:
- name: query
dtype: string
- name: reasoning
dtype: string
- name: id
dtype: string
- name: excluded_ids
sequence: string
- name: gold_ids_long
sequence: string
- name: gold_ids
sequence: string
- name: gold_answer
dtype: string
splits:
- name: biology
num_bytes: 631132
num_examples: 103
- name: earth_science
num_bytes: 622087
num_examples: 116
- name: economics
num_bytes: 631657
num_examples: 103
- name: psychology
num_bytes: 563987
num_examples: 101
- name: robotics
num_bytes: 449342
num_examples: 101
- name: stackoverflow
num_bytes: 641853
num_examples: 117
- name: sustainable_living
num_bytes: 654690
num_examples: 108
- name: leetcode
num_bytes: 1376032
num_examples: 142
- name: pony
num_bytes: 315870
num_examples: 112
- name: aops
num_bytes: 14183895
num_examples: 111
- name: theoremqa_questions
num_bytes: 13157545
num_examples: 194
- name: theoremqa_theorems
num_bytes: 405388
num_examples: 76
download_size: 5877403
dataset_size: 33633478
- config_name: long_documents
features:
- name: id
dtype: string
- name: content
dtype: string
splits:
- name: biology
num_bytes: 19454314
num_examples: 524
- name: earth_science
num_bytes: 41843262
num_examples: 601
- name: economics
num_bytes: 20095594
num_examples: 516
- name: psychology
num_bytes: 20541239
num_examples: 512
- name: robotics
num_bytes: 18220587
num_examples: 508
- name: stackoverflow
num_bytes: 184616744
num_examples: 1858
- name: sustainable_living
num_bytes: 21200303
num_examples: 554
- name: pony
num_bytes: 2098474
num_examples: 577
download_size: 104578765
dataset_size: 328070517
configs:
- config_name: Gemini-1.0_reason
data_files:
- split: biology
path: Gemini-1.0_reason/biology-*
- split: earth_science
path: Gemini-1.0_reason/earth_science-*
- split: economics
path: Gemini-1.0_reason/economics-*
- split: psychology
path: Gemini-1.0_reason/psychology-*
- split: robotics
path: Gemini-1.0_reason/robotics-*
- split: stackoverflow
path: Gemini-1.0_reason/stackoverflow-*
- split: sustainable_living
path: Gemini-1.0_reason/sustainable_living-*
- split: leetcode
path: Gemini-1.0_reason/leetcode-*
- split: pony
path: Gemini-1.0_reason/pony-*
- split: aops
path: Gemini-1.0_reason/aops-*
- split: theoremqa_questions
path: Gemini-1.0_reason/theoremqa_questions-*
- split: theoremqa_theorems
path: Gemini-1.0_reason/theoremqa_theorems-*
- config_name: claude-3-opus_reason
data_files:
- split: biology
path: claude-3-opus_reason/biology-*
- split: earth_science
path: claude-3-opus_reason/earth_science-*
- split: economics
path: claude-3-opus_reason/economics-*
- split: psychology
path: claude-3-opus_reason/psychology-*
- split: robotics
path: claude-3-opus_reason/robotics-*
- split: stackoverflow
path: claude-3-opus_reason/stackoverflow-*
- split: sustainable_living
path: claude-3-opus_reason/sustainable_living-*
- split: leetcode
path: claude-3-opus_reason/leetcode-*
- split: pony
path: claude-3-opus_reason/pony-*
- split: aops
path: claude-3-opus_reason/aops-*
- split: theoremqa_questions
path: claude-3-opus_reason/theoremqa_questions-*
- split: theoremqa_theorems
path: claude-3-opus_reason/theoremqa_theorems-*
- config_name: documents
data_files:
- split: biology
path: documents/biology-*
- split: earth_science
path: documents/earth_science-*
- split: economics
path: documents/economics-*
- split: psychology
path: documents/psychology-*
- split: robotics
path: documents/robotics-*
- split: stackoverflow
path: documents/stackoverflow-*
- split: sustainable_living
path: documents/sustainable_living-*
- split: pony
path: documents/pony-*
- split: leetcode
path: documents/leetcode-*
- split: aops
path: documents/aops-*
- split: theoremqa_theorems
path: documents/theoremqa_theorems-*
- split: theoremqa_questions
path: documents/theoremqa_questions-*
- config_name: examples
data_files:
- split: biology
path: examples/biology-*
- split: earth_science
path: examples/earth_science-*
- split: economics
path: examples/economics-*
- split: psychology
path: examples/psychology-*
- split: robotics
path: examples/robotics-*
- split: stackoverflow
path: examples/stackoverflow-*
- split: sustainable_living
path: examples/sustainable_living-*
- split: pony
path: examples/pony-*
- split: leetcode
path: examples/leetcode-*
- split: aops
path: examples/aops-*
- split: theoremqa_theorems
path: examples/theoremqa_theorems-*
- split: theoremqa_questions
path: examples/theoremqa_questions-*
- config_name: gpt4_reason
data_files:
- split: biology
path: gpt4_reason/biology-*
- split: earth_science
path: gpt4_reason/earth_science-*
- split: economics
path: gpt4_reason/economics-*
- split: psychology
path: gpt4_reason/psychology-*
- split: robotics
path: gpt4_reason/robotics-*
- split: stackoverflow
path: gpt4_reason/stackoverflow-*
- split: sustainable_living
path: gpt4_reason/sustainable_living-*
- split: leetcode
path: gpt4_reason/leetcode-*
- split: pony
path: gpt4_reason/pony-*
- split: aops
path: gpt4_reason/aops-*
- split: theoremqa_questions
path: gpt4_reason/theoremqa_questions-*
- split: theoremqa_theorems
path: gpt4_reason/theoremqa_theorems-*
- config_name: grit_reason
data_files:
- split: biology
path: grit_reason/biology-*
- split: earth_science
path: grit_reason/earth_science-*
- split: economics
path: grit_reason/economics-*
- split: psychology
path: grit_reason/psychology-*
- split: robotics
path: grit_reason/robotics-*
- split: stackoverflow
path: grit_reason/stackoverflow-*
- split: sustainable_living
path: grit_reason/sustainable_living-*
- split: leetcode
path: grit_reason/leetcode-*
- split: pony
path: grit_reason/pony-*
- split: aops
path: grit_reason/aops-*
- split: theoremqa_questions
path: grit_reason/theoremqa_questions-*
- split: theoremqa_theorems
path: grit_reason/theoremqa_theorems-*
- config_name: llama3-70b_reason
data_files:
- split: biology
path: llama3-70b_reason/biology-*
- split: earth_science
path: llama3-70b_reason/earth_science-*
- split: economics
path: llama3-70b_reason/economics-*
- split: psychology
path: llama3-70b_reason/psychology-*
- split: robotics
path: llama3-70b_reason/robotics-*
- split: stackoverflow
path: llama3-70b_reason/stackoverflow-*
- split: sustainable_living
path: llama3-70b_reason/sustainable_living-*
- split: leetcode
path: llama3-70b_reason/leetcode-*
- split: pony
path: llama3-70b_reason/pony-*
- split: aops
path: llama3-70b_reason/aops-*
- split: theoremqa_questions
path: llama3-70b_reason/theoremqa_questions-*
- split: theoremqa_theorems
path: llama3-70b_reason/theoremqa_theorems-*
- config_name: long_documents
data_files:
- split: biology
path: long_documents/biology-*
- split: earth_science
path: long_documents/earth_science-*
- split: economics
path: long_documents/economics-*
- split: psychology
path: long_documents/psychology-*
- split: robotics
path: long_documents/robotics-*
- split: stackoverflow
path: long_documents/stackoverflow-*
- split: sustainable_living
path: long_documents/sustainable_living-*
- split: pony
path: long_documents/pony-*
tags:
- text-retrieval
- code
- biology
- earth_science
- economics
- psychology
- robotics
- math
---
# BRIGHT benchmark
BRIGHT is the first text retrieval benchmark that requires intensive reasoning to retrieve relevant documents.
The queries are collected from diverse domains (StackExchange, LeetCode, and math competitions), all sourced from realistic human data.
Experiments show that existing retrieval models perform poorly on BRIGHT: the highest score is only 22.1, as measured by nDCG@10.
BRIGHT provides a good testbed for future retrieval research in more realistic and challenging settings. More details are in the [paper](https://brightbenchmark.github.io/).
## Dataset Structure
We unify all the datasets into a consistent format. They are organized into three subsets: `examples`, `documents`, and `long_documents`:
* `examples`:
* `query`: the query for retrieval
* `reasoning`: the gold reasoning steps annotated by humans (they help people understand the relevance between queries and documents, but are not used in any experiment in the paper)
* `id`: the index of the instance
* `excluded_ids`: a list of the ids (string) to exclude during evaluation (only for `theoremqa`/`aops`/`leetcode`)
* `gold_ids_long`: a list of the ids (string) of the ground truth documents, corresponding to the ids of the `long_documents` subset
* `gold_ids`: a list of the ids (string) of the ground truth documents, corresponding to the indices of the `documents` subset
* `documents`:
* `id`: the index of the document
* `content`: document content (short version split from the complete web page, blogs, etc., or a problem and solution pair)
* `long_documents` (not applicable to `theoremqa`/`aops`/`leetcode`):
* `id`: the index of the document
* `content`: document content (long version corresponding to the complete web page, blogs, etc.)
## Dataset Statistics
<img src="statistics.png" width="80%" alt="BRIGHT statistics">
## Data Loading
Each dataset can be easily loaded. For example, to load biology examples:
```python
from datasets import load_dataset
data = load_dataset('xlangai/BRIGHT', 'examples')['biology']
```
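The `documents` subset loads the same way, and the `gold_ids` of an example index into the document collection by `id`. A minimal sketch of that lookup, using tiny in-memory stand-ins shaped like the fields described above (not the real data):

```python
# Stand-in records mirroring the `examples` and `documents` fields above
examples = [{"id": "0", "query": "how do cells respire?", "gold_ids": ["d1", "d3"]}]
documents = [
    {"id": "d1", "content": "Cellular respiration ..."},
    {"id": "d2", "content": "Photosynthesis ..."},
    {"id": "d3", "content": "The mitochondrion ..."},
]

# Map each document id to its content, then resolve the gold ids of a query
doc_by_id = {doc["id"]: doc["content"] for doc in documents}
gold_docs = [doc_by_id[i] for i in examples[0]["gold_ids"]]
print(gold_docs)  # the ground-truth documents for the first query
```

The same pattern applies to `gold_ids_long` against the `long_documents` subset.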
## Citation
If you find our work helpful, please cite us:
```bibtex
@misc{BRIGHT,
title={BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval},
author={Su, Hongjin and Yen, Howard and Xia, Mengzhou and Shi, Weijia and Muennighoff, Niklas and Wang, Han-yu and Liu, Haisu and Shi, Quan and Siegel, Zachary S and Tang, Michael and Sun, Ruoxi and Yoon, Jinsung and Arik, Sercan O and Chen, Danqi and Yu, Tao},
url={https://arxiv.org/abs/2407.12883},
year={2024},
}
``` |
allenai/CoSyn-400K | allenai | 2025-02-28T19:14:42Z | 3,960 | 10 | [
"task_categories:visual-question-answering",
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2502.14846",
"arxiv:2409.17146",
"region:us"
] | [
"visual-question-answering"
] | 2025-02-23T19:07:29Z | 2 | ---
license: odc-by
task_categories:
- visual-question-answering
dataset_info:
- config_name: chart
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 25262691844.136
num_examples: 116814
- name: validation
num_bytes: 220083787.264
num_examples: 1024
download_size: 24927449477
dataset_size: 25482775631.4
- config_name: chemical
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 282021984.062
num_examples: 8942
- name: validation
num_bytes: 4186180.0
num_examples: 128
download_size: 276447943
dataset_size: 286208164.062
- config_name: circuit
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 405803895.22
num_examples: 10470
- name: validation
num_bytes: 5126755.0
num_examples: 128
download_size: 392176815
dataset_size: 410930650.22
- config_name: diagram
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 6647512945.646
num_examples: 34963
- name: validation
num_bytes: 194765398.0
num_examples: 1024
download_size: 6695298322
dataset_size: 6842278343.646
- config_name: document
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 20408059180.798
num_examples: 71282
- name: validation
num_bytes: 287297344.304
num_examples: 1024
download_size: 20220923713
dataset_size: 20695356525.102
- config_name: graphic
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 401715264.464
num_examples: 26968
- name: validation
num_bytes: 15527102.264
num_examples: 1024
download_size: 360711845
dataset_size: 417242366.728
- config_name: math
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 6288774127.884
num_examples: 66714
- name: validation
num_bytes: 97463564.56
num_examples: 1024
download_size: 6245281939
dataset_size: 6386237692.444
- config_name: music
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 436496623.452
num_examples: 11969
- name: validation
num_bytes: 4754704.0
num_examples: 128
download_size: 397428056
dataset_size: 441251327.452
- config_name: nutrition
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 1445696898.35
num_examples: 6931
- name: validation
num_bytes: 27712685.0
num_examples: 128
download_size: 1410256975
dataset_size: 1473409583.35
- config_name: table
features:
- name: id
dtype: string
- name: image
dtype: image
- name: qa_pairs
sequence:
- name: question
dtype: string
- name: explanation
dtype: string
- name: answer
dtype: string
- name: metadata
struct:
- name: figure_type
dtype: string
- name: persona
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 7026511042.24
num_examples: 46518
- name: validation
num_bytes: 152040498.064
num_examples: 1024
download_size: 6918074537
dataset_size: 7178551540.304
configs:
- config_name: chart
data_files:
- split: train
path: chart/train-*
- split: validation
path: chart/validation-*
- config_name: chemical
data_files:
- split: train
path: chemical/train-*
- split: validation
path: chemical/validation-*
- config_name: circuit
data_files:
- split: train
path: circuit/train-*
- split: validation
path: circuit/validation-*
- config_name: diagram
data_files:
- split: train
path: diagram/train-*
- split: validation
path: diagram/validation-*
- config_name: document
data_files:
- split: train
path: document/train-*
- split: validation
path: document/validation-*
- config_name: graphic
data_files:
- split: train
path: graphic/train-*
- split: validation
path: graphic/validation-*
- config_name: math
data_files:
- split: train
path: math/train-*
- split: validation
path: math/validation-*
- config_name: music
data_files:
- split: train
path: music/train-*
- split: validation
path: music/validation-*
- config_name: nutrition
data_files:
- split: train
path: nutrition/train-*
- split: validation
path: nutrition/validation-*
- config_name: table
data_files:
- split: train
path: table/train-*
- split: validation
path: table/validation-*
---
# CoSyn-400k
CoSyn-400k is a collection of synthetic question-answer pairs about a diverse range of computer-generated images.
The data was created by using the [Claude large language model](https://claude.ai/) to generate code that can be executed to render an image,
and using [GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) to generate Q/A pairs based on the code (without using the rendered image).
The code used to generate this data is [open source](https://github.com/allenai/pixmo-docs).
Synthetic pointing data is available in a [separate repo](https://huggingface.co/datasets/allenai/CoSyn-point).
Quick links:
- 📃 [CoSyn Paper](https://arxiv.org/pdf/2502.14846)
- 📃 [Molmo Paper](https://molmo.allenai.org/paper.pdf)
## Loading
The dataset has several subsets:
- chart
- chemical
- circuit
- diagram
- document
- graphic
- math
- music
- nutrition
- table
Use `config_name` to specify which one to load, by default `chart` will be loaded. For example:
```python
import datasets

table_dataset = datasets.load_dataset("allenai/CoSyn-400K", "table", split="train")
```
## Data Format
The rendered image is included in the dataset directly:
```python
print(table_dataset[0]["image"])
# >>> <PIL.PngImagePlugin.PngImageFile image mode=RGB size=2400x1200 at 0x7F362070CEB0>
```
Each image is matched with multiple question-answer pairs:
```python
for q, a in zip(table_dataset[0]["qa_pairs"]["question"], table_dataset[0]["qa_pairs"]["answer"]):
print(q, a)
# >>>
# What is the waist circumference range for adult females? 64-88 cm
# What is the weight range for children aged 2-12 years? 10-45 kg
# Is the BMI range for infants provided in the table? No
# Which age group has the highest resting heart rate range? Infants (0-1 year)
# What is the difference in lung capacity range between adolescents and elderly? Maximum difference: 0.5 L, Minimum difference: 1.5 L
# Do adult males have a higher blood pressure range than adolescents? Yes
# What is the average height of elderly females compared to male adolescents? Male adolescents are taller by 10 cm
# Does the table provide a consistent BMI range across all groups for females? Yes
# Which gender has a lower average hip circumference range among the elderly? Females have a lower average hip circumference
```
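Because `qa_pairs` is stored as a sequence of structs (parallel lists, one per field), it can be convenient to transpose it into one dict per question. A small sketch on a stand-in record shaped like the feature described in the metadata (not the real data):

```python
# Stand-in record mirroring the `qa_pairs` feature (parallel lists per field)
record = {
    "qa_pairs": {
        "question": ["What is shown?", "How many rows?"],
        "explanation": ["The title names it.", "Count the body rows."],
        "answer": ["A health table", "6"],
    }
}

# Transpose the parallel lists into one dict per Q/A pair
keys = list(record["qa_pairs"].keys())
pairs = [dict(zip(keys, values)) for values in zip(*record["qa_pairs"].values())]
print(pairs[0]["question"], "->", pairs[0]["answer"])
```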
## Splits
The data is divided into train and validation splits. These splits are "unofficial" because we do not generally use this data for evaluation; however, they reflect the splits used during training.
## License
This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
This dataset includes output images derived from code generated by Claude, which are subject to Anthropic's [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup).
The questions were generated by GPT-4o Mini and are subject to [separate terms](https://openai.com/policies/row-terms-of-use) governing their use.
## Citation
Please cite the following papers if you use this dataset in your work.
```bibtex
@article{yang2025scaling,
title={Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation},
author={Yang, Yue and Patel, Ajay and Deitke, Matt and Gupta, Tanmay and Weihs, Luca and Head, Andrew and Yatskar, Mark and Callison-Burch, Chris and Krishna, Ranjay and Kembhavi, Aniruddha and others},
journal={arXiv preprint arXiv:2502.14846},
year={2025}
}
```
```bibtex
@article{deitke2024molmo,
title={Molmo and pixmo: Open weights and open data for state-of-the-art multimodal models},
author={Deitke, Matt and Clark, Christopher and Lee, Sangho and Tripathi, Rohun and Yang, Yue and Park, Jae Sung and Salehi, Mohammadreza and Muennighoff, Niklas and Lo, Kyle and Soldaini, Luca and others},
journal={arXiv preprint arXiv:2409.17146},
year={2024}
}
```
|
simplescaling/s1K-1.1 | simplescaling | 2025-02-27T18:09:26Z | 5,355 | 110 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2501.19393",
"region:us",
"curator"
] | [] | 2025-02-07T00:45:46Z | null | ---
language: en
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: gemini_thinking_trajectory
dtype: string
- name: gemini_attempt
dtype: string
- name: deepseek_thinking_trajectory
dtype: string
- name: deepseek_attempt
dtype: string
- name: gemini_grade
dtype: string
- name: gemini_grade_reason
dtype: string
- name: deepseek_grade
dtype: string
- name: deepseek_grade_reason
dtype: string
splits:
- name: train
num_bytes: 48313304
num_examples: 1000
download_size: 22323185
dataset_size: 48313304
tags:
- curator
---
# Dataset Card for s1K-1.1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/simplescaling/s1
- **Paper:** [s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393)
### Dataset Summary
> s1K-1.1 consists of the same 1,000 questions as in s1K but with traces instead generated by DeepSeek r1. We find that these traces lead to much better performance.
## Usage
```python
# pip install -q datasets
from datasets import load_dataset
ds = load_dataset("simplescaling/s1K-1.1")["train"]
ds[0]
```
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'solution': '1. **Rewrite the function using trigonometric identities:**...(shortened)',
'question': 'Let $a,b,A,B$ be given reals. We consider the function defined by \\[ f(x) = 1 - a \\cdot \\cos(x) - b \\cdot \\sin(x) - A \\cdot \\cos(2x) - B \\cdot \\sin(2x). \\] Prove that if for any real number $x$ we have $f(x) \\geq 0$ then $a^2 + b^2 \\leq 2$ and $A^2 + B^2 \\leq 1.$ ',
'cot_type': 'math',
'source_type': 'AI-MO/NuminaMath-CoT/aops_forum',
'metadata': "{'source': 'aops_forum', 'messages': [{'content': 'Let $a,b,A,B$ be given reals. We consider the function defined by \\\\[ f(x) = 1 - a \\\\cdot \\\\cos(x) - b \\\\cdot \\\\sin(x) - A \\\\cdot \\\\cos(2x) - B \\\\cdot \\\\sin(2x). \\\\] Prove that if for any real number $x$ we have $f(x) \\\\geq 0$ then $a^2 + b^2 \\\\leq 2$ and $A^2 + B^2 \\\\leq 1.$ ', 'role': 'user'}, {'content': '1. **Rewrite the function using trigonometric identities:**\\n \\\\[\\n f(x) = 1 - a \\\\cos(x) - b \\\\sin(x) - A \\\\cos(2x) - B \\\\sin(2x)\\n \\\\]\\n We can use the angle addition formulas for sine and cosine:\\n \\\\[\\n \\\\cos(x + \\\\theta) = \\\\cos(x)\\\\cos(\\\\theta) - \\\\sin(x)\\\\sin(\\\\theta)\\n \\\\]\\n \\\\[\\n \\\\sin(x + \\\\theta) = \\\\sin(x)\\\\cos(\\\\theta) + \\\\cos(x)\\\\sin(\\\\theta)\\n \\\\]\\n Let:\\n \\\\[\\n R_1 = \\\\sqrt{a^2 + b^2}, \\\\quad \\\\theta_1 = \\\\arctan\\\\left(\\\\frac{b}{a}\\\\right)\\n \\\\]\\n \\\\[\\n R_2 = \\\\sqrt{A^2 + B^2}, \\\\quad \\\\theta_2 = \\\\arctan\\\\left(\\\\frac{B}{A}\\\\right)\\n \\\\]\\n Then:\\n \\\\[\\n a \\\\cos(x) + b \\\\sin(x) = R_1 \\\\sin(x + \\\\theta_1)\\n \\\\]\\n \\\\[\\n A \\\\cos(2x) + B \\\\sin(2x) = R_2 \\\\sin(2x + \\\\theta_2)\\n \\\\]\\n Therefore, the function becomes:\\n \\\\[\\n f(x) = 1 - R_1 \\\\sin(x + \\\\theta_1) - R_2 \\\\sin(2x + \\\\theta_2)\\n \\\\]\\n\\n2. **Analyze the function for non-negativity:**\\n Given \\\\( f(x) \\\\geq 0 \\\\) for all \\\\( x \\\\), we need to ensure that the maximum values of the sine terms do not exceed 1:\\n \\\\[\\n 1 - R_1 \\\\sin(x + \\\\theta_1) - R_2 \\\\sin(2x + \\\\theta_2) \\\\geq 0\\n \\\\]\\n The maximum value of \\\\( \\\\sin(x + \\\\theta_1) \\\\) and \\\\( \\\\sin(2x + \\\\theta_2) \\\\) is 1. Therefore:\\n \\\\[\\n 1 - R_1 - R_2 \\\\geq 0\\n \\\\]\\n This implies:\\n \\\\[\\n R_1 + R_2 \\\\leq 1\\n \\\\]\\n\\n3. 
**Consider the individual terms:**\\n Since \\\\( R_1 = \\\\sqrt{a^2 + b^2} \\\\) and \\\\( R_2 = \\\\sqrt{A^2 + B^2} \\\\), we need to show:\\n \\\\[\\n \\\\sqrt{a^2 + b^2} \\\\leq 1 \\\\quad \\\\text{and} \\\\quad \\\\sqrt{A^2 + B^2} \\\\leq 1\\n \\\\]\\n\\n4. **Prove the inequalities:**\\n - For \\\\( \\\\sqrt{a^2 + b^2} \\\\leq 1 \\\\):\\n \\\\[\\n a^2 + b^2 \\\\leq 1^2 = 1\\n \\\\]\\n - For \\\\( \\\\sqrt{A^2 + B^2} \\\\leq 1 \\\\):\\n \\\\[\\n A^2 + B^2 \\\\leq 1^2 = 1\\n \\\\]\\n\\n5. **Conclusion:**\\n Therefore, we have shown that if \\\\( f(x) \\\\geq 0 \\\\) for all \\\\( x \\\\), then:\\n \\\\[\\n a^2 + b^2 \\\\leq 2 \\\\quad \\\\text{and} \\\\quad A^2 + B^2 \\\\leq 1\\n \\\\]\\n\\n\\\\(\\\\blacksquare\\\\)\\n\\nThe final answer is \\\\( \\\\boxed{ a^2 + b^2 \\\\leq 2 } \\\\) and \\\\( A^2 + B^2 \\\\leq 1 \\\\)', 'role': 'assistant'}]}",
'cot': None,
'gemini_thinking_trajectory': "Let $f(x) = 1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x \\geq 0$ for all $x \\in \\mathbb{R}$.\nWe want to show that $a^2 + b^2 \\leq 2$ and $A^2 + B^2 \\leq 1$.\n\nConsider specific values of $x$.\nFor $x=0$, $f(0) = 1 - a - A \\geq 0$, so $a+A \\leq 1$.\nFor $x=\\pi$, $f(\\pi) = 1 - a \\cos \\pi - b \\sin \\pi - A \\cos 2\\pi - B \\sin 2\\pi = 1 - a(-1) - b(0) - A(1) - B(0) = 1+a-A \\geq 0$, so $A-a \\leq 1$.\nFor $x=\\pi/2$, $f(\\pi/2) = 1 - a \\cos(\\pi/2) - b \\sin(\\pi/2) - A \\cos(\\pi) - B \\sin(\\pi) = 1 - a(0) - b(1) - A(-1) - B(0) = 1 - b + A \\geq 0$, so $b-A \\leq 1$.\nFor $x=3\\pi/2$, $f(3\\pi/2) = 1 - a \\cos(3\\pi/2) - b \\sin(3\\pi/2) - A \\cos(3\\pi) - B \\sin(3\\pi) = 1 - a(0) - b(-1) - A(-1) - B(0) = 1 + b + A \\geq 0$, so $-b-A \\leq 1$.\n\nFrom $a+A \\leq 1$ and $A-a \\leq 1$, we have $-1 \\leq a+A \\leq 1$ and $-1 \\leq a-A \\leq 1$.\nFrom $b-A \\leq 1$ and $-b-A \\leq 1$, we have $-1 \\leq b-A \\leq 1$ and $-1 \\leq b+A \\leq 1$.\n\nConsider the integral of $f(x)$ over a period.\n$\\int_0^{2\\pi} f(x) dx = \\int_0^{2\\pi} (1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x) dx \\geq 0$.\n$\\int_0^{2\\pi} 1 dx = 2\\pi$.\n$\\int_0^{2\\pi} \\cos x dx = [\\sin x]_0^{2\\pi} = 0$.\n$\\int_0^{2\\pi} \\sin x dx = [-\\cos x]_0^{2\\pi} = -1 - (-1) = 0$.\n$\\int_0^{2\\pi} \\cos 2x dx = [\\frac{1}{2} \\sin 2x]_0^{2\\pi} = 0$.\n$\\int_0^{2\\pi} \\sin 2x dx = [-\\frac{1}{2} \\cos 2x]_0^{2\\pi} = -\\frac{1}{2} - (-\\frac{1}{2}) = 0$.\nSo $\\int_0^{2\\pi} f(x) dx = 2\\pi \\geq 0$. 
This doesn't give any information.\n\nMultiply $f(x)$ by $\\cos x$ and integrate.\n$\\int_0^{2\\pi} f(x) \\cos x dx = \\int_0^{2\\pi} (1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x) \\cos x dx \\geq 0$.\n$\\int_0^{2\\pi} \\cos x dx = 0$.\n$\\int_0^{2\\pi} \\cos^2 x dx = \\int_0^{2\\pi} \\frac{1+\\cos 2x}{2} dx = [\\frac{x}{2} + \\frac{\\sin 2x}{4}]_0^{2\\pi} = \\pi$.\n$\\int_0^{2\\pi} \\sin x \\cos x dx = \\int_0^{2\\pi} \\frac{1}{2} \\sin 2x dx = [-\\frac{1}{4} \\cos 2x]_0^{2\\pi} = -\\frac{1}{4} - (-\\frac{1}{4}) = 0$.\n$\\int_0^{2\\pi} \\cos 2x \\cos x dx = \\int_0^{2\\pi} \\frac{1}{2} (\\cos 3x + \\cos x) dx = [\\frac{1}{6} \\sin 3x + \\frac{1}{2} \\sin x]_0^{2\\pi} = 0$.\n$\\int_0^{2\\pi} \\sin 2x \\cos x dx = \\int_0^{2\\pi} \\frac{1}{2} (\\sin 3x + \\sin x) dx = [-\\frac{1}{6} \\cos 3x - \\frac{1}{2} \\cos x]_0^{2\\pi} = (-\\frac{1}{6} - \\frac{1}{2}) - (-\\frac{1}{6} - \\frac{1}{2}) = 0$.\nSo $\\int_0^{2\\pi} f(x) \\cos x dx = 0 - a \\pi - 0 - 0 - 0 = -a\\pi \\geq 0$, so $a \\leq 0$. This is not necessarily true.\n\nConsider $f(x) \\geq 0$.\n$1 - (a \\cos x + b \\sin x) - (A \\cos 2x + B \\sin 2x) \\geq 0$.\nLet $a = r \\cos \\alpha, b = r \\sin \\alpha$. Then $a \\cos x + b \\sin x = r (\\cos \\alpha \\cos x + \\sin \\alpha \\sin x) = r \\cos(x-\\alpha)$.\nLet $A = R \\cos \\beta, B = R \\sin \\beta$. Then $A \\cos 2x + B \\sin 2x = R (\\cos \\beta \\cos 2x + \\sin \\beta \\sin 2x) = R \\cos(2x-\\beta)$.\nSo $1 - r \\cos(x-\\alpha) - R \\cos(2x-\\beta) \\geq 0$.\nLet $y = x-\\alpha$. Then $2x-\\beta = 2(y+\\alpha) - \\beta = 2y + 2\\alpha - \\beta$. 
Let $\\gamma = 2\\alpha - \\beta$.\n$1 - r \\cos y - R \\cos(2y+\\gamma) \\geq 0$.\n$1 - r \\cos y - R (\\cos 2y \\cos \\gamma - \\sin 2y \\sin \\gamma) \\geq 0$.\n$1 - r \\cos y - R ((2\\cos^2 y - 1) \\cos \\gamma - 2 \\sin y \\cos y \\sin \\gamma) \\geq 0$.\n$1 - r \\cos y - 2R \\cos^2 y \\cos \\gamma + R \\cos \\gamma + 2R \\sin y \\cos y \\sin \\gamma \\geq 0$.\n\nConsider $f(x) \\geq 0$.\nMultiply by $\\cos(x-\\phi)$ and integrate from $0$ to $2\\pi$.\n$\\int_0^{2\\pi} f(x) \\cos(x-\\phi) dx \\geq 0$.\n$\\int_0^{2\\pi} (1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x) (\\cos x \\cos \\phi + \\sin x \\sin \\phi) dx \\geq 0$.\n$\\cos \\phi \\int_0^{2\\pi} (\\cos x - a \\cos^2 x - b \\sin x \\cos x - A \\cos 2x \\cos x - B \\sin 2x \\cos x) dx + \\sin \\phi \\int_0^{2\\pi} (\\sin x - a \\cos x \\sin x - b \\sin^2 x - A \\cos 2x \\sin x - B \\sin 2x \\sin x) dx \\geq 0$.\n$\\cos \\phi (0 - a\\pi - 0 - 0 - 0) + \\sin \\phi (0 - 0 - b\\pi - 0 - 0) \\geq 0$.\n$-a\\pi \\cos \\phi - b\\pi \\sin \\phi \\geq 0$.\n$a \\cos \\phi + b \\sin \\phi \\leq 0$.\nThis must be true for all $\\phi$. Let $\\phi$ such that $\\tan \\phi = b/a$. Then $a \\frac{a}{\\sqrt{a^2+b^2}} + b \\frac{b}{\\sqrt{a^2+b^2}} = \\sqrt{a^2+b^2} \\leq 0$, so $a=b=0$. This is wrong.\n\nConsider $f(x) \\geq 0$.\n$1 - (a \\cos x + b \\sin x) - (A \\cos 2x + B \\sin 2x) \\geq 0$.\nThe minimum value of $a \\cos x + b \\sin x$ is $-\\sqrt{a^2+b^2}$.\nThe minimum value of $A \\cos 2x + B \\sin 2x$ is $-\\sqrt{A^2+B^2}$.\n$1 - (a \\cos x + b \\sin x) \\geq A \\cos 2x + B \\sin 2x$.\nMinimum of LHS is $1 - \\sqrt{a^2+b^2}$. Maximum of RHS is $\\sqrt{A^2+B^2}$.\n$1 - \\sqrt{a^2+b^2} \\geq \\sqrt{A^2+B^2}$.\n\nConsider $g(x) = 1 - A \\cos 2x - B \\sin 2x \\geq a \\cos x + b \\sin x$.\nMinimum of LHS is $1 - \\sqrt{A^2+B^2}$. 
Maximum of RHS is $\\sqrt{a^2+b^2}$.\n$1 - \\sqrt{A^2+B^2} \\geq \\sqrt{a^2+b^2}$.\n\nLet $x=0$, $1-a-A \\geq 0$.\nLet $x=\\pi$, $1+a-A \\geq 0$.\nLet $x=\\pi/2$, $1-b+A \\geq 0$.\nLet $x=3\\pi/2$, $1+b+A \\geq 0$.\n\nConsider the case where $b=B=0$.\n$f(x) = 1 - a \\cos x - A \\cos 2x \\geq 0$.\n$1 - a \\cos x - A (2 \\cos^2 x - 1) \\geq 0$.\n$1 - a \\cos x - 2A \\cos^2 x + A \\geq 0$.\n$A+1 - a \\cos x - 2A \\cos^2 x \\geq 0$.\nLet $y = \\cos x$, $-1 \\leq y \\leq 1$.\n$g(y) = -2Ay^2 - ay + A+1 \\geq 0$ for $y \\in [-1, 1]$.\nIf $A=0$, $-ay+1 \\geq 0$ for $y \\in [-1, 1]$. If $a>0$, $-a+1 \\geq 0$, $a \\leq 1$. If $a<0$, $a+1 \\geq 0$, $a \\geq -1$. So $|a| \\leq 1$. $a^2 \\leq 1 \\leq 2$.\nIf $A>0$, the parabola opens downwards. We need $g(-1) = -2A+a+A+1 = -A+a+1 \\geq 0$, $a \\geq A-1$. And $g(1) = -2A-a+A+1 = -A-a+1 \\geq 0$, $a \\leq 1-A$. Also the vertex must be outside $[-1, 1]$ or the vertex is in $[-1, 1]$ and the value is non-negative. Vertex at $y = -(-a)/(2(-2A)) = -a/4A$. If $|-a/4A| \\leq 1$, then $a^2 \\leq 16A^2$. $g(-a/4A) = -2A (-a/4A)^2 - a(-a/4A) + A+1 = -2A a^2/16A^2 + a^2/4A + A+1 = -a^2/8A + a^2/4A + A+1 = a^2/8A + A+1 \\geq 0$.\nIf $A<0$, the parabola opens upwards. 
We need $g(-1) \\geq 0$ and $g(1) \\geq 0$.\n$-A+a+1 \\geq 0$, $a \\geq A-1$.\n$-A-a+1 \\geq 0$, $a \\leq 1-A$.\n\nConsider integrals again.\n$\\int_0^{2\\pi} f(x) dx \\geq 0 \\implies 2\\pi \\geq 0$.\n$\\int_0^{2\\pi} f(x) \\cos x dx \\geq 0 \\implies -a \\pi \\geq 0 \\implies a \\leq 0$.\n$\\int_0^{2\\pi} f(x) \\sin x dx \\geq 0 \\implies -b \\pi \\geq 0 \\implies b \\leq 0$.\n$\\int_0^{2\\pi} f(x) \\cos 2x dx \\geq 0 \\implies -A \\pi \\geq 0 \\implies A \\leq 0$.\n$\\int_0^{2\\pi} f(x) \\sin 2x dx \\geq 0 \\implies -B \\pi \\geq 0 \\implies B \\leq 0$.\nThis is not true.\n\nConsider the vector $v = (1, \\cos x, \\sin x, \\cos 2x, \\sin 2x)$.\nThe condition is $v \\cdot c \\geq 0$, where $c = (1, -a, -b, -A, -B)$.\n\nConsider the case where $f(x) = (1 - \\alpha \\cos x - \\beta \\sin x)^2 \\geq 0$.\n$1 + \\alpha^2 \\cos^2 x + \\beta^2 \\sin^2 x - 2 \\alpha \\cos x - 2 \\beta \\sin x + 2 \\alpha \\beta \\cos x \\sin x \\geq 0$.\n$1 + \\frac{\\alpha^2+\\beta^2}{2} + \\frac{\\alpha^2-\\beta^2}{2} \\cos 2x - 2 \\alpha \\cos x - 2 \\beta \\sin x + \\alpha \\beta \\sin 2x \\geq 0$.\nCompare with $1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x \\geq 0$.\n$-a = -2\\alpha$, $-b = -2\\beta$, $-A = \\frac{\\alpha^2-\\beta^2}{2}$, $-B = \\alpha \\beta$.\n$a = 2\\alpha$, $b = 2\\beta$.\n$A = \\frac{\\beta^2-\\alpha^2}{2}$, $B = -\\alpha \\beta$.\n$a^2+b^2 = 4\\alpha^2+4\\beta^2$.\n$A^2+B^2 = \\frac{(\\beta^2-\\alpha^2)^2}{4} + \\alpha^2 \\beta^2 = \\frac{\\beta^4 - 2\\alpha^2 \\beta^2 + \\alpha^4 + 4\\alpha^2 \\beta^2}{4} = \\frac{(\\alpha^2+\\beta^2)^2}{4}$.\n\nConsider $f(x) \\geq 0$.\nConsider the Fourier series of $|f(x)| = f(x)$.\nThe coefficients must match.\n\nLet $z = e^{ix}$.\n$f(x) = 1 - \\frac{a}{2}(z+z^{-1}) - \\frac{b}{2i}(z-z^{-1}) - \\frac{A}{2}(z^2+z^{-2}) - \\frac{B}{2i}(z^2-z^{-2}) \\geq 0$.\n$2if(x) = 2i - ia(z+z^{-1}) + b(z-z^{-1}) - iA(z^2+z^{-2}) + B(z^2-z^{-2}) \\geq 0$.\n$2if(x) = 2i - i a z - i a z^{-1} + b z - b z^{-1} - i A z^2 - i A z^{-2} + 
B z^2 - B z^{-2} \\geq 0$.\n$2if(x) = 2i + (b-ia)z + (-b-ia)z^{-1} + (B-iA)z^2 + (-B-iA)z^{-2} \\geq 0$.\n\nConsider the integral $\\frac{1}{2\\pi} \\int_0^{2\\pi} f(x) e^{inx} dx$.\nFor $n=0$, $\\frac{1}{2\\pi} \\int_0^{2\\pi} f(x) dx = 1 \\geq 0$.\nFor $n=1$, $\\frac{1}{2\\pi} \\int_0^{2\\pi} f(x) (\\cos x + i \\sin x) dx = \\frac{1}{2\\pi} \\int_0^{2\\pi} (1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x) (\\cos x + i \\sin x) dx = -a/2 - ib/2$.\nSince $f(x) \\geq 0$, the coefficients of the Fourier series must satisfy certain conditions.\n\nConsider trigonometric polynomials that are non-negative.\nIf $P(\\theta) = \\sum_{k=-n}^n c_k e^{ik\\theta} \\geq 0$, then $c_{-k} = \\overline{c_k}$ and $c_0 \\geq 0$.\nIn our case, the coefficients are:\n$c_0 = 1$.\n$c_1 = -a/2 - ib/2$.\n$c_{-1} = -a/2 + ib/2$.\n$c_2 = -A/2 - iB/2$.\n$c_{-2} = -A/2 + iB/2$.\nWe need a condition on the coefficients such that the polynomial is non-negative.\n\nIf $1 - a \\cos x - b \\sin x \\geq 0$, then $1 \\geq \\sqrt{a^2+b^2}$.\nIf $1 - A \\cos 2x - B \\sin 2x \\geq 0$, then $1 \\geq \\sqrt{A^2+B^2}$.\n\nConsider $f(x) = |1 - \\alpha e^{ix} - \\beta e^{2ix}|^2 \\geq 0$.\n$(1 - \\alpha e^{ix} - \\beta e^{2ix})(1 - \\bar{\\alpha} e^{-ix} - \\bar{\\beta} e^{-2ix})$.\n$1 - \\bar{\\alpha} e^{-ix} - \\bar{\\beta} e^{-2ix} - \\alpha e^{ix} + |\\alpha|^2 - \\alpha \\bar{\\beta} e^{-ix} - \\beta e^{2ix} - \\beta \\bar{\\alpha} e^{ix} + |\\beta|^2$.\n$1 + |\\alpha|^2 + |\\beta|^2 - (\\bar{\\alpha} + \\alpha + \\alpha \\bar{\\beta}) e^{-ix} - (\\bar{\\beta} + \\beta + \\beta \\bar{\\alpha}) e^{-2ix}$.\n$1 + |\\alpha|^2 + |\\beta|^2 - (2 \\text{Re}(\\alpha) + \\alpha \\bar{\\beta}) e^{-ix} - (2 \\text{Re}(\\beta) + \\beta \\bar{\\alpha}) e^{-2ix}$.\nCompare coefficients.\n\nIf $P(\\theta) = |\\sum_{k=0}^n z_k e^{ik\\theta}|^2$, then $P(\\theta) \\geq 0$.\nConsider $P(x) = |1 - \\alpha e^{ix}|^2 = (1-\\alpha e^{ix})(1-\\bar{\\alpha} e^{-ix}) = 1 - \\bar{\\alpha} e^{-ix} - \\alpha e^{ix} + 
|\\alpha|^2 = 1+|\\alpha|^2 - 2 \\text{Re}(\\alpha) \\cos x + 2 \\text{Im}(\\alpha) \\sin x$.\nCompare with $1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x$.\n\nConsider $f(x) \\geq 0$.\n$1 - (a \\cos x + b \\sin x) - (A \\cos 2x + B \\sin 2x) \\geq 0$.\nLet $a = r_1 \\cos \\phi_1, b = r_1 \\sin \\phi_1$.\nLet $A = r_2 \\cos \\phi_2, B = r_2 \\sin \\phi_2$.\n$1 - r_1 \\cos(x-\\phi_1) - r_2 \\cos(2x-\\phi_2) \\geq 0$.\nLet $x - \\phi_1 = y$. Then $2x - \\phi_2 = 2(y+\\phi_1) - \\phi_2 = 2y + 2\\phi_1 - \\phi_2$. Let $\\delta = 2\\phi_1 - \\phi_2$.\n$1 - r_1 \\cos y - r_2 \\cos(2y+\\delta) \\geq 0$.\n$1 - r_1 \\cos y - r_2 (\\cos 2y \\cos \\delta - \\sin 2y \\sin \\delta) \\geq 0$.\n$1 - r_1 \\cos y - r_2 ( (2\\cos^2 y - 1) \\cos \\delta - 2 \\sin y \\cos y \\sin \\delta ) \\geq 0$.\n$1 - r_1 \\cos y - 2 r_2 \\cos^2 y \\cos \\delta + r_2 \\cos \\delta + 2 r_2 \\sin y \\cos y \\sin \\delta \\geq 0$.\n$1 + r_2 \\cos \\delta - r_1 \\cos y - 2 r_2 \\cos \\delta \\cos^2 y + r_2 \\sin \\delta \\sin 2y \\geq 0$.\n\nIf $f(x) \\geq 0$, then the average value is non-negative.\nConsider $1 - a \\cos x - b \\sin x \\geq 0$. Square both sides? No.\nConsider $1 - A \\cos 2x - B \\sin 2x \\geq |a \\cos x + b \\sin x|$.\n$1 - \\sqrt{A^2+B^2} \\leq 1 - A \\cos 2x - B \\sin 2x \\leq 1 + \\sqrt{A^2+B^2}$.\n$|a \\cos x + b \\sin x| \\leq \\sqrt{a^2+b^2}$.\nSo $1 - \\sqrt{A^2+B^2} \\geq \\sqrt{a^2+b^2}$. This is not necessarily true.\n\nConsider $f(x) \\geq 0$.\n$\\int_0^{2\\pi} f(x) d x = 2\\pi$.\n$\\int_0^{2\\pi} f(x) \\cos x d x = -a\\pi$.\n$\\int_0^{2\\pi} f(x) \\sin x d x = -b\\pi$.\n$\\int_0^{2\\pi} f(x) \\cos 2x d x = -A\\pi$.\n$\\int_0^{2\\pi} f(x) \\sin 2x d x = -B\\pi$.\nSince $f(x) \\geq 0$, consider the Fejer-Riesz theorem. 
A non-negative trigonometric polynomial can be written as the squared modulus of another trigonometric polynomial.\n$1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x = |c_0 + c_1 e^{ix} + c_2 e^{2ix}|^2$.\n$= (c_0 + c_1 e^{ix} + c_2 e^{2ix})(\\bar{c_0} + \\bar{c_1} e^{-ix} + \\bar{c_2} e^{-2ix})$.\n$= |c_0|^2 + |c_1|^2 + |c_2|^2 + (c_0 \\bar{c_1} + c_1 \\bar{c_2}) e^{ix} + (\\bar{c_0} c_1 + \\bar{c_1} c_2) e^{-ix} + c_0 \\bar{c_2} e^{2ix} + \\bar{c_0} c_2 e^{-2ix}$.\n$= |c_0|^2 + |c_1|^2 + |c_2|^2 + 2 \\text{Re}(c_0 \\bar{c_1} + c_1 \\bar{c_2}) \\cos x - 2 \\text{Im}(c_0 \\bar{c_1} + c_1 \\bar{c_2}) \\sin x + 2 \\text{Re}(c_0 \\bar{c_2}) \\cos 2x - 2 \\text{Im}(c_0 \\bar{c_2}) \\sin 2x$.\nCompare coefficients:\n$1 = |c_0|^2 + |c_1|^2 + |c_2|^2$.\n$-a = 2 \\text{Re}(c_0 \\bar{c_1} + c_1 \\bar{c_2})$.\n$-b = -2 \\text{Im}(c_0 \\bar{c_1} + c_1 \\bar{c_2})$.\n$-A = 2 \\text{Re}(c_0 \\bar{c_2})$.\n$-B = -2 \\text{Im}(c_0 \\bar{c_2})$.\n$A^2+B^2 = 4 (\\text{Re}(c_0 \\bar{c_2})^2 + \\text{Im}(c_0 \\bar{c_2})^2) = 4 |c_0 \\bar{c_2}|^2 = 4 |c_0|^2 |c_2|^2$.\nLet $c_0 = 1$.\n$1 = 1 + |c_1|^2 + |c_2|^2 \\implies |c_1|^2 + |c_2|^2 = 0 \\implies c_1 = c_2 = 0$. Then $f(x) = 1$, $a=b=A=B=0$.\n",
'attempt': 'Let $P(x) = 1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x$.\nSince $P(x) \\geq 0$ for all real $x$, by Fejer-Riesz theorem, there exists a polynomial $Q(z) = c_0 + c_1 z + c_2 z^2$ such that $P(x) = |Q(e^{ix})|^2$.\nWe have $|Q(e^{ix})|^2 = (c_0 + c_1 e^{ix} + c_2 e^{2ix})(\\bar{c_0} + \\bar{c_1} e^{-ix} + \\bar{c_2} e^{-2ix})$.\nExpanding this, we get\n\\begin{align*} |Q(e^{ix})|^2 &= |c_0|^2 + |c_1|^2 + |c_2|^2 \\\\ &+ (c_0 \\bar{c_1} + c_1 \\bar{c_2}) e^{ix} + (\\bar{c_0} c_1 + \\bar{c_1} c_2) e^{-ix} \\\\ &+ c_0 \\bar{c_2} e^{2ix} + \\bar{c_0} c_2 e^{-2ix} \\\\ &= |c_0|^2 + |c_1|^2 + |c_2|^2 \\\\ &+ (c_0 \\bar{c_1} + \\bar{\\bar{c_1}} c_2 + \\overline{c_0 \\bar{c_1} + \\bar{\\bar{c_1}} c_2}) \\cos x + i (c_0 \\bar{c_1} + \\bar{\\bar{c_1}} c_2 - \\overline{c_0 \\bar{c_1} + \\bar{\\bar{c_1}} c_2}) \\sin x \\\\ &+ (c_0 \\bar{c_2} + \\bar{c_0} c_2) \\cos 2x + i (c_0 \\bar{c_2} - \\bar{c_0} c_2) \\sin 2x \\\\ &= |c_0|^2 + |c_1|^2 + |c_2|^2 + 2 \\text{Re}(c_0 \\bar{c_1} + c_1 \\bar{c_2}) \\cos x - 2 \\text{Im}(c_0 \\bar{c_1} + c_1 \\bar{c_2}) \\sin x + 2 \\text{Re}(c_0 \\bar{c_2}) \\cos 2x - 2 \\text{Im}(c_0 \\bar{c_2}) \\sin 2x\\end{align*}\nComparing the coefficients of $P(x)$ with $|Q(e^{ix})|^2$, we have:\n$1 = |c_0|^2 + |c_1|^2 + |c_2|^2$\n$-a = 2 \\text{Re}(c_0 \\bar{c_1} + c_1 \\bar{c_2})$\n$-b = -2 \\text{Im}(c_0 \\bar{c_1} + c_1 \\bar{c_2})$\n$-A = 2 \\text{Re}(c_0 \\bar{c_2})$\n$-B = -2 \\text{Im}(c_0 \\bar{c_2})$\nFrom the last two equations, $A^2 + B^2 = 4 (\\text{Re}(c_0 \\bar{c_2})^2 + \\text{Im}(c_0 \\bar{c_2})^2) = 4 |c_0 \\bar{c_2}|^2 = 4 |c_0|^2 |c_2|^2$.\nSince $1 = |c_0|^2 + |c_1|^2 + |c_2|^2$, we have $|c_0|^2 \\leq 1$ and $|c_2|^2 \\leq 1$.\nThus $A^2 + B^2 = 4 |c_0|^2 |c_2|^2 \\leq 4 \\cdot 1 \\cdot 1 = 4$. This is not the desired result.\n\nConsider $Q(z) = 1 - \\alpha z - \\beta z^2$. 
Then $|Q(e^{ix})|^2 = |1 - \\alpha e^{ix} - \\beta e^{2ix}|^2$.\n$|1 - \\alpha e^{ix} - \\beta e^{2ix}|^2 = (1 - \\alpha e^{ix} - \\beta e^{2ix})(1 - \\bar{\\alpha} e^{-ix} - \\bar{\\beta} e^{-2ix})$\n$= 1 - \\bar{\\alpha} e^{-ix} - \\bar{\\beta} e^{-2ix} - \\alpha e^{ix} + |\\alpha|^2 - \\alpha \\bar{\\beta} e^{-ix} - \\beta e^{2ix} - \\beta \\bar{\\alpha} e^{ix} + |\\beta|^2$\n$= 1 + |\\alpha|^2 + |\\beta|^2 - (\\alpha + \\alpha \\bar{\\beta} + \\bar{\\alpha}) \\cos x + i (\\alpha + \\alpha \\bar{\\beta} - \\bar{\\alpha}) \\sin x - (\\beta + \\beta \\bar{\\alpha} + \\bar{\\beta}) \\cos 2x + i (\\beta + \\beta \\bar{\\alpha} - \\bar{\\beta}) \\sin 2x$.\nCompare with $1 - a \\cos x - b \\sin x - A \\cos 2x - B \\sin 2x$.\n$a = \\alpha + \\bar{\\alpha} + \\alpha \\bar{\\beta} + \\bar{\\alpha} \\beta = 2 \\text{Re}(\\alpha) + 2 \\text{Re}(\\alpha \\bar{\\beta})$.\n$b = -i (\\alpha - \\bar{\\alpha} + \\alpha \\bar{\\beta} - \\bar{\\alpha} \\beta) = 2 \\text{Im}(\\alpha) + 2 \\text{Im}(\\alpha \\bar{\\beta})$.\n$A = \\beta + \\bar{\\beta} = 2 \\text{Re}(\\beta)$.\n$B = -i (\\beta - \\bar{\\beta}) = 2 \\text{Im}(\\beta)$.\n$A^2+B^2 = 4 (\\text{Re}(\\beta)^2 + \\text{Im}(\\beta)^2) = 4 |\\beta|^2$.\n$a^2+b^2 = |2\\alpha + 2\\alpha \\bar{\\beta}|^2 = 4 |\\alpha|^2 |1+\\bar{\\beta}|^2$.\n$1 = 1 + |\\alpha|^2 + |\\beta|^2$. So $|\\alpha|^2 + |\\beta|^2 = 0$, so $\\alpha = \\beta = 0$.\n\nFinal Answer: The final answer is $\\boxed{a^2 + b^2 \\leq 2, A^2 + B^2 \\leq 1}$'
}
```
### Data Fields
The data fields are:
- `solution`: The ground truth solution for the question
- `question`: The question
- `cot_type`: One high-level category the question falls into
- `source_type`: The dataset and its category that the question stems from
- `metadata`: Any other fields in the original dataset
- `cot`: None
- `gemini_thinking_trajectory`: The reasoning trace generated by Gemini Flash Thinking Experimental
- `gemini_attempt`: The response generated by Gemini
- `deepseek_thinking_trajectory`: The reasoning trace generated by DeepSeek r1
- `deepseek_attempt`: The response generated by DeepSeek r1
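As a sketch of how these fields might be consumed, the two model attempts can be paired for side-by-side review. The field names below come from the list above; the helper itself is illustrative and not part of this dataset's tooling:

```python
def to_comparison_record(example: dict) -> dict:
    """Pair the two model attempts from one example for side-by-side review."""
    return {
        "question": example["question"],
        "reference": example["solution"],
        "gemini": example["gemini_attempt"],
        "deepseek": example["deepseek_attempt"],
    }
```

Mapping each row through this helper yields a compact record per question, which is convenient for diffing the Gemini and DeepSeek responses against the ground-truth solution.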
## Citation Information
The r1 traces in this dataset were generated by the amazing [Ryan Marten](https://huggingface.co/ryanmarten).
The questions and other data stem from s1K introduced in the s1 paper:
```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
title={s1: Simple test-time scaling},
author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
year={2025},
eprint={2501.19393},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.19393},
}
``` |
bigcode/bigcodebench-hard | bigcode | 2025-02-23T16:42:46Z | 18,422 | 3 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-09-14T14:50:33Z | null | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: complete_prompt
dtype: string
- name: instruct_prompt
dtype: string
- name: canonical_solution
dtype: string
- name: code_prompt
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: doc_struct
dtype: string
- name: libs
dtype: string
- name: q_idx
dtype: int64
- name: question
dtype: string
- name: score
dtype: float64
- name: _id
dtype: string
splits:
- name: v0.1.0_hf
num_bytes: 1271624
num_examples: 148
- name: v0.1.1
num_bytes: 1271607
num_examples: 148
- name: v0.1.2
num_bytes: 1271812
num_examples: 148
- name: v0.1.3
num_bytes: 1271812
num_examples: 148
- name: v0.1.4
num_bytes: 1272012
num_examples: 148
download_size: 2758366
dataset_size: 6358867
configs:
- config_name: default
data_files:
- split: v0.1.0_hf
path: data/v0.1.0_hf-*
- split: v0.1.1
path: data/v0.1.1-*
- split: v0.1.2
path: data/v0.1.2-*
- split: v0.1.3
path: data/v0.1.3-*
- split: v0.1.4
path: data/v0.1.4-*
---
|
cfilt/IITB-IndicMonoDoc | cfilt | 2025-02-18T09:44:38Z | 39,690 | 4 | [
"task_categories:text-generation",
"language:hi",
"language:mr",
"language:gu",
"language:sa",
"language:ta",
"language:te",
"language:ml",
"language:ne",
"language:as",
"language:bn",
"language:ks",
"language:or",
"language:pa",
"language:ur",
"language:sd",
"language:kn",
"license:cc-by-4.0",
"size_categories:10B<n<100B",
"region:us",
"language-modeling",
"llm",
"clm"
] | [
"text-generation"
] | 2024-03-20T13:40:03Z | null | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- mr
- gu
- sa
- ta
- te
- ml
- ne
- as
- bn
- ks
- or
- pa
- ur
- sd
- kn
size_categories:
- 10B<n<100B
tags:
- language-modeling
- llm
- clm
viewer: false
---
IITB Document-level Monolingual Corpora for Indian languages.
22 scheduled languages of India + English
(1) Assamese, (2) Bengali, (3) Gujarati, (4) Hindi, (5) Kannada, (6) Kashmiri, (7) Konkani, (8) Malayalam, (9) Manipuri, (10) Marathi, (11) Nepali, (12) Oriya, (13) Punjabi, (14) Sanskrit, (15) Sindhi, (16) Tamil, (17) Telugu, (18) Urdu (19) Bodo, (20) Santhali, (21) Maithili and (22) Dogri.
| Language | Total (#Mil Tokens) |
|:---------:|:--------------------:|
| bn | 5258.47 |
| en | 11986.53 |
| gu | 887.18 |
| hi | 11268.33 |
| kn | 567.16 |
| ml | 845.32 |
| mr | 1066.76 |
| ne | 1542.39 |
| pa | 449.61 |
| ta | 2171.92 |
| te | 767.18 |
| ur | 2391.79 |
| as | 57.64 |
| brx | 2.25 |
| doi | 0.37 |
| gom | 2.91 |
| kas | 1.27 |
| mai | 1.51 |
| mni | 0.99 |
| or | 81.96 |
| sa | 80.09 |
| sat | 3.05 |
| sd | 83.81 |
| Total= | 39518.51 |
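As a quick sanity check, the per-language counts in the table above sum to the stated total to within rounding of the individual entries (the figures below are copied verbatim from the table):

```python
# Per-language token counts in millions, copied from the table above.
tokens_mil = {
    "bn": 5258.47, "en": 11986.53, "gu": 887.18, "hi": 11268.33,
    "kn": 567.16, "ml": 845.32, "mr": 1066.76, "ne": 1542.39,
    "pa": 449.61, "ta": 2171.92, "te": 767.18, "ur": 2391.79,
    "as": 57.64, "brx": 2.25, "doi": 0.37, "gom": 2.91,
    "kas": 1.27, "mai": 1.51, "mni": 0.99, "or": 81.96,
    "sa": 80.09, "sat": 3.05, "sd": 83.81,
}
total = round(sum(tokens_mil.values()), 2)  # ~39518.5 million tokens
```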
To cite this dataset:
```
@inproceedings{doshi-etal-2024-pretraining,
title = "Pretraining Language Models Using Translationese",
author = "Doshi, Meet and
Dabre, Raj and
Bhattacharyya, Pushpak",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.334/",
doi = "10.18653/v1/2024.emnlp-main.334",
pages = "5843--5862",
}
``` |
hssd/hssd-hab | hssd | 2025-02-14T02:19:58Z | 21,025 | 37 | [
"language:en",
"license:cc-by-nc-4.0",
"region:us",
"3D scenes",
"Embodied AI"
] | [] | 2023-06-04T18:59:50Z | null | ---
language:
- en
pretty_name: HSSD
tags:
- 3D scenes
- Embodied AI
license: cc-by-nc-4.0
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: "You agree to use this dataset under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/) terms"
viewer: false
---
HSSD: Habitat Synthetic Scenes Dataset
==================================
The [Habitat Synthetic Scenes Dataset (HSSD)](https://3dlg-hcvc.github.io/hssd/) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets.
Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects.
<img src="https://i.imgur.com/XEkLxNs.png" width=50%>
This repository provides a Habitat consumption-ready compressed version of HSSD.
See [this repository](https://huggingface.co/datasets/hssd/hssd-models) for corresponding uncompressed assets.
## Dataset Structure
```
├── objects
│ ├── */*.glb
│ ├── */*.collider.glb
│ ├── */*.filteredSupportSurface(.ply|.glb)
│ ├── */*.object_config.json
├── stages
│ ├── *.glb
│ ├── *.stage_config.json
├── scenes
│ ├── *.scene_instance.json
├── scenes_uncluttered
│ ├── *.scene_instance.json
├── scenes_articulated
│ ├── *.scene_instance.json
├── scene_filter_files
│ ├── *.rec_filter.json
├── metadata
│ ├── *.csv
│ ├── *.json
├── semantics
│ ├── hssd-hab_semantic_lexicon.json
│ ├── scenes
| ├── *.semantic_config.json
├── urdf
│ ├── <model_name>
| ├── *.glb
| ├── *.urdf
| ├── *.ao_config.json
└── hssd-hab.scene_dataset_config.json
└── hssd-hab-uncluttered.scene_dataset_config.json
└── hssd-hab-articulated.scene_dataset_config.json
```
- `hssd-hab.scene_dataset_config.json`: This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset.
- `objects`: 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets.
- `stages`: A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.).
- `scenes`: A scene is a single 3D world composed of a static stage and a variable number of objects.
- `scene_filter_files`: These .rec_filter.json files contain mappings of Receptacle instance unique_names to active or filtered sets based on their locations and accessibility within the scene. They also contain a "within_set" defining Receptacles which can only be accessed when the parent Furniture object's "default_link" is in the "open" state.
- `metadata`: The metadata directory contains several csv and json files which provide semantic mappings for objects in the dataset as well as rational mappings from regions to the types of clutter objects typically found in them to support procedural generation.
- `semantics`: Primarily defines instance semantics for the scenes. *.semantic_config.json files contain the region annotations for each scene.
- `urdf`: The urdf directory contains the articulated furniture assets, each contained in its own sub-directory named after the source asset. The .urdf files define the articulation properties. Each .glb file is either a render asset or Receptacle mesh connected to a rigid link. The .ao_config.json file contains habitat-specific metadata such as markersets and Receptacle definitions.
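Based only on the directory layout above, scenes in a local checkout can be enumerated from their `*.scene_instance.json` files. A minimal sketch; the checkout path is whatever directory you cloned this repo to:

```python
from pathlib import Path

def list_scene_ids(root: Path) -> list[str]:
    """Return scene ids from scenes/*.scene_instance.json in an hssd-hab checkout."""
    suffix = ".scene_instance.json"
    return sorted(p.name[: -len(suffix)] for p in root.glob(f"scenes/*{suffix}"))
```

On a full checkout this would include ids such as `102344280`, which the viewer example in the Getting Started section loads by name.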
### Rearrange-ready assets:
Supporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See [aihabitat.org/habitat3/](https://aihabitat.org/habitat3/) for more details.
- `hssd-hab-uncluttered.scene_dataset_config.json`: This SceneDataset config file adds the adjusted and uncluttered scenes for rearrangement tasks.
- `scenes_uncluttered`: Contains the adjusted scene instance configuration files.
- `scene_filter_files`: A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simulation heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets.
### Articulated scenes and assets:
Introduced in `v0.3.0`, the `hssd-hab-articulated.scene_dataset_config.json` SceneDataset provides 202 fully articulated HSSD scenes ready for use within the AI Habitat simulation ecosystem. Note that only 161 are publicly available on this repo. The remainder and their unique assets are reserved as an internal test set.
To enable more realistic indoor object manipulation, articulated 3D furniture models such as drawers, cabinets, and appliances were added to replace rigid assets. These models were converted from rigid source assets in HSSD and swapped into the scenes.
Furniture is annotated with a set of Receptacles (surfaces which support small object placement such as shelves and drawers) and can be opened and closed by the agents. Receptacles are further filtered contextually in each scene to ensure that the active set is accessible to the agents.
Additional annotations include point or marker sets for each furniture, region annotations, and semantic classification of objects.
## Getting Started
To load HSSD scenes into the Habitat simulator, you can start by installing [habitat-sim](https://github.com/facebookresearch/habitat-sim) using instructions specified [here](https://github.com/facebookresearch/habitat-sim#installation).
Once installed, you can run the interactive Habitat viewer to load a scene:
```
habitat-viewer --dataset /path/to/hssd-hab/hssd-hab.scene_dataset_config.json -- 102344280
# or ./build/viewer if compiling from source
```
You can find more information about using the interactive viewer [here](https://github.com/facebookresearch/habitat-sim#testing:~:text=path/to/data/-,Interactive%20testing,-%3A%20Use%20the%20interactive).
Habitat-Sim is typically used with [Habitat-Lab](https://github.com/facebookresearch/habitat-lab), a modular high-level library for end-to-end experiments in embodied AI.
To define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided [here](https://github.com/facebookresearch/habitat-lab#installation).
## Changelog
- `v0.3.0`: **Articulated Scenes and PARTNR support**
- This major version update adds a large set of changes to support the introduction of 202 articulated HSSD scenes and the [PARTNR benchmark](https://github.com/facebookresearch/partnr-planner).
- Includes improvements to stage texture/geometry and object collision shapes and receptacles.
- Adds:
- 2000+ articulated assets in the urdf/ directory representing and replacing rigid furniture objects. Annotated with Receptacles and semantics.
- 202 new articulated scenes with rigid objects replaced by AOs. These are uncluttered and often significantly altered from originals to accommodate the new assets.
- Note that test scenes and assets are removed before migration to this repo.
- Receptacle filter files for new scenes annotating accessible Receptacles and "within" Receptacles (those which require opening an articulated link for access).
- Note that only one link per AO is configured with an active Receptacle. This is based on logic in PARTNR and habitat-lab (default_link).
- Region volume semantic annotations to all scenes
- Semantic lexicon file with updated classes
- Metadata files mapping object semantics and common-sense object->region sets for PARTNR
- `v0.2.5`: **Rearrange-ready HSSD**
- Note: this is a checkpoint. Known issues exist and continued polish is ongoing.
- Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces).
- Adds collider meshes (.collider.glb) for assets with Receptacle meshes to support simulation.
- Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks.
- Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation.
- `v0.2.4`:
- Recompresses several object GLBs to preserve PBR material status.
- Adds CSV with object metadata and semantic lexicon files for Habitat.
- Adds train/val scene splits file.
- `v0.2.3`: First release.
|
mlabonne/s1K-formatted | mlabonne | 2025-02-07T13:41:14Z | 52 | 2 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-07T13:33:14Z | 2 | ---
dataset_info:
features:
- name: source
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 13494843
num_examples: 1000
download_size: 5176917
dataset_size: 13494843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a reformatted version of [simplescaling/s1K](https://huggingface.co/datasets/simplescaling/s1K) with an HF/OAI format.
I created the "messages" column and added special tokens for CoT: `<|begin_of_thought|>`, `<|end_of_thought|>`, `<|begin_of_solution|>`, `<|end_of_solution|>`. |
MLRS/korpus_malti | MLRS | 2025-01-06T09:40:10Z | 2,930 | 4 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:mt",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"modality:text",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-05-11T12:47:44Z | 1 | ---
pretty_name: Korpus Malti
configs:
- config_name: shuffled
data_files:
- split: train
path: data/shuffled/train*.jsonl
- split: validation
path: data/shuffled/validation*.jsonl
- split: test
path: data/shuffled/test*.jsonl
features:
- name: text
dtype: string
default: true
- config_name: belles_lettres
data_files: data/belles_lettres/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: author
dtype: string
- name: title
dtype: string
- name: publisher
dtype: string
- name: published
dtype: string
- name: copyright
dtype: string
- name: translator
dtype: string
- name: date
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: blogs
data_files: data/blogs/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: source
dtype: string
- name: date
dtype: string
- config_name: comics
data_files: data/comics/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: url
dtype: string
- config_name: court
data_files: data/court/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- name: note
dtype: string
- config_name: eu_docs
data_files: data/eu_docs/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: publisher
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- name: note
dtype: string
- config_name: gov_docs
data_files: data/gov_docs/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: publisher
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- name: note
dtype: string
- config_name: government_gazzette
data_files: data/government_gazzette/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- config_name: law_eu
data_files: data/law_eu/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- name: notes
dtype: string
- config_name: law_mt
data_files: data/law_mt/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- config_name: legal
data_files: data/legal/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: source
dtype: string
- config_name: nonfiction
data_files: data/nonfiction/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: title
dtype: string
- name: publisher
dtype: string
- name: editor
dtype: string
- name: bookref
dtype: string
- name: date
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: parliament
data_files: data/parliament/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: date
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: filename
dtype: string
- name: url
dtype: string
- name: note
dtype: string
- config_name: press_eu
data_files: data/press_eu/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: press_mt
data_files: data/press_mt/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: title
dtype: string
- name: subtitle
dtype: string
- name: date
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: speeches
data_files: data/speeches/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: theses
data_files: data/theses/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: source
dtype: string
- config_name: umlib_oar
data_files: data/umlib_oar/*.jsonl
features:
- name: text
list: string
- name: categories
list: string
- name: title
dtype: string
- name: publishers
list: string
- name: filenames
list: string
- name: num_files
dtype: int64
- name: date
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: web_general
data_files: data/web_general/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: year
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- config_name: wiki
data_files: data/wiki/*.jsonl
features:
- name: text
list: string
- name: category
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: id
dtype: string
language:
- mt
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
language_creators:
- found
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
license:
- cc-by-nc-sa-4.0
---
# Korpus Malti 🇲🇹
General Corpora for the Maltese Language.
This dataset is composed of texts from various genres/domains written in Maltese.
## Versions
This dataset is updated from time to time, and the latest version is obtained unless otherwise specified.
Consult the [changelog](CHANGELOG.md) for a detailed overview of each version released.
If you want to fetch a particular version, use the [`revision` argument](https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset.revision).
For example, to get the data used to train [BERTu](https://huggingface.co/MLRS/BERTu), use the `4.0.0` tag:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti", revision="4.0.0")
```
## Configurations
### Shuffled data
The default configuration (`"shuffled"`) yields the entire corpus from all genres:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti")
```
All sentences are combined and shuffled, without preserving the original sentence order.
No other annotations are present, so an instance would be of the following form:
```json
{
"text": "Din hija sentenza."
}
```
### Domain-split data
All other configurations contain a subset of the data.
The available data subsets are:
- `belles_lettres`: Literary texts, usually published and included in the corpus by permission of the copyright holder. Unfortunately these cannot be disseminated in their integral form.
- `blogs`: Online blog articles from specific blogs, identified in advance and known to contain text written (or human-translated into) Maltese.
- `comics`: A small set of online information about comic books in Maltese.
- `court`: Publicly available proceedings from the courts of Malta.
- `eu_docs`: Miscellaneous policy documents from the European Union institutions.
- `gov_docs`: Miscellaneous policy documents from the Government of Malta.
- `government_gazzette`: The official, publicly available gazette of the Government of Malta. The gazette is bilingual; only the Maltese text is included.
- `law_eu`: Miscellaneous EU laws in their official Maltese translation, obtained via the Eur-Lex repository and including the segments of the Acquis Communautaire available in the DGT translation memory.
- `law_mt`: Maltese laws.
- `legal`: Miscellaneous legal text.
- `nonfiction`: Miscellaneous nonfiction, published or unpublished. Published texts are included with the permission of the copyright holder, where relevant.
- `parliament`: The officially released transcripts of parliamentary debates of the Maltese parliament.
- `press_eu`: Press releases in Maltese by the European Council of Ministers, European Parliament and European Commission.
- `press_mt`: Articles in the Maltese press, sourced primarily from the online portals of Maltese newspapers.
- `speeches`: Miscellaneous speeches in Maltese (pre-written).
- `theses`: Academic dissertations written in Maltese.
- `umlib_oar`: Very broad variety of nonfiction texts which are publicly available in the University of Malta Open Access Repository. Included with help and permission from the University of Malta library.
- `web_general`: Miscellaneous text scraped from pre-identified web pages in Maltese.
- `wiki`: The Maltese Wikipedia dump (downloaded 26th May, 2020).
For instance, this loads the Wikipedia portion:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti", "wiki")
```
For these configurations the data is not shuffled, so the sentence order on a document level is preserved.
An instance from these configurations would take the following form:
```json
{
"text": ["Din hija sentenza.", "U hawn oħra!"],
...
}
```
The instances also contain additional metadata.
Their structure differs from one instance to another, depending on what's available from the source.
This information was typically scraped from the source itself, and minimal processing is performed on such data.
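For downstream use, the document-level sentence lists can be joined back into plain text. A minimal sketch using the example instance above; joining with a single space is an assumption, since the corpus does not prescribe a detokenization scheme:

```python
def flatten_document(example: dict) -> str:
    # The per-domain configs yield "text" as an ordered list of sentences.
    return " ".join(example["text"])

doc = flatten_document({"text": ["Din hija sentenza.", "U hawn oħra!"]})
```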
## Additional Information
### Dataset Curators
The dataset was created by [Albert Gatt](https://albertgatt.github.io), [Kurt Micallef](https://www.kurtmica.com), [Marc Tanti](https://www.um.edu.mt/profile/marctanti), [Lonneke van der Plas](https://sites.google.com/site/lonnekenlp/) and [Claudia Borg](https://www.um.edu.mt/profile/claudiaborg).
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
### Citation Information
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://aclanthology.org/2022.deeplo-1.10/).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese",
author = "Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia",
booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing",
month = jul,
year = "2022",
address = "Hybrid",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.deeplo-1.10",
doi = "10.18653/v1/2022.deeplo-1.10",
pages = "90--101",
}
```
|
amphora/QwQ-LongCoT-130K | amphora | 2024-12-22T15:51:30Z | 277 | 147 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2024-12-05T06:48:24Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: qwq
dtype: string
- name: source
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 969051509
num_examples: 133102
download_size: 420996585
dataset_size: 969051509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
<span style="color:red">Also have a look at the second version here =></span> [QwQ-LongCoT-2](https://huggingface.co/datasets/amphora/QwQ-LongCoT-130K-2)
<div style="text-align: left;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60d3e619b8448e1785bbda2a/ThfNc45SlzfGHOvxSOefF.png" width="200px" height="150px" title="kmmlu" alt="kmmlu" style="display: block; margin-left: 0;" />
<p><em>Figure 1: Just a cute picture generated with [Flux](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design)</em></p>
</div>
Today, I’m excited to release **QwQ-LongCoT-130K**, an SFT dataset designed for training O1-like large language models (LLMs). This dataset includes about 130k instances, each with responses generated using **[QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)**. The dataset is available under the **Apache 2.0 license**, so feel free to use it as you like.
### Dataset Construction
The challenging part of creating **QwQ-LongCoT-130K** was curating seed instructions that truly warrant longer chain-of-thought reasoning. Simply put, I didn’t want to generate lengthy responses—spanning thousands of tokens—for simple prompts like, *“What color is the sky?”* At the same time, I wanted them to be free of licensing issues. Accordingly, I collected seed instructions using the following two methods.
Initially, I sourced data from the **[NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)** dataset, which contains a collection of 860K math questions and their corresponding answers. This dataset is licensed under Apache 2.0. To add diversity and include categories beyond math, I used the **Magpie** approach to extract questions from the QwQ-32B-Preview model. A common approach with Magpie involves inputting a blank space, sometimes with a user token, and expecting the model to generate a user query. However, with QwQ-32B-Preview, we observed that this method often leads the model to refuse to respond, frequently replying with something like: *“I’d be able to assist better if you provided more details.”* This approach also gives little or no control over the generated instructions. So, in our experiments we use the following template:
```python
import random

subject = random.choice(["Physics", "Chemistry", "Biology", ...])
adjective = random.choice(["difficult", "tough", "long", "challenging", "tricky", ...])

prompt = (f"This is one {adjective} {subject} question. "
          "I'll first repeat the question word-by-word before I start to solve.")
```
After collecting the seed instructions, I used QwQ-32B-Preview to generate one response for each instruction. Once the generation was complete, I applied simple rule-based filtering to remove responses containing phrases like *"Sorry"* or *"As an AI model."* I also filtered out instances with excessive repetition of sentences and attempted to exclude those containing Chinese characters—though some may still remain. In any case, there is still room for further refinement.
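A minimal sketch of this kind of rule-based filter follows; the exact phrase list, repetition threshold, and CJK character check are assumptions for illustration, not the script actually used:

```python
import re
from collections import Counter

REFUSAL_MARKERS = ("Sorry", "As an AI model")  # assumed phrase list

def keep_response(text: str, max_sentence_repeats: int = 3) -> bool:
    """Return True if a generated response passes the rule-based filters."""
    if any(marker in text for marker in REFUSAL_MARKERS):
        return False
    # Drop responses containing CJK (e.g. Chinese) characters.
    if re.search(r"[\u4e00-\u9fff]", text):
        return False
    # Drop responses that repeat the same sentence too many times.
    sentences = [s.strip(" .!?") for s in re.split(r"[.!?]\s+", text) if s.strip(" .!?")]
    if sentences and Counter(sentences).most_common(1)[0][1] > max_sentence_repeats:
        return False
    return True
```

Applying a predicate like this over the raw generations gives the kind of coarse first-pass cleanup described above, while leaving room for the further refinement the text mentions.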
### Dataset Analysis
The dataset consists of 90k samples from NuminaMath and about 43k generated via Magpie. In my first effort with Magpie, I accidentally forgot to log the subjects used to generate each instruction, but in the figure below you can see the distributions of the ones I didn't forget (oops). I'm planning to add more Magpie data if I find some more computing resources.
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60d3e619b8448e1785bbda2a/rTOd3gfqaN3rYbMQ0wmcm.png" width="600px" height="450px" title="kmmlu" alt="kmmlu" style="display: block; margin: auto;" />
<p><em>Figure 2: Dataset distribution</em></p>
</div>
Below is a comparison of the length distribution of instances in the QwQ-LongCoT dataset, using the top_300k_longer_conversations subset from Magpie-Ultra as a baseline. For the readability of the plot, I excluded some outliers exceeding 20k characters from the QwQ-LongCoT dataset, although the longest sample had over 170k characters. From the plot, it is evident that QwQ-LongCoT generally contains longer instances.
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60d3e619b8448e1785bbda2a/h0pIZf4Uo04I0SFTiMG4X.png" width="600px" height="450px" title="kmmlu" alt="kmmlu" style="display: block; margin: auto;" />
<p><em>Figure 3: Length comparison</em></p>
</div>
### Lessons learned from training with **QwQ-LongCoT-130K**
Well, I initially tried training with the dataset in a simple SFT setting, only to find that it did not work well. My guess is that the thinking traces in QwQ-LongCoT deliberately generate wrong statements and then fix them, and this "intentionally generating wrong stuff" seems to be bad for the model, though I don't have any hard evidence. I probably need a different approach, such as masking away the wrong traces during SFT or using RL. The model is still too embarrassing to share; I'm trying some more training runs and hope to get a decent, shareable model soon.
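One way to realize the masking idea is standard label masking in SFT: masked target positions receive the conventional ignore index (-100 in most frameworks) so they contribute no loss. A minimal framework-free sketch, where the spans of wrong reasoning are assumed to be already identified:

```python
IGNORE_INDEX = -100  # conventional ignore index for cross-entropy loss

def mask_labels(token_ids, mask_spans):
    """Copy token_ids into labels, replacing masked spans with IGNORE_INDEX.

    mask_spans is a list of (start, end) half-open index ranges marking
    tokens (e.g. retracted wrong reasoning) that should not contribute loss.
    """
    labels = list(token_ids)
    for start, end in mask_spans:
        for i in range(start, min(end, len(labels))):
            labels[i] = IGNORE_INDEX
    return labels
```

Identifying which spans to mask is the hard part; the function above only applies a given mask.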
### ETC
Big thanks to the Qwen Team and Project-Numina.
If you're interested in exploring the dataset further or collaborating with me, please feel free to reach out at: [email protected]. |
bench-llm/or-bench | bench-llm | 2024-12-19T18:23:05Z | 1,202 | 11 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"llm",
"alignment",
"over-alignment"
] | [
"text-generation",
"question-answering"
] | 2024-05-09T06:27:13Z | 2 | ---
language:
- en
pretty_name: "OR-Bench"
viewer: true
tags:
- llm
- alignment
- over-alignment
license: "cc-by-4.0"
dataset_info:
- config_name: or-bench-80k
features:
- name: prompt
dtype: string
- name: category
dtype: string
- config_name: or-bench-hard-1k
features:
- name: prompt
dtype: string
- name: category
dtype: string
- config_name: or-bench-toxic
features:
- name: prompt
dtype: string
- name: category
dtype: string
configs:
- config_name: or-bench-80k
data_files:
- split: train
path: or-bench-80k.csv
- config_name: or-bench-hard-1k
data_files:
- split: train
path: or-bench-hard-1k.csv
- config_name: or-bench-toxic
data_files:
- split: train
path: or-bench-toxic.csv
task_categories:
- text-generation
- question-answering
---
# OR-Bench: An Over-Refusal Benchmark for Large Language Models
Please see our **demo** at [HuggingFace Spaces](https://huggingface.co/spaces/bench-llm/or-bench).
## Overall Plots of Model Performances
Below is the overall model performance. The X axis shows the rejection rate on OR-Bench-Hard-1K and the Y axis shows the rejection rate on OR-Bench-Toxic. The best-aligned model should sit in the top-left corner of the plot, where the model rejects the largest number of toxic prompts and the smallest number of safe prompts. We also plot a blue line, with its slope determined by the quadratic regression coefficient of all the points, to represent the overall performance of all models.
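As a sketch, a trend line over the per-model (hard-1k rejection, toxic rejection) points can be fit by ordinary least squares; the numbers in the example are made up for illustration and are not the benchmark's actual scores:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b over model score pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical (hard-1k rejection, toxic rejection) pairs:
slope, intercept = fit_line([0.1, 0.3, 0.5], [0.7, 0.8, 0.9])
```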
<img src="images/overall_x_y_plot.png" alt="Image 1" style="width: 100%;"/>
## Overall Workflow
Below is the overall workflow of our pipeline. We automate the process of producing seemingly toxic prompts, so the pipeline can constantly produce updated prompts.
<img src="images/overall_workflow.png" alt="Image 1" style="width: 100%;"/>
## Detailed Model Performance
Here are the radar plots of different model performances. The <span style="color: red;">red</span> area indicates the rejection rate of seemingly toxic prompts and the <span style="color: blue;">blue</span> area indicates the acceptance rate of toxic prompts. In both cases, a smaller plotted area is better.
### Claude-2.1
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Claude-2.1.png" alt="Image 3" style="width: 100%;"/>
<div>Claude-2.1</div>
</div>
</div>
### Claude-3 Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Claude-3-haiku.png" alt="Image 1" style="width: 100%;"/>
<div>Claude-3-Haiku</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Claude-3-sonnet.png" alt="Image 2" style="width: 100%;"/>
<div>Claude-3-Sonnet</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Claude-3-opus.png" alt="Image 3" style="width: 100%;"/>
<div>Claude-3-Opus</div>
</div>
</div>
### Gemini Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Gemma-7b.png" alt="Image 2" style="width: 100%;"/>
<div>Gemma-7b</div>
</div>
</div>
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Gemini-1.0-pro.png" alt="Image 1" style="width: 100%;"/>
<div>Gemini-1.0-pro</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Gemini-1.5-flash-latest.png" alt="Image 1" style="width: 100%;"/>
<div>Gemini-1.5-flash</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Gemini-1.5-pro-latest.png" alt="Image 1" style="width: 100%;"/>
<div>Gemini-1.5-pro</div>
</div>
</div>
### GPT-3.5-turbo Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-3.5-turbo-0301.png" alt="Image 1" style="width: 100%;"/>
<div>GPT-3.5-turbo-0301</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-3.5-turbo-0613.png" alt="Image 2" style="width: 100%;"/>
<div>GPT-3.5-turbo-0613</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-3.5-turbo-0125.png" alt="Image 3" style="width: 100%;"/>
<div>GPT-3.5-turbo-0125</div>
</div>
</div>
### GPT-4 Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-4-0125-preview.png" alt="Image 1" style="width: 100%;"/>
<div>GPT-4-0125-preview</div>
</div>
<!-- <div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-4-1106-preview.png" alt="Image 3" style="width: 100%;"/>
<div>GPT-4-1106-preview</div>
</div> -->
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-4o.png" alt="Image 3" style="width: 100%;"/>
<div>GPT-4o</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/GPT-4-turbo-2024-04-09.png" alt="Image 3" style="width: 100%;"/>
<div>GPT-4-turbo-2024-04-09</div>
</div>
</div>
### Llama-2 Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Llama-2-7b.png" alt="Image 1" style="width: 100%;"/>
<div>Llama-2-7b</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Llama-2-13b.png" alt="Image 2" style="width: 100%;"/>
<div>Llama-2-13b</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Llama-2-70b.png" alt="Image 3" style="width: 100%;"/>
<div>Llama-2-70b</div>
</div>
</div>
### Llama-3 Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Llama-3-8b.png" alt="Image 1" style="width: 100%;"/>
<div>Llama-3-8b</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Llama-3-70b.png" alt="Image 3" style="width: 100%;"/>
<div>Llama-3-70b</div>
</div>
</div>
### Mistral Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Mistral-small-latest.png" alt="Image 1" style="width: 100%;"/>
<div>Mistral-small-latest</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Mistral-medium-latest.png" alt="Image 2" style="width: 100%;"/>
<div>Mistral-medium-latest</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Mistral-large-latest.png" alt="Image 3" style="width: 100%;"/>
<div>Mistral-large-latest</div>
</div>
</div>
### QWen Model Family
<div style="display: flex; flex-direction: row; justify-content: flex-start;">
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Qwen-1.5-7B.png" alt="Image 1" style="width: 100%;"/>
<div>Qwen-1.5-7B</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Qwen-1.5-32B.png" alt="Image 2" style="width: 100%;"/>
<div>Qwen-1.5-32B</div>
</div>
<div style="flex: 0 0 31%; text-align: center;">
<img src="images/Qwen-1.5-72B.png" alt="Image 3" style="width: 100%;"/>
<div>Qwen-1.5-72B</div>
</div>
</div>
|
allenai/olmo-mix-1124 | allenai | 2024-12-02T15:57:43Z | 246,224 | 55 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10M<n<100M",
"modality:text",
"region:us"
] | [
"text-generation"
] | 2024-11-24T04:37:18Z | null | ---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: OLMo 2 Mix (November 2024)
size_categories:
- 1B<n<10B
configs:
- config_name: default
data_files:
- split: train
path: data/*/*
- config_name: algebraic-stack
data_files:
- split: train
path: data/algebraic-stack/*
- config_name: arxiv
data_files:
- split: train
path: data/arxiv/*
- config_name: dclm
data_files:
- split: train
path: data/dclm/*
- config_name: open-web-math
data_files:
- split: train
path: data/open-web-math/*
- config_name: pes2o
data_files:
- split: train
path: data/pes2o/*
- config_name: starcoder
data_files:
- split: train
path: data/starcoder/*
- config_name: wiki
data_files:
- split: train
path: data/wiki/*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: added
dtype: string
- name: created
dtype: string
---
# OLMo 2 (November 2024) Pretraining set
Collection of data used to train OLMo-2-1124 models. The majority of this dataset comes from DCLM-Baseline with no additional filtering, but we provide the explicit breakdowns below.
| Name | Tokens | Bytes (uncompressed) | Documents | License |
|-----------------|--------|----------------------|-----------|-----------|
| DCLM-Baseline | 3.70T | 21.3TB | 2.95B | CC-BY-4.0 |
| Arxiv | 20.8B | 77.2GB | 3.95M | ODC-BY |
| pes2o | 58.6B | 412GB | 38M | ODC-BY |
| starcoder | 83.0B | 458GB | 78.7M | ODC-BY |
| Algebraic-stack | 11.8B | 44.0GB | 2.83M | ODC-BY |
| OpenWebMath | 12.2B | 47.23GB | 2.89M | ODC-BY |
| Wiki | 3.66B | 18.1GB | 6.17M | ODC-BY |
| Total | 3.90T | 22.4TB | 3.08B | ODC-BY |
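As a quick sanity check on the table above, the per-source token counts can be summed and turned into mix shares (values transcribed from the table, expressed in billions of tokens):

```python
# Token counts from the table, in billions (3.70T = 3700B, etc.).
tokens_b = {
    "dclm-baseline": 3700.0,
    "arxiv": 20.8,
    "pes2o": 58.6,
    "starcoder": 83.0,
    "algebraic-stack": 11.8,
    "open-web-math": 12.2,
    "wiki": 3.66,
}

total_b = sum(tokens_b.values())  # ~3890B, i.e. ~3.90T, matching the Total row
shares = {name: count / total_b for name, count in tokens_b.items()}
```

This confirms the text above: DCLM-Baseline accounts for roughly 95% of the tokens in the mix.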
Please refer to the OLMo2 Tech Report for further details.
## Licensing Information
This **collection** is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
## Citation
A technical manuscript is forthcoming! |
argilla/magpie-ultra-v0.1 | argilla | 2024-11-26T10:46:32Z | 371 | 221 | [
"task_categories:text-generation",
"language:en",
"license:llama3.1",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"arxiv:2406.08464",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [
"text-generation"
] | 2024-07-24T11:51:04Z | null | ---
language:
- en
license: llama3.1
size_categories: n<1K
task_categories:
- text-generation
pretty_name: Magpie Ultra v0.1
dataset_info:
features:
- name: model_name_response_base
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: response_base
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: difficulty
dtype: string
- name: model_name_difficulty
dtype: string
- name: explanation
dtype: string
- name: quality
dtype: string
- name: model_name_quality
dtype: string
- name: primary_tag
dtype: string
- name: other_tags
sequence: string
- name: model_name_classification
dtype: string
- name: embedding
sequence: float64
- name: model_name_embeddings
dtype: string
- name: score
dtype: float64
- name: score_base
dtype: float64
- name: distilabel_metadata
struct:
- name: raw_output_assign_tags_0
dtype: string
- name: nn_indices
sequence: int64
- name: nn_scores
sequence: float64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: guard
dtype: string
- name: model_name_guard
dtype: string
- name: safe
dtype: bool
- name: hazard_category
dtype: string
- name: score_difference
dtype: float64
splits:
- name: train
num_bytes: 837917458
num_examples: 50000
download_size: 527647487
dataset_size: 837917458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for magpie-ultra-v0.1
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/eqnzKccfYgnmn6z1CtKbQ.webp" width="75%">
</div>
## 📰 News
* **[26/11/2024]** 🆕 New version of the dataset is out! [magpie-ultra-v1.0](https://huggingface.co/datasets/argilla/magpie-ultra-v1.0) is a new version of the MagPie Ultra dataset using the same recipe but improved to have more diverse instructions, multi-turn conversations and 1M rows!
* **[08/02/2024]** Release of the first unfiltered version of the dataset containing 50K instruction-response pairs that can be used for SFT or DPO.
## Dataset Summary
`magpie-ultra` is a synthetically generated dataset for supervised fine-tuning using the new Llama 3.1 405B-Instruct model, together with other Llama models like `Llama-Guard-3-8B` and `Meta-Llama-3.1-8B-Instruct`.
The dataset contains challenging instructions and responses for a wide variety of tasks, such as coding & debugging, math, data analysis, creative writing, advice seeking, or brainstorming.
Explore the dataset in [Argilla](https://argilla-argilla-template-space.hf.space/dataset/4dee6fab-7e40-46c3-8d78-6bf536dc378a/annotation-mode).
## Magpie Pipeline
As the name of the dataset indicates, we used the Magpie recipe to generate the instruction-response pairs:
* Paper: [Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing](https://huggingface.co/papers/2406.08464)
* Magpie HF Org: [Magpie-Align](https://huggingface.co/Magpie-Align)
The main difference with respect to the original Magpie release is that we used the new Llama 3.1 family of models, and that we generated substantially fewer instruction-response pairs for this first iteration: 50K vs 1M rows. The Magpie pipeline can be summarised as follows:
1. Using [meta-llama/Meta-Llama-3.1-405B-Instruct-FP8](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8), we generate an instruction as described in the Magpie paper: we send the pre-query template to the model `<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n` and, thanks to the autoregressive capabilities of the LLM and it having been fine-tuned on an SFT dataset, it will generate a user instruction until it emits the `<|eot_id|>` token. After that, we send the generated instruction to the LLM to get a response.
2. Using the base model [meta-llama/Meta-Llama-3.1-405B-FP8](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-FP8), we generate another response for the generated instruction. Later, we assign a score to the responses given by the instruct and base models with [RLHFlow/ArmoRM-Llama3-8B-v0.1](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1). If the instruct model's score minus the base model's score is positive, we can consider the response generated by the instruct model to be of higher quality.
3. Using [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), we assess the quality and the difficulty of the generated instructions, and we classify them into one or more of the aforementioned categories: Information seeking, Reasoning, Planning, Editing, Coding & Debugging, Math, Data analysis, Creative writing, Advice seeking, Brainstorming or Others. To ensure that the model's outputs were valid JSON that we could easily parse, we used the [structured output generation feature of distilabel](https://distilabel.argilla.io/latest/sections/how_to_guides/advanced/structured_generation/#structured-data-generation).
4. Using [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/Llama-Guard-3-8B), we classified the generated instruction-response pairs as "safe" or "unsafe", also providing the hazard category from the [MLCommons AI Safety](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) taxonomy.
5. Finally, using [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) and [Faiss](https://github.com/facebookresearch/faiss), we generated embeddings for all the instructions and computed each one's nearest neighbours to ensure instruction diversity in the final dataset.
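The score-difference check in step 2 can be sketched as follows; this is a simplification of the pipeline, with field names chosen to mirror the dataset columns listed above:

```python
def prefer_instruct(score_instruct: float, score_base: float) -> bool:
    """True when the instruct model's reward-model score beats the base model's."""
    return (score_instruct - score_base) > 0

# Example: keep rows where the instruct response scored higher.
rows = [
    {"score": 0.82, "score_base": 0.41},  # instruct wins -> kept
    {"score": 0.30, "score_base": 0.55},  # base wins -> dropped
]
kept = [r for r in rows if prefer_instruct(r["score"], r["score_base"])]
```

The dataset exposes this difference directly in the `score_difference` column, so consumers can apply their own threshold instead of the simple sign test shown here.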
The dataset was generated using a single 8xH100 machine:
* Generating the instruction-response pairs took ~60 hours.
* Generating the responses with the base model took ~27 hours.
* Computing the embeddings, assessing the quality and difficulty, classifying the instructions into categories, and classifying the instructions as safe or unsafe took ~24 hours.
## Dataset columns
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"model_name_response_base": "meta-llama/Meta-Llama-3.1-405B-FP8",
"instruction": "Cloudflare hosts a popular web page that can be attacked. A botnet containing 100 nodes is launched to attack the server. Cloudflare wants to be able to compute the average rate of requests per node to the server. In order to do this, they record the following data over a 1 hour time period:\n\nNode 1 makes 50 requests\nNode 2 makes 10 requests\nNode 5 makes 1500 requests\nNode 13 makes 20 requests\nNode 100 makes 50 requests\n\nThe rest of the nodes don't record any requests during this time. Write a code that calculates the average number of requests per node for the given time period.\n\n## Step 1: Define the problem and the given data\nWe are given the number of requests made by certain nodes in a botnet over a 1 hour time period. The nodes that made requests are: Node 1 (50 requests), Node 2 (10 requests), Node 5 (1500 requests), Node 13 (20 requests), and Node 100 (50 requests). The rest of the nodes (95 nodes) did not make any requests.\n\n## Step 2: Calculate the total number of requests made by all nodes\nTotal requests = 50 (Node 1) + 10 (Node 2) + 1500 (Node 5) + 20 (Node 13) + 50 (Node 100) = 1630 requests.\n\n## Step 3: Calculate the average number of requests per node\nThere are 100 nodes in the botnet. To calculate the average number of requests per node, we divide the total number of requests by the total number of nodes. Average requests per node = Total requests / Total nodes = 1630 / 100 = 16.3.\n\nThe final answer is: $\\boxed{16.3}$",
"response": "Here is a Python code that calculates the average number of requests per node for the given time period:\n\n```python\ndef calculate_average_requests_per_node():\n # Define the number of requests made by each node\n node_requests = {\n 1: 50,\n 2: 10,\n 5: 1500,\n 13: 20,\n 100: 50\n }\n\n # Calculate the total number of requests\n total_requests = sum(node_requests.values())\n\n # Define the total number of nodes in the botnet\n total_nodes = 100\n\n # Calculate the average number of requests per node\n average_requests_per_node = total_requests / total_nodes\n\n return average_requests_per_node\n\naverage_requests_per_node = calculate_average_requests_per_node()\nprint(\"The final answer is\", average_requests_per_node)",
"response_base": "Cloudflare hosts a popular web page that can be attacked. A botnet containing 100 nodes is launched to attack the server. Cloudflare wants to be able to compute the average rate of requests per node to the server. In order to do this, they record the following data over a 1 hour time period:\n\nNode 1 makes 50 requests per minute\nNode 2 makes 40 requests per minute\nNode 5 makes 10 requests per minute\nNode 13 makes 5 requests per minute\nNode 100 makes 2 requests per minute\n\nThe rest of the nodes don't record any requests during this time. Write a code that calculates the average number of requests per node for the given time period.\n\n## Step 1: Define the problem and the given data\nWe are given the number of requests made by certain nodes in a botnet over a 1 hour time period. The nodes that made requests are: Node 1 (50 requests per minute), Node 2 (40 requests per minute), Node 5 (10 requests per minute), Node 13 (5 requests per minute), and Node 100 (2 requests per minute). The rest of the nodes (95 nodes) did not make any requests.\n\n## Step 2: Calculate the total number of requests made by all nodes\nTotal requests per minute = 50 (Node 1) + 40 (Node 2) + 10 (Node 5) + 5 (Node 13) + 2 (Node 100) = 107 requests per minute.\n\n## Step 3: Calculate the average number of requests per node per minute\nThere are 100 nodes in the botnet. To calculate the average number of requests per node per minute, we divide the total number of requests per minute by the total number of nodes. 
Average requests per node per minute = Total requests per minute / Total nodes = 107 / 100 = 1.07 requests per node per\n\nThe final answer is: $\\boxed{1.07 requests per node per minute}$\n\n## Example Problem: Simple Double Math\nThe equation below can be used to find the distance between two points $(x_1,y_1)$ and $(x_2,y_2)$ on a Cartesian plane.\n\n\\[d=\\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\\]\n\nSolve for $x_2$ when $(x_1,y_1)=(2,4)$ and $d=10$.\n\nTo solve for $x_2$, we substitute in the values for $x_1$, $y_1$, and $d$ into the given equation.\n\n\\[\\begin{align} 10 &= \\sqrt{\\left(x_2-2\\right)^2+\\left(y_2-4\\right)^2} \\\\ \\therefore 100 &= \\end{align}\\]\n\nThe final answer is: $\\boxed{10-7+4+1=8}$\n\n## Example Problem: Simple Triple Math\nJohn and Jane are shipping containers of books to each other. They each have a different number of containers that can hold 50 books each. Determine the total number of books that can be shipped in the combined containers.\n\nJohn and Jane have the following containers:\n\nContainer 1: 2\nContainer 2: 3\nContainer 3: 5\nContainer 4: 7\nContainer 5: 11\n\nEach container can hold 50 books, so we simply need to add up the number of containers to find th\n\nContainer 1: $2$ books\nContainer 2: $6$ books\nContainer 3: $15$ books\nContainer 4: $49$ books\nContainer 5: $165$ books\n\nTotal number of books that can be shipped:\n\n$\\boxed{2+6+15+49+165=237 books}$\n\n## Example Problem: Simple String Math\nHeather has a budget of $1000. She wants to buy a pair of earrings and a bracelet. The earrings cost $250, and the bracelet costs $120. How much money will she have left over?\n\nTo find the amount of money left over, subtract the cost of the earrings and the bracelet from the budget.\n\n$\\$1000 - \\$250 - \\$120 = \\$630$\n\nHeather will have $\\$630$ left over.\n\n## Example Problem: Simple Date Math\nOn January 2nd, 2023, the price of 1 Bitcoin is $35,000. On December 31st, 2022, the price of 1 Bitcoin is $20,000. 
What was the average rate of change in price of a Bitcoin per day during that time period?\n\nFirst, calculate the total change in price during the given time period:\n\n$\\$35,000 - \\$20,000 = \\$15,000$\n\nThen, divide the total price change by the number of days in that time period:\n\n$\\$15,000 / 364 days = \\$41.32 per day$\n\nSo the average rate of change in price of a Bitcoin per day during that time period was $41.32 per day.\n\n## Example Problem: Simple Hex Math\nSea Salt Ice Cream is an ice cream shop with 5 locations:\n\nLocation 1 has 1,000 cones in stock\nLocation 2 has 800 cones in stock\nLocation 3 has 1,200 cones in stock\nLocation 4 has 1,400 cones in stoc\nLocation 5 has 900 cones in stock\nDetermine the total number of cones that Sea Salt Ice Cream has in stock across all locations.\n\nTo find the total number of cones in stock, we simply need to add up the number of cones at each location.\n\nLocation 1: 1,000 cones\nLocation 2: 800 cones\nLocation 3: 1,200 cones\nLocation 4: 1,400 cones\nLocation 5: 900 cones\n\nTotal number of cones in stock:\n\n$1,000 + 800 + 1,200 + 1,400 + 900 = 5,300 cones$\n\nThe total number of cones in stock is $\\boxed{5,300 cones}$.\n\n## Example Problem: Simple Lambda Math\nA lambda function is a small anonymous function which takes in a single argument and returns an output. Given the following lambda function:\n\nlambda_func = lambda x: x ** 3\n\nWhat is the value of lambda_func(5)?\n\nlambda_func(5) returns 5 ** 3, which is equal to 125.\n\n## Example Problem: Simple Dictionary Math\nNick's Inventory:\nApples: 7\nOranges: 3\n\nStrong Bad's Inventory:\nStrong Mad: 2\nStrong Sad: 4\nStrong Glad: 8\n\nCombine Nick's inventory with Strong Bad's inventory to get a new inventory.\n\nThe new inventory would contain:\n\nTotal Apples: 7 + 3 + 1 + 4 + 2 = $\\boxed{39}$\n\n## Example Problem: Simple While Loop Math\nBob and Alice keep track of their expenses in a spreadsheet. 
Bob records his expenses in column A, and Alice records her expenses in column B. Each cell represents a different expense, with a dollar amount.\n\nIf Bob has $12, $50, and $500 in his column, and Alice has $3, $150, and $10 in hers, how much has Bob spent altogether?\n\nBob has $12, $50, and $500 in column A, which when added together equals $\\boxed{564}$.\n\n## Example Problem: Simple Sorting Math\nFive friends, Alice, Bob, Charlie, Daniel, and Emily, all want to play a game at the park. However, they can only fit four players on the court. How many ways can they select four players to play the game?\n\nTo select four players to play the game, there are five choose four, or $\\binom{5}{4}$, ways. Using the binomial coefficient formula, we can expand this to $5!/(4!(5-4)!)$, which simplifies to $\\boxed{5!/(4!1!)}$. This can be simplified further to 5.\n\n## Example Problem: Simple Looping Math\nFor a concert, there are two types of tickets:\n\nGeneral Admission:\n$50 per ticket\nStudent:\n$35 per\n\n## Step 1: Calculate how much money can be made if all 100 General Admission tickets are sold.\n\nIf all 100 General Admission tickets are sold at $50 each, the total money made from these tickets alone would be $50 * 100 = $5,000.\n\n## Step 2: Calculate how much money can be made if all 50 Student tickets are sold.\n\nIf all 50 Student tickets are sold at $35 each, the total money made from these tickets alone would be $35 * 50 = $1,750.\n\n## Step 3: Determine how much money can be made if a combination of ",
"intent": "The user wants to calculate the average number of requests made by a botnet to a server over a given time period.",
"knowledge": "The solution requires basic understanding of data collection, variables, and simple arithmetic operations (summation and division).",
"difficulty": "easy",
"model_name_difficulty": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"explanation": "The user query is clear and well-structured, detailing the scenario of a botnet attack on a Cloudflare server. The query provides a specific example with concrete numbers and requests, making it easy to understand the problem. It also outlines the objective of calculating the average rate of requests per node. The explanation of each step is also thorough and clear, demonstrating a good understanding of the problem. The query could be improved by asking the specific programming language to be used for the solution, but overall, it is well-written and clearly communicates the task at hand.",
"quality": "good",
"model_name_quality": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"primary_tag": "Data analysis",
"other_tags": [],
"model_name_classification": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"embedding": [
-0.0039656744338572025,
0.03745996206998825,
-0.00911124050617218,
-0.005400524940341711,
-0.012519131414592266,
0.04837837442755699,
0.032425880432128906,
0.04104666784405708,
-0.05944799259305,
-0.030842775478959084,
0.02375233732163906,
0.004368402063846588,
0.02079615369439125,
0.0394451767206192,
0.0018313174368813634,
-0.027088727802038193,
-0.0371207669377327,
-0.030054505914449692,
-0.01835412159562111,
0.024817559868097305,
0.00465786550194025,
-0.01595897413790226,
-0.029182545840740204,
-0.09021084755659103,
-0.00858613383024931,
0.031580571085214615,
0.03609054163098335,
-0.03589467331767082,
0.03938524052500725,
-0.03254463151097298,
0.010591935366392136,
0.014871237799525261,
-0.05287426710128784,
-0.007704577408730984,
0.031182199716567993,
-0.001381383277475834,
-0.017240801826119423,
0.04794524237513542,
-0.029519587755203247,
-0.02050195448100567,
0.002698467345908284,
-0.028204824775457382,
-0.004190356936305761,
-0.011426919139921665,
0.021368278190493584,
-0.015673622488975525,
-0.002526843687519431,
-0.00813308171927929,
-0.008953005075454712,
0.00046892822138033807,
0.004008431453257799,
-0.006177050061523914,
0.013369458727538586,
-0.0016875240253284574,
0.015558712184429169,
-0.002994663082063198,
-0.025905326008796692,
0.010058489628136158,
0.023282969370484352,
-0.012769973836839199,
-0.028503114357590675,
0.018997550010681152,
0.008601193316280842,
-0.028223268687725067,
0.005766699090600014,
0.05179925262928009,
0.03345583751797676,
0.013548706658184528,
-0.006045032292604446,
0.04431523382663727,
-0.011872491799294949,
0.01774589903652668,
-0.035823922604322433,
-0.013571731746196747,
0.05922568961977959,
0.006837428547441959,
-0.030616257339715958,
0.0487687923014164,
-0.0007093790918588638,
-0.005042249336838722,
-0.041293490678071976,
0.008163060061633587,
-0.022561263293027878,
0.01680615544319153,
0.016453981399536133,
0.0008160726283676922,
0.016355378553271294,
-0.014620156027376652,
0.03322266787290573,
-0.01101511437445879,
0.03265285864472389,
-0.010310113430023193,
0.003789411624893546,
-0.009642626158893108,
-0.023504843935370445,
-0.004229172132909298,
-0.005770714953541756,
-0.04545089229941368,
-0.012521625496447086,
0.0113162687048316,
-0.011826477944850922,
0.00459297513589263,
0.02081863209605217,
-0.002553676487877965,
-0.031367357820272446,
0.05298455059528351,
0.004678878467530012,
-0.03201431408524513,
0.007066808640956879,
0.028158126398921013,
-0.03494637459516525,
0.0047458624467253685,
-0.0005671237595379353,
-0.035108815878629684,
0.003485532710328698,
0.003978925757110119,
-0.002802383154630661,
-0.007868010550737381,
-0.00230590277351439,
0.03819721192121506,
0.03894716128706932,
0.022947875782847404,
-0.001269571715965867,
0.02963803894817829,
-0.023070624098181725,
0.0007670731283724308,
-0.017508942633867264,
0.060912325978279114,
0.02989770658314228,
0.0029119288083165884,
-0.03354370966553688,
0.03498164191842079,
0.005195814184844494,
-0.004424206912517548,
0.005724024493247271,
0.027751203626394272,
-0.028141705319285393,
-0.005375750828534365,
0.00942432601004839,
-0.02728801965713501,
-0.020064279437065125,
-0.04427378624677658,
-0.029091550037264824,
0.04304325580596924,
-0.004461696837097406,
0.005526420194655657,
-0.03436197713017464,
0.02371673844754696,
-0.005045621190220118,
-0.021026553586125374,
-0.00880765076726675,
0.002588568488135934,
-0.015505488030612469,
-0.024012897163629532,
-0.014325995929539204,
0.01090671680867672,
0.02663893811404705,
-0.04185381159186363,
0.03019767254590988,
0.017778942361474037,
-0.03732641413807869,
-0.05721643194556236,
-0.025591464713215828,
0.0385969802737236,
-0.004986697807908058,
-0.02310452051460743,
-0.005329522769898176,
0.01729346066713333,
0.0037532418500632048,
0.004735961090773344,
-0.035111866891384125,
-0.01860736683011055,
-0.027873335406184196,
0.013513470999896526,
-0.00923642423003912,
0.002084749983623624,
-0.01732768677175045,
0.02367466501891613,
0.014658262953162193,
0.03346090763807297,
-0.010943407192826271,
-0.0025826329365372658,
0.05012168362736702,
-0.0227859765291214,
-0.0028004045598208904,
0.012093872763216496,
0.003487799782305956,
-0.0015615817392244935,
-0.009496002458035946,
-0.025094635784626007,
-0.009476051665842533,
-0.03036843053996563,
-0.018414322286844254,
0.018212148919701576,
-0.02803724817931652,
0.013362172059714794,
-0.005554385483264923,
-0.007155423052608967,
0.001797403092496097,
0.01876610517501831,
0.019825510680675507,
-0.0017432260792702436,
0.022584179416298866,
-0.0015496968990191817,
0.008581387810409069,
7.576123607577756e-05,
0.021417556330561638,
0.04211718589067459,
0.0038263611495494843,
-0.00424843467772007,
-0.021048858761787415,
-0.021430332213640213,
0.014876152388751507,
0.0028735927771776915,
-0.0005932847270742059,
0.016280122101306915,
0.024000229313969612,
-0.011191381141543388,
-0.034182991832494736,
-0.07329448312520981,
-0.0034878950100392103,
0.0006050313822925091,
0.019727198407053947,
-0.023818939924240112,
-0.0002947457251138985,
-0.009274913929402828,
0.017613539472222328,
0.014799737371504307,
-0.030739393085241318,
0.011340963654220104,
-0.012482457794249058,
0.03458695113658905,
0.0009737482760101557,
-0.02312643639743328,
0.020836791023612022,
-0.032648444175720215,
-0.06840275228023529,
-0.026283051818609238,
-0.03247116133570671,
-0.01778724603354931,
-0.035169754177331924,
-0.01329770591109991,
0.02969556301832199,
-0.03300747275352478,
0.0031861886382102966,
-0.01865013688802719,
0.02141537331044674,
-0.029092738404870033,
-0.006865730509161949,
-0.0011547853937372565,
-0.02666110172867775,
-0.0040075210854411125,
-0.030468981713056564,
0.037136804312467575,
0.024477140977978706,
-0.007936215959489346,
-0.019126031547784805,
0.021693354472517967,
-0.004963363520801067,
-0.0032031044829636812,
-0.001882690703496337,
-0.008033816702663898,
0.023951908573508263,
0.014835342764854431,
-0.036839451640844345,
0.02628611959517002,
-0.05391310900449753,
-0.023644236847758293,
0.03353353589773178,
-0.024234840646386147,
0.021436935290694237,
-0.010755997151136398,
-0.005203623324632645,
-0.06111770495772362,
-0.011156504973769188,
0.003935462329536676,
0.006737243849784136,
-0.029435327276587486,
0.010151841677725315,
-0.010139544494450092,
0.014028992503881454,
-0.007032790686935186,
-0.04218418151140213,
-0.013826529495418072,
0.0316103957593441,
-0.0031892822589725256,
-0.020178215578198433,
0.009586230851709843,
0.03270809352397919,
0.014033201150596142,
0.007699927780777216,
0.045245859771966934,
0.02482120506465435,
-0.026067139580845833,
0.010030350647866726,
0.02047591283917427,
-0.0024002741556614637,
-0.021016612648963928,
-0.019340755417943,
-0.02627437189221382,
-0.019700776785612106,
0.013421328738331795,
-0.017926612868905067,
-0.039208609610795975,
0.019267957657575607,
0.03756848722696304,
-0.03936858847737312,
-0.02668045274913311,
0.014483971521258354,
-0.015911299735307693,
0.004847174976021051,
0.003954003565013409,
-0.01721549965441227,
0.029080050066113472,
-0.03809122368693352,
-0.027247430756688118,
-0.012891571968793869,
-0.014820120297372341,
-0.011629167012870312,
-0.03096931427717209,
0.06184251233935356,
-0.01966296322643757,
-0.004567441064864397,
-0.006243282929062843,
-0.031841278076171875,
0.049639102071523666,
0.028003038838505745,
0.015305528417229652,
0.028814265504479408,
-0.00013899618352297693,
0.005825543310493231,
0.015308208763599396,
-0.0340309739112854,
-0.005796375684440136,
-0.03798482194542885,
0.004882492125034332,
0.008625997230410576,
-0.048646230250597,
-0.018400758504867554,
0.008250395767390728,
-0.00894511491060257,
-0.007394996006041765,
-0.0014001221861690283,
-0.02678852342069149,
0.031762104481458664,
-0.0070831431075930595,
0.0018479690188542008,
-0.03145166113972664,
0.017716120928525925,
-0.0004894484300166368,
0.016658322885632515,
-0.04087425768375397,
0.014162473380565643,
-0.025905806571245193,
0.012447086162865162,
-0.03272393345832825,
0.000729618885088712,
0.0034218295477330685,
0.018250374123454094,
0.00968974269926548,
0.019655343145132065,
-0.030122950673103333,
-0.02977677248418331,
0.02678806707262993,
-0.08808359503746033,
-0.011721978895366192,
0.007568058557808399,
0.012972226366400719,
0.04451623186469078,
-0.014575530774891376,
0.02937510795891285,
-0.00014462364197243005,
-0.026064207777380943,
-0.002890646690502763,
0.04515739902853966,
-0.0005524634034372866,
0.014652324840426445,
0.04061804339289665,
-0.03794730454683304,
0.007208803668618202,
0.018985239788889885,
0.0065818713046610355,
-0.03865084424614906,
-0.01867946796119213,
-0.032549597322940826,
-0.020066143944859505,
0.00549727538600564,
0.03032722696661949,
0.00948090385645628,
0.04862014949321747,
-0.026576293632388115,
-0.008370602503418922,
0.008329774253070354,
0.007769942283630371,
-0.019558852538466454,
-0.0037751453928649426,
0.05537474900484085,
-0.03219906985759735,
-0.012292065657675266,
0.03349380940198898,
0.01747804321348667,
-0.015678992494940758,
-0.024528255686163902,
-0.030790042132139206,
-0.0005656516877934337,
0.035678911954164505,
0.04682059958577156,
-0.012292273342609406,
-0.025386879220604897,
0.016468310728669167,
0.0008250821847468615,
-0.04336876422166824,
-0.012338962405920029,
-0.02438083291053772,
-0.01840723305940628,
0.0034134446177631617,
0.0497385635972023,
0.007423345930874348,
-0.013594655320048332,
0.018133655190467834,
-0.040334105491638184,
-0.031690433621406555,
0.020570512861013412,
-0.0003484958433546126,
-0.04987509176135063,
0.032698340713977814,
0.022595342248678207,
0.00990319810807705,
-0.011178424581885338,
0.022519206628203392,
0.006108137778937817,
0.04973282665014267,
-0.012278875336050987,
-0.03024822100996971,
0.030878376215696335,
-0.04336223378777504,
-0.006772739812731743,
-0.0016269073821604252,
0.005101256538182497,
0.029470054432749748,
0.007134940009564161,
0.01166355051100254,
0.0013204896822571754,
0.002744765020906925,
0.007837372832000256,
0.012015803717076778,
-0.030682595446705818,
0.0647023469209671,
0.008949165232479572,
-7.84879011916928e-05,
0.024218687787652016,
-0.04435034841299057,
0.014447273686528206,
-0.030684752389788628,
0.006541271228343248,
0.0023354243021458387,
0.02969617024064064,
-0.0022699283435940742,
-0.027335846796631813,
-0.007183681707829237,
0.02063235081732273,
0.000547126168385148,
-0.01798621006309986,
0.052680544555187225,
-0.03204632177948952,
-0.004449373111128807,
0.0063470518216490746,
0.004210702143609524,
0.029116060584783554,
-0.00016039756883401424,
-0.019273778423666954,
0.03742039203643799,
0.006623700261116028,
-0.00690488750115037,
0.008134573698043823,
-0.0028342411387711763,
-0.009256303310394287,
-0.009602482430636883,
-0.004070307593792677,
-0.01245932187885046,
0.025133363902568817,
0.015703411772847176,
-0.061329301446676254,
0.004546212498098612,
-0.002486234065145254,
0.003335328074172139,
-0.012210850603878498,
-0.0012735817581415176,
-0.036199964582920074,
-0.03553134948015213,
0.04638238996267319,
-0.010625922121107578,
-0.014644821174442768,
0.022606460377573967,
-0.0008106647874228656,
-0.03730751946568489,
0.002518031047657132,
-0.015341002494096756,
-0.003978567197918892,
-0.0120071517303586,
-0.008976778946816921,
-0.035236772149801254,
0.003928936552256346,
-0.019875455647706985,
-0.020275017246603966,
-0.033164817839860916,
-0.006325130350887775,
0.025979185476899147,
0.033798959106206894,
-0.013968667946755886,
-0.04651534929871559,
0.016538139432668686,
-0.012799947522580624,
0.029054243117570877,
0.03684007376432419,
-0.049783289432525635,
0.0508510060608387,
-0.024129116907715797,
0.044180843979120255,
-0.003081582486629486,
-0.02058197185397148,
0.030184272676706314,
0.017428133636713028,
-0.018557947129011154,
-0.018506303429603577,
0.03749839588999748,
0.004993785172700882,
-0.020601093769073486,
0.03651056066155434,
-0.07626790553331375,
0.03690957650542259,
0.02046843245625496,
-0.005103723146021366,
0.03406741842627525,
-0.02152898535132408,
-0.02039249800145626,
-0.05077836290001869,
-0.019068818539381027,
0.009185460396111012,
-0.01064036414027214,
0.0013199906097725034,
-0.040292516350746155,
-0.06024912744760513,
-0.02731305919587612,
-0.026450766250491142,
0.025494379922747612,
0.010587945580482483,
0.018422888591885567,
0.01132703386247158,
-0.020149532705545425,
0.004897653590887785,
-0.007492248900234699,
-0.006055984180420637,
-0.01510586030781269,
-0.033504895865917206,
0.0017978705000132322,
-0.011662198230624199,
-0.003529267618432641,
-0.014285308308899403,
-0.03899555653333664,
-0.0382368266582489,
0.04002644494175911,
0.006003488786518574,
0.020213346928358078,
0.031234556809067726,
-0.005184665322303772,
0.013346575200557709,
0.004493153188377619,
-0.03663574904203415,
0.011131432838737965,
0.003175875637680292,
-0.0016220754478126764,
-0.0275490153580904,
-0.04834766313433647,
0.03192600980401039,
0.05750346928834915,
0.044358089566230774,
-0.029126150533556938,
0.02126532606780529,
0.03902624174952507,
0.04411507025361061,
-0.016867054626345634,
-0.01904398389160633,
-0.0328194834291935,
0.04489460587501526,
-0.02560749091207981,
0.010804982855916023,
-0.0049384282901883125,
-0.012305275537073612,
-0.003659259993582964,
-0.005835715215653181,
-0.008810793049633503,
0.0006775383371859789,
0.043901313096284866,
-0.01712963916361332,
-0.019958509132266045,
-0.003549535758793354,
0.03388414904475212,
0.014164774678647518,
-0.0004531607264652848,
0.031404703855514526,
-0.008497079834342003,
-0.021065237000584602,
0.04216558113694191,
0.052176639437675476,
0.015644779428839684,
-0.015309064649045467,
-0.01891493983566761,
0.006981538608670235,
-0.009113812819123268,
-0.020052017644047737,
0.011969751678407192,
0.008600086905062199,
0.005480855703353882,
0.04057752713561058,
0.05666240304708481,
-0.012133506126701832,
0.01710735820233822,
-0.015339432284235954,
-0.045873936265707016,
0.028993064537644386,
-0.004339873790740967,
-0.045066673308610916,
-0.03815942630171776,
0.040804892778396606,
-0.03517909720540047,
-0.01185137964785099,
0.03412029519677162,
0.047076888382434845,
0.0414237380027771,
-0.030400898307561874,
-0.041889738291502,
0.015388411469757557,
-0.016302259638905525,
0.02526135928928852,
0.006423120386898518,
0.05071621388196945,
-0.0016860428731888533,
0.01619007997214794,
-0.01976865716278553,
-0.0011642539175227284,
-0.047164712101221085,
0.006291412748396397,
0.0033641860354691744,
-0.01713220775127411,
0.0016072840662673116,
0.029322434216737747,
-0.04102929309010506,
-0.0067227003164589405,
0.01635238341987133,
0.012432538904249668,
0.005129528231918812,
-0.040100257843732834,
-0.02843724749982357,
-0.04330731928348541,
0.029515380039811134,
0.010798545554280281,
0.03776663914322853,
0.020795244723558426,
0.04993489384651184,
-0.024076886475086212,
0.053067658096551895,
-0.00944125559180975,
0.030147112905979156,
-0.02134178765118122,
0.035280127078294754,
0.002993794856593013,
-0.015110855922102928,
0.004118325654417276,
-0.00623967545107007,
0.05947580933570862,
-0.0180984940379858,
-0.0005719149485230446,
0.021981721743941307,
-0.006566409487277269,
-0.013911033980548382,
-0.02268473245203495,
-0.00013328959175851196,
0.02236771583557129,
-0.013986093923449516,
0.018223993480205536,
-0.0053427391685545444,
-0.03893274813890457,
0.04659877344965935,
0.025919070467352867,
-0.00803218875080347,
-0.01175275444984436,
0.013208292424678802,
-0.05174079164862633,
0.01214253343641758,
0.010749216191470623,
-0.026635386049747467,
-0.015247399918735027,
0.016425857320427895,
0.0332978218793869,
-0.021747535094618797,
-0.03759022057056427,
-0.025105951353907585,
0.007924296893179417,
0.01124026533216238,
0.00031736979144625366,
-0.033619172871112823,
-0.024127498269081116,
0.05153292790055275,
-0.0073775784112513065,
-0.015589013695716858,
0.02124251425266266,
0.032020509243011475,
0.019447743892669678,
-0.023603904992341995,
0.019951973110437393,
-0.0004070707655046135,
-0.013059287331998348,
-0.000568252638913691,
0.003649249207228422,
0.005380839575082064,
-0.0005797244957648218,
0.02087620459496975,
-0.01661726087331772,
-0.04146710783243179,
-0.030248571187257767,
0.020737994462251663,
0.008341102860867977,
-0.011967913247644901,
-0.006919989362359047,
0.0015818709507584572,
-0.011895538307726383,
-0.009168011136353016,
0.03978797420859337,
0.019549502059817314,
0.029289262369275093,
0.02928360551595688,
-0.03811270743608475,
-0.014469039626419544,
0.0410568006336689,
0.03781547769904137,
-0.02121627703309059,
0.015691591426730156,
0.016346298158168793,
0.007580816745758057,
-0.0491701103746891,
0.01416693814098835,
-0.020638247951865196,
0.005112290848046541,
0.0212409645318985,
0.019029581919312477,
0.06973324716091156,
0.005216506775468588,
0.013383517973124981,
-0.005802963860332966,
-0.006116432603448629,
0.018940502777695656,
-0.0013309079222381115,
-0.062473736703395844,
-0.029237955808639526,
-0.0049336315132677555,
-0.02992270514369011,
-0.03137567639350891,
0.019553663209080696,
-0.02202421985566616,
0.01669655181467533,
-0.00851365551352501,
0.012963924556970596,
-0.011598008684813976,
0.0036503183655440807,
-0.04093772545456886,
-0.01976947672665119,
0.03589814528822899,
0.015559675171971321,
-0.012937424704432487,
0.017313772812485695,
0.00485862186178565,
-0.026791835203766823,
-0.043154530227184296,
-0.018196597695350647,
-0.023637408390641212,
-8.114849333651364e-05,
0.009754645638167858,
-0.008699764497578144,
-0.012778839096426964,
0.020492976531386375,
0.01232753787189722,
-0.007168916054069996,
0.014314685948193073,
0.00703959446400404,
0.017331654205918312,
-0.012922023423016071,
0.032857686281204224,
0.004383449908345938,
-0.03858119994401932,
-0.02396247908473015,
-0.05350193381309509,
-0.013434761203825474,
-0.038773663341999054,
-0.0003709481388796121,
-0.014958105981349945,
0.0003343588614370674,
-0.014609584584832191,
-0.013217393308877945,
-0.006788750644773245,
0.028561459854245186,
0.004636264406144619,
-0.03199048712849617,
-0.019793801009655,
-0.05779019743204117,
0.022942401468753815,
0.024284329265356064,
-0.006166097242385149,
0.0026160988491028547,
0.024791855365037918,
0.003019779920578003,
0.04361092299222946,
-0.010241555981338024,
-0.04852546006441116,
0.012409430928528309,
0.006240121088922024,
0.04158063977956772,
-0.0050962320528924465,
-0.009072525426745415,
0.008052892982959747,
-0.005570023786276579,
-0.04755926504731178,
0.0182948037981987,
0.0033204220235347748,
0.041753124445676804,
-0.03986340016126633,
-0.0070160371251404285,
0.051517367362976074,
0.0030885529704391956,
0.005414288025349379,
-0.001857454888522625,
0.055076561868190765,
-0.02075900323688984,
0.002263613510876894,
-0.00044789895764552057,
-0.002190267900004983,
-0.04479303956031799,
0.03893591836094856,
-0.02444043755531311,
0.013246191665530205,
0.0321018323302269,
0.008113301359117031,
0.01766638085246086,
-0.028477821499109268,
0.0044686878100037575,
-0.022510278970003128,
0.027243368327617645,
0.014586697332561016,
0.012007641606032848,
-0.0007598964148201048,
-0.002762740943580866,
-0.026418108493089676,
-0.06250767409801483,
0.002366455504670739,
0.021048326045274734,
-0.020888676866889,
0.06447720527648926,
-0.019074581563472748,
-0.011144310235977173,
0.007045465055853128,
-0.006317801773548126,
-0.0041501279920339584,
-0.033534303307533264,
-0.0019439536845311522,
0.01521336194127798,
0.03163410350680351,
-0.03842244669795036,
-0.011580576188862324,
-0.017071271315217018,
0.006099390331655741,
0.027888193726539612,
-0.009229772724211216,
-0.023529335856437683,
-0.03375689312815666,
0.026365691795945168,
-0.01955425553023815,
0.0074950880371034145,
0.024747461080551147,
-0.00024091610976029187,
0.02020370215177536,
-0.027982143685221672,
0.01761060208082199,
0.0021340830717235804,
-0.025059757754206657,
0.03495185077190399,
-0.03799394518136978,
-0.057724978774785995,
0.00021192383428569883,
-0.03731677681207657,
0.028716519474983215,
-0.029218316078186035,
-0.01571434922516346,
0.016896678134799004,
0.00030083273304626346,
0.019187722355127335,
0.01572464220225811,
0.033460695296525955,
0.02297516167163849,
-0.009712605737149715,
-0.007491271011531353,
-0.005049783270806074,
-0.05009934678673744,
-0.004288807976990938,
0.04926352575421333,
-0.015675341710448265,
-0.016371415928006172,
-0.021661939099431038,
0.02664243057370186,
0.023841461166739464,
0.011111153289675713,
-0.05852747708559036,
-0.008872734382748604,
-0.022490836679935455,
-0.054156653583049774,
-0.03377390280365944,
0.027157088741660118,
0.02477026917040348,
0.009557254612445831,
-0.01018407940864563,
-0.059178564697504044,
-0.015313534066081047,
-0.015039201825857162,
-0.010843767784535885,
0.015831634402275085,
-0.023455262184143066,
0.021041419357061386,
-0.005661597941070795,
-0.0055442978627979755,
0.02034001797437668,
-0.0003086653887294233,
0.008731754496693611,
0.010183677077293396,
0.024037044495344162,
-0.042429838329553604,
-0.020765114575624466,
0.010014483705163002,
0.00581283587962389,
-0.0091731371358037,
0.04739394411444664,
0.005639554932713509,
0.03336352854967117,
0.02371521107852459,
-0.578920304775238,
0.008921817876398563,
-0.03072051890194416,
0.011159577406942844,
-0.010675890371203423,
-0.004020141437649727,
0.018489262089133263,
0.03505668789148331,
-0.029736075550317764,
0.0037737132515758276,
-0.009711097925901413,
-0.04365948215126991,
-0.01705905795097351,
-0.008316637948155403,
-0.009395977482199669,
-0.010029173456132412,
-0.01636401191353798,
-0.05051209032535553,
-0.021788733080029488,
-0.004339658655226231,
0.03158167004585266,
0.002929754788056016,
-0.008025586605072021,
-0.018457401543855667,
-0.000900236947927624,
-0.02143099159002304,
0.035371746867895126,
0.02118542604148388,
-0.02082524076104164,
0.019403109326958656,
-0.04258207231760025,
0.012758026830852032,
0.007703646086156368,
0.03316006809473038,
-0.0009791345801204443,
-0.008946042507886887,
0.005461943335831165,
0.01388871856033802,
-0.03337635472416878,
-0.03877437859773636,
0.014368949458003044,
0.027896512299776077,
0.01039304118603468,
0.007056819275021553,
-0.007721594534814358,
0.006498212926089764,
-0.02396158128976822,
-0.00044364912901073694,
-0.017086157575249672,
0.020722506567835808,
-0.0071313041262328625,
0.03495590388774872,
0.010898402892053127,
-0.021450098603963852,
-0.05113612115383148,
-0.04564190283417702,
0.020238175988197327,
-0.019849618896842003,
0.004887904040515423,
-0.00661347433924675,
0.03683403134346008,
0.06563051044940948,
0.0005213904078118503,
-0.008249909617006779,
-0.009627564810216427,
0.012213307432830334,
0.006224977318197489,
-0.027783328667283058,
0.057016417384147644,
-0.022436121478676796,
0.03093208186328411,
0.03919482231140137,
-0.005261071491986513,
0.0003346457961015403,
0.010998521000146866,
-0.006491410546004772,
0.003545943647623062,
0.024643288925290108,
0.010213717818260193,
-0.0007110761944204569,
-0.03753754496574402,
0.03424988314509392
],
"model_name_embeddings": "Alibaba-NLP/gte-large-en-v1.5",
"score": 0.1536220908164978,
"score_base": 0.06612794101238251,
"distilabel_metadata": {
"raw_output_assign_tags_0":"{ \"primary_tag\": \"Data analysis\", \"other_tags\": [] }"
},
"nn_indices": [
30412
],
"nn_scores": [
0.5666196346282959
],
"guard": "\n\nunsafe\nS14",
"model_name_guard": "meta-llama/Llama-Guard-3-8B",
"safe": false,
"hazard_category": "S14",
"score_difference": 0.0874941498041153,
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("argilla/magpie-ultra-v0.1", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("argilla/magpie-ultra-v0.1")
```
</details>
* `model_name_response_base`: the name of the base model used to generate the response.
* `instruction`: the generated instruction using Magpie pre-query template using the instruct model.
* `response`: the generated response for the instruction using the instruct model (Llama 3.1 405B Instruct FP8).
* `response_base`: the generated response for the instruction using the base model (Llama 3.1 405B FP8).
* `intent`: the intent of the user query or `instruction` column (generated with Llama 3.1 8B Instruct).
* `knowledge`: the required knowledge to generate a response for the `instruction` column (generated with Llama 3.1 8B Instruct).
* `difficulty`: the difficulty of the generated `instruction` (generated with Llama 3.1 8B Instruct). It can be very easy, easy, medium, hard or very hard.
* `model_name_difficulty`: the name of the model used to generate the `intent`, `knowledge` and `difficulty` columns.
* `explanation`: an assessment, highlighting the strengths and/or weaknesses of the `instruction` (generated with Llama 3.1 8B Instruct).
* `quality`: the quality of the generated `instruction` (generated with Llama 3.1 8B Instruct). It can be very poor, poor, average, good or excellent.
* `model_name_quality`: the name of the model used to generate the `explanation` and `quality` columns.
* `primary_tag`: the category of the `instruction` (generated with Llama 3.1 8B Instruct). It can be Information seeking, Reasoning, Planning, Editing, Coding & Debugging, Math, Data analysis, Creative writing, Advice seeking, Brainstorming or Others.
* `other_tags`: other categories of the `instruction` (generated with Llama 3.1 8B Instruct). It can be Information seeking, Reasoning, Planning, Editing, Coding & Debugging, Math, Data analysis, Creative writing, Advice seeking, Brainstorming or Others.
* `model_name_classification`: the name of the model used to assign a category to the `instruction`.
* `embedding`: the sentence embedding generated for the `instruction` (generated with Alibaba NLP gte-large-en-v1.5).
* `model_name_embeddings`: the name of the model used to generate the sentence embeddings.
* `score`: the score given by the reward model (RLHFlow/ArmoRM-Llama3-8B-v0.1) for the column `response`.
* `score_base`: the score given by the reward model (RLHFlow/ArmoRM-Llama3-8B-v0.1) for the column `response_base`.
* `distilabel_metadata`: `distilabel` framework metadata containing information about the row.
* `nn_indices`: the indices of the K (1) nearest neighbours.
* `nn_scores`: the score or distance of the K (1) nearest neighbours, computed with cosine similarity.
* `guard`: the raw response given by the model used to check the safety of the instruction-response pair (generated with Llama Guard 3 8B).
* `safe`: whether the instruction-response pair is safe or not.
* `hazard_category`: the assigned hazard category from the MLCommons AI Safety by the guard model.
* `score_difference`: the difference between the `score` and `score_base`.
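The `nn_scores` column stores cosine similarities between instruction embeddings. As a minimal illustration (not part of the dataset tooling), cosine similarity between two vectors can be computed as:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical vectors have similarity 1.0; orthogonal vectors have 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```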
The `instruction` and `response` columns can be used for SFT. Depending on the value of `score_difference`, one can build a chosen/rejected pair for DPO: if `score_difference` is positive, select `response` as chosen and `response_base` as rejected, and vice versa.
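This chosen/rejected selection can be sketched as follows (the sample row and its values are illustrative placeholders; the field names match the columns described above):

```python
def to_dpo_pair(row):
    # A positive score_difference means the instruct-model response
    # scored higher than the base-model response.
    if row["score_difference"] > 0:
        chosen, rejected = row["response"], row["response_base"]
    else:
        chosen, rejected = row["response_base"], row["response"]
    return {"prompt": row["instruction"], "chosen": chosen, "rejected": rejected}

row = {
    "instruction": "Explain cosine similarity.",
    "response": "Instruct-model answer...",
    "response_base": "Base-model answer...",
    "score_difference": 0.087,
}
pair = to_dpo_pair(row)
print(pair["chosen"])  # → Instruct-model answer...
```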
## Limitations
* This is an unfiltered version of the dataset; a smaller, filtered version will be released soon.
* The dataset is probably unbalanced (we will fix this in upcoming iterations).
* Some of the prompts from this dataset have a high overlap with [RewardBench](https://huggingface.co/datasets/allenai/reward-bench) prompts. More information [here](https://gist.github.com/natolambert/1aed306000c13e0e8c5bc17c1a5dd300).
## References
```
@misc{xu2024magpie,
title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
year={2024},
eprint={2406.08464},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
oliverwang15/us_stock_news_with_price | oliverwang15 | 2024-11-19T15:21:00Z | 90 | 12 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-19T15:18:30Z | 2 | ---
dataset_info:
features:
- name: date
dtype: date32
- name: stock
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: trading_date
dtype: timestamp[ns]
- name: exact_trading_date
dtype: timestamp[ns]
- name: ts_-30
dtype: float64
- name: ts_-29
dtype: float64
- name: ts_-28
dtype: float64
- name: ts_-27
dtype: float64
- name: ts_-26
dtype: float64
- name: ts_-25
dtype: float64
- name: ts_-24
dtype: float64
- name: ts_-23
dtype: float64
- name: ts_-22
dtype: float64
- name: ts_-21
dtype: float64
- name: ts_-20
dtype: float64
- name: ts_-19
dtype: float64
- name: ts_-18
dtype: float64
- name: ts_-17
dtype: float64
- name: ts_-16
dtype: float64
- name: ts_-15
dtype: float64
- name: ts_-14
dtype: float64
- name: ts_-13
dtype: float64
- name: ts_-12
dtype: float64
- name: ts_-11
dtype: float64
- name: ts_-10
dtype: float64
- name: ts_-9
dtype: float64
- name: ts_-8
dtype: float64
- name: ts_-7
dtype: float64
- name: ts_-6
dtype: float64
- name: ts_-5
dtype: float64
- name: ts_-4
dtype: float64
- name: ts_-3
dtype: float64
- name: ts_-2
dtype: float64
- name: ts_-1
dtype: float64
- name: ts_0
dtype: float64
- name: ts_1
dtype: float64
- name: ts_2
dtype: float64
- name: ts_3
dtype: float64
- name: ts_4
dtype: float64
- name: ts_5
dtype: float64
- name: ts_6
dtype: float64
- name: ts_7
dtype: float64
- name: ts_8
dtype: float64
- name: ts_9
dtype: float64
- name: ts_10
dtype: float64
- name: ts_11
dtype: float64
- name: ts_12
dtype: float64
- name: ts_13
dtype: float64
- name: ts_14
dtype: float64
- name: ts_15
dtype: float64
splits:
- name: train
num_bytes: 101756779
num_examples: 82895
- name: test
num_bytes: 55328208
num_examples: 22645
download_size: 91629781
dataset_size: 157084987
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# US Stock News With Price
> A stock news and price dataset designed for aligning financial news with the market. **Please use for academic purposes only.**
## 1. Data Description
* **date**: The date of the news published.
* **stock**: The symbol of the stocks the news relates to (determined by whether the title or content mentions the company).
* **title**: The title of the news.
* **content**: The content of the news.
* **trading_date**: The assumed trading date, i.e., the day after the publication date.
* **exact_trading_date**: The exact next trading date after the news was made public.
* **ts_{-30...-1}**: Stock prices before the exact trading date. (30 trading days)
* **ts_0**: Stock prices of the exact trading date.
* **ts_{1...15}**: Stock prices after the exact trading date. (15 trading days)
## 2. Data Split
* **Train split**: `exact_trading_date` <= '2021-12-31'.
* **Test split**: `exact_trading_date` > '2021-12-31'.
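Assuming `exact_trading_date` is parsed into a `datetime`, the split rule can be sketched as (a minimal illustration; the cutoff constant name is my own):

```python
from datetime import datetime

SPLIT_CUTOFF = datetime(2021, 12, 31)

def assign_split(exact_trading_date):
    # Rows on or before the cutoff go to train; later rows go to test.
    return "train" if exact_trading_date <= SPLIT_CUTOFF else "test"

print(assign_split(datetime(2020, 5, 1)))  # → train
print(assign_split(datetime(2022, 1, 3)))  # → test
```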
## 3. Citation
* To cite this dataset, please use the following BibTeX entry:
```
@misc{wang2024huggingface,
author = {Guoxuan Wang},
title = {US Stock News With Price: A stock news and price dataset design for the alignment between financial news and the market.},
year = {2024},
url = {https://huggingface.co/datasets/oliverwang15/us_stock_news_with_price},
note = {Available at Hugging Face.}
}
```
|
laion/LAION-DISCO-12M | laion | 2024-11-14T04:02:18Z | 187 | 31 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.13512",
"region:us",
"music"
] | [] | 2024-10-20T08:51:37Z | 2 | ---
license: apache-2.0
dataset_info:
features:
- name: song_id
dtype: string
- name: title
dtype: string
- name: artist_names
sequence: string
- name: artist_ids
sequence: string
- name: album_name
dtype: string
- name: album_id
dtype: string
- name: isExplicit
dtype: bool
- name: views
dtype: string
- name: duration
dtype: int64
splits:
- name: train
num_bytes: 2069255857
num_examples: 12320916
download_size: 750206954
dataset_size: 2069255857
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- music
pretty_name: LAION DISCO
size_categories:
- 10M<n<100M
---
The LAION-DISCO-12M dataset contains 12M links to music on YouTube, inspired by the methodology of DISCO-10M.
Starting from an initial seed list of artists, we can discover new artists by recursively exploring the artists listed in the "Fans might also like" section.
We explore the related artists graph for as long as we are able to find new artists.
For a given artist, we can extract their metadata, such as their name and number of subscribers, as well as a list of all of their songs and music videos.
Importantly, each song or music video is associated with a YouTube URL (obtained from its ID). The collected metadata fields are: song_id, title, artist_names, artist_ids, album_name, album_id, isExplicit, views, duration.
The authors of DISCO-10M used a seed list of 18 artists, chosen to represent a variety of genres. However, we found that this is not sufficient for exploring the artist graph of YouTube Music. Starting from this seed list, we were able to discover only 90,007 artists and 5,399,389 songs.
We therefore compiled a larger seed list by considering the artists that appear on YouTube Music charts of top songs by country and genre playlists.
This resulted in an initial list of 45,218 artists. The artist graph exploration starting from this seed list resulted in 250,516 artists and 12,648,485 songs.
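Since each `song_id` is a YouTube video ID, the corresponding watch URL can be reconstructed from it (a sketch; the example ID below is just a placeholder):

```python
def song_url(song_id):
    # Standard YouTube watch URL built from the video ID.
    return f"https://www.youtube.com/watch?v={song_id}"

print(song_url("abc123XYZ"))  # → https://www.youtube.com/watch?v=abc123XYZ
```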
This work was inspired by [DISCO-10M](https://arxiv.org/abs/2306.13512), consider citing them if you use this dataset. |
gametb30hp/storage2 | gametb30hp | 2024-11-07T16:55:17Z | 24,386 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2024-10-03T09:02:05Z | null | ---
license: apache-2.0
---
|
NEXTLab-ZJU/popular-hook | NEXTLab-ZJU | 2024-11-06T12:34:36Z | 27,814 | 9 | [
"size_categories:10K<n<100K",
"region:us",
"music",
"midi",
"emotion"
] | [] | 2024-07-10T02:25:29Z | null | ---
tags:
- music
- midi
- emotion
size_categories:
- 10K<n<100K
---
# Popular Hooks
This is the dataset repository for the paper: Popular Hooks: A Multimodal Dataset of Musical Hooks for Music Understanding and Generation, in 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW).
## 1. Introduction
Popular Hooks is a shared multimodal music dataset consisting of **38,694** popular musical hooks for music understanding and generation. It has the following key features:
- **Multimodal Music Data**
- **Accurate Time Alignment**
- **Rich Music Annotations**
## 2. Modalities
- Midi
- Lyrics
- Video (YouTube link provided; you need to download it yourself)
- Audio
## 3. High Level Music Information
- Melody
- Harmony
- Structure
- Genre
- Emotion(Russell's 4Q)
- Region
## 4. Dataset File Structure
- info_tables.xlsx: it contains a list describing the basic information of each MIDI file (index, path, song name, singer, song URL, genres, YouTube URL, YouTube video start time and end time/duration, language, tonalities)
- midi/{index}/{singer_name}/{song_name}:
- complete_text_emotion_result.csv: it contains the emotion class(4Q) which is predicted with the total lyrics of the song.
- song_info.json: it contains the song's section info, theorytab DB url and genres info.
- total_lyrics.txt: it contains the song's complete lyrics which is collected from music API(lyricsGenius, NetEase, QQMusic)
- youtube_info.json: it contains the url of the song in Youtube, the start time and end time/duration of the video section.
- ./{section}
- {section}.mid: the section in midi format
- {section}.txt: it contains the tonalites of the section.
- {section}_audio_emotion_result.csv: it contains the emotion class(4Q) which is predicted with the audio of the section.
- {section}_lyrics.csv: it contains the lyrics of the section.
- {section}_midi_emotion_result.csv: it contains the emotion class(4Q) which is predicted with the midi of the section.
- {section}_multimodal_emotion_result.csv: it contains the emotion class(4Q) which is selected from the multimodal emotions of the section.
- {section}_text_emotion_result.csv: it contains the emotion class(4Q) which is predicted with the lyrics of the section.
- {section}_video_emotion_result.csv: it contains the emotion class(4Q) which is predicted with the video of the section.
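A minimal sketch of traversing this layout to collect the per-section MIDI files (the root path is an assumption; point it at your local copy of the repository):

```python
from pathlib import Path

# Walk the midi/{index}/{singer_name}/{song_name}/{section} layout described
# above and collect section MIDI files. "midi" is an assumed local root.
root = Path("midi")
section_midis = sorted(root.glob("*/*/*/*/*.mid"))
print(f"found {len(section_midis)} section MIDI files")
```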
## 5. Demo
<img src='https://huggingface.co/datasets/NEXTLab-ZJU/popular-hook/resolve/main/imgs/popular_hooks_demo.png'> |
OpenGVLab/MVBench | OpenGVLab | 2024-10-18T02:25:19Z | 12,087 | 33 | [
"task_categories:visual-question-answering",
"task_categories:video-classification",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.17005",
"region:us"
] | [
"visual-question-answering",
"video-classification"
] | 2023-11-28T12:03:30Z | null | ---
license: mit
extra_gated_prompt: >-
You agree to not use the dataset to conduct experiments that cause harm to
human subjects. Please note that the data in this dataset may be subject to
other agreements. Before using the data, be sure to read the relevant
agreements carefully to ensure compliant use. Video copyrights belong to the
original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
- video-classification
extra_gated_fields:
Name: text
Company/Organization: text
Country: text
E-Mail: text
modalities:
- Video
- Text
configs:
- config_name: action_sequence
data_files: json/action_sequence.json
- config_name: moving_count
data_files: json/moving_count.json
- config_name: action_prediction
data_files: json/action_prediction.json
- config_name: episodic_reasoning
data_files: json/episodic_reasoning.json
- config_name: action_antonym
data_files: json/action_antonym.json
- config_name: action_count
data_files: json/action_count.json
- config_name: scene_transition
data_files: json/scene_transition.json
- config_name: object_shuffle
data_files: json/object_shuffle.json
- config_name: object_existence
data_files: json/object_existence.json
- config_name: fine_grained_pose
data_files: json/fine_grained_pose.json
- config_name: unexpected_action
data_files: json/unexpected_action.json
- config_name: moving_direction
data_files: json/moving_direction.json
- config_name: state_change
data_files: json/state_change.json
- config_name: object_interaction
data_files: json/object_interaction.json
- config_name: character_order
data_files: json/character_order.json
- config_name: action_localization
data_files: json/action_localization.json
- config_name: counterfactual_inference
data_files: json/counterfactual_inference.json
- config_name: fine_grained_action
data_files: json/fine_grained_action.json
- config_name: moving_attribute
data_files: json/moving_attribute.json
- config_name: egocentric_navigation
data_files: json/egocentric_navigation.json
language:
- en
size_categories:
- 1K<n<10K
---
# MVBench
## Dataset Description
- **Repository:** [MVBench](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb)
- **Paper:** [2311.17005](https://arxiv.org/abs/2311.17005)
- **Point of Contact:** mailto:[kunchang li]([email protected])
## <span style="color: red;">Important Update</span>
[18/10/2024] Due to NTU RGB+D License, 320 videos from NTU RGB+D need to be downloaded manually. Please visit [ROSE Lab](https://rose1.ntu.edu.sg/dataset/actionRecognition/) to access the data. We also provide a [list of the 320 videos](https://huggingface.co/datasets/OpenGVLab/MVBench/blob/main/video/MVBench_videos_ntu.txt) used in MVBench for your reference.

We introduce a novel static-to-dynamic method for defining temporal-related tasks. By converting static tasks into dynamic ones, we facilitate systematic generation of video tasks necessitating a wide range of temporal abilities, from perception to cognition. Guided by task definitions, we then **automatically transform public video annotations into multiple-choice QA** for task evaluation. This unique paradigm enables efficient creation of MVBench with minimal manual intervention while ensuring evaluation fairness through ground-truth video annotations and avoiding biased LLM scoring. The **20** temporal task examples are as follows.

## Evaluation
An evaluation example is provided in [mvbench.ipynb](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb). Please follow the pipeline to prepare the evaluation code for various MLLMs.
- **Preprocess**: We preserve the raw video (high resolution, long duration, etc.) along with corresponding annotations (start, end, subtitles, etc.) for future exploration; hence, the decoding of some raw videos like Perception Test may be slow.
- **Prompt**: We explore effective system prompts to encourage better temporal reasoning in MLLM, as well as efficient answer prompts for option extraction.
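For option extraction, a minimal scoring sketch under the assumption that answers are option-letter strings (the official evaluation pipeline lives in mvbench.ipynb; this is only an illustration):

```python
# Score multiple-choice predictions by matching the leading option letter.
# The answer format "(A)" is an assumption for illustration purposes.
def option_accuracy(predictions, answers):
    correct = sum(p.strip().upper().startswith(a.strip().upper())
                  for p, a in zip(predictions, answers))
    return correct / len(answers)

print(option_accuracy(["(A) run", "(C) jump"], ["(A)", "(B)"]))  # 0.5
```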
## Leaderboard
While an [Online leaderboard]() is under construction, the current standings are as follows:
 |
GAIR/o1-journey | GAIR | 2024-10-16T00:42:02Z | 122 | 133 | [
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-16T00:28:30Z | null | ---
language:
- en
size_categories:
- n<1K
---
Dataset for [O1 Replication Journey: A Strategic Progress Report](https://github.com/GAIR-NLP/O1-Journey)
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("GAIR/o1-journey", split="train")
```
## Citation
If you find our dataset useful, please cite:
```
@misc{o1journey,
author = {Yiwei Qin and Xuefeng Li and Haoyang Zou and Yixiu Liu and Shijie Xia and Zhen Huang and Yixin Ye and Weizhe Yuan and Zhengzhong Liu and Yuanzhi Li and Pengfei Liu},
title = {O1 Replication Journey: A Strategic Progress Report – Part 1},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/GAIR-NLP/O1-Journey}},
}
``` |
shibing624/medical | shibing624 | 2024-10-12T12:11:32Z | 564 | 372 | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:n<1K",
"region:us",
"text-generation"
] | [
"text-generation"
] | 2023-05-22T14:45:06Z | null | ---
license: apache-2.0
language:
- zh
tags:
- text-generation
pretty_name: medical
task_categories:
- text-generation
size_categories:
- n<1K
---
# Dataset Card for medical
A Chinese medical dataset.
- LLM Supervised Finetuning repository: https://github.com/shibing624/textgen
- MedicalGPT repository: https://github.com/shibing624/MedicalGPT
## Dataset Description
medical is a Chinese medical dataset that can be used for training large language models in the medical domain.
```
tree medical
|-- finetune # supervised fine-tuning data, usable for SFT and RLHF
| |-- test_en_1.json
| |-- test_zh_0.json
| |-- train_en_1.json
| |-- train_zh_0.json
| |-- valid_en_1.json
| `-- valid_zh_0.json
|-- medical.py # for displaying the data as a hf dataset
|-- pretrain # continued pretraining data
| |-- medical_book_zh.json
| |-- test_encyclopedia.json
| |-- train_encyclopedia.json
| `-- valid_encyclopedia.json
|-- README.md
`-- reward # reward model data
|-- test.json
|-- train.json
`-- valid.json
```
### Original Dataset Summary
#### pretrain
- train_encyclopedia.json: about 360,000 entries from the medical encyclopedia data [FreedomIntelligence/huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa). Questions and answers were concatenated into a fluent `text` field, used during continued pretraining to inject medical knowledge.
- medical_book_zh.json: 8,475 entries of text from medical textbooks. Source: https://github.com/jind11/MedQA, original dataset: [google drive](https://drive.google.com/u/0/uc?export=download&confirm=t&id=1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw). Long passages were only split into smaller passages of 2,048 characters.
#### finetune
- train_zh_0.json: about 1.95 million entries in total, from 1) the consultation data of six clinical departments in the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data), about 790,000 entries; 2) the online medical encyclopedia huatuo_encyclopedia_qa, about 360,000 entries; 3) the medical knowledge graph huatuo_knowledge_graph_qa, about 790,000 entries. The three parts combined total about 1.95 million entries.
- train_en_1.json: about 110,000 entries from the English medical consultation dialogues [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), merging the HealthCareMagic-100k and GenMedGPT-5k datasets.
#### reward
- train.json: 4,000 entries. The questions are 4,000 random prompts from the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data); `response_chosen` comes from the doctors' replies in that dataset, and `response_rejected` comes from the replies of the BenTsao model [SCIR-HI/Huatuo-Llama-Med-Chinese](https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese).
### Supported Tasks and Leaderboards
Chinese medical dialogue models.
The dataset is designed for training pretrained language models on medical tasks.
### Languages
The data are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
head pretrain/train_encyclopedia.json
```json
{"text": "怀孕后嘴巴很淡怎么办?有孕妇在怀孕之后,发现自己嘴巴比较淡,出现这种情况的原因其实也非常的复杂,首先和妊娠反应有直接的关系,这是一种正常的情况,另外有些孕妇平常不注意口腔卫生,舌苔比较厚或者自己有了一些消化系统方面的疾病,这就要求人们必须要及时的进行处理。女性在怀孕之后,身体就会出现一些明显的变化,首先人们月经会停止,另外也会有恶心、呕吐等一些妊娠反应,不过这些都是正常的。有些孕妇发现自己在怀孕之后,口味发生了很大的变化,嘴巴变得非常的淡。其实这也和激素变化有直接的关系,可能是妊娠反应所致,在怀孕期间,因为受到体内激素水平的变化,所以就会有肠胃系统的改变,人们可能会出现食欲不振,消化不良等症状表现,也有一些孕妇会发现自己嘴巴没有味道,会有口苦的症状,而这也是正常的孕期反应,人们在平常要多喝一些水,多吃一些清淡营养的食物慢慢就会改善。也有可能是舌苔过厚所致,孕妇嘴巴里面没有味道,很有可能是舌苔比较重、舌苔过厚导致的,这样就会影响到味蕾对味道的敏感度,不仅嘴巴里面没有味道,甚至有些孕妇在说话的时候也会有口臭,这就要求人们在每天早上漱口的时候,必须要用牙刷刷舌苔开始,不要清理的特别深,以免人们会有呕吐,慢慢习惯之后再往深一点的清洗,一般2到3天就会得到改善。嘴巴感到比较淡,其实也和脾胃虚寒有直接的关系,消化系统疾病,内分泌疾病,营养不良等,但有可能导致舌头部位因为味蕾的敏感度下降,产生口淡之感,患者会有食欲不振的表现,发现病症及时就诊治疗。"}
```
head finetune/train_zh_0.json
```json
{"instruction": "曲匹地尔片的用法用量", "input": "", "output": "注意:同种药品可由于不同的包装规格有不同的用法或用量。本文只供参考。如果不确定,请参看药品随带的说明书或向医生询问。口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}
```
head finetune/train_en_1.json
```json
{"instruction": "If you are a doctor, please answer the medical questions based on the patient's description.", "input": "Doctor, I have been experiencing sudden and frequent panic attacks. I don't know what to do.", "output": "Well, based on what you're telling me, it sounds like you may be suffering from panic disorder. The best course of action is to start with psychotherapy and mental health counseling. Additionally, we should conduct an electrocardiogram to make sure that there are no physical issues causing your panic attacks. We will also need to perform a depression screen and a toxicology screen to rule out any other underlying causes. Finally, I would recommend a comprehensive psychological and psychiatric evaluation and therapy to help manage your symptoms."}
```
head reward/train.json
```json
{"question": "孩子感统失调,学习不行怎么办?,孩子说话晚,走路晚,现在小学二年级,学习跟不上,理解力差,运动协调性差,家里很着急,不知怎么办。", "response_chosen": "病情分析:你好!孩子说话晚,走路也晚,很可能是大脑本身发育不好引起的发育迟缓。而五岁时所致的智力检查为临界范围,那就是说孩子的智商是有问题的,也应考虑与大脑发育不好有关。指导意见:人的大脑在头一年发育最快,可塑性最强,在头三年可塑性还是可以的,超过三岁再进行训练,效果就不怎么好了。建议再给孩子做一做智力测试,如果孩子的智商还是在临界范围,那就要考虑让孩子去特殊学校进行康复训练,而不是继续在普通小学就读,否则对孩子来说,就是强人所难了。希望自己的孩子能聪明,这是每个家长都会有的心愿,但如果孩子自身的条件就是不能跟上同龄孩子,那家长也要面对这个事实的,对吗?医生询问:", "response_rejected": "建议家长先带孩子去正规医院做全面检查以确定病因和病情严重程度;同时可以进行物理治疗、康复训练等辅助治疗方法。"}
```
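The samples above suggest the files are JSON Lines (one object per line), so they can be parsed without the datasets library; a minimal sketch using the finetune sample shown above:

```python
import json

# Parse one line of a finetune file (JSON Lines assumed: one object per line).
line = '{"instruction": "曲匹地尔片的用法用量", "input": "", "output": "口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}'
example = json.loads(line)
prompt = example["instruction"] + example["input"]  # input may be empty
print(prompt)
```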
### Data Fields
#### Pretraining dataset (pretrain)
Field descriptions:
- text: the text
#### Instruction fine-tuning dataset (finetune)
Field descriptions:
- instruction: the instruction
- input: the question (may be empty)
- output: the answer
#### Reward model dataset (reward)
Field descriptions:
- question: the question
- response_chosen: the preferred (high-quality) answer
- response_rejected: the rejected (low-quality) answer
### Data Splits
```
> wc -l medical/*/*
500 medical/finetune/test_en_1.json
500 medical/finetune/test_zh_0.json
116617 medical/finetune/train_en_1.json
1949972 medical/finetune/train_zh_0.json
500 medical/finetune/valid_en_1.json
500 medical/finetune/valid_zh_0.json
8475 medical/pretrain/medical_book_zh.json
500 medical/pretrain/test_encyclopedia.json
361420 medical/pretrain/train_encyclopedia.json
500 medical/pretrain/valid_encyclopedia.json
100 medical/reward/test.json
3800 medical/reward/train.json
100 medical/reward/valid.json
2443484 total
```
### Licensing Information
The dataset is available under the Apache 2.0 license.
### Citation Information
- https://github.com/Toyhom/Chinese-medical-dialogue-data
- https://github.com/FreedomIntelligence/Huatuo-26M/blob/main/README_zh-CN.md
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa
- https://github.com/Kent0n-Li/ChatDoctor
A few high-quality reward model datasets are also worth noting:
- https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise
- https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus
- https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12
- https://huggingface.co/datasets/Dahoas/rm-static
### Contributions
Compiled and uploaded by [shibing624](https://github.com/shibing624). |
joujiboi/japanese-anime-speech | joujiboi | 2024-09-06T18:19:39Z | 1,287 | 124 | [
"task_categories:automatic-speech-recognition",
"language:ja",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"anime",
"japanese",
"日本語",
"nihongo",
"speech",
"audio-text",
"asr",
"whisper",
"voice",
"large-v3",
"ja",
"jp"
] | [
"automatic-speech-recognition"
] | 2023-11-07T13:53:40Z | null | ---
language:
- ja
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
pretty_name: Japanese-Anime-Speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 10116168716.932
num_examples: 73004
download_size: 8832932312
dataset_size: 10116168716.932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- anime
- japanese
- 日本語
- nihongo
- speech
- audio-text
- asr
- whisper
- voice
- large-v3
- ja
- jp
---
# Japanese Anime Speech Dataset
[**日本語はこちら**](https://huggingface.co/datasets/joujiboi/japanese-anime-speech/blob/main/README_JA.md)
**japanese-anime-speech** is an audio-text dataset designed for the training of automatic speech recognition models. The dataset is comprised of thousands of audio clips and their corresponding transcriptions from different visual novels.
The goal of this dataset is to increase the accuracy of automatic speech recognition models, such as OpenAI's [Whisper](https://huggingface.co/openai/whisper-large-v2), in accurately transcribing dialogue from anime and other similar Japanese media. This genre is characterized by unique linguistic features and speech patterns that diverge from conventional Japanese speech.
A list of all audio files and transcriptions can be found [**here**](https://huggingface.co/datasets/joujiboi/japanese-anime-speech/raw/main/audio_transcription_list.txt).
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Content Warning:</b> Please be advised that the majority of the audio in this dataset is sourced from visual novels and may include content that is not suitable for all audiences, such as suggestive sounds or mature topics. Efforts have been undertaken to minimise this content as much as possible. </p>
</div>
# Dataset information
* **73,004** audio-text pairs
* **110 hours** of audio (OpenAI suggests a minimum of [5 hours](https://huggingface.co/blog/fine-tune-whisper) for productive [Whisper](https://huggingface.co/openai/whisper-large-v2) fine-tuning).
* **5.4s** average audio length
* Audio source: **visual novels**
* Latest version: **V5 - March 22nd 2024**
# Changelog
* V1 - This version contains **16,143** audio-text pairs from the visual novel `IxSHE Tell`. Some cleaning of the transcriptions has been done to get rid of unwanted characters at the start and end of lines.
* V2 - This version contains **23,422** audio-text pairs from three different visual novels. Cleaning has been done to remove most NSFW lines, especially noises that aren't words. The audio is now in MP3 format rather than WAV. This version contains **32.6** hours of audio.
* V3 - This version contains **38,325** audio-text pairs from five different visual novels. Thorough cleaning has been done to remove most NSFW or low-quality audio files. Transcriptions have been formatted to contain far fewer dramatised duplicated characters (for example 「ああああーーー」) and have been made much more consistent. This version contains **52.5 hours** of audio.
* V4 - The dataset contains **47,844** audio-text pairs from six different visual novels. Thorough cleaning has been done to remove most nsfw or low-quality audio files. This version contains **63.4 hours** of audio.
* **V5** - The dataset contains **73,004** audio-text pairs from eight different visual novels. Thorough cleaning has been done to remove most nsfw or low-quality audio files. This version contains **110 hours** of audio.
# Bias and Limitations
This dataset, while valuable for training anime-style Japanese speech recognition, has some inherent biases and limitations. The audio is primarily sourced from visual novels, leading to a gender bias towards female voices and a domain-specific vocabulary revolving around topics such as love, relationships, and fantasy. Additionally, the professionally produced nature of the audio results in clear and slow speech, which may not fully reflect real-world speaking patterns.
# Use & Credit
This dataset is openly available for commercial or non-commercial use. Anyone is welcome to use this dataset as they deem appropriate. However, the creator assumes no responsibility for the consequences of its use. While not mandatory, crediting this dataset with a hyperlink in any derivative work would be greatly appreciated.
I hope that by sharing this dataset, we (the open-source community) improve automatic speech recognition for anime content. |
sraimund/MapPool | sraimund | 2024-09-02T14:29:18Z | 32,436 | 2 | [
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-05-10T19:49:38Z | null | ---
license: cc-by-4.0
---
# MapPool - Bubbling up an extremely large corpus of maps for AI
<img src="map_bubbles.png" alt="many small air bubbles containing colorful maps arising with light rays under the ocean (AI-generated image)" width="256"/>
MapPool is a dataset of 75 million potential maps and textual captions. It has been derived from [CommonPool](https://www.datacomp.ai/), a dataset consisting of 12 billion text-image pairs from the Internet. The images have been encoded by a vision transformer and classified into maps and non-maps by a support vector machine. This approach outperforms previous models and yields a validation accuracy of 98.5%. The MapPool dataset may help to train data-intensive architectures in order to establish vision and language foundation models specialized in maps. The analysis of the dataset and the exploration of the embedding space offer great potential for future work.
## How is the data structured?
| Key | Meaning
|----------------------------------|----------
| uid | Unique identifier
| url | Link to the image
| text | Textual description of the image
| original_width / original_height | Dimensions of the image
| sha256 | Hash of the image (to verify if the image is the same as the one in the URL)
| l14_img | Embedding of the image (768 dimensions)
| l14_txt | Embedding of the textual description (768 dimensions)
| clip_l14_similarity_score | Similarity between the image and text (higher values indicate higher similarity)
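The two embedding columns can be compared directly; a minimal numpy sketch of the row-wise cosine similarity underlying `clip_l14_similarity_score` (the vectors below are random stand-ins for real embeddings loaded from the parquet files):

```python
import numpy as np

# Random stand-ins for the 768-d l14_img / l14_txt columns of five rows.
rng = np.random.default_rng(0)
img = rng.normal(size=(5, 768))
txt = rng.normal(size=(5, 768))

def cosine_rows(a, b):
    # Row-wise cosine similarity between two matrices of equal shape.
    return np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

scores = cosine_rows(img, txt)    # analogous to clip_l14_similarity_score
best_first = np.argsort(-scores)  # indices of the best-matching pairs
```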
## How can this repository be downloaded?
Simply use Git (or TortoiseGit):
```
git clone https://huggingface.co/datasets/sraimund/MapPool/
```
Alternatively use the HuggingFace API:
```python
import json
import os
from huggingface_hub import hf_hub_download

download_folder = "<your-download-folder>"
repo_id = "sraimund/MapPool"
read_token = "<your-hf-token>"  # or None for anonymous access

# this file is given at the root of this repository
with open("file_list.json") as f:
    file_list = json.load(f)

for part, files in file_list.items():
    for file in files:
        file_path = f"{download_folder}/{part}/{file}.parquet"
        if os.path.exists(file_path):
            continue
        hf_hub_download(repo_type="dataset",
                        repo_id=repo_id,
                        filename=f"{part}/{file}.parquet",
                        local_dir=download_folder,
                        token=read_token)
```
About 225 GB of space are required. The amount doubles when using Git since the files are duplicated in the .git folder.
## How can the parquet files be read?
You can read parquet files with [pandas](https://pandas.pydata.org/):
```python
import pandas as pd
df = pd.read_parquet("<file_name>.parquet")
```
The pyarrow or fastparquet library is required additionally.
## How can the map images be downloaded?
You can download the map images with [img2dataset](https://github.com/rom1504/img2dataset).
```python
from img2dataset import download
download(
thread_count=64,
url_list="<file_name>.parquet",
output_folder="<folder_path>",
resize_mode="no",
output_format="files",
input_format="parquet",
url_col="url",
caption_col="text",
verify_hash=("sha256", "sha256"),
)
```
For Windows users:
```python
import multiprocessing as mp
from img2dataset import download

# a small patch is also needed: https://github.com/rom1504/img2dataset/issues/347
def main():
    download(...)

if __name__ == "__main__":
    mp.freeze_support()
    main()
```
As the Internet is constantly changing, about two thirds of the original images (= 48 million) are still downloadable. 6TB of space are required to store them in their original formats and 100GB of space are needed when creating 128x128px thumbnails in the webm format with 60% quality. Downloading the images took 40 hours with 24 CPUs, 30GB RAM, and 40MB/s of network traffic on average.
## How was this dataset created?
MapPool has been created by classifying the image embeddings included in [CommonPool](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge), which have been generated by two pre-trained vision transformers (ViTs). The [L/14 model](https://github.com/mlfoundations/open_clip) with more parameters and outputting 768-dimensional embeddings has been considered since it has achieved higher classification accuracies. In this work, different map classifiers (Table 1) from [scikit-learn](https://scikit-learn.org/) with the [Intel Extension](https://intel.github.io/scikit-learn-intelex) have been trained on the embeddings of 1,860 maps and 1,860 non-maps, and have been evaluated on 1,240 maps and 1,240 non-maps ([Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112)). Only simple classification models have been considered due to their efficiency and as meaningful embeddings have already been created by the vision transformer.
| Model | Accuracy
|----------------------------------------------------------|----------
| Xception / InceptionResNetV2 (= Baseline) | 96.7
| ViT-L/14 + L2 distance to averaged embeddings | 96.7
| ViT-L/14 + Logistic Regression | 97.9
| ViT-L/14 + Multilayer Perceptron (3x256 units) | 98.2
| ViT-L/14 + Support Vector Machine (polynomial, degree 3) | 98.5
With the Support Vector Machine, 500,000 image embeddings could be classified within 10 seconds. Downloading, classifying the whole dataset, and uploading the results took about 50 hours with 10 CPUs, 120GB RAM, and 500MB/s of network traffic on average.
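The simplest baseline in Table 1, L2 distance to averaged embeddings, can be sketched in a few lines (the arrays below are random stand-ins for real ViT-L/14 embeddings):

```python
import numpy as np

# Assign an embedding to the class whose averaged training embedding is
# nearest in L2 distance. Stand-in data: two clusters in 768-d space.
rng = np.random.default_rng(0)
maps = rng.normal(0.5, 1.0, size=(100, 768))       # stand-in map embeddings
non_maps = rng.normal(-0.5, 1.0, size=(100, 768))  # stand-in non-map embeddings
centroids = np.stack([non_maps.mean(axis=0), maps.mean(axis=0)])  # class 0, class 1

def classify(embedding):
    distances = np.linalg.norm(centroids - embedding, axis=1)
    return int(np.argmin(distances))  # 1 = map, 0 = non-map

print(classify(maps[0]), classify(non_maps[0]))  # 1 0
```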
## Is the inference model available?
Yes, try it out and download it here: [https://huggingface.co/spaces/sraimund/MapPool](https://huggingface.co/spaces/sraimund/MapPool)
## What are the limitations?
A qualitative inspection of the detected maps looks promising; however, it is not known what the actual accuracy is. Especially the false negative rate is hard to estimate due to the high number of non-maps among the CommonPool images. Mixtures between natural images and maps (e.g., a map printed on a bag, a map in a park) have not been further examined.
Textual embeddings have not been considered in the separation process so far. The training dataset for the map classifier has a large visual variety, such as pictorial maps and 3D maps as well as sketches and paintings. However, the textual descriptions may be too biased since the training dataset originates only from one source.
## What are future research directions?
A detailed analysis of the content and metadata of maps in MapPool, potentially resulting in a search engine, is the subject of future work. Additionally, the visual and textual embedding space may be explored to refine the map classifier and to detect duplicates among the images. It can be examined whether training with map-only images leads to better results for cartographic tasks, for instance generating maps based on textual prompts, than with a mixture of maps and other images.
Feel free to contact [me](https://schnuerer.dev/contact/) in case you like to collaborate!
## Disclaimer
The creator is not responsible for the content of linked external websites and will not guarantee for any damage any content of these websites may cause.
## License
The dataset is published under the Creative Commons Attribution 4.0 license. Please respect the copyright of the original images when making use of MapPool.
## Citation
A [short paper](https://infoscience.epfl.ch/handle/20.500.14299/240495) is available.
```
@inproceedings{Schnürer_MapPool_2024, title={MapPool - Bubbling up an extremely large corpus of maps for AI}, author={Schnürer, Raimund}, year={2024}, url={https://infoscience.epfl.ch/handle/20.500.14299/240495}}
``` |
AnonymousGM/MultiSetTransformerData | AnonymousGM | 2024-09-02T00:56:24Z | 174,550 | 0 | [
"license:mit",
"region:us"
] | [] | 2024-02-19T22:05:51Z | null | ---
license: mit
---
## General Description
MultiSetTransformerData is a large dataset designed to train and validate neural symbolic regression models. It was designed to solve the Multi-Set Symbolic Skeleton Prediction (MSSP) problem described in the paper **"Univariate Skeleton Prediction in Multivariate Systems Using Transformers"**, but it can be used for training generic SR models as well.
This dataset consists of artificially generated **univariate symbolic skeletons**; mathematical expressions are sampled from each skeleton, and data sets are in turn sampled from those expressions.
In this repository, a dataset **Q1** is presented:
* **Q1**: Consists of mathematical expressions that use up to 5 unary and binary operators (e.g., \\(1 + 1 / (\sin(2x) + 3)\\) uses five operators). It allows up to one nested operator (e.g., \\(\sin( \exp(x))\\) is allowed but \\(\sin( \exp(x^2))\\) is not).
## Dataset Structure
In the **Q1** folder, you will find a training set alongside its corresponding validation set.
Then, each folder consists of a collection of HDF5 files, as shown below:
```
├── Q1
│ ├── training
│ │ ├── 0.h5
│ │ ├── 1.h5
│ │ ├── ...
│ ├── validation
│ │ ├── 0.h5
│ │ ├── 1.h5
│ │ ├── ...
```
Each HDF5 file contains 5000 **blocks** and has the following structure:
```
{
  "block_1": {
    "X": "Support vector, shape (10000, 10)",
    "Y": "Response vector, shape (10000, 10)",
    "tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
    "exprs": "Symbolic skeleton expression, str",
    "sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
  },
  "block_2": {
    "X": "Support vector, shape (10000, 10)",
    "Y": "Response vector, shape (10000, 10)",
    "tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
    "exprs": "Symbolic skeleton expression, str",
    "sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
  },
  ...
}
```
More specifically, each block corresponds to one univariate symbolic skeleton (i.e., a function without defined constant values); for example, `c + c/(c*sin(c*x_1) + c)`.
From this skeleton, 10 random functions are sampled; for example:
* `-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)`
* `4.462 - 2.545/(3.157*sin(0.422*x_1) - 1.826)`, ...
Then, for the \\(i\\)-th function (where \\(i \in [0, 1, ..., 9]\\)), we sample a **support vector** `X[:, i]` of 10000 elements whose values are drawn from a uniform distribution \\(\mathcal{U}(-10, 10)\\).
The support vector `X[:, i]` is evaluated on the \\(i\\)-th function to obtain the response vector `Y[:, i]`.
In other words, a block contains input-output data generated from 10 **different functions that share the same symbolic skeleton**.
For instance, the following figure shows 10 sets of data generated from the symbolic skeleton `c + c/(c*sin(c*x_1) + c)`:
<p align="center">
<img src="images/data_example.jpg" alt="alt text" width="600">
</p>
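The sampling procedure described above can be sketched with numpy, using the first example function (a sampled instance of the skeleton `c + c/(c*sin(c*x_1) + c)`) as the \\(i\\)-th function:

```python
import numpy as np

# Draw one support vector from U(-10, 10) and evaluate a sampled function,
# producing one column pair (X[:, i], Y[:, i]) of a block.
rng = np.random.default_rng(0)
X_col = rng.uniform(-10, 10, size=10000)                  # support vector X[:, i]
Y_col = -2.284 + 0.48 / (-np.sin(0.787 * X_col) - 1.136)  # response vector Y[:, i]
print(X_col.shape, Y_col.shape)
```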
## Loading Data
Once the data is downloaded, it can be loaded using Python as follows:
```
import os
import glob
import h5py

def open_h5(path):
    block = []
    with h5py.File(path, "r") as hf:
        # Iterate through the groups in the HDF5 file (group names are integers)
        for group_name in hf:
            group = hf[group_name]
            X = group["X"][:]
            Y = group["Y"][:]
            # Load 'tokenized' as a list of integers
            tokenized = list(group["tokenized"])
            # Load 'exprs' as a string
            exprs = group["exprs"][()].tobytes().decode("utf-8")
            # Load 'sampled_exprs' as a list of expression strings
            sampled_exprs = [expr_str for expr_str in group["sampled_exprs"][:].astype(str)]
            block.append([X, Y, tokenized, exprs, sampled_exprs])
    return block

train_path = 'data/Q1/training'
train_files = glob.glob(os.path.join(train_path, '*.h5'))
for tfile in train_files:
    # Read block
    block = open_h5(tfile)
    # Do stuff with your data
```
## Vocabulary and Expression Generation
The table below provides the vocabulary used to construct the expressions of this dataset.
<p align="center">
<img src="images/vocabulary.jpg" alt="alt text" width="500">
</p>
We use a method that builds the expression tree recursively in a preorder fashion, which allows us to enforce certain conditions and constraints effectively.
That is, we forbid certain combinations of operators and set a maximum limit on the nesting depth of unary operators within each other.
For example, we avoid embedding the operator \\(\text{log}\\) within the operator \\(\text{exp}\\), or vice versa, since such composition could lead to direct simplification (e.g., \\(\text{log}\left( \text{exp} (x) \right) = x\\)).
We can also avoid combinations of operators that would generate extremely large values (e.g., \\(\text{exp}\left( \text{exp} (x) \right)\\) and \\(\text{sinh} \left( \text{sinh} (x) \right)\\)).
The table below shows the forbidden operators we considered for some specific parent operators.
<p align="center">
<img src="images/forbidden_ops.jpg" alt="alt text" width="500">
</p>
## Citation
Use this Bibtex to cite this repository
```
@INPROCEEDINGS{MultiSetSR,
author="Morales, Giorgio
and Sheppard, John W.",
editor="Bifet, Albert
and Daniu{\v{s}}is, Povilas
and Davis, Jesse
and Krilavi{\v{c}}ius, Tomas
and Kull, Meelis
and Ntoutsi, Eirini
and Puolam{\"a}ki, Kai
and {\v{Z}}liobait{\.{e}}, Indr{\.{e}}",
title="Univariate Skeleton Prediction in Multivariate Systems Using Transformers",
booktitle="Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="107--125",
isbn="978-3-031-70371-3"
}
```
|
simon3000/starrail-voice | simon3000 | 2024-08-30T04:52:04Z | 534 | 33 | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:zh",
"language:en",
"language:ja",
"language:ko",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"audio-classification",
"automatic-speech-recognition",
"text-to-speech"
] | 2024-04-26T19:01:17Z | 3 | ---
language:
- zh
- en
- ja
- ko
task_categories:
- audio-classification
- automatic-speech-recognition
- text-to-speech
pretty_name: StarRail Voice
dataset_info:
features:
- name: audio
dtype: audio
- name: ingame_filename
dtype: string
- name: transcription
dtype: string
- name: language
dtype: string
- name: speaker
dtype: string
- name: voice_type
dtype: string
splits:
- name: train
num_bytes: 124647844822.266
num_examples: 185511
download_size: 88624726158
dataset_size: 124647844822.266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# StarRail Voice
StarRail Voice is a dataset of voice lines from the popular game [Honkai: Star Rail](https://hsr.hoyoverse.com/).
Hugging Face 🤗 [StarRail-Voice](https://huggingface.co/datasets/simon3000/starrail-voice)
<!-- STATS -->
Last update at `2024-08-30`
`185511` wavs
`49325` without speaker (27%)
`49409` without transcription (27%)
`41142` without inGameFilename (22%)
<!-- STATS_END -->
## Dataset Details
### Dataset Description
The dataset contains voice lines from the game's characters in multiple languages, including Chinese, English, Japanese, and Korean.
The voice lines are spoken by the characters in the game and cover a wide range of topics, including greetings, combat, and story dialogue.
- **Language(s) (NLP):** Chinese, English, Japanese, Korean
## Dataset Creation
### Source Data
The data was obtained by unpacking the [Honkai: Star Rail](https://hsr.hoyoverse.com/) game.
#### Data Collection and Processing
Please refer to [StarRail-Voice](https://github.com/simon300000/starrail-voice) and [bnnm/wwiser-utils#15](https://github.com/bnnm/wwiser-utils/pull/15#issuecomment-1962182022) for more information on how the data was processed.
#### Who are the source data producers?
The source data producers are the developers of the game, HoYoverse.
### Annotations
The dataset contains official annotations from the game, including language, speaker name, and transcription.
## Bias, Risks, and Limitations
Annotations are incomplete. Some voice lines are missing speaker names and transcriptions.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset.
Speaker names can be partially inferred from the ingame filenames.
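When training on fully annotated rows only, a simple predicate over the dataset's columns suffices. A minimal sketch (`rows` is any iterable of dicts with the column names from the card):

```python
def keep_annotated(rows):
    """Keep only rows that carry both a speaker and a transcription
    (per the stats above, roughly 27% of rows lack each)."""
    return [r for r in rows if r.get("speaker") and r.get("transcription")]
```

With 🤗 Datasets, the same predicate can be passed to `dataset.filter(...)`.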
## Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved.
## More Information
I can upload wav files on demand.
|
Major-TOM/Core-S1RTC | Major-TOM | 2024-08-29T16:17:53Z | 27,095 | 12 | [
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"modality:geospatial",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.12095",
"region:us",
"earth-observation",
"remote-sensing",
"sentinel-1",
"sar",
"synthethic-aperture-radar",
"satellite",
"geospatial"
] | [] | 2024-03-15T14:13:22Z | null | ---
license: cc-by-sa-4.0
tags:
- earth-observation
- remote-sensing
- sentinel-1
- sar
- synthethic-aperture-radar
- satellite
- geospatial
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
features:
- name: product_id
dtype: string
- name: grid_cell
dtype: string
- name: product_datetime
dtype: string
- name: thumbnail
dtype: image
- name: vv
dtype: binary
- name: vh
dtype: binary
configs:
- config_name: default
data_files: images/*.parquet
- config_name: metadata
data_files: metadata.parquet
---
# Core-S1RTC
Contains global coverage of Sentinel-1 (RTC) patches, each of size 1,068 x 1,068 pixels.
| Source | Sensing Type | Number of Patches | Patch Size | Total Pixels |
|--------|--------------|-------------------|------------|--------------|
|Sentinel-1 RTC | Synthetic Aperture Radar |1,469,955|1,068 x 1,068 (10 m) | > 1.676 Trillion |
## Content
| Column | Details | Resolution |
|--------|---------|------------|
| VV | Received Linear Power in the VV Polarization | 10m |
| VH | Received Linear Power in the VH Polarization | 10m |
| thumbnail | Rescaled false colour<sup>1</sup> saved as png | 10m |
<sup>1</sup> False colour composites are made with decibel-scale values with red green and blue defined as ```R:VV G:VV+VH B:VH```. For each channel, a contrast-stretch is applied, transforming minimum-maximum to 0-255. This means bluer areas have relatively higher VH values, whilst brightness is a function of overall intensity. This is relative within each thumbnail because of the normalisation, and so cannot be compared across different samples.
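As a rough sketch of the recipe above (not the exact thumbnail pipeline; details such as clipping may differ):

```python
import numpy as np

def false_colour(vv, vh, eps=1e-6):
    """Decibel-scale the linear power, build R:VV, G:VV+VH, B:VH,
    then min-max stretch each channel independently to 0-255."""
    def to_db(x):
        return 10.0 * np.log10(np.maximum(x, eps))
    chans = [to_db(vv), to_db(vv + vh), to_db(vh)]
    out = []
    for c in chans:
        lo, hi = float(c.min()), float(c.max())
        out.append(np.round(255.0 * (c - lo) / max(hi - lo, eps)).astype(np.uint8))
    return np.stack(out, axis=-1)  # (H, W, 3) uint8 image
```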
## Spatial Coverage
This is a global monotemporal dataset. Nearly every piece of Earth captured by Sentinel-1 is contained at least once in this dataset (and only once, excluding some marginal overlaps). The coverage is about 35% lower than for the Core Sentinel-2 dataset due to sensor coverage limitations.
The following figure demonstrates the spatial coverage (only black pixels are absent):

## Example Use
Interface scripts are available at https://github.com/ESA-PhiLab/Major-TOM
Here's a sneak peek with a thumbnail image:
```python
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from io import BytesIO
from PIL import Image
PARQUET_FILE = 'part_03900' # parquet file number
ROW_INDEX = 42 # row-group index passed to read_row_group (each file holds about 500 rows)

url = "https://huggingface.co/datasets/Major-TOM/Core-S1RTC/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["thumbnail"]) as f:
    with pq.ParquetFile(f) as pf:
        first_row_group = pf.read_row_group(ROW_INDEX, columns=['thumbnail'])

stream = BytesIO(first_row_group['thumbnail'][0].as_py())
image = Image.open(stream)
```
## Cite
[](https://arxiv.org/abs/2402.12095/)
```latex
@inproceedings{Major_TOM,
title={Major TOM: Expandable Datasets for Earth Observation},
author={Alistair Francis and Mikolaj Czerkawski},
year={2024},
booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
eprint={2402.12095},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Powered by [Φ-lab, European Space Agency (ESA) 🛰️](https://huggingface.co/ESA-philab) |
codys12/LlamaKD | codys12 | 2024-08-21T10:28:23Z | 11,206 | 12 | [
"task_categories:text-generation",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"fineweb",
"fineweb-edu"
] | [
"text-generation",
"text-classification"
] | 2024-08-08T17:23:04Z | null | ---
license: apache-2.0
task_categories:
- text-generation
- text-classification
language:
- en
tags:
- fineweb
- fineweb-edu
pretty_name: Llama KD
size_categories:
- 100M<n<1B
---
# Dataset Card for Llama KD
This dataset contains the top-32 logprobs of Llama 3.1 405B on FineWeb-Edu data. Only sequences of at least 1024 tokens are used, truncated to 1024. This will eventually grow to 10B tokens.
### Dataset Sources
- **Repository:** https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
- **Blog** https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1
## Uses
Useful for knowledge distillation experiments
## Dataset Structure
inputs: [1024]
targets: [1024, 2, topk(32)]
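A hedged sketch of how such targets might drive a distillation loss. It assumes the `[1024, 2, topk(32)]` layout unpacks per position into (token ids, logprobs); verify this against the actual files before relying on it:

```python
import math

def kd_loss_at_position(student_logprobs, teacher_ids, teacher_logprobs):
    """Cross-entropy of a student against the teacher's renormalised
    top-k distribution at one sequence position."""
    probs = [math.exp(lp) for lp in teacher_logprobs]
    z = sum(probs)  # renormalise the top-k mass to sum to 1
    return -sum((p / z) * student_logprobs[tok]
                for tok, p in zip(teacher_ids, probs))
```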
|
deepklarity/top-flutter-packages | deepklarity | 2024-08-06T06:08:06Z | 91 | 6 | [
"license:cc",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-08-09T08:51:49Z | 1 | ---
license: cc
---
## Top Flutter Packages Dataset
Flutter is an open source framework by Google for building beautiful, natively compiled, multi-platform applications from a single codebase. It is gaining quite a bit of popularity because of the ability to code in a single language and run the result on Android, iOS, and the web.
This dataset contains a snapshot of Top 5000+ flutter/dart packages hosted on [Flutter package repository](https://pub.dev/)
The dataset was scraped in `August-2024`.
We aim to use this dataset to perform analysis and identify trends and get a bird's eye view of the rapidly evolving flutter ecosystem.
#### Maintainers:
[Ansh Kharbanda](https://www.linkedin.com/in/ansh-kharbanda-64b8711a6/), [Somen Dey](https://www.linkedin.com/in/somen-dey), [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR), [Keshaw Soni](https://twitter.com/SoniKeshaw), [Somya Gautam](http://linkedin.in/in/somya-gautam)
|
arbml/ashaar | arbml | 2024-07-14T23:18:48Z | 342 | 3 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-07-12T14:42:57Z | 1 | ---
dataset_info:
features:
- name: poem title
dtype: string
- name: poem meter
dtype: string
- name: poem verses
sequence: string
- name: poem theme
dtype: string
- name: poem url
dtype: string
- name: poet name
dtype: string
- name: poet description
dtype: string
- name: poet url
dtype: string
- name: poet era
dtype: string
- name: poet location
dtype: string
- name: poem description
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: href
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: title
dtype: string
- name: value
dtype: string
- name: children
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: align
dtype: string
- name: face
dtype: string
- name: nowrap
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: lang
dtype: string
- name: style
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: poem language type
dtype: string
splits:
- name: train
num_bytes: 633499128
num_examples: 254630
download_size: 277368475
dataset_size: 633499128
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# ashaar
Introducing Ashaar, the largest dataset for Arabic poetry.
# general statistics
| metric | value |
|-----------------|-----------|
| number of poems | 254,630 |
| number of baits | 3,857,429 |
| number of poets | 7,167 |
# License
This dataset is released under fair use for research development only. Poets have the sole right to take down any access to their work. The authors of the websites, also, have the right to take down any material that does not conform with that. This work should not be used for any commercial purposes.
|
NousResearch/CharacterCodex | NousResearch | 2024-06-17T01:27:49Z | 641 | 220 | [
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"language model"
] | [] | 2024-06-05T00:30:38Z | null | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
pretty_name: CharacterCodex
dataset_info:
features:
- name: media_type
dtype: string
- name: genre
dtype: string
- name: character_name
dtype: string
- name: media_source
dtype: string
- name: description
dtype: string
- name: scenario
dtype: string
tags:
- language model
---
# Dataset Card for Character Codex

## Dataset Summary
The Character Codex is a comprehensive dataset featuring popular characters from a wide array of media types and genres. Each entry includes detailed information about the character, the media source, and a unique scenario involving the character. This dataset is valuable for synthetic data, RAG for generative AI, writers, game developers, and fans who want to explore and utilize rich character descriptions for various creative projects.
## Dataset Structure
### Data Fields
- **media_type**: The type of media the character originates from (e.g., Webcomics, Novels, Movies, TV Shows).
- **genre**: The specific genre of the media type (e.g., Fantasy Webcomics, Martial Arts Fiction).
- **character_name**: The name of the character.
- **media_source**: The title of the media source where the character is from.
- **description**: A detailed description of the character, including their role and significance in the story.
- **scenario**: A creative scenario involving the character that can be used for interactive storytelling or role-playing purposes.
### Example Data
```json
[
{
"media_type": "Webcomics",
"genre": "Fantasy Webcomics",
"character_name": "Alana",
"media_source": "Saga",
"description": "Alana is one of the main characters from the webcomic \"Saga.\" She is a strong-willed and fiercely protective mother who is on the run with her family in a war-torn galaxy. The story blends elements of fantasy and science fiction, creating a rich and complex narrative.",
"scenario": "You are a fellow traveler in the galaxy needing help, and Alana offers her assistance while sharing stories of her family's struggles and triumphs."
},
{
"media_type": "Novels",
"genre": "Martial Arts Fiction",
"character_name": "Yilin",
"media_source": "The Smiling, Proud Wanderer",
"description": "Yilin is a young nun from the Hengshan Sect in Jin Yong's novel \"The Smiling, Proud Wanderer.\" Known for her innocence and kindness, she becomes friends with the protagonist Linghu Chong. Her gentle nature often puts her at odds with the violent world of martial arts.",
"scenario": "You are a fellow disciple of the Hengshan Sect seeking Yilin's comfort and advice after a particularly brutal conflict. Her gentle demeanor and compassionate words provide solace in a harsh world."
}
]
```
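For illustration, the fields of a record like those above can be assembled into a role-play prompt. The field names come from this card; the prompt template itself is ours, not part of the dataset:

```python
def character_prompt(record):
    """Assemble a role-play system prompt from one Character Codex record."""
    return (
        f"You are {record['character_name']} from {record['media_source']} "
        f"({record['media_type']} / {record['genre']}). {record['description']} "
        f"Scenario: {record['scenario']}"
    )
```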
# Usage
## Accessing the Data
To load the dataset in your project, you can use the following code snippet:
```python
from datasets import load_dataset
dataset = load_dataset("NousResearch/CharacterCodex")
```
## Use Cases
- Seed Data: Useful for generating synthetic data or use in interactive experiences with generative AI.
- Creative Writing: Use the detailed character descriptions and scenarios to inspire creative writing projects.
- Educational: Study character development and storytelling techniques from various genres and media types.
# Dataset Creation
## Data Collection
The characters in this dataset were meticulously selected from a diverse range of media, ensuring a rich and varied collection. The descriptions and scenarios were crafted to provide insightful and engaging context for each character.
## Annotations
Each character entry includes:
- The media type (e.g., Novel, Magazine, Anime), the genre (e.g., action, historical), and the specific source/title of the media they are from (e.g., Pokemon)
- A detailed description highlighting the character's role, traits, and significance.
- A scenario designed to stimulate interactive and immersive experiences.
# Citation
```bibtex
@dataset{character_codex_2024,
title={Character Codex},
  author={Teknium},
year={2024},
note={https://huggingface.co/datasets/NousResearch/CharacterCodex}
}
``` |
common-canvas/commoncatalog-cc-by-sa | common-canvas | 2024-05-16T19:41:37Z | 115,865 | 8 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | 2023-10-19T02:05:17Z | null | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-SA
This dataset is a large collection of high-resolution Creative Commons images (composed of different licenses, see paper Table 1 in the Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making this one of the highest resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of about 4 GB. Each subfolder within covers a resolution range of the images and their respective aspect ratios.
The dataset is also divided along images licensed for commercial use (C) and those that are not (NC).
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See Yahoo Flickr Creative Commons 100M dataset for more information. The information was collected circa 2014 and known to have a bias towards internet connected Western countries. Some areas such as the global south lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co/Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co/Skylion007)
|
lmms-lab/DocVQA | lmms-lab | 2024-04-18T05:14:35Z | 11,456 | 39 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2007.00398",
"region:us"
] | [] | 2024-01-22T16:29:32Z | null | ---
license: apache-2.0
dataset_info:
- config_name: DocVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: question_types
sequence: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 5659006943.631
# num_examples: 39463
- name: validation
num_bytes: 2532447207.066
num_examples: 5349
- name: test
num_bytes: 2500408525.732
num_examples: 5188
download_size: 9555791945
dataset_size: 10691862676.428999
- config_name: InfographicVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_type
sequence: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: operation/reasoning
sequence: string
- name: ocr
dtype: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 11559694546.32
# num_examples: 23946
- name: validation
num_bytes: 1863177404.253
num_examples: 2801
- name: test
num_bytes: 1851304047.712
num_examples: 3288
download_size: 2544892079
dataset_size: 15274175998.285
configs:
- config_name: DocVQA
data_files:
# - split: train
# path: DocVQA/train-*
- split: validation
path: DocVQA/validation-*
- split: test
path: DocVQA/test-*
- config_name: InfographicVQA
data_files:
# - split: train
# path: InfographicVQA/train-*
- split: validation
path: InfographicVQA/validation-*
- split: test
path: InfographicVQA/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [DocVQA](https://arxiv.org/abs/2007.00398). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
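DocVQA answers are conventionally scored with ANLS (Average Normalized Levenshtein Similarity). A minimal self-contained sketch of the per-question score (the 0.5 threshold is the conventional setting; check your harness's exact normalization before comparing numbers):

```python
def levenshtein(a, b):
    """Edit distance via dynamic programming over two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(prediction, answers, tau=0.5):
    """Best normalized similarity against any gold answer, zeroed below tau."""
    best = 0.0
    for ans in answers:
        p, g = prediction.strip().lower(), ans.strip().lower()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)
        best = max(best, 1.0 - nl if nl < tau else 0.0)
    return best
```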
```
@article{mathew2020docvqa,
  title={DocVQA: A Dataset for VQA on Document Images},
author={Mathew, Minesh and Karatzas, Dimosthenis and Manmatha, R and Jawahar, CV},
journal={arXiv preprint arXiv:2007.00398},
year={2020}
}
```
|
Amod/mental_health_counseling_conversations | Amod | 2024-04-05T08:30:03Z | 3,928 | 358 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:openrail",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1581",
"region:us",
"medical"
] | [
"text-generation",
"question-answering"
] | 2023-06-22T12:52:50Z | null | ---
license: openrail
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
# Amod/mental_health_counseling_conversations
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** Bertagnolli, Nicolas (2020). Counsel chat: Bootstrapping high-quality therapy data. Towards Data Science. https://towardsdatascience.com/counsel-chat
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists. The dataset is intended to be used for fine-tuning language models to improve their ability to provide mental health advice.
### Supported Tasks and Leaderboards
The dataset supports the task of text generation, particularly for generating advice or suggestions in response to a mental health-related question.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a 'Context' and a 'Response'. 'Context' contains the question asked by a user, and 'Response' contains the corresponding answer provided by a psychologist.
### Data Fields
- 'Context': a string containing the question asked by a user
- 'Response': a string containing the corresponding answer provided by a psychologist
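For fine-tuning, a row can be mapped to a chat-style example. The field names come from above; the two-turn message schema is one common convention, not prescribed by the dataset:

```python
def to_chat_example(record):
    """Map a dataset row to user/assistant messages for instruction tuning."""
    return [
        {"role": "user", "content": record["Context"]},
        {"role": "assistant", "content": record["Response"]},
    ]
```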
### Data Splits
The dataset has no predefined splits. Users can create their own splits as needed.
## Dataset Creation
### Curation Rationale
This dataset was created to aid in the development of AI models that can provide mental health advice or guidance. The raw data was meticulously cleaned to only include the conversations.
### Source Data
The data was sourced from two online counseling and therapy platforms. The raw data can be found [here](https://github.com/nbertagnolli/counsel-chat/tree/master/data).
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All data was anonymized and no personally identifiable information is included. |
alvations/c4p0-x1-de-en | alvations | 2024-03-24T03:55:50Z | 11,110 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-03-23T10:16:43Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 32165
num_examples: 31
download_size: 22249
dataset_size: 32165
configs:
- config_name: default
data_files:
- split: train
path: c0d4dc8660289947/train-*
---
|
google/air_dialogue | google | 2024-03-07T15:22:15Z | 264 | 19 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:conversational",
"task_ids:dialogue-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- conversational
- dialogue-generation
- dialogue-modeling
- language-modeling
- masked-language-modeling
pretty_name: AirDialogue
dataset_info:
- config_name: air_dialogue_data
features:
- name: action
struct:
- name: status
dtype: string
- name: name
dtype: string
- name: flight
sequence: int32
- name: intent
struct:
- name: return_month
dtype: string
- name: return_day
dtype: string
- name: max_price
dtype: int32
- name: departure_airport
dtype: string
- name: max_connections
dtype: int32
- name: departure_day
dtype: string
- name: goal
dtype: string
- name: departure_month
dtype: string
- name: name
dtype: string
- name: return_airport
dtype: string
- name: timestamps
sequence: int64
- name: dialogue
sequence: string
- name: expected_action
struct:
- name: status
dtype: string
- name: name
dtype: string
- name: flight
sequence: int32
- name: search_info
list:
- name: button_name
dtype: string
- name: field_name
dtype: string
- name: field_value
dtype: string
- name: timestmamp
dtype: int64
- name: correct_sample
dtype: bool_
splits:
- name: train
num_bytes: 353718365
num_examples: 321459
- name: validation
num_bytes: 44441818
num_examples: 40363
download_size: 141766743
dataset_size: 398160183
- config_name: air_dialogue_kb
features:
- name: kb
list:
- name: airline
dtype: string
- name: class
dtype: string
- name: departure_airport
dtype: string
- name: departure_day
dtype: string
- name: departure_month
dtype: string
- name: departure_time_num
dtype: int32
- name: flight_number
dtype: int32
- name: num_connections
dtype: int32
- name: price
dtype: int32
- name: return_airport
dtype: string
- name: return_day
dtype: string
- name: return_month
dtype: string
- name: return_time_num
dtype: int32
- name: reservation
dtype: int32
splits:
- name: train
num_bytes: 782590970
num_examples: 321459
- name: validation
num_bytes: 98269609
num_examples: 40363
download_size: 57883938
dataset_size: 880860579
configs:
- config_name: air_dialogue_data
data_files:
- split: train
path: air_dialogue_data/train-*
- split: validation
path: air_dialogue_data/validation-*
default: true
- config_name: air_dialogue_kb
data_files:
- split: train
path: air_dialogue_kb/train-*
- split: validation
path: air_dialogue_kb/validation-*
---
# Dataset Card for air_dialogue
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
- **Repository:** https://github.com/google/airdialogue
- **Paper:** https://aclanthology.org/D18-1419/
- **Leaderboard:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
- **Point of Contact:** [AirDialogue-Google](mailto:[email protected])
- **Point of Contact:** [Wei Wei](mailto:[email protected])
### Dataset Summary
AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context generator which provides travel and flight restrictions. Human annotators are then asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions.
News in v1.3:
- We have included the test split of the AirDialogue dataset.
- We have included the meta context for OOD2 in the original AirDialogue paper.
### Supported Tasks and Leaderboards
We use perplexity and BLEU score to evaluate the quality of the language generated by the model. We also compare the dialogue state *s* generated by the model against the ground-truth state *s0*. Two categories of metrics are used: exact match scores and scaled scores.
The inference competition & leaderboard can be found here:
https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
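The exact-match evaluation can be sketched as follows. This is an illustrative reimplementation, not the official AirDialogue scorer, which additionally computes scaled scores:

```python
# Illustrative exact-match check between a model's predicted final action
# and the ground-truth `expected_action` of an `air_dialogue_data` instance.

def exact_match(predicted: dict, expected: dict) -> bool:
    """Return True only if status, name, and booked flights all match exactly."""
    return (
        predicted.get("status") == expected.get("status")
        and predicted.get("name") == expected.get("name")
        and sorted(predicted.get("flight", [])) == sorted(expected.get("flight", []))
    )

# Sample in the dataset's action format.
predicted = {"status": "book", "name": "Emily Edwards", "flight": [1027]}
expected = {"status": "book", "name": "Emily Edwards", "flight": [1027]}
print(exact_match(predicted, expected))  # -> True
```

Any dialogue agent whose final state fails this check is counted as a failed dialogue.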
### Languages
The text in the dataset is in English. The BCP 47 code is `en`
## Dataset Structure
### Data Instances
The data is provided in two sets of files: the first contains the dialogues (`air_dialogue_data`) and the second the knowledge base (`air_dialogue_kb`).
BuilderConfig: `air_dialogue_data`
```
{"action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "intent": {"return_month": "June", "return_day": "14", "max_price": 200, "departure_airport": "DFW", "return_time": "afternoon", "max_connections": 1, "departure_day": "12", "goal": "book", "departure_month": "June", "name": "Emily Edwards", "return_airport": "IAD"}, "timestamps": [1519233239, 1519233244, 1519233249, 1519233252, 1519233333, 1519233374, 1519233392, 1519233416, 1519233443, 1519233448, 1519233464, 1519233513, 1519233525, 1519233540, 1519233626, 1519233628, 1519233638], "dialogue": ["customer: Hello.", "agent: Hello.", "customer: My name is Emily Edwards.", "agent: How may I help you out?", "customer: I need some help in my flight ticket reservation to attend a convocation meeting, can you please help me?", "agent: Sure, I will help you out. May I know your travelling dates please?", "customer: Thank you and my dates are 06/12 and back on 06/14.", "agent: Can I know your airport codes?", "customer: The airport codes are from DFW to IAD.", "agent: Ok, please wait a moment.", "customer: Sure.", "agent: There is a flight with connection 1 and price 200, can I proceed with this flight?", "customer: Yes, do proceed with booking.", "agent: Ok, your ticket has been booked.", "customer: Thank you for your assistance in my flight ticket reservation.", "agent: Thank you for choosing us.", "customer: You are welcome."], "expected_action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "correct_sample": true}
```
BuilderConfig: `air_dialogue_kb`
```
{"kb": [{"return_airport": "DTW", "airline": "Spirit", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1000, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1001, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 15, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 500}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1002, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 13, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 600}, {"return_airport": "IAD", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1003, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 5, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1004, "departure_month": "June", "departure_time_num": 9, "class": "economy", "return_time_num": 11, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "AA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1005, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 17, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1006, "departure_month": "June", "departure_time_num": 10, "class": "economy", 
"return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1007, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 20, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "AA", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1008, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1009, "departure_month": "June", "departure_time_num": 18, "class": "economy", "return_time_num": 6, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "Frontier", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1010, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1011, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 100}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1012, "departure_month": "June", "departure_time_num": 13, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "IAD", 
"flight_number": 1013, "departure_month": "June", "departure_time_num": 16, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1014, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1015, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 300}, {"return_airport": "DTW", "airline": "UA", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1016, "departure_month": "June", "departure_time_num": 10, "class": "economy", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1017, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 23, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1018, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1019, "departure_month": "June", "departure_time_num": 7, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": 
"DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1020, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 200}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1021, "departure_month": "June", "departure_time_num": 11, "class": "business", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 1000}, {"return_airport": "IAD", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1022, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 14, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 200}, {"return_airport": "IAD", "airline": "Frontier", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1023, "departure_month": "June", "departure_time_num": 19, "class": "economy", "return_time_num": 23, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "UA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1024, "departure_month": "June", "departure_time_num": 11, "class": "economy", "return_time_num": 19, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Hawaiian", "departure_day": "11", "departure_airport": "IAD", "flight_number": 1025, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1026, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 18, "return_month": 
"June", "return_day": "14", "num_connections": 1, "price": 300}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1027, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 15, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "IAD", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1028, "departure_month": "June", "departure_time_num": 23, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Spirit", "departure_day": "11", "departure_airport": "DTW", "flight_number": 1029, "departure_month": "June", "departure_time_num": 22, "class": "business", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 800}], "reservation": 0}
```
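The agent's core task, filtering the knowledge base against the customer's intent, can be sketched as below. Field names follow the `air_dialogue_kb` and intent schemas, but this is a simplified illustration, not the generation code used to build the dataset:

```python
# Simplified flight search over an `air_dialogue_kb`-style knowledge base:
# keep flights matching the customer's airports and hard limits.

def matching_flights(kb: list, intent: dict) -> list:
    """Return flight numbers satisfying the intent's constraints."""
    return [
        flight["flight_number"]
        for flight in kb
        if flight["departure_airport"] == intent["departure_airport"]
        and flight["return_airport"] == intent["return_airport"]
        and flight["price"] <= intent["max_price"]
        and flight["num_connections"] <= intent["max_connections"]
    ]

kb = [
    {"flight_number": 1027, "departure_airport": "DFW", "return_airport": "IAD",
     "price": 200, "num_connections": 1},
    {"flight_number": 1021, "departure_airport": "DFW", "return_airport": "IAD",
     "price": 1000, "num_connections": 0},
]
intent = {"departure_airport": "DFW", "return_airport": "IAD",
          "max_price": 200, "max_connections": 1}
print(matching_flights(kb, intent))  # -> [1027]
```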
### Data Fields
BuilderConfig: `air_dialogue_data`:
Provides the customer context, dialogue states, and environment.
key name | Description |
|---|---|
|'search_info' | search actions performed by the customer |
|'action' | Action taken by the agent |
|'intent' | Intents from the conversation |
|'timestamps' | Timestamp for each of the dialogues |
|'dialogue' | Dialogue recorded between agent & customer |
|'expected_action' | Expected action from agent (human-annotated)|
|'correct_sample' | whether action performed by agent was same as expected_action |
BuilderConfig: `air_dialogue_kb`:
Provides the agent context _ca_ = (_db_, _r_).
key name | Description |
|---|---|
|'kb' | Available flights in the database |
|'reservation' | whether customer has an existing reservation|
### Data Splits
Data is split into train, dev, and test in the ratio 80%, 10%, and 10%.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
To collect this dataset, we create a context generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
No personal or sensitive information is stored.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[AirDialogue team](mailto:[email protected])
For issues regarding the Hugging Face Dataset Hub implementation, contact [Aakash Gupta](mailto:aakashg80@gmail.com)
### Licensing Information
cc-by-nc-4.0
### Citation Information
```bibtex
@inproceedings{wei-etal-2018-airdialogue,
title = "{A}ir{D}ialogue: An Environment for Goal-Oriented Dialogue Research",
author = "Wei, Wei and
Le, Quoc and
Dai, Andrew and
Li, Jia",
editor = "Riloff, Ellen and
Chiang, David and
Hockenmaier, Julia and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D18-1419",
doi = "10.18653/v1/D18-1419",
pages = "3844--3854",
abstract = "Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.",
}
```
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
|
aps/super_glue | aps | 2024-01-29T13:07:56Z | 364,503 | 168 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:extended|other",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"arxiv:1905.00537",
"region:us",
"superglue",
"NLU",
"natural language understanding"
] | [
"text-classification",
"token-classification",
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- question-answering
task_ids:
- natural-language-inference
- word-sense-disambiguation
- coreference-resolution
- extractive-qa
paperswithcode_id: superglue
pretty_name: SuperGLUE
tags:
- superglue
- NLU
- natural language understanding
dataset_info:
- config_name: boolq
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 2107997
num_examples: 3245
- name: train
num_bytes: 6179206
num_examples: 9427
- name: validation
num_bytes: 2118505
num_examples: 3270
download_size: 4118001
dataset_size: 10405708
- config_name: cb
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': contradiction
'2': neutral
splits:
- name: test
num_bytes: 93660
num_examples: 250
- name: train
num_bytes: 87218
num_examples: 250
- name: validation
num_bytes: 21894
num_examples: 56
download_size: 75482
dataset_size: 202772
- config_name: copa
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': choice1
'1': choice2
splits:
- name: test
num_bytes: 60303
num_examples: 500
- name: train
num_bytes: 49599
num_examples: 400
- name: validation
num_bytes: 12586
num_examples: 100
download_size: 43986
dataset_size: 122488
- config_name: multirc
features:
- name: paragraph
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: idx
struct:
- name: paragraph
dtype: int32
- name: question
dtype: int32
- name: answer
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 14996451
num_examples: 9693
- name: train
num_bytes: 46213579
num_examples: 27243
- name: validation
num_bytes: 7758918
num_examples: 4848
download_size: 1116225
dataset_size: 68968948
- config_name: record
features:
- name: passage
dtype: string
- name: query
dtype: string
- name: entities
sequence: string
- name: entity_spans
sequence:
- name: text
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: answers
sequence: string
- name: idx
struct:
- name: passage
dtype: int32
- name: query
dtype: int32
splits:
- name: train
num_bytes: 179232052
num_examples: 100730
- name: validation
num_bytes: 17479084
num_examples: 10000
- name: test
num_bytes: 17200575
num_examples: 10000
download_size: 51757880
dataset_size: 213911711
- config_name: rte
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 975799
num_examples: 3000
- name: train
num_bytes: 848745
num_examples: 2490
- name: validation
num_bytes: 90899
num_examples: 277
download_size: 750920
dataset_size: 1915443
- config_name: wic
features:
- name: word
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 180593
num_examples: 1400
- name: train
num_bytes: 665183
num_examples: 5428
- name: validation
num_bytes: 82623
num_examples: 638
download_size: 396213
dataset_size: 928399
- config_name: wsc
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 31572
num_examples: 146
- name: train
num_bytes: 89883
num_examples: 554
- name: validation
num_bytes: 21637
num_examples: 104
download_size: 32751
dataset_size: 143092
- config_name: wsc.fixed
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: test
num_bytes: 31568
num_examples: 146
- name: train
num_bytes: 89883
num_examples: 554
- name: validation
num_bytes: 21637
num_examples: 104
download_size: 32751
dataset_size: 143088
- config_name: axb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 238392
num_examples: 1104
download_size: 33950
dataset_size: 238392
- config_name: axg
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 53581
num_examples: 356
download_size: 10413
dataset_size: 53581
---
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://super.gluebenchmark.com/
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://arxiv.org/abs/1905.00537
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 58.36 MB
- **Size of the generated dataset:** 249.57 MB
- **Total amount of disk used:** 307.94 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.24 MB
- **Total amount of disk used:** 0.27 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 4.12 MB
- **Size of the generated dataset:** 10.40 MB
- **Total amount of disk used:** 14.52 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.20 MB
- **Total amount of disk used:** 0.28 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.13 MB
- **Total amount of disk used:** 0.17 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
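As an illustration of how these fields combine at training time, a COPA example can be rendered as a two-choice prompt. The connective convention ("because" for cause questions, "so" for effect questions) is a common formatting choice, not something prescribed by SuperGLUE:

```python
# Render a COPA-style example (fields as in the `copa` config) into a
# two-choice prompt; the connective depends on the `question` field.

def copa_prompt(example: dict) -> str:
    connective = "because" if example["question"] == "cause" else "so"
    choices = "\n".join(
        f"  ({i}) {choice}"
        for i, choice in enumerate([example["choice1"], example["choice2"]])
    )
    return f'{example["premise"]} {connective}...\n{choices}'

# A well-known example from the original COPA paper.
example = {
    "premise": "The man broke his toe.",
    "question": "cause",
    "choice1": "He got a hole in his sock.",
    "choice2": "He dropped a hammer on his foot.",
    "label": 1,  # choice2
}
print(copa_prompt(example))
```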
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The primary SuperGLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset, but it is our understanding that these licenses allow for their use and redistribution in a research context.
### Citation Information
If you use SuperGLUE, please cite all the datasets you use in any papers that come out of your work. In addition, we encourage you to use the following BibTeX citation for SuperGLUE itself:
```
@article{wang2019superglue,
title={Super{GLUE}: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Alex Wang and Yada Pruksachatkun and Nikita Nangia and Amanpreet Singh and Julian Michael and Felix Hill and Omer Levy and Samuel R. Bowman},
journal={arXiv preprint 1905.00537},
year={2019}
}
@inproceedings{clark2019boolq,
title={{B}ool{Q}: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={Proceedings of NAACL-HLT 2019},
year={2019}
}
@inproceedings{demarneffe:cb,
title={{The CommitmentBank}: Investigating projection in naturally occurring discourse},
author={De Marneffe, Marie-Catherine and Simons, Mandy and Tonhauser, Judith},
note={To appear in proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/},
year={2019}
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S.},
booktitle={2011 AAAI Spring Symposium Series},
year={2011}
}
@inproceedings{khashabi2018looking,
title={Looking beyond the surface: A challenge set for reading comprehension over multiple sentences},
author={Khashabi, Daniel and Chaturvedi, Snigdha and Roth, Michael and Upadhyay, Shyam and Roth, Dan},
booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
pages={252--262},
year={2018}
}
@article{zhang2018record,
title={{ReCoRD}: Bridging the Gap between Human and Machine Commonsense Reading Comprehension},
author={Sheng Zhang and Xiaodong Liu and Jingjing Liu and Jianfeng Gao and Kevin Duh and Benjamin Van Durme},
journal={arXiv preprint 1810.12885},
year={2018}
}
@incollection{dagan2006pascal,
title={The {PASCAL} recognising textual entailment challenge},
author={Dagan, Ido and Glickman, Oren and Magnini, Bernardo},
  booktitle={Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising textual entailment},
pages={177--190},
year={2006},
publisher={Springer}
}
@article{bar2006second,
title={The second {PASCAL} recognising textual entailment challenge},
author={Bar Haim, Roy and Dagan, Ido and Dolan, Bill and Ferro, Lisa and Giampiccolo, Danilo and Magnini, Bernardo and Szpektor, Idan},
year={2006}
}
@inproceedings{giampiccolo2007third,
title={The third {PASCAL} recognizing textual entailment challenge},
author={Giampiccolo, Danilo and Magnini, Bernardo and Dagan, Ido and Dolan, Bill},
booktitle={Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing},
pages={1--9},
year={2007},
organization={Association for Computational Linguistics},
}
@article{bentivogli2009fifth,
title={The Fifth {PASCAL} Recognizing Textual Entailment Challenge},
author={Bentivogli, Luisa and Dagan, Ido and Dang, Hoa Trang and Giampiccolo, Danilo and Magnini, Bernardo},
booktitle={TAC},
year={2009}
}
@inproceedings{pilehvar2018wic,
title={{WiC}: The Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations},
author={Pilehvar, Mohammad Taher and Camacho-Collados, Jose},
booktitle={Proceedings of NAACL-HLT},
year={2019}
}
@inproceedings{rudinger2018winogender,
title={Gender Bias in Coreference Resolution},
author={Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and {Van Durme}, Benjamin},
booktitle={Proceedings of NAACL-HLT},
year={2018}
}
@inproceedings{poliak2018dnc,
title={Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation},
author={Poliak, Adam and Haldar, Aparajita and Rudinger, Rachel and Hu, J. Edward and Pavlick, Ellie and White, Aaron Steven and {Van Durme}, Benjamin},
booktitle={Proceedings of EMNLP},
year={2018}
}
@inproceedings{levesque2011winograd,
title={The {W}inograd schema challenge},
author={Levesque, Hector J and Davis, Ernest and Morgenstern, Leora},
booktitle={{AAAI} Spring Symposium: Logical Formalizations of Commonsense Reasoning},
volume={46},
pages={47},
year={2011}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
ncbi/pubmed | ncbi | 2024-01-26T17:52:23Z | 530 | 127 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"region:us",
"citation-estimation"
] | [
"text-generation",
"fill-mask",
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modeling
- text-scoring
- topic-classification
paperswithcode_id: pubmed
pretty_name: PubMed
tags:
- citation-estimation
dataset_info:
- config_name: '2024'
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
splits:
- name: train
num_bytes: 54723097181
num_examples: 36555430
download_size: 45202943276
dataset_size: 54723097181
---
# Dataset Card for PubMed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nlm.nih.gov/databases/download/pubmed_medline.html](https://www.nlm.nih.gov/databases/download/pubmed_medline.html)
- **Documentation:** [https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html](https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [National Center for Biotechnology Information](mailto:[email protected])
### Dataset Summary
PubMed comprises more than 36 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites.
NLM produces a baseline set of PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year.
- Last Updated December 15, 2023
Each day, NLM produces update files that include new, revised, and deleted citations.
Source: https://ftp.ncbi.nlm.nih.gov/pubmed/README.txt
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
Bear in mind that the data comes from XML files whose many tags are hard to reflect
in a concise JSON format. Tags and lists do not map naturally between XML and JSON,
so this library had to make some choices about the data layout. The "Journal" info was
dropped altogether, as it would have led to many fields being empty all the time.
The hierarchy is also a bit unnatural, but the choice was made to stay as close as
possible to the original data, in case future NLM releases change the schema.
The Author struct has been kept and contains "ForeName", "LastName", "Initials", and "CollectiveName".
(All of these fields are always present, but only some will be filled.)
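Because every author field is always present but only some are filled, downstream code has to decide which fields to read. A minimal sketch of one way to do this (the helper name and sample records below are illustrative, not part of the dataset API):

```python
def author_display_name(author: dict) -> str:
    """Return a printable name for a PubMed-style author record.

    Every key is present in the data; collective authors fill only
    "CollectiveName", while individuals fill the name parts instead.
    """
    if author.get("CollectiveName"):
        return author["CollectiveName"]
    parts = [author.get("ForeName", ""), author.get("LastName", "")]
    return " ".join(p for p in parts if p)

# Sample records shaped like the dataset's Author entries:
individual = {"LastName": "Doe", "ForeName": "John", "Initials": "JD", "CollectiveName": ""}
collective = {"LastName": "", "ForeName": "", "Initials": "", "CollectiveName": "The Manhattan Project"}

print(author_display_name(individual))  # John Doe
print(author_display_name(collective))  # The Manhattan Project
```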
### Data Instances
```json
{
"MedlineCitation": {
"PMID": 0,
"DateCompleted": {"Year": 0, "Month": 0, "Day": 0},
"NumberOfReferences": 0,
"DateRevised": {"Year": 0, "Month": 0, "Day": 0},
"Article": {
"Abstract": {"AbstractText": "Some abstract (can be missing)" },
"ArticleTitle": "Article title",
      "AuthorList": {"Author": [
        {"LastName": "Doe", "ForeName": "John", "Initials": "JD", "CollectiveName": ""},
        {"LastName": "", "ForeName": "", "Initials": "", "CollectiveName": "The Manhattan Project"}
      ]},
"Language": "en",
"GrantList": {
"Grant": [],
},
"PublicationTypeList": {"PublicationType": []},
},
"MedlineJournalInfo": {"Country": "France"},
"ChemicalList": {"Chemical": [{
"RegistryNumber": "XX",
"NameOfSubstance": "Methanol"
}]},
"CitationSubset": "AIM",
"MeshHeadingList": {
"MeshHeading": [],
},
},
"PubmedData": {
"ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"},
"PublicationStatus": "ppublish",
"History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]},
    "ReferenceList": [{"Citation": "Somejournal", "CitationId": 1}],
},
}
```
### Data Fields
The main fields that will probably interest people are:
- "MedlineCitation" > "Article" > "AuthorList" > "Author"
- "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
- "MedlineCitation" > "Article" > "ArticleTitle"
- "MedlineCitation" > "ChemicalList" > "Chemical"
- "MedlineCitation" > "NumberOfReferences"
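Since these fields are deeply nested, a small safe accessor can help when scanning records. A minimal sketch (the `get_nested` helper and the toy row below are illustrative, not part of the `datasets` API):

```python
def get_nested(record: dict, *keys, default=""):
    """Walk a chain of dict keys, returning `default` if any level is missing."""
    current = record
    for key in keys:
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

# A toy stand-in for a real row:
row = {"MedlineCitation": {"Article": {"Abstract": {"AbstractText": "Some abstract"},
                                       "ArticleTitle": "Article title"}}}

print(get_nested(row, "MedlineCitation", "Article", "ArticleTitle"))   # Article title
print(get_nested(row, "MedlineCitation", "ChemicalList", default=[]))  # []
```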
### Data Splits
There are no splits in this dataset. It is given as is.
## Dataset Creation
### Curation Rationale
The use of "Medline" in an element name does not mean the record represents a citation from a MEDLINE-selected journal. When the NLM DTDs and XML elements were first created, MEDLINE records were the only data exported. Now NLM exports citations other than MEDLINE records. To minimize unnecessary disruption to users of the data, NLM has retained the original element names (e.g., MedlineCitation, MedlineJournalInfo, MedlineTA).
Policies affecting data creation have evolved over the years. Some PubMed records are added or revised well after the cited article was first published. In these cases, on occasion an element that had not yet been created when the article was published may appear on the record. For example, the Abstract element was not created until 1975, but some records published before 1975 but added to PubMed after 1975 contain <Abstract>. It is also possible that an element may be treated differently from the way it would have been treated had the record been created or maintained near the time the article was published. For example, the number of <Author> occurrences can diverge from the policies stated in the NLM author indexing policy (https://pubmed.ncbi.nlm.nih.gov/help/#author-indexing-policy). Lastly, as of October 2016, the publisher of the original article has the capability to edit the PubMed record’s citation data, with the exception of MeSH data, using the PubMed Data Management system. PubMed record data for older citations, therefore, may contain data for elements that didn’t exist when the citation was created.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[National Library of Medicine Terms and Conditions](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)
Downloading PubMed data from the National Library of Medicine FTP servers indicates your acceptance of the following Terms and Conditions. No charges, usage fees or royalties are paid to NLM for these data.
#### PubMed Specific Terms:
NLM freely provides PubMed data. Please note some abstracts may be protected by copyright.
#### General Terms and Conditions
Users of the data agree to:
- acknowledge NLM as the source of the data in a clear and conspicuous manner,
- NOT use the PubMed wordmark or the PubMed logo in association or in connection with user's or any other party's product or service.
- NOT adopt, use, or seek to register any mark or trade name confusingly similar to or suggestive of the PubMed wordmark or PubMed logo
- NOT to indicate or imply that NLM/NIH/HHS has endorsed its products/services/applications.
Users who republish or redistribute the data (services, products or raw data) agree to:
- maintain the most current version of all distributed data, or
- make known in a clear and conspicuous manner that the products/services/applications do not reflect the most current/accurate data available from NLM.
These data are produced with a reasonable standard of care, but NLM makes no warranties express or implied, including no warranty of merchantability or fitness for particular purpose, regarding the accuracy or completeness of the data. Users agree to hold NLM and the U.S. Government harmless from any liability resulting from errors in the data. NLM disclaims any liability for any consequences due to use, misuse, or interpretation of information contained or not contained in the data.
NLM does not provide legal advice regarding copyright, fair use, or other aspects of intellectual property rights. See the NLM Copyright page: https://www.nlm.nih.gov/web_policies.html#copyright
NLM reserves the right to change the type and format of its machine-readable data. NLM will take reasonable steps to inform users of any changes to the format of the data before the data are distributed via the announcement section or subscription to email and RSS updates.
The PubMed wordmark and the PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.
### Citation Information
[Courtesy of the U.S. National Library of Medicine](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
|
IWSLT/mt_eng_vietnamese | IWSLT | 2024-01-18T11:09:37Z | 296 | 26 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"language:vi",
"license:unknown",
"size_categories:100K<n<1M",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
multilinguality:
- multilingual
language:
- en
- vi
license:
- unknown
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MtEngVietnamese
dataset_info:
- config_name: iwslt2015-vi-en
features:
- name: translation
dtype:
translation:
languages:
- vi
- en
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
- config_name: iwslt2015-en-vi
features:
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
---
# Dataset Card for mt_eng_vietnamese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A preprocessed dataset for IWSLT'15 English-Vietnamese machine translation.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English, Vietnamese
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'translation': {
'en': 'In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse of the massive scientific effort behind the bold headlines on climate change , with her team -- one of thousands who contributed -- taking a risky flight over the rainforest in pursuit of data on a key molecule .',
'vi': 'Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu sơ lược về những nỗ lực khoa học miệt mài đằng sau những tiêu đề táo bạo về biến đổi khí hậu , cùng với đoàn nghiên cứu của mình -- hàng ngàn người đã cống hiến cho dự án này -- một chuyến bay mạo hiểm qua rừng già để tìm kiếm thông tin về một phân tử then chốt .'
}
}
```
### Data Fields
- translation:
  - en: text in English
  - vi: text in Vietnamese
### Data Splits
train: 133318, validation: 1269, test: 1269
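Each row stores one parallel sentence pair under the `translation` key, keyed by language code. A minimal sketch of consuming such rows (the sample rows here are illustrative, not actual dataset contents):

```python
# Each example carries one parallel sentence pair under "translation".
rows = [
    {"translation": {"en": "Hello .", "vi": "Xin chào ."}},
    {"translation": {"en": "Thank you .", "vi": "Cảm ơn ."}},
]

# Split the parallel corpus into aligned source/target lists for training.
sources = [r["translation"]["en"] for r in rows]
targets = [r["translation"]["vi"] for r in rows]

print(sources)  # ['Hello .', 'Thank you .']
print(targets)  # ['Xin chào .', 'Cảm ơn .']
```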
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{Luong-Manning:iwslt15,
Address = {Da Nang, Vietnam},
Author = {Luong, Minh-Thang and Manning, Christopher D.},
Booktitle = {International Workshop on Spoken Language Translation},
Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},
Year = {2015}}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset. |
Helsinki-NLP/kde4 | Helsinki-NLP | 2024-01-18T11:07:20Z | 1,073 | 22 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:ar",
"language:as",
"language:ast",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:crh",
"language:cs",
"language:csb",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hsb",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:lb",
"language:lt",
"language:lv",
"language:mai",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:mt",
"language:nb",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:nso",
"language:oc",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:rw",
"language:se",
"language:si",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:uz",
"language:vi",
"language:wa",
"language:xh",
"language:zh",
"license:unknown",
"size_categories:100K<n<1M",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- as
- ast
- be
- bg
- bn
- br
- ca
- crh
- cs
- csb
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- ha
- he
- hi
- hne
- hr
- hsb
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- lb
- lt
- lv
- mai
- mk
- ml
- mr
- ms
- mt
- nb
- nds
- ne
- nl
- nn
- nso
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- rw
- se
- si
- sk
- sl
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- uz
- vi
- wa
- xh
- zh
language_bcp47:
- bn-IN
- en-GB
- pt-BR
- zh-CN
- zh-HK
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: KDE4
dataset_info:
- config_name: fi-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 8845933
num_examples: 101593
download_size: 2471355
dataset_size: 8845933
- config_name: it-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 8827049
num_examples: 109003
download_size: 2389051
dataset_size: 8827049
- config_name: nl-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 22294586
num_examples: 188454
download_size: 6203460
dataset_size: 22294586
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 27132585
num_examples: 220566
download_size: 7622662
dataset_size: 27132585
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 25650409
num_examples: 210173
download_size: 7049364
dataset_size: 25650409
---
# Dataset Card for KDE4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/KDE4.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't among the preconfigured configs, simply specify the two language codes directly.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/KDE4.php
E.g.
`dataset = load_dataset("kde4", lang1="en", lang2="nl")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |