sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
016358461fdb4c69f338636deb3d455a534f692c
|
An imitation learning environment for the door-open-v2 environment, sample for the policy door-open-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_door_open_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_door_open_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
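The loaded arrays are flat across episodes; the `dones` flags mark episode ends. A minimal sketch of splitting them back into per-episode chunks, using a toy dictionary with the same keys (the key names are the only assumption carried over from the snippet above — the real arrays come from `np.load`):

```python
import numpy as np

# Toy stand-in with the same keys as the real dataset file.
dataset = {
    "observations": np.arange(5).reshape(5, 1),
    "actions": np.zeros((5, 1)),
    "dones": np.array([False, False, True, False, True]),
    "rewards": np.ones(5),
}

# Episode boundaries are the indices where `dones` is True.
ends = np.flatnonzero(dataset["dones"]) + 1  # end index of each episode
starts = np.concatenate(([0], ends[:-1]))    # start index of each episode
episodes = [
    {key: value[s:e] for key, value in dataset.items()}
    for s, e in zip(starts, ends)
]
print(len(episodes))                 # 2
print(episodes[0]["rewards"].shape)  # (3,)
```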
|
qgallouedec/prj_gia_dataset_metaworld_door_open_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:01+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:35:27+00:00
|
efd7497395076cc46c7b52528c3e35925c0b4e39
|
An imitation learning environment for the door-unlock-v2 environment, sample for the policy door-unlock-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_door_unlock_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_door_unlock_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_door_unlock_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:10+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:36:58+00:00
|
35adfff7691ad5e5178f50c3f3b4236442d6898a
|
An imitation learning environment for the drawer-close-v2 environment, sample for the policy drawer-close-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_drawer_close_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_drawer_close_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_drawer_close_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:18+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:38:38+00:00
|
d906c95d8d42b46d47382af6fa34451c2f6b3d36
|
An imitation learning environment for the drawer-open-v2 environment, sample for the policy drawer-open-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_drawer_open_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_drawer_open_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_drawer_open_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:26+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:40:23+00:00
|
d8be4b4a81c4ec46194264eef815acdfcdc869b2
|
An imitation learning environment for the faucet-close-v2 environment, sample for the policy faucet-close-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_faucet_close_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_faucet_close_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_faucet_close_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:34+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:42:02+00:00
|
20214396cdbf2d57437f81ff9d76566d70f74aa1
|
An imitation learning environment for the faucet-open-v2 environment, sample for the policy faucet-open-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_faucet_open_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_faucet_open_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_faucet_open_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:43+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:43:36+00:00
|
04f6b2e8d5b66bf058d93eedb99fa634ffeebe72
|
An imitation learning environment for the hammer-v2 environment, sample for the policy hammer-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_hammer_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_hammer_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_hammer_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:51+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-10T16:45:25+00:00
|
ede12b39f93acd3d13d64206d3cc75c2b5a756ac
|
An imitation learning environment for the hand-insert-v2 environment, sample for the policy hand-insert-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_hand_insert_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_hand_insert_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_hand_insert_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:36:59+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T17:58:42+00:00
|
4bf3875b6e8cc5d9a26f8c1254cdbd2e3dfa4394
|
An imitation learning environment for the basketball-v2 environment, sample for the policy basketball-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_basketball_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_basketball_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_basketball_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:39:02+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:01:53+00:00
|
600a1f8c52eae9b65c2aafa46380a45b3ed3099c
|
An imitation learning environment for the bin-picking-v2 environment, sample for the policy bin-picking-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_bin_picking_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_bin_picking_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_bin_picking_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:39:12+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:04:13+00:00
|
875b2de6956d28a17b3c582b6f4133e070346ac5
|
An imitation learning environment for the box-close-v2 environment, sample for the policy box-close-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_box_close_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_box_close_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_box_close_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:39:23+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:06:46+00:00
|
6ab927c9363644ecfa9b6bf16f71424ca0076358
|
An imitation learning environment for the button-press-topdown-v2 environment, sample for the policy button-press-topdown-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_button_press_topdown_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_button_press_topdown_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_button_press_topdown_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:39:33+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:09:12+00:00
|
9cfe381abcdc6e5ae3411062c695e9012b215c56
|
An imitation learning environment for the button-press-topdown-wall-v2 environment, sample for the policy button-press-topdown-wall-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_button_press_topdown_wall_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_button_press_topdown_wall_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_button_press_topdown_wall_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T15:39:43+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:11:31+00:00
|
49382f46fd73fc9821bff06c9e9913893cfa3ba0
|
# Dataset Card for "perseuslatin_UD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pnadel/perseuslatin_UD
|
[
"region:us"
] |
2023-03-08T15:50:53+00:00
|
{"dataset_info": {"features": [{"name": "fname", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 920071, "num_examples": 4936}], "download_size": 342519, "dataset_size": 920071}}
|
2023-03-18T20:58:02+00:00
|
d9b3fb8f0227bf4a93e25036a8f7addca2d083b1
|
An imitation learning environment for the button-press-v2 environment, sample for the policy button-press-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_button_press_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_button_press_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_button_press_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T16:13:55+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:14:00+00:00
|
c909e4901e24716fa6f3164d5e0e202a839d399b
|
An imitation learning environment for the button-press-wall-v2 environment, sample for the policy button-press-wall-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_button_press_wall_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_button_press_wall_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_button_press_wall_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T16:16:35+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:16:39+00:00
|
b6cafb68d17ad71ea66351ee2ffbc53376d553a6
|
An imitation learning environment for the coffee-button-v2 environment, sample for the policy coffee-button-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_coffee_button_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_coffee_button_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_coffee_button_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T16:19:02+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:19:06+00:00
|
027757e76a5e3beaeffcc98f4f5aa08665f03840
|
An imitation learning environment for the coffee-pull-v2 environment, sample for the policy coffee-pull-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_coffee_pull_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_coffee_pull_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_coffee_pull_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T16:21:44+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:21:48+00:00
|
5fb84b937e99f15c8f4a4b606c1bac858a9de14e
|
An imitation learning environment for the coffee-push-v2 environment, sample for the policy coffee-push-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_coffee_push_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_coffee_push_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_coffee_push_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T16:24:19+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T16:24:23+00:00
|
de7ac846a1ad4843f036c8a38cbdbc16f2bf465b
|
mwirth-epo/epo-nmt-datasets
|
[
"task_categories:translation",
"license:cc-by-sa-4.0",
"legal",
"region:us"
] |
2023-03-08T17:10:44+00:00
|
{"license": "cc-by-sa-4.0", "task_categories": ["translation"], "tags": ["legal"]}
|
2023-03-17T10:06:40+00:00
|
|
1792be1ef3ece466f1cc7750baba302c16c93a47
|
# Dataset name: "modified_anthropic_convo_data"
# Dataset Card for Conversational AI Bot
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/Anthropic/hh-rlhf
### Dataset Summary
- This dataset is the augmented version of the same dataset found here https://huggingface.co/datasets/Anthropic/hh-rlhf
- Two new columns, ***human_speaker*** and ***assistant_speaker***, have been added to make it easier to use the data directly for a causal language modeling task
- Currently only one pair of conversations has been picked; the dataset will be updated soon
### Contributions
Thanks to https://huggingface.co/datasets/Anthropic/hh-rlhf for adding this dataset.
|
vyas21/modified_anthropic_convo_data
|
[
"task_categories:conversational",
"conversation",
"dialogue",
"anthropic",
"region:us"
] |
2023-03-08T17:13:56+00:00
|
{"task_categories": ["conversational"], "pretty_name": "modified_anthropic_convo_data", "dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "human_speaker", "dtype": "string"}, {"name": "assistant_speaker", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 370789375, "num_examples": 160800}, {"name": "test", "num_bytes": 19894559, "num_examples": 8552}], "download_size": 223189497, "dataset_size": 390683934}, "tags": ["conversation", "dialogue", "anthropic"]}
|
2023-03-08T17:21:47+00:00
|
9d848ab703b14931c473253cc48f6dafecc3ab6b
|
An imitation learning environment for the disassemble-v2 environment, sample for the policy disassemble-v2
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_disassemble_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_disassemble_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_disassemble_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-08T18:02:12+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-08T18:02:16+00:00
|
014891aa03f9d869c99be0072aa39f483d7a178f
|
kadirnar/deneme
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-08T18:06:34+00:00
|
{"license": "apache-2.0"}
|
2023-03-08T18:07:01+00:00
|
|
053fed5e38f17d9560fc2bf1c0854ab8deabcda1
|
# Dataset Card for "ia_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/ia_embedded
|
[
"region:us"
] |
2023-03-08T18:14:56+00:00
|
{"dataset_info": {"features": [{"name": "crawl_date", "dtype": "int64"}, {"name": "last_modified_date", "dtype": "float64"}, {"name": "url", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "extension", "dtype": "string"}, {"name": "mime_type_web_server", "dtype": "string"}, {"name": "mime_type_tika", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "md5", "dtype": "string"}, {"name": "sha1", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1053063432.5, "num_examples": 6764}], "download_size": 1041773215, "dataset_size": 1053063432.5}}
|
2023-03-08T20:28:35+00:00
|
fb6a760df211962001d69fda7f3b42568ca938f8
|
bilgeyucel/seven-wonders
|
[
"size_categories:n<1K",
"language:en",
"region:us"
] |
2023-03-08T18:44:17+00:00
|
{"language": ["en"], "size_categories": ["n<1K"]}
|
2023-03-09T14:25:43+00:00
|
|
80ee62302515d7eab20491a2ce3ff8fc423ebf5e
|
# Dataset Card for "stackxchange"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
voidful/stackxchange
|
[
"region:us"
] |
2023-03-08T19:00:50+00:00
|
{"dataset_info": {"features": [{"name": "Name", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Title_RAW", "dtype": "string"}, {"name": "Question", "dtype": "string"}, {"name": "Question_RAW", "dtype": "string"}, {"name": "AcceptedAnswers", "list": [{"name": "Body", "dtype": "string"}, {"name": "Body_RAW", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}, {"name": "OtherAnswers", "list": [{"name": "Body", "dtype": "string"}, {"name": "Body_RAW", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}, {"name": "page", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37087996849, "num_examples": 6378706}], "download_size": 20193071433, "dataset_size": 37087996849}}
|
2023-03-29T17:30:07+00:00
|
9e8925a0f87652f26912c3e8f34739182d890ff5
|
# Dataset Card for "sidewalk-imagery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
coralexbadea/sidewalk-imagery
|
[
"region:us"
] |
2023-03-08T19:02:59+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 86083036.0, "num_examples": 10}], "download_size": 3930138, "dataset_size": 86083036.0}}
|
2023-03-08T19:03:02+00:00
|
5802a0231552af839004effe29e7a62b2e5e76f2
|
# Dataset Card for "cms-icd10-categorical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lowem1/cms-icd10-categorical
|
[
"region:us"
] |
2023-03-08T19:12:49+00:00
|
{"dataset_info": {"features": [{"name": "group_no", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11507555, "num_examples": 87474}], "download_size": 1989506, "dataset_size": 11507555}}
|
2023-03-08T19:12:55+00:00
|
fbdc97f5a2412e1dd03525efc43c7ddddbe0a518
|
# Dataset Card for "OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_filter_ns_100_OE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_filter_ns_100_OE
|
[
"region:us"
] |
2023-03-08T19:33:07+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 18400, "num_examples": 100}], "download_size": 11086, "dataset_size": 18400}}
|
2023-03-08T20:01:20+00:00
|
f53b5805351e39a823e14d304b84fe44780942e2
|
# Dataset Card for "apps_partial_0_120"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minimario/apps_partial_0_120
|
[
"region:us"
] |
2023-03-08T19:33:45+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 634469951, "num_examples": 439648}], "download_size": 13803138, "dataset_size": 634469951}}
|
2023-03-08T19:37:46+00:00
|
d414892a2443539d9f03b93ed64787f147156f1d
|
unlabeledstudiosllc/client-messages
|
[
"license:unknown",
"region:us"
] |
2023-03-08T19:35:12+00:00
|
{"license": "unknown"}
|
2023-03-08T19:35:13+00:00
|
|
bdf4a70508ac7eafc002d8956cdaa3ed70e982c6
|
# Dataset Card for "apps_partial_120_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minimario/apps_partial_120_200
|
[
"region:us"
] |
2023-03-08T19:40:02+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 254632633, "num_examples": 181046}], "download_size": 4512497, "dataset_size": 254632633}}
|
2023-03-08T19:40:11+00:00
|
33597f348695efafb6cff6ed04cb59732739c789
|
# Dataset Card for "OK_VQA_google_flan_ul2_mode_VQAv2_visclues_detection_caption_module_filter_ns_100_OE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OK_VQA_google_flan_ul2_mode_VQAv2_visclues_detection_caption_module_filter_ns_100_OE
|
[
"region:us"
] |
2023-03-08T19:50:37+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 18358, "num_examples": 100}], "download_size": 11109, "dataset_size": 18358}}
|
2023-03-08T20:14:53+00:00
|
fd793ef523145a94417c9659e0ab72f9e560ad9c
|
trondizzy/para_legal
|
[
"task_categories:translation",
"size_categories:n<1K",
"language:uk",
"language:en",
"license:cc",
"region:us"
] |
2023-03-08T19:58:47+00:00
|
{"language": ["uk", "en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["translation"]}
|
2023-03-08T22:47:03+00:00
|
|
03a508f01d59b95f1ddd7da8d85ef4d091a7f9c6
|
# Dataset Card for "OcclusionSwissJudgmentPrediction": An implementation of an occlusion-based explainability method for Swiss judgment prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Documents](#documents)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Summary
This dataset contains an implementation of occlusion for the SwissJudgmentPrediction task.
Note that this dataset only provides a test set and should be used in combination with the [Swiss-Judgment-Prediction](https://huggingface.co/datasets/swiss_judgment_prediction) dataset.
### Documents
Occlusion-Swiss-Judgment-Prediction is a subset of the [Swiss-Judgment-Prediction](https://huggingface.co/datasets/swiss_judgment_prediction) dataset.
The Swiss-Judgment-Prediction dataset is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), the publication year, the legal area and the canton of origin per case. Occlusion-Swiss-Judgment-Prediction extends this dataset by adding sentence splitting with explainability labels.
### Supported Tasks and Leaderboards
OcclusionSwissJudgmentPrediction can be used to perform occlusion-based explainability analysis for the legal judgment prediction task.
### Languages
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
## Dataset structure
### Data Instances
**Multilingual use of the dataset**
To use the dataset in a multilingual setting, select the 'all' configuration:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/occlusion_swiss_judgment_prediction', 'all')
```
**Monolingual use of the dataset**
To use the dataset in a monolingual setting, select the ISO language code of one of the 3 supported languages. For example:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/occlusion_swiss_judgment_prediction', 'de')
```
### Data Fields
The following data fields are provided for documents (Test_1/Test_2/Test_3/Test_4):
id: (**int**) a unique identifier for the document <br/>
year: (**int**) the publication year<br/>
label: (**str**) the judgment outcome: dismissal or approval<br/>
language: (**str**) one of (de, fr, it)<br/>
region: (**str**) the region of the lower court<br/>
canton: (**str**) the canton of the lower court<br/>
legal area: (**str**) the legal area of the case<br/>
explainability_label (**str**): the explainability label assigned to the occluded text: Supports judgment, Opposes judgment, Neutral, Baseline<br/>
occluded_text (**str**): the occluded text<br/>
text: (**str**) the facts of the case w/o the occluded text except for cases w/ explainability label "Baseline" (contain entire facts)<br/>
Note that Baseline cases are only contained in version 1 of the occlusion test set, since they do not change from experiment to experiment.
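The occlusion principle behind these test sets can be sketched in a few lines. The toy scorer below is a hypothetical stand-in for a fine-tuned judgment-prediction model, and the facts are invented; the point is only to show how the Baseline prediction is compared against predictions on occluded variants:

```python
# Minimal sketch of occlusion-based explainability. The scorer is a toy
# stand-in (NOT the fine-tuned model from the thesis): it raises the
# "approval" confidence with each occurrence of the word "granted".

def toy_confidence(text: str) -> float:
    return min(1.0, 0.5 + 0.1 * text.lower().count("granted"))

def occlusion_effects(sentences):
    """Score the full facts (Baseline), then re-score with each sentence
    occluded once.

    A positive delta means the occluded sentence supported the prediction;
    a negative delta means it opposed it (cf. the explainability labels)."""
    baseline = toy_confidence(" ".join(sentences))
    effects = []
    for i, s in enumerate(sentences):
        occluded = " ".join(sentences[:i] + sentences[i + 1:])
        effects.append((s, baseline - toy_confidence(occluded)))
    return baseline, effects

facts = ["The appeal was granted in the lower court.",
         "The appellant paid the court fee on time."]
baseline, effects = occlusion_effects(facts)
```

Here the first sentence shifts the toy prediction while the second does not, mirroring the Supports/Opposes/Neutral distinction the legal experts annotated.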
### Data Splits (Including Swiss Judgment Prediction)
Language | Subset | Number of Rows (Test_1/Test_2/Test_3/Test_4)
| ----------- | ----------- | ----------- |
German | de | __427__ / __1366__ / __3567__ / __7235__
French | fr | __307__ / __854__ / __1926__ / __3279__
Italian | it | __299__ /__919__ / __2493__ / __5733__
All | all | __1033__ / __3139__ / __7986__/ __16247__
Language | Subset | Number of Documents (is the same for Test_1/Test_2/Test_3/Test_4)
| ----------- | ----------- | ----------- |
German | de | __38__
French | fr | __36__
Italian | it | __34__
All | all | __108__
## Dataset Creation
### Curation Rationale
The dataset was curated by Niklaus et al. (2021) and Nina Baumgartner.
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
The decisions have been annotated with the binarized judgment outcome using parsers and regular expressions. In addition, a subset of the test set (27 cases in German, 24 in French and 23 in Italian, spanning the years 2017 to 2020) was annotated by legal experts, who split the facts into sentences/groups of sentences and annotated each with one of the following explainability labels: Supports judgment, Opposes judgment and Neutral. In the test sets, each sentence/group of sentences is occluded once, enabling an analysis of the changes in the model's performance. The legal expert annotations were conducted from April 2020 to August 2020.
#### Who are the annotators?
Joel Niklaus and Adrian Jörg annotated the binarized judgment outcomes. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). The group of legal experts consists of Thomas Lüthi (lawyer), Lynn Grau (law student at master's level) and Angela Stefanelli (law student at master's level).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Additional Information
### Dataset Curators
Niklaus et al. (2021) and Nina Baumgartner
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2000-2020
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
```
@misc{baumgartner_nina_occlusion_2022,
title = {From Occlusion to Transparancy – An Occlusion-Based Explainability Approach for Legal Judgment Prediction in Switzerland},
shorttitle = {From Occlusion to Transparancy},
abstract = {Natural Language Processing ({NLP}) models have been used for more and more complex tasks such as Legal Judgment Prediction ({LJP}). A {LJP} model predicts the outcome of a legal case by utilizing its facts. This increasing deployment of Artificial Intelligence ({AI}) in high-stakes domains such as law and the involvement of sensitive data has increased the need for understanding such systems. We propose a multilingual occlusion-based explainability approach for {LJP} in Switzerland and conduct a study on the bias using Lower Court Insertion ({LCI}). We evaluate our results using different explainability metrics introduced in this thesis and by comparing them to high-quality Legal Expert Annotations using Inter Annotator Agreement. Our findings show that the model has a varying understanding of the semantic meaning and context of the facts section, and struggles to distinguish between legally relevant and irrelevant sentences. We also found that the insertion of a different lower court can have an effect on the prediction, but observed no distinct effects based on legal areas, cantons, or regions. However, we did identify a language disparity with Italian performing worse than the other languages due to representation inequality in the training data, which could lead to potential biases in the prediction in multilingual regions of Switzerland. Our results highlight the challenges and limitations of using {NLP} in the judicial field and the importance of addressing concerns about fairness, transparency, and potential bias in the development and use of {NLP} systems. The use of explainable artificial intelligence ({XAI}) techniques, such as occlusion and {LCI}, can help provide insight into the decision-making processes of {NLP} systems and identify areas for improvement. Finally, we identify areas for future research and development in this field in order to address the remaining limitations and challenges.},
author = {{Baumgartner, Nina}},
year = {2022},
langid = {english}
}
```
### Contributions
Thanks to [@ninabaumgartner](https://github.com/ninabaumgartner) for adding this dataset.
|
rcds/occlusion_swiss_judgment_prediction
|
[
"task_categories:text-classification",
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|swiss_judgment_prediction",
"language:de",
"language:fr",
"language:it",
"language:en",
"license:cc-by-sa-4.0",
"explainability-judgment-prediction",
"occlusion",
"region:us"
] |
2023-03-08T20:14:10+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "found"], "language": ["de", "fr", "it", "en"], "license": "cc-by-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|swiss_judgment_prediction"], "task_categories": ["text-classification", "other"], "task_ids": [], "pretty_name": "OcclusionSwissJudgmentPrediction", "tags": ["explainability-judgment-prediction", "occlusion"]}
|
2023-03-28T07:19:29+00:00
|
d2cde298e79c94fb05bc320999deb4b7889b0464
|
# Dataset Card for Invoices (Sparrow)
This dataset contains 500 invoice documents annotated and processed to be ready for Donut ML model fine-tuning.
Annotation and data preparation were done by the [Katana ML](https://www.katanaml.io) team.
[Sparrow](https://github.com/katanaml/sparrow/tree/main) - open-source data extraction solution by Katana ML.
Original dataset [info](https://data.mendeley.com/datasets/tnj49gpmtz): Kozłowski, Marek; Weichbroth, Paweł (2021), “Samples of electronic invoices”, Mendeley Data, V2, doi: 10.17632/tnj49gpmtz.2
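Donut-style datasets typically store annotations as a JSON string in the `ground_truth` column, with the parse under a `gt_parse` key. The record and field names below are illustrative assumptions, not taken from this dataset's actual schema:

```python
import json

# Illustrative record: the exact keys inside "gt_parse" are an assumption;
# inspect a real row of the dataset to see the actual invoice schema.
record = {
    "ground_truth": json.dumps({
        "gt_parse": {
            "header": {"invoice_no": "INV-0042", "invoice_date": "2023-03-08"},
            "summary": {"total_gross_worth": "1234.56"},
        }
    })
}

# Donut fine-tuning pipelines decode this string back into a dict.
parsed = json.loads(record["ground_truth"])["gt_parse"]
invoice_no = parsed["header"]["invoice_no"]
```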
|
katanaml-org/invoices-donut-data-v1
|
[
"task_categories:feature-extraction",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] |
2023-03-08T20:44:29+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["feature-extraction"], "pretty_name": "Sparrow Invoice Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 234024421, "num_examples": 425}, {"name": "test", "num_bytes": 14512665, "num_examples": 26}, {"name": "validation", "num_bytes": 27661738, "num_examples": 50}], "download_size": 197512750, "dataset_size": 276198824}}
|
2023-05-09T06:05:11+00:00
|
fb3da03f110264cd891d81eb6f69545904de38f6
|
# Dataset Card for "nyt_headlines"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pnadel/nyt_headlines
|
[
"region:us"
] |
2023-03-08T21:15:37+00:00
|
{"dataset_info": {"features": [{"name": "headline", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7115360, "num_examples": 92285}], "download_size": 4519003, "dataset_size": 7115360}}
|
2023-03-08T21:15:46+00:00
|
d960fe328b0eeaa461928e856d1935e642475aa2
|
Eternalenv/aaaaaa
|
[
"license:openrail",
"region:us"
] |
2023-03-08T21:28:39+00:00
|
{"license": "openrail"}
|
2023-03-08T21:28:39+00:00
|
|
594dee62efbadabf0ca69d5f45098d1d3dfbc298
|
# Dataset Card for "back_translation_fr_on_small_persian_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jalalnb/back_translation_fr_on_small_persian_QA
|
[
"region:us"
] |
2023-03-08T21:32:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 262697, "num_examples": 130}, {"name": "train", "num_bytes": 2553488, "num_examples": 1261}], "download_size": 88831, "dataset_size": 2816185}}
|
2023-03-10T16:57:42+00:00
|
42933e7d3129501465825a610b5d14615612c824
|
# Dataset Card for "back_translation_en_on_small_persian_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jalalnb/back_translation_en_on_small_persian_QA
|
[
"region:us"
] |
2023-03-08T21:33:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 262797, "num_examples": 130}, {"name": "train", "num_bytes": 2553868, "num_examples": 1261}], "download_size": 1043078, "dataset_size": 2816665}}
|
2023-03-08T21:33:28+00:00
|
d4af8a28c0857059ea85f00a8a2f69a7d7b949fa
|
# Dataset Card for "back_translation_hy_on_small_persian_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jalalnb/back_translation_hy_on_small_persian_QA
|
[
"region:us"
] |
2023-03-08T21:33:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 262683, "num_examples": 130}, {"name": "train", "num_bytes": 2552922, "num_examples": 1261}], "download_size": 1042907, "dataset_size": 2815605}}
|
2023-03-08T21:33:44+00:00
|
d4af833b41e79e82e0fcadeb99f55a62bc7b20c7
|
jonathanscruz/amcruzmn
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-03-08T21:58:45+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-03-08T21:58:45+00:00
|
|
c4bb665268b07bb33c3fc70475c9eb2047cfe9e2
|
# Dataset Card for "self-instruct-eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
argilla/self-instruct-eval
|
[
"region:us"
] |
2023-03-08T23:00:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "input", "dtype": "string"}, {"name": "response", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "vectors", "struct": [{"name": "completion", "sequence": "float64"}, {"name": "prompt", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "null"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 1037904569, "num_examples": 82612}], "download_size": 834389885, "dataset_size": 1037904569}}
|
2023-03-09T00:02:04+00:00
|
933ef3a8ec0b20158197f3f41d4f5ae49fc1d990
|
# Dataset Card for "vira-intents-live"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
codesj/vira-intents-live
|
[
"region:us"
] |
2023-03-08T23:14:52+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 536982, "num_examples": 7434}, {"name": "validation", "num_bytes": 227106, "num_examples": 3140}], "download_size": 348952, "dataset_size": 764088}}
|
2023-03-08T23:14:55+00:00
|
01974f95a4c8ff221ac97374a25a53e1ac6527af
|
# Dataset Card for "miniwob_T5_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_T5_balanced
|
[
"region:us"
] |
2023-03-08T23:31:34+00:00
|
{"dataset_info": {"features": [{"name": "episodes", "dtype": "string"}, {"name": "target_actions", "dtype": "string"}, {"name": "target_refs", "dtype": "int64"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7287315, "num_examples": 14211}], "download_size": 1208349, "dataset_size": 7287315}}
|
2023-03-08T23:54:01+00:00
|
871857b06d74bd5d1f9609f516d3c1b9ffadf8a9
|
# Dataset Card for "random-walk-reddit-corpus-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/random-walk-reddit-corpus-small
|
[
"region:us"
] |
2023-03-09T00:20:52+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15525948, "num_examples": 8286}], "download_size": 8990634, "dataset_size": 15525948}}
|
2023-03-09T00:25:14+00:00
|
df214f1c3974ebd88e485390ced7ab64017bfca5
|
# Dataset Card for "top-2-reddit-corpus-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dmayhem93/top-2-reddit-corpus-small
|
[
"region:us"
] |
2023-03-09T00:25:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16545775, "num_examples": 8286}], "download_size": 9560714, "dataset_size": 16545775}}
|
2023-03-09T00:25:20+00:00
|
4a147e36c24e7afc4a4858afd3f6e3128870a929
|
# Dataset Card for "lld-onlyicon-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Babypotatotang/lld-onlyicon-ko
|
[
"region:us"
] |
2023-03-09T00:42:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196434759.578, "num_examples": 14959}, {"name": "test", "num_bytes": 49110770.04, "num_examples": 3740}], "download_size": 156811914, "dataset_size": 245545529.618}}
|
2023-03-17T04:27:40+00:00
|
899de144a47a4409f71eee61ae591a0e1e838ba6
|
wdsaaaa/asas
|
[
"license:unknown",
"region:us"
] |
2023-03-09T01:06:31+00:00
|
{"license": "unknown"}
|
2023-03-09T01:06:31+00:00
|
|
c3eebba63d68f7f4c4a8010425725abde721a913
|
Enzaz/Arktoriaz
|
[
"license:unknown",
"region:us"
] |
2023-03-09T01:59:03+00:00
|
{"license": "unknown"}
|
2023-03-09T01:59:03+00:00
|
|
888f2ffeb2398ae737ccc6e64cc7215e129436e7
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** : https://explainthejoke.com/
### Dataset Summary
A corpus for testing whether your LLM can explain jokes well. Note that this is a rather small dataset; pointers to larger ones are very welcome.
### Languages
English
## Dataset Structure
### Data Fields
* url : link to the explanation
* joke : the original joke
* explaination : the explanation of the joke
### Data Splits
Since it is so small, there are no splits, just like gsm8k.
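One minimal way to turn a record into an evaluation prompt for an LLM is sketched below. The sample record and prompt template are illustrative (note that the dataset's own field name is spelled `explaination`):

```python
# Illustrative record using the dataset's field names, including its own
# spelling "explaination"; the joke and prompt wording are made up here.
sample = {
    "url": "https://explainthejoke.com/example",
    "joke": ("I told my wife she was drawing her eyebrows too high. "
             "She looked surprised."),
    "explaination": "Raised eyebrows are the facial expression of surprise.",
}

def make_eval_prompt(record: dict) -> tuple:
    """Return (prompt, reference) for judging an LLM's joke explanation."""
    prompt = f"Explain why the following joke is funny:\n\n{record['joke']}"
    return prompt, record["explaination"]

prompt, reference = make_eval_prompt(sample)
```

The reference explanation can then be compared against the model's answer, e.g. by a human judge or a stronger LLM.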
|
theblackcat102/joke_explaination
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"joke",
"high quality",
"region:us"
] |
2023-03-09T02:29:11+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-generation", "text2text-generation"], "tags": ["joke", "high quality"]}
|
2023-03-09T02:35:40+00:00
|
c9aadd98a23acfe92856dc645ab6d69d72ae5063
|
wenjiewu/dataset_f
|
[
"license:mit",
"region:us"
] |
2023-03-09T02:30:17+00:00
|
{"license": "mit"}
|
2023-03-09T02:30:17+00:00
|
|
870df28d7855e18c6354ef1a4e803df6e6e34ae7
|
smileyes/ssDataSet
|
[
"region:us"
] |
2023-03-09T02:32:36+00:00
|
{}
|
2023-03-09T02:34:00+00:00
|
|
8f2485fff341fefe8061318c695961f65248ae8a
|
# Dataset Card for "aihub_food"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MarkJeong/aihub_food
|
[
"region:us"
] |
2023-03-09T02:39:58+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "01011001", "1": "01012001", "2": "01012002", "3": "01012003", "4": "01012004", "5": "01012005", "6": "01012006", "7": "01013001", "8": "01014008", "9": "01014009", "10": "01014010", "11": "01014011", "12": "01014012", "13": "01014013", "14": "01015002", "15": "01015003", "16": "01015012", "17": "01015013", "18": "01015014", "19": "01015015", "20": "01015016", "21": "01015017", "22": "01015018", "23": "01015019", "24": "01016001", "25": "01016002", "26": "01016003", "27": "01016004", "28": "01016005", "29": "01016006", "30": "01016007", "31": "01016008", "32": "02011006", "33": "02011007", "34": "02011008", "35": "02011009", "36": "02011010", "37": "02011011", "38": "02011012", "39": "02011013", "40": "02011014", "41": "02011015", "42": "02011016", "43": "02011017", "44": "02011018", "45": "02011019", "46": "02011020", "47": "02011021", "48": "02011023", "49": "02011024", "50": "02011025", "51": "02011027", "52": "02011028", "53": "02011029", "54": "02011030", "55": "02011031", "56": "02011032", "57": "02011033", "58": "02011034", "59": "02011035", "60": "02011036", "61": "02011037", "62": "02011038", "63": "02011039", "64": "02011040", "65": "02012001", "66": "02012002", "67": "02012003", "68": "02012004", "69": "02012005", "70": "03011001", "71": "03011002", "72": "03011003", "73": "03011004", "74": "03011005", "75": "03011006", "76": "03011007", "77": "03011008", "78": "03011009", "79": "03011010", "80": "03011011", "81": "03012001", "82": "03012002", "83": "04011001", "84": "04011002", "85": "04011003", "86": "04011004", "87": "04011005", "88": "04011006", "89": "04011007", "90": "04011008", "91": "04011010", "92": "04011011", "93": "04011012", "94": "04011013", "95": "04011014", "96": "04011015", "97": "04011016", "98": "04012001", "99": "04012002", "100": "04012003", "101": "04012004", "102": "04012005", "103": "04012006", "104": 
"04012007", "105": "04012008", "106": "04012009", "107": "04012010", "108": "04012011", "109": "04012012", "110": "04012013", "111": "04013002", "112": "04013003", "113": "04013004", "114": "04013005", "115": "04013006", "116": "04013007", "117": "04013008", "118": "04013009", "119": "04013010", "120": "04013011", "121": "04013012", "122": "04013013", "123": "04013014", "124": "04013015", "125": "04013017", "126": "04013018", "127": "04013019", "128": "04015003", "129": "04016001", "130": "04017001", "131": "04017002", "132": "04018001", "133": "04018002", "134": "04018003", "135": "04018004", "136": "04019001", "137": "04019002", "138": "04019003", "139": "04019004", "140": "04019005", "141": "04019006", "142": "04019007", "143": "04019008", "144": "05011001", "145": "05011002", "146": "05011004", "147": "05011008", "148": "05011010", "149": "05011011", "150": "05011012", "151": "05012001", "152": "05012002", "153": "05012003", "154": "05012004", "155": "05012005", "156": "05013001", "157": "06012001", "158": "06012002", "159": "06012003", "160": "06012011", "161": "07011003", "162": "07011004", "163": "07012001", "164": "07012002", "165": "07012003", "166": "07013001", "167": "07013002", "168": "07013003", "169": "07013004", "170": "07013005", "171": "07013006", "172": "07013007", "173": "07013008", "174": "07013009", "175": "07013010", "176": "07013011", "177": "08011004", "178": "08011005", "179": "08011006", "180": "08011007", "181": "08011008", "182": "08012001", "183": "08012002", "184": "08012003", "185": "08012004", "186": "08012005", "187": "08012006", "188": "08012007", "189": "08012008", "190": "08012009", "191": "08012010", "192": "08013001", "193": "08013002", "194": "08013003", "195": "08013004", "196": "08013005", "197": "08013006", "198": "08014001", "199": "08014002", "200": "08014003", "201": "09012001", "202": "09012002", "203": "09013001", "204": "09013002", "205": "09014001", "206": "09014002", "207": "09014003", "208": "09014004", "209": 
"09015001", "210": "09015002", "211": "09015003", "212": "09016001", "213": "10011001", "214": "10011002", "215": "10011003", "216": "10011004", "217": "11011001", "218": "11011002", "219": "11011003", "220": "11011004", "221": "11011005", "222": "11011006", "223": "11011007", "224": "11011008", "225": "11011009", "226": "11011010", "227": "11011011", "228": "11012001", "229": "11012002", "230": "11012003", "231": "11012004", "232": "11013001", "233": "11013002", "234": "11013003", "235": "11013004", "236": "11013005", "237": "11013006", "238": "11013007", "239": "11013009", "240": "11013010", "241": "11013011", "242": "11013012", "243": "11014001", "244": "11014002", "245": "11014003", "246": "11014004", "247": "11014005", "248": "11014006", "249": "11014007", "250": "11014008", "251": "11014009", "252": "11014010", "253": "11015001", "254": "11015002", "255": "12011001", "256": "12011002", "257": "12011003", "258": "12011004", "259": "12011005", "260": "12011006", "261": "12011007", "262": "12011008", "263": "12011009", "264": "12011010", "265": "12011011", "266": "12011012", "267": "12011013", "268": "12011014", "269": "12011015", "270": "13011001", "271": "13011002", "272": "13011003", "273": "13011011", "274": "13011012", "275": "13012001", "276": "13012002", "277": "14011001", "278": "14011002", "279": "14011004", "280": "14011005", "281": "14012001", "282": "14012002", "283": "15011001", "284": "15011002", "285": "15011003", "286": "15011004", "287": "15011005", "288": "15011006", "289": "15011007", "290": "15011008", "291": "15011009", "292": "15011010", "293": "15011011", "294": "15011012", "295": "15011013", "296": "15011014", "297": "15011015", "298": "15011016", "299": "15011017", "300": "16011001", "301": "16011002", "302": "16011003", "303": "16011004", "304": "16011005", "305": "16011006"}}}}], "splits": [{"name": "train", "num_bytes": 14812723538.728, "num_examples": 486839}, {"name": "test", "num_bytes": 33069619665.134, "num_examples": 21178}, 
{"name": "validation", "num_bytes": 33770989851.48, "num_examples": 21180}], "download_size": 82692432131, "dataset_size": 81653333055.342}}
|
2023-03-09T17:13:22+00:00
|
07af1a55204a38125c8b8aed7ab48c4620625dd0
|
trondizzy/acts_laws
|
[
"task_categories:translation",
"size_categories:100K<n<1M",
"language:uk",
"language:en",
"license:cc",
"region:us"
] |
2023-03-09T03:00:54+00:00
|
{"language": ["uk", "en"], "license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["translation"]}
|
2023-03-09T03:03:04+00:00
|
|
62625bd1cd59a89e1d76a00a921671a68791087d
|
Falcon2006VN/pascal-code-generation-2mb
|
[
"license:mit",
"region:us"
] |
2023-03-09T04:25:39+00:00
|
{"license": "mit"}
|
2023-03-09T08:16:59+00:00
|
|
41029fad23089e21c84d897dd621c4b5bb5f3ed2
|
indiehacker/Test
|
[
"region:us"
] |
2023-03-09T04:50:58+00:00
|
{}
|
2023-03-09T04:53:47+00:00
|
|
9ced8931752f9cf5e2a1411539bf4fed0110e639
|
Ales21/Workss
|
[
"size_categories:1K<n<10K",
"license:openrail",
"region:us"
] |
2023-03-09T05:05:43+00:00
|
{"license": "openrail", "size_categories": ["1K<n<10K"], "pretty_name": "model_for_all_44"}
|
2023-03-09T05:08:34+00:00
|
|
72566a11ba24611f41bbbd314284b17b42dcb1ad
|
# Dataset Card for "spectrogram_data_Upbeat-4s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
deepak-newzera/spectrogram_data_Upbeat-4s
|
[
"region:us"
] |
2023-03-09T05:34:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 105472108.125, "num_examples": 3495}], "download_size": 104843147, "dataset_size": 105472108.125}}
|
2023-03-09T05:36:06+00:00
|
0c0d32d57da9c2335a868f2d6a2ff87a6982028f
|
# Dataset Card for "trainofasys"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Hantao/trainofasys
|
[
"region:us"
] |
2023-03-09T05:46:33+00:00
|
{"dataset_info": {"features": [{"name": "0", "dtype": "int64"}, {"name": "ocr", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 204855956.375, "num_examples": 1325}], "download_size": 200935734, "dataset_size": 204855956.375}}
|
2023-03-09T06:04:49+00:00
|
736ed539e9da2b2779ddb92b7d5cf0ab8b087843
|
# Dataset Card for "boolq_pt_r"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reaganjlee/boolq_pt_r
|
[
"region:us"
] |
2023-03-09T05:46:40+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "label (class label)", "dtype": "bool"}, {"name": "passage", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6258250, "num_examples": 9427}], "download_size": 3947154, "dataset_size": 6258250}}
|
2023-03-09T05:49:08+00:00
|
0ad5607dce2d604aceb428f769b4e22cfa210b86
|
**Label Description**
0 : Fake,
1 : Real
|
pushpdeep/fake_news_combined
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-09T06:04:04+00:00
|
{"license": "apache-2.0"}
|
2023-04-10T17:59:26+00:00
|
38282981ce7e57d081395c6771b625e58f20cabc
|
# Dataset Card for "boolq_pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reaganjlee/boolq_pt
|
[
"region:us"
] |
2023-03-09T06:15:08+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "validation", "num_bytes": 1604091, "num_examples": 3270}, {"name": "train", "num_bytes": 4624752, "num_examples": 9427}], "download_size": 3843346, "dataset_size": 6228843}}
|
2023-05-04T03:38:45+00:00
|
145ff5e53cd4b5e3fbf3d0a2a74c3618ec7e30ec
|
These recordings and transcripts were copied from the Russian President's website at kremlin.ru. All content on that site is licensed under Creative Commons Attribution 4.0 International.
http://en.kremlin.ru/about/copyrights
|
spdenisov/prezident_ru
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-03-09T07:06:19+00:00
|
{"license": "cc-by-4.0"}
|
2023-03-10T18:16:28+00:00
|
9bf59ccccdbe271abb898b4466ad6f6469ddda10
|
# Dataset Card for "github-code-scala"
This dataset contains only the Scala files from [github-code-clean](https://huggingface.co/datasets/codeparrot/github-code). There are 817k samples with a total download size of 1.52 GB.
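Each record carries the fields listed in the metadata (`code`, `repo_name`, `path`, `language`, `license`, `size`). A small filter over such records might look like the sketch below; the sample records are made up, and a real run would iterate over the loaded dataset instead:

```python
# Made-up records mirroring the dataset's features; in practice you would
# iterate over load_dataset("blastwind/github-code-scala", split="train").
records = [
    {"code": 'object Main extends App { println("hi") }',
     "repo_name": "acme/hello", "path": "src/Main.scala",
     "language": "Scala", "license": "mit", "size": 44},
    {"code": "// generated file\n" + "val x = 1\n" * 500,
     "repo_name": "acme/gen", "path": "src/Gen.scala",
     "language": "Scala", "license": "gpl-3.0", "size": 5000},
]

def keep(record: dict, max_size: int = 1000,
         licenses=("mit", "apache-2.0")) -> bool:
    """Keep small, permissively licensed Scala files."""
    return (record["language"] == "Scala"
            and record["size"] <= max_size
            and record["license"] in licenses)

kept = [r for r in records if keep(r)]
```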
|
blastwind/github-code-scala
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"region:us"
] |
2023-03-09T07:24:09+00:00
|
{"size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "size", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3330521484.4803743, "num_examples": 654001}, {"name": "valid", "num_bytes": 416314548.9934581, "num_examples": 81750}, {"name": "test", "num_bytes": 416319641.5261675, "num_examples": 81751}], "download_size": 1534670727, "dataset_size": 4163155675.0}}
|
2023-03-21T19:19:22+00:00
|
4491123f2558a04da05ed1e6ba433f83069388d5
|
yanyc/SciGraph
|
[
"license:mit",
"region:us"
] |
2023-03-09T07:39:07+00:00
|
{"license": "mit"}
|
2023-03-10T18:01:21+00:00
|
|
577f652b57fffa3a03abf748251496d9edcabf14
|
pushpdeep/fake_news_test
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-09T07:44:08+00:00
|
{"license": "apache-2.0"}
|
2023-03-09T13:42:43+00:00
|
|
e11c39cd2de9fd1e2063e84e08bfbe1d6ea657da
|
# Dataset Card for "temp1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Oshan/temp1
|
[
"region:us"
] |
2023-03-09T07:53:05+00:00
|
{"dataset_info": {"features": [{"name": "bnd_idcs", "sequence": {"sequence": "int64"}}, {"name": "atm_type", "sequence": "int64"}, {"name": "bnd_type", "sequence": "int64"}, {"name": "y", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1869800, "num_examples": 2000}], "download_size": 130309, "dataset_size": 1869800}}
|
2023-03-09T07:53:15+00:00
|
783bd6ba635f64039dbfb36707273a7403da0e43
|
# Dataset Card for "boolq_pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
christykoh/boolq_pt
|
[
"region:us"
] |
2023-03-09T07:57:12+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "answer", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 4550515, "num_examples": 9427}, {"name": "validation", "num_bytes": 1578340, "num_examples": 3270}], "download_size": 3842223, "dataset_size": 6128855}}
|
2023-05-02T22:43:01+00:00
|
f5bb6204e1b09e2e56c86e9416dd30e6f07d08a8
|
# Dataset Card for "reklamation24_reisen-tourismus-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_reisen-tourismus-full
|
[
"region:us"
] |
2023-03-09T08:14:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "mini-lm-sentence-transformers", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 139906308, "num_examples": 23759}], "download_size": 0, "dataset_size": 139906308}}
|
2023-04-25T13:09:00+00:00
|
2048171505f3065d2b46f518ba316daa6460e68d
|
# Dataset Card for "reklamation24_schoenheit-wellness-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_schoenheit-wellness-full
|
[
"region:us"
] |
2023-03-09T08:15:53+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "mini-lm-sentence-transformers", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 21984670, "num_examples": 4158}], "download_size": 0, "dataset_size": 21984670}}
|
2023-04-25T13:10:02+00:00
|
f85a78260a7b0abebf8231960b3e06348a5a6867
|
# Dataset Card for "reklamation24_reisen-tourismus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_reisen-tourismus
|
[
"region:us"
] |
2023-03-09T08:22:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 247525, "num_examples": 444}, {"name": "test", "num_bytes": 59699, "num_examples": 111}], "download_size": 0, "dataset_size": 307224}}
|
2023-04-19T07:28:55+00:00
|
53b9a88c2d9181b9667dec5bc3c2f247c02563ed
|
# Dataset Card for "reklamation24_schoenheit-wellness"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_schoenheit-wellness
|
[
"region:us"
] |
2023-03-09T08:23:03+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 215900, "num_examples": 464}, {"name": "test", "num_bytes": 56138, "num_examples": 117}], "download_size": 0, "dataset_size": 272038}}
|
2023-04-19T07:29:58+00:00
|
a317387bb96c724d4c8956a92f6d3f61bd0b2b17
|
RollRoys/OkCu
|
[
"license:unknown",
"region:us"
] |
2023-03-09T08:29:44+00:00
|
{"license": "unknown"}
|
2023-03-09T08:32:55+00:00
|
|
41ef425e5945714fccdad32c23c78880b2369db0
|
An imitation learning dataset for the handle-press-side-v2 environment, sampled from the handle-press-side-v2 policy.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_handle_press_side_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_handle_press_side_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
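Once loaded, per-episode trajectories can be recovered from the `dones` flags. A minimal sketch with toy data of the same shape, assuming `dones` marks the final step of each episode:

```python
import numpy as np

# Hypothetical toy data with the same keys as the dataset
dataset = {
    "observations": np.zeros((5, 3), dtype=np.float32),
    "actions": np.zeros((5, 2), dtype=np.float32),
    "dones": np.array([False, False, True, False, True]),
    "rewards": np.ones(5, dtype=np.float32),
}

# Split the flat arrays into per-episode chunks at each terminal step
ends = np.flatnonzero(dataset["dones"]) + 1
starts = np.concatenate(([0], ends[:-1]))
episodes = [
    {k: v[start:end] for k, v in dataset.items()}
    for start, end in zip(starts, ends)
]
print(len(episodes))  # 2 episodes in the toy data
```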
|
qgallouedec/prj_gia_dataset_metaworld_handle_press_side_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-09T09:08:02+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-09T09:08:06+00:00
|
0e77121d3776dd4a3fb1112b10259b86eb4e78a1
|
An imitation learning dataset for the handle-press-v2 environment, sampled from the handle-press-v2 policy.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_handle_press_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_handle_press_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_handle_press_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-09T09:09:28+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-09T09:09:33+00:00
|
b4a949430732bba48b3aa2500fdfa95f52f3b015
|
# Dataset Card for "issues_content_500k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/issues_content_500k
|
[
"region:us"
] |
2023-03-09T09:14:04+00:00
|
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 973521579, "num_examples": 500000}], "download_size": 489652577, "dataset_size": 973521579}}
|
2023-03-09T09:14:35+00:00
|
d1e1a78a5e0f5eb79dcaf46d92c453e68f67538d
|
An imitation learning dataset for the handle-pull-side-v2 environment, sampled from the handle-pull-side-v2 policy.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_handle_pull_side_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_handle_pull_side_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_handle_pull_side_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-09T09:25:09+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-09T09:25:14+00:00
|
67176dfe7ccb77de7a9549ee5ecb09bb20cf46b8
|
An imitation learning dataset for the handle-pull-v2 environment, sampled from the handle-pull-v2 policy.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_handle_pull_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_handle_pull_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_handle_pull_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-09T09:26:46+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-09T09:26:51+00:00
|
d3dfcc9f4fabb0bfeb80ed80ae5c6d0002637030
|
An imitation learning dataset for the lever-pull-v2 environment, sampled from the lever-pull-v2 policy.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_lever_pull_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_lever_pull_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
|
qgallouedec/prj_gia_dataset_metaworld_lever_pull_v2_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-03-09T09:28:20+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-03-09T09:28:24+00:00
|
2019c80d837eb390b0b4b74ec2948b05e10fb68b
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: 0ys/mt5-small-finetuned-amazon-en-es
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@raviteja2](https://huggingface.co/raviteja2) for evaluating this model.
|
autoevaluate/autoeval-eval-samsum-samsum-8c5714-39885103812
|
[
"autotrain",
"evaluation",
"region:us"
] |
2023-03-09T09:42:17+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "0ys/mt5-small-finetuned-amazon-en-es", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
|
2023-03-09T09:43:08+00:00
|
a59366e1b8eb6d6f3d014ae3dd7041a0835d42c8
|
# Dataset Card for "prj_gia_dataset_metaworld_assembly_v2_1111_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
qgallouedec/prj_gia_dataset_metaworld_assembly_v2_1111_demo
|
[
"region:us"
] |
2023-03-09T10:19:54+00:00
|
{"dataset_info": {"features": [{"name": "observations", "sequence": "float32"}, {"name": "actions", "sequence": "float32"}, {"name": "dones", "dtype": "bool"}, {"name": "rewards", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 18412500, "num_examples": 100000}], "download_size": 8875331, "dataset_size": 18412500}}
|
2023-03-10T16:23:12+00:00
|
77896f938062293560304eee77676f001d41c4b3
|
tarta-ai/jobs-in-california-february-2023
|
[
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:other",
"job",
"jobs",
"california jobs",
"region:us"
] |
2023-03-09T10:57:14+00:00
|
{"language": ["en"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification"], "pretty_name": "Comprehensive Job Count Information by Company in California", "tags": ["job", "jobs", "california jobs"]}
|
2023-03-09T11:08:25+00:00
|
|
b45984ffbef8596a8b6ac2ac8959f2f8998049bb
|
# Dataset Card for "flowers-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pranked03/flowers-blip-captions
|
[
"region:us"
] |
2023-03-09T11:02:26+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 270279282.848, "num_examples": 6552}], "download_size": 277165211, "dataset_size": 270279282.848}}
|
2023-03-09T11:29:27+00:00
|
a7d8afe06177bea23b412cc025066c60bba9079c
|
# Dataset Card for "maps_parquet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/maps_parquet
|
[
"region:us"
] |
2023-03-09T11:03:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no building or railspace", "1": "railspace", "2": "building", "3": "railspace and non railspace building"}}}}, {"name": "map_sheet", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 957911247.448, "num_examples": 37212}, {"name": "validation", "num_bytes": 316304202.708, "num_examples": 12404}, {"name": "test", "num_bytes": 323743326.376, "num_examples": 12404}], "download_size": 1600455354, "dataset_size": 1597958776.5319998}}
|
2023-03-09T11:05:08+00:00
|
845c49cc3b3bce553a78b07c25c7aca51722ea68
|
test
|
yunosuken/sentiment-train
|
[
"region:us"
] |
2023-03-09T11:28:42+00:00
|
{"viewer": true, "dataset_info": {"homepage": "httsp://www.yahoo.co.jp", "features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 897816, "num_examples": 8476}, {"name": "validation", "num_bytes": 52805, "num_examples": 497}, {"name": "test", "num_bytes": 109825, "num_examples": 1002}], "download_size": 601239, "dataset_size": 1060446, "description": "hoge"}}
|
2023-03-15T13:48:44+00:00
|
5e492c24fec9fee51a80a3945ab60260fdb01ed3
|
# Dataset Card for "reklambox3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox3
|
[
"region:us"
] |
2023-03-09T12:58:50+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 645273.5305832148, "num_examples": 1124}, {"name": "test", "num_bytes": 161892.4694167852, "num_examples": 282}], "download_size": 446344, "dataset_size": 807166.0}}
|
2023-03-09T12:59:51+00:00
|
755f26834ed9be29f64fee9b38941f9f3db146e9
|
# Dataset Card for "to_label_samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
active-learning/to_label_samples
|
[
"region:us"
] |
2023-03-09T13:01:14+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1391.0870983935743, "num_examples": 5}], "download_size": 3878, "dataset_size": 1391.0870983935743}}
|
2023-09-04T20:47:10+00:00
|
cab57870780d884a7240928d7a01b60baa899e5a
|
saitsharipov/ddpm-butterflies-128
|
[
"license:unknown",
"region:us"
] |
2023-03-09T13:22:21+00:00
|
{"license": "unknown"}
|
2023-03-09T13:22:21+00:00
|
|
dd0d6a1537093e3f4304a7469603c1ce388fb07a
|
# Dataset Card for "symptom_text_to_disease_mk2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
venetis/symptom_text_to_disease_mk2
|
[
"region:us"
] |
2023-03-09T13:23:32+00:00
|
{"dataset_info": {"features": [{"name": "audio_clipping", "dtype": "string"}, {"name": "audio_clipping:confidence", "dtype": "float64"}, {"name": "background_noise_audible", "dtype": "string"}, {"name": "background_noise_audible:confidence", "dtype": "float64"}, {"name": "overall_quality_of_the_audio", "dtype": "float64"}, {"name": "quiet_speaker", "dtype": "string"}, {"name": "quiet_speaker:confidence", "dtype": "float64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "file_download", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "phrase", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "writer_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2016289, "num_examples": 6661}], "download_size": 409095, "dataset_size": 2016289}}
|
2023-03-09T13:23:37+00:00
|
2e477f41b0d05aa8041913e4c1ffe4ee58efe0dd
|
# Dataset Card for "symptom_text_to_disease_mk3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
venetis/symptom_text_to_disease_mk3
|
[
"region:us"
] |
2023-03-09T13:24:15+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "emotional pain", "1": "hair falling out", "2": "heart hurts", "3": "infected wound", "4": "foot ache", "5": "shoulder pain", "6": "injury from sports", "7": "skin issue", "8": "stomach ache", "9": "knee pain", "10": "joint pain", "11": "hard to breath", "12": "head ache", "13": "body feels weak", "14": "feeling dizzy", "15": "back pain", "16": "open wound", "17": "internal pain", "18": "blurry vision", "19": "acne", "20": "muscle pain", "21": "neck pain", "22": "cough", "23": "ear ache", "24": "feeling cold"}}}}], "splits": [{"name": "train", "num_bytes": 330494.3762197868, "num_examples": 5328}, {"name": "test", "num_bytes": 41373.82675273983, "num_examples": 667}, {"name": "valid", "num_bytes": 41311.79702747335, "num_examples": 666}], "download_size": 146385, "dataset_size": 413180.0}}
|
2023-03-09T13:24:21+00:00
|
8da059c6b31546ec0bbeb46d531c2be3a4f6cc18
|
# Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
version: 0.5.1
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation dataset that can be used to train, adapt, and evaluate MT models used in a Virtual Assistant NLU context (e.g. to translate the training corpus of an NLU system).
## Dataset Composition
### en-pl
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 11514 | 2033 | 2974 |
| [Leyzer 0.2.0](https://github.com/cartesinus/leyzer/tree/0.2.0) | 3974 | 701 | 1380 |
| [OpenSubtitles from OPUS](https://opus.nlpl.eu/OpenSubtitles-v1.php) | 2329 | 411 | 500 |
| [KDE from OPUS](https://opus.nlpl.eu/KDE4.php) | 1154 | 241 | 241 |
| [CCMatrix from Opus](https://opus.nlpl.eu/CCMatrix.php) | 1096 | 232 | 237 |
| [Ubuntu from OPUS](https://opus.nlpl.eu/Ubuntu.php) | 281 | 60 | 59 |
| [Gnome from OPUS](https://opus.nlpl.eu/GNOME.php) | 14 | 3 | 3 |
| *total* | 20362 | 3681 | 5394 |
### en-de
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7536 | 1346 | 1955 |
### en-es
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8415 | 1526 | 2202 |
### en-sv
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7540 | 1360 | 1921 |
### en-fr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6800 | 1203 | 1757 |
### en-pt
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7368 | 1296 | 1885 |
### en-hi
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6702 | 1175 | 1747 |
### en-tr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8269 | 1474 | 2170 |
### en-ja
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8066 | 1434 | 2085 |
### en-zh
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8433 | 1513 | 2179 |
| ChatGPT | 1312 | 200 | 200 |
## Tools
Scripts used to generate this dataset can be found on [github](https://github.com/cartesinus/iva_mt).
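The `translation_xml` field pairs utterances with inline slot markup that maps onto the `src_bio`/`tgt_bio` tags. A minimal sketch of converting such XML-style markup to BIO tags — the `<time>` slot and the exact markup conventions here are illustrative assumptions, not the dataset's guaranteed format:

```python
import re

def xml_to_bio(utterance_xml: str):
    """Convert a slot-annotated utterance like
    'set an alarm for <time>7 am</time>' into tokens and BIO tags."""
    tokens, tags = [], []
    # Match either a slot span <name>...</name> or a run of plain text
    for match in re.finditer(r"<(\w+)>(.*?)</\1>|([^<]+)", utterance_xml):
        slot, span, plain = match.group(1), match.group(2), match.group(3)
        if slot:
            words = span.split()
            tokens.extend(words)
            tags.extend([f"B-{slot}"] + [f"I-{slot}"] * (len(words) - 1))
        else:
            words = plain.split()
            tokens.extend(words)
            tags.extend(["O"] * len(words))
    return tokens, tags

tokens, tags = xml_to_bio("set an alarm for <time>7 am</time>")
print(tokens)  # ['set', 'an', 'alarm', 'for', '7', 'am']
print(tags)    # ['O', 'O', 'O', 'O', 'B-time', 'I-time']
```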
## Citation
If you use this dataset, please cite:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
## License
This is a composition of 7 datasets; each is licensed as defined in its original release:
- MASSIVE: [CC-BY 4.0](https://huggingface.co/datasets/AmazonScience/massive/blob/main/LICENSE)
- Leyzer: [CC BY-NC 4.0](https://github.com/cartesinus/leyzer/blob/master/LICENSE)
- OpenSubtitles: unknown
- KDE: [GNU Public License](https://l10n.kde.org/about.php)
- CCMatrix: no license given, therefore assuming it is LASER project license [BSD](https://github.com/facebookresearch/LASER/blob/main/LICENSE)
- Ubuntu: [GNU Public License](https://help.launchpad.net/Legal)
- Gnome: unknown
|
cartesinus/iva_mt_wslot
|
[
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:de",
"language:es",
"language:sv",
"language:fr",
"language:pt",
"license:cc-by-4.0",
"machine translation",
"nlu",
"natural-language-understanding",
"virtual assistant",
"region:us"
] |
2023-03-09T14:02:00+00:00
|
{"language": ["en", "pl", "de", "es", "sv", "fr", "pt"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "pretty_name": "Machine translation for NLU with slot transfer", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "origin", "dtype": "string"}, {"name": "partition", "dtype": "string"}, {"name": "translation_utt", "dtype": {"translation": {"languages": ["en", "pl"]}}}, {"name": "translation_xml", "dtype": {"translation": {"languages": ["en", "pl"]}}}, {"name": "src_bio", "dtype": "string"}, {"name": "tgt_bio", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6187206, "num_examples": 20362}, {"name": "validation", "num_bytes": 1115480, "num_examples": 3681}, {"name": "test", "num_bytes": 1587613, "num_examples": 5394}], "download_size": 3851892, "dataset_size": 8890299}, "tags": ["machine translation", "nlu", "natural-language-understanding", "virtual assistant"]}
|
2024-02-08T14:33:40+00:00
|
b0875a6944533b8011fccfc0a712df06959ad055
|
# Dataset Card for Habr QnA
## Table of Contents
- [Dataset Card for Habr QnA](#dataset-card-for-habr-qna)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
## Dataset Description
- **Repository:** https://github.com/its5Q/habr-qna-parser
### Dataset Summary
This is a dataset of questions and answers scraped from [Habr QnA](https://qna.habr.com/). It contains 723,430 questions together with answers, comments, and other metadata.
### Languages
The dataset is mostly in Russian, with source code snippets in various programming languages.
## Dataset Structure
### Data Fields
Data fields can be previewed on the dataset card page.
### Data Splits
All 723,430 examples are in the train split; there is no validation split.
## Dataset Creation
The data was scraped with a script, located in [my GitHub repository](https://github.com/its5Q/habr-qna-parser)
## Additional Information
### Dataset Curators
- https://github.com/its5Q
|
its5Q/habr_qna
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ru",
"license:cc0-1.0",
"region:us"
] |
2023-03-09T14:02:50+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "question-answering"], "task_ids": ["language-modeling", "open-domain-qa"], "pretty_name": "Habr QnA", "tags": []}
|
2023-03-11T04:43:35+00:00
|
048e9a835bf265c5292cd3c8f4128d04bd41c84f
|
# Dataset Card for "avatar-lite_captioned-augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jlbaker361/avatar-lite_captioned-augmented
|
[
"region:us"
] |
2023-03-09T14:04:10+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "src", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 441803035.75, "num_examples": 1890}], "download_size": 441599217, "dataset_size": 441803035.75}}
|
2023-03-18T19:06:06+00:00
|
776c9f763c6c3874e44360505aef3ab275a3c63e
|
# Dataset Card for "cherry_picked_compleetions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lewtun/cherry_picked_completions
|
[
"region:us"
] |
2023-03-09T14:12:19+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completions", "list": [{"name": "completions", "sequence": "string"}, {"name": "creation_date", "dtype": "string"}, {"name": "policy", "dtype": "string"}]}, {"name": "meta", "struct": [{"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 72786, "num_examples": 16}], "download_size": 25787, "dataset_size": 72786}}
|
2023-03-09T15:05:19+00:00
|
5a39018156af9b3515b17a3f73cd16752b0ddc7a
|
# Dataset Card for "OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_filter_ns_5046_OE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_filter_ns_5046_OE
|
[
"region:us"
] |
2023-03-09T14:23:14+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 920304, "num_examples": 5046}], "download_size": 356829, "dataset_size": 920304}}
|
2023-03-09T17:32:38+00:00
|
e22fb0ce2c73d603ff182183fbfc1476d0032d1d
|
# MODIS Water Lake Powell Toy Dataset
### Dataset Summary
A tabular dataset composed of MODIS surface reflectance bands, calculated indices, and a water/not-water label.
## Dataset Structure
### Data Fields
- `water`: Label, water or not-water (binary)
- `sur_refl_b01_1`: MODIS surface reflectance band 1 (-100, 16000)
- `sur_refl_b02_1`: MODIS surface reflectance band 2 (-100, 16000)
- `sur_refl_b03_1`: MODIS surface reflectance band 3 (-100, 16000)
- `sur_refl_b04_1`: MODIS surface reflectance band 4 (-100, 16000)
- `sur_refl_b05_1`: MODIS surface reflectance band 5 (-100, 16000)
- `sur_refl_b06_1`: MODIS surface reflectance band 6 (-100, 16000)
- `sur_refl_b07_1`: MODIS surface reflectance band 7 (-100, 16000)
- `ndvi`: Normalized difference vegetation index (-20000, 20000)
- `ndwi1`: Normalized difference water index 1 (-20000, 20000)
- `ndwi2`: Normalized difference water index 2 (-20000, 20000)
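The index columns can be derived from the reflectance bands. A minimal sketch of the normalized-difference computation — the band-to-index mapping (band 2 = NIR, band 1 = red, band 6 = SWIR) and the ×10,000 scaling are assumptions, not confirmed by this card:

```python
def scaled_index(a: float, b: float, scale: int = 10_000) -> float:
    """Normalized difference (a - b) / (a + b), scaled to an integer-like range."""
    return scale * (a - b) / (a + b)

# Example surface reflectance values (within the -100..16000 MODIS range)
red, nir, swir = 1200.0, 3400.0, 900.0
ndvi = scaled_index(nir, red)    # vegetation index from NIR and red
ndwi = scaled_index(nir, swir)   # water index from NIR and SWIR
print(round(ndvi), round(ndwi))
```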
### Data Splits
The data is split into train (800 rows) and test (200 rows).
## Dataset Creation
### Source Data
[MODIS MOD44W](https://lpdaac.usgs.gov/products/mod44wv006/)
[MODIS MOD09GA](https://lpdaac.usgs.gov/products/mod09gav006/)
[MODIS MOD09GQ](https://lpdaac.usgs.gov/products/mod09gqv006/)
## Annotation process
Labels were created using the MOD44W Collection 6 product to designate pixels in the MODIS surface-reflectance products as water or not-water.
|
nasa-cisto-data-science-group/modis-lake-powell-toy-dataset
|
[
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] |
2023-03-09T14:45:40+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"]}
|
2023-05-04T00:39:33+00:00
|
dea58326a42f399b6c22b00e89c2cfd7c4c92db8
|
# Dataset Card for "MedQuAD_47441_Question_Answer_Pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AnonymousSub/MedQuAD_47441_Question_Answer_Pairs
|
[
"region:us"
] |
2023-03-09T15:02:27+00:00
|
{"dataset_info": {"features": [{"name": "Questions", "dtype": "string"}, {"name": "Answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24216623, "num_examples": 47441}], "download_size": 9258859, "dataset_size": 24216623}}
|
2023-03-09T15:02:29+00:00
|
c9490aa39c878e2353c357407880795aed55b77d
|
# Dataset Card for "OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_ns_5046_OE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OK_VQA_google_flan_t5_xxl_mode_VQAv2_visclues_detection_caption_module_ns_5046_OE
|
[
"region:us"
] |
2023-03-09T15:04:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 919899, "num_examples": 5046}], "download_size": 356578, "dataset_size": 919899}}
|
2023-03-09T17:56:24+00:00
|
cb0dab9f8b6c0e9ccf6801079699b5a2beaa2047
|
# Dataset Card for "tib_wip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gigant/tib_wip
|
[
"region:us"
] |
2023-03-09T15:05:59+00:00
|
{"dataset_info": {"features": [{"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "video_url", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "release_year", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "contributors", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "transcript", "dtype": "string"}, {"name": "transcript_segments", "sequence": [{"name": "id", "dtype": "int32"}, {"name": "seek", "dtype": "int32"}, {"name": "start", "dtype": "float32"}, {"name": "end", "dtype": "float32"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "int32"}, {"name": "temperature", "dtype": "float32"}, {"name": "avg_logprob", "dtype": "float32"}, {"name": "compression_ratio", "dtype": "float32"}, {"name": "no_speech_prob", "dtype": "float32"}]}, {"name": "keyframes", "sequence": [{"name": "slide", "dtype": "string"}, {"name": "frames", "sequence": "int32"}, {"name": "timestamp", "sequence": "float32"}]}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1262918268, "num_examples": 11043}], "download_size": 607894050, "dataset_size": 1262918268}}
|
2023-03-23T00:50:47+00:00
|