sha (string, 40) | text (string, 0–13.4M) | id (string, 2–117) | tags (list) | created_at (string, 25) | metadata (string, 2–31.7M) | last_modified (string, 25) |
---|---|---|---|---|---|---|
b9c046b2bb0b97ee11c86b9647f89c15f183c64f
|
An imitation learning environment for the atari_frostbite environment, sample for the policy atari_2B_atari_frostbite_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_frostbite_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:26:26+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:27:18+00:00
|
d2cb6068cb0ca6864621ae61d5508d11c6757cb0
|
An imitation learning environment for the atari_gopher environment, sample for the policy atari_2B_atari_gopher_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_gopher_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:32:46+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:33:46+00:00
|
bbf9e1317a1558c8af39e315301a74f1ed3e6349
|
An imitation learning environment for the atari_gravitar environment, sample for the policy atari_2B_atari_gravitar_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_gravitar_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:40:13+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:41:08+00:00
|
95ab9a687849050caf6094336c84c3b4cdbf4e49
|
An imitation learning environment for the atari_hero environment, sample for the policy atari_2B_atari_hero_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_hero_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:47:57+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:48:43+00:00
|
8f435ec3d8d968a2a9a5e606800aa127d624d672
|
# Dataset Card for BlogSet-BR
## Dataset Description
- **Homepage:** https://www.inf.pucrs.br/linatural/wordpress/recursos-e-ferramentas/blogset-br/
- **Leaderboard:** Grupo de Processamento da Linguagem Natural da PUC-RS
- **Point of Contact:** Official website
### Dataset Summary
This dataset was built from the data made available by the Grupo de Processamento da Linguagem Natural (Natural Language Processing Group) at PUC-RS. The official site can be found here: https://www.inf.pucrs.br/linatural/wordpress/recursos-e-ferramentas/blogset-br/
### Supported Tasks and Leaderboards
Suitable for language model training.
### Languages
Brazilian Portuguese
#### Initial Data Collection and Normalization
Information on how the dataset was created can be found here: https://www.inf.pucrs.br/linatural/wordpress/recursos-e-ferramentas/blogset-br/
### Licensing Information
Apache 2.0
### Contributions
This page is merely a Hugging Face-format packaging of the work done by the PUC-RS NLP team.
### Huggingface format
The following code was used to create the dataset. Structural decisions:
1. Only the text column (column 4) was used.
2. A series of cleaning adjustments was applied to the text, as can be seen in the code.
3. Each line was kept within a limit of 512 words.
Gist: https://gist.github.com/rdemorais/ce2e708af4c07aba47930bc12ed92472
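The 512-word line limit (decision 3) can be sketched roughly as follows; `chunk_words` is an illustrative helper written for this card, not the actual code from the gist:

```python
def chunk_words(text: str, limit: int = 512) -> list[str]:
    # Split a cleaned post into lines of at most `limit` whitespace-separated words.
    words = text.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]

# A 1000-word post becomes two lines: 512 words and 488 words.
parts = chunk_words(" ".join(["palavra"] * 1000))
```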
|
thegoodfellas/blogset-br
|
[
"size_categories:1M<n<10M",
"language:pt",
"license:apache-2.0",
"region:us"
] |
2023-02-21T15:50:05+00:00
|
{"language": ["pt"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "pretty_name": "Blogset BR"}
|
2023-02-21T21:53:47+00:00
|
910b430979efa6990b6a2346f9bf0c8a75241792
|
An imitation learning environment for the atari_icehockey environment, sample for the policy atari_2B_atari_icehockey_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_icehockey_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:54:51+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:55:27+00:00
|
fd67f1921093ed31b2d823e191e78875232a6c43
|
An imitation learning environment for the atari_jamesbond environment, sample for the policy atari_2B_atari_jamesbond_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_jamesbond_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:01:19+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:02:42+00:00
|
b3dba1ae680fce5482413bf911dba26b679f7a0e
|
An imitation learning environment for the atari_kangaroo environment, sample for the policy atari_2B_atari_kangaroo_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_kangaroo_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:09:09+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:10:11+00:00
|
1748e48bf30a5ad16c45ab3c60213d3e0bf15a4e
|
An imitation learning environment for the atari_krull environment, sample for the policy atari_2B_atari_krull_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_krull_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:16:44+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:17:56+00:00
|
14c70843bab946e6fe9d3433c719aab080454880
|
About Dataset
This dataset is taken from https://www.kaggle.com/datasets/bolattleubayev/nursultan-nazarbayev-speech-dataset
The dataset consists of 9,341 manually labelled WAV files (around 14.8 hours) taken from published speeches of Nursultan Nazarbayev, the First President of the Republic of Kazakhstan. 7,919 files (12.1 hours) are in Russian and 1,422 files (2.7 hours) are in Kazakh. Minimum duration: 0.42 sec; maximum: 13.00 sec; mean: 5.71 sec.
The dataset was collected as part of a research effort of the Nazarbayev University Human-Robot Interaction Lab by Bolat Tleubayev, Ruslan Polichshuk, Zhanel Zhexenova, and Anara Sandygulova.
This is an ongoing open-source project, so the dataset may expand in the future.
The .csv files use '|' instead of ',' as the field separator, to avoid confusion with punctuation in the text.
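Given the '|' separator noted above, the files can be parsed with Python's standard csv module; the sample row and column layout below are hypothetical, since the card does not document the actual columns:

```python
import csv
import io

# Hypothetical row: (filename, language, transcript). The real column
# layout is not documented in this card.
sample = "speech_0001.wav|ru|Example transcript, with a comma"

reader = csv.reader(io.StringIO(sample), delimiter="|")  # '|' instead of ','
rows = list(reader)
# Each row splits cleanly even when the transcript contains commas.
```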
|
Shirali/N_Nazarbayev_Speech_corpus
|
[
"license:cc0-1.0",
"region:us"
] |
2023-02-21T16:23:03+00:00
|
{"license": "cc0-1.0"}
|
2023-02-22T17:33:57+00:00
|
62a6615f1955648f927541ac3570029619dd48cd
|
An imitation learning environment for the atari_kongfumaster environment, sample for the policy atari_2B_atari_kongfumaster_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_kongfumaster_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:24:17+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:25:14+00:00
|
0edc9437459f81e54118765e7d826d714f546123
|
An imitation learning environment for the atari_montezuma environment, sample for the policy atari_2B_atari_montezuma_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_montezuma_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:31:42+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:32:32+00:00
|
a9e01ca7953a654a8559e139ab01bc4991b8eb62
|
An imitation learning environment for the atari_mspacman environment, sample for the policy atari_2B_atari_mspacman_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_mspacman_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:38:31+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:39:30+00:00
|
8756ca81f6c2aada2e7a6176795f58fd0efe17af
|
An imitation learning environment for the atari_namethisgame environment, sample for the policy atari_2B_atari_namethisgame_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_namethisgame_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:45:28+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:46:14+00:00
|
00351445810c2c701b1f2a280d0a34bbf5fa89a9
|
An imitation learning environment for the atari_phoenix environment, sample for the policy atari_2B_atari_phoenix_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_phoenix_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T16:51:48+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T16:54:05+00:00
|
dd9595d00ba533605177c5d62b2f4622750d6646
|
# Dataset Card for "smithsonian_butterflies_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hotfinda/smithsonian_butterflies_subset
|
[
"region:us"
] |
2023-02-21T16:58:41+00:00
|
{"dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "image_alt", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "scientific_name", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "taxonomy", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "locality", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "usnm_no", "dtype": "string"}, {"name": "guid", "dtype": "string"}, {"name": "edan_url", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "stage", "dtype": "float64"}, {"name": "image", "dtype": "image"}, {"name": "image_hash", "dtype": "string"}, {"name": "sim_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 237753960.0, "num_examples": 1000}], "download_size": 237446351, "dataset_size": 237753960.0}}
|
2023-02-21T16:59:11+00:00
|
974219a4e44ae5ecdba6e3ae42dba27eca911a26
|
An imitation learning environment for the atari_pitfall environment, sample for the policy atari_2B_atari_pitfall_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_pitfall_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:00:10+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:00:54+00:00
|
b1ca27589a10a019f92e63e71f5efc8913a08159
|
nielsgl/dreambooth-ace
|
[
"license:mit",
"region:us"
] |
2023-02-21T17:07:37+00:00
|
{"license": "mit"}
|
2023-03-24T10:47:57+00:00
|
|
bb9a1dda6ffd86b32384b7de5b77f3b3b45283e0
|
An imitation learning environment for the atari_privateye environment, sample for the policy atari_2B_atari_privateye_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_privateye_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:14:08+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:15:03+00:00
|
680cf894d52eb65dfd9a1405dcf0ebfcef9b1656
|
An imitation learning environment for the atari_qbert environment, sample for the policy atari_2B_atari_qbert_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_qbert_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:21:02+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:21:39+00:00
|
768b0061fd2949b96d6a941a865137a5afea2e71
|
An imitation learning environment for the atari_riverraid environment, sample for the policy atari_2B_atari_riverraid_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_riverraid_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:28:21+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:29:14+00:00
|
1aeb7b2090f442881daa6ce67c688419742c3c12
|
An imitation learning environment for the atari_roadrunner environment, sample for the policy atari_2B_atari_roadrunner_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_roadrunner_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:35:34+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:36:10+00:00
|
9259a62218165a4c4c971de8a0e2a0a9a8cf8f17
|
An imitation learning environment for the atari_robotank environment, sample for the policy atari_2B_atari_robotank_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_robotank_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:42:55+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:43:49+00:00
|
536155fd0d2282076a369d27374c74bd68f84f6e
|
An imitation learning environment for the atari_seaquest environment, sample for the policy atari_2B_atari_seaquest_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_seaquest_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:49:45+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:50:23+00:00
|
c800301a1388ccfde9409838f45f2cfe06024548
|
An imitation learning environment for the atari_skiing environment, sample for the policy atari_2B_atari_skiing_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_skiing_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T17:56:35+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:57:29+00:00
|
6ccc62e107cadb979bb799bdfc99d305147d7bf1
|
An imitation learning environment for the atari_solaris environment, sample for the policy atari_2B_atari_solaris_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_solaris_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:04:03+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:05:14+00:00
|
25600e8556dcdc22c32f221ec4394d219fad9466
|
An imitation learning environment for the atari_spaceinvaders environment, sample for the policy atari_2B_atari_spaceinvaders_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_spaceinvaders_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:10:43+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:11:49+00:00
|
09538f1d7efeeb1e44878bb36cf572058e4798d1
|
An imitation learning environment for the atari_stargunner environment, sample for the policy atari_2B_atari_stargunner_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_stargunner_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:17:28+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:18:11+00:00
|
145cbdb9a725d8a22b4b1be3b4b46899309757d0
|
An imitation learning environment for the atari_tennis environment, sample for the policy atari_2B_atari_tennis_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_tennis_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:24:20+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:25:30+00:00
|
bbc5108726a9cf0cc9ccf71140b100ea3ca0b7a6
|
An imitation learning environment for the atari_timepilot environment, sample for the policy atari_2B_atari_timepilot_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_timepilot_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:31:00+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:32:51+00:00
|
11a32e3725358c0702ce11a725c487131715154c
|
An imitation learning environment for the atari_tutankham environment, sample for the policy atari_2B_atari_tutankham_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_tutankham_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:38:33+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:40:10+00:00
|
cdc71f25a76eac312a6c3347e351a2c785f0fa02
|
An imitation learning environment for the atari_upndown environment, sample for the policy atari_2B_atari_upndown_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_upndown_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:47:28+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:48:32+00:00
|
ea3c08c3b0dade898136ba912f64544dcd25ac37
|
An imitation learning environment for the atari_venture environment, sample for the policy atari_2B_atari_venture_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_venture_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T18:54:32+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T18:55:16+00:00
|
959a0bcbfa06df12f14c5530d3aaa6859f342dc5
|
An imitation learning environment for the atari_videopinball environment, sample for the policy atari_2B_atari_videopinball_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_videopinball_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T19:01:50+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T19:02:43+00:00
|
fdf505c8d6afb78e62e71b58c012f80657fb7289
|
An imitation learning environment for the atari_wizardofwor environment, sample for the policy atari_2B_atari_wizardofwor_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_wizardofwor_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T19:08:26+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T19:09:20+00:00
|
9862df54ca61d77cc881660cc339d3c4daaf690e
|
An imitation learning environment for the atari_yarsrevenge environment, sample for the policy atari_2B_atari_yarsrevenge_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_yarsrevenge_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T19:14:58+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T19:15:52+00:00
|
b788f7d8c47455a0d24b6c04a1bfed0c2c79c895
|
An imitation learning environment for the atari_zaxxon environment, sample for the policy atari_2B_atari_zaxxon_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_zaxxon_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T19:21:45+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T19:22:40+00:00
|
0bef29bf7e2719b2c8a44835b96d91d64c836b2f
|
# Dataset Card for "product-10k-part1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
matterr/product-10k-part1
|
[
"region:us"
] |
2023-02-21T19:47:48+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 751920.0, "num_examples": 2}], "download_size": 754822, "dataset_size": 751920.0}}
|
2023-02-21T20:35:59+00:00
|
7d31f770123adbf09829cc14dbb707a72a98d741
|
# Dataset Card for "wikipedia.reorder.natural.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.natural.de
|
[
"region:us"
] |
2023-02-21T19:55:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 0, "dataset_size": 2385745587}}
|
2023-02-21T20:24:01+00:00
|
234801d97bbee2422cbe97bc9ba6f286e4d3f642
|
# Dataset Card for "wikipedia.reorder.svo.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.svo.de
|
[
"region:us"
] |
2023-02-21T20:00:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1063402086, "dataset_size": 2385745587}}
|
2023-02-21T20:02:49+00:00
|
765d7785ff409c71c8e4c0347f8daa7b29474f3d
|
# Dataset Card for "wikipedia.reorder.vos.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.vos.de
|
[
"region:us"
] |
2023-02-21T20:06:21+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1068076681, "dataset_size": 2385745587}}
|
2023-02-21T20:08:01+00:00
|
88c570e2c28e45290aa502e34c174a9516cd3d36
|
This is a very bad dataset. A better one is coming soon.
|
breadlicker45/youtube-comments
|
[
"region:us"
] |
2023-02-21T20:07:34+00:00
|
{}
|
2023-02-22T20:45:23+00:00
|
785ea48ca190d3e15adde61cec84eae357ad1b2f
|
# Binhvq News
- Source: https://github.com/binhvq/news-corpus
- Num examples: 19,365,593
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/binhvq_news_vi")
```
|
vietgpt/binhvq_news_vi
|
[
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:vi",
"LM",
"region:us"
] |
2023-02-21T20:08:06+00:00
|
{"language": ["vi"], "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8211350978.574438, "num_examples": 19365593}], "download_size": 4780706833, "dataset_size": 8211350978.574438}, "tags": ["LM"]}
|
2023-03-30T17:58:53+00:00
|
6730e6512892378562b4055f6483a9691139d937
|
# Dataset Card for "wikipedia.reorder.osv.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.osv.de
|
[
"region:us"
] |
2023-02-21T20:09:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1065735715, "dataset_size": 2385745587}}
|
2023-02-21T20:11:06+00:00
|
7f0c663fabfbd4e1bd0f5bb352257e487ecc2f71
|
# Dataset Card for "wikipedia.reorder.sov.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.sov.de
|
[
"region:us"
] |
2023-02-21T20:12:27+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1068439913, "dataset_size": 2385745587}}
|
2023-02-21T20:14:11+00:00
|
177b5220bea8df17d089a5cdd714eae93d17f4d8
|
# Dataset Card for "wikipedia.reorder.vso.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.vso.de
|
[
"region:us"
] |
2023-02-21T20:15:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1063715741, "dataset_size": 2385745587}}
|
2023-02-21T20:16:45+00:00
|
28351f3479088c8c4ac5bfdc185773e387db93a0
|
# Dataset Card for "wikipedia.reorder.ovs.de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.ovs.de
|
[
"region:us"
] |
2023-02-21T20:17:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2385745587, "num_examples": 1137317}], "download_size": 1064795572, "dataset_size": 2385745587}}
|
2023-02-21T20:27:31+00:00
|
64ecf097c8db0f6dcfc86046e3450de8da5a3e41
|
# Wikipedia
- Source: https://huggingface.co/datasets/wikipedia
- Num examples: 1,281,412
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikipedia_vi")
```
|
vietgpt/wikipedia_vi
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:vi",
"LM",
"region:us"
] |
2023-02-21T20:39:38+00:00
|
{"language": ["vi"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "revid", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1053551922.960177, "num_examples": 1284930}], "download_size": 569515706, "dataset_size": 1053551922.960177}, "tags": ["LM"]}
|
2023-09-16T04:11:18+00:00
|
35e7ad2906ea22bfb293e0b82ca1f153fa8bb399
|
# Dataset Card for "patched_test_p_10_f_SPOUT_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_10_f_SPOUT_v4
|
[
"region:us"
] |
2023-02-21T20:40:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 537304297, "num_examples": 1675599}], "download_size": 54326651, "dataset_size": 537304297}}
|
2023-02-21T20:41:00+00:00
|
f9e872d95e3585575d77e0630461d525f4fde0f2
|
# Dataset Card for "patched_test_p_20_f_SPOUT_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_20_f_SPOUT_v4
|
[
"region:us"
] |
2023-02-21T20:41:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 517784141, "num_examples": 1607399}], "download_size": 52108156, "dataset_size": 517784141}}
|
2023-02-21T20:41:58+00:00
|
b10bf4a93c64501fdf23e4fc98e6bafc20d2c244
|
# Dataset Card for "patched_test_p_40_f_SPOUT_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_40_f_SPOUT_v4
|
[
"region:us"
] |
2023-02-21T20:42:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 478745882, "num_examples": 1470999}], "download_size": 0, "dataset_size": 478745882}}
|
2023-02-21T20:43:09+00:00
|
a087db33e07fb6087eec13c8267f7e324b7f49d5
|
# Dataset Card for "patched_test_p_10_f_ATCaseOTCase_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_10_f_ATCaseOTCase_v4
|
[
"region:us"
] |
2023-02-21T20:43:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 52568328, "num_examples": 143667}], "download_size": 5044378, "dataset_size": 52568328}}
|
2023-02-21T20:43:59+00:00
|
a43f98cad170ead879adbf26c415b1527522d9c0
|
# Dataset Card for "patched_test_p_20_f_ATCaseOTCase_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_20_f_ATCaseOTCase_v4
|
[
"region:us"
] |
2023-02-21T20:44:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 50950318, "num_examples": 139207}], "download_size": 4851567, "dataset_size": 50950318}}
|
2023-02-21T20:44:40+00:00
|
72f47cf00e5c7f885a4e1ed1cbd6c020eaaa5c38
|
# Dataset Card for "patched_test_p_40_f_ATCaseOTCase_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_40_f_ATCaseOTCase_v4
|
[
"region:us"
] |
2023-02-21T20:46:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 47714312, "num_examples": 130287}], "download_size": 4461993, "dataset_size": 47714312}}
|
2023-02-21T20:46:53+00:00
|
4b8f55e5df3fa7117687066e015c466aa523e927
|
# Dataset Card for "patched_test_p_40_f_membrane_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_40_f_membrane_v4
|
[
"region:us"
] |
2023-02-21T20:47:15+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1946930959, "num_examples": 3134581}], "download_size": 162353372, "dataset_size": 1946930959}}
|
2023-02-21T20:47:43+00:00
|
f6ea6e9fa5ec42907140665c6e0f4ec7a72aaf9e
|
# Dataset Card for "patched_test_p_80_f_membrane_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_80_f_membrane_v4
|
[
"region:us"
] |
2023-02-21T20:49:23+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1802548319, "num_examples": 2865341}], "download_size": 151479669, "dataset_size": 1802548319}}
|
2023-02-21T20:49:52+00:00
|
eb22e64311cc724864cbc610d668d0084326160a
|
# Dataset Card for "patched_test_p_150_f_membrane_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_150_f_membrane_v4
|
[
"region:us"
] |
2023-02-21T20:50:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1552272870, "num_examples": 2394171}], "download_size": 128097844, "dataset_size": 1552272870}}
|
2023-02-21T20:50:32+00:00
|
bc0824e81679ac2c895d0a0f9a4eb447afbd8c72
|
# Dataset Card for "patched_test_p_200_f_membrane_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_200_f_membrane_v4
|
[
"region:us"
] |
2023-02-21T20:50:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1371458168, "num_examples": 2057621}], "download_size": 112398698, "dataset_size": 1371458168}}
|
2023-02-21T20:51:13+00:00
|
81d3b7c52745184913fea4455689c8127d8dc47f
|
# Wikipedia
- Source: https://huggingface.co/datasets/wikipedia
- Num examples: 6,623,239
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikipedia_en")
```
|
vietgpt/wikipedia_en
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"LM",
"region:us"
] |
2023-02-21T20:52:04+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21102365479, "num_examples": 6623239}], "download_size": 12161597141, "dataset_size": 21102365479}, "tags": ["LM"]}
|
2023-03-30T17:35:12+00:00
|
fe6c56666c2907859caa531e8eb4dd35717d8b20
|
# OpenSubtitles
- Source: https://huggingface.co/datasets/open_subtitles
- Num examples: 3,505,276
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/open_subtitles_envi")
```
- Format for Translation task
```python
import random

def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="<|endofprompt|>",
end_key="<|endoftext|>",
en2vi=True,
):
if en2vi:
if random.random() < 0.5:
instruction = "Translate the following sentences from English into Vietnamese."
else:
instruction = "Dịch các câu sau từ tiếng Anh sang tiếng Việt."
input = sample['en'].strip()
response = sample['vi'].strip()
else:
if random.random() < 0.5:
instruction = "Translate the following sentences from Vietnamese into English."
else:
instruction = "Dịch các câu sau từ tiếng Việt sang tiếng Anh."
input = sample['vi'].strip()
response = sample['en'].strip()
return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Dịch các câu sau từ tiếng Anh sang tiếng Việt.
Input:
Line up, I say!
<|endofprompt|>
Sắp hàng, nghe chưa!
<|endoftext|>
"""
```
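Built standalone, the same template for one pair expands to a single training string; a minimal sketch (the `sample` pair here is illustrative, taken from the example above):

```python
# Hypothetical example pair; real pairs come from the dataset's "en"/"vi" columns.
sample = {"en": "Line up, I say!", "vi": "Sắp hàng, nghe chưa!"}

# Fixed en->vi instruction for this sketch (the real template picks one at random).
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n"
    "### Instruction:\n"
    "Dịch các câu sau từ tiếng Anh sang tiếng Việt.\n"
    "Input:\n"
    f"{sample['en'].strip()}\n"
    "<|endofprompt|>\n"
    f"{sample['vi'].strip()}\n"
    "<|endoftext|>"
)
print(prompt)
```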
|
vietgpt/open_subtitles_envi
|
[
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:vi",
"LM",
"region:us"
] |
2023-02-21T21:01:10+00:00
|
{"language": ["en", "vi"], "size_categories": ["1M<n<10M"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "vi", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 280063489, "num_examples": 3505276}], "download_size": 176803145, "dataset_size": 280063489}, "tags": ["LM"]}
|
2023-07-03T16:52:41+00:00
|
cb9b5b1d39ea76d36db00322c4b8d984e388f3cf
|
alignment/mm-cot
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-21T21:03:17+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T01:41:00+00:00
|
|
62aff9ab74aa4b73a36c45a9f6c91279a57db5d3
|
# Ted Talks
- Source: https://huggingface.co/datasets/ted_talks_iwslt
- Num examples: 2,293
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/ted_talks_iwslt_en")
```
|
vietgpt/ted_talks_iwslt_en
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"LM",
"region:us"
] |
2023-02-21T21:22:06+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27242341, "num_examples": 2293}], "download_size": 15366817, "dataset_size": 27242341}, "tags": ["LM"]}
|
2023-03-30T17:28:08+00:00
|
b33d09325104597f1ea2dc19caffe68b79f8998d
|
krishnagarg09/SemEval2016Task6
|
[
"license:mit",
"region:us"
] |
2023-02-21T21:51:48+00:00
|
{"license": "mit"}
|
2023-02-21T21:58:21+00:00
|
|
4cc7f8afebc04cfae966086ee9286148c6c1001f
|
- This dataset was downloaded from PubMed.
- It contains abstracts and titles related to lung cancer.
- The data was cleaned before uploading.
- It can be used for any NLP task, such as domain adaptation.
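As a sketch of the domain-adaptation use case, one might filter and lightly normalize the abstracts into a plain-text corpus. The record layout here (`"title"`/`"abstract"` keys) and the sample records are assumptions for illustration, not the dataset's actual schema:

```python
import re

# Hypothetical records standing in for rows of the PubMed dump.
records = [
    {"title": "EGFR mutations in NSCLC", "abstract": "We studied  lung cancer patients."},
    {"title": "Unrelated cardiology study", "abstract": "Heart failure cohort."},
]

def normalize(text: str) -> str:
    """Collapse repeated whitespace left over from cleaning."""
    return re.sub(r"\s+", " ", text).strip()

# Keep only lung-related entries, joined as title + abstract.
corpus = [
    normalize(r["title"] + ". " + r["abstract"])
    for r in records
    if "lung" in r["abstract"].lower() or "lung" in r["title"].lower()
]
print(corpus)
```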
|
Gaborandi/Lung_Cancer_pubmed_abstracts
|
[
"region:us"
] |
2023-02-21T22:14:05+00:00
|
{}
|
2023-02-21T23:20:11+00:00
|
049b6d3855cf4f9c75489457e5e7f66864501348
|
- This dataset was downloaded from PubMed.
- It contains abstracts and titles related to type 2 diabetes mellitus.
- The data was cleaned before uploading.
- It can be used for any NLP task, such as domain adaptation.
|
Gaborandi/diabetes_mellitus_type2_pubmed_abstracts
|
[
"region:us"
] |
2023-02-21T22:22:15+00:00
|
{}
|
2023-02-21T23:10:29+00:00
|
1bd89a6d51f3a71656e17e3aec8c209c49b7ba10
|
# Dataset Card for "trivia_qa_wiki_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/trivia_qa_wiki
|
[
"region:us"
] |
2023-02-21T22:25:14+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question_source", "dtype": "string"}, {"name": "entity_pages", "sequence": [{"name": "doc_source", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "wiki_context", "dtype": "string"}]}, {"name": "search_results", "sequence": [{"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "rank", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "search_context", "dtype": "string"}]}, {"name": "answer", "struct": [{"name": "aliases", "sequence": "string"}, {"name": "normalized_aliases", "sequence": "string"}, {"name": "matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_value", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 430166050, "num_examples": 7993}], "download_size": 234775285, "dataset_size": 430166050}}
|
2023-02-21T22:25:45+00:00
|
160ae214bccc54d9b716e53eb6b0b50ee50ec39c
|
# Dataset Card for "patched_test_p_10_f_SPOUT_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_10_f_SPOUT_m1_predictions
|
[
"region:us"
] |
2023-02-22T00:19:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 544006693, "num_examples": 1675599}], "download_size": 55789813, "dataset_size": 544006693}}
|
2023-02-22T00:19:13+00:00
|
5cf6eb2c2622d5ee0ca97df42c62bfd3fd1fabc4
|
# Dataset Card for "nldv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fifi777/nldv
|
[
"region:us"
] |
2023-02-22T00:20:52+00:00
|
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3231756186.42971, "num_examples": 235728}, {"name": "valid", "num_bytes": 65957285.57029006, "num_examples": 4811}], "download_size": 24134199, "dataset_size": 3297713472.0}}
|
2023-02-24T05:40:14+00:00
|
6a9105995137339c23500017e0adc23f779bfed1
|
# Dataset Card for "rlhf-qa-conditional-generation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kastan/rlhf-qa-conditional-generation
|
[
"region:us"
] |
2023-02-22T00:28:08+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 27303.451327433628, "num_examples": 90}, {"name": "valid", "num_bytes": 6977.548672566371, "num_examples": 23}], "download_size": 6067, "dataset_size": 34281.0}}
|
2023-03-06T20:36:48+00:00
|
af96311d66ccf05456d5c9018f3a5037b9d6bb5c
|
# Dataset Card for "simpsons-blip-captions-pil"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jinmel/simpsons-blip-captions-pil
|
[
"region:us"
] |
2023-02-22T02:02:08+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 27091297.0, "num_examples": 755}], "download_size": 26505319, "dataset_size": 27091297.0}}
|
2023-02-22T02:17:04+00:00
|
0d9a0ccf4a30b81e5b1867894af5259de4fdec02
|
Plachta/GLIP-test-images
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-22T03:36:33+00:00
|
{"license": "apache-2.0"}
|
2023-02-22T07:00:11+00:00
|
|
6a809b2996ee5319983402858dd76796354c3dc3
|
Metahunter/ddpm-butterflies-128
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-02-22T03:56:57+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2023-02-22T03:56:57+00:00
|
|
df984f873641538b8e9da6e870e6faa8597f0301
|
# Dataset Card for "1.predict_last_word"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.1.predict_last_word
|
[
"region:us"
] |
2023-02-22T06:22:11+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4628980, "num_examples": 39905}, {"name": "validation", "num_bytes": 98368, "num_examples": 848}, {"name": "test", "num_bytes": 200680, "num_examples": 1730}], "download_size": 0, "dataset_size": 4928028}}
|
2023-02-22T11:26:30+00:00
|
052bb3029a972578757f03071e53af45849597ae
|
Hobospider132/Mahiru-Proto
|
[
"license:gpl-3.0",
"region:us"
] |
2023-02-22T07:38:38+00:00
|
{"license": "gpl-3.0", "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "line", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 285269, "num_examples": 5243}], "download_size": 155441, "dataset_size": 285269}}
|
2023-04-21T12:25:14+00:00
|
|
a1d33300f1b3d2d2b650725ee0c0c10256faa031
|
# m0_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ref_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ref_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m0_fine_tuning_ref_cmbert_io")
```
|
nlpso/m0_fine_tuning_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T07:59:18+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T07:59:33+00:00
|
7e298b9cb1c1b1663e48a9a3eafadff9577a59dd
|
# m0_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ref_ptrn_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m0_fine_tuning_ref_ptrn_cmbert_io")
```
|
nlpso/m0_fine_tuning_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T07:59:34+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T07:59:50+00:00
|
e2bfd9d3f3402d9b73eca0e6ded84296137bd421
|
# m0_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ocr_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ocr_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m0_fine_tuning_ocr_cmbert_io")
```
|
nlpso/m0_fine_tuning_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T07:59:51+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:00:08+00:00
|
9ffe7505d9ffbd4b89bffe83b64ae2aa22d26726
|
# m0_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M0
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ocr_ptrn_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m0_fine_tuning_ocr_ptrn_cmbert_io")
```
|
nlpso/m0_fine_tuning_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:00:09+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:00:25+00:00
|
b7fdc0584cf7557e3243a13062649b4ff397ac22
|
# m1_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* Level 1 : [nlpso/m1_ind_layers_ref_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_fine_tuning_ref_cmbert_io")
```
|
nlpso/m1_fine_tuning_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:00:26+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:38:54+00:00
|
58f017c9f7c788fe1d270e77529dac55e50ec80e
|
# m1_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* Level 1 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_fine_tuning_ref_ptrn_cmbert_io")
```
|
nlpso/m1_fine_tuning_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:00:43+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:39:11+00:00
|
9515895676b077111c27ddb25b6810fa2ce2d9f4
|
# m1_fine_tuning_ref_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* Level 1 : [nlpso/m1_ind_layers_ref_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
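The IOB2 format differs from the IO format used in the sibling datasets only in marking entity beginnings with a `B-` prefix; a small sketch of the relation (the tag sequence below is illustrative, not taken from the dataset):

```python
def iob2_to_io(tags):
    """Drop the B-/I- distinction, keeping only entity types (IO scheme)."""
    return [t if t == "O" else "I-" + t.split("-", 1)[1] for t in tags]

# Illustrative IOB2 tags for a directory entry: name, activity, street name.
iob2 = ["B-PER", "I-PER", "O", "B-ACT", "I-ACT", "B-LOC"]
print(iob2_to_io(iob2))
```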
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m1_fine_tuning_ref_cmbert_iob2")
```
|
nlpso/m1_fine_tuning_ref_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:01:00+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:39:28+00:00
|
e05852d2522a90922dc79f2226708d87fb8ed77e
|
# m1_fine_tuning_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directories' entries.
## Dataset parameters
* Approach : M1
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
    * Level 1 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
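
In the independent-layers approach [M1], each nesting level is a separate, token-aligned tag sequence, and one model is fine-tuned per level. A made-up entry (tags illustrative, not an actual dataset row) showing how the two layers line up, with a small helper that checks level-2 entities stay inside level-1 ones:

```python
# Each nesting level is a separate tag sequence over the same tokens.
tokens = ["Dupont", "boulanger", "12", "rue", "de", "Rivoli"]
level1 = ["B-PER", "B-ACT", "B-SPAT", "I-SPAT", "I-SPAT", "I-SPAT"]  # outer entities
level2 = ["O", "O", "B-CARDINAL", "B-LOC", "I-LOC", "I-LOC"]         # nested inside SPAT

def layers_nested(level1_tags, level2_tags):
    """True if every level-2 entity token also carries a level-1 entity tag."""
    return all(t1 != "O" for t1, t2 in zip(level1_tags, level2_tags) if t2 != "O")

print(layers_nested(level1, level2))  # True
```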
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m1_fine_tuning_ref_ptrn_cmbert_iob2")
```
|
nlpso/m1_fine_tuning_ref_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:01:17+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:39:45+00:00
|
d98466297b61f9ebd2bb0935ccffd6c8e8f85144
|
# m1_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
    * Level 1 : [nlpso/m1_ind_layers_ocr_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
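
This card uses the IO format, which keeps only the in/out distinction: every token of an entity gets the same `I-<TYPE>` tag, with no `B-` marker for entity starts, so two adjacent entities of the same type cannot be told apart. A rough sketch of the IOB2-to-IO conversion (the exact tag spelling in the dataset may differ):

```python
def iob2_to_io(tags):
    """Collapse IOB2 tags to IO by rewriting every B-/I- prefix to I-."""
    return ["I-" + t[2:] if t != "O" else "O" for t in tags]

print(iob2_to_io(["B-PER", "I-PER", "O", "B-ACT"]))
# ['I-PER', 'I-PER', 'O', 'I-ACT']
```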
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m1_fine_tuning_ocr_cmbert_io")
```
|
nlpso/m1_fine_tuning_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:01:33+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:03+00:00
|
93c57e74f8dda7b5da11bf541fc0d69aef22b3ea
|
# m1_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
    * Level 1 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m1_fine_tuning_ocr_ptrn_cmbert_io")
```
|
nlpso/m1_fine_tuning_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:01:50+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:20+00:00
|
1706a26a28960d69cc338419dcb2e1355831fdfc
|
# m1_fine_tuning_ocr_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
    * Level 1 : [nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m1_fine_tuning_ocr_cmbert_iob2")
```
|
nlpso/m1_fine_tuning_ocr_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:02:06+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:38+00:00
|
911cbf35e1a1d3f8fc63ab119dc2e9e0003e7229
|
# m1_fine_tuning_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the Independent NER layers approach [M1].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M1
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
    * Level 1 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1)
* Level 2 : [nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m1_fine_tuning_ocr_ptrn_cmbert_iob2")
```
|
nlpso/m1_fine_tuning_ocr_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:02:22+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:40:55+00:00
|
e13c439b5bb9039fe4e01ac52624e513412a7cc4
|
# m2m3_fine_tuning_ref_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ref_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ref_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
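
The joint-labelling approach [M2] collapses the two nesting levels into a single label per token. A rough sketch of how two tag layers might be flattened into joint labels (the separator and exact label format are assumptions, not the dataset's actual encoding):

```python
def join_levels(level1_tags, level2_tags, sep="+"):
    """Flatten two parallel tag layers into one joint label per token."""
    return [f"{t1}{sep}{t2}" for t1, t2 in zip(level1_tags, level2_tags)]

l1 = ["I-PER", "I-SPAT", "I-SPAT", "O"]
l2 = ["O", "I-LOC", "I-CARDINAL", "O"]
print(join_levels(l1, l2))
# ['I-PER+O', 'I-SPAT+I-LOC', 'I-SPAT+I-CARDINAL', 'O+O']
```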
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ref_cmbert_io")
```
|
nlpso/m2m3_fine_tuning_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:02:39+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:02:54+00:00
|
ad7af67ab3af8550be9a1c9eeba509f81f9ad06b
|
# m2m3_fine_tuning_ref_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ref_ptrn_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ref_ptrn_cmbert_io")
```
|
nlpso/m2m3_fine_tuning_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:02:55+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:03:10+00:00
|
16e2ff4dd53420c35c41ead732be8f3de2181e90
|
# m2m3_fine_tuning_ref_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ref_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ref_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ref_cmbert_iob2")
```
|
nlpso/m2m3_fine_tuning_ref_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:03:11+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:03:27+00:00
|
a177274163f64ee1eaf97c7ed93d0de2bfbd3261
|
# m2m3_fine_tuning_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ref_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ref_ptrn_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ref_ptrn_cmbert_iob2")
```
|
nlpso/m2m3_fine_tuning_ref_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:03:28+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:03:43+00:00
|
9be3fa1235e9381a04829250488e5feefe14400b
|
# m0_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform a **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M0
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ref_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ref_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
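
For downstream use, flat IO tags are typically grouped back into entity spans. A small sketch, assuming each maximal run of one tag forms one entity (note that IO cannot separate adjacent same-type entities):

```python
def io_tags_to_spans(tokens, tags):
    """Group maximal runs of identical IO tags into (type, text) spans."""
    spans, current, current_type = [], [], None
    for tok, tag in zip(tokens, tags):
        ent = tag[2:] if tag.startswith("I-") else None   # None means outside
        if ent != current_type:
            if current_type is not None:
                spans.append((current_type, " ".join(current)))
            current, current_type = [], ent
        if ent is not None:
            current.append(tok)
    if current_type is not None:                          # flush last run
        spans.append((current_type, " ".join(current)))
    return spans

tokens = ["Dupont", "boulanger", "rue", "de", "Rivoli"]
tags = ["I-PER", "I-ACT", "I-LOC", "I-LOC", "I-LOC"]
print(io_tags_to_spans(tokens, tags))
# [('PER', 'Dupont'), ('ACT', 'boulanger'), ('LOC', 'rue de Rivoli')]
```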
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m0_qualitative_analysis_ref_cmbert_io")
```
|
nlpso/m0_qualitative_analysis_ref_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:03:43+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:06:23+00:00
|
74ebc32ee7c7c702b25485d8e948e435b8adfbb3
|
# m2m3_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ocr_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ocr_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
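
The hierarchical approach [M3] can be thought of as tagging level-1 spans first, then running a second tagger inside each span. A toy sketch of that control flow (the stand-in level-2 tagger and data are illustrative only, not the actual training setup):

```python
def hierarchical_tag(tokens, level1_spans, tag_level2):
    """Tag level-2 entities inside each level-1 span via a callable tagger."""
    nested = []
    for start, end, ent_type in level1_spans:
        inner = tag_level2(tokens[start:end])      # level-2 tags for this span
        nested.append((ent_type, tokens[start:end], inner))
    return nested

# A trivial stand-in level-2 tagger: mark digit tokens as CARDINAL.
def toy_level2(span_tokens):
    return ["I-CARDINAL" if t.isdigit() else "O" for t in span_tokens]

tokens = ["Dupont", "rue", "de", "Rivoli", "12"]
print(hierarchical_tag(tokens, [(0, 1, "PER"), (1, 5, "SPAT")], toy_level2))
# [('PER', ['Dupont'], ['O']), ('SPAT', ['rue', 'de', 'Rivoli', '12'], ['O', 'O', 'O', 'I-CARDINAL'])]
```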
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ocr_cmbert_io")
```
|
nlpso/m2m3_fine_tuning_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:03:44+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:04:00+00:00
|
318b02b7a0b61427fbe5448a9d277b477fd634e1
|
# m0_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform a **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M0
* Dataset type : ground-truth
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ref_ptrn_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ref_ptrn_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m0_qualitative_analysis_ref_ptrn_cmbert_io")
```
|
nlpso/m0_qualitative_analysis_ref_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:00+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:06:25+00:00
|
11fa31d795706f2ec314864d40462657f5c745ff
|
# m2m3_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m2_joint_label_ocr_ptrn_cmbert_io)
* M3 : [nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ocr_ptrn_cmbert_io")
```
|
nlpso/m2m3_fine_tuning_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:01+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:04:16+00:00
|
2366871f90b5d11baf60c17810b032d5db8bed0d
|
# m0_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform a **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M0
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ocr_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ocr_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m0_qualitative_analysis_ocr_cmbert_io")
```
|
nlpso/m0_qualitative_analysis_ocr_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:16+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:06:27+00:00
|
5f3cfb133d13e8aeb82b2889cbea39305e95e4a9
|
# m2m3_fine_tuning_ocr_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ocr_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ocr_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ocr_cmbert_iob2")
```
|
nlpso/m2m3_fine_tuning_ocr_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:18+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:04:33+00:00
|
365c34e1e047594037d8f82ddbc8149c7de9c24a
|
# m0_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform a **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M0
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ocr_ptrn_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ocr_ptrn_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m0_qualitative_analysis_ocr_ptrn_cmbert_io")
```
|
nlpso/m0_qualitative_analysis_ocr_ptrn_cmbert_io
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:33+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:06:29+00:00
|
b5b6c9bf5d62859c37bdf48d329a0117c0091123
|
# m2m3_fine_tuning_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the joint labelling approach [M2] and the hierarchical approach [M3].
It contains 19th-century Paris trade directory entries.
## Dataset parameters
* Approach : M2 and M3
* Dataset type : noisy (Pero OCR)
* Tokenizer : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Tagging format : IOB2
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned models :
* M2 : [nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2)
* M3 : [nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2](https://huggingface.co/nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2)
## Entity types
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## How to use this dataset
```python
from datasets import load_dataset

train_dev_test = load_dataset("nlpso/m2m3_fine_tuning_ocr_ptrn_cmbert_iob2")
```
|
nlpso/m2m3_fine_tuning_ocr_ptrn_cmbert_iob2
|
[
"task_categories:token-classification",
"multilinguality:monolingual",
"language:fr",
"region:us"
] |
2023-02-22T08:04:34+00:00
|
{"language": ["fr"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"]}
|
2023-02-22T08:04:49+00:00
|
9b17ba5b7b152cd7db4d591b8f43f39326dd0a38
|
# Dataset Card for "nlp.2.predict_middle_word"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.2.predict_middle_word
|
[
"region:us"
] |
2023-02-22T08:04:56+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5711991, "num_examples": 44279}, {"name": "validation", "num_bytes": 111069, "num_examples": 861}, {"name": "test", "num_bytes": 229104, "num_examples": 1776}], "download_size": 0, "dataset_size": 6052164}}
|
2023-02-22T11:26:32+00:00
|
81b1f95c8cdc378b77cc693737b17f78afa58e2b
|
# Dataset Card for "nlp.3.reading_for_understanding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/nlp.3.reading_for_understanding
|
[
"region:us"
] |
2023-02-22T08:26:57+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 19646064, "num_examples": 10106}, {"name": "validation", "num_bytes": 398520, "num_examples": 205}], "download_size": 3916983, "dataset_size": 20044584}}
|
2023-02-23T02:10:06+00:00
|