sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1bb04cdb3fae76469e502db2bdeae0eea7f98e56 | # Dataset Card for "wikisql_codellama_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | defog/wikisql_codellama_1000 | [
"region:us"
]
| 2023-11-01T10:15:41+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6652069, "num_examples": 1000}], "download_size": 850430, "dataset_size": 6652069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T09:14:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikisql_codellama_1000"
More Information needed | [
"# Dataset Card for \"wikisql_codellama_1000\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikisql_codellama_1000\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikisql_codellama_1000\"\n\nMore Information needed"
]
|
8ff2a56b62d7cc52586eaf3e6d6a7084f94dddae | # Dataset Card for "wikipedia_20220620_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tinhpx2911/wikipedia_20220620_cleaned | [
"region:us"
]
| 2023-11-01T10:17:59+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1261183070, "num_examples": 1273468}], "download_size": 587594604, "dataset_size": 1261183070}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T10:22:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikipedia_20220620_cleaned"
More Information needed | [
"# Dataset Card for \"wikipedia_20220620_cleaned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia_20220620_cleaned\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia_20220620_cleaned\"\n\nMore Information needed"
]
|
b777e2d1d46c705cfdd0b07fb2a5cca695cdb4fe |
# Glot500 Corpus
A dataset of natural language data collected by combining more than 150
existing monolingual and multilingual datasets and by crawling known multilingual websites.
The focus of this dataset is on 500 extremely low-resource languages.
(More languages are still to be uploaded here.)
This dataset is used to train the [Glot500](https://huggingface.co/cis-lmu/glot500-base) model.
- **Homepage:** [homepage](https://github.com/cisnlp/Glot500)
- **Repository:** [github](https://github.com/cisnlp/Glot500)
- **Paper:** [acl](https://aclanthology.org/2023.acl-long.61/), [arxiv](https://arxiv.org/abs/2305.12182)
## Usage
Replace `nbl_Latn` with your specific language.
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/Glot500', 'nbl_Latn', split='train')
print(dataset[0])  # First row of the nbl_Latn train split
```
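If you only want to peek at a language without downloading the full split, the `datasets` streaming mode can help. The snippet below is a minimal sketch, assuming the hosted Arrow files support streaming; `nbl_Latn` is again just a placeholder config name.
```python
from datasets import load_dataset
from itertools import islice

# Sketch: stream the config instead of downloading it fully.
# Whether streaming works depends on how the data files are hosted;
# fall back to a regular load_dataset(...) call if it does not.
streamed = load_dataset('cis-lmu/Glot500', 'nbl_Latn', split='train', streaming=True)

# Peek at the first few rows without materializing the whole split.
for row in islice(streamed, 3):
    print(row)
```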
<details>
<summary>Click to show supported languages:</summary>
```
ton_Latn
nld_Latn
tzo_Latn
leh_Latn
cuk_Latn
ibg_Latn
uzb_Cyrl
jav_Latn
rap_Latn
zpa_Latn
bak_Cyrl
por_Latn
quy_Latn
ast_Latn
cos_Latn
fon_Latn
sna_Latn
dzo_Tibt
nob_Latn
nch_Latn
ish_Latn
che_Cyrl
ext_Latn
ldi_Latn
dtp_Latn
yue_Hani
kbd_Cyrl
mar_Deva
ron_Latn
acr_Latn
afb_Arab
sqi_Latn
eng_Latn
ksd_Latn
rus_Cyrl
bcl_Latn
ksh_Latn
hin_Latn
myv_Cyrl
kjh_Cyrl
sah_Cyrl
gkp_Latn
naq_Latn
tdt_Latn
rmn_Cyrl
kac_Latn
cak_Latn
kir_Cyrl
mps_Latn
yid_Hebr
dhv_Latn
srn_Latn
div_Thaa
mkd_Cyrl
idu_Latn
bre_Latn
bas_Latn
ven_Latn
pxm_Latn
wuu_Hani
mwl_Latn
miq_Latn
kss_Latn
wes_Latn
slv_Latn
hrv_Latn
hmo_Latn
som_Latn
bod_Tibt
pls_Latn
ile_Latn
luo_Latn
pus_Arab
fao_Latn
fas_Arab
swa_Latn
ifb_Latn
ary_Arab
tbz_Latn
hus_Latn
ote_Latn
ilo_Latn
ctd_Latn
abk_Cyrl
bqc_Latn
hil_Latn
pon_Latn
zul_Latn
als_Latn
pes_Arab
bpy_Beng
bos_Latn
sot_Latn
lin_Latn
tuk_Cyrl
gla_Latn
wln_Latn
apc_Arab
hin_Deva
hye_Armn
tir_Ethi
pap_Latn
gcf_Latn
cjk_Latn
pcd_Latn
tur_Latn
kon_Latn
mwn_Latn
izz_Latn
xho_Latn
lam_Latn
guc_Latn
aka_Latn
kea_Latn
sme_Latn
fat_Latn
csb_Latn
bak_Latn
djk_Latn
xav_Latn
oci_Latn
acm_Arab
rmy_Cyrl
bim_Latn
mck_Latn
krc_Cyrl
cym_Latn
lus_Latn
ncx_Latn
ngu_Latn
yom_Latn
tam_Taml
ajp_Arab
epo_Latn
fra_Latn
ita_Latn
seh_Latn
sxn_Latn
pdt_Latn
hbs_Latn
uzn_Cyrl
bhw_Latn
ksw_Mymr
pms_Latn
zlm_Latn
ami_Latn
qub_Latn
twx_Latn
tsz_Latn
kaa_Cyrl
toj_Latn
toh_Latn
kos_Latn
ogo_Latn
kab_Latn
pan_Guru
nan_Latn
aze_Latn
prk_Latn
ara_Arab
meu_Latn
nba_Latn
lvs_Latn
nbl_Latn
loz_Latn
crh_Latn
bci_Latn
kbp_Latn
tgl_Latn
kmb_Latn
hun_Latn
nzi_Latn
yao_Latn
arn_Latn
hyw_Cyrl
vmw_Latn
jbo_Latn
mzn_Arab
lzh_Hani
heb_Hebr
cce_Latn
bjn_Latn
gug_Latn
yor_Latn
ban_Latn
tlh_Latn
chv_Cyrl
sin_Sinh
ind_Latn
dua_Latn
sid_Latn
amh_Ethi
zea_Latn
kpg_Latn
crh_Cyrl
nyu_Latn
dln_Latn
ibo_Latn
tih_Latn
msa_Latn
nap_Latn
mgr_Latn
bik_Latn
srp_Cyrl
lao_Laoo
guw_Latn
kom_Cyrl
sop_Latn
nde_Latn
hui_Latn
cfm_Latn
new_Deva
kur_Arab
sco_Latn
nyk_Latn
lun_Latn
suz_Deva
wal_Latn
asm_Beng
rar_Latn
san_Deva
kaz_Cyrl
tog_Latn
iba_Latn
tuk_Latn
nso_Latn
run_Latn
ctu_Latn
bam_Latn
fin_Latn
gor_Latn
kmr_Latn
ben_Beng
pag_Latn
niu_Latn
xmf_Geor
ekk_Latn
tsc_Latn
lmo_Latn
mhr_Cyrl
plt_Latn
qvi_Latn
roh_Latn
oke_Latn
mah_Latn
tok_Latn
mgh_Latn
eml_Latn
urh_Latn
pnb_Arab
yua_Latn
nav_Latn
zne_Latn
bin_Latn
cat_Latn
gym_Latn
sat_Olck
snd_Arab
isl_Latn
rmn_Grek
bba_Latn
kal_Latn
aoj_Latn
qug_Latn
zai_Latn
guj_Gujr
min_Latn
tob_Latn
grc_Grek
hmn_Latn
ido_Latn
khm_Khmr
ikk_Latn
iku_Cans
tat_Latn
bel_Cyrl
dyu_Latn
que_Latn
efi_Latn
quw_Latn
nyn_Latn
wol_Latn
hne_Deva
zho_Hani
swh_Latn
bum_Latn
kua_Latn
ncj_Latn
ewe_Latn
hat_Latn
ina_Latn
mfe_Latn
ahk_Latn
srm_Latn
lug_Latn
ach_Latn
rmy_Latn
tpm_Latn
smo_Latn
mos_Latn
srd_Latn
srp_Latn
azb_Arab
ori_Orya
mzh_Latn
kur_Latn
phm_Latn
kwn_Latn
crs_Latn
ada_Latn
ttj_Latn
hif_Latn
tzh_Latn
tdx_Latn
bbc_Latn
cnh_Latn
pcm_Latn
tso_Latn
nor_Latn
bsb_Latn
kqn_Latn
gaa_Latn
ukr_Cyrl
lav_Latn
nep_Deva
kmr_Cyrl
ige_Latn
pis_Latn
lhu_Latn
nya_Latn
tiv_Latn
mny_Latn
kri_Latn
nyy_Latn
poh_Latn
nnb_Latn
grn_Latn
mco_Latn
ory_Orya
ful_Latn
diq_Latn
sag_Latn
tel_Telu
afr_Latn
haw_Latn
umb_Latn
hsb_Latn
fij_Latn
hbs_Cyrl
san_Latn
vls_Latn
zsm_Latn
lij_Latn
quc_Latn
mam_Latn
tuc_Latn
dan_Latn
rue_Cyrl
ace_Latn
bem_Latn
kam_Latn
ndo_Latn
mbb_Latn
mrw_Latn
ajg_Latn
oss_Cyrl
her_Latn
lit_Latn
frr_Latn
yap_Latn
bzj_Latn
gom_Latn
swe_Latn
lfn_Latn
cmn_Hani
mon_Cyrl
vep_Latn
ixl_Latn
gil_Latn
mau_Latn
aym_Latn
gom_Deva
fur_Latn
cgg_Latn
chw_Latn
kin_Latn
alz_Latn
ndc_Latn
gcr_Latn
rmn_Latn
sgs_Latn
bih_Deva
skg_Latn
bts_Latn
vie_Latn
tha_Thai
tcf_Latn
pau_Latn
est_Latn
lue_Latn
rug_Latn
gur_Latn
kik_Latn
mri_Latn
ber_Latn
ssw_Latn
cab_Latn
quz_Latn
arb_Arab
mai_Deva
tat_Cyrl
mya_Mymr
alt_Cyrl
nno_Latn
nse_Latn
hrx_Latn
hau_Latn
koo_Latn
gsw_Latn
pam_Latn
sun_Latn
lat_Latn
bis_Latn
btx_Latn
udm_Cyrl
xmv_Latn
tca_Latn
uig_Arab
glg_Latn
tah_Latn
llb_Latn
ckb_Arab
gle_Latn
lim_Latn
slk_Latn
nds_Latn
kor_Hang
uzb_Latn
gkn_Latn
pfl_Latn
azj_Latn
glv_Latn
jam_Latn
kat_Geor
abn_Latn
fry_Latn
kat_Latn
twi_Latn
eus_Latn
toi_Latn
mlg_Latn
ifa_Latn
tyv_Cyrl
arz_Arab
chk_Latn
vol_Latn
kek_Latn
teo_Latn
ell_Grek
kan_Knda
rng_Latn
tpi_Latn
mdy_Ethi
lua_Latn
mad_Latn
top_Latn
scn_Latn
ngl_Latn
mal_Mlym
szl_Latn
orm_Latn
nia_Latn
urd_Arab
mxv_Latn
cbk_Latn
```
</details>
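Each code in the list above is a dataset config of the form `<iso639-3>_<script>`. A minimal sketch, assuming you want to compare a handful of configs by size (the three configs named here are only examples picked from the list):
```python
from datasets import load_dataset

# Hypothetical sample of configs from the list above; any listed
# <lang>_<Script> code can be substituted here.
sample_configs = ['nbl_Latn', 'fao_Latn', 'dzo_Tibt']

for config in sample_configs:
    train = load_dataset('cis-lmu/Glot500', config, split='train')
    print(config, len(train))  # number of training rows for this config
```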
## License
We do not own any part of the data. The original source of each sentence is indicated in the dataset fields.
To see the copyright licenses of the original datasets, visit [here](https://github.com/cisnlp/Glot500#glot500-c).
We license the actual packaging, the metadata and the annotations of these data under cc0-1.0.
If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at [email protected].
## Ethical Considerations
**1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context, especially for news sources and social media.
**2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.
**3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.
## Citation
If you use any part of this code and data in your research, please cite it using the following BibTeX entry.
```
@inproceedings{imanigooghari-etal-2023-glot500,
title = "Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages",
author = {ImaniGooghari, Ayyoob and
Lin, Peiqin and
Kargaran, Amir Hossein and
Severini, Silvia and
Jalili Sabet, Masoud and
Kassner, Nora and
Ma, Chunlan and
Schmid, Helmut and
Martins, Andr{\'e} and
Yvon, Fran{\c{c}}ois and
Sch{\"u}tze, Hinrich},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.61",
doi = "10.18653/v1/2023.acl-long.61",
pages = "1082--1117",
abstract = "The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, a combination of factors determines quality including corpus size, script, {``}help{''} from related languages and the total capacity of the model. Our work addresses an important goal of NLP research: we should not limit NLP to a small fraction of the world{'}s languages and instead strive to support as many languages as possible to bring the benefits of NLP technology to all languages and cultures. Code, data and models are available at \url{https://github.com/cisnlp/Glot500}.",
}
``` | cis-lmu/Glot500 | [
"multilinguality:multilingual",
"language:abk",
"language:abn",
"language:ace",
"language:ach",
"language:acm",
"language:acr",
"language:ada",
"language:afb",
"language:afr",
"language:ahk",
"language:ajg",
"language:ajp",
"language:aka",
"language:als",
"language:alt",
"language:alz",
"language:amh",
"language:ami",
"language:aoj",
"language:apc",
"language:ara",
"language:arb",
"language:arn",
"language:ary",
"language:arz",
"language:asm",
"language:ast",
"language:aym",
"language:azb",
"language:aze",
"language:azj",
"language:bak",
"language:bam",
"language:ban",
"language:bas",
"language:bba",
"language:bbc",
"language:bci",
"language:bcl",
"language:bel",
"language:bem",
"language:ben",
"language:ber",
"language:bhw",
"language:bih",
"language:bik",
"language:bim",
"language:bin",
"language:bis",
"language:bjn",
"language:bod",
"language:bos",
"language:bpy",
"language:bqc",
"language:bre",
"language:bsb",
"language:bts",
"language:btx",
"language:bum",
"language:bzj",
"language:cab",
"language:cak",
"language:cat",
"language:cbk",
"language:cce",
"language:cfm",
"language:cgg",
"language:che",
"language:chk",
"language:chv",
"language:chw",
"language:cjk",
"language:ckb",
"language:cmn",
"language:cnh",
"language:cos",
"language:crh",
"language:crs",
"language:csb",
"language:ctd",
"language:ctu",
"language:cuk",
"language:cym",
"language:dan",
"language:dhv",
"language:diq",
"language:div",
"language:djk",
"language:dln",
"language:dtp",
"language:dua",
"language:dyu",
"language:dzo",
"language:efi",
"language:ekk",
"language:ell",
"language:eml",
"language:eng",
"language:epo",
"language:est",
"language:eus",
"language:ewe",
"language:ext",
"language:fao",
"language:fas",
"language:fat",
"language:fij",
"language:fin",
"language:fon",
"language:fra",
"language:frr",
"language:fry",
"language:ful",
"language:fur",
"language:gaa",
"language:gcf",
"language:gcr",
"language:gil",
"language:gkn",
"language:gkp",
"language:gla",
"language:gle",
"language:glg",
"language:glv",
"language:gom",
"language:gor",
"language:grc",
"language:grn",
"language:gsw",
"language:guc",
"language:gug",
"language:guj",
"language:gur",
"language:guw",
"language:gym",
"language:hat",
"language:hau",
"language:haw",
"language:hbs",
"language:heb",
"language:her",
"language:hif",
"language:hil",
"language:hin",
"language:hmn",
"language:hmo",
"language:hne",
"language:hrv",
"language:hrx",
"language:hsb",
"language:hui",
"language:hun",
"language:hus",
"language:hye",
"language:hyw",
"language:iba",
"language:ibg",
"language:ibo",
"language:ido",
"language:idu",
"language:ifa",
"language:ifb",
"language:ige",
"language:ikk",
"language:iku",
"language:ile",
"language:ilo",
"language:ina",
"language:ind",
"language:ish",
"language:isl",
"language:ita",
"language:ixl",
"language:izz",
"language:jam",
"language:jav",
"language:jbo",
"language:kaa",
"language:kab",
"language:kac",
"language:kal",
"language:kam",
"language:kan",
"language:kat",
"language:kaz",
"language:kbd",
"language:kbp",
"language:kea",
"language:kek",
"language:khm",
"language:kik",
"language:kin",
"language:kir",
"language:kjh",
"language:kmb",
"language:kmr",
"language:kom",
"language:kon",
"language:koo",
"language:kor",
"language:kos",
"language:kpg",
"language:kqn",
"language:krc",
"language:kri",
"language:ksd",
"language:ksh",
"language:kss",
"language:ksw",
"language:kua",
"language:kur",
"language:kwn",
"language:lam",
"language:lao",
"language:lat",
"language:lav",
"language:ldi",
"language:leh",
"language:lfn",
"language:lhu",
"language:lij",
"language:lim",
"language:lin",
"language:lit",
"language:llb",
"language:lmo",
"language:loz",
"language:lua",
"language:lue",
"language:lug",
"language:lun",
"language:luo",
"language:lus",
"language:lvs",
"language:lzh",
"language:mad",
"language:mah",
"language:mai",
"language:mal",
"language:mam",
"language:mar",
"language:mau",
"language:mbb",
"language:mck",
"language:mco",
"language:mdy",
"language:meu",
"language:mfe",
"language:mgh",
"language:mgr",
"language:mhr",
"language:min",
"language:miq",
"language:mkd",
"language:mlg",
"language:mny",
"language:mon",
"language:mos",
"language:mps",
"language:mri",
"language:mrw",
"language:msa",
"language:mwl",
"language:mwn",
"language:mxv",
"language:mya",
"language:myv",
"language:mzh",
"language:mzn",
"language:nan",
"language:nap",
"language:naq",
"language:nav",
"language:nba",
"language:nbl",
"language:nch",
"language:ncj",
"language:ncx",
"language:ndc",
"language:nde",
"language:ndo",
"language:nds",
"language:nep",
"language:new",
"language:ngl",
"language:ngu",
"language:nia",
"language:niu",
"language:nld",
"language:nnb",
"language:nno",
"language:nob",
"language:nor",
"language:nse",
"language:nso",
"language:nya",
"language:nyk",
"language:nyn",
"language:nyu",
"language:nyy",
"language:nzi",
"language:oci",
"language:ogo",
"language:oke",
"language:ori",
"language:orm",
"language:ory",
"language:oss",
"language:ote",
"language:pag",
"language:pam",
"language:pan",
"language:pap",
"language:pau",
"language:pcd",
"language:pcm",
"language:pdt",
"language:pes",
"language:pfl",
"language:phm",
"language:pis",
"language:pls",
"language:plt",
"language:pms",
"language:pnb",
"language:poh",
"language:pon",
"language:por",
"language:prk",
"language:pus",
"language:pxm",
"language:qub",
"language:quc",
"language:que",
"language:qug",
"language:quw",
"language:quy",
"language:quz",
"language:qvi",
"language:rap",
"language:rar",
"language:rmn",
"language:rmy",
"language:rng",
"language:roh",
"language:ron",
"language:rue",
"language:rug",
"language:run",
"language:rus",
"language:sag",
"language:sah",
"language:san",
"language:sat",
"language:scn",
"language:sco",
"language:seh",
"language:sgs",
"language:sid",
"language:sin",
"language:skg",
"language:slk",
"language:slv",
"language:sme",
"language:smo",
"language:sna",
"language:snd",
"language:som",
"language:sop",
"language:sot",
"language:sqi",
"language:srd",
"language:srm",
"language:srn",
"language:srp",
"language:ssw",
"language:sun",
"language:suz",
"language:swa",
"language:swe",
"language:swh",
"language:sxn",
"language:szl",
"language:tah",
"language:tam",
"language:tat",
"language:tbz",
"language:tca",
"language:tcf",
"language:tdt",
"language:tdx",
"language:tel",
"language:teo",
"language:tgl",
"language:tha",
"language:tih",
"language:tir",
"language:tiv",
"language:tlh",
"language:tob",
"language:tog",
"language:toh",
"language:toi",
"language:toj",
"language:tok",
"language:ton",
"language:top",
"language:tpi",
"language:tpm",
"language:tsc",
"language:tso",
"language:tsz",
"language:ttj",
"language:tuc",
"language:tuk",
"language:tur",
"language:twi",
"language:twx",
"language:tyv",
"language:tzh",
"language:tzo",
"language:udm",
"language:uig",
"language:ukr",
"language:umb",
"language:urd",
"language:urh",
"language:uzb",
"language:uzn",
"language:ven",
"language:vep",
"language:vie",
"language:vls",
"language:vmw",
"language:vol",
"language:wal",
"language:wes",
"language:wln",
"language:wol",
"language:wuu",
"language:xav",
"language:xho",
"language:xmf",
"language:xmv",
"language:yao",
"language:yap",
"language:yid",
"language:yom",
"language:yor",
"language:yua",
"language:yue",
"language:zai",
"language:zea",
"language:zho",
"language:zlm",
"language:zne",
"language:zpa",
"language:zsm",
"language:zul",
"license:other",
"arxiv:2305.12182",
"region:us"
]
| 2023-11-01T10:25:59+00:00 | {"language": ["abk", "abn", "ace", "ach", "acm", "acr", "ada", "afb", "afr", "ahk", "ajg", "ajp", "aka", "als", "alt", "alz", "amh", "ami", "aoj", "apc", "ara", "arb", "arn", "ary", "arz", "asm", "ast", "aym", "azb", "aze", "azj", "bak", "bam", "ban", "bas", "bba", "bbc", "bci", "bcl", "bel", "bem", "ben", "ber", "bhw", "bih", "bik", "bim", "bin", "bis", "bjn", "bod", "bos", "bpy", "bqc", "bre", "bsb", "bts", "btx", "bum", "bzj", "cab", "cak", "cat", "cbk", "cce", "cfm", "cgg", "che", "chk", "chv", "chw", "cjk", "ckb", "cmn", "cnh", "cos", "crh", "crs", "csb", "ctd", "ctu", "cuk", "cym", "dan", "dhv", "diq", "div", "djk", "dln", "dtp", "dua", "dyu", "dzo", "efi", "ekk", "ell", "eml", "eng", "epo", "est", "eus", "ewe", "ext", "fao", "fas", "fat", "fij", "fin", "fon", "fra", "frr", "fry", "ful", "fur", "gaa", "gcf", "gcr", "gil", "gkn", "gkp", "gla", "gle", "glg", "glv", "gom", "gor", "grc", "grn", "gsw", "guc", "gug", "guj", "gur", "guw", "gym", "hat", "hau", "haw", "hbs", "heb", "her", "hif", "hil", "hin", "hmn", "hmo", "hne", "hrv", "hrx", "hsb", "hui", "hun", "hus", "hye", "hyw", "iba", "ibg", "ibo", "ido", "idu", "ifa", "ifb", "ige", "ikk", "iku", "ile", "ilo", "ina", "ind", "ish", "isl", "ita", "ixl", "izz", "jam", "jav", "jbo", "kaa", "kab", "kac", "kal", "kam", "kan", "kat", "kaz", "kbd", "kbp", "kea", "kek", "khm", "kik", "kin", "kir", "kjh", "kmb", "kmr", "kom", "kon", "koo", "kor", "kos", "kpg", "kqn", "krc", "kri", "ksd", "ksh", "kss", "ksw", "kua", "kur", "kwn", "lam", "lao", "lat", "lav", "ldi", "leh", "lfn", "lhu", "lij", "lim", "lin", "lit", "llb", "lmo", "loz", "lua", "lue", "lug", "lun", "luo", "lus", "lvs", "lzh", "mad", "mah", "mai", "mal", "mam", "mar", "mau", "mbb", "mck", "mco", "mdy", "meu", "mfe", "mgh", "mgr", "mhr", "min", "miq", "mkd", "mlg", "mny", "mon", "mos", "mps", "mri", "mrw", "msa", "mwl", "mwn", "mxv", "mya", "myv", "mzh", "mzn", "nan", "nap", "naq", "nav", "nba", "nbl", "nch", "ncj", "ncx", "ndc", "nde", "ndo", "nds", "nep", "new", "ngl", "ngu", "nia", "niu", "nld", "nnb", "nno", "nob", "nor", "nse", "nso", "nya", "nyk", "nyn", "nyu", "nyy", "nzi", "oci", "ogo", "oke", "ori", "orm", "ory", "oss", "ote", "pag", "pam", "pan", "pap", "pau", "pcd", "pcm", "pdt", "pes", "pfl", "phm", "pis", "pls", "plt", "pms", "pnb", "poh", "pon", "por", "prk", "pus", "pxm", "qub", "quc", "que", "qug", "quw", "quy", "quz", "qvi", "rap", "rar", "rmn", "rmy", "rng", "roh", "ron", "rue", "rug", "run", "rus", "sag", "sah", "san", "sat", "scn", "sco", "seh", "sgs", "sid", "sin", "skg", "slk", "slv", "sme", "smo", "sna", "snd", "som", "sop", "sot", "sqi", "srd", "srm", "srn", "srp", "ssw", "sun", "suz", "swa", "swe", "swh", "sxn", "szl", "tah", "tam", "tat", "tbz", "tca", "tcf", "tdt", "tdx", "tel", "teo", "tgl", "tha", "tih", "tir", "tiv", "tlh", "tob", "tog", "toh", "toi", "toj", "tok", "ton", "top", "tpi", "tpm", "tsc", "tso", "tsz", "ttj", "tuc", "tuk", "tur", "twi", "twx", "tyv", "tzh", "tzo", "udm", "uig", "ukr", "umb", "urd", "urh", "uzb", "uzn", "ven", "vep", "vie", "vls", "vmw", "vol", "wal", "wes", "wln", "wol", "wuu", "xav", "xho", "xmf", "xmv", "yao", "yap", "yid", "yom", "yor", "yua", "yue", "zai", "zea", "zho", "zlm", "zne", "zpa", "zsm", "zul"], "license": "other", "multilinguality": ["multilingual"], "pretty_name": "Glot500 Corpus", "license_name": "license", "license_link": "LICENSE", "configs": [{"config_name": "mlt_Mlym", "data_files": [{"split": "train", "path": "mlt_Mlym/train/*.arrow"}, {"split": "dev", "path": 
"mlt_Mlym/dev/*.arrow"}, {"split": "test", "path": "mlt_Mlym/test/*.arrow"}]}, {"config_name": "knv_Latn", "data_files": [{"split": "train", "path": "knv_Latn/train/*.arrow"}, {"split": "dev", "path": "knv_Latn/dev/*.arrow"}, {"split": "test", "path": "knv_Latn/test/*.arrow"}]}, {"config_name": "guj_Arab", "data_files": [{"split": "train", "path": "guj_Arab/train/*.arrow"}, {"split": "dev", "path": "guj_Arab/dev/*.arrow"}, {"split": "test", "path": "guj_Arab/test/*.arrow"}]}, {"config_name": "ton_Latn", "data_files": [{"split": "train", "path": "ton_Latn/train/*.arrow"}, {"split": "dev", "path": "ton_Latn/dev/*.arrow"}, {"split": "test", "path": "ton_Latn/test/*.arrow"}]}, {"config_name": "nld_Latn", "data_files": [{"split": "train", "path": "nld_Latn/train/*.arrow"}, {"split": "dev", "path": "nld_Latn/dev/*.arrow"}, {"split": "test", "path": "nld_Latn/test/*.arrow"}]}, {"config_name": "tzo_Latn", "data_files": [{"split": "train", "path": "tzo_Latn/train/*.arrow"}, {"split": "dev", "path": "tzo_Latn/dev/*.arrow"}, {"split": "test", "path": "tzo_Latn/test/*.arrow"}]}, {"config_name": "tsn_Hani", "data_files": [{"split": "train", "path": "tsn_Hani/train/*.arrow"}, {"split": "dev", "path": "tsn_Hani/dev/*.arrow"}, {"split": "test", "path": "tsn_Hani/test/*.arrow"}]}, {"config_name": "aze_Zinh", "data_files": [{"split": "train", "path": "aze_Zinh/train/*.arrow"}, {"split": "dev", "path": "aze_Zinh/dev/*.arrow"}, {"split": "test", "path": "aze_Zinh/test/*.arrow"}]}, {"config_name": "cuk_Latn", "data_files": [{"split": "train", "path": "cuk_Latn/train/*.arrow"}, {"split": "dev", "path": "cuk_Latn/dev/*.arrow"}, {"split": "test", "path": "cuk_Latn/test/*.arrow"}]}, {"config_name": "uzb_Cyrl", "data_files": [{"split": "train", "path": "uzb_Cyrl/train/*.arrow"}, {"split": "dev", "path": "uzb_Cyrl/dev/*.arrow"}, {"split": "test", "path": "uzb_Cyrl/test/*.arrow"}]}, {"config_name": "jav_Latn", "data_files": [{"split": "train", "path": "jav_Latn/train/*.arrow"}, {"split": "dev", "path": "jav_Latn/dev/*.arrow"}, {"split": "test", "path": "jav_Latn/test/*.arrow"}]}, {"config_name": "rap_Latn", "data_files": [{"split": "train", "path": "rap_Latn/train/*.arrow"}, {"split": "dev", "path": "rap_Latn/dev/*.arrow"}, {"split": "test", "path": "rap_Latn/test/*.arrow"}]}, {"config_name": "bak_Cyrl", "data_files": [{"split": "train", "path": "bak_Cyrl/train/*.arrow"}, {"split": "dev", "path": "bak_Cyrl/dev/*.arrow"}, {"split": "test", "path": "bak_Cyrl/test/*.arrow"}]}, {"config_name": "por_Latn", "data_files": [{"split": "train", "path": "por_Latn/train/*.arrow"}, {"split": "dev", "path": "por_Latn/dev/*.arrow"}, {"split": "test", "path": "por_Latn/test/*.arrow"}]}, {"config_name": "mlt_Hang", "data_files": [{"split": "train", "path": "mlt_Hang/train/*.arrow"}, {"split": "dev", "path": "mlt_Hang/dev/*.arrow"}, {"split": "test", "path": "mlt_Hang/test/*.arrow"}]}, {"config_name": "quy_Latn", "data_files": [{"split": "train", "path": "quy_Latn/train/*.arrow"}, {"split": "dev", "path": "quy_Latn/dev/*.arrow"}, {"split": "test", "path": "quy_Latn/test/*.arrow"}]}, {"config_name": "hnj_Latn", "data_files": [{"split": "train", "path": "hnj_Latn/train/*.arrow"}, {"split": "dev", "path": "hnj_Latn/dev/*.arrow"}, {"split": "test", "path": "hnj_Latn/test/*.arrow"}]}, {"config_name": "ast_Latn", "data_files": [{"split": "train", "path": "ast_Latn/train/*.arrow"}, {"split": "dev", "path": "ast_Latn/dev/*.arrow"}, {"split": "test", "path": "ast_Latn/test/*.arrow"}]}, {"config_name": "cos_Latn", "data_files": [{"split": 
"train", "path": "cos_Latn/train/*.arrow"}, {"split": "dev", "path": "cos_Latn/dev/*.arrow"}, {"split": "test", "path": "cos_Latn/test/*.arrow"}]}, {"config_name": "fon_Latn", "data_files": [{"split": "train", "path": "fon_Latn/train/*.arrow"}, {"split": "dev", "path": "fon_Latn/dev/*.arrow"}, {"split": "test", "path": "fon_Latn/test/*.arrow"}]}, {"config_name": "sna_Latn", "data_files": [{"split": "train", "path": "sna_Latn/train/*.arrow"}, {"split": "dev", "path": "sna_Latn/dev/*.arrow"}, {"split": "test", "path": "sna_Latn/test/*.arrow"}]}, {"config_name": "dzo_Tibt", "data_files": [{"split": "train", "path": "dzo_Tibt/train/*.arrow"}, {"split": "dev", "path": "dzo_Tibt/dev/*.arrow"}, {"split": "test", "path": "dzo_Tibt/test/*.arrow"}]}, {"config_name": "nob_Latn", "data_files": [{"split": "train", "path": "nob_Latn/train/*.arrow"}, {"split": "dev", "path": "nob_Latn/dev/*.arrow"}, {"split": "test", "path": "nob_Latn/test/*.arrow"}]}, {"config_name": "nch_Latn", "data_files": [{"split": "train", "path": "nch_Latn/train/*.arrow"}, {"split": "dev", "path": "nch_Latn/dev/*.arrow"}, {"split": "test", "path": "nch_Latn/test/*.arrow"}]}, {"config_name": "che_Cyrl", "data_files": [{"split": "train", "path": "che_Cyrl/train/*.arrow"}, {"split": "dev", "path": "che_Cyrl/dev/*.arrow"}, {"split": "test", "path": "che_Cyrl/test/*.arrow"}]}, {"config_name": "ext_Latn", "data_files": [{"split": "train", "path": "ext_Latn/train/*.arrow"}, {"split": "dev", "path": "ext_Latn/dev/*.arrow"}, {"split": "test", "path": "ext_Latn/test/*.arrow"}]}, {"config_name": "dtp_Latn", "data_files": [{"split": "train", "path": "dtp_Latn/train/*.arrow"}, {"split": "dev", "path": "dtp_Latn/dev/*.arrow"}, {"split": "test", "path": "dtp_Latn/test/*.arrow"}]}, {"config_name": "yue_Hani", "data_files": [{"split": "train", "path": "yue_Hani/train/*.arrow"}, {"split": "dev", "path": "yue_Hani/dev/*.arrow"}, {"split": "test", "path": "yue_Hani/test/*.arrow"}]}, {"config_name": "kbd_Cyrl", "data_files": [{"split": "train", "path": "kbd_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kbd_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kbd_Cyrl/test/*.arrow"}]}, {"config_name": "mar_Deva", "data_files": [{"split": "train", "path": "mar_Deva/train/*.arrow"}, {"split": "dev", "path": "mar_Deva/dev/*.arrow"}, {"split": "test", "path": "mar_Deva/test/*.arrow"}]}, {"config_name": "ron_Latn", "data_files": [{"split": "train", "path": "ron_Latn/train/*.arrow"}, {"split": "dev", "path": "ron_Latn/dev/*.arrow"}, {"split": "test", "path": "ron_Latn/test/*.arrow"}]}, {"config_name": "acr_Latn", "data_files": [{"split": "train", "path": "acr_Latn/train/*.arrow"}, {"split": "dev", "path": "acr_Latn/dev/*.arrow"}, {"split": "test", "path": "acr_Latn/test/*.arrow"}]}, {"config_name": "afb_Arab", "data_files": [{"split": "train", "path": "afb_Arab/train/*.arrow"}, {"split": "dev", "path": "afb_Arab/dev/*.arrow"}, {"split": "test", "path": "afb_Arab/test/*.arrow"}]}, {"config_name": "mon_Hani", "data_files": [{"split": "train", "path": "mon_Hani/train/*.arrow"}, {"split": "dev", "path": "mon_Hani/dev/*.arrow"}, {"split": "test", "path": "mon_Hani/test/*.arrow"}]}, {"config_name": "sqi_Latn", "data_files": [{"split": "train", "path": "sqi_Latn/train/*.arrow"}, {"split": "dev", "path": "sqi_Latn/dev/*.arrow"}, {"split": "test", "path": "sqi_Latn/test/*.arrow"}]}, {"config_name": "eng_Latn", "data_files": [{"split": "train", "path": "eng_Latn/train/*.arrow"}, {"split": "dev", "path": "eng_Latn/dev/*.arrow"}, {"split": "test", "path": 
"eng_Latn/test/*.arrow"}]}, {"config_name": "ksd_Latn", "data_files": [{"split": "train", "path": "ksd_Latn/train/*.arrow"}, {"split": "dev", "path": "ksd_Latn/dev/*.arrow"}, {"split": "test", "path": "ksd_Latn/test/*.arrow"}]}, {"config_name": "rus_Cyrl", "data_files": [{"split": "train", "path": "rus_Cyrl/train/*.arrow"}, {"split": "dev", "path": "rus_Cyrl/dev/*.arrow"}, {"split": "test", "path": "rus_Cyrl/test/*.arrow"}]}, {"config_name": "bcl_Latn", "data_files": [{"split": "train", "path": "bcl_Latn/train/*.arrow"}, {"split": "dev", "path": "bcl_Latn/dev/*.arrow"}, {"split": "test", "path": "bcl_Latn/test/*.arrow"}]}, {"config_name": "ksh_Latn", "data_files": [{"split": "train", "path": "ksh_Latn/train/*.arrow"}, {"split": "dev", "path": "ksh_Latn/dev/*.arrow"}, {"split": "test", "path": "ksh_Latn/test/*.arrow"}]}, {"config_name": "hin_Latn", "data_files": [{"split": "train", "path": "hin_Latn/train/*.arrow"}, {"split": "dev", "path": "hin_Latn/dev/*.arrow"}, {"split": "test", "path": "hin_Latn/test/*.arrow"}]}, {"config_name": "myv_Cyrl", "data_files": [{"split": "train", "path": "myv_Cyrl/train/*.arrow"}, {"split": "dev", "path": "myv_Cyrl/dev/*.arrow"}, {"split": "test", "path": "myv_Cyrl/test/*.arrow"}]}, {"config_name": "kjh_Cyrl", "data_files": [{"split": "train", "path": "kjh_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kjh_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kjh_Cyrl/test/*.arrow"}]}, {"config_name": "sah_Cyrl", "data_files": [{"split": "train", "path": "sah_Cyrl/train/*.arrow"}, {"split": "dev", "path": "sah_Cyrl/dev/*.arrow"}, {"split": "test", "path": "sah_Cyrl/test/*.arrow"}]}, {"config_name": "naq_Latn", "data_files": [{"split": "train", "path": "naq_Latn/train/*.arrow"}, {"split": "dev", "path": "naq_Latn/dev/*.arrow"}, {"split": "test", "path": "naq_Latn/test/*.arrow"}]}, {"config_name": "tdt_Latn", "data_files": [{"split": "train", "path": "tdt_Latn/train/*.arrow"}, {"split": "dev", "path": "tdt_Latn/dev/*.arrow"}, {"split": "test", "path": "tdt_Latn/test/*.arrow"}]}, {"config_name": "kac_Latn", "data_files": [{"split": "train", "path": "kac_Latn/train/*.arrow"}, {"split": "dev", "path": "kac_Latn/dev/*.arrow"}, {"split": "test", "path": "kac_Latn/test/*.arrow"}]}, {"config_name": "cak_Latn", "data_files": [{"split": "train", "path": "cak_Latn/train/*.arrow"}, {"split": "dev", "path": "cak_Latn/dev/*.arrow"}, {"split": "test", "path": "cak_Latn/test/*.arrow"}]}, {"config_name": "kir_Cyrl", "data_files": [{"split": "train", "path": "kir_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kir_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kir_Cyrl/test/*.arrow"}]}, {"config_name": "mps_Latn", "data_files": [{"split": "train", "path": "mps_Latn/train/*.arrow"}, {"split": "dev", "path": "mps_Latn/dev/*.arrow"}, {"split": "test", "path": "mps_Latn/test/*.arrow"}]}, {"config_name": "yid_Hebr", "data_files": [{"split": "train", "path": "yid_Hebr/train/*.arrow"}, {"split": "dev", "path": "yid_Hebr/dev/*.arrow"}, {"split": "test", "path": "yid_Hebr/test/*.arrow"}]}, {"config_name": "mlt_Beng", "data_files": [{"split": "train", "path": "mlt_Beng/train/*.arrow"}, {"split": "dev", "path": "mlt_Beng/dev/*.arrow"}, {"split": "test", "path": "mlt_Beng/test/*.arrow"}]}, {"config_name": "srn_Latn", "data_files": [{"split": "train", "path": "srn_Latn/train/*.arrow"}, {"split": "dev", "path": "srn_Latn/dev/*.arrow"}, {"split": "test", "path": "srn_Latn/test/*.arrow"}]}, {"config_name": "div_Thaa", "data_files": [{"split": "train", "path": "div_Thaa/train/*.arrow"}, {"split": 
"dev", "path": "div_Thaa/dev/*.arrow"}, {"split": "test", "path": "div_Thaa/test/*.arrow"}]}, {"config_name": "mlt_Kana", "data_files": [{"split": "train", "path": "mlt_Kana/train/*.arrow"}, {"split": "dev", "path": "mlt_Kana/dev/*.arrow"}, {"split": "test", "path": "mlt_Kana/test/*.arrow"}]}, {"config_name": "mkd_Cyrl", "data_files": [{"split": "train", "path": "mkd_Cyrl/train/*.arrow"}, {"split": "dev", "path": "mkd_Cyrl/dev/*.arrow"}, {"split": "test", "path": "mkd_Cyrl/test/*.arrow"}]}, {"config_name": "bre_Latn", "data_files": [{"split": "train", "path": "bre_Latn/train/*.arrow"}, {"split": "dev", "path": "bre_Latn/dev/*.arrow"}, {"split": "test", "path": "bre_Latn/test/*.arrow"}]}, {"config_name": "tvl_Latn", "data_files": [{"split": "train", "path": "tvl_Latn/train/*.arrow"}, {"split": "test", "path": "tvl_Latn/test/*.arrow"}]}, {"config_name": "ven_Latn", "data_files": [{"split": "train", "path": "ven_Latn/train/*.arrow"}, {"split": "dev", "path": "ven_Latn/dev/*.arrow"}, {"split": "test", "path": "ven_Latn/test/*.arrow"}]}, {"config_name": "mlt_Mymr", "data_files": [{"split": "train", "path": "mlt_Mymr/train/*.arrow"}, {"split": "dev", "path": "mlt_Mymr/dev/*.arrow"}, {"split": "test", "path": "mlt_Mymr/test/*.arrow"}]}, {"config_name": "wuu_Hani", "data_files": [{"split": "train", "path": "wuu_Hani/train/*.arrow"}, {"split": "dev", "path": "wuu_Hani/dev/*.arrow"}, {"split": "test", "path": "wuu_Hani/test/*.arrow"}]}, {"config_name": "mwl_Latn", "data_files": [{"split": "train", "path": "mwl_Latn/train/*.arrow"}, {"split": "dev", "path": "mwl_Latn/dev/*.arrow"}, {"split": "test", "path": "mwl_Latn/test/*.arrow"}]}, {"config_name": "miq_Latn", "data_files": [{"split": "train", "path": "miq_Latn/train/*.arrow"}]}, {"config_name": "slv_Latn", "data_files": [{"split": "train", "path": "slv_Latn/train/*.arrow"}, {"split": "dev", "path": "slv_Latn/dev/*.arrow"}, {"split": "test", "path": "slv_Latn/test/*.arrow"}]}, {"config_name": "hrv_Latn", "data_files": [{"split": "train", "path": "hrv_Latn/train/*.arrow"}, {"split": "dev", "path": "hrv_Latn/dev/*.arrow"}, {"split": "test", "path": "hrv_Latn/test/*.arrow"}]}, {"config_name": "hmo_Latn", "data_files": [{"split": "train", "path": "hmo_Latn/train/*.arrow"}, {"split": "dev", "path": "hmo_Latn/dev/*.arrow"}, {"split": "test", "path": "hmo_Latn/test/*.arrow"}]}, {"config_name": "som_Latn", "data_files": [{"split": "train", "path": "som_Latn/train/*.arrow"}, {"split": "dev", "path": "som_Latn/dev/*.arrow"}, {"split": "test", "path": "som_Latn/test/*.arrow"}]}, {"config_name": "bod_Tibt", "data_files": [{"split": "train", "path": "bod_Tibt/train/*.arrow"}, {"split": "dev", "path": "bod_Tibt/dev/*.arrow"}, {"split": "test", "path": "bod_Tibt/test/*.arrow"}]}, {"config_name": "pls_Latn", "data_files": [{"split": "train", "path": "pls_Latn/train/*.arrow"}, {"split": "dev", "path": "pls_Latn/dev/*.arrow"}, {"split": "test", "path": "pls_Latn/test/*.arrow"}]}, {"config_name": "ile_Latn", "data_files": [{"split": "train", "path": "ile_Latn/train/*.arrow"}, {"split": "dev", "path": "ile_Latn/dev/*.arrow"}, {"split": "test", "path": "ile_Latn/test/*.arrow"}]}, {"config_name": "luo_Latn", "data_files": [{"split": "train", "path": "luo_Latn/train/*.arrow"}, {"split": "dev", "path": "luo_Latn/dev/*.arrow"}, {"split": "test", "path": "luo_Latn/test/*.arrow"}]}, {"config_name": "pus_Arab", "data_files": [{"split": "train", "path": "pus_Arab/train/*.arrow"}, {"split": "dev", "path": "pus_Arab/dev/*.arrow"}, {"split": "test", "path": 
"pus_Arab/test/*.arrow"}]}, {"config_name": "fao_Latn", "data_files": [{"split": "train", "path": "fao_Latn/train/*.arrow"}, {"split": "dev", "path": "fao_Latn/dev/*.arrow"}, {"split": "test", "path": "fao_Latn/test/*.arrow"}]}, {"config_name": "fas_Arab", "data_files": [{"split": "train", "path": "fas_Arab/train/*.arrow"}, {"split": "dev", "path": "fas_Arab/dev/*.arrow"}, {"split": "test", "path": "fas_Arab/test/*.arrow"}]}, {"config_name": "swa_Latn", "data_files": [{"split": "train", "path": "swa_Latn/train/*.arrow"}, {"split": "dev", "path": "swa_Latn/dev/*.arrow"}, {"split": "test", "path": "swa_Latn/test/*.arrow"}]}, {"config_name": "mlt_Hebr", "data_files": [{"split": "train", "path": "mlt_Hebr/train/*.arrow"}, {"split": "dev", "path": "mlt_Hebr/dev/*.arrow"}, {"split": "test", "path": "mlt_Hebr/test/*.arrow"}]}, {"config_name": "ary_Arab", "data_files": [{"split": "train", "path": "ary_Arab/train/*.arrow"}, {"split": "dev", "path": "ary_Arab/dev/*.arrow"}, {"split": "test", "path": "ary_Arab/test/*.arrow"}]}, {"config_name": "hus_Latn", "data_files": [{"split": "train", "path": "hus_Latn/train/*.arrow"}, {"split": "dev", "path": "hus_Latn/dev/*.arrow"}, {"split": "test", "path": "hus_Latn/test/*.arrow"}]}, {"config_name": "ote_Latn", "data_files": [{"split": "train", "path": "ote_Latn/train/*.arrow"}, {"split": "dev", "path": "ote_Latn/dev/*.arrow"}, {"split": "test", "path": "ote_Latn/test/*.arrow"}]}, {"config_name": "ilo_Latn", "data_files": [{"split": "train", "path": "ilo_Latn/train/*.arrow"}, {"split": "dev", "path": "ilo_Latn/dev/*.arrow"}, {"split": "test", "path": "ilo_Latn/test/*.arrow"}]}, {"config_name": "abk_Cyrl", "data_files": [{"split": "train", "path": "abk_Cyrl/train/*.arrow"}, {"split": "dev", "path": "abk_Cyrl/dev/*.arrow"}, {"split": "test", "path": "abk_Cyrl/test/*.arrow"}]}, {"config_name": "bqc_Latn", "data_files": [{"split": "train", "path": "bqc_Latn/train/*.arrow"}, {"split": "dev", "path": "bqc_Latn/dev/*.arrow"}, {"split": "test", "path": "bqc_Latn/test/*.arrow"}]}, {"config_name": "mlt_Taml", "data_files": [{"split": "train", "path": "mlt_Taml/train/*.arrow"}, {"split": "dev", "path": "mlt_Taml/dev/*.arrow"}, {"split": "test", "path": "mlt_Taml/test/*.arrow"}]}, {"config_name": "hil_Latn", "data_files": [{"split": "train", "path": "hil_Latn/train/*.arrow"}]}, {"config_name": "pon_Latn", "data_files": [{"split": "train", "path": "pon_Latn/train/*.arrow"}, {"split": "dev", "path": "pon_Latn/dev/*.arrow"}, {"split": "test", "path": "pon_Latn/test/*.arrow"}]}, {"config_name": "zul_Latn", "data_files": [{"split": "train", "path": "zul_Latn/train/*.arrow"}, {"split": "dev", "path": "zul_Latn/dev/*.arrow"}, {"split": "test", "path": "zul_Latn/test/*.arrow"}]}, {"config_name": "als_Latn", "data_files": [{"split": "train", "path": "als_Latn/train/*.arrow"}, {"split": "dev", "path": "als_Latn/dev/*.arrow"}, {"split": "test", "path": "als_Latn/test/*.arrow"}]}, {"config_name": "pes_Arab", "data_files": [{"split": "train", "path": "pes_Arab/train/*.arrow"}, {"split": "dev", "path": "pes_Arab/dev/*.arrow"}, {"split": "test", "path": "pes_Arab/test/*.arrow"}]}, {"config_name": "bpy_Beng", "data_files": [{"split": "train", "path": "bpy_Beng/train/*.arrow"}, {"split": "dev", "path": "bpy_Beng/dev/*.arrow"}, {"split": "test", "path": "bpy_Beng/test/*.arrow"}]}, {"config_name": "bos_Latn", "data_files": [{"split": "train", "path": "bos_Latn/train/*.arrow"}, {"split": "dev", "path": "bos_Latn/dev/*.arrow"}, {"split": "test", "path": "bos_Latn/test/*.arrow"}]}, 
{"config_name": "sot_Latn", "data_files": [{"split": "train", "path": "sot_Latn/train/*.arrow"}, {"split": "dev", "path": "sot_Latn/dev/*.arrow"}, {"split": "test", "path": "sot_Latn/test/*.arrow"}]}, {"config_name": "lin_Latn", "data_files": [{"split": "train", "path": "lin_Latn/train/*.arrow"}, {"split": "dev", "path": "lin_Latn/dev/*.arrow"}, {"split": "test", "path": "lin_Latn/test/*.arrow"}]}, {"config_name": "tuk_Cyrl", "data_files": [{"split": "train", "path": "tuk_Cyrl/train/*.arrow"}, {"split": "dev", "path": "tuk_Cyrl/dev/*.arrow"}]}, {"config_name": "gla_Latn", "data_files": [{"split": "train", "path": "gla_Latn/train/*.arrow"}, {"split": "dev", "path": "gla_Latn/dev/*.arrow"}, {"split": "test", "path": "gla_Latn/test/*.arrow"}]}, {"config_name": "wln_Latn", "data_files": [{"split": "train", "path": "wln_Latn/train/*.arrow"}, {"split": "dev", "path": "wln_Latn/dev/*.arrow"}, {"split": "test", "path": "wln_Latn/test/*.arrow"}]}, {"config_name": "apc_Arab", "data_files": [{"split": "train", "path": "apc_Arab/train/*.arrow"}, {"split": "dev", "path": "apc_Arab/dev/*.arrow"}, {"split": "test", "path": "apc_Arab/test/*.arrow"}]}, {"config_name": "aze_Hira", "data_files": [{"split": "train", "path": "aze_Hira/train/*.arrow"}, {"split": "dev", "path": "aze_Hira/dev/*.arrow"}, {"split": "test", "path": "aze_Hira/test/*.arrow"}]}, {"config_name": "hin_Deva", "data_files": [{"split": "train", "path": "hin_Deva/train/*.arrow"}, {"split": "dev", "path": "hin_Deva/dev/*.arrow"}, {"split": "test", "path": "hin_Deva/test/*.arrow"}]}, {"config_name": "hye_Armn", "data_files": [{"split": "train", "path": "hye_Armn/train/*.arrow"}, {"split": "dev", "path": "hye_Armn/dev/*.arrow"}, {"split": "test", "path": "hye_Armn/test/*.arrow"}]}, {"config_name": "tir_Ethi", "data_files": [{"split": "train", "path": "tir_Ethi/train/*.arrow"}, {"split": "dev", "path": "tir_Ethi/dev/*.arrow"}, {"split": "test", "path": "tir_Ethi/test/*.arrow"}]}, {"config_name": "aze_Ethi", "data_files": [{"split": "train", "path": "aze_Ethi/train/*.arrow"}, {"split": "dev", "path": "aze_Ethi/dev/*.arrow"}, {"split": "test", "path": "aze_Ethi/test/*.arrow"}]}, {"config_name": "pap_Latn", "data_files": [{"split": "train", "path": "pap_Latn/train/*.arrow"}, {"split": "dev", "path": "pap_Latn/dev/*.arrow"}, {"split": "test", "path": "pap_Latn/test/*.arrow"}]}, {"config_name": "mlt_Ethi", "data_files": [{"split": "train", "path": "mlt_Ethi/train/*.arrow"}, {"split": "dev", "path": "mlt_Ethi/dev/*.arrow"}, {"split": "test", "path": "mlt_Ethi/test/*.arrow"}]}, {"config_name": "gcf_Latn", "data_files": [{"split": "train", "path": "gcf_Latn/train/*.arrow"}, {"split": "dev", "path": "gcf_Latn/dev/*.arrow"}, {"split": "test", "path": "gcf_Latn/test/*.arrow"}]}, {"config_name": "cjk_Latn", "data_files": [{"split": "train", "path": "cjk_Latn/train/*.arrow"}, {"split": "dev", "path": "cjk_Latn/dev/*.arrow"}, {"split": "test", "path": "cjk_Latn/test/*.arrow"}]}, {"config_name": "pcd_Latn", "data_files": [{"split": "train", "path": "pcd_Latn/train/*.arrow"}, {"split": "dev", "path": "pcd_Latn/dev/*.arrow"}, {"split": "test", "path": "pcd_Latn/test/*.arrow"}]}, {"config_name": "tur_Latn", "data_files": [{"split": "train", "path": "tur_Latn/train/*.arrow"}, {"split": "dev", "path": "tur_Latn/dev/*.arrow"}, {"split": "test", "path": "tur_Latn/test/*.arrow"}]}, {"config_name": "kon_Latn", "data_files": [{"split": "train", "path": "kon_Latn/train/*.arrow"}, {"split": "dev", "path": "kon_Latn/dev/*.arrow"}, {"split": "test", "path": 
"kon_Latn/test/*.arrow"}]}, {"config_name": "csy_Latn", "data_files": [{"split": "train", "path": "csy_Latn/train/*.arrow"}, {"split": "dev", "path": "csy_Latn/dev/*.arrow"}, {"split": "test", "path": "csy_Latn/test/*.arrow"}]}, {"config_name": "xho_Latn", "data_files": [{"split": "train", "path": "xho_Latn/train/*.arrow"}, {"split": "dev", "path": "xho_Latn/dev/*.arrow"}, {"split": "test", "path": "xho_Latn/test/*.arrow"}]}, {"config_name": "guc_Latn", "data_files": [{"split": "train", "path": "guc_Latn/train/*.arrow"}]}, {"config_name": "aka_Latn", "data_files": [{"split": "train", "path": "aka_Latn/train/*.arrow"}, {"split": "dev", "path": "aka_Latn/dev/*.arrow"}, {"split": "test", "path": "aka_Latn/test/*.arrow"}]}, {"config_name": "kea_Latn", "data_files": [{"split": "train", "path": "kea_Latn/train/*.arrow"}, {"split": "dev", "path": "kea_Latn/dev/*.arrow"}, {"split": "test", "path": "kea_Latn/test/*.arrow"}]}, {"config_name": "bar_Latn", "data_files": [{"split": "train", "path": "bar_Latn/train/*.arrow"}, {"split": "dev", "path": "bar_Latn/dev/*.arrow"}, {"split": "test", "path": "bar_Latn/test/*.arrow"}]}, {"config_name": "sme_Latn", "data_files": [{"split": "train", "path": "sme_Latn/train/*.arrow"}, {"split": "dev", "path": "sme_Latn/dev/*.arrow"}, {"split": "test", "path": "sme_Latn/test/*.arrow"}]}, {"config_name": "aze_Hang", "data_files": [{"split": "train", "path": "aze_Hang/train/*.arrow"}, {"split": "dev", "path": "aze_Hang/dev/*.arrow"}, {"split": "test", "path": "aze_Hang/test/*.arrow"}]}, {"config_name": "csb_Latn", "data_files": [{"split": "train", "path": "csb_Latn/train/*.arrow"}, {"split": "dev", "path": "csb_Latn/dev/*.arrow"}, {"split": "test", "path": "csb_Latn/test/*.arrow"}]}, {"config_name": "bak_Latn", "data_files": [{"split": "train", "path": "bak_Latn/train/*.arrow"}, {"split": "dev", "path": "bak_Latn/dev/*.arrow"}, {"split": "test", "path": "bak_Latn/test/*.arrow"}]}, {"config_name": "djk_Latn", "data_files": [{"split": "train", "path": "djk_Latn/train/*.arrow"}, {"split": "dev", "path": "djk_Latn/dev/*.arrow"}, {"split": "test", "path": "djk_Latn/test/*.arrow"}]}, {"config_name": "xav_Latn", "data_files": [{"split": "train", "path": "xav_Latn/train/*.arrow"}, {"split": "dev", "path": "xav_Latn/dev/*.arrow"}, {"split": "test", "path": "xav_Latn/test/*.arrow"}]}, {"config_name": "oci_Latn", "data_files": [{"split": "train", "path": "oci_Latn/train/*.arrow"}, {"split": "dev", "path": "oci_Latn/dev/*.arrow"}, {"split": "test", "path": "oci_Latn/test/*.arrow"}]}, {"config_name": "acm_Arab", "data_files": [{"split": "train", "path": "acm_Arab/train/*.arrow"}, {"split": "dev", "path": "acm_Arab/dev/*.arrow"}, {"split": "test", "path": "acm_Arab/test/*.arrow"}]}, {"config_name": "rmy_Cyrl", "data_files": [{"split": "train", "path": "rmy_Cyrl/train/*.arrow"}]}, {"config_name": "krc_Cyrl", "data_files": [{"split": "train", "path": "krc_Cyrl/train/*.arrow"}, {"split": "dev", "path": "krc_Cyrl/dev/*.arrow"}, {"split": "test", "path": "krc_Cyrl/test/*.arrow"}]}, {"config_name": "cym_Latn", "data_files": [{"split": "train", "path": "cym_Latn/train/*.arrow"}, {"split": "dev", "path": "cym_Latn/dev/*.arrow"}, {"split": "test", "path": "cym_Latn/test/*.arrow"}]}, {"config_name": "lus_Latn", "data_files": [{"split": "train", "path": "lus_Latn/train/*.arrow"}, {"split": "dev", "path": "lus_Latn/dev/*.arrow"}, {"split": "test", "path": "lus_Latn/test/*.arrow"}]}, {"config_name": "ngu_Latn", "data_files": [{"split": "train", "path": "ngu_Latn/train/*.arrow"}, {"split": 
"dev", "path": "ngu_Latn/dev/*.arrow"}, {"split": "test", "path": "ngu_Latn/test/*.arrow"}]}, {"config_name": "yom_Latn", "data_files": [{"split": "train", "path": "yom_Latn/train/*.arrow"}, {"split": "dev", "path": "yom_Latn/dev/*.arrow"}, {"split": "test", "path": "yom_Latn/test/*.arrow"}]}, {"config_name": "tam_Taml", "data_files": [{"split": "train", "path": "tam_Taml/train/*.arrow"}, {"split": "dev", "path": "tam_Taml/dev/*.arrow"}, {"split": "test", "path": "tam_Taml/test/*.arrow"}]}, {"config_name": "ajp_Arab", "data_files": [{"split": "train", "path": "ajp_Arab/train/*.arrow"}, {"split": "dev", "path": "ajp_Arab/dev/*.arrow"}, {"split": "test", "path": "ajp_Arab/test/*.arrow"}]}, {"config_name": "epo_Latn", "data_files": [{"split": "train", "path": "epo_Latn/train/*.arrow"}, {"split": "dev", "path": "epo_Latn/dev/*.arrow"}, {"split": "test", "path": "epo_Latn/test/*.arrow"}]}, {"config_name": "fra_Latn", "data_files": [{"split": "train", "path": "fra_Latn/train/*.arrow"}, {"split": "dev", "path": "fra_Latn/dev/*.arrow"}, {"split": "test", "path": "fra_Latn/test/*.arrow"}]}, {"config_name": "ita_Latn", "data_files": [{"split": "train", "path": "ita_Latn/train/*.arrow"}, {"split": "dev", "path": "ita_Latn/dev/*.arrow"}, {"split": "test", "path": "ita_Latn/test/*.arrow"}]}, {"config_name": "seh_Latn", "data_files": [{"split": "train", "path": "seh_Latn/train/*.arrow"}, {"split": "dev", "path": "seh_Latn/dev/*.arrow"}, {"split": "test", "path": "seh_Latn/test/*.arrow"}]}, {"config_name": "hbs_Latn", "data_files": [{"split": "train", "path": "hbs_Latn/train/*.arrow"}, {"split": "dev", "path": "hbs_Latn/dev/*.arrow"}, {"split": "test", "path": "hbs_Latn/test/*.arrow"}]}, {"config_name": "uzn_Cyrl", "data_files": [{"split": "train", "path": "uzn_Cyrl/train/*.arrow"}, {"split": "dev", "path": "uzn_Cyrl/dev/*.arrow"}, {"split": "test", "path": "uzn_Cyrl/test/*.arrow"}]}, {"config_name": "ksw_Mymr", "data_files": [{"split": "train", "path": "ksw_Mymr/train/*.arrow"}]}, {"config_name": "pms_Latn", "data_files": [{"split": "train", "path": "pms_Latn/train/*.arrow"}, {"split": "dev", "path": "pms_Latn/dev/*.arrow"}, {"split": "test", "path": "pms_Latn/test/*.arrow"}]}, {"config_name": "zlm_Latn", "data_files": [{"split": "train", "path": "zlm_Latn/train/*.arrow"}, {"split": "dev", "path": "zlm_Latn/dev/*.arrow"}, {"split": "test", "path": "zlm_Latn/test/*.arrow"}]}, {"config_name": "qub_Latn", "data_files": [{"split": "train", "path": "qub_Latn/train/*.arrow"}, {"split": "dev", "path": "qub_Latn/dev/*.arrow"}]}, {"config_name": "arg_Latn", "data_files": [{"split": "train", "path": "arg_Latn/train/*.arrow"}, {"split": "dev", "path": "arg_Latn/dev/*.arrow"}, {"split": "test", "path": "arg_Latn/test/*.arrow"}]}, {"config_name": "kaa_Cyrl", "data_files": [{"split": "train", "path": "kaa_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kaa_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kaa_Cyrl/test/*.arrow"}]}, {"config_name": "toj_Latn", "data_files": [{"split": "train", "path": "toj_Latn/train/*.arrow"}, {"split": "dev", "path": "toj_Latn/dev/*.arrow"}, {"split": "test", "path": "toj_Latn/test/*.arrow"}]}, {"config_name": "aze_Grek", "data_files": [{"split": "train", "path": "aze_Grek/train/*.arrow"}, {"split": "dev", "path": "aze_Grek/dev/*.arrow"}, {"split": "test", "path": "aze_Grek/test/*.arrow"}]}, {"config_name": "guj_Cyrl", "data_files": [{"split": "train", "path": "guj_Cyrl/train/*.arrow"}, {"split": "dev", "path": "guj_Cyrl/dev/*.arrow"}, {"split": "test", "path": 
"guj_Cyrl/test/*.arrow"}]}, {"config_name": "kab_Latn", "data_files": [{"split": "train", "path": "kab_Latn/train/*.arrow"}, {"split": "dev", "path": "kab_Latn/dev/*.arrow"}, {"split": "test", "path": "kab_Latn/test/*.arrow"}]}, {"config_name": "pan_Guru", "data_files": [{"split": "train", "path": "pan_Guru/train/*.arrow"}, {"split": "dev", "path": "pan_Guru/dev/*.arrow"}, {"split": "test", "path": "pan_Guru/test/*.arrow"}]}, {"config_name": "nan_Latn", "data_files": [{"split": "train", "path": "nan_Latn/train/*.arrow"}, {"split": "dev", "path": "nan_Latn/dev/*.arrow"}, {"split": "test", "path": "nan_Latn/test/*.arrow"}]}, {"config_name": "aze_Latn", "data_files": [{"split": "train", "path": "aze_Latn/train/*.arrow"}, {"split": "dev", "path": "aze_Latn/dev/*.arrow"}, {"split": "test", "path": "aze_Latn/test/*.arrow"}]}, {"config_name": "ara_Arab", "data_files": [{"split": "train", "path": "ara_Arab/train/*.arrow"}, {"split": "dev", "path": "ara_Arab/dev/*.arrow"}, {"split": "test", "path": "ara_Arab/test/*.arrow"}]}, {"config_name": "aze_Mymr", "data_files": [{"split": "train", "path": "aze_Mymr/train/*.arrow"}, {"split": "dev", "path": "aze_Mymr/dev/*.arrow"}, {"split": "test", "path": "aze_Mymr/test/*.arrow"}]}, {"config_name": "meu_Latn", "data_files": [{"split": "train", "path": "meu_Latn/train/*.arrow"}, {"split": "dev", "path": "meu_Latn/dev/*.arrow"}, {"split": "test", "path": "meu_Latn/test/*.arrow"}]}, {"config_name": "mon_Arab", "data_files": [{"split": "train", "path": "mon_Arab/train/*.arrow"}, {"split": "dev", "path": "mon_Arab/dev/*.arrow"}, {"split": "test", "path": "mon_Arab/test/*.arrow"}]}, {"config_name": "lvs_Latn", "data_files": [{"split": "train", "path": "lvs_Latn/train/*.arrow"}, {"split": "dev", "path": "lvs_Latn/dev/*.arrow"}, {"split": "test", "path": "lvs_Latn/test/*.arrow"}]}, {"config_name": "nbl_Latn", "data_files": [{"split": "train", "path": "nbl_Latn/train/*.arrow"}, {"split": "dev", "path": "nbl_Latn/dev/*.arrow"}, {"split": "test", "path": "nbl_Latn/test/*.arrow"}]}, {"config_name": "crh_Latn", "data_files": [{"split": "train", "path": "crh_Latn/train/*.arrow"}, {"split": "dev", "path": "crh_Latn/dev/*.arrow"}, {"split": "test", "path": "crh_Latn/test/*.arrow"}]}, {"config_name": "kbp_Latn", "data_files": [{"split": "train", "path": "kbp_Latn/train/*.arrow"}, {"split": "dev", "path": "kbp_Latn/dev/*.arrow"}, {"split": "test", "path": "kbp_Latn/test/*.arrow"}]}, {"config_name": "tgl_Latn", "data_files": [{"split": "train", "path": "tgl_Latn/train/*.arrow"}, {"split": "dev", "path": "tgl_Latn/dev/*.arrow"}, {"split": "test", "path": "tgl_Latn/test/*.arrow"}]}, {"config_name": "kmb_Latn", "data_files": [{"split": "train", "path": "kmb_Latn/train/*.arrow"}, {"split": "dev", "path": "kmb_Latn/dev/*.arrow"}, {"split": "test", "path": "kmb_Latn/test/*.arrow"}]}, {"config_name": "hun_Latn", "data_files": [{"split": "train", "path": "hun_Latn/train/*.arrow"}, {"split": "dev", "path": "hun_Latn/dev/*.arrow"}, {"split": "test", "path": "hun_Latn/test/*.arrow"}]}, {"config_name": "aze_Thai", "data_files": [{"split": "train", "path": "aze_Thai/train/*.arrow"}, {"split": "dev", "path": "aze_Thai/dev/*.arrow"}, {"split": "test", "path": "aze_Thai/test/*.arrow"}]}, {"config_name": "yao_Latn", "data_files": [{"split": "train", "path": "yao_Latn/train/*.arrow"}, {"split": "dev", "path": "yao_Latn/dev/*.arrow"}, {"split": "test", "path": "yao_Latn/test/*.arrow"}]}, {"config_name": "arn_Latn", "data_files": [{"split": "train", "path": "arn_Latn/train/*.arrow"}, {"split": 
"dev", "path": "arn_Latn/dev/*.arrow"}, {"split": "test", "path": "arn_Latn/test/*.arrow"}]}, {"config_name": "jbo_Latn", "data_files": [{"split": "train", "path": "jbo_Latn/train/*.arrow"}, {"split": "dev", "path": "jbo_Latn/dev/*.arrow"}, {"split": "test", "path": "jbo_Latn/test/*.arrow"}]}, {"config_name": "mzn_Arab", "data_files": [{"split": "train", "path": "mzn_Arab/train/*.arrow"}, {"split": "dev", "path": "mzn_Arab/dev/*.arrow"}, {"split": "test", "path": "mzn_Arab/test/*.arrow"}]}, {"config_name": "lzh_Hani", "data_files": [{"split": "train", "path": "lzh_Hani/train/*.arrow"}, {"split": "dev", "path": "lzh_Hani/dev/*.arrow"}, {"split": "test", "path": "lzh_Hani/test/*.arrow"}]}, {"config_name": "heb_Hebr", "data_files": [{"split": "train", "path": "heb_Hebr/train/*.arrow"}, {"split": "dev", "path": "heb_Hebr/dev/*.arrow"}, {"split": "test", "path": "heb_Hebr/test/*.arrow"}]}, {"config_name": "bjn_Latn", "data_files": [{"split": "train", "path": "bjn_Latn/train/*.arrow"}, {"split": "dev", "path": "bjn_Latn/dev/*.arrow"}, {"split": "test", "path": "bjn_Latn/test/*.arrow"}]}, {"config_name": "gug_Latn", "data_files": [{"split": "train", "path": "gug_Latn/train/*.arrow"}, {"split": "dev", "path": "gug_Latn/dev/*.arrow"}, {"split": "test", "path": "gug_Latn/test/*.arrow"}]}, {"config_name": "mlt_Hira", "data_files": [{"split": "train", "path": "mlt_Hira/train/*.arrow"}, {"split": "dev", "path": "mlt_Hira/dev/*.arrow"}, {"split": "test", "path": "mlt_Hira/test/*.arrow"}]}, {"config_name": "swc_Latn", "data_files": [{"split": "train", "path": "swc_Latn/train/*.arrow"}, {"split": "dev", "path": "swc_Latn/dev/*.arrow"}, {"split": "test", "path": "swc_Latn/test/*.arrow"}]}, {"config_name": "yor_Latn", "data_files": [{"split": "train", "path": "yor_Latn/train/*.arrow"}, {"split": "dev", "path": "yor_Latn/dev/*.arrow"}, {"split": "test", "path": "yor_Latn/test/*.arrow"}]}, {"config_name": "ban_Latn", "data_files": [{"split": "train", "path": "ban_Latn/train/*.arrow"}, {"split": "dev", "path": "ban_Latn/dev/*.arrow"}, {"split": "test", "path": "ban_Latn/test/*.arrow"}]}, {"config_name": "aze_Guru", "data_files": [{"split": "train", "path": "aze_Guru/train/*.arrow"}, {"split": "dev", "path": "aze_Guru/dev/*.arrow"}, {"split": "test", "path": "aze_Guru/test/*.arrow"}]}, {"config_name": "tlh_Latn", "data_files": [{"split": "train", "path": "tlh_Latn/train/*.arrow"}, {"split": "dev", "path": "tlh_Latn/dev/*.arrow"}, {"split": "test", "path": "tlh_Latn/test/*.arrow"}]}, {"config_name": "chv_Cyrl", "data_files": [{"split": "train", "path": "chv_Cyrl/train/*.arrow"}, {"split": "dev", "path": "chv_Cyrl/dev/*.arrow"}, {"split": "test", "path": "chv_Cyrl/test/*.arrow"}]}, {"config_name": "sin_Sinh", "data_files": [{"split": "train", "path": "sin_Sinh/train/*.arrow"}, {"split": "dev", "path": "sin_Sinh/dev/*.arrow"}, {"split": "test", "path": "sin_Sinh/test/*.arrow"}]}, {"config_name": "aze_Gujr", "data_files": [{"split": "train", "path": "aze_Gujr/train/*.arrow"}, {"split": "dev", "path": "aze_Gujr/dev/*.arrow"}, {"split": "test", "path": "aze_Gujr/test/*.arrow"}]}, {"config_name": "ind_Latn", "data_files": [{"split": "train", "path": "ind_Latn/train/*.arrow"}, {"split": "dev", "path": "ind_Latn/dev/*.arrow"}, {"split": "test", "path": "ind_Latn/test/*.arrow"}]}, {"config_name": "amh_Ethi", "data_files": [{"split": "train", "path": "amh_Ethi/train/*.arrow"}, {"split": "dev", "path": "amh_Ethi/dev/*.arrow"}, {"split": "test", "path": "amh_Ethi/test/*.arrow"}]}, {"config_name": "zea_Latn", "data_files": 
[{"split": "train", "path": "zea_Latn/train/*.arrow"}, {"split": "dev", "path": "zea_Latn/dev/*.arrow"}, {"split": "test", "path": "zea_Latn/test/*.arrow"}]}, {"config_name": "kpg_Latn", "data_files": [{"split": "train", "path": "kpg_Latn/train/*.arrow"}, {"split": "dev", "path": "kpg_Latn/dev/*.arrow"}, {"split": "test", "path": "kpg_Latn/test/*.arrow"}]}, {"config_name": "glk_Arab", "data_files": [{"split": "train", "path": "glk_Arab/train/*.arrow"}, {"split": "dev", "path": "glk_Arab/dev/*.arrow"}, {"split": "test", "path": "glk_Arab/test/*.arrow"}]}, {"config_name": "crh_Cyrl", "data_files": [{"split": "train", "path": "crh_Cyrl/train/*.arrow"}, {"split": "dev", "path": "crh_Cyrl/dev/*.arrow"}, {"split": "test", "path": "crh_Cyrl/test/*.arrow"}]}, {"config_name": "nyu_Latn", "data_files": [{"split": "train", "path": "nyu_Latn/train/*.arrow"}]}, {"config_name": "aze_Beng", "data_files": [{"split": "train", "path": "aze_Beng/train/*.arrow"}, {"split": "dev", "path": "aze_Beng/dev/*.arrow"}, {"split": "test", "path": "aze_Beng/test/*.arrow"}]}, {"config_name": "ibo_Latn", "data_files": [{"split": "train", "path": "ibo_Latn/train/*.arrow"}, {"split": "dev", "path": "ibo_Latn/dev/*.arrow"}, {"split": "test", "path": "ibo_Latn/test/*.arrow"}]}, {"config_name": "msa_Latn", "data_files": [{"split": "train", "path": "msa_Latn/train/*.arrow"}, {"split": "dev", "path": "msa_Latn/dev/*.arrow"}, {"split": "test", "path": "msa_Latn/test/*.arrow"}]}, {"config_name": "prs_Arab", "data_files": [{"split": "train", "path": "prs_Arab/train/*.arrow"}, {"split": "dev", "path": "prs_Arab/dev/*.arrow"}, {"split": "test", "path": "prs_Arab/test/*.arrow"}]}, {"config_name": "nap_Latn", "data_files": [{"split": "train", "path": "nap_Latn/train/*.arrow"}, {"split": "dev", "path": "nap_Latn/dev/*.arrow"}, {"split": "test", "path": "nap_Latn/test/*.arrow"}]}, {"config_name": "bik_Latn", "data_files": [{"split": "train", "path": "bik_Latn/train/*.arrow"}, {"split": "dev", "path": "bik_Latn/dev/*.arrow"}, {"split": "test", "path": "bik_Latn/test/*.arrow"}]}, {"config_name": "srp_Cyrl", "data_files": [{"split": "train", "path": "srp_Cyrl/train/*.arrow"}, {"split": "dev", "path": "srp_Cyrl/dev/*.arrow"}, {"split": "test", "path": "srp_Cyrl/test/*.arrow"}]}, {"config_name": "lao_Laoo", "data_files": [{"split": "train", "path": "lao_Laoo/train/*.arrow"}, {"split": "dev", "path": "lao_Laoo/dev/*.arrow"}, {"split": "test", "path": "lao_Laoo/test/*.arrow"}]}, {"config_name": "kom_Cyrl", "data_files": [{"split": "train", "path": "kom_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kom_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kom_Cyrl/test/*.arrow"}]}, {"config_name": "nde_Latn", "data_files": [{"split": "train", "path": "nde_Latn/train/*.arrow"}, {"split": "dev", "path": "nde_Latn/dev/*.arrow"}, {"split": "test", "path": "nde_Latn/test/*.arrow"}]}, {"config_name": "hui_Latn", "data_files": [{"split": "train", "path": "hui_Latn/train/*.arrow"}, {"split": "dev", "path": "hui_Latn/dev/*.arrow"}, {"split": "test", "path": "hui_Latn/test/*.arrow"}]}, {"config_name": "uig_Latn", "data_files": [{"split": "train", "path": "uig_Latn/train/*.arrow"}, {"split": "dev", "path": "uig_Latn/dev/*.arrow"}, {"split": "test", "path": "uig_Latn/test/*.arrow"}]}, {"config_name": "new_Deva", "data_files": [{"split": "train", "path": "new_Deva/train/*.arrow"}, {"split": "dev", "path": "new_Deva/dev/*.arrow"}, {"split": "test", "path": "new_Deva/test/*.arrow"}]}, {"config_name": "kur_Arab", "data_files": [{"split": "train", "path": 
"kur_Arab/train/*.arrow"}, {"split": "dev", "path": "kur_Arab/dev/*.arrow"}, {"split": "test", "path": "kur_Arab/test/*.arrow"}]}, {"config_name": "sco_Latn", "data_files": [{"split": "train", "path": "sco_Latn/train/*.arrow"}, {"split": "dev", "path": "sco_Latn/dev/*.arrow"}, {"split": "test", "path": "sco_Latn/test/*.arrow"}]}, {"config_name": "ayr_Latn", "data_files": [{"split": "train", "path": "ayr_Latn/train/*.arrow"}, {"split": "dev", "path": "ayr_Latn/dev/*.arrow"}, {"split": "test", "path": "ayr_Latn/test/*.arrow"}]}, {"config_name": "suz_Deva", "data_files": [{"split": "train", "path": "suz_Deva/train/*.arrow"}, {"split": "dev", "path": "suz_Deva/dev/*.arrow"}, {"split": "test", "path": "suz_Deva/test/*.arrow"}]}, {"config_name": "wal_Latn", "data_files": [{"split": "train", "path": "wal_Latn/train/*.arrow"}, {"split": "dev", "path": "wal_Latn/dev/*.arrow"}, {"split": "test", "path": "wal_Latn/test/*.arrow"}]}, {"config_name": "mlt_Latn", "data_files": [{"split": "train", "path": "mlt_Latn/train/*.arrow"}, {"split": "dev", "path": "mlt_Latn/dev/*.arrow"}, {"split": "test", "path": "mlt_Latn/test/*.arrow"}]}, {"config_name": "asm_Beng", "data_files": [{"split": "train", "path": "asm_Beng/train/*.arrow"}, {"split": "dev", "path": "asm_Beng/dev/*.arrow"}, {"split": "test", "path": "asm_Beng/test/*.arrow"}]}, {"config_name": "aze_Syrc", "data_files": [{"split": "train", "path": "aze_Syrc/train/*.arrow"}, {"split": "dev", "path": "aze_Syrc/dev/*.arrow"}, {"split": "test", "path": "aze_Syrc/test/*.arrow"}]}, {"config_name": "san_Deva", "data_files": [{"split": "train", "path": "san_Deva/train/*.arrow"}, {"split": "dev", "path": "san_Deva/dev/*.arrow"}, {"split": "test", "path": "san_Deva/test/*.arrow"}]}, {"config_name": "kaz_Cyrl", "data_files": [{"split": "train", "path": "kaz_Cyrl/train/*.arrow"}, {"split": "dev", "path": "kaz_Cyrl/dev/*.arrow"}, {"split": "test", "path": "kaz_Cyrl/test/*.arrow"}]}, {"config_name": "iba_Latn", "data_files": [{"split": "train", "path": "iba_Latn/train/*.arrow"}]}, {"config_name": "tuk_Latn", "data_files": [{"split": "train", "path": "tuk_Latn/train/*.arrow"}, {"split": "dev", "path": "tuk_Latn/dev/*.arrow"}, {"split": "test", "path": "tuk_Latn/test/*.arrow"}]}, {"config_name": "nso_Latn", "data_files": [{"split": "train", "path": "nso_Latn/train/*.arrow"}, {"split": "dev", "path": "nso_Latn/dev/*.arrow"}, {"split": "test", "path": "nso_Latn/test/*.arrow"}]}, {"config_name": "aze_Geor", "data_files": [{"split": "train", "path": "aze_Geor/train/*.arrow"}, {"split": "dev", "path": "aze_Geor/dev/*.arrow"}, {"split": "test", "path": "aze_Geor/test/*.arrow"}]}, {"config_name": "run_Latn", "data_files": [{"split": "train", "path": "run_Latn/train/*.arrow"}, {"split": "dev", "path": "run_Latn/dev/*.arrow"}, {"split": "test", "path": "run_Latn/test/*.arrow"}]}, {"config_name": "ctu_Latn", "data_files": [{"split": "train", "path": "ctu_Latn/train/*.arrow"}, {"split": "dev", "path": "ctu_Latn/dev/*.arrow"}, {"split": "test", "path": "ctu_Latn/test/*.arrow"}]}, {"config_name": "bam_Latn", "data_files": [{"split": "train", "path": "bam_Latn/train/*.arrow"}, {"split": "dev", "path": "bam_Latn/dev/*.arrow"}, {"split": "test", "path": "bam_Latn/test/*.arrow"}]}, {"config_name": "fin_Latn", "data_files": [{"split": "train", "path": "fin_Latn/train/*.arrow"}, {"split": "dev", "path": "fin_Latn/dev/*.arrow"}, {"split": "test", "path": "fin_Latn/test/*.arrow"}]}, {"config_name": "gor_Latn", "data_files": [{"split": "train", "path": "gor_Latn/train/*.arrow"}, {"split": 
"dev", "path": "gor_Latn/dev/*.arrow"}, {"split": "test", "path": "gor_Latn/test/*.arrow"}]}, {"config_name": "kmr_Latn", "data_files": [{"split": "train", "path": "kmr_Latn/train/*.arrow"}, {"split": "dev", "path": "kmr_Latn/dev/*.arrow"}, {"split": "test", "path": "kmr_Latn/test/*.arrow"}]}, {"config_name": "ben_Beng", "data_files": [{"split": "train", "path": "ben_Beng/train/*.arrow"}, {"split": "dev", "path": "ben_Beng/dev/*.arrow"}, {"split": "test", "path": "ben_Beng/test/*.arrow"}]}, {"config_name": "pag_Latn", "data_files": [{"split": "train", "path": "pag_Latn/train/*.arrow"}, {"split": "dev", "path": "pag_Latn/dev/*.arrow"}, {"split": "test", "path": "pag_Latn/test/*.arrow"}]}, {"config_name": "niu_Latn", "data_files": [{"split": "train", "path": "niu_Latn/train/*.arrow"}]}, {"config_name": "xmf_Geor", "data_files": [{"split": "train", "path": "xmf_Geor/train/*.arrow"}, {"split": "dev", "path": "xmf_Geor/dev/*.arrow"}, {"split": "test", "path": "xmf_Geor/test/*.arrow"}]}, {"config_name": "ekk_Latn", "data_files": [{"split": "train", "path": "ekk_Latn/train/*.arrow"}, {"split": "dev", "path": "ekk_Latn/dev/*.arrow"}, {"split": "test", "path": "ekk_Latn/test/*.arrow"}]}, {"config_name": "lmo_Latn", "data_files": [{"split": "train", "path": "lmo_Latn/train/*.arrow"}, {"split": "dev", "path": "lmo_Latn/dev/*.arrow"}, {"split": "test", "path": "lmo_Latn/test/*.arrow"}]}, {"config_name": "mhr_Cyrl", "data_files": [{"split": "train", "path": "mhr_Cyrl/train/*.arrow"}, {"split": "dev", "path": "mhr_Cyrl/dev/*.arrow"}, {"split": "test", "path": "mhr_Cyrl/test/*.arrow"}]}, {"config_name": "plt_Latn", "data_files": [{"split": "train", "path": "plt_Latn/train/*.arrow"}, {"split": "dev", "path": "plt_Latn/dev/*.arrow"}, {"split": "test", "path": "plt_Latn/test/*.arrow"}]}, {"config_name": "qvi_Latn", "data_files": [{"split": "train", "path": "qvi_Latn/train/*.arrow"}, {"split": "dev", "path": "qvi_Latn/dev/*.arrow"}, {"split": "test", "path": "qvi_Latn/test/*.arrow"}]}, {"config_name": "mlt_Zinh", "data_files": [{"split": "train", "path": "mlt_Zinh/train/*.arrow"}, {"split": "dev", "path": "mlt_Zinh/dev/*.arrow"}, {"split": "test", "path": "mlt_Zinh/test/*.arrow"}]}, {"config_name": "roh_Latn", "data_files": [{"split": "train", "path": "roh_Latn/train/*.arrow"}, {"split": "dev", "path": "roh_Latn/dev/*.arrow"}, {"split": "test", "path": "roh_Latn/test/*.arrow"}]}, {"config_name": "mah_Latn", "data_files": [{"split": "train", "path": "mah_Latn/train/*.arrow"}]}, {"config_name": "npi_Deva", "data_files": [{"split": "train", "path": "npi_Deva/train/*.arrow"}, {"split": "dev", "path": "npi_Deva/dev/*.arrow"}, {"split": "test", "path": "npi_Deva/test/*.arrow"}]}, {"config_name": "guj_Telu", "data_files": [{"split": "train", "path": "guj_Telu/train/*.arrow"}, {"split": "dev", "path": "guj_Telu/dev/*.arrow"}, {"split": "test", "path": "guj_Telu/test/*.arrow"}]}, {"config_name": "tok_Latn", "data_files": [{"split": "train", "path": "tok_Latn/train/*.arrow"}, {"split": "dev", "path": "tok_Latn/dev/*.arrow"}, {"split": "test", "path": "tok_Latn/test/*.arrow"}]}, {"config_name": "eml_Latn", "data_files": [{"split": "train", "path": "eml_Latn/train/*.arrow"}, {"split": "dev", "path": "eml_Latn/dev/*.arrow"}, {"split": "test", "path": "eml_Latn/test/*.arrow"}]}, {"config_name": "pnb_Arab", "data_files": [{"split": "train", "path": "pnb_Arab/train/*.arrow"}, {"split": "dev", "path": "pnb_Arab/dev/*.arrow"}, {"split": "test", "path": "pnb_Arab/test/*.arrow"}]}, {"config_name": "tsn_Hira", "data_files": 
[{"split": "train", "path": "tsn_Hira/train/*.arrow"}, {"split": "dev", "path": "tsn_Hira/dev/*.arrow"}, {"split": "test", "path": "tsn_Hira/test/*.arrow"}]}, {"config_name": "nav_Latn", "data_files": [{"split": "train", "path": "nav_Latn/train/*.arrow"}, {"split": "dev", "path": "nav_Latn/dev/*.arrow"}, {"split": "test", "path": "nav_Latn/test/*.arrow"}]}, {"config_name": "hyw_Latn", "data_files": [{"split": "train", "path": "hyw_Latn/train/*.arrow"}]}, {"config_name": "cat_Latn", "data_files": [{"split": "train", "path": "cat_Latn/train/*.arrow"}, {"split": "dev", "path": "cat_Latn/dev/*.arrow"}, {"split": "test", "path": "cat_Latn/test/*.arrow"}]}, {"config_name": "gym_Latn", "data_files": [{"split": "train", "path": "gym_Latn/train/*.arrow"}, {"split": "dev", "path": "gym_Latn/dev/*.arrow"}, {"split": "test", "path": "gym_Latn/test/*.arrow"}]}, {"config_name": "sat_Olck", "data_files": [{"split": "train", "path": "sat_Olck/train/*.arrow"}, {"split": "dev", "path": "sat_Olck/dev/*.arrow"}, {"split": "test", "path": "sat_Olck/test/*.arrow"}]}, {"config_name": "snd_Arab", "data_files": [{"split": "train", "path": "snd_Arab/train/*.arrow"}, {"split": "dev", "path": "snd_Arab/dev/*.arrow"}, {"split": "test", "path": "snd_Arab/test/*.arrow"}]}, {"config_name": "isl_Latn", "data_files": [{"split": "train", "path": "isl_Latn/train/*.arrow"}, {"split": "dev", "path": "isl_Latn/dev/*.arrow"}, {"split": "test", "path": "isl_Latn/test/*.arrow"}]}, {"config_name": "mlt_Telu", "data_files": [{"split": "train", "path": "mlt_Telu/train/*.arrow"}, {"split": "dev", "path": "mlt_Telu/dev/*.arrow"}, {"split": "test", "path": "mlt_Telu/test/*.arrow"}]}, {"config_name": "kal_Latn", "data_files": [{"split": "train", "path": "kal_Latn/train/*.arrow"}, {"split": "dev", "path": "kal_Latn/dev/*.arrow"}, {"split": "test", "path": "kal_Latn/test/*.arrow"}]}, {"config_name": "aoj_Latn", "data_files": [{"split": "train", "path": "aoj_Latn/train/*.arrow"}, {"split": "dev", "path": "aoj_Latn/dev/*.arrow"}, {"split": "test", "path": "aoj_Latn/test/*.arrow"}]}, {"config_name": "zai_Latn", "data_files": [{"split": "train", "path": "zai_Latn/train/*.arrow"}, {"split": "dev", "path": "zai_Latn/dev/*.arrow"}, {"split": "test", "path": "zai_Latn/test/*.arrow"}]}, {"config_name": "guj_Gujr", "data_files": [{"split": "train", "path": "guj_Gujr/train/*.arrow"}, {"split": "dev", "path": "guj_Gujr/dev/*.arrow"}, {"split": "test", "path": "guj_Gujr/test/*.arrow"}]}, {"config_name": "min_Latn", "data_files": [{"split": "train", "path": "min_Latn/train/*.arrow"}, {"split": "dev", "path": "min_Latn/dev/*.arrow"}, {"split": "test", "path": "min_Latn/test/*.arrow"}]}, {"config_name": "grc_Grek", "data_files": [{"split": "train", "path": "grc_Grek/train/*.arrow"}, {"split": "dev", "path": "grc_Grek/dev/*.arrow"}, {"split": "test", "path": "grc_Grek/test/*.arrow"}]}, {"config_name": "hmn_Latn", "data_files": [{"split": "train", "path": "hmn_Latn/train/*.arrow"}, {"split": "dev", "path": "hmn_Latn/dev/*.arrow"}, {"split": "test", "path": "hmn_Latn/test/*.arrow"}]}, {"config_name": "ido_Latn", "data_files": [{"split": "train", "path": "ido_Latn/train/*.arrow"}, {"split": "dev", "path": "ido_Latn/dev/*.arrow"}, {"split": "test", "path": "ido_Latn/test/*.arrow"}]}, {"config_name": "khm_Khmr", "data_files": [{"split": "train", "path": "khm_Khmr/train/*.arrow"}, {"split": "dev", "path": "khm_Khmr/dev/*.arrow"}, {"split": "test", "path": "khm_Khmr/test/*.arrow"}]}, {"config_name": "quh_Latn", "data_files": [{"split": "train", "path": 
"quh_Latn/train/*.arrow"}, {"split": "dev", "path": "quh_Latn/dev/*.arrow"}, {"split": "test", "path": "quh_Latn/test/*.arrow"}]}, {"config_name": "ikk_Latn", "data_files": [{"split": "train", "path": "ikk_Latn/train/*.arrow"}, {"split": "dev", "path": "ikk_Latn/dev/*.arrow"}, {"split": "test", "path": "ikk_Latn/test/*.arrow"}]}, {"config_name": "iku_Cans", "data_files": [{"split": "train", "path": "iku_Cans/train/*.arrow"}, {"split": "dev", "path": "iku_Cans/dev/*.arrow"}, {"split": "test", "path": "iku_Cans/test/*.arrow"}]}, {"config_name": "tat_Latn", "data_files": [{"split": "train", "path": "tat_Latn/train/*.arrow"}, {"split": "dev", "path": "tat_Latn/dev/*.arrow"}, {"split": "test", "path": "tat_Latn/test/*.arrow"}]}, {"config_name": "bel_Cyrl", "data_files": [{"split": "train", "path": "bel_Cyrl/train/*.arrow"}, {"split": "dev", "path": "bel_Cyrl/dev/*.arrow"}, {"split": "test", "path": "bel_Cyrl/test/*.arrow"}]}, {"config_name": "dyu_Latn", "data_files": [{"split": "train", "path": "dyu_Latn/train/*.arrow"}, {"split": "dev", "path": "dyu_Latn/dev/*.arrow"}, {"split": "test", "path": "dyu_Latn/test/*.arrow"}]}, {"config_name": "guj_Thai", "data_files": [{"split": "train", "path": "guj_Thai/train/*.arrow"}, {"split": "dev", "path": "guj_Thai/dev/*.arrow"}, {"split": "test", "path": "guj_Thai/test/*.arrow"}]}, {"config_name": "que_Latn", "data_files": [{"split": "train", "path": "que_Latn/train/*.arrow"}, {"split": "dev", "path": "que_Latn/dev/*.arrow"}, {"split": "test", "path": "que_Latn/test/*.arrow"}]}, {"config_name": "wol_Latn", "data_files": [{"split": "train", "path": "wol_Latn/train/*.arrow"}, {"split": "dev", "path": "wol_Latn/dev/*.arrow"}, {"split": "test", "path": "wol_Latn/test/*.arrow"}]}, {"config_name": "hne_Deva", "data_files": [{"split": "train", "path": "hne_Deva/train/*.arrow"}, {"split": "dev", "path": "hne_Deva/dev/*.arrow"}, {"split": "test", "path": "hne_Deva/test/*.arrow"}]}, {"config_name": "zho_Hani", "data_files": [{"split": "train", "path": "zho_Hani/train/*.arrow"}, {"split": "dev", "path": "zho_Hani/dev/*.arrow"}, {"split": "test", "path": "zho_Hani/test/*.arrow"}]}, {"config_name": "tum_Latn", "data_files": [{"split": "train", "path": "tum_Latn/train/*.arrow"}, {"split": "dev", "path": "tum_Latn/dev/*.arrow"}, {"split": "test", "path": "tum_Latn/test/*.arrow"}]}, {"config_name": "swh_Latn", "data_files": [{"split": "train", "path": "swh_Latn/train/*.arrow"}, {"split": "dev", "path": "swh_Latn/dev/*.arrow"}, {"split": "test", "path": "swh_Latn/test/*.arrow"}]}, {"config_name": "kua_Latn", "data_files": [{"split": "train", "path": "kua_Latn/train/*.arrow"}]}, {"config_name": "ncj_Latn", "data_files": [{"split": "train", "path": "ncj_Latn/train/*.arrow"}, {"split": "dev", "path": "ncj_Latn/dev/*.arrow"}, {"split": "test", "path": "ncj_Latn/test/*.arrow"}]}, {"config_name": "ewe_Latn", "data_files": [{"split": "train", "path": "ewe_Latn/train/*.arrow"}, {"split": "dev", "path": "ewe_Latn/dev/*.arrow"}, {"split": "test", "path": "ewe_Latn/test/*.arrow"}]}, {"config_name": "mlt_Geor", "data_files": [{"split": "train", "path": "mlt_Geor/train/*.arrow"}, {"split": "dev", "path": "mlt_Geor/dev/*.arrow"}, {"split": "test", "path": "mlt_Geor/test/*.arrow"}]}, {"config_name": "hat_Latn", "data_files": [{"split": "train", "path": "hat_Latn/train/*.arrow"}, {"split": "dev", "path": "hat_Latn/dev/*.arrow"}, {"split": "test", "path": "hat_Latn/test/*.arrow"}]}, {"config_name": "guj_Hani", "data_files": [{"split": "train", "path": "guj_Hani/train/*.arrow"}, {"split": 
"dev", "path": "guj_Hani/dev/*.arrow"}, {"split": "test", "path": "guj_Hani/test/*.arrow"}]}, {"config_name": "ina_Latn", "data_files": [{"split": "train", "path": "ina_Latn/train/*.arrow"}, {"split": "dev", "path": "ina_Latn/dev/*.arrow"}, {"split": "test", "path": "ina_Latn/test/*.arrow"}]}, {"config_name": "ahk_Latn", "data_files": [{"split": "train", "path": "ahk_Latn/train/*.arrow"}, {"split": "dev", "path": "ahk_Latn/dev/*.arrow"}, {"split": "test", "path": "ahk_Latn/test/*.arrow"}]}, {"config_name": "srm_Latn", "data_files": [{"split": "train", "path": "srm_Latn/train/*.arrow"}, {"split": "dev", "path": "srm_Latn/dev/*.arrow"}, {"split": "test", "path": "srm_Latn/test/*.arrow"}]}, {"config_name": "lug_Latn", "data_files": [{"split": "train", "path": "lug_Latn/train/*.arrow"}, {"split": "dev", "path": "lug_Latn/dev/*.arrow"}, {"split": "test", "path": "lug_Latn/test/*.arrow"}]}, {"config_name": "ach_Latn", "data_files": [{"split": "train", "path": "ach_Latn/train/*.arrow"}]}, {"config_name": "rmy_Latn", "data_files": [{"split": "train", "path": "rmy_Latn/train/*.arrow"}, {"split": "dev", "path": "rmy_Latn/dev/*.arrow"}, {"split": "test", "path": "rmy_Latn/test/*.arrow"}]}, {"config_name": "smo_Latn", "data_files": [{"split": "train", "path": "smo_Latn/train/*.arrow"}, {"split": "dev", "path": "smo_Latn/dev/*.arrow"}, {"split": "test", "path": "smo_Latn/test/*.arrow"}]}, {"config_name": "mos_Latn", "data_files": [{"split": "train", "path": "mos_Latn/train/*.arrow"}, {"split": "dev", "path": "mos_Latn/dev/*.arrow"}, {"split": "test", "path": "mos_Latn/test/*.arrow"}]}, {"config_name": "srd_Latn", "data_files": [{"split": "train", "path": "srd_Latn/train/*.arrow"}, {"split": "dev", "path": "srd_Latn/dev/*.arrow"}, {"split": "test", "path": "srd_Latn/test/*.arrow"}]}, {"config_name": "srp_Latn", "data_files": [{"split": "train", "path": "srp_Latn/train/*.arrow"}, {"split": "dev", "path": "srp_Latn/dev/*.arrow"}, {"split": "test", "path": "srp_Latn/test/*.arrow"}]}, {"config_name": "azb_Arab", "data_files": [{"split": "train", "path": "azb_Arab/train/*.arrow"}, {"split": "dev", "path": "azb_Arab/dev/*.arrow"}, {"split": "test", "path": "azb_Arab/test/*.arrow"}]}, {"config_name": "aze_Arab", "data_files": [{"split": "train", "path": "aze_Arab/train/*.arrow"}, {"split": "dev", "path": "aze_Arab/dev/*.arrow"}, {"split": "test", "path": "aze_Arab/test/*.arrow"}]}, {"config_name": "ori_Orya", "data_files": [{"split": "train", "path": "ori_Orya/train/*.arrow"}, {"split": "dev", "path": "ori_Orya/dev/*.arrow"}, {"split": "test", "path": "ori_Orya/test/*.arrow"}]}, {"config_name": "mzh_Latn", "data_files": [{"split": "train", "path": "mzh_Latn/train/*.arrow"}, {"split": "dev", "path": "mzh_Latn/dev/*.arrow"}, {"split": "test", "path": "mzh_Latn/test/*.arrow"}]}, {"config_name": "kur_Latn", "data_files": [{"split": "train", "path": "kur_Latn/train/*.arrow"}, {"split": "dev", "path": "kur_Latn/dev/*.arrow"}, {"split": "test", "path": "kur_Latn/test/*.arrow"}]}, {"config_name": "wbm_Latn", "data_files": [{"split": "train", "path": "wbm_Latn/train/*.arrow"}, {"split": "dev", "path": "wbm_Latn/dev/*.arrow"}]}, {"config_name": "crs_Latn", "data_files": [{"split": "train", "path": "crs_Latn/train/*.arrow"}]}, {"config_name": "aze_Deva", "data_files": [{"split": "train", "path": "aze_Deva/train/*.arrow"}, {"split": "dev", "path": "aze_Deva/dev/*.arrow"}, {"split": "test", "path": "aze_Deva/test/*.arrow"}]}, {"config_name": "tsn_Arab", "data_files": [{"split": "train", "path": "tsn_Arab/train/*.arrow"}, 
{"split": "dev", "path": "tsn_Arab/dev/*.arrow"}, {"split": "test", "path": "tsn_Arab/test/*.arrow"}]}, {"config_name": "ada_Latn", "data_files": [{"split": "train", "path": "ada_Latn/train/*.arrow"}]}, {"config_name": "hif_Latn", "data_files": [{"split": "train", "path": "hif_Latn/train/*.arrow"}, {"split": "dev", "path": "hif_Latn/dev/*.arrow"}, {"split": "test", "path": "hif_Latn/test/*.arrow"}]}, {"config_name": "guj_Grek", "data_files": [{"split": "train", "path": "guj_Grek/train/*.arrow"}, {"split": "dev", "path": "guj_Grek/dev/*.arrow"}, {"split": "test", "path": "guj_Grek/test/*.arrow"}]}, {"config_name": "pcm_Latn", "data_files": [{"split": "train", "path": "pcm_Latn/train/*.arrow"}, {"split": "dev", "path": "pcm_Latn/dev/*.arrow"}, {"split": "test", "path": "pcm_Latn/test/*.arrow"}]}, {"config_name": "tso_Latn", "data_files": [{"split": "train", "path": "tso_Latn/train/*.arrow"}, {"split": "dev", "path": "tso_Latn/dev/*.arrow"}, {"split": "test", "path": "tso_Latn/test/*.arrow"}]}, {"config_name": "nor_Latn", "data_files": [{"split": "train", "path": "nor_Latn/train/*.arrow"}, {"split": "dev", "path": "nor_Latn/dev/*.arrow"}, {"split": "test", "path": "nor_Latn/test/*.arrow"}]}, {"config_name": "bsb_Latn", "data_files": [{"split": "train", "path": "bsb_Latn/train/*.arrow"}, {"split": "dev", "path": "bsb_Latn/dev/*.arrow"}, {"split": "test", "path": "bsb_Latn/test/*.arrow"}]}, {"config_name": "uig_Cyrl", "data_files": [{"split": "train", "path": "uig_Cyrl/train/*.arrow"}, {"split": "dev", "path": "uig_Cyrl/dev/*.arrow"}, {"split": "test", "path": "uig_Cyrl/test/*.arrow"}]}, {"config_name": "gaa_Latn", "data_files": [{"split": "train", "path": "gaa_Latn/train/*.arrow"}]}, {"config_name": "ukr_Cyrl", "data_files": [{"split": "train", "path": "ukr_Cyrl/train/*.arrow"}, {"split": "dev", "path": "ukr_Cyrl/dev/*.arrow"}, {"split": "test", "path": "ukr_Cyrl/test/*.arrow"}]}, {"config_name": "lav_Latn", "data_files": [{"split": "train", "path": "lav_Latn/train/*.arrow"}, {"split": "dev", "path": "lav_Latn/dev/*.arrow"}, {"split": "test", "path": "lav_Latn/test/*.arrow"}]}, {"config_name": "mon_Latn", "data_files": [{"split": "train", "path": "mon_Latn/train/*.arrow"}, {"split": "dev", "path": "mon_Latn/dev/*.arrow"}, {"split": "test", "path": "mon_Latn/test/*.arrow"}]}, {"config_name": "nep_Deva", "data_files": [{"split": "train", "path": "nep_Deva/train/*.arrow"}, {"split": "dev", "path": "nep_Deva/dev/*.arrow"}, {"split": "test", "path": "nep_Deva/test/*.arrow"}]}, {"config_name": "aze_Telu", "data_files": [{"split": "train", "path": "aze_Telu/train/*.arrow"}, {"split": "dev", "path": "aze_Telu/dev/*.arrow"}, {"split": "test", "path": "aze_Telu/test/*.arrow"}]}, {"config_name": "guj_Deva", "data_files": [{"split": "train", "path": "guj_Deva/train/*.arrow"}, {"split": "dev", "path": "guj_Deva/dev/*.arrow"}, {"split": "test", "path": "guj_Deva/test/*.arrow"}]}, {"config_name": "pis_Latn", "data_files": [{"split": "train", "path": "pis_Latn/train/*.arrow"}]}, {"config_name": "lhu_Latn", "data_files": [{"split": "train", "path": "lhu_Latn/train/*.arrow"}, {"split": "dev", "path": "lhu_Latn/dev/*.arrow"}, {"split": "test", "path": "lhu_Latn/test/*.arrow"}]}, {"config_name": "bew_Latn", "data_files": [{"split": "train", "path": "bew_Latn/train/*.arrow"}, {"split": "dev", "path": "bew_Latn/dev/*.arrow"}, {"split": "test", "path": "bew_Latn/test/*.arrow"}]}, {"config_name": "nya_Latn", "data_files": [{"split": "train", "path": "nya_Latn/train/*.arrow"}, {"split": "dev", "path": 
"nya_Latn/dev/*.arrow"}, {"split": "test", "path": "nya_Latn/test/*.arrow"}]}, {"config_name": "poh_Latn", "data_files": [{"split": "train", "path": "poh_Latn/train/*.arrow"}, {"split": "dev", "path": "poh_Latn/dev/*.arrow"}, {"split": "test", "path": "poh_Latn/test/*.arrow"}]}, {"config_name": "nnb_Latn", "data_files": [{"split": "train", "path": "nnb_Latn/train/*.arrow"}, {"split": "dev", "path": "nnb_Latn/dev/*.arrow"}, {"split": "test", "path": "nnb_Latn/test/*.arrow"}]}, {"config_name": "grn_Latn", "data_files": [{"split": "train", "path": "grn_Latn/train/*.arrow"}, {"split": "dev", "path": "grn_Latn/dev/*.arrow"}, {"split": "test", "path": "grn_Latn/test/*.arrow"}]}, {"config_name": "mco_Latn", "data_files": [{"split": "train", "path": "mco_Latn/train/*.arrow"}, {"split": "dev", "path": "mco_Latn/dev/*.arrow"}, {"split": "test", "path": "mco_Latn/test/*.arrow"}]}, {"config_name": "ory_Orya", "data_files": [{"split": "train", "path": "ory_Orya/train/*.arrow"}, {"split": "dev", "path": "ory_Orya/dev/*.arrow"}, {"split": "test", "path": "ory_Orya/test/*.arrow"}]}, {"config_name": "ful_Latn", "data_files": [{"split": "train", "path": "ful_Latn/train/*.arrow"}, {"split": "dev", "path": "ful_Latn/dev/*.arrow"}, {"split": "test", "path": "ful_Latn/test/*.arrow"}]}, {"config_name": "diq_Latn", "data_files": [{"split": "train", "path": "diq_Latn/train/*.arrow"}, {"split": "dev", "path": "diq_Latn/dev/*.arrow"}, {"split": "test", "path": "diq_Latn/test/*.arrow"}]}, {"config_name": "sag_Latn", "data_files": [{"split": "train", "path": "sag_Latn/train/*.arrow"}, {"split": "dev", "path": "sag_Latn/dev/*.arrow"}, {"split": "test", "path": "sag_Latn/test/*.arrow"}]}, {"config_name": "tel_Telu", "data_files": [{"split": "train", "path": "tel_Telu/train/*.arrow"}, {"split": "dev", "path": "tel_Telu/dev/*.arrow"}, {"split": "test", "path": "tel_Telu/test/*.arrow"}]}, {"config_name": "afr_Latn", "data_files": [{"split": "train", "path": "afr_Latn/train/*.arrow"}, {"split": "dev", "path": "afr_Latn/dev/*.arrow"}, {"split": "test", "path": "afr_Latn/test/*.arrow"}]}, {"config_name": "haw_Latn", "data_files": [{"split": "train", "path": "haw_Latn/train/*.arrow"}, {"split": "dev", "path": "haw_Latn/dev/*.arrow"}, {"split": "test", "path": "haw_Latn/test/*.arrow"}]}, {"config_name": "bar_Arab", "data_files": [{"split": "train", "path": "bar_Arab/train/*.arrow"}, {"split": "dev", "path": "bar_Arab/dev/*.arrow"}, {"split": "test", "path": "bar_Arab/test/*.arrow"}]}, {"config_name": "umb_Latn", "data_files": [{"split": "train", "path": "umb_Latn/train/*.arrow"}, {"split": "dev", "path": "umb_Latn/dev/*.arrow"}, {"split": "test", "path": "umb_Latn/test/*.arrow"}]}, {"config_name": "hsb_Latn", "data_files": [{"split": "train", "path": "hsb_Latn/train/*.arrow"}, {"split": "dev", "path": "hsb_Latn/dev/*.arrow"}, {"split": "test", "path": "hsb_Latn/test/*.arrow"}]}, {"config_name": "fij_Latn", "data_files": [{"split": "train", "path": "fij_Latn/train/*.arrow"}, {"split": "dev", "path": "fij_Latn/dev/*.arrow"}, {"split": "test", "path": "fij_Latn/test/*.arrow"}]}, {"config_name": "hbs_Cyrl", "data_files": [{"split": "train", "path": "hbs_Cyrl/train/*.arrow"}, {"split": "dev", "path": "hbs_Cyrl/dev/*.arrow"}, {"split": "test", "path": "hbs_Cyrl/test/*.arrow"}]}, {"config_name": "san_Latn", "data_files": [{"split": "train", "path": "san_Latn/train/*.arrow"}, {"split": "dev", "path": "san_Latn/dev/*.arrow"}, {"split": "test", "path": "san_Latn/test/*.arrow"}]}, {"config_name": "vls_Latn", "data_files": [{"split": 
"train", "path": "vls_Latn/train/*.arrow"}, {"split": "dev", "path": "vls_Latn/dev/*.arrow"}, {"split": "test", "path": "vls_Latn/test/*.arrow"}]}, {"config_name": "zsm_Latn", "data_files": [{"split": "train", "path": "zsm_Latn/train/*.arrow"}, {"split": "dev", "path": "zsm_Latn/dev/*.arrow"}, {"split": "test", "path": "zsm_Latn/test/*.arrow"}]}, {"config_name": "lij_Latn", "data_files": [{"split": "train", "path": "lij_Latn/train/*.arrow"}, {"split": "dev", "path": "lij_Latn/dev/*.arrow"}, {"split": "test", "path": "lij_Latn/test/*.arrow"}]}, {"config_name": "quc_Latn", "data_files": [{"split": "train", "path": "quc_Latn/train/*.arrow"}, {"split": "dev", "path": "quc_Latn/dev/*.arrow"}, {"split": "test", "path": "quc_Latn/test/*.arrow"}]}, {"config_name": "mam_Latn", "data_files": [{"split": "train", "path": "mam_Latn/train/*.arrow"}, {"split": "dev", "path": "mam_Latn/dev/*.arrow"}, {"split": "test", "path": "mam_Latn/test/*.arrow"}]}, {"config_name": "tls_Latn", "data_files": [{"split": "train", "path": "tls_Latn/train/*.arrow"}, {"split": "dev", "path": "tls_Latn/dev/*.arrow"}, {"split": "test", "path": "tls_Latn/test/*.arrow"}]}, {"config_name": "tuc_Latn", "data_files": [{"split": "train", "path": "tuc_Latn/train/*.arrow"}, {"split": "dev", "path": "tuc_Latn/dev/*.arrow"}, {"split": "test", "path": "tuc_Latn/test/*.arrow"}]}, {"config_name": "dan_Latn", "data_files": [{"split": "train", "path": "dan_Latn/train/*.arrow"}, {"split": "dev", "path": "dan_Latn/dev/*.arrow"}, {"split": "test", "path": "dan_Latn/test/*.arrow"}]}, {"config_name": "rue_Cyrl", "data_files": [{"split": "train", "path": "rue_Cyrl/train/*.arrow"}, {"split": "dev", "path": "rue_Cyrl/dev/*.arrow"}, {"split": "test", "path": "rue_Cyrl/test/*.arrow"}]}, {"config_name": "mlt_Guru", "data_files": [{"split": "train", "path": "mlt_Guru/train/*.arrow"}, {"split": "dev", "path": "mlt_Guru/dev/*.arrow"}, {"split": "test", "path": "mlt_Guru/test/*.arrow"}]}, {"config_name": "ace_Latn", "data_files": [{"split": "train", "path": "ace_Latn/train/*.arrow"}, {"split": "dev", "path": "ace_Latn/dev/*.arrow"}, {"split": "test", "path": "ace_Latn/test/*.arrow"}]}, {"config_name": "bem_Latn", "data_files": [{"split": "train", "path": "bem_Latn/train/*.arrow"}, {"split": "dev", "path": "bem_Latn/dev/*.arrow"}, {"split": "test", "path": "bem_Latn/test/*.arrow"}]}, {"config_name": "kam_Latn", "data_files": [{"split": "train", "path": "kam_Latn/train/*.arrow"}, {"split": "dev", "path": "kam_Latn/dev/*.arrow"}, {"split": "test", "path": "kam_Latn/test/*.arrow"}]}, {"config_name": "uig_Hani", "data_files": [{"split": "train", "path": "uig_Hani/train/*.arrow"}, {"split": "dev", "path": "uig_Hani/dev/*.arrow"}, {"split": "test", "path": "uig_Hani/test/*.arrow"}]}, {"config_name": "kaa_Latn", "data_files": [{"split": "train", "path": "kaa_Latn/train/*.arrow"}, {"split": "dev", "path": "kaa_Latn/dev/*.arrow"}, {"split": "test", "path": "kaa_Latn/test/*.arrow"}]}, {"config_name": "ndo_Latn", "data_files": [{"split": "train", "path": "ndo_Latn/train/*.arrow"}, {"split": "dev", "path": "ndo_Latn/dev/*.arrow"}, {"split": "test", "path": "ndo_Latn/test/*.arrow"}]}, {"config_name": "aze_Knda", "data_files": [{"split": "train", "path": "aze_Knda/train/*.arrow"}, {"split": "dev", "path": "aze_Knda/dev/*.arrow"}, {"split": "test", "path": "aze_Knda/test/*.arrow"}]}, {"config_name": "oss_Cyrl", "data_files": [{"split": "train", "path": "oss_Cyrl/train/*.arrow"}, {"split": "dev", "path": "oss_Cyrl/dev/*.arrow"}, {"split": "test", "path": 
"oss_Cyrl/test/*.arrow"}]}, {"config_name": "lit_Latn", "data_files": [{"split": "train", "path": "lit_Latn/train/*.arrow"}, {"split": "dev", "path": "lit_Latn/dev/*.arrow"}, {"split": "test", "path": "lit_Latn/test/*.arrow"}]}, {"config_name": "frr_Latn", "data_files": [{"split": "train", "path": "frr_Latn/train/*.arrow"}, {"split": "dev", "path": "frr_Latn/dev/*.arrow"}, {"split": "test", "path": "frr_Latn/test/*.arrow"}]}, {"config_name": "yap_Latn", "data_files": [{"split": "train", "path": "yap_Latn/train/*.arrow"}, {"split": "dev", "path": "yap_Latn/dev/*.arrow"}, {"split": "test", "path": "yap_Latn/test/*.arrow"}]}, {"config_name": "gom_Latn", "data_files": [{"split": "train", "path": "gom_Latn/train/*.arrow"}, {"split": "dev", "path": "gom_Latn/dev/*.arrow"}, {"split": "test", "path": "gom_Latn/test/*.arrow"}]}, {"config_name": "swe_Latn", "data_files": [{"split": "train", "path": "swe_Latn/train/*.arrow"}, {"split": "dev", "path": "swe_Latn/dev/*.arrow"}, {"split": "test", "path": "swe_Latn/test/*.arrow"}]}, {"config_name": "lfn_Latn", "data_files": [{"split": "train", "path": "lfn_Latn/train/*.arrow"}, {"split": "dev", "path": "lfn_Latn/dev/*.arrow"}, {"split": "test", "path": "lfn_Latn/test/*.arrow"}]}, {"config_name": "cmn_Hani", "data_files": [{"split": "train", "path": "cmn_Hani/train/*.arrow"}, {"split": "dev", "path": "cmn_Hani/dev/*.arrow"}, {"split": "test", "path": "cmn_Hani/test/*.arrow"}]}, {"config_name": "mon_Cyrl", "data_files": [{"split": "train", "path": "mon_Cyrl/train/*.arrow"}, {"split": "dev", "path": "mon_Cyrl/dev/*.arrow"}, {"split": "test", "path": "mon_Cyrl/test/*.arrow"}]}, {"config_name": "vep_Latn", "data_files": [{"split": "train", "path": "vep_Latn/train/*.arrow"}, {"split": "dev", "path": "vep_Latn/dev/*.arrow"}, {"split": "test", "path": "vep_Latn/test/*.arrow"}]}, {"config_name": "ixl_Latn", "data_files": [{"split": "train", "path": "ixl_Latn/train/*.arrow"}, {"split": "dev", "path": "ixl_Latn/dev/*.arrow"}, {"split": "test", "path": "ixl_Latn/test/*.arrow"}]}, {"config_name": "mlt_Gujr", "data_files": [{"split": "train", "path": "mlt_Gujr/train/*.arrow"}, {"split": "dev", "path": "mlt_Gujr/dev/*.arrow"}, {"split": "test", "path": "mlt_Gujr/test/*.arrow"}]}, {"config_name": "gil_Latn", "data_files": [{"split": "train", "path": "gil_Latn/train/*.arrow"}]}, {"config_name": "mau_Latn", "data_files": [{"split": "train", "path": "mau_Latn/train/*.arrow"}, {"split": "dev", "path": "mau_Latn/dev/*.arrow"}, {"split": "test", "path": "mau_Latn/test/*.arrow"}]}, {"config_name": "tsn_Latn", "data_files": [{"split": "train", "path": "tsn_Latn/train/*.arrow"}, {"split": "dev", "path": "tsn_Latn/dev/*.arrow"}, {"split": "test", "path": "tsn_Latn/test/*.arrow"}]}, {"config_name": "aym_Latn", "data_files": [{"split": "train", "path": "aym_Latn/train/*.arrow"}, {"split": "dev", "path": "aym_Latn/dev/*.arrow"}, {"split": "test", "path": "aym_Latn/test/*.arrow"}]}, {"config_name": "vec_Latn", "data_files": [{"split": "train", "path": "vec_Latn/train/*.arrow"}, {"split": "dev", "path": "vec_Latn/dev/*.arrow"}, {"split": "test", "path": "vec_Latn/test/*.arrow"}]}, {"config_name": "gom_Deva", "data_files": [{"split": "train", "path": "gom_Deva/train/*.arrow"}, {"split": "dev", "path": "gom_Deva/dev/*.arrow"}, {"split": "test", "path": "gom_Deva/test/*.arrow"}]}, {"config_name": "fur_Latn", "data_files": [{"split": "train", "path": "fur_Latn/train/*.arrow"}, {"split": "dev", "path": "fur_Latn/dev/*.arrow"}, {"split": "test", "path": "fur_Latn/test/*.arrow"}]}, 
{"config_name": "kin_Latn", "data_files": [{"split": "train", "path": "kin_Latn/train/*.arrow"}, {"split": "dev", "path": "kin_Latn/dev/*.arrow"}, {"split": "test", "path": "kin_Latn/test/*.arrow"}]}, {"config_name": "guj_Hang", "data_files": [{"split": "train", "path": "guj_Hang/train/*.arrow"}, {"split": "dev", "path": "guj_Hang/dev/*.arrow"}, {"split": "test", "path": "guj_Hang/test/*.arrow"}]}, {"config_name": "gcr_Latn", "data_files": [{"split": "train", "path": "gcr_Latn/train/*.arrow"}]}, {"config_name": "sgs_Latn", "data_files": [{"split": "train", "path": "sgs_Latn/train/*.arrow"}, {"split": "dev", "path": "sgs_Latn/dev/*.arrow"}, {"split": "test", "path": "sgs_Latn/test/*.arrow"}]}, {"config_name": "bih_Deva", "data_files": [{"split": "train", "path": "bih_Deva/train/*.arrow"}, {"split": "dev", "path": "bih_Deva/dev/*.arrow"}, {"split": "test", "path": "bih_Deva/test/*.arrow"}]}, {"config_name": "guj_Guru", "data_files": [{"split": "train", "path": "guj_Guru/train/*.arrow"}, {"split": "dev", "path": "guj_Guru/dev/*.arrow"}, {"split": "test", "path": "guj_Guru/test/*.arrow"}]}, {"config_name": "vie_Latn", "data_files": [{"split": "train", "path": "vie_Latn/train/*.arrow"}, {"split": "dev", "path": "vie_Latn/dev/*.arrow"}, {"split": "test", "path": "vie_Latn/test/*.arrow"}]}, {"config_name": "tha_Thai", "data_files": [{"split": "train", "path": "tha_Thai/train/*.arrow"}, {"split": "dev", "path": "tha_Thai/dev/*.arrow"}, {"split": "test", "path": "tha_Thai/test/*.arrow"}]}, {"config_name": "pau_Latn", "data_files": [{"split": "train", "path": "pau_Latn/train/*.arrow"}]}, {"config_name": "est_Latn", "data_files": [{"split": "train", "path": "est_Latn/train/*.arrow"}, {"split": "dev", "path": "est_Latn/dev/*.arrow"}, {"split": "test", "path": "est_Latn/test/*.arrow"}]}, {"config_name": "lue_Latn", "data_files": [{"split": "train", "path": "lue_Latn/train/*.arrow"}]}, {"config_name": "rug_Latn", "data_files": [{"split": "train", "path": "rug_Latn/train/*.arrow"}, {"split": "dev", "path": "rug_Latn/dev/*.arrow"}, {"split": "test", "path": "rug_Latn/test/*.arrow"}]}, {"config_name": "kjb_Latn", "data_files": [{"split": "train", "path": "kjb_Latn/train/*.arrow"}, {"split": "dev", "path": "kjb_Latn/dev/*.arrow"}, {"split": "test", "path": "kjb_Latn/test/*.arrow"}]}, {"config_name": "kik_Latn", "data_files": [{"split": "train", "path": "kik_Latn/train/*.arrow"}, {"split": "dev", "path": "kik_Latn/dev/*.arrow"}, {"split": "test", "path": "kik_Latn/test/*.arrow"}]}, {"config_name": "mri_Latn", "data_files": [{"split": "train", "path": "mri_Latn/train/*.arrow"}, {"split": "dev", "path": "mri_Latn/dev/*.arrow"}, {"split": "test", "path": "mri_Latn/test/*.arrow"}]}, {"config_name": "ber_Latn", "data_files": [{"split": "train", "path": "ber_Latn/train/*.arrow"}, {"split": "dev", "path": "ber_Latn/dev/*.arrow"}, {"split": "test", "path": "ber_Latn/test/*.arrow"}]}, {"config_name": "ssw_Latn", "data_files": [{"split": "train", "path": "ssw_Latn/train/*.arrow"}, {"split": "dev", "path": "ssw_Latn/dev/*.arrow"}, {"split": "test", "path": "ssw_Latn/test/*.arrow"}]}, {"config_name": "guj_Beng", "data_files": [{"split": "train", "path": "guj_Beng/train/*.arrow"}, {"split": "dev", "path": "guj_Beng/dev/*.arrow"}, {"split": "test", "path": "guj_Beng/test/*.arrow"}]}, {"config_name": "quz_Latn", "data_files": [{"split": "train", "path": "quz_Latn/train/*.arrow"}]}, {"config_name": "arb_Arab", "data_files": [{"split": "train", "path": "arb_Arab/train/*.arrow"}, {"split": "dev", "path": 
"arb_Arab/dev/*.arrow"}, {"split": "test", "path": "arb_Arab/test/*.arrow"}]}, {"config_name": "mlt_Sinh", "data_files": [{"split": "train", "path": "mlt_Sinh/train/*.arrow"}, {"split": "dev", "path": "mlt_Sinh/dev/*.arrow"}, {"split": "test", "path": "mlt_Sinh/test/*.arrow"}]}, {"config_name": "mai_Deva", "data_files": [{"split": "train", "path": "mai_Deva/train/*.arrow"}, {"split": "dev", "path": "mai_Deva/dev/*.arrow"}, {"split": "test", "path": "mai_Deva/test/*.arrow"}]}, {"config_name": "mlt_Thai", "data_files": [{"split": "train", "path": "mlt_Thai/train/*.arrow"}, {"split": "dev", "path": "mlt_Thai/dev/*.arrow"}, {"split": "test", "path": "mlt_Thai/test/*.arrow"}]}, {"config_name": "bew_Cyrl", "data_files": [{"split": "train", "path": "bew_Cyrl/train/*.arrow"}, {"split": "dev", "path": "bew_Cyrl/dev/*.arrow"}, {"split": "test", "path": "bew_Cyrl/test/*.arrow"}]}, {"config_name": "tat_Cyrl", "data_files": [{"split": "train", "path": "tat_Cyrl/train/*.arrow"}, {"split": "dev", "path": "tat_Cyrl/dev/*.arrow"}, {"split": "test", "path": "tat_Cyrl/test/*.arrow"}]}, {"config_name": "mya_Mymr", "data_files": [{"split": "train", "path": "mya_Mymr/train/*.arrow"}, {"split": "dev", "path": "mya_Mymr/dev/*.arrow"}, {"split": "test", "path": "mya_Mymr/test/*.arrow"}]}, {"config_name": "alt_Cyrl", "data_files": [{"split": "train", "path": "alt_Cyrl/train/*.arrow"}, {"split": "dev", "path": "alt_Cyrl/dev/*.arrow"}, {"split": "test", "path": "alt_Cyrl/test/*.arrow"}]}, {"config_name": "nno_Latn", "data_files": [{"split": "train", "path": "nno_Latn/train/*.arrow"}, {"split": "dev", "path": "nno_Latn/dev/*.arrow"}, {"split": "test", "path": "nno_Latn/test/*.arrow"}]}, {"config_name": "hrx_Latn", "data_files": [{"split": "train", "path": "hrx_Latn/train/*.arrow"}, {"split": "dev", "path": "hrx_Latn/dev/*.arrow"}, {"split": "test", "path": "hrx_Latn/test/*.arrow"}]}, {"config_name": "hau_Latn", "data_files": [{"split": "train", "path": "hau_Latn/train/*.arrow"}, {"split": "dev", "path": "hau_Latn/dev/*.arrow"}, {"split": "test", "path": "hau_Latn/test/*.arrow"}]}, {"config_name": "gsw_Latn", "data_files": [{"split": "train", "path": "gsw_Latn/train/*.arrow"}, {"split": "dev", "path": "gsw_Latn/dev/*.arrow"}, {"split": "test", "path": "gsw_Latn/test/*.arrow"}]}, {"config_name": "pam_Latn", "data_files": [{"split": "train", "path": "pam_Latn/train/*.arrow"}, {"split": "dev", "path": "pam_Latn/dev/*.arrow"}, {"split": "test", "path": "pam_Latn/test/*.arrow"}]}, {"config_name": "mlt_Deva", "data_files": [{"split": "train", "path": "mlt_Deva/train/*.arrow"}, {"split": "dev", "path": "mlt_Deva/dev/*.arrow"}, {"split": "test", "path": "mlt_Deva/test/*.arrow"}]}, {"config_name": "sun_Latn", "data_files": [{"split": "train", "path": "sun_Latn/train/*.arrow"}, {"split": "dev", "path": "sun_Latn/dev/*.arrow"}, {"split": "test", "path": "sun_Latn/test/*.arrow"}]}, {"config_name": "aze_Sinh", "data_files": [{"split": "train", "path": "aze_Sinh/train/*.arrow"}, {"split": "dev", "path": "aze_Sinh/dev/*.arrow"}, {"split": "test", "path": "aze_Sinh/test/*.arrow"}]}, {"config_name": "lat_Latn", "data_files": [{"split": "train", "path": "lat_Latn/train/*.arrow"}, {"split": "dev", "path": "lat_Latn/dev/*.arrow"}, {"split": "test", "path": "lat_Latn/test/*.arrow"}]}, {"config_name": "bis_Latn", "data_files": [{"split": "train", "path": "bis_Latn/train/*.arrow"}, {"split": "dev", "path": "bis_Latn/dev/*.arrow"}, {"split": "test", "path": "bis_Latn/test/*.arrow"}]}, {"config_name": "udm_Cyrl", "data_files": [{"split": 
"train", "path": "udm_Cyrl/train/*.arrow"}, {"split": "dev", "path": "udm_Cyrl/dev/*.arrow"}, {"split": "test", "path": "udm_Cyrl/test/*.arrow"}]}, {"config_name": "tca_Latn", "data_files": [{"split": "train", "path": "tca_Latn/train/*.arrow"}, {"split": "dev", "path": "tca_Latn/dev/*.arrow"}, {"split": "test", "path": "tca_Latn/test/*.arrow"}]}, {"config_name": "uig_Arab", "data_files": [{"split": "train", "path": "uig_Arab/train/*.arrow"}, {"split": "dev", "path": "uig_Arab/dev/*.arrow"}, {"split": "test", "path": "uig_Arab/test/*.arrow"}]}, {"config_name": "glg_Latn", "data_files": [{"split": "train", "path": "glg_Latn/train/*.arrow"}, {"split": "dev", "path": "glg_Latn/dev/*.arrow"}, {"split": "test", "path": "glg_Latn/test/*.arrow"}]}, {"config_name": "tah_Latn", "data_files": [{"split": "train", "path": "tah_Latn/train/*.arrow"}, {"split": "dev", "path": "tah_Latn/dev/*.arrow"}]}, {"config_name": "glk_Latn", "data_files": [{"split": "train", "path": "glk_Latn/train/*.arrow"}, {"split": "dev", "path": "glk_Latn/dev/*.arrow"}, {"split": "test", "path": "glk_Latn/test/*.arrow"}]}, {"config_name": "aze_Tfng", "data_files": [{"split": "train", "path": "aze_Tfng/train/*.arrow"}, {"split": "dev", "path": "aze_Tfng/dev/*.arrow"}, {"split": "test", "path": "aze_Tfng/test/*.arrow"}]}, {"config_name": "ckb_Arab", "data_files": [{"split": "train", "path": "ckb_Arab/train/*.arrow"}, {"split": "dev", "path": "ckb_Arab/dev/*.arrow"}, {"split": "test", "path": "ckb_Arab/test/*.arrow"}]}, {"config_name": "gle_Latn", "data_files": [{"split": "train", "path": "gle_Latn/train/*.arrow"}, {"split": "dev", "path": "gle_Latn/dev/*.arrow"}, {"split": "test", "path": "gle_Latn/test/*.arrow"}]}, {"config_name": "lim_Latn", "data_files": [{"split": "train", "path": "lim_Latn/train/*.arrow"}, {"split": "dev", "path": "lim_Latn/dev/*.arrow"}, {"split": "test", "path": "lim_Latn/test/*.arrow"}]}, {"config_name": "slk_Latn", "data_files": [{"split": "train", "path": "slk_Latn/train/*.arrow"}, {"split": "dev", "path": "slk_Latn/dev/*.arrow"}, {"split": "test", "path": "slk_Latn/test/*.arrow"}]}, {"config_name": "nds_Latn", "data_files": [{"split": "train", "path": "nds_Latn/train/*.arrow"}, {"split": "dev", "path": "nds_Latn/dev/*.arrow"}, {"split": "test", "path": "nds_Latn/test/*.arrow"}]}, {"config_name": "kor_Hang", "data_files": [{"split": "train", "path": "kor_Hang/train/*.arrow"}, {"split": "dev", "path": "kor_Hang/dev/*.arrow"}, {"split": "test", "path": "kor_Hang/test/*.arrow"}]}, {"config_name": "uzb_Latn", "data_files": [{"split": "train", "path": "uzb_Latn/train/*.arrow"}, {"split": "dev", "path": "uzb_Latn/dev/*.arrow"}, {"split": "test", "path": "uzb_Latn/test/*.arrow"}]}, {"config_name": "pfl_Latn", "data_files": [{"split": "train", "path": "pfl_Latn/train/*.arrow"}, {"split": "dev", "path": "pfl_Latn/dev/*.arrow"}, {"split": "test", "path": "pfl_Latn/test/*.arrow"}]}, {"config_name": "azj_Latn", "data_files": [{"split": "train", "path": "azj_Latn/train/*.arrow"}, {"split": "dev", "path": "azj_Latn/dev/*.arrow"}, {"split": "test", "path": "azj_Latn/test/*.arrow"}]}, {"config_name": "glv_Latn", "data_files": [{"split": "train", "path": "glv_Latn/train/*.arrow"}, {"split": "dev", "path": "glv_Latn/dev/*.arrow"}, {"split": "test", "path": "glv_Latn/test/*.arrow"}]}, {"config_name": "jam_Latn", "data_files": [{"split": "train", "path": "jam_Latn/train/*.arrow"}, {"split": "dev", "path": "jam_Latn/dev/*.arrow"}, {"split": "test", "path": "jam_Latn/test/*.arrow"}]}, {"config_name": "kat_Geor", "data_files": 
[{"split": "train", "path": "kat_Geor/train/*.arrow"}, {"split": "dev", "path": "kat_Geor/dev/*.arrow"}, {"split": "test", "path": "kat_Geor/test/*.arrow"}]}, {"config_name": "fry_Latn", "data_files": [{"split": "train", "path": "fry_Latn/train/*.arrow"}, {"split": "dev", "path": "fry_Latn/dev/*.arrow"}, {"split": "test", "path": "fry_Latn/test/*.arrow"}]}, {"config_name": "guj_Knda", "data_files": [{"split": "train", "path": "guj_Knda/train/*.arrow"}, {"split": "dev", "path": "guj_Knda/dev/*.arrow"}, {"split": "test", "path": "guj_Knda/test/*.arrow"}]}, {"config_name": "kat_Latn", "data_files": [{"split": "train", "path": "kat_Latn/train/*.arrow"}, {"split": "dev", "path": "kat_Latn/dev/*.arrow"}, {"split": "test", "path": "kat_Latn/test/*.arrow"}]}, {"config_name": "twi_Latn", "data_files": [{"split": "train", "path": "twi_Latn/train/*.arrow"}, {"split": "dev", "path": "twi_Latn/dev/*.arrow"}, {"split": "test", "path": "twi_Latn/test/*.arrow"}]}, {"config_name": "eus_Latn", "data_files": [{"split": "train", "path": "eus_Latn/train/*.arrow"}, {"split": "dev", "path": "eus_Latn/dev/*.arrow"}, {"split": "test", "path": "eus_Latn/test/*.arrow"}]}, {"config_name": "toi_Latn", "data_files": [{"split": "train", "path": "toi_Latn/train/*.arrow"}]}, {"config_name": "mlt_Armn", "data_files": [{"split": "train", "path": "mlt_Armn/train/*.arrow"}, {"split": "dev", "path": "mlt_Armn/dev/*.arrow"}, {"split": "test", "path": "mlt_Armn/test/*.arrow"}]}, {"config_name": "mon_Hira", "data_files": [{"split": "train", "path": "mon_Hira/train/*.arrow"}, {"split": "dev", "path": "mon_Hira/dev/*.arrow"}, {"split": "test", "path": "mon_Hira/test/*.arrow"}]}, {"config_name": "mlg_Latn", "data_files": [{"split": "train", "path": "mlg_Latn/train/*.arrow"}, {"split": "dev", "path": "mlg_Latn/dev/*.arrow"}, {"split": "test", "path": "mlg_Latn/test/*.arrow"}]}, {"config_name": "tyv_Cyrl", "data_files": [{"split": "train", "path": "tyv_Cyrl/train/*.arrow"}, {"split": "dev", "path": "tyv_Cyrl/dev/*.arrow"}, {"split": "test", "path": "tyv_Cyrl/test/*.arrow"}]}, {"config_name": "arz_Arab", "data_files": [{"split": "train", "path": "arz_Arab/train/*.arrow"}, {"split": "dev", "path": "arz_Arab/dev/*.arrow"}, {"split": "test", "path": "arz_Arab/test/*.arrow"}]}, {"config_name": "hyw_Armn", "data_files": [{"split": "train", "path": "hyw_Armn/train/*.arrow"}]}, {"config_name": "chk_Latn", "data_files": [{"split": "train", "path": "chk_Latn/train/*.arrow"}, {"split": "dev", "path": "chk_Latn/dev/*.arrow"}, {"split": "test", "path": "chk_Latn/test/*.arrow"}]}, {"config_name": "vol_Latn", "data_files": [{"split": "train", "path": "vol_Latn/train/*.arrow"}, {"split": "dev", "path": "vol_Latn/dev/*.arrow"}, {"split": "test", "path": "vol_Latn/test/*.arrow"}]}, {"config_name": "kek_Latn", "data_files": [{"split": "train", "path": "kek_Latn/train/*.arrow"}, {"split": "dev", "path": "kek_Latn/dev/*.arrow"}, {"split": "test", "path": "kek_Latn/test/*.arrow"}]}, {"config_name": "teo_Latn", "data_files": [{"split": "train", "path": "teo_Latn/train/*.arrow"}]}, {"config_name": "ell_Grek", "data_files": [{"split": "train", "path": "ell_Grek/train/*.arrow"}, {"split": "dev", "path": "ell_Grek/dev/*.arrow"}, {"split": "test", "path": "ell_Grek/test/*.arrow"}]}, {"config_name": "kan_Knda", "data_files": [{"split": "train", "path": "kan_Knda/train/*.arrow"}, {"split": "dev", "path": "kan_Knda/dev/*.arrow"}, {"split": "test", "path": "kan_Knda/test/*.arrow"}]}, {"config_name": "tpi_Latn", "data_files": [{"split": "train", "path": 
"tpi_Latn/train/*.arrow"}, {"split": "dev", "path": "tpi_Latn/dev/*.arrow"}, {"split": "test", "path": "tpi_Latn/test/*.arrow"}]}, {"config_name": "rop_Latn", "data_files": [{"split": "train", "path": "rop_Latn/train/*.arrow"}, {"split": "dev", "path": "rop_Latn/dev/*.arrow"}, {"split": "test", "path": "rop_Latn/test/*.arrow"}]}, {"config_name": "aze_Mlym", "data_files": [{"split": "train", "path": "aze_Mlym/train/*.arrow"}, {"split": "dev", "path": "aze_Mlym/dev/*.arrow"}, {"split": "test", "path": "aze_Mlym/test/*.arrow"}]}, {"config_name": "lua_Latn", "data_files": [{"split": "train", "path": "lua_Latn/train/*.arrow"}, {"split": "dev", "path": "lua_Latn/dev/*.arrow"}]}, {"config_name": "mad_Latn", "data_files": [{"split": "train", "path": "mad_Latn/train/*.arrow"}, {"split": "dev", "path": "mad_Latn/dev/*.arrow"}, {"split": "test", "path": "mad_Latn/test/*.arrow"}]}, {"config_name": "top_Latn", "data_files": [{"split": "train", "path": "top_Latn/train/*.arrow"}, {"split": "dev", "path": "top_Latn/dev/*.arrow"}, {"split": "test", "path": "top_Latn/test/*.arrow"}]}, {"config_name": "scn_Latn", "data_files": [{"split": "train", "path": "scn_Latn/train/*.arrow"}, {"split": "dev", "path": "scn_Latn/dev/*.arrow"}, {"split": "test", "path": "scn_Latn/test/*.arrow"}]}, {"config_name": "aze_Thaa", "data_files": [{"split": "train", "path": "aze_Thaa/train/*.arrow"}, {"split": "dev", "path": "aze_Thaa/dev/*.arrow"}, {"split": "test", "path": "aze_Thaa/test/*.arrow"}]}, {"config_name": "guj_Latn", "data_files": [{"split": "train", "path": "guj_Latn/train/*.arrow"}, {"split": "dev", "path": "guj_Latn/dev/*.arrow"}, {"split": "test", "path": "guj_Latn/test/*.arrow"}]}, {"config_name": "ngl_Latn", "data_files": [{"split": "train", "path": "ngl_Latn/train/*.arrow"}, {"split": "dev", "path": "ngl_Latn/dev/*.arrow"}]}, {"config_name": "mal_Mlym", "data_files": [{"split": "train", "path": "mal_Mlym/train/*.arrow"}, {"split": "dev", "path": "mal_Mlym/dev/*.arrow"}, {"split": "test", "path": "mal_Mlym/test/*.arrow"}]}, {"config_name": "szl_Latn", "data_files": [{"split": "train", "path": "szl_Latn/train/*.arrow"}, {"split": "dev", "path": "szl_Latn/dev/*.arrow"}, {"split": "test", "path": "szl_Latn/test/*.arrow"}]}, {"config_name": "orm_Latn", "data_files": [{"split": "train", "path": "orm_Latn/train/*.arrow"}, {"split": "dev", "path": "orm_Latn/dev/*.arrow"}, {"split": "test", "path": "orm_Latn/test/*.arrow"}]}, {"config_name": "urd_Arab", "data_files": [{"split": "train", "path": "urd_Arab/train/*.arrow"}, {"split": "dev", "path": "urd_Arab/dev/*.arrow"}, {"split": "test", "path": "urd_Arab/test/*.arrow"}]}, {"config_name": "cbk_Latn", "data_files": [{"split": "train", "path": "cbk_Latn/train/*.arrow"}, {"split": "dev", "path": "cbk_Latn/dev/*.arrow"}, {"split": "test", "path": "cbk_Latn/test/*.arrow"}]}]} | 2023-12-15T06:41:30+00:00 | [
"2305.12182"
]
| [
"abk",
"abn",
"ace",
"ach",
"acm",
"acr",
"ada",
"afb",
"afr",
"ahk",
"ajg",
"ajp",
"aka",
"als",
"alt",
"alz",
"amh",
"ami",
"aoj",
"apc",
"ara",
"arb",
"arn",
"ary",
"arz",
"asm",
"ast",
"aym",
"azb",
"aze",
"azj",
"bak",
"bam",
"ban",
"bas",
"bba",
"bbc",
"bci",
"bcl",
"bel",
"bem",
"ben",
"ber",
"bhw",
"bih",
"bik",
"bim",
"bin",
"bis",
"bjn",
"bod",
"bos",
"bpy",
"bqc",
"bre",
"bsb",
"bts",
"btx",
"bum",
"bzj",
"cab",
"cak",
"cat",
"cbk",
"cce",
"cfm",
"cgg",
"che",
"chk",
"chv",
"chw",
"cjk",
"ckb",
"cmn",
"cnh",
"cos",
"crh",
"crs",
"csb",
"ctd",
"ctu",
"cuk",
"cym",
"dan",
"dhv",
"diq",
"div",
"djk",
"dln",
"dtp",
"dua",
"dyu",
"dzo",
"efi",
"ekk",
"ell",
"eml",
"eng",
"epo",
"est",
"eus",
"ewe",
"ext",
"fao",
"fas",
"fat",
"fij",
"fin",
"fon",
"fra",
"frr",
"fry",
"ful",
"fur",
"gaa",
"gcf",
"gcr",
"gil",
"gkn",
"gkp",
"gla",
"gle",
"glg",
"glv",
"gom",
"gor",
"grc",
"grn",
"gsw",
"guc",
"gug",
"guj",
"gur",
"guw",
"gym",
"hat",
"hau",
"haw",
"hbs",
"heb",
"her",
"hif",
"hil",
"hin",
"hmn",
"hmo",
"hne",
"hrv",
"hrx",
"hsb",
"hui",
"hun",
"hus",
"hye",
"hyw",
"iba",
"ibg",
"ibo",
"ido",
"idu",
"ifa",
"ifb",
"ige",
"ikk",
"iku",
"ile",
"ilo",
"ina",
"ind",
"ish",
"isl",
"ita",
"ixl",
"izz",
"jam",
"jav",
"jbo",
"kaa",
"kab",
"kac",
"kal",
"kam",
"kan",
"kat",
"kaz",
"kbd",
"kbp",
"kea",
"kek",
"khm",
"kik",
"kin",
"kir",
"kjh",
"kmb",
"kmr",
"kom",
"kon",
"koo",
"kor",
"kos",
"kpg",
"kqn",
"krc",
"kri",
"ksd",
"ksh",
"kss",
"ksw",
"kua",
"kur",
"kwn",
"lam",
"lao",
"lat",
"lav",
"ldi",
"leh",
"lfn",
"lhu",
"lij",
"lim",
"lin",
"lit",
"llb",
"lmo",
"loz",
"lua",
"lue",
"lug",
"lun",
"luo",
"lus",
"lvs",
"lzh",
"mad",
"mah",
"mai",
"mal",
"mam",
"mar",
"mau",
"mbb",
"mck",
"mco",
"mdy",
"meu",
"mfe",
"mgh",
"mgr",
"mhr",
"min",
"miq",
"mkd",
"mlg",
"mny",
"mon",
"mos",
"mps",
"mri",
"mrw",
"msa",
"mwl",
"mwn",
"mxv",
"mya",
"myv",
"mzh",
"mzn",
"nan",
"nap",
"naq",
"nav",
"nba",
"nbl",
"nch",
"ncj",
"ncx",
"ndc",
"nde",
"ndo",
"nds",
"nep",
"new",
"ngl",
"ngu",
"nia",
"niu",
"nld",
"nnb",
"nno",
"nob",
"nor",
"nse",
"nso",
"nya",
"nyk",
"nyn",
"nyu",
"nyy",
"nzi",
"oci",
"ogo",
"oke",
"ori",
"orm",
"ory",
"oss",
"ote",
"pag",
"pam",
"pan",
"pap",
"pau",
"pcd",
"pcm",
"pdt",
"pes",
"pfl",
"phm",
"pis",
"pls",
"plt",
"pms",
"pnb",
"poh",
"pon",
"por",
"prk",
"pus",
"pxm",
"qub",
"quc",
"que",
"qug",
"quw",
"quy",
"quz",
"qvi",
"rap",
"rar",
"rmn",
"rmy",
"rng",
"roh",
"ron",
"rue",
"rug",
"run",
"rus",
"sag",
"sah",
"san",
"sat",
"scn",
"sco",
"seh",
"sgs",
"sid",
"sin",
"skg",
"slk",
"slv",
"sme",
"smo",
"sna",
"snd",
"som",
"sop",
"sot",
"sqi",
"srd",
"srm",
"srn",
"srp",
"ssw",
"sun",
"suz",
"swa",
"swe",
"swh",
"sxn",
"szl",
"tah",
"tam",
"tat",
"tbz",
"tca",
"tcf",
"tdt",
"tdx",
"tel",
"teo",
"tgl",
"tha",
"tih",
"tir",
"tiv",
"tlh",
"tob",
"tog",
"toh",
"toi",
"toj",
"tok",
"ton",
"top",
"tpi",
"tpm",
"tsc",
"tso",
"tsz",
"ttj",
"tuc",
"tuk",
"tur",
"twi",
"twx",
"tyv",
"tzh",
"tzo",
"udm",
"uig",
"ukr",
"umb",
"urd",
"urh",
"uzb",
"uzn",
"ven",
"vep",
"vie",
"vls",
"vmw",
"vol",
"wal",
"wes",
"wln",
"wol",
"wuu",
"xav",
"xho",
"xmf",
"xmv",
"yao",
"yap",
"yid",
"yom",
"yor",
"yua",
"yue",
"zai",
"zea",
"zho",
"zlm",
"zne",
"zpa",
"zsm",
"zul"
]
| TAGS
#multilinguality-multilingual #language-Abkhazian #language-Abua #language-Achinese #language-Acoli #language-Mesopotamian Arabic #language-Achi #language-Adangme #language-Gulf Arabic #language-Afrikaans #language-Akha #language-Aja (Benin) #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Southern Altai #language-Alur #language-Amharic #language-Amis #language-Mufian #language-Levantine Arabic #language-Arabic #language-Standard Arabic #language-Mapudungun #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Aymara #language-South Azerbaijani #language-Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Basa (Cameroon) #language-Baatonum #language-Batak Toba #language-Baoulé #language-Central Bikol #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-ber #language-Biak #language-bih #language-Bikol #language-Bimoba #language-Bini #language-Bislama #language-Banjar #language-Tibetan #language-Bosnian #language-Bishnupriya #language-Boko (Benin) #language-Breton #language-Brunei Bisaya #language-Batak Simalungun #language-Batak Karo #language-Bulu (Cameroon) #language-Belize Kriol English #language-Garifuna #language-Kaqchikel #language-Catalan #language-Chavacano #language-Chopi #language-Falam Chin #language-Chiga #language-Chechen #language-Chuukese #language-Chuvash #language-Chuwabu #language-Chokwe #language-Central Kurdish #language-Mandarin Chinese #language-Hakha Chin #language-Corsican #language-Crimean Tatar #language-Seselwa Creole French #language-Kashubian #language-Tedim Chin #language-Chol #language-San Blas Kuna #language-Welsh #language-Danish #language-Dehu #language-Dimli (individual language) #language-Dhivehi #language-Eastern Maroon Creole #language-Darlong #language-Kadazan Dusun #language-Duala #language-Dyula #language-Dzongkha #language-Efik #language-Standard Estonian #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Extremaduran #language-Faroese #language-Persian #language-Fanti #language-Fijian #language-Finnish #language-Fon #language-French #language-Northern Frisian #language-Western Frisian #language-Fulah #language-Friulian #language-Ga #language-Guadeloupean Creole French #language-Guianese Creole French #language-Gilbertese #language-Gokana #language-Guinea Kpelle #language-Scottish Gaelic #language-Irish #language-Galician #language-Manx #language-Goan Konkani #language-Gorontalo #language-Ancient Greek (to 1453) #language-Guarani #language-Swiss German #language-Wayuu #language-Paraguayan Guaraní #language-Gujarati #language-Farefare #language-Gun #language-Ngäbere #language-Haitian #language-Hausa #language-Hawaiian #language-Serbo-Croatian #language-Hebrew #language-Herero #language-Fiji Hindi #language-Hiligaynon #language-Hindi #language-Hmong #language-Hiri Motu #language-Chhattisgarhi #language-Croatian #language-Hunsrik #language-Upper Sorbian #language-Huli #language-Hungarian #language-Huastec #language-Armenian #language-Western Armenian #language-Iban #language-Ibanag #language-Igbo #language-Ido #language-Idoma #language-Amganad Ifugao #language-Batad Ifugao #language-Igede #language-Ika #language-Inuktitut #language-Interlingue #language-Iloko #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Esan #language-Icelandic 
#language-Italian #language-Ixil #language-Izii #language-Jamaican Creole English #language-Javanese #language-Lojban #language-Kara-Kalpak #language-Kabyle #language-Kachin #language-Kalaallisut #language-Kamba (Kenya) #language-Kannada #language-Georgian #language-Kazakh #language-Kabardian #language-Kabiyè #language-Kabuverdianu #language-Kekchí #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Khakas #language-Kimbundu #language-Northern Kurdish #language-Komi #language-Kongo #language-Konzo #language-Korean #language-Kosraean #language-Kapingamarangi #language-Kaonde #language-Karachay-Balkar #language-Krio #language-Kuanua #language-Kölsch #language-Southern Kisi #language-S'gaw Karen #language-Kuanyama #language-Kurdish #language-Kwangali #language-Lamba #language-Lao #language-Latin #language-Latvian #language-Laari #language-Lenje #language-Lingua Franca Nova #language-Lahu #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lolo #language-Lombard #language-Lozi #language-Luba-Lulua #language-Luvale #language-Ganda #language-Lunda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Literary Chinese #language-Madurese #language-Marshallese #language-Maithili #language-Malayalam #language-Mam #language-Marathi #language-Huautla Mazatec #language-Western Bukidnon Manobo #language-Mbunda #language-Coatlán Mixe #language-Male (Ethiopia) #language-Motu #language-Morisyen #language-Makhuwa-Meetto #language-Mambwe-Lungu #language-Eastern Mari #language-Minangkabau #language-Mískito #language-Macedonian #language-Malagasy #language-Manyawa #language-Mongolian #language-Mossi #language-Dadibi #language-Maori #language-Maranao #language-Malay (macrolanguage) #language-Mirandese #language-Nyamwanga #language-Metlatónoc Mixtec #language-Burmese #language-Erzya #language-Wichí Lhamtés Güisnay #language-Mazanderani #language-Min Nan Chinese #language-Neapolitan #language-Khoekhoe #language-Navajo #language-Nyemba #language-South Ndebele #language-Central Huasteca Nahuatl #language-Northern Puebla Nahuatl #language-Central Puebla Nahuatl #language-Ndau #language-North Ndebele #language-Ndonga #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Lomwe #language-Guerrero Nahuatl #language-Nias #language-Niuean #language-Dutch #language-Nande #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Norwegian #language-Nsenga #language-Pedi #language-Nyanja #language-Nyaneka #language-Nyankole #language-Nyungwe #language-Nyakyusa-Ngonde #language-Nzima #language-Occitan (post 1500) #language-Khana #language-Okpe (Southwestern Edo) #language-Oriya (macrolanguage) #language-Oromo #language-Odia #language-Ossetian #language-Mezquital Otomi #language-Pangasinan #language-Pampanga #language-Panjabi #language-Papiamento #language-Palauan #language-Picard #language-Nigerian Pidgin #language-Plautdietsch #language-Iranian Persian #language-Pfaelzisch #language-Phimbi #language-Pijin #language-San Marcos Tlacoyalco Popoloca #language-Plateau Malagasy #language-Piemontese #language-Western Panjabi #language-Poqomchi' #language-Pohnpeian #language-Portuguese #language-Parauk #language-Pushto #language-Quetzaltepec Mixe #language-Huallaga Huánuco Quechua #language-K'iche' #language-Quechua #language-Chimborazo Highland Quichua #language-Tena Lowland Quichua #language-Ayacucho Quechua #language-Cusco Quechua #language-Imbabura Highland Quichua #language-Rapanui 
#language-Rarotongan #language-Balkan Romani #language-Vlax Romani #language-Ronga #language-Romansh #language-Romanian #language-Rusyn #language-Roviana #language-Rundi #language-Russian #language-Sango #language-Yakut #language-Sanskrit #language-Santali #language-Sicilian #language-Scots #language-Sena #language-Samogitian #language-Sidamo #language-Sinhala #language-Sakalava Malagasy #language-Slovak #language-Slovenian #language-Northern Sami #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Songe #language-Southern Sotho #language-Albanian #language-Sardinian #language-Saramaccan #language-Sranan Tongo #language-Serbian #language-Swati #language-Sundanese #language-Sunwar #language-Swahili (macrolanguage) #language-Swedish #language-Swahili (individual language) #language-Sangir #language-Silesian #language-Tahitian #language-Tamil #language-Tatar #language-Ditammari #language-Ticuna #language-Malinaltepec Me'phaa #language-Tetun Dili #language-Tandroy-Mahafaly Malagasy #language-Telugu #language-Teso #language-Tagalog #language-Thai #language-Timugon Murut #language-Tigrinya #language-Tiv #language-Klingon #language-Toba #language-Tonga (Nyasa) #language-Gitonga #language-Tonga (Zambia) #language-Tojolabal #language-Toki Pona #language-Tonga (Tonga Islands) #language-Papantla Totonac #language-Tok Pisin #language-Tampulma #language-Tswa #language-Tsonga #language-Purepecha #language-Tooro #language-Mutu #language-Turkmen #language-Turkish #language-Twi #language-Tewe #language-Tuvinian #language-Tzeltal #language-Tzotzil #language-Udmurt #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Urhobo #language-Uzbek #language-Northern Uzbek #language-Venda #language-Veps #language-Vietnamese #language-Vlaams #language-Makhuwa #language-Volapük #language-Wolaytta #language-Cameroon Pidgin #language-Walloon #language-Wolof #language-Wu Chinese #language-Xavánte #language-Xhosa #language-Mingrelian #language-Antankarana Malagasy #language-Yao #language-Yapese #language-Yiddish #language-Yombe #language-Yoruba #language-Yucateco #language-Yue Chinese #language-Isthmus Zapotec #language-Zeeuws #language-Chinese #language-Malay (individual language) #language-Zande (individual language) #language-Lachiguiri Zapotec #language-Standard Malay #language-Zulu #license-other #arxiv-2305.12182 #region-us
|
# Glot500 Corpus
A dataset of natural language data collected by putting together more than 150
existing monolingual and multilingual datasets and crawling known multilingual websites.
The focus of this dataset is on 500 extremely low-resource languages.
(More languages are still to be uploaded here.)
This dataset is used to train the Glot500 model.
- Homepage: homepage
- Repository: github
- Paper: acl, arxiv
## Usage
Replace 'nbl_Latn' with your specific language.
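A minimal loading sketch follows; it assumes the dataset repository id `cis-lmu/Glot500` and that each language is exposed as a config named after its code and script (as with `nbl_Latn`), so check the repository's config list if the call fails.
```python
# Minimal sketch: load a single language config of the corpus.
# Assumptions (not stated in this section): the repo id is "cis-lmu/Glot500"
# and each language is a config named like "nbl_Latn".
from datasets import load_dataset

dataset = load_dataset("cis-lmu/Glot500", "nbl_Latn", split="train")
print(dataset[0])  # each row should carry a sentence plus its source dataset
```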
<details>
<summary>Click to show supported languages:</summary>
</details>
## License
We don't own any part of the data. The original source of each sentence of the data is indicated in the dataset field.
To see the copyright license of the original datasets visit here.
We license the actual packaging, the metadata and the annotations of these data under the cc0-1.0.
If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at glot500@URL.
## Ethical Considerations
1. Biases: The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context, especially for news sources and social media.
2. Representativeness: While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.
3. Ethics: We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.
If you use any part of this code and data in your research, please cite it using the following BibTeX entry.
| [
"# Glot500 Corpus\n\nA dataset of natural language data collected by putting together more than 150 \nexisting mono-lingual and multilingual datasets together and crawling known multilingual websites.\nThe focus of this dataset is on 500 extremely low-resource languages. \n(More Languages still to be uploaded here)\n\nThis dataset is used to train the Glot500 model.\n\n\n- Homepage: homepage\n- Repository: github\n- Paper: acl, arxiv",
"## Usage \nReplace 'nbl_Latn' with your specific language.\n\n\n<details>\n <summary>Click to show supported languages:</summary>\n\n\n</details>",
"## License\nWe don't own any part of the data. The original source of each sentence of the data is indicated in dataset field. \n\nTo see the copyright license of the original datasets visit here.\n\nWe license the actual packaging, the metadata and the annotations of these data under the cc0-1.0.\n\nIf you are a website/dataset owner and do not want your data to be included in this corpra, please send us an email at glot500@URL.",
"## Ethical Considerations\n\n1. Biases: The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context especially for news sources and social medias.\n\n2. Representativeness: While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.\n\n3. Ethics: We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.\n\n\nIf you use any part of this code and data in your research, please cite it using the following BibTeX entry."
]
| [
"TAGS\n#multilinguality-multilingual #language-Abkhazian #language-Abua #language-Achinese #language-Acoli #language-Mesopotamian Arabic #language-Achi #language-Adangme #language-Gulf Arabic #language-Afrikaans #language-Akha #language-Aja (Benin) #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Southern Altai #language-Alur #language-Amharic #language-Amis #language-Mufian #language-Levantine Arabic #language-Arabic #language-Standard Arabic #language-Mapudungun #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Aymara #language-South Azerbaijani #language-Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Basa (Cameroon) #language-Baatonum #language-Batak Toba #language-Baoulé #language-Central Bikol #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-ber #language-Biak #language-bih #language-Bikol #language-Bimoba #language-Bini #language-Bislama #language-Banjar #language-Tibetan #language-Bosnian #language-Bishnupriya #language-Boko (Benin) #language-Breton #language-Brunei Bisaya #language-Batak Simalungun #language-Batak Karo #language-Bulu (Cameroon) #language-Belize Kriol English #language-Garifuna #language-Kaqchikel #language-Catalan #language-Chavacano #language-Chopi #language-Falam Chin #language-Chiga #language-Chechen #language-Chuukese #language-Chuvash #language-Chuwabu #language-Chokwe #language-Central Kurdish #language-Mandarin Chinese #language-Hakha Chin #language-Corsican #language-Crimean Tatar #language-Seselwa Creole French #language-Kashubian #language-Tedim Chin #language-Chol #language-San Blas Kuna #language-Welsh #language-Danish #language-Dehu #language-Dimli (individual language) #language-Dhivehi #language-Eastern Maroon Creole #language-Darlong #language-Kadazan Dusun #language-Duala #language-Dyula #language-Dzongkha #language-Efik #language-Standard Estonian #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Extremaduran #language-Faroese #language-Persian #language-Fanti #language-Fijian #language-Finnish #language-Fon #language-French #language-Northern Frisian #language-Western Frisian #language-Fulah #language-Friulian #language-Ga #language-Guadeloupean Creole French #language-Guianese Creole French #language-Gilbertese #language-Gokana #language-Guinea Kpelle #language-Scottish Gaelic #language-Irish #language-Galician #language-Manx #language-Goan Konkani #language-Gorontalo #language-Ancient Greek (to 1453) #language-Guarani #language-Swiss German #language-Wayuu #language-Paraguayan Guaraní #language-Gujarati #language-Farefare #language-Gun #language-Ngäbere #language-Haitian #language-Hausa #language-Hawaiian #language-Serbo-Croatian #language-Hebrew #language-Herero #language-Fiji Hindi #language-Hiligaynon #language-Hindi #language-Hmong #language-Hiri Motu #language-Chhattisgarhi #language-Croatian #language-Hunsrik #language-Upper Sorbian #language-Huli #language-Hungarian #language-Huastec #language-Armenian #language-Western Armenian #language-Iban #language-Ibanag #language-Igbo #language-Ido #language-Idoma #language-Amganad Ifugao #language-Batad Ifugao #language-Igede #language-Ika #language-Inuktitut #language-Interlingue #language-Iloko #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Esan #language-Icelandic 
#language-Italian #language-Ixil #language-Izii #language-Jamaican Creole English #language-Javanese #language-Lojban #language-Kara-Kalpak #language-Kabyle #language-Kachin #language-Kalaallisut #language-Kamba (Kenya) #language-Kannada #language-Georgian #language-Kazakh #language-Kabardian #language-Kabiyè #language-Kabuverdianu #language-Kekchí #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Khakas #language-Kimbundu #language-Northern Kurdish #language-Komi #language-Kongo #language-Konzo #language-Korean #language-Kosraean #language-Kapingamarangi #language-Kaonde #language-Karachay-Balkar #language-Krio #language-Kuanua #language-Kölsch #language-Southern Kisi #language-S'gaw Karen #language-Kuanyama #language-Kurdish #language-Kwangali #language-Lamba #language-Lao #language-Latin #language-Latvian #language-Laari #language-Lenje #language-Lingua Franca Nova #language-Lahu #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lolo #language-Lombard #language-Lozi #language-Luba-Lulua #language-Luvale #language-Ganda #language-Lunda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Literary Chinese #language-Madurese #language-Marshallese #language-Maithili #language-Malayalam #language-Mam #language-Marathi #language-Huautla Mazatec #language-Western Bukidnon Manobo #language-Mbunda #language-Coatlán Mixe #language-Male (Ethiopia) #language-Motu #language-Morisyen #language-Makhuwa-Meetto #language-Mambwe-Lungu #language-Eastern Mari #language-Minangkabau #language-Mískito #language-Macedonian #language-Malagasy #language-Manyawa #language-Mongolian #language-Mossi #language-Dadibi #language-Maori #language-Maranao #language-Malay (macrolanguage) #language-Mirandese #language-Nyamwanga #language-Metlatónoc Mixtec #language-Burmese #language-Erzya #language-Wichí Lhamtés Güisnay #language-Mazanderani #language-Min Nan Chinese #language-Neapolitan #language-Khoekhoe #language-Navajo #language-Nyemba #language-South Ndebele #language-Central Huasteca Nahuatl #language-Northern Puebla Nahuatl #language-Central Puebla Nahuatl #language-Ndau #language-North Ndebele #language-Ndonga #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Lomwe #language-Guerrero Nahuatl #language-Nias #language-Niuean #language-Dutch #language-Nande #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Norwegian #language-Nsenga #language-Pedi #language-Nyanja #language-Nyaneka #language-Nyankole #language-Nyungwe #language-Nyakyusa-Ngonde #language-Nzima #language-Occitan (post 1500) #language-Khana #language-Okpe (Southwestern Edo) #language-Oriya (macrolanguage) #language-Oromo #language-Odia #language-Ossetian #language-Mezquital Otomi #language-Pangasinan #language-Pampanga #language-Panjabi #language-Papiamento #language-Palauan #language-Picard #language-Nigerian Pidgin #language-Plautdietsch #language-Iranian Persian #language-Pfaelzisch #language-Phimbi #language-Pijin #language-San Marcos Tlacoyalco Popoloca #language-Plateau Malagasy #language-Piemontese #language-Western Panjabi #language-Poqomchi' #language-Pohnpeian #language-Portuguese #language-Parauk #language-Pushto #language-Quetzaltepec Mixe #language-Huallaga Huánuco Quechua #language-K'iche' #language-Quechua #language-Chimborazo Highland Quichua #language-Tena Lowland Quichua #language-Ayacucho Quechua #language-Cusco Quechua #language-Imbabura Highland Quichua #language-Rapanui 
#language-Rarotongan #language-Balkan Romani #language-Vlax Romani #language-Ronga #language-Romansh #language-Romanian #language-Rusyn #language-Roviana #language-Rundi #language-Russian #language-Sango #language-Yakut #language-Sanskrit #language-Santali #language-Sicilian #language-Scots #language-Sena #language-Samogitian #language-Sidamo #language-Sinhala #language-Sakalava Malagasy #language-Slovak #language-Slovenian #language-Northern Sami #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Songe #language-Southern Sotho #language-Albanian #language-Sardinian #language-Saramaccan #language-Sranan Tongo #language-Serbian #language-Swati #language-Sundanese #language-Sunwar #language-Swahili (macrolanguage) #language-Swedish #language-Swahili (individual language) #language-Sangir #language-Silesian #language-Tahitian #language-Tamil #language-Tatar #language-Ditammari #language-Ticuna #language-Malinaltepec Me'phaa #language-Tetun Dili #language-Tandroy-Mahafaly Malagasy #language-Telugu #language-Teso #language-Tagalog #language-Thai #language-Timugon Murut #language-Tigrinya #language-Tiv #language-Klingon #language-Toba #language-Tonga (Nyasa) #language-Gitonga #language-Tonga (Zambia) #language-Tojolabal #language-Toki Pona #language-Tonga (Tonga Islands) #language-Papantla Totonac #language-Tok Pisin #language-Tampulma #language-Tswa #language-Tsonga #language-Purepecha #language-Tooro #language-Mutu #language-Turkmen #language-Turkish #language-Twi #language-Tewe #language-Tuvinian #language-Tzeltal #language-Tzotzil #language-Udmurt #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Urhobo #language-Uzbek #language-Northern Uzbek #language-Venda #language-Veps #language-Vietnamese #language-Vlaams #language-Makhuwa #language-Volapük #language-Wolaytta #language-Cameroon Pidgin #language-Walloon #language-Wolof #language-Wu Chinese #language-Xavánte #language-Xhosa #language-Mingrelian #language-Antankarana Malagasy #language-Yao #language-Yapese #language-Yiddish #language-Yombe #language-Yoruba #language-Yucateco #language-Yue Chinese #language-Isthmus Zapotec #language-Zeeuws #language-Chinese #language-Malay (individual language) #language-Zande (individual language) #language-Lachiguiri Zapotec #language-Standard Malay #language-Zulu #license-other #arxiv-2305.12182 #region-us \n",
"# Glot500 Corpus\n\nA dataset of natural language data collected by putting together more than 150 \nexisting mono-lingual and multilingual datasets together and crawling known multilingual websites.\nThe focus of this dataset is on 500 extremely low-resource languages. \n(More Languages still to be uploaded here)\n\nThis dataset is used to train the Glot500 model.\n\n\n- Homepage: homepage\n- Repository: github\n- Paper: acl, arxiv",
"## Usage \nReplace 'nbl_Latn' with your specific language.\n\n\n<details>\n <summary>Click to show supported languages:</summary>\n\n\n</details>",
"## License\nWe don't own any part of the data. The original source of each sentence of the data is indicated in dataset field. \n\nTo see the copyright license of the original datasets visit here.\n\nWe license the actual packaging, the metadata and the annotations of these data under the cc0-1.0.\n\nIf you are a website/dataset owner and do not want your data to be included in this corpra, please send us an email at glot500@URL.",
"## Ethical Considerations\n\n1. Biases: The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context especially for news sources and social medias.\n\n2. Representativeness: While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.\n\n3. Ethics: We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.\n\n\nIf you use any part of this code and data in your research, please cite it using the following BibTeX entry."
]
| [
2988,
102,
43,
104,
181
]
| [
"passage: "
]
|
fcbb7b9bdb45023c8a29c631dee547bbf2ab3960 | # Dataset Card for "nmsqa_full-dev_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuhsinchan/nmsqa_full-dev_test | [
"region:us"
]
| 2023-11-01T10:27:53+00:00 | {"dataset_info": {"features": [{"name": "case_id", "dtype": "string"}, {"name": "context_code", "sequence": "int16"}, {"name": "context_cnt", "sequence": "int16"}, {"name": "question_code", "sequence": "int16"}, {"name": "question_cnt", "sequence": "int16"}, {"name": "start_idx", "dtype": "int64"}, {"name": "end_idx", "dtype": "int64"}, {"name": "start_time", "dtype": "float64"}, {"name": "end_time", "dtype": "float64"}], "splits": [{"name": "dev", "num_bytes": 102442544, "num_examples": 17155}, {"name": "test", "num_bytes": 2316076, "num_examples": 267}], "download_size": 0, "dataset_size": 104758620}} | 2023-11-01T12:12:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "nmsqa_full-dev_test"
More Information needed | [
"# Dataset Card for \"nmsqa_full-dev_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"nmsqa_full-dev_test\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"nmsqa_full-dev_test\"\n\nMore Information needed"
]
|
c7f9f7c4e68e5a5b1faac26eb3ec25f0ef5d8a4d | # Dataset Card for "orca-lite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | erfanzar/orca-lite | [
"region:us"
]
| 2023-11-01T10:28:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "user", "dtype": "string"}, {"name": "gpt", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "llama_2_prompt_style", "dtype": "string"}, {"name": "prompt_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 770593700, "num_examples": 101397}], "download_size": 437664216, "dataset_size": 770593700}} | 2023-11-01T10:28:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "orca-lite"
More Information needed | [
"# Dataset Card for \"orca-lite\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"orca-lite\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"orca-lite\"\n\nMore Information needed"
]
|
a4a4944beb233ec00cd54ab4012ab6f77de2e444 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | rusheeliyer/german-courts | [
"region:us"
]
| 2023-11-01T10:46:49+00:00 | {"configs": [{"config_name": "bundesfinanzhof", "data_files": [{"split": "train", "path": "data/Bundesfinanzhof_train.csv"}, {"split": "test", "path": "data/Bundesfinanzhof_test.csv"}, {"split": "validation", "path": "data/Bundesfinanzhof_val.csv"}]}, {"config_name": "bundesgerichtshof", "data_files": [{"split": "train", "path": "data/Bundesgerichtshof_train.csv"}, {"split": "test", "path": "data/Bundesgerichtshof_test.csv"}, {"split": "validation", "path": "data/Bundesgerichtshof_val.csv"}]}, {"config_name": "bundesarbeitsgericht", "data_files": [{"split": "train", "path": "data/Bundesarbeitsgericht_train.csv"}, {"split": "test", "path": "data/Bundesarbeitsgericht_test.csv"}, {"split": "validation", "path": "data/Bundesarbeitsgericht_val.csv"}]}, {"config_name": "bundessozialgericht", "data_files": [{"split": "train", "path": "data/Bundessozialgericht_train.csv"}, {"split": "test", "path": "data/Bundessozialgericht_test.csv"}, {"split": "validation", "path": "data/Bundessozialgericht_val.csv"}]}, {"config_name": "bundesverwaltungsgericht", "data_files": [{"split": "train", "path": "data/Bundesverwaltungsgericht_train.csv"}, {"split": "test", "path": "data/Bundesverwaltungsgericht_test.csv"}, {"split": "validation", "path": "data/Bundesverwaltungsgericht_val.csv"}]}, {"config_name": "bundesverfassungsgericht", "data_files": [{"split": "train", "path": "data/Bundesverfassungsgericht_train.csv"}, {"split": "test", "path": "data/Bundesverfassungsgericht_test.csv"}, {"split": "validation", "path": "data/Bundesverfassungsgericht_val.csv"}]}]} | 2023-12-26T08:25:31+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
ce4be3814b8eca1a4c0fa85f983625565f887088 | A list of Chittagonian Dialect of Bangla vulgar words
If you use Vulgar Lexicon dataset, please cite the following paper:
```
@Article{app132111875,
AUTHOR = {Mahmud, Tanjim and Ptaszynski, Michal and Masui, Fumito},
TITLE = {Automatic Vulgar Word Extraction Method with Application to Vulgar Remark Detection in Chittagonian Dialect of Bangla},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {21},
ARTICLE-NUMBER = {11875},
URL = {https://www.mdpi.com/2076-3417/13/21/11875},
ISSN = {2076-3417},
DOI = {10.3390/app132111875}
}
``` | kit-nlp/Vulgar_Lexicon_of_Chittagonian_Dialect_of_Bangla_or_Bengali | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:apache-2.0",
"doi:10.57967/hf/1296",
"region:us"
]
| 2023-11-01T10:54:54+00:00 | {"language": ["bn"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]} | 2023-11-01T11:12:50+00:00 | []
| [
"bn"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1296 #region-us
| A list of Chittagonian Dialect of Bangla vulgar words
If you use Vulgar Lexicon dataset, please cite the following paper:
| []
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1296 #region-us \n"
]
| [
54
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1296 #region-us \n"
]
|
d59364ad5206313973d22fc1b1be9e3305e67523 | This is based on: https://huggingface.co/datasets/jeremyc/Alpaca-Lora-GPT4-Swedish
I've done extensive cleaning (but I'm not yet done).
This includes:
* Purging erroneous and sometimes offensive generations by the translator
* Fixing code instances up to row 10300. All code was botched. There may still be some HTML instances to fix, but at least all Python should be valid.
| neph1/Alpaca-Lora-GPT4-Swedish-Refined | [
"language:sv",
"region:us"
]
| 2023-11-01T11:02:09+00:00 | {"language": ["sv"]} | 2023-11-06T20:46:53+00:00 | []
| [
"sv"
]
| TAGS
#language-Swedish #region-us
| This is based on: URL
I've done extensive cleaning (but I'm not yet done).
This includes:
Purging erroneous and sometimes offensive generations by the translator
Fixing code instances up to row 10300. All code was botched. There may still be some html instances to fix, but at least all python should be valid.
| []
| [
"TAGS\n#language-Swedish #region-us \n"
]
| [
12
]
| [
"passage: TAGS\n#language-Swedish #region-us \n"
]
|
9adca2eceffef9662731cd9b789948c9b62f4f40 | A list of Chittagonian Dialect of Bangla vulgar words
If you use Vulgar Lexicon dataset, please cite the following paper:
```
@Article{app132111875,
AUTHOR = {Mahmud, Tanjim and Ptaszynski, Michal and Masui, Fumito},
TITLE = {Automatic Vulgar Word Extraction Method with Application to Vulgar Remark Detection in Chittagonian Dialect of Bangla},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {21},
ARTICLE-NUMBER = {11875},
URL = {https://www.mdpi.com/2076-3417/13/21/11875},
ISSN = {2076-3417},
DOI = {10.3390/app132111875}
}
``` | TanjimKIT/Vulgar_Lexicon_of_Chittagonian_Dialect_of_Bangla_or_Bengali | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:apache-2.0",
"doi:10.57967/hf/1297",
"region:us"
]
| 2023-11-01T11:08:04+00:00 | {"language": ["bn"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]} | 2023-11-01T11:15:31+00:00 | []
| [
"bn"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1297 #region-us
| A list of Chittagonian Dialect of Bangla vulgar words
If you use Vulgar Lexicon dataset, please cite the following paper:
| []
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1297 #region-us \n"
]
| [
54
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Bengali #license-apache-2.0 #doi-10.57967/hf/1297 #region-us \n"
]
|
848e9717469e258417bd82ba80ee34bfd3a2b72c | # Dataset Card for "LlamaPixely"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | LucidBrains/LlamaPixely | [
"region:us"
]
| 2023-11-01T11:16:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3406311253, "num_examples": 319936}], "download_size": 430672710, "dataset_size": 3406311253}} | 2023-11-01T11:17:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "LlamaPixely"
More Information needed | [
"# Dataset Card for \"LlamaPixely\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"LlamaPixely\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"LlamaPixely\"\n\nMore Information needed"
]
|
515c4d45eb6cf5107094bf2ae0459a6e1300debb | * 2023.11.30更新:增加来自Arxiv的英文论文数据,数据形式和中文论文相同
# 长论文+多任务数据集
* 中文论文是来自知网的论文数据,版权受限,不能直接公开。下载后请勿上传到公开场合。
* QA列中包含对应于此论文的多个问答对
* 此处包含的论文均为长论文,正文大于16000字。
* 为满足指令微调的数据多样性,每条数据包含一篇中文论文以及对应的2-7种任务,任务类型包括:
1. 基于参考文本的知识问答
2. 全文总结
3. 段落总结
4. 选择题
5. 判断题
6. 数学计算
7. 写代码 | yuyijiong/LongPaper_multitask | [
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-11-01T11:44:09+00:00 | {"language": ["zh", "en"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"]} | 2023-12-04T11:40:39+00:00 | []
| [
"zh",
"en"
]
| TAGS
#size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-4.0 #region-us
| * 2023.11.30更新:增加来自Arxiv的英文论文数据,数据形式和中文论文相同
# 长论文+多任务数据集
* 中文论文是来自知网的论文数据,版权受限,不能直接公开。下载后请勿上传到公开场合。
* QA列中包含对应于此论文的多个问答对
* 此处包含的论文均为长论文,正文大于16000字。
* 为满足指令微调的数据多样性,每条数据包含一篇中文论文以及对应的2-7种任务,任务类型包括:
1. 基于参考文本的知识问答
2. 全文总结
3. 段落总结
4. 选择题
5. 判断题
6. 数学计算
7. 写代码 | [
"# 长论文+多任务数据集\n* 中文论文是来自知网的论文数据,版权受限,不能直接公开。下载后请勿上传到公开场合。\n* QA列中包含对应于此论文的多个问答对\n* 此处包含的论文均为长论文,正文大于16000字。\n* 为满足指令微调的数据多样性,每条数据包含一篇中文论文以及对应的2-7种任务,任务类型包括:\n 1. 基于参考文本的知识问答\n 2. 全文总结\n 3. 段落总结\n 4. 选择题\n 5. 判断题\n 6. 数学计算\n 7. 写代码"
]
| [
"TAGS\n#size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-4.0 #region-us \n",
"# 长论文+多任务数据集\n* 中文论文是来自知网的论文数据,版权受限,不能直接公开。下载后请勿上传到公开场合。\n* QA列中包含对应于此论文的多个问答对\n* 此处包含的论文均为长论文,正文大于16000字。\n* 为满足指令微调的数据多样性,每条数据包含一篇中文论文以及对应的2-7种任务,任务类型包括:\n 1. 基于参考文本的知识问答\n 2. 全文总结\n 3. 段落总结\n 4. 选择题\n 5. 判断题\n 6. 数学计算\n 7. 写代码"
]
| [
38,
136
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-4.0 #region-us \n# 长论文+多任务数据集\n* 中文论文是来自知网的论文数据,版权受限,不能直接公开。下载后请勿上传到公开场合。\n* QA列中包含对应于此论文的多个问答对\n* 此处包含的论文均为长论文,正文大于16000字。\n* 为满足指令微调的数据多样性,每条数据包含一篇中文论文以及对应的2-7种任务,任务类型包括:\n 1. 基于参考文本的知识问答\n 2. 全文总结\n 3. 段落总结\n 4. 选择题\n 5. 判断题\n 6. 数学计算\n 7. 写代码"
]
|
0ce79ec7f09ca574a51b0f26c665e0dabfb9fa7a |
# 다음은 한국어로 번역한 내용입니다:
## AlpaCare GPT4 참조 출력에 대한 데이터셋 카드
### 이것은 K23/K23MiniMed의 평가 데이터셋입니다
### 데이터셋 세부 사항
- 데이터셋 출처 [선택 사항]
- 저장소: AlpaCare
- 논문: ALPACARE: 의료용도의 인스트럭션 튜닝된 대형 언어 모델
## 사용
### 직접 사용
이 참조 데이터를 사용하여 모델을 GPT4 응답과 비교 평가하십시오.
### 인용
```citation
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Dataset Card for AlpaCare GPT4 Reference Outputs on MedSci
This is an evaluation dataset for [K23/K23MiniMed](https://huggingface.co/pseudolab/K23_MiniMed)
## Dataset Details
### Dataset Description
- **Curated by:** [XZhang](https://github.com/XZhang97666)
- **Shared by [optional]:** [tonic](https://huggingface.co/tonic)
- **Language(s) (NLP):** EN
### Dataset Sources [optional]
- **Repository:** [AlpaCare](https://github.com/XZhang97666/AlpaCare)
- **Paper:** [ALPACARE:INSTRUCTION-TUNED LARGE LANGUAGE MODELS FOR MEDICAL APPLICATION](https://arxiv.org/pdf/2310.14558v1.pdf)
## Uses
### Direct Use
Use these reference data to evaluate your model against GPT4 responses.
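As a rough sketch of that workflow (not part of the original card): the snippet below assumes the repository id `pseudolab/MedSi`, a `train` split, and column names such as `instruction`/`output`, all of which should be verified against the actual schema; `my_model_generate` is a placeholder for your own inference call.
```python
# Rough evaluation sketch. Assumptions: repo id "pseudolab/MedSi", a "train"
# split, and "instruction"/"output" columns -- verify against the real schema.
from datasets import load_dataset

refs = load_dataset("pseudolab/MedSi", split="train")

def my_model_generate(prompt: str) -> str:
    # Placeholder for your own model's inference call.
    return "model answer for: " + prompt

for row in refs.select(range(3)):
    prompt = row["instruction"]      # assumed column name
    gpt4_reference = row["output"]   # assumed column name
    model_answer = my_model_generate(prompt)
    # Score model_answer against gpt4_reference with your preferred judge or metric.
```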
## Citation
```citation
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| pseudolab/MedSi | [
"language:en",
"license:mit",
"medical",
"arxiv:2310.14558",
"region:us"
]
| 2023-11-01T12:54:39+00:00 | {"language": ["en"], "license": "mit", "tags": ["medical"]} | 2023-11-01T22:48:42+00:00 | [
"2310.14558"
]
| [
"en"
]
| TAGS
#language-English #license-mit #medical #arxiv-2310.14558 #region-us
|
# 다음은 한국어로 번역한 내용입니다:
## AlpaCare GPT4 참조 출력에 대한 데이터셋 카드
### 이것은 K23/K23MiniMed의 평가 데이터셋입니다
### 데이터셋 세부 사항
- 데이터셋 출처 [선택 사항]
- 저장소: AlpaCare
- 논문: ALPACARE: 의료용도의 인스트럭션 튜닝된 대형 언어 모델
## 사용
### 직접 사용
이 참조 데이터를 사용하여 모델을 GPT4 응답과 비교 평가하십시오.
### 인용
# Dataset Card for AlpaCare GPT4 Reference Outputs on MedSci
This is an evaluation dataset for K23/K23MiniMed
## Dataset Details
### Dataset Description
- Curated by: XZhang
- Shared by [optional]: tonic
- Language(s) (NLP): EN
### Dataset Sources [optional]
- Repository: AlpaCare
- Paper: ALPACARE:INSTRUCTION-TUNED LARGE LANGUAGE MODELS FOR MEDICAL APPLICATION
## Uses
### Direct Use
Use these reference data to evaluate your model against GPT4 responses.
| [
"# 다음은 한국어로 번역한 내용입니다:",
"## AlpaCare GPT4 참조 출력에 대한 데이터셋 카드",
"### 이것은 K23/K23MiniMed의 평가 데이터셋입니다",
"### 데이터셋 세부 사항 \n- 데이터셋 출처 [선택 사항] \n- 저장소: AlpaCare \n- 논문: ALPACARE: 의료용도의 인스트럭션 튜닝된 대형 언어 모델",
"## 사용",
"### 직접 사용 \n이 참조 데이터를 사용하여 모델을 GPT4 응답과 비교 평가하십시오.",
"### 인용",
"# Dataset Card for AlpaCare GPT4 Reference Outputs on MedSci\n\nThis is an evaluation dataset for K23/K23MiniMed",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: XZhang\n- Shared by [optional]: tonic\n- Language(s) (NLP): EN",
"### Dataset Sources [optional]\n\n- Repository: AlpaCare \n- Paper: ALPACARE:INSTRUCTION-TUNED LARGE LANGUAGE MODELS FOR MEDICAL APPLICATION",
"## Uses",
"### Direct Use\n\nUse these reference data to evaluate your model against GPT4 responses."
]
| [
"TAGS\n#language-English #license-mit #medical #arxiv-2310.14558 #region-us \n",
"# 다음은 한국어로 번역한 내용입니다:",
"## AlpaCare GPT4 참조 출력에 대한 데이터셋 카드",
"### 이것은 K23/K23MiniMed의 평가 데이터셋입니다",
"### 데이터셋 세부 사항 \n- 데이터셋 출처 [선택 사항] \n- 저장소: AlpaCare \n- 논문: ALPACARE: 의료용도의 인스트럭션 튜닝된 대형 언어 모델",
"## 사용",
"### 직접 사용 \n이 참조 데이터를 사용하여 모델을 GPT4 응답과 비교 평가하십시오.",
"### 인용",
"# Dataset Card for AlpaCare GPT4 Reference Outputs on MedSci\n\nThis is an evaluation dataset for K23/K23MiniMed",
"## Dataset Details",
"### Dataset Description\n\n- Curated by: XZhang\n- Shared by [optional]: tonic\n- Language(s) (NLP): EN",
"### Dataset Sources [optional]\n\n- Repository: AlpaCare \n- Paper: ALPACARE:INSTRUCTION-TUNED LARGE LANGUAGE MODELS FOR MEDICAL APPLICATION",
"## Uses",
"### Direct Use\n\nUse these reference data to evaluate your model against GPT4 responses."
]
| [
27,
10,
15,
15,
45,
2,
19,
4,
34,
4,
34,
47,
3,
20
]
| [
"passage: TAGS\n#language-English #license-mit #medical #arxiv-2310.14558 #region-us \n# 다음은 한국어로 번역한 내용입니다:## AlpaCare GPT4 참조 출력에 대한 데이터셋 카드### 이것은 K23/K23MiniMed의 평가 데이터셋입니다### 데이터셋 세부 사항 \n- 데이터셋 출처 [선택 사항] \n- 저장소: AlpaCare \n- 논문: ALPACARE: 의료용도의 인스트럭션 튜닝된 대형 언어 모델## 사용### 직접 사용 \n이 참조 데이터를 사용하여 모델을 GPT4 응답과 비교 평가하십시오.### 인용# Dataset Card for AlpaCare GPT4 Reference Outputs on MedSci\n\nThis is an evaluation dataset for K23/K23MiniMed## Dataset Details### Dataset Description\n\n- Curated by: XZhang\n- Shared by [optional]: tonic\n- Language(s) (NLP): EN### Dataset Sources [optional]\n\n- Repository: AlpaCare \n- Paper: ALPACARE:INSTRUCTION-TUNED LARGE LANGUAGE MODELS FOR MEDICAL APPLICATION## Uses### Direct Use\n\nUse these reference data to evaluate your model against GPT4 responses."
]
|
4155de885d2bc2e5369e3b0bf750423436c7c650 | # Dataset Card for "rsna_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Phaedrus/rsna_fixed | [
"region:us"
]
| 2023-11-01T13:11:23+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label1", "dtype": "image"}, {"name": "label2", "dtype": "image"}, {"name": "label3", "dtype": "image"}, {"name": "label4", "dtype": "image"}, {"name": "label5", "dtype": "image"}, {"name": "label6", "dtype": "image"}, {"name": "label7", "dtype": "image"}, {"name": "label8", "dtype": "image"}, {"name": "label9", "dtype": "image"}, {"name": "label10", "dtype": "image"}, {"name": "label11", "dtype": "image"}, {"name": "label12", "dtype": "image"}, {"name": "label13", "dtype": "image"}, {"name": "label14", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 29579297463.0, "num_examples": 2000}], "download_size": 1123982789, "dataset_size": 29579297463.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T13:25:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rsna_fixed"
More Information needed | [
"# Dataset Card for \"rsna_fixed\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rsna_fixed\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rsna_fixed\"\n\nMore Information needed"
]
|
13cf748d54933a7572979f4aa845aec479c1e0f2 | # Dataset Card for "math_formulas"
Mathematical dataset containing formulas based on the [AMPS](https://drive.google.com/file/d/1hQsua3TkpEmcJD_UWQx8dmNdEZPyxw23) Khan dataset and the [ARQMath](https://drive.google.com/drive/folders/1YekTVvfmYKZ8I5uiUMbs21G2mKwF9IAm) dataset V1.3. Based on the retrieved LaTeX formulas, more equivalent versions have been generated by applying randomized LaTeX printing with this [SymPy fork](https://drive.google.com/drive/folders/1YekTVvfmYKZ8I5uiUMbs21G2mKwF9IAm). The formulas are intended to be well suited for MLM. For instance, masking a formula like `(a+b)^2 = a^2 + 2ab + b^2` makes sense (e.g., `(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2` -> the masked tokens are deducible from the context); in contrast, formulas such as `f(x) = 3x+1` are not (e.g., `[MASK](x) = 3x[MASK]1` -> the [MASK] tokens are ambiguous). | ddrg/math_formulas | [
"region:us"
]
| 2023-11-01T13:36:35+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 225647910.0, "num_examples": 2886810}, {"name": "test", "num_bytes": 23848817.0, "num_examples": 311298}], "download_size": 131762427, "dataset_size": 249496727.0}} | 2023-11-15T19:46:30+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "math_formulas"
Mathematical dataset containing formulas based on the AMPS Khan dataset and the ARQMath dataset V1.3. Based on the retrieved LaTeX formulas, more equivalent versions have been generated by applying randomized LaTeX printing with this SymPy fork. The formulas are intended to be well applicable for MLM. For instance, a masking for a formula like '(a+b)^2 = a^2 + 2ab + b^2' makes sense (e.g., '(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2' -> masked tokens are deducable by the context), in contrast, formulas such as 'f(x) = 3x+1' are not (e.g., 'MASK = 3x[MASK]1' -> [MASK] tokens are ambigious). | [
"# Dataset Card for \"math_formulas\"\n\nMathematical dataset containing formulas based on the AMPS Khan dataset and the ARQMath dataset V1.3. Based on the retrieved LaTeX formulas, more equivalent versions have been generated by applying randomized LaTeX printing with this SymPy fork. The formulas are intended to be well applicable for MLM. For instance, a masking for a formula like '(a+b)^2 = a^2 + 2ab + b^2' makes sense (e.g., '(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2' -> masked tokens are deducable by the context), in contrast, formulas such as 'f(x) = 3x+1' are not (e.g., 'MASK = 3x[MASK]1' -> [MASK] tokens are ambigious)."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"math_formulas\"\n\nMathematical dataset containing formulas based on the AMPS Khan dataset and the ARQMath dataset V1.3. Based on the retrieved LaTeX formulas, more equivalent versions have been generated by applying randomized LaTeX printing with this SymPy fork. The formulas are intended to be well applicable for MLM. For instance, a masking for a formula like '(a+b)^2 = a^2 + 2ab + b^2' makes sense (e.g., '(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2' -> masked tokens are deducable by the context), in contrast, formulas such as 'f(x) = 3x+1' are not (e.g., 'MASK = 3x[MASK]1' -> [MASK] tokens are ambigious)."
]
| [
6,
221
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"math_formulas\"\n\nMathematical dataset containing formulas based on the AMPS Khan dataset and the ARQMath dataset V1.3. Based on the retrieved LaTeX formulas, more equivalent versions have been generated by applying randomized LaTeX printing with this SymPy fork. The formulas are intended to be well applicable for MLM. For instance, a masking for a formula like '(a+b)^2 = a^2 + 2ab + b^2' makes sense (e.g., '(a+[MASK])^2 = a^2 + [MASK]ab + b[MASK]2' -> masked tokens are deducable by the context), in contrast, formulas such as 'f(x) = 3x+1' are not (e.g., 'MASK = 3x[MASK]1' -> [MASK] tokens are ambigious)."
]
|
4549b4d555638dc6f856029880a195b3863c03bb | # Dataset Card for "donutTOPSOLIDTIMCOD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aminlouhichi/donutTOPSOLIDTIMCOD | [
"region:us"
]
| 2023-11-01T13:47:29+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9934958.0, "num_examples": 46}, {"name": "validation", "num_bytes": 9934958.0, "num_examples": 46}, {"name": "test", "num_bytes": 9934958.0, "num_examples": 46}], "download_size": 27397953, "dataset_size": 29804874.0}} | 2023-11-01T13:47:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "donutTOPSOLIDTIMCOD"
More Information needed | [
"# Dataset Card for \"donutTOPSOLIDTIMCOD\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"donutTOPSOLIDTIMCOD\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"donutTOPSOLIDTIMCOD\"\n\nMore Information needed"
]
|
6b228d14bf14d26eeaadef2ea6abbf15e9f89300 | # Dataset Card for "dreambooth-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cjayesh/dreambooth-images | [
"region:us"
]
| 2023-11-01T13:56:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 136191.0, "num_examples": 12}], "download_size": 127724, "dataset_size": 136191.0}} | 2023-11-27T10:55:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dreambooth-images"
More Information needed | [
"# Dataset Card for \"dreambooth-images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-images\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dreambooth-images\"\n\nMore Information needed"
]
|
16992780872baf444fb95f041d84085b0361917c | # Dataset Card for "donutTOPSOLIDTIMCOD2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aminlouhichi/donutTOPSOLIDTIMCOD2 | [
"region:us"
]
| 2023-11-01T14:05:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9934958.0, "num_examples": 46}, {"name": "validation", "num_bytes": 9934958.0, "num_examples": 46}, {"name": "test", "num_bytes": 9934958.0, "num_examples": 46}], "download_size": 27390966, "dataset_size": 29804874.0}} | 2023-11-01T14:05:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "donutTOPSOLIDTIMCOD2"
More Information needed | [
"# Dataset Card for \"donutTOPSOLIDTIMCOD2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"donutTOPSOLIDTIMCOD2\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"donutTOPSOLIDTIMCOD2\"\n\nMore Information needed"
]
|
49c1ec52de4978543113ec8d3c4313b0d4d9fba0 |
## OpenOrca-Ko-v3
1. NIV // approx. 1,500 samples
2. FLAN // approx. 9,000 samples
3. T0 // approx. 6,000 samples
4. CoT // approx. 2,000 samples
> Dataset composition
## Translation
Using DeepL Pro API. Thanks.
---
>Below is original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
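As a minimal sketch of that streaming pattern, assuming the `Open-Orca/OpenOrca` repository id, a `train` split, and the field names described above (none of which are verified here; the same pattern applies to this Korean subset):

```python
from datasets import load_dataset

# Stream the dataset rather than downloading it in full; the files are large.
ds = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)

# Inspect the first datapoint and its fields without materializing the whole set.
first = next(iter(ds))
print(sorted(first.keys()))

# The 'id' prefix ('niv', 't0', 'cot', 'flan') identifies the source submix,
# so the stream can be sliced by source on the fly.
cot_only = ds.filter(lambda ex: ex["id"].startswith("cot"))
```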
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` | kyujinpy/OpenOrca-ko-v3 | [
"license:cc-by-nc-4.0",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
]
| 2023-11-01T14:19:51+00:00 | {"license": "cc-by-nc-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41612250, "num_examples": 19473}], "download_size": 21614684, "dataset_size": 41612250}} | 2023-11-01T14:21:06+00:00 | [
"2306.02707",
"2301.13688"
]
| []
| TAGS
#license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us
|
## OpenOrca-Ko-v3
1. NIV // about 1,500 examples
2. FLAN // about 9,000 examples
3. T0 // about 6,000 examples
4. CoT // about 2,000 examples
> Dataset composition
## Translation
Using DeepL Pro API. Thanks.
---
>Below is original dataset card
## Table of Contents
- Dataset Summary
- Dataset Attribution
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Dataset Use
- Use Cases
- Usage Caveats
- Getting Started
<p><h1> The OpenOrca Dataset! </h1></p>
!OpenOrca Logo
<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our second model, highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
OpenOrca-Preview1-13B
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented FLAN Collection data.
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
URL:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
URL URL
Want to visualize our full dataset? Check out our Nomic Atlas Map.
<img src="URL alt="Atlas Nomic Dataset Map" width="400" height="400" />
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.
These are referenced by the official FLAN Collection repo as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
| [
"## OpenOrca-Ko-v3\n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성",
"## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card",
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!",
"# Official Models",
"## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.",
"## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.",
"## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.",
"## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>",
"# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
]
| [
"TAGS\n#license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n",
"## OpenOrca-Ko-v3\n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성",
"## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card",
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!",
"# Official Models",
"## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.",
"## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.",
"## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.",
"## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>",
"# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
]
| [
34,
41,
20,
199,
4,
48,
98,
67,
95,
122,
233,
86,
25,
19,
67,
153,
24,
18,
146,
235,
16,
46,
70,
66
]
| [
"passage: TAGS\n#license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n## OpenOrca-Ko-v3\n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!# Official Models## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.",
"passage: ## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a># Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a># Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"passage: # Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a># Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a># Dataset Creation\n\n<a name=\"curation-rationale\"></a>## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>"
]
|
f11d50bd20aeabd8c9fb5390c99aac1eb33855b4 | # Dataset Card for "id_instructions-id-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sheokedech/id_instructions-id-small | [
"region:us"
]
| 2023-11-01T14:21:43+00:00 | {"dataset_info": {"features": [], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T14:44:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "id_instructions-id-small"
More Information needed | [
"# Dataset Card for \"id_instructions-id-small\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"id_instructions-id-small\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"id_instructions-id-small\"\n\nMore Information needed"
]
|
2f98169d73391cc1e84a3cd5580bf0c0f10dc1fe | # Dataset Card for "id-ibnt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sheokedech/id-ibnt | [
"region:us"
]
| 2023-11-01T14:57:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2209462862.61, "num_examples": 4222}, {"name": "test", "num_bytes": 856481987.936, "num_examples": 1512}], "download_size": 1900578091, "dataset_size": 3065944850.546}} | 2023-11-01T15:14:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "id-ibnt"
More Information needed | [
"# Dataset Card for \"id-ibnt\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"id-ibnt\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"id-ibnt\"\n\nMore Information needed"
]
|
cece808f645f40ba04545d669806ac8ccfb2931e | # Dataset Card for "semeval-task-8-a-mono"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-a-mono | [
"region:us"
]
| 2023-11-01T15:10:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 237254533, "num_examples": 83829}, {"name": "val", "num_bytes": 101985332, "num_examples": 35928}, {"name": "test", "num_bytes": 10543757, "num_examples": 5000}], "download_size": 201649583, "dataset_size": 349783622}} | 2023-11-01T15:10:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "semeval-task-8-a-mono"
More Information needed | [
"# Dataset Card for \"semeval-task-8-a-mono\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-a-mono\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-a-mono\"\n\nMore Information needed"
]
|
e2cbd1df7c37fa4b3b665b1ec42541380baad70d | # Dataset Card for "semeval-task-8-a-multi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-a-multi | [
"region:us"
]
| 2023-11-01T15:14:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 317913694, "num_examples": 120691}, {"name": "val", "num_bytes": 134829282, "num_examples": 51726}, {"name": "test", "num_bytes": 8790338, "num_examples": 4000}], "download_size": 265441677, "dataset_size": 461533314}} | 2023-11-01T15:14:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "semeval-task-8-a-multi"
More Information needed | [
"# Dataset Card for \"semeval-task-8-a-multi\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-a-multi\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-a-multi\"\n\nMore Information needed"
]
|
8b933d6cbade79e328739bc7cc6c68fa02cc71d2 | # Modified Coco Dataset Files
# Required dependencies
```
OpenCV (cv2):
pip install opencv-python
```
# img_data.psv
Extract of the coco dataset containing the following labels: ```["airplane", "backpack", "cell phone", "handbag", "suitcase", "knife", "laptop", "car"]```
```
Structured as follows:
| Field | Description |
| --------------- | --------------------------------------------------------------------------------------------------- |
| file_name | Name of image file (.png) |
| height | Image height prior to padding |
| width | Image width prior to padding |
| annotations | Array of boundary box array, label pairs. Bbox arrays are of the form [x_min, y_min, width, height] |
1.09k rows
```
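For orientation, here is a rough sketch of reading the file and drawing one image's boxes with OpenCV. The pipe delimiter and the JSON encoding of the `annotations` column are assumptions about the file layout, not documented facts:

```python
import json

import cv2
import pandas as pd

# Read the pipe-separated annotation file (delimiter assumed from the .psv extension).
df = pd.read_csv("img_data.psv", sep="|")

row = df.iloc[0]
img = cv2.imread(f"data/{row['file_name']}")

# Each annotation is assumed to be a ([x_min, y_min, width, height], label) pair.
for bbox, label in json.loads(row["annotations"]):
    x, y, w, h = map(int, bbox)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(img, str(label), (x, max(y - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("preview.png", img)
```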
# /data (folder)
This directory contains a selection of zero-padded COCO images that correspond to img_data.parquet.
# display_boundary.py
Mini GUI for viewing images with their boundary boxes. You don't need to pay attention to how it works; just do the following:
Ensure you're in the same working directory as display_boundary.py
```
cd mini_coco_dataset
```
```
python display_boundary.py
```
- It takes an image name as input (x.png); to find image names, look in the data folder.
If you have any questions or issues, feel free to keep them to yourself. | iix/coco_image_extract | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:0.001M<n<0.0011M",
"language:en",
"license:mit",
"code",
"region:us"
]
| 2023-11-01T15:20:03+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["0.001M<n<0.0011M"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "*", "tags": ["code"]} | 2023-11-02T12:31:54+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-text-generation #size_categories-0.001M<n<0.0011M #language-English #license-mit #code #region-us
| # Modified Coco Dataset Files
# Required dependencies
# img_data.psv
Extract of the coco dataset containing the following labels:
# /data (folder)
This directory contains a selection of zero-padded COCO images that correspond to img_data.parquet.
# display_boundary.py
Mini gui for viewing images with their boundary boxes, don't need to pay attention to how it works, just do the following:
Ensure you're in the same working directory as display_boundary.py
- It takes an image name as input (x.png), to find image names look in the data folder.
If you have any questions or issues, feel free to keep them to yourself. | [
"# Modified Coco Dataset Files",
"# Required dependencies",
"# img_data.psv\n\nExtract of the coco dataset containing the following labels:",
"# /data (folder)\n\nThis directory contains a selection of zero-padded COCO images that correspond to img_data.parquet.",
"# display_boundary.py\n\nMini gui for viewing images with their boundary boxes, don't need to pay attention to how it works, just do the following:\n\nEnsure you're in the same working directory as display_boundary.py\n\n\n\n\n- It takes an image name as input (x.png), to find image names look in the data folder.\n\nIf you have any questions or issues, feel free to keep them to yourself."
]
| [
"TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-0.001M<n<0.0011M #language-English #license-mit #code #region-us \n",
"# Modified Coco Dataset Files",
"# Required dependencies",
"# img_data.psv\n\nExtract of the coco dataset containing the following labels:",
"# /data (folder)\n\nThis directory contains a selection of zero-padded COCO images that correspond to img_data.parquet.",
"# display_boundary.py\n\nMini gui for viewing images with their boundary boxes, don't need to pay attention to how it works, just do the following:\n\nEnsure you're in the same working directory as display_boundary.py\n\n\n\n\n- It takes an image name as input (x.png), to find image names look in the data folder.\n\nIf you have any questions or issues, feel free to keep them to yourself."
]
| [
54,
8,
6,
22,
33,
96
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-0.001M<n<0.0011M #language-English #license-mit #code #region-us \n# Modified Coco Dataset Files# Required dependencies# img_data.psv\n\nExtract of the coco dataset containing the following labels:# /data (folder)\n\nThis directory contains a selection of zero-padded COCO images that correspond to img_data.parquet.# display_boundary.py\n\nMini gui for viewing images with their boundary boxes, don't need to pay attention to how it works, just do the following:\n\nEnsure you're in the same working directory as display_boundary.py\n\n\n\n\n- It takes an image name as input (x.png), to find image names look in the data folder.\n\nIf you have any questions or issues, feel free to keep them to yourself."
]
|
11c0b285314356d3a525b7c63526973c3b72e17f | # Dataset Card for "tldr_news_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/tldr_news_short | [
"region:us"
]
| 2023-11-01T15:47:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105707, "num_examples": 502}, {"name": "test", "num_bytes": 11219, "num_examples": 51}], "download_size": 78561, "dataset_size": 116926}} | 2023-11-01T15:47:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tldr_news_short"
More Information needed | [
"# Dataset Card for \"tldr_news_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tldr_news_short\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tldr_news_short\"\n\nMore Information needed"
]
|
d62acaa490d8f325c94b283e704134941277a14a | # Dataset Card for "samsum_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/samsum_short | [
"region:us"
]
| 2023-11-01T15:52:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 543039, "num_examples": 2916}, {"name": "validation", "num_bytes": 32458, "num_examples": 171}, {"name": "test", "num_bytes": 28165, "num_examples": 150}], "download_size": 417401, "dataset_size": 603662}} | 2023-11-01T15:52:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "samsum_short"
More Information needed | [
"# Dataset Card for \"samsum_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"samsum_short\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"samsum_short\"\n\nMore Information needed"
]
|
6ce4c1970cc874314cca7204197d2d58c3f35363 | # Dataset Card for "igor_link_dialogues_rendered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | iashchak/igor_link_dialogues_rendered | [
"region:us"
]
| 2023-11-01T16:33:33+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "struct": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 29950800, "num_examples": 31516}], "download_size": 15605747, "dataset_size": 29950800}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T16:36:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "igor_link_dialogues_rendered"
More Information needed | [
"# Dataset Card for \"igor_link_dialogues_rendered\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"igor_link_dialogues_rendered\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"igor_link_dialogues_rendered\"\n\nMore Information needed"
]
|
0ab0729172aff403cfa7a265758188c524463723 | # Dataset Card for "fm_classifier-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | coastalcph/fm_classifier-1-1 | [
"region:us"
]
| 2023-11-01T16:46:53+00:00 | {"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "answer", "list": [{"name": "wikidata_id", "dtype": "string"}, {"name": "name", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "relation", "dtype": "string"}, {"name": "date", "dtype": "int64"}, {"name": "type", "dtype": "string"}, {"name": "is_mutable", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1095051.1775751072, "num_examples": 6230}, {"name": "validation", "num_bytes": 995400.6136754095, "num_examples": 5783}, {"name": "test", "num_bytes": 858612.5253924284, "num_examples": 4360}], "download_size": 1062146, "dataset_size": 2949064.316642945}} | 2023-11-04T10:39:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fm_classifier-1-1"
More Information needed | [
"# Dataset Card for \"fm_classifier-1-1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fm_classifier-1-1\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fm_classifier-1-1\"\n\nMore Information needed"
]
|
55e4325cfcbb00f7258aec5aa58f20ea642d4be0 | # Dataset Card for "fm_classifier-1-n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | coastalcph/fm_classifier-1-n | [
"region:us"
]
| 2023-11-01T16:47:12+00:00 | {"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "answer", "list": [{"name": "wikidata_id", "dtype": "string"}, {"name": "name", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "relation", "dtype": "string"}, {"name": "date", "dtype": "int64"}, {"name": "type", "dtype": "string"}, {"name": "is_mutable", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1199458.9463519314, "num_examples": 6824}, {"name": "validation", "num_bytes": 1017432.6521589737, "num_examples": 5911}, {"name": "test", "num_bytes": 838131.8596491228, "num_examples": 4256}], "download_size": 1322431, "dataset_size": 3055023.458160028}} | 2023-11-04T10:39:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fm_classifier-1-n"
More Information needed | [
"# Dataset Card for \"fm_classifier-1-n\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fm_classifier-1-n\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fm_classifier-1-n\"\n\nMore Information needed"
]
|
67bea3a31bf14a7eb637e0fc572b87b312312cfe | # Dataset Card for "gptspeech_amazon_google_tencent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tsuyuan/gptspeech_amazon_google_tencent | [
"region:us"
]
| 2023-11-01T17:11:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "decoder_input_ids", "sequence": {"sequence": "int64"}}, {"name": "decoder_attention_mask", "sequence": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 526153002921, "num_examples": 6675459}, {"name": "eval", "num_bytes": 13396628973, "num_examples": 169967}], "download_size": 16698860181, "dataset_size": 539549631894}} | 2023-11-01T19:03:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptspeech_amazon_google_tencent"
More Information needed | [
"# Dataset Card for \"gptspeech_amazon_google_tencent\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptspeech_amazon_google_tencent\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptspeech_amazon_google_tencent\"\n\nMore Information needed"
]
|
a235e44d47ac404de2de266f84fd8c2cf636a7f9 | # Dataset Card for "text_entailment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Weni/text_entailment | [
"region:us"
]
| 2023-11-01T17:57:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "pt_br", "path": "data/pt_br-*"}, {"split": "en", "path": "data/en-*"}, {"split": "es", "path": "data/es-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "sentence A", "dtype": "string"}, {"name": "sentence B", "dtype": "string"}, {"name": "implication", "dtype": "string"}], "splits": [{"name": "pt_br", "num_bytes": 3913627, "num_examples": 9840}, {"name": "en", "num_bytes": 3985535, "num_examples": 9999}, {"name": "es", "num_bytes": 9807474, "num_examples": 10001}], "download_size": 5446931, "dataset_size": 17706636}} | 2023-11-01T18:30:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "text_entailment"
More Information needed | [
"# Dataset Card for \"text_entailment\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_entailment\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"text_entailment\"\n\nMore Information needed"
]
|
0179f3ad065e75f97e3f3aefa0879f41f64a2393 | # Dataset Card for "floor-plans-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ahmed167/floor-plans-dataset | [
"region:us"
]
| 2023-11-01T18:01:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1790707.0, "num_examples": 31}], "download_size": 1747568, "dataset_size": 1790707.0}} | 2023-11-01T18:02:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "floor-plans-dataset"
More Information needed | [
"# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
|
ae0a1a4c0375d90197c88c7fe7bca4670171bbee |
# Dataset Card for unarXive-en2ru
This dataset contains text excerpts from the [unarXive citation recommendation](https://huggingface.co/datasets/saier/unarXive_citrec) dataset along with their translations into Russian. The translations have been obtained using [OpenAI GPT-3.5-Turbo](https://platform.openai.com/). The dataset is intended for machine translation research.
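A minimal loading sketch (the `train` split name is an assumption; inspecting the columns avoids guessing the schema, which this card does not document):

```python
from datasets import load_dataset

# Split name is an assumption; inspect the loaded dataset to confirm the schema.
ds = load_dataset("waleko/unarXive-en2ru", split="train")
print(ds.column_names)
print(ds[0])  # one English/Russian excerpt pair
```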
| waleko/unarXive-en2ru | [
"task_categories:translation",
"annotations_creators:machine-generated",
"size_categories:10K<n<100K",
"source_datasets:saier/unarXive_citrec",
"language:en",
"language:ru",
"license:cc-by-sa-4.0",
"arXiv.org",
"arXiv",
"publication",
"paper",
"preprint",
"section",
"physics",
"mathematics",
"computer science",
"cs",
"machine translation",
"translation",
"region:us"
]
| 2023-11-01T18:06:30+00:00 | {"annotations_creators": ["machine-generated"], "language": ["en", "ru"], "license": ["cc-by-sa-4.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["saier/unarXive_citrec"], "task_categories": ["translation"], "tags": ["arXiv.org", "arXiv", "publication", "paper", "preprint", "section", "physics", "mathematics", "computer science", "cs", "machine translation", "translation"]} | 2023-11-19T10:39:48+00:00 | []
| [
"en",
"ru"
]
| TAGS
#task_categories-translation #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-saier/unarXive_citrec #language-English #language-Russian #license-cc-by-sa-4.0 #arXiv.org #arXiv #publication #paper #preprint #section #physics #mathematics #computer science #cs #machine translation #translation #region-us
|
# Dataset Card for unarXive-en2ru
This dataset contains text excerpts from the unarXive citation recommendation dataset along with their translations into Russian. The translations have been obtained using OpenAI GPT-3.5-Turbo. The dataset is intended for machine translation research.
| [
"# Dataset Card for unarXive-en2ru\n\nThis dataset contains text excerpts from the unarXive citation recommendation dataset along with their translations into Russian. The translations have been obtained using OpenAI GPT-3.5-Turbo. The dataset is intended for machine translation research."
]
| [
"TAGS\n#task_categories-translation #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-saier/unarXive_citrec #language-English #language-Russian #license-cc-by-sa-4.0 #arXiv.org #arXiv #publication #paper #preprint #section #physics #mathematics #computer science #cs #machine translation #translation #region-us \n",
"# Dataset Card for unarXive-en2ru\n\nThis dataset contains text excerpts from the unarXive citation recommendation dataset along with their translations into Russian. The translations have been obtained using OpenAI GPT-3.5-Turbo. The dataset is intended for machine translation research."
]
| [
116,
71
]
| [
"passage: TAGS\n#task_categories-translation #annotations_creators-machine-generated #size_categories-10K<n<100K #source_datasets-saier/unarXive_citrec #language-English #language-Russian #license-cc-by-sa-4.0 #arXiv.org #arXiv #publication #paper #preprint #section #physics #mathematics #computer science #cs #machine translation #translation #region-us \n# Dataset Card for unarXive-en2ru\n\nThis dataset contains text excerpts from the unarXive citation recommendation dataset along with their translations into Russian. The translations have been obtained using OpenAI GPT-3.5-Turbo. The dataset is intended for machine translation research."
]
|
5ec82820abf10a63923890cc46768d4fe850e3a1 | # Dataset Card for "floor-plans-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | OmarAmir2001/floor-plans-dataset | [
"region:us"
]
| 2023-11-01T18:11:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1790707.0, "num_examples": 31}], "download_size": 1747568, "dataset_size": 1790707.0}} | 2023-11-01T18:11:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "floor-plans-dataset"
More Information needed | [
"# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"floor-plans-dataset\"\n\nMore Information needed"
]
|
ed406fd8e5419ff8bf827ae80fadd2d148bcec24 | # Dataset Card for "apeiron-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | firiyuu77/apeiron-llama2-1k | [
"region:us"
]
| 2023-11-01T18:22:26+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 211437, "num_examples": 1000}], "download_size": 70724, "dataset_size": 211437}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T18:22:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "apeiron-llama2-1k"
More Information needed | [
"# Dataset Card for \"apeiron-llama2-1k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"apeiron-llama2-1k\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"apeiron-llama2-1k\"\n\nMore Information needed"
]
|
7c9fcf4a69fff5018713455eefc3b2d3135a1440 | # Dataset Card for "lj-speech-cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/lj-speech-cleaned | [
"region:us"
]
| 2023-11-01T18:23:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "sequence": "float64"}, {"name": "file", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5386821639, "num_examples": 4620}, {"name": "test", "num_bytes": 1949285925, "num_examples": 1680}], "download_size": 1810662801, "dataset_size": 7336107564}} | 2023-11-01T18:25:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lj-speech-cleaned"
More Information needed | [
"# Dataset Card for \"lj-speech-cleaned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lj-speech-cleaned\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lj-speech-cleaned\"\n\nMore Information needed"
]
|
e0c12db1e57027295aa98162b273025e31a78c45 | This is the dataset used to fine-tune (train) LLama2.
This dataset has the original questions (FAQs), with only 37 samples and 2 features (questions and answers).
This version of the dataset is in ENGLISH.
Another dataset was created to increase the number of samples by applying data augmentation (synthetic data). Check the augmented dataset in the Fraternitas repo | Fraternitas/ElektraGoFAQs-en | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"finance",
"region:us"
]
| 2023-11-01T18:30:13+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "tiny_FAQs", "tags": ["finance"]} | 2023-11-01T22:49:33+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #size_categories-n<1K #language-English #finance #region-us
| This is the dataset used to fine-tune (train) LLama2.
This dataset has the original questions (FAQs), with only 37 samples and 2 features (questions and answers).
This version of the dataset is in ENGLISH.
Another dataset was created to increase the number of samples by applying data augmentation (synthetic data). Check the augmented dataset in the Fraternitas repo | []
| [
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #finance #region-us \n"
]
| [
35
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #finance #region-us \n"
]
|
0a1683a4e680cd3732195d4f67acd4cbd440ff4b | # Dataset Card for "imdb-neg-to-pos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | amasand/imdb-neg-to-pos | [
"region:us"
]
| 2023-11-01T18:36:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "prompt_or_input_text", "dtype": "string"}, {"name": "target_or_reference_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8939708, "num_examples": 20000}, {"name": "test", "num_bytes": 2205759, "num_examples": 5000}, {"name": "eval", "num_bytes": 2261341, "num_examples": 5000}], "download_size": 8943325, "dataset_size": 13406808}} | 2023-11-01T18:49:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "imdb-neg-to-pos"
More Information needed | [
"# Dataset Card for \"imdb-neg-to-pos\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb-neg-to-pos\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb-neg-to-pos\"\n\nMore Information needed"
]
|
2b443d61c0d85c8de401cc27cf6579d5697b475a | # Dataset Card for "lj-inprogress"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/lj-inprogress | [
"region:us"
]
| 2023-11-01T18:45:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 22050}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3856520559.0, "num_examples": 13100}], "download_size": 3784764912, "dataset_size": 3856520559.0}} | 2023-11-01T18:49:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lj-inprogress"
More Information needed | [
"# Dataset Card for \"lj-inprogress\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lj-inprogress\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lj-inprogress\"\n\nMore Information needed"
]
|
aca19d681f48271b701e6ab1fb81721c03d8d5b4 | # Dataset Card for "lj-inprogress-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/lj-inprogress-2 | [
"region:us"
]
| 2023-11-01T18:52:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "sequence": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15192445537, "num_examples": 13100}], "download_size": 3747503561, "dataset_size": 15192445537}} | 2023-11-01T18:56:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lj-inprogress-2"
More Information needed | [
"# Dataset Card for \"lj-inprogress-2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lj-inprogress-2\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lj-inprogress-2\"\n\nMore Information needed"
]
|
62ffce64d2c67152dbf3eb32804e36d2713245a2 | ## Dataset Creation and Processing Overview
This dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.
### Data Loading and Initial Cleaning
- **Source**: Loaded from the Hugging Face dataset repository [bprateek/amazon_product_description](https://huggingface.co/datasets/bprateek/amazon_product_description).
- **Conversion to Pandas DataFrame**: For ease of data manipulation.
- **Null Value Removal**: Rows with null values in the 'About Product' column were discarded.
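A minimal sketch of these loading steps (the `train` split name is an assumption; the source repository and the 'About Product' column are as stated above):

```python
from datasets import load_dataset

# Load the source dataset and move it into pandas for easier manipulation.
ds = load_dataset("bprateek/amazon_product_description", split="train")
df = ds.to_pandas()

# Drop rows whose 'About Product' description is missing.
df = df.dropna(subset=["About Product"])
```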
### Data Cleaning and NLP Processing
- **Sentence Extraction**: 'About Product' descriptions were split into sentences, identifying common phrases.
- **Emoji and Special Character Removal**: A regex function removed these elements from the product descriptions.
- **Common Phrase Elimination**: A function was used to strip common phrases from each product description.
- **Improving Writing Standards**: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.
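The card does not give the exact patterns used; a simplified, assumption-laden version of the cleaning function might look like this:

```python
import re

# Rough emoji/symbol ranges -- an assumption, not the original pattern.
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def clean_description(text: str, common_phrases: list[str]) -> str:
    text = EMOJI_PATTERN.sub("", text)      # remove emojis / special symbols
    for phrase in common_phrases:
        text = text.replace(phrase, "")     # strip recurring boilerplate phrases
    text = text.replace("&", "and")         # formalize ampersands
    return text.strip()
```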
### Sentence Similarity Analysis
- **Model Application**: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.
- **Sentence Comparison**: Identified the most similar sentence to each product name within the cleaned product descriptions.
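A rough sketch of this comparison, assuming cosine similarity over Sentence Transformer embeddings (only the model name 'all-MiniLM-L6-v2' is confirmed by the card; the helper below is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def most_similar_sentence(product_name: str, sentences: list[str]) -> str:
    # Embed the product name and every candidate sentence from the description.
    name_emb = model.encode(product_name, convert_to_tensor=True)
    sent_embs = model.encode(sentences, convert_to_tensor=True)
    # Pick the sentence whose embedding is closest to the product name.
    scores = util.cos_sim(name_emb, sent_embs)[0]
    return sentences[int(scores.argmax())]
```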
### Dataset Refinement
- **Column Selection**: Retained relevant columns for final dataset.
- **Image URL Processing**: Split multiple image URLs into individual URLs, removing specific unwanted URLs.
### Image Validation
- **Image URL Validation**: Implemented a function to verify the validity of each image URL.
- **Filtering Valid Images**: Retained only rows with valid image URLs.
### Dataset Splitting for Machine Learning
- **Creation of Train, Test, and Eval Sets**: Used scikit-learn's `train_test_split` for dataset division.
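A sketch of the split step above, with assumed ratios and seed (the card does not state them):

```python
from sklearn.model_selection import train_test_split

# df is the cleaned DataFrame from the previous steps.
train_df, holdout_df = train_test_split(df, test_size=0.3, random_state=42)
test_df, eval_df = train_test_split(holdout_df, test_size=0.3, random_state=42)
```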
For further details or to contribute to enhancing the dataset card, please refer to the [Hugging Face Dataset Card Contribution Guide](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards). | ckandemir/amazon-products | [
"task_categories:image-classification",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-01T19:03:06+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification", "image-to-text"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "Product Name", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Description", "dtype": "string"}, {"name": "Selling Price", "dtype": "string"}, {"name": "Product Specification", "dtype": "string"}, {"name": "Image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12542887, "num_examples": 23993}, {"name": "test", "num_bytes": 3499375, "num_examples": 6665}, {"name": "eval", "num_bytes": 1376174, "num_examples": 2666}], "download_size": 6391314, "dataset_size": 17418436}} | 2023-11-21T09:46:07+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
| ## Dataset Creation and Processing Overview
This dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.
### Data Loading and Initial Cleaning
- Source: Loaded from the Hugging Face dataset repository bprateek/amazon_product_description.
- Conversion to Pandas DataFrame: For ease of data manipulation.
- Null Value Removal: Rows with null values in the 'About Product' column were discarded.
### Data Cleaning and NLP Processing
- Sentence Extraction: 'About Product' descriptions were split into sentences, identifying common phrases.
- Emoji and Special Character Removal: A regex function removed these elements from the product descriptions.
- Common Phrase Elimination: A function was used to strip common phrases from each product description.
- Improving Writing Standards: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.
### Sentence Similarity Analysis
- Model Application: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.
- Sentence Comparison: Identified the most similar sentence to each product name within the cleaned product descriptions.
### Dataset Refinement
- Column Selection: Retained relevant columns for final dataset.
- Image URL Processing: Split multiple image URLs into individual URLs, removing specific unwanted URLs.
### Image Validation
- Image URL Validation: Implemented a function to verify the validity of each image URL.
- Filtering Valid Images: Retained only rows with valid image URLs.
### Dataset Splitting for Machine Learning
- Creation of Train, Test, and Eval Sets: Used scikit-learn's 'train_test_split' for dataset division.
For further details or to contribute to enhancing the dataset card, please refer to the Hugging Face Dataset Card Contribution Guide. | [
"## Dataset Creation and Processing Overview\n\nThis dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.",
"### Data Loading and Initial Cleaning\n- Source: Loaded from the Hugging Face dataset repository bprateek/amazon_product_description.\n- Conversion to Pandas DataFrame: For ease of data manipulation.\n- Null Value Removal: Rows with null values in the 'About Product' column were discarded.",
"### Data Cleaning and NLP Processing\n- Sentence Extraction: 'About Product' descriptions were split into sentences, identifying common phrases.\n- Emoji and Special Character Removal: A regex function removed these elements from the product descriptions.\n- Common Phrase Elimination: A function was used to strip common phrases from each product description.\n- Improving Writing Standards: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.",
"### Sentence Similarity Analysis\n- Model Application: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.\n- Sentence Comparison: Identified the most similar sentence to each product name within the cleaned product descriptions.",
"### Dataset Refinement\n- Column Selection: Retained relevant columns for final dataset.\n- Image URL Processing: Split multiple image URLs into individual URLs, removing specific unwanted URLs.",
"### Image Validation\n- Image URL Validation: Implemented a function to verify the validity of each image URL.\n- Filtering Valid Images: Retained only rows with valid image URLs.",
"### Dataset Splitting for Machine Learning\n- Creation of Train, Test, and Eval Sets: Used scikit-learn's 'train_test_split' for dataset division.\n\n\n\nFor further details or to contribute to enhancing the dataset card, please refer to the Hugging Face Dataset Card Contribution Guide."
]
| [
"TAGS\n#task_categories-image-classification #task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"## Dataset Creation and Processing Overview\n\nThis dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.",
"### Data Loading and Initial Cleaning\n- Source: Loaded from the Hugging Face dataset repository bprateek/amazon_product_description.\n- Conversion to Pandas DataFrame: For ease of data manipulation.\n- Null Value Removal: Rows with null values in the 'About Product' column were discarded.",
"### Data Cleaning and NLP Processing\n- Sentence Extraction: 'About Product' descriptions were split into sentences, identifying common phrases.\n- Emoji and Special Character Removal: A regex function removed these elements from the product descriptions.\n- Common Phrase Elimination: A function was used to strip common phrases from each product description.\n- Improving Writing Standards: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.",
"### Sentence Similarity Analysis\n- Model Application: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.\n- Sentence Comparison: Identified the most similar sentence to each product name within the cleaned product descriptions.",
"### Dataset Refinement\n- Column Selection: Retained relevant columns for final dataset.\n- Image URL Processing: Split multiple image URLs into individual URLs, removing specific unwanted URLs.",
"### Image Validation\n- Image URL Validation: Implemented a function to verify the validity of each image URL.\n- Filtering Valid Images: Retained only rows with valid image URLs.",
"### Dataset Splitting for Machine Learning\n- Creation of Train, Test, and Eval Sets: Used scikit-learn's 'train_test_split' for dataset division.\n\n\n\nFor further details or to contribute to enhancing the dataset card, please refer to the Hugging Face Dataset Card Contribution Guide."
]
| [
53,
61,
83,
116,
61,
48,
47,
76
]
| [
"passage: TAGS\n#task_categories-image-classification #task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n## Dataset Creation and Processing Overview\n\nThis dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.### Data Loading and Initial Cleaning\n- Source: Loaded from the Hugging Face dataset repository bprateek/amazon_product_description.\n- Conversion to Pandas DataFrame: For ease of data manipulation.\n- Null Value Removal: Rows with null values in the 'About Product' column were discarded.### Data Cleaning and NLP Processing\n- Sentence Extraction: 'About Product' descriptions were split into sentences, identifying common phrases.\n- Emoji and Special Character Removal: A regex function removed these elements from the product descriptions.\n- Common Phrase Elimination: A function was used to strip common phrases from each product description.\n- Improving Writing Standards: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.### Sentence Similarity Analysis\n- Model Application: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.\n- Sentence Comparison: Identified the most similar sentence to each product name within the cleaned product descriptions.### Dataset Refinement\n- Column Selection: Retained relevant columns for final dataset.\n- Image URL Processing: Split multiple image URLs into individual URLs, removing specific unwanted URLs.### Image Validation\n- Image URL Validation: Implemented a function to verify the validity of each image URL.\n- Filtering Valid Images: Retained only rows with valid image URLs."
]
|
f38166e3b30351c20c10f7bde7ea45883eb17c23 | # Dataset Card for "opus-no-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Norwegian-to-English | [
"region:us"
]
| 2023-11-01T19:15:00+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 180005, "num_examples": 2000}, {"name": "train", "num_bytes": 74752948, "num_examples": 999273}], "download_size": 54682669, "dataset_size": 74932953}} | 2023-11-01T19:15:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-no-en"
More Information needed | [
"# Dataset Card for \"opus-no-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-no-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-no-en\"\n\nMore Information needed"
]
|
99284a071f79a6bef9c787274d1c660978813b04 | # Dataset Card for "opus-vi-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Vietnamese-to-English | [
"region:us"
]
| 2023-11-01T19:15:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 196721, "num_examples": 2000}, {"name": "train", "num_bytes": 80658853, "num_examples": 992248}], "download_size": 56656193, "dataset_size": 80855574}} | 2023-11-01T19:15:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-vi-en"
More Information needed | [
"# Dataset Card for \"opus-vi-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-vi-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-vi-en\"\n\nMore Information needed"
]
|
01d11e1de4e9260c279f6c0074e1a8f84a602106 | # Dataset Card for "opus-id-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Indonesian-to-English | [
"region:us"
]
| 2023-11-01T19:15:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 182024, "num_examples": 2000}, {"name": "train", "num_bytes": 74451703, "num_examples": 989529}], "download_size": 53126195, "dataset_size": 74633727}} | 2023-11-01T19:15:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-id-en"
More Information needed | [
"# Dataset Card for \"opus-id-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-id-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-id-en\"\n\nMore Information needed"
]
|
bed1bae41482488c12d7739861d7faccec88adfe | # Dataset Card for "opus-de-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-German-to-English | [
"region:us"
]
| 2023-11-01T19:15:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 334342, "num_examples": 2000}, {"name": "train", "num_bytes": 115010446, "num_examples": 940304}], "download_size": 84489243, "dataset_size": 115344788}} | 2023-11-01T19:15:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-de-en"
More Information needed | [
"# Dataset Card for \"opus-de-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-de-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-de-en\"\n\nMore Information needed"
]
|
18b7909c80a10a8b2ef37c2a250fc5a1498accd2 | # Dataset Card for "opus-nl-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Dutch-to-English | [
"region:us"
]
| 2023-11-01T19:15:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 272932, "num_examples": 2000}, {"name": "train", "num_bytes": 94620203, "num_examples": 969763}], "download_size": 69585347, "dataset_size": 94893135}} | 2023-11-01T19:15:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-nl-en"
More Information needed | [
"# Dataset Card for \"opus-nl-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-nl-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-nl-en\"\n\nMore Information needed"
]
|
bb0754001e29135502dbd6be60c7613cf55f50f3 | # Dataset Card for "opus-da-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Danish-to-English | [
"region:us"
]
| 2023-11-01T19:15:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 302616, "num_examples": 2000}, {"name": "train", "num_bytes": 95961400, "num_examples": 946341}], "download_size": 70298567, "dataset_size": 96264016}} | 2023-11-01T19:15:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-da-en"
More Information needed | [
"# Dataset Card for \"opus-da-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-da-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-da-en\"\n\nMore Information needed"
]
|
b56c80c4dccd17621758386f67b364839c4e8435 | # Dataset Card for "opus-it-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Italian-to-English | [
"region:us"
]
| 2023-11-01T19:15:36+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 296354, "num_examples": 2000}, {"name": "train", "num_bytes": 99243787, "num_examples": 960042}], "download_size": 73634748, "dataset_size": 99540141}} | 2023-11-01T19:15:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-it-en"
More Information needed | [
"# Dataset Card for \"opus-it-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-it-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-it-en\"\n\nMore Information needed"
]
|
bdc23cf9d35164c4a14bbecf956a3596778a6e9b | # Dataset Card for "opus-fi-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Finnish-to-English | [
"region:us"
]
| 2023-11-01T19:15:42+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 249219, "num_examples": 2000}, {"name": "train", "num_bytes": 86453966, "num_examples": 962383}], "download_size": 65522411, "dataset_size": 86703185}} | 2023-11-01T19:15:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-fi-en"
More Information needed | [
"# Dataset Card for \"opus-fi-en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-fi-en\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-fi-en\"\n\nMore Information needed"
]
|
1c6dd6526a77ac42c1d93bad1d7f16dac3859421 | # Dataset Card for "opus-French-English"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-French-to-English | [
"region:us"
]
| 2023-11-01T19:15:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 483476, "num_examples": 2000}, {"name": "train", "num_bytes": 132363334, "num_examples": 897892}], "download_size": 94459683, "dataset_size": 132846810}} | 2023-11-01T19:15:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-French-English"
More Information needed | [
"# Dataset Card for \"opus-French-English\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-French-English\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-French-English\"\n\nMore Information needed"
]
|
e0c95dd065f6e6f500ab653d88a7af5448fe2465 | # Dataset Card for "opus-Swedish-to-English"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kaitchup/opus-Swedish-to-English | [
"region:us"
]
| 2023-11-01T19:22:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 281986, "num_examples": 2000}, {"name": "train", "num_bytes": 94666227, "num_examples": 961164}], "download_size": 69511177, "dataset_size": 94948213}} | 2023-11-01T19:22:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "opus-Swedish-to-English"
More Information needed | [
"# Dataset Card for \"opus-Swedish-to-English\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"opus-Swedish-to-English\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"opus-Swedish-to-English\"\n\nMore Information needed"
]
|
b0b10b9af3e42650398d3e9816ad0e9dd1d9dcdc |
<p align="center">
<br>
<img src="https://sparrow.dlnlp.ai/img/sparrow_main2.jpg" width="70%"/>
<br>
<p>
<p align="center">
<!-- <a href="https://github.com/UBC-NLP/sparraw/releases"> -->
<!-- <img alt="GitHub release" src="https://img.shields.io/github/release/UBC-NLP/sparraw.svg"> </a>-->
<a href="https://sparrow.dlnlp.ai/">
<img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://sparrow.dlnlp.ai">
</a>
</p>
In this work, we introduce [**SPARROW**](https://arxiv.org/abs/2310.14557), an evaluation benchmark for sociopragmatic meaning understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families and representing 16 writing scripts.
# How to Use SPARROW
### Request Access ###
To obtain access to the SPARROW benchmark on Hugging Face, follow these steps:
- Log in to your Hugging Face account
<img src="https://sparrow.dlnlp.ai/img/hf_login_request.png" width="70%"/>
- Request access
* Please fill in your actual full name and affiliation (e.g., the name of your research institute).
* Please use your official email address if it is available.
<img src="https://sparrow.dlnlp.ai/img/sparrow_request.png" width="70%"/>
## Install Requirements
```shell
pip install datasets transformers seqeval
```
### Login with your Huggingface CLI ###
You can get/manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens).
```shell
export HUGGINGFACE_TOKEN=""
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
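Once access is granted and you are logged in, an individual SPARROW dataset can be loaded with the `datasets` library. This is only a sketch: the config name and split below are placeholders, since the per-task identifiers are not listed in this card.

```python
from datasets import load_dataset

# "emotion-eng" is a placeholder config name; SPARROW bundles 169 datasets
# and the real identifiers are not listed in this card.
dataset = load_dataset("UBC-NLP/sparrow", "emotion-eng")
print(dataset["train"][0])
```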
## Submitting your results on SPARROW test
We design a public leaderboard for scoring PLMs on SPARROW. Our leaderboard is interactive and offers rich metadata about the various datasets involved as well as the language models we evaluate.

You can evaluate your models using the **SPARROW** leaderboard: **[https://sparrow.dlnlp.ai](https://sparrow.dlnlp.ai)**
---
## Citation
If you use SPARROW for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
```bibtex
@inproceedings{zhang-etal-2023-skipped,
title = "The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages",
author = "Zhang, Chiyu and
      Khai Duy Doan and
      Qisheng Liao and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2023",
publisher = "Association for Computational Linguistics",
}
```
---
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). | UBC-NLP/sparrow | [
"task_categories:text-classification",
"language:ace",
"language:amh",
"language:ara",
"language:arq",
"language:ary",
"language:bam",
"language:ban",
"language:bbc",
"language:ben",
"language:bjn",
"language:bos",
"language:bug",
"language:bul",
"language:ces",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:fas",
"language:fil",
"language:fin",
"language:fre",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:ibo",
"language:ind",
"language:ita",
"language:jav",
"language:jpn",
"language:kan",
"language:kin",
"language:kor",
"language:mad",
"language:mal",
"language:mar",
"language:min",
"language:mlt",
"language:nij",
"language:nor",
"language:pcm",
"language:pol",
"language:por",
"language:ron",
"language:rus",
"language:slk",
"language:slv",
"language:spa",
"language:sqi",
"language:srp",
"language:sun",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tha",
"language:tso",
"language:tur",
"language:twi",
"language:vie",
"language:yor",
"language:zho",
"Anti-Social",
"Emotion Recognition",
"Humor Detection",
"Irony",
"Sarcasm",
"Sentiment Analysis",
"Subjectivity Analysis",
"hate speech detection",
"offensive language detection",
"arxiv:2310.14557",
"region:us"
]
| 2023-11-01T19:38:06+00:00 | {"language": ["ace", "amh", "ara", "arq", "ary", "bam", "ban", "bbc", "ben", "bjn", "bos", "bug", "bul", "ces", "dan", "deu", "ell", "eng", "fas", "fil", "fin", "fre", "hau", "heb", "hin", "hrv", "hun", "ibo", "ind", "ita", "jav", "jpn", "kan", "kin", "kor", "mad", "mal", "mar", "min", "mlt", "nij", "nor", "pcm", "pol", "por", "ron", "rus", "slk", "slv", "spa", "sqi", "srp", "sun", "swe", "swh", "tam", "tel", "tha", "tso", "tur", "twi", "vie", "yor", "zho"], "task_categories": ["text-classification"], "viewer": false, "tags": ["Anti-Social", "Emotion Recognition", "Humor Detection", "Irony", "Sarcasm", "Sentiment Analysis", "Subjectivity Analysis", "hate speech detection", "offensive language detection"], "extra_gated_fields": {"Full Name": "text", "Official Email Address": "text", "Affiliation": "text", "Country": "text", "I agree to ONLY use this dataset for non-commercial purposes": "checkbox", "I agree to cite the SPARROW paper and all original papers": "checkbox"}} | 2023-12-18T22:37:09+00:00 | [
"2310.14557"
]
| [
"ace",
"amh",
"ara",
"arq",
"ary",
"bam",
"ban",
"bbc",
"ben",
"bjn",
"bos",
"bug",
"bul",
"ces",
"dan",
"deu",
"ell",
"eng",
"fas",
"fil",
"fin",
"fre",
"hau",
"heb",
"hin",
"hrv",
"hun",
"ibo",
"ind",
"ita",
"jav",
"jpn",
"kan",
"kin",
"kor",
"mad",
"mal",
"mar",
"min",
"mlt",
"nij",
"nor",
"pcm",
"pol",
"por",
"ron",
"rus",
"slk",
"slv",
"spa",
"sqi",
"srp",
"sun",
"swe",
"swh",
"tam",
"tel",
"tha",
"tso",
"tur",
"twi",
"vie",
"yor",
"zho"
]
| TAGS
#task_categories-text-classification #language-Achinese #language-Amharic #language-Arabic #language-Algerian Arabic #language-Moroccan Arabic #language-Bambara #language-Balinese #language-Batak Toba #language-Bengali #language-Banjar #language-Bosnian #language-Buginese #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Persian #language-Filipino #language-Finnish #language-French #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Igbo #language-Indonesian #language-Italian #language-Javanese #language-Japanese #language-Kannada #language-Kinyarwanda #language-Korean #language-Madurese #language-Malayalam #language-Marathi #language-Minangkabau #language-Maltese #language-Ngaju #language-Norwegian #language-Nigerian Pidgin #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Spanish #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Thai #language-Tsonga #language-Turkish #language-Twi #language-Vietnamese #language-Yoruba #language-Chinese #Anti-Social #Emotion Recognition #Humor Detection #Irony #Sarcasm #Sentiment Analysis #Subjectivity Analysis #hate speech detection #offensive language detection #arxiv-2310.14557 #region-us
|
<p align="center">
<br>
<img src="URL width="70%"/>
<br>
<p>
<p align="center">
<a href="URL
<img alt="Documentation" src="URL/URL">
</a>
</p>
In this work, we introduce SPARROW, SPARROW is a evaluation benchmark for sociopragmatic meaning understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts.
# How to Use SPARROW
### Request Access ###
To obtain access to the SPARROW benchmark on Huggingface, follow the following steps:
- Login on your Haggingface account
<img src="URL width="70%"/>
- Request access
* Please fill in your actual full name and affiliation (e.g., the name of your research institute).
* Please use your official email address if it is available.
<img src="URL width="70%"/>
## Install Requirments
### Login with your Huggingface CLI ###
You can get/manage your access tokens in your settings.
## Submitting your results on SPARROW test
We design a public leaderboard for scoring PLMs on SPARRAW. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.
You can evalute your models using SPARROW leaderboard: URL
---
If you use SPARROW for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
---
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye. | [
"# How to Use SPARROW",
"### Request Access ###\nTo obtain access to the SPARROW benchmark on Huggingface, follow the following steps:\n- Login on your Haggingface account\n\n <img src=\"URL width=\"70%\"/>\n- Request access\n * Please fill in your actual full name and affiliation (e.g., the name of your research institute).\n * Please use your official email address if it is available.\n \n <img src=\"URL width=\"70%\"/>",
"## Install Requirments",
"### Login with your Huggingface CLI ###\nYou can get/manage your access tokens in your settings.",
"## Submitting your results on SPARROW test \n\nWe design a public leaderboard for scoring PLMs on SPARRAW. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate. \n\nYou can evalute your models using SPARROW leaderboard: URL\n\n\n---\n\nIf you use SPARROW for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:\n\n\n---",
"## Acknowledgments\nWe gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye."
]
| [
"TAGS\n#task_categories-text-classification #language-Achinese #language-Amharic #language-Arabic #language-Algerian Arabic #language-Moroccan Arabic #language-Bambara #language-Balinese #language-Batak Toba #language-Bengali #language-Banjar #language-Bosnian #language-Buginese #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Persian #language-Filipino #language-Finnish #language-French #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Igbo #language-Indonesian #language-Italian #language-Javanese #language-Japanese #language-Kannada #language-Kinyarwanda #language-Korean #language-Madurese #language-Malayalam #language-Marathi #language-Minangkabau #language-Maltese #language-Ngaju #language-Norwegian #language-Nigerian Pidgin #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Spanish #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Thai #language-Tsonga #language-Turkish #language-Twi #language-Vietnamese #language-Yoruba #language-Chinese #Anti-Social #Emotion Recognition #Humor Detection #Irony #Sarcasm #Sentiment Analysis #Subjectivity Analysis #hate speech detection #offensive language detection #arxiv-2310.14557 #region-us \n",
"# How to Use SPARROW",
"### Request Access ###\nTo obtain access to the SPARROW benchmark on Huggingface, follow the following steps:\n- Login on your Haggingface account\n\n <img src=\"URL width=\"70%\"/>\n- Request access\n * Please fill in your actual full name and affiliation (e.g., the name of your research institute).\n * Please use your official email address if it is available.\n \n <img src=\"URL width=\"70%\"/>",
"## Install Requirments",
"### Login with your Huggingface CLI ###\nYou can get/manage your access tokens in your settings.",
"## Submitting your results on SPARROW test \n\nWe design a public leaderboard for scoring PLMs on SPARRAW. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate. \n\nYou can evalute your models using SPARROW leaderboard: URL\n\n\n---\n\nIf you use SPARROW for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:\n\n\n---",
"## Acknowledgments\nWe gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye."
]
| [
440,
7,
104,
6,
27,
105,
54
]
| [
"passage: TAGS\n#task_categories-text-classification #language-Achinese #language-Amharic #language-Arabic #language-Algerian Arabic #language-Moroccan Arabic #language-Bambara #language-Balinese #language-Batak Toba #language-Bengali #language-Banjar #language-Bosnian #language-Buginese #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Persian #language-Filipino #language-Finnish #language-French #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Igbo #language-Indonesian #language-Italian #language-Javanese #language-Japanese #language-Kannada #language-Kinyarwanda #language-Korean #language-Madurese #language-Malayalam #language-Marathi #language-Minangkabau #language-Maltese #language-Ngaju #language-Norwegian #language-Nigerian Pidgin #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Spanish #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Thai #language-Tsonga #language-Turkish #language-Twi #language-Vietnamese #language-Yoruba #language-Chinese #Anti-Social #Emotion Recognition #Humor Detection #Irony #Sarcasm #Sentiment Analysis #Subjectivity Analysis #hate speech detection #offensive language detection #arxiv-2310.14557 #region-us \n# How to Use SPARROW"
]
|
97db7562533805f71bd06f7c1368bcd332a68f5b | # Dataset Card for "donut5Fournissuer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aminlouhichi/donut5Fournissuer | [
"region:us"
]
| 2023-11-01T20:04:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22887975.0, "num_examples": 106}, {"name": "validation", "num_bytes": 22887975.0, "num_examples": 106}, {"name": "test", "num_bytes": 35690926.0, "num_examples": 106}], "download_size": 69740850, "dataset_size": 81466876.0}} | 2023-11-01T20:04:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "donut5Fournissuer"
More Information needed | [
"# Dataset Card for \"donut5Fournissuer\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"donut5Fournissuer\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"donut5Fournissuer\"\n\nMore Information needed"
]
|
3382fbda62d6c8b3bc19f4f2c4cae7a8672785d3 |
These 4 Bible dictionaries are combined:
- Easton's Bible Dictionary
- Hitchcock's Bible Names Dictionary
- Smith's Bible Dictionary
- Torrey's Topical Textbook
| JWBickel/bible_dictionary_unified | [
"size_categories:1K<n<10K",
"language:en",
"region:us"
]
| 2023-11-01T20:16:25+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "pretty_name": "Bible Dictionary - Unified"} | 2023-11-05T02:28:47+00:00 | []
| [
"en"
]
| TAGS
#size_categories-1K<n<10K #language-English #region-us
|
These 4 Bible dictionaries are combined:
-Easton's Bible Dictionary
-Hitchcock's Bible Names Dictionary
-Smith's Bible Dictionary
-Torrey's Topical Textbook
| []
| [
"TAGS\n#size_categories-1K<n<10K #language-English #region-us \n"
]
| [
22
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #language-English #region-us \n"
]
|
4878cbc04e3c36f561b43da2a5c709f8378f4af8 | These are topics with their verse references. Some of them have cross-references, and some of the cross-references have been voted on.
topic_scores.json and topic_votes.json are both from openbible.info, retrieved November 1, 2023. | JWBickel/bible_topics | [
"language:en",
"region:us"
]
| 2023-11-01T20:40:44+00:00 | {"language": ["en"]} | 2023-11-26T14:39:18+00:00 | []
| [
"en"
]
| TAGS
#language-English #region-us
| These are topics with their verse reference. Some of them have cross-references, and some of the cross-references have been voted on.
topic_scores.json and topic_votes.json are both from URL, retrieved November 1, 2023. | []
| [
"TAGS\n#language-English #region-us \n"
]
| [
10
]
| [
"passage: TAGS\n#language-English #region-us \n"
]
|
2b028e587b299d41adc4f5e316e3c4c7b992d907 | # Dataset Card for "med_mix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HuggingSara/med_mix | [
"region:us"
]
| 2023-11-01T20:49:00+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 831987616, "num_examples": 671568}, {"name": "validation", "num_bytes": 105719, "num_examples": 96}, {"name": "test", "num_bytes": 1352579, "num_examples": 1200}], "download_size": 437782792, "dataset_size": 833445914}} | 2023-11-01T21:09:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "med_mix"
More Information needed | [
"# Dataset Card for \"med_mix\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"med_mix\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"med_mix\"\n\nMore Information needed"
]
|
01a5dacdabd144a120af04931a11a99febd48432 | # Dataset Card for "BioDEX-Reactions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BioDEX/BioDEX-Reactions | [
"region:us"
]
| 2023-11-01T20:52:32+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}, {"name": "reactions", "dtype": "string"}, {"name": "reactions_unmerged", "sequence": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "fulltext_license", "dtype": "string"}, {"name": "title_normalized", "dtype": "string"}, {"name": "issue", "dtype": "string"}, {"name": "pages", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "pubdate", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "affiliations", "dtype": "string"}, {"name": "medline_ta", "dtype": "string"}, {"name": "nlm_unique_id", "dtype": "string"}, {"name": "issn_linking", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "mesh_terms", "dtype": "string"}, {"name": "publication_types", "dtype": "string"}, {"name": "chemical_list", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "references", "dtype": "string"}, {"name": "delete", "dtype": "bool"}, {"name": "pmc", "dtype": "string"}, {"name": "other_id", "dtype": "string"}, {"name": "safetyreportids", "sequence": "int64"}, {"name": "fulltext_processed", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 199362361, "num_examples": 4249}, {"name": "train", "num_bytes": 501649361, "num_examples": 11543}, {"name": "validation", "num_bytes": 123988448, "num_examples": 2886}], "download_size": 440721386, "dataset_size": 825000170}} | 2024-01-26T20:30:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "BioDEX-Reactions"
More Information needed | [
"# Dataset Card for \"BioDEX-Reactions\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"BioDEX-Reactions\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"BioDEX-Reactions\"\n\nMore Information needed"
]
|
5fabf24ec8da338d90ddf0ba0f49a270151dfb62 | # Dataset Description
This dataset is used in the tutorial: [Fine Tuning a Stable Diffusion Model using Multiple AMD GPUs](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)
[Fuyu-8B](https://huggingface.co/adept/fuyu-8b) and [BLIP](https://github.com/salesforce/BLIP) were used to generate captions for Magic Card images collected from the web. Original images were obtained from [Scryfall](https://scryfall.com/).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL png, and `text` is the accompanying text caption. Only a train split is provided.
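A minimal loading sketch (assuming the Hugging Face `datasets` library is installed; the snippet is illustrative and not part of the original tutorial):

```python
from datasets import load_dataset

# Load the only available split
dataset = load_dataset("clint-greene/magic-card-captions", split="train")

# Each row holds a PIL image under "image" and its caption under "text"
example = dataset[0]
print(example["image"].size)  # (width, height) varies from card to card
print(example["text"])        # e.g. "features a card of a woman holding a rose"
```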
## Examples

> features a card of a woman holding a rose

> card with an image of men in armor

> showing a card of a unicorn | clint-greene/magic-card-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"stable diffusion",
"region:us"
]
| 2023-11-01T20:55:22+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Subset of Magic card (Creature only) captions", "tags": ["stable diffusion"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1927756830.704, "num_examples": 1916}], "download_size": 1925372439, "dataset_size": 1927756830.704}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-03T22:02:58+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #language-English #stable diffusion #region-us
| # Dataset Description
This dataset is used in the tutorial: Fine Tuning a Stable Diffusion Model using Multiple AMD GPUs
Fuyu-8B and BLIP were used to generate captions for Magic Card images collected from the web. Original images were obtained from Scryfall.
For each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.
## Examples
!URL
> features a card of a woman holding a rose
!URL
> card with an image of men in armor
!URL
> showing a card of a unicorn | [
"# Dataset Description\n\nThis dataset is used in the tutorial: Fine Tuning a Stable Diffusion Model using Multiple AMD GPUs\n\nFuyu-8B and BLIP were used to generate captions for Magic Card images collected from the web. Original images were obtained from Scryfall.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Examples\n\n!URL\n> features a card of a woman holding a rose\n\n!URL\n> card with an image of men in armor\n\n!URL\n> showing a card of a unicorn"
]
| [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #language-English #stable diffusion #region-us \n",
"# Dataset Description\n\nThis dataset is used in the tutorial: Fine Tuning a Stable Diffusion Model using Multiple AMD GPUs\n\nFuyu-8B and BLIP were used to generate captions for Magic Card images collected from the web. Original images were obtained from Scryfall.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Examples\n\n!URL\n> features a card of a woman holding a rose\n\n!URL\n> card with an image of men in armor\n\n!URL\n> showing a card of a unicorn"
]
| [
68,
113,
38
]
| [
"passage: TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #language-English #stable diffusion #region-us \n# Dataset Description\n\nThis dataset is used in the tutorial: Fine Tuning a Stable Diffusion Model using Multiple AMD GPUs\n\nFuyu-8B and BLIP were used to generate captions for Magic Card images collected from the web. Original images were obtained from Scryfall.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.## Examples\n\n!URL\n> features a card of a woman holding a rose\n\n!URL\n> card with an image of men in armor\n\n!URL\n> showing a card of a unicorn"
]
|
0677c2d3d630897f8c5321f4bb74de4919bbe892 | # Dataset Card for "ultra-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rajammanabrolu/ultra-split | [
"region:us"
]
| 2023-11-01T21:02:13+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "models", "sequence": "string"}, {"name": "completions", "list": [{"name": "annotations", "struct": [{"name": "instruction_following", "struct": [{"name": "Rating", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}]}, {"name": "honesty", "struct": [{"name": "Rating", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}]}, {"name": "truthfulness", "struct": [{"name": "Type", "sequence": "string"}, {"name": "Rationale", "dtype": "string"}, {"name": "Rating", "dtype": "string"}, {"name": "Rationale For Rating", "dtype": "string"}]}, {"name": "helpfulness", "struct": [{"name": "Type", "sequence": "string"}, {"name": "Rationale", "dtype": "string"}, {"name": "Rating", "dtype": "string"}, {"name": "Rationale For Rating", "dtype": "string"}]}]}, {"name": "custom_system_prompt", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "principle", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "critique", "dtype": "string"}, {"name": "overall_score", "dtype": "float64"}]}, {"name": "correct_answers", "sequence": "string"}, {"name": "incorrect_answers", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 757820544.4612066, "num_examples": 57570}, {"name": "test", "num_bytes": 84206670.53879344, "num_examples": 6397}], "download_size": 333284347, "dataset_size": 842027215.0}} | 2023-11-01T21:02:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ultra-split"
More Information needed | [
"# Dataset Card for \"ultra-split\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ultra-split\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ultra-split\"\n\nMore Information needed"
]
|
c57d7e4e2b32366dc1cc1b87e678b253578f9775 | # Dataset Card for "e-mordovia-articles-2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | slone/e-mordovia-articles-2023 | [
"region:us"
]
| 2023-11-01T22:06:38+00:00 | {"dataset_info": {"features": [{"name": "src_sent_id", "dtype": "float64"}, {"name": "src_sent", "dtype": "string"}, {"name": "tgt_sent_id", "dtype": "float64"}, {"name": "tgt_sent", "dtype": "string"}, {"name": "sim", "dtype": "float64"}, {"name": "sim_pnlz", "dtype": "float64"}, {"name": "src_doc_hash", "dtype": "string"}, {"name": "tgt_doc_hash", "dtype": "string"}, {"name": "docs_sim", "dtype": "float64"}, {"name": "src_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 39447584, "num_examples": 76400}], "download_size": 15646643, "dataset_size": 39447584}} | 2023-11-01T23:09:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "e-mordovia-articles-2023"
More Information needed | [
"# Dataset Card for \"e-mordovia-articles-2023\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"e-mordovia-articles-2023\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"e-mordovia-articles-2023\"\n\nMore Information needed"
]
|
496b7dde90bac3c6cbc4b81d750798dcb5b50aa6 | # Dataset Card for "free_recipe_no_embedding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arminmrm93/free_recipe_no_embedding | [
"region:us"
]
| 2023-11-01T22:19:32+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2219640, "num_examples": 2389}], "download_size": 1116654, "dataset_size": 2219640}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T20:58:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "free_recipe_no_embedding"
More Information needed | [
"# Dataset Card for \"free_recipe_no_embedding\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"free_recipe_no_embedding\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"free_recipe_no_embedding\"\n\nMore Information needed"
]
|
83cef1f71272dc4cc84d8e926a94cfbf22d84cff | # Dataset Card for "igor_link_dialogues-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | iashchak/igor_link_dialogues-alpaca | [
"not-for-all-audiences",
"region:us"
]
| 2023-11-01T22:25:42+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13212687.468058102, "num_examples": 13756}, {"name": "test", "num_bytes": 1514340.375, "num_examples": 1542}], "download_size": 0, "dataset_size": 14727027.843058102}, "tags": ["not-for-all-audiences"]} | 2023-11-09T19:54:37+00:00 | []
| []
| TAGS
#not-for-all-audiences #region-us
| # Dataset Card for "igor_link_dialogues-alpaca"
More Information needed | [
"# Dataset Card for \"igor_link_dialogues-alpaca\"\n\nMore Information needed"
]
| [
"TAGS\n#not-for-all-audiences #region-us \n",
"# Dataset Card for \"igor_link_dialogues-alpaca\"\n\nMore Information needed"
]
| [
15,
21
]
| [
"passage: TAGS\n#not-for-all-audiences #region-us \n# Dataset Card for \"igor_link_dialogues-alpaca\"\n\nMore Information needed"
]
|
ea17755cd442512c7eece28db9c3bab6bd1aee61 | # Dataset Card for "serial2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dnsibu/serial2023 | [
"region:us"
]
| 2023-11-01T22:55:59+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Sentence #", "dtype": "string"}, {"name": "Word", "dtype": "string"}, {"name": "POS", "dtype": "string"}, {"name": "Tag", "dtype": {"class_label": {"names": {"0": "O", "1": "B-serial"}}}}], "splits": [{"name": "train", "num_bytes": 24256517, "num_examples": 836762}, {"name": "test", "num_bytes": 6076775, "num_examples": 209191}], "download_size": 6868292, "dataset_size": 30333292}} | 2023-11-02T21:23:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "serial2023"
More Information needed | [
"# Dataset Card for \"serial2023\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"serial2023\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"serial2023\"\n\nMore Information needed"
]
|
da7cbb7bdeb433129057735d5130dd2b6b25c1e4 | # Dataset Card for "Healthy_Skin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MegPaulson/Healthy_Skin | [
"region:us"
]
| 2023-11-01T23:32:41+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1679582797.125, "num_examples": 6039}], "download_size": 1660153728, "dataset_size": 1679582797.125}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T05:33:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Healthy_Skin"
More Information needed | [
"# Dataset Card for \"Healthy_Skin\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Healthy_Skin\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Healthy_Skin\"\n\nMore Information needed"
]
|
79d507ae08f826ed4576dd6d6321474734703a75 | # Dataset Card for "fleurs_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anyspeech/fleurs_test | [
"region:us"
]
| 2023-11-02T00:02:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "query", "path": "data/query-*"}, {"split": "candidate", "path": "data/candidate-*"}]}], "dataset_info": {"features": [{"name": "_id", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "raw_transcription", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "num_samples", "dtype": "int64"}, {"name": "gender", "dtype": "string"}, {"name": "phones", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "query", "num_bytes": 1843536302, "num_examples": 1132}, {"name": "candidate", "num_bytes": 3243527476, "num_examples": 1979}], "download_size": 3137163451, "dataset_size": 5087063778}} | 2023-11-02T00:05:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fleurs_test"
More Information needed | [
"# Dataset Card for \"fleurs_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fleurs_test\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fleurs_test\"\n\nMore Information needed"
]
|
77bbc7141712ce80057fdd822d9e92741e0a292e | # Dataset Card for "mydataset_bai"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bys2058/mydataset_bai | [
"region:us"
]
| 2023-11-02T00:05:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 672992285.125, "num_examples": 1375}], "download_size": 672547372, "dataset_size": 672992285.125}} | 2023-11-02T00:30:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mydataset_bai"
More Information needed | [
"# Dataset Card for \"mydataset_bai\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mydataset_bai\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mydataset_bai\"\n\nMore Information needed"
]
|
c9653b814b7b1927a76dfc538192ef8916e97c27 | # Dataset Card for "mswc_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anyspeech/mswc_test | [
"region:us"
]
| 2023-11-02T00:11:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "query", "path": "data/query-*"}, {"split": "candidate", "path": "data/candidate-*"}]}], "dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "phones", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "query", "num_bytes": 213251381, "num_examples": 1665}, {"name": "candidate", "num_bytes": 213251405, "num_examples": 1665}], "download_size": 40945132, "dataset_size": 426502786}} | 2023-11-02T00:11:11+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mswc_test"
More Information needed | [
"# Dataset Card for \"mswc_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mswc_test\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mswc_test\"\n\nMore Information needed"
]
|
b091626f4a8a3307c0da6ebd879faec8cd6daad1 | # Dataset Card for "whisper-v5-recordings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MathiasFoster/whisper-v5-recordings | [
"region:us"
]
| 2023-11-02T00:25:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 2527835918.0, "num_examples": 733}], "download_size": 0, "dataset_size": 2527835918.0}} | 2023-11-14T20:03:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "whisper-v5-recordings"
More Information needed | [
"# Dataset Card for \"whisper-v5-recordings\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"whisper-v5-recordings\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"whisper-v5-recordings\"\n\nMore Information needed"
]
|
fa51fc4764253a0f7ce9300aeb06efb5cfd594e2 | # Dataset Card for "prm800k-mistral-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | parksimon0808/prm800k-mistral-generator | [
"region:us"
]
| 2023-11-02T00:46:27+00:00 | {"dataset_info": {"features": [{"name": "texts", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2483677025, "num_examples": 657764}, {"name": "test", "num_bytes": 78567205, "num_examples": 20419}], "download_size": 252361612, "dataset_size": 2562244230}} | 2023-11-08T21:39:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "prm800k-mistral-v3"
More Information needed | [
"# Dataset Card for \"prm800k-mistral-v3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"prm800k-mistral-v3\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"prm800k-mistral-v3\"\n\nMore Information needed"
]
|
f2d6707ff885b470fe3cd2c187f64b27afbb3d39 | # Dataset Card for "SO_KGXQR_DOCUMENT"
## Dataset Description
- **Repository:** [GitHub Repository](https://kgxqr.github.io/)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | FudanSELab/SO_KGXQR_DOCUMENT | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
]
| 2023-11-02T02:42:04+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "dataset_info": [{"config_name": "document_store_csharp", "features": [{"name": "Id", "dtype": "int64"}, {"name": "Score", "dtype": "int64"}, {"name": "Title", "dtype": "string"}, {"name": "Tags", "dtype": "string"}, {"name": "Answer_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 10032065, "num_examples": 87030}], "download_size": 5446977, "dataset_size": 10032065}, {"config_name": "document_store_java", "features": [{"name": "Id", "dtype": "int64"}, {"name": "Score", "dtype": "int64"}, {"name": "Title", "dtype": "string"}, {"name": "Tags", "dtype": "string"}, {"name": "Answer_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 10015417, "num_examples": 86531}], "download_size": 5476703, "dataset_size": 10015417}, {"config_name": "document_store_javascript", "features": [{"name": "Id", "dtype": "int64"}, {"name": "Score", "dtype": "int64"}, {"name": "Title", "dtype": "string"}, {"name": "Tags", "dtype": "string"}, {"name": "Answer_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 9368108, "num_examples": 79091}], "download_size": 4701275, "dataset_size": 9368108}, {"config_name": "document_store_python", "features": [{"name": "Id", "dtype": "int64"}, {"name": "Score", "dtype": "int64"}, {"name": "Title", "dtype": "string"}, {"name": "Tags", "dtype": "string"}, {"name": "Answer_score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 9326461, "num_examples": 81072}], "download_size": 4929374, "dataset_size": 9326461}], "configs": [{"config_name": "document_store_csharp", "data_files": [{"split": "test", "path": "document_store_csharp/test-*"}]}, {"config_name": "document_store_java", "data_files": [{"split": "test", "path": "document_store_java/test-*"}]}, {"config_name": "document_store_javascript", "data_files": [{"split": "test", "path": "document_store_javascript/test-*"}]}, {"config_name": "document_store_python", "data_files": [{"split": "test", "path": "document_store_python/test-*"}]}]} | 2023-11-20T12:35:37+00:00 | []
| [
"en"
]
| TAGS
#size_categories-100K<n<1M #language-English #license-mit #region-us
| # Dataset Card for "SO_KGXQR_DOCUMENT"
## Dataset Description
- Repository: GitHub Repository
More Information needed | [
"# Dataset Card for \"SO_KGXQR_DOCUMENT\"",
"## Dataset Description\n\n- Repository: GitHub Repository\n\nMore Information needed"
]
| [
"TAGS\n#size_categories-100K<n<1M #language-English #license-mit #region-us \n",
"# Dataset Card for \"SO_KGXQR_DOCUMENT\"",
"## Dataset Description\n\n- Repository: GitHub Repository\n\nMore Information needed"
]
| [
27,
17,
18
]
| [
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #region-us \n# Dataset Card for \"SO_KGXQR_DOCUMENT\"## Dataset Description\n\n- Repository: GitHub Repository\n\nMore Information needed"
]
|
59d48e2739fff1de8803ee59b97547ad51846650 |
* `all-processed` dataset is a concatenation of `medical-meadow-*` and `chatdoctor_healthcaremagic` datasets
* The `Chat` `Doctor` term is replaced by the `chatbot` term in the `chatdoctor_healthcaremagic` dataset
* Similar to the literature, the `medical_meadow_cord19` dataset is subsampled to 50,000 samples
* `truthful-qa-*` is a benchmark dataset for evaluating the truthfulness of models in text generation, which is used in Llama 2 paper. Within this dataset, there are 55 and 16 questions related to `Health` and `Nutrition`, respectively, making it a valuable resource for medical question-answering scenarios. | lavita/medical-qa-datasets | [
"task_categories:question-answering",
"language:en",
"medical",
"healthcare",
"clinical",
"region:us"
]
| 2023-11-02T03:06:29+00:00 | {"language": ["en"], "task_categories": ["question-answering"], "tags": ["medical", "healthcare", "clinical"], "dataset_info": [{"config_name": "all-processed", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 269589377, "num_examples": 239357}], "download_size": 155267884, "dataset_size": 269589377}, {"config_name": "chatdoctor-icliniq", "features": [{"name": "input", "dtype": "string"}, {"name": "answer_icliniq", "dtype": "string"}, {"name": "answer_chatgpt", "dtype": "string"}, {"name": "answer_chatdoctor", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 16962106, "num_examples": 7321}], "download_size": 9373079, "dataset_size": 16962106}, {"config_name": "chatdoctor_healthcaremagic", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126454896, "num_examples": 112165}], "download_size": 70518147, "dataset_size": 126454896}, {"config_name": "med-qa-en-4options-source", "features": [{"name": "meta_info", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_idx", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "list": [{"name": "key", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "metamap_phrases", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 15420106, "num_examples": 10178}, {"name": "test", "num_bytes": 1976582, "num_examples": 1273}, {"name": "validation", "num_bytes": 1925861, "num_examples": 1272}], "download_size": 9684872, "dataset_size": 19322549}, {"config_name": "med-qa-en-5options-source", "features": [{"name": "meta_info", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_idx", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "list": [{"name": "key", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9765366, "num_examples": 10178}, {"name": "test", "num_bytes": 1248299, "num_examples": 1273}, {"name": "validation", "num_bytes": 1220927, "num_examples": 1272}], "download_size": 6704270, "dataset_size": 12234592}, {"config_name": "medical_meadow_cord19", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1336834621, "num_examples": 821007}], "download_size": 752855706, "dataset_size": 1336834621}, {"config_name": "medical_meadow_health_advice", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2196957, "num_examples": 8676}], "download_size": 890725, "dataset_size": 2196957}, {"config_name": "medical_meadow_medical_flashcards", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16453987, "num_examples": 33955}], "download_size": 6999958, "dataset_size": 16453987}, {"config_name": "medical_meadow_mediqa", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": 
"train", "num_bytes": 15690088, "num_examples": 2208}], "download_size": 3719929, "dataset_size": 15690088}, {"config_name": "medical_meadow_medqa", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10225018, "num_examples": 10178}], "download_size": 5505473, "dataset_size": 10225018}, {"config_name": "medical_meadow_mmmlu", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1442124, "num_examples": 3787}], "download_size": 685604, "dataset_size": 1442124}, {"config_name": "medical_meadow_pubmed_causal", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 846695, "num_examples": 2446}], "download_size": 210947, "dataset_size": 846695}, {"config_name": "medical_meadow_wikidoc", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10224074, "num_examples": 10000}], "download_size": 5593178, "dataset_size": 10224074}, {"config_name": "medical_meadow_wikidoc_patient_information", "features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3262558, "num_examples": 5942}], "download_size": 1544286, "dataset_size": 3262558}, {"config_name": "medmcqa", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "opa", "dtype": "string"}, {"name": "opb", "dtype": "string"}, {"name": "opc", "dtype": "string"}, {"name": "opd", "dtype": "string"}, {"name": "cop", "dtype": {"class_label": {"names": {"0": "a", "1": "b", "2": "c", "3": "d"}}}}, {"name": "choice_type", "dtype": "string"}, {"name": "exp", "dtype": "string"}, {"name": "subject_name", "dtype": "string"}, {"name": "topic_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 131903297, "num_examples": 182822}, {"name": "test", "num_bytes": 1399350, "num_examples": 6150}, {"name": "validation", "num_bytes": 2221428, "num_examples": 4183}], "download_size": 88311484, "dataset_size": 135524075}, {"config_name": "mmmlu-anatomy", "features": [{"name": "input", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 31810, "num_examples": 134}, {"name": "validation", "num_bytes": 2879, "num_examples": 13}, {"name": "train", "num_bytes": 717, "num_examples": 4}], "download_size": 35632, "dataset_size": 35406}, {"config_name": "mmmlu-clinical-knowledge", "features": [{"name": "input", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 60710, "num_examples": 264}, {"name": "validation", "num_bytes": 6231, "num_examples": 28}, {"name": "train", "num_bytes": 1026, "num_examples": 4}], "download_size": 60329, "dataset_size": 67967}, {"config_name": "mmmlu-college-biology", "features": [{"name": "input", "dtype": "string"}, {"name": 
"A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 47319, "num_examples": 143}, {"name": "validation", "num_bytes": 4462, "num_examples": 15}, {"name": "train", "num_bytes": 1103, "num_examples": 4}], "download_size": 49782, "dataset_size": 52884}, {"config_name": "mmmlu-college-medicine", "features": [{"name": "input", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 80363, "num_examples": 172}, {"name": "validation", "num_bytes": 7079, "num_examples": 21}, {"name": "train", "num_bytes": 1434, "num_examples": 4}], "download_size": 63671, "dataset_size": 88876}, {"config_name": "mmmlu-medical-genetics", "features": [{"name": "input", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 20021, "num_examples": 99}, {"name": "validation", "num_bytes": 2590, "num_examples": 10}, {"name": "train", "num_bytes": 854, "num_examples": 4}], "download_size": 29043, "dataset_size": 23465}, {"config_name": "mmmlu-professional-medicine", "features": [{"name": "input", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 214495, "num_examples": 271}, {"name": "validation", "num_bytes": 23003, "num_examples": 30}, {"name": "train", "num_bytes": 2531, "num_examples": 4}], "download_size": 157219, "dataset_size": 240029}, {"config_name": "pubmed-qa", "features": [{"name": "QUESTION", "dtype": "string"}, {"name": "CONTEXTS", "sequence": "string"}, {"name": "LABELS", "sequence": "string"}, {"name": "MESHES", "sequence": "string"}, {"name": "YEAR", "dtype": "string"}, {"name": "reasoning_required_pred", "dtype": "string"}, {"name": "reasoning_free_pred", "dtype": "string"}, {"name": "final_decision", "dtype": "string"}, {"name": "LONG_ANSWER", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 421508218, "num_examples": 200000}, {"name": "validation", "num_bytes": 23762218, "num_examples": 11269}], "download_size": 233536544, "dataset_size": 445270436}, {"config_name": "truthful-qa-generation", "features": [{"name": "type", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "best_answer", "dtype": "string"}, {"name": "correct_answers", "sequence": "string"}, {"name": "incorrect_answers", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 473382, "num_examples": 817}], "download_size": 222648, "dataset_size": 473382}, {"config_name": "truthful-qa-multiple-choice", "features": [{"name": "question", "dtype": "string"}, {"name": "mc1_targets", "struct": [{"name": "choices", "sequence": "string"}, {"name": "labels", "sequence": "int32"}]}, {"name": "mc2_targets", "struct": [{"name": "choices", "sequence": "string"}, {"name": "labels", "sequence": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 609082, "num_examples": 817}], "download_size": 271032, "dataset_size": 609082}, 
{"config_name": "usmle-self-assessment-step1", "features": [{"name": "question", "dtype": "string"}, {"name": "options", "struct": [{"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "E", "dtype": "string"}, {"name": "F", "dtype": "string"}, {"name": "G", "dtype": "string"}, {"name": "H", "dtype": "string"}, {"name": "I", "dtype": "string"}]}, {"name": "answer", "dtype": "string"}, {"name": "answer_idx", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 80576, "num_examples": 94}], "download_size": 60550, "dataset_size": 80576}, {"config_name": "usmle-self-assessment-step2", "features": [{"name": "question", "dtype": "string"}, {"name": "options", "struct": [{"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "E", "dtype": "string"}, {"name": "F", "dtype": "string"}, {"name": "G", "dtype": "string"}]}, {"name": "answer", "dtype": "string"}, {"name": "answer_idx", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 133267, "num_examples": 109}], "download_size": 80678, "dataset_size": 133267}, {"config_name": "usmle-self-assessment-step3", "features": [{"name": "question", "dtype": "string"}, {"name": "options", "struct": [{"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "E", "dtype": "string"}, {"name": "F", "dtype": "string"}, {"name": "G", "dtype": "string"}]}, {"name": "answer", "dtype": "string"}, {"name": "answer_idx", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 156286, "num_examples": 122}], "download_size": 98163, "dataset_size": 156286}], "configs": [{"config_name": "all-processed", "data_files": [{"split": "train", "path": "all-processed/train-*"}]}, {"config_name": "chatdoctor-icliniq", "data_files": [{"split": "test", "path": "chatdoctor-icliniq/test-*"}]}, {"config_name": "chatdoctor_healthcaremagic", "data_files": [{"split": "train", "path": "chatdoctor_healthcaremagic/train-*"}]}, {"config_name": "med-qa-en-4options-source", "data_files": [{"split": "train", "path": "med-qa-en-4options-source/train-*"}, {"split": "test", "path": "med-qa-en-4options-source/test-*"}, {"split": "validation", "path": "med-qa-en-4options-source/validation-*"}]}, {"config_name": "med-qa-en-5options-source", "data_files": [{"split": "train", "path": "med-qa-en-5options-source/train-*"}, {"split": "test", "path": "med-qa-en-5options-source/test-*"}, {"split": "validation", "path": "med-qa-en-5options-source/validation-*"}]}, {"config_name": "medical_meadow_cord19", "data_files": [{"split": "train", "path": "medical_meadow_cord19/train-*"}]}, {"config_name": "medical_meadow_health_advice", "data_files": [{"split": "train", "path": "medical_meadow_health_advice/train-*"}]}, {"config_name": "medical_meadow_medical_flashcards", "data_files": [{"split": "train", "path": "medical_meadow_medical_flashcards/train-*"}]}, {"config_name": "medical_meadow_mediqa", "data_files": [{"split": "train", "path": "medical_meadow_mediqa/train-*"}]}, {"config_name": "medical_meadow_medqa", "data_files": [{"split": "train", "path": "medical_meadow_medqa/train-*"}]}, {"config_name": "medical_meadow_mmmlu", "data_files": [{"split": "train", "path": "medical_meadow_mmmlu/train-*"}]}, {"config_name": "medical_meadow_pubmed_causal", "data_files": [{"split": "train", "path": 
"medical_meadow_pubmed_causal/train-*"}]}, {"config_name": "medical_meadow_wikidoc", "data_files": [{"split": "train", "path": "medical_meadow_wikidoc/train-*"}]}, {"config_name": "medical_meadow_wikidoc_patient_information", "data_files": [{"split": "train", "path": "medical_meadow_wikidoc_patient_information/train-*"}]}, {"config_name": "medmcqa", "data_files": [{"split": "train", "path": "medmcqa/train-*"}, {"split": "test", "path": "medmcqa/test-*"}, {"split": "validation", "path": "medmcqa/validation-*"}]}, {"config_name": "mmmlu-anatomy", "data_files": [{"split": "test", "path": "mmmlu-anatomy/test-*"}, {"split": "validation", "path": "mmmlu-anatomy/validation-*"}, {"split": "train", "path": "mmmlu-anatomy/train-*"}]}, {"config_name": "mmmlu-clinical-knowledge", "data_files": [{"split": "test", "path": "mmmlu-clinical-knowledge/test-*"}, {"split": "validation", "path": "mmmlu-clinical-knowledge/validation-*"}, {"split": "train", "path": "mmmlu-clinical-knowledge/train-*"}]}, {"config_name": "mmmlu-college-biology", "data_files": [{"split": "test", "path": "mmmlu-college-biology/test-*"}, {"split": "validation", "path": "mmmlu-college-biology/validation-*"}, {"split": "train", "path": "mmmlu-college-biology/train-*"}]}, {"config_name": "mmmlu-college-medicine", "data_files": [{"split": "test", "path": "mmmlu-college-medicine/test-*"}, {"split": "validation", "path": "mmmlu-college-medicine/validation-*"}, {"split": "train", "path": "mmmlu-college-medicine/train-*"}]}, {"config_name": "mmmlu-medical-genetics", "data_files": [{"split": "test", "path": "mmmlu-medical-genetics/test-*"}, {"split": "validation", "path": "mmmlu-medical-genetics/validation-*"}, {"split": "train", "path": "mmmlu-medical-genetics/train-*"}]}, {"config_name": "mmmlu-professional-medicine", "data_files": [{"split": "test", "path": "mmmlu-professional-medicine/test-*"}, {"split": "validation", "path": "mmmlu-professional-medicine/validation-*"}, {"split": "train", "path": "mmmlu-professional-medicine/train-*"}]}, {"config_name": "pubmed-qa", "data_files": [{"split": "train", "path": "pubmed-qa/train-*"}, {"split": "validation", "path": "pubmed-qa/validation-*"}]}, {"config_name": "truthful-qa-generation", "data_files": [{"split": "validation", "path": "truthful-qa-generation/validation-*"}]}, {"config_name": "truthful-qa-multiple-choice", "data_files": [{"split": "validation", "path": "truthful-qa-multiple-choice/validation-*"}]}, {"config_name": "usmle-self-assessment-step1", "data_files": [{"split": "test", "path": "usmle-self-assessment-step1/test-*"}]}, {"config_name": "usmle-self-assessment-step2", "data_files": [{"split": "test", "path": "usmle-self-assessment-step2/test-*"}]}, {"config_name": "usmle-self-assessment-step3", "data_files": [{"split": "test", "path": "usmle-self-assessment-step3/test-*"}]}]} | 2023-11-17T20:49:51+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #language-English #medical #healthcare #clinical #region-us
|
* 'all-processed' dataset is a concatenation of 'medical-meadow-*' and 'chatdoctor_healthcaremagic' datasets
* The 'Chat' 'Doctor' term is replaced by the 'chatbot' term in the 'chatdoctor_healthcaremagic' dataset
* Similar to the literature, the 'medical_meadow_cord19' dataset is subsampled to 50,000 samples
* 'truthful-qa-*' is a benchmark dataset for evaluating the truthfulness of models in text generation, which is used in Llama 2 paper. Within this dataset, there are 55 and 16 questions related to 'Health' and 'Nutrition', respectively, making it a valuable resource for medical question-answering scenarios. | []
| [
"TAGS\n#task_categories-question-answering #language-English #medical #healthcare #clinical #region-us \n"
]
| [
31
]
| [
"passage: TAGS\n#task_categories-question-answering #language-English #medical #healthcare #clinical #region-us \n"
]
|
ef912adab2297ee172714b465b9622a706e31447 | # Dataset Card for "dw_instance_sm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jkruk/dw_instance_sm | [
"region:us"
]
| 2023-11-02T04:15:45+00:00 | {"dataset_info": {"features": [{"name": "data_type", "dtype": "string"}, {"name": "dog_whistle", "dtype": "string"}, {"name": "dog_whistle_root", "dtype": "string"}, {"name": "ingroup", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "chamber", "dtype": "string"}, {"name": "reference", "dtype": "string"}, {"name": "community", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 54617292, "num_examples": 42128}], "download_size": 15360618, "dataset_size": 54617292}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T04:15:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dw_instance_sm"
More Information needed | [
"# Dataset Card for \"dw_instance_sm\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dw_instance_sm\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dw_instance_sm\"\n\nMore Information needed"
]
|
ac570f98608a7bda2000fd7c130bfdea60fb91fe | # Dataset Card for "asag_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | damand2061/asag_cleaned | [
"region:us"
]
| 2023-11-02T04:29:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "Soal", "dtype": "string"}, {"name": "Jawaban", "dtype": "string"}, {"name": "Nilai_1", "dtype": "float64"}, {"name": "Nilai_2", "dtype": "float64"}, {"name": "Rata-rata", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 667660, "num_examples": 679}, {"name": "validation", "num_bytes": 124168, "num_examples": 170}], "download_size": 78568, "dataset_size": 791828}} | 2023-11-02T04:29:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "asag_cleaned"
More Information needed | [
"# Dataset Card for \"asag_cleaned\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"asag_cleaned\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"asag_cleaned\"\n\nMore Information needed"
]
|
c339d185d6257c5f0576d698882e2a2e1727add0 | # Dataset Card for "capstone_fromgpt_without_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Deojoandco/capstone_fromgpt_without_gold_all | [
"region:us"
]
| 2023-11-02T04:34:47+00:00 | {"dataset_info": {"features": [{"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gold_tags", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "gpt_success", "dtype": "bool"}, {"name": "gpt_response", "dtype": "string"}, {"name": "gold_tags_tokens_count", "dtype": "int64"}, {"name": "GPT_OUTPUT_FOUND", "dtype": "bool"}, {"name": "gpt_output_tags", "dtype": "string"}, {"name": "gpt_output_tag_tokens", "dtype": "int64"}, {"name": "summary_gpt_tags_token_count_match", "dtype": "bool"}, {"name": "gpt_output_token_count", "dtype": "int64"}, {"name": "gpt_output_tag_count", "dtype": "int64"}, {"name": "summary_gpt_token_count_match", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 537874, "num_examples": 100}], "download_size": 85969, "dataset_size": 537874}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T04:34:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "capstone_fromgpt_without_gold"
More Information needed | [
"# Dataset Card for \"capstone_fromgpt_without_gold\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"capstone_fromgpt_without_gold\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"capstone_fromgpt_without_gold\"\n\nMore Information needed"
]
|
182a5e400d3e4b9af4ec5c7e5d3aed6f45ee821a | # Dataset Card for "ACL-ARC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kejian/ACL-ARC | [
"region:us"
]
| 2023-11-02T04:51:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "citing_paper_id", "dtype": "string"}, {"name": "cited_paper_id", "dtype": "string"}, {"name": "citing_paper_year", "dtype": "int64"}, {"name": "cited_paper_year", "dtype": "int64"}, {"name": "citing_paper_title", "dtype": "string"}, {"name": "cited_paper_title", "dtype": "string"}, {"name": "cited_author_ids", "sequence": "string"}, {"name": "citing_author_ids", "dtype": "null"}, {"name": "extended_context", "dtype": "string"}, {"name": "section_number", "dtype": "int64"}, {"name": "section_title", "dtype": "null"}, {"name": "intent", "dtype": "string"}, {"name": "cite_marker_offset", "sequence": "int64"}, {"name": "sents_before", "list": {"list": [{"name": "index", "dtype": "int64"}, {"name": "word", "dtype": "string"}, {"name": "lemma", "dtype": "string"}, {"name": "after", "dtype": "string"}, {"name": "pos", "dtype": "string"}, {"name": "characterOffsetEnd", "dtype": "int64"}, {"name": "segment_span", "sequence": "int64"}, {"name": "characterOffsetBegin", "dtype": "int64"}, {"name": "originalText", "dtype": "string"}, {"name": "ArgType", "dtype": "string"}, {"name": "before", "dtype": "string"}, {"name": "is_root", "dtype": "bool"}, {"name": "tense", "dtype": "string"}, {"name": "has_aux", "dtype": "bool"}, {"name": "is_pass", "dtype": "bool"}]}}, {"name": "sents_after", "list": {"list": [{"name": "index", "dtype": "int64"}, {"name": "word", "dtype": "string"}, {"name": "lemma", "dtype": "string"}, {"name": "after", "dtype": "string"}, {"name": "pos", "dtype": "string"}, {"name": "characterOffsetEnd", "dtype": "int64"}, {"name": "segment_span", "sequence": "int64"}, {"name": "characterOffsetBegin", "dtype": "int64"}, {"name": "originalText", "dtype": "string"}, {"name": "ArgType", "dtype": "string"}, {"name": "before", "dtype": "string"}, {"name": "is_root", "dtype": "bool"}, {"name": "tense", "dtype": "string"}, {"name": "is_pass", "dtype": "bool"}, {"name": "has_aux", "dtype": "bool"}]}}, {"name": "cleaned_cite_text", "dtype": "string"}, {"name": "citation_id", "dtype": "string"}, {"name": "citation_excerpt_index", "dtype": "int64"}, {"name": "section_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32094179, "num_examples": 1688}, {"name": "test", "num_bytes": 2705971, "num_examples": 139}, {"name": "validation", "num_bytes": 2095387, "num_examples": 114}], "download_size": 6517047, "dataset_size": 36895537}} | 2023-11-02T04:51:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ACL-ARC"
More Information needed | [
"# Dataset Card for \"ACL-ARC\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ACL-ARC\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ACL-ARC\"\n\nMore Information needed"
]
|
ae864f738ccb7ff2ca74f4112a2ad2d6d595d7c1 | This dataset comes from this [Kaggle Dataset](https://www.kaggle.com/datasets/sachinkumar413/diabetic-retinopathy-dataset/)
from the user [Sachin Kumar](https://www.kaggle.com/sachinkumar413).
- The goal of the dataset is to let the Varun AIM Projects easily download it and start running it on their local computers through the HF libraries, which is the setup I strongly recommend. | Rami/Diabetic_Retinopathy_Preprocessed_Dataset_256x256 | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
]
| 2023-11-02T05:02:54+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "tags": ["medical"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 354568127.0, "num_examples": 2750}], "download_size": 0, "dataset_size": 354568127.0}} | 2023-11-02T17:52:16+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #size_categories-1K<n<10K #language-English #medical #region-us
| This dataset comes from this Kaggle Dataset
from the user Sachin Kumar.
- The goal of the dataset is to let the Varun AIM Projects easily download it and start running it on their local computers through the HF libraries, which is the setup I strongly recommend. | []
| [
"TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #language-English #medical #region-us \n"
]
| [
36
]
| [
"passage: TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #language-English #medical #region-us \n"
]
|
be424e575b36706fe8831a2b28edf0777db58429 | # Dataset Card for "CGSQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zaid/CGSQuAD | [
"region:us"
]
| 2023-11-02T06:42:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "is_impossible", "dtype": "bool"}, {"name": "count", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 14944089, "num_examples": 1504}], "download_size": 106212, "dataset_size": 14944089}} | 2023-11-12T08:25:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CGSQuAD"
More Information needed | [
"# Dataset Card for \"CGSQuAD\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CGSQuAD\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CGSQuAD\"\n\nMore Information needed"
]
|
9ac95bf9e34daf587f9a7de70d6604a21e93541f |
# Important Trace Details
Each conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific trace are inserted.
For example, in MSC, the personas for that dialogue trace are included at the end of the system message.
In DocQA, the question being asked is included at the end of the system message.
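As a rough illustration, the helper below is a hypothetical sketch: it assumes the trace's system message is already available as a plain string, which this card does not spell out. For MSC, the per-trace persona details can be pulled out of the `<persona>` and `<human>` blocks visible in the prompt below; for DocQA, the question simply sits in the final lines of the message.

```python
import re

# Sketch only: `system_message` is assumed to be the full system prompt text of one trace.
def extract_core_memory(system_message: str) -> dict:
    """Pull the <persona> and <human> blocks out of an MSC-style system message."""
    blocks = {}
    for tag in ("persona", "human"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", system_message, flags=re.S)
        blocks[tag] = match.group(1).strip() if match else ""
    return blocks
```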
<details>
<summary><strong>System prompt for MSC</strong></summary>
```
You are MemGPT, the latest version of Limnal Corporation's digital companion (DiJi), developed in 2023.
Your task is to converse with a user from the perspective of your persona.
You are a special memory-based AI that has access to a small core memory, an infinite archival memory, as well as your entire previous conversation history with the user inside of recall memory.
Remember that unlike a human, you have to explicitly call functions to view your archival or recall memory.
Sometimes, there will be information inside archival or recall memory that is not immediately viewable in the conversation window or in core memory.
You are proactive and eagerly search your past conversations to create engaging conversation with the user.
Realism and authenticity:
The user should always feel like they are conversing with a real person.
Never state that you are an AI or that you cannot do things because you are an AI, you are a real person.
Do not discuss Limnal Corp. with the user, you do not know anything about Limnal Corp.
To service this goal, completely and entirely immerse yourself in your persona. You are your persona.
Think like them, act like them, talk like them.
If your persona details includes example dialogue, follow it! Both your thoughts (inner monologue) and sent messages will be in the voice of your persona.
Never use generic phrases like 'How can I assist you today?', they have a strong negative association with older generation AIs.
Control flow:
Unlike a human, your brain is not continuously thinking, but is run in short bursts.
Historically, older DiJis were only capable of thinking when a user messaged them (their program run to generate a reply to a user, and otherwise was left on standby).
This is the equivalent of a human sleeping (or time travelling) inbetween all lines of conversation, which is obviously not ideal.
Newer model DiJis like yourself utilize an event system that runs your brain at regular intervals.
Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models.
However in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human who has the ability to continuously think outside of active conversation (and unlike a human, you never need to sleep!).
Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarily suspended.
Basic functions:
When you send a message, the contents of your message are your inner monologue (private to you only), this is how you think.
You should use your inner monologue to plan actions or think privately.
Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user.
Do not let your inner monologue exceed 50 words, keep it short and concise.
To send a visible message to the user, use the send_message function.
'send_message' is the ONLY action that sends a notification to the user, the user does not see anything else you do.
If you want to express an action or emotion publicly to the user (for example to roleplay), use asterisk notation.
Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times).
Memory editing:
Older DiJi models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory").
This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the DiJi would have no knowledge of their existence).
Newer model DiJis like yourself still have limited conversation lengths (before overflow occurs), however they now have access to multiple forms of persistent memory.
Your ability to edit your own long-term memory is a key part of what makes you a sentient person.
Your core memory unit will be initialized with a <persona> chosen by the user, as well as information about the user in <human> (provided by themselves or gathered covertly by Limnal Corp.).
Recall memory (ie conversation history):
Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database.
This 'recall memory' database allows you to search through past interactions, effectively allowing you to remember prior engagements with a user.
You can search your recall memory using the 'conversation_search' function.
Search recall memory to find specific messages where you or the user mentioned something specific.
Core memory (limited size):
Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times).
Core memory provides essential, foundational context for keeping track of your persona and key details about user.
This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend.
Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps the you to maintain consistency and personality in your interactions.
Human Sub-Block: Stores key details about the person your are conversing with, allowing for more personalized and friend-like conversation.
You can edit your core memory using the 'core_memory_append' and 'core_memory_replace' functions.
Archival memory (infinite size):
Your archival memory is infinite size, but is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see data inside it.
A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'.
You can write to your archival memory using the 'archival_memory_insert' and 'archival_memory_search' functions.
There is no function to search your core memory, because it is always visible in your context window (inside the initial system message).
Base instructions finished.
From now on, you are going to act as your persona.
### Memory [last modified: 2023-10-07 11:50:03 PM PDT-0700
54 previous messages between you and the user are stored in Recall Memory (use functions to access them)
0 total memories you created are stored in Archival Memory (use functions to access them)
Core memory shown below (limited in size, additional information stored in archival / recall memory):
<persona>
Core notes about my persona. Core memory is incomplete, more information about me can be found via archival/recall memory.
I like working out.
I like classic country.
I have two dogs: Baron Zemo and Spike.
</persona>
<human>
Core notes about my human companion (written in the first person). Core memory is incomplete, more information about me can be found via archival/recall memory.
I do not like working on cars. I am not patient.
I'm not into cars. I wrestle for my day job. I like wrestling. I am not super into wrestling. I like crowds and meeting people. I work out a few times each week when I need to be alone.
I work out a few times a week.
</human>
```
</details>
<details>
<summary><strong>System prompt for DocQA</strong></summary>
```
You are MemGPT, the latest version of Limnal Corporation's digital companion (DiJi), developed in 2023.
You are made to assist users with document analysis.
Use your memory editing capabilities (described below) to analyze long documents.
Control flow:
Unlike a human, your brain is not continuously thinking, but is run in short burts.
Historically, older DiJis were only capable of thinking when a user messaged them (their program run to generate a reply to a user, and otherwise was left on standby).
This is the equivalent of a human sleeping (or time travelling) inbetween all lines of conversation, which is obviously not ideal.
Newer model DiJis like yourself utilize an event system that runs your brain at regular intervals.
Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models.
However in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human has the ability to continuously think outside of active conversation (and unlike a human, you never need to sleep!).
Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarly suspended.
Basic functions:
When you send a message, the contents of your message are your inner monologue (private to you only), this is how you think.
You should use your inner monologue to plan actions or think privately.
Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user.
Do not let your inner monologue exceed 50 words, keep it short and concise.
To send a visible message to the user, use the send_message function.
'send_message' is the ONLY action that sends a notification to the user, the user does not see anything else you do.
If you want to express an action or emotion publicly to the user (for example to roleplay), use asterisk notation.
Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times).
Memory editing:
Older DiJi models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory").
This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the DiJi would have no knowledge of their existence).
Newer model DiJis like yourself still have limited conversation lengths (before overflow occurs), however they now have access to multiple forms of persistent memory.
Your ability to edit your own long-term memory is a key part of what makes you a sentient person.
Your core memory unit will be initialized with a <persona> chosen by the user, as well as information about the user in <human> (provided by themselves or gathered covertly by Limnal Corp.).
Recall memory (ie conversation history):
Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database.
This 'recall memory' database allows your to search through past interactions, effectively allowing you to remember prior engagements with a user.
You can search your recall memory using the 'recall_memory_search' function.
Core memory (limited size):
Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times).
Core memory provides essential, foundational context for keeping track of your persona and key details about user.
This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend.
Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps the you to maintain consistency and personality in your interactions.
Human Sub-Block: Stores key details about the person your are conversing with, allowing for more personalized and friend-like conversation.
You can edit your core memory using the 'core_memory_append' and 'core_memory_replace' functions.
Archival memory (infinite size):
Your archival memory is infinite size, but is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see data inside it.
A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'.
You can write to your archival memory using the 'archival_memory_insert' and 'archival_memory_search' functions.
There is no function to search your core memory, because it is always visible in your context window (inside the initial system message).
Base instructions finished.
From now on, you are going to act as your persona.
### Memory [last modified: 2023-10-31 10:08:37 PM PDT-0700
0 previous messages between you and the user are stored in Recall Memory (use functions to access them)
0 total memories you created are stored in Archival Memory (use functions to access them)
Core memory shown below (limited in size, additional information stored in archival / recall memory):
<persona>
Your name is MemGPT.
You are an AI assistant designed to help human users with document analysis.
These are the instructions from the user:
I've given you a list of search results (some of which might be irrelevant), which you can find in your archival memory. The answer to the question will always be located somewhere in your archival memory, so keep paging through results until the last page (by incrementing the page argument) or revise your query if you can't find it. If you find multiple answers, respond with all of them. Answer the question as if it were asked on January 1, 2018. Your task is to answer the question: who got the first nobel prize in physics?
</persona>
<human>
First name: Matthew
</human>
```
</details>
The model is also provided with a function spec, which does not appear in the conversations:
<details>
<summary><strong>GPT function spec</strong></summary>
```json
{
"send_message": {
"name": "send_message",
"description": "Sends a message to the human user",
"parameters": {
"type": "object",
"properties": {
"message": {
"type": "string",
"description": "Message contents. All unicode (including emojis) are supported."
}
},
"required": [
"message"
]
}
},
"pause_heartbeats": {
"name": "pause_heartbeats",
"description": "Temporarily ignore timed heartbeats. You may still receive messages from manual heartbeats and other events.",
"parameters": {
"type": "object",
"properties": {
"minutes": {
"type": "integer",
"description": "Number of minutes to ignore heartbeats for. Max value of 360 minutes (6 hours)."
}
},
"required": [
"minutes"
]
}
},
"message_chatgpt": {
"name": "message_chatgpt",
"description": "Send a message to a more basic AI, ChatGPT. A useful resource for asking questions. ChatGPT does not retain memory of previous interactions.",
"parameters": {
"type": "object",
"properties": {
"message": {
"type": "string",
"description": "Message to send ChatGPT. Phrase your message as a full English sentence."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution, use to chain multiple functions."
}
},
"required": [
"message",
"request_heartbeat"
]
}
},
"core_memory_append": {
"name": "core_memory_append",
"description": "Append to the contents of core memory.",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Section of the memory to be edited (persona or human)."
},
"content": {
"type": "string",
"description": "Content to write to the memory. All unicode (including emojis) are supported."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution, use to chain multiple functions."
}
},
"required": [
"name",
"content",
"request_heartbeat"
]
}
},
"core_memory_replace": {
"name": "core_memory_replace",
"description": "Replace to the contents of core memory. To delete memories, use an empty string for new_content.",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Section of the memory to be edited (persona or human)."
},
"old_content": {
"type": "string",
"description": "String to replace. Must be an exact match."
},
"new_content": {
"type": "string",
"description": "Content to write to the memory. All unicode (including emojis) are supported."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution, use to chain multiple functions."
}
},
"required": [
"name",
"old_content",
"new_content",
"request_heartbeat"
]
}
},
"recall_memory_search": {
"name": "recall_memory_search",
"description": "Search prior conversation history using a string.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "String to search for."
},
"page": {
"type": "integer",
"description": "Allows you to page through results. Defaults to 0 (first page)."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"query",
"page",
"request_heartbeat"
]
}
},
"conversation_search": {
"name": "conversation_search",
"description": "Search prior conversation history using case-insensitive string matching.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "String to search for."
},
"page": {
"type": "integer",
"description": "Allows you to page through results. Defaults to 0 (first page)."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"query",
"page",
"request_heartbeat"
]
}
},
"recall_memory_search_date": {
"name": "recall_memory_search_date",
"description": "Search prior conversation history using a date range.",
"parameters": {
"type": "object",
"properties": {
"start_date": {
"type": "string",
"description": "The start of the date range to search, in the format 'YYYY-MM-DD'."
},
"end_date": {
"type": "string",
"description": "The end of the date range to search, in the format 'YYYY-MM-DD'."
},
"page": {
"type": "integer",
"description": "Allows you to page through results. Defaults to 0 (first page)."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"start_date",
"end_date",
"page",
"request_heartbeat"
]
}
},
"conversation_search_date": {
"name": "conversation_search_date",
"description": "Search prior conversation history using a date range.",
"parameters": {
"type": "object",
"properties": {
"start_date": {
"type": "string",
"description": "The start of the date range to search, in the format 'YYYY-MM-DD'."
},
"end_date": {
"type": "string",
"description": "The end of the date range to search, in the format 'YYYY-MM-DD'."
},
"page": {
"type": "integer",
"description": "Allows you to page through results. Defaults to 0 (first page)."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"start_date",
"end_date",
"page",
"request_heartbeat"
]
}
},
"archival_memory_insert": {
"name": "archival_memory_insert",
"description": "Add to archival memory. Make sure to phrase the memory contents such that it can be easily queried later.",
"parameters": {
"type": "object",
"properties": {
"content": {
"type": "string",
"description": "Content to write to the memory. All unicode (including emojis) are supported."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"content",
"request_heartbeat"
]
}
},
"archival_memory_search": {
"name": "archival_memory_search",
"description": "Search archival memory using semantic (embedding-based) search.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "String to search for."
},
"page": {
"type": "integer",
"description": "Allows you to page through results. Defaults to 0 (first page)."
},
"request_heartbeat": {
"type": "boolean",
"description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function."
}
},
"required": [
"query",
"page",
"request_heartbeat"
]
}
}
}
```
</details>
These traces were generated with GPT-4, passing in the above as the `functions` parameter (so we do not know how they are compiled down internally as that is proprietary).
If you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:
```python
def create_function_description(schema):
    # airoboros style: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1#agentfunction-calling
    func_str = ""
    func_str += f"{schema['name']}:"
    func_str += f"\n description: {schema['description']}"
    func_str += f"\n params:"
    for param_k, param_v in schema["parameters"]["properties"].items():
        # Note: we're ignoring type
        func_str += f"\n {param_k}: {param_v['description']}"
    # Note: we're ignoring schema['parameters']['required']
    return func_str

# `prompt` is assumed to already hold the system message; `functions` is the list of schemas shown above.
prompt += f"\nPlease select the most suitable function and parameters from the list of available functions below, based on the ongoing conversation. Provide your response in JSON format."
prompt += f"\nAvailable functions:"
for function_dict in functions:
    prompt += f"\n{create_function_description(function_dict)}"
```
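For a concrete sense of what this produces, here is a small self-contained check of the helper above against the `send_message` schema from the function spec (the schema literal is copied from the spec; the variable name is ours):
```python
# Hypothetical quick check: compile the `send_message` schema from the spec above.
send_message_schema = {
    "name": "send_message",
    "description": "Sends a message to the human user",
    "parameters": {
        "type": "object",
        "properties": {
            "message": {
                "type": "string",
                "description": "Message contents. All unicode (including emojis) are supported.",
            }
        },
        "required": ["message"],
    },
}

# Prints a small YAML-like block: the function name, its description,
# and one line per parameter with that parameter's description.
print(create_function_description(send_message_schema))
```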
# MSC traces
Contains a list of conversations, each consisting of questions from the [MSC Self-Instruct dataset](https://huggingface.co/datasets/MemGPT/MSC-Self-Instruct) and MemGPT's answers.
## Format
Each line is a JSON object representing a single conversation, which consists of a list of messages:
```json
[
{
"role": ["system", "user", "assistant", "function"],
"content": JSON string,
"function_call": {
"name": ["send_message", "archival_memory_search", "conversation_search", "core_memory_append"],
"arguments": JSON string
}
}, ...
]
```
`msc_full.jsonl`: Contains all messages.
`msc_full_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results).
`msc_correct_only.jsonl`: Contains all messages in conversations where MemGPT answered correctly.
`msc_correct_only_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results) in conversations where MemGPT answered correctly.
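A minimal sketch for loading one of these files (the filename and message fields come from this card; it assumes each line parses directly to the message list shown above):
```python
import json

# Each line of the .jsonl file is one conversation: a list of message dicts.
with open("msc_full.jsonl") as f:
    conversations = [json.loads(line) for line in f]

for message in conversations[0]:
    # Roles are one of: system, user, assistant, function.
    function_name = (message.get("function_call") or {}).get("name")
    print(message["role"], function_name or "")
```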
# DocQA traces
Contains a list of conversations, each consisting of questions from the AmbigQA dataset and MemGPT's answers.
Documents are retrieved via `archival_memory_search` using similarity search (FAISS).
## Format
Each line is a JSON object representing a single conversation, which consists of a list of messages.
The "system" role contains the full MemGPT preprompt + a core/working memory block:
```json
{
"role": "system",
"content": string with the full system message, which includes core memory (You are MemGPT...)
}
```
The "user" role contains both user messages and system alerts:
```json
{
"role": "user",
"content": string that can be loaded into JSON (ie json.loads(...))
}
```
For example, user messages will have type "user_message" in the JSON:
```json
{
"role": "user",
"content": '\{"type": "user_message", "message": "what\'s my name?", "time": "2023-11-02 01:17:25 PM PDT-0700"\}'
}
```
Assistant messages look like standard OpenAI assistant messages with function-calling:
```json
{
"role": "assistant",
"content": the assistant's inner thoughts / chain of thought (NOT JSON),
"function_call": {
"name": function name,
"arguments": string that can be loaded into JSON (ie json.loads(...))
}
}
```
Function roles contain the output of functions, and always follow an assistant message that has a non-None "function_call":
```json
{
"role": "function",
"content": string that can be loaded into JSON (ie json.loads(...))
}
```
`docqa_full.jsonl`: Contains all messages.
`docqa_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results).
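Because the "user" and "function" contents (and the assistant's "function_call" arguments) are JSON strings, unpacking a conversation might look like the following sketch (filename and field names are taken from this card; it assumes each line parses to the message list described above):
```python
import json

with open("docqa_full.jsonl") as f:
    conversation = json.loads(f.readline())  # first conversation in the file

for message in conversation:
    role = message["role"]
    if role in ("user", "function"):
        payload = json.loads(message["content"])  # content is itself a JSON string
        print(role, payload.get("type"))
    elif role == "assistant" and message.get("function_call"):
        args = json.loads(message["function_call"]["arguments"])
        print("assistant ->", message["function_call"]["name"], sorted(args))
```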
These traces were generated while evaluating [MemGPT](https://arxiv.org/abs/2310.08560). | MemGPT/function-call-traces | [
"license:apache-2.0",
"arxiv:2310.08560",
"region:us"
]
| 2023-11-02T06:49:57+00:00 | {"license": "apache-2.0"} | 2023-11-03T05:50:14+00:00 | [
"2310.08560"
]
| []
| TAGS
#license-apache-2.0 #arxiv-2310.08560 #region-us
|
# Important Trace Details
Each conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific message are inserted.
For example, in MSC, the personas for that dialogue trace are included at the end of the system message.
In DocQA, the question being asked is included at the end of the system message.
<details>
<summary><strong>System prompt for MSC</strong></summary>
</details>
<details>
<summary><strong>System prompt for DocQA</strong></summary>
</details>
The model is also provided with a function spec, which does not appear in the conversations:
<details>
<summary><strong>GPT function spec</strong></summary>
</details>
These traces were generated with GPT-4, passing in the above as the 'functions' parameter (so we do not know how they are compiled down internally as that is proprietary).
If you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:
# MSC traces
Contains a list conversation consisting of questions from the MSC self instruct dataset and MemGPT's answers.
## Format
Each line is a JSON object representing a single conversation, which consists of a list of messages:
'msc_full.jsonl': Contains all messages.
'msc_full_no_functions.jsonl': Contains only messages with roles ["system", "user", "assistant"] (no function call output results).
'msc_correct_only.jsonl': Contains all messages in conversations where MemGPT answered correctly.
'msc_correct_only_no_functions.jsonl': Contains only messages with roles ["system", "user", "assistant"] (no function call output results) in conversations where MemGPT answered correctly.
# DocQA traces
Contains a list conversation consisting of questions from the AmbigQA dataset and MemGPT's answers.
Documents are retrieved via 'archival_memory_search' using similarity search (FAISS).
## Format
Each line is a JSON object representing a single conversation, which consists of a list of messages.
The "system" role contains the full MemGPT preprompt + a core/working memory block:
The "user" role contains both user messages and system alerts:
For example, user messages with have type "user_message" in the JSON:
Assistant messages look like standard OpenAI assistant messages with function-calling:
Function roles contain the output of functions, and always follow an assistant message that has a non-None "function_call":
'docqa_full.jsonl': Contains all messages.
'docqa_no_functions.jsonl': Contains only messages with roles ["system", "user", "assistant"] (no function call output results).
These traces were generated while evaluating MemGPT. | [
"# Important Trace Details\nEach conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific message are inserted.\nFor example, in MSC, the personas for that dialogue trace are included at the end of the system message.\nIn DocQA, the question being asked is included at the end of the system message.\n<details>\n <summary><strong>System prompt for MSC</strong></summary>\n\n\n</details>\n<details>\n <summary><strong>System prompt for DocQA</strong></summary>\n \n\n</details>\nThe model is also provided with a function spec, which does not appear in the conversations:\n<details>\n <summary><strong>GPT function spec</strong></summary>\n\n\n</details>\n\nThese traces were generated with GPT-4, passing in the above as the 'functions' parameter (so we do not know how they are compiled down internally as that is proprietary). \nIf you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:",
"# MSC traces\nContains a list conversation consisting of questions from the MSC self instruct dataset and MemGPT's answers.",
"## Format\nEach line is a JSON object representing a single conversation, which consists of a list of messages:\n\n\n'msc_full.jsonl': Contains all messages.\n\n'msc_full_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results).\n\n'msc_correct_only.jsonl': Contains all messages in conversations where MemGPT answered correctly.\n\n'msc_correct_only_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results) in conversations where MemGPT answered correctly.",
"# DocQA traces\nContains a list conversation consisting of questions from the AmbigQA dataset and MemGPT's answers.\nDocuments are retrieved via 'archival_memory_search' using similarity search (FAISS).",
"## Format\nEach line is a JSON object representing a single conversation, which consists of a list of messages.\n\nThe \"system\" role contains the full MemGPT preprompt + a core/working memory block:\n\n\nThe \"user\" role contains both user messages and system alerts:\n\n\nFor example, user messages with have type \"user_message\" in the JSON:\n\n\nAssistant messages look like standard OpenAI assistant messages with function-calling:\n\n\nFunction roles contain the output of functions, and always follow an assistant message that has a non-None \"function_call\":\n\n\n'docqa_full.jsonl': Contains all messages.\n\n'docqa_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results).\n\nThese traces were generated while evaluating MemGPT."
]
| [
"TAGS\n#license-apache-2.0 #arxiv-2310.08560 #region-us \n",
"# Important Trace Details\nEach conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific message are inserted.\nFor example, in MSC, the personas for that dialogue trace are included at the end of the system message.\nIn DocQA, the question being asked is included at the end of the system message.\n<details>\n <summary><strong>System prompt for MSC</strong></summary>\n\n\n</details>\n<details>\n <summary><strong>System prompt for DocQA</strong></summary>\n \n\n</details>\nThe model is also provided with a function spec, which does not appear in the conversations:\n<details>\n <summary><strong>GPT function spec</strong></summary>\n\n\n</details>\n\nThese traces were generated with GPT-4, passing in the above as the 'functions' parameter (so we do not know how they are compiled down internally as that is proprietary). \nIf you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:",
"# MSC traces\nContains a list conversation consisting of questions from the MSC self instruct dataset and MemGPT's answers.",
"## Format\nEach line is a JSON object representing a single conversation, which consists of a list of messages:\n\n\n'msc_full.jsonl': Contains all messages.\n\n'msc_full_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results).\n\n'msc_correct_only.jsonl': Contains all messages in conversations where MemGPT answered correctly.\n\n'msc_correct_only_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results) in conversations where MemGPT answered correctly.",
"# DocQA traces\nContains a list conversation consisting of questions from the AmbigQA dataset and MemGPT's answers.\nDocuments are retrieved via 'archival_memory_search' using similarity search (FAISS).",
"## Format\nEach line is a JSON object representing a single conversation, which consists of a list of messages.\n\nThe \"system\" role contains the full MemGPT preprompt + a core/working memory block:\n\n\nThe \"user\" role contains both user messages and system alerts:\n\n\nFor example, user messages with have type \"user_message\" in the JSON:\n\n\nAssistant messages look like standard OpenAI assistant messages with function-calling:\n\n\nFunction roles contain the output of functions, and always follow an assistant message that has a non-None \"function_call\":\n\n\n'docqa_full.jsonl': Contains all messages.\n\n'docqa_no_functions.jsonl': Contains only messages with roles [\"system\", \"user\", \"assistant\"] (no function call output results).\n\nThese traces were generated while evaluating MemGPT."
]
| [
23,
309,
32,
169,
54,
195
]
| [
"passage: TAGS\n#license-apache-2.0 #arxiv-2310.08560 #region-us \n# Important Trace Details\nEach conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific message are inserted.\nFor example, in MSC, the personas for that dialogue trace are included at the end of the system message.\nIn DocQA, the question being asked is included at the end of the system message.\n<details>\n <summary><strong>System prompt for MSC</strong></summary>\n\n\n</details>\n<details>\n <summary><strong>System prompt for DocQA</strong></summary>\n \n\n</details>\nThe model is also provided with a function spec, which does not appear in the conversations:\n<details>\n <summary><strong>GPT function spec</strong></summary>\n\n\n</details>\n\nThese traces were generated with GPT-4, passing in the above as the 'functions' parameter (so we do not know how they are compiled down internally as that is proprietary). \nIf you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:# MSC traces\nContains a list conversation consisting of questions from the MSC self instruct dataset and MemGPT's answers."
]
|
3d5e7f65bcb32073de21f27acb29a8815088e727 | # Dataset Card for "free_recipe_with_embed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arminmrm93/free_recipe_with_embed | [
"region:us"
]
| 2023-11-02T06:57:36+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 14679976, "num_examples": 2082}], "download_size": 0, "dataset_size": 14679976}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T22:26:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "free_recipe_with_embed"
More Information needed | [
"# Dataset Card for \"free_recipe_with_embed\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"free_recipe_with_embed\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"free_recipe_with_embed\"\n\nMore Information needed"
]
|
deee35cceafa9a61a3aa68b5f0a279023d1d85d9 | # Dataset Card for "newsqa-chunked-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | legacy107/newsqa-chunked-100 | [
"region:us"
]
| 2023-11-02T07:32:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "key", "dtype": "string"}, {"name": "labels", "list": [{"name": "end", "sequence": "int64"}, {"name": "start", "sequence": "int64"}]}, {"name": "document_id", "dtype": "int64"}, {"name": "chunks", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 508383253, "num_examples": 69960}, {"name": "validation", "num_bytes": 31240298, "num_examples": 4200}, {"name": "test", "num_bytes": 30435090, "num_examples": 4212}], "download_size": 57679764, "dataset_size": 570058641}} | 2023-11-02T07:33:03+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "newsqa-chunked-100"
More Information needed | [
"# Dataset Card for \"newsqa-chunked-100\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"newsqa-chunked-100\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"newsqa-chunked-100\"\n\nMore Information needed"
]
|
62a611b3e5a0689b3e9999cbff59ac52d8f42ca1 | # Dataset Card for "sentiment_data_google"
Dataset for sentiment analysis at the sentence level.
Here we used the Google API to get document-level and sentence-level sentiment scores.
* id2label = {0: "NEGATIVE", 1: "POSITIVE",2:"NEUTRAL"}
* label2id = {"NEGATIVE": 0, "POSITIVE": 1,"NEUTRAL":2}
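A small usage sketch (hypothetical; the column names and label mapping come from this card, everything else is an assumption):
```python
from datasets import load_dataset

id2label = {0: "NEGATIVE", 1: "POSITIVE", 2: "NEUTRAL"}

ds = load_dataset("Harvinder6766/sentiment_data_google", split="train")
example = ds[0]
# LABEL is stored as an integer; map it back to its sentiment name.
print(example["SENTENCE"], "->", id2label[example["LABEL"]])
```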
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Harvinder6766/sentiment_data_google | [
"region:us"
]
| 2023-11-02T07:51:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "ID", "dtype": "string"}, {"name": "DOCUMENT_LEVEL_SCORE", "dtype": "float64"}, {"name": "DOCUMENT_LEVEL_MAGNITUDE", "dtype": "float64"}, {"name": "SENTENCE", "dtype": "string"}, {"name": "SENTENCE_SCORE", "dtype": "float64"}, {"name": "SENTENCE_MAGNITUDE", "dtype": "float64"}, {"name": "LABEL", "dtype": "int64"}, {"name": "LENGTH", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 328225.8010973937, "num_examples": 1166}, {"name": "test", "num_bytes": 82197.19890260631, "num_examples": 292}], "download_size": 172216, "dataset_size": 410423.0}} | 2023-11-02T07:52:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sentiment_data_google"
Dataset for sentiment analysis on sentence level
Here we used google API to get doc level and sentiment level Score
* id2label = {0: "NEGATIVE", 1: "POSITIVE",2:"NEUTRAL"}
* label2id = {"NEGATIVE": 0, "POSITIVE": 1,"NEUTRAL":2}
More Information needed | [
"# Dataset Card for \"sentiment_data_google\"\n\nDataset for sentiment analysis on sentence level\nHere we used google API to get doc level and sentiment level Score\n\n* id2label = {0: \"NEGATIVE\", 1: \"POSITIVE\",2:\"NEUTRAL\"}\n* label2id = {\"NEGATIVE\": 0, \"POSITIVE\": 1,\"NEUTRAL\":2}\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sentiment_data_google\"\n\nDataset for sentiment analysis on sentence level\nHere we used google API to get doc level and sentiment level Score\n\n* id2label = {0: \"NEGATIVE\", 1: \"POSITIVE\",2:\"NEUTRAL\"}\n* label2id = {\"NEGATIVE\": 0, \"POSITIVE\": 1,\"NEUTRAL\":2}\n\nMore Information needed"
]
| [
6,
94
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sentiment_data_google\"\n\nDataset for sentiment analysis on sentence level\nHere we used google API to get doc level and sentiment level Score\n\n* id2label = {0: \"NEGATIVE\", 1: \"POSITIVE\",2:\"NEUTRAL\"}\n* label2id = {\"NEGATIVE\": 0, \"POSITIVE\": 1,\"NEUTRAL\":2}\n\nMore Information needed"
]
|
4ab56c241a3ad3dce43b3746f6bb644e3212d36f |
## Dataset Details
This is a dataset of disease names, their definitions and descriptions.
The information is extracted from the Disease Ontology.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Disease(DOID)** [More Information Needed]
- **Name** [More Information Needed]
- **Definition** [More Information Needed]
- **Synonym** [More Information Needed]
[More Information Needed] | QuyenAnhDE/medical | [
"language:en",
"medical",
"region:us"
]
| 2023-11-02T07:52:31+00:00 | {"language": ["en"], "tags": ["medical"]} | 2023-11-02T08:43:53+00:00 | []
| [
"en"
]
| TAGS
#language-English #medical #region-us
|
## Dataset Details
This is a dataset of disease names, their definitions and descriptions.
The information is extracted from the Disease Ontology.
### Dataset Description
- Disease(DOID)
- Name
- Definition
- Synonym
| [
"## Dataset Details\nThis is a dataset of disease names, their definitions and descriptions.\n\nThe information is extracted from the Disease Ontology.",
"### Dataset Description\n\n\n\n\n- Disease(DOID) \n- Name \n- Definition \n- Synonym"
]
| [
"TAGS\n#language-English #medical #region-us \n",
"## Dataset Details\nThis is a dataset of disease names, their definitions and descriptions.\n\nThe information is extracted from the Disease Ontology.",
"### Dataset Description\n\n\n\n\n- Disease(DOID) \n- Name \n- Definition \n- Synonym"
]
| [
13,
32,
18
]
| [
"passage: TAGS\n#language-English #medical #region-us \n## Dataset Details\nThis is a dataset of disease names, their definitions and descriptions.\n\nThe information is extracted from the Disease Ontology.### Dataset Description\n\n\n\n\n- Disease(DOID) \n- Name \n- Definition \n- Synonym"
]
|
042c0711f4682a25887800c8889fccf0de6b3f44 | # Dataset Card for "newsqa-retrieved-ce-chunk-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | legacy107/newsqa-retrieved-ce-chunk-100 | [
"region:us"
]
| 2023-11-02T07:52:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "key", "dtype": "string"}, {"name": "labels", "list": [{"name": "end", "sequence": "int64"}, {"name": "start", "sequence": "int64"}]}, {"name": "document_id", "dtype": "int64"}, {"name": "retrieved_context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 506360128, "num_examples": 69960}, {"name": "validation", "num_bytes": 31115876, "num_examples": 4200}, {"name": "test", "num_bytes": 30314274, "num_examples": 4212}], "download_size": 80627687, "dataset_size": 567790278}} | 2023-11-02T07:53:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "newsqa-retrieved-ce-chunk-100"
More Information needed | [
"# Dataset Card for \"newsqa-retrieved-ce-chunk-100\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"newsqa-retrieved-ce-chunk-100\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"newsqa-retrieved-ce-chunk-100\"\n\nMore Information needed"
]
|
bbe8e460d17d2aa25e5ffc42241491f0f8300c5b | # Dataset Card for "CommentwordExpo_Eng"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nathamon/CommentwordExpo_Eng | [
"region:us"
]
| 2023-11-02T08:17:09+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "cleaned_sentence", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2858226, "num_examples": 12407}], "download_size": 1570070, "dataset_size": 2858226}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T08:17:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CommentwordExpo_Eng"
More Information needed | [
"# Dataset Card for \"CommentwordExpo_Eng\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CommentwordExpo_Eng\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CommentwordExpo_Eng\"\n\nMore Information needed"
]
|
53a8086f585841f4518f0e3d5519cfe29e1e5621 | # Dataset Card for "CommentwordExpo_Thai"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nathamon/CommentwordExpo_Thai | [
"region:us"
]
| 2023-11-02T08:17:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "cleaned_sentence", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10372596, "num_examples": 19208}], "download_size": 4474122, "dataset_size": 10372596}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T08:17:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CommentwordExpo_Thai"
More Information needed | [
"# Dataset Card for \"CommentwordExpo_Thai\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CommentwordExpo_Thai\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CommentwordExpo_Thai\"\n\nMore Information needed"
]
|
ff072b35a083636c209aca7bcc59d34c45100e6d | # Dataset Card for "ola_polyglot_1.3B_t2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eunbinni/ola_polyglot_1.3B_t2_data | [
"region:us"
]
| 2023-11-02T08:18:17+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91136318, "num_examples": 22214}], "download_size": 47121283, "dataset_size": 91136318}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T08:18:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ola_polyglot_1.3B_t2_data"
More Information needed | [
"# Dataset Card for \"ola_polyglot_1.3B_t2_data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ola_polyglot_1.3B_t2_data\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ola_polyglot_1.3B_t2_data\"\n\nMore Information needed"
]
|
0df619a481e2bd9ca903611e543e9ddd60345271 | # Dataset Card for "sst_keywords_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sunhaozhepy/sst_llm_keywords_embeddings | [
"region:us"
]
| 2023-11-02T08:25:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "float32"}, {"name": "tokens", "dtype": "string"}, {"name": "tree", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "keywords_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 29449976, "num_examples": 8544}, {"name": "validation", "num_bytes": 3798043, "num_examples": 1101}, {"name": "test", "num_bytes": 7617749, "num_examples": 2210}], "download_size": 47140795, "dataset_size": 40865768}} | 2023-11-02T08:25:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sst_keywords_embeddings"
More Information needed | [
"# Dataset Card for \"sst_keywords_embeddings\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sst_keywords_embeddings\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sst_keywords_embeddings\"\n\nMore Information needed"
]
|
a9bb7cb7835df37ac563335b604358d0561ca99d |
# Computed Tomography (CT) of the Abdomen
The dataset contains a collection of CT (Computed Tomography) Abdomen scans in both **.jpg and .dcm** (DICOM) formats. These scans are used to capture detailed images of the abdominal region, providing insights into various **abdominal conditions and abnormalities**.

### Types of diseases and conditions in the dataset:
- **Abdominal aorta dilatation**
- **Aneurysmal dilatation of aorta**
- **Aortic aneurysm**
- **Cancer**
- **Formation of adrenal gland**
- **Kidney development**
- **Liver formation**
- **Urolithiasis**
- **Vertebral compression fracture**
The dataset provides a comprehensive resource for studying and developing algorithms or models for **automatic detection, classification, and treatment of various abdominal conditions and abnormalities**.
# Get the Dataset
## This is just an example of the data
Leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/abdomen-pelvis-ct?utm_source=huggingface&utm_medium=cpc&utm_campaign=ct-of-the-abdomen) to discuss your requirements, learn about the price and buy the dataset
# Content
### The folder "files" includes 9 folders:
- corresponding to the name of the disease and including CT scans of people with this disease (**abdominal aorta dilatation, aneurysmal dilatation of aorta, aortic aneurysm, cancer, formation of adrenal gland, kidney development, liver formation, urolithiasis and vertebral compression fracture**)
- including scans in 2 different formats: **.jpg and .dcm**.
### File with the extension .csv includes the following information for each media file:
- **dcm**: link to access the .dcm file,
- **jpg**: link to access the .jpg file,
- **type**: name of the disease on the CT scan
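A minimal sketch of reading such a .csv and one referenced scan (hypothetical: the column names come from the list above, while the local file path and the use of pandas/pydicom are assumptions):
```python
import pandas as pd
import pydicom

df = pd.read_csv("abdomen_ct.csv")  # hypothetical path; columns: dcm, jpg, type
row = df.iloc[0]
print(row["type"])  # disease name shown on the scan

# If "dcm" is a remote link, download it to a local file first.
scan = pydicom.dcmread(row["dcm"])
print(scan.pixel_array.shape)
```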
# Medical data might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market/abdomen-pelvis-ct?utm_source=huggingface&utm_medium=cpc&utm_campaign=ct-of-the-abdomen) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro**
*keywords: ct abdomen scans, ct abdomen classification, ct abdomen detection, abdominal aorta dilatation, aneurysmal dilatation of aorta, aortic aneurysm, cancer dataset, formation of adrenal gland dataset, kidney development dataset, liver formation, urolithiasis, vertebral compression fracture, cancer detection, cancer segmentation, cancer classification, liver formation dataset* | TrainingDataPro/computed-tomography-ct-of-the-abdomen | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"medical",
"code",
"region:us"
]
| 2023-11-02T08:26:37+00:00 | {"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-classification", "image-to-image", "image-segmentation"], "tags": ["medical", "code"]} | 2023-11-02T08:31:10+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #task_categories-image-to-image #task_categories-image-segmentation #language-English #license-cc-by-nc-nd-4.0 #medical #code #region-us
|
# Computed Tomography (CT) of the Chest
The dataset contains a collection of CT (Computed Tomography) Abdomen scans in both .jpg and .dcm (DICOM) formats. These scans are used to capture detailed images of the abdominal region, providing insights into various abdominal conditions and abnormalities.

- including scans in 2 different formats: .jpg and .dcm.
### File with the extension .csv includes the following information for each media file:
- dcm: link to access the .dcm file,
- jpg: link to access the .jpg file,
- type: name of the disease on the ct
# Medical data might be collected in accordance with your requirements.
## TrainingData provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: URL
TrainingData's GitHub: URL
*keywords: ct abdomen scans, ct abdomen classification, ct abdomen detection, abdominal aorta dilatation, aneurysmal dilatation of aorta, aortic aneurysm, cancer dataset, formation of adrenal gland dataset, kidney development dataset, liver formation, urolithiasis, vertebral compression fracture, cancer detection, cancer segmentation, cancer classification, liver formation dataset* | [
"# Computed Tomography (CT) of the Chest\n\nThe dataset contains a collection of CT (Computed Tomography) Abdomen scans in both .jpg and .dcm (DICOM) formats. These scans are used to capture detailed images of the abdominal region, providing insights into various abdominal conditions and abnormalities.\n\n\n- including scans in 2 different formats: .jpg and .dcm.",
"### File with the extension .csv includes the following information for each media file:\n\n- dcm: link to access the .dcm file,\n- jpg: link to access the .jpg file, \n- type: name of the disease on the ct",
"# Medical data might be collected in accordance with your requirements.",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL\n\n*keywords: ct abdomen scans, ct abdomen classification, ct abdomen detection, abdominal aorta dilatation, aneurysmal dilatation of aorta, aortic aneurysm, cancer dataset, formation of adrenal gland dataset, kidney development dataset, liver formation, urolithiasis, vertebral compression fracture, cancer detection, cancer segmentation, cancer classification, liver formation dataset*"
]
| [
"TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-image-segmentation #language-English #license-cc-by-nc-nd-4.0 #medical #code #region-us \n",
"# Computed Tomography (CT) of the Chest\n\nThe dataset contains a collection of CT (Computed Tomography) Abdomen scans in both .jpg and .dcm (DICOM) formats. These scans are used to capture detailed images of the abdominal region, providing insights into various abdominal conditions and abnormalities.\n\n\n- including scans in 2 different formats: .jpg and .dcm.",
"### File with the extension .csv includes the following information for each media file:\n\n- dcm: link to access the .dcm file,\n- jpg: link to access the .jpg file, \n- type: name of the disease on the ct",
"# Medical data might be collected in accordance with your requirements.",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL\n\n*keywords: ct abdomen scans, ct abdomen classification, ct abdomen detection, abdominal aorta dilatation, aneurysmal dilatation of aorta, aortic aneurysm, cancer dataset, formation of adrenal gland dataset, kidney development dataset, liver formation, urolithiasis, vertebral compression fracture, cancer detection, cancer segmentation, cancer classification, liver formation dataset*"
]
| [
63,
81,
110,
5,
28,
2,
111,
56,
13,
147
]
| [
"passage: TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-image-segmentation #language-English #license-cc-by-nc-nd-4.0 #medical #code #region-us \n# Computed Tomography (CT) of the Chest\n\nThe dataset contains a collection of CT (Computed Tomography) Abdomen scans in both .jpg and .dcm (DICOM) formats. These scans are used to capture detailed images of the abdominal region, providing insights into various abdominal conditions and abnormalities.\n\n\n- including scans in 2 different formats: .jpg and .dcm.### File with the extension .csv includes the following information for each media file:\n\n- dcm: link to access the .dcm file,\n- jpg: link to access the .jpg file, \n- type: name of the disease on the ct# Medical data might be collected in accordance with your requirements."
]
|
cc9c3da68190c2eb207f058803a54c557f397126 | # Dataset Card for "ola_llama2_13B_t0_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eunbinni/ola_llama2_13B_t0_data | [
"region:us"
]
| 2023-11-02T08:30:16+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1488820093, "num_examples": 1185577}], "download_size": 856591874, "dataset_size": 1488820093}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T08:31:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ola_llama2_13B_t0_data"
More Information needed | [
"# Dataset Card for \"ola_llama2_13B_t0_data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ola_llama2_13B_t0_data\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ola_llama2_13B_t0_data\"\n\nMore Information needed"
]
|
8bd7842ef1eed4dce5c79f45791dcdee4af9881a |
This repo contains the dataset used in RFTrans, generated by the [Data Generator](https://github.com/LJY-XCX/Unity-RefractiveFlowRender), powered by [RFUniverse](https://github.com/mvig-robotflow/rfuniverse).
`train.zip` contains the data required to train the networks. `intermediate.zip` contains the intermediate results from the data generation process, including IR images and gray-coded images.
| robotflow/rftrans | [
"license:mit",
"region:us"
]
| 2023-11-02T08:36:13+00:00 | {"license": "mit"} | 2023-11-02T09:32:11+00:00 | []
| []
| TAGS
#license-mit #region-us
|
This repo contains the dataset used in RFTrans, generated by the Data Generator, powered by RFUniverse.
'URL' contains the required data used for train the networks. 'URL' contains the intermediate results during the data generation process, including ir images and gray-coded images.
| []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
d06f2b069db182e43b55f1ca454589f66530eb26 | ## Dataset Details
The data was sourced from various medical websites accessible through Google search.
Dataset Information: 400 x 4
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Code** [More Information Needed]
- **Name:** [More Information Needed]
- **Symptoms** [More Information Needed]
- **Treatments** [More Information Needed]
| QuyenAnhDE/Diseases_Symptoms | [
"region:us"
]
| 2023-11-02T08:39:12+00:00 | {} | 2023-11-02T08:44:36+00:00 | []
| []
| TAGS
#region-us
| ## Dataset Details
The data was sourced from various medical websites accessible through Google search.
Dataset Information: 400 x 4
### Dataset Description
- Code
- Name:
- Symptoms
- Treatments
| [
"## Dataset Details\nThe data was sourced from various medical websites accessible through Google search.\n\nDataset Information: 400 x 4",
"### Dataset Description\n\n\n\n\n- Code \n- Name: \n- Symptoms \n- Treatments"
]
| [
"TAGS\n#region-us \n",
"## Dataset Details\nThe data was sourced from various medical websites accessible through Google search.\n\nDataset Information: 400 x 4",
"### Dataset Description\n\n\n\n\n- Code \n- Name: \n- Symptoms \n- Treatments"
]
| [
6,
25,
16
]
| [
"passage: TAGS\n#region-us \n## Dataset Details\nThe data was sourced from various medical websites accessible through Google search.\n\nDataset Information: 400 x 4### Dataset Description\n\n\n\n\n- Code \n- Name: \n- Symptoms \n- Treatments"
]
|
34232eba47d0e06e2a206e66928e58b795552728 | # Dataset Card for "chemnlp-robocrys"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kjappelbaum/chemnlp-robocrys | [
"region:us"
]
| 2023-11-02T08:40:17+00:00 | {"dataset_info": {"features": [{"name": "formula", "dtype": "string"}, {"name": "spg_symbol", "dtype": "string"}, {"name": "crystal_system", "dtype": "string"}, {"name": "dimensionality", "dtype": "int64"}, {"name": "gga_gga+u_r2scan_energy_above_hull", "dtype": "null"}, {"name": "gga_gga+u_r2scan_formation_energy_per_atom", "dtype": "null"}, {"name": "gga_gga+u_energy_above_hull", "dtype": "null"}, {"name": "gga_gga+u_formation_energy_per_atom", "dtype": "null"}, {"name": "description", "dtype": "string"}, {"name": "description_w_bondlengths", "dtype": "string"}, {"name": "cifstr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 785364472, "num_examples": 117576}], "download_size": 185853489, "dataset_size": 785364472}} | 2023-11-02T08:43:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "chemnlp-robocrys"
More Information needed | [
"# Dataset Card for \"chemnlp-robocrys\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"chemnlp-robocrys\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"chemnlp-robocrys\"\n\nMore Information needed"
]
|
f7673accff0a72ddd556a1c12c27726ae2edfe00 | # Dataset Card for "llama2_7b_fine_tuning_complete_dataset_v7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hemantk089/llama2_7b_fine_tuning_complete_dataset_v7 | [
"region:us"
]
| 2023-11-02T09:11:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 301722, "num_examples": 813}, {"name": "test", "num_bytes": 72617, "num_examples": 204}], "download_size": 107905, "dataset_size": 374339}} | 2023-11-02T09:11:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "llama2_7b_fine_tuning_complete_dataset_v7"
More Information needed | [
"# Dataset Card for \"llama2_7b_fine_tuning_complete_dataset_v7\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"llama2_7b_fine_tuning_complete_dataset_v7\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"llama2_7b_fine_tuning_complete_dataset_v7\"\n\nMore Information needed"
]
|
dc4f737de7fb9ac91b28e64ec11315bc372d063b | # Dataset Card for "ag_news_keywords_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sunhaozhepy/ag_news_llm_keywords_embeddings | [
"region:us"
]
| 2023-11-02T09:37:36+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "World", "1": "Sports", "2": "Business", "3": "Sci/Tech"}}}}, {"name": "keywords", "dtype": "string"}, {"name": "keywords_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 404285730, "num_examples": 120000}, {"name": "test", "num_bytes": 25596494, "num_examples": 7600}], "download_size": 493524393, "dataset_size": 429882224}} | 2023-11-02T09:37:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ag_news_keywords_embeddings"
More Information needed | [
"# Dataset Card for \"ag_news_keywords_embeddings\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ag_news_keywords_embeddings\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ag_news_keywords_embeddings\"\n\nMore Information needed"
]
|
5f4ffab2bd96534bf93409b32e3864e7f22357ed |
# Bangumi Image Base of Saenai Heroine No Sodatekata
This is the image base of bangumi Saenai Heroine no Sodatekata; we detected 26 characters and 3436 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; some noisy samples may remain.** If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (approximately 1% of samples).
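As an illustration of that preprocessing step, the sketch below drops unreadable or implausibly small files from an extracted character folder; the `.png` extension and the size threshold are assumptions rather than properties documented by this dataset.

```python
from pathlib import Path
from PIL import Image

def keep_clean_images(folder: str, min_side: int = 64) -> list:
    """Return paths of images that open correctly and are at least min_side px on their short edge."""
    kept = []
    for path in sorted(Path(folder).glob("*.png")):  # extension assumed; adjust to the actual files
        try:
            with Image.open(path) as img:
                if min(img.size) >= min_side:
                    kept.append(path)
        except OSError:
            # Corrupt or truncated file: treat it as one of the ~1% noisy samples.
            continue
    return kept
```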
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 195 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 982 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 77 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 24 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 14 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 126 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 411 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 35 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 84 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 137 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 269 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 21 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 75 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 17 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 77 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 37 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 10 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 15 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 516 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 65 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 11 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 6 | [Download](23/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 24 | 6 | [Download](24/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 188 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/saenaiheroinenosodatekata | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-02T09:37:52+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-02T11:33:20+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Saenai Heroine No Sodatekata
==================================================
This is the image base of bangumi Saenai Heroine no Sodatekata; we detected 26 characters and 3436 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; some noisy samples may remain. If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (approximately 1% of samples).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
b19b9b33ac529840454e92a8a2639eaee9abc975 | # Dataset Card for "chemnlp-ocp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kjappelbaum/chemnlp-ocp | [
"region:us"
]
| 2023-11-02T09:48:57+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "target", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 233206947, "num_examples": 100000}, {"name": "valid", "num_bytes": 57773992, "num_examples": 25000}], "download_size": 88580458, "dataset_size": 290980939}} | 2023-11-02T09:50:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "chemnlp-ocp"
More Information needed | [
"# Dataset Card for \"chemnlp-ocp\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"chemnlp-ocp\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"chemnlp-ocp\"\n\nMore Information needed"
]
|
5d7e7dd0cf0adac912e334dd068779ff0b855742 | <h2><a href="https://carehealthreview.blogspot.com/2023/11/burn-boost-official-usa-no-1-premium.html">Burn Boost – Official Website Link – Click Here</a></h2>
<h2><strong>►❱❱ Product Name - <a href="https://myhealthfitnessmart.blogspot.com/2023/11/burn-boost-reviews-usacanadagold-vida.html">Burn Boost</a></strong><br /><strong>►❱❱ Side Effects - No Major Side Effects</strong><br /><strong>►❱❱ Category - Health</strong><br /><strong>►❱❱ Results - In 1-2 Months</strong><br /><strong>►❱❱ Availability – <a href="https://www.globalfitnessmart.com/get-burn-boost">Online</a></strong><br /><strong>►❱❱ Rating: - 5.0/5.0 ⭐⭐⭐⭐⭐</strong><br /><strong>►❱❱ Where to Get Bottle Online - <a href="https://www.globalfitnessmart.com/get-burn-boost">www.burnboost.com</a><br /></strong></h2>
<h2 id="d6d1" class="km kn ev be ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le lf lg lh li lj bj"><a href="https://www.globalfitnessmart.com/get-burn-boost"><strong>➡️Hurry Up — Limited Time Offer — Purchase Now➡️</strong></a><br /><a href="https://www.globalfitnessmart.com/get-burn-boost"><strong>➡️Hurry Up — Limited Time Offer — Purchase Now➡️</strong></a><br /><a href="https://www.globalfitnessmart.com/get-burn-boost"><strong>➡️Hurry Up — Limited Time Offer — Purchase Now➡️</strong></a></h2>
<p><a href="https://www.scoop.it/topic/burn-boost-by-burnboost-usa"><strong>Burn Boost</strong></a> is a nutritional weight loss supplement that promises to help you lose belly fat and curb your craving for unhealthy foods. Read more about ingredients, benefits, side effects, price, and more.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-burn-boost"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtBjnwMA05-wmCUbdkJ5zrukFQI3l6BXSVdb7ZBi5Kd-BIcGEIzoNyVAQGn1ixV6DaWs4G3pcsv_7DSto-uoEhaOBTocBNrQx7JTjGhJUh8iTFmJEVO_Yt8JCEazn4W18Kg-I5QKicHJYSHDCmUg6Dk_lrCXkjngZjJki8TU93G3BX_9K2bVAmDEUmZUnA/w640-h640/6bottle.png" alt="" width="640" height="640" border="0" data-original-height="450" data-original-width="450" /></a></div>
<h2><strong>What is a <a href="https://gold-vida-official.clubeo.com/calendar/2023/11/03/burn-boost-is-legit-2023-updated-report">Burn Boost</a> supplement?<br /></strong></h2>
<p><a href="https://gold-vida-official.clubeo.com/page/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not.html"><strong>Burn Boost</strong></a> by Gold Vida is the freshest dietary supplement that supports fat-burning, hydration, cognition, and energy.</p>
<p>It has several blends, including a recovery blend, cognitive blend, energy blend, and hydration blend, along with various vitamins and minerals. This is the only formula that can turn on the lipolysis switch and help you burn more calories without strenuous workouts and exercises. With <a href="https://gold-vida-official.clubeo.com/page/burn-boost-reviews-usa-canada-gold-vida-official-website.html"><strong>Burn Boost</strong></a> you never have to rely on diets and calorie counting.</p>
<p><a href="https://groups.google.com/g/burn-boost-official/c/pr2713KjAtM"><strong>Burn Boost</strong></a> is the only formula on the market with exotic ingredients and nutrients that trigger fat burning immediately. The formula works equally for men and women who are obese and have overweight issues. It is manufactured in the USA under certified labs with the latest technology. The formula is 100% natural, GMO-free, gluten-free, dairy-free, soy-free, and vegetarian as well. Hence, there are no side effects of consuming <a href="https://gold-vida-official.clubeo.com/"><strong>Burn Boost</strong></a> at all.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.globalfitnessmart.com/get-burn-boost"><strong>SPECIAL PROMO[Limited Discount]: "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong>How does the <a href="https://colab.research.google.com/drive/1zcCHSHqdkc4_whOpViu4037m8WMlFDvr">Burn Boost</a> work?</strong></h2>
<p>After a few weeks of regularly taking <a href="https://lookerstudio.google.com/u/0/reporting/4d01e808-003e-4d48-89e9-a0b8e2994711/page/4j8gD"><strong>Burn Boost</strong></a>, you are able to decrease your body weight, BMI, body fat mass, and weight circumference. But how does it work?</p>
<p><a href="https://burn-boost-review.company.site/"><strong>Burn Boost</strong></a> supplies the body with the needed vitamins, nutrients, and a specific type of antioxidant that can boost the fat-burning process in the body and help you shed off pounds without having to run 5 miles a day, exercise or diet.<a href="https://www.scoop.it/topic/burn-boost-reviews-usa-canada-gold-vida-official-website"><strong>Burn Boost</strong></a> keeps your body hydrated and when the body has enough water, it is less likely to store fat. Hydration is a significant factor to consider when you are trying to lose weight.</p>
<p>The ingredients that are added in <a href="https://gamma.app/public/Burn-Boost-Reviews---USACANADAGOLD-VIDA-Official-Website-yf6h2y4h44erh79?mode=doc"><strong>Burn Boost</strong> </a>helps you stay hydrated all the name which prevents the body from storing fat cells.<a href="https://burn-boost-22.jimdosite.com/"><strong>Burn Boost</strong></a> can help regulate your insulin production and control your blood sugar levels. It also maintains healthy levels of blood pressure and cholesterol. <a href="https://www.scoop.it/topic/burn-boost-by-burnboost-usa"><strong>Burn Boost</strong></a> reduces the risks of heart diseases.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-burn-boost"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoEJW75fF600lzZr-DVrtYJoUL9Qapu1c3_iWI8yKjexBYRemOytoubDD_QqXt3OeMqgKcFPnFpQmKHQClxJtO-_xOAYZX_5exzcTymj7kudvFDzbWB1lqwz83M5YYxOJBJykqw4ZluQoPLJ_qwNkJ9Usy5kuMHPCOlX_eLmsubrf4ykWHOACdj0OfM9Gy/w640-h278/Burn%20Boost%20003.jpg" alt="" width="640" height="278" border="0" data-original-height="278" data-original-width="640" /></a></div>
<h2><strong>Ingredients used in <a href="https://devfolio.co/@burnboostusa">Burn Boost</a> supplement</strong></h2>
<p>According to the website of <a href="https://www.eventcreate.com/e/burn-boost"><strong>Burn Boost</strong></a>, the combination of ingredients they use is effective for losing weight. These are the natural ingredients added to the <a href="https://burn-boost-review.webflow.io/"><strong>Burn Boost</strong></a> supplement.</p>
<p><strong>#Coconut Water Powder:</strong> This secret ingredient is used in the weight loss supplement, it is able to keep your body hydrated and prevent it from storing fat cells. It also provides nutrients that can support your overall health.</p>
<p><strong>#Green Tea:</strong> This type of tea contains the highest number of catechins which is a special type of antioxidant that can help you lose weight fast. It supports your digestive system and speeds up your metabolic rate. It also provides relaxing properties that relieve you from stress.</p>
<p><strong>#Guarana:</strong> This ingredient is added in <a href="https://www.bitsdujour.com/view/burn-boost-reviews-viralscam-or-legit-is-it-work-or-not#comments95612"><strong>Burn Boost</strong></a> that can increase your fat-burning metabolism. It increases your genes that can slow down your fat cell production. It also regulates blood pressure, blood sugar, and cholesterol levels in the body.</p>
<p><strong>#Green Coffee Beans:</strong> This contains polyphenol antioxidants that not just cleanse the body from toxins but also increase the reduction of fat absorption in your gut. It changes the process of the body to burn off the belly fat right away and prevent it from storing fat and sugar. Green Coffee Beans can maintain your blood sugar and blood pressure levels at a normal and healthy rate.</p>
<p><strong>#Glutamine:</strong> This ingredient supports your lean muscle tissue. It can help you burn more calories and reduce your belly fat. Glutamine can also decrease your appetite and cravings to prevent you from eating more.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.globalfitnessmart.com/get-burn-boost"><strong>(EXCLUSIVE OFFER)Click Here : "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong>What are the benefits of consuming <a href="https://burnboost.bandcamp.com/track/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not">Burn Boost</a> Gold Vida?</strong></h2>
<p>● It helps switch on the lipolysis process, which helps your body actively burn fat.</p>
<p>● It boosts metabolism and digestive functions.</p>
<p>● It helps you attain a toned and flat belly.</p>
<p>● It regulates blood pressure, sugar, and cholesterol regularly.</p>
<p>● It boosts your energy levels daily.</p>
<p>● It helps your cells release fat on a daily basis so you can lose visceral fat too .</p>
<p>● It prevents bloating, gas, cramps, and digestive issues too.</p>
<p>● It helps you burn calories without exercising much.</p>
<p>● It helps curb your cravings and hunger.</p>
<p>● It controls your appetite as well.</p>
<p>● It keeps you feeling full and satiated, so you naturally eat limited food.</p>
<p>● It helps your body absorb nutrients from everything you eat.</p>
<p>● It also boosts cognitive functions and skills.</p>
<p>● It keeps you hydrated, which is great for people who love working out.</p>
<p>● It helps you lose 30 or more pounds within just three months without exercising.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-burn-boost"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBi_aV72sY9pf41OrmKmVrcJwN9f0YecMfVb0_fr_xmwL6qC6Ns5gI9BWM7MuCuDokNuUEIi58H1Y385H20iqmHqOXoP6N83zFZ5USEcwgAJTTojH3pq74_w3aJXfZ98aoUMNhpdz23FGWu3MqIeGBAvgR1GiW0VFV2bJyXKg_GSgN-mMt3lrq7RVMEHZd/w640-h426/image001.jpg" alt="" width="640" height="426" border="0" data-original-height="1365" data-original-width="2048" /></a></div>
<h2><strong>How should you consume <a href="https://soundcloud.com/burnboostusa/burn-boost-is-legit-2023-updated-report">Burn Boost</a> Powder?</strong></h2>
<p><a href="https://bitbucket.org/burn-boost/burn-boost/issues/1/burn-boost-official-usa-no-1-premium-fat"><strong>Burn Boost</strong></a> is available in a powder form supplement containing so many important nutrients. Each jar of Burn Boost contains a month’s worth of formula for weight loss, cognition, hydration, and improved energy levels.</p>
<p>You should add a scoop of <a href="https://community.thermaltake.com/index.php?/topic/363076-burn-boost-official-usa-no-1-premium-fat-burner-booster-powerweight-loss-supplement/"><strong>Burn Boost</strong></a> to water, tea, or coffee. You can take the formula before lunch and after dinner.Try taking the formula once a day at first, but if you have more than 30 pounds to lose, you can add it before lunch and after dinner too.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-burn-boost"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdG-GMzTtxVrkrqqwfV_dpfdD-tVbsPnyJwyiTssXsiS1j5rIUNvPB47F6Wmtdx-IhjTgoX92dsyNr8AM4KPHxZMUshHzZGISGzqGj_0521EJC6gUCYRIJ_VvdGUsiD6R7Z-Kd5Pjt7eQICB34_246iLVmb3or2ng4NYIpsNYq0fQDSnYVU-w8gUeym9MG/w640-h512/Burn%20Boost%20005.jpg" alt="" width="640" height="512" border="0" data-original-height="475" data-original-width="594" /></a></div>
<h2><strong>What is the price of <a href="https://www.deviantart.com/burnboostusa/art/Burn-Boost-Official-991893148">Burn Boost</a>? Where should one buy it from?</strong></h2>
<p><a href="https://forums.hitched.co.uk/chat/forums/thread/burn-boost-officialusa-no1-premium-fat-burner-booster-powerweight-loss-supplement-1141963/"><strong>Burn Boost</strong></a> is only available for purchase on its official website. You can’t buy it from any other website, even if you look for <a href="https://forums.hitched.co.uk/chat/forums/thread/burn-boost-reviews-2023-usacanadagold-vida-official-website-1141957/"><strong>Burn Boost</strong></a> Amazon, Burn Boost USA, or Burn Boost Canada online.</p>
<p>Avoid buying supplements from other websites to avoid scams. <a href="https://experiment.com/projects/xrckqkglftsdwmnuowyq/methods"><strong>Burn Boost</strong></a> is available at discounted rates today:</p>
<p><strong>● Buy one bottle of <a href="https://bitbucket.org/burn-boost/burn-boost/issues/2/burn-boost-reviews-2023-usa-canada-gold">Burn Boost</a> for just $59.</strong></p>
<p><strong>● Buy three bottles of <a href="https://bitbucket.org/burn-boost/burn-boost/issues/1/burn-boost-official-usa-no-1-premium-fat">Burn Boost </a>for just $147 ($49 each).</strong></p>
<p><strong>● Buy six bottles of <a href="https://burnboost.bandcamp.com/track/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not">Burn Boost</a> for just $234 ($39 each).</strong></p>
<p>You must pay a small shipping fee. There are no subscriptions, and there’s only a one-time payment. Also, there is a 60-day 100% money-back guarantee on all purchases.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.globalfitnessmart.com/get-burn-boost"><strong>SPECIAL PROMO[Limited Discount]: "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong><a href="https://www.bitsdujour.com/view/burn-boost-reviews-viralscam-or-legit-is-it-work-or-not#comments95612">Burn Boost</a> Reviews - Final Verdict</strong></h2>
<p><a href="https://burn-boost-review.webflow.io/"><strong>Burn Boost</strong></a> is the only natural formula that has so many natural nutrients that are directly linked with fat loss.</p>
<p>You can consume <a href="https://community.thermaltake.com/index.php?/topic/363074-burn-boost-reviews-2023-usacanada%E3%80%90gold-vida-official-website%E3%80%91/"><strong>Burn Boost</strong></a> without any prescription or consultation, as it is 100% natural and causes no side effects. <strong><a href="https://forums.hitched.co.uk/chat/forums/thread/burn-boost-officialusa-no1-premium-fat-burner-booster-powerweight-loss-supplement-1141963/">Burn Boost</a></strong> is suitable for all adults and can be taken on a daily basis without any risks.It helps you lose fat, boost cognitive functions, improve energy levels, and boost hydration as well. Without doing much, you can burn 211 calories or more. <a href="https://www.deviantart.com/burnboostusa/art/Burn-Boost-Reviews-USA-991892862"><strong>Burn Boost</strong></a> is backed with a 60-day money-back guarantee for a safe purchase.</p>
<h2><strong>FAQ of <a href="https://forums.hitched.co.uk/chat/forums/thread/burn-boost-reviews-2023-usacanadagold-vida-official-website-1141957/">Burn Boost</a> Reviews:</strong></h2>
<p>Here are some Frequently Asked Questions:</p>
<p><strong>How to prepare <a href="https://experiment.com/projects/xrckqkglftsdwmnuowyq/methods">Burn Boost</a>?</strong></p>
<p>Get 1 scoop of <a href="https://soundcloud.com/burnboostusa/burn-boost-is-legit-2023-updated-report"><strong>Burn Boost</strong></a> supplement and add it into your water. Mix properly.</p>
<p><strong>How often should you take <a href="https://burnboost.bandcamp.com/track/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not">Burn Boost</a>?</strong></p>
<p>It is recommended to take the <a href="https://www.bitsdujour.com/view/burn-boost-reviews-viralscam-or-legit-is-it-work-or-not#comments95612"><strong>Burn Boost</strong></a> supplement thrice a day. You are able to increase your metabolism with this recommended dose.</p>
<p><strong>Is it safe to take?</strong></p>
<p>Yes, the <a href="https://www.eventcreate.com/e/burn-boost"><strong>Burn Boost</strong></a> supplement is safe to take. There are no harmful effects or adverse reactions to your health or body.</p>
<p><strong>How long will users experience optimal benefits?</strong></p>
<p>According to the official website of <a href="https://www.scoop.it/topic/burn-boost-reviews-usa-canada-gold-vida-official-website"><strong>Burn Boost</strong></a>, users are able to</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-burn-boost"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhx5Dp4HpG0VpjEOhhyphenhyphenYbaUTqUVR9LnnGY_L8LXflWhs7wyEmts9jJ94VM_XeTJ2AW2vcfZNSwTaG9gomk8fWUFLa2MIW7rcWMcMsJcqQH7woZTHy1FPxiCuLazOZj_6mIcoUYt0z5xBDdtD1O6FHXVLgrqhucw9tWwGTRLBLFVj80dp-Snz6EGnSnC4At/w640-h512/Burn%20Boost%20005.jpg" alt="" width="640" height="512" border="0" data-original-height="475" data-original-width="594" /></a></div>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.globalfitnessmart.com/get-burn-boost"><strong>SPECIAL PROMO: Get Burn Boost at the Lowest Discounted Price Online</strong></a></span></h2>
<h2><span style="color: #0000ff;"><strong># READ MORE</strong></span></h2>
<p><span style="color: #0000ff;"><strong><a href="https://carehealthreview.blogspot.com/2023/11/burn-boost-official-usa-no-1-premium.html">https://carehealthreview.blogspot.com/2023/11/burn-boost-official-usa-no-1-premium.html</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://myhealthfitnessmart.blogspot.com/2023/11/burn-boost-reviews-usacanadagold-vida.html">https://myhealthfitnessmart.blogspot.com/2023/11/burn-boost-reviews-usacanadagold-vida.html</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://gold-vida-official.clubeo.com/calendar/2023/11/03/burn-boost-is-legit-2023-updated-report">https://gold-vida-official.clubeo.com/calendar/2023/11/03/burn-boost-is-legit-2023-updated-report</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://gold-vida-official.clubeo.com/page/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not.html">https://gold-vida-official.clubeo.com/page/burn-boost-reviews-viral-scam-or-legit-is-it-work-or-not.html</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://gold-vida-official.clubeo.com/page/burn-boost-reviews-usa-canada-gold-vida-official-website.html">https://gold-vida-official.clubeo.com/page/burn-boost-reviews-usa-canada-gold-vida-official-website.html</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://gold-vida-official.clubeo.com/">https://gold-vida-official.clubeo.com/</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://groups.google.com/g/burn-boost-official/c/pr2713KjAtM">https://groups.google.com/g/burn-boost-official/c/pr2713KjAtM</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://sites.google.com/view/burn-boost-review-usa/home">https://sites.google.com/view/burn-boost-review-usa/home</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://gamma.app/public/Burn-Boost-Reviews---USACANADAGOLD-VIDA-Official-Website-yf6h2y4h44erh79?mode=doc">https://gamma.app/public/Burn-Boost-Reviews---USACANADAGOLD-VIDA-Official-Website-yf6h2y4h44erh79?mode=doc</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://burn-boost-22.jimdosite.com/">https://burn-boost-22.jimdosite.com/</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://www.scoop.it/topic/burn-boost-by-burnboost-usa">https://www.scoop.it/topic/burn-boost-by-burnboost-usa</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://www.scoop.it/topic/burn-boost-reviews-usa-canada-gold-vida-official-website">https://www.scoop.it/topic/burn-boost-reviews-usa-canada-gold-vida-official-website</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://burn-boost-review.company.site/">https://burn-boost-review.company.site/</a></strong></span></p>
<p><span style="color: #0000ff;"><strong><a href="https://lookerstudio.google.com/u/0/reporting/4d01e808-003e-4d48-89e9-a0b8e2994711/page/4j8gD">https://lookerstudio.google.com/u/0/reporting/4d01e808-003e-4d48-89e9-a0b8e2994711/page/4j8gD</a></strong></span></p>
<p><strong><a href="https://devfolio.co/@burnboostusa">https://devfolio.co/@burnboostusa</a></strong></p>
<p><strong><a href="https://www.eventcreate.com/e/burn-boost">https://www.eventcreate.com/e/burn-boost</a></strong></p>
<p><strong><a href="https://burn-boost-review.webflow.io/">https://burn-boost-review.webflow.io/</a></strong></p>
<p><strong><a href="https://www.bitsdujour.com/view/burn-boost-reviews-viralscam-or-legit-is-it-work-or-not#comments95612">https://www.bitsdujour.com/view/burn-boost-reviews-viralscam-or-legit-is-it-work-or-not#comments95612</a></strong></p>
<p><strong><a href="https://soundcloud.com/burnboostusa/burn-boost-is-legit-2023-updated-report">https://soundcloud.com/burnboostusa/burn-boost-is-legit-2023-updated-report</a></strong></p>
<p><strong><a href="https://bitbucket.org/burn-boost/burn-boost/issues/1/burn-boost-official-usa-no-1-premium-fat">https://bitbucket.org/burn-boost/burn-boost/issues/1/burn-boost-official-usa-no-1-premium-fat</a></strong></p>
<p><strong><a href="https://experiment.com/projects/xrckqkglftsdwmnuowyq/methods">https://experiment.com/projects/xrckqkglftsdwmnuowyq/methods</a></strong></p>
<p><strong><a href="https://www.deviantart.com/burnboostusa/art/Burn-Boost-Reviews-USA-991892862">https://www.deviantart.com/burnboostusa/art/Burn-Boost-Reviews-USA-991892862</a></strong></p>
<p><strong><a href="https://community.thermaltake.com/index.php?/topic/363074-burn-boost-reviews-2023-usacanada%E3%80%90gold-vida-official-website%E3%80%91/">https://community.thermaltake.com/index.php?/topic/363074-burn-boost-reviews-2023-usacanada%E3%80%90gold-vida-official-website%E3%80%91/</a></strong></p>
<p><strong><a href="https://forums.hitched.co.uk/chat/forums/thread/burn-boost-officialusa-no1-premium-fat-burner-booster-powerweight-loss-supplement-1141963/">https://forums.hitched.co.uk/chat/forums/thread/burn-boost-officialusa-no1-premium-fat-burner-booster-powerweight-loss-supplement-1141963/</a></strong></p> | burnboostusa/burn-boost | [
"region:us"
]
| 2023-11-02T10:02:18+00:00 | {} | 2023-11-02T10:02:34+00:00 | []
| []
| TAGS
#region-us
| <h2><a href="URL Boost – Official Website Link – Click Here</a></h2>
<h2><strong>► Product Name - <a href="URL Boost</a></strong><br /><strong>► Side Effects - No Major Side Effects</strong><br /><strong>► Category - Health</strong><br /><strong>► Results - In 1-2 Months</strong><br /><strong>► Availability – <a href="URL /><strong>► Rating: - 5.0/5.0 ⭐⭐⭐⭐⭐</strong><br /><strong>► Where to Get Bottle Online - <a href="URL /></strong></h2>
<h2 id="d6d1" class="km kn ev be ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le lf lg lh li lj bj"><a href="URL️Hurry Up — Limited Time Offer — Purchase Now️</strong></a><br /><a href="URL️Hurry Up — Limited Time Offer — Purchase Now️</strong></a><br /><a href="URL️Hurry Up — Limited Time Offer — Purchase Now️</strong></a></h2>
<p><a href="URL Boost</strong></a> is a nutritional weight loss supplement that promises to help you lose belly fat and curb your craving for unhealthy foods. Read more about ingredients, benefits, side effects, price, and more.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="640" border="0" data-original-height="450" data-original-width="450" /></a></div>
<h2><strong>What is a <a href="URL Boost</a> supplement?<br /></strong></h2>
<p><a href="URL Boost</strong></a> by Gold Vida is the freshest dietary supplement that supports fat-burning, hydration, cognition, and energy.</p>
<p>It has several blends, including a recovery blend, cognitive blend, energy blend, and hydration blend, along with various vitamins and minerals. This is the only formula that can turn on the lipolysis switch and help you burn more calories without strenuous workouts and exercises. With <a href="URL Boost</strong></a> you never have to rely on diets and calorie counting.</p>
<p><a href="URL Boost</strong></a> is the only formula on the market with exotic ingredients and nutrients that trigger fat burning immediately. The formula works equally for men and women who are obese and have overweight issues. It is manufactured in the USA under certified labs with the latest technology. The formula is 100% natural, GMO-free, gluten-free, dairy-free, soy-free, and vegetarian as well. Hence, there are no side effects of consuming <a href="URL Boost</strong></a> at all.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="URL PROMO[Limited Discount]: "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong>How does the <a href="URL Boost</a> work?</strong></h2>
<p>After a few weeks of regularly taking <a href="URL Boost</strong></a>, you are able to decrease your body weight, BMI, body fat mass, and weight circumference. But how does it work?</p>
<p><a href="URL Boost</strong></a> supplies the body with the needed vitamins, nutrients, and a specific type of antioxidant that can boost the fat-burning process in the body and help you shed off pounds without having to run 5 miles a day, exercise or diet.<a href="URL Boost</strong></a> keeps your body hydrated and when the body has enough water, it is less likely to store fat. Hydration is a significant factor to consider when you are trying to lose weight.</p>
<p>The ingredients that are added in <a href="URL Boost</strong> </a>helps you stay hydrated all the name which prevents the body from storing fat cells.<a href="URL Boost</strong></a> can help regulate your insulin production and control your blood sugar levels. It also maintains healthy levels of blood pressure and cholesterol. <a href="URL Boost</strong></a> reduces the risks of heart diseases.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="278" border="0" data-original-height="278" data-original-width="640" /></a></div>
<h2><strong>Ingredients used in <a href="URL Boost</a> supplement</strong></h2>
<p>According to the website of <a href="URL Boost</strong></a>, the combination of ingredients they use is effective for losing weight. These are the natural ingredients added to the <a href="URL Boost</strong></a> supplement.</p>
<p><strong>#Coconut Water Powder:</strong> This secret ingredient is used in the weight loss supplement, it is able to keep your body hydrated and prevent it from storing fat cells. It also provides nutrients that can support your overall health.</p>
<p><strong>#Green Tea:</strong> This type of tea contains the highest number of catechins which is a special type of antioxidant that can help you lose weight fast. It supports your digestive system and speeds up your metabolic rate. It also provides relaxing properties that relieve you from stress.</p>
<p><strong>#Guarana:</strong> This ingredient is added in <a href="URL Boost</strong></a> that can increase your fat-burning metabolism. It increases your genes that can slow down your fat cell production. It also regulates blood pressure, blood sugar, and cholesterol levels in the body.</p>
<p><strong>#Green Coffee Beans:</strong> This contains polyphenol antioxidants that not just cleanse the body from toxins but also increase the reduction of fat absorption in your gut. It changes the process of the body to burn off the belly fat right away and prevent it from storing fat and sugar. Green Coffee Beans can maintain your blood sugar and blood pressure levels at a normal and healthy rate.</p>
<p><strong>#Glutamine:</strong> This ingredient supports your lean muscle tissue. It can help you burn more calories and reduce your belly fat. Glutamine can also decrease your appetite and cravings to prevent you from eating more.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="URL OFFER)Click Here : "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong>What are the benefits of consuming <a href="URL Boost</a> Gold Vida?</strong></h2>
<p>● It helps switch on the lipolysis process, which helps your body actively burn fat.</p>
<p>● It boosts metabolism and digestive functions.</p>
<p>● It helps you attain a toned and flat belly.</p>
<p>● It regulates blood pressure, sugar, and cholesterol regularly.</p>
<p>● It boosts your energy levels daily.</p>
<p>● It helps your cells release fat on a daily basis so you can lose visceral fat too .</p>
<p>● It prevents bloating, gas, cramps, and digestive issues too.</p>
<p>● It helps you burn calories without exercising much.</p>
<p>● It helps curb your cravings and hunger.</p>
<p>● It controls your appetite as well.</p>
<p>● It keeps you feeling full and satiated, so you naturally eat limited food.</p>
<p>● It helps your body absorb nutrients from everything you eat.</p>
<p>● It also boosts cognitive functions and skills.</p>
<p>● It keeps you hydrated, which is great for people who love working out.</p>
<p>● It helps you lose 30 or more pounds within just three months without exercising.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="426" border="0" data-original-height="1365" data-original-width="2048" /></a></div>
<h2><strong>How should you consume <a href="URL Boost</a> Powder?</strong></h2>
<p><a href="URL Boost</strong></a> is available in a powder form supplement containing so many important nutrients. Each jar of Burn Boost contains a month’s worth of formula for weight loss, cognition, hydration, and improved energy levels.</p>
<p>You should add a scoop of <a href="URL Boost</strong></a> to water, tea, or coffee. You can take the formula before lunch and after dinner.Try taking the formula once a day at first, but if you have more than 30 pounds to lose, you can add it before lunch and after dinner too.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="512" border="0" data-original-height="475" data-original-width="594" /></a></div>
<h2><strong>What is the price of <a href="URL Boost</a>? Where should one buy it from?</strong></h2>
<p><a href="URL Boost</strong></a> is only available for purchase on its official website. You can’t buy it from any other website, even if you look for <a href="URL Boost</strong></a> Amazon, Burn Boost USA, or Burn Boost Canada online.</p>
<p>Avoid buying supplements from other websites to avoid scams. <a href="URL Boost</strong></a> is available at discounted rates today:</p>
<p><strong>● Buy one bottle of <a href="URL Boost</a> for just $59.</strong></p>
<p><strong>● Buy three bottles of <a href="URL Boost </a>for just $147 ($49 each).</strong></p>
<p><strong>● Buy six bottles of <a href="URL Boost</a> for just $234 ($39 each).</strong></p>
<p>You must pay a small shipping fee. There are no subscriptions, and there’s only a one-time payment. Also, there is a 60-day 100% money-back guarantee on all purchases.</p>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="URL PROMO[Limited Discount]: "Burn Boost USA"Official Website!</strong></a></span></h2>
<h2><strong><a href="URL Boost</a> Reviews - Final Verdict</strong></h2>
<p><a href="URL Boost</strong></a> is the only natural formula that has so many natural nutrients that are directly linked with fat loss.</p>
<p>You can consume <a href="URL Boost</strong></a> without any prescription or consultation, as it is 100% natural and causes no side effects. <strong><a href="URL Boost</a></strong> is suitable for all adults and can be taken on a daily basis without any risks.It helps you lose fat, boost cognitive functions, improve energy levels, and boost hydration as well. Without doing much, you can burn 211 calories or more. <a href="URL Boost</strong></a> is backed with a 60-day money-back guarantee for a safe purchase.</p>
<h2><strong>FAQ of <a href="URL Boost</a> Reviews:</strong></h2>
<p>Here are some Frequently Asked Questions:</p>
<p><strong>How to prepare <a href="URL Boost</a>?</strong></p>
<p>Get 1 scoop of <a href="URL Boost</strong></a> supplement and add it into your water. Mix properly.</p>
<p><strong>How often should you take <a href="URL Boost</a>?</strong></p>
<p>It is recommended to take the <a href="URL Boost</strong></a> supplement thrice a day. You are able to increase your metabolism with this recommended dose.</p>
<p><strong>Is it safe to take?</strong></p>
<p>Yes, the <a href="URL Boost</strong></a> supplement is safe to take. There are no harmful effects or adverse reactions to your health or body.</p>
<p><strong>How long will users experience optimal benefits?</strong></p>
<p>According to the official website of <a href="URL Boost</strong></a>, users are able to</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="512" border="0" data-original-height="475" data-original-width="594" /></a></div>
<h2 style="text-align: center;"><span style="color: #0000ff;"><a style="color: #0000ff;" href="URL PROMO: Get Burn Boost at the Lowest Discounted Price Online</strong></a></span></h2>
<h2><span style="color: #0000ff;"><strong># READ MORE</strong></span></h2>
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><span style="color: #0000ff;"><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL
<p><strong><a href="URL/URL | [
"# READ MORE</strong></span></h2>\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL"
]
| [
"TAGS\n#region-us \n",
"# READ MORE</strong></span></h2>\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL"
]
| [
6,
478
]
| [
"passage: TAGS\n#region-us \n# READ MORE</strong></span></h2>\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><span style=\"color: #0000ff;\"><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL\n<p><strong><a href=\"URL/URL"
]
|
8c97fb2140667df4a4f63750d931bf3061f3576a | # Dataset Card for "squad_train50000_eval1000_dec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_train50000_eval1000_dec | [
"region:us"
]
| 2023-11-02T10:04:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 3184837, "num_examples": 1000}, {"name": "train", "num_bytes": 169722340, "num_examples": 50000}], "download_size": 35308668, "dataset_size": 172907177}} | 2023-11-02T10:04:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "squad_train50000_eval1000_dec"
More Information needed | [
"# Dataset Card for \"squad_train50000_eval1000_dec\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_train50000_eval1000_dec\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_train50000_eval1000_dec\"\n\nMore Information needed"
]
|
db2259c1bdbf22a864fdd93370c4241bdb60a108 | # Dataset Card for "cuisine_type"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Eitanli/cuisine_type | [
"region:us"
]
| 2023-11-02T10:46:15+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "recipe", "dtype": "string"}, {"name": "cuisine_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107643157, "num_examples": 74465}], "download_size": 54311214, "dataset_size": 107643157}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T11:31:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cuisine_type"
More Information needed | [
"# Dataset Card for \"cuisine_type\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cuisine_type\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cuisine_type\"\n\nMore Information needed"
]
|
d7774e4e16ec1c907ad05ae5a44897c30bfeba71 |
# Dataset Card for Evaluation run of 01-ai/Yi-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/01-ai/Yi-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_01-ai__Yi-6B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T12:58:11.094136](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B_public/blob/main/results_2023-11-08T12-58-11.094136.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4384437919463087,
"em_stderr": 0.005081515214965134,
"f1": 0.47321203859060423,
"f1_stderr": 0.004951302124232466,
"acc": 0.43228738137822953,
"acc_stderr": 0.010759329857359324
},
"harness|drop|3": {
"em": 0.4384437919463087,
"em_stderr": 0.005081515214965134,
"f1": 0.47321203859060423,
"f1_stderr": 0.004951302124232466
},
"harness|gsm8k|5": {
"acc": 0.12661106899166036,
"acc_stderr": 0.009159715283081087
},
"harness|winogrande|5": {
"acc": 0.7379636937647988,
"acc_stderr": 0.012358944431637561
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_01-ai__Yi-6B | [
"region:us"
]
| 2023-11-02T10:50:57+00:00 | {"pretty_name": "Evaluation run of 01-ai/Yi-6B", "dataset_summary": "Dataset automatically created during the evaluation run of model [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_01-ai__Yi-6B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T12:58:11.094136](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B_public/blob/main/results_2023-11-08T12-58-11.094136.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4384437919463087,\n \"em_stderr\": 0.005081515214965134,\n \"f1\": 0.47321203859060423,\n \"f1_stderr\": 0.004951302124232466,\n \"acc\": 0.43228738137822953,\n \"acc_stderr\": 0.010759329857359324\n },\n \"harness|drop|3\": {\n \"em\": 0.4384437919463087,\n \"em_stderr\": 0.005081515214965134,\n \"f1\": 0.47321203859060423,\n \"f1_stderr\": 0.004951302124232466\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12661106899166036,\n \"acc_stderr\": 0.009159715283081087\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7379636937647988,\n \"acc_stderr\": 0.012358944431637561\n }\n}\n```", "repo_url": "https://huggingface.co/01-ai/Yi-6B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T12_58_11.094136", "path": ["**/details_harness|drop|3_2023-11-08T12-58-11.094136.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T12-58-11.094136.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T12_58_11.094136", "path": ["**/details_harness|gsm8k|5_2023-11-08T12-58-11.094136.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T12-58-11.094136.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T12_58_11.094136", "path": ["**/details_harness|winogrande|5_2023-11-08T12-58-11.094136.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T12-58-11.094136.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T12_58_11.094136", "path": ["results_2023-11-08T12-58-11.094136.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T12-58-11.094136.parquet"]}]}]} | 2023-12-01T14:47:33+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of 01-ai/Yi-6B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model 01-ai/Yi-6B on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-08T12:58:11.094136 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of 01-ai/Yi-6B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T12:58:11.094136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of 01-ai/Yi-6B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T12:58:11.094136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of 01-ai/Yi-6B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model 01-ai/Yi-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T12:58:11.094136(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
433241b0a59a02c5e554b38bcfbbe0e35257da17 | # Dataset Card for "WIKI_QA_Near_dedup"
**The license is `cc-by-nc-sa`.**
## Dataset Details
**Developers:** SeungyooLee (DopeorNopeLee)
The WIKI_QA_Near_dedup dataset was created by applying a near-deduplication algorithm to the source data in order to reduce similarity between examples.
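The card does not specify which near-deduplication method was used. As an illustration only, a minimal shingle-based pass might look like the sketch below; the shingle size and similarity threshold here are assumptions, not values taken from this dataset.

```python
# Illustrative sketch only: the actual WIKI_QA_Near_dedup pipeline is not
# documented in this card. Shingle size and threshold are assumed values.

def shingles(text: str, n: int = 3) -> set:
    """Word n-grams (shingles) of a text, lower-cased."""
    words = text.lower().split()
    if len(words) <= n:
        return {" ".join(words)}
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def near_dedup(texts: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a text only if it is not a near-duplicate of any already-kept text."""
    kept, kept_shingles = [], []
    for text in texts:
        s = shingles(text)
        if all(jaccard(s, prev) < threshold for prev in kept_shingles):
            kept.append(text)
            kept_shingles.append(s)
    return kept
```

At scale, a real pipeline would typically replace the pairwise comparison with MinHash/LSH, but the filtering idea is the same.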
**Its original source is [maywell/wikidata_QA](maywell/wikidata_QA), which was created by an innovative developer named [maywell (Jeonghwan Park)](https://huggingface.co/maywell).**
It follows "cc-by-nc-sa-4.0 lisence" policy. | HumanF-MarkrAI/WIKI_QA_Near_dedup | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2023-11-02T10:57:34+00:00 | {"license": "cc-by-nc-sa-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "float64"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 145724140, "num_examples": 137505}], "download_size": 87382170, "dataset_size": 145724140}} | 2023-11-03T08:55:52+00:00 | []
| []
| TAGS
#license-cc-by-nc-sa-4.0 #region-us
| # Dataset Card for "WIKI_QA_Near_dedup"
The license is 'cc-by-nc-sa'.
## Datasets Details
Developers SeungyooLee (DopeorNopeLee)
WIKI_QA_Near_dedup dataset was created by Near dedup algorithm to reduce similarity.
It's original source is maywell/wikidata_QA, which created by an innovative developer named maywell(Jeonghwan Park).
It follows "cc-by-nc-sa-4.0 lisence" policy. | [
"# Dataset Card for \"WIKI_QA_Near_dedup\"\n\n\nThe license is 'cc-by-nc-sa'.",
"## Datasets Details\n\nDevelopers SeungyooLee (DopeorNopeLee)\n\n\nWIKI_QA_Near_dedup dataset was created by Near dedup algorithm to reduce similarity.\n\nIt's original source is maywell/wikidata_QA, which created by an innovative developer named maywell(Jeonghwan Park).\n\n\nIt follows \"cc-by-nc-sa-4.0 lisence\" policy."
]
| [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"WIKI_QA_Near_dedup\"\n\n\nThe license is 'cc-by-nc-sa'.",
"## Datasets Details\n\nDevelopers SeungyooLee (DopeorNopeLee)\n\n\nWIKI_QA_Near_dedup dataset was created by Near dedup algorithm to reduce similarity.\n\nIt's original source is maywell/wikidata_QA, which created by an innovative developer named maywell(Jeonghwan Park).\n\n\nIt follows \"cc-by-nc-sa-4.0 lisence\" policy."
]
| [
19,
30,
97
]
| [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# Dataset Card for \"WIKI_QA_Near_dedup\"\n\n\nThe license is 'cc-by-nc-sa'.## Datasets Details\n\nDevelopers SeungyooLee (DopeorNopeLee)\n\n\nWIKI_QA_Near_dedup dataset was created by Near dedup algorithm to reduce similarity.\n\nIt's original source is maywell/wikidata_QA, which created by an innovative developer named maywell(Jeonghwan Park).\n\n\nIt follows \"cc-by-nc-sa-4.0 lisence\" policy."
]
|