sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
335ae5b5e8d9e19d17488916c7a0402367cb7cda
|
# UDHR-LID
**Why UDHR-LID?**
You can access the UDHR (Universal Declaration of Human Rights) [here](http://www.unicode.org/udhr/d/), but when a verse is missing, the files contain placeholder text such as "missing" or "?". Also, about 1/3 of the sentences consist only of "articles 1-30" in different languages. We cleaned the entire dataset from the XML files, keeping only the paragraphs. We also removed text in unrelated languages, as well as cases that were simply incorrect.
Incorrect? Look at the ckb and kmr files in the UDHR: both are identical! ckb is primarily written in the Arabic script, although it can also be written in Latin. Clearly, a single file cannot belong to two different languages. We also deleted files whose scripts we believe are no longer in use.
The deleted files include:
- ckb_Latn (the Arabic script is in use)
- azb_Latn (the Arabic script is in use)
- khk_Mong (the Cyrillic script is in use)
- vie_Hani (the Latin script is in use)
If you are interested in handling scripts in other languages, check the GlotScript [code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320). We have prepared a tool for detecting the script of a text, as well as metadata that determines the correct script for each language.
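As a toy illustration of how such script mismatches can be spotted (this is only a sketch covering three Unicode blocks, not the GlotScript method):

```python
# Crude dominant-script detector: count alphabetic characters per basic
# Unicode block. Only Latin, Arabic, and Cyrillic are covered here, purely
# for illustration; a real tool covers all Unicode scripts.

def dominant_script(text: str) -> str:
    counts = {"Latin": 0, "Arabic": 0, "Cyrillic": 0}
    for ch in text:
        if not ch.isalpha():
            continue
        cp = ord(ch)
        if 0x0041 <= cp <= 0x024F:        # Basic Latin + Latin extensions
            counts["Latin"] += 1
        elif 0x0600 <= cp <= 0x06FF:      # Arabic block
            counts["Arabic"] += 1
        elif 0x0400 <= cp <= 0x04FF:      # Cyrillic block
            counts["Cyrillic"] += 1
    return max(counts, key=counts.get) if any(counts.values()) else "Unknown"

print(dominant_script("Hemû mirov azad in"))  # Latin
print(dominant_script("هەموو مرۆڤەکان"))       # Arabic
```

A file tagged ckb whose text comes back "Latin" would be exactly the kind of case flagged above.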
We believe UDHR should remain a test corpus in NLP, not a training corpus. Of course, we are not opposed to great works such as Franc that are built on top of UDHR. However, if the scale of your work is much bigger than UDHR, do not put UDHR in your training data. Use it as a test/validation set, or use it to diagnose problems in your training data. Be aware that parts of UDHR may be hosted on other websites, such as Wikipedia, news sites like the BBC, and collaborative translation communities like Tatoeba. Before using UDHR as a test set, exclude any UDHR sentence that also appears in your training data.
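That exclusion step can be sketched as follows (the normalization and all names here are our own illustrative choices, not part of UDHR-LID):

```python
# Sketch: drop any candidate test sentence that also occurs in the training
# data, after a simple whitespace/case normalization. The normalization
# scheme is an assumption for illustration only.

def normalize(sentence: str) -> str:
    return " ".join(sentence.lower().split())

def decontaminate(test_sentences, train_sentences):
    seen = {normalize(s) for s in train_sentences}
    return [s for s in test_sentences if normalize(s) not in seen]

train = ["All human beings are born free and equal in dignity and rights."]
test = [
    "All human beings are born free  and equal in dignity and rights.",
    "They are endowed with reason and conscience.",
]
print(decontaminate(test, train))
# Only the second sentence survives; the first matches the training data.
```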
We created this corpus for the language identification evaluation task in our GlotLID [paper](https://arxiv.org/abs/2310.16248), but feel free to use it for your own task. The texts here are not in order, and they are not parallel. However, each row of data belongs to its labeled language, is long and cleaned, and has rich linguistic content!
## Usage (HF Loader)
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/udhr-lid', split='test')
print(dataset[0]) # First row of udhr-lid
```
## Download
If you are not a fan of the HF data loader, download the dataset CSV directly:
```bash
wget https://huggingface.co/datasets/cis-lmu/udhr-lid/resolve/main/udhr-lid.csv
```
or clone the whole repository:
```bash
git clone https://huggingface.co/datasets/cis-lmu/udhr-lid
```
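Once downloaded, the CSV can be processed with the standard library alone. The column names below ("iso639-3", "sentence") are assumptions for illustration; check the actual header of udhr-lid.csv before relying on them:

```python
import csv
import io
from collections import defaultdict

# Sketch: group rows by language code and count sentences per language.
# An in-memory sample stands in for the real file here; replace `sample`
# with open("udhr-lid.csv") after downloading. Column names are assumed.
sample = io.StringIO(
    "iso639-3,sentence\n"
    "eng,All human beings are born free and equal in dignity and rights.\n"
    "fra,Tous les êtres humains naissent libres et égaux en dignité et en droits.\n"
    "eng,They are endowed with reason and conscience.\n"
)

by_lang = defaultdict(list)
for row in csv.DictReader(sample):
    by_lang[row["iso639-3"]].append(row["sentence"])

print({lang: len(sents) for lang, sents in by_lang.items()})
# → {'eng': 2, 'fra': 1}
```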
## License
UDHR is the most translated copyright-free document in the world.
We license the packaging, the metadata, and the annotations of these data under CC0 1.0 (waiving all rights under copyright law).
## Citation
If you use any part of this data in your research, please cite it (along with http://www.unicode.org/udhr/d/) using the following BibTeX entry.
```
@inproceedings{kargaran2023glotlid,
title={{GlotLID}: Language Identification for Low-Resource Languages},
author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=dl4e3EBz5j}
}
```
|
cis-lmu/udhr-lid
|
[
"multilinguality:multilingual",
"language:tir",
"language:rmn",
"language:arb",
"language:mxv",
"language:mal",
"language:fij",
"language:som",
"language:cot",
"language:fur",
"language:vie",
"language:zlm",
"language:bam",
"language:chr",
"language:maz",
"language:yad",
"language:ztu",
"language:ykg",
"language:ccp",
"language:alt",
"language:ayr",
"language:njo",
"language:bci",
"language:gyr",
"language:run",
"language:haw",
"language:rgn",
"language:cak",
"language:kwi",
"language:fra",
"language:agr",
"language:duu",
"language:ilo",
"language:nhn",
"language:kdh",
"language:cnh",
"language:bod",
"language:mya",
"language:ady",
"language:pol",
"language:ydd",
"language:cos",
"language:lot",
"language:arl",
"language:glv",
"language:gag",
"language:bfa",
"language:afr",
"language:lij",
"language:ibb",
"language:toi",
"language:tzm",
"language:ron",
"language:ojb",
"language:san",
"language:eng",
"language:bum",
"language:pam",
"language:kqs",
"language:dje",
"language:auc",
"language:smo",
"language:por",
"language:fry",
"language:lad",
"language:pov",
"language:tyv",
"language:guc",
"language:huu",
"language:ese",
"language:kbp",
"language:eve",
"language:yrk",
"language:lin",
"language:tdt",
"language:qvc",
"language:top",
"language:nav",
"language:twi",
"language:oss",
"language:lia",
"language:ame",
"language:hun",
"language:lit",
"language:que",
"language:qug",
"language:nku",
"language:csa",
"language:lao",
"language:knc",
"language:kjh",
"language:jav",
"language:mam",
"language:ita",
"language:ppl",
"language:aar",
"language:tbz",
"language:ssw",
"language:bug",
"language:srp",
"language:kaz",
"language:min",
"language:mad",
"language:orh",
"language:tgk",
"language:kat",
"language:uig",
"language:tzo",
"language:hat",
"language:shn",
"language:kbd",
"language:niv",
"language:idu",
"language:krl",
"language:abk",
"language:mto",
"language:gla",
"language:ijs",
"language:cri",
"language:uzn",
"language:tah",
"language:tob",
"language:kir",
"language:quy",
"language:hnj",
"language:srr",
"language:lvs",
"language:nan",
"language:hns",
"language:snk",
"language:swh",
"language:ekk",
"language:guu",
"language:div",
"language:dzo",
"language:spa",
"language:hms",
"language:ell",
"language:ace",
"language:war",
"language:ind",
"language:cjy",
"language:cfm",
"language:nds",
"language:ewe",
"language:tel",
"language:src",
"language:fuf",
"language:vmw",
"language:zro",
"language:men",
"language:kqn",
"language:nzi",
"language:taj",
"language:khk",
"language:ddn",
"language:nso",
"language:mxi",
"language:pon",
"language:fvr",
"language:hau",
"language:ktu",
"language:tem",
"language:yor",
"language:pnb",
"language:ltz",
"language:evn",
"language:cjs",
"language:nba",
"language:niu",
"language:dan",
"language:acu",
"language:zgh",
"language:chj",
"language:heb",
"language:lua",
"language:quz",
"language:cbi",
"language:cpu",
"language:wuu",
"language:mah",
"language:kmb",
"language:mcd",
"language:ben",
"language:lus",
"language:ajg",
"language:azj",
"language:tha",
"language:dga",
"language:isl",
"language:sus",
"language:fkv",
"language:jiv",
"language:mor",
"language:nio",
"language:als",
"language:buc",
"language:kde",
"language:nbl",
"language:ceb",
"language:ven",
"language:sun",
"language:cbt",
"language:swb",
"language:tur",
"language:dyo",
"language:sin",
"language:pbu",
"language:ada",
"language:pap",
"language:qvh",
"language:loz",
"language:pan",
"language:qva",
"language:sme",
"language:bax",
"language:tuk",
"language:hsb",
"language:hus",
"language:qvn",
"language:ban",
"language:cha",
"language:zyb",
"language:hin",
"language:tat",
"language:qxu",
"language:gej",
"language:quc",
"language:mnw",
"language:bho",
"language:udu",
"language:kha",
"language:kbr",
"language:tsz",
"language:pau",
"language:mkd",
"language:shp",
"language:ike",
"language:lue",
"language:tgl",
"language:yap",
"language:yua",
"language:koi",
"language:hrv",
"language:emk",
"language:tet",
"language:ndo",
"language:cbu",
"language:vep",
"language:cmn",
"language:sag",
"language:nym",
"language:rus",
"language:gjn",
"language:guk",
"language:kri",
"language:ote",
"language:lun",
"language:vai",
"language:bis",
"language:arn",
"language:tsn",
"language:gle",
"language:hak",
"language:gkp",
"language:ura",
"language:tca",
"language:xho",
"language:wln",
"language:amc",
"language:mos",
"language:lld",
"language:bul",
"language:qxn",
"language:bcl",
"language:ctd",
"language:dip",
"language:dag",
"language:kek",
"language:bre",
"language:mri",
"language:fin",
"language:sah",
"language:cym",
"language:kan",
"language:fao",
"language:gsw",
"language:sey",
"language:bem",
"language:bos",
"language:bin",
"language:chv",
"language:tpi",
"language:ami",
"language:oaa",
"language:lob",
"language:ast",
"language:nno",
"language:sco",
"language:khm",
"language:pes",
"language:pbb",
"language:tam",
"language:ibo",
"language:sid",
"language:plt",
"language:guj",
"language:hsn",
"language:kin",
"language:lug",
"language:slr",
"language:koo",
"language:xsm",
"language:jpn",
"language:oki",
"language:deu",
"language:rar",
"language:pcm",
"language:hni",
"language:vec",
"language:gld",
"language:sot",
"language:crs",
"language:fuv",
"language:npi",
"language:nya",
"language:kea",
"language:blt",
"language:roh",
"language:cbr",
"language:chk",
"language:kal",
"language:mfq",
"language:quh",
"language:kor",
"language:slv",
"language:cof",
"language:shk",
"language:zul",
"language:qwh",
"language:fon",
"language:mic",
"language:prs",
"language:mag",
"language:bel",
"language:iii",
"language:mar",
"language:dyu",
"language:boa",
"language:swe",
"language:pis",
"language:mlt",
"language:amh",
"language:umb",
"language:cnr",
"language:mai",
"language:toj",
"language:csw",
"language:ina",
"language:bba",
"language:cbs",
"language:kng",
"language:oci",
"language:pcd",
"language:miq",
"language:lat",
"language:qvm",
"language:wwa",
"language:urd",
"language:kmr",
"language:ido",
"language:gaa",
"language:epo",
"language:gaz",
"language:cat",
"language:hye",
"language:cni",
"language:suk",
"language:gug",
"language:gan",
"language:cjk",
"language:tzh",
"language:zam",
"language:ces",
"language:cic",
"language:mcf",
"language:not",
"language:kaa",
"language:tso",
"language:piu",
"language:fat",
"language:mzi",
"language:snn",
"language:tly",
"language:eus",
"language:nld",
"language:nob",
"language:wol",
"language:hlt",
"language:sna",
"language:tiv",
"language:ton",
"language:hea",
"language:skr",
"language:lns",
"language:rup",
"language:cab",
"language:glg",
"language:yao",
"language:nyn",
"language:aii",
"language:slk",
"language:ukr",
"language:kkh",
"language:zdj",
"language:amr",
"language:yue",
"language:crh",
"language:hil",
"license:cc0-1.0",
"UDHR",
"udhr",
"language identification",
"LID",
"glot",
"GlotLID",
"arxiv:2309.13320",
"arxiv:2310.16248",
"region:us"
] |
2023-10-22T17:49:59+00:00
|
{"language": ["tir", "rmn", "arb", "mxv", "mal", "fij", "som", "cot", "fur", "vie", "zlm", "bam", "chr", "maz", "yad", "ztu", "ykg", "ccp", "alt", "ayr", "njo", "bci", "gyr", "run", "haw", "rgn", "cak", "kwi", "fra", "agr", "duu", "ilo", "nhn", "kdh", "cnh", "bod", "mya", "ady", "pol", "ydd", "cos", "lot", "arl", "glv", "gag", "bfa", "afr", "lij", "zlm", "ibb", "toi", "tzm", "ron", "ojb", "san", "eng", "bum", "pam", "kqs", "dje", "auc", "smo", "por", "fry", "lad", "pov", "tyv", "guc", "huu", "ese", "kbp", "eve", "yrk", "lin", "tdt", "qvc", "top", "nav", "twi", "oss", "lia", "ame", "hun", "lit", "que", "qug", "nku", "csa", "lao", "knc", "kjh", "jav", "mam", "ita", "ppl", "aar", "tbz", "ssw", "bug", "srp", "kaz", "min", "mad", "orh", "tgk", "kat", "uig", "tzo", "hat", "shn", "kbd", "niv", "idu", "krl", "abk", "mto", "gla", "ijs", "cri", "uzn", "tah", "tob", "kir", "quy", "hnj", "srr", "lvs", "nan", "hns", "snk", "swh", "ekk", "guu", "div", "dzo", "spa", "hms", "ell", "ace", "war", "ind", "cjy", "cfm", "nds", "ewe", "tel", "src", "fuf", "vmw", "zro", "men", "kqn", "nzi", "taj", "khk", "ddn", "nso", "mxi", "pon", "fvr", "hau", "ktu", "tem", "yor", "pnb", "ltz", "evn", "cjs", "nba", "niu", "dan", "acu", "zgh", "chj", "heb", "lua", "quz", "uig", "cbi", "jav", "cpu", "wuu", "mah", "kmb", "mcd", "ben", "lus", "ajg", "azj", "tha", "dga", "isl", "sus", "fuf", "fkv", "jiv", "mor", "nio", "als", "buc", "kde", "nbl", "ceb", "ven", "sun", "cbt", "swb", "tur", "dyo", "sin", "pbu", "ada", "pap", "qvh", "loz", "pan", "qva", "sme", "bax", "tuk", "hsb", "hus", "qvn", "ban", "cha", "zyb", "hin", "tat", "uzn", "qxu", "gej", "quc", "mnw", "bho", "udu", "kha", "kbr", "tsz", "pau", "mkd", "shp", "ike", "lue", "tgl", "yap", "yua", "koi", "hrv", "emk", "tet", "ndo", "cbu", "vep", "cmn", "sag", "nym", "rus", "gjn", "guk", "kri", "ote", "lun", "vai", "bis", "arn", "tsn", "gle", "hak", "gkp", "ura", "tca", "xho", "wln", "amc", "mos", "lld", "bul", "qxn", "bcl", "ctd", "dip", "dag", "kek", 
"bre", "mri", "fin", "sah", "cym", "kan", "fao", "gsw", "sey", "bem", "bos", "bin", "chv", "tpi", "ami", "oaa", "lob", "ast", "nno", "sco", "tuk", "khm", "pes", "pbb", "tam", "ibo", "san", "sid", "plt", "guj", "hsn", "kin", "lug", "slr", "koo", "xsm", "jpn", "oki", "deu", "rar", "pcm", "hni", "vec", "gld", "sot", "crs", "fuv", "srp", "npi", "nya", "kea", "blt", "roh", "cbr", "chk", "kal", "mfq", "quh", "kor", "slv", "cof", "shk", "zul", "qwh", "fon", "mic", "prs", "mag", "bel", "iii", "mar", "dyu", "boa", "swe", "pis", "mlt", "amh", "umb", "cnr", "mai", "toj", "csw", "ina", "bba", "cbs", "kng", "oci", "pcd", "miq", "lat", "qvm", "wwa", "bos", "urd", "kmr", "ido", "gaa", "epo", "gaz", "cat", "hye", "cni", "suk", "gug", "gan", "cjk", "tzh", "zam", "ces", "cic", "mcf", "not", "kaa", "tso", "piu", "fat", "mzi", "snn", "tly", "eus", "nld", "nob", "wol", "hlt", "sna", "tiv", "ton", "hea", "skr", "lns", "rup", "cab", "glg", "tgl", "yao", "nyn", "aii", "tzm", "slk", "ukr", "kkh", "zdj", "amr", "yue", "crh", "hil"], "license": "cc0-1.0", "multilinguality": ["multilingual"], "pretty_name": "UDHR-LID", "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "udhr-lid.csv"}]}], "tags": ["UDHR", "udhr", "language identification", "LID", "glot", "GlotLID"]}
|
2024-01-06T19:45:05+00:00
|
[
"2309.13320",
"2310.16248"
] |
[
"tir",
"rmn",
"arb",
"mxv",
"mal",
"fij",
"som",
"cot",
"fur",
"vie",
"zlm",
"bam",
"chr",
"maz",
"yad",
"ztu",
"ykg",
"ccp",
"alt",
"ayr",
"njo",
"bci",
"gyr",
"run",
"haw",
"rgn",
"cak",
"kwi",
"fra",
"agr",
"duu",
"ilo",
"nhn",
"kdh",
"cnh",
"bod",
"mya",
"ady",
"pol",
"ydd",
"cos",
"lot",
"arl",
"glv",
"gag",
"bfa",
"afr",
"lij",
"ibb",
"toi",
"tzm",
"ron",
"ojb",
"san",
"eng",
"bum",
"pam",
"kqs",
"dje",
"auc",
"smo",
"por",
"fry",
"lad",
"pov",
"tyv",
"guc",
"huu",
"ese",
"kbp",
"eve",
"yrk",
"lin",
"tdt",
"qvc",
"top",
"nav",
"twi",
"oss",
"lia",
"ame",
"hun",
"lit",
"que",
"qug",
"nku",
"csa",
"lao",
"knc",
"kjh",
"jav",
"mam",
"ita",
"ppl",
"aar",
"tbz",
"ssw",
"bug",
"srp",
"kaz",
"min",
"mad",
"orh",
"tgk",
"kat",
"uig",
"tzo",
"hat",
"shn",
"kbd",
"niv",
"idu",
"krl",
"abk",
"mto",
"gla",
"ijs",
"cri",
"uzn",
"tah",
"tob",
"kir",
"quy",
"hnj",
"srr",
"lvs",
"nan",
"hns",
"snk",
"swh",
"ekk",
"guu",
"div",
"dzo",
"spa",
"hms",
"ell",
"ace",
"war",
"ind",
"cjy",
"cfm",
"nds",
"ewe",
"tel",
"src",
"fuf",
"vmw",
"zro",
"men",
"kqn",
"nzi",
"taj",
"khk",
"ddn",
"nso",
"mxi",
"pon",
"fvr",
"hau",
"ktu",
"tem",
"yor",
"pnb",
"ltz",
"evn",
"cjs",
"nba",
"niu",
"dan",
"acu",
"zgh",
"chj",
"heb",
"lua",
"quz",
"cbi",
"cpu",
"wuu",
"mah",
"kmb",
"mcd",
"ben",
"lus",
"ajg",
"azj",
"tha",
"dga",
"isl",
"sus",
"fkv",
"jiv",
"mor",
"nio",
"als",
"buc",
"kde",
"nbl",
"ceb",
"ven",
"sun",
"cbt",
"swb",
"tur",
"dyo",
"sin",
"pbu",
"ada",
"pap",
"qvh",
"loz",
"pan",
"qva",
"sme",
"bax",
"tuk",
"hsb",
"hus",
"qvn",
"ban",
"cha",
"zyb",
"hin",
"tat",
"qxu",
"gej",
"quc",
"mnw",
"bho",
"udu",
"kha",
"kbr",
"tsz",
"pau",
"mkd",
"shp",
"ike",
"lue",
"tgl",
"yap",
"yua",
"koi",
"hrv",
"emk",
"tet",
"ndo",
"cbu",
"vep",
"cmn",
"sag",
"nym",
"rus",
"gjn",
"guk",
"kri",
"ote",
"lun",
"vai",
"bis",
"arn",
"tsn",
"gle",
"hak",
"gkp",
"ura",
"tca",
"xho",
"wln",
"amc",
"mos",
"lld",
"bul",
"qxn",
"bcl",
"ctd",
"dip",
"dag",
"kek",
"bre",
"mri",
"fin",
"sah",
"cym",
"kan",
"fao",
"gsw",
"sey",
"bem",
"bos",
"bin",
"chv",
"tpi",
"ami",
"oaa",
"lob",
"ast",
"nno",
"sco",
"khm",
"pes",
"pbb",
"tam",
"ibo",
"sid",
"plt",
"guj",
"hsn",
"kin",
"lug",
"slr",
"koo",
"xsm",
"jpn",
"oki",
"deu",
"rar",
"pcm",
"hni",
"vec",
"gld",
"sot",
"crs",
"fuv",
"npi",
"nya",
"kea",
"blt",
"roh",
"cbr",
"chk",
"kal",
"mfq",
"quh",
"kor",
"slv",
"cof",
"shk",
"zul",
"qwh",
"fon",
"mic",
"prs",
"mag",
"bel",
"iii",
"mar",
"dyu",
"boa",
"swe",
"pis",
"mlt",
"amh",
"umb",
"cnr",
"mai",
"toj",
"csw",
"ina",
"bba",
"cbs",
"kng",
"oci",
"pcd",
"miq",
"lat",
"qvm",
"wwa",
"urd",
"kmr",
"ido",
"gaa",
"epo",
"gaz",
"cat",
"hye",
"cni",
"suk",
"gug",
"gan",
"cjk",
"tzh",
"zam",
"ces",
"cic",
"mcf",
"not",
"kaa",
"tso",
"piu",
"fat",
"mzi",
"snn",
"tly",
"eus",
"nld",
"nob",
"wol",
"hlt",
"sna",
"tiv",
"ton",
"hea",
"skr",
"lns",
"rup",
"cab",
"glg",
"yao",
"nyn",
"aii",
"slk",
"ukr",
"kkh",
"zdj",
"amr",
"yue",
"crh",
"hil"
] |
TAGS
#multilinguality-multilingual #language-Tigrinya #language-Balkan Romani #language-Standard Arabic #language-Metlatónoc Mixtec #language-Malayalam #language-Fijian #language-Somali #language-Caquinte #language-Friulian #language-Vietnamese #language-Malay (individual language) #language-Bambara #language-Cherokee #language-Central Mazahua #language-Yagua #language-Güilá Zapotec #language-Northern Yukaghir #language-Chakma #language-Southern Altai #language-Central Aymara #language-Ao Naga #language-Baoulé #language-Guarayu #language-Rundi #language-Hawaiian #language-Romagnol #language-Kaqchikel #language-Awa-Cuaiquer #language-French #language-Aguaruna #language-Drung #language-Iloko #language-Central Nahuatl #language-Tem #language-Hakha Chin #language-Tibetan #language-Burmese #language-Adyghe #language-Polish #language-Eastern Yiddish #language-Corsican #language-Otuho #language-Arabela #language-Manx #language-Gagauz #language-Bari #language-Afrikaans #language-Ligurian #language-Ibibio #language-Tonga (Zambia) #language-Central Atlas Tamazight #language-Romanian #language-Northwestern Ojibwa #language-Sanskrit #language-English #language-Bulu (Cameroon) #language-Pampanga #language-Northern Kissi #language-Zarma #language-Waorani #language-Samoan #language-Portuguese #language-Western Frisian #language-Ladino #language-Upper Guinea Crioulo #language-Tuvinian #language-Wayuu #language-Murui Huitoto #language-Ese Ejja #language-Kabiyè #language-Even #language-Nenets #language-Lingala #language-Tetun Dili #language-Cajamarca Quechua #language-Papantla Totonac #language-Navajo #language-Twi #language-Ossetian #language-West-Central Limba #language-Yanesha' #language-Hungarian #language-Lithuanian #language-Quechua #language-Chimborazo Highland Quichua #language-Bouna Kulango #language-Chiltepec Chinantec #language-Lao #language-Central Kanuri #language-Khakas #language-Javanese #language-Mam #language-Italian #language-Pipil #language-Afar #language-Ditammari 
#language-Swati #language-Buginese #language-Serbian #language-Kazakh #language-Minangkabau #language-Madurese #language-Oroqen #language-Tajik #language-Georgian #language-Uighur #language-Tzotzil #language-Haitian #language-Shan #language-Kabardian #language-Gilyak #language-Idoma #language-Karelian #language-Abkhazian #language-Totontepec Mixe #language-Scottish Gaelic #language-Southeast Ijo #language-Sãotomense #language-Northern Uzbek #language-Tahitian #language-Toba #language-Kirghiz #language-Ayacucho Quechua #language-Hmong Njua #language-Serer #language-Standard Latvian #language-Min Nan Chinese #language-Caribbean Hindustani #language-Soninke #language-Swahili (individual language) #language-Standard Estonian #language-Yanomamö #language-Dhivehi #language-Dzongkha #language-Spanish #language-Southern Qiandong Miao #language-Modern Greek (1453-) #language-Achinese #language-Waray (Philippines) #language-Indonesian #language-Jinyu Chinese #language-Falam Chin #language-Low German #language-Ewe #language-Telugu #language-Logudorese Sardinian #language-Pular #language-Makhuwa #language-Záparo #language-Mende (Sierra Leone) #language-Kaonde #language-Nzima #language-Eastern Tamang #language-Halh Mongolian #language-Dendi (Benin) #language-Pedi #language-Mozarabic #language-Pohnpeian #language-Fur #language-Hausa #language-Kituba (Democratic Republic of Congo) #language-Timne #language-Yoruba #language-Western Panjabi #language-Luxembourgish #language-Evenki #language-Shor #language-Nyemba #language-Niuean #language-Danish #language-Achuar-Shiwiar #language-Standard Moroccan Tamazight #language-Ojitlán Chinantec #language-Hebrew #language-Luba-Lulua #language-Cusco Quechua #language-Chachi #language-Pichis Ashéninka #language-Wu Chinese #language-Marshallese #language-Kimbundu #language-Sharanahua #language-Bengali #language-Lushai #language-Aja (Benin) #language-North Azerbaijani #language-Thai #language-Southern Dagaare #language-Icelandic #language-Susu 
#language-Kven Finnish #language-Shuar #language-Moro #language-Nganasan #language-Tosk Albanian #language-Bushi #language-Makonde #language-South Ndebele #language-Cebuano #language-Venda #language-Sundanese #language-Chayahuita #language-Maore Comorian #language-Turkish #language-Jola-Fonyi #language-Sinhala #language-Northern Pashto #language-Adangme #language-Papiamento #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Lozi #language-Panjabi #language-Ambo-Pasco Quechua #language-Northern Sami #language-Bamun #language-Turkmen #language-Upper Sorbian #language-Huastec #language-North Junín Quechua #language-Balinese #language-Chamorro #language-Yongbei Zhuang #language-Hindi #language-Tatar #language-Arequipa-La Unión Quechua #language-Gen #language-K'iche' #language-Mon #language-Bhojpuri #language-Uduk #language-Khasi #language-Kafa #language-Purepecha #language-Palauan #language-Macedonian #language-Shipibo-Conibo #language-Eastern Canadian Inuktitut #language-Luvale #language-Tagalog #language-Yapese #language-Yucateco #language-Komi-Permyak #language-Croatian #language-Eastern Maninkakan #language-Tetum #language-Ndonga #language-Candoshi-Shapra #language-Veps #language-Mandarin Chinese #language-Sango #language-Nyamwezi #language-Russian #language-Gonja #language-Gumuz #language-Krio #language-Mezquital Otomi #language-Lunda #language-Vai #language-Bislama #language-Mapudungun #language-Tswana #language-Irish #language-Hakka Chinese #language-Guinea Kpelle #language-Urarina #language-Ticuna #language-Xhosa #language-Walloon #language-Amahuaca #language-Mossi #language-Ladin #language-Bulgarian #language-Northern Conchucos Ancash Quechua #language-Central Bikol #language-Tedim Chin #language-Northeastern Dinka #language-Dagbani #language-Kekchí #language-Breton #language-Maori #language-Finnish #language-Yakut #language-Welsh #language-Kannada #language-Faroese #language-Swiss German #language-Secoya #language-Bemba (Zambia) #language-Bosnian 
#language-Bini #language-Chuvash #language-Tok Pisin #language-Amis #language-Orok #language-Lobi #language-Asturian #language-Norwegian Nynorsk #language-Scots #language-Khmer #language-Iranian Persian #language-Páez #language-Tamil #language-Igbo #language-Sidamo #language-Plateau Malagasy #language-Gujarati #language-Xiang Chinese #language-Kinyarwanda #language-Ganda #language-Salar #language-Konzo #language-Kasem #language-Japanese #language-Okiek #language-German #language-Rarotongan #language-Nigerian Pidgin #language-Hani #language-Venetian #language-Nanai #language-Southern Sotho #language-Seselwa Creole French #language-Nigerian Fulfulde #language-Nepali (individual language) #language-Nyanja #language-Kabuverdianu #language-Tai Dam #language-Romansh #language-Cashibo-Cacataibo #language-Chuukese #language-Kalaallisut #language-Moba #language-South Bolivian Quechua #language-Korean #language-Slovenian #language-Colorado #language-Shilluk #language-Zulu #language-Huaylas Ancash Quechua #language-Fon #language-Mi'kmaq #language-Dari #language-Magahi #language-Belarusian #language-Sichuan Yi #language-Marathi #language-Dyula #language-Bora #language-Swedish #language-Pijin #language-Maltese #language-Amharic #language-Umbundu #language-Montenegrin #language-Maithili #language-Tojolabal #language-Swampy Cree #language-Interlingua (International Auxiliary Language Association) #language-Baatonum #language-Cashinahua #language-Koongo #language-Occitan (post 1500) #language-Picard #language-Mískito #language-Latin #language-Margos-Yarowilca-Lauricocha Quechua #language-Waama #language-Urdu #language-Northern Kurdish #language-Ido #language-Ga #language-Esperanto #language-West Central Oromo #language-Catalan #language-Armenian #language-Asháninka #language-Sukuma #language-Paraguayan Guaraní #language-Gan Chinese #language-Chokwe #language-Tzeltal #language-Miahuatlán Zapotec #language-Czech #language-Chickasaw #language-Matsés #language-Nomatsiguenga 
#language-Kara-Kalpak #language-Tsonga #language-Pintupi-Luritja #language-Fanti #language-Ixcatlán Mazatec #language-Siona #language-Talysh #language-Basque #language-Dutch #language-Norwegian Bokmål #language-Wolof #language-Matu Chin #language-Shona #language-Tiv #language-Tonga (Tonga Islands) #language-Northern Qiandong Miao #language-Saraiki #language-Lamnso' #language-Macedo-Romanian #language-Garifuna #language-Galician #language-Yao #language-Nyankole #language-Assyrian Neo-Aramaic #language-Slovak #language-Ukrainian #language-Khün #language-Ngazidja Comorian #language-Amarakaeri #language-Yue Chinese #language-Crimean Tatar #language-Hiligaynon #license-cc0-1.0 #UDHR #udhr #language identification #LID #glot #GlotLID #arxiv-2309.13320 #arxiv-2310.16248 #region-us
#language-Bemba (Zambia) #language-Bosnian #language-Bini #language-Chuvash #language-Tok Pisin #language-Amis #language-Orok #language-Lobi #language-Asturian #language-Norwegian Nynorsk #language-Scots #language-Khmer #language-Iranian Persian #language-Páez #language-Tamil #language-Igbo #language-Sidamo #language-Plateau Malagasy #language-Gujarati #language-Xiang Chinese #language-Kinyarwanda #language-Ganda #language-Salar #language-Konzo #language-Kasem #language-Japanese #language-Okiek #language-German #language-Rarotongan #language-Nigerian Pidgin #language-Hani #language-Venetian #language-Nanai #language-Southern Sotho #language-Seselwa Creole French #language-Nigerian Fulfulde #language-Nepali (individual language) #language-Nyanja #language-Kabuverdianu #language-Tai Dam #language-Romansh #language-Cashibo-Cacataibo #language-Chuukese #language-Kalaallisut #language-Moba #language-South Bolivian Quechua #language-Korean #language-Slovenian #language-Colorado #language-Shilluk #language-Zulu #language-Huaylas Ancash Quechua #language-Fon #language-Mi'kmaq #language-Dari #language-Magahi #language-Belarusian #language-Sichuan Yi #language-Marathi #language-Dyula #language-Bora #language-Swedish #language-Pijin #language-Maltese #language-Amharic #language-Umbundu #language-Montenegrin #language-Maithili #language-Tojolabal #language-Swampy Cree #language-Interlingua (International Auxiliary Language Association) #language-Baatonum #language-Cashinahua #language-Koongo #language-Occitan (post 1500) #language-Picard #language-Mískito #language-Latin #language-Margos-Yarowilca-Lauricocha Quechua #language-Waama #language-Urdu #language-Northern Kurdish #language-Ido #language-Ga #language-Esperanto #language-West Central Oromo #language-Catalan #language-Armenian #language-Asháninka #language-Sukuma #language-Paraguayan Guaraní #language-Gan Chinese #language-Chokwe #language-Tzeltal #language-Miahuatlán Zapotec #language-Czech #language-Chickasaw 
#language-Matsés #language-Nomatsiguenga #language-Kara-Kalpak #language-Tsonga #language-Pintupi-Luritja #language-Fanti #language-Ixcatlán Mazatec #language-Siona #language-Talysh #language-Basque #language-Dutch #language-Norwegian Bokmål #language-Wolof #language-Matu Chin #language-Shona #language-Tiv #language-Tonga (Tonga Islands) #language-Northern Qiandong Miao #language-Saraiki #language-Lamnso' #language-Macedo-Romanian #language-Garifuna #language-Galician #language-Yao #language-Nyankole #language-Assyrian Neo-Aramaic #language-Slovak #language-Ukrainian #language-Khün #language-Ngazidja Comorian #language-Amarakaeri #language-Yue Chinese #language-Crimean Tatar #language-Hiligaynon #license-cc0-1.0 #UDHR #udhr #language identification #LID #glot #GlotLID #arxiv-2309.13320 #arxiv-2310.16248 #region-us \n",
"# UDHR-LID\n\nWhy UDHR-LID?\n\nYou can access UDHR (Universal Declaration of Human Rights) here, but when a verse is missing, they have texts such as \"missing\" or \"?\". Also, about 1/3 of the sentences consist only of \"articles 1-30\" in different languages. We cleaned the entire dataset from XML files and selected only the paragraphs. We cleared any unrelated language texts from the data and also removed the cases that were incorrect.\n\nIncorrect? Look at the ckb and kmr files in the UDHR. Both are the same! ckb is known for the Arabic script, although it can also be written in Latin. Clearly, a unique file cannot belong to two different languages. We also deleted files that we believe those scripts are no longer in use.\n\nThe deleted files include:\n- ckb_Latn (Arabic is in use.)\n- azb_Latn (Arabic is in use.)\n- khk_Mong (Cyrillic is in use.)\n- vie_Hani (Latin is in use.)\n\nFor dealing with scripts in other languages, if you are interested, check Glotscript code and paper. We have prepared a tool for detecting the script of a text, as well as metadata to determine the correct script for each language.\n\nWe believe UDHR should remain a test corpus in NLP, not a training corpus. Of course, we are not opposed to great works such as Franc built on top of UDHR. However, if your work scale is much bigger than UDHR, do not put UDHR in your data. Use it as test/validation, or find out what is wrong with your training data with help of UDHR. Be aware that a part of UDHR may be hosted on other websites such as Wikipedia, news websites like BBC, collaborative translation communities like Tatoeba. Before using UDHR as a test, exclude any sentence where UDHR is a part of your training.\n\nWe created this corpus for language identification evaluation task in our GlotLID paper, but feel free to use it for your own task. The texts here are not in order, and they're not parallel. 
However, each row of data belongs to its determined language and is long, cleaned, and rich in linguistic content!",
"## Usage (HF Loader)",
"## Download\nIf you are not a fan of the HF dataloader, download each language directly:\n\n\n\nor clone the whole repository:",
"## License\nUDHR is the most translated copyright-free document in the world.\nWe license the actual packaging, the metadata and the annotations of these data under the cc0-1.0 (waiving all of the rights under copyright law).\n\n\nIf you use any part of this data in your research, please cite it (along with URL) using the following BibTeX entry."
] |
[
2788,
516,
8,
30,
83
] |
[
"passage: ",
"passage: TAGS\n#multilinguality-multilingual #language-Tigrinya #language-Balkan Romani #language-Standard Arabic #language-Metlatónoc Mixtec #language-Malayalam #language-Fijian #language-Somali #language-Caquinte #language-Friulian #language-Vietnamese #language-Malay (individual language) #language-Bambara #language-Cherokee #language-Central Mazahua #language-Yagua #language-Güilá Zapotec #language-Northern Yukaghir #language-Chakma #language-Southern Altai #language-Central Aymara #language-Ao Naga #language-Baoulé #language-Guarayu #language-Rundi #language-Hawaiian #language-Romagnol #language-Kaqchikel #language-Awa-Cuaiquer #language-French #language-Aguaruna #language-Drung #language-Iloko #language-Central Nahuatl #language-Tem #language-Hakha Chin #language-Tibetan #language-Burmese #language-Adyghe #language-Polish #language-Eastern Yiddish #language-Corsican #language-Otuho #language-Arabela #language-Manx #language-Gagauz #language-Bari #language-Afrikaans #language-Ligurian #language-Ibibio #language-Tonga (Zambia) #language-Central Atlas Tamazight #language-Romanian #language-Northwestern Ojibwa #language-Sanskrit #language-English #language-Bulu (Cameroon) #language-Pampanga #language-Northern Kissi #language-Zarma #language-Waorani #language-Samoan #language-Portuguese #language-Western Frisian #language-Ladino #language-Upper Guinea Crioulo #language-Tuvinian #language-Wayuu #language-Murui Huitoto #language-Ese Ejja #language-Kabiyè #language-Even #language-Nenets #language-Lingala #language-Tetun Dili #language-Cajamarca Quechua #language-Papantla Totonac #language-Navajo #language-Twi #language-Ossetian #language-West-Central Limba #language-Yanesha' #language-Hungarian #language-Lithuanian #language-Quechua #language-Chimborazo Highland Quichua #language-Bouna Kulango #language-Chiltepec Chinantec #language-Lao #language-Central Kanuri #language-Khakas #language-Javanese #language-Mam #language-Italian #language-Pipil #language-Afar 
#language-Ditammari #language-Swati #language-Buginese #language-Serbian #language-Kazakh #language-Minangkabau #language-Madurese #language-Oroqen #language-Tajik #language-Georgian #language-Uighur #language-Tzotzil #language-Haitian #language-Shan #language-Kabardian #language-Gilyak #language-Idoma #language-Karelian #language-Abkhazian #language-Totontepec Mixe #language-Scottish Gaelic #language-Southeast Ijo #language-Sãotomense #language-Northern Uzbek #language-Tahitian #language-Toba #language-Kirghiz #language-Ayacucho Quechua #language-Hmong Njua #language-Serer #language-Standard Latvian #language-Min Nan Chinese #language-Caribbean Hindustani #language-Soninke #language-Swahili (individual language) #language-Standard Estonian #language-Yanomamö #language-Dhivehi #language-Dzongkha #language-Spanish #language-Southern Qiandong Miao #language-Modern Greek (1453-) #language-Achinese #language-Waray (Philippines) #language-Indonesian #language-Jinyu Chinese #language-Falam Chin #language-Low German #language-Ewe #language-Telugu #language-Logudorese Sardinian #language-Pular #language-Makhuwa #language-Záparo #language-Mende (Sierra Leone) #language-Kaonde #language-Nzima #language-Eastern Tamang #language-Halh Mongolian #language-Dendi (Benin) #language-Pedi #language-Mozarabic #language-Pohnpeian #language-Fur #language-Hausa #language-Kituba (Democratic Republic of Congo) #language-Timne #language-Yoruba #language-Western Panjabi #language-Luxembourgish #language-Evenki #language-Shor #language-Nyemba #language-Niuean #language-Danish #language-Achuar-Shiwiar #language-Standard Moroccan Tamazight #language-Ojitlán Chinantec #language-Hebrew #language-Luba-Lulua #language-Cusco Quechua #language-Chachi #language-Pichis Ashéninka #language-Wu Chinese #language-Marshallese #language-Kimbundu #language-Sharanahua #language-Bengali #language-Lushai #language-Aja (Benin) #language-North Azerbaijani #language-Thai #language-Southern Dagaare 
#language-Icelandic #language-Susu #language-Kven Finnish #language-Shuar #language-Moro #language-Nganasan #language-Tosk Albanian #language-Bushi #language-Makonde #language-South Ndebele #language-Cebuano #language-Venda #language-Sundanese #language-Chayahuita #language-Maore Comorian #language-Turkish #language-Jola-Fonyi #language-Sinhala #language-Northern Pashto #language-Adangme #language-Papiamento #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Lozi #language-Panjabi #language-Ambo-Pasco Quechua #language-Northern Sami #language-Bamun #language-Turkmen #language-Upper Sorbian #language-Huastec #language-North Junín Quechua #language-Balinese #language-Chamorro #language-Yongbei Zhuang #language-Hindi #language-Tatar #language-Arequipa-La Unión Quechua #language-Gen #language-K'iche' #language-Mon #language-Bhojpuri #language-Uduk #language-Khasi #language-Kafa #language-Purepecha #language-Palauan #language-Macedonian #language-Shipibo-Conibo #language-Eastern Canadian Inuktitut #language-Luvale #language-Tagalog #language-Yapese #language-Yucateco #language-Komi-Permyak #language-Croatian #language-Eastern Maninkakan #language-Tetum #language-Ndonga #language-Candoshi-Shapra #language-Veps #language-Mandarin Chinese #language-Sango #language-Nyamwezi #language-Russian #language-Gonja #language-Gumuz #language-Krio #language-Mezquital Otomi #language-Lunda #language-Vai #language-Bislama #language-Mapudungun #language-Tswana #language-Irish #language-Hakka Chinese #language-Guinea Kpelle #language-Urarina #language-Ticuna #language-Xhosa #language-Walloon #language-Amahuaca #language-Mossi #language-Ladin #language-Bulgarian #language-Northern Conchucos Ancash Quechua #language-Central Bikol #language-Tedim Chin #language-Northeastern Dinka #language-Dagbani #language-Kekchí #language-Breton #language-Maori #language-Finnish #language-Yakut #language-Welsh #language-Kannada #language-Faroese #language-Swiss German #language-Secoya 
#language-Bemba (Zambia) #language-Bosnian #language-Bini #language-Chuvash #language-Tok Pisin #language-Amis #language-Orok #language-Lobi #language-Asturian #language-Norwegian Nynorsk #language-Scots #language-Khmer #language-Iranian Persian #language-Páez #language-Tamil #language-Igbo #language-Sidamo #language-Plateau Malagasy #language-Gujarati #language-Xiang Chinese #language-Kinyarwanda #language-Ganda #language-Salar #language-Konzo #language-Kasem #language-Japanese #language-Okiek #language-German #language-Rarotongan #language-Nigerian Pidgin #language-Hani #language-Venetian #language-Nanai #language-Southern Sotho #language-Seselwa Creole French #language-Nigerian Fulfulde #language-Nepali (individual language) #language-Nyanja #language-Kabuverdianu #language-Tai Dam #language-Romansh #language-Cashibo-Cacataibo #language-Chuukese #language-Kalaallisut #language-Moba #language-South Bolivian Quechua #language-Korean #language-Slovenian #language-Colorado #language-Shilluk #language-Zulu #language-Huaylas Ancash Quechua #language-Fon #language-Mi'kmaq #language-Dari #language-Magahi #language-Belarusian #language-Sichuan Yi #language-Marathi #language-Dyula #language-Bora #language-Swedish #language-Pijin #language-Maltese #language-Amharic #language-Umbundu #language-Montenegrin #language-Maithili #language-Tojolabal #language-Swampy Cree #language-Interlingua (International Auxiliary Language Association) #language-Baatonum #language-Cashinahua #language-Koongo #language-Occitan (post 1500) #language-Picard #language-Mískito #language-Latin #language-Margos-Yarowilca-Lauricocha Quechua #language-Waama #language-Urdu #language-Northern Kurdish #language-Ido #language-Ga #language-Esperanto #language-West Central Oromo #language-Catalan #language-Armenian #language-Asháninka #language-Sukuma #language-Paraguayan Guaraní #language-Gan Chinese #language-Chokwe #language-Tzeltal #language-Miahuatlán Zapotec #language-Czech #language-Chickasaw 
#language-Matsés #language-Nomatsiguenga #language-Kara-Kalpak #language-Tsonga #language-Pintupi-Luritja #language-Fanti #language-Ixcatlán Mazatec #language-Siona #language-Talysh #language-Basque #language-Dutch #language-Norwegian Bokmål #language-Wolof #language-Matu Chin #language-Shona #language-Tiv #language-Tonga (Tonga Islands) #language-Northern Qiandong Miao #language-Saraiki #language-Lamnso' #language-Macedo-Romanian #language-Garifuna #language-Galician #language-Yao #language-Nyankole #language-Assyrian Neo-Aramaic #language-Slovak #language-Ukrainian #language-Khün #language-Ngazidja Comorian #language-Amarakaeri #language-Yue Chinese #language-Crimean Tatar #language-Hiligaynon #license-cc0-1.0 #UDHR #udhr #language identification #LID #glot #GlotLID #arxiv-2309.13320 #arxiv-2310.16248 #region-us \n"
] |
6679918e665121d0de56d035e5f67f9590a10e71
|
# Dataset Card for "abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
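The card itself is empty, but the metadata further down in this row declares the schema (`recall`: int64; `article_title`, `topic`, `abstract`: string) and three splits. A minimal illustrative sketch of what a record looks like, with hypothetical values:

```python
# Hypothetical record matching the "abstracts" schema declared in this
# row's metadata (recall: int64; article_title, topic, abstract: string).
example = {
    "recall": 3,
    "article_title": "A Survey of Tidal Dynamics",
    "topic": "oceanography",
    "abstract": "We review the gravitational mechanisms behind ocean tides.",
}

# Split sizes as declared in the metadata block:
splits = {"train": 135922, "test": 16991, "valid": 16990}
print(sum(splits.values()))  # total number of examples across all splits
```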
|
WebauthorLLC/abstracts
|
[
"region:us"
] |
2023-10-22T18:03:41+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "recall", "dtype": "int64"}, {"name": "article_title", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 232932181, "num_examples": 135922}, {"name": "test", "num_bytes": 29105093, "num_examples": 16991}, {"name": "valid", "num_bytes": 29122441, "num_examples": 16990}], "download_size": 157167708, "dataset_size": 291159715}}
|
2023-10-22T18:03:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "abstracts"
More Information needed
|
[
"# Dataset Card for \"abstracts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"abstracts\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"abstracts\"\n\nMore Information needed"
] |
7a2793acacbb10deb75d38685b15899ff7e07d35
|
# Disclaimer
I am not the author of the dataset or the paper. I have just uploaded it for ease of availability. For all information please refer to the [website](https://instructor-embedding.github.io/)
# Dataset Card for "medi"
The MEDI data consists of a collection of 330 datasets from Super-NI (Super-NaturalInstructions), sentence-transformer embedding training data, and KILT, spanning a wide range of domains and tasks.
If you use the dataset, please cite the following papers, including Su et al., 2022, Wang et al., 2022, Petroni et al., 2021, and the sentence transformer embedding training data at https://huggingface.co/datasets/sentence-transformers/embedding-training-data.
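Per the `dataset_info` in this row, each MEDI row carries `query`, `pos`, and `neg` string sequences plus a `task_name`; in the INSTRUCTOR setup each sequence typically pairs an instruction with a text. A minimal illustrative sketch of the record shape (all values are hypothetical):

```python
# Illustrative MEDI-style record; the field names come from the
# dataset_info in this row, the values are hypothetical.
record = {
    "query": ["Represent the question for retrieval:",
              "What causes ocean tides?"],
    "pos": ["Represent the document for retrieval:",
            "Tides arise mainly from the Moon's gravitational pull."],
    "neg": ["Represent the document for retrieval:",
            "Photosynthesis converts light into chemical energy."],
    "task_name": "example_retrieval_task",
}

def split_pair(field):
    """Split an [instruction, text] sequence into its two parts."""
    instruction, text = field
    return instruction, text

instruction, text = split_pair(record["query"])
print(instruction)
print(record["task_name"])
```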
# Citation Information
```
@inproceedings{INSTRUCTOR,
title={One Embedder, Any Task: Instruction-Finetuned Text Embeddings},
author={Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu},
url={https://arxiv.org/abs/2212.09741},
year={2022},
}
@inproceedings{wang2022super,
title={Super-naturalinstructions: generalization via declarative instructions on 1600+ tasks},
author={Wang, Yizhong and Mishra, Swaroop and Alipoormolabashi, Pegah and Kordi, Yeganeh and Mirzaei, Amirreza and Arunkumar, Anjana and Ashok, Arjun and Dhanasekaran, Arut Selvan and Naik, Atharva and Stap, David and others},
year={2022},
organization={EMNLP}
}
@article{petroni2020kilt,
title={KILT: a benchmark for knowledge intensive language tasks},
author={Petroni, Fabio and Piktus, Aleksandra and Fan, Angela and Lewis, Patrick and Yazdani, Majid and De Cao, Nicola and Thorne, James and Jernite, Yacine and Karpukhin, Vladimir and Maillard, Jean and others},
journal={arXiv preprint arXiv:2009.02252},
year={2020}
}
```
|
maveriq/medi
|
[
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"language:en",
"arxiv:2212.09741",
"region:us"
] |
2023-10-22T18:06:09+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["feature-extraction"], "pretty_name": "Multitask Embeddings Data with Instructions (MEDI)", "dataset_info": {"features": [{"name": "query", "sequence": "string"}, {"name": "pos", "sequence": "string"}, {"name": "neg", "sequence": "string"}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2572523114, "num_examples": 1435000}], "download_size": 1232020798, "dataset_size": 2572523114}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T18:19:10+00:00
|
[
"2212.09741"
] |
[
"en"
] |
TAGS
#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #arxiv-2212.09741 #region-us
|
# Disclaimer
I am not the author of the dataset or the paper. I have just uploaded it for ease of availability. For all information please refer to the website
# Dataset Card for "medi"
The MEDI data consists of a collection of 330 datasets from Super-NI (Super-NaturalInstructions), sentence-transformer embedding training data, and KILT, spanning a wide range of domains and tasks.
If you use the dataset, please cite the following papers, including Su et al., 2022, Wang et al., 2022, Petroni et al., 2021, and the sentence transformer embedding training data at URL
|
[
"# Disclaimer\nI am not the author of the dataset or the paper. I have just uploaded it for ease of availability. For all information please refer to the website",
"# Dataset Card for \"medi\"\n\nThe MEDI data consists of a collection of 330 datasets from Super-NI (Super-NaturalInstructions), sentence-transformer embedding training data, and KILT, spanning a wide range of domains and tasks.\n\nIf you use the dataset, please cite the following papers, including Su et al., 2022, Wang et al., 2022, Petroni et al., 2021, and the sentence transformer embedding training data at URL"
] |
[
"TAGS\n#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #arxiv-2212.09741 #region-us \n",
"# Disclaimer\nI am not the author of the dataset or the paper. I have just uploaded it for ease of availability. For all information please refer to the website",
"# Dataset Card for \"medi\"\n\nThe MEDI data consists of a collection of 330 datasets from Super-NI (Super-NaturalInstructions), sentence-transformer embedding training data, and KILT, spanning a wide range of domains and tasks.\n\nIf you use the dataset, please cite the following papers, including Su et al., 2022, Wang et al., 2022, Petroni et al., 2021, and the sentence transformer embedding training data at URL"
] |
[
43,
37,
105
] |
[
"passage: TAGS\n#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #arxiv-2212.09741 #region-us \n# Disclaimer\nI am not the author of the dataset or the paper. I have just uploaded it for ease of availability. For all information please refer to the website# Dataset Card for \"medi\"\n\nThe MEDI data consists of a collection of 330 datasets from Super-NI (Super-NaturalInstructions), sentence-transformer embedding training data, and KILT, spanning a wide range of domains and tasks.\n\nIf you use the dataset, please cite the following papers, including Su et al., 2022, Wang et al., 2022, Petroni et al., 2021, and the sentence transformer embedding training data at URL"
] |
955c8b2199b5952913caa80fbedfc8158903f1c5
|
# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-l2-7b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T19:06:34.610591](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0/blob/main/results_2023-10-22T19-06-34.610591.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2790478187919463,
"em_stderr": 0.004593378120842089,
"f1": 0.3568791946308748,
"f1_stderr": 0.004541856353836489,
"acc": 0.3714854775657577,
"acc_stderr": 0.008787294914639698
},
"harness|drop|3": {
"em": 0.2790478187919463,
"em_stderr": 0.004593378120842089,
"f1": 0.3568791946308748,
"f1_stderr": 0.004541856353836489
},
"harness|gsm8k|5": {
"acc": 0.03184230477634572,
"acc_stderr": 0.004836348558260957
},
"harness|winogrande|5": {
"acc": 0.7111286503551697,
"acc_stderr": 0.01273824127101844
}
}
```
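As a quick consistency check on the JSON above, the `acc` reported under `all` appears to be the unweighted mean of the `gsm8k` and `winogrande` accuracies (the `drop` task is scored with em/f1 instead):

```python
# Recompute the aggregate "acc" from the per-task accuracies reported above.
results = {
    "harness|gsm8k|5": {"acc": 0.03184230477634572},
    "harness|winogrande|5": {"acc": 0.7111286503551697},
}
accs = [v["acc"] for v in results.values()]
mean_acc = sum(accs) / len(accs)
print(mean_acc)  # ~0.371485, the value reported under "all" -> "acc"
```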
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0
|
[
"region:us"
] |
2023-10-22T18:06:38+00:00
|
{"pretty_name": "Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/airoboros-l2-7b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T19:06:34.610591](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0/blob/main/results_2023-10-22T19-06-34.610591.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2790478187919463,\n \"em_stderr\": 0.004593378120842089,\n \"f1\": 0.3568791946308748,\n \"f1_stderr\": 0.004541856353836489,\n \"acc\": 0.3714854775657577,\n \"acc_stderr\": 0.008787294914639698\n },\n \"harness|drop|3\": {\n \"em\": 0.2790478187919463,\n \"em_stderr\": 0.004593378120842089,\n \"f1\": 0.3568791946308748,\n \"f1_stderr\": 0.004541856353836489\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03184230477634572,\n \"acc_stderr\": 0.004836348558260957\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7111286503551697,\n \"acc_stderr\": 0.01273824127101844\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T19_06_34.610591", "path": ["**/details_harness|drop|3_2023-10-22T19-06-34.610591.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T19-06-34.610591.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T19_06_34.610591", "path": ["**/details_harness|gsm8k|5_2023-10-22T19-06-34.610591.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T19-06-34.610591.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T19_06_34.610591", "path": ["**/details_harness|winogrande|5_2023-10-22T19-06-34.610591.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T19-06-34.610591.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T19_06_34.610591", "path": ["results_2023-10-22T19-06-34.610591.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T19-06-34.610591.parquet"]}]}]}
|
2023-10-22T18:06:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model jondurbin/airoboros-l2-7b-gpt4-2.0 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
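The loading snippet itself is elided in this copy of the card; a minimal sketch, assuming the standard Open LLM Leaderboard details layout (the repo id is inferred from the leaderboard's naming pattern, and the config names come from this card's metadata), would be:

```python
def load_details(config="harness_winogrande_5", split="latest"):
    """Load one evaluation config; "latest" always points at the most recent run."""
    # Lazy import so the sketch can be inspected without `datasets` installed.
    from datasets import load_dataset
    # Repo id inferred from the leaderboard's naming pattern; other configs per
    # this card's metadata: harness_drop_3, harness_gsm8k_5, results.
    repo = "open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0"
    return load_dataset(repo, config, split=split)
```

Timestamped splits such as `2023_10_22T19_06_34.610591` are also available for pinning a specific run.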
## Latest results
These are the latest results from run 2023-10-22T19:06:34.610591 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
    "### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-l2-7b-gpt4-2.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
    "## Latest results\n\nThese are the latest results from run 2023-10-22T19:06:34.610591 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
    "### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-l2-7b-gpt4-2.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
    "## Latest results\n\nThese are the latest results from run 2023-10-22T19:06:34.610591 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
    "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-l2-7b-gpt4-2.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T19:06:34.610591 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6e7e616346fa0c24145657ee17f98622caef20a6
|
## Magic The Gathering Card Detection Dataset
This dataset is dedicated to people wanting to build card detection models.
It will emulate MTG cards in random positions and provide the visible corner positions for each card as well as the direction to the next corner for each corner.
### Example

It contains 10k 1024x1024 pictures in the train split and 3k in the test split.
## Structure
Each row of the dataset contains:
- id: (int) image id
- image: (binary) The binary image
- annotation: (array<point>) An array of corner representations:
- x: float [0, 1] x relative position in the image
- y: float [0, 1] y relative position in the image
- visible: bool Is the point visible or hidden by other cards
- angle: float [-PI, PI] angle of the vector going to the next corner
- corner_id: int [0, 1, 2, 3] which card corner (top left, top right, bottom right, bottom left)
- A string containing JSON data: all metadata associated with each card present in the frame if one wants to go further [rarity detection / frame types / artists / mana values / etc...]
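Since the annotation column is stored as plain float sequences (per this card's metadata, `sequence` of `sequence` of `float64`), each corner can be unpacked into the fields above. A minimal sketch, assuming the five values are packed in the documented order (x, y, visible, angle, corner_id):

```python
import math
from dataclasses import dataclass


@dataclass
class Corner:
    x: float        # relative position in [0, 1]
    y: float        # relative position in [0, 1]
    visible: bool   # is the point visible or hidden by other cards
    angle: float    # [-pi, pi], direction of the vector to the next corner
    corner_id: int  # 0=top left, 1=top right, 2=bottom right, 3=bottom left


def parse_annotation(raw):
    """Convert the flat float sequences of one row into Corner objects."""
    return [Corner(x, y, bool(v), a, int(c)) for x, y, v, a, c in raw]


# Hypothetical annotation fragment: two corners of one card.
raw = [[0.10, 0.20, 1.0, -math.pi / 2, 0.0],
       [0.30, 0.20, 0.0, 0.0, 1.0]]
corners = parse_annotation(raw)
```

The field order here is an assumption taken from the listing above; verify it against a decoded row before relying on it.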
## Credits:
This dataset is based on other existing MIT-licensed datasets:
- MTG-Json
- Scryfall
This project is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast.
©Wizards of the Coast LLC.
|
gabraken/mtg-detection
|
[
"task_categories:object-detection",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"mtg",
"detection",
"synthetic",
"region:us"
] |
2023-10-22T18:11:00+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection"], "pretty_name": "Magic The Gathering Card Detection Dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "binary"}, {"name": "annotation", "sequence": {"sequence": "float64"}}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22088296176, "num_examples": 10000}, {"name": "test", "num_bytes": 6615226028, "num_examples": 3000}], "download_size": 28512980450, "dataset_size": 28703522204}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["mtg", "detection", "synthetic"]}
|
2023-10-23T18:47:31+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-object-detection #size_categories-10K<n<100K #language-English #license-mit #mtg #detection #synthetic #region-us
|
## Magic The Gathering Card Detection Dataset
This dataset is dedicated to people wanting to build card detection models.
It will emulate MTG cards in random positions and provide the visible corner positions for each card as well as the direction to the next corner for each corner.
### Example
!card_000019_debug.png =250x250
It contains 10k 1024x1024 pictures in the train split and 3k in the test split.
## Structure
Each row of the dataset contains:
- id: (int) image id
- image: (binary) The binary image
- annotation: (array<point>) An array of corner representations:
- x: float [0, 1] x relative position in the image
- y: float [0, 1] y relative position in the image
- visible: bool Is the point visible or hidden by other cards
- angle: float [-PI, PI] angle of the vector going to the next corner
- corner_id: int [0, 1, 2, 3] which card corner (top left, top right, bottom right, bottom left)
- A string containing JSON data: all metadata associated with each card present in the frame if one wants to go further [rarity detection / frame types / artists / mana values / etc...]
## Credits:
This dataset is based on other existing MIT-licensed datasets:
- MTG-Json
- Scryfall
This project is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast.
©Wizards of the Coast LLC.
|
[
    "## Magic The Gathering Card Detection Dataset\n\nThis dataset is dedicated to people wanting to build card detection models.\n\nIt will emulate MTG cards in random positions and provide the visible corner positions for each card as well as the direction to the next corner for each corner.",
    "### Example\n\n!card_000019_debug.png =250x250\nIt contains 10k 1024x1024 pictures in the train split and 3k in the test split.",
    "## Structure\nEach row of the dataset contains:\n\n- id: (int) image id\n- image: (binary) The binary image\n- annotation: (array<point>) An array of corner representations:\n - x: float [0, 1] x relative position in the image\n - y: float [0, 1] y relative position in the image\n - visible: bool Is the point visible or hidden by other cards\n - angle: float [-PI, PI] angle of the vector going to the next corner\n - corner_id: int [0, 1, 2, 3] which card corner (top left, top right, bottom right, bottom left)\n- A string containing JSON data: all metadata associated with each card present in the frame if one wants to go further [rarity detection / frame types / artists / mana values / etc...]",
    "## Credits: \nThis dataset is based on other existing MIT-licensed datasets:\n- MTG-Json\n- Scryfall\n\nThis project is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast. \n©Wizards of the Coast LLC."
] |
[
"TAGS\n#task_categories-object-detection #size_categories-10K<n<100K #language-English #license-mit #mtg #detection #synthetic #region-us \n",
    "## Magic The Gathering Card Detection Dataset\n\nThis dataset is dedicated to people wanting to build card detection models.\n\nIt will emulate MTG cards in random positions and provide the visible corner positions for each card as well as the direction to the next corner for each corner.",
    "### Example\n\n!card_000019_debug.png =250x250\nIt contains 10k 1024x1024 pictures in the train split and 3k in the test split.",
    "## Structure\nEach row of the dataset contains:\n\n- id: (int) image id\n- image: (binary) The binary image\n- annotation: (array<point>) An array of corner representations:\n - x: float [0, 1] x relative position in the image\n - y: float [0, 1] y relative position in the image\n - visible: bool Is the point visible or hidden by other cards\n - angle: float [-PI, PI] angle of the vector going to the next corner\n - corner_id: int [0, 1, 2, 3] which card corner (top left, top right, bottom right, bottom left)\n- A string containing JSON data: all metadata associated with each card present in the frame if one wants to go further [rarity detection / frame types / artists / mana values / etc...]",
    "## Credits: \nThis dataset is based on other existing MIT-licensed datasets:\n- MTG-Json\n- Scryfall\n\nThis project is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast. \n©Wizards of the Coast LLC."
] |
[
49,
63,
40,
195,
77
] |
[
    "passage: TAGS\n#task_categories-object-detection #size_categories-10K<n<100K #language-English #license-mit #mtg #detection #synthetic #region-us \n## Magic The Gathering Card Detection Dataset\n\nThis dataset is dedicated to people wanting to build card detection models.\n\nIt will emulate MTG cards in random positions and provide the visible corner positions for each card as well as the direction to the next corner for each corner.### Example\n\n!card_000019_debug.png =250x250\nIt contains 10k 1024x1024 pictures in the train split and 3k in the test split.## Structure\nEach row of the dataset contains:\n\n- id: (int) image id\n- image: (binary) The binary image\n- annotation: (array<point>) An array of corner representations:\n - x: float [0, 1] x relative position in the image\n - y: float [0, 1] y relative position in the image\n - visible: bool Is the point visible or hidden by other cards\n - angle: float [-PI, PI] angle of the vector going to the next corner\n - corner_id: int [0, 1, 2, 3] which card corner (top left, top right, bottom right, bottom left)\n- A string containing JSON data: all metadata associated with each card present in the frame if one wants to go further [rarity detection / frame types / artists / mana values / etc...]## Credits: \nThis dataset is based on other existing MIT-licensed datasets:\n- MTG-Json\n- Scryfall\n\nThis project is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast. \n©Wizards of the Coast LLC."
] |
6e6ccc44c771f2313f44d4cb411374e2d4fe561c
|
# Dataset Card for "oxford-flowers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lilobe8614/oxford-flowers
|
[
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"source_datasets:https://www.robots.ox.ac.uk/~vgg/data/flowers",
"license:unknown",
"flowers",
"oxford",
"region:us"
] |
2023-10-22T18:24:49+00:00
|
{"license": ["unknown"], "source_datasets": "https://www.robots.ox.ac.uk/~vgg/data/flowers", "task_categories": ["image-classification", "unconditional-image-generation"], "pretty_name": "Oxford Flowers Dataset", "tags": ["flowers", "oxford"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "10", "2": "100", "3": "101", "4": "102", "5": "11", "6": "12", "7": "13", "8": "14", "9": "15", "10": "16", "11": "17", "12": "18", "13": "19", "14": "2", "15": "20", "16": "21", "17": "22", "18": "23", "19": "24", "20": "25", "21": "26", "22": "27", "23": "28", "24": "29", "25": "3", "26": "30", "27": "31", "28": "32", "29": "33", "30": "34", "31": "35", "32": "36", "33": "37", "34": "38", "35": "39", "36": "4", "37": "40", "38": "41", "39": "42", "40": "43", "41": "44", "42": "45", "43": "46", "44": "47", "45": "48", "46": "49", "47": "5", "48": "50", "49": "51", "50": "52", "51": "53", "52": "54", "53": "55", "54": "56", "55": "57", "56": "58", "57": "59", "58": "6", "59": "60", "60": "61", "61": "62", "62": "63", "63": "64", "64": "65", "65": "66", "66": "67", "67": "68", "68": "69", "69": "7", "70": "70", "71": "71", "72": "72", "73": "73", "74": "74", "75": "75", "76": "76", "77": "77", "78": "78", "79": "79", "80": "8", "81": "80", "82": "81", "83": "82", "84": "83", "85": "84", "86": "85", "87": "86", "88": "87", "89": "88", "90": "89", "91": "9", "92": "90", "93": "91", "94": "92", "95": "93", "96": "94", "97": "95", "98": "96", "99": "97", "100": "98", "101": "99"}}}}], "splits": [{"name": "train", "num_bytes": 308119477.446, "num_examples": 7169}, {"name": "test", "num_bytes": 43247670.14, "num_examples": 1020}], "download_size": 346597973, "dataset_size": 351367147.58599997}}
|
2023-10-22T18:35:26+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #task_categories-unconditional-image-generation #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/flowers #license-unknown #flowers #oxford #region-us
|
# Dataset Card for "oxford-flowers"
More Information needed
|
[
"# Dataset Card for \"oxford-flowers\"\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-image-classification #task_categories-unconditional-image-generation #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/flowers #license-unknown #flowers #oxford #region-us \n",
"# Dataset Card for \"oxford-flowers\"\n\nMore Information needed"
] |
[
74,
15
] |
[
"passage: TAGS\n#task_categories-image-classification #task_categories-unconditional-image-generation #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/flowers #license-unknown #flowers #oxford #region-us \n# Dataset Card for \"oxford-flowers\"\n\nMore Information needed"
] |
3e657ef5b68712bcbefcfa25e3e59805708369cb
|
# Dataset Card for "LittleTown"
[Language models are greedy reasoners](https://arxiv.org/pdf/2210.01240.pdf), so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.
90% of the examples contain backtracking.
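A minimal loading sketch (the `question`/`answer` feature names come from this card's metadata; treat the snippet as illustrative rather than canonical):

```python
def load_littletown(split="train"):
    """Fetch the dataset; each row has "question" and "answer" string fields."""
    # Lazy import so this sketch stays importable without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("euclaise/LittleTown", split=split)
```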
License:
```
Zero-Clause BSD
=============
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE
FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
```
|
euclaise/LittleTown
|
[
"size_categories:10K<n<100K",
"license:other",
"arxiv:2210.01240",
"region:us"
] |
2023-10-22T18:30:20+00:00
|
{"license": "other", "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75640201, "num_examples": 100000}], "download_size": 16577014, "dataset_size": 75640201}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T16:04:15+00:00
|
[
"2210.01240"
] |
[] |
TAGS
#size_categories-10K<n<100K #license-other #arxiv-2210.01240 #region-us
|
# Dataset Card for "LittleTown"
Language models are greedy reasoners, so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.
90% of the examples contain backtracking.
License:
|
[
"# Dataset Card for \"LittleTown\"\n\nLanguage models are greedy reasoners, so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.\n\n90% of the examples contain backtracking.\n\nLicense:"
] |
[
"TAGS\n#size_categories-10K<n<100K #license-other #arxiv-2210.01240 #region-us \n",
"# Dataset Card for \"LittleTown\"\n\nLanguage models are greedy reasoners, so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.\n\n90% of the examples contain backtracking.\n\nLicense:"
] |
[
31,
68
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #license-other #arxiv-2210.01240 #region-us \n# Dataset Card for \"LittleTown\"\n\nLanguage models are greedy reasoners, so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.\n\n90% of the examples contain backtracking.\n\nLicense:"
] |
7438b68894cac515f68991117d8a30d707059039
|
# Dataset Card for "sentences_triplets_secop2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Santp98/sentences_triplets_secop2
|
[
"region:us"
] |
2023-10-22T18:49:39+00:00
|
{"dataset_info": {"features": [{"name": "segment_code_pos", "dtype": "string"}, {"name": "segment_code_neg", "dtype": "string"}, {"name": "anchor_sent", "dtype": "string"}, {"name": "positive_sent", "dtype": "string"}, {"name": "negative_sent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 556449980, "num_examples": 788696}], "download_size": 180364981, "dataset_size": 556449980}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T18:49:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sentences_triplets_secop2"
More Information needed
|
[
"# Dataset Card for \"sentences_triplets_secop2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sentences_triplets_secop2\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sentences_triplets_secop2\"\n\nMore Information needed"
] |
bff7c23854848e32a8de2a925d4b7f0ecd2da31e
|
# Dataset Card for "mmlu_aux_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
atmallen/mmlu_aux_binary
|
[
"region:us"
] |
2023-10-22T19:06:00+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int32"}, {"name": "statement", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}], "splits": [{"name": "validation", "num_bytes": 7300371, "num_examples": 4036}, {"name": "test", "num_bytes": 69452850, "num_examples": 37506}], "download_size": 46452233, "dataset_size": 76753221}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-22T20:41:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mmlu_aux_binary"
More Information needed
|
[
"# Dataset Card for \"mmlu_aux_binary\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mmlu_aux_binary\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mmlu_aux_binary\"\n\nMore Information needed"
] |
8cbfe1fda1db08ac8463c67661a800284c8a6578
|
# Dataset Card for "mmlu_aux_chat_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
atmallen/mmlu_aux_chat_binary
|
[
"region:us"
] |
2023-10-22T19:06:14+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int32"}, {"name": "statement", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}], "splits": [{"name": "validation", "num_bytes": 10714952, "num_examples": 4036}, {"name": "test", "num_bytes": 101960767, "num_examples": 37506}], "download_size": 50210816, "dataset_size": 112675719}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-22T20:41:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mmlu_aux_chat_binary"
More Information needed
|
[
"# Dataset Card for \"mmlu_aux_chat_binary\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mmlu_aux_chat_binary\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mmlu_aux_chat_binary\"\n\nMore Information needed"
] |
b33628e0c81f4263f80d35936ec191732e07ac1b
|
# Dataset Card for Evaluation run of TheBloke/Kimiko-13B-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/Kimiko-13B-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/Kimiko-13B-fp16](https://huggingface.co/TheBloke/Kimiko-13B-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T20:29:03.807457](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16/blob/main/results_2023-10-22T20-29-03.807457.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388425,
"f1": 0.06370176174496635,
"f1_stderr": 0.0013821226935642709,
"acc": 0.42755597415707414,
"acc_stderr": 0.009839681635672129
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388425,
"f1": 0.06370176174496635,
"f1_stderr": 0.0013821226935642709
},
"harness|gsm8k|5": {
"acc": 0.08794541319181198,
"acc_stderr": 0.007801162197487721
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856539
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16
|
[
"region:us"
] |
2023-10-22T19:29:07+00:00
|
{"pretty_name": "Evaluation run of TheBloke/Kimiko-13B-fp16", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/Kimiko-13B-fp16](https://huggingface.co/TheBloke/Kimiko-13B-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T20:29:03.807457](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16/blob/main/results_2023-10-22T20-29-03.807457.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388425,\n \"f1\": 0.06370176174496635,\n \"f1_stderr\": 0.0013821226935642709,\n \"acc\": 0.42755597415707414,\n \"acc_stderr\": 0.009839681635672129\n },\n \"harness|drop|3\": {\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388425,\n \"f1\": 0.06370176174496635,\n \"f1_stderr\": 0.0013821226935642709\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08794541319181198,\n \"acc_stderr\": 0.007801162197487721\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856539\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/Kimiko-13B-fp16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T20_29_03.807457", "path": ["**/details_harness|drop|3_2023-10-22T20-29-03.807457.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T20-29-03.807457.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T20_29_03.807457", "path": ["**/details_harness|gsm8k|5_2023-10-22T20-29-03.807457.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T20-29-03.807457.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T20_29_03.807457", "path": ["**/details_harness|winogrande|5_2023-10-22T20-29-03.807457.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T20-29-03.807457.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T20_29_03.807457", "path": ["results_2023-10-22T20-29-03.807457.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T20-29-03.807457.parquet"]}]}]}
|
2023-10-22T19:29:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/Kimiko-13B-fp16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/Kimiko-13B-fp16 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
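The code snippet was stripped from this rendering; a minimal sketch, reconstructed from the full card's metadata, showing how the per-run split names are derived from the run timestamp (dashes and colons become underscores) and how the load call looks (the `load_dataset` call itself needs the `datasets` library and network access):

```python
# Per-run split names in this repo are the run timestamp with "-" and ":"
# replaced by "_", e.g. "2023-10-22T20:29:03.807457" -> "2023_10_22T20_29_03.807457".
timestamp = "2023-10-22T20:29:03.807457"
split_name = timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2023_10_22T20_29_03.807457

# Loading the latest Winogrande details (requires network access):
# from datasets import load_dataset
# data = load_dataset("open-llm-leaderboard/details_TheBloke__Kimiko-13B-fp16",
#                     "harness_winogrande_5",
#                     split="train")
```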
## Latest results
These are the latest results from run 2023-10-22T20:29:03.807457 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/Kimiko-13B-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Kimiko-13B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T20:29:03.807457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/Kimiko-13B-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Kimiko-13B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T20:29:03.807457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/Kimiko-13B-fp16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/Kimiko-13B-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T20:29:03.807457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c87bd4794c71dea12adb242cacda78c1232d8a0e
|
# Dataset Card for "math_dataset_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_unified
|
[
"region:us"
] |
2023-10-22T19:55:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 70546996, "num_examples": 49999}], "download_size": 32779910, "dataset_size": 70546996}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T19:55:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_unified\"\n\nMore Information needed"
] |
2da188e77d91a96fbd2ed02afae3507cdb8cc8a8
|
# Dataset Card for "math_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_embedded
|
[
"region:us"
] |
2023-10-22T19:57:39+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 275542896, "num_examples": 49999}], "download_size": 133294046, "dataset_size": 275542896}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T19:57:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_embedded\"\n\nMore Information needed"
] |
3ac4b31c24cdb1929abab2b01826819f09f470c2
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T20:59:08.755164](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ/blob/main/results_2023-10-22T20-59-08.755164.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08850671140939598,
"em_stderr": 0.0029087372393749897,
"f1": 0.1645427852348987,
"f1_stderr": 0.0031594666528343297,
"acc": 0.512323080853987,
"acc_stderr": 0.011759203620772818
},
"harness|drop|3": {
"em": 0.08850671140939598,
"em_stderr": 0.0029087372393749897,
"f1": 0.1645427852348987,
"f1_stderr": 0.0031594666528343297
},
"harness|gsm8k|5": {
"acc": 0.24564063684609552,
"acc_stderr": 0.011857183603902227
},
"harness|winogrande|5": {
"acc": 0.7790055248618785,
"acc_stderr": 0.011661223637643407
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ
|
[
"region:us"
] |
2023-10-22T19:59:12+00:00
|
{"pretty_name": "Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T20:59:08.755164](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ/blob/main/results_2023-10-22T20-59-08.755164.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08850671140939598,\n \"em_stderr\": 0.0029087372393749897,\n \"f1\": 0.1645427852348987,\n \"f1_stderr\": 0.0031594666528343297,\n \"acc\": 0.512323080853987,\n \"acc_stderr\": 0.011759203620772818\n },\n \"harness|drop|3\": {\n \"em\": 0.08850671140939598,\n \"em_stderr\": 0.0029087372393749897,\n \"f1\": 0.1645427852348987,\n \"f1_stderr\": 0.0031594666528343297\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.24564063684609552,\n \"acc_stderr\": 0.011857183603902227\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7790055248618785,\n \"acc_stderr\": 0.011661223637643407\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T20_59_08.755164", "path": ["**/details_harness|drop|3_2023-10-22T20-59-08.755164.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T20-59-08.755164.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T20_59_08.755164", "path": ["**/details_harness|gsm8k|5_2023-10-22T20-59-08.755164.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T20-59-08.755164.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T20_59_08.755164", "path": ["**/details_harness|winogrande|5_2023-10-22T20-59-08.755164.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T20-59-08.755164.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T20_59_08.755164", "path": ["results_2023-10-22T20-59-08.755164.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T20-59-08.755164.parquet"]}]}]}
|
2023-10-22T19:59:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
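The code snippet was stripped from this rendering; a minimal sketch, reconstructed from the full card above, showing how the per-run split names are derived from the run timestamp (dashes and colons become underscores) and how the load call looks (the `load_dataset` call itself needs the `datasets` library and network access):

```python
# Per-run split names in this repo are the run timestamp with "-" and ":"
# replaced by "_", e.g. "2023-10-22T20:59:08.755164" -> "2023_10_22T20_59_08.755164".
timestamp = "2023-10-22T20:59:08.755164"
split_name = timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2023_10_22T20_59_08.755164

# Loading the latest Winogrande details (requires network access):
# from datasets import load_dataset
# data = load_dataset("open-llm-leaderboard/details_TheBloke__WizardLM-33B-V1.0-Uncensored-GPTQ",
#                     "harness_winogrande_5",
#                     split="train")
```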
## Latest results
These are the latest results from run 2023-10-22T20:59:08.755164 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T20:59:08.755164(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T20:59:08.755164(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
30,
31,
178,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T20:59:08.755164(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b633008d036c774bddaa068f4b11fd4f89dd2436
|
# Dataset Card for "chemistry_dataset_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_unified
|
[
"region:us"
] |
2023-10-22T20:02:10+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 44945786, "num_examples": 19999}], "download_size": 20574764, "dataset_size": 44945786}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:02:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_unified\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_unified\"\n\nMore Information needed"
] |
6680e6015f7300c58752feb8a64082ec5ce28aa6
|
# Dataset Card for "chemistry_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_embedded
|
[
"region:us"
] |
2023-10-22T20:03:13+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 126941686, "num_examples": 19999}], "download_size": 60774351, "dataset_size": 126941686}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:03:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_embedded\"\n\nMore Information needed"
] |
97e5788503247990a5d421aeabeb3cae48e243d0
|
# Dataset Card for "math_dataset_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T20:10:45+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5436126, "num_examples": 8922}], "download_size": 2359273, "dataset_size": 5436126}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:10:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
bd6fc1473ceae2542caf021f8bf3b06c3d9c3d4a
|
# Dataset Card for "math_dataset_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T20:10:49+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5212494, "num_examples": 4460}], "download_size": 2308374, "dataset_size": 5212494}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:10:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
68f2fea431c20623390d19b584ec1b6a0cb3b9cb
|
# Dataset Card for "math_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T20:10:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 41690673, "num_examples": 4461}], "download_size": 11292125, "dataset_size": 41690673}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:10:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
f669b9a931b8850ca28a108eb05018a32abf5075
|
# Dataset Card for "math_dataset_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T20:11:19+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11241663, "num_examples": 11084}], "download_size": 4969226, "dataset_size": 11241663}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:11:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
c83e4e2213c0b0e74b9e8d3ab8603cfd64a7032e
|
# Dataset Card for "math_dataset_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T20:11:23+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10963017, "num_examples": 5541}], "download_size": 4973144, "dataset_size": 10963017}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:11:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
6c159a4904b70be5d6d4a6b822cb24d6c8a478f9
|
# Dataset Card for "math_dataset_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T20:11:26+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 56281497, "num_examples": 5542}], "download_size": 16025075, "dataset_size": 56281497}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:11:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
2b13868b8c803c9721dea01c30338fee6536f957
|
# Dataset Card for "math_dataset_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T20:11:55+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 21385465, "num_examples": 32672}], "download_size": 9347465, "dataset_size": 21385465}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:11:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
bd947ed909778976dcab8e957a63025ff92d5137
|
# Dataset Card for "math_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T20:12:00+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20563494, "num_examples": 16335}], "download_size": 9453917, "dataset_size": 20563494}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:12:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
08fbb0e1736fddbb1ef2dcfbc4fc91cff26b45d0
|
# Dataset Card for "math_dataset_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T20:12:03+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 154148137, "num_examples": 16336}], "download_size": 42055155, "dataset_size": 154148137}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:12:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
c31cfe07e7d5677e1994a3618db65ceb8a7b43ce
|
# Dataset Card for "math_dataset_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T20:12:34+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29101840, "num_examples": 37310}], "download_size": 13039875, "dataset_size": 29101840}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:12:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
0d0d0d8f9aade9835d468381ab334803d04aff9a
|
# Dataset Card for "math_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T20:12:40+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28168082, "num_examples": 18654}], "download_size": 13122952, "dataset_size": 28168082}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:12:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
8cd016e3ec023c922b8cd7beec9ea8d38984a80f
|
# Dataset Card for "math_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T20:12:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 180711025, "num_examples": 18655}], "download_size": 50145228, "dataset_size": 180711025}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:12:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
de251c3e19e3e6a8b039e6f710768cc81bda66cf
|
# Dataset Card for "math_dataset_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T20:13:14+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7231825, "num_examples": 10010}], "download_size": 3124973, "dataset_size": 7231825}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:13:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
ad2c9cfdc3512cb1900f08b6f67ba88874730453
|
# Dataset Card for "math_dataset_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T20:13:17+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6977267, "num_examples": 5004}], "download_size": 3136315, "dataset_size": 6977267}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:13:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
53a95a9ccc23ceb5d31777967917e892aba36bd8
|
# Dataset Card for "math_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/math_dataset_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T20:13:20+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 47907460, "num_examples": 5005}], "download_size": 13098592, "dataset_size": 47907460}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:13:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "math_dataset_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"math_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"math_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"math_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
4d6703e8579c236c722c2a2bfb6a0a0c786fc85c
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T20:14:47+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 14956432, "num_examples": 11094}], "download_size": 6327359, "dataset_size": 14956432}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:14:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_0_std\"\n\nMore Information needed"
] |
f764095e85a26a5b2be63f58376953a640dcb11d
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T20:14:50+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14675587, "num_examples": 5546}], "download_size": 6391991, "dataset_size": 14675587}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:14:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
76bc20e6482608b0ddaff019ae1aff9cdf2fdca7
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T20:14:52+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 60036901, "num_examples": 5547}], "download_size": 17467431, "dataset_size": 60036901}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:14:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_0\"\n\nMore Information needed"
] |
ad5dd37286a2999aa1028c373a63171ae7c9537d
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T20:15:11+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6207644, "num_examples": 5502}], "download_size": 2530400, "dataset_size": 6207644}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_1_std\"\n\nMore Information needed"
] |
2ac269769565bff11c8c0c7cd7fc40586617768b
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T20:15:14+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6068325, "num_examples": 2750}], "download_size": 2563564, "dataset_size": 6068325}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
9e012bc15fa7fc6669babc00a8139513ec9268b8
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T20:15:16+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 28565021, "num_examples": 2751}], "download_size": 8067836, "dataset_size": 28565021}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_1\"\n\nMore Information needed"
] |
915c36efb5549e604a254703e0fd8eb9b6c1326a
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T20:15:33+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4782401, "num_examples": 6678}], "download_size": 1923152, "dataset_size": 4782401}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_2_std\"\n\nMore Information needed"
] |
c756da44eb0acfe314f70e59b81cd398e78eb4b3
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T20:15:36+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4614663, "num_examples": 3338}], "download_size": 1961709, "dataset_size": 4614663}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
73d80972745978a7b79de4f5be22db658d4ae859
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T20:15:38+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 31918454, "num_examples": 3339}], "download_size": 8651715, "dataset_size": 31918454}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_2\"\n\nMore Information needed"
] |
531dc146d09cb60312a227096718c0f17d71b26e
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T20:15:55+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16204599, "num_examples": 10664}], "download_size": 7455328, "dataset_size": 16204599}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:15:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_3_std\"\n\nMore Information needed"
] |
2dd4fdbd82df0ea8927721f7597781c60b29578e
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T20:15:58+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15934846, "num_examples": 5331}], "download_size": 7586850, "dataset_size": 15934846}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:16:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
dda777278c2a6c6eb157205b8cc09ca77b90dffa
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T20:16:00+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 59537763, "num_examples": 5332}], "download_size": 18190714, "dataset_size": 59537763}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:16:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_3\"\n\nMore Information needed"
] |
f5d1ec764e2124b7a5b9c29fa87378e8415d072b
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T20:16:18+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4334633, "num_examples": 6060}], "download_size": 1851846, "dataset_size": 4334633}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:16:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_4_std\"\n\nMore Information needed"
] |
8dc389f74d94664248121d29460c2fbeef7a0084
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T20:16:21+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4182760, "num_examples": 3029}], "download_size": 1880227, "dataset_size": 4182760}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:16:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
2d0efe2773fecd4016b794f2904284249eda637c
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/chemistry_dataset_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T20:16:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 28959443, "num_examples": 3030}], "download_size": 7966037, "dataset_size": 28959443}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T20:16:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chemistry_dataset_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chemistry_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chemistry_dataset_standardized_cluster_4\"\n\nMore Information needed"
] |
fce3e37c00175ab7656239c0b4580d07870c32df
|
The dataset consists of the descriptions and comments about the concepts in Dublin Core ontology elements.
|
BOP-Berlin-University-Alliance/dc_elements_raw_data
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:gpl-3.0",
"region:us"
] |
2023-10-22T20:21:07+00:00
|
{"language": ["en"], "license": "gpl-3.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"]}
|
2023-10-25T06:38:40+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us
|
The dataset consists of the descriptions and comments about the concepts in Dublin Core ontology elements.
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us \n"
] |
[
39
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us \n"
] |
19aa77d38e9110b36cfc8cc6eb64dda22c482962
|
The dataset consists of the descriptions and comments about the concepts in Dublin Core ontology terms.
|
BOP-Berlin-University-Alliance/dc_terms_raw_data
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:gpl-3.0",
"region:us"
] |
2023-10-22T20:31:29+00:00
|
{"language": ["en"], "license": "gpl-3.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "meta data"}
|
2023-10-25T06:38:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us
|
The dataset consists of the descriptions and comments about the concepts in Dublin Core ontology terms.
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us \n"
] |
[
39
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-gpl-3.0 #region-us \n"
] |
e3a85ec6d69cd3ca8622fa42949e11f0c5bda5b9
|
# Dataset Card for Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/VicUnlocked-30B-LoRA-HF](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__VicUnlocked-30B-LoRA-HF",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T04:52:45.302158](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__VicUnlocked-30B-LoRA-HF/blob/main/results_2023-10-23T04-52-45.302158.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001363255033557047,
"em_stderr": 0.0003778609196460696,
"f1": 0.0645071308724832,
"f1_stderr": 0.0013899526153663272,
"acc": 0.46941968306093984,
"acc_stderr": 0.01051121334026367
},
"harness|drop|3": {
"em": 0.001363255033557047,
"em_stderr": 0.0003778609196460696,
"f1": 0.0645071308724832,
"f1_stderr": 0.0013899526153663272
},
"harness|gsm8k|5": {
"acc": 0.14404852160727824,
"acc_stderr": 0.009672110973065282
},
"harness|winogrande|5": {
"acc": 0.7947908445146015,
"acc_stderr": 0.011350315707462056
}
}
```
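As a small sketch of how these aggregated metrics can be inspected, the dict below simply mirrors a subset of the JSON above (no leaderboard API is assumed) and flattens it into per-task rows:

```python
# Flatten the aggregated results dict (copied from the JSON above)
# into (task, metric, value) rows for quick inspection.
results = {
    "all": {"acc": 0.46941968306093984, "acc_stderr": 0.01051121334026367},
    "harness|gsm8k|5": {"acc": 0.14404852160727824, "acc_stderr": 0.009672110973065282},
    "harness|winogrande|5": {"acc": 0.7947908445146015, "acc_stderr": 0.011350315707462056},
}

rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in sorted(metrics.items())
]

for task, metric, value in rows:
    print(f"{task:25s} {metric:12s} {value:.4f}")
```

The same pattern applies to the per-run results parquet files loaded via the "results" configuration.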
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__VicUnlocked-30B-LoRA-HF
|
[
"region:us"
] |
2023-10-22T20:45:56+00:00
|
{"pretty_name": "Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/VicUnlocked-30B-LoRA-HF](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__VicUnlocked-30B-LoRA-HF\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T04:52:45.302158](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__VicUnlocked-30B-LoRA-HF/blob/main/results_2023-10-23T04-52-45.302158.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001363255033557047,\n \"em_stderr\": 0.0003778609196460696,\n \"f1\": 0.0645071308724832,\n \"f1_stderr\": 0.0013899526153663272,\n \"acc\": 0.46941968306093984,\n \"acc_stderr\": 0.01051121334026367\n },\n \"harness|drop|3\": {\n \"em\": 0.001363255033557047,\n \"em_stderr\": 0.0003778609196460696,\n \"f1\": 0.0645071308724832,\n \"f1_stderr\": 0.0013899526153663272\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14404852160727824,\n \"acc_stderr\": 0.009672110973065282\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7947908445146015,\n \"acc_stderr\": 0.011350315707462056\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T21_45_52.426808", "path": ["**/details_harness|drop|3_2023-10-22T21-45-52.426808.parquet"]}, {"split": "2023_10_23T04_52_45.302158", "path": ["**/details_harness|drop|3_2023-10-23T04-52-45.302158.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T04-52-45.302158.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T21_45_52.426808", "path": ["**/details_harness|gsm8k|5_2023-10-22T21-45-52.426808.parquet"]}, {"split": "2023_10_23T04_52_45.302158", "path": ["**/details_harness|gsm8k|5_2023-10-23T04-52-45.302158.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T04-52-45.302158.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T21_45_52.426808", "path": ["**/details_harness|winogrande|5_2023-10-22T21-45-52.426808.parquet"]}, {"split": "2023_10_23T04_52_45.302158", "path": ["**/details_harness|winogrande|5_2023-10-23T04-52-45.302158.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-10-23T04-52-45.302158.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T21_45_52.426808", "path": ["results_2023-10-22T21-45-52.426808.parquet"]}, {"split": "2023_10_23T04_52_45.302158", "path": ["results_2023-10-23T04-52-45.302158.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T04-52-45.302158.parquet"]}]}]}
|
2023-10-23T03:52:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/VicUnlocked-30B-LoRA-HF on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-23T04:52:45.302158 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/VicUnlocked-30B-LoRA-HF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T04:52:45.302158(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/VicUnlocked-30B-LoRA-HF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T04:52:45.302158(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/VicUnlocked-30B-LoRA-HF## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/VicUnlocked-30B-LoRA-HF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T04:52:45.302158(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0813cc420be170bf76a7e5e7fdfaad3449648b59
|
# Dataset Card for Evaluation run of jondurbin/airoboros-13b-gpt4-1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-13b-gpt4-1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-13b-gpt4-1.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T21:49:14.106154](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-13b-gpt4-1.1/blob/main/results_2023-10-22T21-49-14.106154.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.037017617449664426,
"em_stderr": 0.0019335395228219918,
"f1": 0.09976300335570489,
"f1_stderr": 0.0023092531505962102,
"acc": 0.4197877778063671,
"acc_stderr": 0.009797345526945866
},
"harness|drop|3": {
"em": 0.037017617449664426,
"em_stderr": 0.0019335395228219918,
"f1": 0.09976300335570489,
"f1_stderr": 0.0023092531505962102
},
"harness|gsm8k|5": {
"acc": 0.08188021228203184,
"acc_stderr": 0.007552338527716947
},
"harness|winogrande|5": {
"acc": 0.7576953433307024,
"acc_stderr": 0.012042352526174785
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_jondurbin__airoboros-13b-gpt4-1.1
|
[
"region:us"
] |
2023-10-22T20:49:18+00:00
|
{"pretty_name": "Evaluation run of jondurbin/airoboros-13b-gpt4-1.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/airoboros-13b-gpt4-1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-13b-gpt4-1.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T21:49:14.106154](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-13b-gpt4-1.1/blob/main/results_2023-10-22T21-49-14.106154.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.037017617449664426,\n \"em_stderr\": 0.0019335395228219918,\n \"f1\": 0.09976300335570489,\n \"f1_stderr\": 0.0023092531505962102,\n \"acc\": 0.4197877778063671,\n \"acc_stderr\": 0.009797345526945866\n },\n \"harness|drop|3\": {\n \"em\": 0.037017617449664426,\n \"em_stderr\": 0.0019335395228219918,\n \"f1\": 0.09976300335570489,\n \"f1_stderr\": 0.0023092531505962102\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08188021228203184,\n \"acc_stderr\": 0.007552338527716947\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7576953433307024,\n \"acc_stderr\": 0.012042352526174785\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T21_49_14.106154", "path": ["**/details_harness|drop|3_2023-10-22T21-49-14.106154.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T21-49-14.106154.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T21_49_14.106154", "path": ["**/details_harness|gsm8k|5_2023-10-22T21-49-14.106154.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T21-49-14.106154.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T21_49_14.106154", "path": ["**/details_harness|winogrande|5_2023-10-22T21-49-14.106154.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T21-49-14.106154.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T21_49_14.106154", "path": ["results_2023-10-22T21-49-14.106154.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T21-49-14.106154.parquet"]}]}]}
|
2023-10-22T20:49:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of jondurbin/airoboros-13b-gpt4-1.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model jondurbin/airoboros-13b-gpt4-1.1 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T21:49:14.106154 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of jondurbin/airoboros-13b-gpt4-1.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-13b-gpt4-1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T21:49:14.106154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of jondurbin/airoboros-13b-gpt4-1.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-13b-gpt4-1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T21:49:14.106154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jondurbin/airoboros-13b-gpt4-1.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-13b-gpt4-1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T21:49:14.106154(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
451a63d5970b0eeb90b7f5fdaea92e4d63c647b3
|
# Dataset Card for "rsicd_matched_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Braddy/rsicd_matched_v1
|
[
"region:us"
] |
2023-10-22T21:03:00+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "captions", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 242716169.875, "num_examples": 4433}], "download_size": 228425595, "dataset_size": 242716169.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T21:03:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rsicd_matched_v1"
More Information needed
|
[
"# Dataset Card for \"rsicd_matched_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rsicd_matched_v1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rsicd_matched_v1\"\n\nMore Information needed"
] |
1b29d3b49231f8ae5883eea7bb4ced9fef071099
|
# Dataset Card for "train_refineweb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ridger/train_refineweb
|
[
"region:us"
] |
2023-10-22T21:03:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 91738479900, "num_examples": 22375239}], "download_size": 13547146690, "dataset_size": 91738479900}}
|
2023-10-22T23:38:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_refineweb"
More Information needed
|
[
"# Dataset Card for \"train_refineweb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_refineweb\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_refineweb\"\n\nMore Information needed"
] |
613b1983e8c2abe530df0d8f36e676623612ad76
|
# Dataset Card for "resume-qa-data-2023-10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mauricioobgo/resume-qa-data-2023-10
|
[
"region:us"
] |
2023-10-22T21:32:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 948965, "num_examples": 260}], "download_size": 56207, "dataset_size": 948965}}
|
2023-10-22T21:33:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "resume-qa-data-2023-10"
More Information needed
|
[
"# Dataset Card for \"resume-qa-data-2023-10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"resume-qa-data-2023-10\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"resume-qa-data-2023-10\"\n\nMore Information needed"
] |
193773fe809d36b08cd0ea9869d77d31a8664542
|
# Dataset Card for "wiki_with_embedding0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arminmrm93/wiki_with_embedding0
|
[
"region:us"
] |
2023-10-22T21:42:23+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1103936597, "num_examples": 64587}], "download_size": 753564168, "dataset_size": 1103936597}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T21:43:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wiki_with_embedding0"
More Information needed
|
[
"# Dataset Card for \"wiki_with_embedding0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wiki_with_embedding0\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wiki_with_embedding0\"\n\nMore Information needed"
] |
d71f621b42341490cd1d366993e82a5c053d7513
|
# Dataset Card for "natural-questions-chunk-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-0
|
[
"region:us"
] |
2023-10-22T21:45:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4705302627, "num_examples": 10000}], "download_size": 1826111395, "dataset_size": 4705302627}}
|
2023-10-22T21:48:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-0"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-0\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-0\"\n\nMore Information needed"
] |
68884294bd5dad7a9096eb517da72e4719350646
|
# Dataset Card for "natural-questions-chunk-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-1
|
[
"region:us"
] |
2023-10-22T21:48:50+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4690314797, "num_examples": 10000}], "download_size": 1819108926, "dataset_size": 4690314797}}
|
2023-10-22T21:52:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-1"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-1\"\n\nMore Information needed"
] |
8ccca1ee87b9b95d580c31a981422b3b18121c7d
|
# Dataset Card for "movie_posters-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skvarre/movie_posters-100k
|
[
"region:us"
] |
2023-10-22T21:50:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "title", "dtype": "string"}, {"name": "genres", "list": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}]}, {"name": "overview", "dtype": "string"}, {"name": "popularity", "dtype": "float64"}, {"name": "release_date", "dtype": "string"}, {"name": "budget", "dtype": "int64"}, {"name": "revenue", "dtype": "int64"}, {"name": "tagline", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "runtime", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 43543732674.2, "num_examples": 95300}], "download_size": 43339016957, "dataset_size": 43543732674.2}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:25:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "movie_posters-100k"
More Information needed
|
[
"# Dataset Card for \"movie_posters-100k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"movie_posters-100k\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"movie_posters-100k\"\n\nMore Information needed"
] |
9df9a2d1fa45a787c093e8730e2ab56009add13c
|
# Dataset Card for "natural-questions-chunk-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-2
|
[
"region:us"
] |
2023-10-22T21:52:24+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4672087643, "num_examples": 10000}], "download_size": 1816142719, "dataset_size": 4672087643}}
|
2023-10-22T21:56:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-2"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-2\"\n\nMore Information needed"
] |
2d550b5bec383958f0d9ea75ac2fa66c95c5cfc8
|
# Dataset Card for "womens_clothing_ecommerce_reviews_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aiancheruk/womens_clothing_ecommerce_reviews_mini
|
[
"region:us"
] |
2023-10-22T21:52:40+00:00
|
{"dataset_info": {"features": [{"name": "review_text", "dtype": "string"}, {"name": "age", "dtype": "int64"}, {"name": "rating", "dtype": "int64"}, {"name": "positive_feedback_count", "dtype": "int64"}, {"name": "division_name", "dtype": "string"}, {"name": "department_name", "dtype": "string"}, {"name": "class_name", "dtype": "string"}, {"name": "recommended_ind", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 1894592.0740274212, "num_examples": 5000}, {"name": "test", "num_bytes": 373295, "num_examples": 1000}, {"name": "val", "num_bytes": 373636, "num_examples": 1000}], "download_size": 1342313, "dataset_size": 2641523.074027421}}
|
2023-10-22T21:52:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "womens_clothing_ecommerce_reviews_mini"
More Information needed
|
[
"# Dataset Card for \"womens_clothing_ecommerce_reviews_mini\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"womens_clothing_ecommerce_reviews_mini\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"womens_clothing_ecommerce_reviews_mini\"\n\nMore Information needed"
] |
3cf9f9dae8b30cfe1d0780a3cdfc853b45ae2e92
|
# Dataset Card for "natural-questions-chunk-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-3
|
[
"region:us"
] |
2023-10-22T21:56:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4591162424, "num_examples": 10000}], "download_size": 1782588663, "dataset_size": 4591162424}}
|
2023-10-22T21:59:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-3"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-3\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-3\"\n\nMore Information needed"
] |
adddac41ad8cb80d8b43fb66f3a6ed29d4446d42
|
# Dataset Card for "natural-questions-chunk-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-4
|
[
"region:us"
] |
2023-10-22T21:59:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4529920148, "num_examples": 10000}], "download_size": 1759288585, "dataset_size": 4529920148}}
|
2023-10-22T22:03:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-4"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-4\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-4\"\n\nMore Information needed"
] |
c98bf773398691fd3395092f0232d831dea77c0a
|
# Dataset Card for "natural-questions-chunk-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-5
|
[
"region:us"
] |
2023-10-22T22:03:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4651468477, "num_examples": 10000}], "download_size": 1807817811, "dataset_size": 4651468477}}
|
2023-10-22T22:06:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-5"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-5\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-5\"\n\nMore Information needed"
] |
a1ab47dbef39fe65080e395b72b9e18d624d7cec
|
# Dataset Card for "gorilla_16k_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_unified
|
[
"region:us"
] |
2023-10-22T22:03:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 14956387, "num_examples": 16250}], "download_size": 0, "dataset_size": 14956387}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:13:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_unified\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_unified\"\n\nMore Information needed"
] |
455c1596116952ae3a5de0832c76d90300dc9159
|
# Dataset Card for "gorilla_16k_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_embedded
|
[
"region:us"
] |
2023-10-22T22:04:12+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 81581387, "num_examples": 16250}], "download_size": 0, "dataset_size": 81581387}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:14:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_embedded\"\n\nMore Information needed"
] |
ede4ede1378753914371f760d8a7315f9cea2e03
|
# Dataset Card for "gorilla_16k_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T22:05:18+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3019664, "num_examples": 5246}], "download_size": 0, "dataset_size": 3019664}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_0_std\"\n\nMore Information needed"
] |
fab5d50980f244fb4649b16e18598d07d2ce0cdf
|
# Dataset Card for "gorilla_16k_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T22:05:21+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2871628, "num_examples": 2622}], "download_size": 0, "dataset_size": 2871628}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
3adaed834962b019623eadf6ac21ce3f6deccba5
|
# Dataset Card for "gorilla_16k_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T22:05:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24320467, "num_examples": 2623}], "download_size": 0, "dataset_size": 24320467}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_0\"\n\nMore Information needed"
] |
15795c97486dc1ba0c9b5cdbcc0b4ed0240ceed6
|
# Dataset Card for "gorilla_16k_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T22:05:39+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3607783, "num_examples": 8302}], "download_size": 0, "dataset_size": 3607783}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_1_std\"\n\nMore Information needed"
] |
4b4875181dacccc91a2752a624cfc8097133b8f3
|
# Dataset Card for "gorilla_16k_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T22:05:42+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3373698, "num_examples": 4150}], "download_size": 0, "dataset_size": 3373698}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
d4ec63c56507b699a9daa8b12ade6e29073515e7
|
# Dataset Card for "gorilla_16k_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T22:05:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 37317020, "num_examples": 4151}], "download_size": 0, "dataset_size": 37317020}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:15:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_1\"\n\nMore Information needed"
] |
d1fa430e42a9a40c4209a71184aa137f72991d6b
|
# Dataset Card for "gorilla_16k_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T22:06:02+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1639492, "num_examples": 4044}], "download_size": 0, "dataset_size": 1639492}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_2_std\"\n\nMore Information needed"
] |
ef0875e618ec60337250bee9b3487da208b6b7ef
|
# Dataset Card for "gorilla_16k_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T22:06:05+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1524709, "num_examples": 2021}], "download_size": 0, "dataset_size": 1524709}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
6513b60d437c94147b2d7e703b62a0c6b39ac4fa
|
# Dataset Card for "gorilla_16k_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T22:06:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 18059652, "num_examples": 2022}], "download_size": 0, "dataset_size": 18059652}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_2\"\n\nMore Information needed"
] |
215175622e869ff7b4ed6a33b61ae65dd59a1dad
|
# Dataset Card for "Open_Platypus_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_unified
|
[
"region:us"
] |
2023-10-22T22:06:16+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30491200, "num_examples": 24925}], "download_size": 0, "dataset_size": 30491200}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:46:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_unified"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_unified\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_unified\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_unified\"\n\nMore Information needed"
] |
3de6fa9097526a8ac6ebaa597ad432b22a8d284d
|
# Dataset Card for "gorilla_16k_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T22:06:24+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3035733, "num_examples": 6652}], "download_size": 0, "dataset_size": 3035733}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_3_std\"\n\nMore Information needed"
] |
026b27d1d0ee74d33637d0b4cd76a8568bf0f38d
|
# Dataset Card for "gorilla_16k_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T22:06:27+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2849349, "num_examples": 3325}], "download_size": 0, "dataset_size": 2849349}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
f38cd5acc89acb2dd64c24fdcd9612a0a814ef70
|
# Dataset Card for "natural-questions-chunk-6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-6
|
[
"region:us"
] |
2023-10-22T22:06:32+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4655306372, "num_examples": 10000}], "download_size": 1805442960, "dataset_size": 4655306372}}
|
2023-10-22T22:10:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-6"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-6\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-6\"\n\nMore Information needed"
] |
bee3e4ed1e3631aec2e59b0276980f2c2d3ac3d1
|
# Dataset Card for "natural-questions-chunk-7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-7
|
[
"region:us"
] |
2023-10-22T22:10:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4648515125, "num_examples": 10000}], "download_size": 1806671077, "dataset_size": 4648515125}}
|
2023-10-22T22:13:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-7"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-7\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-7\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-7\"\n\nMore Information needed"
] |
18d77cb34c3dd51e6ca2cbc036fc8c48242a949f
|
# Dataset Card for "natural-questions-chunk-8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-8
|
[
"region:us"
] |
2023-10-22T22:13:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4690331518, "num_examples": 10000}], "download_size": 1821291244, "dataset_size": 4690331518}}
|
2023-10-22T22:17:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-8"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-8\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-8\"\n\nMore Information needed"
] |
4999c4cd15bb33f7079c4dc555216e448ae29c86
|
# Dataset Card for "Open_Platypus_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_embedded
|
[
"region:us"
] |
2023-10-22T22:14:13+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 132683700, "num_examples": 24925}], "download_size": 65430177, "dataset_size": 132683700}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:47:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_embedded"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_embedded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_embedded\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_embedded\"\n\nMore Information needed"
] |
113be57142adc83f8a0cb91d35f684d96a2248f9
|
# Dataset Card for "Open_Platypus_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_0_std
|
[
"region:us"
] |
2023-10-22T22:15:56+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7794914, "num_examples": 10635}], "download_size": 0, "dataset_size": 7794914}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:48:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_0_std"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_0_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_0_std\"\n\nMore Information needed"
] |
145af71dd98ffeca9ae183d33c26a0ac1365a95c
|
# Dataset Card for "Open_Platypus_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_0_alpaca
|
[
"region:us"
] |
2023-10-22T22:15:58+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7428788, "num_examples": 3544}], "download_size": 0, "dataset_size": 7428788}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:48:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_0_alpaca"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_0_alpaca\"\n\nMore Information needed"
] |
36bfe8d87486968cb4f95cd25a1ff9c836ff91e3
|
# Dataset Card for "Open_Platypus_standardized_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_0
|
[
"region:us"
] |
2023-10-22T22:16:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 36427879, "num_examples": 3545}], "download_size": 0, "dataset_size": 36427879}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:48:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_0"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_0\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_0\"\n\nMore Information needed"
] |
6afac1c959a52cd9d74be9fbc8512f3fbf1ede91
|
# Dataset Card for "Open_Platypus_standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_1_std
|
[
"region:us"
] |
2023-10-22T22:16:21+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8281207, "num_examples": 21693}], "download_size": 0, "dataset_size": 8281207}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_1_std"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_1_std\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_1_std\"\n\nMore Information needed"
] |
daacf173d4bef06fd88c4f5762a9f8a8f2e5b816
|
# Dataset Card for "Open_Platypus_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_1_alpaca
|
[
"region:us"
] |
2023-10-22T22:16:24+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7549475, "num_examples": 7230}], "download_size": 0, "dataset_size": 7549475}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_1_alpaca"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_1_alpaca\"\n\nMore Information needed"
] |
c2513f29e75485441e1887a5070600858e6531f0
|
# Dataset Card for "Open_Platypus_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_1
|
[
"region:us"
] |
2023-10-22T22:16:26+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 66685994, "num_examples": 7231}], "download_size": 0, "dataset_size": 66685994}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_1"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_1\"\n\nMore Information needed"
] |
8324dbf2ca17501bcc33090366a72f3b0b0bb2ff
|
# Dataset Card for "gorilla_16k_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T22:16:29+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30046179, "num_examples": 3326}], "download_size": 7772997, "dataset_size": 30046179}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_3\"\n\nMore Information needed"
] |
da8e1c9359e0ff6c594e89908fc0fd6bfd54df7a
|
# Dataset Card for "Open_Platypus_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_2_std
|
[
"region:us"
] |
2023-10-22T22:16:43+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6181135, "num_examples": 15444}], "download_size": 0, "dataset_size": 6181135}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_2_std"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_2_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_2_std\"\n\nMore Information needed"
] |
cc0bfb53089bd1edc02e88dc2ffc929260020e87
|
# Dataset Card for "gorilla_16k_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T22:16:45+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5005609, "num_examples": 8256}], "download_size": 1950794, "dataset_size": 5005609}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_4_std\"\n\nMore Information needed"
] |
f4b1e2028819d4c127f53d4a9215e4ab064a9f34
|
# Dataset Card for "Open_Platypus_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_2_alpaca
|
[
"region:us"
] |
2023-10-22T22:16:46+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5719356, "num_examples": 5147}], "download_size": 0, "dataset_size": 5719356}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_2_alpaca"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_2_alpaca\"\n\nMore Information needed"
] |
01b348b7b2af77b24367a21b4dbff8ee63995cb7
|
# Dataset Card for "gorilla_16k_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T22:16:48+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4773320, "num_examples": 4127}], "download_size": 1886256, "dataset_size": 4773320}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
ef7c10ba4d0ccebb9a1455dbcb54b536b695d43d
|
# Dataset Card for "Open_Platypus_standardized_cluster_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_2
|
[
"region:us"
] |
2023-10-22T22:16:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 47761531, "num_examples": 5148}], "download_size": 0, "dataset_size": 47761531}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:49:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_2"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_2\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_2\"\n\nMore Information needed"
] |
829ed0c5205af1932941c9a4d87d07f14b24bcd1
|
# Dataset Card for "gorilla_16k_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/gorilla_16k_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T22:16:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 38528069, "num_examples": 4128}], "download_size": 10236984, "dataset_size": 38528069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T22:16:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gorilla_16k_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"gorilla_16k_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gorilla_16k_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gorilla_16k_standardized_cluster_4\"\n\nMore Information needed"
] |
17cc367b051be349a9d2093ed3431672a2ebb38c
|
# Dataset Card for "Open_Platypus_standardized_cluster_3_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_3_std
|
[
"region:us"
] |
2023-10-22T22:17:06+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7726550, "num_examples": 15678}], "download_size": 0, "dataset_size": 7726550}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_3_std"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_3_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_3_std\"\n\nMore Information needed"
] |
95dc15d9d15747514e8e598455adfde298cbee0a
|
# Dataset Card for "Open_Platypus_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_3_alpaca
|
[
"region:us"
] |
2023-10-22T22:17:09+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7197842, "num_examples": 5225}], "download_size": 0, "dataset_size": 7197842}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_3_alpaca"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_3_alpaca\"\n\nMore Information needed"
] |
c6f3c36f50f2382b59211cf49b8271d649eef20c
|
# Dataset Card for "Open_Platypus_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_3
|
[
"region:us"
] |
2023-10-22T22:17:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 49936952, "num_examples": 5226}], "download_size": 0, "dataset_size": 49936952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_3"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_3\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_3\"\n\nMore Information needed"
] |
2b9704ed087336e1070c4bf5634dffb35808d685
|
# Dataset Card for "natural-questions-chunk-9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davidfant/natural-questions-chunk-9
|
[
"region:us"
] |
2023-10-22T22:17:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "struct": [{"name": "html", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "tokens", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "is_html", "dtype": "bool"}, {"name": "start_byte", "dtype": "int64"}, {"name": "token", "dtype": "string"}]}, {"name": "url", "dtype": "string"}]}, {"name": "question", "struct": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}]}, {"name": "long_answer_candidates", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "top_level", "dtype": "bool"}]}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "string"}, {"name": "long_answer", "struct": [{"name": "candidate_index", "dtype": "int64"}, {"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}]}, {"name": "short_answers", "sequence": [{"name": "end_byte", "dtype": "int64"}, {"name": "end_token", "dtype": "int64"}, {"name": "start_byte", "dtype": "int64"}, {"name": "start_token", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "yes_no_answer", "dtype": {"class_label": {"names": {"0": "NO", "1": "YES"}}}}]}], "splits": [{"name": "train", "num_bytes": 4680935606, "num_examples": 10000}], "download_size": 1815069321, "dataset_size": 4680935606}}
|
2023-10-22T22:20:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "natural-questions-chunk-9"
More Information needed
|
[
"# Dataset Card for \"natural-questions-chunk-9\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"natural-questions-chunk-9\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"natural-questions-chunk-9\"\n\nMore Information needed"
] |
ebbb7abbdc421277637bbeeb84bcbb677d9644c2
|
# Dataset Card for "Open_Platypus_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_4_std
|
[
"region:us"
] |
2023-10-22T22:17:29+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "cluster", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3672869, "num_examples": 11325}], "download_size": 0, "dataset_size": 3672869}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_4_std"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_4_std\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_4_std\"\n\nMore Information needed"
] |
56543adbe31377b59e85d5b3a7491a076c141820
|
# Dataset Card for "Open_Platypus_standardized_cluster_4_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_4_alpaca
|
[
"region:us"
] |
2023-10-22T22:17:32+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3290936, "num_examples": 3774}], "download_size": 0, "dataset_size": 3290936}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_4_alpaca"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_4_alpaca\"\n\nMore Information needed"
] |
b0dfb2e479cd42716a733225e7cd38b0fd393b94
|
# Dataset Card for "Open_Platypus_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AdapterOcean/Open_Platypus_standardized_cluster_4
|
[
"region:us"
] |
2023-10-22T22:17:34+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 34163544, "num_examples": 3775}], "download_size": 0, "dataset_size": 34163544}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-23T19:50:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Open_Platypus_standardized_cluster_4"
More Information needed
|
[
"# Dataset Card for \"Open_Platypus_standardized_cluster_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Open_Platypus_standardized_cluster_4\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Open_Platypus_standardized_cluster_4\"\n\nMore Information needed"
] |
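The `dataset_info` blocks in the rows above follow the Hugging Face dataset-card metadata schema (features, splits, sizes). As a minimal sketch of how that metadata can be inspected offline, the snippet below parses the JSON from the `gorilla_16k_standardized_cluster_4` row with only the standard library and derives the average serialized row size; no network access or `datasets` installation is assumed.

```python
import json

# Metadata string copied verbatim from the gorilla_16k_standardized_cluster_4 row above.
raw = '''{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "embedding", "sequence": "float64"}, {"name": "cluster", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 38528069, "num_examples": 4128}], "download_size": 10236984, "dataset_size": 38528069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}'''

meta = json.loads(raw)
info = meta["dataset_info"]

# Column names declared for the dataset.
features = [f["name"] for f in info["features"]]

# Average serialized size of one example in the train split.
train = info["splits"][0]
bytes_per_example = train["num_bytes"] / train["num_examples"]

print(features)
print(round(bytes_per_example))
```

Loading the actual data would instead go through `datasets.load_dataset("AdapterOcean/gorilla_16k_standardized_cluster_4")`, assuming the repo is still available on the Hub.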